CN114764330A - Data lineage analysis method and device, electronic equipment and computer-readable storage medium - Google Patents

Data lineage analysis method and device, electronic equipment and computer-readable storage medium

Info

Publication number
CN114764330A
CN114764330A
Authority
CN
China
Prior art keywords
field
node
array
source
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111546926.7A
Other languages
Chinese (zh)
Inventor
郭子轩
陈凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhenai Jieyun Information Technology Co ltd
Original Assignee
Shenzhen Zhenai Jieyun Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhenai Jieyun Information Technology Co ltd filed Critical Shenzhen Zhenai Jieyun Information Technology Co ltd
Priority to CN202111546926.7A priority Critical patent/CN114764330A/en
Publication of CN114764330A publication Critical patent/CN114764330A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/40 Transformation of program code
    • G06F 8/41 Compilation
    • G06F 8/42 Syntactic analysis
    • G06F 8/427 Parsing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/22 Indexing; Data structures therefor; Storage structures
    • G06F 16/2228 Indexing structures
    • G06F 16/2246 Trees, e.g. B+trees
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F 16/284 Relational databases

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a data lineage analysis method, system, computer device and storage medium. The method comprises the following steps: receiving a data lineage analysis instruction, and extracting the SQL to be parsed from the data lineage analysis instruction; parsing the SQL to be parsed to generate an abstract syntax tree; traversing the nodes of the abstract syntax tree in depth-first post-order, identifying the node type of each node, and performing breadth-first traversal analysis on each node based on its node type to generate a lineage tree; and packaging the lineage tree into field-level lineage data, and storing the field-level lineage data. The embodiment of the application improves the efficiency of tracing data lineage to its source.

Description

Data lineage analysis method and device, electronic equipment and computer-readable storage medium
Technical Field
The present application relates to the field of big data processing technologies, and in particular to a data lineage analysis method, an apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, the world has formally entered the big data era. Big data, also called massive data, refers to data sets whose scale greatly exceeds the capabilities of traditional database software tools for acquisition, storage, management and analysis, and is characterized by huge data volume, rapid data circulation, diverse data types and low value density.
The value of big data lies in data analysis, and in the data mining and intelligent decision-making built on that analysis: by analyzing the data, changes in the data and correlations among the data are discovered, previously overlooked patterns are mined, and insights of new value are obtained, so that human behavior can be predicted and business decisions made in a targeted manner.
To improve the accuracy of data analysis and the troubleshooting of data problems, a clear data lineage is needed: data lineage can clearly show where data comes from, how it is processed and mapped, and where it flows to. At present, data lineage is mainly generated by Atlas. However, in the process of generating data lineage, Atlas retains all lineage relationships, so when the data volume is huge a large number of temporary tables are generated, occupying excessive storage resources; these temporary tables inflate the set of invalid lineage relationships, making the tracing of effective lineage inefficient.
Disclosure of Invention
The embodiments of the present application provide a data lineage analysis method, a data lineage analysis device, electronic equipment and a computer-readable storage medium, which are used to improve the efficiency of tracing data lineage.
In a first aspect, an embodiment of the present application provides a data lineage analysis method, including:
receiving a data lineage analysis instruction, and extracting the SQL to be parsed from the data lineage analysis instruction;
parsing the SQL to be parsed to generate an abstract syntax tree;
traversing nodes of the abstract syntax tree in depth-first post-order, identifying the node type of each node, and performing breadth-first traversal analysis on each node based on its node type to generate a lineage tree;
and packaging the lineage tree into field-level lineage data, and storing the field-level lineage data.
In one embodiment, the identifying the node type of the node includes: sequentially judging whether the node type of the node is TOK_TABREF, TOK_INSERT, TOK_SUBQUERY, TOK_CREATETABLE or TOK_QUERY.
In one embodiment, the performing breadth-first traversal analysis on the node based on the node type includes: if the node type is TOK_INSERT, determining the array of fields to be inserted based on the SQL to be parsed; determining the table to be inserted into, corresponding to the field array, through the child node TOK_DESTINATION; if the table to be inserted into is a temporary table, acquiring the temporary-table type of the temporary table; if the temporary table is a physical source table, performing breadth-first traversal analysis on the field array: for each field to be inserted, determining its source-field array and traversing it, adding each source field to the currentTable columns set and to the from set of the field to be inserted, adding the field to be inserted to the currentColumns set, and marking the currentColumns set as in the to-be-allocated state.
In one embodiment, the performing breadth-first traversal analysis on the node based on the node type further includes: if the temporary table is a sub-query temporary table, performing field-source analysis on the field array to be inserted, and determining the source-field array corresponding to each field to be inserted; and, for each field to be inserted, traversing its source-field array and adding it to the from set of that field, adding the field to the currentColumns set, and marking the currentColumns set as in the to-be-allocated state.
In one embodiment, the performing breadth-first traversal analysis on the node based on the node type further includes: if the table to be inserted into is a target table, acquiring the database name and table name of the target table; creating a result table targetTable, performing field-source analysis on the field array to be inserted, and determining the source-field array corresponding to each field to be inserted; and, for each field to be inserted, traversing its source-field array and adding it to the from set of that field, and adding the field to the columns set of the result table targetTable.
In one embodiment, the performing field-source analysis on the field array to be inserted includes: for each field to be inserted in the array, acquiring the field alias and field type corresponding to the field; if the field type is a preset first type, determining the subtree node of the field to be inserted based on the sub-query name, and querying that subtree node against a preset mapping relation to obtain the source-field array; if the field type is a preset second type, creating a new constant-name field based on the field to be inserted; and if the field type is a preset third type, traversing the field subtree breadth-first and judging whether the subtree carries a table alias; if so, querying the source-field array based on the table alias and the field alias.
In one embodiment, the judging whether the field subtree carries a table alias further includes: if the subtree carries no table alias, acquiring the field name of the field to be inserted, and searching for the source-field array by that field name in the column set of the enclosing sub-query or of the corresponding physical source table; if the column set contains the source-field array, returning it; if not, acquiring the metadata of the physical source table corresponding to the field, extracting the physical table's field information from the metadata, and determining the source-field array from that field information.
In a second aspect, an embodiment of the present application provides a data lineage analysis device, including:
a receiving unit, configured to receive a data lineage analysis instruction and extract the SQL to be parsed from the data lineage analysis instruction;
a parsing unit, configured to parse the SQL to be parsed to generate an abstract syntax tree;
a lineage generation unit, which traverses the nodes of the abstract syntax tree in depth-first post-order, identifies the node type of each node, and performs breadth-first traversal analysis on each node based on its node type to generate a lineage tree;
and a packaging unit, configured to package the lineage tree into field-level lineage data and store the field-level lineage data.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, a communication interface, and one or more programs, stored in the memory and configured to be executed by the processor, the programs including instructions for performing some or all of the steps described in the method according to the first aspect of the embodiments of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, where the computer program is executed by a processor to implement part or all of the steps described in the method according to the first aspect of the present application.
It can be seen that, in the embodiment of the present application, a data lineage analysis instruction is received and the SQL to be parsed is extracted from it; the SQL to be parsed is parsed to generate an abstract syntax tree; the nodes of the abstract syntax tree are traversed in depth-first post-order, the node type of each node is identified, and breadth-first traversal analysis is performed on each node based on its node type to generate a lineage tree; and the lineage tree is packaged into field-level lineage data, which is stored. The lineage of the data can thus be inferred in a hybrid traversal mode combining depth-first post-order traversal with breadth-first traversal, simplifying the collection and inference of lineage relationships, reducing the generation of temporary tables, and reducing the generation and storage of invalid lineage relationships, thereby freeing storage resources and improving the efficiency of lineage tracing.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of the application environment of a data lineage analysis method according to an embodiment of the present disclosure.
Fig. 2 is a schematic flow chart of a data lineage analysis method according to an embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a method for analyzing a TOK_INSERT node according to an embodiment of the present disclosure.
Fig. 4 is a schematic structural diagram of an electronic device 400 according to an embodiment of the present disclosure.
Fig. 5 is a schematic structural diagram of a data lineage analysis device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by those skilled in the art that the embodiments described herein can be combined with other embodiments.
Hereinafter, some terms in the present application are explained to facilitate understanding by those skilled in the art.
Electronic devices may include a variety of handheld devices, vehicle-mounted devices, wearable devices (e.g., smartwatches, smartbands, pedometers, etc.), computing devices or other processing devices communicatively connected to wireless modems, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal Equipment (terminal device), and so forth having wireless communication capabilities. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The data lineage analysis method provided by the present application can be applied in the application environment shown in Fig. 1.
As shown in Fig. 1, a terminal 101 communicates over a network with an electronic device 102, and the electronic device 102 is connected with a database 103. The terminal 101 sends a data lineage analysis instruction to the electronic device 102 through the network; the electronic device 102 receives the data lineage analysis instruction and extracts the SQL to be parsed from it; parses the SQL to be parsed to generate an abstract syntax tree; traverses the nodes of the abstract syntax tree in depth-first post-order, identifies the node type of each node, and performs breadth-first traversal analysis on each node based on its node type to generate a lineage tree; and packages the lineage tree into field-level lineage data. The electronic device 102 builds a mapping between the field-level lineage data and the data stored in the database 103 and stores the field-level lineage data; when the electronic device 102 receives a lineage display instruction from the terminal 101, it extracts the field-level lineage data from the database 103 and displays it visually.
The terminal 101 and the electronic device 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the database 103 may be implemented by an independent server or a server cluster formed by a plurality of servers.
Referring to Fig. 2, a flow chart of a data lineage analysis method is provided. The method is illustrated as applied to the electronic device in Fig. 1 and includes the following steps:
step 201, receiving a data blood margin analysis instruction, and extracting sql to be analyzed from the data blood margin analysis instruction;
Optionally, a data lineage analysis instruction is received, where the instruction is obtained from an HTTP request sent by the terminal, or by the terminal periodically querying a task information base; the manner in which the data lineage analysis instruction is obtained is not limited here.
The data lineage analysis instruction may include: the task SQL, the database connection address, the user name, the password, and the like.
Optionally, extracting the SQL to be parsed from the data lineage analysis instruction includes: extracting the task SQL from the instruction, where the task SQL comprises at least one SQL statement; applying a preset SQL filter, with the task SQL as input, to obtain the business SQL related to data; splitting the business SQL at delimiters to obtain an array of SQL statements to be parsed, which contains at least one SQL statement to be parsed; and performing the SQL parsing operation for each SQL statement to be parsed.
Further, after the SQL parsing operation is performed for each SQL statement to be parsed, the method further includes: parsing a statement to obtain the result table targetTable corresponding to it, and continuing to traverse the array, performing SQL parsing on the next statement based on that result table targetTable, to obtain the lineage relationships corresponding to the whole array.
In this way, while the array of SQL statements is traversed and parsed, each statement is parsed based on the result table targetTable generated by the previous parsing operation, so that the lineage tree obtained after the whole array has been traversed carries context and associated lineage relationships, improving the lineage tracing effect.
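The extraction and splitting step above can be sketched as follows. This is a minimal Python illustration, assuming ";" as the statement delimiter and a simple keyword check in place of the patent's preset SQL filter; the function and variable names are hypothetical.

```python
# Hypothetical sketch of the SQL extraction step: filter out statements
# that cannot contribute lineage, then split the rest into an array of
# statements to be parsed one by one.
def extract_sql_to_parse(task_sql):
    # Split the task script at the statement delimiter (";").
    statements = [s.strip() for s in task_sql.split(";") if s.strip()]
    # Keep only statements that can contribute lineage (a stand-in for
    # the patent's "preset SQL filter").
    keep = ("insert", "create", "select", "with")
    return [s for s in statements if s.lower().startswith(keep)]

task = "SET hive.exec.parallel=true; INSERT INTO t SELECT a FROM s; SHOW TABLES"
print(extract_sql_to_parse(task))  # → ['INSERT INTO t SELECT a FROM s']
```

The resulting array would then be traversed, each statement feeding its result table into the parse of the next, as described above.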
Step 202: parsing the SQL to be parsed to generate an abstract syntax tree.
Optionally, the Hive tool is started, and the HiveParser parser built into the Hive tool is invoked to parse the SQL to be parsed into the abstract syntax tree.
Before the HiveParser parser built into the Hive tool is invoked to parse the SQL into the abstract syntax tree, the method further includes: acquiring the connection mode corresponding to the SQL to be parsed and judging whether the connection mode is a metadata connection; if so, the HiveParser parser enters an online parsing mode to generate the abstract syntax tree; if not, the HiveParser parser enters an offline parsing mode to generate the abstract syntax tree.
Step 203: traversing nodes of the abstract syntax tree in depth-first post-order, identifying the node type of each node, and performing breadth-first traversal analysis on each node based on its node type to generate a lineage tree.
optionally, the depth-first sequentially traversing nodes of the abstract syntax tree includes: and aiming at the abstract syntax tree, traversing a left sub-tree of the abstract syntax tree, traversing a right sub-tree of the abstract syntax tree and traversing a root node of the abstract syntax tree.
Optionally, the identifying the node type of the node includes: sequentially judging whether the node type of the node is TOK_TABREF, TOK_INSERT, TOK_SUBQUERY, TOK_CREATETABLE or TOK_QUERY.
TOK_TABREF, TOK_INSERT, TOK_SUBQUERY, TOK_CREATETABLE and TOK_QUERY correspond to five analysis processes, respectively: analyzing a physical source table, analyzing an insert subtree, analyzing a sub-query subtree, analyzing a create-table subtree, and analyzing a query subtree; each node corresponds to one and only one node type. Optionally, performing breadth-first traversal analysis on the node based on the node type includes: judging whether the node type of the node is TOK_TABREF. If the node type is TOK_TABREF, the physical-source-table analysis process is entered: the table name and database name of the node are acquired (if the node has no database name, the initialized preset database name is used); it is judged whether the preset source-table set tables contains the physical source table corresponding to the table name; if so, currentTable is pointed at that physical source table; if not, a current table object is created based on the table name and added to the preset source-table set tables; it is then judged whether the node has a node alias, and if so, the mapping from the tree node corresponding to the alias to currentTable is added to aliasTable. If the node type is not TOK_TABREF, it is judged whether the node type is TOK_INSERT.
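A hedged sketch of the TOK_TABREF branch just described: register the physical source table in the preset source-table set, reuse it if already present, and record any alias mapping. The names (tables, alias_table, currentTable) follow the description, but the dictionary-based data structures are assumptions.

```python
# Sketch of the physical-source-table analysis process for TOK_TABREF.
DEFAULT_DB = "default"  # stand-in for the "initialized preset database name"

def parse_tabref(table_name, db_name, alias, tables, alias_table):
    key = (db_name or DEFAULT_DB, table_name)
    if key not in tables:                 # create the table object only once
        tables[key] = {"name": table_name, "db": key[0], "columns": []}
    current_table = tables[key]           # currentTable points at the source table
    if alias:                             # map the alias node to this table
        alias_table[alias] = current_table
    return current_table

tables, alias_table = {}, {}
t1 = parse_tabref("orders", None, "o", tables, alias_table)
t2 = parse_tabref("orders", None, None, tables, alias_table)
print(t1 is t2, alias_table["o"]["name"])  # → True orders
```

Reusing the same table object for repeated references is what lets later field lookups through an alias land on a single shared column set.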
Optionally, if the node type is TOK_INSERT, the insert-subtree analysis process is entered, and breadth-first traversal analysis is performed on the node based on the node type TOK_INSERT.
The performing breadth-first traversal analysis on the node based on the node type includes: if the node type is TOK_INSERT, determining the array of fields to be inserted based on the SQL to be parsed; determining the table to be inserted into through the child node TOK_DESTINATION; if the table to be inserted into is a temporary table, acquiring the temporary-table type of the temporary table; if the temporary table is a physical source table, performing breadth-first traversal analysis on the field array: for each field to be inserted, determining its source-field array and traversing it, adding each source field to the currentTable columns set and to the from set of the field to be inserted, adding the field to be inserted to the currentColumns set, and marking the currentColumns set as in the to-be-allocated state.
The breadth-first traversal analysis of the field array to be inserted includes: generating the field subtree to be inserted based on the array, and traversing that subtree breadth-first.
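The breadth-first ("width") traversal of the field subtree can be sketched as a standard queue-based level-order walk; the dictionary node representation below is an assumption.

```python
# Sketch of breadth-first traversal over a field subtree: nodes are
# visited level by level using a FIFO queue.
from collections import deque

def breadth_first(root):
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node["token"])
        queue.extend(node.get("children", []))
    return order

# A toy SELECT-expression subtree for one field to be inserted.
subtree = {"token": "TOK_SELEXPR", "children": [
    {"token": "TOK_TABLE_OR_COL", "children": [{"token": "field_a"}]},
    {"token": "alias_a"},
]}
print(breadth_first(subtree))  # → ['TOK_SELEXPR', 'TOK_TABLE_OR_COL', 'alias_a', 'field_a']
```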
Optionally, the performing breadth-first traversal analysis on the node based on the node type further includes: if the temporary table is a sub-query temporary table, performing field-source analysis on the field array to be inserted, and determining the source-field array corresponding to each field to be inserted; and, for each field to be inserted, traversing its source-field array and adding it to the from set of that field, adding the field to the currentColumns set, and marking the currentColumns set as in the to-be-allocated state.
Optionally, the performing breadth-first traversal analysis on the node based on the node type further includes: if the table to be inserted into is a target table, acquiring the database name and table name of the target table; creating a result table targetTable, performing field-source analysis on the field array to be inserted, and determining the source-field array corresponding to each field to be inserted; and, for each field to be inserted, traversing its source-field array and adding it to the from set of that field, and adding the field to the columns set of the result table targetTable.
Optionally, the performing field-source analysis on the field array to be inserted includes: for each field to be inserted in the array, acquiring the field alias and field type corresponding to the field; if the field type is a preset first type, determining the subtree node of the field to be inserted based on the sub-query name, and querying that subtree node against a preset mapping relation to obtain the source-field array; if the field type is a preset second type, creating a new constant-name field based on the field to be inserted; and if the field type is a preset third type, traversing the field subtree breadth-first and judging whether the subtree carries a table alias; if so, querying the source-field array based on the table alias and the field alias.
The preset first type comprises a sub-query field, the preset second type comprises a constant field, and the preset third type comprises a common field.
The judging whether the field subtree carries a table alias further includes: if the subtree carries no table alias, acquiring the field name of the field to be inserted, and searching for the source-field array by that field name in the column set of the enclosing sub-query or of the corresponding physical source table; if the column set contains the source-field array, returning it; if not, acquiring the metadata of the physical source table corresponding to the field, extracting the physical table's field information from the metadata, and determining the source-field array from that field information.
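The fallback lookup described above (no table alias, so search the enclosing column sets by field name and fall back to physical-table metadata) might be sketched as follows; the record shapes and the stand-in metadata service are assumptions.

```python
# Sketch of alias-less field-source resolution: search the enclosing
# sub-query / source-table column sets first, then consult metadata.
def resolve_source_fields(field_name, column_sets, fetch_metadata):
    for columns in column_sets:           # enclosing sub-query first, then source tables
        hits = [c for c in columns if c["name"] == field_name]
        if hits:
            return hits
    # Not found in any column set: build source fields from the
    # physical table's metadata instead.
    return [{"name": field_name, "table": t} for t in fetch_metadata(field_name)]

subquery_cols = [{"name": "uid", "table": "sub1"}]
meta = lambda name: ["db.users"]          # hypothetical metadata service
print(resolve_source_fields("uid", [subquery_cols], meta))
print(resolve_source_fields("age", [subquery_cols], meta))
```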
Optionally, if the node type is not TOK_INSERT, it is judged whether the node type is TOK_SUBQUERY. If the node type is TOK_SUBQUERY, the sub-query-subtree analysis process is entered: at least one first sub-query node corresponding to the node is acquired and traversed; for any first sub-query node whose node type is not TOK_SUBQUERY, its second sub-query node is acquired, and it is judged whether the node type of that second sub-query node is TOK_UNIONALL. If the node type of the second sub-query node is TOK_UNIONALL, the mapping from the tree node corresponding to the first sub-query node to the corresponding unionColumns array is added to aliasColumns, and the currentColumns array is emptied; if the node type of the second sub-query node is not TOK_UNIONALL, the mapping from the tree node corresponding to the first sub-query node to currentColumns is added to aliasColumns, and the currentColumns array is emptied.
Optionally, if the node type is not TOK_SUBQUERY, it is judged whether the node type is TOK_CREATETABLE. If the node type is TOK_CREATETABLE, the create-table-subtree analysis process is entered: the database name and table name under the node's TOK_CREATETABLE subtree are acquired, a result table targetTable object is created based on them, the currentColumns array is traversed, and each field in the currentColumns array is added to the columns set of the result table targetTable.
Optionally, if the node type is not TOK_CREATETABLE, it is judged whether the node type is TOK_QUERY. If the node type is TOK_QUERY, the query-subtree analysis process is entered: the child node corresponding to the node is acquired and it is judged whether the child node is TOK_UNIONALL. If the child node is TOK_UNIONALL, the union-subtree analysis process is entered: the hidden alias of the union corresponding to the child node is acquired, the tree node corresponding to the hidden alias is determined, and the union field array corresponding to the union is determined through unionColumns; the union field array and the currentColumns array are then traversed simultaneously, and for the union field and the currentColumns field at the same position, the two fields are merged into a merged field object, with both the union field and the corresponding currentColumns field pointing to that merged field object. After the union field array and the currentColumns array have been traversed, the array of merged field objects is generated and traversed, each merged field object is stored into the unionColumns table, and the currentColumns array is emptied.
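The position-wise merge of the union field array with the currentColumns array might be sketched as follows; the field records, each carrying a from set of source-field names, are an assumed representation.

```python
# Sketch of the UNION merge step: fields at the same position on the
# two sides of the union are combined into one merged field whose
# source set is the union of both sides' sources.
def merge_union(union_cols, current_cols):
    merged = []
    for u, c in zip(union_cols, current_cols):
        merged.append({"name": u["name"], "from": u["from"] | c["from"]})
    return merged

union_cols = [{"name": "id", "from": {"a.id"}}]
current_cols = [{"name": "id", "from": {"b.id"}}]
print(merge_union(union_cols, current_cols))
```

After the merge, both original field objects would point at the merged object, so downstream lookups through either branch see the combined sources.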
Optionally, if the node type is not TOK_QUERY, the node is skipped and the depth-first post-order traversal and analysis of the abstract syntax tree continues.
Optionally, the generating the blood relationship tree includes: traversing and parsing the abstract syntax tree in depth-first post-order to obtain the targetTable, where the columns set in the targetTable includes the fields of the target table corresponding to the sql to be analyzed, and generating a blood relationship tree corresponding to each field based on the from set corresponding to that field.
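Generating the tree from the from sets amounts to a recursive expansion, which can be sketched as follows (the field dictionaries are illustrative):

```python
# Sketch of blood-relationship-tree generation: each field of the target
# table is expanded recursively through its from set down to fields with no
# further sources (the physical source fields).
def build_lineage_tree(field):
    return {"field": field["name"],
            "sources": [build_lineage_tree(s) for s in field.get("from", [])]}

physical = {"name": "src.t1.uid"}                 # physical source field
temp = {"name": "tmp.uid", "from": [physical]}    # intermediate field
target_field = {"name": "dw.user.uid", "from": [temp]}
tree = build_lineage_tree(target_field)
```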
And step 204, packaging the blood relationship tree into field level blood relationship data, and storing the field level blood relationship data.
Optionally, the packaging the blood relationship tree into field-level blood relationship data includes: and traversing and packaging a result table targetTable, a columns set and a blood relationship tree corresponding to each field to obtain the field-level blood relationship data.
Further, the field-level blood relationship data is encapsulated into HIVE_COLUMN_LINEAGE, and whether a field has constant field data is determined, where the constant field data may include: a date field generated by a HIVE function, a constant field generated by a HIVE function, and the like; the constant field data is marked as a field without a source, and the HIVE_COLUMN_LINEAGE is stored in a preset database.
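A minimal sketch of this packaging step, assuming one HIVE_COLUMN_LINEAGE row per target field and a simple boolean flag for constant fields:

```python
# Sketch of the HIVE_COLUMN_LINEAGE packaging: one row per target field;
# fields flagged as constants (e.g. produced by a HIVE function) are stored
# without a source.
def to_hive_column_lineage(target_table):
    rows = []
    for field in target_table["columns"]:
        constant = field.get("constant", False)
        rows.append({
            "target": f"{target_table['db']}.{target_table['table']}.{field['name']}",
            "sources": [] if constant else field.get("from", []),
            "constant": constant,
        })
    return rows

table = {"db": "dw", "table": "user",
         "columns": [{"name": "uid", "from": ["src.t1.uid"]},
                     {"name": "etl_date", "constant": True}]}  # constant field
rows = to_hive_column_lineage(table)
```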
In a specific implementation process, the preset database is associated with an enterprise big data task scheduling system, the HIVE_COLUMN_LINEAGE in the preset database is extracted through the enterprise big data task scheduling system, and the field-level blood relationship data is displayed in a visualized manner.
It can be seen that, in the embodiment of the present application, a data blood relationship analysis instruction is received, and the sql to be analyzed is extracted from the data blood relationship analysis instruction; the sql to be analyzed is parsed to generate an abstract syntax tree; depth-first post-order traversal is performed on the nodes of the abstract syntax tree, the node type of each node is identified, and when the node type is TOK_INSERT, breadth traversal parsing is performed on the node, a source field array is determined, and a blood relationship tree is generated based on the source field array, so that the blood relationship of the data can be inferred through a hybrid traversal mode combining depth-first post-order traversal and breadth traversal. Meanwhile, the next sql to be analyzed is parsed based on the generated blood relationship tree, realizing blood relationship context association; finally, the blood relationship tree is packaged into field-level blood relationship data, and the field-level blood relationship data is stored. The method and the device can infer the blood relationship of the data through the hybrid traversal mode combining depth-first post-order traversal and breadth traversal, simplify the collection and inference process of the blood relationship, reduce the generation of temporary tables, and reduce the generation and storage of invalid blood relationships, thereby facilitating the release of storage resources and improving the blood relationship tracing efficiency.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a method for analyzing a TOK _ INSERT node according to an embodiment of the present application, where the method is applied to the electronic device shown in fig. 1, and is shown as follows:
receiving a data blood relationship analysis instruction, extracting the SQL to be analyzed from the data blood relationship analysis instruction through an SQL Parser, starting a HiveParser to parse the SQL to be analyzed to generate an abstract syntax tree, traversing the abstract syntax tree in depth-first post-order, and sequentially determining, for each node of the abstract syntax tree, whether the node type of the node is TOK_TABREF, TOK_INSERT, TOK_SUBQUERY, TOK_CREATETABLE or TOK_QUERY; if the node type of the node is TOK_INSERT, determining the field to be inserted corresponding to the SQL to be analyzed, and extracting the table to be inserted corresponding to the field to be inserted; if the table to be inserted is a temporary table, executing the temporary-table insertion flow, and if the table to be inserted is a target table, executing the target-table insertion flow.
Wherein the temporary-table insertion flow includes the following steps: acquiring the type of the temporary table, and if the temporary table is a physical source table, performing breadth traversal parsing on the field subtree.
Further, if the temporary table is a sub-query, or the table to be inserted is a target table, parsing the TOK_SELEXPR subtree, determining the field type of the field to be inserted, performing breadth traversal parsing on the field subtree based on the field type, constructing the blood relationship, and determining whether the field to be inserted has a field alias; if so, performing alias inference to determine the source field array of the field to be inserted, and if not, performing non-alias inference to determine the source field array of the field to be inserted.
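The routing of fig. 3 can be sketched as a small dispatcher; the table/field dictionaries and the returned step names are illustrative labels, not the patent's actual control flow:

```python
# Sketch of the fig. 3 routing for a TOK_INSERT node: a physical-source
# temporary table goes straight to breadth traversal of the field subtree;
# sub-query temporary tables and target tables go through the TOK_SELEXPR
# subtree, with alias or non-alias inference depending on the field alias.
def route_insert(table, field):
    if table["kind"] == "temporary" and table.get("temp_type") == "physical_source":
        return "breadth_parse_field_subtree"
    return "alias_inference" if field.get("alias") else "no_alias_inference"

r1 = route_insert({"kind": "temporary", "temp_type": "physical_source"}, {})
r2 = route_insert({"kind": "target"}, {"alias": "u"})
r3 = route_insert({"kind": "temporary", "temp_type": "subquery"}, {})
```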
Therefore, the method and the device can infer the blood relationship of the data through a hybrid traversal mode combining depth-first post-order traversal and breadth traversal, simplify the collection and inference process of the blood relationship, construct the data blood relationship based on the type of the field to be inserted, reduce the generation of temporary tables, and reduce the generation and storage of invalid blood relationships, thereby facilitating the release of storage resources and improving the blood relationship tracing efficiency.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device 400 according to an embodiment of the present application; as shown in the figure, the electronic device 400 includes an application processor 410, a memory 420, a communication interface 430, and one or more programs 421, where the one or more programs 421 are stored in the memory 420 and configured to be executed by the application processor 410, and the one or more programs 421 include instructions for:
receiving a data blood margin analysis instruction, and extracting sql to be analyzed from the data blood margin analysis instruction;
analyzing the sql to be analyzed to generate an abstract syntax tree;
traversing the nodes of the abstract syntax tree in depth-first post-order, identifying the node type of each node, and performing breadth traversal parsing on the node based on the node type to generate a blood relationship tree;
and packaging the blood relationship tree into field level blood relationship data, and storing the field level blood relationship data.
It can be seen that, in the embodiment of the present application, a data blood relationship analysis instruction is received, and the sql to be analyzed is extracted from the data blood relationship analysis instruction; the sql to be analyzed is parsed to generate an abstract syntax tree; the nodes of the abstract syntax tree are traversed in depth-first post-order, the node type of each node is identified, and breadth traversal parsing is performed on the node based on the node type to generate a blood relationship tree; and the blood relationship tree is packaged into field-level blood relationship data, and the field-level blood relationship data is stored. Therefore, the blood relationship of the data can be inferred through a hybrid traversal mode combining depth-first post-order traversal and breadth traversal, the collection and inference process of the blood relationship is simplified, the generation of temporary tables is reduced, and the generation and storage of invalid blood relationships are reduced, thereby releasing storage resources and improving the blood relationship tracing efficiency.
In a possible example, in the identifying the node type of the node, the instructions in the program are specifically configured to: sequentially determine whether the node type of the node is TOK_TABREF, TOK_INSERT, TOK_SUBQUERY, TOK_CREATETABLE or TOK_QUERY.
In a possible example, in the performing breadth traversal parsing on the node based on the node type, the instructions in the program are specifically configured to perform the following operations: if the node type is TOK_INSERT, determining the field array to be inserted based on the sql to be analyzed; determining the table to be inserted corresponding to the field array to be inserted through the child node TOK_DESTINATION; if the table to be inserted is a temporary table, acquiring the temporary table type of the temporary table; if the temporary table is a physical source table, performing breadth traversal parsing on the field array to be inserted: for each field to be inserted, traversing the source field array corresponding to the field to be inserted, adding each source field in the source field array to the columns set of the currentTable and to the from set corresponding to the field to be inserted, adding the field to be inserted to the currentColumns set, and marking the currentColumns set as a to-be-allocated state.
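The physical-source-table branch above can be sketched as follows; the currentTable and currentColumns structures are simplified assumptions for the example:

```python
# Sketch of the physical-source-table branch: each source field is added to
# the currentTable columns set and to the inserted field's from set, the
# inserted field is staged in currentColumns, and the set is marked
# to-be-allocated.
def stage_physical_source_fields(fields_to_insert, current_table, current_columns):
    for field in fields_to_insert:
        for source in field["sources"]:
            current_table["columns"].append(source)
            field.setdefault("from", []).append(source)
        current_columns["fields"].append(field)
    current_columns["state"] = "to_be_allocated"

fields = [{"name": "uid", "sources": ["t1.uid"]}]
current_table = {"columns": []}
current_columns = {"fields": [], "state": None}
stage_physical_source_fields(fields, current_table, current_columns)
```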
In a possible example, in the performing breadth traversal parsing on the node based on the node type, the instructions in the program are specifically configured to: if the temporary table is a sub-query temporary table, perform field source parsing on the field array to be inserted, and determine the source field array corresponding to each field to be inserted in the field array to be inserted; and for each field to be inserted, traverse and add the source field array to the from set corresponding to the field to be inserted, add the field to be inserted to the currentColumns set, and mark the currentColumns set as a to-be-allocated state.
In a possible example, in the performing breadth traversal parsing on the node based on the node type, the instructions in the program are specifically configured to: if the table to be inserted is a target table, acquire the library name and the table name of the target table; create a result table targetTable, perform field source parsing on the field array to be inserted, and determine the source field array corresponding to each field to be inserted in the field array to be inserted; and for each field to be inserted, traverse and add the source field array corresponding to the field to be inserted to the from set corresponding to the field to be inserted, and add the field to be inserted to the columns set of the result table targetTable.
In a possible example, in the performing field source parsing on the field array to be inserted, the instructions in the program are specifically configured to perform the following operations: for each field to be inserted of the field array to be inserted, acquiring the field alias and the field type corresponding to the field to be inserted; if the field type is a preset first type, determining the to-be-inserted-field subtree node corresponding to the field to be inserted based on the sub-query name, and querying the to-be-inserted-field subtree node based on a preset mapping relation to obtain the source field array; if the field type is a preset second type, creating a constant name field based on the field to be inserted; and if the field type is a preset third type, performing breadth traversal on the field subtree to be inserted, determining whether the field subtree to be inserted has a table alias, and if so, querying based on the table alias and the field alias to obtain the source field array.
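The three-way field-source resolution can be sketched as below; the type labels (`subquery_column`, `constant`, `plain`) and the lookup structures are stand-ins for the preset first/second/third types, not the patent's actual names:

```python
# Sketch of the three-way field-source resolution: the first preset type
# looks up the sub-query's columns through a mapping, the second creates a
# constant name field, and the third uses the table alias when present.
def resolve_field_source(field, alias_columns, metadata):
    if field["type"] == "subquery_column":        # preset first type
        return alias_columns[field["subquery"]][field["name"]]
    if field["type"] == "constant":               # preset second type
        return [f"const:{field['name']}"]
    if field.get("table_alias"):                  # preset third type, alias known
        return [f"{field['table_alias']}.{field['name']}"]
    return metadata[field["name"]]                # fall back to table metadata

alias_columns = {"sq": {"uid": ["sq.inner_uid"]}}
metadata = {"amount": ["src.orders.amount"]}
s1 = resolve_field_source({"type": "subquery_column", "subquery": "sq", "name": "uid"},
                          alias_columns, metadata)
s2 = resolve_field_source({"type": "constant", "name": "1"}, alias_columns, metadata)
s3 = resolve_field_source({"type": "plain", "name": "uid", "table_alias": "t"},
                          alias_columns, metadata)
s4 = resolve_field_source({"type": "plain", "name": "amount"}, alias_columns, metadata)
```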
In a possible example, in the determining whether the field subtree to be inserted has a table alias, the instructions in the program are specifically configured to perform the following operations: if the field subtree to be inserted does not have a table alias, acquiring the field name of the field to be inserted, and searching for the source field array, based on the field name, in the columns set of the upper-layer sub-query corresponding to the field to be inserted or of the corresponding physical source table; if the source field array exists in the columns set, returning the source field array; and if the source field array does not exist in the columns set, acquiring the metadata of the physical source table corresponding to the field to be inserted, extracting the field information of the physical table corresponding to the metadata, and determining the source field array based on the field information of the physical table.
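The no-table-alias fallback can be sketched as follows; `fetch_metadata` is a hypothetical metadata lookup standing in for the physical-table metadata query:

```python
# Sketch of the no-table-alias fallback: search the enclosing scope's
# columns set first; only when the field is not found there is the physical
# source table's metadata fetched.
def find_source_fields(field_name, column_set, fetch_metadata):
    hits = [c for c in column_set if c["name"] == field_name]
    if hits:
        return hits
    meta = fetch_metadata(field_name)  # hypothetical metadata lookup
    return [{"name": field_name, "table": meta["table"]}]

columns = [{"name": "uid", "table": "tmp"}]
fake_metadata = lambda name: {"table": "src.t1"}
found = find_source_fields("uid", columns, fake_metadata)      # hit in columns set
fallback = find_source_fields("amount", columns, fake_metadata)  # metadata fallback
```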
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one control unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a data blood margin analysis device according to an embodiment of the present application, including a receiving unit 501, an analysis unit 502, a blood margin generation unit 503, and an encapsulation unit 504, where:
a receiving unit 501, configured to receive a data blood relationship analysis instruction, and extract an sql to be analyzed from the data blood relationship analysis instruction;
the analyzing unit 502 is used for analyzing the sql to be analyzed to generate an abstract syntax tree;
a blood margin generating unit 503, configured to traverse the nodes of the abstract syntax tree in depth-first post-order, identify the node type of each node, and perform breadth traversal parsing on the node based on the node type to generate a blood relationship tree;
an encapsulating unit 504, configured to encapsulate the blood relationship tree into field-level blood relationship data, and store the field-level blood relationship data.
It can be seen that, in the embodiment of the present application, a data blood relationship analysis instruction is received, and the sql to be analyzed is extracted from the data blood relationship analysis instruction; the sql to be analyzed is parsed to generate an abstract syntax tree; the nodes of the abstract syntax tree are traversed in depth-first post-order, the node type of each node is identified, and breadth traversal parsing is performed on the node based on the node type to generate a blood relationship tree; and the blood relationship tree is packaged into field-level blood relationship data, and the field-level blood relationship data is stored. Therefore, the blood relationship of the data can be inferred through a hybrid traversal mode combining depth-first post-order traversal and breadth traversal, the collection and inference process of the blood relationship is simplified, the generation of temporary tables is reduced, and the generation and storage of invalid blood relationships are reduced, thereby releasing storage resources and improving the blood relationship tracing efficiency.
In a possible example, in terms of the identifying the node type of the node, the blood margin generating unit 503 is specifically configured to: sequentially determine whether the node type of the node is TOK_TABREF, TOK_INSERT, TOK_SUBQUERY, TOK_CREATETABLE or TOK_QUERY.
In a possible example, in terms of performing breadth traversal parsing on the node based on the node type, the blood margin generating unit 503 is specifically configured to: if the node type is TOK_INSERT, determine the field array to be inserted based on the sql to be analyzed; determine the table to be inserted corresponding to the field array to be inserted through the child node TOK_DESTINATION; if the table to be inserted is a temporary table, acquire the temporary table type of the temporary table; if the temporary table is a physical source table, perform breadth traversal parsing on the field array to be inserted: for each field to be inserted, traverse the source field array corresponding to the field to be inserted, add each source field in the source field array to the columns set of the currentTable and to the from set corresponding to the field to be inserted, add the field to be inserted to the currentColumns set, and mark the currentColumns set as a to-be-allocated state.
In a possible example, in terms of performing breadth traversal parsing on the node based on the node type, the blood margin generating unit 503 is specifically configured to: if the temporary table is a sub-query temporary table, perform field source parsing on the field array to be inserted, and determine the source field array corresponding to each field to be inserted in the field array to be inserted; and for each field to be inserted, traverse and add the source field array to the from set corresponding to the field to be inserted, add the field to be inserted to the currentColumns set, and mark the currentColumns set as a to-be-allocated state.
In a possible example, in terms of performing breadth traversal parsing on the node based on the node type, the blood margin generating unit 503 is specifically configured to: if the table to be inserted is a target table, acquire the library name and the table name of the target table; create a result table targetTable, perform field source parsing on the field array to be inserted, and determine the source field array corresponding to each field to be inserted in the field array to be inserted; and for each field to be inserted, traverse and add the source field array corresponding to the field to be inserted to the from set corresponding to the field to be inserted, and add the field to be inserted to the columns set of the result table targetTable.
In a possible example, in the performing field source parsing on the field array to be inserted, the blood margin generating unit 503 is specifically configured to: for each field to be inserted of the field array to be inserted, acquire the field alias and the field type corresponding to the field to be inserted; if the field type is a preset first type, determine the to-be-inserted-field subtree node corresponding to the field to be inserted based on the sub-query name, and query the to-be-inserted-field subtree node based on a preset mapping relation to obtain the source field array; if the field type is a preset second type, create a constant name field based on the field to be inserted; and if the field type is a preset third type, perform breadth traversal on the field subtree to be inserted, determine whether the field subtree to be inserted has a table alias, and if so, query based on the table alias and the field alias to obtain the source field array.
In a possible example, in the aspect of determining whether the table alias exists in the field sub-tree to be inserted, the blood margin generating unit 503 is specifically configured to: if the sub-tree of the field to be inserted does not have a table alias, acquiring a field name of the field to be inserted, and searching the source field array in an upper-layer sub-query corresponding to the field to be inserted or a column set of a corresponding physical source table based on the field name; if the column set has the source field array, returning the source field array; if the column set does not have the source field array, acquiring metadata of a physical source table corresponding to the field to be inserted, extracting field information of the physical table corresponding to the metadata, and determining the source field array based on the field information of the physical table.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art will recognize that the embodiments described in this specification are preferred embodiments and that acts or modules referred to are not necessarily required for this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some interfaces, indirect coupling or communication connection between devices or units, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solutions of the present application, which are essential or part of the technical solutions contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the above methods of the embodiments of the present application. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method for data blood margin analysis, comprising:
receiving a data blood relationship analysis instruction, and extracting sql to be analyzed from the data blood relationship analysis instruction;
analyzing the sql to be analyzed to generate an abstract syntax tree;
traversing nodes of the abstract syntax tree in depth-first post-order, identifying the node type of each node, and performing breadth traversal parsing on the nodes based on the node types to generate a blood relationship tree;
and packaging the blood relationship tree into field level blood relationship data, and storing the field level blood relationship data.
2. The method of claim 1, wherein the identifying the node type of the node comprises:
and sequentially judging whether the node type of the node is TOK_TABREF, TOK_INSERT, TOK_SUBQUERY, TOK_CREATETABLE or TOK_QUERY.
3. The method of any of claims 1-2, wherein performing a breadth traversal resolution on the node based on the node type comprises:
if the node type is TOK_INSERT, determining a field array to be inserted based on the sql to be analyzed;
determining a table to be inserted corresponding to the field array to be inserted through the child node TOK_DESTINATION;
if the table to be inserted is a temporary table, acquiring the temporary table type of the temporary table;
if the temporary table is a physical source table, performing breadth traversal parsing on the field array to be inserted: for each field to be inserted, traversing the source field array corresponding to the field to be inserted, adding each source field in the source field array to the columns set of the currentTable and to the from set corresponding to the field to be inserted, adding the field to be inserted to the currentColumns set, and marking the currentColumns set as a to-be-allocated state.
4. The method of claim 3, wherein performing a breadth traversal resolution on the node based on the node type further comprises:
if the temporary table is a sub-query temporary table, performing field source parsing on the field array to be inserted, and determining a source field array corresponding to each field to be inserted in the field array to be inserted;
and for each field to be inserted, traversing and adding the source field array to a from set corresponding to the field to be inserted, adding the field to be inserted to the currentColumns set, and marking the currentColumns set as a state to be allocated.
5. The method of claim 3, wherein performing a breadth traversal resolution on the node based on the node type further comprises:
if the table to be inserted is a target table, acquiring a library name and a table name of the target table;
creating a result table targetTable, performing field source analysis on the field array to be inserted, and determining a source field array corresponding to each field to be inserted in the field array to be inserted;
and for each field to be inserted, traversing and adding a source field array corresponding to the field to be inserted to a from set corresponding to the field to be inserted, and adding the field to be inserted to a columns set of the result table targetTable.
6. The method according to any one of claims 4 or 5, wherein performing field source resolution on the array of fields to be inserted comprises:
acquiring a field alias and a field type corresponding to the field to be inserted aiming at each field to be inserted of the field array to be inserted;
if the field type is a preset first type, determining a sub-tree node of the field to be inserted corresponding to the field to be inserted based on the sub-query name, and querying the sub-tree node of the field to be inserted based on a preset mapping relation to obtain the source field array;
if the field type is a preset second type, a constant name field is newly established based on the field to be inserted;
and if the field type is a preset third type, performing breadth traversal on the field subtree to be inserted, judging whether the field subtree to be inserted has a table alias or not, and if so, inquiring to obtain the source field array based on the table alias and the field alias.
7. The method of claim 6, wherein the determining whether the table alias exists in the field sub-tree to be inserted further comprises:
if the sub-tree of the field to be inserted does not have a table alias, acquiring a field name of the field to be inserted, and searching the source field array in an upper-layer sub-query corresponding to the field to be inserted or a column set of a corresponding physical source table based on the field name;
if the column set has the source field array, returning the source field array;
if the column set does not have the source field array, acquiring metadata of a physical source table corresponding to the field to be inserted, extracting field information of the physical table corresponding to the metadata, and determining the source field array based on the field information of the physical table.
8. A data blood margin analysis device is characterized by comprising:
the receiving unit is used for receiving a data blood relationship analysis instruction and extracting sql to be analyzed from the data blood relationship analysis instruction;
the parsing unit is used for parsing the sql to be parsed to generate an abstract syntax tree;
a blood margin generation unit, configured to traverse nodes of the abstract syntax tree in depth-first post-order, identify the node type of each node, and perform breadth traversal parsing on the nodes based on the node type to generate a blood relationship tree;
and the packaging unit is used for packaging the blood relationship tree into field-level blood relationship data and storing the field-level blood relationship data.
9. An electronic device, comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202111546926.7A 2021-12-17 2021-12-17 Data lineage analysis method and device, electronic equipment and computer readable storage medium Pending CN114764330A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111546926.7A CN114764330A (en) 2021-12-17 2021-12-17 Data lineage analysis method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114764330A true CN114764330A (en) 2022-07-19

Family

ID=82364964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111546926.7A Pending CN114764330A (en) 2021-12-17 2021-12-17 Data lineage analysis method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114764330A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117370620A (en) * 2023-12-08 2024-01-09 广东航宇卫星科技有限公司 Data lineage construction method and device, terminal equipment and storage medium
CN117370620B (en) * 2023-12-08 2024-04-05 广东航宇卫星科技有限公司 Data lineage construction method and device, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109800258A (en) Data file dispositions method, device, computer equipment and storage medium
CN111177788A (en) Hive dynamic desensitization method and dynamic desensitization system
CN110007906B (en) Script file processing method and device and server
CN114741070A (en) Code generation method and device, electronic equipment and storage medium
CN111427784B (en) Data acquisition method, device, equipment and storage medium
CN114090671A (en) Data import method and device, electronic equipment and storage medium
CN113672628A (en) Data blood margin analysis method, terminal device and medium
CN112069052B (en) Abnormal object detection method, device, equipment and storage medium
CN114764330A (en) Data lineage analysis method and device, electronic equipment and computer readable storage medium
CN112988163B (en) Intelligent adaptation method, intelligent adaptation device, intelligent adaptation electronic equipment and intelligent adaptation medium for programming language
CN114490658A (en) Node display method, device, storage medium and program product
CN112835901A (en) File storage method and device, computer equipment and computer readable storage medium
CN114281842A (en) Method and device for sub-table query of database
CN114816364A (en) Method, device and application for dynamically generating template file based on Swagger
CN113505143A (en) Statement type conversion method and device, storage medium and electronic device
CN114281810A (en) Network security operation data query method and device based on natural language interaction
CN110209885B (en) Graph query method and system
CN112433943A (en) Method, device, equipment and medium for detecting environment variable based on abstract syntax tree
CN113536762A (en) JSON text comparison method and device
CN110569243A (en) data query method, data query plug-in and data query server
CN112799638B (en) Non-invasive rapid development method, platform, terminal and storage medium
CN117251384B (en) Interface automation test case generation method and system
CN111125147B (en) Extra-large set analysis method and device based on extended pre-calculation model and SQL function
CN116594628A (en) Data tracing method and device and computer equipment
KR101921123B1 (en) Field-Indexing Method for Message

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination