CN115438087B - Data query method, device, storage medium and equipment based on cache library - Google Patents

Data query method, device, storage medium and equipment based on cache library

Info

Publication number
CN115438087B
Authority
CN
China
Prior art keywords
data
target
field
node
target field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211401691.7A
Other languages
Chinese (zh)
Other versions
CN115438087A (en)
Inventor
陈大伟
吴华夫
熊海霞
黄潮勇
肖熙
黄浩
莫会治
黄鹏
禤文君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Smart Software Co ltd
Original Assignee
Guangzhou Smart Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Smart Software Co ltd filed Critical Guangzhou Smart Software Co ltd
Priority to CN202211401691.7A priority Critical patent/CN115438087B/en
Publication of CN115438087A publication Critical patent/CN115438087A/en
Application granted granted Critical
Publication of CN115438087B publication Critical patent/CN115438087B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2455 - Query execution
    • G06F16/24552 - Database cache management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2455 - Query execution
    • G06F16/24564 - Applying rules; Deductive queries
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2471 - Distributed queries
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a data query method, apparatus, storage medium and device based on a cache library. When a query instruction for data of a first target field is received, if the first target field is contained in a buffer pool and the corresponding field data in the cache library has not changed, the first target data corresponding to the first target field is obtained directly from the buffer pool, reducing the amount of processing spent on repeatedly queried data; otherwise, a first target task for acquiring the first target data is generated and sent to the cache library, and the cache library generates parallel first target subtasks for its nodes, so that the nodes execute their corresponding subtasks simultaneously and the first target data is acquired more efficiently.

Description

Data query method and device based on cache library, storage medium and equipment
Technical Field
The present invention relates to the field of data processing, and in particular, to a data query method, apparatus, storage medium, and device based on a cache library.
Background
The existing data query mode generally reads data of corresponding fields from a database according to the fields which need to be queried by a user.
When a user queries the data of the same field multiple times in succession, the data of the corresponding field must be read from the database for every query, which makes querying inefficient.
Disclosure of Invention
The embodiments of the application provide a cache-library-based data query method, apparatus, storage medium and device that can improve data query efficiency.
In a first aspect, an embodiment of the present application provides a data query method based on a cache library, including the following steps:
in response to a query instruction for data of at least one first target field, determining whether the at least one first target field is contained in a buffer pool; the buffer pool stores target fields obtained in response to query instructions within a first time period before the current time, together with the target data corresponding to those target fields; each target field in the buffer pool carries first correlation information, and the first correlation information is used for determining whether the data of the field in the cache library that corresponds to that target field has changed;
if the buffer pool contains the at least one first target field and the data of the corresponding field in the cache library is determined to be unchanged according to the first correlation information of the first target field, acquiring the first target data corresponding to the first target field from the buffer pool;
otherwise, generating a first target task for acquiring the first target data according to the at least one first target field that is not in the buffer pool, and sending the first target task to the cache library, so that the cache library generates, according to the first target task, at least one first target subtask corresponding to a node in the cache library and sends each first target subtask to its corresponding node at the same time, receiving the node data returned by the at least one node, and acquiring the first target data corresponding to the first target field from the node data; the cache library comprises a plurality of nodes connected through a node interconnection network, and each node in the cache library stores data of a plurality of fields extracted in advance from a target database.
In a second aspect, an embodiment of the present application provides a data query apparatus based on a cache library, including:
the buffer pool determining module is used for determining, in response to a query instruction for data of at least one first target field, whether the at least one first target field is contained in a buffer pool; the buffer pool stores a second target field acquired in response to a query instruction within a first time period and second target data corresponding to the second target field; the target fields of the buffer pool carry first correlation information relating them to fields in the cache library; the first correlation information is used for determining whether the data of the field in the cache library that corresponds to a target field of the buffer pool has changed;
the first query module is configured to, if the buffer pool contains the at least one first target field and it is determined according to the first correlation information of the first target field that the data of the corresponding field in the cache library has not changed, obtain the first target data corresponding to the at least one first target field from the buffer pool;
the second query module is used for generating a first target task for acquiring first target data according to the at least one first target field that is not in the buffer pool and sending the first target task to the cache library, so that the cache library generates at least one first target subtask corresponding to a node in the cache library according to the first target task and sends each first target subtask to its corresponding node at the same time, receives the node data returned by the at least one node, and acquires the first target data corresponding to the first target field from the node data; the cache library comprises a plurality of nodes connected through a node interconnection network, and each node in the cache library stores data of a plurality of fields extracted in advance from a target database.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the cache library-based data query method as described in any one of the above.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable by the processor, where the processor, when executing the computer program, implements the steps of the cache library-based data query method described in any one of the above.
When a query instruction for data of a first target field is received, if the buffer pool contains the first target field and the corresponding field data in the cache library has not changed, the first target data corresponding to the first target field is obtained directly from the buffer pool, which reduces the amount of processing spent on repeatedly queried data; otherwise, a first target task for acquiring the first target data is generated and sent to the cache library, and the cache library generates parallel first target subtasks for its nodes, so that the nodes execute their corresponding subtasks simultaneously and the first target data is acquired more efficiently.
For a better understanding and practice, the invention is described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a flowchart of a method for querying data based on a cache library according to an embodiment of the present invention;
FIG. 2 is a flow chart of a data query method according to another embodiment of the present invention;
FIG. 3 is a diagram illustrating a data query process according to an embodiment of the invention;
FIG. 4 is a diagram illustrating a display interface of a user terminal in accordance with an embodiment of the present invention;
FIG. 5 is a flowchart of step S202 according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a method for querying data in a cache library according to another embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a cache-based data query apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
It should be understood that the embodiments described are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without any creative effort belong to the protection scope of the embodiments in the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the present application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the present application, as detailed in the appended claims. In the description of the present application, it is to be understood that the terms "first," "second," "third," and the like are used solely to distinguish one element from another and are not necessarily used to describe a particular order or sequence, nor are they to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Referring to fig. 1, an embodiment of the present application provides a data query method based on a cache library, including the following steps:
s101: in response to a query instruction for data of at least one first target field, determining whether the first target field is contained in a buffer pool;
the method comprises the steps that a buffer pool stores a target field obtained in response to a query instruction in a first time period before the current time and target data corresponding to the target field; the target field of the buffer pool comprises first correlation information; the first correlation information is used for determining whether data of a field corresponding to a target field of the buffer pool in a buffer library is changed;
the first target field may be a field which needs to be queried and is input by a user at a user terminal.
In one embodiment, the first target field may be one or more fields of a data table in a database that establishes a connection relationship with a cache library.
The buffer pool is a storage area defined by the user, used for caching the target fields queried within a first time period before the current time and the target data corresponding to those fields, for example the target fields queried within the last few days and their corresponding target data.
Different data sets can be provided with different buffer pools, each buffer pool corresponding to one data set; whenever a new query instruction for a field of a data set is received, the data stored in that data set's buffer pool is updated.
The first correlation information may be obtained from the timestamp of the target data and that of the corresponding field's data in the cache library. Specifically, when the timestamps show that the time of the target data is consistent with the time of the corresponding field's data in the cache library, the data of the corresponding field in the cache library is determined to be unchanged; if the time of the corresponding field's data in the cache library is later than the time of the target data, the data of the corresponding field in the cache library is determined to have changed.
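For illustration only (this is not the patented implementation), the timestamp comparison described above might be sketched as follows; the CachedField structure and the field_unchanged helper are hypothetical names introduced here.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class CachedField:
    name: str          # target field held in the buffer pool
    data: List[Any]    # target data cached for this field
    timestamp: float   # time of the cached target data

def field_unchanged(cached: CachedField, cache_library_timestamp: float) -> bool:
    # Unchanged only if the cached timestamp matches the cache library's
    # timestamp for the same field; a later cache-library timestamp means
    # the underlying field data has changed.
    return cached.timestamp == cache_library_timestamp
```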
S102: if the buffer pool contains the at least one first target field and the data of the corresponding field in the cache library is determined to be unchanged according to the first correlation information of the first target field, acquiring the first target data corresponding to the first target field from the buffer pool;
To determine whether the buffer pool contains the first target field, the data set corresponding to the first target field may first be determined, and the buffer pool of that data set is then searched for a target field identical to the first target field; if one exists, the buffer pool of that data set is determined to contain the first target field, otherwise it does not. When the buffer pool contains the first target field and the first correlation information shows that the data of the corresponding field in the cache library has not changed, the corresponding first target data is obtained directly from the target data stored in the buffer pool.
In one embodiment, the buffer pools may include a first buffer pool for storing the target fields retrieved in response to the query instruction within a first time period before the current time, and a second buffer pool for storing the target data retrieved in response to the query instruction within the first time period before the current time.
When the corresponding first target data is obtained from the target data stored in the buffer pool, it is first determined whether the first buffer pool contains the first target field; only when it does is the second buffer pool accessed to obtain the corresponding first target data. This reduces the amount of field comparison and improves the data comparison efficiency of the buffer pool.
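A minimal sketch of the two-pool lookup, assuming the first buffer pool is a set of field names and the second buffer pool maps field names to cached target data; the function name and types are illustrative, not part of the patent.

```python
from typing import Any, Dict, List, Optional, Set

def lookup_buffer_pools(first_target_field: str,
                        field_pool: Set[str],
                        data_pool: Dict[str, List[Any]]) -> Optional[List[Any]]:
    # First buffer pool: target fields only; checked first so most misses
    # never touch the larger second buffer pool that holds the target data.
    if first_target_field not in field_pool:
        return None  # miss: the field must be fetched from the cache library
    return data_pool.get(first_target_field)
```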
S103: otherwise, generating a first target task for acquiring the first target data according to the at least one first target field that is not in the buffer pool, and sending the first target task to the cache library, so that the cache library generates, according to the first target task, at least one first target subtask corresponding to a node in the cache library and sends each first target subtask to its corresponding node at the same time, receiving the node data returned by the at least one node, and acquiring the first target data corresponding to the first target field from the node data; the cache library comprises a plurality of nodes, and the nodes are connected through a node interconnection network;
The cache library may be a cache database in which the original data of a plurality of fields extracted in advance from the target database is stored. When a first target field is not contained in the buffer pool, the first target data corresponding to that field is obtained from the cache library, which keeps large-scale data results retrievable within seconds and improves data query efficiency.
There may be one or more first target fields to be queried. When there are at least two first target fields and the buffer pool contains only some of them, and the data of those fields has not changed, the first target data corresponding to those fields is obtained from the buffer pool; for the remaining first target fields that are not contained in the buffer pool, or whose data has changed, the corresponding first target data is obtained from the cache library.
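The partial-hit handling described above can be pictured as a simple partition of the requested fields; the split_target_fields helper and the unchanged callback (standing in for the first correlation check) are hypothetical.

```python
from typing import Any, Callable, Dict, Iterable, List, Tuple

def split_target_fields(target_fields: Iterable[str],
                        buffer_pool: Dict[str, Any],
                        unchanged: Callable[[str], bool]
                        ) -> Tuple[Dict[str, Any], List[str]]:
    # Fields present in the buffer pool with unchanged data are served from it;
    # every other field goes into the first target task sent to the cache library.
    hits: Dict[str, Any] = {}
    misses: List[str] = []
    for field in target_fields:
        if field in buffer_pool and unchanged(field):
            hits[field] = buffer_pool[field]
        else:
            misses.append(field)
    return hits, misses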
In this application, the buffer pool stores the query data from the first time period before the current moment. When a query instruction for data of a first target field is received, the buffer pool is searched to determine whether data of the same target field has already been queried; when the buffer pool contains the first target field, the first target data corresponding to it is obtained directly from the buffer pool, and when the buffer pool does not contain the first target field, the first target data is obtained from the cache library. This reduces the amount of processing spent on repeatedly queried data and improves query efficiency.
When a query instruction for data of a first target field is received, if the buffer pool contains the first target field and the corresponding field data in the cache library has not changed, the first target data corresponding to the first target field is obtained directly from the buffer pool, which reduces the amount of processing spent on repeatedly queried data; otherwise, a first target task for acquiring the first target data is generated and sent to the cache library, and the cache library generates parallel first target subtasks for its nodes, so that the nodes execute their corresponding subtasks simultaneously and the first target data is acquired more efficiently.
In step S102, the data stored in the cache library may be extracted, according to the user's extraction instruction, from a plurality of databases that have established connection relationships with the cache library.
Specifically, as shown in fig. 2, before determining whether the first target field is contained in the buffer pool, the method further includes the following steps:
s201: acquiring address information of a plurality of databases, and establishing a connection relation with the databases according to the address information of the databases;
the plurality of databases may be one or more databases, and the address information of the database may include information such as port, name, IP address, etc. of the database. Address information of several databases may be input by a user through a user terminal.
Establishing a connection relationship with the databases may refer to a user terminal directly establishing communication connection with the databases, and specifically, may establish communication connection with the databases by loading a connection driver of the corresponding database in the user terminal.
The connection driver of each database may be stored in a preset driver storage directory. After the address information input by the user is received, the connection driver of the corresponding database is loaded from the preset driver storage directory in response to the user's database connection instruction, and connection relationships are established with the several databases.
For a database with which a connection relationship has been successfully established, a connection feedback message may be returned to the user terminal; after the user terminal receives the connection feedback message of the corresponding database, it determines that the connection between that database and the user terminal has been successfully established.
S202: in response to an extraction instruction, extracting data of a plurality of extraction fields from a plurality of databases based on an extraction rule and storing the data in the cache library;
wherein the extraction instruction comprises an extraction rule and a plurality of extraction fields.
The extraction rules may include a timed extraction, an immediate extraction, an exception extraction, an incremental extraction, or a full extraction.
Timed extraction extracts data regularly according to a set time plan; incremental extraction may extract the data in the data set whose time is later than the maximum time in the last extraction result, or extract data according to an incremental parameter, for example a certain field or a certain time period, to obtain the related data; full extraction extracts all of the data.
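As a rough sketch of how such extraction rules might be dispatched (assuming each row carries a "time" value; the helper name and row layout are assumptions, not taken from the patent):

```python
from typing import Any, Callable, Dict, List, Optional

Row = Dict[str, Any]

def select_rows_for_extraction(rows: List[Row],
                               rule: str,
                               last_max_time: Optional[float] = None,
                               increment_filter: Optional[Callable[[Row], bool]] = None
                               ) -> List[Row]:
    # "full": every row; "incremental": rows newer than the last extraction's
    # maximum time, or rows matching an incremental parameter (field / period).
    if rule == "full":
        return list(rows)
    if rule == "incremental" and last_max_time is not None:
        return [r for r in rows if r["time"] > last_max_time]
    if rule == "incremental" and increment_filter is not None:
        return [r for r in rows if increment_filter(r)]
    raise ValueError(f"unsupported extraction rule or missing parameter: {rule}")
```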
In step S202, extracting data from several databases may be extracting data of several extraction fields by directly connecting corresponding databases.
In one embodiment, the step of establishing a connection relationship with the databases according to the address information of the databases specifically includes:
defining a target data interface so that each database to be connected defines a corresponding access interface according to the target data interface;
and establishing connection with the access interfaces corresponding to the databases to be connected through the target data interface.
When there are multiple databases, data communication between the user terminal and a cross-database data source may be achieved using the target data interface. The cross-database data source provides data transmission channels for the several databases, and a user can access a corresponding database through the cross-database data source to obtain its data.
As shown in fig. 3, the target data interface is used to enable data communication between the user terminal and the cross-database data source. The cross-database data source may be provided with access interfaces corresponding to a plurality of databases, such as an Oracle access interface corresponding to an Oracle database, a MySQL access interface corresponding to a MySQL database, a Hive access interface corresponding to a Hive database, a MongoDB access interface corresponding to a MongoDB database, and the like.
When defining the access interface corresponding to a database, the access interface may be defined according to the data connection protocol of that database. For example, access interfaces may be defined based on the JDBC interface for Oracle, MySQL and Hive databases, and based on the corresponding MongoDB interface for MongoDB databases.
When data is acquired, the user terminal only needs to establish a communication connection with the cross-database data source through the unified target data interface; the cross-database data source then accesses the data of the different databases through their respective access interfaces.
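The unified interface plus per-database access interfaces can be illustrated with a small adapter registry; TargetDataInterface and MySQLAccess are hypothetical stand-ins and do not reflect the actual interface definitions of the patent or of any driver library.

```python
from typing import Any, Dict, List

class MySQLAccess:
    # Stand-in access interface; a real one would query the database over
    # its own protocol (e.g. a JDBC-style connection for MySQL/Oracle/Hive).
    def fetch(self, fields: List[str]) -> Dict[str, List[Any]]:
        return {f: [] for f in fields}

class TargetDataInterface:
    # Uniform entry point of the cross-database data source: requests are
    # routed to the access interface registered for each database type.
    def __init__(self) -> None:
        self._access_interfaces: Dict[str, Any] = {}

    def register(self, db_type: str, access_interface: Any) -> None:
        self._access_interfaces[db_type] = access_interface

    def fetch(self, db_type: str, fields: List[str]) -> Dict[str, List[Any]]:
        return self._access_interfaces[db_type].fetch(fields)

source = TargetDataInterface()
source.register("mysql", MySQLAccess())
result = source.fetch("mysql", ["sales province", "unit price"])
```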
Specifically, the step of extracting data of a plurality of extraction fields from a plurality of databases based on extraction rules and storing the data in the cache library comprises the following steps:
determining access interface information of a database to be extracted according to the extraction instruction;
and extracting data corresponding to the extraction fields through the access interfaces of the corresponding databases based on extraction rules according to the access interface information and storing the data in the cache library.
In the embodiment of the application, connection relationships are established with the several databases according to their address information, and the original data is extracted from the source databases into the cache library, which ensures that large-scale data results can be obtained within seconds and improves data query efficiency.
Preferably, the cache library of the present application may adopt an MPP (massively parallel processing) distributed architecture for data transmission and collaborative computation; the cache library comprises a plurality of nodes connected through a node interconnection network.
After receiving the first target task, the cache library generates a parallel first target subtask for each node and sends it to the corresponding node; the nodes execute their first target subtasks simultaneously to acquire the node data, which improves data acquisition efficiency.
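A sketch of this fan-out, under the assumption that each node exposes a query(fields) call over its locally stored data; the Node class and the thread-pool based parallelism are illustrative choices, not the patented mechanism.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Dict, List

class Node:
    # Stand-in node: each node only queries the fragment data it stores locally.
    def __init__(self, local_data: Dict[str, List[Any]]) -> None:
        self._local = local_data

    def query(self, fields: List[str]) -> Dict[str, List[Any]]:
        return {f: self._local.get(f, []) for f in fields}

def run_first_target_task(target_fields: List[str],
                          nodes: List[Node]) -> List[Dict[str, List[Any]]]:
    # One first target subtask per node, submitted concurrently; the returned
    # list is the node data that is later combined into the first target data.
    with ThreadPoolExecutor(max_workers=max(1, len(nodes))) as pool:
        futures = [pool.submit(node.query, target_fields) for node in nodes]
        return [f.result() for f in futures]
```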
The nodes may be SMP server nodes, each with its own independent disk storage and memory system. After data is extracted from a database, it can be stored in the disk storage and memory system of the SMP server node corresponding to that database. During use, each SMP server node only accesses its own local resources; the nodes share nothing and operate independently, and the user can expand the number of nodes without limit as needed, giving the architecture good scalability.
In one embodiment, if there are at least two databases, after extracting the data of the several extraction fields from the several databases based on the extraction rules and storing the data in the cache library, the method further includes the following steps:
responding to a selection instruction of at least two data tables of at least two databases, and displaying a plurality of fields of the at least two data tables;
responding to a selection instruction for at least two fields among the plurality of fields, acquiring the association relation of the at least two data tables, and determining the first target field information of a data display area according to the association relation of the at least two fields in the data tables and the association relation of the at least two data tables; the first target field information comprises a first target field and display information of the first target field;
before the first target data corresponding to the first target field is obtained, the method further comprises the following steps:
and responding to configuration instructions of the maximum query memory, the maximum query memory of each node and the maximum used memory, generating configuration information and sending the configuration information to the server so that the server configures the query parameters of the cache library according to the configuration information.
Specifically, when the data size to be queried is large or the number of concurrent processes of the cache library is large, the query performance can be improved by increasing the maximum query memory, the maximum query memory of each node, and the maximum used memory, and by increasing the query memory of the cache library.
After the first target data corresponding to the first target field is obtained, the method further comprises the following steps:
and displaying the first target data in a data display area according to the display information of the first target field.
The selection instruction can be generated according to triggering operations such as clicking and double clicking of at least two data tables of at least two databases by a user.
Each data table comprises at least one field, and when a selection instruction of a user for at least two data tables is received, a plurality of fields contained in the selected data tables are displayed;
the selection instruction may be generated according to a dragging operation of a user on at least two fields of the plurality of fields, and specifically, when the user drags the at least two fields of the plurality of fields to a set data selection area, the selection instruction is generated.
The association relation identifier of the data tables is used to identify the association relation between the two data tables.
As shown in fig. 4, which is a schematic diagram of a display interface of a user terminal in an embodiment; the display interface of the user terminal comprises a database selection area, a table relation view area and a data display area.
In the embodiment of the application, when it is detected that the user drags at least two fields from the database selection area into the table relation view area, the two data tables, table 1 and table 2, corresponding to the at least two fields and the association relation identifiers of the two data tables are displayed in the table relation view area.
The association relationship between the two data tables is used to determine the connection order and connection type of the at least two data tables; the connection order determines the positional relationship of the at least two data tables. As in fig. 4, table 1 is set as the master table and table 2 as the slave table, with table 1 placed before table 2; for convenience of description, the association relationship between table 1 and table 2 is described below with table 1 as the left table and table 2 as the right table.
For data tables belonging to different databases, corresponding database identifiers may be set to identify the data tables in a table relation view area, where each database may be provided with corresponding database identifiers, such as the database identifier 101 and the database identifier 102 in fig. 4, so that a user may quickly identify database information to which the data table belongs.
The connection types may include inner join, full join, left join and right join, and each connection type may have a different connection type identifier; for example, connection type identifier 103 in fig. 4 indicates that the connection type of table 1 and table 2 is an inner join.
When the connection type is inner join, the connection result contains the rows whose values are equal in the left table and the right table; when the connection type is full join, the connection result contains all rows of both the left and right tables; when the connection type is left join, the connection result contains all rows of the left table; when the connection type is right join, the connection result contains all rows of the right table.
In this embodiment of the present application, the connection positions of two data tables may be determined according to the order in which the user drags them; for example, in fig. 3, the table 1 corresponding to the target field dragged first is the left table and the table 2 corresponding to the target field dragged later is the right table. After at least two fields in the database selection area are dragged into the table relation view area, the connection relationship between the at least two data tables is established by automatically matching fields with the same field name and data type in the two tables, and the connection type of the at least two data tables defaults to inner join, that is, the connection result takes the data of all rows where the two tables intersect.
In this application, connection relationships are established with the several databases according to their address information; according to the user's selection of different target fields from different data tables of different databases in the database selection area, the association relation of the selected data tables is established; the display field information of the data display area is determined from the association relation of the target fields in the data tables and the association relation of the at least two data tables; and the target data corresponding to the display fields is then obtained. A large amount of data therefore does not need to be read from the databases in advance, which improves the efficiency of cross-database data query.
As shown in fig. 5, specifically, the step S202 of storing the data of the extracted fields in the cache library specifically includes:
s301: fragmenting the data of the extraction fields to obtain a plurality of fragment data;
s302: respectively storing the fragment data to a plurality of nodes under the cache library, and generating a fragment table; the fragment table comprises a plurality of fields of a plurality of fragment data and node identifications of the plurality of fragment data;
in step 301, when data of a plurality of extracted fields is fragmented, random () function may be used to fragment the data randomly, or other existing fragmentation modes may also be used to fragment the data according to user requirements.
In step 302, each piece of shard data may be stored in one node, and the shard table is used to determine the node where each piece of shard data is stored and the content of the shard data.
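A toy version of steps S301 and S302, assuming records are dictionaries of field/value pairs and nodes are identified by simple string identifiers; the fragment-table layout shown is an assumption for illustration.

```python
import random
from typing import Any, Dict, List

def fragment_and_register(records: List[Dict[str, Any]],
                          node_ids: List[str]):
    # Random fragmentation of the extracted records, plus a fragment table that
    # records, for each node, which fields its fragment data contains.
    fragments: Dict[str, List[Dict[str, Any]]] = {nid: [] for nid in node_ids}
    for record in records:
        fragments[random.choice(node_ids)].append(record)
    fragment_table = [
        {"node": nid, "fields": sorted({k for row in rows for k in row})}
        for nid, rows in fragments.items() if rows
    ]
    return fragments, fragment_table
```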
The maximum query memory and the maximum used memory of each node may be determined according to a configuration instruction of a user, and specifically, when the amount of data to be queried is large or the number of concurrent processes of the cache library is large, the query performance may be improved by increasing the maximum query memory, the maximum query memory and the maximum used memory of each node, and by increasing the query memory of the cache library.
Conventional row-oriented databases typically store data row by row, that is, the data of every field in a row is stored together, so the data of a given row can be modified quickly. In the embodiment of the present application, however, since the data of specific fields needs to be read, storing the data in a row-oriented database would bring in a large amount of data from unnecessary fields when reading, wasting system resources and affecting data reading efficiency.
Therefore, in this embodiment of the present application, the step of respectively storing the plurality of fragmented data to the plurality of nodes under the cache library specifically includes:
determining one data column per field and storing the values belonging to the same field in the same data column, so that the several nodes under the cache library hold a plurality of data columns.
The fragment data comprises a plurality of fields and a plurality of numerical values corresponding to the fields.
In the embodiment of the application, the fragment data is stored in the cache library in a column-oriented manner, so that only the data columns of the relevant fields need to be read when data is read, which improves the reading efficiency for large data volumes.
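The column-oriented layout can be illustrated by pivoting row records into per-field columns; the to_columns helper is a hypothetical sketch, not the cache library's storage format.

```python
from collections import defaultdict
from typing import Any, Dict, List

def to_columns(records: List[Dict[str, Any]]) -> Dict[str, List[Any]]:
    # Column-wise layout: all values of one field end up in one data column,
    # so a query touches only the columns of the fields it actually needs.
    columns: Dict[str, List[Any]] = defaultdict(list)
    for record in records:
        for field, value in record.items():
            columns[field].append(value)
    return dict(columns)
```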
The step in which the cache library generates at least one first target subtask corresponding to a node in the cache library according to the first target task and sends each first target subtask to its corresponding node at the same time comprises the following steps:
and the cache library determines at least one target node identifier corresponding to the at least one first target field according to the first target task and the fragment table, and generates a first target subtask of the at least one target node according to the at least one target node identifier.
In this embodiment of the application, the target node identifier stored in the cache library may be obtained according to the first target field, and a first target subtask of each target node is generated, so as to obtain the first target data from each target node.
When there are two or more first target fields and those fields are distributed across two or more different target nodes, the step of obtaining the first target data corresponding to the first target fields from the node data specifically includes:
receiving at least two node data returned by the at least two target nodes;
and combining the at least two node data to obtain first target data.
When merging the target data, at least two target data may be merged according to the association relationship between the fields corresponding to the target data to obtain the first target data.
The association relationship between the fields can be determined according to the operation sequence of the user for inputting the fields or dragging the fields, for example, the field input first or dragged first is positioned behind the field input last or dragged last.
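A simplified view of combining node data, assuming each node returns a mapping from field name to a list of values and that the field association is given as an explicit ordering; real merging would follow the association relationships described above.

```python
from typing import Any, Dict, List

def merge_node_data(node_results: List[Dict[str, List[Any]]],
                    field_order: List[str]) -> Dict[str, List[Any]]:
    # Concatenate the values returned per field by each target node, then
    # order the result columns according to the field association order.
    merged: Dict[str, List[Any]] = {field: [] for field in field_order}
    for result in node_results:
        for field, values in result.items():
            merged.setdefault(field, []).extend(values)
    return {field: merged[field] for field in field_order}
```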
In an embodiment, an executing subject of the cache library-based data query method according to the present application may be a server, and after the server executes the cache library-based data query method to obtain the first target data, the method further includes the following steps:
and returning the first target data to the user terminal so that the user terminal displays the first target data in a data display area.
The user terminal may be the end where the user sending the query instruction of the data of the first target field is located, and the user terminal may include, but is not limited to, a terminal device such as a smart phone, a notebook computer, and a tablet computer.
When an existing database is used for data acquisition, the data in the database is usually obtained by executing the user's query SQL (Structured Query Language) statements; because the size of the data display area is limited, a query SQL statement has to be executed to fetch data every time a page of data is displayed.
For example, when the queried data exceeds 1000 rows and each page of the data display area can display 1000 rows, displaying the first page requires executing a select * from table statement in the database to fetch 1000 rows starting from the first row; displaying the second page requires executing the select * from table statement again in the database to fetch 1000 rows starting from the first row, and so on. As later pages are displayed, the data query becomes slower and slower, which affects the user experience.
Therefore, in the embodiment of the present application, the query instruction includes a fetch threshold; before responding to a query instruction for data of the first target field, the method further comprises the following steps:
determining the fetch threshold according to the maximum number of data rows that can be displayed in the data display area;
responding to a selection instruction for at least one first target field, and generating a query instruction for the data of the at least one first target field according to the fetch threshold;
the step of obtaining the first target data corresponding to the first target field includes:
sequentially acquiring the first target data of the number of data rows corresponding to the fetch threshold;
displaying the first target data in the data display area, and, in response to a page switching instruction for the data display area, displaying the first target data acquired in the next fetch in the data display area.
The fetch threshold is used to limit the amount of data fetched from the database each time; in this embodiment, the fetch threshold may be the maximum number of data rows displayable in the data display area.
The maximum number of displayable data rows may be determined according to the user's configuration of the display rows of the data display area, for example 10 rows per page; the fetch threshold can then also be set to 10, so that only 10 rows of data are read when a page is displayed.
Because the amount of first target data acquired each time is limited, each fetch exactly fills the maximum number of displayable rows in the data display area; when a page switching instruction from the user is received, the first target data of the next fetch-threshold batch of data rows is acquired and displayed in the data display area, which improves data query efficiency.
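A sketch of fetch-threshold paging, assuming the underlying query accepts limit and offset arguments; the names are illustrative and the in-memory list merely stands in for the cache library.

```python
from typing import Any, Callable, List

def fetch_page(query: Callable[..., List[Any]],
               fetch_threshold: int,
               page_index: int) -> List[Any]:
    # Only fetch_threshold rows (the maximum the display area can show) are
    # fetched per request; a page-switch instruction triggers the next fetch.
    offset = page_index * fetch_threshold
    return query(limit=fetch_threshold, offset=offset)

# usage sketch with an in-memory stand-in for the cache-library query
rows = list(range(100))
page = fetch_page(lambda limit, offset: rows[offset:offset + limit],
                  fetch_threshold=10, page_index=2)   # rows 20..29
```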
As shown in FIG. 6, in one embodiment, the data display area includes a filter criteria text box area; before determining whether the at least one first target field is contained in the buffer pool in response to a query instruction for data of the at least one first target field, the method further comprises the following steps:
s401: responding to the operation of dragging any field to the screening condition text box area, and displaying a field expression of the any field; wherein, the field expression comprises a condition object, a logic operator and a value range setting item;
s402: receiving a setting instruction of the logical operator and the value range setting item, and generating a screening condition of any field;
s403: and screening the at least one first target field according to the screening condition to generate the query instruction of the data of the at least one first target field.
The condition object is the name of the field to be screened; when the user adds a screening condition, the condition object is generated automatically from the name of the field to be screened.
The logical operator joins the condition object and the value range setting item to form a conditional expression. Logical operators may include equal to, not equal to, contains, does not contain, fuzzy match, does not match, does not begin with, does not end with, is null, is not null, and the like.
The value range setting item is the specific value of the screening condition; the data is filtered according to the selected value and only the data corresponding to the selected value is displayed.
It should be noted that when the logical operator is "null" or "not null", the value range setting item may not be displayed.
Specifically, after the user selects the fields "delivery area", "sales province", "unit price" and "sales", the screening condition may further be set to "delivery area equals northeast", where "delivery area" is the condition object, "equals" is the logical operator and "northeast" is the value range setting item, so as to obtain the data of the sales province, unit price, sales and other fields whose delivery area is the northeast.
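The screening condition could be represented as a small predicate builder; the operator set and helper names below are assumptions for illustration and cover only a few of the logical operators listed above.

```python
from typing import Any, Callable, Dict, Optional

OPERATORS: Dict[str, Callable[[Any, Any], bool]] = {
    "equals":     lambda value, target: value == target,
    "not equals": lambda value, target: value != target,
    "contains":   lambda value, target: target in (value or ""),
    "is null":    lambda value, target: value is None,
}

def build_filter(condition_object: str,
                 logical_operator: str,
                 value_range: Optional[Any] = None) -> Callable[[Dict[str, Any]], bool]:
    # A screening condition = (condition object, logical operator, value range),
    # e.g. ("delivery area", "equals", "northeast"); it becomes a row predicate.
    op = OPERATORS[logical_operator]
    return lambda row: op(row.get(condition_object), value_range)

northeast_only = build_filter("delivery area", "equals", "northeast")
matches = northeast_only({"delivery area": "northeast", "sales": 120})  # True
```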
In the embodiment of the application, the user screens the first target field by setting the screening condition, so that the query result data set is reduced according to the screening condition, and the query performance is improved.
In order to ensure that the displayed target data is updated in time, the embodiment of the present application further includes, after the first target data is displayed in the data display area:
and refreshing the first target data corresponding to the first target field according to a preset automatic refreshing mode, and displaying the refreshed first target data in a data display area.
The automatic refresh modes can include several data refresh modes such as timed refresh, refresh on opening, refresh on component modification, refresh on filter switching, and the like. However, for cross-database data query, frequent automatic refreshes occupy system resources and reduce query performance. Therefore, in one embodiment, the cross-database data query method further comprises the following steps:
and closing the automatic refresh function in response to the refresh closing instruction.
In the embodiment of the application, the automatic refreshing function is closed, so that the occupation of system resources is avoided, and the data query performance is improved.
As shown in fig. 7, an embodiment of the present application further provides a data query apparatus based on a cache library, which may be implemented by software, hardware, or a combination of the two as all or a part of an electronic device. The apparatus includes:
a buffer pool determining module 201, configured to determine, in response to a query instruction for data of at least one first target field, whether the at least one first target field is contained in a buffer pool; the buffer pool stores target fields obtained in response to query instructions within a first time period before the current time, together with the target data corresponding to those target fields; each target field in the buffer pool carries first correlation information, which is used for determining whether the data of the field in the cache library that corresponds to that target field has changed;
a first query module 202, configured to, if the buffer pool contains the at least one first target field and it is determined according to the first correlation information of the first target field that the data of the corresponding field in the cache library has not changed, obtain the first target data corresponding to the at least one first target field from the buffer pool;
a second query module 203, configured to generate a first target task for obtaining the first target data according to the at least one first target field that is not in the buffer pool, and send the first target task to the cache library, so that the cache library generates at least one first target subtask corresponding to a node in the cache library according to the first target task and sends each first target subtask to its corresponding node at the same time, receives the node data returned by the at least one node, and obtains the first target data corresponding to the first target field from the node data; the cache library comprises a plurality of nodes connected through a node interconnection network, and each node in the cache library stores data of a plurality of fields extracted in advance from a target database.
It should be noted that when the cache-library-based data query apparatus provided in the foregoing embodiment executes the cache-library-based data query method, the division into the above functional modules is only an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the data query apparatus and the data query method provided in the above embodiments belong to the same concept; details of the implementation process are given in the method embodiment and are not repeated here.
The present embodiment provides an electronic device, which may be used to execute all or part of the steps of the cache library-based data query method according to the embodiment of the present application. For details not disclosed in the present embodiment, please refer to the method embodiments of the present application.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 300 may be, but is not limited to, a combination of one or more of various servers, personal computers, notebook computers, smart phones, tablet computers, and the like.
In the preferred embodiment of the present application, the electronic device 300 comprises a memory 301, at least one processor 302, at least one communication bus 303, and a transceiver 304.
Those skilled in the art will appreciate that the structure of the electronic device shown in fig. 8 does not constitute a limitation of the embodiments of the present application; it may be a bus-type or star-type structure, and the electronic device 300 may include more or less hardware or software than shown, or a different arrangement of components.
In some embodiments, the electronic device 300 is a device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance, and the hardware includes but is not limited to a microprocessor, an application specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The electronic device 300 may further include a client device, which includes, but is not limited to, any electronic product capable of interacting with a client through a keyboard, a mouse, a remote controller, a touch pad, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, a digital camera, and the like.
It should be noted that the electronic device 300 is only an example; other existing or future electronic products, if adaptable to the present application, should also be included in the scope of protection of the present application and are incorporated herein by reference.
In some embodiments, the memory 301 has stored therein a computer program which, when executed by the at least one processor 302, implements all or part of the steps of the cache-library-based data query method according to the embodiments. The memory 301 includes a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), a one-time programmable read-only memory (OTPROM), an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
In some embodiments, the at least one processor 302 is the control unit of the electronic device 300; it connects the various components of the electronic device 300 using various interfaces and lines, and executes the various functions of the electronic device 300 and processes its data by running or executing the programs or modules stored in the memory 301 and calling the data stored in the memory 301. For example, the at least one processor 302, when executing the computer program stored in the memory, implements all or part of the steps of the cache-library-based data query method described in the embodiments of the present application, or implements all or part of the functions of the cache-library-based data query apparatus. The at least one processor 302 may be composed of integrated circuits, for example a single packaged integrated circuit, or of a plurality of integrated circuits packaged with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips.
In some embodiments, the at least one communication bus 303 is arranged to enable connectivity communication between the memory 301 and the at least one processor 302, and the like.
The electronic device 300 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
The present embodiment provides a computer-readable storage medium on which a computer program is stored; the computer program is adapted to be loaded by a processor and to execute the cache-library-based data query method of the above embodiments. For the specific execution process, refer to the description of the method embodiment, which is not repeated here.
For the apparatus embodiment, since it basically corresponds to the method embodiment, reference may be made to the relevant parts of the method embodiment. The device embodiments described above are merely illustrative: components described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this application. Those of ordinary skill in the art can understand and implement it without inventive effort.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above are merely examples of the present application and are not intended to limit it. Those skilled in the art may make various modifications and changes, and any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.

Claims (9)

1. A data query method based on a cache library is characterized by comprising the following steps:
acquiring address information of a plurality of databases, and establishing connection relations with the databases according to the address information of the databases;
in response to an extraction instruction, extracting data of a plurality of extraction fields from at least two databases based on an extraction rule and storing the data into the cache library; the extraction instruction comprises an extraction rule and a plurality of extraction fields;
in response to a selection instruction for at least two data tables of the at least two databases, displaying a plurality of fields of the at least two data tables;
in response to a selection instruction for at least two fields among the plurality of fields, acquiring the association relationship of the at least two data tables, and determining first target field information of a data display area according to the association relationship of the at least two fields in the data tables and the association relationship of the at least two data tables; the association relationship between the data tables is used for determining the connection order and the connection type of the at least two data tables, and the first target field information comprises a first target field;
in response to a query instruction for data of at least one first target field, determining whether the at least one first target field is contained in a buffer pool; wherein the buffer pool stores target fields obtained in response to query instructions within a first time period before the current time and the target data corresponding to those target fields; a target field of the buffer pool has first correlation information; the first correlation information is used for determining whether the data of the field in the cache library corresponding to the target field of the buffer pool has changed;
if the buffer pool contains the at least one first target field and it is determined, according to the first correlation information of the first target field, that the data of the corresponding field in the cache library has not changed, acquiring the first target data corresponding to the first target field from the buffer pool;
otherwise, generating a first target task for acquiring the first target data according to the at least one first target field that is not in the buffer pool, and sending the first target task to the cache library, so that the cache library generates, according to the first target task, at least one first target subtask corresponding to a node in the cache library and simultaneously sends each first target subtask to its corresponding node; receiving node data returned by the at least one node, and acquiring the first target data corresponding to the first target field according to the node data; the cache library comprises a plurality of nodes connected through a node interconnection network, and each node in the cache library stores data of a plurality of fields extracted in advance from a target database.
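For illustration only, the following Python sketch shows one possible realization of the query flow recited in claim 1: fields whose cached data is still valid are served from the buffer pool, and the remaining fields are fanned out to the cache library nodes and merged. The names BufferPool, CacheLibrary, and query are hypothetical and do not appear in the patent.

```python
# Illustrative sketch only; class and function names are hypothetical,
# not the patented implementation.
from dataclasses import dataclass, field


@dataclass
class BufferPool:
    # field_name -> (cached data, still_valid flag derived from the
    # first correlation information)
    entries: dict = field(default_factory=dict)

    def lookup(self, field_name):
        data, still_valid = self.entries.get(field_name, (None, False))
        return data if still_valid else None


@dataclass
class CacheLibrary:
    # node_id -> {field_name: list of values held by that node}
    nodes: dict = field(default_factory=dict)

    def run_task(self, target_fields):
        """Split a target task into one subtask per node and merge the
        node data returned for each requested field."""
        merged = {}
        for node_data in self.nodes.values():
            for f in target_fields:
                if f in node_data:
                    merged.setdefault(f, []).extend(node_data[f])
        return merged


def query(target_fields, pool, cache):
    """Serve fields whose cached data is still valid from the buffer pool;
    send the remaining fields to the cache library as a first target task."""
    result, missing = {}, []
    for f in target_fields:
        hit = pool.lookup(f)
        if hit is not None:
            result[f] = hit
        else:
            missing.append(f)
    if missing:
        result.update(cache.run_task(missing))
    return result


pool = BufferPool({"order_id": ([1, 2, 3], True)})
cache = CacheLibrary({"node-1": {"amount": [9.9, 5.0]}})
print(query(["order_id", "amount"], pool, cache))
```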
2. The cache-library-based data query method of claim 1, wherein
the first target field information includes display information of the first target field, and after the first target data corresponding to the first target field is acquired, the method further comprises the following step:
displaying the first target data in the data display area according to the display information of the first target field.
3. The cache library-based data query method of claim 1, wherein the step of storing the data of the plurality of extraction fields into the cache library specifically comprises:
fragmenting the data of the plurality of extraction fields to obtain a plurality of fragment data;
storing the plurality of fragment data on a plurality of nodes of the cache library, respectively, and generating a fragment table; the fragment table comprises a plurality of fields of the plurality of fragment data and the node identifiers of the plurality of fragment data;
and wherein the step in which the cache library generates at least one first target subtask corresponding to a node in the cache library according to the first target task and simultaneously sends the first target subtask to the corresponding node comprises:
the cache library determining, according to the first target task and the fragment table, at least one target node identifier corresponding to the at least one first target field, and generating a first target subtask for the at least one target node according to the at least one target node identifier.
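The sketch below illustrates the fragmenting and fragment-table routing described in claim 3. The functions shard_fields and subtasks_for, and the round-robin placement policy, are hypothetical assumptions for illustration, not the patented scheme.

```python
# Illustrative sketch only; names and placement policy are assumptions.
from collections import defaultdict


def shard_fields(extracted, node_ids):
    """Fragment the data of each extraction field across the cache-library
    nodes and build a fragment table: field -> node ids holding fragments."""
    node_store = {n: defaultdict(list) for n in node_ids}
    fragment_table = defaultdict(set)
    for field_name, values in extracted.items():
        for i, value in enumerate(values):
            node_id = node_ids[i % len(node_ids)]      # round-robin placement
            node_store[node_id][field_name].append(value)
            fragment_table[field_name].add(node_id)
    return node_store, dict(fragment_table)


def subtasks_for(target_fields, fragment_table):
    """Use the fragment table to find the target node ids for the target
    fields and group the fields into one subtask per target node."""
    per_node = defaultdict(list)
    for f in target_fields:
        for node_id in fragment_table.get(f, ()):
            per_node[node_id].append(f)
    return dict(per_node)


store, table = shard_fields({"amount": [1, 2, 3, 4]}, ["node-1", "node-2"])
print(subtasks_for(["amount"], table))   # both nodes hold a fragment of "amount"
```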
4. The cache-library-based data query method of claim 3, wherein, if there are at least two target nodes, the step of obtaining the first target data corresponding to the first target field according to the node data specifically comprises:
receiving at least two pieces of node data returned by the at least two target nodes;
and combining the at least two pieces of node data to obtain the first target data.
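As a minimal illustration of the merge step in claim 4, the snippet below combines the node data returned by two or more target nodes. Simple concatenation is only one possible merge policy and is an assumption, not the patented behaviour.

```python
# Illustrative sketch of the merge step; concatenation is an assumed policy.
from itertools import chain


def merge_node_data(node_results):
    """Combine the node data returned by the target nodes into
    the first target data."""
    return list(chain.from_iterable(node_results))


print(merge_node_data([[1, 2], [3, 4]]))   # [1, 2, 3, 4]
```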
5. The cache library-based data query method according to claim 3, wherein the fragment data includes a plurality of fields and a plurality of values corresponding to the fields, and the step of storing the plurality of fragment data on a plurality of nodes of the cache library respectively specifically comprises:
determining a data column according to each field, storing the plurality of values belonging to the same field in the same data column, and generating a plurality of data columns on the plurality of nodes of the cache library.
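The following sketch shows the column-per-field layout described in claim 5: every value belonging to the same field ends up in the same data column. The row-shaped input format (a list of dicts) and the function name to_columns are hypothetical examples.

```python
# Illustrative sketch of a column-per-field layout; input format is assumed.
from collections import defaultdict


def to_columns(fragment_rows):
    """Convert row-shaped fragment data into one data column per field,
    so all values belonging to the same field sit in the same column."""
    columns = defaultdict(list)
    for row in fragment_rows:
        for field_name, value in row.items():
            columns[field_name].append(value)
    return dict(columns)


rows = [{"order_id": 1, "amount": 9.9}, {"order_id": 2, "amount": 5.0}]
print(to_columns(rows))   # {'order_id': [1, 2], 'amount': [9.9, 5.0]}
```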
6. The cache library-based data query method of any one of claims 1-5, wherein the query instruction comprises a fetch threshold;
before responding to a query instruction for data of the first target field, the method further comprises the following steps:
determining the fetch threshold according to the maximum number of data rows that can be displayed in the data display area;
in response to a selection instruction for at least one first target field, generating a query instruction for the data of the at least one first target field according to the fetch threshold;
the step of obtaining the first target data corresponding to the first target field includes:
sequentially acquiring the first target data of the number of data rows corresponding to the fetch threshold;
and displaying the first target data in the data display area, and in response to a page-switching instruction for the data display area, displaying the first target data acquired next in the data display area.
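For illustration, the generator below sketches the fetch-threshold paging of claim 6: the threshold is taken from the maximum number of displayable rows, and each page-switch request pulls the next batch. The function fetch_pages and the in-memory row list are assumptions, not the patented implementation.

```python
# Illustrative sketch of fetch-threshold paging; names and data are assumed.
def fetch_pages(all_rows, max_display_rows):
    """Use the maximum number of displayable rows as the fetch threshold and
    yield one page of first target data per page-switch request."""
    fetch_threshold = max_display_rows
    for start in range(0, len(all_rows), fetch_threshold):
        yield all_rows[start:start + fetch_threshold]


pages = fetch_pages(list(range(25)), max_display_rows=10)
print(next(pages))   # first page shown in the data display area
print(next(pages))   # next page, shown after a page-switch instruction
```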
7. A data query device based on a cache library is characterized by comprising:
the buffer pool determining module is used for acquiring address information of a plurality of databases and establishing connection relationships with the plurality of databases according to the address information of the databases; extracting, in response to an extraction instruction, data of a plurality of extraction fields from at least two databases based on an extraction rule and storing the data into the cache library, the extraction instruction comprising the extraction rule and the plurality of extraction fields; displaying, in response to a selection instruction for at least two data tables of the at least two databases, a plurality of fields of the at least two data tables; acquiring, in response to a selection instruction for at least two fields among the plurality of fields, the association relationship of the at least two data tables, and determining first target field information of a data display area according to the association relationship of the at least two fields in the data tables and the association relationship of the at least two data tables, wherein the association relationship between the data tables is used for determining the connection order and the connection type of the at least two data tables, and the first target field information comprises a first target field; and determining, in response to a query instruction for data of at least one first target field, whether the at least one first target field is contained in a buffer pool, wherein the buffer pool stores a first target field obtained in response to a query instruction within a first time period and the first target data corresponding to the first target field, a target field of the buffer pool has first correlation information with a field in the cache library, and the first correlation information is used for determining whether the data of the field in the cache library corresponding to the target field of the buffer pool has changed;
the first query module is configured to, if the buffer pool contains the at least one first target field and it is determined according to the first correlation information of the first target field that the data of the corresponding field in the cache library has not changed, obtain the first target data corresponding to the at least one first target field from the buffer pool;
the second query module is used for generating a first target task for acquiring the first target data according to the at least one first target field that is not in the buffer pool and sending the first target task to the cache library, so that the cache library generates, according to the first target task, at least one first target subtask corresponding to a node in the cache library and simultaneously sends each first target subtask to its corresponding node, receiving node data returned by the at least one node, and acquiring the first target data corresponding to the first target field according to the node data; the cache library comprises a plurality of nodes connected through a node interconnection network, and each node in the cache library stores data of a plurality of fields extracted in advance from a target database.
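As a rough illustration of the module decomposition in claim 7, the sketch below wires a buffer pool determining step, a buffer-pool read path, and a cache-library read path into one device object. All class names and the simplified in-memory data shapes are hypothetical, not the patented apparatus.

```python
# Illustrative sketch of claim 7's module layout; names and data shapes assumed.
class BufferPoolDeterminingModule:
    """Decides which first target fields are already present in the buffer pool."""
    def split(self, target_fields, pool):
        hits = {f: pool[f] for f in target_fields if f in pool}
        misses = [f for f in target_fields if f not in pool]
        return hits, misses


class FirstQueryModule:
    """Serves first target data straight from the buffer pool."""
    def fetch(self, hits):
        return hits


class SecondQueryModule:
    """Sends a first target task to the cache library nodes and merges node data."""
    def fetch(self, misses, cache_nodes):
        merged = {}
        for node_data in cache_nodes:
            for f in misses:
                merged.setdefault(f, []).extend(node_data.get(f, []))
        return merged


class CacheQueryDevice:
    """Composes the three modules in the order described in claim 7."""
    def __init__(self):
        self.determiner = BufferPoolDeterminingModule()
        self.first = FirstQueryModule()
        self.second = SecondQueryModule()

    def query(self, target_fields, pool, cache_nodes):
        hits, misses = self.determiner.split(target_fields, pool)
        result = dict(self.first.fetch(hits))
        result.update(self.second.fetch(misses, cache_nodes))
        return result


device = CacheQueryDevice()
print(device.query(["a", "b"], {"a": [1]}, [{"b": [2]}, {"b": [3]}]))
```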
8. A computer-readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the cache library-based data query method as claimed in any one of claims 1-6.
9. An electronic device, characterized in that it comprises a memory, a processor, and a computer program stored in the memory and executable by the processor, wherein the processor implements the steps of the cache library-based data query method according to any one of claims 1-6 when executing the computer program.
CN202211401691.7A 2022-11-10 2022-11-10 Data query method, device, storage medium and equipment based on cache library Active CN115438087B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211401691.7A CN115438087B (en) 2022-11-10 2022-11-10 Data query method, device, storage medium and equipment based on cache library

Publications (2)

Publication Number Publication Date
CN115438087A CN115438087A (en) 2022-12-06
CN115438087B true CN115438087B (en) 2023-03-24

Family

ID=84252302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211401691.7A Active CN115438087B (en) 2022-11-10 2022-11-10 Data query method, device, storage medium and equipment based on cache library

Country Status (1)

Country Link
CN (1) CN115438087B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115840763B (en) * 2023-02-20 2023-05-23 中航信移动科技有限公司 Data storage method based on multiple databases, storage medium and electronic equipment
CN116150162B (en) * 2023-04-20 2023-06-30 北京锐服信科技有限公司 Data chart updating method and device based on time slicing and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368006A (en) * 2020-03-31 2020-07-03 中国工商银行股份有限公司 Mass data strip conditional centralized extraction system and method
WO2022142665A1 (en) * 2020-12-28 2022-07-07 深圳壹账通智能科技有限公司 Database cluster-based data processing method and apparatus, and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111177213B (en) * 2019-12-16 2024-04-19 北京淇瑀信息科技有限公司 Privacy cluster self-service query platform, method and electronic equipment
CN114218267A (en) * 2021-11-24 2022-03-22 建信金融科技有限责任公司 Query request asynchronous processing method and device, computer equipment and storage medium
CN114756577A (en) * 2022-03-25 2022-07-15 北京友友天宇系统技术有限公司 Processing method of multi-source heterogeneous data, computer equipment and storage medium
CN115168398A (en) * 2022-08-09 2022-10-11 北京京东振世信息技术有限公司 Data query method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant