CN110555041A - Data processing method, data processing device, computer equipment and storage medium - Google Patents

Info

Publication number: CN110555041A
Authority: CN (China)
Prior art keywords: data, loading, key name, cache, cache server
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201810277788.9A
Other languages: Chinese (zh)
Inventors: 董峤术, 张灿杰, 王双宝, 王昂, 刘军
Current Assignee: Tencent Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201810277788.9A
Publication of CN110555041A

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a data processing method, an apparatus, a computer-readable storage medium and a computer device. The method comprises the following steps: acquiring a data reading request carrying data information; acquiring, based on a preconfigured key name mapping rule, the key name corresponding to the data requested to be read according to the data information; calling a service interface of a cache server according to the key name and reading the value corresponding to the key name, wherein the cache server caches the full data of the source database; and returning the read value to the requesting terminal. Because the cache server caches the full data of the source database, the probability that a data reading request hits the cache is greatly increased and the reliability of data reading is improved, so that data reading requests do not penetrate the cache to reach the source database, and pressure on the source database is avoided.

Description

Data processing method, data processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method and apparatus, a computer device, and a storage medium.
Background
Caching is an important component of data read processing. A cache is a buffer for data exchange. When data is to be read, the required data is first looked up in the cache. A high cache hit rate can greatly improve the access speed of the system.
In background services, a conventional caching scheme is hot-spot data caching, that is, adding hot-spot data to the cache. To improve the cache hit rate, eviction management must be performed on the hot-spot data. If the cache is missed, the request penetrates the cache and reaches the database, which puts great pressure on the database.
Disclosure of Invention
In view of the foregoing, there is a need to provide a data processing method, an apparatus, a computer-readable storage medium, and a computer device that reduce the pressure on the database.
A method of data processing, comprising:
Acquiring a data reading request carrying data information;
Acquiring a key name corresponding to the data requested to be read according to the data information based on a preset key name mapping rule;
Calling a service interface of a cache server according to the key name, reading a value corresponding to the key name, wherein the cache server caches the full data of the source database;
And returning the read value to the request terminal.
A data processing apparatus, characterized in that the apparatus comprises:
a read request obtaining module, configured to obtain a data read request carrying data information;
The key name acquisition module is used for acquiring the key name corresponding to the data requested to be read according to the data information based on a preset key name mapping rule;
The reading module is used for calling a service interface of a cache server according to the key name and reading a value corresponding to the key name, and the cache server caches the full data of the source database;
and the sending module is used for returning the read value to the request terminal.
A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, causes the processor to carry out the steps of the above-mentioned method.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above method.
According to the data processing method, the data processing apparatus, the computer-readable storage medium and the computer device, after a data reading request carrying data information is acquired, the key name corresponding to the data requested to be read is acquired according to the data information based on a preconfigured key name mapping rule, and the service interface of the cache server is then called according to the key name to read the value corresponding to the key name. Because the cache server caches the full data of the source database, the probability that a data reading request hits the cache is greatly increased and the reliability of data reading is improved, so that data reading requests do not penetrate the cache to reach the source database, and pressure on the source database is avoided.
Drawings
FIG. 1 is a diagram of an application environment of a data processing method in one embodiment;
FIG. 2 is a flow diagram illustrating a data processing method according to one embodiment;
FIG. 3 is a flowchart illustrating steps of loading data of a source database and writing the data to a cache server in one embodiment;
FIG. 4 is a flowchart illustrating steps for writing loaded data to the cache server according to cache configuration information, in one embodiment;
FIG. 5 is a diagram illustrating an application server cluster invoking Zookeeper for election in one embodiment;
FIG. 6 is a flowchart illustrating steps for loading data from a source database according to a data load instruction and data load configuration information, in one embodiment;
FIG. 7 is a flowchart illustrating steps of invoking a service interface of a cache server according to a key name and reading a value corresponding to the key name in one embodiment;
FIG. 8 is a diagram illustrating switching between primary and backup cache server clusters, according to an embodiment;
FIG. 9 is a system architecture diagram of a data processing system in one embodiment;
FIG. 10 is a timing diagram illustrating a data processing method according to one embodiment;
FIG. 11 is a block diagram showing the structure of a data processing apparatus according to an embodiment;
FIG. 12 is a block diagram showing the construction of a data processing apparatus according to another embodiment;
FIG. 13 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
FIG. 1 is a diagram of an application environment of a data processing method in one embodiment. Referring to fig. 1, the data processing method is applied to a data processing system. The data processing system includes a source database server 101, a cache server 102, an application server 103, and a terminal 104. The terminal 104 and the application server 103 are connected via a network. The source database server 101 is connected to the cache server 102, and the cache server 102 is connected to the application server 103. The application server 103 and the cache server 102 may be implemented by separate servers or a server cluster composed of a plurality of servers. The application server 103 loads the full amount of data from the source database server 101 to provide data for the read service of the application server, and thus, the data read request is prevented from reaching the source database and causing pressure on the source database.
In one embodiment, as shown in FIG. 2, a data processing method is provided. The embodiment is mainly exemplified by applying the method to the application server 103 in fig. 1, and the data processing method provides support for the reading service of the application server. Referring to fig. 2, the data processing method specifically includes the following steps:
S202, a data reading request carrying data information is obtained.
Specifically, a data reading request is sent to the application server based on the user's operation at the terminal, and the data reading request carries data information. The data information is information related to the data requested to be read and depends on the user's operation at the terminal. For example, when a user opens the page of an online course, the terminal requests from the application server the information about the course to be displayed (such as its price or number of purchasers); in this case the data information includes an identifier of the course, a data table and a query field, where the price or number of purchasers is a query field in the data table.
S204, acquiring the key name corresponding to the data requested to be read according to the data information based on the preconfigured key name mapping rule.
In distributed storage, data is stored in the form of key-value pairs (Key-Value); that is, each key name corresponds to a value. In this embodiment, the configuration rule for key names is configured in advance when data is written into the cache; in other words, the key names in the cache server are organized according to the preconfigured rule. When data needs to be read, the value corresponding to a key name can be queried by that key name, and the value is the data requested to be read.
In this embodiment, the key name of the data requested to be read is determined according to the preconfigured key name mapping rule and the data information. The key name mapping rule refers to a preset correspondence between key names and data information; once the data information is obtained, the key name can be determined based on the rule and the data information. In one embodiment, the key name mapping rule is that the key name is the data table name plus the query field. When a data reading request is obtained whose carried data information indicates that the data table name is "psychological" and the query field is "price", the key name is determined to be "psychological price" according to the key name mapping rule and the data information.
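For illustration, the following is a minimal Python sketch of the "data table name plus query field" mapping rule described above; the function name and the separator between the two parts are assumptions made for the example, not details taken from the patent.
```python
# Minimal sketch of the "data table name + query field" key name mapping rule.
# The function name and the separator are illustrative assumptions.

def build_key_name(data_table: str, query_field: str, separator: str = "_") -> str:
    """Map the data information carried in a read request to a cache key name."""
    return f"{data_table}{separator}{query_field}"

# Example from the description: data table "psychological", query field "price".
print(build_key_name("psychological", "price"))  # -> "psychological_price"
```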
S206, calling a service interface of the cache server according to the key name and reading the value corresponding to the key name, wherein the cache server caches the full data of the source database.
The full data refers to all the data in the source database; that is, the cache server holds all the data of the source database. In this embodiment, when the data reading request is obtained, the service interface of the cache server is called to read data from the cache server. Because the cache server caches the full data of the source database, the probability that the data reading request hits the cache is greatly increased and the reliability of data reading is improved, so that the data reading request does not penetrate the cache to reach the source database, and pressure on the source database is avoided.
The source database is a database that provides original or specific data for the application server; it stores the original data of the services related to the application server and provides the data writing service for the application. Taking an online teaching application as an example, the source database stores the information related to each course, such as the course name, price and course video. In a specific embodiment, the source database may be MySQL (a relational database management system).
The cache server may be one cluster or a plurality of clusters. In this embodiment, the service interface of the cache server is called according to the key name and the value corresponding to the key name is read, so the data in the cache server is structured as key-value pairs; that is, the cache server in this embodiment stores data as key-value pairs. When data is read, the key name is determined according to the data information and the preconfigured key name mapping rule; likewise, when the data of the source database is written into the cache server, the key name corresponding to the data is determined according to the preconfigured key name mapping rule. Because the cache server of this embodiment adopts a key-value data structure and the key names are managed according to the preconfigured key name mapping rule, compared with the conventional scheme of writing data caching code, the key names can be managed through the key name mapping rule, which reduces the operation and maintenance difficulty of the cache server.
In a specific embodiment, the cache server employs Redis (REmote DIctionary Server, a key-value storage system).
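As an illustration of the read path, the sketch below assumes Redis as the cache server and uses the Python redis client; the host name and key are placeholder assumptions, not details from the patent.
```python
# Minimal sketch of reading a value from the cache server by key name,
# assuming Redis and the Python "redis" client. Host and key are placeholders.
import redis

cache = redis.Redis(host="cache.example.internal", port=6379, decode_responses=True)

def read_value(key_name: str):
    """Call the cache service interface and read the value for the key name."""
    return cache.get(key_name)  # returns None if the key is absent

print(read_value("psychological_price"))
```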
S208, returning the read value to the requesting terminal.
The application server provides the data reading service and serves as middleware between the terminal and the cache server. In one embodiment, the read service of the application server uses Protocol Buffers (a data interchange format from Google) as the external application-layer protocol. When a data reading request is obtained, the data table name and the query field are transmitted as fields in a Protocol Buffer structure. When the interface of the cache server is called to read the value corresponding to the key name, the reflection feature of Protocol Buffers is used to fill the value into the corresponding Protocol Buffer return structure, which is returned to the requesting terminal. In this embodiment, by using Protocol Buffer reflection and rich configuration, new data can be read through the interface of the read service with only simple configuration, which generalizes the interface, reduces the development cost of the data system, and provides high extensibility and usability.
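The patent does not give the concrete Protocol Buffer message definitions, so the sketch below only illustrates the reflection idea: filling message fields by name through the descriptor. The bundled StringValue wrapper message stands in for the actual return structure, and fill_by_reflection is a hypothetical helper.
```python
# Minimal sketch of filling a Protocol Buffer return structure by reflection.
# StringValue is only a stand-in for the real (unspecified) reply message.
from google.protobuf import wrappers_pb2

def fill_by_reflection(message, field_values: dict):
    """Set message fields by name using the protobuf descriptor (reflection)."""
    for name, value in field_values.items():
        if name in message.DESCRIPTOR.fields_by_name:
            setattr(message, name, value)
    return message

reply = fill_by_reflection(wrappers_pb2.StringValue(), {"value": "99.0"})
print(reply.value)  # -> "99.0"
```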
According to the data processing method, after a data reading request carrying data information is acquired, the key name corresponding to the data requested to be read is acquired according to the data information based on the preconfigured key name mapping rule, and the service interface of the cache server is then called according to the key name to read the value corresponding to the key name. Because the cache server caches the full data of the source database, the probability that a data reading request hits the cache is greatly increased and the reliability of data reading is improved, so that data reading requests do not penetrate the cache to reach the source database, and pressure on the source database is avoided.
In another embodiment, the data processing method further includes the step of loading the data of the source database and writing the data into the cache server. As shown in fig. 3, this step includes:
S302, loading data from the source database according to the data loading instruction and the data loading configuration information.
The data loading instruction refers to an instruction for loading data from a source database, and includes a full data loading instruction and an incremental data loading instruction. The incremental data refers to new data generated by a source database after full data is loaded to the application server. Specifically, the incremental data is determined according to the update time in the data log in the source database.
The data loading configuration information refers to information configured in advance according to the source database information and used for loading data. It includes the address and password of the source database, the database from which data is loaded, the SQL statement for loading data, and the key (Key, which may be the data table name) of each data source.
It will be appreciated that a configuration page may be provided for the operation and maintenance personnel to configure the loading configuration information. The address and password of the source database, the database name, the data table and the SQL statement for loading data are entered on this page. It can be understood that each reading task has a corresponding piece of loading configuration information; that is, each reading task is configured with a corresponding database name, data table and SQL statement for loading data.
In the caching scheme of this embodiment, a corresponding loading configuration file is configured for each task, and no code needs to be written for the caching service. This approach avoids a series of steps such as writing code and testing programs in the data caching process, and greatly reduces the workload.
When the data loading instruction is obtained, the application server is connected to the source database according to the data loading configuration information, and executes the SQL statement to obtain the data specified by the loading configuration information.
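As an illustration, the sketch below assumes MySQL as the source database and the Python pymysql client. The dictionary keys mirror the data loading configuration information described above (address, password, database, SQL statement, key of the data source); all concrete values are placeholder assumptions.
```python
# Minimal sketch of loading data from the source database according to the
# loading configuration information. All connection details are placeholders.
import pymysql

load_config = {
    "host": "mysql.example.internal",             # address of the source database
    "user": "loader",
    "password": "secret",
    "database": "course_db",                      # database of the loaded data
    "sql": "SELECT id, name, price FROM course",  # SQL statement for loading data
    "key": "course",                              # key of the data source (table name)
}

def load_from_source(config: dict):
    """Connect to the source database and execute the configured SQL statement."""
    conn = pymysql.connect(
        host=config["host"],
        user=config["user"],
        password=config["password"],
        database=config["database"],
    )
    try:
        with conn.cursor(pymysql.cursors.DictCursor) as cursor:
            cursor.execute(config["sql"])
            return cursor.fetchall()              # rows as a list of dicts
    finally:
        conn.close()
```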
In one embodiment, the data load instruction includes a full data load instruction, and the loading data from the source database according to the data load instruction and the data load configuration information includes: and loading the full data from the source database according to the full data loading instruction and the data loading configuration information.
The full data loading instruction is generated when the loading service is started for the first time. The application server connects to the source database according to the data loading configuration information, executes the SQL statement, and loads the full data from the source database to the application server locally.
In another embodiment, the data load instruction comprises an incremental data load instruction; loading data from a source database according to the data loading instruction and the data loading configuration information, comprising: and when the set incremental data loading condition is reached, loading the incremental data from the source database according to the incremental data loading instruction and the data loading configuration information.
The incremental data refers to new data generated by the source database after the full data has been loaded to the application server. Specifically, the incremental data is determined according to the update time in the data log of the source database. The data loading instruction also includes a loading frequency, which is specifically the loading frequency of the incremental data. The frequency control may be configured periodically or in a Crontab (periodically executed) manner. The loading service controls the frequency of loading data from the data source according to the configuration information.
When the loading condition corresponding to the loading frequency is reached, an incremental data loading instruction is generated; the application server connects to the source database according to the loading configuration information, executes the SQL statement, and loads the incremental data from the source database to the application server locally.
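The following sketch illustrates one way the incremental loading frequency could be driven: a simple periodic loop that selects only rows updated since the last load. It assumes the table exposes the update time as an "update_time" column and reuses a pymysql connection such as the one from the previous sketch; the interval and SQL are illustrative assumptions.
```python
# Minimal sketch of periodic incremental loading. The "update_time" column,
# the SQL statement, and the interval are illustrative assumptions.
import time
from datetime import datetime

INCREMENT_SQL = "SELECT id, name, price FROM course WHERE update_time > %s"

def incremental_load_loop(conn, interval_seconds: int = 300):
    last_load_time = datetime(1970, 1, 1)          # before the first run
    while True:
        with conn.cursor() as cursor:
            cursor.execute(INCREMENT_SQL, (last_load_time,))
            rows = cursor.fetchall()               # incremental data since the last load
        last_load_time = datetime.now()
        # ... write "rows" to the cache server here (see the write sketch below) ...
        time.sleep(interval_seconds)               # loading frequency from configuration
```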
S304, writing the loaded data into the cache server according to the cache configuration information and the key name mapping rule.
The cache configuration information refers to information configured in advance according to the information of the cache server and the data structure, and is used for writing data. The cache configuration information includes mapping rules of key names corresponding to each data table, addresses and password information of the cache server, and the like. The key name mapping rule refers to a preset correspondence relationship between key names and data information. The data information includes a data table name of the source database and fields of the data table.
It will be appreciated that a configuration page may be provided for the operation and maintenance personnel to configure the cache configuration information. The address, password and key name mapping rule of the cache server are entered on this page. In this embodiment, the cache configuration information is preconfigured and the key names are managed according to the preconfigured key name mapping rule; compared with the conventional scheme of writing data caching code, managing the key names through the key name mapping rule reduces the operation and maintenance difficulty of the cache server.
After the application server loads the data from the source database locally, it connects to the cache server according to the cache configuration information and writes the loaded data into the cache server.
The data processing method is a highly configurable implementation; by configuring the loading configuration information and the cache configuration information in advance, no code writing is needed, which greatly reduces the operation and maintenance difficulty.
In one embodiment, the step of writing the loaded data into the cache server according to the cache configuration information, as shown in fig. 4, includes:
S402, extracting the data table name and the field name from the loaded data according to the key name mapping rule to obtain the key name.
The key name mapping rule refers to a preset correspondence between key names and data information. The data information includes the data table name of the source database and the fields of the data table. The data table name and field name are extracted from the loaded data to obtain the key name.
S404, extracting values corresponding to the data table name and the field name from the loaded data.
S406, calling an interface of the cache server according to the cache configuration information, and writing the key name and the value into the cache server as key-value pairs.
The application server provides a configuration page on which the cache configuration information is configured. The cache configuration information includes the key name mapping rule corresponding to each data table, the address and password information of the cache server, and the like. The key name mapping rule refers to a preset correspondence between key names and data information. In one embodiment, the data information includes the data table name of the source database and a field of the data table. Once the cache configuration information is configured, the name of the data table requested by the user can be mapped to the prefix of the cache server key name, and a field name in the request is then appended to obtain the complete key name (Key) of the cache server.
The cache server in this embodiment uses a Key-Value (Key-Value) storage database. According to a preset key name mapping rule, the key name corresponding to the data can be determined when the data is written, and then the value corresponding to the key name is determined according to the values of the corresponding data table and the corresponding field. When the data of the loaded source database is written into the cache server, the cache server is connected according to the address and the password of the cache server, an interface of the cache server is called, and the loaded data is written into the cache server in a key value pair mode.
The cache server of this embodiment adopts a key-value data structure, and the key names are managed according to the preconfigured key name mapping rule, which reduces the operation and maintenance difficulty of the cache server.
In one embodiment, the loading service may be deployed on multiple machines, but it must be ensured that only one instance of the loading service executes at a time. As shown in fig. 5, ZooKeeper is used for election in this embodiment, and the elected master machine executes the loading service. The loading service can load a plurality of tasks, each task being an independent configuration file; all the configuration files are read when the loading service starts, all the tasks are then executed simultaneously, and each task is executed in an independent thread.
Specifically, as shown in fig. 6, loading data from the source database according to the data loading instruction and the data loading configuration information includes:
S602, calling the ZooKeeper cluster to determine the main node in the application server cluster according to the data loading instruction.
Specifically, the ZooKeeper cluster is composed of multiple machines that execute the loading service. ZooKeeper is a distributed, open-source coordination service for distributed applications. In this embodiment, a plurality of application servers form an application server cluster to improve the disaster tolerance of the application servers. After the loading service is started, each application server registers a temporary node with the ZooKeeper cluster and writes its own address into that temporary node. With a preemption mechanism, the temporary nodes are ordered by registration time. When the application servers obtain the data loading instruction, each application server calls the ZooKeeper cluster to determine the main node in the application server cluster: each application server calls the ZooKeeper cluster to obtain the application server address stored in the first temporary node and compares it with its own address. Taking three application servers as an example, application server 1 compares the address stored in the first temporary node with its local address; because they are the same, it takes the local node as the main node. Application server 2 compares the address stored in the first temporary node with its local address; because they differ, it takes the local machine as a standby node. Application server 3 does the same comparison and likewise takes the local machine as a standby node.
When the ZooKeeper cluster detects an abnormal session with an application server (such as a disconnection or timeout), the temporary node corresponding to that application server is deleted. When the loading service of the application server starts again, a temporary node is re-registered with the ZooKeeper cluster.
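For illustration, the sketch below implements the described election with the Python kazoo ZooKeeper client: each instance registers an ephemeral sequential node containing its address, and the instance whose address is stored in the first node becomes the main node. The election path and address format are assumptions.
```python
# Minimal sketch of main-node election via ZooKeeper temporary (ephemeral)
# nodes, using the "kazoo" client. Path and addresses are placeholders.
from kazoo.client import KazooClient

ELECTION_PATH = "/load_service/election"

def elect_main_node(zk_hosts: str, my_address: str) -> bool:
    """Register a temporary node and report whether this server is the main node."""
    zk = KazooClient(hosts=zk_hosts)
    zk.start()
    zk.ensure_path(ELECTION_PATH)
    # Register a temporary (ephemeral, sequential) node holding this server's address.
    zk.create(f"{ELECTION_PATH}/node-", my_address.encode(),
              ephemeral=True, sequence=True)
    # Temporary nodes are ordered by registration; the first one names the main node.
    first = sorted(zk.get_children(ELECTION_PATH))[0]
    first_address, _ = zk.get(f"{ELECTION_PATH}/{first}")
    # The client must stay connected, or the ephemeral node is deleted.
    return first_address.decode() == my_address
```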
S604, the main node loads data from the source database according to the data loading configuration information.
Specifically, the master node of the application server cluster is connected to the source database according to the data loading configuration information, executes the SQL statement, and loads data corresponding to each configuration information.
In a specific implementation, the application server calls the ZooKeeper cluster to record the last loading time and determines the incremental data according to that time. Specifically, the application server calls the ZooKeeper cluster so that a node of the ZooKeeper cluster records the time at which data was last loaded from the source database; incremental loading can then be realized by matching the last loading time against the "update time" field of each table in MySQL.
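A possible way to record the last loading time on a ZooKeeper node is sketched below, again with kazoo; the node path and time format are assumptions, and zk is a started KazooClient such as the one in the previous sketch.
```python
# Minimal sketch of keeping the last loading time on a ZooKeeper node so the
# next run can load only incremental data. Path and format are assumptions.
from datetime import datetime

LAST_LOAD_PATH = "/load_service/last_load_time"
TIME_FORMAT = "%Y-%m-%d %H:%M:%S"

def read_last_load_time(zk) -> str:
    zk.ensure_path(LAST_LOAD_PATH)
    data, _ = zk.get(LAST_LOAD_PATH)
    return data.decode() or "1970-01-01 00:00:00"  # default before the first load

def record_load_time(zk) -> None:
    zk.set(LAST_LOAD_PATH, datetime.now().strftime(TIME_FORMAT).encode())
```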
In the data processing method, a ZooKeeper cluster is arranged for loading data from the source database, which provides strong disaster tolerance; at the same time, the combination of loading configuration information with distributed master node election through ZooKeeper can cover most data synchronization scenarios, with high reusability.
In another embodiment, the cache server may be a server cluster formed by a plurality of servers, or a plurality of server clusters. In this embodiment, the application server provides the read service as the external access interface, rather than allowing users to access the cache server directly. The application server configures the information of the plurality of cache server clusters through the configuration file, and designates a main cluster according to the configuration information on ZooKeeper.
Specifically, as shown in fig. 7, invoking a service interface of the cache server according to the key name, and reading a value corresponding to the key name includes:
S702, calling a service interface of the main cache server cluster according to the key name.
S704, when calling the main cache server cluster is abnormal, calling an interface of the backup cache server cluster and reading the value corresponding to the key name.
Taking Redis as the cache server as an example, under normal conditions the main Redis cluster is called; when a problem occurs in calling the main Redis cluster, other Redis clusters can be called according to the configuration. This ensures high fault tolerance of the read service as far as possible: as shown in fig. 8, once a problem occurs in one Redis cluster, the read service can switch the Redis cluster it accesses, avoiding data unavailability caused by downtime of a Redis cluster or network failure.
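The failover just described could look like the sketch below, which tries the main Redis cluster first and falls back to the backup cluster when the call raises an error; the host names are placeholder assumptions.
```python
# Minimal sketch of switching from the main Redis cluster to the backup
# cluster when a read fails. Host names are placeholders.
import redis

main_cluster = redis.Redis(host="redis-main.example.internal", port=6379,
                           decode_responses=True)
backup_cluster = redis.Redis(host="redis-backup.example.internal", port=6379,
                             decode_responses=True)

def read_with_failover(key_name: str):
    try:
        return main_cluster.get(key_name)        # normal case: main cluster
    except redis.RedisError:
        # Calling the main cluster is abnormal; switch to the backup cluster.
        return backup_cluster.get(key_name)
```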
According to the data processing method, a cache server cluster is used to provide the cache service, which improves the disaster tolerance of the caching system.
An architecture diagram of the data processing system of one embodiment is shown in FIG. 9. It comprises the following layers:
The data source layer, i.e., the source database, stores the original data of the service.
The loading service layer. The loading service loads data from the data source locally and then writes the data into the cache server according to the configuration, executed incrementally on a time-shared or scheduled basis. Multiple instances coordinate the execution of the loading task in a distributed manner through ZooKeeper.
The Redis storage layer, i.e., the cache server, is the storage layer that provides the cache storage.
The read service layer, which externally provides an RPC interface for accessing the data stored in Redis.
The loading service layer and the read service layer can be deployed and executed on the application server.
As shown in fig. 10, the data processing method of one embodiment includes three stages: loading the full data and writing it to the cache, loading the incremental data and writing it to the cache, and reading data.
Loading and writing the full amount of data to the cache includes loading the full amount of data from the source database to the application server, and writing the loaded full amount of data to the cache server.
The full data refers to all the data in the source database; that is, the cache server holds all the data of the source database. In this embodiment, once the data loading configuration information has been configured, the full data loading instruction is issued. The application server loads the full data from the source database according to the full data loading instruction and the data loading configuration information. The data loading configuration information refers to information configured in advance according to the source database information and used for loading data; it includes the address and password of the source database, the database from which data is loaded, the SQL statement for loading data, and the key (Key, which may be the data table name) of each data source.
It will be appreciated that a configuration page may be provided for the operation and maintenance personnel to configure the loading configuration information. The address and password of the source database, the database name, the data table and the SQL statement for loading data are entered on this page. It can be understood that each reading task has a corresponding piece of loading configuration information; that is, each reading task is configured with a corresponding database name, data table and SQL statement for loading data. Specifically, when the full data loading instruction is obtained, the application server connects to the source database according to the data loading configuration information and executes the SQL statement to obtain the full data of the source database.
And writing the loaded data into the cache server, specifically, writing the loaded data into the cache server according to the cache configuration information and the key name mapping rule.
The cache configuration information is information configured in advance according to the information of the cache server and the data structure, and is used for writing data. The cache configuration information includes mapping rules of key names corresponding to each data table, addresses and password information of the cache server, and the like. The key name mapping rule refers to a preset correspondence relationship between key names and data information. The data information includes a data table name of the source database and fields of the data table.
It will be appreciated that a configuration page may be provided for the operation and maintenance personnel to configure the cache configuration information. The address, password and key name mapping rule of the cache server are entered on this page.
After the application server loads the data from the source database locally, it connects to the cache server according to the cache configuration information and writes the loaded data into the cache server. Specifically, the data table name and field name are extracted from the loaded data according to the key name mapping rule to obtain the key name; the values corresponding to the data table name and field name are extracted from the loaded data; and the interface of the cache server is called according to the cache configuration information to write the key names and values into the cache server as key-value pairs. In one embodiment, the data information includes the data table name of the source database and a field of the data table. Once the cache configuration information is configured, the name of the data table requested by the user can be mapped to the prefix of the cache server key name, and a field name in the request is then appended to obtain the complete key name (Key) of the cache server.
The loading and writing process for the incremental data is similar to that for the full data. The incremental data refers to new data generated by the source database after the full data has been loaded to the application server. Specifically, the incremental data is determined according to the update time in the data log of the source database. The data loading instruction also includes a loading frequency, which is specifically the loading frequency of the incremental data. The frequency control may be configured periodically or in a Crontab (periodically executed) manner. The loading service controls the frequency of loading data from the data source according to the configuration information.
When the loading condition corresponding to the loading frequency is reached, an incremental data loading instruction is generated; the application server connects to the source database according to the loading configuration information, executes the SQL statement, and loads the incremental data from the source database to the application server locally.
Data reading refers to performing the read service in response to a data request sent by the terminal. Specifically, data reading includes: acquiring a data reading request carrying data information; acquiring, based on the preconfigured key name mapping rule, the key name corresponding to the data requested to be read according to the data information; calling the service interface of the cache server according to the key name and reading the value corresponding to the key name, where the cache server caches the full data of the source database; and returning the read value to the requesting terminal. In this embodiment, when the data reading request is obtained, the service interface of the cache server is called to read data from the cache server. Because the cache server caches the full data of the source database, the probability that the data reading request hits the cache is greatly increased and the reliability of data reading is improved, so that the data reading request does not penetrate the cache to reach the source database, and pressure on the source database is avoided.
A data processing apparatus, as shown in fig. 11, the apparatus comprising:
A read request obtaining module 1101, configured to obtain a data read request carrying data information.
Specifically, a data reading request is sent to the application server based on the user's operation at the terminal, and the data reading request carries data information. The data information is information related to the data requested to be read and depends on the user's operation at the terminal. For example, when a user opens the page of an online course, the terminal requests from the application server the information about the course to be displayed (such as its price or number of purchasers); in this case the data information includes an identifier of the course, a data table and a query field, where the price or number of purchasers is a query field in the data table.
A key name obtaining module 1102, configured to obtain, according to the data information, a key name corresponding to the data requested to be read, based on a pre-configured key name mapping rule.
In this embodiment, the key name of the data requested to be read is determined according to the preconfigured key name mapping rule and the data information. The key name mapping rule refers to a preset correspondence between key names and data information; once the data information is obtained, the key name can be determined based on the rule and the data information. In one embodiment, the key name mapping rule is that the key name is the data table name plus the query field. When a data reading request is obtained whose carried data information indicates that the data table name is "psychological" and the query field is "price", the key name is determined to be "psychological price" according to the key name mapping rule and the data information. The reading module 1103 is configured to call the service interface of the cache server according to the key name and read the value corresponding to the key name, where the cache server caches the full data of the source database.
The full data refers to all the data in the source database; that is, the cache server holds all the data of the source database. In this embodiment, when the data reading request is obtained, the service interface of the cache server is called to read data from the cache server. Because the cache server caches the full data of the source database, the probability that the data reading request hits the cache is greatly increased and the reliability of data reading is improved, so that the data reading request does not penetrate the cache to reach the source database, and pressure on the source database is avoided.
The source database is a database that provides original or specific data for the application server; it stores the original data of the services related to the application server and provides the data writing service for the application. Taking an online teaching application as an example, the source database stores the information related to each course, such as the course name, price and course video. In a specific embodiment, the source database may be MySQL (a relational database management system).
The cache server may be one cluster or a plurality of clusters. In this embodiment, the service interface of the cache server is called according to the key name and the value corresponding to the key name is read, so the data in the cache server is structured as key-value pairs; that is, the cache server in this embodiment stores data as key-value pairs. When data is read, the key name is determined according to the data information and the preconfigured key name mapping rule; likewise, when the data of the source database is written into the cache server, the key name corresponding to the data is determined according to the preconfigured key name mapping rule. Because the cache server of this embodiment adopts a key-value data structure and the key names are managed according to the preconfigured key name mapping rule, compared with the conventional scheme of writing data caching code, the key names can be managed through the key name mapping rule, which reduces the operation and maintenance difficulty of the cache server.
A sending module 1104, configured to return the read value to the requesting terminal.
The application server provides the data reading service and serves as middleware between the terminal and the cache server. In one embodiment, the read service of the application server uses Protocol Buffers (a data interchange format from Google) as the external application-layer protocol. When a data reading request is obtained, the data table name and the query field are transmitted as fields in a Protocol Buffer structure. When the interface of the cache server is called to read the value corresponding to the key name, the reflection feature of Protocol Buffers is used to fill the value into the corresponding Protocol Buffer return structure, which is returned to the requesting terminal. In this embodiment, by using Protocol Buffer reflection and rich configuration, new data can be read through the interface of the read service with only simple configuration, which generalizes the interface, reduces the development cost of the data system, and provides high extensibility and usability.
After the data processing apparatus obtains a data reading request carrying data information, the key name corresponding to the data requested to be read is obtained according to the data information based on the preconfigured key name mapping rule, and the service interface of the cache server is then called according to the key name to read the value corresponding to the key name. Because the cache server caches the full data of the source database, the probability that a data reading request hits the cache is greatly increased and the reliability of data reading is improved, so that data reading requests do not penetrate the cache to reach the source database, and pressure on the source database is avoided.
In another embodiment, as shown in fig. 12, the data processing apparatus further includes:
The loading module 1105 is configured to load data from the source database according to the data loading instruction and the data loading configuration information.
Specifically, the loading module is configured to load the full data from the source database according to the full data loading instruction and the data loading configuration information. It is also configured to load the incremental data from the source database according to the incremental data loading instruction and the data loading configuration information when the set incremental data loading condition is reached.
The data loading configuration information refers to information configured in advance according to the source database information and used for loading data. It includes the address and password of the source database, the database from which data is loaded, the SQL statement for loading data, and the key (Key, which may be the data table name) of each data source.
It will be appreciated that a configuration page may be provided for the operation and maintenance personnel to configure the loading configuration information. The address and password of the source database, the database name, the data table and the SQL statement for loading data are entered on this page. It can be understood that each reading task has a corresponding piece of loading configuration information; that is, each reading task is configured with a corresponding database name, data table and SQL statement for loading data.
In the caching scheme of this embodiment, a corresponding loading configuration file is configured for each task, and no code needs to be written for the caching service. This approach avoids a series of steps such as writing code and testing programs in the data caching process, and greatly reduces the workload.
When the data loading instruction is obtained, the application server is connected to the source database according to the data loading configuration information, and executes the SQL statement to obtain the data specified by the loading configuration information.
The cache writing module is configured to write the loaded data into the cache server according to the cache configuration information.
The cache configuration information refers to information configured in advance according to the information of the cache server and the data structure, and is used for writing data. The cache configuration information includes the key name mapping rule corresponding to each data table, the address and password information of the cache server, and the like. The key name mapping rule refers to a preset correspondence between key names and data information. The data information includes the data table name of the source database and the fields of the data table.
It will be appreciated that a configuration page may be provided for the operation and maintenance personnel to configure the cache configuration information. The address, password and key name mapping rule of the cache server are entered on this page. In this embodiment, the cache configuration information is preconfigured and the key names are managed according to the preconfigured key name mapping rule; compared with the conventional scheme of writing data caching code, managing the key names through the key name mapping rule reduces the operation and maintenance difficulty of the cache server.
After the application server loads the data from the source database locally, it connects to the cache server according to the cache configuration information and writes the loaded data into the cache server.
The data processing apparatus is a highly configurable implementation; by configuring the loading configuration information and the cache configuration information in advance, no code writing is needed, which greatly reduces the operation and maintenance difficulty.
In another embodiment, the loading module 1105 is configured to call a ZooKeeper cluster to determine the master node in the application server cluster according to the data loading instruction; the master node loads data from the source database according to the data loading configuration information.
In the data processing apparatus, a ZooKeeper cluster is arranged for loading data from the source database, which provides strong disaster tolerance; at the same time, the combination of loading configuration information with distributed master node election through ZooKeeper can cover most data synchronization scenarios, with high reusability.
In another embodiment, a cache write module includes:
The key name extraction module is configured to extract the data table name and the field name from the loaded data according to the key name mapping rule to obtain the key name.
The key name mapping rule refers to a preset correspondence between key names and data information. The data information includes the data table name of the source database and the fields of the data table. The data table name and field name are extracted from the loaded data to obtain the key name.
The value extraction module is configured to extract the values corresponding to the data table name and the field name from the loaded data.
The writing module is configured to call the interface of the cache server according to the cache configuration information and write the key names and values into the cache server as key-value pairs.
The cache server of this embodiment adopts a key-value data structure, and the key names are managed according to the preconfigured key name mapping rule, which reduces the operation and maintenance difficulty of the cache server.
In another embodiment, the reading module is configured to call the service interface of the primary cache server cluster and, when calling the primary cache server cluster is abnormal, to call an interface of the backup cache server cluster to read the value corresponding to the key name.
Taking Redis as the cache server as an example, under normal conditions the main Redis cluster is called; when a problem occurs in calling the main Redis cluster, other Redis clusters can be called according to the configuration. This ensures high fault tolerance of the read service as far as possible: as shown in fig. 8, once a problem occurs in one Redis cluster, the read service can switch the Redis cluster it accesses, avoiding data unavailability caused by downtime of a Redis cluster or network failure.
According to the data processing apparatus, a cache server cluster is used to provide the cache service, which improves the disaster tolerance of the caching system.
FIG. 13 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be the application server in fig. 1. As shown in fig. 13, the computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the data processing method. The internal memory may also have stored therein a computer program that, when executed by the processor, causes the processor to perform a data processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 13 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the data processing apparatus provided herein may be implemented in the form of a computer program that is executable on a computer device such as that shown in fig. 13. The memory of the computer device may store the program modules constituting the data processing apparatus, such as the read request obtaining module, the key name obtaining module, the reading module, and the sending module shown in fig. 11. The computer program constituted by these program modules causes the processor to execute the steps of the data processing method of the embodiments of the present application described in this specification.
For example, the computer device shown in fig. 13 may execute the step of acquiring the data reading request carrying data information through the read request obtaining module in the data processing apparatus shown in fig. 11. The computer device may execute, through the key name obtaining module, the step of acquiring the key name corresponding to the data requested to be read according to the data information based on the preconfigured key name mapping rule. The computer device may execute, through the reading module, the step of calling the service interface of the cache server according to the key name and reading the value corresponding to the key name. The computer device may execute, through the sending module, the step of returning the read value to the requesting terminal.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
Acquiring a data reading request carrying data information;
Acquiring a key name corresponding to the data requested to be read according to the data information based on a preset key name mapping rule;
Calling a service interface of a cache server according to the key name and reading a value corresponding to the key name, wherein the cache server caches the full data of the source database;
And returning the read value to the requesting terminal.
In another embodiment, the computer program, when executed by the processor, causes the processor to further perform the steps of:
Loading data from a source database according to the data loading instruction and the data loading configuration information;
And writing the loaded data into the cache server according to the cache configuration information.
in another embodiment, the data load instruction includes a full data load instruction, and the loading data from the source database according to the data load instruction and the data load configuration information includes: and loading the full data from the source database according to the full data loading instruction and the data loading configuration information.
In another embodiment, the data load instruction comprises an incremental data load instruction; loading data from a source database according to the data loading instruction and the data loading configuration information, comprising: and when the set incremental data loading condition is reached, loading the incremental data from the source database according to the incremental data loading instruction and the data loading configuration information.
In another embodiment, loading data from a source database according to a data load instruction and data load configuration information includes:
Calling a ZooKeeper cluster to determine a main node in an application server cluster according to the data loading instruction;
And the main node loads data from the source database according to the data loading configuration information.
In another embodiment, the cache configuration information includes a key name mapping rule, and the writing of the loaded data into the cache server according to the cache configuration information includes:
extracting a data table name and a field name from the loaded data according to the key name mapping rule to obtain a key name;
extracting values corresponding to the data table name and the field name from the loaded data;
and calling an interface of the cache server according to the cache configuration information, and writing the key name and the value into the cache server in the form of key-value pairs.
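A minimal sketch of this key construction and write step follows. The colon delimiter, the assumed "id" primary key column, and the use of redis-py's MSET are illustrative choices rather than requirements of the method.

```python
import redis

def row_to_key_values(table_name: str, row: dict) -> dict:
    """Build key names from the data table name and field names, paired with the row's values."""
    key_values = {}
    for field_name, value in row.items():
        # Assumed key name mapping rule: "<table name>:<field name>:<primary key>".
        key_name = f"{table_name}:{field_name}:{row['id']}"
        key_values[key_name] = str(value)
    return key_values

cache = redis.Redis(host="cache.example.internal", port=6379, decode_responses=True)

# One row loaded from an assumed "user" table of the source database.
loaded_row = {"id": 1001, "nickname": "alice", "level": 7}
cache.mset(row_to_key_values("user", loaded_row))   # write the key-value pairs via the cache interface
```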
In another embodiment, calling a service interface of the cache server according to the key name and reading a value corresponding to the key name includes:
calling a service interface of the primary cache server cluster according to the key name;
and when calling the primary cache server cluster is abnormal, calling an interface of the backup cache server cluster and reading a value corresponding to the key name.
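The primary/backup fallback can be as small as the sketch below; treating any redis-py exception as "calling the primary cache server cluster is abnormal", and the two host names, are simplifications made for this example.

```python
import redis

primary = redis.Redis(host="cache-primary.example.internal", port=6379, decode_responses=True)
backup = redis.Redis(host="cache-backup.example.internal", port=6379, decode_responses=True)

def read_value(key_name: str):
    try:
        # Call the service interface of the primary cache server cluster first.
        return primary.get(key_name)
    except redis.RedisError:
        # When calling the primary cluster is abnormal, read from the backup cluster instead.
        return backup.get(key_name)
```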
A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring a data reading request carrying data information;
acquiring a key name corresponding to the data requested to be read according to the data information based on a preset key name mapping rule;
calling a service interface of a cache server according to the key name, and reading a value corresponding to the key name, wherein the cache server caches the full data of the source database;
and returning the read value to the requesting terminal.
In another embodiment, the computer program, when executed by the processor, causes the processor to further perform the steps of:
loading data from a source database according to the data loading instruction and the data loading configuration information;
and writing the loaded data into the cache server according to the cache configuration information.
In another embodiment, the data loading instruction includes a full data loading instruction, and loading data from the source database according to the data loading instruction and the data loading configuration information includes: loading the full data from the source database according to the full data loading instruction and the data loading configuration information.
In another embodiment, the data loading instruction includes an incremental data loading instruction, and loading data from the source database according to the data loading instruction and the data loading configuration information includes: when the set incremental data loading condition is reached, loading the incremental data from the source database according to the incremental data loading instruction and the data loading configuration information.
In another embodiment, loading data from the source database according to the data loading instruction and the data loading configuration information includes:
calling a zookeeper cluster to determine a master node in the application server cluster according to the data loading instruction;
and the master node loads data from the source database according to the data loading configuration information.
In another embodiment, the cache configuration information includes a key name mapping rule, and the writing of the loaded data into the cache server according to the cache configuration information includes:
extracting a data table name and a field name from the loaded data according to the key name mapping rule to obtain a key name;
extracting values corresponding to the data table name and the field name from the loaded data;
and calling an interface of the cache server according to the cache configuration information, and writing the key name and the value into the cache server in the form of key-value pairs.
In another embodiment, calling a service interface of the cache server according to the key name and reading a value corresponding to the key name includes:
calling a service interface of the primary cache server cluster according to the key name;
and when calling the primary cache server cluster is abnormal, calling an interface of the backup cache server cluster and reading a value corresponding to the key name.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program. The program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art may make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (14)

1. A method of data processing, comprising:
acquiring a data reading request carrying data information;
acquiring a key name corresponding to the data requested to be read according to the data information based on a preset key name mapping rule;
calling a service interface of a cache server according to the key name, reading a value corresponding to the key name, wherein the cache server caches the full data of the source database;
and returning the read value to the requesting terminal.
2. The method of claim 1, further comprising:
loading data from a source database according to the data loading instruction and the data loading configuration information;
and writing the loaded data into the cache server according to the cache configuration information.
3. The method of claim 2, wherein the data loading instruction comprises a full data loading instruction, and the loading data from the source database according to the data loading instruction and the data loading configuration information comprises: loading the full data from the source database according to the full data loading instruction and the data loading configuration information.
4. The method of claim 2 or 3, wherein the data loading instruction comprises an incremental data loading instruction; and the loading data from the source database according to the data loading instruction and the data loading configuration information comprises: when the set incremental data loading condition is reached, loading the incremental data from the source database according to the incremental data loading instruction and the data loading configuration information.
5. The method of claim 2, wherein the loading data from the source database according to the data loading instruction and the data loading configuration information comprises:
calling a zookeeper cluster to determine a master node in an application server cluster according to the data loading instruction;
and the master node loads data from the source database according to the data loading configuration information.
6. The method according to claim 2, wherein the cache configuration information includes the key name mapping rule, and the writing of the loaded data into the cache server according to the cache configuration information includes:
extracting a data table name and a field name from the loaded data according to the key name mapping rule to obtain a key name;
extracting values corresponding to the data table name and the field name from the loaded data;
and calling an interface of the cache server according to the cache configuration information, and writing the key name and the value into the cache server in the form of key-value pairs.
7. The method according to claim 1, wherein the calling a service interface of a cache server according to the key name and reading a value corresponding to the key name comprises:
calling a service interface of a primary cache server cluster according to the key name;
and when calling the primary cache server cluster is abnormal, calling an interface of a backup cache server cluster, and reading a value corresponding to the key name.
8. A data processing apparatus, characterized in that the apparatus comprises:
a read request obtaining module, configured to obtain a data read request carrying data information;
a key name acquisition module, configured to acquire a key name corresponding to the data requested to be read according to the data information based on a preset key name mapping rule;
a reading module, configured to call a service interface of a cache server according to the key name and read a value corresponding to the key name, wherein the cache server caches the full data of the source database;
and a sending module, configured to return the read value to the requesting terminal.
9. The apparatus of claim 8, further comprising:
a loading module, configured to load data from a source database according to the data loading instruction and the data loading configuration information;
and a cache writing module, configured to write the loaded data into the cache server according to the cache configuration information.
10. The apparatus of claim 9, wherein the loading module is configured to invoke the zookeeper cluster to determine a master node in the application server cluster according to the data loading instruction; and the master node loads data from the source database according to the data loading configuration information.
11. The apparatus of claim 9, wherein the cache write module comprises:
a key name extraction module, configured to extract a data table name and a field name from the loaded data according to the key name mapping rule to obtain a key name;
a value extraction module, configured to extract values corresponding to the data table name and the field name from the loaded data;
and a writing module, configured to call an interface of the cache server according to the cache configuration information and write the key name and the value into the cache server in the form of key-value pairs.
12. The apparatus according to claim 8, wherein the reading module is configured to invoke a service interface of a primary cache server cluster according to the key name, and when calling the primary cache server cluster is abnormal, invoke an interface of a backup cache server cluster to read a value corresponding to the key name.
13. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 7.
14. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 7.
CN201810277788.9A 2018-03-30 2018-03-30 Data processing method, data processing device, computer equipment and storage medium Pending CN110555041A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810277788.9A CN110555041A (en) 2018-03-30 2018-03-30 Data processing method, data processing device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810277788.9A CN110555041A (en) 2018-03-30 2018-03-30 Data processing method, data processing device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110555041A true CN110555041A (en) 2019-12-10

Family

ID=68733789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810277788.9A Pending CN110555041A (en) 2018-03-30 2018-03-30 Data processing method, data processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110555041A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7809882B1 (en) * 2006-01-24 2010-10-05 Verizon Services Corp. Session independent backend data cache system
CN103853714A (en) * 2012-11-28 2014-06-11 中国移动通信集团河南有限公司 Data processing method and device
CN103473272A (en) * 2013-08-20 2013-12-25 小米科技有限责任公司 Data processing method, device and system
CN103595776A (en) * 2013-11-05 2014-02-19 福建网龙计算机网络信息技术有限公司 Distributed type caching method and system
CN104123238A (en) * 2014-06-30 2014-10-29 海视云(北京)科技有限公司 Data storage method and device
CN107506396A (en) * 2017-07-31 2017-12-22 努比亚技术有限公司 A kind of data buffer storage initial method, mobile terminal and computer-readable recording medium

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339139A (en) * 2020-02-21 2020-06-26 广州市百果园信息技术有限公司 Data processing method, device, equipment and storage medium
CN111400350A (en) * 2020-03-13 2020-07-10 上海携程商务有限公司 Configuration data reading method, system, electronic device and storage medium
CN111400350B (en) * 2020-03-13 2023-05-02 上海携程商务有限公司 Configuration data reading method, system, electronic device and storage medium
CN111666265B (en) * 2020-06-04 2022-06-24 南京领行科技股份有限公司 Data management method, device, server and storage medium
CN111666265A (en) * 2020-06-04 2020-09-15 南京领行科技股份有限公司 Data management method, device, server and storage medium
CN111984729A (en) * 2020-08-14 2020-11-24 北京人大金仓信息技术股份有限公司 Heterogeneous database data synchronization method, device, medium and electronic equipment
CN112000444A (en) * 2020-10-27 2020-11-27 财付通支付科技有限公司 Database transaction processing method and device, storage medium and electronic equipment
CN113282581A (en) * 2021-05-17 2021-08-20 广西南宁天诚智远知识产权服务有限公司 Database data calling method and device
CN113448962A (en) * 2021-06-02 2021-09-28 中科驭数(北京)科技有限公司 Database data management method and device
CN113448962B (en) * 2021-06-02 2022-10-28 中科驭数(北京)科技有限公司 Database data management method and device
CN113849373A (en) * 2021-09-27 2021-12-28 中国电信股份有限公司 Server supervision method and device and storage medium
CN114785878A (en) * 2022-04-24 2022-07-22 北京印象笔记科技有限公司 Information extraction method and device, electronic equipment and computer readable storage medium
CN116319068A (en) * 2023-05-11 2023-06-23 北京久佳信通科技有限公司 Method and system for improving penetrating data processing efficiency in strong isolation environment
CN116319068B (en) * 2023-05-11 2023-08-08 北京久佳信通科技有限公司 Method and system for improving penetrating data processing efficiency in strong isolation environment

Similar Documents

Publication Publication Date Title
CN110555041A (en) Data processing method, data processing device, computer equipment and storage medium
CN110287709B (en) User operation authority control method, device, equipment and medium
CN112637346B (en) Proxy method, proxy device, proxy server and storage medium
CN108197200B (en) Log tracking method and device, computer equipment and storage medium
CN108829459B (en) Nginx server-based configuration method and device, computer equipment and storage medium
US20190266134A1 (en) Data migration method, apparatus, and storage medium
CN109032824A (en) Database method of calibration, device, computer equipment and storage medium
CN108959385B (en) Database deployment method, device, computer equipment and storage medium
CN112910945A (en) Request link tracking method and service request processing method
CN111309785B (en) Database access method and device based on Spring framework, computer equipment and medium
CN110489429B (en) Data acquisition method and device, computer readable storage medium and computer equipment
CN109586948A (en) Update method, apparatus, computer equipment and the storage medium of system configuration data
CN110602169B (en) Service calling method and device, computer equipment and storage medium
CN110196729B (en) Application program updating method, device and apparatus and storage medium
WO2019127890A1 (en) Vulnerability scanning method, device, computer apparatus, and storage medium
CN110213392B (en) Data distribution method and device, computer equipment and storage medium
CN111475376A (en) Method and device for processing test data, computer equipment and storage medium
CN112100152A (en) Service data processing method, system, server and readable storage medium
CN110866011B (en) Data table synchronization method and device, computer equipment and storage medium
CN116610332A (en) Cloud storage deployment method and device and readable storage medium
CN112783866B (en) Data reading method, device, computer equipment and storage medium
CN112637085B (en) Flow recording method and device, computer equipment and storage medium
CN112015818B (en) UUID (unified user identifier) generation method, device, equipment and medium for distributed graph database
CN112000648B (en) Data clearing method and device, computer equipment and storage medium
CN110928598B (en) System configuration method, device, computer equipment and storage medium

Legal Events

PB01 Publication
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40018671; Country of ref document: HK)
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20191210)