CN113204535B - Routing method and device, electronic equipment and computer readable storage medium - Google Patents

Routing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN113204535B
CN113204535B (Application CN202110555211.1A)
Authority
CN
China
Prior art keywords
target
file
processed
field
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110555211.1A
Other languages
Chinese (zh)
Other versions
CN113204535A (en)
Inventor
刘霞
蔡予萌
马彦
胡凯乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110555211.1A priority Critical patent/CN113204535B/en
Publication of CN113204535A publication Critical patent/CN113204535A/en
Application granted granted Critical
Publication of CN113204535B publication Critical patent/CN113204535B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/182Distributed file systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/14Routing performance; Theoretical aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure provides a routing method and apparatus, an electronic device, and a computer-readable storage medium, which may be used in the field of big data technology and also in the field of financial technology. The routing method of the present disclosure includes: sequentially reading a plurality of files to be processed, wherein the files to be processed comprise a first type routing field; judging whether the current file to be processed is a target file to be processed; when the current file to be processed is a target file to be processed, generating a target second type routing field corresponding to the target file to be processed according to the target first type routing field corresponding to that file, so as to route the target file to a target database in a second file system according to the target second type routing field and a routing algorithm; and discarding the current file to be processed when it is not a target file to be processed.

Description

Routing method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of big data technology, and more particularly, to a routing method and apparatus, an electronic device, a computer readable storage medium, and a computer program product.
Background
With the advent of the information age, distributed systems are widely used in all kinds of business-processing scenarios. A distributed platform system processes and stores data in the form of split databases, split tables, and shards, and when data is persisted to a database, the target database is generally determined according to a routing field.
In the process of implementing the disclosed concept, the inventors found at least the following problem in the related art: because different business systems may perform data routing according to different types of routing fields, during data interaction between different services a data file may not carry the routing field that the receiving system uses for data routing. How to perform data routing in this situation is a problem to be solved.
Disclosure of Invention
In view of this, the present disclosure provides a routing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
One aspect of the present disclosure provides a routing method, including:
sequentially reading a plurality of files to be processed, wherein the files to be processed comprise a first type routing field, and the first type routing field is a routing field type according to which the files to be processed are subjected to data routing in a first file system;
judging whether the current file to be processed is a target file to be processed;
under the condition that the current file to be processed is a target file to be processed, generating a target second type routing field corresponding to the target file to be processed according to a target first type routing field corresponding to the target file to be processed, so as to route the target file to a target database in a second file system according to the target second type routing field and a routing algorithm, wherein the second type routing field is a routing field type according to which the data of the file to be processed is routed in the second file system; and
discarding the current file to be processed under the condition that the current file to be processed is not the target file to be processed.
According to the embodiment of the disclosure, the plurality of files to be processed are files obtained by numbering each file in the original batch of files in sequence in the preprocessing server, each file to be processed comprises a sequence number, and the current file to be processed comprises a current sequence number.
The step of judging whether the current file to be processed is the target file to be processed comprises the following steps: and judging whether the current file to be processed is a target file to be processed or not according to the current sequence number and a preset judging rule.
According to the embodiment of the disclosure, the preset judgment rule adopts a modulo judgment rule;
judging whether the current file to be processed is the target file to be processed according to the current sequence number and the judgment rule comprises the following steps: taking the current sequence number modulo the number of nodes to obtain a current modulo value, wherein the number of nodes is the number of processing servers in the second file system, and each processing server is identified by a node sequence number; and determining the current file to be processed whose current modulo value matches the node sequence number as the target file to be processed.
According to an embodiment of the present disclosure, generating, according to a target first type routing field corresponding to a target to-be-processed file, a target second type routing field corresponding to the target to-be-processed file includes: and generating a target second type of routing field according to the target first type of routing field and a preset generation rule, wherein the preset generation rule is used for representing the corresponding relation between the first type of routing field and the second type of routing field.
According to an embodiment of the present disclosure, the correspondence between the first type routing field and the second type routing field is: second type routing field = first type routing field + auxiliary field, where the auxiliary field includes a time information field and a random number field.
Generating the target second type routing field according to the target first type routing field and the preset generation rule comprises: acquiring a target auxiliary field corresponding to the target first type routing field; and adding the target auxiliary field to the target first type routing field to generate the target second type routing field.
According to an embodiment of the present disclosure, acquiring the target auxiliary field corresponding to the target first type routing field comprises: acquiring target raw information data, wherein the target raw information data contains the target auxiliary field information data corresponding to the target first type routing field and is pre-generated and pre-stored in an information database of the second file system; and analyzing the target raw information data to obtain the target auxiliary field corresponding to the target first type routing field.
According to the embodiment of the disclosure, the routing algorithm adopts a hash algorithm;
according to the target second type routing field and the routing algorithm, routing the target pending file to a target database in the second file system includes: determining a fragment sequence number corresponding to the target to-be-processed file based on the target second type routing field and the total fragment number by utilizing a hash algorithm, wherein the fragment sequence number is used for identifying a target database; and storing the target file to be processed into a target database corresponding to the fragment sequence number.
According to an embodiment of the present disclosure, further comprising: after the fragment serial numbers corresponding to the target to-be-processed files are determined, the fragment serial numbers corresponding to the target to-be-processed files are stored in an information database of the second file system.
A routing device comprises a reading module, a judging module, a first executing module and a second executing module.
The reading module is used for sequentially reading a plurality of files to be processed, wherein the files to be processed comprise a first type routing field, and the first type routing field is the type of the routing field according to which the files to be processed are subjected to data routing in the first file system.
The judging module is used for judging whether the current file to be processed is a target file to be processed or not.
The first execution module is used for generating a target second type routing field corresponding to the target to-be-processed file according to the target first type routing field corresponding to the target to-be-processed file under the condition that the current to-be-processed file is the target to-be-processed file, so that the target to-be-processed file is routed to a target database in a second file system according to the target second type routing field and a routing algorithm, wherein the second type routing field is the type of the routing field according to which the to-be-processed file is routed in the second file system.
And the second execution module is used for discarding the current file to be processed under the condition that the current file to be processed is not the target file to be processed.
According to an embodiment of the disclosure, in the reading module, the plurality of files to be processed are files obtained by numbering each file in the original batch of files in sequence in the preprocessing server, each file to be processed includes a sequence number, and the current file to be processed includes a current sequence number.
The judging module is used for judging whether the current file to be processed is a target file to be processed according to the current sequence number and a preset judging rule.
According to the embodiment of the disclosure, the preset judging rule adopts a modulo judging rule. The judging module includes a modulo unit and a judging unit.
The modulo unit is used for taking the current sequence number modulo the number of nodes to obtain the current modulo value, wherein the number of nodes is the number of processing servers in the second file system, and each processing server is identified by a node sequence number.
The judging unit is used for determining the current file to be processed whose current modulo value matches the node sequence number as the target file to be processed.
According to an embodiment of the disclosure, the first execution module is configured to generate a target second type of routing field according to a target first type of routing field and a preset generation rule, where the preset generation rule is used to characterize a correspondence between the first type of routing field and the second type of routing field.
According to an embodiment of the present disclosure, the correspondence between the first type routing field and the second type routing field is: second type routing field = first type routing field + auxiliary field, where the auxiliary field includes a time information field and a random number field.
The first execution module comprises an acquisition unit and an addition unit.
The acquisition unit is used for acquiring the target auxiliary field corresponding to the target first type routing field. The adding unit is used for adding the target auxiliary field to the target first type routing field to generate the target second type routing field.
The acquisition unit comprises an acquisition subunit and an analysis subunit.
The acquisition subunit is configured to acquire target raw information data, where the target raw information data includes the target auxiliary field information data corresponding to the target first type routing field and is pre-generated and pre-stored in the information database of the second file system. The analysis subunit is used for analyzing the target raw information data to obtain the target auxiliary field corresponding to the target first type routing field.
According to an embodiment of the present disclosure, the routing algorithm adopts a hash algorithm, and the first execution module further includes: a computing unit and a storage unit.
The computing unit is used for determining the fragment sequence number corresponding to the target to-be-processed file based on the target second type routing field and the total fragment number by utilizing a hash algorithm, wherein the fragment sequence number is used for identifying the target database. And the storage unit is used for storing the target file to be processed into a target database corresponding to the fragment sequence number.
According to an embodiment of the disclosure, the device further includes a storage module, configured to store, after determining the fragment sequence number corresponding to the target to-be-processed file, the fragment sequence number corresponding to the target to-be-processed file into the information database of the second file system.
Another aspect of the present disclosure provides an electronic device, comprising: one or more processors, and memory; wherein the memory is for storing one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the routing method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed, are configured to implement a routing method as described above.
Another aspect of the present disclosure provides a computer program product comprising computer executable instructions which, when executed, are for implementing a routing method as described above.
According to the embodiments of the present disclosure, in a scenario where this file system exchanges data with an external file system, the target second type routing field corresponding to the target file to be processed is generated according to the target first type routing field corresponding to that file, so that the target file can be routed to the target database in the second file system according to the target second type routing field and the routing algorithm. This solves the problem that, in data interaction between different file systems, routing cannot be completed because a data file does not carry the routing field used by the receiving file system. Meanwhile, after receiving the files, each processing server of the present system first judges whether a received file belongs to the target files to be processed, and only then calculates the routing field required by the system and performs data routing according to that field, which avoids the situation in which a single server or a few servers must process the entire data file.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments thereof with reference to the accompanying drawings in which:
FIG. 1 schematically illustrates an exemplary system architecture to which the routing methods and apparatus of the present disclosure may be applied;
fig. 2 schematically illustrates a flow chart of a routing method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flowchart for determining whether a current pending file is a target pending file according to an embodiment of the present disclosure;
fig. 4 schematically illustrates a schematic diagram of a routing method according to an embodiment of the present disclosure;
fig. 5 schematically illustrates a block diagram of a routing device according to an embodiment of the present disclosure;
fig. 6 schematically illustrates a block diagram of an electronic device for implementing a routing method according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where an expression like "at least one of A, B and C" is used, it should generally be interpreted according to its meaning as commonly understood by those skilled in the art (e.g., "a system having at least one of A, B and C" shall include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together). Where an expression like "at least one of A, B or C" is used, it should likewise be interpreted according to the ordinary understanding of one skilled in the art (e.g., "a system having at least one of A, B or C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together).
Before describing embodiments of the present disclosure in detail, the following description is given to a system structure and an application scenario related to the method provided by the embodiments of the present disclosure.
Fig. 1 schematically illustrates an exemplary system architecture 100 to which the routing methods and apparatus of the present disclosure may be applied. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include a preprocessing server 101, a processing server cluster 102, and a database server 103.
Data transmission interaction is realized among the preprocessing server 101, the processing server cluster 102 and the database server 103 through a network. The network may include various connection types, such as wired and/or wireless communication links, and the like.
The preprocessing server 101 may be configured to preprocess a batch of files to be processed received from an external file system and send the preprocessed files to the processing server cluster 102. The preprocessing may consist of adding judgment identifiers to the batch of files to be processed, where the batch includes a plurality of files such as file 1, file 2, file 3, ..., file n; a different judgment identifier may be added to each file to be processed, so that after receiving the batch of files, the processing server cluster 102 can determine from the judgment identifier whether the current file to be processed is a target file to be processed.
In the scene of data interaction between the file system and the external file system, the file to be processed comprises a first type routing field, wherein the first type routing field is the type of the routing field according to which the file to be processed is subjected to data routing in the external file system. In the system, when the file to be processed is subjected to data routing, the type of the routing field is a second type of routing field different from the first type of routing field. The first type routing field and the second type routing field have a one-to-one correspondence. Thus, the second type of routing field may be generated from the first type of routing field and the correspondence described above.
The processing server cluster 102 includes a plurality of processing servers, each processing server sequentially reads a plurality of files to be processed after receiving a batch of files to be processed, and judges whether the current file to be processed is a target file to be processed according to a judgment identification of each file to be processed.
And under the condition that the current to-be-processed file is the target to-be-processed file, the processing server is further used for generating a second type routing field and a data fragment sequence number corresponding to the target to-be-processed file according to the first type routing field and the corresponding relation, performing data routing according to the second type routing field, and storing the to-be-processed file into the corresponding data fragment. And discarding the current file to be processed by the processing server under the condition that the current file to be processed is not the target file to be processed.
In the database server 103, information data for representing the corresponding relation between the first type routing field and the second type routing field is stored, and in the process of generating the second type routing field according to the first type routing field and the corresponding relation, the processing server can acquire the information data from the database server 103, so that further analysis and processing are facilitated. The database server 103 is further configured to store a result of data processing, and may store the corresponding second type routing field generated by the processing server and the data fragment sequence number corresponding to the target to-be-processed file calculated by the processing server in the database server 103, so as to facilitate data maintenance and management.
It should be appreciated that the number of preprocessing servers 101, the number of servers in processing server cluster 102, and the number of database servers 103 in FIG. 1 are merely illustrative. There may be any number of servers, as desired for implementation.
It should be noted that, the routing method and the device of the present disclosure may be used in the big data technical field, the financial technical field, and any field other than the big data technical field and the financial field, and the application field of the routing method and the device is not limited in this disclosure.
In the distributed platform system, data processing and storage are carried out in a form of database splitting, table splitting and slicing, and in the process of database falling, the data is generally determined to which database according to the routing field. Because the types of the routing fields according to which different service systems perform data routing may be different, in the process of performing data interaction between different services, there may be a case that a data file does not carry the routing fields used by the system in the data routing process, so that the routing fields required by the system need to be calculated.
For the problem of how to process such interactive files, one existing approach is for every shard of the receiving application to perform all of the processing; in that case each shard must judge and process the full data set, and processing efficiency is low because the whole amount of data has to be handled. Another approach is to pre-route the received file at a common layer, which is also inefficient because the number of executors at the common layer is limited and often only one executor can handle the file. It is therefore desirable to improve efficiency and make full use of the shards.
In the process of realizing the present disclosure, it was found that after receiving a file, the present system can first judge whether the received file belongs to the target files to be processed, and only then calculate the routing field required by the system and perform data routing according to that field. On this basis, the present system can improve the efficiency of data routing while still completing it.
Accordingly, based on the above concepts, the present disclosure provides a routing method.
Fig. 2 schematically illustrates a flow chart of a routing method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S201 to S204.
In operation S201, a plurality of files to be processed are sequentially read, wherein the files to be processed include a first type routing field, and the first type routing field is a type of routing field according to which the files to be processed are routed in the first file system.
In operation S202, it is determined whether the current pending file is a target pending file.
In operation S203, in the case that the current file to be processed is the target file to be processed, a target second type routing field corresponding to the target file to be processed is generated according to the target first type routing field corresponding to the target file to be processed, so that the target file to be processed is routed to the target database in the second file system according to the target second type routing field and the routing algorithm, wherein the second type routing field is the type of the routing field according to which the data of the file to be processed is routed in the second file system.
In operation S204, in the case where the current pending file is not the target pending file, the current pending file is discarded.
According to an embodiment of the disclosure, the first file system is an external file system, the second file system refers to the file system, and in a scenario where the file system and the external file system perform data interaction, the to-be-processed file includes a first type routing field, where the first type routing field is a type of routing field according to which the to-be-processed file is subjected to data routing in the external file system. In the system, when the file to be processed is subjected to data routing, the type of the routing field is a second type of routing field different from the first type of routing field. The first type routing field and the second type routing field have a one-to-one correspondence. Thus, the second type of routing field may be generated from the first type of routing field and the correspondence described above.
According to the embodiment of the disclosure, the second file system includes a plurality of processing servers, and each processing server sequentially reads a plurality of files to be processed after receiving a batch of files to be processed from an external file system, and judges in advance whether the current file to be processed is a target file to be processed. And carrying out subsequent processing under the condition that the current file to be processed is the target file to be processed, otherwise, discarding the current file to be processed.
According to the embodiments of the present disclosure, whether the current file to be processed is the target file to be processed is judged in advance, and this may be done according to a judgment identifier of each file to be processed. The judgment identifier may be added to each file in advance: the preprocessing may be performed in batch on the batch of files at the preprocessing server, or each processing server may add a judgment identifier to the current file after receiving it. The preprocessing may add a different judgment identifier to each file to be processed, or some identifiers may be the same; when judging the current file to be processed, the processing server may combine the judgment identifier with a self-defined algorithm so that the plurality of files to be processed are distributed evenly among the processing servers, thereby improving data processing efficiency.
And under the condition that the current to-be-processed file is the target to-be-processed file, the processing server is further used for generating a target second type routing field corresponding to the target to-be-processed file according to the target first type routing field corresponding to the target to-be-processed file and the corresponding relation, so that data routing is carried out according to the target second type routing field and a routing algorithm, and the target to-be-processed file is routed to a target database in the second file system, namely, the corresponding data fragments.
According to the embodiments of the present disclosure, in a scenario where this file system exchanges data with an external file system, the target second type routing field corresponding to the target file to be processed is generated according to the target first type routing field corresponding to that file, so that the target file can be routed to the target database in the second file system according to the target second type routing field and the routing algorithm. This solves the problem that, in data interaction between different file systems, routing cannot be completed because a data file does not carry the routing field used by the receiving file system. Meanwhile, after receiving the files, each processing server of the present system first judges whether a received file belongs to the target files to be processed, and only then calculates the routing field required by the system and performs data routing according to that field, which avoids the situation in which a single server or a few servers must process the entire data file.
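To make the flow of operations S201 to S204 concrete, the following is a minimal sketch of how one processing server might implement it. The class, the helper names (isTargetFile, generateSecondTypeRoutingField, routeToShard) and the PendingFile structure are hypothetical illustrations introduced for this sketch, not names taken from the disclosure.

```java
import java.util.List;

// A minimal, hypothetical skeleton of operations S201-S204 on one processing server.
public abstract class RoutingProcessor {

    // Stand-in for a file to be processed; the field names are assumptions.
    public static class PendingFile {
        public long sequenceNumber;           // added by the preprocessing server
        public String firstTypeRoutingField;  // routing field used in the first (external) file system
    }

    protected final int nodeCount;     // number of processing servers in the second file system
    protected final int nodeSequence;  // node sequence number identifying this processing server

    protected RoutingProcessor(int nodeCount, int nodeSequence) {
        this.nodeCount = nodeCount;
        this.nodeSequence = nodeSequence;
    }

    // S201: read the files to be processed in sequence and handle each one.
    public void process(List<PendingFile> files) {
        for (PendingFile file : files) {
            if (!isTargetFile(file)) {
                continue; // S204: not a target file for this node, discard it
            }
            // S203: derive the second type routing field and route to the target database.
            String secondTypeField = generateSecondTypeRoutingField(file.firstTypeRoutingField);
            routeToShard(file, secondTypeField);
        }
    }

    // S202: modulo-based judgment (detailed below); assumes node sequence numbers
    // are chosen so that they can be compared with the modulo values directly.
    protected boolean isTargetFile(PendingFile file) {
        return file.sequenceNumber % nodeCount == nodeSequence;
    }

    // S203, part 1: first type field + auxiliary field, sketched separately below.
    protected abstract String generateSecondTypeRoutingField(String firstTypeRoutingField);

    // S203, part 2: hash routing to a fragment (shard), sketched separately below.
    protected abstract void routeToShard(PendingFile file, String secondTypeRoutingField);
}
```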
Fig. 3 schematically illustrates a flowchart for determining whether a current pending file is a target pending file according to an embodiment of the present disclosure.
As shown in fig. 3, the method includes operations S321 to S323.
In operation S321, a plurality of files to be processed are obtained after each file in the original batch of files is respectively numbered according to the sequence in the preprocessing server, each file to be processed contains a sequence number, and the current file to be processed contains the current sequence number.
According to the embodiment of the disclosure, whether the current to-be-processed file is the target to-be-processed file is judged in advance, and whether the current to-be-processed file is the target to-be-processed file can be judged according to the judgment identification of each to-be-processed file. The judgment mark can be added by carrying out batch pretreatment on the batch files to be processed in the pretreatment server. According to an embodiment of the present disclosure, the judgment flag added in operation S321 is a sequential number, each file to be processed contains one sequential number, and each sequential number is different.
According to an embodiment of the present disclosure, determining whether a current to-be-processed file is a target to-be-processed file includes: and judging whether the current file to be processed is a target file to be processed or not according to the current sequence number and a preset judging rule.
According to an embodiment of the disclosure, the preset judgment rule may be a modulo judgment rule, and the judgment is made according to the current sequence number and this rule.
In the process of judging according to the current sequence number and the judgment rule, in operation S322, the current sequence number is taken modulo the number of nodes to obtain the current modulo value, wherein the number of nodes is the number of processing servers in the second file system, and each processing server is identified by a node sequence number.
In operation S323, it is determined whether the current modulo value matches the node sequence number. If the current modulo value matches the node sequence number, the current file to be processed is determined to be a target file to be processed; otherwise the file is discarded. For example, if the sequence number of the current file to be processed is 1 and the number of nodes is 16, the modulo operation gives a current modulo value of 1, and the current file is processed by server No. 1; similarly, if the sequence number of the current file to be processed is 2 and the number of nodes is 16, the modulo value is 2, and the current file is processed by server No. 2. In this way a plurality of files can be distributed evenly among multiple servers for parallel processing.
According to the embodiments of the present disclosure, by adding a sequence number to each file in advance and combining it with a modulo algorithm to judge whether the current file to be processed is a target file to be processed, the files can be distributed evenly among multiple servers for parallel processing. All servers are fully utilized at the same time, the data processed by each server is even and not duplicated, and data processing efficiency is improved. The method also has a wide application range: it is applicable whether the upstream application of the external system provides one file or many.
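As a concrete illustration of operations S321 to S323, the following sketch distributes numbered files across processing servers using the modulo rule. The numbers mirror the example above (16 nodes); how a modulo value of 0 maps to a node is left open by the disclosure and is an assumption here.

```java
public class ModuloJudgment {

    // Returns true if the file with the given sequence number should be handled
    // by the server with the given node sequence number.
    static boolean isTargetFile(long sequenceNumber, int nodeCount, int nodeSequence) {
        long moduloValue = sequenceNumber % nodeCount; // operation S322
        return moduloValue == nodeSequence;            // operation S323
    }

    public static void main(String[] args) {
        int nodeCount = 16; // number of processing servers in the second file system
        // Mirror of the example above: sequence number 1 -> server No. 1, 2 -> server No. 2, ...
        for (long sequenceNumber = 1; sequenceNumber <= 5; sequenceNumber++) {
            long server = sequenceNumber % nodeCount;
            System.out.printf("file %d -> processed by server No. %d%n", sequenceNumber, server);
        }
        // Each server keeps only the files whose modulo value matches its own node sequence
        // number and discards the rest, so the batch is split evenly with no duplication.
        System.out.println(isTargetFile(1, nodeCount, 1)); // true
        System.out.println(isTargetFile(2, nodeCount, 1)); // false -> discarded by server No. 1
    }
}
```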
According to an embodiment of the present disclosure, generating, according to a target first type routing field corresponding to a target to-be-processed file, a target second type routing field corresponding to the target to-be-processed file includes: and generating a target second type of routing field according to the target first type of routing field and a preset generation rule, wherein the preset generation rule is used for representing the corresponding relation between the first type of routing field and the second type of routing field.
According to an embodiment of the present disclosure, the correspondence between the first type routing field and the second type routing field is: second type routing field = first type routing field + auxiliary field, where the auxiliary field includes a time information field and a random number field.
Generating the target second type routing field according to the target first type routing field and the preset generation rule comprises:
acquiring a target auxiliary field corresponding to the target first type routing field; and adding the target auxiliary field to the target first type routing field to generate the target second type routing field.
According to an embodiment of the present disclosure, acquiring the target auxiliary field corresponding to the target first type routing field comprises: acquiring target raw information data, wherein the target raw information data contains the target auxiliary field information data corresponding to the target first type routing field and is pre-generated and pre-stored in an information database of the second file system; and analyzing the target raw information data to obtain the target auxiliary field corresponding to the target first type routing field.
According to the embodiment of the disclosure, the first type routing field adopts a client number (a protocol number generated by the system when a client transacts business), and the second type routing field adopts a medium number. The medium number is generated according to the correspondence between the first type routing field and the second type routing field; for example, the pre-stored original information data may be acquired from a database. The database stores original information data representing the correspondence between the first type routing field and the second type routing field, and this data can be obtained from the database for further analysis and processing.
According to embodiments of the present disclosure, the raw information data may be business log data stored by the system when the customer transacts business, and may include, for example, the business transaction time, business type, protocol number, protocol generation time, and a protocol random number used to identify each business transaction. The raw information data is then analyzed to obtain the protocol generation time and the protocol random number (namely the target auxiliary field), which are appended to the protocol number; the generated medium number = protocol number + protocol generation time + protocol random number.
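A minimal sketch of the preset generation rule described above (second type routing field = first type routing field + auxiliary field). The record shape, the field formats and the method names are assumptions for illustration; the disclosure only fixes the composition medium number = protocol number + protocol generation time + protocol random number.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.concurrent.ThreadLocalRandom;

public class MediumNumberGenerator {

    // Stand-in for one record of the pre-stored raw information data (business log data);
    // the field names are assumptions.
    public static class RawInfoRecord {
        String protocolNumber;         // first type routing field (client/protocol number)
        String protocolGenerationTime; // time information field, e.g. "20210520143055"
        String protocolRandomNumber;   // random number field identifying the business transaction
    }

    // Preset generation rule: second type field = first type field + auxiliary field,
    // where the auxiliary field is the time information field plus the random number field.
    static String generateMediumNumber(RawInfoRecord record) {
        return record.protocolNumber
                + record.protocolGenerationTime
                + record.protocolRandomNumber;
    }

    // Illustration of how such a record might be produced when the business is transacted.
    static RawInfoRecord newRecord(String protocolNumber) {
        RawInfoRecord r = new RawInfoRecord();
        r.protocolNumber = protocolNumber;
        r.protocolGenerationTime =
                LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyyMMddHHmmss"));
        r.protocolRandomNumber = String.format("%06d", ThreadLocalRandom.current().nextInt(1_000_000));
        return r;
    }

    public static void main(String[] args) {
        RawInfoRecord record = newRecord("C1234567890");
        // medium number = protocol number + protocol generation time + protocol random number
        System.out.println(generateMediumNumber(record));
    }
}
```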
According to an embodiment of the present disclosure, the routing algorithm employs a hash algorithm. According to the target second type routing field and the routing algorithm, routing the target pending file to a target database in the second file system includes: determining a fragment sequence number corresponding to the target to-be-processed file based on the target second type routing field and the total fragment number by utilizing a hash algorithm, wherein the fragment sequence number is used for identifying a target database; and storing the target file to be processed into a target database corresponding to the fragment sequence number.
According to the embodiment of the disclosure, when data is routed, a unique field is used as the routing field for database and table sharding. The fragment sequence number where the data is located is determined by applying a hash algorithm to the routing field together with the total number of fragments, and this fragment sequence number determines which database the data should land in, so that the target file to be processed can be stored in the target database corresponding to that fragment sequence number.
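A minimal sketch of the hash-based routing step, assuming a simple `hash(field) mod totalFragments` scheme. The disclosure only states that a hash algorithm combined with the total fragment count yields the fragment sequence number, so the specific hash function and the 0-based fragment numbering here are assumptions.

```java
public class ShardRouter {

    // Determine the fragment (shard) sequence number for a target second type routing field.
    // Math.floorMod keeps the result non-negative even when hashCode() is negative.
    static int fragmentSequenceNumber(String secondTypeRoutingField, int totalFragments) {
        return Math.floorMod(secondTypeRoutingField.hashCode(), totalFragments);
    }

    public static void main(String[] args) {
        int totalFragments = 64; // total fragment count, an assumed value
        String mediumNumber = "C123456789020210520143055042317"; // target second type routing field
        int fragment = fragmentSequenceNumber(mediumNumber, totalFragments);
        // The fragment sequence number identifies the target database the file is stored into.
        System.out.println("store target file in the database for fragment " + fragment);
    }
}
```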
Fig. 4 schematically illustrates a schematic diagram of a routing method according to an embodiment of the present disclosure. The routing method of the embodiment of the present disclosure is exemplarily described below with reference to fig. 4.
According to the embodiment of the disclosure, a plurality of files to be processed are obtained after sequentially adding sequence numbers to each file in an original batch of files in a preprocessing server, wherein each file to be processed contains a sequence number.
The sequence numbers increase sequentially from 1 onwards (for example: 1, 2, 3, 4, ..., n).
Before the sequence numbers are added, the files received by the processing server are as follows:
field 1, field 2, field 3, ...
...
field 1, field 2, field 3, ...
After the sequence numbers are added, the files received by the processing server are as follows:
Sequence number 1, field 1, field 2, field 3, ...
Sequence number 2, field 1, field 2, field 3, ...
Sequence number 3, field 1, field 2, field 3, ...
Sequence number 4, field 1, field 2, field 3, ...
...
Sequence number n, field 1, field 2, field 3, ...
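A minimal sketch of the numbering step performed by the preprocessing server, assuming each file is represented by one comma-separated record line as in the listing above; the record format and method names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

public class PreprocessingNumbering {

    // Prefix each record of the original batch with an increasing sequence number starting from 1.
    static List<String> addSequenceNumbers(List<String> originalBatch) {
        List<String> numbered = new ArrayList<>(originalBatch.size());
        for (int i = 0; i < originalBatch.size(); i++) {
            numbered.add("Sequence number " + (i + 1) + ", " + originalBatch.get(i));
        }
        return numbered;
    }

    public static void main(String[] args) {
        List<String> batch = List.of(
                "field 1, field 2, field 3",
                "field 1, field 2, field 3");
        // Output mirrors the listing above:
        // Sequence number 1, field 1, field 2, field 3
        // Sequence number 2, field 1, field 2, field 3
        addSequenceNumbers(batch).forEach(System.out::println);
    }
}
```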
The fields of each file include a first type routing field; the first type routing field uses, for example, a client number (a protocol number generated by the system when a client handles a service), and the second type routing field uses, for example, a medium number.
Then, the files to be processed are sent to each processing server, with a plurality of database servers that perform the sharded data storage acting as the processing servers. After receiving the files to be processed, each database server judges each file in turn and checks whether the current modulo value matches its node sequence number. If the current modulo value matches the node sequence number, the current file to be processed is determined to be a target file to be processed; otherwise the file is discarded, the data is skipped, and the next file is read and processed. For example, if the sequence number of the current file to be processed is n and the number of nodes is m, the modulo operation gives a current modulo value of n mod m, and the file is processed by the server whose node sequence number matches that value; in this way a plurality of files are distributed evenly among multiple servers for parallel processing.
When the current file to be processed is the target file to be processed, each database server generates the second type routing field, i.e. the medium number, according to the correspondence between the first type routing field and the second type routing field. Specifically, the pre-stored original information data, namely the service log data stored by the system when a client handles a service, is first obtained from the database; the protocol generation time and the protocol random number are obtained by analyzing the original information data, and then appended to the protocol number to generate the medium number. In addition, each database server also calculates the routing relationship: using the hash algorithm, it computes, from the target second type routing field and the total number of fragments, the fragment sequence number under which the target file to be processed will be persisted, which facilitates subsequent data processing, for example storing the target file to be processed in the target database corresponding to that fragment sequence number.
According to the embodiment of the disclosure, after the fragment sequence number corresponding to the target to-be-processed file is determined, the fragment sequence number corresponding to the target to-be-processed file is stored in the information database of the second file system.
According to embodiments of the present disclosure, the information database may employ the non-relational database HBase. The information database can be used to store original information data, such as business system log data, and can also store data processing results; for example, the second type routing fields generated by the processing servers and the data fragment sequence numbers calculated for the target files to be processed can be stored in this database for later data maintenance and management.
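Assuming HBase is used as the information database as mentioned above, the following sketch stores the computed fragment sequence number and the generated medium number under the protocol number as the row key, using the standard HBase client API. The table name "routing_info", the column family "r" and the column qualifiers are illustrative assumptions, not names from the disclosure.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class RoutingResultStore {

    // Persist one routing result into the information database (HBase).
    // Table name "routing_info" and column family "r" are assumptions for illustration.
    static void saveRoutingResult(Connection connection, String protocolNumber,
                                  String mediumNumber, int fragmentSequenceNumber) throws Exception {
        try (Table table = connection.getTable(TableName.valueOf("routing_info"))) {
            Put put = new Put(Bytes.toBytes(protocolNumber)); // row key: first type routing field
            put.addColumn(Bytes.toBytes("r"), Bytes.toBytes("medium_number"),
                    Bytes.toBytes(mediumNumber));
            put.addColumn(Bytes.toBytes("r"), Bytes.toBytes("fragment_seq"),
                    Bytes.toBytes(String.valueOf(fragmentSequenceNumber)));
            table.put(put);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
            saveRoutingResult(connection, "C1234567890",
                    "C123456789020210520143055042317", 17);
        }
    }
}
```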
Fig. 5 schematically illustrates a block diagram of a routing device 500 according to an embodiment of the present disclosure.
The routing device 500 may be used to implement the method shown with reference to fig. 2.
As shown in fig. 5, the routing device 500 includes a reading module 510, a judging module 520, a first executing module 530, and a second executing module 540.
The reading module 510 is configured to sequentially read a plurality of files to be processed, where the files to be processed include a first type routing field, and the first type routing field is a type of routing field according to which data of the files to be processed is routed in the first file system.
The determining module 520 is configured to determine whether the current file to be processed is a target file to be processed.
The first execution module 530 is configured to generate, when the current to-be-processed file is the target to-be-processed file, a target second type routing field corresponding to the target to-be-processed file according to the target first type routing field corresponding to the target to-be-processed file, so as to route the target to-be-processed file to a target database in the second file system according to the target second type routing field and a routing algorithm, where the second type routing field is a type of routing field according to which the to-be-processed file is routed in the second file system.
The second execution module 540 is configured to discard the current pending file if the current pending file is not the target pending file.
According to the embodiments of the present disclosure, in the case of data interaction between this file system and an external file system, the first execution module 530 generates the target second type routing field corresponding to the target file to be processed from the target first type routing field corresponding to that file, so that the target file can be routed to the target database in the second file system according to the target second type routing field and the routing algorithm. This solves the problem that, in data interaction between different file systems, routing cannot be completed because a data file does not carry the routing field used by the receiving file system. Meanwhile, through the judging module 520, each processing server of the present system first judges whether a received file belongs to the target files to be processed, and only then calculates the routing field required by the system and performs data routing according to that field.
According to an embodiment of the present disclosure, in the reading module 510, the plurality of files to be processed are files obtained by respectively numbering each file in the original batch of files in sequence in the preprocessing server, each file to be processed includes a sequence number, and the current file to be processed includes a current sequence number.
The above-mentioned determining module 520 is configured to determine whether the current file to be processed is a target file to be processed according to the current sequence number and a preset determining rule.
According to the embodiment of the disclosure, the preset judging rule adopts a modulo judging rule. The judging module 520 includes a modulo unit and a judging unit.
The modulo unit is used for taking the current sequence number modulo the number of nodes to obtain the current modulo value, wherein the number of nodes is the number of processing servers in the second file system, and each processing server is identified by a node sequence number.
The judging unit is used for determining the current file to be processed whose current modulo value matches the node sequence number as the target file to be processed.
According to an embodiment of the present disclosure, the first execution module 530 is configured to generate a target second type of routing field according to a target first type of routing field and a preset generation rule, where the preset generation rule is used to characterize a correspondence between the first type of routing field and the second type of routing field.
According to an embodiment of the present disclosure, the correspondence between the first type routing field and the second type routing field is: second type routing field = first type routing field + auxiliary field, where the auxiliary field includes a time information field and a random number field.
The first execution module 530 includes an acquisition unit and an addition unit.
The acquisition unit is used for acquiring the target auxiliary field corresponding to the target first type routing field. The adding unit is used for adding the target auxiliary field to the target first type routing field to generate the target second type routing field.
The acquisition unit comprises an acquisition subunit and an analysis subunit.
The acquisition subunit is configured to acquire target raw information data, where the target raw information data includes the target auxiliary field information data corresponding to the target first type routing field and is pre-generated and pre-stored in the information database of the second file system. The analysis subunit is used for analyzing the target raw information data to obtain the target auxiliary field corresponding to the target first type routing field.
According to an embodiment of the present disclosure, the routing algorithm uses a hash algorithm, and the first executing module 530 further includes: a computing unit and a storage unit.
The computing unit is used for determining the fragment sequence number corresponding to the target to-be-processed file based on the target second type routing field and the total fragment number by utilizing a hash algorithm, wherein the fragment sequence number is used for identifying the target database. And the storage unit is used for storing the target file to be processed into a target database corresponding to the fragment sequence number.
According to an embodiment of the disclosure, the device further includes a storage module, configured to store, after determining the fragment sequence number corresponding to the target to-be-processed file, the fragment sequence number corresponding to the target to-be-processed file into the information database of the second file system.
Any number of modules, sub-modules, units, sub-units, or at least some of the functionality of any number of the sub-units according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented as split into multiple modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or in any other reasonable manner of hardware or firmware that integrates or encapsulates the circuit, or in any one of or a suitable combination of three of software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules, which when executed, may perform the corresponding functions.
For example, any of the reading module 510, the judging module 520, the first executing module 530, and the second executing module 540 may be combined in one module/unit/sub-unit, or any of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least some of the functionality of one or more of these modules/units/sub-units may be combined with at least some of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to embodiments of the present disclosure, at least one of the reading module 510, the determining module 520, the first executing module 530, and the second executing module 540 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging the circuitry, or in any one of or a suitable combination of three of software, hardware, and firmware. Alternatively, at least one of the reading module 510, the judging module 520, the first executing module 530 and the second executing module 540 may be at least partially implemented as a computer program module, which may perform the corresponding functions when being executed.
Another aspect of the present disclosure provides an electronic device, comprising: one or more processors, and memory; wherein the memory is for storing one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the routing method as described above.
Fig. 6 schematically illustrates a block diagram of an electronic device for implementing a routing method according to an embodiment of the disclosure. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, an electronic device 600 according to an embodiment of the present disclosure includes a processor 601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. The processor 601 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 601 may also include on-board memory for caching purposes. The processor 601 may comprise a single processing unit or a plurality of processing units for performing different actions of the method flows according to embodiments of the disclosure.
In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are stored. The processor 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. The processor 601 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 602 and/or the RAM 603. Note that the program may be stored in one or more memories other than the ROM 602 and the RAM 603. The processor 601 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 600 may also include an input/output (I/O) interface 605, which is also connected to the bus 604. The electronic device 600 may also include one or more of the following components connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT) display, a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage section 608 as needed.
According to embodiments of the present disclosure, the method flows according to the embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable storage medium, the computer program containing program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. When the computer program is executed by the processor 601, the above-described functions defined in the system of the embodiments of the present disclosure are performed. The systems, devices, apparatuses, modules, units, and the like described above may be implemented by computer program modules according to embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 602 and/or RAM 603 and/or one or more memories other than ROM 602 and RAM 603 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program, the computer program containing program code which, when the computer program product is run on an electronic device, causes the electronic device to implement the routing method provided by the embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 601. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal over a network medium, downloaded and installed through the communication section 609, and/or installed from the removable medium 611. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to: wireless, wired, and the like, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, the program code of the computer programs provided by the embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or in assembly/machine languages. These programming languages include, but are not limited to, Java, C++, Python, C, or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or acts, or by a combination of special-purpose hardware and computer instructions.

Those skilled in the art will appreciate that the features recited in the various embodiments of the present disclosure and/or in the claims may be combined and/or incorporated in various ways, even if such combinations or incorporations are not explicitly recited in the present disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or in the claims may be combined and/or incorporated in various ways without departing from the spirit and teachings of the present disclosure. All such combinations and/or incorporations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.
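To make the embodiments described above easier to follow, the Python sketch below pulls the main steps together: a processing node decides whether a file is a target file by taking the file's sequence number modulo the number of nodes, builds the second type routing field by appending a time information field and a random number field to the first type routing field, and then derives the fragment sequence number with the hash-based sharding shown earlier. For simplicity the auxiliary field is generated on the fly here, whereas the embodiments above describe it as pre-generated and looked up from the information database; every identifier, field format, and constant in the sketch is an illustrative assumption rather than the claimed implementation.

```python
import hashlib
import random
import time

NODE_COUNT = 4        # assumed number of processing servers in the second file system
TOTAL_FRAGMENTS = 16  # assumed total number of database fragments

def is_target_file(sequence_number: int, node_sequence_number: int) -> bool:
    """Modulo judgment rule: a file belongs to this node if its sequence number
    modulo the node count equals the node sequence number."""
    return sequence_number % NODE_COUNT == node_sequence_number

def build_second_type_field(first_type_field: str) -> str:
    """Second type routing field = first type routing field + auxiliary field,
    where the auxiliary field is a time information field plus a random number field."""
    time_field = time.strftime("%Y%m%d%H%M%S")
    random_field = f"{random.randint(0, 9999):04d}"
    return f"{first_type_field}{time_field}{random_field}"

def route_file(sequence_number: int, first_type_field: str, node_sequence_number: int):
    """Return the fragment sequence number for a target file, or None if this node discards the file."""
    if not is_target_file(sequence_number, node_sequence_number):
        return None  # not a target file for this node
    second_type_field = build_second_type_field(first_type_field)
    digest = hashlib.md5(second_type_field.encode("utf-8")).hexdigest()
    return int(digest, 16) % TOTAL_FRAGMENTS

print(route_file(sequence_number=7, first_type_field="CUST0001", node_sequence_number=3))
```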

Claims (10)

1. A routing method, comprising:
sequentially reading a plurality of files to be processed, wherein the files to be processed comprise a first type routing field, and the first type routing field is a routing field type according to which the files to be processed are subjected to data routing in a first file system;
judging whether the current file to be processed is a target file to be processed or not;
generating a target second type routing field corresponding to the target to-be-processed file according to a target first type routing field corresponding to the target to-be-processed file under the condition that the current to-be-processed file is the target to-be-processed file, so as to route the target to-be-processed file to a target database in a second file system according to the target second type routing field and a routing algorithm, wherein the second type routing field is a routing field type according to which the to-be-processed file is subjected to data routing in the second file system; wherein generating the target second type routing field comprises: generating the target second type routing field according to the corresponding relation between the first type routing field and the second type routing field, wherein the corresponding relation between the first type routing field and the second type routing field is as follows: second type routing field = first type routing field + auxiliary field, wherein the auxiliary field includes a time information field and a random number field; and
discarding the current to-be-processed file under the condition that the current to-be-processed file is not the target to-be-processed file.
2. The method according to claim 1, wherein the files to be processed are files obtained by numbering each file in the original batch of files in sequence in a preprocessing server, each file to be processed contains a sequence number, and the current file to be processed contains a current sequence number;
the judging whether the current file to be processed is the target file to be processed comprises the following steps:
judging whether the current file to be processed is the target file to be processed or not according to the current sequence number and a preset judging rule.
3. The method of claim 2, wherein the preset judging rule employs a modulo judgment rule;
judging whether the current file to be processed is the target file to be processed according to the current sequence number and the preset judging rule comprises the following steps:
taking the current sequence number modulo a number of nodes to obtain a current modulo value, wherein the number of nodes is the number of processing servers in the second file system, and each processing server is identified by a node sequence number;
and determining the current file to be processed whose current modulo value matches the node sequence number as the target file to be processed.
4. The method of claim 1, wherein generating the target second type routing field corresponding to the target to-be-processed file according to the target first type routing field corresponding to the target to-be-processed file comprises:
acquiring a target auxiliary field corresponding to the target first type routing field;
and adding the target auxiliary field to the target first type routing field to generate the target second type routing field.
5. The method of claim 4, wherein acquiring the target auxiliary field corresponding to the target first type routing field comprises:
acquiring target original information data, wherein the target original information data comprises target auxiliary field information data corresponding to the target first type routing field, and the target original information data is pre-generated and pre-stored in an information database of the second file system;
and analyzing the target original information data to obtain the target auxiliary field corresponding to the target first type routing field.
6. The method of claim 1, wherein the routing algorithm employs a hash algorithm;
routing the target to-be-processed file to the target database in the second file system according to the target second type routing field and the routing algorithm comprises:
determining a fragment sequence number corresponding to the target to-be-processed file based on the target second type routing field and the total fragment number by utilizing the hash algorithm, wherein the fragment sequence number is used for identifying the target database;
and storing the target file to be processed into a target database corresponding to the fragment sequence number.
7. The method of claim 6, further comprising:
after the fragment sequence number corresponding to the target to-be-processed file is determined, storing the fragment sequence number corresponding to the target to-be-processed file into the information database of the second file system.
8. A routing device, comprising:
a reading module, which is used for sequentially reading a plurality of files to be processed, wherein the files to be processed comprise a first type routing field, and the first type routing field is a routing field type according to which the files to be processed are subjected to data routing in a first file system;
a judging module, which is used for judging whether the current file to be processed is a target file to be processed or not;
a first execution module, which is used for generating a target second type routing field corresponding to the target to-be-processed file according to a target first type routing field corresponding to the target to-be-processed file under the condition that the current to-be-processed file is the target to-be-processed file, so that the target to-be-processed file is routed to a target database in a second file system according to the target second type routing field and a routing algorithm, wherein the second type routing field is a routing field type according to which the to-be-processed file is subjected to data routing in the second file system; wherein generating the target second type routing field comprises: generating the target second type routing field according to the corresponding relation between the first type routing field and the second type routing field, wherein the corresponding relation between the first type routing field and the second type routing field is as follows: second type routing field = first type routing field + auxiliary field, wherein the auxiliary field includes a time information field and a random number field; and
a second execution module, which is used for discarding the current file to be processed under the condition that the current file to be processed is not the target file to be processed.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1 to 7.
10. A computer readable storage medium having stored thereon executable instructions which when executed by a processor cause the processor to implement the method of any of claims 1 to 7.
CN202110555211.1A 2021-05-20 2021-05-20 Routing method and device, electronic equipment and computer readable storage medium Active CN113204535B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110555211.1A CN113204535B (en) 2021-05-20 2021-05-20 Routing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110555211.1A CN113204535B (en) 2021-05-20 2021-05-20 Routing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113204535A CN113204535A (en) 2021-08-03
CN113204535B true CN113204535B (en) 2024-02-02

Family

ID=77032074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110555211.1A Active CN113204535B (en) 2021-05-20 2021-05-20 Routing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113204535B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107276912A (en) * 2016-04-07 2017-10-20 华为技术有限公司 Memory, message processing method and distributed memory system
WO2017201970A1 (en) * 2016-05-21 2017-11-30 乐视控股(北京)有限公司 Branch base database system and routing method therefor
CN110659298A (en) * 2019-08-14 2020-01-07 金蝶软件(中国)有限公司 Financial data processing method and device, computer equipment and storage medium
CN111258990A (en) * 2020-02-17 2020-06-09 同盾控股有限公司 Index database data migration method, device, equipment and storage medium
CN112231400A (en) * 2020-09-27 2021-01-15 北京金山云网络技术有限公司 Distributed database access method, device, equipment and storage medium
CN112579606A (en) * 2020-12-24 2021-03-30 平安普惠企业管理有限公司 Workflow data processing method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113204535A (en) 2021-08-03

Similar Documents

Publication Publication Date Title
US9053344B2 (en) Securing sensitive data for cloud computing
CN106020948B (en) A kind of process dispatch method and device
CN108491267B (en) Method and apparatus for generating information
CN108933695B (en) Method and apparatus for processing information
CN110719215B (en) Flow information acquisition method and device of virtual network
CN111597120B (en) Interface test apparatus, method, electronic device, and computer-readable storage medium
CN109005208B (en) Method and device for pushing information
CN111629063A (en) Block chain based distributed file downloading method and electronic equipment
US20160078031A1 (en) Sort-merge-join on a large architected register file
CN110719200A (en) Information identification method and device
CN113204535B (en) Routing method and device, electronic equipment and computer readable storage medium
CN111405027B (en) Block chain consensus result screening method, device, computer equipment and storage medium
CN112507265A (en) Method and device for anomaly detection based on tree structure and related products
WO2023071566A1 (en) Data processing method and apparatus, computer device, computer-readable storage medium, and computer program product
CN113132400B (en) Business processing method, device, computer system and storage medium
CN115374207A (en) Service processing method and device, electronic equipment and computer readable storage medium
CN114780932A (en) Cross-block chain data interaction verification method, system and equipment for management three-mode platform
CN113094415B (en) Data extraction method, data extraction device, computer readable medium and electronic equipment
CN113656313A (en) Automatic test processing method and device
CN113569256A (en) Vulnerability scanning method and device, vulnerability scanning system, electronic equipment and computer readable medium
CN112488625A (en) Returned piece identification method, returned piece identification device, returned piece identification equipment and storage medium
CN112181816A (en) Interface testing method and device based on scene, computer equipment and medium
CN113177212B (en) Joint prediction method and device
CN110909191A (en) Graph data processing method and device, storage medium and electronic equipment
US20240098036A1 (en) Staggered payload relayer for pipelining digital payloads across network services

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant