CN110928905B - Data processing method and device - Google Patents


Info

Publication number
CN110928905B
CN110928905B (application CN201911083610.1A)
Authority
CN
China
Prior art keywords
data
processed
target attribute
accumulated
scheme
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911083610.1A
Other languages
Chinese (zh)
Other versions
CN110928905A (en)
Inventor
李霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taikang Insurance Group Co Ltd
Original Assignee
Taikang Insurance Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taikang Insurance Group Co Ltd
Priority to CN201911083610.1A
Publication of CN110928905A
Application granted
Publication of CN110928905B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24553Query execution of query operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/338Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a data processing method and device in the field of computer technology. In one specific implementation, the method comprises: sorting the data to be processed in a service interface table based on a target attribute according to the accumulated scheme codes in a log table, thereby obtaining a preset number of data items to be processed; grouping and encapsulating the data to be processed based on the target attribute; and placing the encapsulated data at the tail of a data cache queue while, in parallel, distributing the data at the head of the queue to computing threads to execute tasks. The embodiment of the invention can thereby greatly improve the data-calculation processing capacity of a reinsurance system.

Description

Data processing method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data processing method and apparatus.
Background
Reinsurance refers to the practice in which a direct insurance company cedes part of its risk to a reinsurance company; this cession is the core of a reinsurance system. The workflow divides into three stages: core-system data preparation, reinsurance-system data collection, and reinsurance-system data calculation. Core-system data preparation extracts the insurer's daily business data from the core databases (personal insurance and bancassurance) into core-system intermediate tables; reinsurance-system data collection then saves the intermediate-table data into the reinsurance service interface table. Finally, reinsurance-system data calculation processes the data in the service interface table and stores the results in the cession result table, from which report and billing data are later generated.
In the process of implementing the present invention, the inventor finds that at least the following problems exist in the prior art:
at present, as business grows, a reinsurance system must process a large amount of data, and existing reinsurance-system data calculation cannot bear such a processing load.
Disclosure of Invention
In view of this, the embodiments of the present invention provide a data processing method and apparatus, which can greatly improve the processing capability of data calculation of a reinsurance system.
In order to achieve the above object, according to one aspect of the embodiments of the present invention, there is provided a data processing method, comprising: sorting the data to be processed in a service interface table based on a target attribute according to the accumulated scheme codes in a log table, thereby obtaining a preset number of data items to be processed; grouping and encapsulating the data to be processed based on the target attribute; and placing the encapsulated data at the tail of a data cache queue while, in parallel, distributing the data at the head of the queue to computing threads to execute tasks.
Optionally, sorting the data to be processed in the service interface table based on the target attribute includes:
and sorting the data to be processed in the service interface table in a vernier mode based on the target attribute.
Optionally, sorting the data to be processed in the service interface table based on the target attribute according to the accumulated scheme codes in the log table, thereby obtaining a preset number of data items to be processed, further comprises:
reading the accumulated scheme codes in the log table into an accumulated scheme set;
traversing the accumulated scheme set to obtain an accumulated scheme code;
with that accumulated scheme code as the query condition, sorting the data to be processed in the service interface table in cursor mode based on the target attribute;
and reading the data to be processed from the service interface table into the cursor cache, so as to fetch a preset number of data items in batches into memory.
Optionally, based on the target attribute, the data to be processed is packaged in a grouping manner, including:
and packaging the data to be processed in the memory according to the target attribute, and placing the data to be processed with the same target attribute value into the same data container.
Optionally, the method further comprises:
each computing thread corresponds to a database, and the data to be processed with the same target attribute value is stored in the same temporary table.
Optionally, placing the encapsulated data to be processed into the tail of the data cache queue, including:
judging whether the quantity of data to be processed in the data cache queue is smaller than a preset first threshold value or not;
if so, pushing the data to be processed to the tail of the data cache queue; otherwise, pausing for a preset time and then re-determining whether the amount of data to be processed in the data cache queue is below the preset first threshold.
Optionally, distributing the pending data at the head of the data cache queue to the computing thread to perform the task, including:
judging whether an idle computing thread exists or not;
if yes, distributing the data to be processed for the idle computing thread to execute the task;
otherwise, determining whether the number of active computing threads is below a preset second threshold; if so, creating a new thread and distributing data to be processed to it to execute tasks; otherwise, placing the incoming thread task into a task queue for queuing.
In addition, according to one aspect of the embodiments of the present invention, there is provided a data processing apparatus, comprising: an obtaining module configured to sort the data to be processed in a service interface table based on a target attribute according to the accumulated scheme codes in a log table, thereby obtaining a preset number of data items to be processed; an encapsulation module configured to group and encapsulate the data to be processed based on the target attribute; and a processing module configured to place the encapsulated data at the tail of a data cache queue and, in parallel, distribute the data at the head of the queue to computing threads to execute tasks.
According to another aspect of an embodiment of the present invention, there is also provided an electronic device including:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of the data processing embodiments described above.
According to another aspect of an embodiment of the present invention, there is also provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the data processing method according to any of the above embodiments.
One embodiment of the above invention has the following advantages or benefits: by sorting the data to be processed in the service interface table based on the target attribute according to the accumulated scheme codes in the log table, a preset number of data items to be processed is obtained; the data are then grouped and encapsulated based on the target attribute; and the encapsulated data are placed at the tail of the data cache queue while the data at the head of the queue are distributed, in parallel, to computing threads to execute tasks. The invention can thereby greatly improve calculation efficiency and enhance system performance.
Further effects of the above-described non-conventional alternatives are described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main flow of a data processing method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of the main flow of a data processing method according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of the main flow of a data processing method according to a third embodiment of the present invention;
FIG. 4 is a schematic diagram of the main flow of a data processing method according to a fourth embodiment of the present invention;
FIG. 5 is a schematic diagram of the main modules of a data processing apparatus according to an embodiment of the present invention;
FIG. 6 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
fig. 7 is a schematic diagram of a computer system suitable for use in implementing an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram of the main flow of a data processing method according to a first embodiment of the present invention, the data processing method may include:
step S101, sorting the data to be processed in the service interface table based on the target attribute according to the accumulated scheme code in the log table, and further obtaining the preset number of data to be processed.
Preferably, cursor mode is adopted to sort the data to be processed in the service interface table based on the target attribute, and a preset number of data items to be processed is obtained. That is, the computation data are read into memory in batches via the cursor and progressively encapsulated in groups according to the target attribute.
Further, when the preset number of data items to be processed is obtained from the service interface table, the accumulated scheme codes in the log table can be read into an accumulated scheme set. The accumulated scheme set is traversed to obtain an accumulated scheme code. With that code as the query condition, the data to be processed in the service interface table are sorted in cursor mode based on the target attribute. The data are then read from the service interface table into the cursor cache, so that a preset number of data items is fetched in batches into memory.
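This cursor-style batched read can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: the in-memory list stands in for the database cursor, and the field names (`scheme`, the target attribute) are hypothetical.

```python
from typing import Dict, Iterator, List

def fetch_batches(interface_table: List[Dict], scheme_codes: List[str],
                  target_attr: str, batch_size: int) -> Iterator[List[Dict]]:
    """For each accumulated scheme code, select the matching rows, sort
    them by the target attribute (as the cursor's ORDER BY would), and
    yield them into memory in fixed-size batches."""
    for code in scheme_codes:                      # traverse the accumulated scheme set
        rows = [r for r in interface_table if r["scheme"] == code]
        rows.sort(key=lambda r: r[target_attr])    # cursor-style ordering
        for i in range(0, len(rows), batch_size):  # batched read into memory
            yield rows[i:i + batch_size]
```

Reading per scheme code and in target-attribute order is what later allows rows for the same insured person to arrive contiguously.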
Step S102, based on the target attribute, the data to be processed are packaged in a grouping mode.
Further, according to the target attribute (such as the service number), the data to be processed in the memory is encapsulated, and the data to be processed with the same target attribute value is put into the same data container.
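The grouping step above can be sketched as follows (a minimal illustration; the attribute name is a hypothetical stand-in for the service number or insured-person number):

```python
from collections import defaultdict
from typing import Dict, List

def group_by_attribute(batch: List[Dict], target_attr: str) -> Dict[object, List[Dict]]:
    """Encapsulate a batch in groups: rows sharing the same
    target-attribute value go into the same data container."""
    containers: Dict[object, List[Dict]] = defaultdict(list)
    for row in batch:
        containers[row[target_attr]].append(row)
    return dict(containers)
```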
Step S103, the packaged data to be processed is placed at the tail of the data cache queue, and the data to be processed at the head of the data cache queue is distributed to the computing thread in parallel to execute tasks.
Preferably, each computing thread corresponds to a database, and the data to be processed having the same target attribute value is stored in the same temporary table.
It can be seen that system resources are configured as distinct objects, each uniquely bound to one thread body, so that data reading and data computation execute in parallel in different threads. Specifically, a fetch thread, a dispatch thread, and computing threads are used: the fetch thread and the dispatch thread run in parallel, and the dispatch thread and the computing threads run concurrently.
The fetch thread acquires the preset number of data items from the service interface table, encapsulates them in groups based on the target attribute, and places the encapsulated data at the tail of the data cache queue. The dispatch thread distributes the data at the head of the data cache queue to computing threads to execute tasks.
In another embodiment, when the encapsulated data are placed at the tail of the data cache queue in step S103, it may first be determined whether the amount of data in the queue is below a preset first threshold. If so, the data are pushed to the tail of the queue; otherwise, the thread pauses for a preset time and then re-checks whether the amount of data in the queue is below the first threshold.
In a specific embodiment, assume the depth of the data cache queue is 3000 and the fetch thread monitors the queue. While the amount of data in the queue is below 3000, the fetch thread pushes data into it. When the amount reaches 3000 or more, the main thread pauses for 1000 ms and then re-checks; once the amount drops below 3000, the fetch thread resumes pushing. After all data have been processed, the fetch thread exits and releases its resources.
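This backpressure loop can be sketched as a single function, using the 3000-item depth and 1000 ms pause from the example as defaults (a simplified illustration, not the patent's actual implementation):

```python
import queue
import time

def push_with_backpressure(q: queue.Queue, item,
                           depth: int = 3000, pause_s: float = 1.0) -> None:
    """Push an item to the queue tail only while the queue holds fewer
    than `depth` items; otherwise pause and re-check, as the fetch
    thread does while monitoring the data cache queue."""
    while q.qsize() >= depth:   # queue full: wait before re-checking
        time.sleep(pause_s)
    q.put(item)                 # queue has room: push to the tail
```

In practice `queue.Queue(maxsize=depth)` with a blocking `put` achieves the same effect without polling; the explicit loop is kept here to mirror the pause-and-recheck behavior described in the text.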
In yet another embodiment, when distributing the data at the head of the data cache queue to computing threads in step S103, it may first be determined whether an idle computing thread exists. If so, the data are distributed to the idle computing thread to execute the task. Otherwise, it is determined whether the number of active computing threads is below a preset second threshold; if so, a new thread is created and the data are distributed to it. Otherwise, the incoming thread task is placed into a task queue for queuing.
The invention therefore provides a data processing method that uses a parallel-processing calculation model, so that data reading and data calculation execute in different threads in parallel, improving data-processing efficiency. System resources are configured as distinct objects, each uniquely bound to one thread body, which improves system performance. Meanwhile, all computation data are read into memory in batches, in a fixed order, in cursor mode, and are progressively encapsulated by target-attribute group into units for the computing threads, which ensures data integrity.
Fig. 2 is a schematic diagram of the main flow of a data processing method according to a second embodiment of the present invention, the data processing method may include:
step S201, according to the accumulated scheme codes in the log table, the data to be processed in the business interface table is sorted based on the target attribute in a vernier mode.
In one specific embodiment, RSWrapper cursor mode is used to sort records from the reinsurance service interface table by the insured-person number (instedno).
Step S202, obtaining a preset amount of data to be processed.
Preferably, the preset number of data items to be processed (for example, 10000 items of liability data) is obtained through the getData() method.
Step S203, performing packet encapsulation on the data to be processed based on the target attribute.
For example: the acquired liability information is encapsulated in groups by insured person, preparing data for the business processing threads.
Step S204, the packaged data to be processed is placed at the tail of the data cache queue, and the data to be processed at the head of the data cache queue is distributed to the computing thread in parallel to execute the task.
Preferably, each computing thread corresponds to a database, and the data to be processed having the same target attribute value is stored in the same temporary table.
In an embodiment, a large number of business processing classes and data storage containers, collectively referred to as system resources, are used throughout the computing process. Because no common system resource is shared among the parallel thread bodies, high system performance can be ensured: system resources are configured as distinct objects, each uniquely bound to one thread body.
Further, it is ensured that each single reinsurance-calculation thread body has an independent database connection, and that data belonging to the same insured person, processed by the same reinsurance-calculation thread body, are stored in the same temporary table.
Fig. 3 is a schematic diagram of the main flow of a data processing method according to a third embodiment of the present invention, the data processing method may include:
step S301, reading the accumulated scheme codes in the log table into an accumulated scheme set.
Step S302, traversing the accumulated scheme codes in the accumulated scheme set to obtain an accumulated scheme code.
Step S303, with the accumulated scheme code as the query condition, the data to be processed in the service interface table are sorted in cursor mode based on the target attribute.
Step S304, the data to be processed is read from the service interface table and is put into the cursor buffer, so as to obtain the preset quantity of data to be processed in batches into the memory.
Preferably, while fetching the preset number of data items in batches into memory, the data items set aside at the end of the previous batch can be put into a preset data set X, and then a certain number of items are fetched in order from the service interface table (for example, the reinsurance service interface table) and appended to X. That is, each time a batch of the preset size is fetched, the trailing data items are held back, so that no group is split across batches and data integrity is preserved.
Step S305, according to the target attribute, the data to be processed in the memory is packaged, and the data to be processed with the same target attribute value is put into the same data container.
Preferably, the data in data set X are encapsulated in groups according to the target attribute, and data concerning policy splitting and policy migration are placed at the end of the target-attribute group, because such data have the lowest priority.
Step S306, the packaged data to be processed is placed at the tail of the data cache queue, and the data to be processed at the head of the data cache queue is distributed to the computing thread in parallel to execute the task.
Preferably, to ensure data integrity, when the encapsulated data are placed at the tail of the data cache queue, the encapsulated data are taken from memory and the last group is removed, because the last group fetched from memory may be incomplete; part of it may still remain in memory. The removed last group is placed into a temporary set.
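The carry-over described above can be sketched as follows (hypothetical names; each batch's trailing group is held back, since it may be incomplete, and merged with the next batch so that a group is never split across batches):

```python
from typing import Dict, Iterable, Iterator, List

def regroup_batches(batches: Iterable[List[Dict]],
                    target_attr: str) -> Iterator[List[Dict]]:
    """Yield complete target-attribute groups across batch boundaries:
    the trailing group of each batch is held back and merged with the
    next batch, then flushed after the final batch."""
    carry: List[Dict] = []
    for batch in batches:
        rows = carry + batch
        groups: List[List[Dict]] = []
        current: List[Dict] = []
        current_key: object = object()       # sentinel that matches no value
        for row in rows:
            if row[target_attr] != current_key and current:
                groups.append(current)       # previous group is complete
                current = []
            current_key = row[target_attr]
            current.append(row)
        carry = current                      # hold back the possibly incomplete group
        yield from groups
    if carry:
        yield carry                          # flush after the last batch
```

This assumes the rows arrive sorted by the target attribute, which the cursor-mode ordering in the earlier steps guarantees.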
FIG. 4 is a schematic diagram of the main flow of a data processing method according to a fourth embodiment of the present invention, distributing data to be processed at the head of a data buffer queue to a computing thread to execute tasks, the data processing method may include:
step S401, judging whether an idle computing thread exists, if yes, proceeding to step S402, otherwise proceeding to step S403.
Step S402, distributing the data to be processed for the idle computing thread to execute the task.
Step S403, judging whether the number of the active computing threads is smaller than a preset second threshold, if yes, proceeding to step S404, otherwise proceeding to step S405.
Step S404, creating a new thread to distribute the data to be processed to execute the task.
Step S405, the incoming thread task is put into a task queue for queuing.
In a specific embodiment, assume the core thread-pool capacity is 50 and the task-queue depth is 5000; these parameter values would of course be tuned according to later stress-test results.
If an idle thread exists, tasks are assigned to it directly; if no thread is idle and the number of active computing threads is below 50, a new thread is created and assigned the task. If the number of active computing threads has reached 50, that is, all computing threads are executing tasks, and the task queue holds fewer than 5000 entries, the incoming thread task is placed into the task queue. Note that the dispatch thread monitors the thread pool and the task queue: whenever the queue holds a task and a pooled thread becomes idle, the first queued task is assigned to that idle thread for execution.
In addition, after all task processing is completed, the dispatch thread determines whether all threads in the thread pool are executing tasks. If all threads in the thread pool are idle and no tasks are queued in the task queue, then the dispatch thread ends.
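The dispatch decision of steps S401 to S405 can be sketched as a pure decision function, using the 50-thread and 5000-task limits from the example (hypothetical names; what happens when the task queue is also full is not specified in the source):

```python
from typing import List, Tuple

def dispatch(task, idle_threads: List[str], active_count: int,
             task_queue: List, max_threads: int = 50,
             queue_depth: int = 5000) -> Tuple[str, object]:
    """Mirror steps S401-S405: reuse an idle computing thread, else
    create a new thread while under capacity, else queue the task."""
    if idle_threads:                         # S401/S402: an idle thread exists
        return ("assign_idle", idle_threads.pop())
    if active_count < max_threads:           # S403/S404: room to grow the pool
        return ("create_thread", None)
    if len(task_queue) < queue_depth:        # S405: queue the incoming task
        task_queue.append(task)
        return ("queued", None)
    return ("wait", None)                    # full queue: unspecified in the source
```

This is essentially the policy a Java `ThreadPoolExecutor` applies (core threads, then growth, then the work queue), which may be what the described system uses, though the patent does not name a library.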
Fig. 5 is a schematic diagram of the main modules of a data processing apparatus according to an embodiment of the present invention. As shown in fig. 5, the data processing apparatus 500 includes an obtaining module 501, an encapsulation module 502, and a processing module 503. The obtaining module 501 is configured to sort the data to be processed in the service interface table based on the target attribute according to the accumulated scheme codes in the log table, thereby obtaining a preset number of data items to be processed. The encapsulation module 502 is configured to group and encapsulate the data to be processed based on the target attribute. The processing module 503 is configured to place the encapsulated data at the tail of the data cache queue and, in parallel, distribute the data at the head of the queue to computing threads to execute tasks.
Preferably, when the obtaining module 501 sorts the data to be processed in the service interface table based on the target attribute, cursor mode may be used.
Further, when sorting the data to be processed in the service interface table based on the target attribute according to the accumulated scheme codes in the log table, thereby obtaining the preset number of data items, the obtaining module 501 may read the accumulated scheme codes into an accumulated scheme set, traverse the set to obtain an accumulated scheme code, use that code as the query condition to sort the data in cursor mode based on the target attribute, and read the data from the service interface table into the cursor cache so as to fetch a preset number of data items in batches into memory.
Furthermore, when the data to be processed are encapsulated based on the target attribute, the data in memory may be encapsulated according to the target attribute, with data items having the same target-attribute value placed into the same data container.
As a reference embodiment, the processing module 503 places the encapsulated data to be processed into the tail of the data buffer queue, which may include:
judging whether the quantity of data to be processed in the data cache queue is smaller than a preset first threshold value or not;
if so, pushing the data to be processed to the tail of the data cache queue; otherwise, pausing for a preset time and then re-determining whether the amount of data to be processed in the data cache queue is below the preset first threshold.
In yet another embodiment, the processing module 503 distributes the pending data at the head of the data cache queue to the computing thread to perform tasks, including:
judging whether an idle computing thread exists or not;
if yes, distributing the data to be processed for the idle computing thread to execute the task;
otherwise, determining whether the number of active computing threads is below a preset second threshold; if so, creating a new thread and distributing data to be processed to it to execute tasks; otherwise, placing the incoming thread task into a task queue for queuing.
It should also be noted that each computing thread corresponds to a database, and stores the data to be processed having the same target attribute value in the same temporary table.
In the present invention, the data processing method and the data processing apparatus have a corresponding relationship in terms of implementation, and therefore, the description of the repeated contents is omitted.
Fig. 6 illustrates an exemplary system architecture 600 in which a data processing method or data processing apparatus of an embodiment of the invention may be applied.
As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 is used as a medium to provide communication links between the terminal devices 601, 602, 603 and the server 605. The network 604 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 605 via the network 604 using the terminal devices 601, 602, 603 to receive or send messages, etc. Various communication client applications such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 601, 602, 603.
The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 605 may be a server providing various services, such as a background management server (by way of example only) providing support for shopping websites browsed by users of the terminal devices 601, 602, 603. The background management server may analyze and process received data such as a product information query request, and feed the processing result (e.g., target push information or product information, by way of example only) back to the terminal device.
It should be noted that, the data processing method provided in the embodiment of the present invention is generally executed by the server 605, and accordingly, the data processing apparatus is generally disposed in the server 605.
It should be understood that the number of terminal devices, networks and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 7, there is illustrated a schematic diagram of a computer system 700 suitable for use in implementing an embodiment of the present invention. The terminal device shown in fig. 7 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM703, various programs and data required for the operation of the system 700 are also stored. The CPU701, ROM702, and RAM703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read therefrom is installed into the storage section 708 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 709, and/or installed from the removable medium 711. When the computer program is executed by the central processing unit (CPU) 701, the above-described functions defined in the system of the present invention are performed.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, for example, described as: a processor including an acquisition module, a packaging module, and a processing module. The names of these modules do not, in some cases, limit the modules themselves.
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: sort the data to be processed in the service interface table based on the target attribute according to the accumulated scheme codes in the log table, thereby obtaining a preset number of pieces of data to be processed; group and encapsulate the data to be processed based on the target attribute; and place the encapsulated data to be processed at the tail of the data cache queue, and distribute the data to be processed at the head of the data cache queue to computing threads in parallel to execute tasks.
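The three steps above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the function names `group_by_attribute` and `run_pipeline`, the dictionary-shaped rows, and the worker callback are all invented for the example.

```python
import threading
from collections import defaultdict, deque

def group_by_attribute(rows, attr):
    # One "data container" per target-attribute value: rows sharing the
    # same value of attr are encapsulated into the same group.
    groups = defaultdict(list)
    for row in rows:
        groups[row[attr]].append(row)
    return list(groups.values())

def run_pipeline(rows, attr, worker, num_threads=4):
    cache = deque()              # the data cache queue
    lock = threading.Lock()
    for group in group_by_attribute(rows, attr):
        cache.append(group)      # encapsulated groups enter at the tail
    results = []
    def consume():
        while True:
            with lock:
                if not cache:
                    return
                group = cache.popleft()   # tasks are taken from the head
            results.append(worker(group))  # list.append is thread-safe in CPython
    threads = [threading.Thread(target=consume) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

For instance, `run_pipeline(rows, "plan", len)` would return the size of each group, with each group processed by exactly one computing thread.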
According to the technical scheme provided by the embodiments of the present invention, the data-computation processing capacity of the reconfirm system can be greatly improved.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (9)

1. A method of data processing, comprising:
sorting the data to be processed in the service interface table based on the target attribute according to the accumulated scheme codes in the log table, thereby obtaining a preset number of pieces of data to be processed;
based on the target attribute, grouping and encapsulating the data to be processed, wherein the data with the lowest priority among the data to be processed in the group to which the target attribute belongs is placed at the end of the group of that target attribute for encapsulation;
placing the encapsulated data to be processed at the tail of a data cache queue, and distributing the data to be processed at the head of the data cache queue to computing threads in parallel to execute tasks; wherein, when the encapsulated data to be processed is placed at the tail of the data cache queue, the encapsulated data to be processed is obtained from the memory and the last group of data to be processed is removed;
wherein sorting the data to be processed in the service interface table based on the target attribute according to the accumulated scheme codes in the log table, thereby obtaining the preset number of pieces of data to be processed, further comprises:
reading the accumulated scheme codes in the log table into an accumulated scheme set;
traversing the accumulated scheme codes in the accumulated scheme set to obtain an accumulated scheme code;
taking the accumulated scheme code as a query condition, sorting the data to be processed in the service interface table in a cursor mode based on the target attribute;
and reading the data to be processed from the service interface table into a cursor cache, so as to obtain the preset number of pieces of data to be processed into the memory in batches.
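The cursor-mode batch read described in claim 1 can be sketched as follows, using sqlite3 purely for illustration; the table and column names (`service_interface`, `scheme_code`, `target_attr`) are assumptions, not taken from the patent. `fetchmany()` plays the role of the cursor cache, pulling a preset number of rows into memory per batch.

```python
import sqlite3

def read_batches(conn, scheme_code, batch_size):
    # Use the accumulated scheme code as the query condition and sort the
    # pending rows by the target attribute; the cursor then yields rows
    # a preset number at a time instead of loading the whole table.
    cur = conn.execute(
        "SELECT id, target_attr FROM service_interface "
        "WHERE scheme_code = ? ORDER BY target_attr",
        (scheme_code,),
    )
    while True:
        batch = cur.fetchmany(batch_size)
        if not batch:
            break
        yield batch
```

Each yielded batch can then be grouped and encapsulated without ever holding the full result set in memory.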
2. The method of claim 1, wherein ordering the data to be processed in the service interface table based on the target attribute comprises:
and sorting the data to be processed in the service interface table in a cursor mode based on the target attribute.
3. The method of claim 1, wherein grouping the data to be processed based on the target attribute comprises:
and packaging the data to be processed in the memory according to the target attribute, and placing the data to be processed with the same target attribute value into the same data container.
4. The method as recited in claim 1, further comprising:
each computing thread corresponds to a database, and the data to be processed with the same target attribute value is stored in the same temporary table.
5. The method of claim 1, wherein placing the encapsulated data to be processed into the tail of the data cache queue comprises:
judging whether the quantity of data to be processed in the data cache queue is smaller than a preset first threshold value or not;
if yes, pushing the data to be processed to the tail of the data cache queue; otherwise, pausing for a preset time and then judging again whether the quantity of data to be processed in the data cache queue is smaller than the preset first threshold value.
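The threshold check in claim 5 amounts to producer-side back-pressure: the producer waits while the cache queue is full. A hedged sketch, in which the pause length, the time-out guard, and the function name `push_with_backpressure` are illustrative assumptions:

```python
import time
from collections import deque

def push_with_backpressure(queue, item, threshold, pause=0.01, max_waits=100):
    # Re-check the size threshold after each preset pause; only push to
    # the tail once the queue has drained below the first threshold.
    waits = 0
    while len(queue) >= threshold:
        time.sleep(pause)
        waits += 1
        if waits >= max_waits:
            raise TimeoutError("data cache queue stayed full")
    queue.append(item)
```

The time-out is not part of the claim; it simply keeps the sketch from spinning forever when no consumer is running.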
6. The method of claim 1, wherein distributing the pending data at the head of the data cache queue to the computing thread to perform the task comprises:
judging whether an idle computing thread exists or not;
if yes, distributing the data to be processed for the idle computing thread to execute the task;
otherwise, judging whether the number of the active computing threads is smaller than a preset second threshold value, if so, creating a new thread to distribute data to be processed so as to execute tasks; otherwise, the incoming thread task is put into a task queue for queuing.
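The dispatch policy of claim 6 — reuse an idle thread, grow the pool while the active-thread count is below a second threshold, otherwise queue the task — matches the behavior of a bounded thread pool. A sketch under that assumption, with `dispatch_tasks` and `max_threads` as illustrative names:

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch_tasks(task_fn, work_items, max_threads=4):
    # max_threads plays the role of the "preset second threshold": idle
    # workers are reused, new workers are created while the pool is below
    # the threshold, and further tasks wait in the pool's internal queue.
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        futures = [pool.submit(task_fn, item) for item in work_items]
        return [f.result() for f in futures]
```

`ThreadPoolExecutor` is used here only as a stand-in; the claim itself describes the underlying create-or-queue decision rather than any particular library.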
7. A data processing apparatus, comprising:
the acquisition module is used for sorting the data to be processed in the service interface table based on the target attribute according to the accumulated scheme codes in the log table, thereby acquiring the preset number of pieces of data to be processed;
the packaging module is used for packaging the data to be processed in groups based on the target attribute, wherein the data with the lowest priority in the data to be processed in the group to which the target attribute belongs is placed at the last of the group of the target attribute so as to package the data;
the processing module is used for placing the encapsulated data to be processed at the tail of the data cache queue and distributing the data to be processed at the head of the data cache queue to the computing threads in parallel to execute tasks; wherein, when the encapsulated data to be processed is placed at the tail of the data cache queue, the encapsulated data to be processed is obtained from the memory and the last group of data to be processed is removed;
wherein, when the acquisition module sorts the data to be processed in the service interface table based on the target attribute according to the accumulated scheme codes in the log table and acquires the preset number of pieces of data to be processed, the acquisition module reads the accumulated scheme codes in the log table into an accumulated scheme set; traverses the accumulated scheme codes in the accumulated scheme set to obtain an accumulated scheme code; takes the accumulated scheme code as a query condition and sorts the data to be processed in the service interface table in a cursor mode based on the target attribute; and reads the data to be processed from the service interface table into a cursor cache, so as to obtain the preset number of pieces of data to be processed into the memory in batches.
8. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
9. A computer readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-6.
CN201911083610.1A 2019-11-07 2019-11-07 Data processing method and device Active CN110928905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911083610.1A CN110928905B (en) 2019-11-07 2019-11-07 Data processing method and device


Publications (2)

Publication Number Publication Date
CN110928905A CN110928905A (en) 2020-03-27
CN110928905B true CN110928905B (en) 2024-01-26

Family

ID=69853461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911083610.1A Active CN110928905B (en) 2019-11-07 2019-11-07 Data processing method and device

Country Status (1)

Country Link
CN (1) CN110928905B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703933A (en) * 2020-05-22 2021-11-26 北京沃东天骏信息技术有限公司 Task processing method and device
CN113760630B (en) * 2020-06-19 2024-09-20 北京沃东天骏信息技术有限公司 Data processing method and device
CN111782657B (en) * 2020-07-08 2024-06-07 上海乾臻信息科技有限公司 Data processing method and device
CN113760925A (en) * 2020-11-30 2021-12-07 北京沃东天骏信息技术有限公司 Data processing method and device
CN113821506A (en) * 2020-12-23 2021-12-21 京东科技控股股份有限公司 Task execution method, device, system, server and medium for task system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107450971A (en) * 2017-06-29 2017-12-08 北京五八信息技术有限公司 Task processing method and device
CN107729135A (en) * 2016-08-11 2018-02-23 阿里巴巴集团控股有限公司 The method and apparatus for sequentially carrying out parallel data processing
CN107766526A (en) * 2017-10-26 2018-03-06 中国人民银行清算总中心 Data bank access method, apparatus and system
CN109376189A (en) * 2018-09-13 2019-02-22 阿里巴巴集团控股有限公司 Processing method, device and the equipment of batch data operation
CN109800260A (en) * 2018-12-14 2019-05-24 深圳壹账通智能科技有限公司 High concurrent date storage method, device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9141432B2 (en) * 2012-06-20 2015-09-22 International Business Machines Corporation Dynamic pending job queue length for job distribution within a grid environment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729135A (en) * 2016-08-11 2018-02-23 阿里巴巴集团控股有限公司 The method and apparatus for sequentially carrying out parallel data processing
CN107450971A (en) * 2017-06-29 2017-12-08 北京五八信息技术有限公司 Task processing method and device
CN107766526A (en) * 2017-10-26 2018-03-06 中国人民银行清算总中心 Data bank access method, apparatus and system
CN109376189A (en) * 2018-09-13 2019-02-22 阿里巴巴集团控股有限公司 Processing method, device and the equipment of batch data operation
CN109800260A (en) * 2018-12-14 2019-05-24 深圳壹账通智能科技有限公司 High concurrent date storage method, device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on the Current Status and Challenges of Big Data Processing Technology for Smart Grids; Wan Fanglin; Telecom World (《通讯世界》); 2019-10-31; Vol. 26, No. 10; pp. 197-198 *

Also Published As

Publication number Publication date
CN110928905A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
CN110928905B (en) Data processing method and device
CN110572422B (en) Data downloading method, device, equipment and medium
CN109039817B (en) Information processing method, device, equipment and medium for flow monitoring
CN112052133B (en) Method and device for monitoring service system based on Kubernetes
CN113127225A (en) Method, device and system for scheduling data processing tasks
CN115525411A (en) Method, device, electronic equipment and computer readable medium for processing service request
CN112667368A (en) Task data processing method and device
CN111461583B (en) Inventory checking method and device
CN112398669A (en) Hadoop deployment method and device
CN113190558A (en) Data processing method and system
CN110825342B (en) Memory scheduling device and system, method and apparatus for processing information
CN109213815B (en) Method, device, server terminal and readable medium for controlling execution times
CN115952050A (en) Reporting method and device for organization service buried point data
CN113760861B (en) Data migration method and device
CN113760482B (en) Task processing method, device and system
CN112688982B (en) User request processing method and device
CN114666319A (en) Data downloading method and device, electronic equipment and readable storage medium
CN112988857B (en) Service data processing method and device
CN111786801B (en) Method and device for charging based on data flow
CN114237902A (en) Service deployment method and device, electronic equipment and computer readable medium
CN113722113A (en) Traffic statistic method and device
CN112015565A (en) Method and device for determining task downloading queue
CN112732417B (en) Method and device for processing application request
CN113760493B (en) Job scheduling method and device
CN112667627B (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant