CN111209311A - Method and apparatus for processing data - Google Patents

Info

Publication number
CN111209311A
Authority
CN
China
Prior art keywords
data
queue
queues
piece
marking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811310892.XA
Other languages
Chinese (zh)
Other versions
CN111209311B (en)
Inventor
高予兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JD Digital Technology Holdings Co Ltd
Original Assignee
JD Digital Technology Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JD Digital Technology Holdings Co Ltd filed Critical JD Digital Technology Holdings Co Ltd
Priority to CN201811310892.XA priority Critical patent/CN111209311B/en
Publication of CN111209311A publication Critical patent/CN111209311A/en
Application granted granted Critical
Publication of CN111209311B publication Critical patent/CN111209311B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/03Credit; Loans; Processing thereof

Landscapes

  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present application disclose a method and an apparatus for processing data. One embodiment of the method comprises: acquiring at least one piece of data to be processed; creating a predetermined number of queues, where each queue corresponds to a filter condition; storing the at least one piece of data in each queue; and, for each queue of the predetermined number of queues, reading data from the queue and marking the read data according to the filter condition corresponding to the queue. This embodiment enables the filtering tasks to be processed in parallel, saving time.

Description

Method and apparatus for processing data
Technical Field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and an apparatus for processing data.
Background
Currently, much data filtering in the financial field (e.g., filtering for asset securitization) uses a per-protocol table-splitting strategy: a fixed amount of assets is migrated into each protocol's data table, and the assets in that table are then filtered and tagged according to the protocol's filter condition.
If the assets available for filtering are insufficient, an asset reallocation procedure is started to move assets from other protocols' data tables into the protocol's own data table until filtering is complete. This approach has the following disadvantages: 1. the tasks are executed serially, which takes a long time; 2. an unreasonable asset distribution is likely to leave some protocols with insufficient assets, triggering repeated asset reallocation; 3. asset reallocation migrates large amounts of data and involves physical deletion of data, which consumes performance.
Disclosure of Invention
Embodiments of the present application provide a method and an apparatus for processing data.
In a first aspect, an embodiment of the present application provides a method for processing data, including: acquiring at least one piece of data to be processed; creating a predetermined number of queues, where each queue corresponds to a filter condition; storing the at least one piece of data in each queue; and, for each queue of the predetermined number of queues, reading data from the queue and marking the read data according to the filter condition corresponding to the queue.
In some embodiments, reading data from the queue, and marking the read data according to a filtering condition corresponding to the queue includes: fetching an unlocked piece of data from the head of the queue; locking the unlocked data; and if the locked data meets the filtering condition corresponding to the queue, marking the locked data.
In some embodiments, the method further comprises: and if the locked data does not meet the filtering condition corresponding to the queue, unlocking the locked data and then returning the unlocked data to the queue.
In some embodiments, the queue is a double ended queue; and the method further comprises: for each queue in the predetermined number of queues, if the data in the queue is emptied, reading the data from the tail of other non-empty queues through a work stealing algorithm, and marking the read data according to the filtering condition of the queue.
In some embodiments, the data includes an amount, and acquiring the at least one piece of data to be processed includes: acquiring the at least one piece of data to be processed in descending order of the amount.
In some embodiments, the method further includes: for a queue of the predetermined number of queues, if the sum of the amounts included in the data in the queue that meets the filter condition corresponding to the queue reaches a predetermined value, deleting the queue.
In a second aspect, an embodiment of the present application provides an apparatus for processing data, including: an acquisition unit configured to acquire at least one piece of data to be processed; a creation unit configured to create a predetermined number of queues, where each queue corresponds to a filter condition; a storage unit configured to store the at least one piece of data in each queue; and a marking unit configured to, for each queue of the predetermined number of queues, read data from the queue and mark the read data according to the filter condition corresponding to the queue.
In some embodiments, the marking unit is further configured to: fetching an unlocked piece of data from the head of the queue; locking the unlocked data; and if the locked data meets the filtering condition corresponding to the queue, marking the locked data.
In some embodiments, the apparatus further comprises a release unit configured to: and if the locked data does not meet the filtering condition corresponding to the queue, unlocking the locked data and then returning the unlocked data to the queue.
In some embodiments, the queue is a double ended queue; and the apparatus further comprises a stealing unit configured to: for each queue in the predetermined number of queues, if the data in the queue is emptied, reading the data from the tail of other non-empty queues through a work stealing algorithm, and marking the read data according to the filtering condition of the queue.
In some embodiments, the data includes an amount, and the acquisition unit is further configured to: acquire the at least one piece of data to be processed in descending order of the amount.
In some embodiments, the apparatus further includes a deletion unit configured to: for a queue of the predetermined number of queues, if the sum of the amounts included in the data in the queue that meets the filter condition corresponding to the queue reaches a predetermined value, delete the queue.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method as described in any implementation of the first aspect.
According to the method and apparatus for processing data provided by the embodiments of the present application, the same data is filtered in parallel under different filter conditions, which increases the data processing speed and reduces performance consumption.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method of processing data according to the present application;
FIGS. 3a, 3b, 3c and 3d are schematic diagrams of an application scenario of a method of processing data according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method of processing data according to the present application;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for processing data according to the present application;
FIG. 6 is a schematic structural diagram of a computer system suitable for implementing the terminal device or the server according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments in the present application and the features of the embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method of processing data or the apparatus for processing data of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a financial service application, a web browser application, a shopping application, a search application, an instant messenger, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and supporting financial loan services, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. This is not specifically limited herein.
The server 105 may be a server that provides various services, such as a background loan recording server that provides support for the loan tickets displayed on the terminal devices 101, 102, 103. The background loan recording server can analyze and process the received data such as the loan request and feed back the processing result (such as the loan bill meeting the filtering condition) to the terminal equipment.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. This is not specifically limited herein.
It should be noted that the method for processing data provided in the embodiment of the present application is generally performed by the server 105, and accordingly, the apparatus for processing data is generally disposed in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method of processing data according to the present application is shown. The method for processing data comprises the following steps:
step 201, at least one piece of data to be processed is acquired.
In this embodiment, an execution subject of the method for processing data (for example, the server shown in fig. 1) may acquire at least one piece of data to be processed from a database through a wired or wireless connection. A predetermined number of records, for example 5000 loan order records, may be read in each cycle. The data may be sorted in a certain order. Each time data is read, it is sent through message queues to a predetermined number of protocol processes. Each protocol process is responsible for filtering data according to one filter condition (protocol rule). For example, protocol process one may be responsible for filtering out loan orders with over 5 billion in assets and no overdue records, while protocol process two may be responsible for filtering out loan orders with over 1 billion in assets that may have overdue records.
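As an illustrative, non-limiting sketch of this dispatch step (in Python), the code below batches record IDs and pushes the same batch onto one in-memory queue per protocol process. The batch size of 5000, the record field names, and the use of queue.Queue in place of real message queue middleware are assumptions made for illustration only.

    import queue

    BATCH_SIZE = 5000  # assumed batch size, matching the example above

    def dispatch(records, protocol_queues):
        # Send every batch of record IDs to every protocol's message queue.
        for start in range(0, len(records), BATCH_SIZE):
            batch_ids = [r["id"] for r in records[start:start + BATCH_SIZE]]
            for protocol_no, mq in protocol_queues.items():
                # Message layout follows the example given in the scenario below.
                mq.put({"protocol number": protocol_no, "ID set": batch_ids})

    # Toy usage: three protocol processes, ten pending loan records.
    protocol_queues = {1: queue.Queue(), 2: queue.Queue(), 3: queue.Queue()}
    records = [{"id": i, "amount": (10 - i) * 100} for i in range(10)]
    dispatch(records, protocol_queues)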
At step 202, a predetermined number of queues are created.
In this embodiment, each protocol process creates a queue, and each queue corresponds to one filter condition. Optionally, the queue may be a double-ended queue.
At least one piece of data is stored in each queue, step 203.
In this embodiment, the same at least one piece of data is stored into each queue. A thread is created for each queue, and each thread is used to process the data in the queue corresponding to that thread.
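The following non-limiting Python sketch shows this step under the same assumptions: each protocol gets its own double-ended queue holding the same batch of data, and one worker thread is bound to each queue. The handle callback is a hypothetical placeholder for the filtering and marking described in step 204.

    from collections import deque
    import threading

    def build_queues_and_threads(data, protocol_numbers, handle):
        # One double-ended queue per filter condition, each holding the same data,
        # each drained by its own dedicated thread (one-to-one correspondence).
        queues = {n: deque(data) for n in protocol_numbers}
        threads = []
        for protocol_no, dq in queues.items():
            t = threading.Thread(target=handle, args=(protocol_no, dq), daemon=True)
            threads.append(t)
            t.start()
        return queues, threads

    # Toy usage with a placeholder handler that simply drains its own queue.
    def drain(protocol_no, dq):
        while dq:
            dq.popleft()

    qs, ts = build_queues_and_threads([{"id": 1}, {"id": 2}], [1, 2, 3], drain)
    for t in ts:
        t.join(timeout=1)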
Step 204, for a queue in the predetermined number of queues, reading data from the queue, and marking the read data according to the filter condition corresponding to the queue.
In this embodiment, for the thread corresponding to each queue, data is read from the head of the queue corresponding to that thread, and the read data is then marked according to the filter condition corresponding to the queue. When a piece of data has been fetched from one queue, it counts as fetched in the other queues as well, i.e. the same piece of data cannot be read by multiple threads at the same time. For example, queue 1 corresponds to the protocol one process and filters data A using protocol one's filter condition. If the assets in data A are 6 billion with no overdue records, a label of protocol one may be added to data A. Queue 2 corresponds to the protocol two process and filters data B using protocol two's filter condition. If the assets in data B are 1.1 billion with no overdue records, a label of protocol two may be added to data B. Data B in queue 1 cannot be judged against protocol one's filter condition, because data B is no longer visible in queue 1 after being pulled from queue 2. Each piece of data in a queue may be read and marked by traversing the queue until it is empty. An end condition may also be defined, for example ending the traversal when the total amount of marked data, or some other attribute, satisfies a condition.
In some optional implementations of this embodiment, reading data from the queue and marking the read data according to the filter condition corresponding to the queue includes: fetching an unlocked piece of data from the head of the queue; locking the unlocked data; and, if the locked data meets the filter condition corresponding to the queue, marking the locked data. The locking uses a redis distributed lock. When a piece of data is locked in one queue, the threads corresponding to the other queues cannot read that piece of data from their own queues. Locking thus ensures that a piece of data meeting some protocol's filter condition is marked only once.
In some optional implementations of this embodiment, the method further includes: if the locked data does not meet the filter condition corresponding to the queue, unlocking the locked data and returning the unlocked data to the queue. Once the data has been unlocked in one queue, it is also in an unlocked state in the other queues and can be read from those queues for processing.
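As an illustrative, non-limiting Python sketch of this lock-filter-mark step, the code below uses one in-process try-lock per record in place of the redis distributed lock, so that a record claimed by one protocol's thread cannot be marked again by another, and a record that fails the filter is released again for the other queues. The filter condition, the field names, and the mark representation are assumptions made for illustration only.

    import threading

    item_locks = {}                # record id -> lock, shared by all protocol threads
    locks_guard = threading.Lock()
    marks = {}                     # record id -> protocol number that marked it

    def lock_for(record_id):
        with locks_guard:
            return item_locks.setdefault(record_id, threading.Lock())

    def try_mark(record, protocol_no, condition):
        # Lock the record; mark it if it satisfies this queue's filter condition.
        lk = lock_for(record["id"])
        if not lk.acquire(blocking=False):
            return False           # already locked by another protocol's thread
        if condition(record):
            marks[record["id"]] = protocol_no   # the "marking" step
            return True            # keep the lock so no other protocol marks it again
        lk.release()               # condition not met: unlock so other queues can use it
        return False

    # Assumed example condition in the spirit of "protocol one" above.
    def protocol_one(record):
        return record["amount"] >= 5_000_000_000 and not record["overdue"]

    # Toy usage: the first caller to lock a qualifying record marks it exactly once.
    rec = {"id": 7, "amount": 6_000_000_000, "overdue": False}
    print(try_mark(rec, 1, protocol_one), try_mark(rec, 2, protocol_one))  # True False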
In some optional implementations of this embodiment, the queue is a double-ended queue, and the method further includes: for each queue in the predetermined number of queues, if the data in the queue has been emptied, reading data from the tails of other non-empty queues through a work-stealing algorithm, and marking the read data according to the filter condition of the queue. The work-stealing algorithm refers to a thread stealing tasks from other queues to execute. A large task is divided into a number of independent subtasks; to reduce contention among threads, these subtasks are placed into different queues, and a dedicated thread is created for each queue to execute the tasks in that queue, so that threads and queues are in one-to-one correspondence. However, some threads finish the tasks in their own queues first, while the queues of other threads still have tasks waiting to be processed. Rather than sit idle, a thread that has finished its own work may steal a task from another thread's queue and execute it. At that point two threads may access the same queue, so to reduce contention between the owning thread and the stealing thread a double-ended queue is typically used: the owning thread always takes data from the head of the double-ended queue for execution, while the stealing thread always takes data from the tail of the double-ended queue for execution.
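The following non-limiting Python sketch illustrates such a consumption loop over the per-protocol double-ended queues assumed above: the owning thread pops from the head of its own queue, and once that queue is empty it steals from the tail of other non-empty queues. The process callback is a hypothetical placeholder for the lock-filter-mark step; Python's collections.deque documents thread-safe appends and pops from either end, though the toy usage below runs single-threaded.

    from collections import deque

    def worker(my_no, queues, process):
        my_q = queues[my_no]
        while True:
            try:
                record = my_q.popleft()          # owner consumes from the head
            except IndexError:                   # own queue drained: try to steal
                record = None
                for other_no, other_q in list(queues.items()):
                    if other_no == my_no:
                        continue
                    try:
                        record = other_q.pop()   # thief steals from the tail
                        break
                    except IndexError:
                        continue
                if record is None:
                    return                       # nothing left anywhere: done
            process(my_no, record)               # filter and mark for this protocol

    # Toy usage: queue 2 is already empty, so its worker steals everything from queue 1.
    handled = []
    qs = {1: deque(range(5)), 2: deque()}
    worker(2, qs, lambda no, rec: handled.append((no, rec)))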
With continuing reference to figs. 3a-3d, figs. 3a-3d are schematic diagrams of an application scenario of the method of processing data according to this embodiment. In the application scenario of figs. 3a-3d, 5000 loan ticket IDs are cyclically extracted in descending order of loan ticket amount, and the 5000 loan tickets are sent to the executing process service of each agreement, as shown in fig. 3a. For example, if the asset securitization currently involves 3 agreements, the 5000 loan ticket IDs are sent to the agreement processes through 3 message queues. The message format is as follows: { "protocol number": 1, "ID set": … }. As shown in fig. 3b, after receiving the message from the message queue, each protocol process creates a double-ended queue, so the three protocol processes create three queues in total. After the queues are created, the same number of threads are created, in one-to-one correspondence with the queues, and each thread marks data according to the filter condition of the protocol process corresponding to that thread. As shown in fig. 3c, each thread extracts loan tickets one by one from the head of its own queue for filtering: if a loan ticket is already locked, the next one is taken from the queue; if it is not locked, the thread first locks the loan ticket using redis, then applies the protocol rule filter, and if the filtering succeeds, marks the data. After marking the data, the thread continues taking loan tickets from the head of the queue for filtering until the queue is empty. As shown in fig. 3d, depending on the attributes of the loan tickets, some queues will certainly be consumed quickly and others slowly; the work-stealing algorithm is adopted so that idle threads help consume other queues, which can greatly increase the asset filtering speed. Finally, the asset pool is traversed cyclically until the protocol filtering has accumulated enough assets.
The method provided by this embodiment of the present application processes data in parallel, which increases the data processing speed and resolves the problem of contention for data resources.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a method of processing data is shown. The process 400 of the method for processing data includes the following steps:
Step 401, at least one piece of data to be processed is acquired in descending order of amount.
In this embodiment, an execution subject of the method for processing data (for example, the server shown in fig. 1) may acquire at least one piece of data to be processed from the database, in descending order of amount, through a wired or wireless connection. Each piece of data includes an amount. That is, either the data is obtained already sorted in descending order of amount, or it is sorted in descending order of amount after being obtained. A predetermined number of records, for example 5000 loan order records, may be read in each cycle. Each time data is read, it is sent through message queues to a predetermined number of protocol processes. Each protocol process is responsible for filtering data according to one filter condition. For example, protocol process one may be responsible for filtering out loan orders with over 5 billion in assets and no overdue records, while protocol process two may be responsible for filtering out loan orders with over 1 billion in assets that may have overdue records.
Arranging the data in descending order of amount increases the hit probability of large loan tickets and reduces the number of filtering passes.
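An illustrative, non-limiting Python sketch of this ordering step, assuming each record carries an amount field:

    # Order pending records by amount, largest first, before batching them out.
    records = [{"id": 1, "amount": 300}, {"id": 2, "amount": 900}, {"id": 3, "amount": 450}]
    records_desc = sorted(records, key=lambda r: r["amount"], reverse=True)
    # -> ids 2, 3, 1; the dispatch step sketched earlier would then batch records_desc.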
At step 402, a predetermined number of queues are created.
In this embodiment, each protocol process creates a queue, and each queue corresponds to one filter condition. Optionally, the queue may be a double-ended queue.
At least one piece of data is stored in each queue, step 403.
In this embodiment, the same at least one piece of data is stored into each queue. A thread is created for each queue, and each thread is used to process the data in the queue corresponding to that thread.
Step 404, for a queue of the predetermined number of queues, reading data from the queue, and marking the read data according to the filter condition corresponding to the queue.
Step 404 is substantially the same as step 204 and thus will not be described again.
Step 405, for a queue of the predetermined number of queues, if the sum of the amounts included in the data in the queue that satisfies the filter condition corresponding to the queue reaches a predetermined value, the queue is deleted.
In this embodiment, since the data obtained in step 401 has been sorted by amount from largest to smallest, data with large amounts can be read preferentially, and once the sum of the amounts reaches the predetermined value, reading of the queue can be ended early and the queue deleted. Data left unprocessed in that protocol's queue is still present in the other queues and can be filtered by the other protocols.
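An illustrative, non-limiting Python sketch of this early-exit check: the protocol's thread keeps a running total of the amounts it has marked and stops consuming once an assumed target value is reached, after which the caller can delete the queue. The satisfies_and_mark callback is a hypothetical stand-in for the lock-filter-mark step sketched earlier.

    from collections import deque

    def consume_until_target(dq, satisfies_and_mark, target):
        # Drain the queue head-first until the summed amount of marked records
        # reaches the target; remaining records stay available in other queues.
        total = 0
        while dq and total < target:
            record = dq.popleft()                 # largest amounts come first (step 401)
            if satisfies_and_mark(record):
                total += record["amount"]
        return total                              # caller deletes the queue once the target is met

    # Toy usage with an assumed "always matches" condition and a target of 1000.
    q = deque([{"id": i, "amount": a} for i, a in enumerate([900, 450, 300])])
    print(consume_until_target(q, lambda r: True, 1000))   # -> 1350, one record left in q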
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for processing data, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for processing data of the present embodiment includes: an acquisition unit 501, a creation unit 502, a storage unit 503, and a marking unit 504. The acquisition unit 501 is configured to acquire at least one piece of data to be processed. The creation unit 502 is configured to create a predetermined number of queues, where each queue corresponds to a filter condition. The storage unit 503 is configured to store the at least one piece of data into each queue. The marking unit 504 is configured to, for each queue of the predetermined number of queues, read data from the queue and mark the read data according to the filter condition corresponding to the queue.
In this embodiment, specific processing of the acquiring unit 501, the creating unit 502, the storing unit 503 and the marking unit 504 of the apparatus 500 for processing data may refer to step 201, step 202, step 203 and step 204 in the corresponding embodiment of fig. 2.
In some optional implementations of this embodiment, the marking unit 504 is further configured to: fetching an unlocked piece of data from the head of the queue; locking the unlocked data; and if the locked data meets the filtering condition corresponding to the queue, marking the locked data.
In some optional implementations of this embodiment, the apparatus 500 further comprises a releasing unit (not shown) configured to: and if the locked data does not meet the filtering condition corresponding to the queue, unlocking the locked data and then returning the unlocked data to the queue.
In some optional implementations of this embodiment, the queue is a double ended queue; and the apparatus 500 further comprises a stealing unit configured to: for each queue in the predetermined number of queues, if the data in the queue is emptied, reading the data from the tail of other non-empty queues through a work stealing algorithm, and marking the read data according to the filtering condition of the queue.
In some optional implementations of this embodiment, the data includes an amount, and the acquisition unit 501 is further configured to: acquire the at least one piece of data to be processed in descending order of the amount.
In some optional implementations of this embodiment, the apparatus 500 further comprises a deletion unit (not shown) configured to: for a queue of the predetermined number of queues, if the sum of the amounts included in the data in the queue that meets the filter condition corresponding to the queue reaches a predetermined value, delete the queue.
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for implementing an electronic device (e.g., the terminal device/server shown in FIG. 1) of an embodiment of the present application is shown. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed into the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a creation unit, a storage unit, and a marking unit. Where the names of the units do not in some cases constitute a limitation of the units themselves, for example, the acquisition unit may also be described as a "unit that acquires at least one piece of data to be processed".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquiring at least one piece of data to be processed; creating a predetermined number of queues, wherein each queue corresponds to a filter condition; storing at least one piece of data in each queue; and for the queues in the preset number of queues, reading data from the queues, and marking the read data according to the filter condition corresponding to the queues.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present application is not limited to the embodiments with a specific combination of the above-mentioned features, but also covers other embodiments with any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (14)

1. A method of processing data, comprising:
acquiring at least one piece of data to be processed;
creating a predetermined number of queues, wherein each queue corresponds to a filter condition;
storing the at least one piece of data in each queue;
and for each queue of the predetermined number of queues, reading data from the queue, and marking the read data according to the filter condition corresponding to the queue.
2. The method of claim 1, wherein the reading data from the queue and marking the read data according to the filter condition corresponding to the queue comprises:
fetching an unlocked piece of data from the head of the queue;
locking the unlocked data;
and if the locked data meets the filtering condition corresponding to the queue, marking the locked data.
3. The method of claim 2, wherein the method further comprises:
and if the locked data does not meet the filtering condition corresponding to the queue, unlocking the locked data and then returning the unlocked data to the queue.
4. The method of claim 2, wherein the queue is a double ended queue; and the method further comprises:
and for each queue of the predetermined number of queues, if the data in the queue has been emptied, reading data from the tails of other non-empty queues through a work stealing algorithm, and marking the read data according to the filter condition of the queue.
5. The method of any of claims 1-4, wherein the data includes an amount; and the acquiring at least one piece of data to be processed comprises:
acquiring the at least one piece of data to be processed in descending order of the amount.
6. The method of claim 5, wherein the method further comprises:
and for a queue of the predetermined number of queues, if the sum of the amounts included in the data in the queue that meets the filter condition corresponding to the queue reaches a predetermined value, deleting the queue.
7. An apparatus for processing data, comprising:
an acquisition unit configured to acquire at least one piece of data to be processed;
a creating unit configured to create a predetermined number of queues, wherein each queue corresponds to a filter condition;
a storage unit configured to store the at least one piece of data into each queue;
and a marking unit configured to, for each queue of the predetermined number of queues, read data from the queue and mark the read data according to the filter condition corresponding to the queue.
8. The apparatus of claim 7, wherein the tagging unit is further configured to:
fetching an unlocked piece of data from the head of the queue;
locking the unlocked data;
and if the locked data meets the filtering condition corresponding to the queue, marking the locked data.
9. The apparatus of claim 8, wherein the apparatus further comprises a release unit configured to:
and if the locked data does not meet the filtering condition corresponding to the queue, unlocking the locked data and then returning the unlocked data to the queue.
10. The apparatus of claim 8, wherein the queue is a double ended queue; and
the apparatus further comprises a stealing unit configured to:
and for each queue of the predetermined number of queues, if the data in the queue has been emptied, read data from the tails of other non-empty queues through a work stealing algorithm, and mark the read data according to the filter condition of the queue.
11. The apparatus of one of claims 7-10, wherein the data comprises an amount of money; and the obtaining unit is further configured to:
acquire the at least one piece of data to be processed in descending order of the amount.
12. The apparatus of claim 11, wherein the apparatus further comprises a deletion unit configured to:
and for a queue of the predetermined number of queues, if the sum of the amounts included in the data in the queue that meets the filter condition corresponding to the queue reaches a predetermined value, delete the queue.
13. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
14. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-6.
CN201811310892.XA 2018-11-06 2018-11-06 Method and device for processing data Active CN111209311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811310892.XA CN111209311B (en) 2018-11-06 2018-11-06 Method and device for processing data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811310892.XA CN111209311B (en) 2018-11-06 2018-11-06 Method and device for processing data

Publications (2)

Publication Number Publication Date
CN111209311A true CN111209311A (en) 2020-05-29
CN111209311B CN111209311B (en) 2024-02-06

Family

ID=70787600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811310892.XA Active CN111209311B (en) 2018-11-06 2018-11-06 Method and device for processing data

Country Status (1)

Country Link
CN (1) CN111209311B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113806384A (en) * 2021-08-19 2021-12-17 紫光云(南京)数字技术有限公司 Method for allocating incremental integer data based on redis
US20220164282A1 (en) * 2020-11-24 2022-05-26 International Business Machines Corporation Reducing load balancing work stealing

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030005025A1 (en) * 2001-06-27 2003-01-02 Shavit Nir N. Load-balancing queues employing LIFO/FIFO work stealing
US20120102003A1 (en) * 2010-10-20 2012-04-26 International Business Machines Corporation Parallel data redundancy removal
CA2784504A1 (en) * 2012-08-06 2014-02-06 Mohammed N. Faridy Systems and methods for identifying and relating asset-tied transactions
US20150319238A1 (en) * 2013-04-25 2015-11-05 Tencent Technology (Shenzhen) Company Limited Method, device and storage medium for data processing
US20170199772A1 (en) * 2016-01-13 2017-07-13 International Business Machines Corporation Lockless multithreaded completion queue access
KR20180026596A (en) * 2016-09-02 2018-03-13 주식회사 포스코아이씨티 Distributed Parallel Processing System for Processing Data Of Continuous Process In Rea Time
CN108389121A (en) * 2018-02-07 2018-08-10 平安普惠企业管理有限公司 Loan data processing method, device, computer equipment and storage medium
CN108462715A (en) * 2018-04-24 2018-08-28 王颖 The On Network Information Filtering System of WM String matching parallel algorithms based on MPI

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030005025A1 (en) * 2001-06-27 2003-01-02 Shavit Nir N. Load-balancing queues employing LIFO/FIFO work stealing
US20120102003A1 (en) * 2010-10-20 2012-04-26 International Business Machines Corporation Parallel data redundancy removal
CA2784504A1 (en) * 2012-08-06 2014-02-06 Mohammed N. Faridy Systems and methods for identifying and relating asset-tied transactions
US20150319238A1 (en) * 2013-04-25 2015-11-05 Tencent Technology (Shenzhen) Company Limited Method, device and storage medium for data processing
US20170199772A1 (en) * 2016-01-13 2017-07-13 International Business Machines Corporation Lockless multithreaded completion queue access
KR20180026596A (en) * 2016-09-02 2018-03-13 주식회사 포스코아이씨티 Distributed Parallel Processing System for Processing Data Of Continuous Process In Rea Time
CN108389121A (en) * 2018-02-07 2018-08-10 平安普惠企业管理有限公司 Loan data processing method, device, computer equipment and storage medium
CN108462715A (en) * 2018-04-24 2018-08-28 王颖 The On Network Information Filtering System of WM String matching parallel algorithms based on MPI

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220164282A1 (en) * 2020-11-24 2022-05-26 International Business Machines Corporation Reducing load balancing work stealing
US11645200B2 (en) * 2020-11-24 2023-05-09 International Business Machines Corporation Reducing load balancing work stealing
CN113806384A (en) * 2021-08-19 2021-12-17 紫光云(南京)数字技术有限公司 Method for allocating incremental integer data based on redis

Also Published As

Publication number Publication date
CN111209311B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
US20150378721A1 (en) Methods for managing applications using semantic modeling and tagging and devices thereof
CN108846753B (en) Method and apparatus for processing data
CN110866709A (en) Order combination method and device
US20080091679A1 (en) Generic sequencing service for business integration
US20170207984A1 (en) Guaranteed response pattern
CN111127181A (en) Voucher bookkeeping method and device
CN115082247B (en) System production method, device, equipment, medium and product based on label library
CN111339743B (en) Account number generation method and device
CN111209311B (en) Method and device for processing data
CN112818026A (en) Data integration method and device
CN108764866B (en) Method and equipment for allocating resources and drawing resources
CN110866001A (en) Method and device for determining order to be processed
CN112433757A (en) Method and device for determining interface calling relationship
CN111723063A (en) Method and device for processing offline log data
CN107678856B (en) Method and device for processing incremental information in business entity
CN115456575A (en) Business trip reimbursement method, device, storage medium and service equipment
CN115442420A (en) Block chain cross-chain service management method and device
CN110309121B (en) Log processing method and device, computer readable medium and electronic equipment
CN110717826A (en) Asset filtering method and device
CN110266526A (en) A kind of loading method and equipment of device tree
CN116450622B (en) Method, apparatus, device and computer readable medium for data warehouse entry
CN113760925A (en) Data processing method and device
CN111460269B (en) Information pushing method and device
CN113283991A (en) Processing method and device for transaction data on block chain
CN118170677A (en) Data analysis method, device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Applicant after: Jingdong Digital Technology Holding Co.,Ltd.

Address before: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Applicant before: JINGDONG DIGITAL TECHNOLOGY HOLDINGS Co.,Ltd.

CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Applicant after: Jingdong Technology Holding Co.,Ltd.

Address before: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Applicant before: Jingdong Digital Technology Holding Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant