CN115731047B - Batch order processing method, equipment and medium


Info

Publication number
CN115731047B
CN115731047B (application CN202211525337.5A)
Authority
CN
China
Prior art keywords: sub, delegate, task, pipeline model, delegated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211525337.5A
Other languages
Chinese (zh)
Other versions
CN115731047A (en)
Inventor
何磊
陈国术
刘勇进
Current Assignee
Shenzhen Huarui Distributed Technology Co ltd
Original Assignee
Shenzhen Huarui Distributed Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Huarui Distributed Technology Co ltd
Priority to CN202211525337.5A
Publication of CN115731047A
Application granted
Publication of CN115731047B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of data processing and provides a batch order processing method, device, and medium. The method constructs a pipeline model set, automatically identifies and retrieves the corresponding pipeline model from the set using task identifiers, and configures an independent thread for each sub-order. By combining parallel threads with pipeline models, batches of delegated orders are processed rapidly, which improves the concurrency of order processing, the performance of processing batch orders, and system throughput.

Description

Batch order processing method, equipment and medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method, an apparatus, and a medium for processing a batch order.
Background
In the financial field, investors often require the trading counter to respond to and process orders quickly, so that orders are reported to the exchange with as little delay as possible. Investors may package tens or even hundreds of orders into a single batch (referred to as a batch of delegated orders) for delivery to the trading counter.
In the prior art, the trading counter processes one delegated order only after the previous one finishes. A batch of delegated orders is therefore equivalent to receiving a large number of orders at a single moment, which subjects the system to a heavy load; if the system cannot process all the orders in time, investors may miss investment opportunities.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a batch order processing method, device, and medium that solve the problem of low efficiency in processing batches of delegated orders.
A batch order processing method, the batch order processing method comprising:
acquiring a pre-configured pipeline model set; wherein each pipeline model in the set of pipeline models has a corresponding task identifier, the task identifiers corresponding to different task types;
triggering a splitting task when a plurality of delegated orders are received within a preset time length;
acquiring a first task identifier corresponding to the splitting task, and querying the pipeline model set using the first task identifier to obtain a first pipeline model;
splitting the plurality of delegated orders using the first pipeline model to obtain sub-delegates;
configuring independent threads for each sub-delegate, and carrying out parallel processing on each sub-delegate by utilizing the independent threads corresponding to each sub-delegate;
in the process of processing each sub-delegate, sequentially starting a construction task, a validity check task, a risk check task, and a message generation task;
acquiring a second task identifier corresponding to the construction task, and querying the pipeline model set using the second task identifier to obtain a second pipeline model;
acquiring a third task identifier corresponding to the validity check task, and querying the pipeline model set using the third task identifier to obtain a third pipeline model;
acquiring a fourth task identifier corresponding to the risk check task, and querying the pipeline model set using the fourth task identifier to obtain a fourth pipeline model;
acquiring a fifth task identifier corresponding to the message generation task, and querying the pipeline model set using the fifth task identifier to obtain a fifth pipeline model;
constructing the network packet of each sub-delegate into a message object using the second pipeline model;
performing a validity check on the message object of each sub-delegate using the third pipeline model;
performing risk verification on the message object of each sub-delegate using the fourth pipeline model;
and generating an order routing service (ORS) component message according to the message object of each sub-delegate using the fifth pipeline model, and sending the ORS component message to the ORS component.
According to a preferred embodiment of the invention, the method further comprises:
in the process of receiving the plurality of delegated orders, when the next delegated order has not been received within a configured duration after any delegated order is received, simulating a delegated order with a time slice message at preset time intervals;
and upon receipt of each simulated delegated order, reading configuration data from memory and incrementally storing the configuration data into the CPU cache.
According to a preferred embodiment of the invention, the method further comprises:
in the process of processing each sub-delegate, the second pipeline model, the third pipeline model, the fourth pipeline model and the fifth pipeline model corresponding to each sub-delegate are sequentially executed.
According to a preferred embodiment of the present invention, the constructing the network packet of each sub-delegate into a message object using the second pipeline model includes:
decoding the network packet of each sub-delegate using the second pipeline model to obtain the readable fields corresponding to each sub-delegate;
acquiring the data structure corresponding to the order routing service (ORS) component;
and constructing the readable fields corresponding to each sub-delegate into a corresponding message object according to the data structure.
According to a preferred embodiment of the present invention, the performing a validity check on the message object of each sub-delegate using the third pipeline model includes:
detecting whether garbled characters exist in each sub-delegate using the third pipeline model corresponding to each sub-delegate; and/or
detecting whether each sub-delegate contains redundant fields using the third pipeline model corresponding to each sub-delegate; and/or
detecting whether each sub-delegate contains empty fields using the third pipeline model corresponding to each sub-delegate.
According to a preferred embodiment of the present invention, the performing risk verification on the message object of each sub-delegate using the fourth pipeline model includes:
obtaining the service type corresponding to each sub-delegate using the fourth pipeline model corresponding to each sub-delegate;
selecting a target policy corresponding to each sub-delegate from pre-established risk verification policies according to the service type corresponding to each sub-delegate;
and performing risk verification on the message object of each sub-delegate according to the target policy corresponding to each sub-delegate.
According to a preferred embodiment of the present invention, the performing risk verification on the message object of each sub-delegate according to the target policy corresponding to each sub-delegate includes:
acquiring a configuration field of the message object of each sub-delegate according to the target policy;
acquiring the name of the investor corresponding to each sub-delegate;
performing a hash operation on the configuration field of the message object of each sub-delegate and the name of the investor corresponding to each sub-delegate to obtain a predicted investor identity mark for each sub-delegate;
acquiring the actual investor identity mark corresponding to each sub-delegate;
comparing the predicted investor identity mark with the actual investor identity mark for each sub-delegate to obtain a comparison result;
and performing risk verification on the message object of each sub-delegate according to the comparison result.
According to a preferred embodiment of the present invention, before the performing risk verification on the message object of each sub-delegate using the fourth pipeline model, the method further includes:
acquiring historical service data;
identifying each service type contained in the historical service data;
determining a risk verification method corresponding to each service type in the historical service data;
and establishing the risk verification strategy according to a risk verification method corresponding to each service type.
A batch order processing apparatus, the batch order processing apparatus comprising:
an acquisition unit for acquiring a pre-configured pipeline model set; wherein each pipeline model in the set of pipeline models has a corresponding task identifier, the task identifiers corresponding to different task types;
a triggering unit for triggering a splitting task when a plurality of delegated orders are received within a preset time length;
a query unit for acquiring a first task identifier corresponding to the splitting task, and querying the pipeline model set using the first task identifier to obtain a first pipeline model;
a splitting unit for splitting the plurality of delegated orders using the first pipeline model to obtain sub-delegates;
a processing unit for configuring an independent thread for each sub-delegate, and processing each sub-delegate in parallel using the independent thread corresponding to each sub-delegate;
a starting unit for sequentially starting a construction task, a validity check task, a risk check task, and a message generation task in the process of processing each sub-delegate;
the query unit being further configured to acquire a second task identifier corresponding to the construction task, and query the pipeline model set using the second task identifier to obtain a second pipeline model;
the query unit being further configured to acquire a third task identifier corresponding to the validity check task, and query the pipeline model set using the third task identifier to obtain a third pipeline model;
the query unit being further configured to acquire a fourth task identifier corresponding to the risk check task, and query the pipeline model set using the fourth task identifier to obtain a fourth pipeline model;
the query unit being further configured to acquire a fifth task identifier corresponding to the message generation task, and query the pipeline model set using the fifth task identifier to obtain a fifth pipeline model;
a construction unit for constructing the network packet of each sub-delegate into a message object using the second pipeline model;
a verification unit for performing a validity check on the message object of each sub-delegate using the third pipeline model;
the verification unit being further configured to perform risk verification on the message object of each sub-delegate using the fourth pipeline model;
and a generating unit for generating an order routing service (ORS) component message according to the message object of each sub-delegate using the fifth pipeline model, and sending the ORS component message to the ORS component.
A computer device, the computer device comprising:
a memory storing at least one instruction; and
a processor executing the instruction stored in the memory to implement the batch order processing method.
A computer-readable storage medium having at least one instruction stored therein, the instruction being executed by a processor in a computer device to implement the batch order processing method.
According to the technical scheme, the invention combines parallel threads with pipeline models to process batches of delegated orders rapidly, which improves the concurrency of order processing, the performance of processing batch orders, and the throughput of the system.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the batch order processing method of the present invention.
FIG. 2 is a functional block diagram of a preferred embodiment of a batch order processing apparatus of the present invention.
FIG. 3 is a schematic diagram of a computer device implementing a preferred embodiment of a batch order processing method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a flow chart of a preferred embodiment of the batch order processing method of the present invention. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs.
The batch order processing method is applied to one or more computer devices. A computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The computer device may be any electronic product that can interact with a user in a human-computer manner, such as a personal computer, tablet computer, smart phone, personal digital assistant (Personal Digital Assistant, PDA), game console, interactive internet protocol television (Internet Protocol Television, IPTV), smart wearable device, etc.
The computer device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group composed of a plurality of network servers, or a cloud computing platform composed of a large number of hosts or network servers.
The server may be an independent server, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The network in which the computer device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), and the like.
S10, acquiring a pre-configured pipeline model set; wherein each pipeline model in the set has a corresponding task identifier, the task identifiers corresponding to different task types.
In this embodiment, each task identifier uniquely identifies one pipeline model.
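As a minimal sketch of such a model set, a pipeline model can be represented as an ordered list of stage functions, and the set as a mapping keyed by task identifier. All names here (`PIPELINE_MODELS`, `lookup_model`, the identifier strings) are illustrative assumptions, not taken from the patent:

```python
from typing import Callable, Dict, List

# A "pipeline model" modeled as an ordered list of stage functions.
PipelineModel = List[Callable]

# Pre-configured model set: one model per task identifier (S10).
PIPELINE_MODELS: Dict[str, PipelineModel] = {
    "SPLIT":    [lambda batch: list(batch)],    # first pipeline model
    "BUILD":    [lambda pkt: {"fields": pkt}],  # second pipeline model
    "VALIDITY": [lambda msg: msg],              # third pipeline model
    "RISK":     [lambda msg: msg],              # fourth pipeline model
    "MESSAGE":  [lambda msg: ("ORS", msg)],     # fifth pipeline model
}

def lookup_model(task_id: str) -> PipelineModel:
    """Query the pre-configured model set with a task identifier (S12, S16-S19)."""
    return PIPELINE_MODELS[task_id]
```

Because each identifier maps to exactly one model, the query steps later in the method reduce to a single dictionary lookup.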
And S11, triggering a splitting task when a plurality of entrusting orders are received within a preset time.
In this embodiment, the preset duration may be configured in a user-defined manner, for example, 30 seconds.
For example, if 100 delegated orders are received within 30 seconds, this indicates that a batch of delegated orders has been received.
S12, acquiring a first task identifier corresponding to the splitting task, and querying the pipeline model set using the first task identifier to obtain a first pipeline model.
In this embodiment, the first pipeline model runs in its own independent thread.
S13, splitting the plurality of delegated orders using the first pipeline model to obtain the sub-delegates.
In this embodiment, the plurality of delegated orders is split by the first pipeline model into individual sub-delegates, so that each sub-delegate can be processed separately.
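The splitting step can be sketched as follows; the shape of a sub-delegate (a sequence number plus the original order) is an assumption for illustration:

```python
def split_batch(batch_order):
    """First pipeline model: split a batch of delegated orders into
    individual sub-delegates, each tagged with its position in the batch."""
    return [{"seq": i, "order": order} for i, order in enumerate(batch_order)]
```

Each resulting sub-delegate can then be handed to its own thread in S14.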
In this embodiment, the method further includes:
in the process of receiving the plurality of delegated orders, when the next delegated order has not been received within a configured duration after any delegated order is received, simulating a delegated order with a time slice message at preset time intervals;
upon receipt of each simulated delegated order, configuration data is read from memory and incrementally stored into the CPU (Central Processing Unit) cache.
The configuration duration may be configured in a user-defined manner, for example, 1 second.
The preset time interval may also be configured in a user-defined manner, for example, 10 ms.
The configuration data can be configured according to actual service requirements.
For example: the configuration data may include, but is not limited to: securities information, rights, shares, funds, rates, etc.
It can be appreciated that when a delegated order is received and the next one does not arrive within the configured duration, the system is not busy and batch orders are sparse. In that case the cached data of each pipeline model may be evicted from the CPU cache by other operating-system data or non-trading data, so that key data misses the cache when the next delegated order is processed; the data must then be fetched from memory again, which increases processing latency.
To address this and keep the system data hot, that is, to keep the key data for order processing resident in the CPU cache, this embodiment simulates delegated orders with time slice messages when batch orders are sparse (for example, one simulated order every 10 ms). Each time slice message triggers a read of the key data for order processing, which is incrementally loaded into the CPU cache, so that subsequent reads do not suffer cache misses that would delay order processing.
Incremental storage also avoids redundant data in the CPU cache and reduces invalid occupation of storage space.
When the system is busy, that is, when the next delegated order arrives within the configured duration after any order is received, the system is already continuously reading data, so there is no need to simulate orders with time slice messages.
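The warm-up mechanism can be sketched as below. The CPU cache is stood in for by a plain dictionary, and `warm_cache`, `heartbeat`, and `read_config` are assumed names; real cache residency cannot be controlled this directly from Python, so this only illustrates the incremental-load logic:

```python
import time

def warm_cache(cpu_cache: dict, read_config) -> None:
    """One simulated delegated order: read configuration data from memory
    and store only the increment, avoiding redundant cache entries."""
    for key, value in read_config().items():
        if key not in cpu_cache:   # incremental: skip keys already cached
            cpu_cache[key] = value

def heartbeat(cpu_cache: dict, read_config, ticks: int, interval: float = 0.01) -> None:
    """Fire a time slice message every `interval` seconds (e.g. 10 ms)."""
    for _ in range(ticks):
        warm_cache(cpu_cache, read_config)
        time.sleep(interval)
```

When orders arrive densely, the heartbeat is simply not started, matching the busy-system case above.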
S14, configuring independent threads for each sub-delegate, and processing each sub-delegate in parallel by utilizing the independent threads corresponding to each sub-delegate.
At present, when a batch of orders is delegated, the next order can only be processed after the previous one finishes; the whole process takes a long time, orders easily back up, and the system comes under pressure. Moreover, if the system cannot process all the delegated orders in time, investors will miss investment opportunities.
In this embodiment, by configuring an independent thread for each sub-delegate, the sub-delegates can be processed in parallel, avoiding the time wasted by processing them one at a time.
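A thread-per-sub-delegate scheme as in S14 can be sketched with a standard thread pool; `process_sub_delegate` is a hypothetical stand-in for the build/check/risk/message stages that follow:

```python
from concurrent.futures import ThreadPoolExecutor

def process_sub_delegate(sub):
    """Placeholder for the per-sub-delegate work (S15-S23)."""
    return ("done", sub)

def process_batch(sub_delegates):
    """One worker thread per sub-delegate; results keep input order."""
    with ThreadPoolExecutor(max_workers=len(sub_delegates)) as pool:
        return list(pool.map(process_sub_delegate, sub_delegates))
```

`pool.map` preserves input order, so downstream reporting can still match sub-delegates back to their positions in the original batch.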
S15, in the process of processing each sub-delegate, a construction task, a validity check task, a risk check task, and a message generation task are started in sequence.
S16, acquiring a second task identifier corresponding to the construction task, and querying the pipeline model set using the second task identifier to obtain a second pipeline model.
S17, acquiring a third task identifier corresponding to the validity check task, and querying the pipeline model set using the third task identifier to obtain a third pipeline model.
S18, acquiring a fourth task identifier corresponding to the risk check task, and querying the pipeline model set using the fourth task identifier to obtain a fourth pipeline model.
S19, acquiring a fifth task identifier corresponding to the message generation task, and querying the pipeline model set using the fifth task identifier to obtain a fifth pipeline model.
S20, constructing the network packet of each sub-delegate as a message object by using the second pipeline model.
In this embodiment, the constructing the network packet of each sub-delegate into a message object using the second pipeline model includes:
decoding the network packet of each sub-delegate using the second pipeline model to obtain the readable fields corresponding to each sub-delegate;
acquiring the data structure corresponding to the order routing service (Order Routing Service, ORS) component;
and constructing the readable fields corresponding to each sub-delegate into a corresponding message object according to the data structure.
It will be appreciated that each sub-delegate initially arrives in a binary format, so each sub-delegate is decoded into readable fields that the system can recognize.
Further, the readable fields corresponding to each sub-delegate are assigned to the data structure of the ORS component, yielding the message object corresponding to each sub-delegate.
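The decode-then-fill step can be sketched as follows. The patent does not give the wire format, so the layout here (an 8-byte symbol, a 32-bit quantity, a 32-bit price) and the `MessageObject` structure are assumptions standing in for the ORS-side data structure:

```python
import struct
from dataclasses import dataclass

@dataclass
class MessageObject:
    """Hypothetical stand-in for the ORS component's data structure."""
    symbol: str
    quantity: int
    price: int

def build_message_object(packet: bytes) -> MessageObject:
    """Second pipeline model: decode the binary network packet into
    readable fields, then assign them into the ORS data structure."""
    symbol, qty, price = struct.unpack("<8sII", packet)
    return MessageObject(symbol.rstrip(b"\0").decode("ascii"), qty, price)
```

The same two-phase shape (decode, then assign into the downstream structure) applies whatever the real packet layout is.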
S21, performing a validity check on the message object of each sub-delegate using the third pipeline model.
In this embodiment, the performing a validity check on the message object of each sub-delegate using the third pipeline model includes:
detecting whether garbled characters exist in each sub-delegate using the third pipeline model corresponding to each sub-delegate; and/or
detecting whether each sub-delegate contains redundant fields using the third pipeline model corresponding to each sub-delegate; and/or
detecting whether each sub-delegate contains empty fields using the third pipeline model corresponding to each sub-delegate.
Through this embodiment, the validity of the field values of each sub-delegate can be checked by the pipeline, avoiding field errors.
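The three checks can be sketched over a field dictionary; the error-list return shape and the use of `isprintable` as a garbled-character test are illustrative assumptions:

```python
def has_garbled(value: str) -> bool:
    """Assumed garbled-character test: any non-printable character fails."""
    return not value.isprintable()

def validity_check(fields: dict, allowed: set) -> list:
    """Third pipeline model sketch: flag redundant fields, empty fields,
    and garbled characters. An empty return list means the check passed."""
    errors = []
    for name, value in fields.items():
        if name not in allowed:
            errors.append(f"redundant field: {name}")
        elif value in ("", None):
            errors.append(f"empty field: {name}")
        elif isinstance(value, str) and has_garbled(value):
            errors.append(f"garbled field: {name}")
    return errors
```

A sub-delegate with a non-empty error list would be rejected before the risk check rather than passed downstream.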
S22, performing risk verification on the message object of each sub-delegate using the fourth pipeline model.
In this embodiment, before the risk verification is performed on the message object of each sub-delegate using the fourth pipeline model, the method further includes:
acquiring historical service data;
identifying each service type contained in the historical service data;
determining a risk verification method corresponding to each service type in the historical service data;
and establishing the risk verification strategy according to a risk verification method corresponding to each service type.
The historical service data may include each service type and a corresponding risk verification method.
According to this embodiment, risk verification policies are established in advance for different service types, so that verification can be performed directly with the established policies, which improves verification efficiency.
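The policy-building step can be sketched as a registry keyed by service type; treating the historical data as (service type, verification method) pairs is an assumption made for illustration:

```python
def build_policy(history):
    """Build the risk verification policy from historical service data,
    modeled here as (service_type, check_method) pairs; the first method
    seen for a service type wins."""
    policy = {}
    for service_type, check_method in history:
        policy.setdefault(service_type, check_method)
    return policy

def select_target_policy(policy, service_type):
    """Pick the target policy for a sub-delegate by its service type."""
    return policy[service_type]
```

At verification time the fourth pipeline model only needs the sub-delegate's service type to retrieve the right method.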
In this embodiment, the performing risk verification on the message object of each sub-delegate using the fourth pipeline model includes:
obtaining the service type corresponding to each sub-delegate using the fourth pipeline model corresponding to each sub-delegate;
selecting a target policy corresponding to each sub-delegate from the pre-established risk verification policies according to the service type corresponding to each sub-delegate;
and performing risk verification on the message object of each sub-delegate according to the target policy corresponding to each sub-delegate.
For example: the risk check may include, but is not limited to: investors authority verification, verification of coupons, etc.
According to this embodiment, the pipeline model automatically performs risk verification for each sub-delegate, further improving the processing efficiency of delegated orders.
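The hash comparison described in the claims (hashing the configuration field together with the investor name and comparing against the stored identity mark) can be sketched as below; the choice of SHA-256 and the simple string concatenation are assumptions, since the patent does not specify the hash function:

```python
import hashlib

def predict_identity(config_field: str, investor_name: str) -> str:
    """Hash the message object's configuration field with the investor
    name to obtain the predicted investor identity mark."""
    return hashlib.sha256((config_field + investor_name).encode()).hexdigest()

def risk_verify(config_field: str, investor_name: str, actual_mark: str) -> bool:
    """Risk check passes only when the predicted mark equals the actual mark."""
    return predict_identity(config_field, investor_name) == actual_mark
```

A mismatch indicates the order's fields and the investor on record do not agree, so the sub-delegate fails risk verification.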
S23, generating an order routing service (ORS) component message according to the message object of each sub-delegate using the fifth pipeline model, and sending the ORS component message to the ORS component.
After the ORS component message is sent to the ORS component, the declaration to the exchange can be completed.
In this embodiment, the method further includes:
in the process of processing each sub-delegate, the second pipeline model, the third pipeline model, the fourth pipeline model and the fifth pipeline model corresponding to each sub-delegate are sequentially executed.
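The strict ordering above (build, then validity, then risk, then message generation) can be sketched as a sequential chain run inside each sub-delegate's thread; the stage names and the trace return value are illustrative:

```python
def run_pipeline(sub_delegate, stages):
    """Run the second to fifth pipeline models strictly in sequence for
    one sub-delegate. `stages` is a list of (name, stage_fn) pairs; the
    returned trace records the execution order."""
    value, trace = sub_delegate, []
    for name, stage in stages:
        value = stage(value)
        trace.append(name)
    return value, trace
```

Parallelism thus exists only across sub-delegates; within one sub-delegate, each model sees the previous model's output.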
In the above embodiment, each sub-delegate is processed in parallel on its own independent thread, improving the overall processing efficiency of the batch order; meanwhile, within the processing of each sub-delegate, pipeline models each handle one part of the work, improving overall system performance and throughput.
According to the technical scheme, the invention combines parallel threads with pipeline models to process batches of delegated orders rapidly, which improves the concurrency of order processing, the performance of processing batch orders, and the throughput of the system.
FIG. 2 is a functional block diagram of a preferred embodiment of a batch order processing apparatus according to the present invention. The batch order processing device 11 comprises an acquisition unit 110, a triggering unit 111, a query unit 112, a splitting unit 113, a processing unit 114, a starting unit 115, a construction unit 116, a verification unit 117 and a generation unit 118. The module/unit referred to in the present invention refers to a series of computer program segments, which are stored in a memory, capable of being executed by a processor and of performing a fixed function. In the present embodiment, the functions of the respective modules/units will be described in detail in the following embodiments.
The acquiring unit 110 is configured to acquire a pre-configured pipeline model set; wherein each pipeline model in the set of pipeline models has a corresponding task identification, the task identification corresponding to a different task type;
the triggering unit 111 is configured to trigger a splitting task when multiple delegated orders are received within a preset duration;
the query unit 112 is configured to obtain a first task identifier corresponding to the split task, and query the pipeline model set by using the first task identifier to obtain a first pipeline model;
the splitting unit 113 is configured to split the multiple delegated orders by using the first pipeline model to obtain each sub-delegate;
the processing unit 114 is configured to configure an independent thread for each sub-delegate, and perform parallel processing on each sub-delegate by using the independent thread corresponding to each sub-delegate;
the starting unit 115 is configured to sequentially start a construction task, a validity check task, a risk check task, and a message generation task in a process of processing each sub-delegate;
the query unit 112 is further configured to obtain a second task identifier corresponding to the construction task, and query the pipeline model set by using the second task identifier to obtain a second pipeline model;
The query unit 112 is further configured to obtain a third task identifier corresponding to the validity check task, and query the pipeline model set by using the third task identifier to obtain a third pipeline model;
the query unit 112 is further configured to obtain a fourth task identifier corresponding to the risk verification task, and query the pipeline model set by using the fourth task identifier to obtain a fourth pipeline model;
the query unit 112 is further configured to obtain a fifth task identifier corresponding to the message generating task, and query the pipeline model set by using the fifth task identifier to obtain a fifth pipeline model;
the constructing unit 116 is configured to construct the network packet of each sub-delegate into a message object by using the second pipeline model;
the verification unit 117 is configured to perform validity verification on the message object of each sub-delegate by using the third pipeline model;
the verification unit 117 is further configured to perform risk verification on the message object of each sub-delegate by using the fourth pipeline model;
the generating unit 118 is configured to generate an order-reporting service component message from the message object of each sub-delegate by using the fifth pipeline model, and to send the message to the system order-reporting service component.
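To make the task-identifier lookup performed by the query unit 112 concrete, a minimal sketch follows; the registry contents and identifier strings are illustrative assumptions, not values from the patent:

```python
# Hypothetical registry: each pipeline model is keyed by its task identifier,
# and each identifier corresponds to a different task type.
PIPELINE_MODELS = {
    "SPLIT": "first pipeline model",
    "CONSTRUCT": "second pipeline model",
    "VALIDITY": "third pipeline model",
    "RISK": "fourth pipeline model",
    "MESSAGE": "fifth pipeline model",
}

def query_pipeline_model(task_id):
    """Look up the pipeline model for a task identifier, as unit 112 does."""
    model = PIPELINE_MODELS.get(task_id)
    if model is None:
        raise KeyError(f"no pipeline model registered for task {task_id!r}")
    return model
```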
According to the above technical scheme, batch delegated orders can be processed rapidly by combining parallel threads with pipeline models, which improves the concurrency of delegated-order processing and thereby the batch-processing performance and the throughput of the system.
FIG. 3 is a schematic diagram of a computer device for implementing a batch order processing method according to a preferred embodiment of the present invention.
The computer device 1 may comprise a memory 12, a processor 13 and a bus, and may further comprise a computer program, such as a batch order processing program, stored in the memory 12 and executable on the processor 13.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the computer device 1 and does not limit it: the computer device 1 may adopt a bus or star topology, may comprise more or fewer hardware or software components than illustrated, or may arrange the components differently; for example, the computer device 1 may further comprise input-output devices, network access devices, and the like.
It should be noted that the computer device 1 is only an example; other electronic products, whether existing now or developed hereafter, that are adaptable to the present invention are likewise included within its scope of protection by reference.
The memory 12 includes at least one type of readable storage medium, including flash memory, a removable hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 12 may in some embodiments be an internal storage unit of the computer device 1, such as a hard disk of the computer device 1. The memory 12 may in other embodiments be an external storage device of the computer device 1, such as a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the computer device 1. Further, the memory 12 may include both an internal storage unit and an external storage device of the computer device 1. The memory 12 may be used not only for storing application software installed on the computer device 1 and various types of data, such as the code of a batch order processing program, but also for temporarily storing data that has been or is to be output.
The processor 13 may be comprised of integrated circuits in some embodiments, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, a combination of various control chips, and the like. The processor 13 is a Control Unit (Control Unit) of the computer device 1, connects the respective components of the entire computer device 1 using various interfaces and lines, executes programs or modules stored in the memory 12 (for example, executes a batch order processing program or the like), and invokes data stored in the memory 12 to perform various functions of the computer device 1 and process data.
The processor 13 executes the operating system of the computer device 1 and various types of applications installed. The processor 13 executes the application program to implement the steps of the various batch order processing method embodiments described above, such as the steps shown in fig. 1.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 13 to complete the present invention. The one or more modules/units may be a series of computer readable instruction segments capable of performing the specified functions, which instruction segments describe the execution of the computer program in the computer device 1. For example, the computer program may be divided into an acquisition unit 110, a trigger unit 111, a query unit 112, a splitting unit 113, a processing unit 114, a starting unit 115, a construction unit 116, a verification unit 117, a generation unit 118.
The integrated units implemented in the form of software functional modules described above may be stored in a computer readable storage medium. The software functional modules are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform portions of the batch order processing method described in the various embodiments of the invention.
The modules/units integrated in the computer device 1 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on this understanding, the present invention may also be implemented by a computer program for instructing a relevant hardware device to implement all or part of the procedures of the above-mentioned embodiment method, where the computer program may be stored in a computer readable storage medium and the computer program may be executed by a processor to implement the steps of each of the above-mentioned method embodiments.
Wherein the computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer readable medium may include any entity or device capable of carrying the computer program code: a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory, and the like.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic means, each data block containing a batch of network transaction information used to verify the validity of that information (anti-counterfeiting) and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
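The cryptographic linking of data blocks described above can be sketched as follows; the block layout and the choice of SHA-256 are illustrative assumptions, not details from this patent:

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Create a data block whose hash covers both its transactions and the
    previous block's hash, cryptographically linking the chain."""
    body = {"transactions": transactions, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_link(block, prev_block):
    # A block is valid only if it references the previous block's actual hash.
    return block["prev"] == prev_block["hash"]
```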
The bus may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one straight line is shown in fig. 3, but this does not mean that there is only one bus or only one type of bus. The bus is arranged to enable connection and communication between the memory 12 and the at least one processor 13, among other components.
Although not shown, the computer device 1 may further comprise a power source (such as a battery) for powering the various components. Preferably, the power source is logically connected to the at least one processor 13 via a power management means, so that charge management, discharge management, and power-consumption management are achieved through it. The power supply may also include one or more of a direct-current or alternating-current supply, a recharging device, a power-failure detection circuit, a power converter or inverter, a power status indicator, and the like. The computer device 1 may further include various sensors, a Bluetooth module, a Wi-Fi module, etc., which are not described in detail herein.
Further, the computer device 1 may also comprise a network interface, optionally comprising a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the computer device 1 and other computer devices.
The computer device 1 may optionally further comprise a user interface, which may include a display and an input unit such as a keyboard (Keyboard), and may be a standard wired interface or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display may also be referred to as a display screen or display unit, and serves to display information processed in the computer device 1 and a visual user interface.
It should be understood that the described embodiments are for illustrative purposes only and that the scope of the patent application is not limited to this configuration.
Fig. 3 shows only a computer device 1 with components 12-13; those skilled in the art will understand that the structure shown in fig. 3 does not limit the computer device 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
In connection with fig. 1, the memory 12 in the computer device 1 stores a plurality of instructions to implement a batch order processing method, the processor 13 being executable to implement:
acquiring a pre-configured pipeline model set; wherein each pipeline model in the set of pipeline models has a corresponding task identifier, and each task identifier corresponds to a different task type;
triggering a splitting task when a plurality of delegated orders are received within a preset time length;
acquiring a first task identifier corresponding to the split task, and inquiring in the assembly line model set by utilizing the first task identifier to obtain a first assembly line model;
splitting the plurality of consignment orders by using the first pipeline model to obtain each sub-consignment;
configuring independent threads for each sub-delegate, and carrying out parallel processing on each sub-delegate by utilizing the independent threads corresponding to each sub-delegate;
in the process of processing each sub-commission, sequentially starting a construction task, a validity check task, a risk check task and a message generation task;
acquiring a second task identifier corresponding to the construction task, and inquiring in the assembly line model set by utilizing the second task identifier to obtain a second assembly line model;
Acquiring a third task identifier corresponding to the validity checking task, and inquiring in the assembly line model set by utilizing the third task identifier to obtain a third assembly line model;
acquiring a fourth task identifier corresponding to the risk verification task, and inquiring in the assembly line model set by utilizing the fourth task identifier to obtain a fourth assembly line model;
acquiring a fifth task identifier corresponding to the message generating task, and inquiring in the assembly line model set by utilizing the fifth task identifier to obtain a fifth assembly line model;
constructing the network packet of each sub-delegate into a message object by utilizing the second pipeline model;
performing validity verification on the message object of each sub-delegate by utilizing the third pipeline model;
performing risk verification on the message object of each sub-delegate by utilizing the fourth pipeline model;
and generating an order-reporting service component message from the message object of each sub-delegate by utilizing the fifth pipeline model, and sending the message to the system order-reporting service component.
Specifically, the specific implementation method of the above instructions by the processor 13 may refer to the description of the relevant steps in the corresponding embodiment of fig. 1, which is not repeated herein.
The data in this case were obtained legally.
In the several embodiments provided in the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The invention is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. The units or means stated in the invention may also be implemented by one unit or means, either by software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (10)

1. A batch order processing method, the batch order processing method comprising:
acquiring a pre-configured pipeline model set; wherein each pipeline model in the set of pipeline models has a corresponding task identifier, and each task identifier corresponds to a different task type;
triggering a splitting task when a plurality of delegated orders are received within a preset time length;
acquiring a first task identifier corresponding to the split task, and inquiring in the assembly line model set by utilizing the first task identifier to obtain a first assembly line model;
Splitting the plurality of consignment orders by using the first pipeline model to obtain each sub-consignment;
configuring independent threads for each sub-delegate, and carrying out parallel processing on each sub-delegate by utilizing the independent threads corresponding to each sub-delegate;
in the process of processing each sub-commission, sequentially starting a construction task, a validity check task, a risk check task and a message generation task;
acquiring a second task identifier corresponding to the construction task, and inquiring in the assembly line model set by utilizing the second task identifier to obtain a second assembly line model;
acquiring a third task identifier corresponding to the validity checking task, and inquiring in the assembly line model set by utilizing the third task identifier to obtain a third assembly line model;
acquiring a fourth task identifier corresponding to the risk verification task, and inquiring in the assembly line model set by utilizing the fourth task identifier to obtain a fourth assembly line model;
acquiring a fifth task identifier corresponding to the message generating task, and inquiring in the assembly line model set by utilizing the fifth task identifier to obtain a fifth assembly line model;
constructing the network packet of each sub-delegate into a message object by utilizing the second pipeline model;
performing validity verification on the message object of each sub-delegate by utilizing the third pipeline model;
performing risk verification on the message object of each sub-delegate by utilizing the fourth pipeline model;
and generating an order-reporting service component message from the message object of each sub-delegate by utilizing the fifth pipeline model, and sending the message to the system order-reporting service component.
2. The batch order processing method of claim 1 wherein the method further comprises:
in the process of receiving the plurality of delegated orders, when, after any delegated order is received, no next delegated order is received for a period greater than or equal to a configured duration, simulating a delegation with a time-slice message at preset time intervals;
and when each simulated request is received, reading configuration data from the memory and incrementally storing the configuration data into the CPU cache.
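Purely as an illustrative sketch of claim 2's keep-warm rule (the timing variables and how they combine are assumptions; the patent only states the idle threshold and the preset interval), the decision of when to emit a simulated time-slice delegation might be expressed as:

```python
def should_simulate(now, last_order_time, configured_idle, last_sim_time, sim_interval):
    """Sketch of the keep-warm rule: once no new delegated order has arrived
    for at least the configured idle duration, emit a simulated time-slice
    delegation every `sim_interval` seconds to keep configuration data
    warm in the CPU cache."""
    idle = now - last_order_time
    if idle < configured_idle:
        # Real orders are still arriving; no simulation needed.
        return False
    # Idle long enough: simulate, but only once per preset interval.
    return (now - last_sim_time) >= sim_interval
```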
3. The batch order processing method of claim 1 wherein the method further comprises:
in the process of processing each sub-delegate, the second pipeline model, the third pipeline model, the fourth pipeline model and the fifth pipeline model corresponding to each sub-delegate are sequentially executed.
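Claim 3's sequential execution of the four pipeline models per sub-delegate can be sketched as a simple stage chain; the convention that each stage's output feeds the next is an assumption, not stated in the patent:

```python
def run_stages(sub_delegate, stages):
    """Execute the construction, validity, risk, and message stages in order
    for one sub-delegate, chaining each stage's output into the next."""
    result = sub_delegate
    for stage in stages:
        result = stage(result)
    return result
```

Each sub-delegate's independent thread would call `run_stages` with that sub-delegate's second through fifth pipeline models.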
4. The batch order processing method of claim 1, wherein constructing the network packet of each sub-delegate into a message object using the second pipeline model comprises:
decoding the network packet of each sub-delegate by using the second pipeline model to obtain readable fields corresponding to each sub-delegate;
acquiring a data structure corresponding to the system order-reporting service component;
and constructing the readable fields of each sub-delegate into a corresponding message object according to the data structure.
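A minimal sketch of claim 4's construction step follows; the JSON wire format and field names are assumptions (the patent does not specify the codec or the reporting component's data structure):

```python
import json

def decode_packet(raw: bytes) -> dict:
    # Assumed wire format: UTF-8 JSON. The real system's decoder is unspecified.
    return json.loads(raw.decode("utf-8"))

def build_message_object(readable_fields: dict, structure: list) -> dict:
    # Arrange the decoded readable fields according to the reporting
    # component's data structure (here, an ordered list of field names).
    return {name: readable_fields.get(name) for name in structure}
```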
5. The batch order processing method of claim 1, wherein performing validity verification on the message object of each sub-delegate using the third pipeline model comprises:
detecting whether garbled characters exist in the message object of each sub-delegate by using the third pipeline model corresponding to each sub-delegate; and/or
detecting whether the message object of each sub-delegate contains redundant fields by using the third pipeline model corresponding to each sub-delegate; and/or
detecting whether the message object of each sub-delegate contains empty fields by using the third pipeline model corresponding to each sub-delegate.
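As an illustrative sketch of claim 5's three checks (the garbled-character heuristic, the allowed-field set, and the definition of "empty" are all assumptions), validity verification might look like:

```python
def has_garbled_chars(msg: dict) -> bool:
    # U+FFFD is the Unicode replacement character produced by bad decoding.
    return any("\ufffd" in str(v) for v in msg.values())

def redundant_fields(msg: dict, allowed: set) -> set:
    # Fields present in the message but not expected by the schema.
    return set(msg) - allowed

def empty_fields(msg: dict) -> set:
    return {k for k, v in msg.items() if v in (None, "")}

def is_valid(msg: dict, allowed: set) -> bool:
    return not (has_garbled_chars(msg)
                or redundant_fields(msg, allowed)
                or empty_fields(msg))
```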
6. The batch order processing method of claim 1, wherein performing risk verification on the message object of each sub-delegate using the fourth pipeline model comprises:
obtaining a service type corresponding to each sub-delegate by using the fourth pipeline model corresponding to each sub-delegate;
selecting a target policy corresponding to each sub-delegate from pre-established risk verification policies according to the service type corresponding to each sub-delegate;
and performing risk verification on the message object of each sub-delegate according to the target policy corresponding to each sub-delegate.
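The service-type-to-policy dispatch of claim 6 can be sketched as below; the policy table, service types, and check rules are invented for illustration only:

```python
# Hypothetical pre-established risk verification policies, keyed by service type.
RISK_POLICIES = {
    "buy": lambda msg: msg.get("qty", 0) > 0,
    "sell": lambda msg: msg.get("qty", 0) > 0
                        and msg.get("position", 0) >= msg.get("qty", 0),
}

def risk_check(sub_delegate: dict) -> bool:
    # Select the target policy for this sub-delegate's service type,
    # then apply it to the sub-delegate's message object.
    policy = RISK_POLICIES[sub_delegate["service_type"]]
    return policy(sub_delegate["message"])
```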
7. The batch order processing method of claim 6, wherein performing risk verification on the message object of each sub-delegate according to the target policy corresponding to each sub-delegate comprises:
acquiring configuration fields of the message object of each sub-delegate according to the target policy;
acquiring the name of the investor corresponding to each sub-delegate;
performing a hash operation on the configuration fields of the message object of each sub-delegate and the name of the corresponding investor to obtain a predicted investor identity for each sub-delegate;
acquiring the actual investor identity corresponding to each sub-delegate;
comparing the predicted investor identity of each sub-delegate with the actual investor identity of each sub-delegate to obtain a comparison result;
and performing risk verification on the message object of each sub-delegate according to the comparison result.
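Claim 7's hash-based identity comparison can be sketched as follows; the SHA-256 choice and the sorted key=value joining rule are assumptions, since the patent does not specify how the fields and name are combined:

```python
import hashlib

def predict_investor_id(config_fields: dict, investor_name: str) -> str:
    """Hash the configuration fields together with the investor name to
    obtain the predicted investor identity (combination rule assumed)."""
    payload = "|".join(f"{k}={config_fields[k]}" for k in sorted(config_fields))
    return hashlib.sha256(f"{payload}|{investor_name}".encode()).hexdigest()

def passes_identity_check(config_fields: dict, investor_name: str, actual_id: str) -> bool:
    # The risk check passes only when prediction and actual identity agree.
    return predict_investor_id(config_fields, investor_name) == actual_id
```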
8. The batch order processing method of claim 1, wherein prior to performing risk verification on the message object of each sub-delegate using the fourth pipeline model, the method further comprises:
acquiring historical service data;
identifying each service type contained in the historical service data;
determining a risk verification method corresponding to each service type in the historical service data;
and establishing the risk verification strategy according to a risk verification method corresponding to each service type.
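Claim 8's construction of the risk verification policy from historical data can be sketched as below; the record layout and the per-type method mapping are illustrative assumptions:

```python
from collections import Counter

def build_risk_policy(historical_records, method_for_type):
    """Identify each service type present in the historical service data and
    map it to its corresponding risk verification method."""
    types_seen = Counter(r["service_type"] for r in historical_records)
    # The resulting policy associates every observed service type with a method.
    return {t: method_for_type(t) for t in types_seen}
```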
9. A computer device, the computer device comprising:
a memory storing at least one instruction; and
A processor executing instructions stored in the memory to implement a batch order processing method as claimed in any one of claims 1 to 8.
10. A computer-readable storage medium, characterized by: the computer readable storage medium having stored therein at least one instruction for execution by a processor in a computer device to implement the batch order processing method of any of claims 1 to 8.
CN202211525337.5A 2022-11-30 2022-11-30 Batch order processing method, equipment and medium Active CN115731047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211525337.5A CN115731047B (en) 2022-11-30 2022-11-30 Batch order processing method, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211525337.5A CN115731047B (en) 2022-11-30 2022-11-30 Batch order processing method, equipment and medium

Publications (2)

Publication Number Publication Date
CN115731047A CN115731047A (en) 2023-03-03
CN115731047B true CN115731047B (en) 2023-05-02

Family

ID=85299607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211525337.5A Active CN115731047B (en) 2022-11-30 2022-11-30 Batch order processing method, equipment and medium

Country Status (1)

Country Link
CN (1) CN115731047B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007127336A2 (en) * 2006-04-28 2007-11-08 Towsend Analytics, Ltd. Order management for electronic securities trading
US7657537B1 (en) * 2005-04-29 2010-02-02 Netapp, Inc. System and method for specifying batch execution ordering of requests in a storage system cluster
CN110443695A (en) * 2019-07-31 2019-11-12 中国工商银行股份有限公司 Data processing method and its device, electronic equipment and medium
CN113095935A (en) * 2021-03-16 2021-07-09 深圳华锐金融技术股份有限公司 Transaction order processing method and device, computer equipment and storage medium
CN113485812A (en) * 2021-07-23 2021-10-08 重庆富民银行股份有限公司 Partition parallel processing method and system based on large data volume task
CN114237852A (en) * 2021-12-20 2022-03-25 中国平安财产保险股份有限公司 Task scheduling method, device, server and storage medium
CN115147049A (en) * 2022-07-15 2022-10-04 远光软件股份有限公司 Order batch loading method and device, storage medium and computer equipment


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Analytical models for supplier selection and order quantity allocation; Abraham Mendoza et al.; Applied Mathematical Modelling; pp. 3826-3835 *
Task scheduling strategies in intelligent warehousing systems; Pei Wuchao; China Masters' Theses Full-text Database (Information Science and Technology); pp. I140-204 *
Optimized design and implementation of a massive-data parallel processing architecture for a financial system; Wang Gefang; China Masters' Theses Full-text Database (Information Science and Technology); pp. I138-401 *

Also Published As

Publication number Publication date
CN115731047A (en) 2023-03-03

Similar Documents

Publication Publication Date Title
CN115936886B (en) Failure detection method, device, equipment and medium for heterogeneous securities trading system
CN115731047B (en) Batch order processing method, equipment and medium
CN114816371B (en) Message processing method, device, equipment and medium
CN115345746B (en) Security transaction method, device, equipment and medium
CN116823437A (en) Access method, device, equipment and medium based on configured wind control strategy
CN114185502B (en) Log printing method, device, equipment and medium based on production line environment
CN113923218B (en) Distributed deployment method, device, equipment and medium for coding and decoding plug-in
CN116414699B (en) Operation and maintenance testing method, device, equipment and medium
CN116306591B (en) Flow form generation method, device, equipment and medium
CN116225789B (en) Transaction system backup capability detection method, device, equipment and medium
CN116843454B (en) Channel information management method, device, equipment and medium
CN115964307B (en) Automatic test method, device, equipment and medium for transaction data
CN116662208B (en) Transaction testing method, device and medium based on distributed baffle
CN116701233B (en) Transaction system testing method, equipment and medium based on high concurrency report simulation
CN116483747B (en) Quotation snapshot issuing method, device, equipment and medium
CN116225971B (en) Transaction interface compatibility detection method, device, equipment and medium
CN115934576B (en) Test case generation method, device, equipment and medium in transaction scene
CN116414366B (en) Middleware interface generation method, device, equipment and medium
CN118037453A (en) Order processing method, device, equipment and medium of transaction system
CN116361753B (en) Authority authentication method, device, equipment and medium
CN115065642B (en) Code table request method, device, equipment and medium under bandwidth limitation
CN117952076A (en) Recording material generation method, device, equipment and medium for personnel inquiry process
CN118014696A (en) Transaction order preheating method, device, equipment and medium
CN116843454A (en) Channel information management method, device, equipment and medium
CN115396376A (en) Load balancing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant