CN116777630A - Transaction splitting method and system based on federal learning - Google Patents



Publication number
CN116777630A
CN116777630A
Authority
CN
China
Prior art keywords
model
trade
order
transaction
local model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311032958.4A
Other languages
Chinese (zh)
Inventor
张浩海
谌明
马天翼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Xingrui Network Information Technology Co ltd
Original Assignee
Hangzhou Xingrui Network Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Xingrui Network Information Technology Co ltd filed Critical Hangzhou Xingrui Network Information Technology Co ltd
Priority to CN202311032958.4A priority Critical patent/CN116777630A/en
Publication of CN116777630A publication Critical patent/CN116777630A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04 Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Technology Law (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Development Economics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

Embodiments of the present specification provide a federated learning-based transaction splitting method and system. The method includes acquiring first trade information of an order to be traded for a target product. The method further includes determining a trading strategy for the order to be traded by inputting the first trade information into an order splitting model. The trading strategy includes the number of sub-trade orders into which the order to be traded is split, and second trade information for each sub-trade order. The order splitting model is generated based on a federated learning algorithm and can comprehensively consider market information and trading characteristics of different products, so the model has greater universality and accuracy, which in turn improves the efficiency and accuracy of transaction splitting.

Description

Transaction splitting method and system based on federated learning
Technical Field
The present description relates to the field of intelligent trading, and more particularly to federated learning-based transaction splitting methods and systems.
Background
With the continuous opening and development of financial markets, users' demand for intelligent securities trading keeps growing. For stock exchange orders with large trade volumes, it is often necessary to split the order (i.e., order splitting) so that it can be completed through multiple trades at intervals, thereby reducing the impact on the market and/or saving trading costs. Existing order splitting approaches, such as Volume Weighted Average Price (VWAP) and Time Weighted Average Price (TWAP) policies, generally have no bargaining capability and cannot reduce trading costs while splitting orders. Meanwhile, trading models based on artificial intelligence algorithms are usually tailored to a single stock, lack versatility, and cannot be applied to complex financial markets.
Therefore, there is a need for a federated learning-based trade splitting method, apparatus, system, and medium that can be applied to a variety of products (e.g., financial products such as stocks and securities) to split trade orders and adjust trade volumes more efficiently and accurately, so as to better cope with market quotations.
Disclosure of Invention
A first aspect of embodiments of the present description provides a federated learning-based transaction splitting method. The method includes obtaining first trade information of an order to be traded for a target product. The method further includes determining a trading strategy for the order to be traded by inputting the first trade information into an order splitting model. The trading strategy includes the number of sub-trade orders into which the order to be traded is split, and second trade information for each of the sub-trade orders. The order splitting model is a shared model generated based on a federated learning algorithm, or a local model corresponding to the target product generated based on the federated learning algorithm; the shared model is generated based on the local model corresponding to the target product and local models corresponding to one or more reference products.
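The inference step described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the field names (`trade_type`, `volume`, `price`), the return structure, and the equal-slice stub model are assumptions introduced for the example.

```python
# Hypothetical sketch: feed the first trade information of an order into an
# order splitting model and receive a trading strategy (number of sub-orders
# plus second trade information per sub-order). Field names are illustrative.

def split_order(first_trade_info, order_splitting_model):
    """Return a trading strategy: sub-order count plus per-sub-order info."""
    sub_orders = order_splitting_model(first_trade_info)
    return {"num_sub_orders": len(sub_orders), "sub_orders": sub_orders}

def toy_model(info):
    # Naive stand-in model: split the commissioned volume into 1,000-share slices.
    n = max(1, info["volume"] // 1000)
    slice_volume = info["volume"] // n
    return [
        {"volume": slice_volume, "price": info["price"], "slot": i}
        for i in range(n)
    ]

strategy = split_order({"trade_type": "buy", "volume": 5000, "price": 10.2}, toy_model)
```

A real order splitting model would, per the text, be the shared or local model trained via federated learning; `toy_model` only fixes the interface.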
In some embodiments, the shared model is obtained by: acquiring multiple sets of training data corresponding to multiple products, wherein the multiple products include the target product and the one or more reference products, and the set of training data corresponding to each product includes sample trade information of a historical trade order of the product and a gold-standard trade index of the historical trade order; and performing at least one global iteration based on the multiple sets of training data to obtain the shared model. Each global iteration includes: for each product, updating the initial local model corresponding to the global iteration using the training data corresponding to the product to obtain an intermediate local model; determining an intermediate shared model based on the intermediate local models corresponding to the products; and designating the intermediate shared model as the shared model or as the initial local model for the next round of global iterations.
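The global-iteration loop above resembles classic federated averaging. A minimal numeric sketch under stated assumptions: each "product" holds a one-parameter least-squares model, local updates are gradient steps, and aggregation is a uniform average (the patent also describes weighted variants).

```python
# Minimal federated-averaging sketch of one "global iteration": each product
# trains a local copy of the current shared model, then the intermediate
# local models are aggregated back into a shared model.

def local_update(initial_model, training_data, lr=0.1):
    """One illustrative local update of a scalar-parameter model y = w * x."""
    w = initial_model
    for x, y in training_data:
        grad = 2 * (w * x - y) * x          # gradient of squared error
        w -= lr * grad
    return w                                 # intermediate local model

def aggregate(local_models):
    """Uniform averaging of the intermediate local models."""
    return sum(local_models) / len(local_models)

def global_iteration(shared_w, datasets):
    intermediates = [local_update(shared_w, d) for d in datasets]
    return aggregate(intermediates)          # intermediate shared model

datasets = [[(1.0, 2.0)], [(1.0, 4.0)]]      # two "products" with local data
shared = 0.0
for _ in range(50):                          # at least one global iteration
    shared = global_iteration(shared, datasets)
```

With these two datasets the shared parameter settles near 3.0, the compromise between the two products' local optima, which is the intended behavior of the aggregation step.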
In some embodiments, the placement date of the historical trade order falls within a preset number of most recent days, and/or the trade price of the historical trade order differs from the average trade price of all orders within that period by no more than a preset threshold.
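A sketch of such a training-data filter, assuming dates are stored as day ordinals; the field names, window, and tolerance are illustrative assumptions, not values from the patent.

```python
# Keep a historical order only if it is recent enough and (optionally) its
# trade price lies close to the recent average trade price.

def keep_order(order, today, window_days=30, avg_price=None, price_tol=0.5):
    recent = (today - order["date"]) <= window_days   # dates as day ordinals
    if avg_price is None:
        return recent
    close_to_avg = abs(order["price"] - avg_price) <= price_tol
    return recent and close_to_avg

orders = [
    {"date": 95, "price": 10.1},
    {"date": 40, "price": 10.0},   # too old: outside the window
    {"date": 98, "price": 12.0},   # price too far from the recent average
]
kept = [o for o in orders if keep_order(o, today=100, avg_price=10.0)]
```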
In some embodiments, the gold-standard trade index for the historical trade order may be obtained by: executing multiple simulated orders based on the trading environment corresponding to the historical trade order and obtaining a trade index for each simulated order; and selecting the gold-standard trade index from the multiple sets of trade indexes obtained from the simulated orders.
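One way to realize this step, shown as a deterministic sketch: enumerate candidate execution schedules in the order's price environment (standing in for the patent's repeated simulations), score each schedule by its average execution price, and keep the best score as the gold standard. The lowest-cost-is-best criterion is our assumption; the patent does not fix the scoring rule.

```python
from itertools import combinations

def gold_standard_index(env_prices, slices=3):
    """Score every possible 3-slice execution schedule; keep the best score."""
    indices = [sum(combo) / slices for combo in combinations(env_prices, slices)]
    return min(indices)   # for a buy order, a lower average price is better

# Illustrative trading environment: tick prices around the historical order.
prices = [10.0, 10.4, 9.8, 10.2, 9.9, 10.1]
gold = gold_standard_index(prices)
```

Here the best schedule buys at the three cheapest ticks, so the gold-standard index is the average of 9.8, 9.9, and 10.0.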
In some embodiments, for each product, updating the initial local model corresponding to the global iteration using the training data corresponding to the product to obtain an intermediate local model includes performing at least one local iteration on the initial local model. Each local iteration includes: inputting the sample trade information of the historical trade order in the training data corresponding to the product into the initial local model to determine a predicted trading strategy for the historical trade order; determining a predicted trade index for the historical trade order based on the predicted trading strategy; and, based on the predicted trade index and the gold-standard trade index, either taking the initial local model as the intermediate local model or updating it to perform the next round of local iteration.
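A toy version of this local iteration loop, under stated assumptions: the "model" is a single scalar `w`, its predicted trade index is `w * info`, and the stop/update decision compares a squared error against a tolerance. The real model and loss are unspecified in the text.

```python
# Local iterations: predict a trade index per sample, compare it to the
# gold-standard index, and either stop (model is good enough) or update
# and run another local iteration.

def local_iterations(w, samples, lr=0.05, max_iters=500, tol=1e-3):
    """samples: list of (trade_info, gold_index) pairs; model predicts w * info."""
    for _ in range(max_iters):
        loss, grad = 0.0, 0.0
        for info, gold in samples:
            predicted = w * info                    # predicted trade index
            loss += (predicted - gold) ** 2
            grad += 2 * (predicted - gold) * info
        if loss < tol:                              # close enough to gold standard
            return w                                # take as intermediate local model
        w -= lr * grad / len(samples)               # otherwise update and iterate
    return w

w = local_iterations(0.0, [(1.0, 2.0), (2.0, 4.0)])
```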
In some embodiments, determining an intermediate shared model based on the intermediate local model corresponding to each product includes: determining the weight of the intermediate local model corresponding to each product based on characteristic information of the product; and performing weighted aggregation on the intermediate local models based on their weights to determine the intermediate shared model.
In some embodiments, determining an intermediate shared model based on the intermediate local model corresponding to each product includes: determining the weight of the intermediate local model corresponding to each product based on the accuracy of that intermediate local model; and performing weighted aggregation on the intermediate local models based on their weights to determine the intermediate shared model.
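Both weighting schemes reduce to the same aggregation step once weights are chosen. A sketch using accuracy-derived weights on scalar stand-in models; normalizing raw accuracies into weights is our assumption, as the patent leaves the weighting rule open.

```python
# Weighted aggregation of intermediate local models: models with higher
# accuracy (or more favorable characteristic information) get more weight.

def weighted_aggregate(local_models, accuracies):
    total = sum(accuracies)
    weights = [a / total for a in accuracies]        # normalize to sum to 1
    return sum(w * m for w, m in zip(weights, local_models))

# Two products: the first local model is more accurate, so it dominates.
shared = weighted_aggregate([1.0, 3.0], accuracies=[0.9, 0.3])
```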
In some embodiments, the local model corresponding to the target product is obtained by taking the intermediate local model corresponding to the target product obtained in one of the at least one global iterations as the local model corresponding to the target product.
In some embodiments, the method further includes selecting, as the order splitting model, either the shared model or the local model corresponding to the target product, by evaluating both models.
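The selection step can be sketched as a held-out evaluation; the mean-squared-error criterion and the callable stand-in models are assumptions for illustration, since the patent does not specify the evaluation metric.

```python
# Evaluate the shared model and the target product's local model on held-out
# data, and keep whichever scores better as the order splitting model.

def evaluate(model, held_out):
    """Lower is better: mean squared error against held-out targets."""
    return sum((model(x) - y) ** 2 for x, y in held_out) / len(held_out)

def select_order_splitting_model(shared, local, held_out):
    return shared if evaluate(shared, held_out) <= evaluate(local, held_out) else local

held_out = [(1.0, 2.0), (2.0, 4.0)]
shared_model = lambda x: 2.0 * x      # happens to fit the target product well
local_model = lambda x: 1.5 * x
chosen = select_order_splitting_model(shared_model, local_model, held_out)
```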
A second aspect of the present description provides a federated learning-based transaction splitting device. The device includes an acquisition module and a determination module. The acquisition module is used for obtaining first trade information of an order to be traded for a target product. The determination module is used for determining the trading strategy of the order to be traded by inputting the first trade information into an order splitting model. The trading strategy includes the number of sub-trade orders into which the order to be traded is split, and second trade information for each of the sub-trade orders. The order splitting model is a shared model generated based on a federated learning algorithm, or a local model corresponding to the target product generated based on the federated learning algorithm; the shared model is generated based on the local model corresponding to the target product and local models corresponding to one or more reference products.
A third aspect of the present description provides a federated learning-based transaction splitting system. The system includes a processor configured to perform the federated learning-based transaction splitting method.
A fourth aspect of the present specification provides a computer-readable storage medium. The storage medium stores computer instructions. After a computer reads the computer instructions in the storage medium, the computer executes the federated learning-based transaction splitting method.
According to the federated learning-based transaction splitting method of the embodiments of the present specification, based on the first trade information of the order to be traded for the target product, the order can be automatically split into a number of sub-trade orders, each with its second trade information, improving transaction splitting efficiency. In addition, because the order splitting model is generated through federated learning, it can integrate market information of different stocks, improving the applicability and accuracy of the order splitting model and, in turn, the efficiency and accuracy of transaction splitting.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic illustration of an application scenario of an exemplary transaction splitting system shown in accordance with some embodiments of the present description;
FIG. 2 is an exemplary block diagram of a transaction splitting device according to some embodiments of the present description;
FIG. 3 is an exemplary flow chart of a federated learning-based transaction splitting process according to some embodiments of the present description;
FIG. 4 is an exemplary diagram of a flow of generation of an order splitting model shown in accordance with some embodiments of the present description.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the specification and is provided in the context of a particular application and its requirements. It will be apparent to those having ordinary skill in the art that various changes can be made to the disclosed embodiments and that the general principles defined herein may be applied to other embodiments and applications without departing from the principles and scope of the present description. Thus, the present description is not limited to the embodiments described, but is to be accorded the widest scope consistent with the claims.
The terminology used in the description presented herein is for the purpose of describing particular example embodiments only and is not intended to be limiting of the scope of the description. As used in this specification, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
These and other features, aspects, and functions of the related elements of structure, and methods of operation, as well as combinations of parts and economies of manufacture, will become more apparent upon consideration of the following description of the drawings, all of which form a part of this specification. It is to be understood, however, that the drawings are designed solely for the purposes of illustration and description and are not intended to limit the scope of the present disclosure. It should be understood that the figures are not drawn to scale.
A flowchart is used in this specification to describe the operations performed by a system according to some embodiments of the present specification. It should be understood that the operations in the flowcharts need not be performed in the exact order shown; the steps may instead be processed in reverse order or simultaneously. Also, one or more other operations may be added to these flowcharts, and one or more operations may be deleted from them.
The term "user" in this specification may be a user who needs to conduct an intelligent trade (e.g., a securities trade), such as an investment analyst, investor, financial planner, financial analyst, etc., or a combination thereof.
Fig. 1 is a schematic diagram of an exemplary transaction splitting system 100, according to some embodiments of the present description. The transaction splitting system 100 may include a network 110, a storage device 120, a server 130, and a terminal device 140.
In some embodiments, network 110 may facilitate the exchange of information and/or data. In some embodiments, one or more components of the transaction splitting system 100 (e.g., the storage device 120, the server 130, or the terminal device 140) may send information and/or data to another component of the transaction splitting system 100 via the network 110. For example, server 130 may receive data from storage device 120 via network 110. As another example, the server 130 may send information (e.g., trading strategies, first trade information, second trade information, quotation data, etc.) to the terminal device 140 over the network 110. In some embodiments, the network 110 may be any type of wired network, wireless network, or any combination thereof. By way of example only, the network 110 may include one or any combination of a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, and the like. In some embodiments, network 110 may include one or more network access points. For example, the network 110 may include a wired or wireless network access point, such as a base station and/or an internet exchange point, through which one or more components of the transaction splitting system 100 may connect to the network 110 to exchange data and/or information.
The storage device 120 may store data and/or instructions. In some embodiments, storage device 120 may store data obtained from the network 110, the terminal device 140, and/or the server 130. In some embodiments, the storage device 120 may store data and/or instructions used by the server 130 to perform the exemplary methods disclosed herein. In some embodiments, the storage device 120 may include one or a combination of mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like. In some embodiments, the storage device 120 may be implemented on a cloud platform. For example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof. In some embodiments, the storage device 120 may be part of the server 130.
In some embodiments, one or more components of the transaction splitting system 100 (e.g., the server 130 or the terminal device 140) may have permission to access the storage device 120. In some embodiments, one or more components of the transaction splitting system 100 may read and/or modify data and/or information in the storage device 120 when one or more conditions are met.
In some embodiments, server 130 may be a single server or a server group. The server group may be centralized, connected to the network 110 via an access point, or distributed, with each server connected to the network 110 via one or more access points. In some embodiments, server 130 may be local to or remote from network 110. For example, server 130 may access information and/or data stored in terminal device 140 and/or storage device 120 via network 110. As another example, the storage device 120 may serve as back-end storage for the server 130. In some embodiments, server 130 may be implemented on a cloud platform. For example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
In some embodiments, server 130 may include a processing device 131. The processing device 131 may process information and/or data related to one or more of the functions described in this specification (e.g., information and/or data related to intelligent order splitting and/or trading). For example, the processing device 131 may obtain the first trade information of the order to be traded for the target product from the terminal device 140 and/or the storage device 120. As another example, processing device 131 may obtain an order splitting model, which is a shared model generated based on a federated learning algorithm or a local model corresponding to the target product. As another example, the processing device 131 may determine the trading strategy of the order to be traded by inputting the first trade information into the order splitting model. The trading strategy includes the number of sub-trade orders into which the order to be traded is split, and second trade information for each sub-trade order. In some embodiments, processing device 131 may include one or more processing units or processors (e.g., a single-core or multi-core processing engine). By way of example only, the processing device 131 may include a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction Set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
The terminal device 140 may facilitate interactions between the user and the transaction splitting system 100. For example, the user may send instructions for the order to be traded via the terminal device 140, and the server 130 may obtain the order to be traded from the terminal device 140. As another example, the user may set, through the terminal device 140, first trade information of the order to be traded for the target product and send it to the server 130 for determining a trading strategy for the order. As another example, the server 130 may display the determined trading strategy of the order to be traded to the user through the terminal device 140. In some embodiments, the terminal device 140 may include one or a combination of a laptop 141, a mobile device 142, a tablet 143, and the like. In some embodiments, terminal device 140 may include input devices, output devices, and the like. In some embodiments, the terminal device 140 may be part of the processing device 131.
It is noted that the foregoing description is provided for the purpose of illustration only and is not intended to limit the scope of the application. It will be understood by those of ordinary skill in the art that various changes in form and detail may be made to the areas in which the above methods and systems are applied without departing from the principles of the system.
FIG. 2 is an exemplary block diagram of a transaction splitting device, according to some embodiments of the application. As shown in fig. 2, the transaction splitting device 200 may include an acquisition module 210 and a determination module 220. In some embodiments, the transaction splitting device 200 may be integrated into the server 130. For example, the transaction splitting device 200 may be part of the processing device 131.
The acquisition module 210 may be configured to acquire first trade information of an order to be traded for a target product. The target product refers to a product that needs to be traded. The order to be traded refers to the trade order to be executed for the target product. In some embodiments, the order to be traded may include first trade information. The first trade information may include a trade type, a commissioned trade volume, a commissioned trade time, a commissioned trade price, etc. of the target product in the order to be traded, or any combination thereof. For more description of obtaining the first trade information of an order to be traded for a target product, see elsewhere in this specification (e.g., step 310 of fig. 3 and its associated description).
The determination module 220 may be configured to determine a trading strategy for an order to be traded by inputting first trade information into an order splitting model. The trading strategy may include the number of sub-trade orders into which the order to be traded is split, and second trade information for each sub-trade order. The order splitting model may be used to determine a trading strategy for the order to be traded of the target product based on the first trade information. In some embodiments, the order splitting model may be a first local model corresponding to the target product, a shared model generated based on a federated learning algorithm, or a second local model corresponding to the target product generated based on the federated learning algorithm. For more description of determining the trading strategy of an order to be traded, see elsewhere in this specification (e.g., step 320 of FIG. 3 and descriptions thereof).
In some embodiments, the transaction splitting device 200 may also include a training module 230. The training module 230 may be used to train one or more machine learning models (e.g., a first local model, a shared model, a second local model, etc.). In some embodiments, training module 230 may be implemented on processing device 131 or on a processing device other than processing device 131. In some embodiments, training module 230 and the other modules (e.g., acquisition module 210, determination module 220, etc.) may be implemented on the same processing device (processing device 131). Alternatively, the training module 230 and the other modules may be implemented on different processing devices. For example, training module 230 may be implemented on a processing device of a vendor of the machine learning model, while the other modules may be implemented on a processing device of a user of the machine learning model.
It should be understood that the system shown in fig. 2 and its modules may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may then be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or special purpose design hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such as provided on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system of the present application and its modules may be implemented not only with hardware circuitry such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, etc., or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also with software executed by various types of processors, for example, and with a combination of the above hardware circuitry and software (e.g., firmware).
It should be noted that the above description of the system and its modules is for convenience of description only and is not intended to limit the application to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the principles of the system, various modules may be combined arbitrarily or a subsystem may be constructed in connection with other modules without departing from such principles. For example, the determination module 220 may include a first determination subunit for determining an order splitting model from the first local model, the sharing model, the second local model, and a second determination subunit for determining a trading strategy for an order to be traded by inputting first trade information into the order splitting model. The modules may share a single storage device (e.g., storage device 120), or the modules may each have a separate storage device. Such variations are within the scope of the application.
FIG. 3 is an exemplary flow diagram of a federated learning-based transaction splitting process 300 according to some embodiments of the present application. In some embodiments, one or more operations of the transaction splitting process 300 shown in fig. 3 may be implemented by the transaction splitting system 100 shown in fig. 1 or the transaction splitting device 200 shown in fig. 2. For example, the process 300 may be stored in the storage device 120 in the form of instructions (e.g., an application program) and invoked and/or executed by the processing device 131.
In step 310, the processing device 131 may obtain first trade information of an order to be traded for a target product. In some embodiments, step 310 may be implemented by the acquisition module 210.
The target product refers to a product that needs to be traded. In some embodiments, the target product may be a financial product, such as a stock, a fund, a bond, futures, or the like, or any combination thereof. For example, the target product may be a single stock. As another example, the target product may be a combined product including multiple product types. For purposes of description, the following takes a stock as an example of the target product; it should be noted that the method of the present description may be applied to any type of product.
The order to be traded refers to the trade order to be executed for the target product. In some embodiments, the order to be traded may include first trade information. The first trade information may include a trade type, a commissioned trade volume, a commissioned trade time, a commissioned trade price, etc. of the target product in the order to be traded, or any combination thereof. The trade type is buy or sell. The commissioned trade volume refers to the quantity of the target product that needs to be traded. The commissioned trade time is the desired execution time of the order to be traded. The commissioned trade price refers to the desired trade price of the target product, for example, a trade price threshold (e.g., a highest acceptable buy price or a lowest acceptable sell price). That is, the actual trade price of the target product in the order to be traded must be no more than the commissioned trade price (when buying the target product) or no less than the commissioned trade price (when selling the target product).
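The buy/sell price constraint at the end of this paragraph can be stated compactly. A small illustrative check; the function name and argument names are ours, not the patent's.

```python
# Commissioned-price constraint: a buy must execute at or below the
# commissioned price, and a sell at or above it.

def price_ok(trade_type, actual_price, commission_price):
    """Return True when the actual price satisfies the commissioned price."""
    if trade_type == "buy":
        return actual_price <= commission_price
    return actual_price >= commission_price

buy_ok = price_ok("buy", 10.1, 10.2)     # buying below the limit is allowed
sell_ok = price_ok("sell", 10.1, 10.2)   # selling below the limit is not
```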
For example only, when the target product is a stock, the order to be traded is a stock order to be traded for that stock. The stock order to be traded may be to buy or sell a certain number of shares (e.g., 50,000 shares (500 lots), 100,000 shares (1,000 lots), 500,000 shares (5,000 lots), 1,000,000 shares (10,000 lots), 5,000,000 shares (50,000 lots), or 10,000,000 shares (100,000 lots)) at a certain price over a certain period of time on a certain day (e.g., 9:30-10:30 on June 27, 2023). The unit of the commission trade amount of a stock order to be traded may be shares or lots, where 1 lot equals 100 shares.
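For illustration only, the lot/share conversion stated above (1 lot equals 100 shares) can be expressed as a trivial helper; the function names and the `LOT_SIZE` constant are ours, not identifiers from the present application.

```python
# Illustrative helpers for the lot/share conversion described above.
LOT_SIZE = 100  # shares per lot

def lots_to_shares(lots):
    """Convert a commission trade amount given in lots to shares."""
    return lots * LOT_SIZE

def shares_to_lots(shares):
    """Convert a commission trade amount given in shares to lots."""
    return shares / LOT_SIZE
```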
In some embodiments, the processing device 131 may obtain the first trade information of the order to be traded for the target product through the terminal device 140. For example, the user may log in to the trading platform (e.g., a web version or an application (i.e., app) version) of the target product through the terminal device 140. The trading platform of the target product may be implemented by software of the transaction splitting device 200. After successfully logging in to the trading platform, the user may specify the first trade information of the order to be traded for the target product on the platform interface. The user may specify the one or more trade parameters by direct input (e.g., text input, voice input, etc.), list selection, etc., or any combination thereof. The processing device 131 may then obtain the first trade information of the order to be traded for the target product through the terminal device 140. In some embodiments, the first trade information of the order to be traded for the target product may be preset and stored in the storage device 120, and the processing device 131 may obtain it from the storage device 120.
In step 320, the processing device 131 may determine a trading strategy for the order to be traded by inputting the first trade information into the order splitting model. In some embodiments, step 320 may be implemented by determination module 220.
The trading strategy may include the number of sub-trade orders into which the order to be traded is split, and second trade information for each sub-trade order. In some embodiments, the second trade information may be similar to the first trade information. For example, for any of the plurality of sub-trade orders, the second trade information of that sub-trade order may include a trade type, a commission trade amount, a commission trade time, a commission trade price, etc. of the target product in that sub-trade order, or any combination thereof.
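For illustration, a trading strategy of the shape described above might be represented by a simple data structure; the class and field names below are our assumptions, not identifiers from the present application.

```python
# Illustrative representation of a trading strategy: a list of sub-trade
# orders, each carrying its own second trade information.
from dataclasses import dataclass
from typing import List

@dataclass
class SubTradeOrder:
    trade_type: str   # "buy" or "sell"
    amount: int       # commission trade amount, in shares
    time: str         # commission trade time
    price: float      # commission trade price

@dataclass
class TradingStrategy:
    sub_orders: List[SubTradeOrder]  # the split of the order to be traded

    @property
    def num_sub_orders(self):
        return len(self.sub_orders)

    def total_amount(self):
        return sum(o.amount for o in self.sub_orders)

# A 50,000-share buy order split into two sub-trade orders.
strategy = TradingStrategy(sub_orders=[
    SubTradeOrder("buy", 30_000, "9:30-10:00", 10.50),
    SubTradeOrder("buy", 20_000, "10:00-10:30", 10.45),
])
```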
The order splitting model may be used to determine a trading strategy for the order to be traded of the target product based on the first trade information. For example, the processing device 131 may input the first trade information into the order splitting model, which may output the trading strategy of the order to be traded.
In some embodiments, the order splitting model may be a first local model corresponding to the target product, a shared model generated based on a federated learning algorithm, or a second local model corresponding to the target product generated based on the federated learning algorithm.
The first local model corresponding to the target product may be generated by training on a set of training data for the target product. The training data for the target product may include a plurality of training samples, where each training sample may include sample trade information of a historical trade order for the target product and a gold standard trade index of the historical trade order. The sample trade information may include sample first trade information. In some embodiments, the sample trade information of a historical trade order may also be referred to as sample first trade information; similar to the first trade information of an order to be traded, it may serve as the training input. The gold standard trade index of the historical trade order serves as the training label. For example, the first local model may be determined by training an initial local model based on the set of training data using the processing device 131 or another processing device. Exemplary initial local models may include convolutional neural network (CNN) models, deep neural network (DNN) models, recurrent neural network (RNN) models, generative adversarial network (GAN) models, graph neural network (GNN) models, and the like, or any combination thereof.
In some embodiments, each training sample may also include historical market data corresponding to the historical trade order. For example, the processing device 131 may train the initial local model with the sample first trade information and historical market data of a historical trade order for the target product as training inputs, and with the gold standard trade index of the historical trade order as the training label, to generate the first local model. Accordingly, the processing device 131 may input the first trade information of the order to be traded and the market data corresponding to the first trade information into the order splitting model, and the order splitting model may output the trading strategy of the order to be traded. Further description of the market data may be found in FIG. 4 and is not repeated here.
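The training setup described above can be sketched in heavily simplified form: a one-parameter linear model stands in for the neural network, the sample trade information is reduced to a single numeric feature, and the gold standard trade index is a scalar label fitted by squared-error gradient descent. Everything here is an illustrative assumption, not the actual model of the present application.

```python
# Minimal sketch of local-model training on (feature, gold standard index)
# pairs using stochastic gradient descent on a squared-error loss.
def train_local_model(samples, lr=0.1, epochs=100):
    """samples: list of (feature, gold_standard_index) pairs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x                  # predicted trade index
            grad = 2 * (pred - y) * x     # d/dw of (pred - y)**2
            w -= lr * grad
    return w

# Toy data whose underlying relation is index = 2 * feature.
w = train_local_model([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```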
The shared model may be generated based on a local model (e.g., a first local model, an intermediate local model, or a second local model) corresponding to the target product and a local model corresponding to the one or more reference products. The reference product refers to other products than the target product. For example, the reference product may be a different financial product than the target product. For example only, when the target product is a certain stock, the reference product may be other stocks than the stock, or other products such as funds, bonds, futures, and the like. The local model corresponding to the one or more reference products may be generated in a similar manner to the first local model corresponding to the target product. For example, for any of the one or more reference products, processing device 131 may obtain a set of training data for the reference product and train an initial model based on the training data for the reference product to determine a local model corresponding to the reference product.
In some embodiments, processing device 131 may generate the shared model based on the local model corresponding to the target product and the local model corresponding to the one or more reference products. For example, processing device 131 may obtain multiple sets of training data corresponding to multiple products (target product and one or more reference products) and perform at least one global iteration based on the multiple sets of training data corresponding to the multiple products to obtain the shared model.
A second local model corresponding to the target product may be generated based on the shared model. For example, the processing device 131 may use, as the second local model corresponding to the target product, an intermediate local model corresponding to the target product obtained in a round of global iteration other than the first, among the at least one global iteration. See FIG. 4 and its associated description for more details regarding the order splitting model.
In some embodiments, the processing device 131 may select one of the shared model and a local model (e.g., the first local model or the second local model) corresponding to the target product as the order splitting model. For example, by evaluating the shared model and the local models corresponding to the target product, the processing device 131 may select one of them as the order splitting model. For example only, the processing device 131 may obtain test data for the target product. The test data may include test trade information (e.g., test first trade information) and a test trade index for the target product. Using the test data, the processing device 131 may determine the prediction effect of the shared model, the first local model corresponding to the target product, and the second local model corresponding to the target product. The prediction effect may be represented by the difference between the test trade index and the predicted trade index corresponding to the predicted trading strategy generated by each model. The processing device 131 may determine the order splitting model based on the prediction effects of the shared model, the first local model, and the second local model. For example, the processing device 131 may determine the model whose predicted trading strategy yields the smallest difference as the order splitting model. As another example, the processing device 131 may determine the model with the best prediction effect as the order splitting model.
As another example, the processing device 131 may obtain test data for the target product that includes test trade information (e.g., test first trade information) for the target product. The processing device 131 may process the test first trade information with the shared model, the first local model, and the second local model respectively, obtain a predicted trading strategy for each model, and then determine a predicted trade index for each predicted trading strategy. The processing device 131 may determine the model corresponding to the best predicted trade index as the order splitting model.
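The selection step above can be sketched as follows: each candidate model predicts a trading strategy for the test trade information, a predicted trade index is computed for each strategy, and the model with the best (here, the largest) index is chosen. The candidate models are stand-in callables, and all names are illustrative.

```python
# Illustrative model selection among the shared model and the two local
# models, scored by a predicted trade index.
def select_order_splitting_model(models, test_info, index_of):
    scored = {name: index_of(model(test_info)) for name, model in models.items()}
    return max(scored, key=scored.get)

candidates = {
    "first_local":  lambda info: info * 0.8,   # stand-in for the first local model
    "shared":       lambda info: info * 1.0,   # stand-in for the shared model
    "second_local": lambda info: info * 1.2,   # stand-in for the second local model
}
best = select_order_splitting_model(candidates, 10.0, index_of=lambda s: s)
```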
In some embodiments, after determining the order splitting model, the processing device 131 may input the first trade information into the order splitting model and take the output of the order splitting model as the trading strategy of the order to be traded for the target product.
In some embodiments, the processing device 131 may present the trading strategy of the order to be traded. For example, the processing device 131 may present the trading strategy of the order to be traded via the terminal device 140, and the user may confirm or adjust it via the terminal device 140.
In some embodiments, the processing device 131 may execute the trading strategy of the order to be traded. For example, based on the second trade information (e.g., the commission trade time T, the commission trade price P, and the commission trade amount M) of each sub-trade order, the processing device 131 may buy or sell an amount M of the target product at price P at time T.
According to some embodiments of the present disclosure, based on the first trade information of the order to be traded for the target product and the order splitting model, the order to be traded can be automatically split into a number of sub-trade orders, with second trade information determined for each sub-trade order, thereby improving the efficiency of trade splitting. In addition, in some embodiments of the present description, the order splitting model best matched to the target product may be selected from a plurality of models. The plurality of models may include a first local model, a second local model, and a shared model. The first local model is generated by independent training on the training data of the target product; it can account for the specificity of the target product and is fully adapted to it. The shared model is generated by a federated learning method, so that the market information and trading characteristics of different products can be comprehensively considered, giving it higher generality and accuracy. The second local model is generated based on the training data of the target product together with the federated learning algorithm, so it inherits the generality of the shared model while accounting for the specificity of the target product. Selecting the order splitting model by evaluating the multiple models makes the finally used order splitting model better suited to the target product, improving the accuracy of trade splitting.
It should be noted that the above description of the flow 300 is for exemplary purposes only and is not intended to limit the scope of the present description. Any alterations or modifications may be made by those skilled in the art based on the description herein. In some embodiments, the flow 300 may include one or more additional steps, or one or more steps of the flow 300 may be omitted. In some embodiments, at least two steps in the flow 300 may be combined into one step, or one step in the flow 300 may be split into two steps. For example, step 320 may include two sub-steps for determining the order splitting model and for determining the trading strategy of the order to be traded, respectively.
FIG. 4 is an exemplary diagram of a flow 400 for generating an order splitting model, according to some embodiments of the present description.
In step 410, processing device 131 may obtain multiple sets of training data corresponding to multiple products. In some embodiments, step 410 may be implemented by training module 230.
In some embodiments, each set of training data in the plurality of sets of training data may correspond to one product. For example, as shown in FIG. 4, the plurality of sets of training data include training data D1 for product 1, training data D2 for product 2, …, and training data Dn for product n, where n is a positive integer. For example only, product 1 may be the target product, and the remaining products (i.e., product 2, …, product n) may be the reference products.
In some embodiments, for each of the target product and the one or more reference products, the corresponding set of training data for that product may include sample trade information of historical trade orders for that product and a gold standard trade index of the historical trade orders. The sample trade information may include sample first trade information and a sample trading strategy. The sample first trade information serves as the training input, and the sample trading strategy and the gold standard trade index serve as training labels.
In some embodiments, the historical trade orders may satisfy a preset condition. For example, the placement date of a historical trade order is within a preset number of most recent days. The preset number of days may be any number of days; for example, the placement time of the historical trade order is within the last 30 days. In some embodiments, the preset number of days may be set by the user or determined automatically by the system.
As another example, the trade price of the historical trade order differs from the average trade price of all orders over the preset number of most recent days by no more than a preset threshold. The preset threshold may be a percentage of the average trade price (e.g., 5%, 10%, 15%, 20%, 25%, 30%, etc.) or a preset value (e.g., 5 fen (0.05 yuan), 1 jiao (0.1 yuan), 5 jiao (0.5 yuan), 1 yuan, 5 yuan, etc.). In some embodiments, the preset threshold may be set by the user or determined automatically by the system.
As another example, the placer of the historical trade order may be a particular user. For example, the placer of the historical trade order may be a financial expert. As another example, the historical trade index of the placer of the historical trade order exceeds a trade index threshold. In some embodiments, the trade index threshold may be set by the user or determined automatically by the system.
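The preset conditions above (recency, price deviation from the average, and placer quality) could be screened by a filter such as the following. The present description lists them as alternative examples; they are combined with a logical AND here purely to keep the sketch compact, and all field names, threshold defaults, and the integer day numbering are assumptions.

```python
# Illustrative screening of a historical trade order against the preset
# conditions described above.
def satisfies_preset(order, avg_price, today,
                     max_days=30, max_dev=0.10, index_threshold=0.5):
    recent = (today - order["date"]) <= max_days            # placed recently
    near_avg = abs(order["price"] - avg_price) <= max_dev * avg_price
    good_placer = order["placer_index"] > index_threshold   # e.g., an expert
    return recent and near_avg and good_placer

ok = satisfies_preset({"date": 95, "price": 10.5, "placer_index": 0.8},
                      avg_price=10.0, today=100)
far = satisfies_preset({"date": 95, "price": 12.0, "placer_index": 0.8},
                       avg_price=10.0, today=100)  # price too far from average
```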
In some embodiments, the gold standard trade index of a historical trade order may be a sample trade index corresponding to the historical trade order. For example, the processing device 131 may determine the sample trade index of the historical trade order based on the historical trade order and the historical market data corresponding to the historical trade order. Exemplary trade indices may include policy profit, policy cost, implementation shortfall, win rate, and the like, or any combination thereof. Policy profit refers to the difference between the total assets after the trade is completed and the initial total assets set before the trade. Policy cost refers to the transaction costs (e.g., including surcharges, commissions, stamp taxes, etc.). Because different trading strategies differ (e.g., in the number of sub-trade orders and the second trade information of the sub-trade orders), their transaction costs may also differ. Implementation shortfall refers to the difference between the expected trade return and the actually executed return. The win rate is an indicator reflecting the stability of a strategy. For example, the win rate compares, for each time period, the profit of the strategy with that of a benchmark strategy; if the strategy outperforms the benchmark strategy in that period, it counts as one win. The benchmark strategy may be a general-purpose strategy such as a VWAP strategy or a TWAP strategy.
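The win rate can be sketched as a period-by-period comparison against a benchmark strategy. The function below is an illustrative reading of that definition, not code from the present application.

```python
# Illustrative win rate: the fraction of time periods in which the strategy
# outperforms a benchmark strategy (e.g., VWAP or TWAP).
def win_rate(strategy_returns, benchmark_returns):
    wins = sum(1 for s, b in zip(strategy_returns, benchmark_returns) if s > b)
    return wins / len(strategy_returns)

# Four periods; the strategy beats the benchmark in two of them.
rate = win_rate([0.02, -0.01, 0.03, 0.00], [0.01, 0.00, 0.02, 0.01])
```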
As another example, the processing device 131 may perform multiple simulated order placements based on the trading environment corresponding to the historical trade order, obtain a trade index for each simulated placement, and select the gold standard trade index from the multiple trade indices so obtained. Specifically, the processing device 131 may determine a plurality of simulated trading strategies based on the trading environment corresponding to the historical trade order, and determine the trade index of each simulated trading strategy by a simulated order placement under that trading environment. The processing device 131 may identify the best of the plurality of simulated trade indices as the gold standard trade index of the historical trade order.
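Choosing the gold standard from several simulated order placements reduces, in sketch form, to keeping the simulated trading strategy with the best trade index; "best" is taken to mean highest here, which is an assumption.

```python
# Illustrative selection of the gold standard trade index from the trade
# indices of several simulated order placements.
def gold_standard(simulations):
    """simulations: list of (strategy_id, trade_index) pairs."""
    return max(simulations, key=lambda s: s[1])

best_id, gold = gold_standard([("sim1", 0.12), ("sim2", 0.35), ("sim3", 0.28)])
```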
As another example, the processing device 131 may take the actual trade index of a historical trade order placed by a financial expert as the gold standard trade index.
In some embodiments, different weights may be set for different trade indices. A weight represents the degree of influence of a trade index on the trading strategy. For example, policy profit may be given the highest weight. Accordingly, a model trained on data in which policy profit carries the highest weight can output the trading strategy with the strongest profit capability, thereby improving the profitability of the trading strategy. By setting different weights for different trade indices, order splitting models with different trading objectives can be obtained, broadening the application scenarios of the order splitting model.
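Combining several trade indices under different weights can be sketched as a weighted sum, with policy profit given the largest weight as suggested above; the indicator names and the weight values are illustrative assumptions.

```python
# Illustrative weighted combination of trade indices into a single score.
def weighted_index(indicators, weights):
    assert set(indicators) == set(weights), "one weight per indicator"
    return sum(indicators[k] * weights[k] for k in indicators)

# Policy profit dominates; cost and shortfall enter with negative values.
score = weighted_index(
    {"profit": 0.8, "cost": -0.1, "shortfall": -0.05, "win_rate": 0.6},
    {"profit": 0.6, "cost": 0.1, "shortfall": 0.1, "win_rate": 0.2},
)
```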
In some embodiments, each set of training data may also include historical market data corresponding to the historical trade orders for the product. For example, if the commission trade time of a historical trade order is 9:30-10:30 on June 27, 2023, the historical market data may be the market data for 9:30-10:30 on June 27, 2023. The historical market data may include historical Level-2 data (e.g., multi-level real-time market data or market depth data). Level-2 data includes data in various forms, such as ten-level quotes, buy and sell queues, trade-by-trade executions, total commission volume, and weighted prices. In particular, Level-2 data can reflect whether the market is buyer- or seller-dominated, its level of liquidity, and the like, so it can be used to mine future stock price trends. In some embodiments, the sample first trade information and the corresponding historical market data may serve as training inputs, and the gold standard trade index may serve as the training label.
In step 420, processing device 131 may perform at least one global iteration based on the sets of training data to obtain an order splitting model. In some embodiments, step 420 may be implemented by training module 230.
In some embodiments, each round of global iteration may include the following steps. For each product of the plurality of products, the processing device 131 may generate an intermediate local model corresponding to that product by training the initial local model of the current round of global iteration using the training data for that product. Based on the intermediate local models corresponding to the products, the processing device 131 may determine an intermediate shared model. Further, the processing device 131 may designate the intermediate shared model as the shared model or as the initial local model for the next round of global iteration.
For example, as shown in step 421 of FIG. 4, the processing device 131 may generate an intermediate local model I1 corresponding to product 1 by training the initial local model using the training data D1 corresponding to product 1; generate an intermediate local model I2 corresponding to product 2 by training the initial local model using the training data D2 corresponding to product 2; …; and generate an intermediate local model In corresponding to product n by training the initial local model using the training data Dn corresponding to product n. In the first round of global iteration, the initial local model may be an untrained local model. In the second and subsequent rounds of global iteration, the initial local model may be the intermediate shared model generated in the previous round of global iteration.
In some embodiments, training the initial local model may include a local iterative process: for each of the plurality of products, the training data for that product may be used to iteratively update the initial local model until a first termination condition is met. Exemplary first termination conditions include: the first loss function value corresponding to the initial local model is less than a first threshold; the difference between the first loss function value of the current iteration and that of the previous iteration is less than a first difference threshold; the number of local iterations exceeds a certain count; and so on. For example, in the current iteration, the sample first trade information of the historical trade order of one training sample in the training data corresponding to the target product is input into the initial local model, which determines a predicted trading strategy for the historical trade order. A predicted trade index of the historical trade order is then determined based on the predicted trading strategy. The processing device 131 determines the first loss function value based on the difference between the predicted trade index and the gold standard trade index in the training sample. As another example, in the current iteration, the sample first trade information of the historical trade order of one training sample in the training data corresponding to the target product, together with the historical market data corresponding to the historical trade order, is input into the initial local model, which outputs a prediction result (e.g., a predicted trade index). The processing device 131 determines the first loss function value based on the difference between the prediction result and the training label (the gold standard trade index in the training sample).
If the first loss function value satisfies the first termination condition, the initial local model is determined as the first local model corresponding to the target product. Otherwise, the initial local model continues to be updated based on the first loss function value. For example, the processing device 131 may update the parameters of the initial local model by a machine learning algorithm (e.g., stochastic gradient descent) to minimize the first loss function until model training is complete, or stop training after the number of training iterations reaches a certain count.
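The first termination condition can be sketched as a three-way check (loss threshold, loss-improvement threshold, iteration cap); the default threshold values below are illustrative, not values from the present application.

```python
# Illustrative first termination condition for local training.
def should_stop(loss, prev_loss, n_iters,
                loss_thresh=0.01, diff_thresh=1e-4, max_iters=1000):
    if loss < loss_thresh:                                  # loss small enough
        return True
    if prev_loss is not None and abs(prev_loss - loss) < diff_thresh:
        return True                                         # loss has converged
    return n_iters >= max_iters                             # iteration cap
```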
In some embodiments, the first termination conditions corresponding to different products may be the same or different. For example only, in the first global iteration, the first termination condition corresponding to each product is that the number of local iterations reaches k1 times; … in the nth global iteration, the first termination condition corresponding to each product is that the local iteration times reach kn times. That is, in each round of global iterations, the training data corresponding to each product is used to perform the same number of local iterations on the initial local model in that round of global iterations.
In some embodiments, the processing device 131 may determine the intermediate shared model based on the intermediate local models corresponding to the plurality of products (e.g., the target product and the one or more reference products). For example, the processing device 131 may determine a weight for the intermediate local model of each product and then determine the intermediate shared model based on the intermediate local model of each product and its weight. For example only, as shown in step 422 of FIG. 4, the processing device 131 may determine a weight W1 of the intermediate local model I1 corresponding to product 1, a weight W2 of the intermediate local model I2 corresponding to product 2, …, and a weight Wn of the intermediate local model In corresponding to product n. The processing device 131 may then determine the intermediate shared model based on the intermediate local model and weight of each product. In particular, each parameter value in the intermediate shared model may be determined by a weighted average of the corresponding parameter values of the intermediate local models of the products.
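The aggregation step, in which each parameter of the intermediate shared model is the weighted average of the corresponding parameters of the products' intermediate local models, can be sketched as follows; the weights are assumed to already sum to 1.

```python
# Illustrative weighted parameter averaging for step 422.
def aggregate(local_params, weights):
    """local_params: one parameter list per product; weights: one per product."""
    n = len(local_params[0])
    return [sum(w * params[i] for params, w in zip(local_params, weights))
            for i in range(n)]

# Two products with two model parameters each, weights W1=0.25, W2=0.75.
shared_params = aggregate([[1.0, 2.0], [3.0, 4.0]], weights=[0.25, 0.75])
```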
In some embodiments, for each product of the target product and the one or more reference products, the processing device 131 may determine the weight of the intermediate local model corresponding to the product based on feature information of the product. Exemplary feature information includes the type, price, liquidity, etc. of the product, or any combination thereof. For example, if the shared model is intended for the liquor sector, the weight of the intermediate local model corresponding to a liquor-related product is higher than that of a product unrelated or only indirectly related to liquor. As another example, if the shared model is intended for high-priced stocks, the weight of the intermediate local model corresponding to a high-priced stock is higher than that corresponding to a low-priced stock. As another example, if the shared model is intended for high-liquidity stocks, the weight of the intermediate local model corresponding to a high-liquidity stock is higher than that corresponding to a low-liquidity stock.
In some embodiments, for each intermediate local model of the plurality of intermediate local models, the processing device 131 may determine the weight of that intermediate local model based on its accuracy. For example, if the accuracy of the intermediate local model I1 is 0.9, the accuracy of the intermediate local model I2 is 0.7, and the accuracy of the intermediate local model I3 is 0.8, then the weight of the intermediate local model I1 may be 0.5, the weight of the intermediate local model I2 may be 0.2, and the weight of the intermediate local model I3 may be 0.3. Taking the intermediate local model I1 as an example, the accuracy of the model may be determined based on the difference between the model's predicted trade indices and the gold standard trade indices of the historical trade orders in the training data D1.
It should be appreciated that the weight of an intermediate local model may also be determined based on both the characteristics of its corresponding product and the model's accuracy. Different weighting schemes yield shared models with different emphases, thereby improving the specificity and accuracy of the shared model.
Further, the processing device 131 may designate the intermediate shared model as the shared model or as the initial local model for the next round of global iterations.
In some embodiments, for the intermediate shared model generated in each round of global iteration, the processing device 131 may determine whether the intermediate shared model satisfies a second termination condition. The second termination condition may be similar to the first termination condition. For example, the second termination condition includes: the second loss function value corresponding to the intermediate shared model is smaller than a second threshold; the difference between the second loss function value of the current iteration and that of the previous iteration is smaller than a second difference threshold; the number of global iterations exceeds a certain count; and so on. When the intermediate shared model satisfies the second termination condition, it may be determined to be the shared model. When it does not, the intermediate shared model may be used as the initial local model of the target product and each of the one or more reference products in the next round of global iteration. For example, in the current round of global iteration, the sample first trade information of the historical trade order of one training sample in the training data corresponding to one or more products is input into the intermediate shared model, and a predicted trading strategy of the historical trade order is determined. A predicted trade index of the historical trade order is then determined based on the predicted trading strategy. The processing device 131 determines the second loss function value based on the difference between the predicted trade index and the gold standard trade index in the training sample.
As another example, in the current round of global iteration, the sample first trade information of the historical trade order of one training sample in the training data corresponding to the one or more products, together with the historical market data corresponding to the historical trade order, is input into the intermediate shared model, which outputs a prediction result (e.g., a predicted trade index). The processing device 131 determines the second loss function value based on the difference between the prediction result and the training label (the gold standard trade index in the training sample). If the second loss function value satisfies the second termination condition, the intermediate shared model is determined to be the shared model. Otherwise, the intermediate shared model needs to be updated further. For example, the processing device 131 may use the intermediate shared model as the initial local model for the next round of global training to minimize the second loss function until model training is complete, or stop training after the number of training iterations reaches a certain count.
For example only, as shown in step 423 of FIG. 4, the processing device 131 may determine whether the global iteration condition is satisfied, i.e., whether the intermediate shared model satisfies the second termination condition. If the global iteration condition is met, step 424 may be performed: the processing device 131 may take the intermediate shared model as the shared model, and the global iteration stops. If the global iteration condition is not met, step 425 may be performed: the processing device 131 may use the intermediate shared model as the initial local model for the next round of global iteration. The next round of global iteration is then performed, until the global iteration condition is satisfied.
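The overall loop of process 400 (steps 421-425) can be condensed into the following sketch, where `train`, `aggregate`, and `converged` are stand-ins for the per-product local training, the weighted aggregation, and the second termination condition; the toy instantiation below is purely illustrative.

```python
# Condensed sketch of the global federated-learning iteration of process 400.
def federated_rounds(initial, datasets, train, aggregate, converged,
                     max_rounds=10):
    model = initial
    for _ in range(max_rounds):
        locals_ = [train(model, d) for d in datasets]   # step 421
        model = aggregate(locals_)                      # step 422
        if converged(model):                            # step 423
            break                                       # step 424
    return model                                        # otherwise step 425 loops

# Toy run: "training" nudges a scalar model toward each dataset's value,
# so the aggregate drifts toward the overall mean, 3.0.
shared = federated_rounds(
    initial=0.0,
    datasets=[2.0, 4.0],
    train=lambda m, d: m + 0.5 * (d - m),
    aggregate=lambda ms: sum(ms) / len(ms),
    converged=lambda m: abs(m - 3.0) < 1e-6,
)
```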
In some embodiments, the processing device 131 may use the intermediate local model corresponding to the target product obtained in one global iteration of the at least one global iteration as the local model corresponding to the target product. For example, the processing device 131 may use the intermediate local model corresponding to the target product obtained in the first global iteration as the first local model corresponding to the target product. For example only, the processing device 131 may save the intermediate local model I1 corresponding to product 1 generated based on the training data D1 corresponding to product 1, and designate the intermediate local model I1 as the first local model corresponding to the target product. The first local model is not involved in the model aggregation stage of the federal learning process (i.e., the process from the intermediate local models to the global shared model); it is generated based only on training data corresponding to the target product, and is therefore more specific to the target product.
As another example, the processing device 131 may use the intermediate local model corresponding to the target product obtained in the second or a subsequent global iteration as the second local model corresponding to the target product. For example only, the intermediate local model in the last global iteration may be taken as the second local model. That is, when the intermediate shared model satisfies the second termination condition, the processing device 131 may designate the intermediate local model corresponding to the target product in the current iteration as the second local model corresponding to the target product. The second local model is generated by further training, using training data corresponding to the target product, the intermediate shared model from the previous iteration, so that the model has both the generality of a federal learning model and specificity for the target product.
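The derivation of the second local model described above, i.e., further training the previous round's intermediate shared model on the target product's own data, can be sketched as follows. The dict-of-parameters representation and the `local_update` callback are assumptions for illustration, not the disclosed implementation.

```python
import copy

# Hypothetical sketch: the second local model is obtained by further training
# the intermediate shared model on training data of the target product alone.
# The dict-of-parameters representation and local_update callback are assumptions.
def derive_second_local_model(shared_params: dict,
                              target_data,
                              local_update,
                              local_epochs: int = 1) -> dict:
    """Fine-tune a copy of the shared parameters on target-product data."""
    params = copy.deepcopy(shared_params)  # leave the shared model itself intact
    for _ in range(local_epochs):
        params = local_update(params, target_data)
    return params

# Toy usage: a one-parameter "model" updated by accumulating the data sum.
shared = {"w": 1.0}
second_local = derive_second_local_model(
    shared, target_data=[1, 2],
    local_update=lambda p, d: {"w": p["w"] + sum(d)},
    local_epochs=2,
)
```

Deep-copying before the update reflects the text's point that the shared model keeps its generality while the second local model specializes to the target product.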
It should be noted that the above description of the flow 400 is for exemplary purposes only and is not intended to limit the scope of the present description. Various alterations and modifications may be made by those skilled in the art based on the description herein. In some embodiments, the flow 400 may include one or more additional steps, or one or more steps of the flow 400 may be omitted. In some embodiments, at least two steps in the flow 400 may be combined into one step, or one step in the flow 400 may be split into two steps.
Benefits that may be realized by embodiments of the present description may include, but are not limited to: (1) based on the first trade information of the order to be traded of the target product and the order splitting model, the number of sub-trade orders into which the order to be traded is split and the second trade information of each sub-trade order can be obtained automatically, improving the efficiency of trade splitting; (2) the shared model is generated through federal learning, so that market information and transaction characteristics of different products can be comprehensively considered, giving the global model higher generality and accuracy and further improving the efficiency and accuracy of trade splitting; (3) the order splitting model comprises a plurality of candidate models (e.g., a first local model, a second local model, and a shared model), and the model that best matches the target product can be selected through evaluation, improving the accuracy of trade splitting; (4) by assigning different weights to the trade indexes, order splitting models for different trade targets can be obtained, broadening the application scenarios of the order splitting model.
The present specification embodiments are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present description have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the disclosure.
It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present disclosure without departing from the spirit and scope of the embodiments of the disclosure. Thus, if such modifications and variations of the embodiments of the present specification fall within the scope of the claims and the equivalents thereof, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A federal learning-based transaction splitting method, comprising:
acquiring first transaction information of an order to be transacted of a target product;
determining a trade strategy of the order to be traded by inputting the first trade information into an order splitting model, the trade strategy comprising the number of sub-trade orders into which the order to be traded is split and second trade information of each of the sub-trade orders, wherein,
the order splitting model is a shared model generated based on a federal learning algorithm or a local model corresponding to the target product generated based on the federal learning algorithm, and the shared model is generated based on the local model corresponding to the target product and local models corresponding to one or more reference products.
2. The method according to claim 1, wherein the shared model is obtained by:
acquiring multiple sets of training data corresponding to multiple products, wherein the multiple products comprise the target product and the one or more reference products, and the set of training data corresponding to each product comprises sample transaction information of a historical transaction order of the product and a gold standard transaction index of the historical transaction order;
performing at least one global iteration based on the plurality of sets of training data to obtain the shared model, wherein each global iteration comprises:
updating the initial local model corresponding to the global iteration by using training data corresponding to each product to obtain an intermediate local model;
determining an intermediate shared model based on the intermediate local model corresponding to each product;
designating the intermediate shared model as the shared model or as an initial local model for the next round of global iterations.
3. The method according to claim 2, wherein the placement date of the historical trade order is within a last preset number of days, and/or a difference between the trade price of the historical trade order and an average trade price of all orders within the last preset number of days is not higher than a preset threshold.
4. The method of claim 2, wherein the gold standard transaction index of the historical trade order is obtained by:
executing multiple simulated orders based on the transaction environment corresponding to the historical transaction order and obtaining a transaction index of each simulated order;
and selecting the gold standard transaction index from the multiple sets of transaction indexes obtained from the multiple simulated orders.
5. The method of claim 2, wherein updating, for each of the products, the initial local model corresponding to the global iteration with training data corresponding to the product to obtain the intermediate local model comprises:
performing at least one local iteration on the initial local model corresponding to the global iteration by using the training data corresponding to the product to obtain the intermediate local model, wherein each local iteration comprises:
inputting sample transaction information of the historical transaction order in training data corresponding to the product into the initial local model, and determining a predicted transaction strategy of the historical transaction order;
determining a predicted trade index for the historical trade order based on a predicted trade strategy for the historical trade order;
based on the predicted trade index and the gold standard trade index, designating the initial local model as the intermediate local model, or updating the initial local model to perform a next round of local iteration.
6. The method of claim 2, wherein determining the intermediate shared model based on the intermediate local model corresponding to each of the products comprises:
determining the weight of the intermediate local model corresponding to each product based on characteristic information of the product;
and carrying out weighted aggregation on the plurality of intermediate local models based on the weights of the plurality of intermediate local models corresponding to the plurality of products to determine the intermediate shared model.
7. The method of claim 2, wherein determining the intermediate shared model based on the intermediate local model corresponding to each of the products comprises:
determining the weight of the intermediate local model corresponding to each product based on the accuracy of the intermediate local model;
and carrying out weighted aggregation on the plurality of intermediate local models based on the weights of the plurality of intermediate local models corresponding to the plurality of products to determine the intermediate shared model.
8. The method according to claim 2, wherein the local model corresponding to the target product is obtained by:
and taking the intermediate local model corresponding to the target product obtained in one global iteration of the at least one global iteration as the local model corresponding to the target product.
9. The method according to claim 1, wherein the method further comprises:
and selecting, by evaluating the shared model and the local model corresponding to the target product, one of the shared model and the local model as the order splitting model.
10. A federal learning-based transaction splitting system, comprising a processor configured to execute the federal learning-based transaction splitting method of any one of claims 1-9.
CN202311032958.4A 2023-08-16 2023-08-16 Transaction splitting method and system based on federal learning Pending CN116777630A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311032958.4A CN116777630A (en) 2023-08-16 2023-08-16 Transaction splitting method and system based on federal learning

Publications (1)

Publication Number Publication Date
CN116777630A true CN116777630A (en) 2023-09-19

Family

ID=87994815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311032958.4A Pending CN116777630A (en) 2023-08-16 2023-08-16 Transaction splitting method and system based on federal learning

Country Status (1)

Country Link
CN (1) CN116777630A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580614A (en) * 2019-09-06 2019-12-17 北京神州同道智能信息技术有限公司 Whole-market multi-variety fund financing management system based on intelligent mass strategy processing platform
CN113159941A (en) * 2021-02-02 2021-07-23 上海卡方信息科技有限公司 Intelligent streaming transaction execution method and device
CN114549132A (en) * 2022-02-23 2022-05-27 浙江同花顺智富软件有限公司 Intelligent transaction order splitting method, equipment, system and medium
CN115239104A (en) * 2022-07-12 2022-10-25 深圳市金证科技股份有限公司 Algorithm transaction evaluation method, device, equipment and storage medium
US20220374544A1 (en) * 2021-05-05 2022-11-24 Jpmorgan Chase Bank, N.A. Secure aggregation of information using federated learning


Similar Documents

Publication Publication Date Title
WO2020143344A1 (en) Method and device for warehouse receipt pledge financing based on blockchain architecture
WO2020143341A1 (en) Blockchain architecture-based warehouse receipt pledge financing assessment method and device
CN106462795B (en) System and method for allocating capital to trading strategies for big data trading in financial markets
CN110276668A (en) The method and system that finance product intelligently pushing, matching degree determine
Huang et al. Optimal inventory control with sequential online auction in agriculture supply chain: An agent-based simulation optimisation approach
CN108550090A (en) A kind of processing method and system of determining source of houses pricing information
AU2006252169A1 (en) System and method for processing composite trading orders
CN111340244B (en) Prediction method, training method, device, server and medium
CN107730386A (en) Generation method, device, storage medium and the computer equipment of investment combination product
Priyadarshini et al. A comparative analysis for forecasting the NAV's of Indian mutual fund using multiple regression analysis and artificial neural networks
WO2020057659A1 (en) A computer implemented method for compiling a portfolio of assets
CN112613997A (en) Method and apparatus for forecasting combined investment of money fund
Pinto et al. Strategic participation in competitive electricity markets: Internal versus sectorial data analysis
US20220292597A1 (en) System and method for valuation of complex assets
KR20120032606A (en) Stock investment system enabling participattion of stock investment clients and method thereof
US10325319B2 (en) Web platform with customized agents for automated web services
US7801769B1 (en) Computing a set of K-best solutions to an auction winner-determination problem
US20170372420A1 (en) Computer based system and methodology for identifying trading opportunities associated with optionable instruments
CN104517177A (en) Digital framework for business model innovation
CN113302644B (en) Transaction plan management system
US11257108B2 (en) Systems and methods for dynamic product offerings
US20230260036A1 (en) Portfolio optimization and transaction generation
WO2018116098A1 (en) Determining particulars of an item in an online marketplace
CN116777630A (en) Transaction splitting method and system based on federal learning
CN112581153B (en) Resource allocation method, resource allocation device, storage medium, and electronic apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination