CN113487349A - Fare adjusting method, device, readable medium and equipment - Google Patents

Fare adjusting method, device, readable medium and equipment Download PDF

Info

Publication number
CN113487349A
CN113487349A (application CN202110741251.5A)
Authority
CN
China
Prior art keywords
updated
network
network block
global model
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110741251.5A
Other languages
Chinese (zh)
Inventor
田松
杨永凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Travelsky Technology Co Ltd
Original Assignee
China Travelsky Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Travelsky Technology Co Ltd
Priority to CN202110741251.5A
Publication of CN113487349A

Classifications

    • G06Q 30/0206 — Price or cost determination based on market factors
    • G06Q 30/0202 — Market predictions or forecasting for commercial activities
    • G06F 18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/02 — Neural networks
    • G06Q 50/40 — Business processes related to the transportation industry


Abstract

The application discloses a fare adjustment method, apparatus, readable medium and device. The method comprises inputting sales-impact data of a target flight into a pre-constructed global model, which obtains and outputs a predicted fare for the target flight. The pre-constructed global model is obtained by distributing a plurality of network blocks, disassembled from a global model to be trained, to a plurality of working nodes, each of which trains and adjusts the parameters of its assigned network blocks using a fare training data set. The sales-impact data of the target flight comprise structured data and unstructured data that influence the flight's sales, and the global model to be trained is a neural network model; compared with the prior art, the predicted fare obtained by the pre-constructed global model is therefore more accurate than a fare calculated from structured data alone.

Description

Fare adjusting method, device, readable medium and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a readable medium, and a device for adjusting a fare.
Background
In the prior art, in order to improve profitability, many airlines adjust flight fares according to current factors that influence ticket sales. At present, the predicted fare is mainly calculated from structured data affecting ticket sales, using traditional statistical algorithms such as multivariate regression and trend fitting, and the current fare is then adjusted to the calculated predicted fare. The structured data affecting ticket sales are mainly quantifiable data, such as the current number of ticket bookings.
However, many factors affect ticket sales, and some of them are unstructured data that cannot be quantified, such as flight time and weather conditions. Because the existing fare-prediction methods use traditional statistical algorithms, they cannot take such unstructured data into account, so the predicted fare they finally calculate is not very accurate.
Disclosure of Invention
In view of these defects of the prior art, the present application provides a fare adjustment method, apparatus, readable medium and device, with the aim of obtaining a more accurate predicted fare for a target flight by inputting sales-impact data comprising both structured and unstructured data into a pre-constructed global model.
The application discloses in a first aspect a fare adjustment method, comprising:
acquiring sales condition influence data of the current target flight; wherein the sales impact data of the target flight comprises: structured data affecting the sales condition of the target flight and unstructured data affecting the sales condition of the target flight;
inputting the sales condition influence data of the target flight into a pre-constructed global model, and obtaining and outputting the predicted fare of the target flight by the pre-constructed global model; the pre-constructed global model is obtained by distributing a plurality of network blocks disassembled from the global model to be trained to a plurality of working nodes and training and adjusting parameters of the distributed network blocks by each working node through a fare training data set; the fare training dataset comprising: historical sales impact data for a plurality of flights, and historical actual fares for each of the flights; the predicted fare output by the pre-constructed global model is used as the current fare of the target flight; the global model to be trained is a neural network model.
Optionally, in the method for adjusting a fare, a process of constructing the pre-constructed global model includes:
decomposing a global model to be trained into a plurality of network blocks;
selecting a plurality of network blocks to be updated from the disassembled network blocks;
for each selected network block to be updated, sending the network block to be updated, together with the compressed network block corresponding to every other network block in the global model to be trained, to the working node designated for that network block to be updated. The designated working node combines the received network block to be updated and the compressed network blocks into a model with the same structure as the global model to be trained and, using its local fare training data set, continuously trains the combined model by adjusting the parameters of the network block to be updated until the combined model meets a preset convergence condition, whereupon the adjusted parameters of the network block to be updated are taken as its updated parameters. The working nodes designated for the network blocks to be updated are different from one another, and the compressed network block corresponding to a network block is obtained by compressing that network block;
receiving updated parameters of the network block to be updated, which are sent by each working node;
for each network block to be updated, updating the parameters of the network block to be updated in the global model to be trained into updated parameters of the network block to be updated;
judging whether the current global model to be trained meets training termination conditions;
if the current global model to be trained does not meet the training termination condition, returning to the step of selecting a plurality of network blocks to be updated from the disassembled network blocks;
and if the current global model to be trained meets the training termination condition, determining the current global model to be trained as the pre-constructed global model.
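The server-side construction loop described in the steps above can be sketched as follows. This is a minimal Python illustration under assumed names (`train_on_worker`, `build_global_model`, a stubbed-out worker update), not the patent's actual implementation; the real worker performs the combined-model training described in the text.

```python
import random

def train_on_worker(block_id, block_params, frozen_blocks):
    """Stub for worker-side training: the worker would combine its trainable
    block with compressed copies of the other blocks and train on its local
    fare data until convergence. Here the update is a placeholder."""
    return [p + random.uniform(-0.01, 0.01) for p in block_params]

def build_global_model(global_params, num_rounds=3, blocks_per_round=2):
    """One possible shape of the server loop: select blocks to update,
    dispatch each to a distinct worker, fold the returned parameters back
    into the global model, and repeat until a termination condition
    (here simplified to a fixed round count)."""
    block_ids = list(global_params)
    for _ in range(num_rounds):
        selected = random.sample(block_ids, blocks_per_round)
        for bid in selected:  # each selected block goes to a different worker
            frozen = {b: global_params[b] for b in block_ids if b != bid}
            updated = train_on_worker(bid, global_params[bid], frozen)
            global_params[bid] = updated  # fold the worker's update back in
    return global_params

model = {f"block{i}": [0.0] * 4 for i in range(4)}
trained = build_global_model(model)
```

The key structural point is that only the selected block's parameters change in each dispatch; the other blocks are sent in frozen, compressed form.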
Optionally, in the fare adjustment method, differential privacy noise is added to the updated parameters of the network block to be updated that are sent by each working node.
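The text states only that differential-privacy noise is added to the uploaded parameters; it does not specify the mechanism. As a hedged illustration, the Laplace mechanism is one standard choice — the `sensitivity` and `epsilon` values below are placeholders, not values from the patent.

```python
import random

def add_dp_noise(params, sensitivity=1.0, epsilon=0.5):
    # Laplace mechanism with scale b = sensitivity / epsilon. A Laplace(0, b)
    # sample can be drawn as the difference of two exponential draws of mean b.
    b = sensitivity / epsilon
    return [p + random.expovariate(1.0 / b) - random.expovariate(1.0 / b)
            for p in params]

noised = add_dp_noise([0.2, -1.3, 0.7])
```

Smaller `epsilon` gives stronger privacy but noisier uploaded parameters, which is the usual trade-off in this setting.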
Optionally, in the method for adjusting a fare, before the selecting the plurality of network blocks to be updated from the disassembled plurality of network blocks, the method further includes:
and respectively compressing each disassembled network block to obtain a compressed network block corresponding to each network block.
Optionally, in the method for adjusting a fare, after the sending, for each selected network block to be updated, the network block to be updated and the compressed network block corresponding to each network block in the global model to be trained, except the network block to be updated, to a working node designated for the network block to be updated, the method further includes:
receiving updated parameters of the compressed network blocks corresponding to the network blocks to be updated, which are sent by each working node; the updated parameters of the compressed network blocks corresponding to the network blocks to be updated are the parameters of the compressed network blocks obtained after the adjusted network blocks to be updated are compressed when the combined model of the working nodes meets the preset convergence condition;
and updating the parameters of the compressed network blocks corresponding to the network blocks to be updated into the updated parameters of the compressed network blocks corresponding to the network blocks to be updated aiming at the compressed network blocks corresponding to each network block to be updated.
Optionally, in the method for adjusting a fare, the compressing each disassembled network block to obtain a compressed network block corresponding to each network block includes:
inputting each disassembled network block into a compression model respectively, and obtaining and outputting a compression network block corresponding to each network block by the compression model; the compression model is obtained by training a neural network model through a network block training data set; the network block training dataset comprising: a plurality of original network blocks and equivalent compressed network blocks corresponding to each original network block; and the operation processing effect of the original network block is consistent with the operation processing effect of the equivalent compression network block corresponding to the original network block.
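The compression model described above is itself a trained neural network and cannot be reproduced here. As a purely hypothetical stand-in, magnitude pruning conveys the idea of a compressed block that approximates the original block's effect at lower cost; this is an illustrative assumption, not the patent's learned compression.

```python
def compress_block(params, keep_ratio=0.5):
    # Keep only the largest-magnitude parameters and zero the rest, so the
    # compressed block roughly preserves the original block's behaviour.
    # Stand-in illustration only, not the patent's compression model.
    k = max(1, int(len(params) * keep_ratio))
    threshold = sorted((abs(p) for p in params), reverse=True)[k - 1]
    return [p if abs(p) >= threshold else 0.0 for p in params]
```

A zeroed parameter list can be stored and transmitted sparsely, which is what makes sending compressed copies of the frozen blocks to every worker affordable.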
A second aspect of the present application discloses a fare adjustment apparatus, comprising:
the acquisition unit is used for acquiring sales condition influence data of the current target flight; wherein the sales impact data of the target flight comprises: structured data affecting the sales condition of the target flight and unstructured data affecting the sales condition of the target flight;
the output unit is used for inputting the sales condition influence data of the target flight into a pre-constructed global model, and obtaining and outputting the predicted fare of the target flight by the pre-constructed global model; the pre-constructed global model is obtained by distributing a plurality of network blocks disassembled from the global model to be trained to a plurality of working nodes and training and adjusting parameters of the distributed network blocks by each working node through a fare training data set; the fare training dataset comprising: historical sales impact data for a plurality of flights, and historical actual fares for each of the flights; wherein the predicted fare output by the pre-built global model is used as the current fare of the target flight; the global model to be trained is a neural network model.
Optionally, in the above adjusting device for fare, further comprising:
the disassembly unit is used for disassembling the global model to be trained into a plurality of network blocks;
the selecting unit is used for selecting a plurality of network blocks to be updated from the disassembled network blocks;
a first sending unit, configured to send, for each selected network block to be updated, the network block to be updated, together with the compressed network block corresponding to every other network block in the global model to be trained, to the working node designated for that network block to be updated. The designated working node combines the received network block to be updated and the compressed network blocks into a model with the same structure as the global model to be trained and, using the fare training data set, continuously trains the combined model by adjusting the parameters of the network block to be updated until the combined model meets the preset convergence condition, whereupon the adjusted parameters of the network block to be updated are taken as its updated parameters. The working nodes designated for the network blocks to be updated are different from one another, and the compressed network block corresponding to a network block is obtained by compressing that network block;
a first receiving unit, configured to receive updated parameters of the network block to be updated, where the updated parameters are sent by each working node;
a first updating unit, configured to update, for each network block to be updated, a parameter of the network block to be updated in the global model to be trained to an updated parameter of the network block to be updated;
the judging unit is used for judging whether the current global model to be trained meets training termination conditions;
the return unit is used for returning to the selection unit if the current global model to be trained does not accord with the training termination condition;
and the determining unit is used for determining the current global model to be trained as the pre-constructed global model if the current global model to be trained meets the training termination condition.
Optionally, in the fare adjustment apparatus, differential privacy noise is added to the updated parameters of the network block to be updated that are sent by each working node.
Optionally, in the above adjusting device for fare, further comprising:
and the compression unit is used for respectively compressing each disassembled network block to obtain a compressed network block corresponding to each network block.
Optionally, in the above adjusting device for fare, further comprising:
a second receiving unit, configured to receive updated parameters of the compressed network block corresponding to the network block to be updated, where the updated parameters are sent by each of the working nodes; the updated parameters of the compressed network blocks corresponding to the network blocks to be updated are the parameters of the compressed network blocks obtained after the adjusted network blocks to be updated are compressed when the combined model of the working nodes meets the preset convergence condition;
and a second updating unit, configured to update, for each compressed network block corresponding to the network block to be updated, a parameter of the compressed network block corresponding to the network block to be updated to an updated parameter of the compressed network block corresponding to the network block to be updated.
Optionally, in the above fare adjustment device, the compression unit includes:
the compression subunit is used for respectively inputting each disassembled network block into a compression model, and obtaining and outputting a compression network block corresponding to each network block by the compression model; the compression model is obtained by training a neural network model through a network block training data set; the network block training dataset comprising: a plurality of original network blocks and equivalent compressed network blocks corresponding to each original network block; and the operation processing effect of the original network block is consistent with the operation processing effect of the equivalent compression network block corresponding to the original network block.
A third aspect of the application discloses a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements the method as described in any of the first aspects above.
A fourth aspect of the present application discloses a fare adjustment device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as in any one of the first aspects above.
It can be seen from the foregoing technical solutions that, in the fare adjustment method provided in the embodiments of the present application, the sales-impact data of the target flight are input into the pre-constructed global model, which obtains and outputs the predicted fare of the target flight. The sales-impact data of the target flight comprise structured data and unstructured data that influence the flight's sales, and the global model to be trained is a neural network model, so the predicted fare produced by the pre-constructed global model takes both kinds of data into account; the fare it outputs is therefore a more accurate choice for the current fare of the target flight. Moreover, the pre-constructed global model is obtained by distributing a plurality of network blocks, disassembled from the global model to be trained, to a plurality of working nodes, each of which trains and adjusts the parameters of its assigned network blocks using a fare training data set. Because the training is completed by multiple working nodes together, training efficiency is high, the computational complexity of the training process is low, and construction can be completed quickly.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a fare adjustment method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a process for constructing a pre-constructed global model according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a fare adjustment system according to an embodiment of the present application;
fig. 4 is a schematic diagram of a training process inside a working node according to an embodiment of the present application;
fig. 5 is a flowchart illustrating a method for updating parameters of a compressed network block according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a fare adjustment device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the embodiment of the present application discloses a method for adjusting a fare, which is applied to a server and specifically includes the following steps:
s101, obtaining sales condition influence data of the current target flight, wherein the sales condition influence data of the target flight comprises the following steps: structured data that affects the sales of the target flight and unstructured data that affects the sales of the target flight.
The sales-impact data of the target flight are data that can affect the sales market of the target flight. The structured data influencing the sales of the target flight are data on quantifiable factors, and may include at least one of: the current number of bookings for the target flight, the number of bookings for competing flights on the same route in the same time period, the current oil price, the current fare of the target flight, and the current fares of competing flights. The unstructured data influencing the sales of the target flight are data on unquantifiable factors, and may include at least one of: the departure time of the target flight, the weather on the day of the flight, special events, policies, and holidays.
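One way to turn the factors above into model input is sketched below. The field names and encodings are illustrative assumptions — the patent does not fix a feature layout — but they show the usual pattern: structured values enter directly, while unstructured factors such as weather are mapped to numeric encodings.

```python
def encode_flight_features(structured, unstructured):
    # Quantifiable, structured inputs enter the feature vector directly.
    features = [
        float(structured["bookings"]),
        float(structured["competitor_bookings"]),
        float(structured["oil_price"]),
        float(structured["current_fare"]),
        float(structured["competitor_fare"]),
    ]
    # Unstructured factors become numeric encodings, e.g. one-hot weather
    # categories and a normalised departure hour. Categories are assumed.
    weather_kinds = ["clear", "rain", "snow", "fog"]
    features += [1.0 if unstructured["weather"] == w else 0.0
                 for w in weather_kinds]
    features.append(unstructured["departure_hour"] / 24.0)
    features.append(1.0 if unstructured["is_holiday"] else 0.0)
    return features

vec = encode_flight_features(
    {"bookings": 120, "competitor_bookings": 95, "oil_price": 78.4,
     "current_fare": 950.0, "competitor_fare": 910.0},
    {"weather": "rain", "departure_hour": 18, "is_holiday": False},
)
```

In practice free-text factors such as special events would need a richer encoding (e.g. learned embeddings), which is precisely what motivates using a neural network rather than a traditional statistical algorithm.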
The server may extract sales impact data for the current target flight from the flight database. And updating and storing the sales condition influence data of the target flight in real time in the flight database.
S102, inputting sales condition influence data of the target flight into a pre-constructed global model, and obtaining and outputting a predicted fare of the target flight by the pre-constructed global model, wherein the pre-constructed global model is obtained by distributing a plurality of network blocks disassembled from the global model to be trained into a plurality of working nodes, and each working node trains and adjusts parameters of the distributed network blocks through a fare training data set, and the fare training data set comprises: the historical sales condition influence data of a plurality of flights and the historical actual fare of each flight, the predicted fare output by the pre-constructed global model is used as the current fare of the target flight, and the global model to be trained is a neural network model.
Because the pre-constructed global model is obtained by distributing the network blocks disassembled from the global model to be trained to a plurality of working nodes, each of which trains and adjusts the parameters of its assigned blocks using a fare training data set, and because the global model to be trained is a neural network model, the pre-constructed global model is also a neural network model. A neural network model can process feature vectors derived from various kinds of structured and unstructured data, so the pre-constructed global model can process both the structured data and the unstructured data influencing the sales of the target flight that are input into it.
The fare training data set comprises historical sales-impact data for a plurality of flights and the historical actual fare of each flight, and the historical sales-impact data of a flight comprise both historical structured data and historical unstructured data influencing the flight's sales. The pre-constructed global model can therefore predict the fare of the current target flight from its sales-impact data; the predicted fare is the model's best estimate given the current sales-impact data, that is, the price expected to bring the greatest benefit to the airline. The predicted fare of the target flight is used as its current fare: the current fare of the target flight can be adjusted and updated to the predicted fare.
In the prior art, the predicted fare of the target flight is calculated with a traditional statistical algorithm, which can process only the quantifiable sales-impact data of the target flight and cannot process unquantifiable, unstructured data. A predicted fare calculated in this way ignores some unquantified factors influencing the sales of the target flight, so its accuracy is not very high and it cannot bring great benefit to the airline.
In the embodiments of the present application, the pre-constructed global model is obtained by distributing the network blocks disassembled from the global model to be trained to a plurality of working nodes, each of which trains and adjusts the parameters of its assigned blocks using a fare training data set; since the global model to be trained is a neural network model, the pre-constructed global model can process both the structured and the unstructured data influencing the sales of the target flight and can predict the fare from both. Because the predicted fare takes the unstructured data into account as well as the structured data, it is more accurate than in the prior art, which helps the airline adjust the fare of the target flight precisely and brings it great benefit.
Specifically, the fare training dataset for training the global model to be trained in the embodiment of the present application includes: historical sales for multiple flights affect the data, as well as the historical actual fares for each flight. Historical sales data for flights, comprising: historical structured data that affects flight sales and historical unstructured data that affects flight sales. Specifically, the construction process of the fare training data set comprises the following steps: for each preset historical time point, collecting historical sales condition influence data of each flight at the historical time point and historical actual fare of each flight after the preset historical time point. The historical actual fare of the flight after the preset historical time point refers to the fare after the adjustment of the fare of the flight is actually made by the airline after the preset historical time point. Through the fare training data set, the global model to be trained can learn the historical actual fare of each flight after the preset historical time point through the historical sales condition influence data of each flight at the historical time point, and the pre-constructed global model capable of predicting the fare is obtained through continuously learning the fare adjusting operation process of the airline company.
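Under the definitions above, assembling the fare training data set might look like the following sketch. The record layout (`history` mapping a flight to `(time_point, impact_data, fare_actually_set_afterwards)` tuples) is an assumption for illustration.

```python
def build_fare_training_set(history):
    """Each training example pairs the sales-impact data observed at a
    historical time point with the fare the airline actually charged
    after that point, as described in the text."""
    dataset = []
    for flight_id, records in history.items():
        for time_point, impact_data, fare_after in records:
            dataset.append({"flight": flight_id,
                            "features": impact_data,
                            "label": fare_after})
    return dataset

examples = build_fare_training_set({
    "CA1501": [("2021-05-01", {"bookings": 80}, 1200.0),
               ("2021-05-08", {"bookings": 130}, 1350.0)],
})
```

The label is deliberately the fare set *after* the snapshot, so the model learns to imitate the airline's own subsequent adjustment decisions.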
In addition, compared with the training of neural network models in the prior art, training the pre-constructed global model in the embodiments of the present application takes less time and demands less computation. In the prior art, a neural network model is usually trained on a single computer; when a highly accurate model is required, the computation demanded of that one computer is very large and the training process is very long. In the embodiments of the present application, the network blocks disassembled from the global model to be trained are distributed to a plurality of working nodes, and each working node trains and adjusts the parameters of only its assigned blocks, so the parameter training of the network blocks is spread across multiple working nodes rather than carried out on a single node.
It should be noted that one working node may be assigned only one network block; however, when the number of working nodes is small and the number of network blocks is large, one working node may be assigned a plurality of network blocks. For example, if the global model to be trained is disassembled into 8 network blocks and there are only 4 working nodes, each working node may be responsible for the parameter training of 2 network blocks.
It should be noted that not all network blocks disassembled from the global model to be trained necessarily need their parameters trained and adjusted; a subset of the disassembled network blocks may be selected for training and adjustment. Moreover, the global model to be trained is not necessarily a neural network model that has never been trained; it may also be a previously trained global model, which is trained again to obtain the pre-constructed global model.
Optionally, referring to fig. 2, in an embodiment of the present application, a process of constructing a pre-constructed global model includes:
S201, decomposing the global model to be trained into a plurality of network blocks.
The global model to be trained is decomposed into a plurality of network blocks, where a network block is equivalent to a sub-network of the model. A network block may consist of one layer or multiple layers. The specific way of decomposing the global model to be trained into network blocks may be set manually. The global model to be trained may be an Artificial Neural Network (ANN) model or another type of neural network model.
It should be noted that the finer the decomposition of the global model to be trained, the fewer parameters each network block contains, and the smaller the computational workload when the block is subsequently assigned to a working node for training.
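The decomposition of step S201 can be pictured as slicing an ordered list of layers at chosen boundaries; the minimal sketch below illustrates the idea (the layer names are placeholders):

```python
def decompose(layers, boundaries):
    """Split an ordered list of layers into contiguous network blocks;
    `boundaries` are the layer indices where a new block begins."""
    blocks, start = [], 0
    for end in list(boundaries) + [len(layers)]:
        blocks.append(layers[start:end])
        start = end
    return blocks

# An illustrative five-layer ANN split into three blocks B1, B2, B3.
layers = ["dense_16x32", "relu", "dense_32x32", "relu", "dense_32x1"]
b1, b2, b3 = decompose(layers, boundaries=[2, 4])
# A finer boundary list yields more, smaller blocks with fewer parameters each.
```

The boundary list here plays the role of the manually set decomposition rule mentioned above.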
Alternatively, the method shown in fig. 2 may be performed periodically, i.e., the process of building the pre-constructed global model is performed in each training period. For each training period, when step S201 is executed, the global model to be trained may be the pre-constructed global model obtained in the previous training period.
S202, selecting a plurality of network blocks to be updated from the disassembled network blocks.
The specific selection rule may be preset. For example, network blocks may be selected randomly according to a set selection probability value, or selected arbitrarily; all of the disassembled network blocks may also be selected as network blocks to be updated.
A network block to be updated refers to a network block whose parameters need to be trained and adjusted.
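One possible form of the probabilistic selection rule mentioned above, sketched under the assumption that each block is drawn independently with a set probability:

```python
import random

def select_blocks_to_update(block_ids, p=0.5, rng=None):
    """Select each disassembled network block independently with
    probability p; fall back to one random block so a training round
    always has at least one network block to be updated."""
    rng = rng or random.Random()
    chosen = [b for b in block_ids if rng.random() < p]
    return chosen or [rng.choice(block_ids)]

picked = select_blocks_to_update(["B1", "B2", "B3"], p=0.5, rng=random.Random(0))
# With p=1.0 every disassembled block is selected, matching the
# "select all network blocks" variant described above.
```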
Optionally, in a specific embodiment of the present application, before performing step S202, the method further includes:
and respectively compressing each disassembled network block to obtain a compressed network block corresponding to each network block.
Because a network block often contains a large amount of redundancy, compressing it removes much of that redundant data, so the computing resources it occupies are greatly reduced while its operation effect remains almost the same as that of the original network block. Compressing a network block to obtain a compressed network block thus simplifies the network and saves computing resources; under inputs of the same distribution, the output distribution of the compressed network block approaches that of the uncompressed network block as closely as possible. That is, the operation effect of the compressed network block corresponding to a network block is consistent with that of the network block.
Optionally, in a specific embodiment of the present application, an implementation of compressing each disassembled network block to obtain the compressed network block corresponding to each network block includes:
and inputting each disassembled network block into a compression model respectively, and obtaining and outputting a compression network block corresponding to each network block by the compression model.
Wherein the compression model is obtained by training a neural network model on a network block training data set, and the network block training data set includes: a plurality of original network blocks, and an equivalent compressed network block corresponding to each original network block. The operation processing effect of an original network block is consistent with that of its corresponding equivalent compressed network block.
Specifically, each disassembled network block may be compressed through the compression model to obtain the compressed network block corresponding to it. The compression model is obtained by training a neural network model on the plurality of original network blocks and the equivalent compressed network block corresponding to each original network block: the neural network model continuously learns to compress an original network block into its corresponding equivalent compressed network block, yielding the compression model. The compression model can then compress each disassembled network block into a corresponding compressed network block whose operation processing effect is consistent with that of the network block.
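The disclosure does not fix a particular compression method. One standard way to shrink a block while keeping its input-output behaviour nearly unchanged is a truncated-SVD (low-rank) factorisation of a layer's weight matrix; the sketch below is an illustrative assumption, not the patented compression model:

```python
import numpy as np

def compress_linear_block(W, rank):
    """Replace one m-by-n weight matrix with two factors of total size
    rank*(m+n); for small `rank` this cuts parameters while keeping the
    block's outputs close to the original (truncated SVD)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # m x rank, columns scaled by singular values
    B = Vt[:rank, :]             # rank x n
    return A, B

rng = np.random.default_rng(0)
# An illustrative 32x32 block weight that is exactly rank 4.
W = rng.normal(size=(32, 4)) @ rng.normal(size=(4, 32))
A, B = compress_linear_block(W, rank=4)
x = rng.normal(size=32)
# Outputs of the original and compressed block agree closely on the same input.
err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
```

Here the factor pair (A, B) holds 256 parameters versus 1024 in W, while the output distribution of the compressed block matches that of the original block, as the text requires.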
Optionally, in a specific embodiment of the present application, the compression model may be modified according to a preset modification period. To ensure that the operation processing effect of the compressed network block output by the compression model remains consistent with that of the original network block, the compression model is retrained according to the preset modification period to maintain its accuracy.
Optionally, the same compression method is adopted when compressing each disassembled network block, so that the compressed network blocks corresponding to the network blocks have the same specification.
Optionally, the compressed network blocks corresponding to each network block are spliced according to the structure of the global model to be trained, so that the compressed global model to be trained can be obtained.
Specifically, for example, referring to fig. 3, the server may execute steps S201 to S202 through the model disassembling module, the model management module, the compression modification module, and the network selection module. The model disassembling module disassembles the ANN model (i.e., the global model to be trained) into a network block B1, a network block B2 and a network block B3, and the disassembled global model to be trained and the corresponding compressed global model are stored in the model management module. The compressed global model includes a compressed network block B1 corresponding to the network block B1, a compressed network block B2 corresponding to the network block B2, and a compressed network block B3 corresponding to the network block B3; its structure is consistent with that of the global model. The compressed network block corresponding to each network block is obtained by compressing that network block through the compression model. The network selection module selects the network blocks to be updated from the network blocks B1, B2 and B3 obtained by disassembling the global model in the model management module. The compression modification module may perform correction training on the compression model according to the preset modification period to ensure the compression precision.
S203, for each selected network block to be updated: sending the network block to be updated, together with the compressed network block corresponding to each network block in the global model to be trained other than the network block to be updated, to the designated working node of the network block to be updated; combining, by the designated working node, the received network block to be updated and each compressed network block into a model with the same structure as the global model to be trained; training the combined model using a local fare training data set by continuously adjusting the parameters of the network block to be updated in the combined model until the combined model meets a preset convergence condition; and taking the adjusted parameters of the network block to be updated as the updated parameters of the network block to be updated.
The designated working nodes of the network blocks to be updated are different from each other; that is, each working node is assigned one network block to be updated. The compressed network block corresponding to a network block is obtained by compressing that network block; for the specific compression process, reference may be made to the description of compressing network blocks in the above embodiment.
For each network block to be updated, the network block to be updated and the compressed network blocks corresponding to the other network blocks in the global model to be trained are sent together to the working node designated for that network block. Because the operation processing effect of a compressed network block is consistent with that of the original network block, the combined model obtained when the working node splices the network block to be updated and the compressed network blocks into a model with the same structure as the global model to be trained also has an operation processing effect consistent with that of the global model to be trained, and training the combined model is equivalent to training the global model to be trained. However, the compressed network blocks in the combined model contain fewer parameters, so the amount of computation required to train the combined model is lower than that required to train the global model to be trained.
Specifically, the working node trains the combined model as follows: the historical sales condition influence data of each flight in the local fare training data set is input into the combined model, which obtains and outputs a predicted fare for each flight; the parameters of the network block to be updated in the combined model are adjusted according to the error between the predicted fare and the historical actual fare of each flight; and when that error meets the preset convergence condition, the adjusted parameters of the network block to be updated in the combined model are taken as the updated parameters of the network block to be updated.
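The scheme of step S203 — frozen compressed blocks surrounding one trainable block — can be reduced to a scalar caricature: only the parameter of the network block to be updated receives gradient updates, while the compressed blocks' parameters stay fixed. All numbers below are illustrative:

```python
def train_block_in_combined_model(x, y, w2, c1=0.5, c3=2.0, lr=0.01, steps=500):
    """Combined model: frozen compressed blocks with parameters c1, c3
    surround the trainable block parameter w2. Only w2 is adjusted,
    by gradient descent on the squared fare error."""
    for _ in range(steps):
        pred = c3 * (w2 * (c1 * x))          # forward pass through the combined model
        grad = 2 * (pred - y) * c3 * c1 * x  # d/dw2 of (pred - y)^2
        w2 -= lr * grad                      # update only the block to be updated
    return w2

# One training pair; the frozen blocks' parameters c1, c3 never change.
w2 = train_block_in_combined_model(x=1.0, y=3.0, w2=0.0)
# After convergence c3 * w2 * c1 * x matches y, i.e. w2 is close to 3.0.
```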
Optionally, in a specific embodiment of the present application, after the working node uses the parameters of the network block to be updated in the adjusted combined model as the updated parameters of the network block to be updated, the method further includes:
and when the combined model meets a preset convergence condition, compressing the adjusted network block to be updated to obtain a compressed network block corresponding to the adjusted network block to be updated, and taking the compressed network block corresponding to the adjusted network block to be updated as an updated parameter of the compressed network block corresponding to the network block to be updated.
In addition to the network block to be updated in the global model to be trained, the server may also update the parameters of the corresponding compressed network block in the compressed global model to be trained, and send the updated compressed network block to a working node when the steps shown in fig. 2 are subsequently re-executed. Specifically, when the combined model on the working node meets the preset convergence condition, the adjusted network block to be updated is compressed to obtain the corresponding compressed network block, whose parameters serve as the updated parameters of the compressed network block corresponding to the network block to be updated; the server can use these to update its local copy of that compressed network block. Alternatively, the working node sends only the updated parameters of the network block to be updated to the server; the server updates the network block accordingly, then compresses the updated network block to obtain the corresponding compressed network block and replaces the original compressed network block with it.
Optionally, since the updated parameters of the network block to be updated are theoretically consistent with the updated parameters of the corresponding compressed network block, the updated parameters of the network block to be updated may also be used directly as the updated parameters of the compressed network block corresponding to the network block to be updated.
For example, referring to fig. 3, in the working node the training of the network block is completed through the model training module, and the adjusted network block to be updated is compressed through the model compression module. Specifically, the server shown in fig. 3 sends the network block B2 to be updated, the compressed network block B1 and the compressed network block B3 to the working node through its communication module; the working node receives them through its communication module and splices the network block B2 to be updated, the compressed network block B1 and the compressed network block B3 according to the structure of the global model to obtain the combined model. The working node then trains the combined model using its local fare training data set, continuously adjusting the parameters of the network block B2 to be updated until the combined model meets the preset convergence condition, and takes the parameters of the adjusted network block B'2 as the updated parameters of the network block B2 to be updated. The model compression module compresses the adjusted network block B'2 to obtain a compressed network block B'2, and takes the parameters of the compressed network block B'2 as the updated parameters of the compressed network block B2 corresponding to the network block B2 to be updated. The model compression module may compress the adjusted network block B'2 using a compression model, and the compression model used by the working node is consistent with the compression model used in the server.
Optionally, referring to fig. 4, in an embodiment of the present application, during the training of the combined model, the working node may select new data from the local fare training data set with a preset probability P and old data with probability 1-P to train the combined model, so as to ensure that the trained network block to be updated can adapt to prediction on new data.
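A sketch of the P / 1-P sampling rule of fig. 4 (the data items and batch mechanics are illustrative assumptions):

```python
import random

def sample_training_batch(new_data, old_data, p, size, rng=None):
    """Draw each element from the new data with probability p and from
    the old data with probability 1-p, so the block being retrained keeps
    adapting to recent fares without forgetting older ones."""
    rng = rng or random.Random()
    return [rng.choice(new_data if rng.random() < p else old_data)
            for _ in range(size)]

batch = sample_training_batch(["new1", "new2"], ["old1", "old2"],
                              p=0.7, size=10, rng=random.Random(1))
```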
S204, receiving the updated parameters of the network block to be updated sent by each working node.
After a working node obtains the updated parameters of its network block to be updated, it sends them to the server, and the server receives the updated parameters of the network block to be updated sent by each working node. Since the working node transmits only the updated parameters of the network block to be updated, rather than the historical sales condition influence data of flights, which may involve user privacy, the privacy safety of users can be guaranteed.
Optionally, in a specific embodiment of the present application, differential privacy noise is added to the updated parameters of the network block to be updated sent by each working node.
Specifically, after obtaining the updated parameters of the network block to be updated, the working node adds differential privacy noise to them and then sends the noise-added updated parameters to the server, so as to ensure that the updated parameters of the network block to be updated are not leaked during transmission.
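A hedged sketch of this noise-adding step, using the standard Laplace mechanism for epsilon-differential privacy; the sensitivity/epsilon parameterisation is an assumption, as the disclosure only states that differential privacy noise is added:

```python
import numpy as np

def add_dp_noise(params, sensitivity, epsilon, rng=None):
    """Perturb the updated block parameters with zero-mean Laplace noise
    before transmission; scale = sensitivity / epsilon is the standard
    Laplace-mechanism setting (parameter names are illustrative)."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(0.0, sensitivity / epsilon, size=len(params))
    return np.asarray(params, dtype=float) + noise

noisy = add_dp_noise([0.2, -1.3, 0.7], sensitivity=0.1, epsilon=1.0,
                     rng=np.random.default_rng(0))
```

The server receives only the perturbed parameters, so an eavesdropper on the channel cannot recover the exact local update.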
Optionally, referring to fig. 5, in an embodiment of the present application, after the server performs step S203, the method further includes:
S501, receiving the updated parameters of the compressed network block corresponding to each network block to be updated, sent by each working node, where the updated parameters of the compressed network block corresponding to a network block to be updated are the parameters of the compressed network block obtained by compressing the adjusted network block to be updated when the combined model on the working node meets the preset convergence condition.
If, when the combined model meets the preset convergence condition, the working node further compresses the adjusted network block to be updated to obtain the updated parameters of the corresponding compressed network block, then in addition to sending the updated parameters of the network block to be updated in step S204, the working node also sends the updated parameters of the compressed network block corresponding to the network block to be updated to the server, so that the server receives both sets of updated parameters.
S502, aiming at each compressed network block corresponding to the network block to be updated, updating the parameters of the compressed network block corresponding to the network block to be updated into the updated parameters of the compressed network block corresponding to the network block to be updated.
For the compressed network block corresponding to each locally stored network block to be updated, the server updates the parameters of that compressed network block to the received updated parameters of the compressed network block corresponding to the network block to be updated.
S205, aiming at each network block to be updated, updating the parameters of the network block to be updated in the global model to be trained into the updated parameters of the network block to be updated.
For each network block to be updated, the server sets the parameters of the network block to be updated in the global model to be trained to the received updated parameters of that network block to be updated.
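On the server side, step S205 amounts to overwriting the stored parameters of each selected block — a minimal sketch with dictionary-held parameters (block identifiers and values are illustrative):

```python
def apply_block_updates(global_params, updates):
    """Server side of step S205: overwrite each to-be-updated block's
    parameters with the updated parameters received from its designated
    working node; unselected blocks are left untouched."""
    for block_id, new_params in updates.items():
        global_params[block_id] = new_params
    return global_params

model = {"B1": [0.1], "B2": [0.2], "B3": [0.3]}
model = apply_block_updates(model, {"B2": [0.25]})
# model["B2"] is now [0.25]; B1 and B3 are unchanged.
```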
For example, referring to fig. 3, in the working node the updated parameters of the network block to be updated obtained by the model training module and the updated parameters of the corresponding compressed network block obtained by the model compression module are packed into update information; differential privacy noise is added to the update information, which is then sent through the communication module to the communication module of the server. The server's communication module receives the updated parameters of the network block to be updated and the updated parameters of the corresponding compressed network block sent by the working node.
S206, judging whether the current global model to be trained meets the training termination condition.
If the current global model to be trained does not meet the training termination condition, the process returns to step S202 and training of the global model continues; if it does meet the training termination condition, step S207 is executed. When the current global model to be trained does not meet the training termination condition, it cannot yet be used directly as the pre-constructed global model and training must continue. When it meets the training termination condition, no further training is needed, and the model precision meets the use requirement of the fare adjustment process.
The training termination condition may be set empirically. For example, the training duration of the global model to be trained may be required to exceed a training duration threshold; when it does not, the training process shown in fig. 2 is repeated until the duration exceeds the threshold, and then step S207 is executed. The training termination condition may also require that the precision of the current global model to be trained exceed a precision threshold. The training termination condition can take many specific forms, including but not limited to those proposed in the present application.
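The two termination criteria named above (training duration threshold, precision threshold) could be checked as follows; the threshold values are illustrative assumptions:

```python
def should_stop(elapsed_seconds=0.0, duration_threshold=None,
                accuracy=None, accuracy_threshold=None):
    """Termination check of step S206: stop once training time exceeds a
    duration threshold, or once model precision exceeds a precision
    threshold (both criteria are named in the text)."""
    if duration_threshold is not None and elapsed_seconds > duration_threshold:
        return True
    if accuracy_threshold is not None and accuracy is not None:
        return accuracy > accuracy_threshold
    return False

# Not done yet: 2 h trained against a 3 h threshold, accuracy below target.
assert should_stop(7200, duration_threshold=10800,
                   accuracy=0.91, accuracy_threshold=0.95) is False
```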
S207, determining the current global model to be trained as the pre-constructed global model.
The current global model to be trained refers to the global model whose network block parameters have been updated through step S205. After the current global model to be trained is determined as the pre-constructed global model, the pre-constructed global model may be used in step S102 shown in fig. 1.
In the fare adjustment method provided by the embodiment of the present application, the sales condition influence data of the target flight is input into the pre-constructed global model, and the pre-constructed global model obtains and outputs the predicted fare of the target flight. The sales condition influence data of the target flight includes: structured data affecting the sales condition of the target flight and unstructured data affecting the sales condition of the target flight, and the global model to be trained is a neural network model. The predicted fare of the target flight is therefore obtained by considering both the structured data and the unstructured data affecting the sales condition of the target flight, so taking the predicted fare output by the pre-constructed global model as the current fare of the target flight is more accurate. Moreover, the pre-constructed global model is obtained by distributing the plurality of network blocks disassembled from the global model to be trained to a plurality of working nodes, each of which trains and adjusts the parameters of its assigned network blocks through a fare training data set; the training process is thus completed jointly by the working nodes, the training efficiency is high, the computational complexity of the training process is low, and construction can be completed quickly.
Referring to fig. 6, based on the fare adjustment method provided in the embodiment of the present application, the embodiment of the present application correspondingly discloses a fare adjustment device, which specifically includes: an obtaining unit 601 and an output unit 602.
An obtaining unit 601, configured to obtain the current sales condition influence data of a target flight, where the sales condition influence data of the target flight includes: structured data that affects the sales condition of the target flight and unstructured data that affects the sales condition of the target flight.
The output unit 602 is configured to input the sales condition influence data of the target flight into the pre-constructed global model, and the pre-constructed global model obtains and outputs the predicted fare of the target flight; the predicted fare output by the pre-constructed global model is used as the current fare of the target flight, and the global model to be trained is a neural network model. The pre-constructed global model is obtained by distributing a plurality of network blocks disassembled from the global model to be trained to a plurality of working nodes, each working node training and adjusting the parameters of its assigned network blocks through a fare training data set, where the fare training data set includes: historical sales condition influence data of a plurality of flights and the historical actual fare of each flight.
Optionally, in a specific embodiment of the present application, the method further includes: the device comprises a disassembling unit, a selecting unit, a first sending unit, a first receiving unit, a first updating unit, a judging unit, a returning unit and a determining unit.
And the disassembling unit is used for disassembling the global model to be trained into a plurality of network blocks.
And the selecting unit is used for selecting a plurality of network blocks to be updated from the disassembled network blocks.
A first sending unit, configured to: for each selected network block to be updated, send the network block to be updated, together with the compressed network block corresponding to each network block in the global model to be trained other than the network block to be updated, to the designated working node of the network block to be updated; combine, by the designated working node, the received network block to be updated and each compressed network block into a model with the same structure as the global model to be trained; train the combined model using a fare training data set by continuously adjusting the parameters of the network block to be updated in the combined model until the combined model meets a preset convergence condition; and take the adjusted parameters of the network block to be updated as the updated parameters of the network block to be updated. The designated working nodes of the network blocks to be updated are different from each other, and the compressed network block corresponding to a network block is obtained by compressing that network block.
And the first receiving unit is used for receiving the updated parameters of the network blocks to be updated, which are sent by each working node.
And the first updating unit is used for updating the parameters of the network blocks to be updated in the global model to be trained into the updated parameters of the network blocks to be updated according to each network block to be updated.
And the judging unit is used for judging whether the current global model to be trained meets the training termination condition.
And the returning unit is used for returning to the selecting unit if the current global model to be trained does not accord with the training termination condition.
And the determining unit is used for determining the current global model to be trained as the pre-constructed global model if the current global model to be trained meets the training termination condition.
Optionally, in a specific embodiment of the present application, differential privacy noise is added to the updated parameters of the network block to be updated sent by each working node.
Optionally, in a specific embodiment of the present application, the method further includes:
and the compression unit is used for respectively compressing each disassembled network block to obtain a compressed network block corresponding to each network block.
Optionally, in a specific embodiment of the present application, the compressing unit includes:
and the compression subunit is used for respectively inputting each disassembled network block into the compression model, and obtaining and outputting the compression network block corresponding to each network block by the compression model. Wherein, the compression model is obtained by training the neural network model through a network block training data set, and the network block training data set comprises: a plurality of original network blocks, and an equivalent compressed network block corresponding to each original network block. And the operation processing effect of the original network block is consistent with the operation processing effect of the equivalent compression network block corresponding to the original network block.
Optionally, in a specific embodiment of the present application, the method further includes: a second receiving unit and a second updating unit.
And the second receiving unit is used for receiving the updated parameters of the compressed network block corresponding to each network block to be updated sent by each working node, where the updated parameters of the compressed network block corresponding to a network block to be updated are the parameters of the compressed network block obtained by compressing the adjusted network block to be updated when the combined model on the working node meets the preset convergence condition.
And the second updating unit is used for updating the parameters of the compressed network blocks corresponding to the network blocks to be updated into the updated parameters of the compressed network blocks corresponding to the network blocks to be updated according to the compressed network blocks corresponding to each network block to be updated.
The specific principle and the implementation process of the fare adjustment device disclosed in the embodiment of the present application are the same as those of the fare adjustment method disclosed in the embodiment of the present application, and reference may be made to corresponding parts in the fare adjustment method disclosed in the embodiment of the present application, which are not described herein again.
In the fare adjustment device provided by the embodiment of the present application, the output unit 602 inputs the sales condition influence data of the target flight into the pre-constructed global model, and the pre-constructed global model obtains and outputs the predicted fare of the target flight. The sales condition influence data of the target flight includes: structured data affecting the sales condition of the target flight and unstructured data affecting the sales condition of the target flight, and the global model to be trained is a neural network model. The predicted fare of the target flight is therefore obtained by considering both the structured data and the unstructured data affecting the sales condition of the target flight, so taking the predicted fare output by the pre-constructed global model as the current fare of the target flight is more accurate. Moreover, the pre-constructed global model is obtained by distributing the plurality of network blocks disassembled from the global model to be trained to a plurality of working nodes, each of which trains and adjusts the parameters of its assigned network blocks through a fare training data set; the training process is thus completed jointly by the working nodes, the training efficiency is high, the computational complexity of the training process is low, and construction can be completed quickly.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The embodiments of the present application provide a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements any one of the fare adjustment methods mentioned in the above embodiments.
The embodiment of the present application provides a fare adjustment device, including: one or more processors and a storage device having one or more programs stored thereon, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any one of the fare adjustment methods mentioned in the above embodiments.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer storage medium may be embodied in the apparatus; or may be separate and not incorporated into the device.
It should be noted that the computer storage media described above in this disclosure can be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer storage medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
While several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combinations of the features described above, and also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed in this disclosure.

Claims (10)

1. A method for adjusting a fare, comprising:
acquiring sales condition influence data of the current target flight; wherein the sales condition influence data of the target flight comprises: structured data affecting the sales condition of the target flight and unstructured data affecting the sales condition of the target flight;
inputting the sales condition influence data of the target flight into a pre-constructed global model, and obtaining and outputting the predicted fare of the target flight by the pre-constructed global model; the pre-constructed global model is obtained by distributing a plurality of network blocks disassembled from the global model to be trained to a plurality of working nodes and training and adjusting parameters of the distributed network blocks by each working node through a fare training data set; the fare training dataset comprising: historical sales impact data for a plurality of flights, and historical actual fares for each of the flights; the predicted fare output by the pre-constructed global model is used as the current fare of the target flight; the global model to be trained is a neural network model.
2. The method of claim 1, wherein the building process of the pre-constructed global model comprises:
decomposing a global model to be trained into a plurality of network blocks;
selecting a plurality of network blocks to be updated from the disassembled network blocks;
for each selected network block to be updated, sending the network block to be updated, together with the compressed network block corresponding to each network block in the global model to be trained other than the network block to be updated, to a working node designated for the network block to be updated, wherein the designated working node combines the received network block to be updated and each compressed network block into a model having the same structure as the global model to be trained, continuously trains the combined model through a local fare training data set by adjusting the parameters of the network block to be updated in the combined model until the combined model meets a preset convergence condition, and takes the adjusted parameters of the network block to be updated as updated parameters of the network block to be updated; wherein the working nodes designated for the respective network blocks to be updated are different from each other; and the compressed network block corresponding to a network block is obtained by compressing the network block;
receiving updated parameters of the network block to be updated, which are sent by each working node;
for each network block to be updated, updating the parameters of the network block to be updated in the global model to be trained into updated parameters of the network block to be updated;
judging whether the current global model to be trained meets training termination conditions;
if the current global model to be trained does not meet the training termination condition, returning to the step of selecting a plurality of network blocks to be updated from the disassembled network blocks;
and if the current global model to be trained meets the training termination condition, determining the current global model to be trained as the pre-constructed global model.
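The construction loop of claim 2 can be sketched as follows. All function bodies are placeholders: the application does not specify the compression scheme, the training procedure, or the convergence and termination tests, so this is a simplified illustration of the control flow, not the claimed implementation.

```python
# Simplified sketch of the claim-2 coordinator loop: disassemble, pick
# blocks to update, hand each to a distinct worker alongside the other
# blocks' compressed copies, collect updated parameters, repeat.
import random

def compress(block):
    # Placeholder for compression: keep every other parameter.
    return block[::2]

def worker_train(block, other_compressed_blocks):
    # Stand-in for a worker: "train" by nudging each parameter by 0.1.
    # A real worker would combine the blocks into a full model and fit it
    # on its local fare training data until a convergence condition holds.
    return [p + 0.1 for p in block]

def train_global_model(blocks, rounds=3, picks=2):
    rng = random.Random(0)
    compressed = [compress(b) for b in blocks]
    for _ in range(rounds):              # stand-in termination condition
        # rng.sample yields distinct indices: one worker per block.
        to_update = rng.sample(range(len(blocks)), picks)
        for i in to_update:
            others = [compressed[j] for j in range(len(blocks)) if j != i]
            blocks[i] = worker_train(blocks[i], others)
            compressed[i] = compress(blocks[i])  # refresh compressed copy
    return blocks

model = train_global_model([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
```

Here each "network block" is just a list of floats; in the claimed method it would be a sub-network of the neural global model.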
3. The method according to claim 2, wherein differential privacy noise is added to the updated parameters of the network block to be updated sent by each of the working nodes.
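Claim 3 does not specify the noise mechanism. One common realization of differential privacy for numeric parameters is the Laplace mechanism, sketched below with an illustrative sensitivity/epsilon scale; Python's `random` module has no Laplace sampler, so the draw uses the inverse CDF.

```python
# Illustrative Laplace mechanism for perturbing a worker's updated
# parameters before they are sent back. The sensitivity and epsilon
# values are assumptions, not taken from the patent.
import math
import random

def add_laplace_noise(params, sensitivity=1.0, epsilon=1.0, seed=None):
    """Add Laplace(0, sensitivity/epsilon) noise to each parameter."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    noisy = []
    for p in params:
        u = rng.random() - 0.5                  # uniform in [-0.5, 0.5)
        mag = max(1e-12, 1.0 - 2.0 * abs(u))    # guard against log(0)
        noise = -scale * math.copysign(1.0, u) * math.log(mag)
        noisy.append(p + noise)
    return noisy

noisy = add_laplace_noise([0.0] * 1000, seed=0)
```

With zero-valued inputs the outputs are pure Laplace samples, so their mean stays near zero while individual values are perturbed.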
4. The method according to claim 2, wherein before the selecting a plurality of network blocks to be updated from the disassembled network blocks, the method further comprises:
and respectively compressing each disassembled network block to obtain a compressed network block corresponding to each network block.
5. The method according to claim 2, wherein after the sending, for each selected network block to be updated, the network block to be updated and the compressed network block corresponding to each network block in the global model to be trained except the network block to be updated to a working node designated for the network block to be updated, the method further comprises:
receiving updated parameters of the compressed network blocks corresponding to the network blocks to be updated, which are sent by each working node; the updated parameters of the compressed network blocks corresponding to the network blocks to be updated are the parameters of the compressed network blocks obtained after the adjusted network blocks to be updated are compressed when the combined model of the working nodes meets the preset convergence condition;
and updating the parameters of the compressed network blocks corresponding to the network blocks to be updated into the updated parameters of the compressed network blocks corresponding to the network blocks to be updated aiming at the compressed network blocks corresponding to each network block to be updated.
6. The method according to claim 4, wherein the compressing each disassembled network block to obtain a compressed network block corresponding to each network block comprises:
inputting each disassembled network block into a compression model respectively, and obtaining and outputting a compression network block corresponding to each network block by the compression model; the compression model is obtained by training a neural network model through a network block training data set; the network block training dataset comprising: a plurality of original network blocks and equivalent compressed network blocks corresponding to each original network block; and the operation processing effect of the original network block is consistent with the operation processing effect of the equivalent compression network block corresponding to the original network block.
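Claim 6 leaves the concrete form of the "equivalent compressed network block" open. One illustrative (not claimed) form of operation-preserving compression is storing a rank-1 weight matrix W = u v^T as its two factor vectors, which compute the same linear map with fewer parameters; such pairs could serve as training examples for the compression model.

```python
# Illustrative "equivalent compression": a rank-1 matrix W = u v^T and
# its factored form compute identical outputs. This is one hypothetical
# example of the (original block, equivalent compressed block) pairs
# described in claim 6, not the patent's compression model itself.

def apply_full(W, x):
    """Original block: y = W x."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def apply_compressed(u, v, x):
    """Compressed block: y = u * (v . x), the same map, fewer parameters."""
    s = sum(vj * xj for vj, xj in zip(v, x))
    return [ui * s for ui in u]

u, v = [1.0, 2.0, 3.0], [0.5, -1.0]
W = [[ui * vj for vj in v] for ui in u]   # rank-1 matrix u v^T
x = [4.0, 1.0]
```

The full matrix stores len(u) * len(v) parameters, the factored form only len(u) + len(v), while the "operation processing effect" (the computed map) is unchanged.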
7. A fare adjustment device, comprising:
the acquisition unit is used for acquiring sales condition influence data of the current target flight; wherein the sales impact data of the target flight comprises: structured data affecting the sales condition of the target flight and unstructured data affecting the sales condition of the target flight;
the output unit is used for inputting the sales condition influence data of the target flight into a pre-constructed global model, and obtaining and outputting the predicted fare of the target flight by the pre-constructed global model; the pre-constructed global model is obtained by distributing a plurality of network blocks disassembled from the global model to be trained to a plurality of working nodes and training and adjusting parameters of the distributed network blocks by each working node through a fare training data set; the fare training data set comprising: historical sales impact data for a plurality of flights, and historical actual fares for each of the flights; wherein the predicted fare output by the pre-constructed global model is used as the current fare of the target flight; the global model to be trained is a neural network model.
8. The apparatus of claim 7, further comprising:
the disassembly unit is used for disassembling the global model to be trained into a plurality of network blocks;
the selecting unit is used for selecting a plurality of network blocks to be updated from the disassembled network blocks;
a first sending unit, configured to send, for each selected network block to be updated, the network block to be updated, together with the compressed network block corresponding to each network block in the global model to be trained other than the network block to be updated, to a working node designated for the network block to be updated, wherein the designated working node combines the received network block to be updated and each compressed network block into a model having the same structure as the global model to be trained, continuously trains the combined model using the fare training data set by adjusting the parameters of the network block to be updated in the combined model until the combined model meets the preset convergence condition, and takes the adjusted parameters of the network block to be updated as updated parameters of the network block to be updated; wherein the working nodes designated for the respective network blocks to be updated are different from each other; and the compressed network block corresponding to a network block is obtained by compressing the network block;
a first receiving unit, configured to receive updated parameters of the network block to be updated, where the updated parameters are sent by each working node;
a first updating unit, configured to update, for each network block to be updated, a parameter of the network block to be updated in the global model to be trained to an updated parameter of the network block to be updated;
the judging unit is used for judging whether the current global model to be trained meets training termination conditions;
the return unit is used for returning to the selecting unit if the current global model to be trained does not meet the training termination condition;
and the determining unit is used for determining the current global model to be trained as the pre-constructed global model if the current global model to be trained meets the training termination condition.
9. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1 to 6.
10. An apparatus for adjusting a fare, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
CN202110741251.5A 2021-06-30 2021-06-30 Fare adjusting method, device, readable medium and equipment Pending CN113487349A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110741251.5A CN113487349A (en) 2021-06-30 2021-06-30 Fare adjusting method, device, readable medium and equipment

Publications (1)

Publication Number Publication Date
CN113487349A true CN113487349A (en) 2021-10-08

Family

ID=77937179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110741251.5A Pending CN113487349A (en) 2021-06-30 2021-06-30 Fare adjusting method, device, readable medium and equipment

Country Status (1)

Country Link
CN (1) CN113487349A (en)

Similar Documents

Publication Publication Date Title
US10679169B2 (en) Cross-domain multi-attribute hashed and weighted dynamic process prioritization
US20190057327A1 (en) Cumulative model for scheduling and resource allocation for airline operations
US11836582B2 (en) System and method of machine learning based deviation prediction and interconnected-metrics derivation for action recommendations
CN111061881A (en) Text classification method, equipment and storage medium
CN110020022B (en) Data processing method, device, equipment and readable storage medium
US20190220827A1 (en) Disruption control in complex schedules
US20210264311A1 (en) Automated Model Generation Platform for Recursive Model Building
US20230401484A1 (en) Data processing method and apparatus, electronic device, and storage medium
CN115796310A (en) Information recommendation method, information recommendation device, information recommendation model training device, information recommendation equipment and storage medium
CN115358411A (en) Data processing method, device, equipment and medium
CN114818913A (en) Decision generation method and device
Rhodes-Leader et al. A multi-fidelity modelling approach for airline disruption management using simulation
Praynlin et al. Performance analysis of software effort estimation models using neural networks
CN113487349A (en) Fare adjusting method, device, readable medium and equipment
CN113112311B (en) Method for training causal inference model and information prompting method and device
CN112948582B (en) Data processing method, device, equipment and readable medium
US20210027234A1 (en) Systems and methods for analyzing user projects
CN114067415A (en) Regression model training method, object evaluation method, device, equipment and medium
US9600770B1 (en) Method for determining expertise of users in a knowledge management system
US20240028935A1 (en) Context-aware prediction and recommendation
US20240193538A1 (en) Temporal supply-related forecasting using artificial intelligence techniques
CN116954642A (en) Updating method, device, equipment, medium and program product of recommendation system
US11113756B2 (en) Method for making cognitive bidding decision
CN116226757A (en) Data processing method, device, computer equipment and storage medium
CN116151956A (en) Credit approval processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination