CN109523123A - Intelligent allocation method and server for distributed transaction processing

Intelligent allocation method and server for distributed transaction processing

Info

Publication number
CN109523123A
CN109523123A (application CN201811191526.7A); granted as CN109523123B
Authority
CN
China
Prior art keywords
transaction
node
processing load
server
Prior art date
Legal status
Granted
Application number
CN201811191526.7A
Other languages
Chinese (zh)
Other versions
CN109523123B (en)
Inventor
曾维刚
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201811191526.7A
Publication of CN109523123A
Application granted
Publication of CN109523123B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0633 Workflow analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08 Insurance

Abstract

The present invention discloses an intelligent allocation method for distributed transaction processing, and a server. The method comprises: obtaining the transaction inflow information and transaction outflow information of a node, where the transaction inflow information includes the transaction data of all first transactions flowing into the node, and the transaction outflow information includes the transaction data of all second transactions flowing out of the node; determining the current processing load of the node according to the transaction inflow information and the transaction outflow information; judging whether the current processing load is greater than a processing load threshold; and, if it is, transferring the transaction data of at least one target transaction on the node to a replacement node, the target transaction being at least one of all the first transactions on the node excluding the second transactions. With this method, the processing load of a node can be predicted by a machine learning model, massive numbers of transactions can be allocated rationally, and the drawn-out state of transaction processing can be improved.

Description

Intelligent allocation method and server for distributed transaction processing
Technical field
The present invention relates to the field of computer technology, and in particular to an intelligent allocation method and server for distributed transaction processing.
Background art
Some current transaction-processing flows (such as insurance business processing) need to pass through many processing nodes. Each node can be regarded as a server or a system, the nodes form a processing chain according to the circulation order of the transaction processing, and the whole body of transaction data circulates among the nodes.
Taking insurance business processing as an example, handling an insurance transaction requires circulating the transaction data through nodes such as an accepting department, a government department, a legal department, and a risk assessment company. That is, during processing, the accepting department may need to audit the basic conditions of the applicant for the insurance transaction; if doubts remain after that audit, the transaction is circulated to the relevant government department for further review. If the government department judges that the applicant's conditions carry a certain risk, the transaction is circulated to a risk assessment company for risk assessment; alternatively, when the applicant's conditions require a legal evaluation, the transaction is circulated to the legal department, which performs the relevant legal evaluation.
However, when insurance transactions circulate among different nodes, the following situation often occurs: the data of a large number of insurance transactions flows into some node (such as the server of a risk assessment company) within the same period, so that the processing load of the node increases suddenly. The large volume of insurance transaction data cannot be processed in time and blocks at the node, making the processing flow of the whole insurance business drawn out.
Summary of the invention
To solve the technical problem in the related art that a large volume of insurance transaction data blocks at a certain node because it cannot be processed in time, which makes the processing flow of the whole insurance business drawn out, the present invention provides an intelligent allocation method for distributed transaction processing, and a server.
An intelligent allocation method for distributed transaction processing, the method comprising:
a server obtains the transaction inflow information of a node and the transaction outflow information of the node, where the transaction inflow information includes the transaction data of all first transactions flowing into the node, and the transaction outflow information includes the transaction data of all second transactions flowing out of the node;
the server determines the current processing load of the node according to the transaction inflow information and the transaction outflow information;
the server judges whether the current processing load is greater than a processing load threshold;
if the current processing load is greater than the processing load threshold, the server transfers the transaction data of at least one target transaction on the node to a replacement node, the target transaction being at least one of all the first transactions on the node excluding the second transactions.
As an optional implementation, in the first aspect of the embodiments of the present invention, the transaction inflow information further includes a transaction inflow log carrying the transaction identifiers of the first transactions, and the transaction outflow information further includes a transaction outflow log carrying the transaction identifiers of the second transactions. The server determining the current processing load of the node according to the transaction inflow information and the transaction outflow information comprises:
the server determines the current transactions of the node according to the transaction identifiers carried in the transaction inflow log and the transaction identifiers carried in the transaction outflow log, the current transactions being all the first transactions on the node excluding the second transactions;
the server takes the transaction data of the current transactions as the current processing load of the node.
As an optional implementation, in the first aspect of the embodiments of the present invention, the transaction inflow log further carries the transaction types of the first transactions, and the transaction outflow log further carries the transaction types of the second transactions. If the current processing load is greater than the processing load threshold, the server transferring the transaction data of at least one target transaction on the node to a replacement node comprises:
the server determines at least one target transaction from the current transactions;
the server queries a preset transaction-type-to-node lookup table to find a replacement node whose transaction type is identical to the transaction type of the target transaction;
the server transfers the transaction data of the at least one target transaction to the replacement node, so that the replacement node completes the processing of that transaction data.
As an optional implementation, in the first aspect of the embodiments of the present invention, after the server transfers the transaction data of at least one target transaction on the node to the replacement node, the method further comprises:
the server obtains the flow direction of the target transaction;
the server updates the flow-direction record of the target transaction according to its flow direction, the flow-direction record being used to track and count the circulation path of the target transaction.
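The flow-direction record described above can be sketched as an append-only path kept per transaction, where each transfer appends the new destination. This is a minimal illustration; the record layout and all names are assumptions, not part of the claimed method.

```python
def record_flow(flow_records, transaction_id, destination):
    """Append the latest flow direction of a transaction to its record.

    flow_records: dict mapping transaction ID to the ordered list of nodes
    the transaction has been transferred to (its circulation path).
    """
    flow_records.setdefault(transaction_id, []).append(destination)
    return flow_records[transaction_id]


records = {}
record_flow(records, "T1", "accepting-department")
record_flow(records, "T1", "government-department")
path = record_flow(records, "T1", "risk-assessment-company-12")
print(path)
# ['accepting-department', 'government-department', 'risk-assessment-company-12']
```

Because the record is ordered, the full circulation path of a target transaction can be reconstructed or counted at any time.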
As an optional implementation, in the first aspect of the embodiments of the present invention, the method further comprises:
if the current processing load is not greater than the processing load threshold, the server inputs the current processing load into a processing load prediction model;
the server determines the predicted processing load of the node based on the output of the processing load prediction model;
the server judges whether the predicted processing load is greater than the processing load threshold;
if the predicted processing load is greater than the processing load threshold, the server performs the step of transferring the transaction data of at least one target transaction on the node to a replacement node.
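The decision flow above, transfer immediately when the current load already exceeds the threshold, otherwise transfer only if the predicted load would exceed it, can be sketched as follows. The prediction model is a stand-in callable here; the names are hypothetical and the actual model is obtained by training on historical loads.

```python
def should_transfer(current_load, threshold, predict):
    """Decide whether transaction data should be moved to a replacement node.

    predict: a trained processing-load prediction model, modeled here as any
    callable mapping the current load to a predicted load.
    """
    if current_load > threshold:
        return True                    # already overloaded: transfer now
    predicted = predict(current_load)  # feed current load to the model
    return predicted > threshold       # transfer preemptively if needed


print(should_transfer(8, 10, predict=lambda x: x * 1.5))  # True  (predicted 12.0)
print(should_transfer(8, 10, predict=lambda x: x * 1.1))  # False (predicted 8.8)
```

The preemptive branch is what distinguishes this embodiment from the simple threshold check: a node still under its threshold can shed load before the predicted spike arrives.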
As an optional implementation, in the first aspect of the embodiments of the present invention, before the server inputs the current processing load into the processing load prediction model when the current processing load is not greater than the processing load threshold, the method further comprises:
the server obtains the historical processing loads of the node;
the server trains an initial neural network model with the historical processing loads to obtain the processing load prediction model.
As an optional implementation, in the first aspect of the embodiments of the present invention, the server training the initial neural network model with the historical processing loads to obtain the processing load prediction model comprises:
the server obtains N consecutive processing loads from the historical processing loads as a sample and inputs them into the initial neural network model, so that the initial neural network model outputs an output load corresponding to the sample, N being a positive integer;
the server determines the output load as a target processing load;
the server compares the target processing load with an actual processing load and updates the parameters of the initial neural network model according to the comparison result, the actual processing load being the processing load the server detects from the node at the moment it obtains the target processing load;
the server judges whether the loss function of the initial neural network model meets a preset condition, the loss function being used to characterize the error between the output load of the initial neural network model and the actual value;
if the loss function of the initial neural network model meets the preset condition, the server determines the current parameters of the initial neural network model as the parameters of the processing load prediction model, and obtains the processing load prediction model according to those parameters.
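A simplified sketch of this training procedure: each window of N consecutive historical loads is a sample, the load observed at the next time step plays the role of the actual processing load, and the parameters are fit so that the error between output and actual loads is minimized. To keep the sketch self-contained it fits a linear autoregressive model by least squares in place of the neural network the embodiment describes; all names are hypothetical.

```python
import numpy as np


def train_load_model(history, n=3):
    """Fit weights and bias mapping each window of n consecutive loads
    to the load observed at the next time step."""
    X = np.array([history[i:i + n] for i in range(len(history) - n)], dtype=float)
    y = np.array(history[n:], dtype=float)        # the "actual" loads
    X1 = np.hstack([X, np.ones((len(X), 1))])     # append a bias column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef[:n], coef[n]


def predict_load(weights, bias, window):
    """Predicted processing load for the time step after the window."""
    return float(np.dot(weights, window) + bias)


# Loads rising by 2 each step: the fitted model extrapolates the trend.
weights, bias = train_load_model([10, 12, 14, 16, 18, 20, 22, 24], n=2)
print(round(predict_load(weights, bias, [22, 24])))  # 26
```

A neural network trained by gradient descent on the same sliding-window samples, with training stopped once the loss meets the preset condition, follows the identical sample construction; only the model class differs.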
A server, the server comprising:
an obtaining module, configured to obtain the transaction inflow information of a node and the transaction outflow information of the node, where the transaction inflow information includes the transaction data of all first transactions flowing into the node, and the transaction outflow information includes the transaction data of all second transactions flowing out of the node;
a first determining module, configured to determine the current processing load of the node according to the transaction inflow information and the transaction outflow information;
a judging module, configured to judge whether the current processing load is greater than a processing load threshold;
a transfer module, configured to, when the judging module judges that the current processing load is greater than the processing load threshold, transfer the transaction data of at least one target transaction on the node to a replacement node, the target transaction being at least one of all the first transactions on the node excluding the second transactions.
A third aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute the intelligent allocation method for distributed transaction processing disclosed in the first aspect of the embodiments of the present invention.
A fourth aspect of the embodiments of the present invention discloses an electronic device, the electronic device comprising:
a processor; and
a memory having computer-readable instructions stored thereon which, when executed by the processor, implement the foregoing method.
The technical solutions provided by the embodiments of the present invention may include the following beneficial effects.
The intelligent allocation method for distributed transaction processing provided by the present invention includes the following steps: obtaining the transaction inflow information of a node and the transaction outflow information of the node, where the transaction inflow information includes the transaction data of all first transactions flowing into the node, and the transaction outflow information includes the transaction data of all second transactions flowing out of the node; determining the current processing load of the node according to the transaction inflow information and the transaction outflow information; judging whether the current processing load is greater than a processing load threshold; and, if it is, transferring the transaction data of at least one target transaction on the node to a replacement node, the target transaction being at least one of all the first transactions on the node excluding the second transactions.
With this method, the processing load of each node is monitored in real time; when the transaction data of a large number of transactions is detected to be blocking at some node, the transaction data of the target transactions exceeding the node's processing load is transferred to a replacement node, so that the processing loads among nodes are reasonably balanced and the drawn-out state of transaction processing is improved. In summary, intelligent allocation of distributed transaction processing is achieved.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit the present invention.
Brief description of the drawings
The drawings herein are incorporated into and form part of this specification, show embodiments consistent with the present invention, and together with the specification serve to explain the principles of the present invention.
Fig. 1 is a schematic diagram of a device according to an exemplary embodiment;
Fig. 2 is a flowchart of an intelligent allocation method for distributed transaction processing according to an exemplary embodiment;
Fig. 3 is a flowchart of an intelligent allocation method for distributed transaction processing according to another exemplary embodiment;
Fig. 4 is a block diagram of a server according to an exemplary embodiment;
Fig. 5 is a block diagram of another server according to an exemplary embodiment;
Fig. 6 is a block diagram of another server according to an exemplary embodiment.
Detailed description of the embodiments
Exemplary embodiments will be described in detail here, with examples illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings indicate the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of devices and methods consistent with some aspects of the invention as detailed in the appended claims.
Fig. 1 is a schematic diagram of a device according to an exemplary embodiment. The device 100 may be an electronic device. As shown in Fig. 1, the device 100 may include one or more of the following components: a processing component 102, a memory 104, a power supply component 106, a multimedia component 108, an audio component 110, a sensor component 114, and a communication component 116.
The processing component 102 generally controls the overall operation of the device 100, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 102 may include one or more processors 118 to execute instructions so as to complete all or part of the steps of the methods below. In addition, the processing component 102 may include one or more modules to facilitate interaction between the processing component 102 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 108 and the processing component 102.
The memory 104 is configured to store various types of data to support the operation of the device 100; examples of such data include instructions for any application or method operated on the device 100. The memory 104 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc. The memory 104 also stores one or more modules configured to be executed by the one or more processors 118 to complete all or part of the steps of the methods below.
The power supply component 106 provides power for the various components of the device 100, and may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 100.
The multimedia component 108 includes a screen providing an output interface between the device 100 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel. If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation. The screen may also include an organic light-emitting display (OLED).
The audio component 110 is configured to output and/or input audio signals. For example, the audio component 110 includes a microphone (MIC); when the device 100 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 104 or sent via the communication component 116. In some embodiments, the audio component 110 further includes a speaker for outputting audio signals.
The sensor component 114 includes one or more sensors for providing state assessments of various aspects of the device 100. For example, the sensor component 114 can detect the open/closed state of the device 100 and the relative positioning of components, and can detect position changes of the device 100 or one of its components as well as temperature changes of the device 100. In some embodiments, the sensor component 114 may also include a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 116 is configured to facilitate wired or wireless communication between the device 100 and other devices. The device 100 can access a wireless network based on a communication standard, such as WiFi (Wireless Fidelity). In an exemplary embodiment, the communication component 116 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 116 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth technology, and other technologies.
In an exemplary embodiment, the device 100 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors, digital signal processing devices, programmable logic devices, field-programmable gate arrays, controllers, microcontrollers, microprocessors, or other electronic components to execute the methods below.
Fig. 2 is a flowchart of an intelligent allocation method for distributed transaction processing according to an exemplary embodiment. As shown in Fig. 2, the method includes the following steps.
Step 210: a server obtains the transaction inflow information of a node and the transaction outflow information of the node.
The transaction inflow information may include the transaction data of all first transactions flowing into the node, and the transaction outflow information may include the transaction data of all second transactions flowing out of the node. Optionally, the transaction data of a transaction may carry the transaction's identifier (ID): the transaction data of a first transaction carries the transaction identifier of the first transaction, and the transaction data of a second transaction carries the transaction identifier of the second transaction.
Step 220: the server determines the current processing load of the node according to the transaction inflow information and the transaction outflow information.
The transaction inflow information may further include a transaction inflow log carrying the transaction identifiers of the first transactions, and the transaction outflow information may further include a transaction outflow log carrying the transaction identifiers of the second transactions. In an exemplary embodiment, the server determining the current processing load of the node according to the transaction inflow information and the transaction outflow information may include:
the server calculates, according to the transaction identifiers carried in the transaction inflow log and those carried in the transaction outflow log, the difference between the amount of transaction data of the first transactions flowing into the node and the amount of transaction data of the second transactions flowing out of the node, and takes that difference as the current processing load of the node.
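The load calculation in step 220 can be sketched as a set difference over the transaction identifiers carried in the two logs: transactions that have flowed in but not yet out are still on the node, and their count is the current processing load. This is a minimal illustration under an assumed log format; the function and names are hypothetical.

```python
def current_load(inflow_log, outflow_log):
    """Return (current_transactions, load) for one node.

    inflow_log / outflow_log: iterables of transaction identifiers recorded
    when a transaction flows into / out of the node.
    """
    inflow_ids = set(inflow_log)
    outflow_ids = set(outflow_log)
    # Transactions that entered the node but have not yet left it.
    current = inflow_ids - outflow_ids
    return current, len(current)


current, load = current_load(
    inflow_log=["T1", "T2", "T3", "T4", "T5"],
    outflow_log=["T2", "T4"],
)
print(sorted(current), load)  # ['T1', 'T3', 'T5'] 3
```

The set `current` is exactly the "current transactions" of the later embodiments: the first transactions on the node excluding the second transactions.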
Step 230: the server judges whether the current processing load is greater than a processing load threshold. If the current processing load is greater than the processing load threshold, step 240 is triggered; if the current processing load is not greater than the processing load threshold, this process ends.
In an exemplary embodiment, the server may optionally receive the maximum processing load reported by each node and set the processing load threshold of each node according to its reported maximum processing load, which improves the flexibility of allocating transaction-processing work.
Step 240: the server transfers the transaction data of at least one target transaction on the node to a replacement node.
The target transaction is at least one of all the first transactions on the node excluding the second transactions.
In the present exemplary embodiment, for a given node, the first transactions may be the transactions that have flowed into the node over history up to now, and the second transactions may be the transactions that have flowed out of the node over history up to now; the target transaction may then be at least one of all the first transactions flowing into the node excluding the second transactions. It should be noted that "history up to now" indicates a target period whose starting moment is when the transaction inflow information of the node first includes the transaction data of a first transaction, and whose ending moment is when the server obtains the transaction inflow information and transaction outflow information of the node in this embodiment. For example, if the total number of first transactions is 50 and the total number of second transactions is 20, the number of target transactions may be 1, 2, or 10, and does not exceed 30.
In an exemplary embodiment, optionally, for a given node the first transactions may also be the transactions flowing into the node at the current moment, the second transactions may be the transactions flowing out of the node at the current moment, and the historical transactions may be the transactions already located at the node before the current moment; the target transaction may then be at least one of the first transactions and all historical transactions of the node excluding the second transactions. It should be noted that the current moment is when the server obtains the transaction inflow information and transaction outflow information of the node in this embodiment. For example, if the total number of first transactions is 15, the total number of second transactions is 20, and the total number of historical transactions is 35, the number of target transactions may be 1, 2, or 10, and does not exceed 30.
In another exemplary embodiment, the server transferring the transaction data of at least one target transaction on the node to a replacement node may include:
the server determines the current transactions of the node, the current transactions being all the first transactions on the node excluding the second transactions;
the server determines at least one target transaction from the current transactions;
the server queries a preset transaction-type-to-node lookup table to find a replacement node whose transaction type is identical to that of the target transaction;
the server transfers the transaction data of the at least one target transaction to the replacement node, so that the replacement node completes the processing of that transaction data.
In the present exemplary embodiment, a replacement node whose transaction type is the same as that of the target transaction is a replacement node capable of processing transactions of the same transaction type as the target transaction. The server may query a preset transaction-type-to-node lookup table to find multiple replacement nodes whose transaction type matches that of the target transaction, determine the current processing load of each of those replacement nodes from that node's transaction inflow information and transaction outflow information, select the target replacement node with the smallest current processing load among them, and transfer the transaction data of the at least one target transaction on the above node to the target replacement node, so that the target replacement node completes the processing of the transaction data of the target transaction. For example, the target transaction may be a risk assessment, which can be handled by a risk assessment company; suppose there are 20 risk assessment companies in total. A risk assessment may concern a traffic insurance claim or a disease insurance claim. If the transaction type of the target transaction is a traffic-claim risk assessment, then the above node and the replacement nodes are risk assessment companies capable of handling traffic-claim risk assessments. The server queries the preset transaction-type-to-node lookup table, determines the 12 companies among the 20 that can handle traffic-claim risk assessments, selects from those 12 the one target risk assessment company with the smallest current processing load, and transfers the traffic-claim risk assessment data from a given risk assessment company (node) to the target risk assessment company, so that the target risk assessment company processes it.
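The lookup-and-select step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the table contents, company names, transaction IDs and loads are all invented for the example; only the lookup-by-type and minimum-load selection come from the text.

```python
# Preset transaction-type-to-node lookup table: type -> nodes able to handle it.
TYPE_TO_NODES = {
    "traffic_claim_risk_assessment": ["company_01", "company_04", "company_07"],
    "disease_claim_risk_assessment": ["company_02", "company_03"],
}

def current_load(inflow_ids, outflow_ids):
    """Current processing load = transactions that flowed in but not yet out."""
    return len(set(inflow_ids) - set(outflow_ids))

def pick_target_replacement(tx_type, node_logs, exclude=None):
    """Return the matching-type replacement node with the smallest current load."""
    candidates = [n for n in TYPE_TO_NODES.get(tx_type, []) if n != exclude]
    if not candidates:
        return None
    return min(candidates,
               key=lambda n: current_load(node_logs[n]["in"], node_logs[n]["out"]))

node_logs = {
    "company_01": {"in": [1, 2, 3, 4], "out": [1]},   # load 3 (overloaded node)
    "company_04": {"in": [5, 6], "out": [5, 6]},      # load 0
    "company_07": {"in": [7, 8, 9], "out": [7]},      # load 2
}
target = pick_target_replacement("traffic_claim_risk_assessment", node_logs,
                                 exclude="company_01")
```

Excluding the overloaded node itself from the candidates mirrors the text's requirement that the data be moved to a *different* node of the same type.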
As it can be seen that implement the intelligent allocation method of distributing real time system described in Fig. 2, it can be to the processing of each node Load carry out real-time monitoring, when monitor a large amount of affairs Transaction Information block when some node, will exceed the place of the node The Transaction Information for managing the target transaction of load is produced to replacement node identical with the node type, so that the place of same type node Reason load obtains reasonable equilibrium, and then improves tediously long dilatory issued transaction status.
Fig. 3 is a kind of process of the intelligent allocation method of the distributing real time system shown according to another exemplary embodiment Figure.As shown in figure 3, in addition to the step shown in Fig. 2, step 230 judge present processing load no more than processing load threshold value it Afterwards, display control method is further comprising the steps of in this embodiment.
Step 250: the server inputs the current processing load into a processing load prediction model.
In an exemplary embodiment, before step 250, the method shown in Fig. 3 may further include the following steps:
The server obtains the historical processing load of the node;
The server trains an initial neural network model with the historical processing load to obtain the processing load prediction model.
Further optionally, training the initial neural network model with the historical processing load to obtain the processing load prediction model may include:
The server takes N continuous processing loads from the historical processing load as a sample and inputs them into the initial neural network model, so that the initial neural network model outputs an output load corresponding to the sample; N is a positive integer;
Output load is determined as target processing load by server;
The server compares the target processing load with the actual processing load and updates the parameters of the initial neural network model according to the comparison result; the actual processing load is the processing load the server detects at the node at the moment the target processing load is obtained;
The server determines whether the loss function of the initial neural network model satisfies a preset condition; the loss function characterizes the error between the output load of the initial neural network model and the actual value;
If the loss function of the initial neural network model satisfies the preset condition, the server determines the current parameters of the initial neural network model to be the parameters of the processing load prediction model, and obtains the processing load prediction model from those parameters.
In the present exemplary embodiment, the processing load prediction model can be used to predict the processing load of each node in the next period (the predicted processing load), where the length of the period can be determined from the training data of the processing load model. For example, the server may set the duration of each period to 1 hour and take the midpoint of every hour as a specified moment. For any node, the server obtains that node's historical processing load and arbitrarily selects the historical processing loads corresponding to N continuous specified moments (e.g. 5:30, 6:30 and 7:30), N being a positive integer. The historical processing loads at those N specified moments are input as a sample into the initial neural network model (a machine learning model), i.e. the input vector is x = (x1, x2, x3, x4, ..., xN), and the processing load at the (N+1)-th specified moment following the N continuous specified moments serves as the expected output. The above operation is then repeated: the historical processing loads at another arbitrarily chosen set of N continuous specified moments are input as a sample into the initial neural network model, so that the model outputs a predicted processing load (expected output) for the (N+1)-th specified moment following those N moments, until the loss function of the initial neural network model satisfies the preset condition.

The initial neural network model comprises an input layer, a hidden layer and an output layer. After obtaining from the initial neural network model the predicted processing load (expected output) for the (N+1)-th specified moment, the server can also obtain from the node the actual processing load (actual output) at that (N+1)-th specified moment, and determine the output error of the initial neural network model from the predicted and actual processing loads. That output error is the error of the output layer; from it the error of the layer immediately preceding the output layer can be determined, and from that layer's error the error of the layer before it, and so on, propagating backwards layer by layer from the output layer to the input layer. In this way the error of every layer of the initial neural network model can be determined, and the loss function of the model can be determined from those per-layer errors. Repeatedly feeding samples into the initial neural network model makes the model learn continuously; it learns by continually adjusting its network connection weights under the stimulus of the external input samples, until its loss function satisfies the preset condition. When the loss function of the initial neural network model satisfies the preset condition, the training can be considered complete: the server determines the model's current parameters to be the parameters of the processing load prediction model, and obtains the processing load prediction model from those parameters.
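The training loop above can be sketched as follows. This is a minimal sketch under stated assumptions, not the patent's exact model: the window size N, the one-hidden-layer architecture, the learning rate, the normalization and the synthetic load series are all illustrative choices; only the "N consecutive loads in, (N+1)-th load as expected output, back-propagate the output-layer error until the loss is acceptable" structure comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
loads = 50 + 10 * np.sin(np.arange(60) / 5.0)      # synthetic hourly loads

# Sliding windows: sample = N consecutive loads, expected output = next load.
X = np.stack([loads[i:i + N] for i in range(len(loads) - N)])
y = loads[N:]
Xn, yn = (X - 50) / 10, (y - 50) / 10              # normalise for training

# Initial neural network model: input layer -> hidden layer -> output layer.
W1 = rng.normal(scale=0.1, size=(N, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)

losses, lr = [], 0.1
for _ in range(500):
    h = np.tanh(Xn @ W1 + b1)                      # hidden layer
    pred = (h @ W2 + b2).ravel()                   # output layer
    err = pred - yn                                # output-layer error
    losses.append(float(np.mean(err ** 2)))        # loss function (MSE)
    # Back-propagate the output-layer error towards the input layer.
    gh = (err[:, None] @ W2.T) * (1 - h ** 2)
    W2 -= lr * Xn.shape[0] ** -1 * h.T @ err[:, None]; b2 -= lr * err.mean()
    W1 -= lr * Xn.shape[0] ** -1 * Xn.T @ gh;          b1 -= lr * gh.mean(axis=0)

# Use the trained model: predict the next period's load from the last N loads.
h = np.tanh(((loads[-N:] - 50) / 10) @ W1 + b1)
predicted_load = float((h @ W2 + b2) * 10 + 50)
```

Here the "preset condition" on the loss is replaced by a fixed iteration budget for simplicity; a real system would stop once the loss falls below a configured bound.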
Step 260: based on the output of the processing load prediction model, the server determines the predicted processing load of the node.
Step 270: the server determines whether the predicted processing load is greater than the processing load threshold. If the predicted processing load is greater than the processing load threshold, step 240 is triggered; if the predicted processing load is not greater than the processing load threshold, this flow ends.
Since the transaction data of a transaction may or may not be processed immediately after flowing into a node, predicting the node's processing load with the processing load prediction model makes it possible to know in advance how the node's processing load will change, which helps allocate the node's transaction data rationally and optimizes the load-balancing effect. In addition, the moment at which a transaction's data is processed after flowing into the node can be determined from the node's current processing speed and the position of that transaction's data in the node's queue, the queue being ordered by the sequence in which transaction data flowed into the node.
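The queuing observation above can be made concrete with a small sketch. The FIFO ordering and the dependence on processing speed come from the text; the transaction IDs and the 30-seconds-per-transaction speed are invented for the example.

```python
from collections import OrderedDict

def expected_wait(queue, tx_id, per_tx_seconds):
    """Seconds until tx_id is processed, given FIFO order and current speed."""
    position = list(queue).index(tx_id)        # transactions ahead of tx_id
    return (position + 1) * per_tx_seconds     # +1: tx_id itself must run too

# Queue ordered by the sequence in which transaction data flowed into the node.
queue = OrderedDict.fromkeys(["tx_07", "tx_09", "tx_12"])
wait = expected_wait(queue, "tx_12", per_tx_seconds=30)   # third in the queue
```

Such an estimate is one way the predicted load could feed into the allocation decision: a long expected wait at the current node argues for transferring the transaction.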
In yet another exemplary embodiment, after step 240, the method in this embodiment further includes the following steps.
Step 280: the server obtains the flow direction of the target transaction.
Step 290: the server updates the flow direction record of the target transaction according to the flow direction of the target transaction; the flow direction record is used to track and count the circulation path of the target transaction.
As it can be seen that the intelligent allocation method of distributing real time system described in implementing Fig. 3, it can be to the processing of each node Load carry out real-time monitoring, when monitor a large amount of affairs Transaction Information block when some node, will exceed the place of the node The Transaction Information for managing the target transaction of load is produced to replacement node identical with the node type, so that the place of same type node Reason load obtains reasonable equilibrium, and then improves tediously long dilatory issued transaction status;In addition, being based on above-mentioned processing load prediction The prediction of the model prediction node handles load, can know the situation of change of the processing load of the node in advance, is conducive to pair The Transaction Information of node carries out rational allocation, optimizes the portfolio effect of processing load.
The following are apparatus embodiments of the present invention.
Fig. 4 is a block diagram of a server according to an exemplary embodiment. As shown in Fig. 4, the server includes:
An acquisition module 310, configured to obtain the transaction inflow information of a given node and the transaction outflow information of that node, and to supply the transaction inflow information and the transaction outflow information to a first determining module 320.

The transaction inflow information includes the transaction data of all first transactions that flowed into the node, and the transaction outflow information includes the transaction data of all second transactions that flowed out of the node.

Optionally, the transaction data of a transaction may carry that transaction's transaction identifier (ID): the transaction data of a first transaction carries the first transaction's transaction identifier, and the transaction data of a second transaction carries the second transaction's transaction identifier.
The first determining module 320 is configured to determine the current processing load of the node from the transaction inflow information and the transaction outflow information, and to supply the current processing load to a judgment module 330.

The judgment module 330 is configured to determine whether the current processing load is greater than the processing load threshold, and to supply the result to a transfer module 340.

In an exemplary embodiment, optionally, before determining whether the current processing load is greater than the processing load threshold, the judgment module 330 may also receive the maximum processing load reported by each node and set each node's processing load threshold according to the maximum processing load that node reported, making the allocation of transaction processing more flexible.
The transfer module 340 is configured to transfer the transaction data of at least one target transaction on the node to a replacement node when the judgment module 330 determines that the current processing load is greater than the processing load threshold; the target transaction is at least one of all the first transactions on the node except the second transactions.
In the present exemplary embodiment, for a given node, the first transactions may denote the transactions that have flowed into the node over a historical statistics window, and the second transactions the transactions that have flowed out of the node over that window; the target transaction can then be at least one of the first transactions that flowed into the node, excluding the second transactions. It should be noted that the historical statistics window denotes a target time period whose start is the moment the transaction data of the first transactions is included in the node's transaction inflow information, and whose end is the moment the server in this embodiment obtains the node's transaction inflow information and transaction outflow information. For example, if the total number of first transactions is 50 and the total number of second transactions is 20, the number of target transactions may be 1, 2 or 10, and does not exceed 30.

In an exemplary embodiment, optionally, for a given node, the first transactions may instead denote the transactions flowing into the node at the current moment, the second transactions the transactions flowing out of the node at the current moment, and the historical transactions the transactions already located at the node before the current moment; the target transaction can then be at least one of the union of the first transactions and all the node's historical transactions, excluding the second transactions. It should be noted that the current moment is the moment the server in this embodiment obtains the node's transaction inflow information and transaction outflow information. For example, if the total number of first transactions is 15, the total number of second transactions is 20, and the total number of historical transactions is 35, then the number of target transactions may be 1, 2 or 10, and does not exceed 30.
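The candidate counting in both embodiments above reduces to a set difference over transaction identifiers; the sketch below reproduces the two worked examples. The numeric IDs are invented for illustration.

```python
def candidate_targets(first_ids, second_ids, history_ids=()):
    """Transactions still at the node, i.e. transactions eligible as targets:
    (inflows + transactions already present) minus outflows."""
    return (set(first_ids) | set(history_ids)) - set(second_ids)

# First embodiment: 50 first transactions, 20 second -> at most 30 candidates.
remaining = candidate_targets(range(1, 51), range(1, 21))

# Second embodiment: 15 inflows + 35 already present, 20 outflows -> 30.
remaining2 = candidate_targets(range(36, 51), range(1, 21), range(1, 36))
```

Representing the logs as ID sets also makes the later claim-2 step ("take the transaction data of the current transactions as the current processing load") a one-line size computation.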
As it can be seen that implement server described in Fig. 4, can processing load to each node carry out real-time monitoring, work as prison The Transaction Information for measuring a large amount of affairs blocks when some node, will exceed the affairs of the target transaction of the processing load of the node Data are produced to replacement node, so that the processing load of node obtains reasonable equilibrium, and then improve tediously long dilatory office Manage status.
Referring to Fig. 5, Fig. 5 is a block diagram of another server according to an exemplary embodiment, where the server shown in Fig. 5 is obtained by further optimizing the server shown in Fig. 4. Compared with the server shown in Fig. 4, the server shown in Fig. 5 further includes:
An input module 350, configured to input the current processing load into the processing load prediction model when the judgment module 330 determines that the current processing load is not greater than the processing load threshold, and to trigger the start of a second determining module 360.

After determining whether the current processing load is greater than the processing load threshold, the judgment module 330 may supply the result to both the transfer module 340 and the input module 350.

The second determining module 360 is configured to determine the predicted processing load of the node based on the output of the processing load prediction model, and to supply the predicted processing load to the judgment module 330.

The judgment module 330 is further configured to determine whether the predicted processing load is greater than the processing load threshold, and to supply the result to the transfer module 340.

The transfer module 340 is further configured to transfer the transaction data of at least one target transaction on the node to a replacement node of the same type as the node when the judgment module 330 determines that the predicted processing load is greater than the processing load threshold.
In an exemplary embodiment, as shown in Fig. 5, the acquisition module 310 may be further configured to obtain the historical processing load of the node and supply it to a training module 370.

The training module 370 is configured to train the initial neural network model with the historical processing load to obtain the processing load prediction model.

The input module 350 is specifically configured to input the current processing load into the processing load prediction model after the training module 370 has trained the initial neural network model with the historical processing load to obtain the processing load prediction model, and when the judgment module 330 determines that the current processing load is not greater than the processing load threshold.

Further optionally, the way the training module 370 trains the initial neural network model with the historical processing load to obtain the processing load prediction model may specifically be:

taking N continuous processing loads from the historical processing load as a sample and inputting them into the initial neural network model, so that the initial neural network model outputs an output load corresponding to the sample, N being a positive integer;

determining the output load as the target processing load;

comparing the target processing load with the actual processing load and updating the parameters of the initial neural network model according to the comparison result, the actual processing load being the processing load the server detects at the node at the moment the target processing load is obtained;

determining whether the loss function of the initial neural network model satisfies a preset condition, the loss function characterizing the error between the output load of the initial neural network model and the actual value;

if the loss function of the initial neural network model satisfies the preset condition, determining the current parameters of the initial neural network model to be the parameters of the processing load prediction model, and obtaining the processing load prediction model from those parameters.
In the present exemplary embodiment, the processing load prediction model can be used to predict the processing load of each node in the next period (the predicted processing load), where the length of the period can be determined from the training data of the processing load model. For example, the training module 370 may set the duration of each period to 1 hour and take the midpoint of every hour as a specified moment. For any node, the training module obtains that node's historical processing load and arbitrarily selects the historical processing loads corresponding to N continuous specified moments (e.g. 5:30, 6:30 and 7:30), N being a positive integer. The historical processing loads at those N specified moments are input as a sample into the initial neural network model (a machine learning model), i.e. the input vector is x = (x1, x2, x3, x4, ..., xN), and the processing load at the (N+1)-th specified moment following the N continuous specified moments serves as the expected output. The above operation is then repeated: the historical processing loads at another arbitrarily chosen set of N continuous specified moments are input as a sample into the initial neural network model, so that the model outputs a predicted processing load (expected output) for the (N+1)-th specified moment following those N moments, until the loss function of the initial neural network model satisfies the preset condition.

The initial neural network model comprises an input layer, a hidden layer and an output layer. After obtaining from the initial neural network model the predicted processing load (expected output) for the (N+1)-th specified moment, the training module 370 can also obtain from the node the actual processing load (actual output) at that (N+1)-th specified moment, and determine the output error of the initial neural network model from the predicted and actual processing loads. That output error is the error of the output layer; from it the error of the layer immediately preceding the output layer can be determined, and from that layer's error the error of the layer before it, and so on, computing backwards layer by layer from the output layer to the input layer. In this way the error of every layer of the initial neural network model can be determined, and the loss function of the model can be determined from those per-layer errors. Repeatedly feeding samples into the initial neural network model makes the model learn continuously; it learns by continually adjusting its network connection weights under the stimulus of the external input samples, until its loss function satisfies the preset condition. When the loss function of the initial neural network model satisfies the preset condition, the training module 370 can consider the training complete, determine the model's current parameters to be the parameters of the processing load prediction model, and obtain the processing load prediction model from those parameters.
As it can be seen that implement server described in Fig. 5, can processing load to each node carry out real-time monitoring, work as prison The Transaction Information for measuring a large amount of affairs blocks when some node, will exceed the affairs of the target transaction of the processing load of the node Data are produced to replacement node identical with the node type, so that the processing load of same type node obtains reasonable equilibrium, into And improve tediously long dilatory issued transaction status;In addition, predicting the prediction of the node based on above-mentioned processing load forecasting model Load is handled, the situation of change of the processing load of the node can be known in advance, be conducive to close the Transaction Information of node Reason allotment, optimizes the portfolio effect of processing load.
Referring to Fig. 6, Fig. 6 is a block diagram of another server according to an exemplary embodiment, where the server shown in Fig. 6 is obtained by further optimizing the server shown in Fig. 5. Compared with the server shown in Fig. 5, in the server shown in Fig. 6:
The way the first determining module 320 determines the current processing load of the node from the transaction inflow information and the transaction outflow information may specifically be:

determining the current transactions of the node from the transaction identifiers carried in the transaction inflow log and the transaction identifiers carried in the transaction outflow log, the current transactions being all the first transactions on the node except the second transactions; and taking the transaction data of the current transactions as the current processing load of the node.

Here the transaction inflow information further includes a transaction inflow log carrying the transaction identifiers of the first transactions, and the transaction outflow information further includes a transaction outflow log carrying the transaction identifiers of the second transactions.
In an exemplary embodiment, as shown in Fig. 6, the transaction inflow log may also carry the transaction types of the first transactions, and the transaction outflow log the transaction types of the second transactions; the way the transfer module 340 transfers the transaction data of at least one target transaction on the node to a replacement node may specifically be:

determining the current transactions of the node, the current transactions being all the first transactions on the node except the second transactions, and determining at least one target transaction from the current transactions;

querying the preset transaction-type-to-node lookup table to find a replacement node whose transaction type is the same as that of the target transaction;

transferring the transaction data of the at least one target transaction to the replacement node, so that the replacement node completes the processing of the transaction data of the at least one target transaction.
In yet another exemplary embodiment, as shown in Fig. 6, the acquisition module 310 is further configured to obtain the flow direction of the target transaction after the transfer module 340 has transferred the transaction data of the at least one target transaction on the node to the replacement node, and to supply the flow direction of the target transaction to an update module 380.

The update module 380 is configured to update the flow direction record of the target transaction according to the flow direction of the target transaction; the flow direction record is used to track and count the circulation path of the target transaction.

By updating the target transaction's flow direction record according to the transaction's flow direction, the server can globally monitor the circulation path along which the target transaction's data flows between nodes. Moreover, after receiving a query request issued by a user's device, the server can respond immediately according to the target transaction's flow direction record as updated at the moment the query request was received, i.e. send the flow direction record of the target transaction to the user's device, which improves response speed and thus the user's interactive experience.
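The flow-direction bookkeeping above can be sketched as follows. The append-per-transfer record and the immediate query response come from the text; the node and transaction names are invented for illustration.

```python
from collections import defaultdict

flow_records = defaultdict(list)   # transaction ID -> circulation path so far

def record_transfer(tx_id, from_node, to_node):
    """Update the flow direction record when transaction data is moved."""
    if not flow_records[tx_id]:
        flow_records[tx_id].append(from_node)   # seed the path with the origin
    flow_records[tx_id].append(to_node)

def handle_query(tx_id):
    """Respond to a user's query with the up-to-date flow direction record."""
    return list(flow_records.get(tx_id, []))

record_transfer("tx_12", "company_01", "company_04")
record_transfer("tx_12", "company_04", "company_07")
path = handle_query("tx_12")
```

Because the record is updated at transfer time rather than computed at query time, the query handler only has to read it back, which is what makes the immediate response possible.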
As it can be seen that implement server described in Fig. 6, can processing load to each node carry out real-time monitoring, work as prison The Transaction Information for measuring a large amount of affairs blocks when some node, will exceed the affairs of the target transaction of the processing load of the node Data are produced to replacement node identical with the node type, so that the processing load of same type node obtains reasonable equilibrium, into And improve tediously long dilatory issued transaction status;And the situation of change of the processing load of the node can be known in advance, have Rational allocation is carried out conducive to the Transaction Information to node, optimizes the portfolio effect of processing load;Furthermore it is possible to improve response speed Degree, and then improve the interactive experience of user.
The present invention further provides an electronic device, the electronic device including:

a processor; and

a memory on which computer-readable instructions are stored; when the computer-readable instructions are executed by the processor, the intelligent allocation method for distributed transaction processing described above is implemented.

The electronic device may be the device 100 shown in Fig. 1.
In an exemplary embodiment, the present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the intelligent allocation method for distributed transaction processing described above is implemented.
It should be understood that the present invention is not limited to the precise constructions described above and shown in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present invention is limited only by the appended claims.

Claims (10)

1. An intelligent allocation method for distributed transaction processing, characterized in that the method comprises:

obtaining, by a server, transaction inflow information of a given node and transaction outflow information of the node, wherein the transaction inflow information includes the transaction data of all first transactions flowing into the node, and the transaction outflow information includes the transaction data of all second transactions flowing out of the node;

determining, by the server, the current processing load of the node according to the transaction inflow information and the transaction outflow information;

determining, by the server, whether the current processing load is greater than a processing load threshold; and

if the current processing load is greater than the processing load threshold, transferring, by the server, the transaction data of at least one target transaction on the node to a replacement node, the target transaction being at least one of all the first transactions on the node except the second transactions.
2. The method according to claim 1, characterized in that the transaction inflow information further includes a transaction inflow log carrying transaction identifiers of the first transactions, the transaction outflow information further includes a transaction outflow log carrying transaction identifiers of the second transactions, and determining, by the server, the current processing load of the node according to the transaction inflow information and the transaction outflow information comprises:

determining, by the server, the current transactions of the node according to the transaction identifiers carried in the transaction inflow log and the transaction identifiers carried in the transaction outflow log, the current transactions being all the first transactions on the node except the second transactions; and

taking, by the server, the transaction data of the current transactions as the current processing load of the node.
3. The method according to claim 2, characterized in that the transaction inflow log further carries transaction types of the first transactions and the transaction outflow log further carries transaction types of the second transactions, and if the current processing load is greater than the processing load threshold, transferring, by the server, the transaction data of at least one target transaction on the node to a replacement node comprises:

determining, by the server, the at least one target transaction from the current transactions;

querying, by the server, a preset transaction-type-to-node lookup table to find a replacement node whose transaction type is the same as that of the target transaction; and

transferring, by the server, the transaction data of the at least one target transaction to the replacement node, so that the replacement node completes the processing of the transaction data of the at least one target transaction.
4. The method according to any one of claims 1 to 3, characterized in that, after the server transfers the transaction data of the at least one target transaction on the node to the replacement node, the method further comprises:

obtaining, by the server, the flow direction of the target transaction; and

updating, by the server, the flow direction record of the target transaction according to the flow direction of the target transaction, the flow direction record being used to track and count the circulation path of the target transaction.
5. The method according to claim 1, characterized in that the method further comprises:

if the current processing load is not greater than the processing load threshold, inputting, by the server, the current processing load into a processing load prediction model;

determining, by the server, the predicted processing load of the node based on the output of the processing load prediction model;

determining, by the server, whether the predicted processing load is greater than the processing load threshold; and

if the predicted processing load is greater than the processing load threshold, performing, by the server, the step of transferring the transaction data of the at least one target transaction on the node to a replacement node.
6. The method according to claim 5, characterized in that, before the server inputs the current processing load into the processing load prediction model when the current processing load is not greater than the processing load threshold, the method further comprises:

obtaining, by the server, the historical processing load of the node; and

training, by the server, an initial neural network model with the historical processing load to obtain the processing load prediction model.
7. The method according to claim 6, wherein the server training the initial neural network model using the historical processing load to obtain the processing load prediction model comprises:
The server obtains N continuous processing loads from the historical processing load as a sample and inputs the sample into the initial neural network model, so that the initial neural network model outputs the output information corresponding to the sample, N being a positive integer;
The server determines the output load as a target processing load;
The server compares the target processing load with an actual processing load and updates the parameters of the initial neural network model according to the comparison result, the actual processing load being the processing load detected by the server from the node at the moment the target processing load is obtained;
The server judges whether a loss function of the initial neural network model satisfies a preset condition, the loss function being used to characterize the error between the output load of the initial neural network model and the actual value;
If the loss function of the initial neural network model satisfies the preset condition, the server determines the current parameters of the initial neural network model as the parameters of the processing load prediction model, and obtains the processing load prediction model according to those parameters.
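The training loop of claims 6-7 — sliding windows of N continuous historical loads as samples, the load observed right after each window as the actual value, parameter updates until a loss condition is met — might be sketched as below. As a deliberate simplification, a linear model stands in for the claimed initial neural network; the function names, learning rate, and stopping tolerance are all assumptions.

```python
import numpy as np

def train_load_predictor(history, n, epochs=100, lr=0.01):
    """Fit a model mapping N consecutive loads to the next observed load.

    Samples are sliding windows of N continuous loads (claim 7); the
    'actual processing load' is the load following each window; parameters
    are updated by gradient descent until the squared-error loss satisfies
    a preset condition.
    """
    X = np.array([history[i:i + n] for i in range(len(history) - n)])
    y = np.array(history[n:])                 # actual load after each window
    w, b = np.zeros(n), 0.0
    for _ in range(epochs):
        pred = X @ w + b                      # target processing load
        err = pred - y                        # compare with actual load
        w -= lr * X.T @ err / len(y)          # update the parameters
        b -= lr * err.mean()
        if np.mean(err ** 2) < 1e-6:          # preset condition on the loss
            break
    return w, b                               # parameters of the final model
```

Once trained, `X_new @ w + b` plays the role of the processing load prediction model queried in claim 5.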
8. A server, wherein the server comprises:
an obtaining module, configured to obtain the transaction inflow information and the transaction outflow information of a certain node, the transaction inflow information including the transaction information of all first transactions flowing into the node, and the transaction outflow information including the transaction information of all second transactions flowing out of the node;
a first determining module, configured to determine the current processing load of the node according to the transaction inflow information and the transaction outflow information;
a judgment module, configured to judge whether the current processing load is greater than a processing load threshold;
a transfer module, configured to transfer the transaction information of at least one target transaction on the node to a replacement node when the judgment module judges that the current processing load is greater than the processing load threshold, the target transaction being at least one of all the transactions remaining after the second transactions are removed from the first transactions on the node.
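Read together, the four modules of claim 8 could be sketched as one class — a hypothetical Python illustration only. The class name, the dict-based node representation, and the definition of load as the count of pending (inflow-minus-outflow) transactions are assumptions, not taken from the patent:

```python
class AllocationServer:
    """Sketch of the claim-8 server: obtain, determine, judge, transfer."""

    def __init__(self, threshold):
        self.threshold = threshold

    def obtain(self, node):
        # obtaining module: inflow/outflow transaction information of the node
        return node["inflow"], node["outflow"]

    def determine_load(self, inflow, outflow):
        # first determining module: here, load = transactions still pending
        return len(set(inflow) - set(outflow))

    def is_overloaded(self, load):
        # judgment module: compare against the processing load threshold
        return load > self.threshold

    def transfer(self, node, replacement):
        # transfer module: move pending target transactions when overloaded
        inflow, outflow = self.obtain(node)
        load = self.determine_load(inflow, outflow)
        if not self.is_overloaded(load):
            return []
        pending = sorted(set(inflow) - set(outflow))
        replacement.setdefault("inflow", []).extend(pending)
        return pending
```

Usage: with a threshold of 1, a node holding three inflow transactions of which one has already flowed out is judged overloaded, and the two remaining target transactions are moved to the replacement node.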
9. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to perform the intelligent allocation method for distributed transaction processing according to any one of claims 1 to 7.
10. An electronic device, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
CN201811191526.7A 2018-10-12 2018-10-12 Intelligent allocation method for distributed transaction processing and server Active CN109523123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811191526.7A CN109523123B (en) 2018-10-12 2018-10-12 Intelligent allocation method for distributed transaction processing and server


Publications (2)

Publication Number Publication Date
CN109523123A true CN109523123A (en) 2019-03-26
CN109523123B CN109523123B (en) 2024-04-05

Family

ID=65771813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811191526.7A Active CN109523123B (en) 2018-10-12 2018-10-12 Intelligent allocation method for distributed transaction processing and server

Country Status (1)

Country Link
CN (1) CN109523123B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000073142A (en) * 1999-05-06 2000-12-05 서평원 Apparatus And Method For Overload State Sensing Of Message
CN102055675A (en) * 2011-01-21 2011-05-11 清华大学 Multipath routing distribution method based on load equilibrium
CN104580396A (en) * 2014-12-19 2015-04-29 华为技术有限公司 Task scheduling method, node and system
CN105991458A (en) * 2015-02-02 2016-10-05 中兴通讯股份有限公司 Load balancing method and load balancing device
CN106101232A (en) * 2016-06-16 2016-11-09 北京思源置地科技有限公司 Load-balancing method and device
CN106302161A (en) * 2016-08-01 2017-01-04 广东工业大学 Perception data transmission method based on load estimation, device, path control deivce
CN106484530A (en) * 2016-09-05 2017-03-08 努比亚技术有限公司 A kind of distributed task dispatching O&M monitoring system and method


Non-Patent Citations (2)

Title
CHEN Shihao et al., "Research on Prediction-Based Load Balancing in Computational Grids", vol. 26, no. 26, pages 82-85 *
QI Xuan, "A MINI Workflow Process Meta-Model Supporting Dynamic Modification", Computer Engineering & Science, no. 07, 15 July 2007 (2007-07-15), pages 138-140 *

Also Published As

Publication number Publication date
CN109523123B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
CN106982359B (en) A kind of binocular video monitoring method, system and computer readable storage medium
CN109582793A (en) Model training method, customer service system and data labeling system, readable storage medium storing program for executing
CN104144204B (en) A kind of method and system for the simulation for carrying out industrial automation system
CN106549772A (en) Resource prediction method, system and capacity management device
CN105989441A (en) Model parameter adjustment method and device
CN109614238A (en) A kind of recongnition of objects method, apparatus, system and readable storage medium storing program for executing
CN109726664A (en) A kind of intelligence dial plate recommended method, system, equipment and storage medium
CN110442737A (en) The twin method and system of number based on chart database
CN108809694A (en) Arranging service method, system, device and computer readable storage medium
CN110139067A (en) A kind of wild animal monitoring data management information system
CN108320045A (en) Student performance prediction technique and device
CN109840111A (en) A kind of patterned transaction processing system and method
US20200372428A1 (en) Automated and efficient personal transportation vehicle sharing
CN109389518A (en) Association analysis method and device
CN109740965A (en) A kind of engineering verification analysis method and device
CN104866922B (en) A kind of off-grid prediction technique of user and device
CN110535850A (en) Treating method and apparatus, storage medium and the electronic device that account number logs in
CN113486584B (en) Method and device for predicting equipment failure, computer equipment and computer readable storage medium
CN107273979A (en) The method and system of machine learning prediction are performed based on service class
CN107924492A (en) Classified using normalization the value of the confidence to mobile equipment behavior
CN109995592A (en) Quality of service monitoring method and equipment
CN106909560A (en) Point of interest sort method
CN110458572A (en) The determination method of consumer's risk and the method for building up of target risk identification model
CN110263136B (en) Method and device for pushing object to user based on reinforcement learning model
CN109409780B (en) Change processing method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant