CN111400028A - Load balancing processing method for train management


Info

Publication number: CN111400028A (application CN201911398030.1A; granted publication CN111400028B)
Authority: CN (China)
Prior art keywords: message, train, service, copy, IVOC
Original language: Chinese (zh)
Inventors: Wen Bowei (温博为), Zhang Qiang (张强)
Assignee (original and current): Traffic Control Technology TCT Co Ltd
Legal status: Granted; Active

Classifications

    • G06F9/5083: Techniques for rebalancing the load in a distributed system (under G PHYSICS; G06 COMPUTING; G06F ELECTRIC DIGITAL DATA PROCESSING; G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU])
    • G06F9/546: Message passing systems or structures, e.g. queues (under G06F9/54 Interprogram communication)
    • G06F2209/548: Queue (indexing scheme relating to G06F9/54)


Abstract

Embodiments of the disclosure provide a load balancing processing method and apparatus for train management, and a computer-readable storage medium. Each copy of a first-level microservice receives a call request sent by a client, executes the request, generates a corresponding message, and sends it to the message queue of the corresponding next-level microservice. Each copy of that next-level microservice receives its message, performs a predetermined operation, and sends the generated message on to the message queue of its own next-level microservice. Each copy of the last-level microservice receives the messages in its queue, performs a predetermined operation, and sends the generated messages to the message queue of the first-level microservice. Finally, each copy of the first-level microservice receives the messages in the first-level microservice message queue, generates commands, and sends them to the corresponding clients. The scheme improves the availability, load balancing, and scalability of ITS train management.

Description

Load balancing processing method for train management
Technical Field
Embodiments of the present disclosure relate generally to the field of rail transit technology, and more particularly, to a load balancing processing method, apparatus, and computer-readable storage medium for train management.
Background
An ITS (Intelligent Train Supervision) subsystem in a train-to-train communication system is mainly responsible for monitoring the state of trackside equipment and the running condition of trains, raising alarms, providing a human-machine interface to control the trackside equipment and trains, compiling an operation plan, and assisting trains in running automatically according to that plan. By executing a series of automatic logic, it reduces the workload of manual train dispatching as much as possible. The core automatic operations of the ITS are train path calculation, train number management, and arrival/departure calculation.
The ITS is similar in function to the ATS (Automatic Train Supervision) in a CBTC (Communication Based Train Control) system; it is a new system re-developed on the basis of the original ATS for the different system scenarios of train-to-train communication versus CBTC.
Following conventional ATS practice, ITS deployments largely fall into a centralized architecture with an application server as the processing core and a distributed architecture with station extensions as the processing core.
1) As shown in fig. 1, the centralized architecture with the application server as the processing core works as follows: one set of application servers is deployed for the whole line (to achieve high availability, 2 or 4 functionally identical application servers are usually deployed, with only one master periodically executing internal operations and producing output). The main functions of the application server include receiving and processing the data of all OCs (Object Controllers), IVOCs (intelligent vehicle on-board controllers), and TMCs (train management platforms) on the line, as forwarded by the gateway interface machines; this covers processing of trackside equipment state, train information, and temporary speed restrictions. Internally it loads the operation diagram, automatically assigns train numbers, calculates running paths for trains, sends control commands to the IVOCs, sends operation commands to the OCs, responds to switch requests from the IVOCs, calculates conflicts, performs automatic adjustment, generates the actual operation diagram, calculates PIS/PA information, and so on.
This architecture has the advantages of simplicity, a single authority, and high data consistency. Its drawbacks are a heavy computation load, a large impact surface for single-point faults, high hardware requirements, and lack of scalability. A failure of the application server grays out the station interface for the whole line, prevents trains from continuing to run, and has other serious effects. When the line is long or headways are very dense, the memory, CPU, and disk resources occupied by the application server grow, possibly hitting a bottleneck that causes functional anomalies. When the line is extended or the number of trains increases, the pressure can only be relieved by hardware upgrades; the software itself has no capacity to scale.
2) As shown in fig. 2, the distributed architecture with the station extension as the processing core works as follows: each OC centralized station is equipped with a set of station extensions (for high availability, 2 functionally identical station extensions are usually deployed, with only one master periodically executing operations and producing output). The main functions of a station extension include receiving and processing data from the OC of the local centralized area and from the IVOCs within that area, including processing of trackside equipment state and train information. Internally it automatically assigns train numbers, calculates running paths for trains, sends control commands to the IVOCs, sends operation commands to the OC, responds to switch requests from the IVOCs, calculates conflicts, performs automatic adjustment, generates the actual operation diagram, and so on. At the boundaries of centralized areas, adjacent station extensions must exchange the train and station-yard states of the shared visible area; to achieve smooth handover of trains moving across centralized areas, fairly complex right-of-control and joint-management processing logic is added between station extensions.
The advantages of this architecture are high availability, a limited fault impact range, and a certain line-based scalability; because each station extension only ever supervises the trains within its current physical range (a single centralized area), the number of trains is bounded and the hardware requirements can easily be estimated in advance. The drawbacks are complex structure and logic, frequent data synchronization, and, compared with the centralized application-server mode, an increased number of fault points. For the station extension covering the depot, when all trains are to be dispatched at the start of service, dispatch paths must be computed for a large number of trains in real time, and the load is still heavy.
Disclosure of Invention
According to an embodiment of the present disclosure, a load balancing processing scheme for train management is provided.
In a first aspect of the disclosure, a load balancing processing method for train management is provided, including: each copy of a first-level microservice receives a call request sent by a client, executes the request, generates a corresponding message, and sends it to the message queue of the corresponding next-level microservice; each copy of that next-level microservice receives its message, performs a predetermined operation, and sends the generated message to the message queue of its own next-level microservice; each copy of the last-level microservice receives the messages in its queue, performs a predetermined operation, and sends the generated messages to the message queue of the first-level microservice; and each copy of the first-level microservice receives the messages in the first-level microservice message queue, generates commands, and sends them to the corresponding clients.
In a second aspect of the disclosure, an electronic device is provided. The electronic device includes: a memory having a computer program stored thereon and a processor implementing the method as described above when executing the program.
In a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the method according to the first aspect of the present disclosure.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 shows a diagram of train traffic processing in an ITS with the application server as the processing core;
FIG. 2 shows a diagram of train traffic processing in an ITS with the station extensions as the processing core;
FIG. 3 illustrates an operational environment schematic diagram of a load balancing process for train management according to an embodiment of the present disclosure;
FIG. 4 shows a flow diagram of a load balancing processing method of train management according to an embodiment of the present disclosure;
FIG. 5 shows a flow diagram of a method of automatically issuing a plan for a train in accordance with an embodiment of the present disclosure;
FIG. 6 illustrates a message view flow diagram for a method of automatically issuing a plan for a train in accordance with an embodiment of the present disclosure;
FIG. 7 illustrates a memory data perspective flow diagram for a method of automatically issuing a plan for a train in accordance with an embodiment of the present disclosure;
FIG. 8 shows a block diagram of a system for a method of automatically issuing a plan for a train in accordance with an embodiment of the present disclosure;
FIG. 9 illustrates a block diagram of an exemplary electronic device capable of implementing embodiments of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments will be described clearly and completely below with reference to the drawings; obviously, the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art from the embodiments disclosed herein without creative effort shall fall within the protection scope of the present disclosure.
In addition, the term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" herein generally indicates an "or" relationship between the preceding and following objects.
FIG. 3 illustrates a block diagram of an operational environment for a load balancing process for train management according to an embodiment of the present disclosure.
The operating environment is a microservice architecture in a cloud-platform setting, built on the Spring Cloud framework.
In some embodiments, the runtime environment includes a plurality of virtual machines provisioned on a cloud service, with the microservices running in service containers on each virtual machine; as shown in fig. 3, it includes: Eureka Server, Kubernetes Master, Kubernetes Node, Redis Server, and RabbitMQ Server. Specifically:
The Eureka Server provides a server side and a client side: the server side is the Eureka service registry, and the client side completes registration and discovery of microservices with the Eureka service. The Eureka Server manages the information and state of each microservice node. The Eureka client program is deployed with each microservice and remotely accesses the Eureka Server to register that microservice. When one microservice needs to call another, it obtains the service call address from the Eureka Server and performs a remote call.
The Kubernetes Master is the control node of the Kubernetes cluster and is responsible for managing and controlling the whole cluster. The Master node runs the following key processes: the Kubernetes API Server (kube-apiserver), the key service process providing the HTTP REST interface, which is the sole entry point for create, delete, update, and query operations on all resources in Kubernetes, and also the entry process for cluster control; the Kubernetes Controller Manager (kube-controller-manager), the automated control center for all resource objects in Kubernetes; and the Kubernetes Scheduler (kube-scheduler), the process responsible for resource scheduling (Pod scheduling). In addition, an etcd service must be started on the Master node, because the data of all resource objects in Kubernetes is stored in etcd.
Kubernetes Nodes serve as the workload nodes of the Kubernetes cluster, and the following key processes run on each Node: kubelet, responsible for tasks such as creating, starting, and stopping the containers corresponding to Pods, cooperating closely with the Master node to realize the basic functions of cluster management; kube-proxy, the important component implementing the communication and load balancing mechanism of Kubernetes Services; and Docker Engine (docker), responsible for creating and managing containers on the local machine. In this embodiment, the microservices run in containers on the Kubernetes Nodes.
The Redis Server is used as an in-memory database to manage all real-time data.
The RabbitMQ Server implements load balancing of calls among the microservices; to prevent blocking, asynchronous calls are used and conveyed as messages.
In some embodiments, the station extension/application server that implements train management is split into microservices by function, each microservice being an independent autonomous unit. The microservices include: the train tracking service, train number management service, station entering and exiting calculation service, IVOC interface data processing service, TMC interface data processing service, switch conflict management service, operation diagram service, and so on, as shown in Table 1.
[Table 1 is rendered as images in the original publication.]
TABLE 1 Station extension/application server business logic
Data generated by the microservices (train tracking service, train number management service, station entering and exiting calculation service, IVOC interface data processing service, TMC interface data processing service, switch conflict management service, operation diagram service, etc.) is all managed in the in-memory database, and dependent data is likewise obtained from the in-memory database. The services call each other through condition triggering.
In some embodiments, the in-memory database Redis manages all real-time data; the data required by a microservice can be obtained from Redis, which also serves as a data backup. If all copies of a microservice go down, a newly started copy can obtain the result of the last completed operation from Redis and continue running.
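The recovery behavior just described can be sketched as follows. This is an illustrative simplification: a plain dictionary stands in for the Redis server, and the key name (`last_result`) and class names are invented for the example, not taken from the patent.

```python
# Simplified sketch: a microservice copy persists each cycle's result to a
# shared store (standing in for Redis), so a freshly started copy can
# resume from the last completed operation after all copies go down.

store = {}  # stands in for the Redis server

class ServiceCopy:
    def __init__(self, name):
        self.name = name
        # On startup, recover the result of the last completed cycle, if any.
        self.state = store.get("last_result", {"cycle": 0, "positions": {}})

    def run_cycle(self, new_positions):
        self.state["cycle"] += 1
        self.state["positions"].update(new_positions)
        store["last_result"] = self.state  # back up the result to the store
        return self.state["cycle"]

# Copy 1 runs two cycles, then "goes down".
copy1 = ServiceCopy("tracking-1")
copy1.run_cycle({"train1": 100})
copy1.run_cycle({"train1": 120, "train2": 40})
del copy1

# A newly started copy resumes from the last backed-up result.
copy2 = ServiceCopy("tracking-2")
print(copy2.state["cycle"])      # 2
print(copy2.state["positions"])  # {'train1': 120, 'train2': 40}
```

In a real deployment the store would be the Redis Server described above rather than a local dictionary, but the recovery pattern is the same.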
In some embodiments, the number of copies of the IVOC interface data processing service, train tracking service, train number management service, and station entering and exiting calculation service depends on the pre-calculated maximum number of trains on the line and the computational pressure on each service (the computational pressure is proportional to the number of trains).
In some embodiments, the Kubernetes Master automatically deploys N copies of a microservice according to the configuration file, and automatically restarts copies after detecting that they have terminated (similar to the action of a watchdog) until the set target value of N is reached again, thereby meeting the industry requirement for high availability.
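The watchdog-like behavior can be illustrated with a minimal reconciliation loop. This is only a schematic analogy of what the Kubernetes Master does, not Kubernetes code; the target count and all names are invented for the illustration.

```python
# Schematic analogy of the Kubernetes replica "watchdog": a controller
# compares the observed set of running copies with the target N from the
# configuration and starts replacements until N is reached again.

TARGET_N = 3  # target copy count, as read from the deployment configuration

def reconcile(running):
    """Start new copies until the running set reaches TARGET_N."""
    next_id = max(running, default=0) + 1
    while len(running) < TARGET_N:
        running.add(next_id)  # stands in for starting a new container
        next_id += 1
    return running

copies = reconcile(set())   # initial deployment -> {1, 2, 3}
copies.discard(2)           # one copy is detected as closed
copies = reconcile(copies)  # restarted back up to 3 copies
print(sorted(copies))       # [1, 3, 4]
```

In Kubernetes itself this reconciliation is performed continuously by the controller manager against the desired replica count in the Deployment configuration.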
In some embodiments, each microservice integrates the Ribbon client-side load balancer. Ribbon, published by Netflix, helps control HTTP and TCP client behavior; after a service provider list is configured for it, Ribbon automatically routes a service consumer's requests according to some load balancing algorithm. Ribbon provides several load balancing algorithms; the preset ones are: RoundRobinRule (round-robin polling); RandomRule (random selection); AvailabilityFilteringRule, which filters out services whose circuit breaker has tripped due to repeated access failures and services whose number of concurrent connections exceeds a threshold, then accesses the remaining services by a polling strategy; WeightedResponseTimeRule, which computes a weight for every service from its average response time, so that the faster the response, the higher the probability that the service is selected (if the statistics are insufficient at startup, the RoundRobinRule strategy is used until enough statistics exist to switch to WeightedResponseTimeRule); RetryRule, which first obtains a service according to the RoundRobinRule strategy and, if that fails, retries within a set time to obtain an available service; and BestAvailableRule, which first filters out services whose circuit breaker has tripped due to repeated access failures and then selects the service with the least concurrency.
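As one illustration of the rules above, the weighted-response-time idea can be sketched as follows. Ribbon's actual weighting formula differs; this sketch only shows the principle that faster-responding copies are selected with higher probability, and the copy names and timings are invented.

```python
import random

# Simplified sketch of a WeightedResponseTimeRule-style choice: each copy's
# selection weight is the inverse of its average response time, so faster
# copies are picked more often. (Ribbon's actual formula differs.)

avg_response_ms = {"copy1": 10.0, "copy2": 40.0}

def pick(rng):
    names = list(avg_response_ms)
    weights = [1.0 / avg_response_ms[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded so the sketch is reproducible
picks = [pick(rng) for _ in range(1000)]
# copy1 responds 4x faster, so it should be chosen far more often.
print(picks.count("copy1") > picks.count("copy2"))  # True
```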
Fig. 4 shows a flow diagram of a load balancing processing method 400 of train management according to an embodiment of the present disclosure.
At block 402, each copy of the first level microservice receives a call request sent by a client, respectively;
the microservices are formed by splitting the station extension/application server that implements train management by function, each microservice being an independent autonomous unit. The microservices include: the train tracking service, train number management service, station entering and exiting calculation service, IVOC interface data processing service, TMC interface data processing service, switch conflict management service, and operation diagram service.
In some embodiments, the first-level microservice to which the client sends the call request may be the IVOC interface data processing service, the TMC interface data processing service, or the operation diagram service.
In some embodiments, the remote service call can be made using any one of the load balancing algorithms preset by the Ribbon client load balancer. In short, if a microservice has a plurality of copies in normal operation, one call request from the client reaches only one of them, such as copy 1; in the next ATS cycle, the client's call request may reach another copy, such as copy 2.
In this embodiment, the Ribbon load balancer is used when the client calls a microservice, and is generally used for synchronous calls. Taking the called microservice to be the IVOC interface data processing service as an example: the client, i.e., the intelligent vehicle on-board controller (IVOC) of each train, calls the IVOC interface data processing service through an IVOC interface service call request and sends its train state data to that service.
The Ribbon client load balancer selects a copy of the IVOC interface data processing service according to the preset load balancing algorithm and makes the remote service call. For example, suppose the IVOC interface data processing service has two copies; under the polling algorithm, the call request of car 1 is sent to copy 1, the request of car 2 to copy 2, the request of car 3 to copy 1, the request of car 4 to copy 2, and the request of car 5 to copy 1. In the next ATS cycle, the call requests of cars 1-5 may be sent to the other copy. Thanks to this property, the ITS need not manage trains by a fixed physical position (as a station extension does), nor have a single node manage all trains (as an application server does); instead, each management service shares the management tasks of all trains evenly, and in a given ATS cycle any service copy may compute for any train, without being bound to a specific train.
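The worked round-robin example above (cars 1-5 over two copies) can be reproduced in a few lines; the copy and car names are invented for the illustration.

```python
from itertools import cycle

# Round-robin (RoundRobinRule-style) dispatch of the IVOC call requests of
# cars 1-5 across the two copies of the IVOC interface data processing
# service, matching the worked example in the text.

copies = cycle(["copy1", "copy2"])
assignment = {f"car{i}": next(copies) for i in range(1, 6)}
print(assignment)
# {'car1': 'copy1', 'car2': 'copy2', 'car3': 'copy1',
#  'car4': 'copy2', 'car5': 'copy1'}
```

Because the cycle continues across ATS periods, the same car may land on a different copy in the next period, which is exactly the property the text relies on.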
At block 404, each copy of the first-level microservice executes the invocation request, generates a corresponding message, and sends the message to a corresponding next-level microservice message queue;
in some embodiments, the calling relationship between each microservice and the others can be seen in Table 1; for example, the first-level IVOC interface data processing service can call the next-level train tracking service and switch conflict management service. Taking the first-level microservice to be the IVOC interface data processing service and the next-level microservice to be the train tracking service as an example: the copies of the IVOC interface data processing service successively receive the train state data sent by IVOC1-IVOC5; treating the train state data as IVOC external interface data, they add an internal protocol frame header and convert it into the internal communication protocol format. When a copy of the IVOC interface data processing service finishes processing one frame of IVOC data, it generates a message with the topic "IVOC train message received" and sends it to the train tracking service message queue; this message carries the train state data in the internal protocol format.
In some embodiments, calls between microservices at the various levels are passed as messages using asynchronous calls to prevent blocking. Messages can be routed with load balancing using the RabbitMQ messaging mechanism, which is based on AMQP (Advanced Message Queuing Protocol). AMQP is a typical "produce/consume" message model. A producer sends a message to the broker server (RabbitMQ). Inside the broker, Exchanges and Queues are created in advance and linked together by Binding rules. The Exchange distributes messages according to the distribution strategy of its type and its bindings. The message finally arrives at a Queue, waiting for a consumer to take it away. Here the producer is the service sending a message, and the consumer is the called service.
The broker server sends a message to the queue of the corresponding next-level microservice consumer when the "routing key" filled in by the previous-level microservice that produced the message matches the "binding key" declared when the next-level microservice initialized its binding (depending on the Exchange type, the match may be complete or partial).
When one copy of the next-level microservice takes a message and acknowledges it, the message is removed once the acknowledgment is received, and the message is not routed to any other copy of that microservice. By default, the message server forwards messages to the copies of the next-level microservice by a polling (round-robin) load balancing algorithm; alternatively, each copy can be set to receive only one message at a time and acknowledge it after processing, so that a copy with weaker computing power takes messages more slowly than a stronger copy, and the copy with stronger processing capability handles more messages. This mechanism is the message load balancing of the AMQP model. For example, the train position calculation service generates position messages for car 1 and car 2; in this process the train position calculation service acts as the producer, and the 2 messages carry the same "routing key" and have now reached the Exchange. Two other services, the train number management service and the station entering and exiting calculation service, as consumers of the messages, declare two queues, Queue1 and Queue2, bound to the Exchange. The Exchange then delivers message 1 to both Queue1 and Queue2. All copies of the train number management service are bound to Queue1, and all copies of the station entering and exiting calculation service are bound to Queue2. Queue1 pushes the message to one copy of the train number management service according to the polling algorithm; after receiving it, that copy acknowledges, and Queue1 removes the message.
Similarly, the remaining messages are processed and pushed to some copy of the train number management service.
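The exchange/queue/binding flow described above can be modeled in a few lines. This is an in-memory simplification of the AMQP behavior (no real broker is involved), and the queue, copy, and routing-key names are invented for the illustration.

```python
from collections import deque
from itertools import cycle

# In-memory simplification of the AMQP routing described above: an exchange
# delivers a message to every queue whose binding key matches its routing
# key, and each queue then pushes messages to its consumer copies in
# round-robin order.

class Exchange:
    def __init__(self):
        self.bindings = []  # list of (binding_key, queue)

    def bind(self, key, queue):
        self.bindings.append((key, queue))

    def publish(self, routing_key, body):
        for key, queue in self.bindings:
            if key == routing_key:  # "direct"-style complete match
                queue.messages.append(body)

class Queue:
    def __init__(self, copies):
        self.messages = deque()
        self._copies = cycle(copies)  # consumer copies, polled round-robin

    def drain(self):
        delivered = []
        while self.messages:
            delivered.append((next(self._copies), self.messages.popleft()))
        return delivered

ex = Exchange()
q1 = Queue(["train-number-mgmt-1", "train-number-mgmt-2"])
q2 = Queue(["inout-calc-1", "inout-calc-2"])
ex.bind("train.position", q1)
ex.bind("train.position", q2)

ex.publish("train.position", "position of car 1")
ex.publish("train.position", "position of car 2")

print(q1.drain())  # [('train-number-mgmt-1', 'position of car 1'),
                   #  ('train-number-mgmt-2', 'position of car 2')]
```

As in the text's example, each position message reaches both queues, but within a queue each message is consumed by exactly one copy.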
At block 406, each copy of the next-level microservice receives its corresponding message, performs a predetermined operation, and generates a corresponding message;
at block 408, if a further next-level microservice to be called exists, the generated message is sent to the corresponding next-level microservice message queue, and block 412 is executed;
if not, the generated message is sent to the corresponding first-level microservice message queue at block 410, and block 414 is performed;
in some embodiments, the calling relationship between each microservice and other microservices can be known from table 1, for example, the train tracking service of the next microservice can call the train number management service of the next microservice, and the calculation service of the station entering and exiting.
In some embodiments, the called next-level service receives the message from its queue and performs the predetermined operation. For example, each copy of the train tracking service receives a message with the topic "IVOC train message received" from the train tracking service message queue, updates its internal train position according to the train position in that message, generates a message with the topic "train position", and sends it to the train number management service message queue and the station entering and exiting calculation service message queue of the next-level microservices, so that those services perform their predetermined operations when called.
At block 412, each copy of the next-level microservice receives a corresponding message, performs a predetermined operation, and sends the generated message to a corresponding first-level microservice message queue;
In some embodiments, relative to the train tracking service, the next-level microservices comprise the train number management service and the inbound and outbound station calculation service; in other embodiments, relative to the first-level operation diagram service, the next-level microservices are likewise the train number management service and the inbound and outbound station calculation service.
Each copy of the train number management service receives a message with the topic "train position" from the train number management service message queue, calculates the corresponding train number, generates a message with the topic "train number", and sends it to the IVOC interface data processing service message queue.
Each copy of the inbound and outbound station calculation service receives a message with the topic "train position" from the inbound and outbound station calculation service message queue, calculates the corresponding early/late point information of the train, generates a message with the topic "next station of train, early/late point", and sends it to the IVOC interface data processing service message queue.
At block 414, each copy of the first level microservice receives the message in the first level microservice message queue, generates a command, and sends the command to the corresponding client.
In some embodiments, each copy of the IVOC interface data processing service receives a message from the IVOC interface data processing service message queue, generates "train control information" in the external communication protocol format, and sends the "train control information" to the corresponding IVOC. Similarly, each copy of the TMC interface data processing service receives a message from the TMC interface data processing service message queue and generates a command for the TMC. Similarly, each copy of the operation diagram service receives messages from the operation diagram service message queue and generates the current-day planned operation diagram.
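The queue-driven call chain described above can be sketched minimally as follows. This is an illustrative toy, not the actual implementation: the service and queue names are taken from the description, but the `MessageBus` class and function signatures are hypothetical stand-ins for RabbitMQ and the real services.

```python
from collections import defaultdict, deque

class MessageBus:
    """Toy stand-in for the message server: one FIFO queue per service."""
    def __init__(self):
        self.queues = defaultdict(deque)

    def publish(self, queue_name, message):
        self.queues[queue_name].append(message)

    def consume(self, queue_name):
        q = self.queues[queue_name]
        return q.popleft() if q else None

def ivoc_interface_service(bus, train_state):
    # First-level service: wraps external data and pushes it downstream.
    bus.publish("train_tracking", {"topic": "received IVOC train message",
                                   "train_id": train_state["train_id"],
                                   "position": train_state["position"]})

def train_tracking_service(bus):
    # Next-level service: executes only when triggered by a received message,
    # then fans its result out to the two queues named in the description.
    msg = bus.consume("train_tracking")
    if msg is not None:
        out = {"topic": "train position",
               "train_id": msg["train_id"], "position": msg["position"]}
        bus.publish("train_number_mgmt", out)
        bus.publish("inout_station_calc", out)

bus = MessageBus()
ivoc_interface_service(bus, {"train_id": 1, "position": 120.5})
train_tracking_service(bus)
```

Note how a copy that consumes no message produces no output, which mirrors the document's point that business logic is triggered only by explicit messages.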
According to the embodiment of the disclosure, the following technical effects are achieved:
high availability: the number of copies required by each microservice is calculated according to the load condition, and the copies are deployed on the virtual machines of the cloud platform, which improves the availability of the service;
load balancing: call requests and message flows are distributed using the load balancing policies of the Spring Cloud framework and of the RabbitMQ message queue (an implementation of AMQP) itself; each copy runs independently, and the overall processing pressure is shared in a load-balanced manner;
the execution of business logic is triggered by explicit calling relationships/messages among the microservices, which avoids the risk of inconsistent results caused by multiple copies producing output simultaneously;
high scalability: the system scales well to cope with the pressure of long lines and trains running at short intervals; the operating pressure on each service can be reduced simply by increasing the number of deployed service copies.
Next, the load balancing processing method for train management will be further described by taking the automatic issuance of a train plan as an example.
Fig. 5 shows a flow diagram of a method 500 of automatically issuing a plan for a train in accordance with an embodiment of the present disclosure.
At block 502, each copy of the IVOC interface data processing service receives the train state data sent by each IVOC, generates a message with the topic "received IVOC train message", and sends it to the train tracking service message queue.
The intelligent on-board controller (IVOC) of each train sends its train state data to the IVOC interface data processing service.
In some embodiments, the intelligent on-board controller IVOC of each train calls the IVOC interface data processing service through an IVOC interface service call request and sends the train state data to it. Under the load balancing algorithm of the Ribbon client-side load balancer, the IVOC interface data processing service call request of each train reaches exactly one copy of the IVOC interface data processing service.
In some embodiments, the train status data conforms to the ITS-IVOC external interface protocol, including: train ID, train true location, etc.
In some embodiments, the IVOCs of cars 1-5 send IVOC interface data processing service call requests to the IVOC interface data processing service at the same time or at different times. The Ribbon client-side load balancer selects a corresponding copy of the IVOC interface data processing service according to a preset load balancing algorithm and performs the remote service call. For example, assume the IVOC interface data processing service has two copies; under the round-robin (polling) algorithm, the call request of car 1 is sent to copy 1, the call request of car 2 to copy 2, the call request of car 3 to copy 1, the call request of car 4 to copy 2, and the call request of car 5 to copy 1.
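The round-robin distribution in this example can be illustrated with a small sketch. This is not Ribbon's actual code, only the polling policy it applies; the copy names are illustrative.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin ("polling") chooser over a fixed list of copies."""
    def __init__(self, copies):
        self._cycle = cycle(copies)

    def choose(self):
        # Each call request lands on the next copy in rotation.
        return next(self._cycle)

balancer = RoundRobinBalancer(["copy-1", "copy-2"])
# Requests from cars 1-5 alternate over the two copies.
targets = [balancer.choose() for _ in range(5)]
```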
In some embodiments, the number of copies of the IVOC interface data processing service, the train tracking service, the train number management service, and the inbound and outbound station calculation service depends on the pre-calculated maximum number of trains on the line and on the computational pressure on each service (the computational pressure is proportional to the number of trains).
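Since the computational pressure is stated to be proportional to the number of trains, the copy count can be sized with a simple linear rule. The following is a hedged sketch only: the document gives no concrete formula, and `trains_per_copy` (the capacity one copy can handle) is an assumed tuning parameter.

```python
import math

def copies_needed(max_trains_on_line, trains_per_copy):
    """Linear sizing heuristic: pressure grows with train count,
    so deploy enough copies to cover the peak, and at least one."""
    return max(1, math.ceil(max_trains_on_line / trains_per_copy))
```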
In some embodiments, the copies of the IVOC interface data processing service successively receive the train state data sent by IVOC1-IVOC5, treat the train state data as IVOC external interface data, add an internal protocol frame header to it, and convert it into the internal communication protocol format. After finishing processing one frame of IVOC data, a copy of the IVOC interface data processing service generates a message with the topic "received IVOC train message" and sends it to the train tracking service message queue; this message carries the train state data in the internal protocol format.
In some embodiments, sending the message of the "received IVOC train message" topic to a train tracking service message queue comprises:
sending the message with the topic "received IVOC train message" to the message server, namely the broker server (RabbitMQ);
the message server checks that the "routing key" filled in by the IVOC interface data processing service when it generated the message is consistent with the "binding key" declared by the called service, namely the train tracking service, when it initialized its binding (depending on the exchange type, a complete match or a partial match may be required), and the exchange in the message server then delivers the message to the train tracking service queue.
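The complete-match versus partial-match behavior mentioned above can be sketched as follows. This is a simplified illustration of RabbitMQ-style matching, not the broker's implementation, and it ignores some corner cases (for example, `#` matching zero words): a direct exchange requires the routing key to equal the binding key exactly, while a topic exchange allows wildcard matches.

```python
import re

def direct_match(routing_key, binding_key):
    # Direct exchange: complete match only.
    return routing_key == binding_key

def topic_match(routing_key, binding_key):
    # Topic exchange: '*' matches one dot-separated word,
    # '#' (here, simplified) matches one or more words.
    pattern = re.escape(binding_key).replace(r"\*", r"[^.]+").replace(r"\#", r".+")
    return re.fullmatch(pattern, routing_key) is not None
```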
At block 504, each copy of the train tracking service receives a message of the topic of "received IVOC train message" in the train tracking service message queue, updates the internal train position according to the train position in the message of the topic of "received IVOC train message", generates a message of the topic of "train position", and sends the message of the topic of "train position" to the train number management service message queue and the inbound and outbound calculation service message queue.
In some embodiments, the exchange in the message server distributes the messages with the topic "received IVOC train message" in the train tracking service message queue to the multiple copies of the train tracking service in a load-balanced manner according to the RabbitMQ message mechanism; for example, the "received IVOC1 train message", "received IVOC3 train message", and "received IVOC4 train message" are pushed to copy 1 of the train tracking service, while the "received IVOC2 train message" and "received IVOC5 train message" are pushed to copy 2.
In some embodiments, the train tracking service updates the internal train position according to the train position in the message with the topic "received IVOC train message", outputs the train position according to the calculation result, generates a message with the topic "train position", and sends it to the train number management service message queue and the inbound and outbound station calculation service message queue.
For example, copy 1 of the train tracking service receives the "received IVOC1 train message", "received IVOC3 train message", and "received IVOC4 train message", and copy 2 receives the "received IVOC2 train message" and "received IVOC5 train message"; copy 1 then executes the calculation logic and outputs the IVOC1, IVOC3, and IVOC4 train position messages, while copy 2 executes the calculation logic and outputs the IVOC2 and IVOC5 train position messages. A copy of the train tracking service does not actively execute the calculation logic when it receives no message, so it outputs no message in that period; this avoids the unexpected result of two copies simultaneously outputting the "train position" of the same train.
Each time the train tracking service calculates and outputs a "train position", it updates the "train position" field in the in-memory database. If any other service needs the position information of a train when executing its specific logic, it can fetch the train position from the in-memory database.
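The shared-state pattern above (write-on-calculation, read-on-demand) can be sketched with a minimal stand-in for the in-memory database. The class and field names here are illustrative; the actual system uses a Redis in-memory database, as noted in claim 7.

```python
class InMemoryStore:
    """Toy per-train field store standing in for Redis hashes."""
    def __init__(self):
        self._data = {}

    def set_field(self, train_id, field, value):
        self._data.setdefault(train_id, {})[field] = value

    def get_field(self, train_id, field):
        # Returns None when the train or field is unknown.
        return self._data.get(train_id, {}).get(field)

store = InMemoryStore()
# The train tracking service updates the field on every calculation...
store.set_field("train-1", "position", 215.0)
# ...and any other service reads it when executing its own logic.
position = store.get_field("train-1", "position")
```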
At block 506, each copy of the train number management service receives the message of the topic of "train position" in the train number management service message queue, calculates the corresponding train number, generates the message of the topic of "train number", and sends the message of the topic of "train number" to the IVOC interface data processing service message queue.
In some embodiments, because its message exchange is bound to the "train position" topic at startup, the train number management service automatically receives the "train position" messages. The exchange in the message server distributes the messages with the topic "train position" in the train number management service message queue to the multiple copies of the train number management service in a load-balanced manner according to the RabbitMQ message mechanism; for example, the "train position 1", "train position 2", and "train position 4" messages are pushed to copy 1 of the train number management service, while the "train position 3" and "train position 5" messages are pushed to copy 2.
In some embodiments, each copy of the train number management service receives a message with the topic "train position", queries the train stopping state from the in-memory database, judges whether the train has reached the stopped state on the turnback track/transfer track, reassigns a train number for each train meeting the conditions, then updates the "train number" field in the in-memory database, generates a message with the topic "train number", and sends it to the IVOC interface data processing service message queue.
While, before, or after block 506 is performed, at block 508, each copy of the inbound and outbound station calculation service receives a message with the topic "train position" from the inbound and outbound station calculation service message queue, calculates the corresponding early/late point information of the train, generates a message with the topic "next station of train, early/late point", and sends it to the IVOC interface data processing service message queue.
In some embodiments, the inbound and outbound computing services will automatically receive the "train location" message due to a message exchange that is also bound to the "train location" topic at startup. According to the RabbitMQ message mechanism, the same switch can be ensured to route the same message to different queues bound with the switch, and the queue bound with the train number management service is different from the queue bound with the in-out station computing service, so that the same 'train position' message can be received.
In some embodiments, the inbound and outbound computing services will automatically receive the "train location" message due to the message exchange that is bound to the "train location" topic at startup. The switch in the message server respectively sends the message load of the subject of the train position in the message queue of the inbound and outbound computing service to a plurality of copies of the inbound and outbound computing service in a balanced manner according to a Rabbit MQ message mechanism; for example, push "train position 2 message", "train position 4 message" to copy 1 of the inbound and outbound computing service; the "train position 1 message", "train position 3 message", "train position 5 message" are pushed to the copy 2 of the inbound and outbound computing service.
In some embodiments, according to the "train position" message, if the train has arrived at the platform rail, the train-fastening state of the train is updated according to the train-fastening state of the platform, then the train-parking state is inquired from the memory database, whether the train is parked stably in the parking area is judged, the train is sent out and counted down according to the early-late point and the next station according to the early-late point, the updated information of the "train early-late point, train-fastening, train-sending-counting-down and the next station" is output, the initial value fields of the "early-late point, the next station, train-fastening and train-sending-counting-down" of the train in the memory database are updated, and the message of the theme of the "next station and early-late point of the train" is generated.
In some embodiments, copy 1 of the inbound and outbound station calculation service receives the "train position 2" and "train position 4" messages, is triggered accordingly, and generates the "train early/late point, hold-train, departure countdown, next station" information for the corresponding trains 2 and 4; copy 2 of the inbound and outbound station calculation service receives the "train position 1", "train position 3", and "train position 5" messages and generates the corresponding information for trains 1, 3, and 5. According to this information, the "early/late point, next station, hold-train, departure countdown" initial-value fields of the trains in the in-memory database are updated, a message with the topic "next station of train, early/late point" is generated, and the message is sent to the IVOC interface data processing service message queue.
At block 510, each copy of the IVOC interface data processing service receives a message in the IVOC interface data processing service message queue, generates "train control information" in an external communication protocol format, and sends the "train control information" to the corresponding IVOC.
In some embodiments, because its message exchange is bound to the "train number" topic and the "next station of train, early/late point" topic at startup, the IVOC interface data processing service automatically receives messages with these topics. The exchange in the message server distributes the messages with the "train number" topic and the "next station of train, early/late point" topic in the IVOC interface data processing service message queue to the multiple copies of the IVOC interface data processing service in a load-balanced manner according to the RabbitMQ message mechanism. It should be noted that, in the IVOC interface data processing service message queue, the "train number" topic message and the "next station of train, early/late point" topic message corresponding to the same train are bound together and sent to one copy of the IVOC interface data processing service.
In some embodiments, each copy of the IVOC interface data processing service obtains, for each train, the "train number" information from the "train number" topic message and the "early/late point, hold-train, departure countdown" information from the "next station of train, early/late point" topic message, assembles them into "train control information" in the external communication protocol format according to the period required by the protocol, and sends it to the IVOC of the corresponding train. For example, the "train control information" of trains 1-5 is transmitted to IVOC1-IVOC5 respectively, so that IVOC1-IVOC5 control trains 1-5 according to the "train control information".
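The per-train assembly step above can be sketched as a merge of the two topic messages. All field names here are illustrative stand-ins: the real ITS-IVOC external protocol layout is not given in the source.

```python
def assemble_train_control(number_msg, station_msg):
    """Merge a "train number" message and the matching "next station,
    early/late point" message for the same train into one control frame."""
    assert number_msg["train_id"] == station_msg["train_id"]
    return {
        "train_id": number_msg["train_id"],
        "train_number": number_msg["train_number"],
        "next_station": station_msg["next_station"],
        "early_late": station_msg["early_late"],
        "hold_train": station_msg["hold_train"],
        "departure_countdown": station_msg["departure_countdown"],
    }

info = assemble_train_control(
    {"train_id": 3, "train_number": "8001"},
    {"train_id": 3, "next_station": "S05", "early_late": -30,
     "hold_train": False, "departure_countdown": 45},
)
```

The note in the text that both topic messages for one train reach the same copy is what makes this local merge possible without cross-copy coordination.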
In some embodiments, a message view flow and a memory data view flow of the method are shown in fig. 6 and fig. 7, respectively.
According to the embodiment of the disclosure, the following technical effects are achieved:
high availability: the number of copies required by the IVOC interface data processing service, the train tracking service, the train number management service, and the inbound and outbound station calculation service is calculated according to the load condition, and the copies are deployed on the virtual machines of the cloud platform, which improves the availability of the service;
load balancing: call requests and message flows are distributed using the load balancing policies of the Spring Cloud framework and of the RabbitMQ message queue (an implementation of AMQP) itself; each copy runs independently, and the overall processing pressure is shared in a load-balanced manner;
high scalability: the system scales well to cope with the pressure of long lines and trains running at short intervals; the operating pressure on each service can be reduced simply by increasing the number of deployed service copies.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
The above is a description of embodiments of the method, and the embodiments of the apparatus are further described below.
Fig. 8 shows a block diagram of a system 800 for automatically issuing a plan for a train according to an embodiment of the present disclosure. As shown in fig. 8, the system 800 includes:
the IVOC interface data processing service module 802, each copy of which receives the train state data sent by each IVOC, generates a message with the topic "received IVOC train message", and sends it to the train tracking service message queue;
the train tracking service module 804, each copy of which receives a message with the topic "received IVOC train message" from the train tracking service message queue, generates a message with the topic "train position", and sends it to the train number management service message queue and the inbound and outbound station calculation service message queue;
the train number management service module 806, each copy of which receives a message with the topic "train position" from the train number management service message queue, generates a message with the topic "train number", and sends it to the IVOC interface data processing service message queue;
the inbound and outbound station calculation service module 808, each copy of which receives a message with the topic "train position" from the inbound and outbound station calculation service message queue, generates a message with the topic "next station of train, early/late point", and sends it to the IVOC interface data processing service message queue;
each copy of the IVOC interface data processing service module 802 also receives the messages with the "train number" topic and the "next station of train, early/late point" topic from the IVOC interface data processing service message queue, generates "train control information", and sends the "train control information" to the corresponding IVOC.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the described module may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
FIG. 9 illustrates a schematic block diagram of an electronic device 900 that may be used to implement embodiments of the present disclosure. As shown, device 900 includes a central processing unit (CPU) 901 that can perform various appropriate actions and processes in accordance with computer program instructions stored in a read-only memory (ROM) 902 or loaded from a storage unit 908 into a random access memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The CPU 901, ROM 902, and RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
A number of components in the device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The processing unit 901 performs the various methods and processes described above, such as the methods 400, 500. For example, in some embodiments, the methods 400, 500 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 900 via ROM 902 and/or communications unit 909. When the computer program is loaded into RAM 903 and executed by CPU901, one or more steps of methods 400, 500 described above may be performed. Alternatively, in other embodiments, the CPU901 may be configured to perform the methods 400, 500 in any other suitable manner (e.g., by way of firmware).
For example, without limitation, exemplary types of hardware logic that may be used include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so forth.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (9)

1. A load balancing processing method for train management is characterized by comprising the following steps:
each copy of the first-level micro service receives a calling request sent by a client, executes the calling request, generates a corresponding message, and sends the message to a corresponding next-level micro service message queue;
each copy of the next-level micro service receives the corresponding message respectively, executes the preset operation and sends the generated message to the corresponding next-level micro service message queue;
each copy of the next-level micro service receives the message in the next-level micro service message queue, executes the preset operation, and sends the generated message to the corresponding first-level micro service message queue;
and each copy of the first-stage micro service receives the message in the first-stage micro service message queue respectively, generates a command and sends the command to the corresponding client.
2. The method of claim 1,
the micro-services are formed by splitting a station extension/application server for realizing train management according to functions, and comprise a train tracking service, a train number management service, an inbound and outbound station calculation service, an IVOC interface data processing service, a TMC interface data processing service, a turnout conflict management service, and an operation diagram service.
3. The method of claim 1,
each micro-service is deployed on a virtual machine of the cloud platform, and the number of copies is determined according to the load condition.
4. The method of claim 1, wherein sending the generated message to a corresponding next-level microservice message queue comprises:
if the next-level microservice has the next-level microservice to be called, sending the generated message to a corresponding next-level microservice message queue; and if not, sending the generated message to the corresponding first-level micro-service message queue.
5. The method of claim 1, wherein the receiving, by each copy of the first-level microservice, the invocation request sent by the client comprises:
the Ribbon client-side load balancer selects the corresponding copies of the first-level micro service according to a preset load balancing algorithm for remote service calling, and the call requests sent by the client are respectively sent to the copies of the first-level micro service.
6. The method of claim 1, wherein receiving the messages in the next-level microservice message queue by each copy of the next-level microservice comprises:
the exchange in the message server distributes the message load in the next-level micro-service message queue to the multiple copies of the next-level micro-service in a balanced manner according to the RabbitMQ message mechanism.
7. The method of claim 1,
and storing data corresponding to the message in a Redis memory database.
8. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor, when executing the program, implements the method of any of claims 1-7.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN201911398030.1A 2019-12-30 2019-12-30 Load balancing processing method for train management Active CN111400028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911398030.1A CN111400028B (en) 2019-12-30 2019-12-30 Load balancing processing method for train management


Publications (2)

Publication Number Publication Date
CN111400028A true CN111400028A (en) 2020-07-10
CN111400028B CN111400028B (en) 2023-07-25

Family

ID=71433950


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113919082A (en) * 2021-12-14 2022-01-11 成都运达科技股份有限公司 Train longitudinal dynamics modeling method and system
CN114889673A (en) * 2022-04-28 2022-08-12 西门子交通技术(北京)有限公司 Train control system and train control method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101254791A (en) * 2008-03-31 2008-09-03 北京和利时系统工程有限公司 Rail transit train automatic monitoring system based on communication
CN107284471A (en) * 2017-05-18 2017-10-24 交控科技股份有限公司 A kind of CBTC systems based on truck traffic
CN109703605A (en) * 2018-12-25 2019-05-03 交控科技股份有限公司 A kind of ATS system based on micro services
CN109747684A (en) * 2018-10-09 2019-05-14 比亚迪股份有限公司 For the comprehensive monitoring system of rail traffic, method and computer equipment
US20190342179A1 (en) * 2018-05-07 2019-11-07 Servicenow, Inc. Discovery and Management of Devices
CN110445643A (en) * 2019-07-25 2019-11-12 泰康保险集团股份有限公司 Asynchronous micro services call link tracking, device, medium and electronic equipment



Also Published As

Publication number Publication date
CN111400028B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN108632365B (en) Service resource adjusting method, related device and equipment
RU2654162C2 (en) Vehicle management and computation system
US20190377604A1 (en) Scalable function as a service platform
CN103080903B (en) Scheduler, multi-core processor system and dispatching method
CN104580524A (en) Resource scaling method and cloud platform with same
US11270583B2 (en) Traffic control for autonomous vehicles
WO2020211455A1 (en) Data processing system and method
CN110658794A (en) Manufacturing execution system
US20140337435A1 (en) Device and Method for the Dynamic Load Management of Cloud Services
CN111400028B (en) Load balancing processing method for train management
CN105491150A (en) Load balance processing method based on time sequence and system
CN107168777A (en) The dispatching method and device of resource in distributed system
CN104794239A (en) Cloud platform data processing method
CN104052677A (en) Soft load balancing method and apparatus of single data source
CN111376953B (en) Method and system for issuing plan for train
CN106605213A (en) System for support in event of intermittent connectivity, corresponding local device, and corresponding cloud computing platform
Yamashita Analysis of dispatching rules of AGV systems with multiple vehicles
US11016751B2 (en) Automatic upgrade on total run count data on availability of new software
CN115033355A (en) Task scheduling method, electronic device and storage medium
CN114104883A (en) Central elevator dispatching method and device
CN116954878A (en) Method, apparatus, device, storage medium and program product for managing container clusters
CN114489970A (en) Method and system for realizing queue sequencing by using scheduling plug-in Kubernetes
CN111327663A (en) Bastion machine distribution method and equipment
CN116719632B (en) Task scheduling method, device, equipment and medium
CN113256177B (en) Existing vehicle distribution calculation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant