CN107040388B - Charging system and method - Google Patents

Charging system and method

Info

Publication number
CN107040388B
Authority
CN
China
Prior art keywords
charging
engine
billing
information
billing engine
Prior art date
Legal status
Active
Application number
CN201610079089.4A
Other languages
Chinese (zh)
Other versions
CN107040388A (en)
Inventor
侯建卫
鲁瑞
Current Assignee
China Mobile Group Shanxi Co Ltd
Original Assignee
China Mobile Group Shanxi Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Group Shanxi Co Ltd filed Critical China Mobile Group Shanxi Co Ltd
Priority to CN201610079089.4A priority Critical patent/CN107040388B/en
Publication of CN107040388A publication Critical patent/CN107040388A/en
Application granted granted Critical
Publication of CN107040388B publication Critical patent/CN107040388B/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/02: Details
    • H04L 12/14: Charging, metering or billing arrangements for data wireline or wireless communications
    • H04L 12/1403: Architecture for metering, charging or billing
    • H04L 12/1407: Policy-and-charging control [PCC] architecture
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 15/00: Arrangements for metering, time-control or time indication; Metering, charging or billing arrangements for voice wireline or wireless communications, e.g. VoIP
    • H04M 15/64: On-line charging system [OCS]
    • H04M 15/65: Off-line charging system
    • H04M 15/66: Policy and charging system

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An embodiment of the invention discloses a charging system, comprising: a charging preprocessing device, configured to acquire first charging information and second charging information, send the first charging information to an offline charging engine, and send the second charging information to an online charging engine, where the first charging information comprises information whose data type is a file and the second charging information comprises information whose data type is a message; the offline charging engine, configured to generate a first charging ticket according to the first charging information and send it to a bill management device; the online charging engine, configured to generate a second charging ticket according to the second charging information and send it to the bill management device; and the bill management device, configured to generate a user bill according to the first charging ticket and the second charging ticket. An embodiment of the invention also discloses a charging method.

Description

Charging system and method
Technical Field
The present invention relates to charging technologies in the field of communications, and in particular, to a charging system and method.
Background
A charging system records user service charge information in a communication network; it is a core platform of a telecommunications operator and the core guarantee and supporting foundation on which the operator develops its services.
On its charging architecture, the existing charging system mainly adopts a converged charging mode, i.e., a mode in which the online charging system and the offline charging system are fused. In its concrete hardware and software, however, the existing charging system still relies on the traditional "IOE" stack, i.e., IBM minicomputers, Oracle databases and EMC storage systems, and the online charging system and the offline charging system run the same version of the charging engine program. As a result, the existing charging system suffers from low processing speed and efficiency and poor system performance.
Disclosure of Invention
In view of this, embodiments of the present invention are expected to provide a charging system and a charging method, which can improve the processing speed and efficiency of the charging system and effectively improve the system performance.
To achieve the above objective, the technical solution of the present invention is implemented as follows:
an embodiment of the present invention provides a charging system, including:
the charging preprocessing device is used for acquiring first charging information and second charging information, sending the first charging information to an offline charging engine and sending the second charging information to an online charging engine; the first charging information comprises information that the data type is a file, and the second charging information comprises information that the data type is a message;
the offline charging engine is used for generating a first charging ticket according to the first charging information and sending the first charging ticket to a bill management device;
the online charging engine is used for generating a second charging ticket according to the second charging information and sending the second charging ticket to the bill management device;
and the bill management device is used for generating a user bill according to the first charging ticket and the second charging ticket.
In the above solution, the online charging engine includes: a first billing engine and a second billing engine; wherein the first billing engine comprises a cluster of at least one host based on cloud computing;
the charging system further comprises: an agent adapter, a node manager;
the proxy adapter is used for receiving the second charging information sent by the charging preprocessing device and sending the second charging information to the first charging engine or the second charging engine according to a preset strategy;
the node manager is used for scheduling and managing the cluster of at least one host based on cloud computing.
In the above scheme, the second charging engine is further configured to send a first heartbeat message to any host of the first charging engine, and send an indication message to the proxy adapter when a first heartbeat response sent by any host is not received within a first preset time period, where the indication message is used to indicate that the proxy adapter sends all the second charging information to the second charging engine;
and the any host is used for sending the first heartbeat message to other hosts of the first billing engine after receiving the first heartbeat message, receiving first heartbeat message feedback returned by the other hosts, and sending the first heartbeat response to the second billing engine when the number of the received first heartbeat message feedback returned by the other hosts is greater than a first threshold value.
In the above scheme, any host of the first billing engine is further configured to send a second heartbeat message to the second billing engine, and send a second indication message to the proxy adapter when a second heartbeat response sent by the second billing engine is not received within a second preset time period, where the second indication message is used to indicate the proxy adapter to send all the second billing information to the first billing engine;
and the second charging engine is further configured to send the second heartbeat response to any one of the hosts after receiving the second heartbeat message.
In the above scheme, the proxy adapter is further configured to obtain a length of a queue of messages to be processed or a system resource utilization rate of the first and second billing engines; when the absolute value of the difference between the lengths of the queues of the messages to be processed of the first billing engine and the second billing engine is determined to be larger than a second threshold value, sending the second billing information to the engine with the relatively small length of the queue of the messages to be processed within a third preset time period; or when the system resource utilization rate of at least one of the first billing engine and the second billing engine is determined not to be larger than a third threshold value, sending the second billing information to the at least one engine.
In the above solution, the first charging engine is further configured to send a scheduling request to the node manager when detecting a sudden increase in a queue length of a message to be processed of the first charging engine; the scheduling request comprises the number and the capability of the host nodes;
and the node manager is further configured to schedule host nodes of the requested number and capability for the first charging engine according to the scheduling request.
The embodiment of the invention provides a charging method, which comprises the following steps:
acquiring first charging information and second charging information, sending the first charging information to an offline charging engine, and sending the second charging information to an online charging engine; the first charging information comprises information that the data type is a file, and the second charging information comprises information that the data type is a message;
the offline charging engine generates a first charging ticket according to the first charging information;
the online charging engine generates a second charging ticket according to the second charging information;
and generating a user bill according to the first charging ticket and the second charging ticket.
In the above solution, the online charging engine includes: a first billing engine and a second billing engine; the first billing engine comprises a cluster of at least one host based on cloud computing; wherein the sending the second billing information to an online billing engine comprises:
sending the second charging information to the first charging engine or the second charging engine according to a preset strategy;
in the above aspect, the method further includes: the second billing engine sends a first heartbeat message to any host of the first billing engine, and indicates that all the second billing information is sent to the second billing engine when a first heartbeat response sent by any host is not received within a first preset time period;
and after receiving the first heartbeat message, the host sends the first heartbeat message to other hosts of the first billing engine, receives first heartbeat message feedback returned by the other hosts, and sends the first heartbeat response to the second billing engine when the number of the received first heartbeat message feedback returned by the other hosts is larger than a first threshold value.
In the above aspect, the method further includes: any host of the first billing engine sends a second heartbeat message to the second billing engine, and indicates to send all the second billing information to the first billing engine when a second heartbeat response sent by the second billing engine is not received within a second preset time period; and after receiving the second heartbeat message, the second charging engine sends a second heartbeat response to any host.
In the above aspect, the method further includes: acquiring the length of a message queue to be processed or the utilization rate of system resources of the first billing engine and the second billing engine; when the absolute value of the difference between the lengths of the queues of the messages to be processed of the first billing engine and the second billing engine is determined to be larger than a second threshold value, sending the second billing information to the engine with the relatively small length of the queue of the messages to be processed within a third preset time period; or when the system resource utilization rate of at least one of the first billing engine and the second billing engine is determined not to be larger than a third threshold value, sending the second billing information to the at least one engine.
In the above aspect, the method further includes: when the first billing engine detects a sudden increase in the length of its pending message queue, scheduling host nodes of a corresponding number and capability for the first billing engine.
In the charging system and method provided by the embodiments of the present invention, the online charging system and the offline charging system adopt different charging engine programs, i.e., the online charging engine and the offline charging engine are separated: information whose data type is a file is sent to the offline charging engine for processing, and information whose data type is a message is sent to the online charging engine for processing. The processing speed and efficiency of the charging system are thereby improved, and system performance can be effectively improved.
Drawings
Fig. 1 is a structural diagram of a charging system according to an embodiment of the present invention;
fig. 2 is another structural diagram of a charging system according to an embodiment of the present invention;
fig. 3 is a flowchart of a charging method according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Fig. 1 is a structural diagram of a charging system according to an embodiment of the present invention; as shown in fig. 1, the charging system includes: a charging preprocessing device 101, an offline charging engine 102, an online charging engine 103 and a bill management device 104; wherein:
the charging preprocessing device 101 is configured to obtain first charging information and second charging information, send the first charging information to the offline charging engine 102, and send the second charging information to the online charging engine 103; the first charging information comprises information that the data type is a file, and the second charging information comprises information that the data type is a message;
the offline charging engine 102 is configured to generate a first charging ticket according to the first charging information, and send the first charging ticket to the bill management device 104;
the online charging engine 103 is configured to generate a second charging ticket according to the second charging information, and send the second charging ticket to the bill management device 104;
and the bill management device 104 is configured to generate a user bill according to the first charging ticket and the second charging ticket.
Here, the charging system provided by the embodiment of the present invention may be applied to a charging scenario in an operator network; the offline charging engine may be an offline charging engine applied in an offline charging system and is used for processing information whose data type is a file; the online charging engine may be an online charging engine applied in an online charging system and is used for processing information whose data type is a message. Information whose data type is a message includes, for example, message tickets, Diameter Credit Control (DCC) messages, call duration, international long-distance dialing and IP-prefix dialing; information whose data type is a file includes, for example, file tickets.
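To make the split concrete, the following minimal sketch illustrates how a preprocessing step could route records by data type, handing file-type information to the offline engine and message-type information to the online engine. This is an illustrative sketch only; the class, interface and method names are assumptions and not part of the patent.

```java
import java.util.List;

// Minimal sketch of the data-type split performed by the charging
// preprocessing device; all names are illustrative assumptions only.
public class ChargingPreprocessor {

    enum DataType { FILE, MESSAGE }

    record ChargingInfo(String id, DataType type, String payload) {}

    interface ChargingEngine { void accept(ChargingInfo info); }

    private final ChargingEngine offlineEngine;  // processes file-type records
    private final ChargingEngine onlineEngine;   // processes message-type records

    ChargingPreprocessor(ChargingEngine offline, ChargingEngine online) {
        this.offlineEngine = offline;
        this.onlineEngine = online;
    }

    // First charging information (files) goes to the offline engine,
    // second charging information (messages) to the online engine.
    void dispatch(List<ChargingInfo> batch) {
        for (ChargingInfo info : batch) {
            if (info.type() == DataType.FILE) {
                offlineEngine.accept(info);
            } else {
                onlineEngine.accept(info);
            }
        }
    }

    public static void main(String[] args) {
        ChargingPreprocessor pre = new ChargingPreprocessor(
                info -> System.out.println("offline <- " + info.id()),
                info -> System.out.println("online  <- " + info.id()));
        pre.dispatch(List.of(
                new ChargingInfo("cdr-001", DataType.FILE, "file ticket"),
                new ChargingInfo("dcc-002", DataType.MESSAGE, "DCC message")));
    }
}
```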
The charging system provided by the embodiment of the present invention lets the online charging system and the offline charging system adopt different charging engine programs, i.e., the online charging engine and the offline charging engine are separated: information whose data type is a file is sent to the offline charging engine for processing, and information whose data type is a message is sent to the online charging engine for processing, so that the processing speed and efficiency of the charging system are improved and system performance is effectively improved.
The prior-art charging system adopts the traditional IOE technology for its concrete hardware and software, which leads to the following problems: 1) because data and applications reside on the same machine, separation of applications from data cannot be effectively realized; 2) because a siloed, vertically scaled architecture is adopted, horizontal linear expansion cannot be provided; 3) evolution towards the x86 platform cannot be effectively supported; 4) computing performance cannot be expanded automatically and effectively. To address these problems, cloud charging schemes based on thorough cloud reconstruction have appeared, namely: completely rebuilding the charging system with technologies such as cloud computing, distributed file systems and distributed in-memory databases, and redeveloping the architecture and components of the charging system according to the Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) layers of cloud computing. The existing thorough cloud reconstruction scheme has the following problems: 1) the development cycle is long and carries significant functional risks; 2) application migration is difficult; 3) system stability takes time to settle; 4) compatibility is weak.
Fig. 2 is another structural diagram of a charging system according to an embodiment of the present invention, and the charging system shown in fig. 2 is based on fig. 1; the charging system shown in fig. 2 also includes: the system comprises a charging preprocessing device 101, an offline charging engine 102 and a bill management device 104, wherein each component also has the corresponding functions in fig. 1;
further, as shown in fig. 2, the online charging engine 103 includes: first billing engine 1031, second billing engine 1032; it should be noted that the first billing engine 1031 is a cloud online billing engine, and the first billing engine 1031 includes a cluster of at least one host based on cloud computing; the second billing engine 1032 is a non-cloud online billing engine; the charging system further comprises: agent adapter 105, node manager 106; the proxy adapter 105 is configured to receive the second charging information sent by the charging preprocessing device 101, and send the second charging information to the first charging engine 1031 or the second charging engine 1032 according to a preset policy; the node manager 106 is configured to schedule and manage a cluster of at least one host based on cloud computing included in the first billing engine 1031.
The charging system shown in fig. 2 applies a partial cloud reconstruction on top of the separation of the online charging engine and the offline charging engine shown in fig. 1. In particular, the online charging system is partially moved to the cloud based on cloud computing technology, i.e., the online charging engine 103 includes a first charging engine 1031 and a second charging engine 1032. Unlike the prior art, in which the billing engine runs on a UNIX minicomputer, the first billing engine 1031 runs on a Linux cluster server and is implemented by a cluster of multiple PC hosts based on cloud computing technology.
Here, the node manager 106 implements load balancing and scheduling management after a host exception for the first charging engine 1031 based on Zookeeper, a distributed cluster management technology in the Hadoop big-data ecosystem; by scheduling and managing the cluster of at least one cloud-computing-based host included in the first charging engine 1031, the node manager 106 realizes the Zookeeper distributed cluster node management function.
Here, the charging system shown in fig. 2 further includes: a distributed in-memory database 107 and an Hbase (Hadoop Database) cluster 108; the distributed in-memory database 107 is arranged between the first billing engine 1031 and the bill management device 104 and serves as the data cache of the cloud online charging system to improve processing speed; the Hbase cluster 108 is an Hbase database system built on the HDFS distributed file system and serves as the data storage, management, operation and maintenance database for the first billing engine 1031 and the bill management device 104.
It should be noted that the node manager 106, the distributed in-memory database 107 and the Hbase cluster 108 are all deployed on Linux cluster servers; compared with the existing charging system, the charging system provided by the embodiment of the present invention adopts new big-data and cloud technologies for the charging engine, the in-memory database, the storage mode, the storage hardware platform and the like, and has the advantages of low cost, high efficiency and easy expansion.
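The role of the distributed in-memory database 107 as a cache in front of the persistent Hbase store 108 can be illustrated with a small cache-aside sketch. This is a generic illustration under assumed interfaces and names; it does not use the real distributed in-memory database or the Hbase client API.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Minimal cache-aside sketch of the in-memory layer (107) sitting in front of
// the persistent ticket store (108). Interfaces and names are assumptions.
public class TicketCacheSketch {

    /** Abstraction over the persistent ticket store (e.g. an Hbase table). */
    interface TicketStore {
        Optional<String> read(String ticketId);
        void write(String ticketId, String ticket);
    }

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final TicketStore store;

    TicketCacheSketch(TicketStore store) {
        this.store = store;
    }

    // Write path: persist the ticket, then keep a copy in memory for fast reads.
    void saveTicket(String ticketId, String ticket) {
        store.write(ticketId, ticket);
        cache.put(ticketId, ticket);
    }

    // Read path: serve from memory when possible, fall back to the store.
    Optional<String> loadTicket(String ticketId) {
        String cached = cache.get(ticketId);
        if (cached != null) {
            return Optional.of(cached);
        }
        Optional<String> fromStore = store.read(ticketId);
        fromStore.ifPresent(t -> cache.put(ticketId, t));
        return fromStore;
    }

    public static void main(String[] args) {
        Map<String, String> backing = new ConcurrentHashMap<>();
        TicketCacheSketch cacheLayer = new TicketCacheSketch(new TicketStore() {
            public Optional<String> read(String id) { return Optional.ofNullable(backing.get(id)); }
            public void write(String id, String t) { backing.put(id, t); }
        });
        cacheLayer.saveTicket("t-1001", "second charging ticket");
        System.out.println(cacheLayer.loadTicket("t-1001").orElse("miss"));
    }
}
```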
Here, the proxy adapter 105 is configured to receive the second charging information sent by the charging preprocessing device 101, split the message tickets according to a preset policy, and send the second charging information to the first charging engine 1031 or the second charging engine 1032. In practice, the preset policy may, for example, distinguish by the user number segments corresponding to the second charging information, sending charging information for some number segments to the first charging engine 1031 and charging information for other number segments to the second charging engine 1032; the split may also follow other predetermined rules.
Specifically, the mechanism by which the proxy adapter 105 splits the second charging information includes the following steps (a simplified sketch follows the list):
11) the Proxy adapter 105 receives a message ticket sent by a network element through a Proxy (Proxy);
12) the proxy adapter 105 sends a process host request to the node manager 106;
13) the node manager 106 receives the processing host request, sends a query request to the first billing engine 1031, queries all hosts in the cluster of at least one host based on cloud computing included in the first billing engine 1031, and determines an available processing host;
14) the node manager 106 receives the query result returned by the first billing engine 1031;
15) if the node manager 106 determines that there is an available processing host according to the query result, executing step 16); otherwise, feeding back a failure response to the proxy adapter 105, and jumping to step 12);
16) the node manager 106 returns available processing hosts to the proxy adapter 105;
17) the proxy adapter 105 sends the message ticket to the first billing engine 1031 or the second billing engine 1032 according to a preset policy.
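A simplified sketch of this flow is given below: the proxy adapter asks the node manager for an available cloud host, retrying on failure (steps 12 to 16), and routes message tickets between the cloud engine and the non-cloud engine, here using a hypothetical number-segment policy as the preset policy. All interfaces, number segments and retry counts are assumptions for illustration, not the patent's implementation.

```java
import java.util.Optional;
import java.util.Set;

// Hypothetical sketch of the proxy adapter's split flow (steps 11-17);
// interfaces and the number-segment policy are illustrative assumptions.
public class ProxyAdapterSketch {

    interface NodeManager {
        // Queries the cloud engine's host cluster and returns a usable host, if any.
        Optional<String> requestProcessingHost();
    }

    interface OnlineEngine { void process(String host, String messageTicket); }

    // Example policy: number segments served by the cloud (first) engine.
    private static final Set<String> CLOUD_SEGMENTS = Set.of("139", "138");

    private final NodeManager nodeManager;
    private final OnlineEngine firstEngine;   // cloud online engine
    private final OnlineEngine secondEngine;  // non-cloud online engine

    ProxyAdapterSketch(NodeManager nm, OnlineEngine first, OnlineEngine second) {
        this.nodeManager = nm;
        this.firstEngine = first;
        this.secondEngine = second;
    }

    void handleTicket(String userNumber, String messageTicket, int maxRetries) {
        String segment = userNumber.substring(0, 3);
        if (!CLOUD_SEGMENTS.contains(segment)) {
            // Preset policy routes this segment to the non-cloud engine directly.
            secondEngine.process("local", messageTicket);
            return;
        }
        // Steps 12)-16): ask the node manager for an available cloud host, retrying on failure.
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            Optional<String> host = nodeManager.requestProcessingHost();
            if (host.isPresent()) {
                firstEngine.process(host.get(), messageTicket);  // step 17)
                return;
            }
        }
        // Fallback when no cloud host is available: use the non-cloud engine.
        secondEngine.process("local", messageTicket);
    }

    public static void main(String[] args) {
        ProxyAdapterSketch adapter = new ProxyAdapterSketch(
                () -> Optional.of("cloud-host-07"),
                (h, t) -> System.out.println("first engine on " + h + ": " + t),
                (h, t) -> System.out.println("second engine: " + t));
        adapter.handleTicket("13912345678", "voice-cdr", 3);
        adapter.handleTicket("15012345678", "data-cdr", 3);
    }
}
```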
The embodiment of the present invention further provides a converged backup mechanism between the first billing engine 1031 and the second billing engine 1032: the two engines back each other up and implement a split-and-converge processing mode for online charging; if the first billing engine 1031 fails, the proxy adapter 105 sends all message tickets to the second billing engine 1032 for processing; if the second billing engine 1032 fails, the first billing engine 1031 is responsible for processing all message tickets.
specifically, the second charging engine 1032 is further configured to send a first heartbeat message to any host of the first charging engine 1031, and send an indication message to the proxy adapter 105 when a first heartbeat response sent by any host is not received within a first preset time period, where the indication message is used to indicate that the proxy adapter 105 sends all the second charging information to the second charging engine 1032; the any host is configured to send the first heartbeat message to another host of the first billing engine 1031 after receiving the first heartbeat message, receive a first heartbeat message feedback returned by the other host, and send the first heartbeat response to the second billing engine 1032 when the number of the received first heartbeat message feedbacks returned by the other host is greater than a first threshold; the first threshold may comprise, for example, 1/5 of the number of all hosts of the first billing engine 1031;
optionally, any host of the first billing engine 1031 is further configured to send a second heartbeat message to the second billing engine 1032, and send a second indication message to the proxy adapter 105 when a second heartbeat response sent by the second billing engine 1032 is not received within a second preset time period, where the second indication message is used to indicate that the proxy adapter 105 sends all the second billing information to the first billing engine 1031; the second charging engine 1032 is further configured to send the second heartbeat response to the any host after receiving the second heartbeat message.
Here, the converged backup mechanism between the first billing engine 1031 and the second billing engine 1032 includes: the second billing engine 1032 sends a first heartbeat message to the first billing engine 1031, and the first billing engine 1031 sends a second heartbeat message to the second billing engine 1032. The specific implementation process of the second billing engine 1032 sending the first heartbeat message to the first billing engine 1031 is described as follows (a sketch of the quorum check in step 23 follows the list):
21) the second billing engine 1032 periodically sends a first heartbeat message to a Master host of the first billing engine 1031; the Master host of the first billing engine 1031 is any one host randomly selected from all hosts of the first billing engine 1031;
22) after receiving the first heartbeat message, the Master host of the first billing engine 1031 sends the first heartbeat message to all other hosts of the first billing engine 1031;
23) the Master host of the first billing engine 1031 receives the first heartbeat message feedback returned by all other hosts, and if the number of the received first heartbeat message feedback is not greater than a first threshold, it is determined that the first billing engine 1031 has a fault, and a response message is not sent to the second billing engine 1032; otherwise, sending a first heartbeat response to second billing engine 1032;
24) if the second charging engine 1032 does not receive the first heartbeat response sent by the Master host of the first charging engine 1031 within the first preset time period, sending an indication message to the proxy adapter 105, where the indication message is used to indicate the proxy adapter 105 to send all the second charging information to the second charging engine 1032 for processing; otherwise, jump to step 21).
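The quorum check in step 23), where the Master host answers the second engine's heartbeat only when enough peer hosts reply, can be modeled as follows. The class and method names are assumptions, and the one-fifth threshold simply follows the example value given earlier; this is not the actual heartbeat protocol implementation.

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of the heartbeat quorum decision from step 23): the randomly chosen
// Master host answers the second engine's heartbeat only if enough peers reply.
// Names and the 1/5 threshold are illustrative assumptions.
public class HeartbeatQuorumSketch {

    /** Forwards the heartbeat to all other hosts and counts their feedback. */
    static int collectFeedback(List<String> otherHosts, Predicate<String> hostAlive) {
        int feedback = 0;
        for (String host : otherHosts) {
            if (hostAlive.test(host)) {
                feedback++;
            }
        }
        return feedback;
    }

    /** Returns true if the Master should send the first heartbeat response. */
    static boolean shouldAnswerHeartbeat(int totalHosts, int feedbackCount) {
        int firstThreshold = totalHosts / 5;       // e.g. 1/5 of all first-engine hosts
        return feedbackCount > firstThreshold;     // quorum reached -> cluster considered healthy
    }

    public static void main(String[] args) {
        List<String> others = List.of("h2", "h3", "h4", "h5", "h6", "h7", "h8", "h9");
        // Simulate a partial outage in which only hosts h2 and h3 still reply.
        int feedback = collectFeedback(others, h -> h.equals("h2") || h.equals("h3"));
        boolean answer = shouldAnswerHeartbeat(others.size() + 1, feedback);
        System.out.println("feedback=" + feedback + ", answer heartbeat=" + answer);
        // When the second engine gets no response within the first preset period,
        // it tells the proxy adapter to route all message tickets to itself.
    }
}
```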
Meanwhile, a specific implementation process of the first billing engine 1031 sending the second heartbeat message to the second billing engine 1032 is described as follows:
31) the Master host of the first billing engine 1031 sends a second heartbeat message to the second billing engine 1032 at regular time;
32) after receiving the second heartbeat message, the second billing engine 1032 sends the second heartbeat response to the Master host of the first billing engine 1031 if the working state of the second billing engine 1032 is determined to be normal;
33) if the Master host of the first billing engine 1031 does not receive the second heartbeat response sent by the second billing engine 1032 within the second preset time period, sending a second indication message to the proxy adapter 105, where the second indication message is used to indicate that the proxy adapter 105 sends all the second billing information to the first billing engine 1031; otherwise, jump to step 31).
Meanwhile, in order to better match the processing capability and performance requirements of the first billing engine 1031 and the second billing engine 1032, the embodiment of the present invention provides a split algorithm based on multi-feature fusion judgment: the proxy adapter 105 dynamically adjusts where charging information is sent according to several features such as the difference between pending message queue lengths, system resource utilization and the total amount of pending messages. In practice, the proxy adapter 105 is further configured to obtain the pending message queue length or the system resource utilization of the first billing engine 1031 and the second billing engine 1032; when the absolute value of the difference between the pending message queue lengths of the first billing engine 1031 and the second billing engine 1032 is greater than a second threshold, the second billing information is sent, within a third preset time period, to the engine with the relatively shorter pending message queue; or, when the system resource utilization of at least one of the first billing engine 1031 and the second billing engine 1032 is not greater than a third threshold, the second billing information is sent to that at least one engine. The specific implementation process is described as follows (a decision sketch follows the steps):
41) the proxy adapter 105 generates a sub-thread, where the sub-thread is used to request, in real time, to obtain the queue length of the messages to be processed of the first billing engine 1031 and the second billing engine 1032, or the utilization rate of the system resources from the first billing engine 1031 and the second billing engine 1032; correspondingly, the first billing engine 1031 and the second billing engine 1032 need to detect the queue length of the respective pending message or the system resource utilization rate periodically or aperiodically;
42) after the sub-thread obtains the lengths of the message queues to be processed of the first billing engine 1031 and the second billing engine 1032, if it is determined that the absolute value of the difference between the lengths of the message queues to be processed of the first billing engine 1031 and the second billing engine 1032 is greater than a second threshold, sending the second billing information to the online billing engine with the relatively small length of the message queue to be processed in a third preset time period for processing; otherwise, shunting the second charging information according to a preset strategy; the third preset time period may include, for example, 10 s;
42') after obtaining the system resource utilization rates of the first billing engine 1031 and the second billing engine 1032, if it is determined that the system resource utilization rate of the first billing engine 1031 or the second billing engine 1032 is greater than the third threshold, the sub-thread notifies the proxy adapter 105 to stop sending the second billing information to the online billing engine whose system resource utilization rate is greater than the third threshold within a fourth preset time period; if it is determined that the system resource utilization rates of the first billing engine 1031 and the second billing engine 1032 are both greater than the third threshold, notifying the proxy adapter 105 to stop receiving the second billing information sent by the network element or stop sending the second billing information to the first billing engine 1031 and the second billing engine 1032 within a fifth preset time period; otherwise, shunting the second charging information according to a preset strategy; the fourth preset time period may include, for example, 3 s; the fifth preset time period may include, for example, 2 s.
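The multi-feature decision in steps 41) to 42') reduces to comparing queue lengths and resource utilization against thresholds. The sketch below illustrates that decision logic under assumed threshold values and names; the production algorithm may weigh additional features such as the total amount of pending messages.

```java
// Illustrative sketch of the multi-feature split decision (steps 41-42');
// thresholds and names are assumptions chosen for the example.
public class SplitDecisionSketch {

    enum Target { FIRST_ENGINE, SECOND_ENGINE, PRESET_POLICY, PAUSE_BOTH }

    static final int SECOND_THRESHOLD = 10_000;   // max allowed queue-length difference
    static final double THIRD_THRESHOLD = 0.85;   // max allowed resource utilization

    /** Decides routing from the pending queue lengths of the two online engines. */
    static Target decideByQueueLength(long firstQueue, long secondQueue) {
        if (Math.abs(firstQueue - secondQueue) > SECOND_THRESHOLD) {
            // For the third preset period, favour the engine with the shorter queue.
            return firstQueue < secondQueue ? Target.FIRST_ENGINE : Target.SECOND_ENGINE;
        }
        return Target.PRESET_POLICY;   // otherwise keep the normal preset split
    }

    /** Decides routing from the system resource utilization of the two engines. */
    static Target decideByUtilization(double firstUtil, double secondUtil) {
        boolean firstBusy = firstUtil > THIRD_THRESHOLD;
        boolean secondBusy = secondUtil > THIRD_THRESHOLD;
        if (firstBusy && secondBusy) return Target.PAUSE_BOTH;   // stop sending for a while
        if (firstBusy) return Target.SECOND_ENGINE;              // avoid the overloaded engine
        if (secondBusy) return Target.FIRST_ENGINE;
        return Target.PRESET_POLICY;
    }

    public static void main(String[] args) {
        System.out.println(decideByQueueLength(120_000, 30_000)); // SECOND_ENGINE
        System.out.println(decideByUtilization(0.92, 0.40));      // SECOND_ENGINE
        System.out.println(decideByUtilization(0.95, 0.97));      // PAUSE_BOTH
    }
}
```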
The embodiment of the invention also provides a horizontal linear expansion implementation mode of the offline charging engine and the online charging engine in a cloud scene so as to further improve the processing capacity of the charging information; the method specifically comprises the following steps:
A. For the online charging mode, the charging system provided in the embodiment of the present invention further includes a Message Load Balancer (MLB), which is used for distributing Diameter Credit Control (DCC) messages and can dynamically select target nodes according to the effective state of the backend service; the MLB performs real-time signaling interaction with the node manager 106 to implement horizontal expansion of the online charging mode (a load-aware distribution sketch follows the steps). The specific process is as follows:
51) the MLB requests to obtain a list of currently available nodes from the node manager 106;
52) the MLB establishes opposite-end connection with the available nodes;
53) the MLB sends the received DCC message according to the load condition of the available node;
54) if a node fails, the MLB obtains a usable substitute node from the node manager 106 and reconnects the node.
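Steps 51) to 54) amount to load-aware distribution over a node list obtained from the node manager. The sketch below models this with a least-loaded selection and a simple replacement call; the interfaces and node names are assumptions, and the real MLB is signaling-driven rather than an in-memory map.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the MLB behaviour in steps 51)-54): fetch available
// nodes, send each DCC message to the least-loaded node, replace failed nodes.
public class MessageLoadBalancerSketch {

    interface NodeManager {
        List<String> availableNodes();                        // step 51)
        Optional<String> replacementFor(String failedNode);   // step 54)
    }

    private final NodeManager nodeManager;
    private final Map<String, Integer> nodeLoad = new HashMap<>();

    MessageLoadBalancerSketch(NodeManager nm) {
        this.nodeManager = nm;
        // Step 52): "connect" to each available node (modelled as a load entry).
        for (String node : nm.availableNodes()) {
            nodeLoad.put(node, 0);
        }
    }

    // Step 53): send the DCC message to the node with the lowest current load.
    String dispatch(String dccMessage) {
        String target = nodeLoad.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .orElseThrow()
                .getKey();
        nodeLoad.merge(target, 1, Integer::sum);
        return target;
    }

    // Step 54): on node failure, ask the node manager for a substitute and reconnect.
    void onNodeFailure(String failedNode) {
        nodeLoad.remove(failedNode);
        nodeManager.replacementFor(failedNode).ifPresent(n -> nodeLoad.put(n, 0));
    }

    public static void main(String[] args) {
        MessageLoadBalancerSketch mlb = new MessageLoadBalancerSketch(new NodeManager() {
            public List<String> availableNodes() { return List.of("node-a", "node-b"); }
            public Optional<String> replacementFor(String f) { return Optional.of("node-c"); }
        });
        System.out.println(mlb.dispatch("CCR-Initial"));
        mlb.onNodeFailure("node-a");
        System.out.println(mlb.dispatch("CCR-Update"));
    }
}
```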
B. For the offline charging mode, the charging system provided by the embodiment of the present invention further includes a File Load Balancer (FLB), which is used for distributing file tickets and can dynamically select target nodes according to the effective state of the backend service; the FLB performs real-time signaling interaction with the node manager 106 to implement horizontal expansion of the offline charging mode. The specific process is as follows:
61) the application pool completes the registration work of the static/dynamic nodes through the real-time signaling interaction with the node manager 106;
62) after the FLB is started, requesting static/dynamic node information from the node manager 106;
63) the FLB acquires static/dynamic node information from the node manager 106 and then judges and processes the node information;
64) the FLB sends the file ticket to a node related to the application pool for processing; the nodes associated with the application pool interact with the node manager 106 to register the current state.
The embodiment of the present invention further provides an automatic node switching technology based on the Zookeeper technology, which is used to handle abnormal message-ticket conditions such as a sudden surge of message tickets. Specifically: the first charging engine 1031 is further configured to send a scheduling request to the node manager 106 when detecting a sudden increase in the length of its pending message queue, the scheduling request including the required number and capability of host nodes; the node manager 106 is further configured to schedule host nodes of that number and capability for the first billing engine 1031 according to the scheduling request. In practice, the specific implementation of the Zookeeper-based automatic node switching technology may include the following steps (a feedback-loop sketch follows the list):
71) the MLB detects the length of the pending message queue of the first billing engine 1031; if the sudden increase of the message ticket causes the sudden increase of the length of the message queue to be processed, sending a scheduling request to the node manager 106 to request the node manager 106 to increase the number and the capacity of host nodes;
72) the node manager 106, according to the scheduling request, brings the nodes of the backup module into the available node cluster by interacting with the backup nodes, so as to implement scheduling the host nodes of the number and capacity for the first billing engine 1031;
73) the node manager 106 returns a scheduling result to the MLB;
74) the MLB evaluates the processing capability; if more nodes still need to be added, it jumps back to step 71), otherwise it continues; when it determines that the message tickets have been processed back to the normal range, the MLB requests the node manager 106 to reclaim the redundant nodes and return them to the backup module;
75) the MLB then keeps evaluating the message load: if nodes need to be reduced, it jumps to step 74); otherwise it returns to step 71).
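The surge handling in steps 71) to 75) is essentially a feedback loop: monitor the pending message queue, borrow hosts from the backup pool when the backlog spikes, and return them once the backlog clears. The sketch below illustrates that loop with assumed thresholds and node names; it is not the Zookeeper-based implementation itself.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative feedback loop for steps 71)-75): scale the cloud engine's host
// pool out on a queue surge and back in once the backlog returns to normal.
// Thresholds, node names and pool sizes are assumptions for the example only.
public class NodeAutoSwitchSketch {

    static final long SURGE_THRESHOLD = 50_000;   // pending messages counted as a surge
    static final long NORMAL_THRESHOLD = 5_000;   // backlog considered back to normal

    private final Deque<String> backupNodes = new ArrayDeque<>();
    private final Deque<String> borrowedNodes = new ArrayDeque<>();

    NodeAutoSwitchSketch(Deque<String> backupNodes) {
        this.backupNodes.addAll(backupNodes);
    }

    // Called periodically with the current length of the pending message queue.
    void evaluate(long pendingQueueLength) {
        if (pendingQueueLength > SURGE_THRESHOLD && !backupNodes.isEmpty()) {
            // Steps 71)-73): take a host from the backup pool and add it to the cluster.
            String node = backupNodes.pop();
            borrowedNodes.push(node);
            System.out.println("surge: scheduled backup node " + node);
        } else if (pendingQueueLength < NORMAL_THRESHOLD && !borrowedNodes.isEmpty()) {
            // Steps 74)-75): backlog is normal again, return the borrowed node.
            String node = borrowedNodes.pop();
            backupNodes.push(node);
            System.out.println("normal: reclaimed node " + node);
        }
    }

    public static void main(String[] args) {
        Deque<String> backups = new ArrayDeque<>();
        backups.push("backup-1");
        backups.push("backup-2");
        NodeAutoSwitchSketch manager = new NodeAutoSwitchSketch(backups);
        manager.evaluate(80_000);  // surge -> borrow a backup node
        manager.evaluate(60_000);  // still high -> borrow another
        manager.evaluate(2_000);   // back to normal -> return one node
    }
}
```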
In the charging system provided by the embodiments of the present invention, the online charging engine and the offline charging engine are separated, and automatic switching between the online engines is achieved through the multi-feature fusion judgment and split algorithm; the online charging system is transformed, through cloud technology, into a cloud online charging system coexisting with the original non-cloud online charging system, and automatic expansion and switching of the system are achieved through the Zookeeper-based automatic node switching technology; the cloud online charging system and the original online charging system are further unified and fused based on the split-and-converge mode; and horizontal expansion of the online and offline charging systems is achieved based on cloud computing and related technologies, so as to meet the requirements of current Internet and 4G services.
Compared with the existing on-line charging and off-line charging fusion mode, the charging system provided by the embodiment of the invention has the following technical effects:
a1) Low cost: based on cloud technology, a low-cost x86 hardware system and the Linux operating system can be adopted.
a2) High efficiency: by adopting new distributed and cloud computing technologies such as big data, distributed file systems and Zookeeper, system performance can be effectively improved.
a3) Easy expansion: by adopting a distributed database system, separation of data and applications can be effectively realized, and the existing high-cost, low-efficiency, hard-to-expand charging system is transformed into a low-cost, high-efficiency, easily expandable cloud system.
Compared with the scheme of the cloud charging system with thorough cloud reconstruction, the charging system provided by the embodiment of the invention has the following technical effects:
b1) the system has high stability and compatibility;
b2) applications can be migrated gradually, protecting the existing investment;
b3) support and operation of the current service system are not affected, and a seamless, smooth transition can be achieved.
In practical applications, the charging preprocessing device 101, the offline charging engine 102, the online charging engine 103, the bill management device 104, the proxy adapter 105, the node manager 106, the distributed in-memory database 107 and the Hbase cluster 108 may be implemented by a Central Processing Unit (CPU), a Micro Processing Unit (MPU), a Digital Signal Processor (DSP) or a Field Programmable Gate Array (FPGA) in the charging system.
Fig. 3 is a flowchart of a charging method according to an embodiment of the present invention, and as shown in fig. 3, the method includes:
step 301, obtaining first charging information and second charging information, sending the first charging information to an offline charging engine, and sending the second charging information to an online charging engine; the first charging information comprises information that the data type is a file, and the second charging information comprises information that the data type is a message;
here, the charging method provided by the embodiment of the present invention may be applied to a charging scenario of an operating network; the execution main body of the charging method provided by the embodiment of the invention can be a charging system; the offline charging engine can be an offline charging engine applied to an offline charging system, and the offline charging engine is used for processing information of which the data type is a file; the online charging engine can be an online charging engine applied to an online charging system, and the online charging engine is used for processing information of which the data type is a message; the data type is information of the message, such as message ticket, DCC message, duration of call, international long distance dialing and the like; the data type is file information, such as a file ticket.
Step 302, the offline charging engine generates a first charging ticket according to the first charging information;
step 303, the online charging engine generates a second charging ticket according to the second charging information;
here, the steps 302 and 303 may be executed in parallel without being performed in sequence.
And step 304, generating a user bill according to the first charging ticket and the second charging ticket.
According to the charging method provided by the embodiment of the present invention, the online charging system and the offline charging system adopt different charging engine programs, i.e., the online charging engine and the offline charging engine are separated: information whose data type is a file is sent to the offline charging engine for processing, and information whose data type is a message is sent to the online charging engine for processing. The processing speed and efficiency of the charging system are thereby improved, an efficient fault-protection mechanism can be provided, and system performance is effectively improved.
On the basis of the above embodiment, the online charging engine includes: a first billing engine and a second billing engine; the first billing engine comprises a cluster of at least one host based on cloud computing; the node manager schedules and manages the cluster of at least one host based on cloud computing; the proxy adapter receives the second charging information sent by the charging preprocessing device, and sends the second charging information to the first charging engine or the second charging engine according to a preset strategy;
on the basis of the above embodiment, the method further includes: the second billing engine sends a first heartbeat message to any host of the first billing engine, and sends an indication message to the proxy adapter when a first heartbeat response sent by any host is not received in a first preset time period, wherein the indication message is used for indicating the proxy adapter to send all the second billing information to the second billing engine; and after receiving the first heartbeat message, the host sends the first heartbeat message to other hosts of the first billing engine, receives first heartbeat message feedback returned by the other hosts, and sends a first heartbeat response to the second billing engine when the number of the received first heartbeat message feedback returned by the other hosts is greater than a first threshold value.
On the basis of the above embodiment, the method further includes: any host of the first billing engine sends a second heartbeat message to the second billing engine, and sends a second indication message to the proxy adapter when a second heartbeat response sent by the second billing engine is not received in a second preset time period, wherein the second indication message is used for indicating the proxy adapter to send all the second billing information to the first billing engine; and after receiving the second heartbeat message, the second charging engine sends a second heartbeat response to any host.
On the basis of the above embodiment, the method further includes: the proxy adapter acquires the length of a message queue to be processed or the utilization rate of system resources of the first billing engine and the second billing engine; when the absolute value of the difference between the lengths of the queues of the messages to be processed of the first billing engine and the second billing engine is determined to be larger than a second threshold value, sending the second billing information to the engine with the relatively small length of the queue of the messages to be processed within a third preset time period; or when the system resource utilization rate of at least one of the first billing engine and the second billing engine is determined not to be larger than a third threshold value, sending the second billing information to the at least one engine.
On the basis of the above embodiment, the method further includes: the first charging engine sends a scheduling request to the node manager when detecting the sudden increase of the length of a message queue to be processed of the first charging engine; the scheduling request comprises the number and the capability of the host nodes; and the node manager schedules the host nodes with the quantity and the capacity for the first charging engine according to the scheduling request.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (8)

1. A charging system, characterized in that the charging system comprises:
the charging preprocessing device is used for acquiring first charging information and second charging information, sending the first charging information to an offline charging engine and sending the second charging information to an online charging engine; the first charging information comprises information that the data type is a file, and the second charging information comprises information that the data type is a message;
the offline charging engine is used for generating a first charging ticket according to the first charging information and sending the first charging ticket to a bill management device;
the online charging engine is used for generating a second charging ticket according to the second charging information and sending the second charging ticket to the bill management device;
the bill management device is used for generating a user bill according to the first charging bill and the second charging bill;
wherein the online charging engine comprises: a first billing engine and a second billing engine; wherein the first billing engine comprises a cluster of at least one host based on cloud computing;
the charging system further comprises: an agent adapter, a node manager;
the proxy adapter is used for receiving the second charging information sent by the charging preprocessing device and sending the second charging information to the first charging engine or the second charging engine according to a preset strategy;
the node manager is used for scheduling and managing the cluster of at least one host based on cloud computing;
the second billing engine is further configured to send a first heartbeat message to any host of the first billing engine, and send an indication message to the proxy adapter when a first heartbeat response sent by any host is not received within a first preset time period, where the indication message is used to indicate that the proxy adapter sends all the second billing information to the second billing engine;
and the any host is used for sending the first heartbeat message to other hosts of the first billing engine after receiving the first heartbeat message, receiving first heartbeat message feedback returned by the other hosts, and sending the first heartbeat response to the second billing engine when the number of the received first heartbeat message feedback returned by the other hosts is greater than a first threshold value.
2. The billing system of claim 1,
any host of the first billing engine is further configured to send a second heartbeat message to the second billing engine, and send a second indication message to the proxy adapter when a second heartbeat response sent by the second billing engine is not received within a second preset time period, where the second indication message is used to indicate that the proxy adapter sends all the second billing information to the first billing engine;
and the second charging engine is further configured to send the second heartbeat response to any one of the hosts after receiving the second heartbeat message.
3. The charging system according to claim 1 or 2,
the proxy adapter is further configured to obtain a length of a message queue to be processed of the first billing engine and the second billing engine or a system resource utilization rate; when the absolute value of the difference between the lengths of the queues of the messages to be processed of the first billing engine and the second billing engine is determined to be larger than a second threshold value, sending the second billing information to the engine with the relatively small length of the queue of the messages to be processed within a third preset time period; or when the system resource utilization rate of at least one of the first billing engine and the second billing engine is determined not to be larger than a third threshold value, sending the second billing information to the at least one engine.
4. The billing system of claim 1,
the first charging engine is further configured to send a scheduling request to the node manager when a sudden increase in the length of a pending message queue of the first charging engine is detected; the scheduling request comprises the number and the capability of the host nodes;
and the node manager is also used for scheduling the host nodes with the quantity and the capacity for the first charging engine according to the scheduling request.
5. A charging method, characterized in that the method comprises:
acquiring first charging information and second charging information, sending the first charging information to an offline charging engine, and sending the second charging information to an online charging engine; the first charging information comprises information that the data type is a file, and the second charging information comprises information that the data type is a message;
the offline charging engine generates a first charging ticket according to the first charging information;
the online charging engine generates a second charging ticket according to the second charging information;
generating a user bill according to the first charging bill and the second charging bill;
wherein the online charging engine comprises: a first billing engine and a second billing engine; the first billing engine comprises a cluster of at least one host based on cloud computing; wherein the sending the second billing information to an online billing engine comprises:
sending the second charging information to the first charging engine or the second charging engine according to a preset strategy;
the second billing engine sends a first heartbeat message to any host of the first billing engine, and indicates that all the second billing information is sent to the second billing engine when a first heartbeat response sent by any host is not received within a first preset time period;
and after receiving the first heartbeat message, the host sends the first heartbeat message to other hosts of the first billing engine, receives first heartbeat message feedback returned by the other hosts, and sends the first heartbeat response to the second billing engine when the number of the received first heartbeat message feedback returned by the other hosts is larger than a first threshold value.
6. The method of claim 5, further comprising:
any host of the first billing engine sends a second heartbeat message to the second billing engine, and indicates to send all the second billing information to the first billing engine when a second heartbeat response sent by the second billing engine is not received within a second preset time period;
and after receiving the second heartbeat message, the second charging engine sends a second heartbeat response to any host.
7. The method of claim 5 or 6, further comprising:
acquiring the length of a message queue to be processed or the utilization rate of system resources of the first billing engine and the second billing engine; when the absolute value of the difference between the lengths of the queues of the messages to be processed of the first billing engine and the second billing engine is determined to be larger than a second threshold value, sending the second billing information to the engine with the relatively small length of the queue of the messages to be processed within a third preset time period; or when the system resource utilization rate of at least one of the first billing engine and the second billing engine is determined not to be larger than a third threshold value, sending the second billing information to the at least one engine.
8. The method of claim 5, further comprising:
and when the first billing engine detects the sudden increase of the length of the queue of the messages to be processed of the first billing engine, scheduling the number and the capacity of the host nodes for the first billing engine.
CN201610079089.4A 2016-02-03 2016-02-03 Charging system and method Active CN107040388B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610079089.4A CN107040388B (en) 2016-02-03 2016-02-03 Charging system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610079089.4A CN107040388B (en) 2016-02-03 2016-02-03 Charging system and method

Publications (2)

Publication Number Publication Date
CN107040388A CN107040388A (en) 2017-08-11
CN107040388B true CN107040388B (en) 2021-02-05

Family

ID=59532161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610079089.4A Active CN107040388B (en) 2016-02-03 2016-02-03 Charging system and method

Country Status (1)

Country Link
CN (1) CN107040388B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109428753A (en) * 2017-08-29 2019-03-05 西门子公司 Measuring index acquisition methods, service call record acquisition methods and device
CN110298677B (en) * 2018-03-22 2021-08-13 中移(苏州)软件技术有限公司 Cloud computing resource charging method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1859136A (en) * 2006-02-27 2006-11-08 华为技术有限公司 Charging system and charging method
CN102143470A (en) * 2010-12-31 2011-08-03 华为软件技术有限公司 Method for processing charging messages, charging host machine, load balancer and charging system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102595364B (en) * 2011-01-06 2015-02-04 中国移动通信集团广东有限公司 Charging system
CN103200552A (en) * 2013-03-20 2013-07-10 广州从兴电子开发有限公司 Communication control method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1859136A (en) * 2006-02-27 2006-11-08 华为技术有限公司 Charging system and charging method
CN102143470A (en) * 2010-12-31 2011-08-03 华为软件技术有限公司 Method for processing charging messages, charging host machine, load balancer and charging system

Also Published As

Publication number Publication date
CN107040388A (en) 2017-08-11

Similar Documents

Publication Publication Date Title
US10455009B2 (en) Optimizing a load balancer configuration
US9703608B2 (en) Variable configurations for workload distribution across multiple sites
US10084858B2 (en) Managing continuous priority workload availability and general workload availability between sites at unlimited distances for products and services
US20200034210A1 (en) Maintaining two-site configuration for workload availability between sites at unlimited distances for products and services
KR101544483B1 (en) Replication server apparatus and method for creating replica in distribution storage system
US10482104B2 (en) Zero-data loss recovery for active-active sites configurations
US9577961B2 (en) Input/output management in a distributed strict queue
US10200295B1 (en) Client selection in a distributed strict queue
US20140026000A1 (en) Highly available server system based on cloud computing
CN105007337A (en) Cluster system load balancing method and system thereof
US9584593B2 (en) Failure management in a distributed strict queue
CN105516347A (en) Method and device for load balance allocation of streaming media server
US9591101B2 (en) Message batching in a distributed strict queue
US20220232073A1 (en) Multichannel virtual internet protocol address affinity
CN113489691A (en) Network access method, device, computer readable medium and electronic equipment
CN107040388B (en) Charging system and method
US9577878B2 (en) Geographic awareness in a distributed strict queue
CN111224819A (en) Distributed messaging system
CN114064217A (en) Node virtual machine migration method and device based on OpenStack
CN105487946A (en) Fault computer automatic switching method and device
CN110099116B (en) Big data-based subnet security evaluation method
CN110493355A (en) A kind of method for down loading and device of system log
Imran et al. Cloud-niagara: A high availability and low overhead fault tolerance middleware for the cloud
EP3014439A1 (en) Automatic adjustment of application launch endpoints
CN112398668B (en) IaaS cluster-based cloud platform and node switching method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant