CN116916265A - Method, device, equipment and storage medium for processing ticket file data - Google Patents

Method, device, equipment and storage medium for processing ticket file data

Info

Publication number
CN116916265A
CN116916265A (Application No. CN202211619806.XA)
Authority
CN
China
Prior art keywords
target
server
file data
processing
file name
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211619806.XA
Other languages
Chinese (zh)
Inventor
齐连秀
金天顺
王慧
刘艳
谢玲艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Group Hebei Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Hebei Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Group Hebei Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202211619806.XA priority Critical patent/CN116916265A/en
Publication of CN116916265A publication Critical patent/CN116916265A/en
Pending legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/24Accounting or billing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/02Details
    • H04L12/14Charging, metering or billing arrangements for data wireline or wireless communications
    • H04L12/1403Architecture for metering, charging or billing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004Server selection for load balancing
    • H04L67/1008Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a method, a device, equipment and a storage medium for processing ticket file data, relating to the field of communication technology. The method comprises the following steps: when the first server monitors a target processing request, it determines, in a database, a file name directory to be processed that is associated with a target processing process, the target processing process being associated with the target processing request; the first server sends the file name directory to be processed to a target server, the target server being one of a plurality of second servers connected to the first server; the target server obtains target ticket file data from the database based on the file name directory to be processed, the target ticket file data being the ticket file data indicated by that directory; and the target server processes the target ticket file data through the target processing process to generate target processing file data.

Description

Method, device, equipment and storage medium for processing ticket file data
Technical Field
The present application belongs to the field of communications technologies, and in particular, to a method, an apparatus, a device, and a storage medium for processing ticket file data.
Background
With 4G network speeds rising and tariffs falling, and with 5G networks gradually entering commercial use, the ticket file data generated by users grows exponentially every year. To bill users' accounts in a timely and accurate manner, operators analyze the ticket file data through the Business Operation Support System (BOSS) to generate itemized bills and invoices and then collect the associated fees.
In the existing BOSS system, the list of files to be processed is read directly from a shared file system, each ticket file in the list is preemptively locked in the physical database, and the file is then handled by a process, thereby performing ticket billing. However, when multiple processes contend for the same ticket file data, the preemptive-locking mechanism forces some processes to wait to acquire the data, so the processing efficiency of the ticket file data is low.
Disclosure of Invention
The embodiments of the present application provide a method, an apparatus, a device, and a storage medium for processing ticket file data, which make preemptive locking of ticket file data unnecessary and thereby save waiting time, improving the processing efficiency of the ticket file data.
In a first aspect, an embodiment of the present application provides a method for processing ticket file data, which is applied to a first server, where the method includes:
Under the condition that a target processing request is monitored, determining a to-be-processed file name directory associated with a target processing process in a database, wherein the database comprises at least one ticket file data and at least one processing file name directory, the to-be-processed file name directory comprises at least one file name of the ticket file data, and the target processing process is associated with the target processing request;
sending the file name directory to be processed to a target server, where the target server is one of a plurality of second servers connected to the first server,
the target server is used for acquiring target ticket file data from the database based on the file name directory to be processed, wherein the target ticket file data is the ticket file data indicated by the file name directory to be processed; and processing the target ticket file data through the target processing process to generate target processing file data.
In a second aspect, an embodiment of the present application provides a method for processing ticket file data, which is applied to a second server, where the method includes:
receiving a file name directory to be processed sent by a first server, wherein the first server is used for determining the file name directory to be processed associated with a target processing process in a database under the condition that a target processing request is monitored, the database comprises at least one ticket file data and at least one processing file name directory, the file name directory to be processed comprises at least one file name of the ticket file data, and the target processing process is associated with the target processing request; the file name directory to be processed is sent to a target server, wherein the target server is one of a plurality of second servers connected to the first server;
Acquiring target ticket file data from the database based on the to-be-processed file name directory, wherein the target ticket file data is the ticket file data indicated by the to-be-processed file name directory;
and processing the target ticket file data through the target processing process to generate target processing file data.
In a third aspect, an embodiment of the present application provides a processing device for ticket file data, applied to a first server, where the device includes:
a first determining module, configured to determine, in a database, a to-be-processed file name directory associated with a target processing process, where the database includes at least one ticket file data and at least one processed file name directory, and the to-be-processed file name directory includes a file name of at least one ticket file data, where the target processing process is associated with the target processing request;
a first sending module, configured to send the file name directory to be processed to a target server, where the target server is one of a plurality of second servers connected to the first server,
the target server is used for acquiring target ticket file data from the database based on the file name directory to be processed, wherein the target ticket file data is the ticket file data indicated by the file name directory to be processed; and processing the target ticket file data through the target processing process to generate target processing file data.
In a fourth aspect, an embodiment of the present application provides a processing device for ticket file data, applied to a second server, where the device includes:
the first receiving module is used for receiving a file name directory to be processed sent by a first server, wherein the first server is used for determining the file name directory to be processed associated with a target processing process in a database under the condition that a target processing request is monitored, the database comprises at least one ticket file data and at least one processing file name directory, the file name directory to be processed comprises at least one file name of the ticket file data, and the target processing process is associated with the target processing request; the file name directory to be processed is sent to a target server, wherein the target server is one of a plurality of second servers connected to the first server;
the second acquisition module is used for acquiring target ticket file data from the database based on the file name directory to be processed, wherein the target ticket file data is the ticket file data indicated by the file name directory to be processed;
and the processing module is used for processing the target ticket file data through the target processing process to generate target processing file data.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions; the processor, when executing the computer program instructions, implements the method for processing ticket file data as described in any one of the above.
In a sixth aspect, an embodiment of the present application provides a computer readable storage medium, where computer program instructions are stored, where the computer program instructions, when executed by a processor, implement a method for processing ticket file data according to any one of the above.
In a seventh aspect, an embodiment of the present application provides a computer program product, where instructions in the computer program product, when executed by a processor of an electronic device, cause the electronic device to perform a method for processing ticket file data according to any one of the above.
With the method, device, equipment, and storage medium for processing ticket file data, when the first server monitors the target processing request, the file name directory to be processed that is associated with the target processing process is determined in the database and sent to the target server; the target server can then acquire the target ticket file data from the database based on that directory and run the target processing process on each target ticket file data to generate target processing file data. In this way, in the embodiments of the application, the second server acquires the ticket file data and completes the target processing process without preemptively locking the ticket file data, which saves waiting time and improves the processing efficiency of the ticket file data.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present application, the drawings that are needed to be used in the embodiments of the present application will be briefly described, and it is possible for a person skilled in the art to obtain other drawings according to these drawings without inventive effort.
FIG. 1 is a schematic diagram of an embodiment of a scenario for existing ticket file data processing provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart of the conventional ticket file data processing according to the embodiment of the present application;
FIG. 3 is a block diagram of a ticket file data processing system according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a method for processing ticket file data according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a scenario embodiment of a method for processing ticket file data according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another scenario embodiment of a method for processing ticket file data according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another embodiment of a method for processing ticket file data according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a ticket file data processing apparatus according to an embodiment of the present application;
FIG. 9 is a schematic diagram of another apparatus for processing ticket file data according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings and the detailed embodiments. It should be understood that the particular embodiments described herein are meant to be illustrative of the application only and not limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the application by showing examples of the application.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
With 4G network speeds rising and tariffs falling, and with 5G networks gradually entering commercial use, the ticket file data generated by users grows exponentially every year. To bill users' accounts in a timely and accurate manner, operators analyze the ticket file data through the Business Operation Support System (BOSS) to generate itemized bills and invoices and then collect the associated fees.
As shown in FIG. 1, the existing BOSS system performs ticket billing by listing the table of files to be processed directly in the shared file system, preemptively locking each ticket file of that table in the physical database, and then handing the file to a process for handling.
As shown in fig. 2, the processing flow of the existing ticket file data may be as follows:
1. Acquire a file task: a sorting process (one of several) lists the files to be processed in a shared file system (equivalent to a sharable database).
2. Preemptive locking in the physical database: the sorting process selects a file to be processed and locks it in the physical database.
3. Split to the downstream shared storage directory: after the sorting process completes, the processed ticket file data is output to the shared file system and a rating and account-closing directory for the next link is generated, to be handled by the next processing link (the batch billing process).
4. Acquire a file task: the batch billing process likewise lists its pending tasks in the shared file system; the ticket file data is acquired in the same way as in the sorting step.
5. Preemptive locking in the physical database: the batch billing process preemptively locks the ticket file data in the physical database.
6. Split to the downstream directory: after batch billing completes, a downstream processing directory is generated and output.
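The contention problem in steps 2 and 5 above can be sketched as a minimal in-memory simulation. This is purely illustrative, not the actual BOSS implementation; the names `FILE_LOCKS`, `try_lock`, and `sort_process` are assumptions introduced for this sketch.

```python
import threading

# Simulated physical-database lock table: file name -> lock object
FILE_LOCKS = {}
_registry_guard = threading.Lock()

def try_lock(file_name: str) -> bool:
    """Preemptively lock one ticket file; return False if another process holds it."""
    with _registry_guard:
        lock = FILE_LOCKS.setdefault(file_name, threading.Lock())
    return lock.acquire(blocking=False)

def unlock(file_name: str) -> None:
    FILE_LOCKS[file_name].release()

def sort_process(pending_files):
    """One sorting process walking the pending-file list under preemptive locking."""
    processed = []
    for name in pending_files:
        if not try_lock(name):
            # Another process holds this file: this process must skip or wait,
            # which is exactly the inefficiency the application targets.
            continue
        try:
            processed.append(name + ".sorted")  # stand-in for real sorting work
        finally:
            unlock(name)
    return processed
```

With several concurrent processes over the same file list, the `try_lock` failures accumulate into the waiting described above.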
The existing ticket file data processing method has at least the following disadvantages:
First, when several processes attempt to handle the same ticket file data at the same time, the preemptive-locking mechanism causes write conflicts and leaves some processes waiting to acquire the data, so the processing efficiency of the ticket file data is low.
Second, when the volume of front-end files grows sharply, processing efficiency is improved through manual intervention that splits the files onto different paths; however, manually adjusting the process-allocation directories is inefficient and error-prone.
In order to solve the problems in the prior art, the embodiment of the application provides a method, a device, equipment and a storage medium for processing ticket file data.
FIG. 3 is a block diagram of a ticket file data processing system to which embodiments of the present application are applicable.
As shown in fig. 3, the ticket file data processing system 300 may include: a first server 310 and a second server 320.
The first server 310 includes a comprehensive schedule manager 311 and a sharable database, and the first server 310 is further loaded with a BOSS system for supporting charging services. The sharable database may be a specific shared file storage module.
The second server 320 may be a Redis cluster server. When the ticket file data grows sharply, there may be multiple second servers in order to improve the processing efficiency of the ticket file data. To implement load-balancing management of multiple Redis cluster servers (i.e., multiple second servers), the first server 310 may further include a ZooKeeper service management module 312. The first server 310 may be connected to all of the plurality of second servers 320, or only to at least one of them.
The method for processing the ticket file data provided by the embodiment of the application is further described below.
Fig. 4 is a flow chart illustrating a method for processing ticket file data according to an embodiment of the present application. Alternatively, the method according to the embodiment of the present application may be applied to the first server 310 and the second server 320 shown in fig. 3.
As shown in fig. 4, the method for processing ticket file data may include steps S401 to S405.
S401, the first server determines a file name directory to be processed associated with a target processing process in a database under the condition that the target processing request is monitored.
S402, the first server sends the file name directory to be processed to the target server.
S403, the target server receives the file name directory to be processed sent by the first server.
S404, the target server acquires target ticket file data from the database based on the file name directory to be processed.
S405, the target server processes the target ticket file data through a target processing process to generate target processing file data.
The method for processing ticket file data provided by the embodiments of the application operates between the first server and the plurality of second servers: when the first server monitors the target processing request, it determines, in the database, the file name directory to be processed that is associated with the target processing process and sends that directory to the target server; the target server then acquires the target ticket file data from the database based on the directory and runs the target processing process on each target ticket file data to generate target processing file data. In this way, the second server acquires the ticket file data and completes the target processing process without preemptively locking the ticket file data, which saves waiting time and improves the processing efficiency of the ticket file data.
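Steps S401 through S405 can be sketched as a minimal in-memory simulation. The class and field names (`Database`, `FirstServer`, `SecondServer`, `directories`, `files`) are illustrative assumptions, not the patent's actual implementation; server selection is elided here and covered later by the load-balancing discussion.

```python
class Database:
    """Sharable database holding ticket files and per-process pending directories."""
    def __init__(self):
        self.directories = {}  # process name -> list of pending file names
        self.files = {}        # file name -> ticket file data

class FirstServer:
    def __init__(self, db, second_servers):
        self.db = db
        self.second_servers = second_servers

    def on_request(self, process_name):
        # S401: determine the pending file-name directory for this process
        directory = self.db.directories[process_name]
        # S402: send the directory to one of the second servers
        target = self.second_servers[0]  # selection policy elided
        return target.handle(process_name, directory, self.db)

class SecondServer:
    def handle(self, process_name, directory, db):
        # S403/S404: receive the directory and fetch the named ticket file data
        data = [db.files[name] for name in directory]
        # S405: run the target process over each file (stand-in transformation)
        return [f"{process_name}({d})" for d in data]

# Usage sketch
db = Database()
db.directories["sorting"] = ["f1"]
db.files["f1"] = "raw-cdr-1"
first = FirstServer(db, [SecondServer()])
result = first.on_request("sorting")  # -> ["sorting(raw-cdr-1)"]
```

Note that no file is ever locked: the first server hands each directory to exactly one target server, so contention never arises.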
A specific implementation of each of the above steps is described below.
In S401, the target processing procedure is associated with a target processing request.
The database comprises at least one ticket file data and at least one processing file name directory, wherein the file name directory to be processed comprises the file name of the at least one ticket file data.
The file name directory to be processed is a directory associated with the target processing process. In an exemplary bill-charging scenario, when the target processing process is the sorting process, the file name directory to be processed is the directory generated after the business processing step preceding the sorting process has finished executing, so that bill charging is finally achieved through the chained business processing steps.
Monitoring the target processing request may, for example, be implemented by planning a dedicated node on the first server to listen for each processing request, so that the processing procedure corresponding to each request is completed.
Determining the file name directory to be processed that is associated with the target processing process in the database may work as follows: when the target processing process is the sorting process, the file name directory corresponding to the sorting process is looked up in the database; when the target processing process is the batch billing process, the business processing step preceding it is the sorting process, so the file name directory to be processed is the directory generated when the sorting process finished executing. That directory is stored in the database together with an association to the next business processing step, so that when the next step (i.e., the batch billing process) is monitored, its corresponding file name directory to be processed can be determined in the database.
In S402, the target server is one of a plurality of second servers connected to the first server.
The first server is communicatively connected to the plurality of second servers; sending the file name directory to be processed to the target server means that the first server sends the directory to one of the plurality of second servers.
In S403, the target server receives the file name directory to be processed sent by the first server, which may be one of the plurality of second servers, and receives the file name directory to be processed sent by the first server.
In S404, the target ticket file data is ticket file data indicated by the to-be-processed file name directory.
The target server obtains the target ticket file data from the database based on the file name directory to be processed, and may be one of a plurality of second servers, and obtains the target ticket file data from the database of the first server based on the file name directory to be processed.
In S405, the target server, which may be one of the plurality of second servers, processes each target ticket file data through the target processing process to generate target processing file data. Illustratively, if the target processing process is the sorting process, the target server sorts the target ticket file data to generate sorted processing file data; if the target processing process is the batch billing process, the target server performs batch billing on the target ticket file data to generate batch-billed processing file data. The sorting process and the batch billing process are each merely examples of target processing processes; the processing is not limited to these two and may be any other processing step that serves bill charging, which is not specifically limited here.
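The per-process dispatch described above (sorting versus batch billing, with room for further steps) can be sketched as a simple lookup table. The handler names and string transformations below are illustrative stand-ins, not the patent's actual processing logic.

```python
def sort_tickets(data):
    """Stand-in for the sorting process."""
    return [f"sorted:{d}" for d in data]

def batch_bill(data):
    """Stand-in for the batch billing process."""
    return [f"billed:{d}" for d in data]

# The set of processes is open-ended: any step in the bill-charging chain
# can be registered here, matching the "not limited to these two" remark.
PROCESS_HANDLERS = {
    "sorting": sort_tickets,
    "batch_billing": batch_bill,
}

def run_target_process(process_name, ticket_data):
    """S405: apply the target process to each target ticket file data."""
    return PROCESS_HANDLERS[process_name](ticket_data)
```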
In some embodiments, in order to establish an association with the service processing procedure of the next link, thereby completing ticket charging, the following steps may be further included after the step S405:
s406, the target server generates a target processing file name directory based on the target processing file data;
s407, the target server sends target processing file data and target processing file name catalogue to the first server;
s408, the first server receives target processing file data and target processing file name catalogue sent by the target server;
s409, the first server stores the target processing file data and the target processing file name directory into a database.
The target processing file name directory includes at least one file name of target processing file data.
The target server generates a target process file name directory based on the target process file data, for example, the target server may generate a process file name directory of a business process (i.e., a batch billing process) of a next link according to the process file data after the sorting process is completed.
The database may also include file name directories associated with a plurality of processing steps, with different processing steps associated with different file name directories. Illustratively, the database includes a file name directory associated with the sorting process, a file name directory associated with the batch billing process, and file name directories associated with other ticket-charging processes; the file name directories in the database are not limited thereto and are not specifically limited here.
In this embodiment, the target server generates a target processing file name directory based on the target processing file data, and sends the target processing file name directory to the first server, so as to establish a relationship between the target processing file name directory and a service processing process of a next link, thereby completing ticket charging processing.
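Steps S406 through S409 (generating the target processing file name directory and storing it for the next link) can be sketched as follows. The dictionary layout and function names are illustrative assumptions for this sketch only.

```python
def generate_directory(processed_files):
    """S406: the next link's directory is the list of output file names."""
    return [f["name"] for f in processed_files]

def store_results(db, next_process, processed_files):
    """S408/S409: persist the processed file data and associate the new
    directory with the next business processing step (e.g. batch billing),
    so that step's request can later find its pending files."""
    for f in processed_files:
        db["files"][f["name"]] = f["data"]
    db["directories"][next_process] = generate_directory(processed_files)
```

For example, after the sorting process finishes, `store_results(db, "batch_billing", ...)` would make the sorted output the pending input of the batch billing step, completing one link of the chain.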
As an implementation manner of the present application, in order to balance the loads of the plurality of second servers and improve processing efficiency when the ticket file data grows sharply, the method may further include, before step S402:
and acquiring load information of each second server when the target processing request is monitored.
And determining the second server with the lowest load connection number as a target server based on the load information of each second server.
The load information includes the number of load connections of the second server.
Acquiring the load information of each second server may, for example, work as follows: the first server further includes a ZooKeeper service management module that reads the service information of each second server through a preset node; the service information includes each second server's IP connection string and its number of load connections to the first server, and that load-connection count is used as the load information.
In this embodiment, the second server with the lowest load-connection count is determined as the target server, so that when the ticket file data grows sharply, the loads of the plurality of second servers can be balanced, thereby improving data-processing efficiency.
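The lowest-load selection rule above reduces to picking the server whose connection count is minimal and then incrementing that count, as in this sketch (the function names are illustrative):

```python
def pick_target_server(load_info: dict) -> str:
    """Return the second server with the fewest load connections
    (ties broken by dictionary order)."""
    return min(load_info, key=load_info.get)

def dispatch(load_info: dict) -> str:
    """Choose a target server and record the new connection on it,
    mirroring the 'increase the load connection number by one' step."""
    target = pick_target_server(load_info)
    load_info[target] += 1
    return target
```

Because every dispatch bumps the chosen server's count, repeated requests naturally spread across the Redis cluster servers.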
In some embodiments, in order to implement load balancing management of the plurality of second servers, before acquiring load information of each second server in the case of monitoring the target processing process, the method may further include:
setting a first node, a second node and a third node on the first server, wherein the first node is used for acquiring load information of each second server, the second node is used for monitoring processing requests, and the third node is used for monitoring distribution conditions of the first server and a plurality of second servers;
the acquiring load information of each second server when the target processing request is monitored specifically includes:
when the second node monitors the target processing request, the first node acquires the load information of each second server;
after determining the second server with the lowest load connection number as the target server based on the load information of each second server, the method further includes:
writing the target server into the third node, and increasing the load connection number of the target server by one.
Illustratively, the setting of the first, second and third nodes on the first server may be done by planning a "Redis service registration node" (i.e., the first node), a "client request node" (i.e., the second node) and a "client service allocation node" (i.e., the third node) on the zookeeper of the first server.
Each Redis server (i.e., each second server) is initialized, and the Redis management process registers the service information of each Redis server into the zookeeper; the service information includes the IP connection string of the Redis server, an initial connection number of 0, and the like. A client (i.e., the first server) that wants to connect to a Redis server registers a request with the "client request node" of the zookeeper and creates a watch on the "client service allocation node" of the zookeeper, waiting for a Redis service to be allocated.
The acquiring of the load information of each second server when the target processing request is monitored may be implemented as follows: a watch is created on the "client request node"; after a processing request asking for a service to be allocated to a client is received, the service information of all Redis servers (including their load connection numbers) is read from the "Redis service registration node" of the zookeeper, the current connection number of each Redis server is taken as its load information, and the Redis server with the lowest load is obtained and allocated to the client.
The writing of the target server into the third node and the increasing of its load connection number by one may be implemented by writing the lowest-load Redis server into the "client service allocation node" of the zookeeper and adding 1 to the connection number of that Redis server before updating it.
In this embodiment, the first node, the second node and the third node are planned on the zookeeper of the first server so as to monitor the target processing request and acquire the load information of each second server, thereby realizing load balancing management of the plurality of second servers.
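The three-node workflow can be illustrated with an in-memory stand-in for the zookeeper tree; this is a sketch under the assumption that watches and ordering are handled elsewhere (a real deployment would use a coordination client such as kazoo):

```python
class SchedulerRegistry:
    """In-memory stand-in for the first, second and third nodes."""

    def __init__(self):
        self.service_node = {}   # first node: server -> load connection number
        self.request_node = []   # second node: pending client requests
        self.assign_node = {}    # third node: client -> allocated server

    def register_service(self, conn_string):
        # a Redis management process registers with an initial connection number of 0
        self.service_node[conn_string] = 0

    def request_service(self, client_id):
        self.request_node.append(client_id)

    def allocate(self):
        # serve pending requests in order, always picking the least-loaded server
        while self.request_node:
            client = self.request_node.pop(0)
            server = min(self.service_node, key=self.service_node.get)
            self.assign_node[client] = server
            self.service_node[server] += 1  # "add 1 to the connection number and update"

reg = SchedulerRegistry()
for s in ("redis-a:6379", "redis-b:6379"):
    reg.register_service(s)
for c in ("client-1", "client-2", "client-3"):
    reg.request_service(c)
reg.allocate()
```

After allocation the three clients are spread across the two servers with connection numbers differing by at most one.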
In order to facilitate understanding of the method for processing ticket file data in the embodiment of the present application, an actual application process of the method for processing ticket file data is described as follows:
According to the technical scheme, an integrated scheduling manager is additionally adopted to uniformly manage and control the distributed tasks, replacing the traditional preemptive file acquisition mechanism of each process and eliminating practical problems such as deadlock. In addition, the integrated scheduling manager improves the existing Zookeeper polling scheduling mechanism, which cannot manage multiple sets of Redis clusters and therefore cannot achieve load balancing across them.
By adopting this scheme, file acquisition, task management, resource load balancing across Redis clusters and storage processing are realized efficiently, and the pressure that the sorting process puts on the I/O of the distributed file system is reduced, thereby improving file processing efficiency.
This embodiment adds an integrated scheduling manager on the client (equivalent to the first server), which may include the following modules: a data acquisition task management module, a zookeeper system load balancing service node management module (equivalent to the zookeeper service management module), a task dispatching and scheduling management module, a distributed file system processing module (equivalent to the database), processing process modules (such as a sorting module and a pricing module), and a Redis cluster server.
As shown in fig. 5, the zookeeper system load balancing service node management module can, on the one hand, obtain the latest service node information and, in combination with a balancing algorithm, realize load balancing of system resources to prevent performance overload; on the other hand, the sorting tasks are controlled through Zookeeper master election, with the master process responsible for file task distribution and the other processes standing by to take over, providing a high-availability deployment.
The task dispatching and scheduling management module manages the task acquisition process: this process obtains tasks to be processed from the data acquisition module, splits them by sub-path and sub-queue, and writes them into the Redis cluster server. To ensure high availability of the management process, it is deployed in cluster mode.
Distributed file system: a shared data storage module for storing the to-be-processed files collected from the network elements as well as the processed files.
Sorting module (one of the processing process modules): manages sorting tasks, designates the source processing path and target storage path corresponding to each sorting task, and distributes sorting tasks; the storage addresses of the data sources on the data server are stored in the memory of the Redis cluster server, and the storage path for the processed file names stored in the Redis cluster server is designated.
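The sub-path/sub-queue split performed by the task dispatching and scheduling management module can be sketched as follows; the crc32-based partitioning is an illustrative assumption, not specified by the embodiment:

```python
import zlib

def dispatch_tasks(file_names, n_queues):
    """Partition pending file-name tasks into n_queues sub-queues.

    A stable hash of the name decides the queue, so the same file
    always lands in the same sub-queue on every run.
    """
    queues = [[] for _ in range(n_queues)]
    for name in file_names:
        queues[zlib.crc32(name.encode()) % n_queues].append(name)
    return queues

queues = dispatch_tasks(["cdr_0001.dat", "cdr_0002.dat", "cdr_0003.dat"], 2)
```

Every task ends up in exactly one sub-queue, and the assignment is reproducible across dispatcher restarts.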
In addition, an integrated scheduling manager is added on the task-acquisition side of the processes, which solves the problem that the existing Zookeeper cannot manage multiple sets of Redis cluster servers and achieves load balancing across them. The service information of the multiple sets of Redis cluster servers is registered in the Zookeeper; when the load balancer (equivalent to the zookeeper system load balancing service node management module) detects that a client node has a new request to connect to a Redis cluster server, it analyzes the load situation of the Redis cluster servers currently in the network and distributes the connection string of the load-balanced Redis cluster server to the client, thereby realizing load balancing among the sets of Redis cluster servers.
The method comprises the following specific steps:
(1) On the zookeeper, a "Redis service registration node", a "client request node" and a "client service allocation node" are planned.
(2) When the Redis service is started, the Redis management process registers its service information into the zookeeper (including the IP connection string of the service, an initial client connection number of 0, and the like).
(3) A client that wants to connect to a Redis cluster server registers a request with the "client request node" of the zookeeper and creates a watch on the "client service allocation node" of the zookeeper, waiting for a Redis cluster server to be allocated.
(4) The integrated scheduling manager connects to the zookeeper and creates a watch on the "client request node". After receiving a client's registration request for service allocation, it reads the service information of all Redis cluster servers from the "Redis service registration node" of the zookeeper, takes the current connection number of each Redis cluster server as its load, obtains the Redis cluster server with the lowest load, allocates it to the client, writes it into the "client service allocation node" of the zookeeper, and at the same time increments the connection number of that Redis cluster server by 1.
(5) When the client observes on the "client service allocation node" that the lowest-load Redis cluster server has been allocated to it, the client obtains the connection string and other information of that Redis cluster server and connects to it.
(6) When a Redis cluster server drops its connection with the client, the Redis management process deletes the service information of that Redis cluster server from the zookeeper. The integrated scheduling manager watches the "Redis service registration node" for services going online or offline; when a Redis cluster server is detected to be offline, the processing tasks already allocated to clients on the offline Redis cluster server are reallocated to the other Redis cluster servers in ascending order of load.
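The reassignment in step (6) can be sketched as follows; the data shapes are assumptions for illustration, whereas the embodiment itself keeps this state in zookeeper:

```python
def reassign_offline(assignments, loads, offline):
    """Move clients of an offline cluster to the remaining clusters,
    always choosing the currently least-loaded one (load ascending)."""
    loads = dict(loads)
    loads.pop(offline, None)
    if not loads:
        raise RuntimeError("no Redis cluster servers left online")
    for client, server in list(assignments.items()):
        if server == offline:
            new_server = min(loads, key=loads.get)
            assignments[client] = new_server
            loads[new_server] += 1  # keep counts current while reassigning
    return assignments, loads

assignments = {"c1": "redis-a", "c2": "redis-a", "c3": "redis-b"}
loads = {"redis-a": 2, "redis-b": 1, "redis-c": 0}
assignments, loads = reassign_offline(assignments, loads, "redis-a")
```

Because the load dictionary is updated after every move, a burst of orphaned clients is spread across the survivors rather than dumped on one of them.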
As shown in fig. 6, the data to be processed is collected from the previous link; this embodiment takes the ticket collection of a charging system as an example, specifically, ticket files collected from network elements.
On the basis of the original processing flow, a file scheduling management module is added, which writes the ticket file names into a Redis queue. The sorting process supports connecting to the Redis library and can automatically create the Redis file queue, write file-name tasks, read file tasks, read file paths and perform task verification. The sorting process reads a file name directly from the Redis queue, reads the file under the directory directly by that name, processes it, and after processing writes the output file name into a Redis queue (namely, the pricing directory). During processing, a process neither needs to list files from the directory nor needs to lock and preempt files through the database, thereby realizing a file-distribution processing mechanism for cluster processes.
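The queue-driven handoff can be sketched with an in-memory list standing in for the Redis queue; the file-reading and sorting callbacks below are hypothetical placeholders, not the embodiment's actual routines:

```python
from collections import deque

class NameQueue:
    """Stand-in for a Redis list used as the file-name task queue."""
    def __init__(self):
        self._q = deque()
    def rpush(self, name):                  # scheduler side: enqueue a task
        self._q.append(name)
    def lpop(self):                         # worker side: claim the next task
        return self._q.popleft() if self._q else None

def sorting_process(in_q, out_q, read_file, sort_records):
    """Read names from the queue, process each file directly by name
    (no directory listing, no lock-based preemption), publish results."""
    while (name := in_q.lpop()) is not None:
        sort_records(read_file(name))
        out_q.rpush(name + ".sorted")       # hand off to the pricing link

sort_q, price_q = NameQueue(), NameQueue()
sort_q.rpush("cdr_0001.dat")
sorting_process(sort_q, price_q, read_file=lambda n: [n], sort_records=list.sort)
```

Each pop removes the task from the queue atomically from the worker's point of view, which is what removes the need for database locks when several sorting processes share one queue.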
As shown in fig. 7, the flow of the processing method of ticket file data according to an embodiment of the present application is as follows:
step 1, task acquisition: the acquisition module acquires a file to be processed and writes the file into the shared file system;
step 2, selecting a master file scheduling management process: selecting a target server through a Zookeeper distributed lock;
step 3, file scheduling management process handling: the file scheduling management process lists the to-be-processed file name directory from the shared file system;
step 4, the next link process (such as sorting process) obtains the service information of the target Redis cluster server from the Zookeeper, and connects the target Redis cluster server;
step 5, writing each file name in the file name directory to be processed into the target Redis cluster server: the file scheduling management writes the file name into a target Redis cluster server;
step 6, the target Redis cluster server of the next link process (such as sorting process) acquires a file name directory;
step 7, file processing: reading the file content, where the process of this link (such as the sorting process) obtains the source file directly from the shared file system according to the acquired file name directory and processes it, without listing files in the shared file system or locking and preempting files;
step 8, writing the processed file names into Redis;
step 9, writing the processed file into a storage: sorting and writing the processed file content into a shared storage;
step 10, the process of the next processing link (such as the batch pricing and account-closing process) obtains the task to be processed (a file name) from Redis;
step 11, reading the file content: according to the obtained task file name, the source file is obtained from the shared file system and the processing of this link (pricing) is performed, without listing files in the shared file system or using locks to preempt files;
step 12, file reprocessing (pricing process);
and step 13, continuing the charging processing of the follow-up ticket file data.
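Steps 4 to 12 above can be condensed into a small simulation, with deques standing in for the Redis queues and name suffixes standing in for the actual sorting and pricing work (all purely illustrative):

```python
from collections import deque

def run_links(source_files):
    """Chain the sorting and pricing links through file-name queues."""
    sort_q = deque(source_files)     # step 5: scheduler fills the sort queue
    price_q, charged = deque(), []
    while sort_q:                    # steps 6-9: sorting link
        name = sort_q.popleft()
        price_q.append(name + ".sorted")
    while price_q:                   # steps 10-12: pricing link
        name = price_q.popleft()
        charged.append(name + ".priced")
    return charged                   # step 13: ready for charging

result = run_links(["cdr_0001.dat", "cdr_0002.dat"])
```

At no point does either link scan the shared file system for work; each link only consumes names pushed by the previous one.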
In this embodiment, the traditional preemptive file acquisition mechanism of each process is replaced; the combination of Redis and Zookeeper is realized through the integrated scheduling manager, load balancing across multiple Redis clusters is achieved, the pressure of the sorting process on the I/O of the distributed file system is reduced, and file processing efficiency is improved. The scheme solves the problems of database locking, deadlock and the like in file preemption and improves the system's file processing performance. It also removes a system performance bottleneck: when a large number of file tasks are backlogged, the file-listing speed of an application process is slow and deteriorates further as processes are added, so that beyond a certain number of processes the overall processing speed of the system can no longer be improved by adding more. By improving system processing efficiency, manual intervention during maintenance is avoided, together with the lag and error-proneness of manual operation.
Based on the processing method of the ticket file data provided in the foregoing embodiments, correspondingly, the present application further provides a specific implementation manner of a processing device of the ticket file data, and it may be understood that, in the following embodiments of the devices, the relevant description may refer to the foregoing embodiments of the methods, which are not repeated for brevity. Please refer to the following examples.
Fig. 8 is a schematic structural diagram of a ticket file data processing apparatus 800 according to an embodiment of the present application, which is applied to a first server and may include: a first determination module 801 and a first transmission module 802.
A first determining module 801, configured to determine, in a database, a to-be-processed file name directory associated with a target processing process, where the database includes at least one ticket file data and at least one processed file name directory, the to-be-processed file name directory includes a file name of at least one ticket file data, and the target processing process is associated with the target processing request;
a first sending module 802, configured to send a file name directory to be processed to a target server, where the target server is one of a plurality of second servers connected to the first server,
The target server is used for acquiring target ticket file data from the database based on the file name directory to be processed, wherein the target ticket file data is ticket file data indicated by the file name directory to be processed; and processing the target ticket file data through a target processing process to generate target processing file data.
With the processing apparatus for ticket file data provided by this embodiment of the application, processing can be realized between the first server and the plurality of second servers: when the first server monitors the target processing request, it determines the to-be-processed file name directory associated with the target processing process in the database and sends the directory to the target server; the target server can then acquire the target ticket file data from the database based on the to-be-processed file name directory, perform the target processing process on each piece of target ticket file data, and generate the target processing file data. In this way, in this embodiment of the application, the second server acquires the ticket file data and completes the target processing process without preempting or locking the ticket file data, so waiting time is saved and the processing efficiency of the ticket file data is improved.
As an implementation manner of the present application, in order to balance the loads of the plurality of second servers when the volume of ticket file data grows sharply, the apparatus 800 may further include:
the first acquisition module is used for acquiring load information of each second server under the condition that the target processing request is monitored, wherein the load information comprises the load connection quantity of the second servers;
and the second determining module is used for determining the second server with the lowest load connection number as a target server based on the load information of each second server.
In some embodiments, to implement load balancing management of the plurality of second servers, the apparatus 800 may further include:
a setting module, configured to set a first node, a second node and a third node on the first server, where the first node is used to acquire the load information of each second server, the second node is used to monitor processing requests, and the third node is used to monitor the allocation between the first server and the plurality of second servers;
the first acquisition module is specifically configured to acquire load information of each second server by the first node when the second node monitors the target processing request;
and the distribution module is used for writing the target server into the third node and increasing the load connection number of the target server by one.
In some embodiments, to establish an association with a traffic handling process of a next link, the apparatus 800 may further include:
the second receiving module is used for receiving target processing file data and target processing file name catalogues sent by the target server, wherein the target server is also used for generating the target processing file name catalogues based on the target processing file data, and the target processing file name catalogues comprise file names of at least one target processing file data; transmitting target processing file data and a target processing file name directory to a first server;
and the storage module is used for storing the target processing file data and the target processing file name directory into a database.
Fig. 9 is a schematic structural diagram of another ticket file data processing apparatus 900 according to an embodiment of the present application, which is applied to a second server and may include: a first receiving module 901, a second obtaining module 902 and a processing module 903.
The first receiving module 901 is configured to receive a to-be-processed file name directory sent by a first server, where the first server is configured to determine, in a database, the to-be-processed file name directory associated with a target processing process under the condition that a target processing request is monitored, the database including at least one ticket file data and at least one processing file name directory, the to-be-processed file name directory including a file name of at least one ticket file data, and the target processing process being associated with the target processing request; transmitting a file name directory to be processed to a target server, wherein the target server is one of a plurality of second servers connected with the first server;
A second obtaining module 902, configured to obtain target ticket file data from the database based on the file name directory to be processed, where the target ticket file data is ticket file data indicated by the file name directory to be processed;
the processing module 903 is configured to process each target ticket file data through a target processing process, and generate target processing file data.
With the processing apparatus for ticket file data provided by this embodiment of the application, processing can be realized between the first server and the plurality of second servers: when the first server monitors the target processing request, it determines the to-be-processed file name directory associated with the target processing process in the database and sends the directory to the target server; the target server can then acquire the target ticket file data from the database based on the to-be-processed file name directory, perform the target processing process on each piece of target ticket file data, and generate the target processing file data. In this way, in this embodiment of the application, the second server acquires the ticket file data and completes the target processing process without preempting or locking the ticket file data, so waiting time is saved and the processing efficiency of the ticket file data is improved.
In some embodiments, in order to establish an association with a service processing process of a next link, the apparatus further includes:
the generating module is used for generating a target processing file name directory based on the target processing file data, wherein the target processing file name directory comprises at least one file name of the target processing file data;
and the second sending module is used for sending the target processing file data and the target processing file name directory to the first server.
Fig. 10 shows a schematic hardware structure of an electronic device according to an embodiment of the present application.
A processor 1001 and a memory 1002 storing computer program instructions may be included in an electronic device.
In particular, the processor 1001 described above may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present application.
Memory 1002 may include mass storage for data or instructions. By way of example, and not limitation, memory 1002 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a universal serial bus (USB) drive, or a combination of two or more of the foregoing. The memory 1002 may include removable or non-removable (or fixed) media, where appropriate. Memory 1002 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 1002 is a non-volatile solid state memory.
In particular embodiments, memory 1002 may include Read Only Memory (ROM), random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors) it is operable to perform the operations described with reference to methods in accordance with aspects of the present disclosure.
The processor 1001 reads and executes the computer program instructions stored in the memory 1002 to implement the processing method of the ticket file data in any of the above embodiments.
In one example, the electronic device may also include a communication interface 1003 and a bus 1010. As shown in fig. 10, the processor 1001, the memory 1002, and the communication interface 1003 are connected to each other by a bus 1010, and perform communication with each other.
The communication interface 1003 is mainly used for implementing communication among the modules, devices, units and/or apparatuses in the embodiment of the application.
Bus 1010 includes hardware, software, or both, coupling the components of the electronic device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a VESA Local Bus (VLB), or another suitable bus, or a combination of two or more of the above. Bus 1010 may include one or more buses, where appropriate. Although embodiments of the application have been described and illustrated with respect to particular buses, the application contemplates any suitable bus or interconnect.
The electronic device can execute the processing method of the ticket file data in the embodiment of the application, thereby realizing the processing method and the device of the ticket file data described with reference to fig. 4, 8 and 9.
In addition, in combination with the processing method of the ticket file data in the above embodiment, the embodiment of the application may be implemented by providing a computer readable storage medium. The computer readable storage medium has stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement a method for processing ticket file data in any of the above embodiments.
In combination with the method for processing ticket file data in the foregoing embodiment, an embodiment of the present application may provide a computer program product, where instructions in the computer program product when executed by a processor of an electronic device cause the electronic device to execute the method for processing ticket file data according to any one of the foregoing embodiments.
In combination with the method for processing ticket file data in the above embodiment, the embodiment of the present application may be implemented by providing a vehicle. The vehicle includes: and the electronic equipment is used for realizing the processing method of the ticket file data according to any one of the above.
It should be understood that the application is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present application, and they should be included in the scope of the present application.

Claims (10)

1. The method for processing the ticket file data is characterized by being applied to a first server, and comprises the following steps:
under the condition that a target processing request is monitored, determining a to-be-processed file name directory associated with a target processing process in a database, wherein the database comprises at least one ticket file data and at least one processing file name directory, the to-be-processed file name directory comprises at least one file name of the ticket file data, and the target processing process is associated with the target processing request;
sending the to-be-processed file name directory to a target server, the target server being one of a plurality of second servers connected to the first server,
The target server is used for acquiring target ticket file data from the database based on the file name directory to be processed, wherein the target ticket file data is the ticket file data indicated by the file name directory to be processed; and processing the target ticket file data through the target processing process to generate target processing file data.
2. The method of claim 1, further comprising, before the sending of the to-be-processed file name directory to the target server:
acquiring load information of each second server in response to monitoring the target processing request, wherein the load information comprises a load connection number of the second server; and
determining, based on the load information of each second server, the second server with the lowest load connection number as the target server.
3. The method of claim 2, further comprising, before the acquiring of the load information of each second server in response to monitoring the target processing request:
setting up a first node, a second node, and a third node on the first server, wherein the first node is configured to acquire the load information of each second server, the second node is configured to monitor processing requests, and the third node is configured to monitor the distribution status of the first server and the plurality of second servers;
wherein the acquiring of the load information of each second server in response to monitoring the target processing request comprises:
acquiring, by the first node, the load information of each second server when the second node monitors the target processing request; and
wherein, after the determining of the second server with the lowest load connection number as the target server based on the load information of each second server, the method further comprises:
writing the target server to the third node, and increasing the load connection number of the target server by one.
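A minimal sketch of the selection and bookkeeping in claims 2 and 3, assuming the third node can be modeled as a simple list and the load information as a dict; `select_target` and the server names are hypothetical, not part of the claims:

```python
def select_target(load_info: dict, third_node: list) -> str:
    """Pick the second server with the lowest load-connection number,
    record it on the third node, and increase its count by one."""
    target = min(load_info, key=load_info.get)  # lowest load connection number
    third_node.append(target)                   # write the target server to the third node
    load_info[target] += 1                      # load connection number increased by one
    return target

load_info = {"server_a": 5, "server_b": 2, "server_c": 4}
third_node = []
print(select_target(load_info, third_node))  # server_b
print(load_info["server_b"])                 # 3
```

In a real deployment the three nodes would live in a coordination service rather than local memory; the dict and list here only mirror the claimed bookkeeping.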
4. The method of claim 1, further comprising, after the sending of the to-be-processed file name directory to the target server:
receiving target processed file data and a target processed file name directory sent by the target server, wherein the target server is further configured to generate the target processed file name directory based on the target processed file data, the target processed file name directory comprises a file name of at least one piece of the target processed file data, and the target server sends the target processed file data and the target processed file name directory to the first server; and
storing the target processed file data and the target processed file name directory in the database.
5. A method for processing ticket file data, applied to a second server, the method comprising:
receiving a to-be-processed file name directory sent by a first server, wherein the first server is configured to determine, in a database, the to-be-processed file name directory associated with a target processing process in response to monitoring a target processing request, the database comprises at least one piece of ticket file data and at least one processed file name directory, the to-be-processed file name directory comprises a file name of at least one piece of ticket file data, the target processing process is associated with the target processing request, and the first server sends the to-be-processed file name directory to a target server, the target server being one of a plurality of second servers connected to the first server;
acquiring target ticket file data from the database based on the to-be-processed file name directory, wherein the target ticket file data is the ticket file data indicated by the to-be-processed file name directory; and
processing the target ticket file data through the target processing process to generate target processed file data.
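A minimal sketch of claim 5's second-server side, under stated assumptions: the database is modeled as a dict from file name to file contents, and the target processing process is an arbitrary per-file transformation (upper-casing here stands in for whatever the process does). All names are illustrative:

```python
# Stand-in for the database holding ticket file data, keyed by file name.
DATABASE = {
    "cdr_0001.dat": "a,100\nb,200",
    "cdr_0002.dat": "c,300",
}

def process_directory(pending: list) -> dict:
    """Fetch each ticket file named in the pending directory and process it."""
    target_files = {name: DATABASE[name] for name in pending}           # acquire by file name
    return {name: data.upper() for name, data in target_files.items()}  # target processing

print(process_directory(["cdr_0001.dat"]))  # {'cdr_0001.dat': 'A,100\nB,200'}
```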
6. The method of claim 5, further comprising, after the processing of the target ticket file data through the target processing process to generate the target processed file data:
generating a target processed file name directory based on the target processed file data, wherein the target processed file name directory comprises a file name of at least one piece of the target processed file data; and
sending the target processed file data and the target processed file name directory to the first server.
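Claim 6's directory-generation step can be sketched as follows; the `proc_` file-name prefix and the sorting are assumptions made purely for illustration, not something the claim specifies:

```python
def build_processed_directory(processed: dict) -> list:
    """Derive the target processed file name directory from the processed data."""
    return sorted("proc_" + name for name in processed)  # one entry per processed file

processed = {"cdr_0002.dat": "C,300", "cdr_0001.dat": "A,100"}
print(build_processed_directory(processed))  # ['proc_cdr_0001.dat', 'proc_cdr_0002.dat']
```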
7. An apparatus for processing ticket file data, applied to a first server, the apparatus comprising:
a first determining module, configured to determine, in a database, a to-be-processed file name directory associated with a target processing process in response to monitoring a target processing request, wherein the database comprises at least one piece of ticket file data and at least one processed file name directory, the to-be-processed file name directory comprises a file name of at least one piece of ticket file data, and the target processing process is associated with the target processing request; and
a first sending module, configured to send the to-be-processed file name directory to a target server, wherein the target server is one of a plurality of second servers connected to the first server, and the target server is configured to: acquire target ticket file data from the database based on the to-be-processed file name directory, the target ticket file data being the ticket file data indicated by the to-be-processed file name directory; and process the target ticket file data through the target processing process to generate target processed file data.
8. An apparatus for processing ticket file data, applied to a second server, the apparatus comprising:
a first receiving module, configured to receive a to-be-processed file name directory sent by a first server, wherein the first server is configured to determine, in a database, the to-be-processed file name directory associated with a target processing process in response to monitoring a target processing request, the database comprises at least one piece of ticket file data and at least one processed file name directory, the to-be-processed file name directory comprises a file name of at least one piece of ticket file data, the target processing process is associated with the target processing request, and the first server sends the to-be-processed file name directory to a target server, the target server being one of a plurality of second servers connected to the first server;
a second acquiring module, configured to acquire target ticket file data from the database based on the to-be-processed file name directory, wherein the target ticket file data is the ticket file data indicated by the to-be-processed file name directory; and
a processing module, configured to process the target ticket file data through the target processing process to generate target processed file data.
9. An electronic device, comprising a processor and a memory storing computer program instructions, wherein the processor, when executing the computer program instructions, implements the method for processing ticket file data according to any one of claims 1 to 6.
10. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method for processing ticket file data according to any one of claims 1 to 6.
CN202211619806.XA 2022-12-15 2022-12-15 Method, device, equipment and storage medium for processing ticket file data Pending CN116916265A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211619806.XA CN116916265A (en) 2022-12-15 2022-12-15 Method, device, equipment and storage medium for processing ticket file data


Publications (1)

Publication Number Publication Date
CN116916265A true CN116916265A (en) 2023-10-20

Family

ID=88361457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211619806.XA Pending CN116916265A (en) 2022-12-15 2022-12-15 Method, device, equipment and storage medium for processing ticket file data

Country Status (1)

Country Link
CN (1) CN116916265A (en)

Similar Documents

Publication Publication Date Title
CN108959292B (en) Data uploading method, system and computer readable storage medium
US8639792B2 (en) Job processing system, method and program
CN110019211A (en) The methods, devices and systems of association index
CN108512672B (en) Service arranging method, service management method and device
CN110968478B (en) Log acquisition method, server and computer storage medium
CN113127168A (en) Service distribution method, system, device, server and medium
CN115640110B (en) Distributed cloud computing system scheduling method and device
CN109104368B (en) Connection request method, device, server and computer readable storage medium
CN111491015B (en) Preheating task processing method and system, proxy server and service center
CN114422580B (en) Information processing method, device, electronic equipment and storage medium
CN113127472B (en) Method and system for real-time deduplication statistics of number of drivers with large reporting amount
CN112148467A (en) Dynamic allocation of computing resources
CN116600014B (en) Server scheduling method and device, electronic equipment and readable storage medium
CN108696554B (en) Load balancing method and device
CN109120680A (en) A kind of control system, method and relevant device
CN107045452B (en) Virtual machine scheduling method and device
CN116916265A (en) Method, device, equipment and storage medium for processing ticket file data
CN105657063B (en) Data download method and device
CN116150273A (en) Data processing method, device, computer equipment and storage medium
CN114070889B (en) Configuration method, traffic forwarding device, storage medium, and program product
CN112256436B (en) Resource allocation method, device, equipment and computer storage medium
CN114528140A (en) Method and device for service degradation
CN111800446B (en) Scheduling processing method, device, equipment and storage medium
CN112306701B (en) Service fusing method, device, equipment and storage medium
CN108683608B (en) Method and device for distributing flow

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination