CN117290143A - Fault locating method, system, electronic equipment and computer readable storage medium

Fault locating method, system, electronic equipment and computer readable storage medium

Info

Publication number
CN117290143A
Authority
CN
China
Prior art keywords
application
fault
node
semantic
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311270939.5A
Other languages
Chinese (zh)
Inventor
王永庚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202311270939.5A
Publication of CN117290143A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/079 Root cause analysis, i.e. error or fault diagnosis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155 Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G06N5/022 Knowledge engineering; Knowledge acquisition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of this specification disclose a fault locating method, a system, an electronic device, and a computer readable storage medium. The application description text of each application is encoded to obtain an initial vector representation of that text, and a structure diagram of the applications is generated; similar nodes are then connected to generate a semantic graph of the applications. A self-supervised learning model is trained using a similarity loss between the structure diagram and the semantic graph, the trained model generates scoring vector representations of the application description texts and of the fault description text, and the fault location score of each application node is calculated from the similarity of those scoring vector representations.

Description

Fault locating method, system, electronic equipment and computer readable storage medium
Technical Field
Embodiments of the present invention relate to the field of fault locating technologies, and in particular, to a fault locating method, a system, an electronic device, and a computer readable storage medium.
Background
In fault detection, locating a fault after it has occurred generally requires detailed fault diagnosis and analysis to determine the location and root cause of the fault, after which appropriate measures are taken to repair the system.
Existing methods for automatically locating such after-the-fact faults are designed around historical fault patterns: a system under fault generates a large amount of error-report information, each class of fault produces different error information, and the existing methods distinguish and locate faults from this information. This approach relies heavily on annotation data derived from historical faults to guide the model in learning error patterns.
However, customer complaint faults occur rarely and are highly sporadic, so historical data on similar faults is severely lacking, and historical faults cannot cover the localization requirements of all faults when localization is driven by customer complaints.
Therefore, a fault locating method is needed that can automatically locate faults from customer complaints without depending on historical data.
Disclosure of Invention
The embodiments of this specification provide a fault locating method, a fault locating system, an electronic device, and a computer readable storage medium. The technical solutions are as follows:
in a first aspect, embodiments of the present disclosure provide a fault locating method, including:
Acquiring an application description text of an application node, wherein the application node is obtained based on a domain knowledge graph;
encoding each application description text respectively to generate an initial vector representation of each application description text;
generating an applied structure diagram according to the connection relation of the application nodes in the domain knowledge graph;
calculating initial vector representation of application description text of the application nodes in the structure diagram, connecting similar application nodes according to the initial vector representation, and generating an application semantic graph;
training a hierarchical self-supervision learning model, wherein the hierarchical self-supervision learning model is used for encoding an application description text and a fault description text, the training is performed by using a hierarchical self-supervision loss, and the hierarchical self-supervision loss is calculated according to node-level, local-level and global-level similarity losses between initial vector representations of application nodes in the structure diagram and the semantic graph;
and generating scoring vector representations of the application nodes and the fault description text by using the trained hierarchical self-supervision learning model, and calculating fault positioning scores of the application nodes according to similarity of the scoring vector representations.
In a second aspect, embodiments of the present disclosure provide a fault localization system comprising:
The application description acquisition module is used for acquiring application description text of the application node, and the application node is obtained based on the domain knowledge graph;
the fault description acquisition module is used for acquiring a fault description text;
the initial vector representation calculation module is used for respectively encoding each application description text and generating an initial vector representation of each application description text;
the structure diagram generation module is used for generating a structure diagram of the application according to the connection relation of the application nodes in the domain knowledge graph;
the semantic graph generation module is used for calculating initial vector representations of the application description texts of the application nodes in the structure diagram, connecting similar application nodes according to the initial vector representations, and generating the semantic graph of the applications;
the training module is used for training a hierarchical self-supervision learning model, the training is carried out by using a hierarchical self-supervision loss, and the hierarchical self-supervision loss is obtained by calculating node-level, local-level and global-level similarity losses between the initial vector representations of the application nodes in the structure diagram and the semantic graph;
the self-supervision learning model is used for encoding the application description text and the fault description text and generating scoring vector representations of the application nodes and the fault description text;
And the fault locating module is used for calculating the fault locating score of the application node according to the similarity of the scoring vector representation of the application description text and the scoring vector representation of the fault description text.
In a third aspect, embodiments of the present disclosure provide an electronic device including a processor and a memory; the processor is connected with the memory; the memory is used for storing executable program codes; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory for performing the steps of the fault localization method described in the first aspect of the embodiment.
In a fourth aspect, the present description provides a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the steps of the fault localization method of the first aspect of the above-described embodiments.
The technical scheme provided by some embodiments of the present specification has the following beneficial effects:
the method of the embodiments of this specification first acquires the application description text of each application and encodes these texts to obtain an initial vector representation of each application description text, and generates a structure diagram of the applications. It then calculates the similarity of the application nodes in the structure diagram and connects similar nodes to generate a semantic graph of the applications, trains a self-supervision learning model using the similarity loss between the structure diagram and the semantic graph, generates scoring vector representations of the application description texts and the fault description text with the trained model, and calculates the fault location score of each application node from the similarity of the scoring vector representations. Because the method uses information from similar nodes when calculating the scoring vector representation of an application node, an accurate scoring vector representation can be generated even if the application description text of a given node is missing, and fault localization that combines these scoring vector representations with the vector representation of the fault description text therefore yields a more accurate final result.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained from them without inventive effort by a person skilled in the art.
Fig. 1 is a schematic system architecture diagram of a fault locating method according to an embodiment of the present disclosure.
Fig. 2 is a schematic flow chart of a fault locating method according to an embodiment of the present disclosure.
Fig. 3 is a flowchart of a process for generating a domain pre-training language model with knowledge enhancement of a fault location method according to an embodiment of the present disclosure.
Fig. 4 is a flow chart of still another fault locating method according to an embodiment of the present disclosure.
Fig. 5 is a flow chart of still another fault locating method according to an embodiment of the present disclosure.
Fig. 6 is a flow chart of still another fault locating method according to an embodiment of the present disclosure.
Fig. 7 is a schematic structural diagram of a fault location system according to an embodiment of the present disclosure.
Fig. 8 is a schematic structural diagram of yet another fault location system provided in an embodiment of the present disclosure.
Fig. 9 is a schematic structural diagram of yet another fault location system provided in an embodiment of the present disclosure.
Fig. 10 is a schematic structural diagram of yet another fault location system provided in an embodiment of the present disclosure.
Fig. 11 is a schematic structural diagram of still another fault location system provided in an embodiment of the present disclosure.
Fig. 12 is a schematic structural diagram of yet another fault location system provided in an embodiment of the present disclosure.
Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification.
The terms "first", "second" and the like in the description, in the claims and in the above drawings are used to distinguish between different objects and not necessarily to describe a sequential or chronological order. Furthermore, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements not listed or inherent to such a process, method, system, article, or apparatus.
Before explaining the fault locating method in detail in connection with one or more embodiments, the present specification describes an application scenario of the fault locating method.
Faults can be classified into several categories according to the stage at which they are detected, including faults intercepted beforehand, faults discovered in process, and faults located after the fact.
A fault intercepted beforehand is detected and predicted before an incident occurs, so that preventive measures can be taken to prevent the incident. This type of fault is typically handled by a monitoring and prediction system that anticipates potential faults through real-time data analysis and early warning and takes corresponding action to prevent the fault from occurring.
A fault discovered in process is found during the operation of a device or system, typically by monitoring the system's performance, operating conditions and data, so that when an anomaly or abnormal behavior appears, measures can be taken in time to prevent the fault from developing further.
A fault located after the fact is detected after the system has already failed and needs to be located and removed. Locating such a fault typically requires detailed fault diagnosis and analysis to determine its location and root cause, after which appropriate measures are taken to repair the system. Handling this type of fault typically consumes a large amount of time and resources on fault localization, so automating fault localization is extremely important.
Existing methods for automatic fault localization are designed around historical fault patterns: a system under fault generates a large amount of error-report information, each class of fault produces different error information, and the existing methods distinguish and locate faults based on this information. This approach relies heavily on labeled data to guide the model in learning error patterns.
A customer complaint fault is a kind of after-the-fact fault, discovered and located after the text of a customer complaint is received. When faults are located automatically from customer complaints, such faults are few and highly sporadic, historical data on similar faults is severely lacking, and historical faults cannot cover the localization requirements of all faults.
The text content of a customer complaint varies with the customer, but in most cases it contains natural-language description such as which application and which service the customer experienced the fault in and the specifics of the fault.
Referring to fig. 1, a schematic system architecture of a fault locating method according to an embodiment of the present disclosure may at least include a server 110, a client 120 and a backend 130. Wherein:
Server 110 may be, but is not limited to, a cloud server. It receives customer complaint faults sent by clients 120 and extracts fault description text from the received customer complaint fault data. The server 110 may also acquire the application description text of each application node based on a domain knowledge graph, where the domain knowledge graph may be imported through the backend 130 and stored in the storage hardware of the server 110. A pre-trained language model is provided inside the server 110, which can encode the application description texts to generate their initial vector representations, and can generate the structure diagram of the applications according to the connection relationships of the application nodes in the domain knowledge graph. Further, the server calculates the initial vector representations of the application description texts of the application nodes in the structure diagram and connects similar application nodes according to these representations to generate the semantic graph of the applications. The server 110 can also train a hierarchical self-supervision learning model, which resides in the storage hardware of the server 110; the server 110 calculates the hierarchical self-supervision loss and completes each training iteration with it until training of the model is finished. The server may then generate scoring vector representations of the application nodes and the fault description text using the trained model, calculate the fault location score of each application node from the similarity of the scoring vector representations, send the fault location scores of the application nodes to the backend 130, and, according to a preset screening rule, send the several highest fault location scores together with the corresponding application node information to the backend 130.
There may be multiple clients 120, all connected to server 110. A client 120 can establish data communication with server 110 to send customer complaint fault data to it. A client 120 can also send this customer complaint fault data to the backend 130 and establish communication with the backend 130 to exchange fault information with fault handlers. The client 120 may be, but is not limited to, a mobile phone, tablet, notebook or other device with a client application installed.
The backend 130 may be a terminal for fault handling personnel, on which an application program with backend functions may be installed. The backend 130 can establish data communication with the server 110 to send training data such as the domain knowledge graph and corpora to the server, or to receive the fault location scores and application node information calculated by the server 110. The backend 130 can also establish data communication with the client 120 to receive customer complaint fault data and use the fault description text in it to assist in judging the fault location so that the fault can be repaired. The backend 130 may also establish direct voice or text communication with the client 120 to expedite fault handling.
Fault localization is not limited to the server 110; it may also be performed by the backend 130, which is not specifically limited in the embodiments of this specification. Next, with reference to fig. 1 and taking the case where fault localization is performed by the server 110 as an example, the fault locating method provided in the embodiments of this specification is described.
Referring to fig. 2, fig. 2 is a schematic flow chart of a fault locating method according to an embodiment of the present disclosure. The fault locating method specifically comprises the following steps:
step 202, acquiring an application description text of an application node, wherein the application node is obtained based on a domain knowledge graph.
Specifically, the domain knowledge graph includes relevant knowledge of the customer service domain, where a plurality of nodes exist, and an application that needs to perform fault location is set as an application node in the domain knowledge graph. The nodes include, but are not limited to, application nodes, and relationships between nodes include relationships between application nodes and non-application nodes in addition to relationships between application nodes.
It should be noted that an application refers to a service or system that performs a certain function. An application generally faces a client, receiving and processing the client's operations, or runs in the background to support other client-facing applications.
The application description text of each application node describes the name of the corresponding application, the location in the system, the function and the associated application.
Illustratively, the application description text used in this embodiment employs a structured description for each application. In this example, the application description text is divided into four paragraphs. The first paragraph describes the name of the application. The second paragraph describes the position of the application in the human-machine interface, which may be a page position in a graphical interactive interface, a trigger button position in a physical operation interface, or the sequence of voice commands that invokes it in a voice interactive interface. The third paragraph describes what operations are needed to activate the application, which information must be entered when it is activated, and which results it can output. The fourth paragraph describes the upstream and downstream applications of the application: which applications its input information comes from, and which applications its output information is for.
The format of the application description text describes the application in detail, so that the initial vector representation obtained after encoding can represent various attributes and functions of the application as comprehensively as possible.
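By way of illustration only, a description of this format might read as follows; the application and all its details are hypothetical and are given only to show the four-paragraph structure, not taken from any real system:

    Paragraph 1: The application is named "Balance Inquiry".
    Paragraph 2: In the graphical interactive interface, the application is reached from the home page through the "Accounts" tab, second button from the top.
    Paragraph 3: The application is activated by tapping the inquiry button after login; it takes the account identifier as input and outputs the current account balance.
    Paragraph 4: The application receives input from the "Login" and "Account Selection" applications and provides output to the "Bill Display" application.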
Step 204, encoding each application description text respectively to generate an initial vector representation of each application description text.
In this embodiment, the above encoding is implemented with a pre-trained language model that has learned generic semantic and contextual information from a large-scale corpus, thereby learning word embeddings and higher-level textual representations such as sentence-level and paragraph-level representations. These representations map the application description text into a high-dimensional vector space in which each dimension represents one semantic or contextual feature of the text.
Through encoding by the pre-trained language model, the functions, related parameters and other content of the application in the application description text are converted into an initial vector representation by natural language processing, so that the application's name, position, function and associated applications are represented in high-dimensional vector form.
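A minimal sketch of this encoding step in Python; the embodiments do not prescribe a particular pre-trained language model, so the encoder below is a stand-in chosen for illustration:

    import numpy as np
    from sentence_transformers import SentenceTransformer

    # Any pre-trained text encoder can play this role; this model name is illustrative.
    encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    def initial_vectors(description_texts):
        # Map each application description text into a high-dimensional vector space.
        return np.asarray(encoder.encode(description_texts))  # shape: (num_apps, dim)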
Step 206, generating a structure diagram of the applications according to the connection relationships of the application nodes in the domain knowledge graph.
It can be understood that, since the application nodes are extracted from the domain knowledge graph and the domain knowledge graph includes the relationships between nodes, the subgraph containing the application nodes and the relationships between them can be extracted directly from the domain knowledge graph to generate the structure diagram of the applications.
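A minimal sketch of this extraction, assuming the domain knowledge graph is available as (head, relation, tail) triples, that application nodes are identified by a known set of ids, and that the graph is represented with the networkx library; all names are illustrative:

    import networkx as nx

    def build_structure_graph(triples, application_ids):
        # Keep only the edges whose two endpoints are both application nodes.
        graph = nx.Graph()
        graph.add_nodes_from(application_ids)
        for head, _relation, tail in triples:
            if head in application_ids and tail in application_ids:
                graph.add_edge(head, tail)
        return graph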
Step 208, calculating an initial vector representation of the application description text of the application node in the structure diagram, and connecting similar application nodes according to the initial vector representation to generate an application semantic graph.
In step 208, the method of the embodiments of this specification determines the similarity of application nodes from their initial vector representations. As an example, the similarity between two initial vector representations is calculated as the cosine of the angle between them, which yields the similarity between the application nodes corresponding to those two representations.
After the similarities are determined, step 208 creates connections between similar application nodes that are not directly connected, so that each application node can be supplemented by the other application nodes similar to it. When the application description text of a certain application node is missing or hard to match against the fault description text of a customer complaint, connecting that node to its similar nodes lets the application description texts of those similar nodes provide assistance, supplementing what is missing or insufficient in the node's own description.
Step 210, calculating to obtain a hierarchical self-supervision loss according to node level, local level and global level similarity losses between initial vector representations of application nodes in the structure diagram and the semantic graph, and then training a hierarchical self-supervision learning model by using the hierarchical self-supervision loss.
The hierarchical self-supervision learning model is used for encoding the application description texts and the fault description text. The hierarchical self-supervision loss used during training is calculated from the node-level, local-level and global-level similarity losses between the initial vector representations of the application nodes in the structure diagram and the semantic graph. Each training iteration calculates this loss, and training is complete once the iterations reach a specified count or the loss reaches its minimum.
Through step 210, the embodiments of this specification use the hierarchical self-supervision loss as the loss function of the training process, so that the hierarchical self-supervision learning model fuses the encoding of an application description text with external information related to the application node. When an application's own description is missing or hard to match against the fault description text, making the fault hard to locate, the local-level and global-level similarity losses provide additional features, and the application nodes near the node at the local level supply further description, so the trained model can calculate the vector representation of the description text more accurately.
Step 212, encoding the application description text of each application node by using the trained hierarchical self-supervision learning model, and generating a scoring vector representation of each application description text.
Step 214, obtaining a fault description text, and encoding the fault description text by using the trained hierarchical self-supervision learning model to generate a scoring vector representation of the fault description text.
Step 216, calculating the fault location score of the application node according to the scoring vector representation of the application description text and the scoring vector representation of the fault description text.
Specifically, the fault location score is obtained by calculating the similarity between the scoring vector representation of the application description text and the scoring vector representation of the fault description text, where the similarity between two vectors is calculated as their cosine.
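A sketch of this score calculation, assuming app_vecs holds the scoring vector representations of the application description texts (one row per application node) and fault_vec holds that of the fault description text; the function name is illustrative:

    import numpy as np

    def fault_location_scores(app_vecs, fault_vec):
        # Cosine similarity between each application vector and the fault vector.
        app_norms = np.linalg.norm(app_vecs, axis=1)
        fault_norm = np.linalg.norm(fault_vec)
        return app_vecs @ fault_vec / (app_norms * fault_norm + 1e-12)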
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
For example, the order of steps 214 and 216 may be reversed without affecting the implementation of the embodiments of the present description.
In a fault locating method according to the embodiments of this specification, the encoding of each application description text in step 204, which generates the initial vector representation of each text, is performed by a knowledge-enhanced domain pre-trained language model. To obtain this model, the method also includes, before step 202, a step 200 of generating the knowledge-enhanced domain pre-trained language model.
Referring to fig. 3, fig. 3 shows the process of generating the knowledge-enhanced domain pre-trained language model in a fault locating method according to an embodiment of the present disclosure, including the following steps:
Step 302, extracting text information from a domain corpus by using a semi-open relation extraction technique, and adding the text information into the domain knowledge graph, wherein the domain corpus at least includes the corpora of the applications.
The semi-open relation extraction technique is a method for automatically identifying and extracting relation information from unstructured text; it combines the characteristics of open information extraction with those of traditional relation extraction. Semi-open relation extraction identifies potential relations from the context of the text by detecting specific patterns, vocabulary and grammatical structures. The method can determine the roles and meanings of the entities in a relation, so that text information related to the applications is added to the domain knowledge graph more comprehensively.
Step 304, extracting keyword triples from the domain corpus, and adding the keyword triples into the domain knowledge graph.
By extracting keyword triples from the domain corpus, the method of the embodiments of this specification adds the keywords to the domain knowledge graph in the form of triples, completing the update and expansion of the domain knowledge graph.
More specifically, step 304 extracts the keywords of the corpus using the KeyBERT model and forms the keyword triples. It will be appreciated that this extraction is not limited to the KeyBERT model; it may also be accomplished with other approaches such as TF-IDF or pre-trained language models such as GPT.
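A sketch of this keyword extraction using the open-source KeyBERT library; the triple format shown is an assumption made for illustration, since the embodiments do not fix how keywords are expressed as triples:

    from keybert import KeyBERT

    kw_model = KeyBERT()

    def keyword_triples(document_text, document_id, top_n=5):
        # extract_keywords returns (keyword, score) pairs ranked by relevance.
        keywords = kw_model.extract_keywords(document_text, top_n=top_n)
        return [(document_id, "has_keyword", keyword) for keyword, _score in keywords]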
Step 306, pre-training the pre-trained language model by using the domain knowledge graph to generate a knowledge-enhanced domain pre-trained language model.
Because the corresponding text information and triples are injected into the domain knowledge graph for knowledge enhancement, the knowledge-enhanced domain pre-trained language model generated by the above steps is more strongly associated with fault-related information; it can better capture the key information in fault descriptions and improve the accuracy and richness of the generated vector representations.
Referring to fig. 4, fig. 4 is a flowchart of calculating the hierarchical self-supervision loss from the node-level, local-level and global-level similarity losses in a fault locating method according to an embodiment of the present disclosure, including:
step 402, respectively setting weight coefficients of node level, local level and global level similarity loss;
step 404, calculating a weighted sum according to the weight coefficients of the node level, the local level and the global level similarity losses, and using the weighted sum as a hierarchical self-supervision loss.
It can be appreciated that by setting the weight coefficients of the node-level, local-level and global-level similarity losses separately, the above method lets the resulting hierarchical self-supervision loss be adjusted in how much the local-level and global-level similarity losses contribute. If the weight coefficients of the local-level and global-level similarity losses are set higher, the hierarchical self-supervision learning model trained with this loss depends less on the designated application node when calculating the scoring vector representation of an application description text and draws more easily on the application description texts of other, similar application nodes, making the calculation of the scoring vector representation more accurate.
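A minimal sketch of this weighting scheme; the weight values are hyperparameters chosen by the practitioner, and the three component losses come from the node-level, local-level and global-level calculations described below:

    def hierarchical_loss(node_loss, local_loss, global_loss,
                          w_node=1.0, w_local=1.0, w_global=1.0):
        # Weighted sum of the three similarity losses (step 404).
        return w_node * node_loss + w_local * local_loss + w_global * global_loss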
FIG. 4 also provides a flowchart of calculating the local-level loss between the initial vector representations of the application nodes in the structure diagram and the semantic graph in a fault locating method according to the embodiments of this specification, where the calculation includes:
Step 502, clustering the application nodes in the structure diagram and the semantic graph to obtain a plurality of application node clusters;
step 504, calculating the average of the initial vector representations of the application description texts of the application nodes in each application node cluster, and taking this average as the vector representation of that application node cluster;
step 506, calculating the Euclidean distance between the vector representations of an application node cluster in the structure diagram and in the semantic graph, as the local-level loss between the initial vector representations of the application nodes in the two graphs.
More specifically, the embodiments of this specification use the K-means clustering algorithm when performing the clustering of step 502. It is understood that this is only a preferred clustering algorithm; the clustering is not limited to K-means and may also use hierarchical clustering, DBSCAN, Mean Shift or other clustering algorithms.
After step 504, the application nodes in the structure diagram and the semantic graph are grouped into clusters, and the embedding of each cluster is the average of the embeddings of the nodes in that cluster. The Euclidean distance between the vector representations of the cluster containing a given application node in the structure diagram and in the semantic graph is the local-level loss between the initial vector representations of that application node.
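A sketch of this local-level loss, assuming struct_vecs and sem_vecs are arrays of node vectors for the same application nodes in the structure diagram and the semantic graph. The cluster assignment here is computed once on the structure-diagram vectors and shared across the two graphs so that corresponding clusters can be compared; the embodiments leave the pairing of clusters implicit, so this is one plausible reading:

    import numpy as np
    from sklearn.cluster import KMeans

    def local_level_loss(struct_vecs, sem_vecs, n_clusters=8):
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(struct_vecs)
        loss = 0.0
        for c in range(n_clusters):
            members = labels == c
            # Each cluster is represented by the mean of its member node vectors.
            struct_center = struct_vecs[members].mean(axis=0)
            sem_center = sem_vecs[members].mean(axis=0)
            loss += np.linalg.norm(struct_center - sem_center)  # Euclidean distance
        return loss / n_clusters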
FIG. 4 also provides a flowchart of calculating the global-level loss between the initial vector representations of the application nodes in the structure diagram and the semantic graph in a fault locating method according to the embodiments of this specification, where the calculation includes:
step 602, performing mean pooling over the application nodes in the structure diagram and the semantic graph to obtain a global-level vector representation of each graph;
step 604, calculating the Euclidean distance between the global-level vector representations of the structure diagram and the semantic graph as the global-level loss between the initial vector representations of the application nodes in the two graphs.
Mean pooling is a pooling operation; downsampling the whole structure diagram and semantic graph through mean pooling extracts their main global features. The specific operation is to calculate the average of the vector representations in the structure diagram and in the semantic graph and take the calculated averages as the global-level vector representations of the two graphs. The Euclidean distance between these global-level vector representations is the global-level loss between the initial vector representations of the application nodes in the structure diagram and the semantic graph.
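A sketch of this global-level loss under the same assumptions as above: mean-pool all node vectors in each graph, then take the Euclidean distance between the two pooled vectors:

    import numpy as np

    def global_level_loss(struct_vecs, sem_vecs):
        # Mean pooling downsamples each graph to a single global vector.
        return np.linalg.norm(struct_vecs.mean(axis=0) - sem_vecs.mean(axis=0))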
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Referring to fig. 5, fig. 5 is a flowchart of calculating similarity between vector representations, connecting similar application nodes, and generating a semantic graph of an application in the fault location method provided in the embodiment of the present disclosure, where the flowchart includes:
Step 702, calculating the similarity between the vector representations to obtain a similarity matrix.
The similarity is calculated as the cosine between vector representations, and the similarity matrix consists of the similarities between the initial vector representations of every pair of application nodes.
Step 704, setting a similarity threshold, and screening the application node combinations in the similarity matrix whose similarity is greater than the threshold.
Step 706, connecting the application nodes in each screened combination to generate the semantic graph of the applications.
More specifically, in the embodiments of this specification, step 704 sets a similarity threshold t, and two application nodes whose similarity is greater than t are similar nodes, considered to have a pseudo edge between them. The pseudo edge expresses the semantic similarity between the two nodes; connecting each pair of similar nodes produces the semantic graph of the applications, which contains the similarity relationships between the application nodes.
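A sketch of steps 702 to 706, reusing a structure diagram represented as a networkx graph (as in the earlier sketch) and the initial vectors; the threshold value is illustrative and would be tuned in practice:

    import numpy as np

    def build_semantic_graph(structure_graph, app_ids, vecs, t=0.85):
        semantic_graph = structure_graph.copy()
        normed = vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-12)
        sim = normed @ normed.T  # cosine similarity matrix (step 702)
        for i in range(len(app_ids)):
            for j in range(i + 1, len(app_ids)):
                if sim[i, j] > t:  # similar nodes receive a pseudo edge (steps 704-706)
                    semantic_graph.add_edge(app_ids[i], app_ids[j])
        return semantic_graph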
The embodiments of this specification further provide a fault locating method in which, after the fault location score of each application node is calculated in step 216 from the scoring vector representation of the application description text and the scoring vector representation of the fault description text, the method further includes:
Step 218, counting fault location scores of each application node, and sorting the application nodes according to the fault location scores.
It will be appreciated that step 218 collects the fault location score of each application node with respect to the fault description text to obtain a matrix, sequence or list of fault location scores, and then orders the corresponding application nodes by score, i.e., by how likely each node is to be the fault location.
Referring to fig. 6, fig. 6 is a schematic diagram of a fault locating method according to an embodiment of the present disclosure in which, after the application nodes are ordered according to the fault location scores in step 218, the method further includes:
Step 220, screening the specified number of application nodes with the highest fault location scores.
Screening the specified number of application nodes with the highest fault location scores yields the application nodes most likely to be where the fault occurred, which makes it convenient for engineers to judge from the screening result and determine the fault location.
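A sketch of this sorting and screening, assuming scores is the array returned by the scoring step sketched earlier; the function name and the default k are illustrative:

    import numpy as np

    def top_k_candidates(app_ids, scores, k=5):
        order = np.argsort(scores)[::-1]  # highest fault location score first
        return [(app_ids[i], float(scores[i])) for i in order[:k]]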
Referring to fig. 7, fig. 7 is a schematic structural diagram of a fault locating system according to an embodiment of the present disclosure.
As shown in fig. 7, the fault locating system may at least include an application description acquisition module 801, a fault description acquisition module 802, an initial vector representation calculation module 803, a structure diagram generation module 804, a semantic graph generation module 805, a training module 806, a self-supervised learning model 807, and a fault localization module 808, wherein:
An application description acquisition module 801, configured to acquire an application description text of an application node, where the application node is obtained based on a domain knowledge graph;
a fault description acquisition module 802, configured to acquire a fault description text;
an initial vector representation calculation module 803, configured to encode each application description text separately, and generate an initial vector representation of each application description text;
the structure diagram generating module 804 is configured to generate a structure diagram of the application according to the connection relationship of the application nodes in the domain knowledge graph;
the semantic graph generating module 805 is configured to obtain the initial vector representations of the application description texts of the application nodes in the structure diagram, connect similar application nodes according to the initial vector representations, and generate the semantic graph of the applications;
the training module 806 is configured to train the hierarchical self-supervised learning model, where training is performed using hierarchical self-supervised loss, and the hierarchical self-supervised loss is calculated according to node level, local level, and global level similarity loss between initial vector representations of the application nodes in the structure diagram and the semantic graph;
a self-supervised learning model 807 for encoding the application description text and the fault description text, generating scoring vector representations of the application nodes and the fault description text;
The fault location module 808 is configured to calculate a fault location score of the application node according to a similarity between the scoring vector representation of the application description text and the scoring vector representation of the fault description text.
It will be appreciated that the self-supervised learning model 807 is trained by the training module 806; during the training performed by the training module 806, the parameters of the self-supervised learning model 807 change according to the hierarchical self-supervision loss, and training is completed when the hierarchical self-supervision loss is minimized.
Referring to fig. 8, fig. 8 is a schematic structural diagram of still another fault location system according to an embodiment of the present disclosure. Compared with the fault location system shown in fig. 7, the fault location system shown in fig. 8 further includes a pre-training language model generating module 809 for generating a knowledge-enhanced domain pre-trained language model.
Specifically, referring to fig. 9, the pre-training language model generating module 809 includes:
a text extraction unit 901, configured to extract text information from a domain corpus using a semi-open relationship extraction technique, where the domain corpus at least includes an applied corpus;
a triplet extraction unit 902, configured to extract a keyword triplet from the domain corpus;
an inserting unit 903, configured to add the text information and the keyword triples to the domain knowledge graph;
The pre-training unit 904 is configured to pre-train the pre-training language model by using the domain knowledge graph, and generate a domain pre-training language model with enhanced knowledge.
Referring to fig. 10, in still another fault location system provided in an embodiment of the present disclosure, a training module 806 includes:
a weight coefficient setting unit 1001 for setting weight coefficients of node level, local level, and global level similarity loss, respectively;
the hierarchical self-supervision calculation unit 1002 is configured to calculate a weighted sum according to the weight coefficients of the node level, the local level, and the global level similarity loss, as a hierarchical self-supervision loss.
Referring also to fig. 10, in the fault localization system, the training module 806 further includes:
a clustering unit 1003, configured to cluster application nodes in the structure graph and the semantic graph to obtain a plurality of application node clusters;
a cluster calculation unit 1004, configured to calculate an initial vector representation average value of application description text of an application node in each application node cluster, as a vector representation of the application node cluster;
the local-level loss calculation unit 1005 is configured to calculate the Euclidean distance between the vector representations of an application node cluster in the structure diagram and in the semantic graph, as the local-level loss between the initial vector representations of the application nodes in the two graphs.
Referring also to fig. 10, in the fault localization system, the training module 806 further includes:
the mean value calculation unit 1006 is configured to perform mean value pooling calculation on application nodes in the structure graph and the semantic graph, so as to obtain a global level vector representation of the structure graph and the semantic graph;
the global-level loss calculation unit 1007 calculates the Euclidean distance between the global-level vector representations of the structure diagram and the semantic graph as the global-level loss between the initial vector representations of the application nodes in the two graphs.
Referring to fig. 11, in still another fault location system provided in the embodiment of the present disclosure, a semantic graph generating module 805 includes:
a similarity node screening unit 1101, configured to set a similarity threshold, and screen an application node combination in the similarity matrix that is greater than the similarity threshold;
and the similar node connection unit 1102 is used for connecting the application nodes in the application node combination to generate a semantic graph of the application.
Referring to fig. 12, fig. 12 is a schematic structural diagram of still another fault location system according to an embodiment of the present disclosure, and compared to the fault location system shown in fig. 8, the fault location system shown in fig. 12 further includes a sorting module 810.
The ranking module 810 is configured to count fault location scores of each application node, and rank the application nodes according to the fault location scores.
In yet another fault location system provided in an embodiment of the present disclosure, the ranking module 810 is further configured to screen a specified number of application nodes with highest fault location scores.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments refer to one another, and each embodiment mainly describes its differences from the others. In particular, since the fault location system embodiments are substantially similar to the fault location method embodiments, their description is relatively brief, and reference may be made to the description of the method embodiments for the relevant details.
Please refer to fig. 13, which illustrates a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 13, the electronic device 1200 may include: at least one processor 1201, at least one network interface 1204, a user interface 1203, a memory 1205, and at least one communication bus 1202.
Wherein the communication bus 1202 may be used to facilitate communications among the various components described above.
The user interface 1203 may include keys, among other things, and the optional user interface may also include a standard wired interface, a wireless interface.
The network interface 1204 may include, but is not limited to, a bluetooth module, an NFC module, a Wi-Fi module, and the like.
The processor 1201 may include one or more processing cores. Using various interfaces and lines to connect the various parts of the electronic device 1200, the processor 1201 performs the various functions of the electronic device 1200 and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 1205 and invoking data stored in the memory 1205. Optionally, the processor 1201 may be implemented in at least one of the hardware forms DSP, FPGA and PLA. The processor 1201 may integrate one or a combination of a CPU, a GPU, a modem and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU renders and draws the content to be displayed on the display screen; the modem handles wireless communications. It will be appreciated that the modem may also not be integrated into the processor 1201 and may instead be implemented by a separate chip.
The memory 1205 may include a RAM or a ROM. Optionally, the memory 1205 includes a non-transitory computer readable medium. The memory 1205 may be used to store instructions, programs, code sets, or instruction sets. The memory 1205 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-described various method embodiments, etc.; the storage data area may store data or the like referred to in the above respective method embodiments. The memory 1205 may also optionally be at least one storage device located remotely from the processor 1201. The memory 1205, which is a type of computer storage medium, may include an operating system, a network communication module, a user interface module, and a fault location application. The processor 1201 may be configured to invoke the fault location application stored in the memory 1205 and perform the steps of fault location mentioned in the previous embodiments.
Embodiments of the present disclosure also provide a computer-readable storage medium having instructions stored therein, which when executed on a computer or processor, cause the computer or processor to perform the steps of one or more of the embodiments shown in fig. 1-6 above. The above-described constituent modules of the electronic apparatus may be stored in the computer-readable storage medium if implemented in the form of software functional units and sold or used as independent products.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present description, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in or transmitted across a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (Digital Subscriber Line, DSL)), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy Disk, a hard Disk, a magnetic tape), an optical medium (e.g., a digital versatile Disk (Digital Versatile Disc, DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
Those skilled in the art will appreciate that implementing all or part of the above-described embodiment methods may be accomplished by way of a computer program, which may be stored in a computer-readable storage medium, instructing relevant hardware, and which, when executed, may comprise the embodiment methods as described above. And the aforementioned storage medium includes: various media capable of storing program code, such as ROM, RAM, magnetic or optical disks. The technical features in the present examples and embodiments may be arbitrarily combined without conflict.
The above embodiments are merely preferred embodiments of the present disclosure and do not limit its scope; modifications and improvements made by those skilled in the art to the technical solutions of the disclosure, without departing from its design spirit, fall within the protection scope defined by the claims of the disclosure.

Claims (20)

1. A fault location method, comprising:
acquiring application description texts of application nodes, wherein the application nodes are obtained based on a domain knowledge graph;
encoding each application description text respectively to generate an initial vector representation of each application description text;
generating a structure diagram of the application according to the connection relations of the application nodes in the domain knowledge graph;
acquiring the initial vector representations of the application description texts of the application nodes in the structure diagram, connecting similar application nodes according to the initial vector representations, and generating an application semantic graph;
training a hierarchical self-supervised learning model, wherein the training is performed using a hierarchical self-supervision loss, and the hierarchical self-supervision loss is calculated according to the node level, local level and global level similarity losses between the initial vector representations of the application nodes in the structure diagram and the semantic graph;
and generating scoring vector representations of the application nodes and the fault description text using the trained hierarchical self-supervised learning model, and calculating fault location scores of the application nodes according to the similarity of the scoring vector representations.
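As an illustration of the scoring step in claim 1, the following is a minimal Python sketch, assuming the scoring vector representations are dense numpy arrays and using cosine similarity as the similarity measure (the claim does not fix a particular measure):

```python
import numpy as np

def fault_location_scores(app_vecs: np.ndarray, fault_vec: np.ndarray) -> np.ndarray:
    """Score each application node by the cosine similarity between its
    scoring vector and the fault description's scoring vector."""
    app_unit = app_vecs / np.linalg.norm(app_vecs, axis=1, keepdims=True)
    fault_unit = fault_vec / np.linalg.norm(fault_vec)
    return app_unit @ fault_unit  # one fault location score per application node
```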
2. The fault location method according to claim 1, wherein each of said application description texts is encoded separately using a knowledge-enhanced domain pre-trained language model.
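A hedged sketch of the per-text encoding in claim 2, using the Hugging Face transformers API with mean pooling; `bert-base-chinese` is only a stand-in, since the knowledge-enhanced domain pre-trained language model is not publicly named here:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; the claims assume a knowledge-enhanced
# domain pre-trained language model instead.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
encoder = AutoModel.from_pretrained("bert-base-chinese")

def encode(texts: list[str]) -> torch.Tensor:
    """Produce one initial vector representation per application description text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state       # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)           # mean-pooled (B, H)
```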
3. The fault location method according to claim 2, wherein the knowledge-enhanced domain pre-trained language model is generated by:
extracting text information from a domain corpus using a semi-open relation extraction technique, and adding the text information to the domain knowledge graph, wherein the domain corpus comprises at least a corpus of the application;
extracting keyword triples from the domain corpus, and adding the keyword triples to the domain knowledge graph;
and pre-training a language model using the domain knowledge graph to generate the knowledge-enhanced domain pre-trained language model.
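The extraction steps of claim 3 are domain-specific and not sketched here, but the insertion of extracted keyword triples into the domain knowledge graph can be illustrated with networkx; the node and relation names below are hypothetical:

```python
import networkx as nx

def add_triples(kg: nx.MultiDiGraph, triples: list[tuple[str, str, str]]) -> None:
    """Add (head, relation, tail) keyword triples to the domain knowledge graph."""
    for head, relation, tail in triples:
        kg.add_edge(head, tail, key=relation, relation=relation)

kg = nx.MultiDiGraph()
add_triples(kg, [("payment-app", "depends_on", "account-service")])  # hypothetical triple
```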
4. The fault location method according to claim 1, wherein calculating the hierarchical self-supervision loss comprises:
setting weight coefficients for the node level, local level and global level similarity losses respectively;
and calculating a weighted sum according to the weight coefficients of the node level, local level and global level similarity losses, the weighted sum being taken as the hierarchical self-supervision loss.
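Claim 4 reduces to a weighted sum; a minimal sketch follows, with illustrative weights (the claim only requires that coefficients be set):

```python
def hierarchical_loss(node_loss: float, local_loss: float, global_loss: float,
                      w_node: float = 1.0, w_local: float = 0.5,
                      w_global: float = 0.5) -> float:
    """Weighted sum of the node level, local level and global level similarity losses."""
    return w_node * node_loss + w_local * local_loss + w_global * global_loss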
5. The fault location method according to claim 1 or 4, wherein calculating the local level similarity loss between the initial vector representations of the application nodes in the structure diagram and the semantic graph comprises:
clustering the application nodes in the structure diagram and the semantic graph to obtain a plurality of application node clusters;
calculating the average of the initial vector representations of the application description texts of the application nodes in each application node cluster, and taking the average as the vector representation of that application node cluster;
and calculating the Euclidean distance between the vector representations of the application node clusters in the structure diagram and the semantic graph as the local level similarity loss between the initial vector representations of the application nodes in the structure diagram and the semantic graph.
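A sketch of the local level loss of claim 5, under the simplifying assumption that clusters in the two graphs can be matched by index (a real implementation would need an explicit cluster alignment, which the claim does not spell out):

```python
import numpy as np
from sklearn.cluster import KMeans

def local_level_loss(struct_vecs: np.ndarray, sem_vecs: np.ndarray,
                     n_clusters: int = 8) -> float:
    """Cluster the nodes of each graph, represent each cluster by the mean of
    its initial vectors (the KMeans centroid), and average the Euclidean
    distances between index-matched cluster representations."""
    centers_s = KMeans(n_clusters=n_clusters, n_init=10).fit(struct_vecs).cluster_centers_
    centers_g = KMeans(n_clusters=n_clusters, n_init=10).fit(sem_vecs).cluster_centers_
    return float(np.linalg.norm(centers_s - centers_g, axis=1).mean())
```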
6. The fault location method according to claim 1 or 4, wherein calculating the global level similarity loss between the initial vector representations of the application nodes in the structure diagram and the semantic graph comprises:
carrying out average pooling over the initial vector representations of the application description texts of the application nodes in the structure diagram and the semantic graph to obtain global level vector representations of the structure diagram and the semantic graph;
and calculating the Euclidean distance between the global level vector representations of the structure diagram and the semantic graph as the global level similarity loss between the initial vector representations of the application nodes in the structure diagram and the semantic graph.
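The global level loss of claim 6 is more direct; a minimal numpy sketch:

```python
import numpy as np

def global_level_loss(struct_vecs: np.ndarray, sem_vecs: np.ndarray) -> float:
    """Average-pool each graph's initial node vectors into a single global
    vector, then take the Euclidean distance between the two."""
    return float(np.linalg.norm(struct_vecs.mean(axis=0) - sem_vecs.mean(axis=0)))
```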
7. The fault location method according to claim 1, wherein said connecting similar application nodes according to the initial vector representations and generating an application semantic graph comprises:
calculating the similarity between the initial vector representations to obtain a similarity matrix;
setting a similarity threshold, and screening the similarity matrix for application node combinations whose similarity is greater than the similarity threshold;
and connecting the application nodes in the screened application node combinations to generate the application semantic graph.
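A sketch of the semantic graph construction of claim 7, again assuming cosine similarity and an illustrative threshold of 0.8:

```python
import numpy as np

def semantic_edges(vecs: np.ndarray, threshold: float = 0.8) -> list[tuple[int, int]]:
    """Build the similarity matrix over initial vector representations and
    keep node combinations whose similarity exceeds the threshold as edges."""
    unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sim = unit @ unit.T
    rows, cols = np.where(np.triu(sim, k=1) > threshold)  # upper triangle: each pair once
    return list(zip(rows.tolist(), cols.tolist()))
```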
8. The fault location method according to claim 1, wherein, after calculating the fault location scores of the application nodes according to the similarity of the scoring vector representations, the fault location method further comprises:
counting the fault location score of each application node, and ranking the application nodes according to the fault location scores.
9. The fault location method according to claim 8, wherein, after ranking the application nodes according to the fault location scores, the method further comprises:
screening out a specified number of application nodes with the highest fault location scores.
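Claims 8 and 9 amount to ranking and top-k screening; a minimal sketch with hypothetical node names and scores:

```python
def top_k_suspects(scores: dict[str, float], k: int = 5) -> list[tuple[str, float]]:
    """Rank application nodes by fault location score and keep the k highest."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical fault location scores for three application nodes:
top_k_suspects({"order-service": 0.91, "auth-service": 0.42, "db-proxy": 0.77}, k=2)
```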
10. A fault location system, comprising:
the application description acquisition module is used for acquiring the application description texts of the application nodes, the application nodes being obtained based on the domain knowledge graph;
the fault description acquisition module is used for acquiring a fault description text;
the initial vector representation calculation module is used for respectively encoding each application description text and generating an initial vector representation of each application description text;
the structure diagram generation module is used for generating a structure diagram of the application according to the connection relation of the application nodes in the domain knowledge graph;
the semantic graph generation module is used for acquiring the initial vector representations of the application description texts of the application nodes in the structure diagram, connecting similar application nodes according to the initial vector representations, and generating an application semantic graph;
the training module is used for training a hierarchical self-supervised learning model, the training being performed using a hierarchical self-supervision loss, wherein the hierarchical self-supervision loss is calculated from the node level, local level and global level similarity losses between the initial vector representations of the application nodes in the structure diagram and the semantic graph;
the hierarchical self-supervised learning model is used for encoding the application description texts and the fault description text, and for generating scoring vector representations of the application nodes and the fault description text;
and the fault locating module is used for calculating the fault locating score of the application node according to the similarity of the scoring vector representation of the application description text and the scoring vector representation of the fault description text.
11. The fault location system of claim 10, wherein the initial vector representation calculation module encodes each of the application description texts separately using a knowledge-enhanced domain pre-trained language model.
12. The fault location system of claim 11, further comprising:
a pre-training language model generation module for generating the knowledge-enhanced domain pre-trained language model, the pre-training language model generation module comprising:
the text extraction unit is used for extracting text information from a domain corpus using a semi-open relation extraction technique, wherein the domain corpus comprises at least a corpus of the application;
the triplet extraction unit is used for extracting keyword triples from the domain corpus;
the inserting unit is used for adding the text information and the keyword triples to the domain knowledge graph;
and the pre-training unit is used for pre-training a language model using the domain knowledge graph to generate the knowledge-enhanced domain pre-trained language model.
13. The fault location system of claim 10, the training module comprising:
the weight coefficient setting unit is used for setting weight coefficients for the node level, local level and global level similarity losses respectively;
and the hierarchical self-supervision calculation unit is used for calculating a weighted sum according to the weight coefficients of the node level, local level and global level similarity losses, the weighted sum serving as the hierarchical self-supervision loss.
14. A fault location system according to claim 10 or 13, the training module comprising:
the clustering unit is used for clustering the application nodes in the structure diagram and the semantic graph to obtain a plurality of application node clusters;
the cluster calculation unit is used for calculating the average of the initial vector representations of the application description texts of the application nodes in each application node cluster, as the vector representation of that application node cluster;
and the local level loss calculation unit is used for calculating the Euclidean distance between the vector representations of the application node clusters in the structure diagram and the semantic graph as the local level similarity loss between the initial vector representations of the application nodes in the structure diagram and the semantic graph.
15. A fault location system according to claim 10 or 13, the training module comprising:
the mean value calculation unit is used for carrying out average pooling over the initial vector representations of the application nodes in the structure diagram and the semantic graph to obtain global level vector representations of the structure diagram and the semantic graph;
and the global level loss calculation unit is used for calculating the Euclidean distance between the global level vector representations of the structure diagram and the semantic graph as the global level similarity loss between the initial vector representations of the application nodes in the structure diagram and the semantic graph.
16. The fault location system of claim 10, the semantic graph generation module comprising:
the similarity calculation unit is used for calculating the similarity between the initial vector representations to obtain a similarity matrix;
the similarity node screening unit is used for setting a similarity threshold and screening the similarity matrix for application node combinations whose similarity is greater than the similarity threshold;
and the similar node connection unit is used for connecting the application nodes in the screened application node combinations to generate the application semantic graph.
17. The fault location system of claim 10, further comprising:
and the ranking module is used for counting the fault location score of each application node and ranking the application nodes according to the fault location scores.
18. The fault location system of claim 17, wherein the ranking module is further operable to screen out a specified number of application nodes with the highest fault location scores.
19. An electronic device, comprising a processor and a memory;
the processor is connected with the memory;
the memory is used for storing executable program codes;
the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the method according to any one of claims 1 to 9.
20. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 9.
CN202311270939.5A 2023-09-28 2023-09-28 Fault locating method, system, electronic equipment and computer readable storage medium Pending CN117290143A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311270939.5A CN117290143A (en) 2023-09-28 2023-09-28 Fault locating method, system, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN117290143A (en) 2023-12-26

Family

ID=89251504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311270939.5A Pending CN117290143A (en) 2023-09-28 2023-09-28 Fault locating method, system, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117290143A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination