US20220147814A1 - Task specific processing of regulatory content - Google Patents
- Publication number
- US20220147814A1 (application Ser. No. 17/093,416)
- Authority
- US
- United States
- Prior art keywords
- regulatory
- regulatory content
- language model
- task specific
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/216—Parsing using statistical methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/284—Lexical analysis, e.g. tokenisation or collocates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/26—Government or public services
Definitions
- This disclosure relates generally to performing computer implemented language processing tasks on regulatory content.
- Governments at all levels generate documents setting out requirements and/or conditions that should be followed for compliance with the applicable rules and regulations. For example, governments implement regulations, permits, plans, court ordered decrees, and bylaws to regulate commercial, industrial, and other activities considered to be in the public's interest. Standards bodies, companies, and other organizations may also generate documents setting out conditions for product and process compliance. These documents may be broadly referred to as “regulatory content”.
- a method for training a computer implemented neural network system for performing a processing task on regulatory content involves configuring a neural network language model capable of generating a language embedding output in response to receiving content.
- the method further involves fine-tuning the language model using regulatory content training data to generate a regulatory content language embedding output for regulatory content processed by the language model.
- the method also involves configuring at least one task specific output layer to generate task specific results in response to receiving the regulatory content language embedding output from the language model, and training the neural network system using task specific training data to output the task specific results, at least a portion of the task specific training data having been labeled prior to configuring the task specific neural network.
- Configuring the language model may involve configuring a pre-trained neural network language model for generation of the language embedding output, the pre-trained neural network language model including a plurality of layers of neurons, each neuron having an associated weight and bias, the weights and biases having been determined during training of the language model.
- Fine-tuning the language model may involve one of modifying weights and biases of the neurons of the language model based on the regulatory content training data, freezing weights and biases of at least some of the layers of neurons while modifying weights and biases of other layers of neurons based on the regulatory content training data, or adding at least one additional layer of neurons to the language model and determining weights and biases of the at least one additional layer based on the regulatory content training data.
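The three fine-tuning strategies just described (updating all weights, freezing some layers, or adding new layers) can be sketched with a toy model. Everything below, including the layer representation, the integer weights, and the single-step update, is an illustrative assumption, not the patent's implementation:

```python
def make_model(num_layers=4):
    # Toy stand-in for a pre-trained language model: each layer holds
    # a weight, a bias, and a flag saying whether training may update it.
    return [{"weight": 10, "bias": 0, "trainable": True}
            for _ in range(num_layers)]

def freeze_lower_layers(model, num_frozen):
    # Second strategy: freeze some layers, fine-tune only the rest.
    for layer in model[:num_frozen]:
        layer["trainable"] = False

def add_layer(model):
    # Third strategy: append a new layer whose weights and biases are
    # determined from the regulatory content training data.
    model.append({"weight": 5, "bias": 0, "trainable": True})

def training_step(model, gradient=1):
    # First strategy (nothing frozen) would update every layer here;
    # frozen layers are skipped.
    for layer in model:
        if layer["trainable"]:
            layer["weight"] -= gradient

model = make_model()
freeze_lower_layers(model, num_frozen=2)
add_layer(model)
training_step(model)
print([layer["weight"] for layer in model])  # [10, 10, 9, 9, 4]
```

The frozen lower layers keep their pre-trained weights; the unfrozen layers and the added layer move under the update.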
- the regulatory content training data may include a plurality of documents including regulatory text.
- the regulatory text in the plurality of documents may include unlabeled regulatory text.
- the plurality of documents may include regulatory text in a plurality of different languages.
- the plurality of documents including regulatory text may be pre-processed to generate the regulatory content training data by masking at least some words within sentences of the regulatory text and fine-tuning may involve configuring the neural network language model to generate a prediction for the masked words based on context provided by un-masked words in the sentence and updating the neural network language model based on a comparison between the generated prediction and the masked word.
- the regulatory content training data may involve pairs of sentences extracted from regulatory text associated with the plurality of documents and fine-tuning may involve configuring the neural network language model to generate a prediction as to whether the second sentence in the sentence pair follows the first sentence in the document and updating the neural network language model based on whether the generated prediction is correct.
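A minimal sketch of how such next-sentence-prediction pairs might be extracted from a regulatory document follows; the sampling scheme and the example sentences are hypothetical:

```python
import random

def make_nsp_pairs(sentences, seed=0):
    # Build (sentence_a, sentence_b, is_next) training examples:
    # positives are consecutive sentences from the document, negatives
    # pair a sentence with a randomly chosen non-consecutive one.
    rng = random.Random(seed)
    pairs = []
    for i in range(len(sentences) - 1):
        pairs.append((sentences[i], sentences[i + 1], True))   # positive
        j = rng.randrange(len(sentences))
        while j in (i, i + 1):  # skip the sentence itself and its true follower
            j = rng.randrange(len(sentences))
        pairs.append((sentences[i], sentences[j], False))      # negative
    return pairs

doc = [
    "The permittee shall monitor effluent weekly.",
    "Results shall be reported to the director.",
    "Records must be retained for three years.",
]
pairs = make_nsp_pairs(doc)
print(len(pairs))  # 4 pairs: two positive, two negative
```

The model is then updated according to whether its is-next prediction for each pair is correct.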
- the regulatory content language embedding output may include a plurality of vectors, each vector including a plurality of values representing a context for each word in the regulatory content.
- Configuring the at least one task specific output layer may involve configuring a classification layer operable to generate a classification output for the regulatory content.
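A classification layer of this kind is, in essence, a linear map from the language embedding to one score per class, followed by a softmax. The sketch below uses made-up weights, a tiny 4-value embedding, and class labels borrowed from the citation classifier described later, purely for illustration:

```python
import math

def classify(embedding, weights, biases, labels):
    # Linear layer: one score per class from the embedding values,
    # then softmax to turn the scores into probabilities.
    scores = [sum(w * x for w, x in zip(row, embedding)) + b
              for row, b in zip(weights, biases)]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(labels)), key=lambda k: probs[k])
    return labels[best], probs

embedding = [0.2, -0.1, 0.4, 0.3]          # hypothetical embedding
weights = [[1.0, 0.0, 0.5, 0.0],           # hypothetical class weights
           [0.0, 1.0, 0.0, 0.5],
           [-0.5, 0.5, -0.5, 0.5]]
biases = [0.0, 0.0, 0.1]
labels = ["citation sequence", "citation title", "not a citation"]

label, probs = classify(embedding, weights, biases, labels)
print(label)  # citation sequence
```

In training, the weights and biases of this layer (and, during further fine-tuning, of the language model) would be adjusted against labeled examples.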
- Training the neural network system to generate the classification output may involve a further fine-tuning of the language model based on the task specific training data.
- the classification output may be associated with one of an identification of a plurality of text fields within the regulatory content that have a common connotation between different documents, an identification of requirements or conditions within the regulatory content, or an identification of citations within the regulatory content, each citation being associated with one or more requirements or conditions within the regulatory content.
- Configuring the at least one task specific output layer may involve configuring a classification output layer to generate a classification identifying text as a citation sequence, a classification identifying text as a citation title, and a classification identifying text as not being associated with a citation, and the neural network system may be trained using training data including samples labeled as corresponding to a citation sequence, samples labeled as corresponding to a citation title, and samples not associated with a citation.
- Configuring the at least one task specific output layer may involve configuring a sibling classifier output layer to generate a classification identifying citations as being one of a sibling citation or not a sibling citation, the neural network system being trained using training data including pairs of samples including samples labeled as having a sibling relationship and samples labeled as not having a sibling relationship.
- Configuring the at least one task specific output layer may involve configuring a sibling classifier output layer to generate a classification identifying citations as being one of a parent citation or not a parent citation, the neural network system being trained using training data including pairs of samples including samples labeled as having a parent relationship and samples labeled as not having a parent relationship.
- Configuring the at least one task specific output layer may involve configuring a requirement classification output layer to generate a classification identifying text as corresponding to a requirement, a classification identifying text as corresponding to an optional or site-specific requirement, and a classification identifying text as including descriptive language related to a requirement but is not itself a requirement, and the neural network system may be trained using training data including text sequences that are labeled as requirements, labeled as optional or site-specific requirements, and labeled as descriptive text.
- Configuring the at least one task specific output layer may involve configuring a requirement conjunction classifier output layer to generate a classification identifying a requirement as not being a conjunction, a classification identifying a requirement as being a conjunction between a parent requirement and a single child requirement, and a classification identifying a requirement as being a conjunction between a parent requirement and multiple child requirements, and, the neural network system is trained using training data including a plurality of pairs of separated requirements, each pair having an assigned label indicating whether the pair is not a conjunction, a single child requirement conjunction, or a multiple child requirement conjunction.
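Classifiers that judge a pair of text segments, such as the conjunction, sibling, and parent classifiers above, typically receive both segments in a single input sequence. The following is a sketch of the standard pair encoding, with marker tokens in the style of BERT; the example requirement text is hypothetical:

```python
def encode_pair(tokens_a, tokens_b):
    # Standard transformer pair input: special markers separate the two
    # segments, and a parallel segment-id list says which segment each
    # token belongs to.
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    return tokens, segment_ids

parent = ["the", "permittee", "shall", "either", ":"]
child = ["submit", "monthly", "reports"]
tokens, segment_ids = encode_pair(parent, child)
print(len(tokens), len(segment_ids))  # 11 11
```

The classifier head then reads the embedding at the leading marker position to emit the pair label.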
- Configuring the at least one task specific output layer may involve configuring a smart field classifier output layer to generate a plurality of classifications identifying text fields within the regulatory content having a common connotation and the neural network system may be trained using training data including labeled samples corresponding to each of the plurality of classifications.
- the task specific training data for training the task specific neural network may include a portion of unlabeled training data.
- the portion of labeled task specific training data may involve regulatory text associated with a first language and the portion of unlabeled training data may include regulatory text associated with a language other than the first language.
- a system for performing a processing task on regulatory content includes a processor circuit and codes for directing the processor circuit to implement a regulatory content language model capable of generating a language embedding output in response to receiving regulatory content, the regulatory content language model having been fine-tuned using regulatory content training data to generate a regulatory content language embedding output for regulatory content.
- the system also includes codes for directing the processor circuit to implement at least one task specific output layer to generate task specific results in response to receiving the regulatory content language embedding output from the language model, the neural network system having been trained using task specific training data to output the task specific results, at least a portion of the task specific training data having been labeled prior to configuring the task specific neural network.
- FIG. 1 is a block diagram of a computer implemented system for performing a processing task on regulatory content according to a first disclosed embodiment
- FIG. 2 is a block diagram of an inference processor circuit for implementing the system shown in FIG. 1 ;
- FIG. 3 is a block diagram of a training system for training the system shown in FIG. 1 ;
- FIG. 4 is a process flowchart of a process for training a regulatory content language model of the system shown in FIG. 1 ;
- FIG. 5 is a block diagram of a configuration for training a regulatory content processing system
- FIG. 6A is a block diagram of a citation identification system embodiment, which may be implemented on the inference processor circuit of FIG. 2 ;
- FIG. 6B is a block diagram of a relationship classifier system used in conjunction with the citation identification system embodiment shown in FIG. 6A , which may be implemented on the inference processor circuit of FIG. 2 ;
- FIG. 7 is a block diagram of a requirement extraction system embodiment, which may be implemented on the inference processor circuit of FIG. 2 ;
- FIG. 8 is a block diagram of a conjunction classifier system embodiment, which may be implemented on the inference processor circuit of FIG. 2 ;
- FIG. 9 is a block diagram of a smart field identification system, which may be implemented on the inference processor circuit of FIG. 2 .
- the system 100 includes a regulatory content language model 102 that receives an input of regulatory content data 104 and generates a language embedding output 106 representing the semantic and syntactic meaning of words in the regulatory content.
- the regulatory content 104 may be received in any of a variety of text data formats, where words and characters in the text are encoded into a digital data format representing the text of the regulatory content.
- regulatory content may be received as image data, where the text is represented by pixels rather than digital text. In this case the regulatory content image data would be pre-processed to extract the text in a digital data format to generate the regulatory content 104 .
- the language embedding output 106 of the regulatory content language model 102 may be in the form of a set of values that define the semantic and syntactic meaning of each word in the regulatory content.
- the meaning of each word may be expressed as a vector having a plurality of values (typically several hundred values).
- the language embedding output 106 is fed through a task specific processing block 108 to perform additional processing that is specific to a particular task.
- the task specific processing block 108 and/or the regulatory content language model 102 may be further trained using task specific training data to output task specific results 110 for the regulatory content 104 . Examples of some task specific results 110 include identification of citations within regulatory content, determination of relationships between citations, extraction of requirements from regulatory content, generation of associated requirement descriptions, and smart field recognition. These examples of task specific processing are described in more detail below.
- the system 100 shown in FIG. 1 may be implemented on a processor circuit operably configured to provide inference functions for performing the processing task on the regulatory content 104 .
- the regulatory content language model 102 and/or task specific processing block 108 may be implemented using various neural networks for processing the regulatory content 104 .
- an inference processor circuit is shown generally at 200 .
- the inference processor circuit 200 includes a microprocessor 202 , a program memory 204 , a data storage memory 206 , and an input output port (I/O) 208 , all of which are in communication with the microprocessor 202 .
- Program codes for directing the microprocessor 202 to carry out various functions are stored in the program memory 204 , which may be implemented as a random access memory (RAM), flash memory, a hard disk drive (HDD), or a combination thereof.
- the program memory 204 includes storage for program codes that are executable by the microprocessor 202 to provide functionality for implementing the various elements of the system 100 .
- the program memory 204 includes storage for program codes 230 for directing the microprocessor 202 to perform operating system functions.
- the operating system may be any of a number of available operating systems including, but not limited to, Linux, macOS, Windows, Android, and JavaScript.
- the program memory 204 also includes storage for program codes 232 for implementing the regulatory content language model 102 , and codes 234 for implementing functions associated with the task specific processing block 108 .
- the I/O 208 provides an interface for receiving input via a keyboard 212 and a pointing device 214 .
- the I/O 208 also includes an interface for generating output on a display 216 and further includes an interface 218 for connecting the processor circuit 200 to a wide area network 220 , such as the internet.
- the data storage memory 206 may be implemented in RAM memory, flash memory, a hard drive, a solid state drive, or a combination thereof. Alternatively, or additionally the data storage memory 206 may be implemented at least in part as storage accessible via the interface 218 and wide area network 220 . In the embodiment shown, the data storage memory 206 provides storage 250 for regulatory content data 104 , storage 252 for the regulatory content language model configuration data, storage 254 for the task specific neural network configuration data, and storage 256 for storing results generated by the regulatory content processing block 108 .
- the inference processor circuit 200 is operable to implement the system 100 for processing regulatory content shown in FIG. 1 when configured with the applicable training and configuration data in storage locations 252 - 254 of the data storage memory 206 .
- the training may be performed on a conventional processor circuit such as the inference processor circuit 200 , or alternatively on a specifically configured training system such as a machine learning computing platform or cloud-based computing system, which may include one or more graphics processing units.
- An example of a training system is shown in FIG. 3 at 300 .
- the training system 300 includes a user interface 302 that may be accessed via an operator's terminal 304 .
- the operator's terminal 304 may be a processor circuit such as shown at 200 in FIG. 2 that has a connection to the wide area network 220 .
- the operator is able to access computational resources 306 and data storage resources 308 made available in the training system 300 via the user interface 302 .
- providers of cloud based neural network training systems 300 may make machine learning services 310 available, which provide a library of functions that may be implemented on the computational resources 306 for performing machine learning functions such as training.
- a neural network programming environment, TensorFlow™, is made available by Google Inc.
- TensorFlow provides a library of functions and neural network configurations that can be used to configure the above described neural network.
- the training system 300 also implements monitoring and management functions that monitor and manage performance of the computational resources 306 and the data storage 308 .
- the functions provided by the training system 300 may be implemented on a stand-alone computing platform configured to provide adequate computing resources for performing the training.
- the training of the neural networks for implementing the regulatory content language model 102 and the task specific processing block 108 is performed under supervision of an operator using the training system 300 .
- the training process may be unsupervised or only partly supervised by an operator.
- the operator will typically determine an appropriate neural network configuration for generating a desired task specific output.
- the operator then prepares a training data set, which is used in a training exercise to establish weights and biases for the neural network portions of the regulatory content language model 102 and task specific processing block 108 .
- the set of training data samples may have associated labels or annotations that indicate a ground truth output result for each sample.
- the set of training data may include unannotated training data samples.
- the training data set may include a combination of annotated and unannotated training data samples.
- the operator may make changes to the configuration of the neural network until a satisfactory accuracy and performance is achieved.
- the resulting neural network configuration and determined weights and biases may then be saved to the applicable locations 252 - 254 of the data storage memory 206 of the inference processor circuit 200 .
- the regulatory content language model 102 and task specific processing block 108 may be initially implemented, configured, and trained on the training system 300 , before being configured for regular use on the inference processor circuit 200 .
- a process for training the regulatory content language model 102 using the training system 300 is shown as a process flowchart at 400 .
- the process begins by configuring a generic language model on the training system 300 .
- the generic language model may be implemented using a pre-trained language model, such as Google's BERT (Bidirectional Encoder Representations from Transformers) or OpenAI's GPT-3 (Generative Pre-trained Transformer 3).
- Configuration of the generic language model in block 402 may involve accessing and configuring library functions within a neural network programming environment such as TensorFlow to implement a desired generic language model.
- the generic language model training corpus 404 is shown in broken outline in FIG. 4 , since in many cases a generic language model may be implemented in a form that has already been trained on an extensive training corpus.
- the generic language model may thus be invoked in an already trained configuration, which is capable of outputting the meaning of each word or portion of a word in context as the language embedding output 106 .
- the language embedding output 106 may be in the form of a language embedding vector, which includes a plurality of values that capture the contextual meaning of the word. Words of similar meaning will thus be represented by vectors that have similar, but not necessarily identical values.
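Similarity between such vectors is commonly measured with cosine similarity. The sketch below uses tiny hypothetical 4-value embeddings; as noted above, real language models use vectors of several hundred values:

```python
import math

def cosine_similarity(u, v):
    # Values near 1 indicate words used in similar contexts.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

permit_vec = [0.9, 0.1, 0.8, 0.2]   # hypothetical embedding of "permit"
licence_vec = [0.8, 0.2, 0.7, 0.3]  # hypothetical embedding of "licence"
tractor_vec = [0.1, 0.9, 0.0, 0.7]  # hypothetical embedding of "tractor"

print(cosine_similarity(permit_vec, licence_vec) >
      cosine_similarity(permit_vec, tractor_vec))  # True
```

Words of similar meaning, such as "permit" and "licence", end up markedly closer to each other than to unrelated words.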
- the regulatory content 104 may be separated into tokens before processing each token in context to generate the language embedding output 106 .
- a token is a sequence of characters grouped together as a useful semantic unit for processing.
- the word “sleeping” may be represented by a first token “sleep” and a second token “ing”.
- Tokenization may be implemented at a word level, sub-word level, and/or character level. In the remainder of this description, the term token will be used to refer to sequences of one or more characters that have been rendered from the original regulatory content. Tokenization is usually undertaken on the basis of a vocabulary file that provides a set of words that will be used for the tokenization of content.
- a tokenizer vocabulary file may not include the word “sleeping” but may include sub-words “sleep” and “ing”, in which case the tokens will be output as “sleep” and “##ing”.
- Words that cannot be split into sub-words are known as out-of-vocabulary (OOV) words and may be tokenized on a character-by-character basis, or otherwise handled.
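A greedy longest-match sub-word tokenizer in the style described above might look as follows. The vocabulary is a toy, and mapping OOV words to a single [UNK] token is just one of the possible handlings the text mentions:

```python
def wordpiece_tokenize(word, vocab):
    # Greedy longest-match-first sub-word tokenization: repeatedly take
    # the longest prefix found in the vocabulary; continuation pieces
    # carry the "##" marker.
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        piece = None
        while start < end:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]  # no sub-word split found for this word
        tokens.append(piece)
        start = end
    return tokens

vocab = {"sleep", "##ing", "walk", "##s"}
print(wordpiece_tokenize("sleeping", vocab))  # ['sleep', '##ing']
print(wordpiece_tokenize("xyz", vocab))       # ['[UNK]']
```

This reproduces the "sleep" / "##ing" split from the example above.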
- Regulatory content language models generally process content in context, which may further involve splitting groups of tokens or text into text sequences, which may be sentence based.
- Examples of the types of documents making up the generic language model training corpus 404 include documents from Wikipedia, scientific publications, books, etc.
- the language model may be trained to generate multilingual language embeddings. Generating a multilingual language model facilitates ease of use and maintenance of the system, since a single model would be capable of processing regulatory content in many different languages.
- separate language models can be implemented and trained for each language. This requires that there be sufficient labeled regulatory content training data for the intended language.
- the training corpus 404 used for training many language models comprises unlabeled text data, and the training process is essentially self-supervised by the language model.
- since the training corpus comprises words and sentences in context, techniques such as word masking and next sentence prediction may be employed by the generic language model to make the training process self-supervised, avoiding the laborious process of labeling the corpus.
- Generic language models may thus be trained for both word-level and sentence-level tasks, which are both applicable for the task specific processing performed by the task specific processing block 108 .
- requirement extraction and the identification of requirement descriptions within regulatory content are generally sentence-level tasks.
- detection of citations and smart fields are generally token-level tasks.
- Generic language models such as BERT, ALBERT, RoBERTa, and DistilBERT, which employ deep bidirectional transformer architectures, perform well in both sentence-level and token-level tasks.
- the generic language model is generally trained for processing generic language content that typically would be encountered in everyday situations. However, word distributions in regulatory text may differ from those in generic text.
- the process 400 for training the language model further includes a fine-tuning step, in which the generic language model of block 402 is refined using a regulatory content training corpus 408 to improve its performance in generating relevant word embedding outputs for regulatory content. In the training embodiment 400 , the difference between generic and regulatory word distributions is accounted for by performing a fine-tuning of the generic language model to generate the regulatory content language model 102 . Fine-tuning generally proceeds as described above for generic training, except that the learning rate is reduced so that the effect of the pre-training of the generic language model is not significantly changed.
- fine-tuning involves small adjustments to the parameters of the pre-trained language model to generate a regulatory content language model 102 that is optimized for performance on regulatory content, without significantly altering the performance of the language model on generic content.
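The effect of a reduced learning rate can be seen in a single gradient-descent step on hypothetical parameter values: the same update rule moves the pre-trained weights far less when the rate is small, which is what keeps the adjustments "small" during fine-tuning:

```python
def sgd_step(params, grads, learning_rate):
    # One plain gradient-descent update; fine-tuning reuses the same
    # rule with a much smaller learning rate. All values are illustrative.
    return [p - learning_rate * g for p, g in zip(params, grads)]

pretrained = [0.50, -0.25, 0.75]
grads = [1.0, -2.0, 0.5]

pretrain_step = sgd_step(pretrained, grads, learning_rate=1e-2)
finetune_step = sgd_step(pretrained, grads, learning_rate=1e-4)

# Maximum parameter movement under each learning rate.
drift_pretrain = max(abs(a - b) for a, b in zip(pretrained, pretrain_step))
drift_finetune = max(abs(a - b) for a, b in zip(pretrained, finetune_step))
print(drift_pretrain, drift_finetune)
```

With the rate 100 times smaller, the fine-tuned parameters stay 100 times closer to their pre-trained values.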
- the fine-tuning is performed using a regulatory content training corpus 408 , which includes a relatively large number of regulatory content documents.
- regulatory content may fall into any of a number of classes, such as regulations, permits, plans, bylaws, standards, etc.
- the regulatory content training corpus 408 may be limited to one of these categories and the fine-tuning performed at block 406 may be based on a corpus of different documents in the same category.
- the regulatory content training corpus 408 may include documents in different categories to produce a broader based regulatory content language model 102 .
- the regulatory content training corpus 408 may also include multi-lingual documents, such that the regulatory content language model 102 is trained to generate embedding outputs for regulatory content in different languages.
- the regulatory content training corpus 408 comprises unlabeled or unannotated regulatory content text data, which has the advantage of avoiding the burden of preparing a labeled training corpus.
- the fine-tuning performed at block 406 proceeds on the same basis as the self-supervised pre-training of the generic language model at block 402 , including masking of words and/or next sentence prediction etc.
- the fine-tuning process has the advantage of refining the language model to improve performance on text data from the regulatory content domain.
- the training input would be regulatory content having some words replaced by a special token [MASK].
- the training input is first fed through a tokenizer, which separates the training content into tokens.
- the tokenized training input is then provided to a BERT model configured with a language modeling output layer.
- the same training content, in which the masked words still appear in their original form, is also tokenized and provided as a labeling input to the model for the purposes of training.
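By way of illustration, the preparation of masked training inputs and their labeling counterparts described above may be sketched as follows (a simplified example in which the masking rate, seed, and sample sentence are illustrative only; an actual implementation would operate on tokenizer output rather than whitespace-split words):

```python
import random

MASK = "[MASK]"

def make_mlm_example(tokens, mask_prob=0.15, seed=1):
    """Return (masked_input, labels) for masked language model training.

    labels keep the original token at masked positions and None elsewhere,
    so the training loss is only computed where a token was hidden.
    """
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK)
            labels.append(tok)   # ground truth for the prediction
        else:
            masked.append(tok)
            labels.append(None)  # position ignored by the loss
    return masked, labels

tokens = "the permit holder shall submit an annual report".split()
masked, labels = make_mlm_example(tokens)
```

The masked sequence is the training input, while the unmasked sequence supplies the labels, consistent with the self-supervised scheme described above.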
- configuration data of the fine-tuned language model is output and saved in the regulatory content language model 102 configuration data storage locations 252 , in the data storage memory 206 of the inference processor circuit 200 shown in FIG. 2 .
- the fine-tuned regulatory content language model 102 is thus capable of providing regulatory content relevant token embedding outputs 106 that may be used in a variety of regulatory content processing tasks.
- the task specific processing block 108 of the system 100 receives the language embedding output 106 from the regulatory content language model 102 and generates task specific results.
- the task specific processing block 108 may be configured as a feature extraction network that is separately trained to output the task specific results 110 based on the language embedding output 106 of the regulatory content language model 102 .
- the task specific processing block 108 may be trained using training data in which regulatory content inputs have an associated label indicating a ground truth task result assigned by an operator. In this case, the regulatory content language model 102 has its parameters frozen, and the task specific processing block 108 is separately trained to generate the task specific results 110 .
- the regulatory content language model 102 may be trained in conjunction with the task specific processing block 108 to generate the task specific results 110 , in which case at least some parameters of the regulatory content language model 102 remain unfrozen and are subject to change.
- A block diagram of this alternative training configuration that may be implemented on the training system 300 is shown in FIG. 5 generally at 500 .
- the language model 102 is configured and fine-tuned generally as described above in connection with FIG. 4 .
- the regulatory content language model 102 is thus trained to generate a language embedding output 502 for a regulatory content input received by the language model.
- the training configuration 500 further includes one or more task specific neural network layers 504 , which are configured to receive the language embedding output 502 and generate a task specific result 506 .
- the task specific result 506 is a classification output having n possible categories, c 1 -cn.
- the task specific neural network layers 504 are configured to output probabilities pc1 through pcn, each indicating a likelihood that the input to the regulatory content language model 102 falls within the respective category c 1 -cn.
- a final output layer in the output layers 504 may be configured as a softmax layer, which causes the probabilities pc1 through pcn to add up to 1.00.
- the training configuration 500 also includes a training block 508 , which implements functions on the training system 300 for training the one or more task specific neural network layers 504 and adapting the regulatory content language model 102 to generate the task specific result 506 .
- a task specific training data set 510 is fed into the regulatory content language model 102 .
- the training block 508 includes functions for evaluating the task specific result 506 via a loss function.
- the training block 508 also includes functions for back-propagating errors in the result to modify parameters of the regulatory content language model 102 and task specific neural network layers 504 .
- An optimization function is generally used for modifying the weights and biases of each neuron in the neural network.
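The parameter update applied by the optimization function may be illustrated with a plain gradient descent step (a minimal sketch; the learning rate value is illustrative only, and practical implementations typically use more elaborate optimizers such as Adam):

```python
def gradient_descent_step(weights, biases, grad_w, grad_b, learning_rate=1e-5):
    """Apply one optimizer update: move each parameter against its gradient.

    A small learning_rate limits the change magnitude so that pre-trained
    parameters are only perturbed slightly, as described for fine-tuning.
    """
    new_w = [w - learning_rate * g for w, g in zip(weights, grad_w)]
    new_b = [b - learning_rate * g for b, g in zip(biases, grad_b)]
    return new_w, new_b
```

Each training iteration back-propagates the loss to obtain grad_w and grad_b, then applies such an update to every neuron's weights and biases.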
- the regulatory content language model 102 generally has a final layer that outputs the values of the language embedding vector for each word or token.
- the task specific neural network layers 504 are configured depending on the task to be performed.
- the task specific neural network layers 504 may include a linear layer that is fully connected to receive the language embedding vector from the regulatory content language model 102 . This linear layer may be followed by a classification layer, such as a softmax layer, that generates the task specific result 506 .
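A minimal sketch of such a task specific head, using small illustrative dimensions rather than the actual embedding size of the language model, shows the fully connected layer followed by a softmax that normalizes the class probabilities to sum to 1.00:

```python
import math

def linear(embedding, weights, biases):
    """Fully connected layer: one logit per output class."""
    return [sum(w * x for w, x in zip(row, embedding)) + b
            for row, b in zip(weights, biases)]

def softmax(logits):
    """Convert logits to class probabilities that sum to 1.0."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

In practice the embedding vector comes from the language model's final hidden layer, and the weights of the linear layer are learned during task specific training.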
- the parameters of the regulatory content language model 102 are not frozen, which allows the parameters of the language model to be refined using the optimization function for producing the task specific result 506 .
- the optimization function generally includes a learning rate that controls a magnitude of the change to each of the weights and biases of the neurons during each training iteration.
- the learning rate is usually set at a low level that limits the change magnitude such that the effect of the pre-training and fine-tuning (block 406 FIG. 4 ) of the regulatory content language model 102 is not lost.
- the task specific training thus includes further adaptation of the regulatory content language model 102 to better generate the task specific result 506 .
- the process of adaptation of the regulatory content language model 102 is in effect a second fine-tuning of the regulatory content language model 102 , based this time on task specific training data.
- the regulatory content language model 102 may be trained as a multilingual language model in which regulatory content in many different languages can be processed to generate the language embedding output 502 .
- Many current language model implementations do not even require identification of the language of the input, which reduces system complexity and avoids prediction errors due to incorrect language identification.
- the language model may thus intentionally not be informed of the language of the regulatory content so that pre-trained embeddings cannot be explicitly language specific.
- the multilingual nature of the language embedding output 502 may be further reinforced by providing a multilingual regulatory content training corpus 408 for adaptation of the language model 102 , as disclosed above in connection with FIG. 4 .
- the training data set 510 includes labeled training samples for at least one language.
- the training data set 510 may thus include labeled training samples for only a single language, or in some cases a few selected languages.
- the reduced labeling requirement significantly reduces the time and effort needed to prepare the training set.
- the training data set 510 may include labeled training samples in the English language, which are used to train the task specific neural network layers 504 for generation of the task specific result 506 .
- the training configuration 500 employs zero-shot transfer learning to produce the task specific result 506 for other unseen languages based on the training on English regulatory content.
- Task specific training regulatory content may have many more representative articles in one language (for example, in English) than other languages. Sampling may be used to balance the number of articles based on language frequency. Tokenization is usually performed on the basis of a vocabulary list, which may be based on the frequency of occurrence of the token in each regulatory content language.
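One simple balancing scheme is to down-sample each language to the size of the least represented language (a sketch; the down-sampling target and function names are illustrative assumptions, and many other balancing schemes are possible):

```python
import random
from collections import defaultdict

def balance_by_language(articles, seed=0):
    """Down-sample so every language contributes the same number of articles.

    articles: list of (language, text) pairs.
    """
    by_lang = defaultdict(list)
    for lang, text in articles:
        by_lang[lang].append((lang, text))
    target = min(len(v) for v in by_lang.values())  # least represented language
    rng = random.Random(seed)
    balanced = []
    for lang in sorted(by_lang):
        balanced.extend(rng.sample(by_lang[lang], target))
    return balanced
```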
- the task specific training data set 510 may further include unlabeled samples from other languages in addition to the labeled samples from the selected training language or languages.
- the inclusion of regulatory content in other languages may improve the effectiveness of the transfer learning to a level that approaches the performance of language specific trained systems.
- the pre-trained language model 102 has already learned an alignment between the vocabularies of the different languages, and thus naturally integrates cross-language alignment.
- the zero-shot transfer training of the one or more task specific neural network layers 504 thus extends functionality trained on monolingual labeled regulatory content to generating the task specific result 506 for regulatory content in other languages. In many cases performance for other languages approaches the performance of language-specific models trained using labeled language-specific regulatory content data.
- the regulatory content language model 102 will have a set of previously established parameters based on its generic pre-training and subsequent fine-tuning (block 406 , FIG. 4 ), as described above. These previously established parameters will thus be modified during the task specific training using the training configuration 500 based on the task specific training data set 510 .
- the learning rate implemented by the optimization function is generally set to a low value, such that the previously established parameters are only perturbed by a small amount and such that the trained functionality of the regulatory content language model 102 is not compromised.
- the learning rate for the task specific neural network layers 504 may be set at a higher rate than the learning rate for adaptation of the regulatory content language model 102 .
- layer specific learning rates may be implemented or some layers of the regulatory content language model 102 may be frozen (essentially a zero learning rate).
- the pre-training may be considered adequate for all layers preceding the final hidden layers, and these preceding layers may be frozen during training.
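Layer-specific learning rates, including frozen layers, may be expressed as per-layer parameter groups (a sketch in which the layer names and rate values are illustrative assumptions; deep learning frameworks typically accept such groups directly in their optimizers):

```python
def build_param_groups(layer_names, n_frozen, lm_lr=2e-5, head_lr=1e-3):
    """Assign a learning rate per layer: 0.0 freezes the earliest layers,
    a low rate gently adapts the remaining language model layers, and a
    higher rate trains the task specific head from scratch.
    """
    groups = []
    for i, name in enumerate(layer_names):
        if name.startswith("head"):
            lr = head_lr          # task specific layers learn faster
        elif i < n_frozen:
            lr = 0.0              # frozen: essentially a zero learning rate
        else:
            lr = lm_lr            # small perturbation of pre-trained layers
        groups.append({"layer": name, "lr": lr})
    return groups
```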
- the training configuration 500 shown in FIG. 5 may be implemented to configure the training system 300 or other processor to perform any one of a number of tasks, as described in more detail following.
- the training configuration 500 may be used to train a system for performing the task of citation detection within regulatory content.
- a citation is a reference to one or more requirements or conditions within the text of the regulatory content.
- Regulatory content often includes explicit alphanumeric citations, either in the form of numeric or other characters that indicate a sequence or in the form of alphabetical text characters.
- Referring to FIG. 6A , a citation identification system for identifying citations within regulatory content is shown generally at 600 .
- the citation identification system 600 includes a tokenizer 602 , the regulatory content language model 102 , and a citation classifier 604 .
- the citation identification system 600 is trained to perform named entity recognition (NER) on each sentence in the document.
- the citation classifier block 604 may include a linear layer configured for performing token classification based on the language embedding generated by the regulatory content language model 102 .
- An example of a portion of a regulatory content input is shown at 606 .
- the regulatory content input portion 606 includes a heading 608 , which includes citation numbers and a portion of a sentence 610 , which does not include citations.
- the citation classifier 604 is configured to generate a citation classification output 612 .
- the tokenizer 602 separates the regulatory content input 606 into tokens (i.e. words and sub-words).
- the regulatory content language model 102 generates a language embedding output for each token as described above.
- the citation classifier 604 then receives the language embedding output and generates the citation classification output 612 .
- the citation classification output 612 includes probabilities associated with five target classes.
- Citations in regulatory content may include text that implies a sequence (herein referred to as “citation numbers”) and/or text that acts as a heading or title for a requirement (herein referred to as “citation titles”). The location of citation number and citation title within a phrase may also be significant.
- the citation classification output 612 includes the classes listed in the table below.
- B-CI_NUM: citation identifier number at the beginning of a phrase (FIG. 6A example: "Section")
- I-CI_NUM: continuation of a citation identifier number within a phrase ("A.")
- B-CI_TXT: citation identifier title at the beginning of a phrase ("OPERATING")
- I-CI_TXT: continuation of a citation identifier title inside a phrase ("&", "MAINTENANCE", "REQUIREMENTS")
- O: regulatory content body text not including citation identifiers ("The", "owner", etc.)
- the “A.” includes sub-words “A”, and “.”.
- “REQUIREMENTS” includes sub-words “REQUIRE” and “##MENTS”.
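The sub-word separation may be illustrated with a greedy longest-match split in the WordPiece style (a simplified sketch; the toy vocabulary here is hypothetical, as a real tokenizer ships with its own vocabulary list):

```python
def wordpiece(word, vocab):
    """Greedy longest-match sub-word split in the WordPiece style:
    continuation pieces are prefixed with '##'."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece
            if piece in vocab:
                pieces.append(piece)
                start = end
                break
            end -= 1
        else:
            return ["[UNK]"]  # no vocabulary piece matched
    return pieces

vocab = {"require", "##ments", "a", "##."}
```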
- Prediction of the citation classification 612 is performed on a token basis. Post processing based on heuristics may be implemented to confirm or correct assigned labels. For example, if a word between two words labeled as I-CI_TXT is initially assigned an “O” label (body text), then the word label is changed to I-CI_TXT for consistency.
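The consistency heuristic described above may be sketched as follows (a minimal example implementing only the isolated-"O" rule; a practical post-processor may apply additional heuristics):

```python
def smooth_labels(labels):
    """Correct an isolated 'O' label sandwiched between two I-CI_TXT labels,
    changing it to I-CI_TXT for consistency."""
    fixed = list(labels)
    for i in range(1, len(fixed) - 1):
        if (fixed[i] == "O"
                and fixed[i - 1] == "I-CI_TXT"
                and fixed[i + 1] == "I-CI_TXT"):
            fixed[i] = "I-CI_TXT"
    return fixed
```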
- the citation identification system 600 may be trained on a training data set 510 as described above in connection with FIG. 5 .
- the training data set 510 includes sentences in which each token has been assigned a class corresponding to the classes of the citation classification output 612 .
- the training data set 510 may include labeled samples for only one language, or at most for a few languages. Labeling regulatory content in other languages is labor intensive. However, if there is sufficient labeled regulatory content for one language, the trained citation identification system 600 can be effectively used to identify citations for other unlabeled languages.
- the parameters of the regulatory content language model 102 and the citation classifier 604 may be stored in the storage locations 252 and 254 of the data storage memory 206 of the inference processor circuit 200 ( FIG. 2 ).
- the inference processor circuit 200 will thus be configured to process regulatory content stored in the data storage location 250 of the data storage memory 206 and to generate task specific regulatory content processing results, which may be stored in the storage location 256 of the data storage memory 206 and/or displayed on the display 216 .
- the citation identification system 600 thus outputs a classification for each token in the regulatory content input 606 .
- the citation classification output 612 may be further processed to generate a hierarchy of citations, which is useful in evaluating the requirements associated with the citations.
- a hierarchical tree of citation nodes is constructed by considering parent/child relationships between different citations. By establishing hierarchical levels for citation nodes in the tree, a determination can be made as to whether two consecutive citation nodes have a sibling relationship (i.e. the same level within the tree and the same format) or have a parent-child relationship (i.e. a different level within the tree and a different format).
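A simplified sketch of constructing such a hierarchical tree, assuming each citation has already been assigned a hierarchical level (the level assignment itself is the subject of the referenced relationship classifier), attaches each citation to the nearest preceding citation at the level above:

```python
def build_citation_tree(citations):
    """Build parent/child links from a flat, in-order list of
    (citation_id, level) pairs: equal levels are siblings, and a
    citation's parent is the nearest preceding higher-level citation.
    """
    root = {"id": None, "level": -1, "children": []}
    stack = [root]
    for cid, level in citations:
        node = {"id": cid, "level": level, "children": []}
        while stack[-1]["level"] >= level:
            stack.pop()                       # close deeper/equal branches
        stack[-1]["children"].append(node)    # attach to current parent
        stack.append(node)
    return root
```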
- a hierarchical relationship classifier is described in detail in commonly owned United States patent application Ser. No. 17/017,406, entitled METHOD AND SYSTEM FOR IDENTIFYING CITATIONS WITHIN REGULATORY CONTENT, filed on Sep. 10, 2020, and incorporated herein by reference in its entirety.
- the training configuration 500 may be used to train a relationship classifier system 620 .
- the relationship classifier system 620 includes the pre-trained and fine-tuned regulatory content language model 102 and a sibling classifier 622 .
- the sibling classifier 622 includes one or more neural network layers configured to generate a classification output 626 indicating a probability that the input pair of citations have a sibling relationship (i.e. “sibling” or “not_sibling”).
- the regulatory content language model 102 receives an input 624 including pairwise combinations of citations.
- the relationship classifier system 620 may be trained using a plurality of pairs of citations that are labeled as either “sibling” or “not_sibling”, which provides the labeled task specific training data set 510 shown in the training configuration 500 of FIG. 5 .
- the training data set 510 may include labeled citation samples for only one language, or at most for a few languages.
- the labeled pairs of citations may be used to further adapt (or fine-tune) the regulatory content language model 102 and to train the sibling classifier 622 to generate the classification output 626 .
- the relationship classifier system 620 thus generates a classification for each citation identified by the citation identification system 600 of FIG. 6A .
- sibling classifier 622 may be configured as a parent classifier, which is configured to generate a classification of citations as being “parent citations” or “not-parent citations”.
- the requirement extraction system 700 includes the pre-trained and fine-tuned regulatory content language model 102 that receives sentences of regulatory content 706 as an input.
- the input in this embodiment thus differs in some respects from the tokenized input in FIG. 6A , since this input includes sequences of tokens corresponding to sentences or text sequences.
- a special token [CLS] is used to denote the start of each sequence and a special [SEP] token is used to indicate separation between sentences or text sequences.
- a maximum number of 512 tokens can be input and processed simultaneously.
- a final hidden state h of the first special token [CLS] is generally taken as the overall representation of the input sequence.
- the language embedding output 502 of the regulatory content language model 102 would be a vector W of 768 parameter values associated with the final hidden layer h for each token in the input sequence.
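The assembly of the special-token input sequence and the 512-token limit may be sketched as follows (a simplified example operating on pre-tokenized sentences; the final hidden state of the [CLS] position is then taken as the sequence representation, as described above):

```python
def build_input(sentences, max_tokens=512):
    """Assemble a BERT-style input: [CLS] first, then each sentence
    followed by a [SEP] separator, truncated to the model's maximum
    input length."""
    tokens = ["[CLS]"]
    for sent in sentences:
        tokens.extend(sent)
        tokens.append("[SEP]")
    return tokens[:max_tokens]
```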
- the requirement extraction system 700 further includes a requirement classifier 702 , which is configured to generate a classification output 704 based on the output of the language model 102 .
- Regulatory content generally includes a plurality of requirements, some of which may be optional or site specific requirements.
- the classification output 704 of the requirement classifier 702 has three probability classes, REQ, ORR, and DSC.
- REQ output represents a probability that the sentence includes a requirement, which is taken to mean the requirement is not optional or site specific.
- the ORR output represents a probability that the sentence includes a requirement that is either optional or a recommendation.
- some actions may be conducted by the regulated entity as an option or alternative to another requirement or some recommended actions may be desirable but not mandatory.
- the DSC output represents a probability that the sentence includes descriptive language related to a requirement but is not itself a requirement.
- a set of sentences that are labeled as REQ, ORR, or DSC are input as the labeled task specific training data set 510 .
- the labeled sentences may be confined to a single language.
- the parameters of the regulatory content language model 102 are then adapted based on evaluating a loss function for the classification output 704 , and back-propagating errors to the weights W of the layer h and other layers of the regulatory content language model 102 .
- the requirement classifier 702 is configured as a softmax classifier, which receives the regulatory content language model 102 output and generates classification output probabilities 704 that add up to 1.00.
- the configuration and parameters of the regulatory content language model 102 and the requirement classifier 702 may be stored in the storage locations 252 and 254 of the data storage memory 206 of the inference processor circuit 200 ( FIG. 2 ).
- the inference processor circuit 200 will thus be configured to process regulatory content stored in the data storage location 250 of the data storage memory 206 to identify requirements.
- a requirement extracted by the requirement extraction system 700 may be followed by one or more subsidiary extracted requirements. Extracted requirements may thus have a “parent-child” relationship and in some cases, several child requirements may stem from a common parent requirement. Similarly, a child requirement may itself have one or more child requirements, for which the child requirement then acts as a parent. Identifying these parent/child relationships between extracted requirements is useful, since the wording of the parent requirement and each of the child requirements may be combined to form a complete requirement description. The complete requirement description would necessarily include the text of the parent requirement together with the text of the child requirement. The complete requirement description is thus a concatenation of parent and child requirement texts.
- the system 800 includes the regulatory content language model 102 , which in this embodiment receives pairs of extracted requirements 802 as an input. Each pair of extracted requirements 802 are identified as being separated, for example by using the [SEP] token for a BERT implementation of the regulatory content language model 102 .
- the system 800 further includes a requirement conjunction classifier 804 , which is configured to generate a classification output 806 based on the output of the language model 102 .
- the requirement conjunction classifier 804 may be implemented by adapting aspects of textual entailment processing, which are performed to identify whether a sentence and a hypothesis represent an entailment, a contradiction, or are neutral.
- the requirement conjunction classifier 804 generates a classification output having three probability classes.
- the first probability class, not_conjunction represents a probability that the pair of extracted requirements 802 do not share a parent-child relationship.
- the second probability class, conjunction_single represents a probability that the pair of extracted requirements 802 have a parent-child relationship, with the child requirement having a single requirement.
- the third probability class, conjunction_multiple represents a probability that the pair of extracted requirements 802 have a parent-child relationship, with the child requirement having multiple separate requirements.
- the requirement conjunction classifier system 800 may be trained by generating a labeled task specific training data set 510 including a plurality of pairs of separated requirements, each pair having an assigned label indicating that the pair falls into one of the not_conjunction, conjunction_single, or conjunction_multiple classes. The system 800 may then be trained as described above in connection with FIG. 5 using the task specific data set. The classification output 806 may be further post-processed to generate the final requirement description.
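The labeled pair samples may be assembled in the same separated-sequence style (a sketch; the exact encoding layout and class ordering are assumptions of this example):

```python
CLASSES = ("not_conjunction", "conjunction_single", "conjunction_multiple")

def encode_pair(parent_tokens, child_tokens, label):
    """Pack a (parent, child) requirement pair into one separated input
    sequence, paired with its class index for supervised training."""
    assert label in CLASSES
    tokens = ["[CLS]", *parent_tokens, "[SEP]", *child_tokens, "[SEP]"]
    return tokens, CLASSES.index(label)
```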
- text fields within the regulatory content may have a common connotation between different documents, and these fields can be identified as smart fields.
- An example of smart fields within extracted requirements are various “requirement types”, which may be assigned to smart field subcategories such as equipment standard, testing and procedure, inspection, notification, record keeping, reporting, and operation standard.
- Another example would be “frequency”, related to a timing frequency at which an action must be repeated, such as annual, semi-annual, event-driven, ongoing, or specific date.
- Other smart fields such as an “equipment type” or “equipment identifier” may also be identified.
- a smart field identification system is shown generally at 900 .
- the smart field identification system 900 includes the regulatory content language model 102 and a tokenizer 902 .
- the tokenizer 902 receives an input of regulatory content 904 and separates the content into tokens, which are passed through the regulatory content language model 102 to generate a language embedding output. In this embodiment the smart-fields are thus generated for separated tokens.
- the regulatory content language model 102 outputs a language embedding vector 906 for each token received from the tokenizer 902 .
- Each language embedding output 906 of the regulatory content language model 102 is then fed through one or more neural network layers 908 that are configured to act as a smart field classifier.
- the language embedding outputs 906 for each token may thus be fed through the same fully connected layers to generate a classification output 908 , which includes a plurality of classes corresponding to smart fields that are to be identified.
- the smart field classifications include regulatory content specific smart field classifications, such as equipment specific smart fields (equipment_standard, testing, inspection), time specific smart fields (annual, semi-annual), and other smart fields.
- the smart field identification system 900 may be trained on a training data set 510 as described above in connection with FIG. 5 .
- the training data set 510 may include already tokenized words that have been assigned an associated smart field classification.
- the labeled training data may be directly input into the regulatory content language model 102 , which is adapted to generate the classification output 908 based on the training data.
- the training data set 510 may include labeled samples for only one language, or at most for a few languages.
- the embodiments of the inference systems shown in FIGS. 6-9 are each implemented using the same general training approach shown in FIG. 5 or FIG. 1 .
- the parameters may be loaded into the data storage memory 206 of the inference processor circuit 200 for use in processing actual regulatory content.
- the pre-trained and fine-tuned regulatory content language model 102 is used to generate the language embeddings.
- the regulatory content language model 102 is further adapted to generate the task specific result.
- the output of the regulatory content language model 102 may be frozen and the task-specific neural network may be trained for generating the result.
- the embodiments shown have the advantage of being specifically tailored to operate on regulatory content rather than generic language and further trained to generate the specific result.
- utilizing the pre-trained and fine-tuned regulatory content language model facilitates multi-lingual operation without requiring separate training for each language. This has the advantage of reducing the preparation time for labeled regulatory content training data.
Abstract
Description
- This disclosure relates generally to performing computer implemented language processing tasks on regulatory content.
- Governments at all levels generate documents setting out requirements and/or conditions that should be followed for compliance with the applicable rules and regulations. For example, Governments implement regulations, permits, plans, court ordered decrees, and bylaws to regulate commercial, industrial, and other activities considered to be in the public's interest. Standards bodies, companies, and other organizations may also generate documents setting out conditions for product and process compliance. These documents may be broadly referred to as “regulatory content”.
- Modern enterprises thus operate under an increasing burden of regulation, which has proliferated exponentially in an attempt by regulatory agencies and other governmental bodies to mitigate potential and actual dangers to the public. Documents setting out regulatory content may vary in size, from one page to several hundred pages. As a result, compliance with regulatory content has become increasingly difficult for enterprises. There remains a need for methods and systems that reduce the burden for enterprises in establishing which regulations and conditions in a body of regulatory content are applicable to their operations.
- In accordance with one disclosed aspect there is provided a method for training a computer implemented neural network system for performing a processing task on regulatory content. The method involves configuring a neural network language model capable of generating a language embedding output in response to receiving content. The method further involves fine-tuning the language model using regulatory content training data to generate a regulatory content language embedding output for regulatory content processed by the language model. The method also involves configuring at least one task specific output layer to generate task specific results in response to receiving the regulatory content language embedding output from the language model, and training the neural network system using task specific training data to output the task specific results, at least a portion of the task specific training data having been labeled prior to configuring the task specific neural network.
- Configuring the language model may involve configuring a pre-trained neural network language model for generation of the language embedding output, the pre-trained neural network language model including a plurality of layers of neurons, each neuron having an associated weight and bias, the weights and biases having been determined during training of the language model.
- Fine-tuning the language model may involve one of modifying weights and biases of the neurons of the language model based on the regulatory content training data, freezing weights and biases of at least some of the layers of neurons while modifying weights and biases of other layers of neurons based on the regulatory content training data, or adding at least one additional layer of neurons to the language model and determining weights and biases of the at least one additional layer based on the regulatory content training data.
- The regulatory content training data may include a plurality of documents including regulatory text.
- The regulatory text in the plurality of documents may include unlabeled regulatory text.
- The plurality of documents may include regulatory text in a plurality of different languages.
- The plurality of documents including regulatory text may be pre-processed to generate the regulatory content training data by masking at least some words within sentences of the regulatory text and fine-tuning may involve configuring the neural network language model to generate a prediction for the masked words based on context provided by un-masked words in the sentence and updating the neural network language model based on a comparison between the generated prediction and the masked word.
- The regulatory content training data may involve pairs of sentences extracted from regulatory text associated with the plurality of documents and fine-tuning may involve configuring the neural network language model to generate a prediction as to whether the second sentence in the sentence pair follows the first sentence in the document and updating the neural network language model based on whether the generated prediction is correct.
- The regulatory content language embedding output may include a plurality of vectors, each vector including a plurality of values representing a context for each word in the regulatory content.
- Configuring the at least one task specific output layer may involve configuring a classification layer operable to generate a classification output for the regulatory content.
- Training the neural network system to generate the classification output may involve a further fine-tuning of the language model based on the task specific training data.
- The classification output may be associated with one of an identification of a plurality of text fields within the regulatory content that have a common connotation between different documents, an identification of requirements or conditions within the regulatory content, or an identification of citations within the regulatory content, each citation being associated with one or more requirements or conditions within the regulatory content.
- Configuring the at least one task specific output layer may involve configuring a classification output layer to generate a classification identifying text as a citation sequence, a classification identifying text as a citation title, and a classification identifying text as not being associated with a citation, and the neural network system may be trained using training data including samples labeled as corresponding to a citation sequence, samples labeled as corresponding to a citation title, and samples not associated with a citation.
- Configuring the at least one task specific output layer may involve configuring a sibling classifier output layer to generate a classification identifying citations as being one of a sibling citation or not a sibling citation, the neural network system being trained using training data including pairs of samples including samples labeled as having a sibling relationship and samples labeled as not having a sibling relationship.
- Configuring the at least one task specific output layer may involve configuring a parent classifier output layer to generate a classification identifying citations as being one of a parent citation or not a parent citation, the neural network system being trained using training data including pairs of samples including samples labeled as having a parent relationship and samples labeled as not having a parent relationship.
- Configuring the at least one task specific output layer may involve configuring a requirement classification output layer to generate a classification identifying text as corresponding to a requirement, a classification identifying text as corresponding to an optional or site-specific requirement, and a classification identifying text as including descriptive language that is related to a requirement but is not itself a requirement, and the neural network system may be trained using training data including text sequences that are labeled as requirements, labeled as optional or site-specific requirements, and labeled as descriptive text.
- Configuring the at least one task specific output layer may involve configuring a requirement conjunction classifier output layer to generate a classification identifying a requirement as not being a conjunction, a classification identifying a requirement as being a conjunction between a parent requirement and a single child requirement, and a classification identifying a requirement as being a conjunction between a parent requirement and multiple child requirements, and the neural network system may be trained using training data including a plurality of pairs of separated requirements, each pair having an assigned label indicating whether the pair is not a conjunction, a single child requirement conjunction, or a multiple child requirement conjunction.
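By way of a non-limiting illustration, the pairwise training data for such a conjunction classifier could take the following shape. The label values and the example requirement texts are hypothetical, chosen only to show the three classes described above:

```python
# Hypothetical labels for the three conjunction classes.
NOT_CONJUNCTION = 0
SINGLE_CHILD = 1
MULTIPLE_CHILD = 2

# Illustrative training pairs: (first_requirement, second_requirement, label).
training_pairs = [
    ("Records shall be kept on site.",
     "Samples shall be refrigerated.",
     NOT_CONJUNCTION),
    ("The operator shall ensure that:",
     "all valves are inspected monthly.",
     SINGLE_CHILD),
    ("The plan must include each of the following:",
     "a spill response procedure;",
     MULTIPLE_CHILD),
]
```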
- Configuring the at least one task specific output layer may involve configuring a smart field classifier output layer to generate a plurality of classifications identifying text fields within the regulatory content having a common connotation and the neural network system may be trained using training data including labeled samples corresponding to each of the plurality of classifications.
- The task specific training data for training the task specific neural network may include a portion of unlabeled training data.
- The portion of labeled task specific training data may involve regulatory text associated with a first language and the portion of unlabeled training data may include regulatory text associated with a language other than the first language.
- In accordance with another disclosed aspect there is provided a system for performing a processing task on regulatory content. The system includes a processor circuit and codes for directing the processor circuit to implement a regulatory content language model capable of generating a language embedding output in response to receiving regulatory content, the regulatory content language model having been fine-tuned using regulatory content training data to generate a regulatory content language embedding output for regulatory content. The system also includes codes for directing the processor circuit to implement at least one task specific output layer to generate task specific results in response to receiving the regulatory content language embedding output from the language model, the neural network system having been trained using task specific training data to output the task specific results, at least a portion of the task specific training data having been labeled prior to configuring the task specific neural network.
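By way of a non-limiting illustration, the task specific output layer sitting on top of the language model may be as simple as a single linear layer producing one logit per class. The sketch below is a minimal forward pass under that assumption; the embedding would come from the regulatory content language model, and the weights and biases would be learned from the labeled task specific training data:

```python
def linear_head(embedding, weights, biases):
    """Task specific output layer: logits[i] = weights[i] . embedding + biases[i].

    `embedding` is the language embedding vector for a token or sentence;
    `weights` is one row per output class.
    """
    return [sum(w * e for w, e in zip(row, embedding)) + b
            for row, b in zip(weights, biases)]
```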
- Other aspects and features will become apparent to those ordinarily skilled in the art upon review of the following description of specific disclosed embodiments in conjunction with the accompanying figures.
- In drawings which illustrate disclosed embodiments,
-
FIG. 1 is a block diagram of a computer implemented system for performing a processing task on regulatory content according to a first disclosed embodiment; -
FIG. 2 is a block diagram of an inference processor circuit for implementing the system shown in FIG. 1; -
FIG. 3 is a block diagram of a training system for training the system shown in FIG. 1; -
FIG. 4 is a process flowchart of a process for training a regulatory content language model of the system shown in FIG. 1; -
FIG. 5 is a block diagram of a configuration for training a regulatory content processing system; -
FIG. 6A is a block diagram of a citation identification system embodiment, which may be implemented on the inference processor circuit of FIG. 2; -
FIG. 6B is a block diagram of a relationship classifier system used in conjunction with the citation identification system embodiment shown in FIG. 6A, which may be implemented on the inference processor circuit of FIG. 2; -
FIG. 7 is a block diagram of a requirement extraction system embodiment, which may be implemented on the inference processor circuit of FIG. 2; -
FIG. 8 is a block diagram of a conjunction classifier system embodiment, which may be implemented on the inference processor circuit of FIG. 2; and -
FIG. 9 is a block diagram of a smart field identification system, which may be implemented on the inference processor circuit of FIG. 2. - Referring to
FIG. 1, a system for performing a processing task on regulatory content according to a first disclosed embodiment is shown generally at 100. The system 100 includes a regulatory content language model 102 that receives an input of regulatory content data 104 and generates a language embedding output 106 representing the semantic and syntactic meaning of words in the regulatory content. The regulatory content 104 may be received in any of a variety of text data formats, where words and characters in the text are encoded into a digital data format representing the text of the regulatory content. In other embodiments regulatory content may be received as image data, where the text is represented by pixels rather than digital text. In this case the regulatory content image data would be pre-processed to extract the text in a digital data format to generate the regulatory content 104. - The
language embedding output 106 of the regulatory content language model 102 may be in the form of a set of values that define the semantic and syntactic meaning of each word in the regulatory content. In some language model implementations, the meaning of each word may be expressed as a vector having a plurality of values (typically several hundred values). The language embedding output 106 is fed through a task specific processing block 108 to perform additional processing that is specific to a particular task. The task specific processing block 108 and/or the regulatory content language model 102 may be further trained using task specific training data to output task specific results 110 for the regulatory content 104. Examples of some task specific results 110 include identification of citations within regulatory content, determination of relationships between citations, extraction of requirements from regulatory content, generation of associated requirement descriptions, and smart field recognition. These examples of task specific processing are described in more detail below. - The
system 100 shown in FIG. 1 may be implemented on a processor circuit operably configured to provide inference functions for performing the processing task on the regulatory content 104. The regulatory content language model 102 and/or task specific processing block 108 may be implemented using various neural networks for processing the regulatory content 104. Referring to FIG. 2, an inference processor circuit is shown generally at 200. The inference processor circuit 200 includes a microprocessor 202, a program memory 204, a data storage memory 206, and an input output port (I/O) 208, all of which are in communication with the microprocessor 202. Program codes for directing the microprocessor 202 to carry out various functions are stored in the program memory 204, which may be implemented as a random access memory (RAM), flash memory, a hard disk drive (HDD), or a combination thereof. - The
program memory 204 includes storage for program codes that are executable by the microprocessor 202 to provide functionality for implementing the various elements of the system 100. In this embodiment, the program memory 204 includes storage for program codes 230 for directing the microprocessor 202 to perform operating system functions. The operating system may be any of a number of available operating systems including, but not limited to, Linux, macOS, Windows, Android, and JavaScript. The program memory 204 also includes storage for program codes 232 for implementing the regulatory content language model 102, and codes 234 for implementing functions associated with the task specific processing block 108. - The I/
O 208 provides an interface for receiving input via a keyboard 212 and pointing device 214. The I/O 208 also includes an interface for generating output on a display 216 and further includes an interface 218 for connecting the processor circuit 200 to a wide area network 220, such as the internet. - The
data storage memory 206 may be implemented in RAM memory, flash memory, a hard drive, a solid state drive, or a combination thereof. Alternatively, or additionally, the data storage memory 206 may be implemented at least in part as storage accessible via the interface 218 and wide area network 220. In the embodiment shown, the data storage memory 206 provides storage 250 for regulatory content data 104, storage 252 for the regulatory content language model configuration data, storage 254 for the task specific neural network configuration data, and storage 256 for storing results generated by the regulatory content processing block 108. - The
inference processor circuit 200 is operable to implement the system 100 for processing regulatory content shown in FIG. 1 when configured with the applicable training and configuration data in storage locations 252-254 of the data storage memory 206. - Processes for generating the necessary neural network training and configuration data stored in the
locations 252-254 may be performed on the inference processor circuit 200. However, in practice neural network configuration and training is more commonly performed on a specifically configured training system such as a machine learning computing platform or cloud-based computing system, which may include one or more graphics processing units. An example of a training system is shown in FIG. 3 at 300. The training system 300 includes a user interface 302 that may be accessed via an operator's terminal 304. The operator's terminal 304 may be a processor circuit such as shown at 200 in FIG. 2 that has a connection to the wide area network 220. The operator is able to access computational resources 306 and data storage resources 308 made available in the training system 300 via the user interface 302. In some embodiments, providers of cloud based neural network training systems 300 may make machine learning services 310 available that provide a library of functions that may be implemented on the computational resources 306 for performing machine learning functions such as training. For example, the neural network programming environment TensorFlow™ is made available by Google Inc. TensorFlow provides a library of functions and neural network configurations that can be used to configure the above described neural network. The training system 300 also implements monitoring and management functions that monitor and manage performance of the computational resources 306 and the data storage 308. In other embodiments, the functions provided by the training system 300 may be implemented on a stand-alone computing platform configured to provide adequate computing resources for performing the training. - Generally, the training of the neural networks for implementing the regulatory
content language model 102 and the task specific processing block 108 is performed under supervision of an operator using the training system 300. In other embodiments the training process may be unsupervised or only partly supervised by an operator. The operator will typically determine an appropriate neural network configuration for generating a desired task specific output. The operator then prepares a training data set, which is used in a training exercise to establish weights and biases for the neural network portions of the regulatory content language model 102 and task specific processing block 108. In some embodiments the set of training data samples may have associated labels or annotations that indicate a ground truth output result for each sample. In other embodiments, the set of training data may include unannotated training data samples. In some embodiments the training data set may include a combination of annotated and unannotated training data samples. During the training exercise, the operator may make changes to the configuration of the neural network until a satisfactory accuracy and performance is achieved. The resulting neural network configuration and determined weights and biases may then be saved to the applicable locations 252-254 of the data storage memory 206 of the inference processor circuit 200. As such, the regulatory content language model 102 and task specific processing block 108 may be initially implemented, configured, and trained on the training system 300, before being configured for regular use on the inference processor circuit 200. - Referring to
FIG. 4, a process for training the regulatory content language model 102 using the training system 300 is shown as a process flowchart at 400. As shown at block 402, the process begins by configuring a generic language model on the training system 300. In one embodiment the generic language model may be implemented using a pre-trained language model, such as Google's BERT (Bidirectional Encoder Representations from Transformers) or OpenAI's GPT-3 (Generative Pre-trained Transformer 3). Configuration of the generic language model in block 402 may involve accessing and configuring library functions within a neural network programming environment such as TensorFlow to implement a desired generic language model. These language models are implemented using neural networks and may be pre-trained using a large multilingual training corpus 404 (i.e. sets of documents including sentences in context) to capture the semantic and syntactic meaning of words in text. The generic language model training corpus 404 is shown in broken outline in FIG. 4, since in many cases a generic language model may be implemented in a form that has already been trained on an extensive training corpus. The generic language model may thus be invoked in an already trained configuration, which is capable of outputting the meaning of each word or portion of a word in context as the language embedding output 106. The language embedding output 106 may be in the form of a language embedding vector, which includes a plurality of values that capture the contextual meaning of the word. Words of similar meaning will thus be represented by vectors that have similar, but not necessarily identical, values. - In some embodiments the
regulatory content 104 may be separated into tokens before processing each token in context to generate the language embedding output 106. A token is a sequence of characters grouped together as a useful semantic unit for processing. For example, the word “sleeping” may be represented by a first token “sleep” and a second token “ing”. Tokenization may be implemented at a word level, sub-word level, and/or character level. In the remainder of this description, the term token will be used to refer to sequences of one or more characters that have been rendered from the original regulatory content. Tokenization is usually undertaken on the basis of a vocabulary file that provides a set of words that will be used for the tokenization of content. As an example, a tokenizer vocabulary file may not include the word “sleeping” but may include the sub-words “sleep” and “ing”, in which case the tokens will be output as “sleep” and “##ing”. Words that cannot be split into sub-words are known as out-of-vocabulary (OOV) words and may be tokenized on a character-by-character basis, or otherwise handled. Regulatory content language models generally process content in context, which may further involve splitting groups of tokens or text into text sequences, which may be sentence based. - Examples of the types of documents making up the generic language
model training corpus 404 include documents from Wikipedia, scientific publications, books, etc. By including documents in different languages in the training corpus, the language model may be trained to generate multilingual language embeddings. Generating a multilingual language model facilitates ease of use and maintenance of the system, since a single model would be capable of processing regulatory content in many different languages. However, in some embodiments, separate language models can be implemented and trained for each language. This requires that there be sufficient labeled regulatory content training data for the intended language. The training corpus 404 used for training many language models comprises unlabeled text data, and the training process is essentially self-supervised by the language model. Since the training corpus comprises words and sentences in context, techniques such as word masking and next sentence prediction may be employed by the generic language model to make the training process semi-supervised without going through the laborious process of labeling the corpus. Generic language models may thus be trained for both word-level and sentence-level tasks, which are both applicable for the task specific processing performed by the task specific processing block 108. As an example, requirement extraction and the identification of requirement descriptions within regulatory content are generally sentence-level tasks. In contrast, detection of citations and smart fields are generally token-level tasks. Generic language models such as BERT, ALBERT, RoBERTa, and DistilBERT, which employ deep bidirectional transformer architectures, perform well in both sentence-level and token-level tasks. - The generic language model is generally trained for processing generic language content that typically would be encountered in everyday situations. However, word distributions in regulatory text may differ from the generic text. As shown at
block 406, the process 400 for training the generic language model further includes a fine-tuning step, in which the generic language model of block 402 is refined using a regulatory content training corpus 408 to improve its performance in generating relevant word embedding outputs for regulatory content. In the training embodiment 400, this difference is accounted for by performing a fine-tuning of the generic language model to generate the regulatory content language model 102. Fine-tuning generally proceeds as described above for generic training, except that the learning rate is reduced so that the effect of the pre-training of the generic language model is not significantly changed. As such, fine-tuning involves small adjustments to the parameters of the pre-trained language model to generate a regulatory content language model 102 that is optimized for performance on regulatory content, without significantly altering the performance of the language model on generic content. In the embodiment shown in FIG. 4, the fine-tuning is performed using a regulatory content training corpus 408, which includes a relatively large number of regulatory content documents. As noted above, regulatory content may fall into any of a number of classes, such as regulations, permits, plans, bylaws, standards, etc. In some embodiments the regulatory content training corpus 408 may be limited to one of these categories and the fine-tuning performed at block 406 may be based on a corpus of different documents in the same category. In other embodiments the regulatory content training corpus 408 may include documents in different categories to produce a broader based regulatory content language model 102. The regulatory content training corpus 408 may also include multilingual documents, such that the regulatory content language model 102 is trained to generate embedding outputs for regulatory content in different languages. - In this embodiment the regulatory
content training corpus 408 comprises unlabeled or unannotated regulatory content text data, which has the advantage of avoiding the burden of preparing a labeled training corpus. The fine-tuning performed at block 406 proceeds on the same basis as the self-supervised pre-training of the generic language model at block 402, including masking of words and/or next sentence prediction. The fine-tuning process has the advantage of refining the language model to improve performance on text data from the regulatory content domain. In a BERT implementation using the huggingface Transformers library, the training input would be regulatory content having some words replaced by a special token [MASK]. The training input is first fed through a tokenizer, which separates the training content into tokens. The tokenized training input is then provided to a BERT model configured with a language modeling output layer. The same training content in which the masked words still appear is also tokenized and provided as a labeling input to the model for the purposes of training. - Following completion of the fine-tuning at
block 406, configuration data of the fine-tuned language model is output and saved in the regulatory content language model 102 configuration data storage locations 252, in the data storage memory 206 of the inference processor circuit 200 shown in FIG. 2. The fine-tuned regulatory content language model 102 is thus capable of providing regulatory content relevant token embedding outputs 106 that may be used in a variety of regulatory content processing tasks. - Referring back to
FIG. 1, the task specific processing block 108 of the system 100 receives the language embedding output 106 from the regulatory content language model 102 and generates task specific results. In some embodiments the task specific processing block 108 may be configured as a feature extraction network that is separately trained to output the task specific results 110 based on the language embedding output 106 of the regulatory content language model 102. The task specific processing block 108 may be trained using training data in which regulatory content inputs have an associated label indicating a ground truth task result assigned by an operator. In this case, the regulatory content language model 102 has its parameters frozen, and the task specific processing block 108 is separately trained to generate the task specific results 110. - In other embodiments, the regulatory
content language model 102 may be trained in conjunction with the task specific processing block 108 to generate the task specific results 110. In this case, at least some parameters of the regulatory content language model 102 would remain unfrozen and be subject to change. A block diagram of this alternative training configuration that may be implemented on the training system 300 is shown in FIG. 5 generally at 500. In this embodiment, the language model 102 is configured and fine-tuned generally as described above in connection with FIG. 4. The regulatory content language model 102 is thus trained to generate a language embedding output 502 for a regulatory content input received by the language model. The training configuration 500 further includes one or more task specific neural network layers 504, which are configured to receive the language embedding output 502 and generate a task specific result 506. - In the embodiment shown the task
specific result 506 is a classification output having n possible categories, c1-cn. In this embodiment the task specific neural network layers 504 are configured to output probabilities pc1-pcn, each indicating a likelihood that the input to the regulatory content language model 102 falls within the respective category c1-cn. As an example, in one embodiment a final output layer in the output layers 504 may be configured as a softmax layer, which causes the probabilities pc1-pcn to add up to 1.00. - The
training configuration 500 also includes a training block 508, which implements functions on the training system 300 for training the one or more task specific neural network layers 504 and adapting the regulatory content language model 102 to generate the task specific result 506. During training, a task specific training data set 510 is fed into the regulatory content language model 102. The training block 508 includes functions for evaluating the task specific result 506 via a loss function. The training block 508 also includes functions for back-propagating errors in the result to modify parameters of the regulatory content language model 102 and task specific neural network layers 504. An optimization function is generally used for modifying the weights and biases of each neuron in the neural network. The regulatory content language model 102 generally has a final layer that outputs the values of the language embedding vector for each word or token. The task specific neural network layers 504 are configured depending on the task to be performed. In some embodiments, the task specific neural network layers 504 may include a linear layer that is fully connected to receive the language embedding vector from the regulatory content language model 102. This linear layer may be followed by a classification layer, such as a softmax layer, that generates the task specific result 506. - In this embodiment the parameters of the regulatory
content language model 102 are not frozen, which allows the parameters of the language model to be refined using the optimization function for producing the task specific result 506. The optimization function generally includes a learning rate that controls the magnitude of the change to each of the weights and biases of the neurons during each training iteration. During task specific training the learning rate is usually set at a low level that limits the change magnitude such that the effect of the pre-training and fine-tuning (block 406, FIG. 4) of the regulatory content language model 102 is not lost. In the embodiment shown in FIG. 5, the task specific training thus includes further adaptation of the regulatory content language model 102 to better generate the task specific result 506. This adaptation is in effect a second fine-tuning of the regulatory content language model 102, based this time on task specific training data. - As disclosed above, the regulatory
content language model 102 may be trained as a multilingual language model in which regulatory content in many different languages can be processed to generate the language embedding output 502. Many current language model implementations do not even require identification of the language of the input, which reduces system complexity and prevents prediction errors due to language misidentification. The language model may thus intentionally not be informed of the language of the regulatory content, so that pre-trained embeddings cannot be explicitly language specific. The multilingual nature of the language embedding output 502 may be further reinforced by providing a multilingual regulatory content training corpus 408 for adaptation of the language model 102, as disclosed above in connection with FIG. 4. - It is desirable that the multilingual regulatory content processing capability be preserved when the
training system configuration 500 is further trained for generating the task specific results 506. However, providing a labeled task specific training data set 510 that includes labeled regulatory content for each language may be prohibitively expensive. In this embodiment the training data set 510 includes labeled training samples for at least one language. The training data set 510 may thus include labeled training samples for only a single language, or in some cases a few selected languages. The reduced labeling requirement significantly reduces the time and effort needed to prepare the training set. For example, the training data set 510 may include labeled training samples in the English language, which are used to train the task specific neural network layers 504 for generation of the task specific result 506. Following the training, the training configuration 500 employs zero-shot transfer learning to produce the task specific result 506 for other unseen languages based on the training on English regulatory content. Task specific training regulatory content may have many more representative articles in one language (for example, in English) than in other languages. Sampling may be used to balance the number of articles based on language frequency. Tokenization is usually performed on the basis of a vocabulary list, which may be based on the frequency of occurrence of each token in each regulatory content language. - In some embodiments, the task specific
training data set 510 may further include unlabeled samples from other languages in addition to the labeled samples from the selected training language or languages. The inclusion of regulatory content in other languages may improve the effectiveness of the transfer learning to a level that approaches the performance of language specific trained systems. The pre-trained language model 102 has already learnt an alignment between the vocabularies of the languages and naturally integrates language alignment between languages. The zero-shot transfer training of the one or more task specific neural network layers 504 thus extends the functionality for generating the task specific result 506 beyond monolingual regulatory content. In many cases performance for other languages approaches the performance of language-specific models trained using labeled language-specific regulatory content data. - In this embodiment, the regulatory
content language model 102 will have a set of previously established parameters based on its generic pre-training and subsequent fine-tuning (block 406, FIG. 4), as described above. These previously established parameters will thus be modified during the task specific training using the training configuration 500 based on the task specific training data set 510. For the task specific training, the learning rate implemented by the optimization function is generally set to a low value, such that the previously established parameters are only perturbed by a small amount and such that the trained functionality of the regulatory content language model 102 is not compromised. In some training embodiments the learning rate for the task specific neural network layers 504 may be set at a higher rate than the learning rate for adaptation of the regulatory content language model 102. Alternatively, or additionally, layer specific learning rates may be implemented or some layers of the regulatory content language model 102 may be frozen (essentially a zero learning rate). As an example, in the regulatory content language model 102, the pre-training may be considered adequate for all layers preceding the final hidden layers, and these preceding layers may be frozen during training. - The
training configuration 500 shown in FIG. 5 may be implemented to configure the training system 300 or other processor to perform any one of a number of tasks, as described in more detail below. - Citation Detection and Representation
- In one embodiment the
training configuration 500 may be used to train a system for performing the task of citation detection within regulatory content. In the context of regulatory content, a citation is a reference to one or more requirements or conditions within the text of the regulatory content. Regulatory content often includes explicit alphanumeric citations, either in the form of numeric or other characters that indicate a sequence or in the form of alphabetical text characters. Referring to FIG. 6A, a citation identification system for identifying citations within regulatory content is shown generally at 600. The citation identification system 600 includes a tokenizer 602, the regulatory content language model 102, and a citation classifier 604. The citation identification system 600 is trained to perform named entity recognition (NER) on each sentence in the document. In one embodiment, the citation classifier block 604 may include a linear layer configured for performing token classification based on the language embedding generated by the regulatory content language model 102. An example of a portion of a regulatory content input is shown at 606. The regulatory content input portion 606 includes a heading 608, which includes citation numbers, and a portion of a sentence 610, which does not include citations. The citation classifier 604 is configured to generate a citation classification output 612. - The
tokenizer 602 separates the regulatory content input 606 into tokens (i.e. words and sub-words). The regulatory content language model 102 generates a language embedding output for each token as described above. The citation classifier 604 then receives the language embedding output and generates the citation classification output 612. In this embodiment the citation classification output 612 includes probabilities associated with five target classes. Citations in regulatory content may include text that implies a sequence (herein referred to as "citation numbers") and/or text that acts as a heading or title for a requirement (herein referred to as "citation titles"). The location of the citation number and citation title within a phrase may also be significant. In this embodiment, the citation classification output 612 includes the classes listed in the table below. -
| Class | Description | FIG. 6A example |
| --- | --- | --- |
| B-CI_NUM | Citation identifier number at the beginning of a phrase | "Section" |
| I-CI_NUM | Continuation of a citation identifier number within a phrase | "A." |
| B-CI_TXT | Citation identifier title at the beginning of a phrase | "OPERATING" |
| I-CI_TXT | Continuation of a citation identifier title inside a phrase | "&", "MAINTENANCE", "REQUIREMENTS" |
| O | Regulatory content body text not including citation identifiers | "The", "owner", etc. |

- In the above example, "A." includes sub-words "A" and ".". Similarly, "REQUIREMENTS" includes sub-words "REQUIRE" and "##MENTS". Prediction of the
citation classification 612 is performed on a per-token basis. Post-processing based on heuristics may be implemented to confirm or correct assigned labels. For example, if a word between two words labeled as I-CI_TXT is initially assigned an "O" label (body text), then the word label is changed to I-CI_TXT for consistency. - The
citation identification system 600 may be trained on a training data set 510 as described above in connection with FIG. 5. For training the citation identification system 600, the training data set 510 includes sentences in which each token has been assigned a class corresponding to the classes of the citation classification output 612. As disclosed above, in one embodiment the training data set 510 may include labeled samples for only one language, or at most for a few languages. Labeling regulatory content in other languages is labor intensive. However, if there is sufficient labeled regulatory content for one language, the trained citation identification system 600 can be used effectively to identify citations in other, unlabeled languages. Following the training exercise, the parameters of the regulatory content language model 102 and the citation classifier 604 may be stored in the storage locations of the data storage memory 206 of the inference processor circuit 200 (FIG. 2). The inference processor circuit 200 will thus be configured to process regulatory content stored in the data storage location 250 of the data storage memory 206 and to generate task specific regulatory content processing results, which may be stored in the storage location 256 of the data storage memory 206 and/or displayed on the display 216. - The
citation identification system 600 thus outputs a classification for each token in the regulatory content input 606. The citation classification output 612 may be further processed to generate a hierarchy of citations, which is useful in evaluating the requirements associated with the citations. In one embodiment a hierarchical tree of citation nodes is constructed by considering parent/child relationships between different citations. By establishing hierarchical levels for citation nodes in the tree, a determination can be made as to whether two consecutive citation nodes have a sibling relationship (i.e. the same level within the tree and the same format) or have a parent-child relationship (i.e. a different level within the tree and a different format). A hierarchical relationship classifier is described in detail in commonly owned United States patent application Ser. No. 17/017,406, entitled METHOD AND SYSTEM FOR IDENTIFYING CITATIONS WITHIN REGULATORY CONTENT, filed on Sep. 10, 2020, and incorporated herein by reference in its entirety. - Referring to
FIG. 6B, in one embodiment the training configuration 500 may be used to train a relationship classifier system 620. The relationship classifier system 620 includes the pre-trained and fine-tuned regulatory content language model 102 and a sibling classifier 622. The sibling classifier 622 includes one or more neural network layers configured to generate a classification output 626 indicating a probability that the input pair of citations have a sibling relationship (i.e. "sibling" or "not_sibling"). The regulatory content language model 102 receives an input 624 including pairwise combinations of citations. - The
relationship classifier system 620 may be trained using a plurality of pairs of citations that are labeled as either "sibling" or "not_sibling", which provides the labeled task specific training data set 510 shown in the training configuration 500 of FIG. 5. As disclosed above, in one embodiment the training data set 510 may include labeled citation samples for only one language, or at most for a few languages. The labeled pairs of citations may be used to further adapt (or fine-tune) the regulatory content language model 102 and to train the sibling classifier 622 to generate the classification output 626. The relationship classifier system 620 thus generates a classification for each citation identified by the citation identification system 600 of FIG. 6A. This classification is then used to construct the hierarchical tree, with citations classified as "sibling" placed at the same level and citations classified as "not_sibling" placed at different levels. In other embodiments the sibling classifier 622 may be configured as a parent classifier, which generates a classification of citations as being "parent citations" or "not-parent citations". - Requirement Extraction
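As described in this section, sentence-level inputs are assembled with BERT's special [CLS] and [SEP] tokens and capped at 512 tokens. A minimal sketch of that assembly, operating on already tokenized sentences (a simplified illustration, not the patent's implementation):

```python
MAX_LEN = 512  # BERT's maximum input length, including special tokens

def build_bert_input(sentences):
    """Assemble one BERT-style token sequence: [CLS] at the start,
    [SEP] after each sentence, truncated to the 512-token limit."""
    tokens = ["[CLS]"]
    for sentence in sentences:
        tokens.extend(sentence)
        tokens.append("[SEP]")
    return tokens[:MAX_LEN]
```

A real tokenizer would additionally map each token to a vocabulary id before the sequence is passed to the language model.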
- It is useful to be able to extract requirements from regulatory content and to further identify which of the requirements are mandatory and which are optional. In regulatory content, not all of the text includes requirements, since some of the text may be explanatory, definitional, contextual, or may address the obligations of the issuing regulatory body. Referring to
FIG. 7, a requirement extraction system for identifying requirements in regulatory content is shown generally at 700. The requirement extraction system 700 includes the pre-trained and fine-tuned regulatory content language model 102 that receives sentences of regulatory content 706 as an input. - The input in this embodiment thus differs in some respects from the tokenized input in
FIG. 6A, since this input includes sequences of tokens corresponding to sentences or text sequences. In a Google BERT implementation of the regulatory content language model 102, a special token [CLS] is used to denote the start of each sequence and a special [SEP] token is used to indicate separation between sentences or text sequences. In the BERT language model, a maximum of 512 tokens can be input and processed simultaneously. For text classification tasks in BERT, a final hidden state h of the first special token [CLS] is generally taken as the overall representation of the input sequence. As such, for a BERT implementation of the regulatory content language model 102 in the requirement extraction system 700, the language embedding output 502 of the regulatory content language model 102 would be a vector W of 768 parameter values associated with the final hidden layer h for each token in the input sequence. - The
requirement extraction system 700 further includes a requirement classifier 702, which is configured to generate a classification output 704 based on the output of the language model 102. Regulatory content generally includes a plurality of requirements, some of which may be optional or site specific requirements. The classification output 704 of the requirement classifier 702 has three probability classes: REQ, ORR, and DSC. The REQ output represents a probability that the sentence includes a requirement, which is taken to mean the requirement is not optional or site specific. The ORR output represents a probability that the sentence includes a requirement that is either optional or a recommendation. In regulatory content, some actions may be conducted by the regulated entity as an option or alternative to another requirement, and some recommended actions may be desirable but not mandatory. Finally, the DSC output represents a probability that the sentence includes descriptive language related to a requirement but is not itself a requirement. - For training of the
requirement extraction system 700 using the training configuration 500, a set of sentences that are labeled as REQ, ORR, or DSC is input as the labeled task specific training data set 510. As described above, the labeled sentences may be confined to a single language. The parameters of the regulatory content language model 102 are then adapted by evaluating a loss function for the classification output 704 and back-propagating errors to the weights W of the layer h and other layers of the regulatory content language model 102. In one embodiment the requirement classifier 702 is configured as a softmax classifier, which receives the regulatory content language model 102 output and generates classification output probabilities 704 that add up to 1.00. Following training, the configuration and parameters of the regulatory content language model 102 and the requirement classifier 702 may be stored in the storage locations of the data storage memory 206 of the inference processor circuit 200 (FIG. 2). The inference processor circuit 200 will thus be configured to process regulatory content stored in the data storage location 250 of the data storage memory 206 to identify requirements. - Requirement Description
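The parent/child concatenation this section describes can be sketched minimally as follows (the separator and the one-description-per-child convention are illustrative assumptions):

```python
def complete_descriptions(parent_text, child_texts, separator=" "):
    """Form one complete requirement description per child by
    concatenating the parent requirement text with each child
    requirement text."""
    return [parent_text + separator + child for child in child_texts]
```

For a parent with several children this yields one complete description per child, each carrying the shared parent wording.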
- In regulatory content, a requirement extracted by the
requirement extraction system 700 may be followed by one or more subsidiary extracted requirements. Extracted requirements may thus have a "parent-child" relationship, and in some cases several child requirements may stem from a common parent requirement. Similarly, a child requirement may itself have one or more child requirements, for which the child requirement then acts as a parent. Identifying these parent/child relationships between extracted requirements is useful, since the wording of the parent requirement and each of the child requirements may be combined to form a complete requirement description. The complete requirement description necessarily includes the text of the parent requirement together with the text of the child requirement, and is thus a concatenation of parent and child requirement texts. - Referring to
FIG. 8, a requirement conjunction classifier system is shown generally at 800. The system 800 includes the regulatory content language model 102, which in this embodiment receives pairs of extracted requirements 802 as an input. Each pair of extracted requirements 802 is identified as being separated, for example by using the [SEP] token for a BERT implementation of the regulatory content language model 102. - The
system 800 further includes a requirement conjunction classifier 804, which is configured to generate a classification output 806 based on the output of the language model 102. The requirement conjunction classifier 804 may be implemented by adapting aspects of textual entailment processing, which is performed to identify whether a sentence and a hypothesis represent an entailment, a contradiction, or are neutral. In this embodiment, the requirement conjunction classifier 804 generates a classification output having three probability classes. The first probability class, not_conjunction, represents a probability that the pair of extracted requirements 802 do not share a parent-child relationship. The second probability class, conjunction_single, represents a probability that the pair of extracted requirements 802 have a parent-child relationship, with the child having a single requirement. The third probability class, conjunction_multiple, represents a probability that the pair of extracted requirements 802 have a parent-child relationship, with the child having multiple separate requirements. - The requirement
conjunction classifier system 800 may be trained by generating a labeled task specific training data set 510 including a plurality of pairs of separated requirements, each pair having an assigned label indicating that the pair falls into one of the not_conjunction, conjunction_single, or conjunction_multiple classes. The system 800 may then be trained as described above in connection with FIG. 5 using the task specific data set. The classification output 806 may be further post-processed to generate the final requirement description. - Smart Fields
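Because the smart field classifier described in this section predicts one class per sub-word token, per-token predictions are typically merged back to whole words before being reported. A minimal sketch using the "##" continuation convention (keeping the first sub-word's label is an illustrative assumption):

```python
def merge_subword_predictions(tokens, labels):
    """Merge "##"-prefixed continuation tokens back into whole words,
    keeping the label predicted for each word's first sub-word."""
    words, word_labels = [], []
    for token, label in zip(tokens, labels):
        if token.startswith("##") and words:
            words[-1] += token[2:]  # glue continuation onto previous word
        else:
            words.append(token)
            word_labels.append(label)
    return list(zip(words, word_labels))
```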
- In regulatory content, text fields may have a common connotation across different documents, and such fields can be identified as smart fields. An example of a smart field within extracted requirements is the "requirement type", which may be assigned to smart field subcategories such as equipment standard, testing and procedure, inspection, notification, record keeping, reporting, and operation standard. Another example is "frequency", related to a timing frequency at which an action must be repeated, such as annual, semi-annual, event-driven, ongoing, or specific date. Other smart fields such as an "equipment type" or "equipment identifier" may also be identified.
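The subcategories listed above can be organized into a simple lookup that groups per-word smart field predictions by category (the category and class names here are illustrative, derived from the examples in the preceding paragraph):

```python
SMART_FIELD_CATEGORIES = {
    "requirement_type": {"equipment_standard", "testing", "inspection",
                         "notification", "record_keeping", "reporting",
                         "operation_standard"},
    "frequency": {"annual", "semi-annual", "event-driven", "ongoing",
                  "specific_date"},
}

def group_smart_fields(predictions):
    """Group (word, smart_field_class) predictions by category,
    dropping words whose class belongs to no smart field category."""
    grouped = {category: [] for category in SMART_FIELD_CATEGORIES}
    for word, cls in predictions:
        for category, classes in SMART_FIELD_CATEGORIES.items():
            if cls in classes:
                grouped[category].append(word)
    return grouped
```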
- Referring to
FIG. 9, a smart field identification system is shown generally at 900. The smart field identification system 900 includes the regulatory content language model 102 and a tokenizer 902. The tokenizer 902 receives an input of regulatory content 904 and separates the content into tokens, which are passed through the regulatory content language model 102 to generate a language embedding output. In this embodiment the smart fields are thus generated for separated tokens. The regulatory content language model 102 outputs a language embedding vector 906 for each token received from the tokenizer 902. Each language embedding output 906 of the regulatory content language model 102 is then fed through one or more neural network layers 908 that are configured to act as a smart field classifier. The language embedding outputs 906 for each token may thus be fed through the same fully connected layers to generate a classification output 908, which includes a plurality of classes corresponding to smart fields that are to be identified. In the example shown in FIG. 9, the smart field classifications include regulatory content specific smart field classifications, such as equipment specific smart fields (equipment_standard, testing, inspection), time specific smart fields (annual, semi-annual), and other smart fields. - The smart
field identification system 900 may be trained on a training data set 510 as described above in connection with FIG. 5. The training data set 510 may include already tokenized words that have been assigned an associated smart field classification. The labeled training data may be input directly into the regulatory content language model 102, which is adapted to generate the classification output 908 based on the training data. As disclosed above, in one embodiment the training data set 510 may include labeled samples for only one language, or at most for a few languages. - The embodiments of the inference systems shown in
FIGS. 6-9 are each implemented using the same general training approach shown in FIG. 5 or FIG. 1. Once configuration parameters have been determined during training, the parameters may be loaded into the data storage memory 206 of the inference processor circuit 200 for use in processing actual regulatory content. In each of these embodiments the pre-trained and fine-tuned regulatory content language model 102 is used to generate the language embeddings. For implementation of each of the described tasks the regulatory content language model 102 is further adapted to generate the task specific result. Alternatively, the output of the regulatory content language model 102 may be frozen and the task specific neural network may be trained to generate the result. The embodiments shown have the advantage of being specifically tailored to operate on regulatory content rather than generic language and further trained to generate the specific result. At the same time, utilizing the pre-trained and fine-tuned regulatory content language model facilitates multi-lingual operation without requiring separate training for each language. This has the advantage of reducing the preparation time for labeled regulatory content training data. - While specific embodiments have been described and illustrated, such embodiments should be considered illustrative only and not as limiting the disclosed embodiments as construed in accordance with the accompanying claims.
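The training options summarized above (adapting the language model at a low learning rate, training the task specific layers at a higher rate, or freezing early layers outright) can be sketched as optimizer parameter grouping. The name prefixes and rates below are illustrative assumptions, shaped like the per-group options common optimizers accept:

```python
def build_param_groups(named_params, base_lr=2e-5, head_lr=1e-4,
                       freeze_prefixes=("embeddings",)):
    """Sort (name, param) pairs into optimizer parameter groups:
    frozen layers get a zero learning rate, the pre-trained body is
    adapted gently, and task specific head layers learn faster."""
    frozen, body, head = [], [], []
    for name, param in named_params:
        if name.startswith(freeze_prefixes):
            frozen.append(param)
        elif name.startswith("classifier"):
            head.append(param)
        else:
            body.append(param)
    return [
        {"params": frozen, "lr": 0.0},    # frozen pre-trained layers
        {"params": body, "lr": base_lr},  # low-rate language model adaptation
        {"params": head, "lr": head_lr},  # task specific layers
    ]
```

The returned list matches the parameter-group shape accepted by, for example, PyTorch optimizers.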
Claims (21)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/093,416 US20220147814A1 (en) | 2020-11-09 | 2020-11-09 | Task specific processing of regulatory content |
US17/486,555 US11232358B1 (en) | 2020-11-09 | 2021-09-27 | Task specific processing of regulatory content |
EP21887966.6A EP4241209A1 (en) | 2020-11-09 | 2021-11-08 | Task specific processing of regulatory content |
PCT/CA2021/051585 WO2022094723A1 (en) | 2020-11-09 | 2021-11-08 | Task specific processing of regulatory content |
PCT/CA2021/051586 WO2022094724A1 (en) | 2020-11-09 | 2021-11-08 | System and method for generating regulatory content requirement descriptions |
US18/252,282 US20230419110A1 (en) | 2020-11-09 | 2021-11-08 | System and method for generating regulatory content requirement descriptions |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/093,416 US20220147814A1 (en) | 2020-11-09 | 2020-11-09 | Task specific processing of regulatory content |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/510,647 Continuation US11314922B1 (en) | 2020-11-09 | 2021-10-26 | System and method for generating regulatory content requirement descriptions |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/486,555 Continuation US11232358B1 (en) | 2020-11-09 | 2021-09-27 | Task specific processing of regulatory content |
US18/252,282 Continuation US20230419110A1 (en) | 2020-11-09 | 2021-11-08 | System and method for generating regulatory content requirement descriptions |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220147814A1 (en) | 2022-05-12
Family
ID=79689700
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/093,416 Pending US20220147814A1 (en) | 2020-11-09 | 2020-11-09 | Task specific processing of regulatory content |
US17/486,555 Active US11232358B1 (en) | 2020-11-09 | 2021-09-27 | Task specific processing of regulatory content |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/486,555 Active US11232358B1 (en) | 2020-11-09 | 2021-09-27 | Task specific processing of regulatory content |
Country Status (3)
Country | Link |
---|---|
US (2) | US20220147814A1 (en) |
EP (1) | EP4241209A1 (en) |
WO (1) | WO2022094723A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220156300A1 (en) * | 2020-11-19 | 2022-05-19 | Accenture Global Solutions Limited | Deep document processing with self-supervised learning |
US20220171937A1 (en) * | 2020-11-30 | 2022-06-02 | Industrial Technology Research Institute | Document sentence concept labeling system, training method and labeling method thereof |
CN117271553A (en) * | 2023-09-08 | 2023-12-22 | 上海浦东发展银行股份有限公司 | Method for generating and operating supervision report data quality rule |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11625555B1 (en) * | 2020-03-12 | 2023-04-11 | Amazon Technologies, Inc. | Artificial intelligence system with unsupervised model training for entity-pair relationship analysis |
US20220147814A1 (en) * | 2020-11-09 | 2022-05-12 | Moore & Gasperecz Global Inc. | Task specific processing of regulatory content |
US11860919B2 (en) * | 2021-08-13 | 2024-01-02 | Zelig Llc | System and method for generating and obtaining remote classification of condensed large-scale text objects |
US20230245146A1 (en) * | 2022-01-28 | 2023-08-03 | Walmart Apollo, Llc | Methods and apparatus for automatic item demand and substitution prediction using machine learning processes |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140160528A1 (en) * | 2012-12-11 | 2014-06-12 | International Business Machines Corporation | Application management of printing requests through induced analytics |
US10956673B1 (en) * | 2020-09-10 | 2021-03-23 | Moore & Gasperecz Global Inc. | Method and system for identifying citations within regulatory content |
US20210117621A1 (en) * | 2019-10-18 | 2021-04-22 | Ul Llc | Technologies for dynamically creating representations for regulations |
US20210319173A1 (en) * | 2020-04-09 | 2021-10-14 | Rsa Security Llc | Determining syntax parse trees for extracting nested hierarchical structures from text data |
US11232358B1 (en) * | 2020-11-09 | 2022-01-25 | Moore & Gasperecz Global Inc. | Task specific processing of regulatory content |
US11314922B1 (en) * | 2020-11-27 | 2022-04-26 | Moore & Gasperecz Global Inc. | System and method for generating regulatory content requirement descriptions |
Family Cites Families (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5794236A (en) | 1996-05-29 | 1998-08-11 | Lexis-Nexis | Computer-based system for classifying documents into a hierarchy and linking the classifications to the hierarchy |
CA2381460A1 (en) | 1999-08-06 | 2001-02-15 | James S. Wiltshire, Jr. | System and method for classifying legal concepts using legal topic scheme |
US6684202B1 (en) | 2000-05-31 | 2004-01-27 | Lexis Nexis | Computer-based system and method for finding rules of law in text |
FR2813132B1 (en) | 2000-08-16 | 2003-01-31 | Marc Vogel | DATABASE ACCESS INTERFACE SYSTEM |
WO2005084124A2 (en) * | 2004-03-02 | 2005-09-15 | Metaphor Vision Ltd. | Device, system and method for accelerated modeling |
US8156010B2 (en) | 2004-08-31 | 2012-04-10 | Intel Corporation | Multimodal context marketplace |
FR2920897B1 (en) | 2007-09-11 | 2010-07-30 | Marc Vogel | METHOD FOR QUERYING A DATABASE AND INTERROGATION DEVICE |
FR2920898B1 (en) | 2007-09-11 | 2010-07-30 | Marc Vogel | DATABASE MANAGEMENT INSTALLATION |
US8306819B2 (en) * | 2009-03-09 | 2012-11-06 | Microsoft Corporation | Enhanced automatic speech recognition using mapping between unsupervised and supervised speech model parameters trained on same acoustic training data |
US20110255794A1 (en) | 2010-01-15 | 2011-10-20 | Copanion, Inc. | Systems and methods for automatically extracting data by narrowing data search scope using contour matching |
US9613267B2 (en) | 2012-05-31 | 2017-04-04 | Xerox Corporation | Method and system of extracting label:value data from a document |
US9235812B2 (en) | 2012-12-04 | 2016-01-12 | Msc Intellectual Properties B.V. | System and method for automatic document classification in ediscovery, compliance and legacy information clean-up |
US10013655B1 (en) | 2014-03-11 | 2018-07-03 | Applied Underwriters, Inc. | Artificial intelligence expert system for anomaly detection |
US10572828B2 (en) * | 2015-10-28 | 2020-02-25 | Qomplx, Inc. | Transfer learning and domain adaptation using distributable data models |
RU2628431C1 (en) | 2016-04-12 | 2017-08-16 | Общество с ограниченной ответственностью "Аби Продакшн" | Selection of text classifier parameter based on semantic characteristics |
US10922621B2 (en) * | 2016-11-11 | 2021-02-16 | International Business Machines Corporation | Facilitating mapping of control policies to regulatory documents |
US11080486B2 (en) * | 2017-05-09 | 2021-08-03 | International Business Machines Corporation | Remote neural network processing for guideline identification |
US10867214B2 (en) * | 2018-02-14 | 2020-12-15 | Nvidia Corporation | Generation of synthetic images for training a neural network model |
US10963627B2 (en) * | 2018-06-11 | 2021-03-30 | Adobe Inc. | Automatically generating digital enterprise content variants |
EP3821370A4 (en) | 2018-07-12 | 2022-04-06 | Knowledgelake, Inc. | Document classification system |
US10516902B1 (en) * | 2018-07-26 | 2019-12-24 | International Business Machines Corporation | Control of content broadcasting |
US10943274B2 (en) * | 2018-08-28 | 2021-03-09 | Accenture Global Solutions Limited | Automation and digitizalization of document processing systems |
US11568175B2 (en) * | 2018-09-07 | 2023-01-31 | Verint Americas Inc. | Dynamic intent classification based on environment variables |
US11687827B2 (en) * | 2018-10-04 | 2023-06-27 | Accenture Global Solutions Limited | Artificial intelligence (AI)-based regulatory data processing system |
US11256699B2 (en) | 2019-01-23 | 2022-02-22 | Servicenow, Inc. | Grammar-based searching of a configuration management database |
US11120221B2 (en) * | 2019-03-26 | 2021-09-14 | Tata Consultancy Services Limited | Method and system to resolve ambiguities in regulations |
US11580335B2 (en) * | 2019-04-02 | 2023-02-14 | General Electric Company | Transaction management of machine learning algorithm updates |
US10853696B1 (en) * | 2019-04-11 | 2020-12-01 | Facebook, Inc. | Evaluation of content items against policies regulating content presentation by an online system using machine learning |
US11120788B2 (en) * | 2019-05-02 | 2021-09-14 | Microsoft Technology Licensing, Llc | Organizational-based language model generation |
CN110705223A (en) | 2019-08-13 | 2020-01-17 | 北京众信博雅科技有限公司 | Footnote recognition and extraction method for multi-page layout document |
US11270105B2 (en) * | 2019-09-24 | 2022-03-08 | International Business Machines Corporation | Extracting and analyzing information from engineering drawings |
US11645458B2 (en) * | 2019-10-28 | 2023-05-09 | Paypal, Inc. | Systems and methods for automatically scrubbing sensitive data |
US20210209358A1 (en) * | 2020-01-06 | 2021-07-08 | Catachi Co. DBA Compliance.ai | Methods and systems for facilitating classification of portions of a regulatory document using multiple classification codes |
- 2020-11-09: US application 17/093,416 filed; published as US20220147814A1 (pending)
- 2021-09-27: US application 17/486,555 filed; published as US11232358B1 (active)
- 2021-11-08: EP application 21887966.6A filed; published as EP4241209A1 (pending)
- 2021-11-08: PCT application PCT/CA2021/051585 filed; published as WO2022094723A1 (status unknown)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140160528A1 (en) * | 2012-12-11 | 2014-06-12 | International Business Machines Corporation | Application management of printing requests through induced analytics |
US20210117621A1 (en) * | 2019-10-18 | 2021-04-22 | Ul Llc | Technologies for dynamically creating representations for regulations |
US20210319173A1 (en) * | 2020-04-09 | 2021-10-14 | Rsa Security Llc | Determining syntax parse trees for extracting nested hierarchical structures from text data |
US10956673B1 (en) * | 2020-09-10 | 2021-03-23 | Moore & Gasperecz Global Inc. | Method and system for identifying citations within regulatory content |
US11232358B1 (en) * | 2020-11-09 | 2022-01-25 | Moore & Gasperecz Global Inc. | Task specific processing of regulatory content |
US11314922B1 (en) * | 2020-11-27 | 2022-04-26 | Moore & Gasperecz Global Inc. | System and method for generating regulatory content requirement descriptions |
Non-Patent Citations (4)
Title |
---|
KIPERWASSER, E. et al., "Simple and accurate dependency parsing using bidirectional LSTM feature representation," Trans. of the Assn. for Computational Linguistics, Vol. 4 (2016) pp. 313-327. (Year: 2016) * |
LE, T. et al., "Requirement text detection from contract packages to support project definition determination," Advances in Informatics and Computing in Civil and Construction Engineering: Proc. of the 35th CIB W78 Conference (2018) pp. 569-576. (Year: 2019) * |
ZHANG, R. et al., "A machine learning-based method for building code requirement hierarchy extraction," CSCE Annual Conference (2019) 10 pp. (Year: 2019) * |
ZHOU, P. et al., "Ontology-based automated information extraction from building energy conservation codes," Automation in Construction Vol. 74 (2017) pp. 103-117. (Year: 2017) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220156300A1 (en) * | 2020-11-19 | 2022-05-19 | Accenture Global Solutions Limited | Deep document processing with self-supervised learning |
US11954139B2 (en) * | 2020-11-19 | 2024-04-09 | Accenture Global Solutions Limited | Deep document processing with self-supervised learning |
US20220171937A1 (en) * | 2020-11-30 | 2022-06-02 | Industrial Technology Research Institute | Document sentence concept labeling system, training method and labeling method thereof |
CN117271553A (en) * | 2023-09-08 | 2023-12-22 | 上海浦东发展银行股份有限公司 | Method for generating and operating supervision report data quality rule |
Also Published As
Publication number | Publication date |
---|---|
WO2022094723A1 (en) | 2022-05-12 |
US11232358B1 (en) | 2022-01-25 |
EP4241209A1 (en) | 2023-09-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11232358B1 (en) | Task specific processing of regulatory content | |
CN113705187B (en) | Method and device for generating pre-training language model, electronic equipment and storage medium | |
WO2018028077A1 (en) | Deep learning based method and device for chinese semantics analysis | |
CN111291195B (en) | Data processing method, device, terminal and readable storage medium | |
Prusa et al. | Designing a better data representation for deep neural networks and text classification | |
US9645988B1 (en) | System and method for identifying passages in electronic documents | |
US11334818B2 (en) | System and method for real-time training of machine learning model using small training data set | |
US20190018833A1 (en) | System and method for rule creation from natural language text | |
US20190197433A1 (en) | Methods for adaptive information extraction through adaptive learning of human annotators and devices thereof | |
Consoli et al. | Embeddings for named entity recognition in geoscience Portuguese literature | |
Fashwan et al. | SHAKKIL: an automatic diacritization system for modern standard Arabic texts | |
Utomo et al. | New instances classification framework on Quran ontology applied to question answering system | |
US11314922B1 (en) | System and method for generating regulatory content requirement descriptions | |
Akhoundzade et al. | Persian sentiment lexicon expansion using unsupervised learning methods | |
CN112685548B (en) | Question answering method, electronic device and storage device | |
US20230419110A1 (en) | System and method for generating regulatory content requirement descriptions | |
Saifullah et al. | Cyberbullying Text Identification based on Deep Learning and Transformer-based Language Models | |
Chowdhury et al. | Detection of compatibility, proximity and expectancy of Bengali sentences using long short term memory | |
Ananth et al. | Grammatical tagging for the Kannada text documents using hybrid bidirectional long-short term memory model | |
Munir et al. | A comparison of topic modelling approaches for urdu text | |
CN114201957A (en) | Text emotion analysis method and device and computer readable storage medium | |
Mulki et al. | Empirical evaluation of leveraging named entities for Arabic sentiment analysis | |
Reddy et al. | Text Summarization of Telugu Scripts | |
US20230325606A1 (en) | Method for extracting information from an unstructured data source | |
Patel et al. | To laugh or not to laugh–LSTM based humor detection approach |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: MOORE & GASPERECZ GLOBAL INC., CANADA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; Assignors: RAMEZANI, MAHDI; SMITH, KENNETH; KRAG, ELIJAH SOLOMON; AND OTHERS; Reel/Frame: 054322/0436; Effective date: 20201106 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| AS | Assignment | Owner name: INTELEX TECHNOLOGIES, ULC, CANADA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; Assignor: MOORE & GASPERECZ GLOBAL INC.; Reel/Frame: 066619/0902; Effective date: 20240229 |