Disclosure of Invention
In view of this, the present disclosure provides a text analysis method capable of obtaining an accurate text analysis result.
According to an aspect of the present disclosure, there is provided a text analysis method including: acquiring feature information corresponding to a plurality of word segments of a text to be analyzed; and inputting the feature information into an analysis model for processing to obtain a text analysis result of the text to be analyzed, wherein the analysis model includes a convolution module, a relation module, a pooling module, and a concatenation output module.
In a possible implementation manner, inputting the feature information into the analysis model for processing and obtaining the text analysis result of the text to be analyzed includes:
inputting the feature information into the convolution module for processing to obtain a convolution result;
inputting the convolution result into the relation module and the pooling module respectively for processing to obtain a relation result and a pooling result respectively; and
inputting the relation result and the pooling result into the concatenation output module for processing to obtain the text analysis result of the text to be analyzed.
In one possible implementation manner, acquiring the feature information corresponding to the plurality of word segments of the text to be analyzed includes:
performing vectorization processing on the plurality of word segments of the text to be analyzed respectively to obtain a plurality of pieces of vector information corresponding to the plurality of word segments; and
determining the feature information of the plurality of word segments according to the plurality of pieces of vector information.
In one possible implementation, the concatenation output module includes a plurality of fully connected layers and a softmax processing layer, and
inputting the relation result and the pooling result into the concatenation output module for processing and obtaining the text analysis result of the text to be analyzed includes:
performing vector concatenation processing on the relation result and the pooling result to obtain concatenated vector information; and
sequentially inputting the concatenated vector information into the plurality of fully connected layers and the softmax processing layer for processing to obtain the text analysis result of the text to be analyzed.
In one possible implementation, the method further includes:
acquiring training feature information corresponding to a plurality of word segments of a sample text;
inputting the training feature information into an initial analysis model for processing to obtain a training analysis result of the sample text, wherein the initial analysis model includes an initial convolution module, an initial relation module, an initial pooling module, and an initial concatenation output module;
determining a model loss of the initial analysis model according to the training analysis result and a labeling result of the sample text;
adjusting parameter weights in the initial analysis model according to the model loss to determine an adjusted analysis model; and
determining the adjusted analysis model as a final analysis model when the model loss satisfies a training condition.
In one possible implementation, the convolution module includes a convolutional neural network, the relation module includes a relation network, and the pooling module includes a maximum pooling layer.
According to another aspect of the present disclosure, there is provided a text analysis apparatus including:
a feature acquisition unit, configured to acquire feature information corresponding to a plurality of word segments of a text to be analyzed; and
a result obtaining unit, configured to input the feature information into an analysis model for processing and obtain a text analysis result of the text to be analyzed,
wherein the analysis model includes a convolution module, a relation module, a pooling module, and a concatenation output module.
In one possible implementation, the result obtaining unit includes:
a first result obtaining subunit, configured to input the feature information into the convolution module for processing and obtain a convolution result;
a second result obtaining subunit, configured to input the convolution result into the relation module and the pooling module respectively for processing and obtain a relation result and a pooling result respectively; and
a third result obtaining subunit, configured to input the relation result and the pooling result into the concatenation output module for processing and obtain a text analysis result of the text to be analyzed.
In one possible implementation, the feature obtaining unit includes:
the vectorization subunit is configured to perform vectorization processing on the multiple word segments of the text to be analyzed, and acquire multiple pieces of vector information corresponding to the multiple word segments;
and a feature determining subunit, configured to determine the feature information of the multiple word segments according to the multiple pieces of vector information.
In one possible implementation, the concatenation output module includes a plurality of fully connected layers and a softmax processing layer, wherein the third result obtaining subunit includes:
a concatenation subunit, configured to perform vector concatenation processing on the relation result and the pooling result to obtain concatenated vector information; and
an information processing subunit, configured to sequentially input the concatenated vector information into the plurality of fully connected layers and the softmax processing layer for processing to obtain a text analysis result of the text to be analyzed.
In one possible implementation, the apparatus further includes:
a training feature acquisition unit, configured to acquire training feature information corresponding to a plurality of word segments of a sample text;
a training result obtaining unit, configured to input the training feature information into an initial analysis model for processing and obtain a training analysis result of the sample text, where the initial analysis model includes an initial convolution module, an initial relation module, an initial pooling module, and an initial concatenation output module;
a loss determining unit, configured to determine a model loss of the initial analysis model according to the training analysis result and the labeling result of the sample text;
the model adjusting unit is used for adjusting the parameter weight in the initial analysis model according to the model loss and determining an adjusted analysis model;
and a model determining unit, configured to determine the adjusted analysis model as a final analysis model when the model loss satisfies a training condition.
In one possible implementation, the convolution module includes a convolutional neural network, the relation module includes a relation network, and the pooling module includes a maximum pooling layer.
According to another aspect of the present disclosure, there is provided a text analysis apparatus including: a processor; and a memory for storing processor-executable instructions, wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above text analysis method.
According to the embodiments of the present disclosure, feature information corresponding to a plurality of word segments of a text to be analyzed can be acquired, and the feature information can be input into an analysis model for processing to obtain a text analysis result. Because the text analysis is implemented by an analysis model including a convolution module, a relation module, a pooling module, and a concatenation output module, the accuracy of the text analysis result is improved.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
FIG. 1 is a flow diagram illustrating a method of text analysis in accordance with an exemplary embodiment. The method can be applied to a server. As shown in fig. 1, a text analysis method according to an embodiment of the present disclosure includes:
in step S11, feature information corresponding to a plurality of segmented words of the text to be analyzed is acquired;
in step S12, the feature information is input into an analysis model for processing, and a text analysis result of the text to be analyzed is obtained,
wherein the analysis model includes a convolution module, a relation module, a pooling module, and a concatenation output module.
According to the embodiments of the present disclosure, feature information corresponding to a plurality of word segments of a text to be analyzed can be acquired, and the feature information can be input into an analysis model for processing to obtain a text analysis result. Because the text analysis is implemented by an analysis model including a convolution module, a relation module, a pooling module, and a concatenation output module, the accuracy of the text analysis result is improved. The embodiments of the present disclosure can thus help business personnel understand the comment angle and the approving or disapproving attitude of a user's comment information (the text to be analyzed) on a certain object, so that the value of the comment information is fully mined.
For example, the text to be analyzed may include comment text of a user for a certain object. The object may refer to any object capable of comment analysis, and may be, for example, video, audio, news, a character, an event, a product, or the like.
In one possible implementation, before word segmentation is performed on a user's comment text, the comment text may be preprocessed to improve the accuracy and efficiency of the analysis. The preprocessing of the comment text may include deleting specified characters in the comment text (for example, deleting forwarding characters in microblog comments), converting traditional Chinese characters in the comment text into simplified characters, and the like. After preprocessing, the text to be analyzed may be determined.
In a possible implementation manner, a word segmentation manner of the related art may be adopted to perform word segmentation processing on the text to be analyzed. For example, new words and phrases may be extracted from all the comment texts for an object and used as a word segmentation dictionary for the object. The word segmentation dictionary may then be used to segment the text to be analyzed, thereby obtaining a plurality of word segments of the text to be analyzed. The number of word segments is less than or equal to the number N of pieces of feature information that the analysis model can process. The present disclosure does not limit the specific manner in which the plurality of word segments of the text to be analyzed are obtained.
Fig. 2 is a flowchart illustrating a step S11 of a text analysis method according to an exemplary embodiment. As shown in fig. 2, in one possible implementation, step S11 may include:
in step S111, vectorization processing is performed on the plurality of word segments of the text to be analyzed respectively, to obtain a plurality of pieces of vector information corresponding to the plurality of word segments;
in step S112, the feature information is determined according to the plurality of pieces of vector information.
For example, a pre-trained mapping model (e.g., the Google word2vec model) may be used to convert (map) the plurality of word segments of the text to be analyzed into a plurality of pieces of vector information, i.e., a plurality of real-valued row vectors. When the number of word segments of the text to be analyzed is less than N, the remaining positions may be filled with zeros so that the total number of pieces of vector information is N. The N pieces of vector information thus obtained may be determined as the N pieces of feature information. In this way, the N pieces of feature information to be input into the analysis model for processing can be obtained.
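The vectorization and zero-padding of steps S111 and S112 can be sketched as follows. This is a minimal illustration only: `embedding` stands in for the pre-trained mapping model (a word2vec-style lookup), and its contents, as well as the handling of out-of-vocabulary words as zero vectors, are assumptions not specified by the disclosure.

```python
import numpy as np

def embed_and_pad(word_segments, embedding, N, k):
    """Map each word segment to its k-dimensional row vector and
    zero-pad the result to exactly N rows of feature information."""
    mat = np.zeros((N, k))
    for i, w in enumerate(word_segments[:N]):   # at most N word segments
        mat[i] = embedding.get(w, np.zeros(k))  # unknown words -> zeros (assumption)
    return mat
```

The returned N x k matrix is the input to the convolution module described below.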
FIG. 3 is a diagram illustrating an analysis model of a text analysis method according to an exemplary embodiment. As shown in fig. 3, the analysis model includes a convolution module 31, a relation module 32, a pooling module 33, and a concatenation output module 34.
Fig. 4 is a flowchart illustrating a step S12 of a text analysis method according to an exemplary embodiment. As shown in fig. 4, in one possible implementation, step S12 may include:
in step S121, the feature information is input into the convolution module for processing, and a convolution result is obtained;
in step S122, the convolution result is input into the relation module and the pooling module respectively for processing, and a relation result and a pooling result are respectively obtained;
in step S123, the relation result and the pooling result are input into the concatenation output module for processing, and a text analysis result of the text to be analyzed is obtained.
For example, the convolution module 31 may include one or more convolutional neural networks. A convolutional neural network can effectively capture local context information within a sentence.
For example, for the N pieces of feature information (vector information) of the text to be analyzed, if each piece of vector information is a real-valued row vector of dimension k, i.e., of length k (k > 1), the N pieces of feature information form a matrix of N rows and k columns. This matrix may be input into the convolution module 31 for processing.
In the convolution module 31, d convolution kernels of size (h, k) with different weights may be used to perform convolution operations on the matrix of N rows and k columns, so as to extract local information of h consecutive word segments. After the convolution operations, d column vectors of dimension N−h+1 are obtained, forming a real-valued matrix of N−h+1 rows and d columns (the convolution result). Each column of the matrix corresponds to the result of one convolution kernel operation, and each row corresponds to a piece of local information of the text to be analyzed.
In a possible implementation manner, the convolution module 31 may include a plurality of convolutional neural networks, which perform convolution processing on the N pieces of feature information using convolution kernels of different sizes (h, k), thereby obtaining a plurality of real-valued matrices as the convolution result. For example, convolution kernels with h = 2, 3, and 4 may be used. In this way, local information of different sizes (h consecutive word segments) of the text to be analyzed can be obtained, so that local information of different sizes can be analyzed and processed.
It should be understood that a person skilled in the art may select the convolutional neural network according to actual needs, and set parameters such as the number of convolution kernels, their weights, and the convolution kernel size, which are not limited in the present disclosure.
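The convolution operation above can be sketched as follows: d kernels of size (h, k) slide over the N x k feature matrix, producing the (N−h+1) x d convolution result. The kernel weights here are arbitrary placeholders; a real model would learn them, and this sketch omits bias terms and nonlinearities, which the disclosure does not specify.

```python
import numpy as np

def conv_module(X, kernels):
    """X: (N, k) feature matrix; kernels: (d, h, k) array of d kernels.
    Returns the (N - h + 1, d) convolution result."""
    N, k = X.shape
    d, h, _ = kernels.shape
    out = np.empty((N - h + 1, d))
    for i in range(N - h + 1):
        window = X[i:i + h]  # h consecutive word-segment vectors
        # row i of `out` holds all d kernel responses for this window
        out[i] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return out
```

Each column of the result collects one kernel's responses over all windows, matching the description above.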
In a possible implementation manner, the convolution result may be input into the relation module 32 for processing in step S122, and a relation result may be obtained. The relation module 32 may include one or more relation networks (RNs). A relation network can capture long-distance dependency relationships between the word segments of the text to be analyzed and extract relation information between any two pieces of local information.
For example, the convolution result may be input into the relation module 32 for processing. Let M = N − h + 1; the convolution result may then be one or more real-valued matrices of M rows and d columns. For each real-valued matrix, each of its rows (i.e., each of the M d-dimensional real vectors o_1, o_2, …, o_M) may represent a piece of local information of the text to be analyzed. In the relation module 32, a multi-layer perceptron b may be used to express the relation between any two pieces of local information as a relation vector b(o_q, o_l), where 1 ≤ q < l ≤ M. All M(M−1)/2 relation vectors b(o_q, o_l) are averaged, and the average is input into another multi-layer perceptron f for processing to obtain a relation vector r, as shown in equation (1):

r = f( (2 / (M(M−1))) · Σ_{1 ≤ q < l ≤ M} b(o_q, o_l) )    (1)
In the case that the convolution result is one or more real-valued matrices, the relation module 32 may include one or more relation networks that respectively process the matrices, thereby obtaining one or more relation vectors r as the final relation result.
It should be understood that those skilled in the art can select the relation network and the multi-layer perceptrons b and f according to actual needs, and the present disclosure is not limited thereto. In this way, the relation result processed by the relation module 32 may be obtained.
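Equation (1) can be sketched as follows. Here `b` and `f` are caller-supplied callables standing in for the two multi-layer perceptrons; in a real model their weights would be learned, so treating them as arbitrary functions is an assumption made purely for illustration.

```python
import numpy as np

def relation_module(O, b, f):
    """O: (M, d) matrix whose rows are the local-information vectors
    o_1..o_M. Averages b(o_q, o_l) over all M(M-1)/2 pairs with q < l,
    then applies f, per equation (1)."""
    M = O.shape[0]
    pair_sum = sum(b(O[q], O[l])
                   for q in range(M) for l in range(q + 1, M))
    return f(pair_sum / (M * (M - 1) / 2))
```

Note that the average runs only over unordered pairs (q < l), so each pair of pieces of local information contributes exactly once.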
In a possible implementation manner, in step S122, the convolution result may also be input into the pooling module 33 for processing, and a pooling result may be obtained. The pooling module 33 may include a maximum pooling layer.
For example, the convolution result, which may be one or more real-valued matrices of M rows and d columns, may be input into the pooling module 33 (e.g., a maximum pooling layer) for processing, where each column of a matrix represents the operation result of one convolution kernel. The maximum value of each column of the matrix may be taken, yielding d maximum values. The d-dimensional real vector c formed by these d maxima may be used as the pooling result.
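The max-pooling step reduces to a column-wise maximum: for each of the d kernels it keeps the strongest response over all windows. A minimal sketch:

```python
import numpy as np

def max_pool(C):
    """C: (M, d) convolution result. Returns the d-dimensional
    pooling result c, the column-wise maximum of C."""
    return C.max(axis=0)
```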
In a possible implementation manner, in step S123, the relation result r and the pooling result c may be input into the concatenation output module 34 for processing, so as to obtain a text analysis result of the text to be analyzed.
In one possible implementation, the concatenation output module 34 may include a plurality of fully connected layers and a softmax processing layer, wherein step S123 may include:
performing vector concatenation processing on the relation result and the pooling result to obtain concatenated vector information; and
sequentially inputting the concatenated vector information into the fully connected layers and the softmax processing layer for processing to obtain a text analysis result of the text to be analyzed.
For example, the relation result r and the pooling result c may be concatenated to obtain concatenated vector information (whose length is the sum of the lengths of r and c). The concatenated vector information is sequentially input into the plurality of fully connected layers and the softmax processing layer for processing, so as to obtain the text analysis result of the text to be analyzed. It should be understood that the fully connected layers and the softmax processing layer can be selected by those skilled in the art according to actual needs, and the present disclosure is not limited thereto.
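The concatenation output module can be sketched as below. The disclosure only specifies fully connected layers followed by softmax; the choice of two layers, the ReLU activation between them, and the weight shapes are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def concat_output(r, c, W1, b1, W2, b2):
    """Concatenate relation result r and pooling result c, then apply
    two fully connected layers and softmax to get class probabilities."""
    v = np.concatenate([r, c])          # concatenated vector information
    h = np.maximum(W1 @ v + b1, 0.0)    # first fully connected layer (ReLU assumed)
    return softmax(W2 @ h + b2)         # second layer + softmax processing layer
```

The softmax output is a probability distribution over the analysis categories (e.g., sentiment classes), which serves as the text analysis result.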
According to the embodiment of the disclosure, before the feature information is processed by the analysis model to obtain the text analysis result of the text to be analyzed, the initial analysis model may be trained.
FIG. 5 is a flow diagram illustrating a method of text analysis in accordance with an exemplary embodiment. As shown in fig. 5, in one possible implementation, the method further includes:
in step S13, training feature information corresponding to a plurality of word segments of the sample text is obtained;
in step S14, the training feature information is input into an initial analysis model for processing, and a training analysis result of the sample text is obtained, where the initial analysis model includes an initial convolution module, an initial relation module, an initial pooling module, and an initial concatenation output module;
in step S15, a model loss of the initial analysis model is determined according to the training analysis result and the labeling result of the sample text;
in step S16, the parameter weights in the initial analysis model are adjusted according to the model loss, and an adjusted analysis model is determined;
in step S17, when the model loss satisfies the training condition, the adjusted analysis model is determined as the final analysis model.
For example, existing comment texts may be manually analyzed, and the analysis results (the labeling results of the sample texts) may be labeled to form a training set. For any sample text in the training set, the sample text may be preprocessed, and word segmentation processing may be performed on it using a word segmentation manner of the related art to obtain a plurality of word segments of the sample text. The number of word segments is less than or equal to the number N of pieces of feature information that the analysis model can process.
In one possible implementation, a pre-trained mapping model (e.g., the Google word2vec model) may be used to map the plurality of word segments of the sample text into a plurality of pieces of vector information. When the number of word segments is less than N, the remaining positions may be filled with zeros so that the total number of pieces of vector information is N, and the N pieces of vector information thus obtained are determined as the training feature information (N pieces of feature information) of the sample text.
In a possible implementation manner, the training feature information may be input into the initial analysis model for processing, and a training analysis result of the sample text may be obtained, where the initial analysis model includes an initial convolution module, an initial relation module, an initial pooling module, and an initial concatenation output module. The structure and form of each module of the initial analysis model may be as described above, and are not repeated here.
In one possible implementation, the model loss of the initial analysis model is determined according to the training analysis result and the labeling result of the sample text. The specific type of the loss function of the model loss can be selected by those skilled in the art according to actual situations, and the present disclosure is not limited thereto.
In a possible implementation manner, the parameter weights in the initial analysis model may be adjusted according to the model loss of the initial analysis model, and the adjusted analysis model may be determined. For example, a backpropagation algorithm, such as the BPTT (Back Propagation Through Time) algorithm, may be employed to determine the gradient of the model loss with respect to the parameter weights of the initial analysis model, and the parameter weights in the initial analysis model may be adjusted according to the gradient.
In one possible implementation, the model adjustment process of steps S14-S16 described above may be repeated multiple times. The training condition may be preset and may include a set number of training iterations and/or a set convergence condition. When the model loss satisfies the training condition, the most recently adjusted analysis model can be considered to meet the accuracy requirement, and the adjusted analysis model can be determined as the final analysis model.
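The repeated loss/gradient/update cycle of steps S14-S16 can be sketched as a generic gradient-descent loop. Here `loss_and_grads` stands in for the forward pass plus backpropagation of the initial analysis model, and the specific learning rate, iteration cap, and convergence tolerance are illustrative assumptions; the disclosure leaves the loss function and training condition open.

```python
import numpy as np

def train(loss_and_grads, params, lr=0.05, max_iters=1000, tol=1e-4):
    """Repeat: compute model loss and gradients, then adjust parameter
    weights along the negative gradient, until the loss satisfies the
    training condition (convergence tolerance) or the iteration cap."""
    loss = float("inf")
    for _ in range(max_iters):
        loss, grads = loss_and_grads(params)
        if loss < tol:                  # training condition satisfied
            break
        for name in params:             # gradient-descent weight update
            params[name] -= lr * grads[name]
    return params, loss
```

A toy run with a quadratic loss converges in a few dozen iterations, illustrating the stopping behavior.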
By the method, the analysis model meeting the training conditions can be obtained by training according to the training characteristic information of the sample text and the initial analysis model, so that the analysis model can accurately extract viewpoints and emotional tendencies in the text to be analyzed.
According to the embodiments of the present disclosure, feature information corresponding to a plurality of word segments of a text to be analyzed can be acquired, and the feature information can be input into an analysis model for processing to obtain a text analysis result. Because the text analysis is implemented by an analysis model including a convolution module, a relation module, a pooling module, and a concatenation output module, the accuracy of the text analysis result is improved. The embodiments of the present disclosure can thus help business personnel understand the comment angle, the approving or disapproving attitude, and the like of a user's comment information (the text to be analyzed) on a certain object, so that the value of the comment information is fully mined.
Fig. 6 is a block diagram illustrating a text analysis apparatus according to an example embodiment. As shown in fig. 6, the text analysis device includes:
a feature acquisition unit 71 configured to acquire feature information corresponding to a plurality of segmented words of a text to be analyzed;
a result obtaining unit 72, configured to input the feature information into an analysis model for processing and obtain a text analysis result of the text to be analyzed,
wherein the analysis model includes a convolution module, a relation module, a pooling module, and a concatenation output module.
Fig. 7 is a block diagram illustrating a text analysis apparatus according to an example embodiment. As shown in fig. 7, in one possible implementation, the result obtaining unit 72 may include:
a first result obtaining subunit 721, configured to input the feature information into the convolution module for processing and obtain a convolution result;
a second result obtaining subunit 722, configured to input the convolution result into the relation module and the pooling module respectively for processing and obtain a relation result and a pooling result respectively; and
a third result obtaining subunit 723, configured to input the relation result and the pooling result into the concatenation output module for processing and obtain a text analysis result of the text to be analyzed.
As shown in fig. 7, in one possible implementation, the feature obtaining unit 71 may include:
the vectorization subunit 711 is configured to perform vectorization processing on the multiple segmented words of the text to be analyzed, and acquire multiple pieces of vector information corresponding to the multiple segmented words;
a feature determining subunit 712, configured to determine feature information of the multiple word segments according to the multiple vector information.
In one possible implementation, the concatenation output module includes a plurality of fully connected layers and a softmax processing layer, wherein the third result obtaining subunit includes:
a concatenation subunit, configured to perform vector concatenation processing on the relation result and the pooling result to obtain concatenated vector information; and
an information processing subunit, configured to sequentially input the concatenated vector information into the plurality of fully connected layers and the softmax processing layer for processing to obtain a text analysis result of the text to be analyzed.
As shown in fig. 7, in one possible implementation, the apparatus further includes:
a training feature obtaining unit 73, configured to obtain training feature information corresponding to a plurality of word segments of the sample text;
a training result obtaining unit 74, configured to input the training feature information into an initial analysis model for processing and obtain a training analysis result of the sample text, where the initial analysis model includes an initial convolution module, an initial relation module, an initial pooling module, and an initial concatenation output module;
a loss determining unit 75, configured to determine a model loss of the initial analysis model according to the training analysis result and the labeling result of the sample text;
a model adjusting unit 76, configured to adjust a parameter weight in the initial analysis model according to the model loss, and determine an adjusted analysis model;
a model determining unit 77, configured to determine the adjusted analysis model as a final analysis model when the model loss satisfies the training condition.
In one possible implementation, the convolution module includes a convolutional neural network, the relation module includes a relation network, and the pooling module includes a maximum pooling layer.
Fig. 8 is a block diagram illustrating a text analysis apparatus 1900 according to an example embodiment. For example, the apparatus 1900 may be provided as a server. Referring to FIG. 8, the device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. Where a remote computer is involved, it may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
While embodiments of the present disclosure have been described above, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.