WO2021043087A1 - Text layout method and apparatus, electronic device, and computer-readable storage medium - Google Patents
Text layout method and apparatus, electronic device, and computer-readable storage medium
- Publication number
- WO2021043087A1 (PCT/CN2020/112335)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- text
- feature
- words
- layout
- semi
- Prior art date
- 2019-09-02
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/80—Information retrieval; Database structures therefor; File system structures therefor of semi-structured data, e.g. markup language structured data such as SGML, XML or HTML
- G06F16/84—Mapping; Conversion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- This application relates to the field of artificial intelligence technology, and in particular to a text layout method, apparatus, electronic device, and computer-readable storage medium based on the collaboration of semi-structured text and user behavior.
- Text classification is a specialized data mining technique, shaped mainly by the unstructured, subjective, and high-dimensional nature of text information.
- The lack of structure in text information makes it difficult for text mining to extract effective, easy-to-understand classification rules from text data; the inventor realized that the high dimensionality of text information makes the computational complexity of common classification algorithms too high, to the point of losing practicality; and the subjectivity of text classification makes it hard to find a fully suitable text representation that accurately represents the text.
- There is much existing work on converting semi-structured text to plain text, but extracting the layout of semi-structured text has always been difficult. Similar existing work can extract regular tables from semi-structured text, but it struggles to distinguish a multi-column layout from a one-column-title, one-column-content layout. With multi-column semi-structured text in particular, content from one column often intrudes into the other, hampering subsequent processing.
- A text layout method provided by this application includes:
- obtaining a semi-structured text set, and performing preprocessing operations on the semi-structured text set to obtain a numerical vector text set;
- converting the semi-structured text set into a text image set, and performing contrast enhancement processing and a thresholding operation on the text image set to obtain a target text image set;
- detecting the target text image set with an edge detection algorithm to obtain a text layout feature set;
- performing feature selection on the numerical vector text set and the text layout feature set with a pre-built feature extraction model to obtain a text semantic feature set and a text distribution feature set, respectively;
- classifying the text in the semi-structured text set with a random forest model according to the text semantic feature set and the text distribution feature set to obtain the classification result of the text, thereby completing the text layout of the text.
- A text layout apparatus, including:
- a text preprocessing module, used to obtain a semi-structured text set, perform preprocessing operations on the semi-structured text set to obtain a numerical vector text set, convert the semi-structured text set into a text image set, perform contrast enhancement processing and a thresholding operation on the text image set to obtain a target text image set, and detect the target text image set with an edge detection algorithm to obtain a text layout feature set;
- a feature extraction module, used to perform feature selection on the numerical vector text set and the text layout feature set using a pre-built feature extraction model to obtain a text semantic feature set and a text distribution feature set, respectively;
- a text classification module, used to classify the text in the semi-structured text set according to the text semantic feature set and the text distribution feature set using a random forest model to obtain the classification result of the text, thereby completing the text layout of the text.
- An electronic device includes a memory and a processor, the memory storing a text layout program runnable on the processor, the text layout program implementing the following steps when executed by the processor:
- obtaining a semi-structured text set, and performing preprocessing operations on the semi-structured text set to obtain a numerical vector text set;
- converting the semi-structured text set into a text image set, and performing contrast enhancement processing and a thresholding operation on the text image set to obtain a target text image set;
- detecting the target text image set with an edge detection algorithm to obtain a text layout feature set;
- performing feature selection on the numerical vector text set and the text layout feature set with a pre-built feature extraction model to obtain a text semantic feature set and a text distribution feature set, respectively;
- classifying the text in the semi-structured text set with a random forest model according to the text semantic feature set and the text distribution feature set to obtain the classification result of the text, thereby completing the text layout of the text.
- The present application also provides a computer-readable storage medium having a text layout program stored thereon, the text layout program being executable by one or more processors to implement the following steps:
- obtaining a semi-structured text set, and performing preprocessing operations on the semi-structured text set to obtain a numerical vector text set;
- converting the semi-structured text set into a text image set, and performing contrast enhancement processing and a thresholding operation on the text image set to obtain a target text image set;
- detecting the target text image set with an edge detection algorithm to obtain a text layout feature set;
- performing feature selection on the numerical vector text set and the text layout feature set with a pre-built feature extraction model to obtain a text semantic feature set and a text distribution feature set, respectively;
- classifying the text in the semi-structured text set with a random forest model according to the text semantic feature set and the text distribution feature set to obtain the classification result of the text, thereby completing the text layout of the text.
- FIG. 1 is a schematic flowchart of a text layout method provided by an embodiment of this application;
- FIG. 2 is a schematic diagram of the internal structure of an electronic device provided by an embodiment of this application;
- FIG. 3 is a schematic diagram of the modules of a text layout apparatus provided by an embodiment of this application.
- The realization of the objectives, functional features, and advantages of this application will be further described with reference to the embodiments and the accompanying drawings. It should be understood that the specific embodiments described here serve only to explain this application and are not intended to limit it.
- This application provides a text layout method.
- Referring to FIG. 1, which is a schematic flowchart of a text layout method provided by an embodiment of this application, the method can be executed by an apparatus, and the apparatus can be implemented by software and/or hardware.
- In this embodiment, the text layout method includes:
- S1. Obtain a semi-structured text set, and perform preprocessing operations on the semi-structured text set to obtain a numerical vector text set.
- In a preferred embodiment of this application, the semi-structured text is composed of a number of discrete modules with independent semantics; each module contains one and only one aspect of content, i.e., it can be summarized by a noun or noun phrase, and there are obvious non-punctuation separators between the independent semantic modules, which may be spaces, carriage returns, tables, numbers, special format characters, and so on.
- Preferably, the semi-structured text described in the preferred embodiment of the present application may be PDF text.
- The PDF text collection is obtained in either of two ways: method one, by retrieving resumes from major recruitment websites; method two, by searching keywords in a corpus.
- Further, the preprocessing operations include deduplication, stop-word removal, word segmentation, and weight calculation.
- The specific implementation steps of the preprocessing operations are:
- a. Deduplication: duplicate texts in the semi-structured text set reduce the accuracy of text classification, so a preferred embodiment of the present application first performs a deduplication operation on the text data set.
- Preferably, the present application uses the Euclidean distance formula to deduplicate the text data set:
- d = sqrt( Σ_j (w_1j - w_2j)^2 )
- where d represents the distance between two pieces of text data, and w_1j and w_2j are the j-th components of any two pieces of text data; when the distance between two pieces of text data is smaller than a preset distance threshold, one of them is deleted.
- Preferably, this application presets the threshold to 0.1.
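- As a minimal sketch of this deduplication step (assuming each text has already been mapped to a fixed-length numeric vector), the following Python keeps a text only if its Euclidean distance to every previously kept text is at least the preset threshold of 0.1:

```python
import numpy as np

def deduplicate(vectors, texts, threshold=0.1):
    """Drop one of any two texts whose vectors lie closer than `threshold`
    under the Euclidean distance d = sqrt(sum_j (w_1j - w_2j)^2)."""
    kept_texts, kept_vecs = [], []
    for vec, text in zip(vectors, texts):
        v = np.asarray(vec, dtype=float)
        # Keep the text only if it is not a near-duplicate of a kept one.
        if all(np.linalg.norm(v - k) >= threshold for k in kept_vecs):
            kept_texts.append(text)
            kept_vecs.append(v)
    return kept_texts
```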
- b. Stop-word removal: stop words are function words with no substantive meaning; they have no effect on the classification of the text, yet they occur with high frequency and therefore degrade text classification.
- The stop words include commonly used pronouns, prepositions, and the like.
- For example, the stop words may be "的", "在", "不过", and so on.
- This application matches the words of the deduplicated text set one by one against a pre-built stop-word list: when a word in the deduplicated text set matches the stop-word list, the matched word is filtered out; when a word does not match the stop-word list, it is retained.
- The pre-built stop-word list is downloaded from a web page.
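- A minimal sketch of the stop-word filtering, assuming the downloaded stop-word list has been loaded into a Python set; words that match the list are filtered out and the rest are retained, exactly as described:

```python
def remove_stop_words(tokens, stop_words):
    """Retain only the tokens that do not match the stop-word list."""
    return [t for t in tokens if t not in stop_words]

stop_words = {"的", "在", "不过"}          # the examples given above
print(remove_stop_words(["我", "在", "北京", "的", "大学"], stop_words))
# -> ['我', '北京', '大学']
```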
- c. Word segmentation: this application matches the words of the stop-word-filtered text set against the entries of a preset dictionary according to a preset strategy, obtains the feature words of the text set, and separates the feature words with spaces. The preset dictionary includes a statistical dictionary and a prefix dictionary.
- The statistical dictionary is constructed from all possible word segments obtained by statistical methods.
- The statistical dictionary counts the frequency with which adjacent characters co-occur in the corpus and computes their mutual information; when the mutual information of adjacent characters is greater than a preset threshold, they are recognized as forming a word, the threshold being 0.6.
- The prefix dictionary contains the prefixes of every word segment in the statistical dictionary.
- For example, the prefixes of the word "Peking University" (北京大学) in the statistical dictionary are "北", "北京", and "北京大", and the prefix of the word "university" (大学) is "大", and so on.
- This application uses the statistical dictionary to obtain the possible segmentations of the stop-word-filtered text set, and uses the prefix dictionary to determine the final segmentation according to the cut positions of the words, thereby obtaining the feature words of the stop-word-filtered text set.
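- The sketch below illustrates the statistical-plus-prefix-dictionary idea with a greedy forward maximum match; production segmenters built on this design (e.g., jieba) instead build a word graph over the prefix dictionary and select the most probable path, so the greedy strategy here is a simplifying assumption:

```python
def build_prefix_dict(stat_dict):
    """Expand every statistical-dictionary entry into all of its prefixes,
    as in the "北京大学" example above; real words keep their frequency."""
    prefixes = {}
    for word, freq in stat_dict.items():
        for i in range(1, len(word) + 1):
            prefixes.setdefault(word[:i], 0)
        prefixes[word] = freq
    return prefixes

def segment(sentence, prefix_dict):
    """Greedy forward maximum matching guided by the prefix dictionary."""
    out, i = [], 0
    while i < len(sentence):
        j, best = i + 1, sentence[i]
        while j <= len(sentence) and sentence[i:j] in prefix_dict:
            if prefix_dict[sentence[i:j]] > 0:   # a real dictionary word
                best = sentence[i:j]
            j += 1
        out.append(best)
        i += len(best)
    return out

stat_dict = {"北京": 10, "北京大学": 8, "大学": 12, "学生": 5}
print(segment("北京大学的学生", build_prefix_dict(stat_dict)))
# -> ['北京大学', '的', '学生']
```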
- d. Weight calculation includes:
- The correlation strength between the feature words is calculated by constructing a dependency relationship graph, the importance score of each feature word is calculated from the correlation strength, and the weight of the feature word is obtained. In detail:
- First, the dependency relevance Dep(W_i, W_j) of any two feature words W_i and W_j is computed, where len(W_i, W_j) represents the length of the dependency path between W_i and W_j, and b is a hyperparameter.
- Next, the gravitational attraction f_grav(W_i, W_j) of the feature words W_i and W_j is computed, where tfidf(W) is the TF-IDF value of word W (TF denotes term frequency, IDF the inverse document frequency index), and d is the Euclidean distance between the word vectors of W_i and W_j.
- The correlation strength between feature words W_i and W_j is then obtained as:
- weight(W_i, W_j) = Dep(W_i, W_j) * f_grav(W_i, W_j)
- An undirected graph G = (V, E) is built, where V is the set of vertices and E the set of edges, and the importance score of each feature word W_i is computed over the set of vertices associated with W_i, with λ as the damping coefficient.
- From the importance scores, the weights of the feature words are obtained, so that each feature word is expressed in numerical vector form, yielding the numerical vector text set.
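- The published text defines len(W_i, W_j), b, tfidf(W), d, and λ but omits the exact Dep, f_grav, and importance-score formulas, so the Python sketch below fills them with commonly used stand-ins: an exponential decay b^len for the dependency relevance, the gravity-style product tfidf_i * tfidf_j / d^2 for the attraction, and a TextRank-style damped iteration for the importance score. All three forms are assumptions for illustration, not the patent's own equations.

```python
import numpy as np

def word_weights(tfidf, wordvec, dep_len, b=0.9, damping=0.85, iters=50):
    """Score feature words over an undirected graph whose edge weights are
    weight(Wi, Wj) = Dep(Wi, Wj) * f_grav(Wi, Wj)."""
    words = list(tfidf)
    n = len(words)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            wi, wj = words[i], words[j]
            d = np.linalg.norm(wordvec[wi] - wordvec[wj]) or 1e-9
            dep = b ** dep_len[(wi, wj)]            # assumed form of Dep
            grav = tfidf[wi] * tfidf[wj] / d ** 2   # assumed form of f_grav
            W[i, j] = W[j, i] = dep * grav
    score = np.ones(n) / n
    col_sum = W.sum(axis=0) + 1e-12
    for _ in range(iters):                          # damped TextRank update
        score = (1 - damping) + damping * (W / col_sum) @ score
    return dict(zip(words, score))

tfidf = {"布局": 0.8, "文本": 0.6, "特征": 0.7}
vecs = {w: np.random.default_rng(i).normal(size=8)
        for i, w in enumerate(tfidf)}
deps = {(a, c): 1 for a in tfidf for c in tfidf if a != c}  # toy path lengths
print(word_weights(tfidf, vecs, deps))
```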
- S2. Convert the semi-structured text set into a text image set, and perform contrast enhancement processing and a thresholding operation on the text image set to obtain a target text image set.
- In a preferred embodiment, the text image set is obtained by scanning the text set, so that the layout of the text set can be analyzed.
- Further, contrast refers to the contrast between the maximum and minimum brightness values in the imaging system; low contrast makes image processing more difficult.
- A preferred embodiment of this application adopts a contrast stretching method, which enhances image contrast by increasing the dynamic range of the gray levels.
- Contrast stretching, also called gray-scale stretching, is a commonly used gray-scale transformation.
- In detail, the present application performs gray-scale stretching on specific regions according to the piecewise linear transformation function of the contrast stretching method, further improving the contrast of the output image.
- Contrast stretching is, in essence, a gray-value transformation.
- This application implements the gray-value transformation through linear stretching, which refers to a pixel-level operation with a linear relationship between the input and output gray values. The gray transformation formula is:
- D_b = f(D_a) = a * D_a + b
- where a is the linear slope and b the intercept on the Y axis; D_a denotes the input gray value and D_b the output gray value. When a > 1, the contrast of the output image is enhanced relative to the original; when a < 1, it is weakened.
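- A minimal sketch of the linear stretch; the slope and intercept are illustrative placeholders, since the description leaves a and b unspecified:

```python
import numpy as np

def linear_stretch(gray, a=1.5, b=0.0):
    """Pixel-wise gray transform D_b = a * D_a + b; a > 1 strengthens the
    contrast, a < 1 weakens it. The result is clipped to the 8-bit range."""
    out = a * gray.astype(np.float32) + b
    return np.clip(out, 0, 255).astype(np.uint8)
```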
- Further, the image thresholding operation is an efficient algorithm that binarizes the contrast-enhanced grayscale image using the OTSU algorithm to obtain a binarized image.
- A preferred embodiment of the present application takes the gray level t as the segmentation threshold between the foreground and background of the grayscale image, and assumes the proportion of foreground points in the image is w_0 with average gray level u_0, and the proportion of background points is w_1 with average gray level u_1; the total average gray level of the image is then
- u = w_0 * u_0 + w_1 * u_1
- and the between-class variance of the foreground and background images is
- g = w_0 * (u_0 - u)^2 + w_1 * (u_1 - u)^2 = w_0 * w_1 * (u_0 - u_1)^2
- When the variance g is largest, the difference between foreground and background is greatest, and the gray level t at that point is the optimal threshold; gray values in the contrast-enhanced grayscale image greater than t are set to 255 and gray values smaller than t are set to 0, yielding the binarized image of the contrast-enhanced grayscale image, which is the target text image, and thereby the target text image set.
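- The OTSU search described above can be written directly in NumPy: scan every candidate threshold t, keep the one that maximizes the between-class variance g, and apply the 255/0 mapping. A minimal sketch:

```python
import numpy as np

def otsu_binarize(gray):
    """Binarize an 8-bit grayscale image at the threshold t maximizing
    g = w0 * w1 * (u0 - u1)^2."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_g = 0, -1.0
    for t in range(256):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
        if w0 == 0 or w1 == 0:
            continue
        u0 = (np.arange(t + 1) * p[:t + 1]).sum() / w0
        u1 = (np.arange(t + 1, 256) * p[t + 1:]).sum() / w1
        g = w0 * w1 * (u0 - u1) ** 2
        if g > best_g:
            best_g, best_t = g, t
    return np.where(gray > best_t, 255, 0).astype(np.uint8)
```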
- S3. Detect the target text image set with an edge detection algorithm to obtain a text layout feature set.
- In a preferred embodiment, the basic idea of edge detection is that edge points are those pixels at which the gray level exhibits a step change or a roof change, i.e., where the derivative of the gray level is large or extremal.
- Preferably, this application applies the Canny edge detection algorithm to detect the target text image set.
- The specific detection steps are: smooth the images of the target text image set with a Gaussian filter; compute the gradient magnitude and direction of the smoothed images using finite differences of the first-order partial derivatives, and set the magnitude of non-local-maximum gradient points to zero to obtain the thinned edges of the image; connect the thinned edges using the double-threshold method to obtain the text layout feature set of the target text image set.
- Further, by presetting two thresholds T_1 and T_2 (T_1 < T_2), the present application obtains two threshold edge images N_1[i, j] and N_2[i, j].
- In N_2[i, j], the double-threshold method links the thinned edges into complete contours; when a discontinuity in an edge is reached, connectable edges are searched for in the neighborhood of N_1[i, j] until all discontinuities in N_2[i, j] are connected, thereby obtaining the text layout feature set.
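- A sketch of the detection step assuming OpenCV is available; cv2.Canny performs the gradient computation, non-maximum suppression, and double-threshold edge linking internally, and the two thresholds here (T_1 < T_2) are illustrative values, not ones fixed by the description:

```python
import cv2

def layout_edges(binary_img, t1=50, t2=150):
    """Gaussian smoothing followed by Canny detection with double
    thresholds t1 < t2, yielding the connected thinned edges."""
    smoothed = cv2.GaussianBlur(binary_img, (5, 5), 0)
    return cv2.Canny(smoothed, t1, t2)
```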
- S4. Use a pre-built feature extraction model to perform feature selection on the numerical vector text set and the text layout feature set to obtain a text semantic feature set and a text distribution feature set, respectively.
- In a preferred embodiment, a feature extraction model including a BP neural network is constructed, where the BP neural network comprises an input layer, a hidden layer, and an output layer.
- The BP neural network is a multi-layer feedforward neural network whose main characteristics are the forward transmission of signals and the backward propagation of errors.
- In the forward pass, the input signal is processed layer by layer from the input layer through the hidden layer to the output layer.
- The neuron states of each layer affect only the neuron states of the next layer. If the output layer does not produce the expected output, the network switches to back propagation and adjusts the network weights and thresholds according to the prediction error, so that the predicted output of the network keeps approaching the expected output.
- The input layer is the only data entry point of the entire neural network.
- The number of neuron nodes in the input layer equals the dimension of the numerical vector of the text, and the value of each neuron corresponds to one component of the numerical vector.
- The hidden layer is mainly used to apply non-linear processing to the data received from the input layer; non-linear fitting of the input data based on the activation function effectively ensures the predictive ability of the model.
- The output layer follows the hidden layer and is the only output of the entire model.
- The number of neuron nodes in the output layer equals the number of text categories.
- Further, in a preferred embodiment, the input layer receives the numerical vector text set and the text layout feature set, and the hidden layer performs the following operation on the numerical vector text set and the text layout feature set received by the input layer:
- O_q = f( Σ_i w_iq * X_i )
- where O_q represents the output value of the q-th hidden-layer unit, i denotes an input-layer unit, X_i denotes the input value of input-layer unit i, q denotes a hidden-layer unit, w_iq represents the connection weight between input-layer unit i and hidden-layer unit q, and f is the activation function.
- The output layer receives the output values of the hidden layer and performs the analogous operation, where y_j represents the output value of the j-th output-layer unit.
- Features X_i and X_k are preset as any two feature output values from the numerical vector text set or the text layout feature set.
- The difference between the sensitivity δ_ij of feature X_i and the sensitivity δ_kj of feature X_k is obtained according to the chain rule for partial derivatives of compound functions; when δ_ij > δ_kj, feature X_i discriminates the j-th class of patterns more strongly than feature X_k. Feature selection over features X_i and X_k is completed in this way, yielding the text semantic feature set and the text distribution feature set.
- Thus, the present application uses the constructed feature extraction model including the BP neural network to perform feature selection on the numerical vector text set and the text layout feature set, respectively, obtaining the text semantic feature set and the text distribution feature set.
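- As a concrete illustration of the chain-rule sensitivity computation, the NumPy sketch below evaluates δ_ij = ∂y_j/∂X_i for a one-hidden-layer sigmoid network; the random weights are stand-ins for a trained BP network, so the numbers only demonstrate how δ_ij and δ_kj would be compared:

```python
import numpy as np

def sensitivities(X, w_in, w_out):
    """Average |dy_j/dx_i| over the samples in X for the network
    y = f(W_out · f(W_in · x)) with sigmoid activation f."""
    f = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = f(X @ w_in)                      # hidden-layer outputs O_q
    y = f(h @ w_out)                     # output-layer values y_j
    fp_h = h * (1 - h)                   # f'(hidden pre-activations)
    fp_y = y * (1 - y)                   # f'(output pre-activations)
    # Chain rule: dy_j/dx_i = f'(y_j) * sum_q w_out[q,j] * f'(h_q) * w_in[i,q]
    delta = np.einsum('iq,nq,qj,nj->nij', w_in, fp_h, w_out, fp_y)
    return np.abs(delta).mean(axis=0)    # shape: (features, classes)

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))             # 5 candidate features
S = sensitivities(X, rng.normal(size=(5, 8)), rng.normal(size=(8, 3)))
# Feature X_i beats X_k on class j whenever S[i, j] > S[k, j].
```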
- S5. According to the text semantic feature set and the text distribution feature set, use a random forest model to classify the text of the semi-structured text set and obtain the classification result of the text, thereby completing the text layout of the text.
- The random forest algorithm uses the bagging algorithm's sampling with replacement to draw multiple sample subsets from the original sample and trains multiple decision tree models on these subsets; during training it borrows the random feature subspace method, extracting a subset of features from the feature set to split each decision tree; finally, the multiple decision trees are integrated into an ensemble classifier, which is called a random forest model.
- The random forest algorithm flow is divided into three parts: generation of the sub-sample sets, construction of the decision trees, and voting to produce the result.
- Further, in a preferred embodiment, the original sample is the above-mentioned PDF text set, which is divided according to the number of pages of the PDF texts to form multiple sub-samples, and the text semantic features and text distribution features respectively serve as nodes of the decision trees; the corresponding result is produced by voting.
- Preferably, this application uses the random forest model to classify whether the text layout of a PDF text is multi-column PDF text or title-and-content PDF text.
- The specific implementation steps of the classification are: divide the texts of the PDF text set by cross-validation to obtain sub-sample sets; take the text semantic features and the text distribution features of the texts as child nodes of the decision trees of the random forest model; classify the sub-sample sets according to the child nodes of the decision trees to obtain the classification results of the sub-samples; accumulate the classification results of the sub-samples and take the result with the largest accumulated value as the classification result of the text.
- This completes the text layout of the text, i.e., it is determined whether the PDF text layout is multi-column PDF text or title-and-content PDF text.
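- A minimal sketch of the classification step using scikit-learn, whose RandomForestClassifier implements the bagging-with-replacement and random-feature-subspace scheme described above; the feature matrices and labels are synthetic stand-ins for the selected semantic and distribution features of PDF sub-samples:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
semantic = rng.normal(size=(200, 16))     # stand-in text semantic features
distribution = rng.normal(size=(200, 8))  # stand-in text distribution features
X = np.hstack([semantic, distribution])
y = rng.integers(0, 2, size=200)          # 0 = multi-column, 1 = title+content

# Each tree is trained on a bootstrap sample and splits on a random
# subset of features; the forest classifies by majority vote.
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                                bootstrap=True, random_state=0)
pred = cross_val_predict(forest, X, y, cv=5)  # cross-validated predictions
```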
- The present application also provides an electronic device.
- Referring to FIG. 2, which is a schematic diagram of the internal structure of an electronic device provided by an embodiment of this application.
- In this embodiment, the electronic device 1 may be a PC (Personal Computer), a terminal device such as a smartphone, tablet computer, or portable computer, or a server.
- the electronic device 1 at least includes a memory 11, a processor 12, a communication bus 13, and a network interface 14.
- the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like.
- the memory 11 may be an internal storage unit of the electronic device 1 in some embodiments, such as a hard disk of the electronic device 1.
- In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1.
- the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
- The memory 11 can be used not only to store the application software installed on the electronic device 1 and various types of data, such as the code of the text layout program 01, but also to temporarily store data that has been output or will be output.
- In some embodiments, the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, used to run the program code stored in the memory 11 or to process data, for example, to execute the text layout program 01.
- the communication bus 13 is used to realize the connection and communication between these components.
- the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the electronic device 1 and other electronic devices.
- the electronic device 1 may further include a user interface.
- the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
- the optional user interface may also include a standard wired interface and a wireless interface.
- the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, etc.
- the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the electronic device 1 and to display a visualized user interface.
- FIG. 2 shows only the electronic device 1 with components 11-14 and the text layout program 01. Those skilled in the art will understand that the structure shown in FIG. 2 does not constitute a limitation on the electronic device 1, which may include fewer or more components than shown, or combine certain components, or adopt a different arrangement of components.
- In the embodiment of the electronic device 1 shown in FIG. 2, a text layout program 01 is stored in the memory 11; the processor 12 implements the following steps when executing the text layout program 01 stored in the memory 11:
- Step 1: Obtain a semi-structured text set, and perform preprocessing operations on the semi-structured text set to obtain a numerical vector text set.
- In a preferred embodiment of this application, the semi-structured text is composed of a number of discrete modules with independent semantics; each module contains one and only one aspect of content, i.e., it can be summarized by a noun or noun phrase, and there are obvious non-punctuation separators between the independent semantic modules, which may be spaces, carriage returns, tables, numbers, special format characters, and so on.
- Preferably, the semi-structured text in the preferred embodiment of the present application may be PDF text.
- The PDF text collection is obtained in either of two ways: method one, by retrieving resumes from major recruitment websites; method two, by searching keywords in a corpus.
- Further, the preprocessing operations include deduplication, stop-word removal, word segmentation, and weight calculation.
- The specific implementation steps of the preprocessing operations are:
- a. Deduplication: duplicate texts in the semi-structured text set reduce the accuracy of text classification, so a preferred embodiment of the present application first performs a deduplication operation on the text data set.
- Preferably, the present application uses the Euclidean distance formula to deduplicate the text data set:
- d = sqrt( Σ_j (w_1j - w_2j)^2 )
- where d represents the distance between two pieces of text data, and w_1j and w_2j are the j-th components of any two pieces of text data; when the distance between two pieces of text data is smaller than a preset distance threshold, one of them is deleted.
- Preferably, this application presets the threshold to 0.1.
- b. Stop-word removal: stop words are function words with no substantive meaning; they have no effect on the classification of the text, yet they occur with high frequency and therefore degrade text classification.
- The stop words include commonly used pronouns, prepositions, and the like.
- For example, the stop words may be "的", "在", "不过", and so on.
- This application matches the words of the deduplicated text set one by one against a pre-built stop-word list: when a word in the deduplicated text set matches the stop-word list, the matched word is filtered out; when a word does not match the stop-word list, it is retained.
- The pre-built stop-word list is downloaded from a web page.
- c. Word segmentation: this application matches the words of the stop-word-filtered text set against the entries of a preset dictionary according to a preset strategy, obtains the feature words of the text set, and separates the feature words with spaces. The preset dictionary includes a statistical dictionary and a prefix dictionary.
- The statistical dictionary is constructed from all possible word segments obtained by statistical methods.
- The statistical dictionary counts the frequency with which adjacent characters co-occur in the corpus and computes their mutual information; when the mutual information of adjacent characters is greater than a preset threshold, they are recognized as forming a word, the threshold being 0.6.
- The prefix dictionary contains the prefixes of every word segment in the statistical dictionary.
- For example, the prefixes of the word "Peking University" (北京大学) in the statistical dictionary are "北", "北京", and "北京大", and the prefix of the word "university" (大学) is "大", and so on.
- This application uses the statistical dictionary to obtain the possible segmentations of the stop-word-filtered text set, and uses the prefix dictionary to determine the final segmentation according to the cut positions of the words, thereby obtaining the feature words of the stop-word-filtered text set.
- d. Weight calculation includes:
- The correlation strength between the feature words is calculated by constructing a dependency relationship graph, the importance score of each feature word is calculated from the correlation strength, and the weight of the feature word is obtained. In detail:
- First, the dependency relevance Dep(W_i, W_j) of any two feature words W_i and W_j is computed, where len(W_i, W_j) represents the length of the dependency path between W_i and W_j, and b is a hyperparameter.
- Next, the gravitational attraction f_grav(W_i, W_j) of the feature words W_i and W_j is computed, where tfidf(W) is the TF-IDF value of word W (TF denotes term frequency, IDF the inverse document frequency index), and d is the Euclidean distance between the word vectors of W_i and W_j.
- The correlation strength between feature words W_i and W_j is then obtained as:
- weight(W_i, W_j) = Dep(W_i, W_j) * f_grav(W_i, W_j)
- An undirected graph G = (V, E) is built, where V is the set of vertices and E the set of edges, and the importance score of each feature word W_i is computed over the set of vertices associated with W_i, with λ as the damping coefficient.
- From the importance scores, the weights of the feature words are obtained, so that each feature word is expressed in numerical vector form, yielding the numerical vector text set.
- Step 2: Convert the semi-structured text set into a text image set, and perform contrast enhancement processing and a thresholding operation on the text image set to obtain a target text image set.
- In a preferred embodiment, the text image set is obtained by scanning the text set, so that the layout of the text set can be analyzed.
- Further, contrast refers to the contrast between the maximum and minimum brightness values in the imaging system; low contrast makes image processing more difficult.
- A preferred embodiment of this application adopts a contrast stretching method, which enhances image contrast by increasing the dynamic range of the gray levels.
- Contrast stretching, also called gray-scale stretching, is a commonly used gray-scale transformation.
- In detail, the present application performs gray-scale stretching on specific regions according to the piecewise linear transformation function of the contrast stretching method, further improving the contrast of the output image.
- Contrast stretching is, in essence, a gray-value transformation.
- This application implements the gray-value transformation through linear stretching, which refers to a pixel-level operation with a linear relationship between the input and output gray values. The gray transformation formula is:
- D_b = f(D_a) = a * D_a + b
- where a is the linear slope and b the intercept on the Y axis; D_a denotes the input gray value and D_b the output gray value. When a > 1, the contrast of the output image is enhanced relative to the original; when a < 1, it is weakened.
- Further, the image thresholding operation is an efficient algorithm that binarizes the contrast-enhanced grayscale image using the OTSU algorithm to obtain a binarized image.
- A preferred embodiment of the present application takes the gray level t as the segmentation threshold between the foreground and background of the grayscale image, and assumes the proportion of foreground points in the image is w_0 with average gray level u_0, and the proportion of background points is w_1 with average gray level u_1; the total average gray level of the image is then
- u = w_0 * u_0 + w_1 * u_1
- and the between-class variance of the foreground and background images is
- g = w_0 * (u_0 - u)^2 + w_1 * (u_1 - u)^2 = w_0 * w_1 * (u_0 - u_1)^2
- When the variance g is largest, the difference between foreground and background is greatest, and the gray level t at that point is the optimal threshold; gray values in the contrast-enhanced grayscale image greater than t are set to 255 and gray values smaller than t are set to 0, yielding the binarized image of the contrast-enhanced grayscale image, which is the target text image, and thereby the target text image set.
- Step 3: Detect the target text image set with an edge detection algorithm to obtain a text layout feature set.
- In a preferred embodiment, the basic idea of edge detection is that edge points are those pixels at which the gray level exhibits a step change or a roof change, i.e., where the derivative of the gray level is large or extremal.
- Preferably, this application applies the Canny edge detection algorithm to detect the target text image set.
- The specific detection steps are: smooth the images of the target text image set with a Gaussian filter; compute the gradient magnitude and direction of the smoothed images using finite differences of the first-order partial derivatives, and set the magnitude of non-local-maximum gradient points to zero to obtain the thinned edges of the image; connect the thinned edges using the double-threshold method to obtain the text layout feature set of the target text image set.
- Further, by presetting two thresholds T_1 and T_2 (T_1 < T_2), the present application obtains two threshold edge images N_1[i, j] and N_2[i, j].
- In N_2[i, j], the double-threshold method links the thinned edges into complete contours; when a discontinuity in an edge is reached, connectable edges are searched for in the neighborhood of N_1[i, j] until all discontinuities in N_2[i, j] are connected, thereby obtaining the text layout feature set.
- Step 4: Use a pre-built feature extraction model to perform feature selection on the numerical vector text set and the text layout feature set to obtain a text semantic feature set and a text distribution feature set, respectively.
- In a preferred embodiment, a feature extraction model including a BP neural network is constructed, where the BP neural network comprises an input layer, a hidden layer, and an output layer.
- The BP neural network is a multi-layer feedforward neural network whose main characteristics are the forward transmission of signals and the backward propagation of errors.
- In the forward pass, the input signal is processed layer by layer from the input layer through the hidden layer to the output layer.
- The neuron states of each layer affect only the neuron states of the next layer. If the output layer does not produce the expected output, the network switches to back propagation and adjusts the network weights and thresholds according to the prediction error, so that the predicted output of the network keeps approaching the expected output.
- The input layer is the only data entry point of the entire neural network.
- The number of neuron nodes in the input layer equals the dimension of the numerical vector of the text, and the value of each neuron corresponds to one component of the numerical vector.
- The hidden layer is mainly used to apply non-linear processing to the data received from the input layer; non-linear fitting of the input data based on the activation function effectively ensures the predictive ability of the model.
- The output layer follows the hidden layer and is the only output of the entire model.
- The number of neuron nodes in the output layer equals the number of text categories.
- Further, in a preferred embodiment, the input layer receives the numerical vector text set and the text layout feature set, and the hidden layer performs the following operation on the numerical vector text set and the text layout feature set received by the input layer:
- O_q = f( Σ_i w_iq * X_i )
- where O_q represents the output value of the q-th hidden-layer unit, i denotes an input-layer unit, X_i denotes the input value of input-layer unit i, q denotes a hidden-layer unit, w_iq represents the connection weight between input-layer unit i and hidden-layer unit q, and f is the activation function.
- The output layer receives the output values of the hidden layer and performs the analogous operation, where y_j represents the output value of the j-th output-layer unit.
- Features X_i and X_k are preset as any two feature output values from the numerical vector text set or the text layout feature set.
- The difference between the sensitivity δ_ij of feature X_i and the sensitivity δ_kj of feature X_k is obtained according to the chain rule for partial derivatives of compound functions; when δ_ij > δ_kj, feature X_i discriminates the j-th class of patterns more strongly than feature X_k. Feature selection over features X_i and X_k is completed in this way, yielding the text semantic feature set and the text distribution feature set.
- Thus, the present application uses the feature extraction model including the BP neural network to perform feature selection on the numerical vector text set and the text layout feature set, respectively, obtaining the text semantic feature set and the text distribution feature set.
- Step 5: According to the text semantic feature set and the text distribution feature set, use a random forest model to classify the text of the semi-structured text set and obtain the classification result of the text, thereby completing the text layout of the text.
- The random forest algorithm uses the bagging algorithm's sampling with replacement to draw multiple sample subsets from the original sample and trains multiple decision tree models on these subsets; during training it borrows the random feature subspace method, extracting a subset of features from the feature set to split each decision tree; finally, the multiple decision trees are integrated into an ensemble classifier, which is called a random forest model.
- The random forest algorithm flow is divided into three parts: generation of the sub-sample sets, construction of the decision trees, and voting to produce the result.
- Further, in a preferred embodiment, the original sample is the above-mentioned PDF text set, which is divided according to the number of pages of the PDF texts to form multiple sub-samples, and the text semantic features and text distribution features respectively serve as nodes of the decision trees; the corresponding result is produced by voting.
- Preferably, this application uses the random forest model to classify whether the text layout of a PDF text is multi-column PDF text or title-and-content PDF text.
- The specific implementation steps of the classification are: divide the texts of the PDF text set by cross-validation to obtain sub-sample sets; take the text semantic features and the text distribution features of the texts as child nodes of the decision trees of the random forest model; classify the sub-sample sets according to the child nodes of the decision trees to obtain the classification results of the sub-samples; accumulate the classification results of the sub-samples and take the result with the largest accumulated value as the classification result of the text.
- This completes the text layout of the text, i.e., it is determined whether the PDF text layout is multi-column PDF text or title-and-content PDF text.
- Referring to FIG. 3, which is a schematic diagram of the modules of the text layout apparatus 02 of this application; in this embodiment, the text layout apparatus 02 can be divided into a text preprocessing module 10, a feature extraction module 20, and a text classification module 30. Illustratively:
- The text preprocessing module 10 is configured to: obtain a semi-structured text set and perform preprocessing operations on it to obtain a numerical vector text set; convert the semi-structured text set into a text image set, and perform contrast enhancement processing and a thresholding operation on the text image set to obtain a target text image set; and detect the target text image set with an edge detection algorithm to obtain a text layout feature set.
- The feature extraction module 20 is configured to use a pre-built feature extraction model to perform feature selection on the numerical vector text set and the text layout feature set to obtain a text semantic feature set and a text distribution feature set, respectively.
- The text classification module 30 is configured to use a random forest model to classify the text in the semi-structured text set according to the text semantic feature set and the text distribution feature set, obtaining the classification result of the text and thereby completing the text layout of the text.
- The functions and operation steps implemented by the above program modules (the text preprocessing module 10, the feature extraction module 20, and the text classification module 30) when executed are substantially the same as those of the foregoing embodiments and are not repeated here.
- An embodiment of the present application also proposes a computer-readable storage medium, which may be non-volatile or volatile; a text layout program is stored on the computer-readable storage medium,
- and the text layout program can be executed by one or more processors to implement the following operations:
- obtaining a semi-structured text set, and performing preprocessing operations on the semi-structured text set to obtain a numerical vector text set;
- converting the semi-structured text set into a text image set, and performing contrast enhancement processing and a thresholding operation on the text image set to obtain a target text image set;
- detecting the target text image set with an edge detection algorithm to obtain a text layout feature set;
- performing feature selection on the numerical vector text set and the text layout feature set with a pre-built feature extraction model to obtain a text semantic feature set and a text distribution feature set, respectively;
- classifying the text in the semi-structured text set with a random forest model according to the text semantic feature set and the text distribution feature set to obtain the classification result of the text, thereby completing the text layout of the text.
- The specific implementation of the computer-readable storage medium of this application is substantially the same as the embodiments of the electronic device and method above and is not repeated here.
Abstract
A text layout method and apparatus, an electronic device, and a computer-readable storage medium, realizing accurate layout of the characters in a text. The method includes: obtaining a semi-structured text set and performing preprocessing operations on it to obtain a numerical vector text set, and converting the semi-structured text set into a text image set and preprocessing the text image set to obtain a text layout feature set; performing feature selection on the numerical vector text set and the text layout feature set with a pre-built feature extraction model to obtain a text semantic feature set and a text distribution feature set, respectively; and classifying the text in the semi-structured text set with a random forest model according to the text semantic feature set and the text distribution feature set to obtain the classification result of the text, thereby completing the text layout of the text.
Description
This application claims priority to the Chinese patent application filed with the China Patent Office on September 2, 2019, with application number 201910829790.7 and invention title "Text layout method and apparatus, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
It should be noted that the serial numbers of the above embodiments of this application are for description only and do not represent the relative merits of the embodiments. Moreover, the terms "comprise", "include", and any other variants thereof herein are intended to cover non-exclusive inclusion, so that a process, apparatus, article, or method including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article, or method. Without further limitation, an element qualified by the phrase "including a ..." does not exclude the existence of other identical elements in the process, apparatus, article, or method that includes that element.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as ROM/RAM, magnetic disk, or optical disk), including several instructions to cause a terminal device (which may be a mobile phone, computer, server, or network device, etc.) to execute the methods described in the embodiments of this application.
The above are only preferred embodiments of this application and do not therefore limit the scope of its patent; any equivalent structural or equivalent flow transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of this application.
Claims (20)
- 一种文字布局方法,其中,所述方法包括:获取半结构化的文本集,对所述半结构化的文本集进行预处理操作,得到数值向量文本集;将所述半结构化的文本集转换为文本图像集,对所述文本图像集进行对比度增强处理和阈值化操作,得到目标文本图像集;通过边缘检测算法对所述目标文本图像集进行检测,得到文本布局特征集;利用预先构建的特征提取模型对所述数值向量文本集和所述文本布局特征集进行特征选择,分别得到文本语义特征集和文本分布特征集;根据所述文本语义特征集和所述文本分布特征集,利用随机森林模型对所述半结构化的文本集中的文本进行分类,得到所述文本的分类结果,从而完成所述文本的文字布局。
- 如权利要求1所述的文字布局方法,其中,所述预处理操作包括去重、去停用词、分词以及权重计算;其中,所述去重包括:利用欧式距离公式对所述文本集进行去重操作,所述欧式距离公式如下:其中,d表示所述文本数据之间的距离,w 1j和w 2j分别为任意2个文档数据;所述去停用词包括:通过预先构建好的停用词表和去重后的所述文本集中词语进行一一匹配,其中,当去重后的所述文本集中词语与所述停用词表匹配成功时,将所述匹配成功的词语过滤,当去重后的所述文本集中词语与所述停用词表匹配不成功时,将所述匹配不成功的词语保留;所述分词包括:通过预设的策略将去停用词后的所述文本集中的词语与预设的词典中的词条进行匹配,得到去停用词后的所述文本集的特征词,并将所述特征词用空格符号隔开;及所述权重计算包括:通过构建依存关系图计算所述特征词之间的关联强度,并通过所述关联强度计算出所述特征词的重要度得分,得到所述特征词的权重。
- 如权利要求1所述的文字布局方法,其中,所述通过边缘检测算法对所述目标文本图像集进行检测,得到所述文本布局特征集,包括:通过高斯滤波器对所述目标文本图像集的图像进行平滑滤波;利用一阶偏导的有限差分计算平滑滤波后的所述图像的梯度幅度和方向,并将所述梯度非局部极大值点的幅度置为零,得到所述图像细化的边缘;通过双阙值法将所述细化的边缘进行连接,得到所述文本布局特征集。
- 如权利要求1所述的文字布局方法,其中,所述利用预先构建的特征提取模型对所述数值向量文本集和所述文本布局特征集进行特征选择,分别得到文本语义特征集和文本分布特征集,包括:构建包括BP神经网络的特征提取模型,其中,所述BP神经网络包含输入层、隐藏层以及输出层;其中:所述输入层接收所述数值向量文本集和所述文本布局特征集;所述隐藏层对输入层接收的所述数值向量文本集和所述文本布局特征集执行如下操作:所述输出层接收所述隐藏层的输出值,并执行如下操作:预设特征X i以及特征X k为所述数值向量文本集或所述文本布局特征集中任意的两个特征输出值。根据复合函数求偏导数的链式法则求出所述特征X i的灵敏度δ ij和所述特征X k的灵敏度δ kj之差,完成对特征X i和特征X k的特征选择,从而得到所述文本语义特征集和文本分布特征。
- 如权利要求1至4中任一项所述的文字布局方法,其中,所述根据所述文本语义特征集和所述文本分布特征集,利用随机森林模型对所述半结构化的文本集中的文本进行分类,得到所述文本的分类结果,从而完成所述文本的文字布局,包括:通过交叉认证对所述半结构化的文本集中的文本进行划分,得到子样本集;将所述文本中的文本语义特征和所述文本分布特征作为所述随机森林模型的决策树子节点;根据所述决策树的子节点对所述子样本集进行分类,得到所述子样本的分类结果,将所述子样本的分类结果进行累加,并将累加值最大的子样本作为所述文本的分类结果,从而完成所述文本的文字布局。
- 如权利要求1所述的文字布局方法,其中,所述半结构化文本集由若干个具有独立语义的、离散的模块内容模块组成。
- 如权利要求2所述的文字布局方法,其中,所述预设的词典包含统计词典和前缀词典;所述分词进一步包括:利用所述统计词典得到去停用词后的所述文本集的可能的分词结果,并通过所述前缀词典根据分词的切分位置,得到最终的切分形式,从而得到去停用词后的所述文本集的特征词。
- 一种文字布局装置,其中,该装置包括:文本预处理模块:用于获取半结构化的文本集,对所述半结构化的文本集进行预处理操作,得到数值向量文本集;将所述半结构化的文本集转换为文本图像集,对所述文本图像集进行对比度增强处理和阈值化操作,得到目标文本图像集;通过边缘检测算法对所述目标文本图像集进行检测,得到文本布局特征集;特征提取模块:用于利用预先构建的特征提取模型对所述数值向量文本集和所述文本布局特征集进行特征选择,分别得到文本语义特征集和文本分布特征集;文本分类模块:用于根据所述文本语义特征集和所述文本分布特征集,利用随机森林模型对所述半结构化的文本集中的文本进行分类,得到所述文本的分类结果,从而完成所述文本的文字布局。
- 一种电子设备,其中,所述电子设备包括存储器和处理器,所述存储器上存储有可在所述处理器上运行的文字布局程序,所述文字布局程序被所述处理器执行时实现如下步骤:获取半结构化的文本集,对所述半结构化的文本集进行预处理操作,得到数值向量文本集;将所述半结构化的文本集转换为文本图像集,对所述文本图像集进行对比度增强处理和阈值化操作,得到目标文本图像集;通过边缘检测算法对所述目标文本图像集进行检测,得到文本布局特征集;利用预先构建的特征提取模型对所述数值向量文本集和所述文本布局特征集进行特征选择,分别得到文本语义特征集和文本分布特征集;根据所述文本语义特征集和所述文本分布特征集,利用随机森林模型对所述半结构化的文本集中的文本进行分类,得到所述文本的分类结果,从而完成所述文本的文字布局。
- 如权利要求9所述的电子设备,其中,所述预处理操作包括去重、去停用词、分词以及权重计算;其中,所述去重包括:利用欧式距离公式对所述文本集进行去重操作,所述欧式距离公式如下:其中,d表示所述文本数据之间的距离,w 1j和w 2j分别为任意2个文档数据;所述去停用词包括:通过预先构建好的停用词表和去重后的所述文本集中词语进行一一匹配,其中,当去重后的所述文本集中词语与所述停用词表匹配成功时,将所述匹配成功的词语过滤,当去重后的所述文本集中词语与所述停用词表匹配不成功时,将所述匹配不成功的词语保留;所述分词包括:通过预设的策略将去停用词后的所述文本集中的词语与预设的词典中的词条进行匹配,得到去停用词后的所述文本集的特征词,并将所述特征词用空格符号隔开;及所述权重计算包括:通过构建依存关系图计算所述特征词之间的关联强度,并通过所述关联强度计算出所述特征词的重要度得分,得到所述特征词的权重。
- 如权利要求9所述的电子设备,其中,所述通过边缘检测算法对所述目标文本图像集进行检测,得到所述文本布局特征集,包括:通过高斯滤波器对所述目标文本图像集的图像进行平滑滤波;利用一阶偏导的有限差分计算平滑滤波后的所述图像的梯度幅度和方向,并将所述梯度非局部极大值点的幅度置为零,得到所述图像细化的边缘;通过双阙值法将所述细化的边缘进行连接,得到所述文本布局特征集。
- 如权利要求9所述的电子设备,其中,所述利用预先构建的特征提取模型对所述数值向量文本集和所述文本布局特征集进行特征选择,分别得到文本语义特征集和文本分布特征集,包括:构建包括BP神经网络的特征提取模型,其中,所述BP神经网络包含输入层、隐藏层以及输出层;其中:所述输入层接收所述数值向量文本集和所述文本布局特征集;所述隐藏层对输入层接收的所述数值向量文本集和所述文本布局特征集执行如下操作:所述输出层接收所述隐藏层的输出值,并执行如下操作:预设特征X i以及特征X k为所述数值向量文本集或所述文本布局特征集中任意的两个特征输出值。根据复合函数求偏导数的链式法则求出所述特征X i的灵敏度δ ij和所述特征X k的灵敏度δ kj之差,完成对特征X i和特征X k的特征选择,从而得到所述文本语义特征集和文本分布特征。
- 如权利要求9至12任一项所述的电子设备,其中,所述根据所述文本语义特征集和所述文本分布特征集,利用随机森林模型对所述半结构化的文本集中的文本进行分类,得到所述文本的分类结果,从而完成所述文本的文字布局,包括:通过交叉认证对所述半结构化的文本集中的文本进行划分,得到子样本集;将所述文本中的文本语义特征和所述文本分布特征作为所述随机森林模型的决策树子节点;根据所述决策树的子节点对所述子样本集进行分类,得到所述子样本的分类结果,将所述子样本的分类结果进行累加,并将累加值最大的子样本作为所述文本的分类结果,从而完成所述文本的文字布局。
- 如权利要求9所述的电子设备,其中,所述半结构化文本集由若干个具有独立语义的、离散的模块内容模块组成。
- 如权利要求10所述的电子设备,其中,所述预设的词典包含统计词典和前缀词典;所述分词进一步包括:利用所述统计词典得到去停用词后的所述文本集的可能的分词结果,并通过所述前缀词典根据分词的切分位置,得到最终的切分形式,从而得到去停用词后的所述文本集的特征词。
- 一种计算机可读存储介质,其中,所述计算机可读存储介质上存储有文字布局程序,所述文字布局程序可被一个或者多个处理器执行,以实现如下步骤:获取半结构化的文本集,对所述半结构化的文本集进行预处理操作,得到数值向量文本集;将所述半结构化的文本集转换为文本图像集,对所述文本图像集进行对比度 增强处理和阈值化操作,得到目标文本图像集;通过边缘检测算法对所述目标文本图像集进行检测,得到文本布局特征集;利用预先构建的特征提取模型对所述数值向量文本集和所述文本布局特征集进行特征选择,分别得到文本语义特征集和文本分布特征集;根据所述文本语义特征集和所述文本分布特征集,利用随机森林模型对所述半结构化的文本集中的文本进行分类,得到所述文本的分类结果,从而完成所述文本的文字布局。
- 如权利要求16所述的计算机可读存储介质,其中,所述预处理操作包括去重、去停用词、分词以及权重计算;其中,所述去重包括:利用欧式距离公式对所述文本集进行去重操作,所述欧式距离公式如下:其中,d表示所述文本数据之间的距离,w 1j和w 2j分别为任意2个文档数据;所述去停用词包括:通过预先构建好的停用词表和去重后的所述文本集中词语进行一一匹配,其中,当去重后的所述文本集中词语与所述停用词表匹配成功时,将所述匹配成功的词语过滤,当去重后的所述文本集中词语与所述停用词表匹配不成功时,将所述匹配不成功的词语保留;所述分词包括:通过预设的策略将去停用词后的所述文本集中的词语与预设的词典中的词条进行匹配,得到去停用词后的所述文本集的特征词,并将所述特征词用空格符号隔开;及所述权重计算包括:通过构建依存关系图计算所述特征词之间的关联强度,并通过所述关联强度计算出所述特征词的重要度得分,得到所述特征词的权重。
- The computer-readable storage medium according to claim 16, wherein detecting the target text image set through the edge detection algorithm to obtain the text layout feature set comprises: smoothing the images of the target text image set with a Gaussian filter; computing the gradient magnitude and direction of the smoothed images using finite differences of first-order partial derivatives, and setting the magnitude of non-local-maximum gradient points to zero to obtain thinned edges of the images; and connecting the thinned edges through a double-threshold method to obtain the text layout feature set.
- The computer-readable storage medium according to claim 16, wherein using the pre-constructed feature extraction model to perform feature selection on the numeric-vector text set and the text layout feature set, obtaining a text semantic feature set and a text distribution feature set respectively, comprises: constructing a feature extraction model comprising a BP neural network, wherein the BP neural network contains an input layer, a hidden layer, and an output layer; wherein: the input layer receives the numeric-vector text set and the text layout feature set; the hidden layer performs the following operation on the numeric-vector text set and the text layout feature set received by the input layer; the output layer receives the output values of the hidden layer and performs the following operation: features X_i and X_k are preset as any two feature output values in the numeric-vector text set or the text layout feature set; the difference between the sensitivity δ_ij of feature X_i and the sensitivity δ_kj of feature X_k is computed according to the chain rule for partial derivatives of composite functions, completing the feature selection for features X_i and X_k and thereby obtaining the text semantic feature set and the text distribution feature set.
- The computer-readable storage medium according to any one of claims 16 to 19, wherein classifying the texts in the semi-structured text set with a random forest model according to the text semantic feature set and the text distribution feature set, obtaining a classification result for the texts and thereby completing the text layout, comprises: partitioning the texts in the semi-structured text set through cross-validation to obtain sub-sample sets; taking the text semantic features and the text distribution features of the texts as child nodes of the decision trees of the random forest model; classifying the sub-sample sets according to the child nodes of the decision trees to obtain classification results for the sub-samples; and accumulating the classification results of the sub-samples and taking the result with the largest accumulated value as the classification result of the text, thereby completing the text layout.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910829790.7A CN110704687B (zh) | 2019-09-02 | 2019-09-02 | Text layout method, device, and computer-readable storage medium |
CN201910829790.7 | 2019-09-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021043087A1 true WO2021043087A1 (zh) | 2021-03-11 |
Family
ID=69193845
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/112335 WO2021043087A1 (zh) | 2019-09-02 | 2020-08-30 | Text layout method and apparatus, electronic device, and computer-readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110704687B (zh) |
WO (1) | WO2021043087A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110704687B (zh) * | 2019-09-02 | 2023-08-11 | 平安科技(深圳)有限公司 | Text layout method, device, and computer-readable storage medium |
CN111833303B (zh) * | 2020-06-05 | 2023-07-25 | 北京百度网讯科技有限公司 | Product detection method and apparatus, electronic device, and storage medium |
CN112149653B (zh) * | 2020-09-16 | 2024-03-29 | 北京达佳互联信息技术有限公司 | Information processing method and apparatus, electronic device, and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101777060B (zh) * | 2009-12-23 | 2012-05-23 | 中国科学院自动化研究所 | Web page classification method and system based on visual features of web pages |
CN102831244B (zh) * | 2012-09-13 | 2015-09-30 | 重庆立鼎科技有限公司 | Classification and retrieval method for real-estate document images |
CN103544475A (zh) * | 2013-09-23 | 2014-01-29 | 方正国际软件有限公司 | Layout type recognition method and system |
US9298981B1 (en) * | 2014-10-08 | 2016-03-29 | Xerox Corporation | Categorizer assisted capture of customer documents using a mobile device |
CN107491730A (zh) * | 2017-07-14 | 2017-12-19 | 浙江大学 | Laboratory test report recognition method based on image processing |
US11106716B2 (en) * | 2017-11-13 | 2021-08-31 | Accenture Global Solutions Limited | Automatic hierarchical classification and metadata identification of document using machine learning and fuzzy matching |
- 2019-09-02: CN application CN201910829790.7A filed; patent CN110704687B (zh); legal status: Active
- 2020-08-30: PCT application PCT/CN2020/112335 filed as WO2021043087A1 (zh); legal status: Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102750541A (zh) * | 2011-04-22 | 2012-10-24 | 北京文通科技有限公司 | Document image classification and recognition method and device |
US8831361B2 (en) * | 2012-03-09 | 2014-09-09 | Ancora Software Inc. | Method and system for commercial document image classification |
CN102880857A (zh) * | 2012-08-29 | 2013-01-16 | 华东师范大学 | SVM-based method for recognizing document image layout information |
CN109344815A (zh) * | 2018-12-13 | 2019-02-15 | 深源恒际科技有限公司 | Document image classification method |
CN110135264A (zh) * | 2019-04-16 | 2019-08-16 | 深圳壹账通智能科技有限公司 | Data entry method and apparatus, computer device, and storage medium |
CN110704687A (zh) * | 2019-09-02 | 2020-01-17 | 平安科技(深圳)有限公司 | Text layout method, device, and computer-readable storage medium |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113361521A (zh) * | 2021-06-10 | 2021-09-07 | 京东数科海益信息科技有限公司 | Scene image detection method and device |
CN113361521B (zh) * | 2021-06-10 | 2024-04-09 | 京东科技信息技术有限公司 | Scene image detection method and device |
CN114999575A (zh) * | 2022-05-27 | 2022-09-02 | 爱科思(北京)生物科技有限公司 | Biological information data management system |
Also Published As
Publication number | Publication date |
---|---|
CN110704687B (zh) | 2023-08-11 |
CN110704687A (zh) | 2020-01-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021043087A1 (zh) | Text layout method and apparatus, electronic device, and computer-readable storage medium | |
CN108804512B (zh) | Apparatus and method for generating a text classification model, and computer-readable storage medium | |
CN109255118B (zh) | Keyword extraction method and device | |
CN107451126B (zh) | Synonym screening method and system | |
WO2020237856A1 (zh) | Knowledge-graph-based intelligent question answering method and apparatus, and computer storage medium | |
CN113011533A (zh) | Text classification method and apparatus, computer device, and storage medium | |
CN107168954B (zh) | Text keyword generation method and device, electronic device, and readable storage medium | |
CN109902175A (zh) | Text classification method and system based on a neural network structure model | |
WO2021051518A1 (zh) | Neural-network-model-based text data classification method and apparatus, and storage medium | |
Se et al. | Predicting the sentimental reviews in tamil movie using machine learning algorithms | |
US20090276378A1 (en) | System and Method for Identifying Document Structure and Associated Metainformation and Facilitating Appropriate Processing | |
CN110765765B (zh) | Artificial-intelligence-based method and apparatus for extracting key contract clauses, and storage medium | |
CN110765761A (zh) | Artificial-intelligence-based method and apparatus for checking sensitive words in contracts, and storage medium | |
CN110083832B (zh) | Method, apparatus, and device for identifying article reposting relationships, and readable storage medium | |
CN112559747A (zh) | Event classification processing method and apparatus, electronic device, and storage medium | |
CN104462229A (zh) | Event classification method and device | |
Dong et al. | An adult image detection algorithm based on Bag-of-Visual-Words and text information | |
US20200364259A1 (en) | Image retrieval | |
Tian et al. | Image classification based on the combination of text features and visual features | |
Samsudin et al. | Mining opinion in online messages | |
CN104794209A (zh) | Chinese microblog sentiment classification method and system based on Markov logic networks | |
Wilkinson et al. | A novel word segmentation method based on object detection and deep learning | |
Hassan et al. | Roman-urdu news headline classification with ir models using machine learning algorithms | |
US20240005690A1 (en) | Generating article polygons within newspaper images for extracting actionable data | |
US20190095525A1 (en) | Extraction of expression for natural language processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20860757; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: PCT application non-entry in European phase | Ref document number: 20860757; Country of ref document: EP; Kind code of ref document: A1 |