CN112084753A - Method and system for assisting in editing document - Google Patents

Method and system for assisting in editing document

Info

Publication number
CN112084753A
Authority
CN
China
Prior art keywords
text
node
unit
client
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010963770.1A
Other languages
Chinese (zh)
Other versions
CN112084753B
Inventor
李延
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Qixing Tian Patent Operation Management Co ltd
Original Assignee
Suzhou Qixing Tian Patent Operation Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Qixing Tian Patent Operation Management Co ltd
Priority to CN202110710047.7A (published as CN113255303B)
Priority to CN202110672721.7A (published as CN113312884B)
Priority to CN202110674052.7A (published as CN113221516B)
Priority to CN202110755500.6A (published as CN114186534A)
Priority to CN202010963770.1A (published as CN112084753B)
Publication of CN112084753A
Application granted
Publication of CN112084753B
Priority to US17/447,576 (published as US20220083724A1)
Legal status: Active

Classifications

    • G06F40/14 Tree-structured documents
    • G06F40/16 Automatic learning of transformation rules, e.g. from examples
    • G06F40/137 Hierarchical processing, e.g. outlines
    • G06F40/166 Editing, e.g. inserting or deleting
    • G06F40/197 Version control
    • G06F40/216 Parsing using statistical methods
    • G06F40/279 Recognition of textual entities
    • G06F40/30 Semantic analysis
    • G06F40/35 Discourse or dialogue representation
    • G06N20/00 Machine learning
    • G06N3/04 Neural networks: architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Abstract

Embodiments of this specification disclose a method for assisting in editing a document, applied to a client, comprising the following steps: receiving and displaying a text structure of a second text obtained by a server based on a first text, wherein the first text comprises at least one discussion and each discussion comprises at least one key point; the text structure of the second text is a tree structure and comprises at least one structure node corresponding to the at least one discussion and/or the at least one key point; the second text further comprises at least one text unit corresponding to the at least one structure node, and the at least one text unit is used to explain the first text; when a structure node is detected to be triggered, generating an acquisition request for the target text unit corresponding to that structure node and sending the acquisition request to the server; and receiving and displaying the target text unit, corresponding to the structure node, obtained by the server.

Description

Method and system for assisting in editing document
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a method and a system for assisting document editing.
Background
With the rapid development of science and technology and the fast pace at which knowledge is renewed, a large number of documents are needed for technical communication and knowledge dissemination. Limited by writing skill and editing time, some technical personnel edit documents with low efficiency and poor quality.
Therefore, a method and a system for assisting in editing a document are needed to improve the efficiency and quality of editing the document.
Disclosure of Invention
One aspect of the embodiments of this specification provides a method for assisting in editing a document, applied to a client, comprising: receiving and displaying a text structure of a second text obtained by a server based on a first text, the first text comprising at least one discussion and each discussion comprising at least one key point; the text structure of the second text is a tree structure and comprises at least one structure node corresponding to the at least one discussion and/or the at least one key point, the structure node being generated through manual input or through a structure node generation model; the structure node generation model is a machine learning model whose input features comprise the content features of the upper-level structure node of the structure node and the content features of its same-level structure nodes; the second text further comprises at least one text unit corresponding to the at least one structure node, the at least one text unit being used to explain the first text; when a structure node is detected to be triggered, generating an acquisition request for the target text unit corresponding to the structure node and sending the acquisition request to the server; and receiving and displaying the target text unit obtained by the server.
Another aspect of the embodiments of this specification provides a system for assisted document editing, comprising: a text structure receiving module configured to receive and display a text structure of a second text obtained by a server based on a first text, the first text comprising at least one discussion and each discussion comprising at least one key point, wherein the text structure of the second text is a tree structure and comprises at least one structure node corresponding to the at least one discussion and/or the at least one key point, the structure node being generated through manual input or through a structure node generation model, the structure node generation model being a machine learning model whose input features comprise the content features of the upper-level structure node of the structure node and the content features of its same-level structure nodes, and the second text further comprises at least one text unit corresponding to the at least one structure node, the at least one text unit being used to explain the first text; a text unit request module configured to generate, when a structure node is detected to be triggered, an acquisition request for the target text unit corresponding to the structure node and send the acquisition request to the server; and a text unit display module configured to receive and display the target text unit obtained by the server.
One aspect of the embodiments of this specification provides a method for assisting in editing a document, applied to a server, comprising: obtaining a first text, the first text comprising one or more discussions and each discussion comprising one or more key points; obtaining a text structure of a second text based on the first text, wherein the text structure of the second text is a tree structure and comprises at least one structure node corresponding to the at least one discussion and/or the at least one key point, the structure node being generated through manual input or through a structure node generation model, the structure node generation model being a machine learning model whose input features comprise the content features of the upper-level structure node of the structure node and the content features of its same-level structure nodes, and the second text further comprises at least one text unit corresponding to the at least one structure node, the at least one text unit being used to explain the first text; sending the text structure of the second text to a client; receiving an acquisition request, generated by the client, for a target text unit corresponding to a structure node; and in response to the acquisition request, obtaining the target text unit and sending it to the client.
Another aspect of the embodiments of this specification provides a system for assisted document editing, comprising: a first text acquisition module configured to obtain a first text, the first text comprising one or more discussions and each discussion comprising one or more key points; a text structure generation module configured to obtain a text structure of a second text based on the first text, wherein the text structure of the second text is a tree structure and comprises at least one structure node corresponding to the at least one discussion and/or the at least one key point, the structure node being generated through manual input or through a structure node generation model, the structure node generation model being a machine learning model whose input features comprise the content features of the upper-level structure node of the structure node and the content features of its same-level structure nodes, and the second text further comprises at least one text unit corresponding to the at least one structure node, the at least one text unit being used to explain the first text; a text structure sending module configured to send the text structure of the second text to a client; a request receiving module configured to receive an acquisition request, generated by the client, for a target text unit corresponding to a structure node; and a text unit sending module configured to respond to the acquisition request by obtaining the target text unit and sending it to the client.
Another aspect of embodiments of the present specification provides a computer-readable storage medium, characterized in that the storage medium stores computer instructions that, when executed by a processor, implement a method of document-assisted editing.
Drawings
This specification will be further explained by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, like numerals indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a document assisted editing system, shown in accordance with some embodiments of the present description;
FIG. 2 is an exemplary flow diagram of a method for document-assisted editing applied to a server, shown in accordance with some embodiments of the present description;
FIG. 3 is an exemplary flow diagram of a method for document assisted editing applied to a client, shown in accordance with some embodiments of the present description;
FIG. 4 is a schematic illustration of document assisted editing, shown in accordance with some embodiments of the present description;
FIG. 5 is a schematic diagram of a method for generating a structure node according to a structure node generative model shown in some embodiments of the present description;
FIG. 6 is an exemplary flow diagram of a method of editing a text unit, shown in some embodiments herein;
FIG. 7a is a schematic illustration of an edit text unit in accordance with some embodiments of the present description;
FIG. 7b is a schematic diagram illustrating a difference in version of a text structure according to some embodiments of the present description;
FIG. 7c is a schematic diagram illustrating differences in versions of text units according to some embodiments of the present description.
Detailed Description
In order to illustrate the technical solutions of the embodiments of this specification more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some examples or embodiments of this specification, and a person skilled in the art can apply this specification to other similar scenarios based on these drawings without creative effort. Unless apparent from the context or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit" and/or "module" as used in this specification are ways of distinguishing components, elements, parts or assemblies at different levels. However, these terms may be replaced by other expressions that serve the same purpose.
As used in this specification and the appended claims, the singular forms "a", "an", and "the" include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this specification to illustrate operations performed by a system according to embodiments of this specification. It should be understood that the operations are not necessarily performed exactly in the order shown. Instead, steps may be processed in reverse order or concurrently. Moreover, other operations may be added to these processes, or one or more steps may be removed from them.
FIG. 1 is a schematic diagram of an application scenario of a document assisted editing system according to some embodiments of the present description.
The document assisted editing system may generate a text structure of a second text of a document based on a first text of the document and assist a user in editing the text units of the second text. For example, the document assisted editing system may generate a specification outline based on the claims of a patent application file and assist the user in editing the specification content. For another example, the document assisted editing system may generate an analysis description outline based on the analysis conclusions of an enterprise analysis report and assist the user in editing the analysis description content.
As shown in FIG. 1, an application scenario diagram 100 of a document assisted editing system may include a server 110, a network 120, a client 130, and a database 140. The server 110 may include a processing device 112.
In some embodiments, server 110 may be used to process information and/or data related to data processing. In some embodiments, server 110 may access information and/or data stored in the client 130 and the database 140 via the network 120. For example, the server 110 may obtain the first text from the database 140 via the network 120. As another example, the server may receive, via the network 120, a first text input by a user at the client 130. In some embodiments, server 110 may connect directly to the client 130 and/or the database 140 to access the information and/or data stored therein. For example, the server 110 may receive an acquisition request, generated by a client, for a target text unit corresponding to a structure node. The server 110 may be a stand-alone server or a server group. The server group can be centralized or distributed (e.g., server 110 may be a distributed system). In some embodiments, the server 110 may be local or remote. In some embodiments, the server 110 may run on a cloud platform. For example, the cloud platform may include one or any combination of a private cloud, a public cloud, a hybrid cloud, a community cloud, a decentralized cloud, an internal cloud, and the like.
In some embodiments, the server 110 may include a processing device 112. The processing device 112 may process data and/or information to perform one or more of the functions described herein. For example, the processing device 112 may obtain multiple sets of sample data based on completed documents to train the structure node generation model. For another example, the processing device 112 may obtain a text structure of the second text based on the first text through the trained structure node generation model. As another example, the processing device 112 may obtain the target text unit and send it to the client 130 in response to an acquisition request. In some embodiments, the processing device 112 may include one or more sub-processing devices (e.g., single-core or multi-core processing devices). By way of example only, the processing device 112 may include a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
In some embodiments, the network 120 may facilitate the exchange of data and/or information, which may include a first text, a text unit type, text unit requirements, a second text, and so on. In some embodiments, one or more components in the application scenario 100 (e.g., the server 110, the client 130, the database 140) may send data and/or information to other components in the application scenario 100 over the network 120. In some embodiments, network 120 may be any type of wired or wireless network. For example, network 120 may include a cable network, a wired network, a fiber-optic network, a telecommunications network, an intranet, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, the like, or any combination thereof. In some embodiments, network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base stations and/or Internet exchange points 120-1, 120-2, …, through which one or more components of the application scenario 100 may connect to the network 120 to exchange data and/or information.
In some embodiments, client 130 may be a computing device or a group of computing devices. In some embodiments, the client 130 has an input function that allows a user to input data, for example, to enter the first text or to enter the content of a text unit of the second text. The computing device may include one or any combination of a mobile phone 130-1, a tablet 130-2, a laptop 130-3, a desktop computer 130-4, and the like. The group of computing devices may be centralized or distributed. In some embodiments, the client 130 may send the entered first text to the server 110; accordingly, the server 110 may determine, based on the input first text, the text structure of the second text to send to the client 130. In some embodiments, the client 130 also has a display function and may be configured to display the text structure of the second text and the target text unit obtained by the server.
In some embodiments, the document assisted editing system comprises: the device comprises a text structure receiving module, a text unit requesting module, a text unit displaying module and a second text sending module.
The text structure receiving module is used for receiving and displaying a text structure of a second text acquired by the server based on the first text; the first text includes at least one discussion, each discussion including at least one keypoint.
In some embodiments, the text structure of the second text is a tree structure and includes at least one structure node corresponding to at least one discussion and/or at least one key point, and the structure node is generated through manual input or through a structure node generation model; the structure node generation model is a machine learning model whose input features include the content features of the upper-level structure node of the structure node and the content features of its same-level structure nodes. In some embodiments, the content features of the upper-level structure node or of a same-level structure node include one or more of the following features of that node: the corresponding discussion, the corresponding key point, the type of the corresponding text unit, and the relevant requirements for the corresponding text unit. In some embodiments, the content features further include a key point type feature of the key point; the structure node generation model is a neural network model and is generated through training. In some embodiments, the key point type feature is obtained by a key point type discrimination model; the key point type discrimination model is a machine learning model and includes an embedding sub-model and a classification sub-model; the embedding sub-model generates a key point text representation vector based on the key point text, and the classification sub-model generates the key point type feature based on the key point text representation vector.
In some embodiments, the second text further comprises at least one text unit corresponding to the at least one structural node, the at least one text unit being used to illustrate the first text.
The text unit request module is configured to generate, when a structure node is detected to be triggered, an acquisition request for the target text unit corresponding to the structure node, and to send the acquisition request to the server.
The text unit display module is configured to receive and display the target text unit obtained by the server. In some embodiments, the text unit display module is further configured to display a plurality of adjacent text units of the target text unit, obtain a modification instruction for the target text unit, and display the updated target text unit after the modification instruction is executed. In some embodiments, the text unit display module is further configured to display the version differences of the text structures of a plurality of second texts provided by the server, and to display the version differences of the text units of the plurality of second texts provided by the server.
The second text sending module is configured to send the current version of the second text to the server based on a save trigger condition.
In some embodiments, a document assisted editing system may include: the device comprises a first text acquisition module, a text structure generation module, a text structure sending module, a request receiving module, a text unit sending module, a second text acquisition module and a version difference determination module.
The first text acquisition module is used for acquiring a first text, wherein the first text comprises one or more discussions, and each discussion comprises one or more key points.
The text structure generation module is configured to obtain the text structure of the second text based on the first text.
In some embodiments, the text structure of the second text is a tree structure and includes at least one structure node corresponding to at least one discussion and/or at least one key point, and the structure node is generated through manual input or through a structure node generation model; the structure node generation model is a machine learning model whose input features include the content features of the upper-level structure node of the structure node and the content features of its same-level structure nodes.
In some embodiments, the second text further comprises at least one text unit corresponding to the at least one structural node, the at least one text unit being used to illustrate the first text.
The text structure sending module is configured to send the text structure of the second text to the client.
The request receiving module is configured to receive an acquisition request, generated by the client, for a target text unit corresponding to a structure node.
The text unit sending module is configured to respond to the acquisition request by obtaining the target text unit and sending it to the client.
The second text acquisition module is configured to receive the current version of the second text from the client.
The version difference determining module is configured to determine the version differences of the text structures of a plurality of second texts and send them to the client, and to determine the version differences of the text units of the plurality of second texts and send them to the client.
FIG. 2 is an exemplary flow diagram illustrating a method for document-assisted editing applied to a server in accordance with some embodiments of the present description.
A document is a collection of text that describes the results of analysis and/or research work. In some embodiments, the document may be a report that analyzes laws and phenomena, such as an enterprise analysis report, a market analysis report, an economic situation analysis report, or a social problem analysis report. In some embodiments, the document may also be a solution to a technical problem, such as a product design solution, an engineering solution, or a management solution. In some embodiments, the document may also be a summary of academic research work, such as an academic paper or a patent application text.
In some embodiments, the document may include a first text and a second text. Wherein the first text may be a summary, conclusion, and/or argument of the document, etc., and the second text may be a description, explanation, and/or argument of the document, etc. For example, if the document is a business analysis report, the first text is an analysis conclusion and the second text is an analysis specification. For another example, if the document is a patent application text, then the first text is a claim and the second text is a specification.
As shown in fig. 2, the method 200 for assisted editing of a document applied to a server may include:
step 210, a first text is obtained. In particular, step 210 may be performed by a first text acquisition module.
As previously described, the first text may be a summary, conclusion, and/or point of discourse, etc., of the document.
In some embodiments, the first text may include one or more discussions. Each discussion may characterize an aspect of the first text. In some embodiments, each discussion includes one or more keypoints. Key points are the main content of the discussion, and each key point may characterize one point of the discussion.
Illustratively, continuing with the enterprise analysis report as an example, the first text is the analysis conclusion and includes three discussions: discussion 1 is the conclusion of the analysis of the enterprise's operating condition, discussion 2 is the conclusion of the analysis of the enterprise's financial condition, and discussion 3 is the conclusion of the enterprise value evaluation. Discussion 1 includes two key points: key point 1 is the production output of the enterprise, and key point 2 is the sales performance of the enterprise.
As a further example, for a patent application text, the first text is the claims, which include three discussions, namely claim 1, claim 2, and claim 3; claim 1 includes two key points, i.e., two different technical features.
In some embodiments, the server may obtain the first text from a user input at the client, by reading stored data, by invoking an associated interface, or by other means.
Step 220, based on the first text, a text structure of the second text is obtained. In particular, step 220 may be performed by the text structure generation module.
As previously mentioned, the first text may be a summary, conclusion, and/or point of discussion, etc. of the document, and the second text may be a description, explanation, and/or demonstration, etc. of the document. It is to be understood that the second text may be used to illustrate the first text. For example, an analysis specification of a business analysis report may be used to specify an analysis conclusion. For another example, the specification of the patent application text may be used to describe the claims.
The text structure refers to the layout of the second text, for example, an outline and a title of the content, and the like. In some embodiments, the text structure may include a content feed and a location hierarchy for the second text. In some embodiments, the text structure is a tree structure comprising at least one structure node corresponding to at least one discussion or/and at least one key point.
A structure node may characterize part of the outline of the second text. Illustratively, taking FIG. 4 as an example, the structure node 1.1 "step 210" may indicate that the second text will explain "step 210". In some embodiments, the structure nodes correspond to the discussions and/or key points of the first text. Illustratively, continuing with the patent application text of FIG. 4 as an example, structure node 1 "summary" corresponds to claim 1 (i.e., discussion 1) in the claims (i.e., the first text), and structure node 1.1 "step 210" corresponds to technical feature 1 (i.e., key point 1) of claim 1 (i.e., discussion 1) in the claims (i.e., the first text), indicating that the second text will explain discussion 1 and key point 1 of the first text.
The tree structure is the positional hierarchy of the second text and may characterize the positional hierarchy of the corresponding parts of the second text. As shown in FIG. 4, in the tree structure, structure node 1.1 "step 210", structure node 1.2 "step 220", and structure node 1.3 "step 230" are all child nodes of structure node 1 "summary"; therefore structure nodes 1.1, 1.2, and 1.3 are at the same level and are same-level structure nodes of one another, and the positions of the corresponding second-text contents "step 210 content", "step 220 content", and "step 230 content" are also at the same level. Structure node 1 "summary" is at the upper level and is the upper-level structure node of structure nodes 1.1, 1.2, and 1.3, and the corresponding second-text content "summary content" is likewise at the upper level. In some embodiments, the same-level structure nodes of a structure node include the structure node itself. For example, the same-level structure nodes of structure node 1.1 include not only structure node 1.2 and structure node 1.3 but also structure node 1.1 itself.
In some embodiments, the structure nodes are generated by obtaining manual input. Specifically, the corresponding structure node and its location hierarchy may be manually entered based on the discussion and/or keypoints of the first text.
In some embodiments, the structure nodes may be generated by a structure node generation model. The structure node generation model is a machine learning model whose input features include the content features of the upper-level structure node of the structure node and the content features of its same-level structure nodes. It can be understood that when the structure node has no corresponding upper-level structure node, the input features include only the content features of its same-level structure nodes. The generation of structure nodes by the structure node generation model is described with reference to FIG. 5 and is not repeated here.
In some embodiments, the second text further includes at least one text unit corresponding to the at least one structure node. The text units are the constituent elements of the second text, and the second text may be divided into different text units according to its different contents. It can be understood that each text unit corresponds to one structure node. The text units of the second text may be used to explain the first text, i.e., the second text may be used to explain the first text. For a detailed description of the text units, reference may be made to step 320, which is not repeated here.
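For illustration only, the tree structure, the correspondence between structure nodes and the discussions/key points of the first text, and the attachment of one text unit to each structure node could be represented as in the following minimal Python sketch; all class, field, and node names are hypothetical and are not part of the claimed method:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class StructureNode:
        node_id: str                            # e.g. "1.2"
        title: str                              # e.g. "step 220"
        discussion_id: Optional[str] = None     # discussion of the first text it corresponds to
        keypoint_id: Optional[str] = None       # key point of the first text it corresponds to
        text_unit: str = ""                     # the text unit that explains that discussion/key point
        children: List["StructureNode"] = field(default_factory=list)

        def add_child(self, child: "StructureNode") -> None:
            self.children.append(child)

    # Hypothetical skeleton mirroring FIG. 4: node 1 "summary" with three child nodes.
    root = StructureNode("1", "summary", discussion_id="claim 1")
    for i, step in enumerate(("step 210", "step 220", "step 230"), start=1):
        root.add_child(StructureNode(f"1.{i}", step, keypoint_id=f"technical feature {i}"))

    # Per the description, the same-level structure nodes of node 1.1 include node 1.1 itself.
    same_level_nodes = list(root.children)

In such a sketch the positional hierarchy is carried entirely by the parent/child links, so each text unit always travels with its structure node when the structure is adjusted.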
Step 230, the text structure of the second text is sent to the client. In particular, step 230 may be performed by the text structure transmission module.
In some embodiments, the server may send the text structure of the second text to the client, including the structure node and the tree structure.
Step 240, receiving an acquisition request of a target text unit corresponding to the structure node generated by the client. In particular, step 240 may be performed by the request receiving module.
The target text unit corresponding to a structure node is the text unit corresponding to the structure node triggered by the user on the user interface of the client. For a detailed description, refer to step 320, which is not repeated here.
In some embodiments, the server may receive a fetch request sent by the client. For example, the server receives a request for obtaining the target text unit "step 220 content" sent by the client.
Step 250, in response to the acquisition request, obtaining the target text unit and sending it to the client. In particular, step 250 may be performed by the text unit sending module.
In some embodiments, in response to the acquisition request, the server may obtain the target text unit by reading data stored in the database, invoking a relevant interface, or in other ways.
It can be understood that the target text unit stored in the database may be a blank text unit generated by the server, or a text unit saved after being edited by the user. In some embodiments, after obtaining the text structure of the second text based on the first text in step 220, the server also generates a blank text unit corresponding to each structure node in the text structure and stores the text structure and the blank text units in the database. In some embodiments, the server may further obtain, from the client, a text unit of the second text saved after the user edits it, and store that text unit in the database.
Further, the server sends the target text unit to the client.
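As a purely illustrative sketch of the server-side behaviour just described (the dictionary below merely stands in for the database, and the function names are assumptions):

    from typing import Dict, List

    def create_blank_text_units(structure_node_ids: List[str]) -> Dict[str, str]:
        # After the text structure is generated, register an empty text unit for every structure node.
        return {node_id: "" for node_id in structure_node_ids}

    # Stand-in for the server-side database, keyed by structure node id.
    text_unit_db = create_blank_text_units(["1", "1.1", "1.2", "1.3"])

    def handle_acquisition_request(node_id: str) -> str:
        # Step 250: return the stored target text unit (blank until the user edits and saves it).
        return text_unit_db.get(node_id, "")

    def handle_saved_text_unit(node_id: str, content: str) -> None:
        # Store a text unit that the user edited and saved at the client.
        text_unit_db[node_id] = content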
FIG. 3 is an exemplary flow diagram illustrating a method for document-assisted editing applied to a client in accordance with some embodiments of the present description. As shown in fig. 3, the method 300 for assisted editing of a document for a client may include:
Step 310, receiving and displaying the text structure of the second text obtained by the server based on the first text. In particular, step 310 may be performed by the text structure receiving module.
In some embodiments, the client may receive the text structure of the second text that the server obtained based on the first text. For a detailed description of how the server obtains the second text based on the first text, refer to step 220, which is not repeated here.
In some embodiments, the client may display the received text structure, including the structure nodes and the tree structure, in a user interface. In some embodiments, the user interface may display all or a portion of the text structure based on user manipulation. For example, operations may include collapsing (indicated by "-"), expanding (indicated by "+"), scrolling (indicated by a double-headed arrow), zooming, and the like.
In some embodiments, the second text further includes at least one text unit corresponding to the at least one structure node. The text units are the constituent elements of the second text, and the second text may be divided into different text units according to its different contents. Taking the patent application text shown in FIG. 4 as an example, the content of the second text may be divided into three text units according to "step 210 content", "step 220 content", and "step 230 content": text unit 1.1, text unit 1.2, and text unit 1.3, which are used to explain the three technical features of claim 1, respectively.
In some embodiments, each text unit corresponds to a structure node. As shown in fig. 4, text element 1.1 "content of step 210" corresponds to structure node 1.1 "step 210", text element 1.2 "content of step 220" corresponds to structure node 1.2 "step 220", text element 1.3 "content of step 230" corresponds to structure node 1.3 "step 230".
The text units of the second text may be used to explain the first text, i.e., the second text may be used to explain the first text. It can be understood that the text unit corresponding to a structure node is used to explain the discussion and/or key point of the first text corresponding to that structure node. Illustratively, continuing with the patent application text of FIG. 4 as an example, structure node 1 "summary" corresponds to claim 1 (i.e., discussion 1) in the claims (i.e., the first text), and the corresponding text unit 1 "summary content" may be used to explain claim 1 (discussion 1); structure node 1.1 "step 210" corresponds to technical feature 1 (i.e., key point 1) of claim 1 (i.e., discussion 1) in the claims (i.e., the first text), and the corresponding text unit 1.1 "step 210 content" may be used to explain technical feature 1 (i.e., key point 1).
Step 320, when a structure node is detected to be triggered, generating an acquisition request for the target text unit corresponding to the structure node and sending the acquisition request to the server. In particular, step 320 may be performed by the text unit request module.
As previously mentioned, the structure node may characterize the synopsis of the second text. The detailed description of the structure node can be referred to in step 220, and is not repeated here.
The target text unit corresponding to a structure node is the text unit corresponding to the triggered structure node. In some embodiments, the client may detect whether the user has performed a triggering operation on a structure node displayed on the user interface. In some embodiments, the triggering operation may include, but is not limited to: a single click, a double click, a box selection, a touch, a gesture input, and the like. Specifically, when the user operates on a structure node displayed on the user interface, the client may detect the trigger on the user interface and then generate an acquisition request for the target text unit corresponding to that structure node.
In some embodiments, the client may send a fetch request to the server.
For example, when the user clicks the structure node "step 220" displayed on the user interface, the client detects that the structure node "step 220" on the user interface is triggered, generates an acquisition request for the target text unit "step 220 content", and sends the request to the server.
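A minimal sketch of this client-side logic follows; the message format, field names, and the use of a callback in place of a real network call are all assumptions, not the actual protocol of the embodiments:

    import json
    from typing import Callable

    def on_structure_node_triggered(node_id: str, send_to_server: Callable[[str], None]) -> None:
        # Step 320: build an acquisition request for the target text unit corresponding to
        # the triggered structure node and hand it to the transport layer.
        acquisition_request = {
            "type": "get_text_unit",   # hypothetical message type
            "node_id": node_id,        # e.g. "1.2" when the user clicks "step 220"
        }
        send_to_server(json.dumps(acquisition_request))

    # Example: the user clicks structure node "step 220" (node 1.2); `print` stands in for the network call.
    on_structure_node_triggered("1.2", send_to_server=print)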
Step 330, receiving and displaying the target text unit obtained by the server. In particular, step 330 may be performed by the text unit display module.
In some embodiments, the client may receive the target text unit retrieved by the server. For the description of the target text unit obtained by the server, refer to step 250, which is not described herein again.
Further, after receiving the target text unit, the client may display the target text unit on the user interface, so that the user may edit the content of the text unit on the user interface. As shown in fig. 7a, when the user clicks the structure node "step 220" on the user interface of the client, the client detects the trigger operation of "click", obtains the content of the target text unit "step 220 content" corresponding to the structure node "step 220" from the server, and displays the target text unit on the user interface. The content of the target text unit may be edited content or unedited blank content.
The related description of the content of the edited text unit is referred to fig. 6 and will not be described here.
The above embodiments have at least the following beneficial effects: text units are edited on the basis of a text structure, so that the second text is structured, the characteristics of the text unit corresponding to each structure node are well defined, and the structure of the text can be adjusted conveniently and flexibly. Furthermore, thanks to the clear data structure, the structure nodes can be generated with the aid of a machine learning model, improving writing efficiency and document quality.
FIG. 5 is a schematic diagram of a method for generating a structure node according to a structure node generative model as shown in some embodiments of the present description.
The structure node generation model may generate structure nodes. As described above, in some embodiments, to generate a structure node with the structure node generation model, the input includes the content features of the upper-level structure node of the structure node and the content features of its same-level structure nodes, and the output is the structure node.
The upper-level structure node is the parent node of the structure node, and a same-level structure node shares a common parent node with the structure node. Illustratively, continuing with the example of FIG. 4, assume that structure node 1.2 "step 220" is to be generated; as described above, the inputs include the content features of the upper-level structure node (structure node 1) and the content features of the same-level structure nodes (structure node 1.1, structure node 1.2, and structure node 1.3).
The content features of a structure node refer to the basis from which the content of the structure node is derived. In some embodiments, the content features include one or more of the following features of the structure node: the discussion corresponding to the structure node, the key point corresponding to the structure node, the type of the text unit corresponding to the structure node, and the relevant requirements for the text unit corresponding to the structure node. It can be understood that the content features of a same-level structure node are the above features of that same-level structure node, and the content features of the upper-level structure node are the above features of the upper-level structure node.
As described above, the structure nodes correspond to the discussions and/or key points of the first text, where each discussion may characterize an aspect of the first text and each key point may characterize one point of a discussion. As shown in FIG. 4, the upper-level structure node 1 corresponds to discussion 1 (i.e., claim 1) of the first text (i.e., the claims), same-level structure node 1.1 corresponds to key point 1 (i.e., technical feature 1) of discussion 1 (i.e., claim 1), same-level structure node 1.2 corresponds to key point 2 (i.e., technical feature 2) of discussion 1, and same-level structure node 1.3 corresponds to key point 3 (i.e., technical feature 3) of discussion 1.
As described above, each structure node corresponds to a text unit. The type of a text unit refers to the form of the content of the text unit, for example: figure number description, drawing description, summary, definition, operation, example, extension, beneficial effect, formula, standard expression, and others. As shown in FIG. 4, the type of text unit 1 "summary content" (not shown) corresponding to the upper-level structure node 1 is [summary], the type of text unit 1.1 "step 210 content" corresponding to same-level structure node 1.1 is [operation], and the types of text unit 1.2 and text unit 1.3 corresponding to same-level structure nodes 1.2 and 1.3 are [algorithm]. In some embodiments, the type of a text unit may be obtained through manual input or manual selection, or through a classification model applied to the structure node.
The relevant requirements of a text unit are suggestive or annotative text about the content of the text unit, for example the level of detail, notes, references, and so on. For example, the relevant requirement for text unit 1 "summary content" corresponding to the upper-level structure node 1 is [brief description], the relevant requirement for the text unit "step 210 content" corresponding to same-level structure node 1.1 is [detailed description], and the relevant requirements for text unit 1.2 and text unit 1.3 corresponding to same-level structure nodes 1.2 and 1.3 are [detailed description]. In some embodiments, the relevant requirements for a text unit may be obtained through manual input.
In some embodiments, the content characteristics of the structure nodes further include a keypoint type characteristic.
The key point type feature refers to an attribute of the type of the key point. For example, the type features of key point 1 "production output of the enterprise" and key point 2 "sales performance of the enterprise" in the aforementioned enterprise analysis report are [data]. As another example, if a key point of a patent application text is a technical feature, the technical feature's type feature may include model structure, algorithm, material, composition, structure, and the like.
In some embodiments, the keypoint type features may be obtained by a keypoint type discrimination model. In some embodiments, the keypoint type discrimination model is a machine learning model.
In some embodiments, the keypoint type discrimination model comprises an embedding submodel and a classification submodel.
In some embodiments, the embedding sub-model may generate a key point text representation vector based on the key point text. Specifically, the embedding sub-model may first vectorize the words in the key point text to obtain word vectors, and then determine the key point text representation vector based on the obtained word vectors. In some embodiments, the embedding sub-model may include, but is not limited to: a Word2vec model, a Term Frequency-Inverse Document Frequency (TF-IDF) model, an SSWE-C word embedding model, a neural network model, and the like.
In some embodiments, the classification sub-model may generate the key point type feature based on the key point text representation vector. Specifically, the classification sub-model may map the input key point text representation vector to a numerical value or a probability, and then obtain the key point type feature based on that numerical value or probability. In some embodiments, the classification sub-model may be, but is not limited to, a logistic regression model, a naive Bayes classification model, a Gaussian naive Bayes classification model, a decision tree model, a random forest model, a KNN classification model, a neural network model, or the like.
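By way of illustration, the key point type discrimination model could be realized with one pair of the options named above, e.g. TF-IDF as the embedding sub-model feeding a logistic regression classification sub-model; the sample key points and type labels below are invented for the sketch and are not training data of the embodiments:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented training set: key point texts labelled with key point types.
    keypoint_texts = [
        "annual production output of the enterprise",
        "sales performance of the enterprise in the last quarter",
        "a bidirectional LSTM encoder followed by a softmax output layer",
        "the bracket is welded to the base plate",
    ]
    keypoint_types = ["data", "data", "model structure", "structure"]

    # Embedding sub-model (TF-IDF vectors) chained with a classification sub-model (logistic regression).
    keypoint_type_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    keypoint_type_model.fit(keypoint_texts, keypoint_types)

    print(keypoint_type_model.predict(["quarterly revenue figures of the enterprise"]))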
As described above, the input of the structure node generation model includes the content features of the upper-level structure node and the same-level structure nodes of the structure node to be generated, and the output is the structure node. Specifically, the structure node generation model may first vectorize the content features, then encode the vectorized content features to obtain a semantic vector that fuses the content features, and then obtain the structure node based on the semantic vector.
As shown in FIG. 4, the upper-level structure node 1 corresponds to a discussion and therefore has no key point type feature; the key point type feature of same-level structure node 1.1 is [data], and the key point type features of same-level structure nodes 1.2 and 1.3 are [structure].
To sum up, continuing with the example of FIG. 4 and assuming "step 220" is the structure node to be generated: the content features of the upper-level structure node 1 are the discussion [claim 1], the type [summary] of the text unit "summary content", and the relevant requirement [brief description]; the content features of same-level structure node 1.1 are the key point [technical feature 1], the type [operation] of the text unit "step 210 content", the relevant requirement [detailed description], and the key point type feature [data]; the content features of same-level structure node 1.2 are the key point [technical feature 2], the type [algorithm] of the text unit "step 220 content", the relevant requirement [detailed description], and the key point type feature [structure]; and the content features of same-level structure node 1.3 are the key point [technical feature 3], the type [algorithm] of the text unit "step 230 content", the relevant requirement [detailed description], and the key point type feature [structure]. These content features are input into the structure node generation model, and the structure node "step 220" is output.
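To make the worked example concrete, the following sketch shows one way the content features could be flattened into a single input sequence for the generation model; the tag names and the serialization scheme are assumptions made for illustration only:

    def serialize_content_features(upper_node: dict, level_nodes: list) -> str:
        # Flatten the content features into one sequence; the [UPPER]/[LEVEL]/[TYPE]/[REQ]/[KPTYPE]
        # tags are illustrative only.
        parts = [
            f"[UPPER] {upper_node['discussion']} "
            f"[TYPE] {upper_node['unit_type']} [REQ] {upper_node['requirement']}"
        ]
        for node in level_nodes:
            parts.append(
                f"[LEVEL] {node.get('keypoint', '')} "
                f"[TYPE] {node['unit_type']} [REQ] {node['requirement']} "
                f"[KPTYPE] {node.get('keypoint_type', '')}"
            )
        return " ".join(parts)

    model_input = serialize_content_features(
        {"discussion": "claim 1", "unit_type": "summary", "requirement": "brief description"},
        [
            {"keypoint": "technical feature 1", "unit_type": "operation",
             "requirement": "detailed description", "keypoint_type": "data"},
            {"keypoint": "technical feature 2", "unit_type": "algorithm",
             "requirement": "detailed description", "keypoint_type": "structure"},
            {"keypoint": "technical feature 3", "unit_type": "algorithm",
             "requirement": "detailed description", "keypoint_type": "structure"},
        ],
    )
    # `model_input` would then be fed to the structure node generation model, which is
    # expected to output the structure node text, e.g. "step 220".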
In some embodiments, the structure node generation model may include, but is not limited to, a Bi-directional Long Short-Term Memory (Bi-LSTM) model, an ELMo (Embeddings from Language Models) model, a GPT (Generative Pre-Training) model, a BERT (Bidirectional Encoder Representations from Transformers) model, and the like.
In some embodiments, the structure node generation model may be obtained by training on a number of labeled training samples. Specifically, labeled training samples are input into the structure node generation model, and the parameters of the structure node generation model are updated through training.
In some embodiments, a training sample may consist of the content features of the upper-level structure node and the same-level structure nodes of a sample structure node. In some embodiments, the label may be the sample structure node itself. In some embodiments, the training samples and labels may be obtained based on completed documents, through manual input, by reading stored data, by invoking a relevant interface, or in other ways.
In some embodiments, training may be performed on the training samples using commonly used methods, for example, gradient descent. In some embodiments, training ends when the trained model satisfies a preset condition.
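For illustration, a simplified training loop in the spirit described above (a toy Bi-LSTM-style generator reduced to predicting a single token, with random tensors standing in for real tokenized samples and labels; none of this is the actual training setup of the embodiments):

    import torch
    from torch import nn

    class StructureNodeGenerator(nn.Module):
        # Placeholder generator; in practice a full sequence model (Bi-LSTM, BERT, etc.) would be used.
        def __init__(self, vocab_size: int = 1000, hidden: int = 64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.encoder = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, vocab_size)   # predicts a token of the structure node

        def forward(self, feature_token_ids):
            x = self.embed(feature_token_ids)
            out, _ = self.encoder(x)
            return self.head(out[:, -1, :])                 # one output token per sample, for brevity

    model = StructureNodeGenerator()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Toy batch: tokenized content features of the upper-level and same-level structure nodes,
    # labelled with (the first token of) the sample structure node.
    features = torch.randint(0, 1000, (8, 32))
    labels = torch.randint(0, 1000, (8,))

    for epoch in range(5):                                  # stop when a preset condition is met
        optimizer.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()
        optimizer.step()                                    # gradient-descent-style parameter update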
The above embodiments have at least one of the following technical effects: (1) based on the first text, a high-quality text structure can be obtained through a neural network model; (2) based on the text unit types and requirements of the second text set by the user, text structures that do not conform to the user's settings can be filtered out, making the generated text structure controllable.
FIG. 6 is a schematic diagram of a method of editing the content of a text unit, according to some embodiments of the present description. As shown in fig. 6, the method 600 for editing the content of a text unit may include:
step 610 displays a plurality of adjacent text units of the target text unit.
As previously described, the user interface of the client may display the target text units retrieved by the server.
In some embodiments, the user interface may display a plurality of adjacent text units of the target text unit. Specifically, the client may send a request to the server to retrieve a plurality of adjacent text units based on the target text unit, and receive and display the adjacent text units retrieved by the server from the database based on the request. If the adjacent text units are already stored on the client, they may be read and displayed directly. As shown in FIG. 7a, the client displays the previous text unit "step 210 content" and the next text unit "step 230 content" based on the target text unit "step 220 content".
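A small sketch of how the adjacent text units might be located once the level of the tree containing the target unit is known (the data layout is assumed for illustration):

    from typing import List, Optional, Tuple

    def adjacent_text_units(sibling_units: List[Tuple[str, str]],
                            target_node_id: str) -> Tuple[Optional[str], Optional[str]]:
        # Given (node_id, text_unit) pairs for one level of the tree, return the text units
        # immediately before and after the target unit for display alongside it.
        ids = [node_id for node_id, _ in sibling_units]
        i = ids.index(target_node_id)
        previous_unit = sibling_units[i - 1][1] if i > 0 else None
        next_unit = sibling_units[i + 1][1] if i + 1 < len(sibling_units) else None
        return previous_unit, next_unit

    units = [("1.1", "step 210 content"), ("1.2", "step 220 content"), ("1.3", "step 230 content")]
    print(adjacent_text_units(units, "1.2"))   # -> ('step 210 content', 'step 230 content')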
In some embodiments, the client may display information related to the selected current text unit in the user interface based on a user selection operation on the text unit at the client. Wherein the related information is information related to the current text unit for prompting the user. In some embodiments, the related information may include content characteristics of the current structure node corresponding to the current text unit and a modification annotation by the user. As shown in fig. 7a, the user selects "step 220 content" as the current text unit on the page of the text unit at the client, and the user interface displays the relevant information corresponding to the current text unit, including the content characteristics of the current structure node "step 220" and the modification comments made by the user to the current text unit. It will be appreciated that the user may refer to the relevant information of the text unit when entering the modification instruction. The description of the modification instruction refers to step 620, and is not repeated here.
Step 620, obtain the modification instruction for the target text unit.
In some embodiments, the client may obtain user modification instructions for the target text unit. The modification instruction can be used for editing unedited blank content in the text unit so as to obtain an initial version of the second text; modifying may also refer to editing edited content in a unit of text. It will be appreciated that each modification corresponds to a version of the second text.
Step 630, after executing the modification instruction, displaying the updated target text unit.
Further, after the client executes the modification instruction, the user interface may display the target text unit with its content updated (i.e., modified).
Illustratively, the client obtains and displays the content of the target text unit as blank in the user interface; after the client obtains the user's input "in step 220" for the target text unit, it displays the content of the target text unit as "in step 220" in the user interface.
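By way of example, the relationship between modification instructions and versions of the second text could be modeled as in the following sketch; the data structures TextUnit and SecondText are assumptions for illustration only.

```python
# Minimal sketch: each executed modification instruction yields a new version.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TextUnit:
    node_id: str        # the structure node this text unit explains
    content: str = ""   # blank until the user edits it

@dataclass
class SecondText:
    units: Dict[str, TextUnit] = field(default_factory=dict)
    versions: List[Dict[str, str]] = field(default_factory=list)

    def apply_modification(self, node_id: str, new_content: str) -> None:
        """Execute a modification instruction and record the resulting version."""
        unit = self.units.setdefault(node_id, TextUnit(node_id))
        unit.content = new_content
        # Each modification corresponds to one version of the second text.
        self.versions.append({u.node_id: u.content for u in self.units.values()})

# The target unit starts blank; after the user enters "in step 220" it is updated.
text = SecondText()
text.apply_modification("step 220", "in step 220")
print(text.units["step 220"].content)  # -> in step 220
```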
In some embodiments, the client may send the current version of the second text to the server based on a save trigger condition.
In some embodiments, the save trigger condition may be that a preset time interval is reached. Specifically, the client may automatically obtain the content of the second text at the current moment based on the preset time interval and send the current version of the second text to the server.
In some embodiments, the save trigger condition may also be that the client detects a user operation to save the version of the second text. Specifically, the client may send the current version of the second text to the server after receiving a save instruction triggered by the user.
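By way of illustration, the two save trigger conditions described above could be handled on the client as sketched below; the interval value and the upload function are placeholders, not features required by the present specification.

```python
# Illustrative sketch of the client-side save triggers.
import threading

SAVE_INTERVAL_SECONDS = 60  # assumed preset time interval

def send_current_version_to_server(second_text) -> None:
    """Placeholder for uploading the current version of the second text."""
    ...

def start_autosave(second_text) -> threading.Timer:
    """Trigger 1: a preset time interval is reached."""
    def tick():
        send_current_version_to_server(second_text)
        start_autosave(second_text)  # re-arm the timer for the next interval
    timer = threading.Timer(SAVE_INTERVAL_SECONDS, tick)
    timer.daemon = True
    timer.start()
    return timer

def on_user_save(second_text) -> None:
    """Trigger 2: the client detects a save operation by the user."""
    send_current_version_to_server(second_text)
```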
Further, the server may store the current version of the second text in the database after receiving the version of the second text from the client. In some embodiments, the server may store only the initial version and the most recently modified version of the second text, or may save all versions of the second text.
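The server-side storage policy mentioned above, keeping either only the initial and latest versions or all versions, could look like the following sketch; the class VersionStore and its in-memory dictionary stand in for the database and are assumptions for illustration.

```python
# Hypothetical sketch of the server-side version storage policy.
from typing import Dict, List

class VersionStore:
    def __init__(self, keep_all_versions: bool = False):
        self.keep_all_versions = keep_all_versions
        self._versions: Dict[str, List[dict]] = {}  # document id -> stored versions

    def save(self, doc_id: str, version: dict) -> None:
        """Store a version of the second text received from the client."""
        history = self._versions.setdefault(doc_id, [])
        if self.keep_all_versions or not history:
            history.append(version)
        else:
            del history[1:]          # keep only the initial version ...
            history.append(version)  # ... plus the most recently received one
```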
In some embodiments, the client may display version differences of the text structure and version differences of the text units. In some embodiments, the user may select, via the client, the versions of the second text whose differences are to be displayed. In some embodiments, the client may also automatically select the latest version and the previous version of the second text after receiving a user-triggered display-difference instruction. Specifically, the client sends the server a request to obtain the text structure differences and/or the text unit differences of multiple versions of the second text; after receiving the request, the server retrieves the multiple versions of the second text from the database and determines the version differences of the text structure and/or the version differences of the text units.
Further, the client may display the version differences of the text structure and/or the version differences of the text units of the multiple versions of the second text provided by the server. As shown in FIG. 7b, the text structure of the current version and the text structure of a historical version may be displayed side by side for comparison on the text structure page based on the user's selection operation at the client. By comparing the two text structures, the user learns the differences between the current version and the historical version.
In some embodiments, the client may display the differences of the other versions in an annotated manner on the second text of one of the versions. As shown in FIG. 7c, the differences between the text units of the current version and the text units of a historical version can be displayed in an annotated manner on the text unit page based on the user's selection operation at the client.
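By way of example and not limitation, the annotated display of differences between a historical version and the current version of a text unit could be produced with a character-level diff such as the following sketch; the real system may instead compute the differences on the server and at the level of text structure nodes.

```python
# Illustrative sketch: annotate the current text with the differences from a
# historical version, using Python's difflib.
import difflib

def annotate_differences(historical: str, current: str) -> str:
    """Return the current text with insertions and deletions marked inline."""
    matcher = difflib.SequenceMatcher(None, historical, current)
    annotated = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            annotated.append(current[j1:j2])
        elif op == "insert":
            annotated.append(f"[+{current[j1:j2]}+]")
        elif op == "delete":
            annotated.append(f"[-{historical[i1:i2]}-]")
        else:  # "replace"
            annotated.append(f"[-{historical[i1:i2]}-][+{current[j1:j2]}+]")
    return "".join(annotated)

print(annotate_differences("in step 210", "in step 220"))
# e.g. -> in step 2[-1-][+2+]0
```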
The above embodiments have at least one of the following technical effects: (1) the text unit editing interface can display the correspondence between text structure nodes and text units, so that the user can quickly locate the corresponding content, which improves document editing efficiency; (2) based on the user's selection, the text unit editing interface can display the differences between versions of the text structure and of the text units, which makes it convenient for the user to learn and summarize and improves the user's document editing ability; (3) the client can automatically save the user's versions, so that historical versions can be retrieved based on the user's selection.
The embodiments of the present specification also provide a computer-readable storage medium. The storage medium stores computer instructions, and when a computer reads the computer instructions in the storage medium, the computer implements the above method for assisting in editing a document.
It is to be noted that different embodiments may produce different advantages, and in different embodiments, any one or combination of the above advantages may be produced, or any other advantages may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, this specification uses specific words to describe its embodiments. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present description may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereof. Accordingly, aspects of this description may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present description may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, and the like, a conventional programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, a dynamic programming language such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as software as a service (SaaS).
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, an embodiment may have fewer than all of the features of a single embodiment disclosed above.
Numerals describing the number of components, attributes, and the like are used in some embodiments; it should be understood that such numerals used in the description of the embodiments are in some instances modified by the terms "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending on the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments are approximations, in specific examples such numerical values are set forth as precisely as practicable.
For each patent, patent application, patent application publication, and other material cited in this specification, such as articles, books, specifications, publications, and documents, the entire contents thereof are hereby incorporated by reference into this specification. Application history documents that are inconsistent with or conflict with the contents of this specification are excluded, as are documents (currently or subsequently appended to this specification) that limit the broadest scope of the claims of this specification. It is to be understood that if the descriptions, definitions, and/or use of terms in the materials accompanying this specification are inconsistent with or contrary to the contents of this specification, the descriptions, definitions, and/or use of terms in this specification shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (18)

1. A method for assisting in editing a document is applied to a client and comprises the following steps:
receiving and displaying a text structure of a second text acquired by the server based on the first text;
the first text comprises at least one discussion, each discussion comprising at least one key point;
the text structure of the second text is a tree structure and comprises at least one structure node corresponding to the at least one discussion and/or the at least one key point, and the structure node is generated through manual input or through a structure node generation model; the structure node generation model is a machine learning model, and the input characteristics comprise the content characteristics of the superior structure nodes of the structure nodes and the content characteristics of the level structure nodes;
the second text further comprises at least one text unit corresponding to the at least one structure node, and the at least one text unit is used for explaining the first text;
when the structure node is detected to be triggered, generating an acquisition request of a target text unit corresponding to the structure node, and sending the acquisition request to the server;
and receiving and displaying the target text unit acquired by the server.
2. The method of claim 1, the content characteristics of the superior structure node or the content characteristics of the level structure node comprising one or more of the following characteristics of the superior structure node or the level structure node: corresponding discussions, corresponding key points, types of corresponding text units, and associated requirements for the corresponding text units.
3. The method of claim 2, the content features further comprising: a key point type characteristic of the corresponding key point;
the structure node generation model is a neural network model and is generated through training.
4. The method of claim 3, further comprising:
the key point type characteristics are obtained through a key point type discrimination model;
the key point type discrimination model is a machine learning model and comprises an embedding sub-model and a classification sub-model;
the embedding sub-model generates a key point text representation vector based on the key point text;
the classification sub-model generates the key point type features based on the key point text representation vector.
5. The method of claim 1, further comprising:
displaying a plurality of adjacent text units of the target text unit;
acquiring a modification instruction of the target text unit;
and after the modification instruction is executed, displaying the updated target text unit.
6. The method of claim 5, further comprising:
based on a save trigger condition, sending the current version of the second text to the server;
displaying version differences of the text structure of a plurality of the second texts provided by the server;
displaying version differences of the text units of the plurality of second texts provided by the server.
7. A system for assisting in editing a document, comprising:
the text structure receiving module is used for receiving and displaying a text structure of a second text acquired by the server based on the first text;
the first text comprises at least one discussion, each discussion comprising at least one key point;
the text structure of the second text is a tree structure and comprises at least one structure node corresponding to the at least one discussion and/or the at least one key point, and the structure node is generated through manual input or through a structure node generation model; the structure node generation model is a machine learning model, and the input characteristics comprise the content characteristics of the superior structure nodes of the structure nodes and the content characteristics of the level structure nodes;
the second text further comprises at least one text unit corresponding to the at least one structure node, and the at least one text unit is used for explaining the first text;
the text unit request module is used for generating an acquisition request of a target text unit corresponding to the structure node when detecting that the structure node is triggered, and sending the acquisition request to the server;
and the text unit display module is used for receiving and displaying the target text unit acquired by the server.
8. The system of claim 7, the content characteristics of the superior structure node or the content characteristics of the level structure node comprising one or more of the following characteristics of the superior structure node or the level structure node: corresponding discussions, corresponding key points, types of corresponding text units, and associated requirements for the corresponding text units.
9. The system of claim 8, further comprising:
the content features further comprise key point type features of the key points; the structure node generation model is a neural network model and is generated through training.
10. The system of claim 9, further comprising:
the key point type characteristics are obtained through a key point type discrimination model;
the key point type discrimination model is a machine learning model and comprises an embedding sub-model and a classification sub-model;
the embedding sub-model generates a key point text representation vector based on the key point text;
the classification sub-model generates the key point type features based on the key point text representation vector.
11. The system of claim 7, the text unit display module further to:
displaying a plurality of adjacent text units of the target text unit;
acquiring a modification instruction of the target text unit;
and after the modification instruction is executed, displaying the updated target text unit.
12. The system of claim 11, further comprising:
the second text sending module is used for sending the current version of the second text to the server based on a save trigger condition;
the text unit display module is also used for:
displaying a version difference of the text structure of a plurality of second texts provided by the server; and
displaying a version difference of the text unit of a plurality of second texts provided by the server.
13. A computer-readable storage medium, wherein the storage medium stores computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 6.
14. A method for assisting in editing a document is applied to a server and comprises the following steps:
acquiring a first text, wherein the first text comprises one or more discussions, and each discussion comprises one or more key points;
acquiring a text structure of a second text based on the first text;
the text structure of the second text is a tree structure and comprises at least one structure node corresponding to the one or more discussions and/or the one or more key points, and the structure node is generated through manual input or through a structure node generation model; the structure node generation model is a machine learning model, and the input characteristics comprise the content characteristics of the superior structure nodes of the structure nodes and the content characteristics of the level structure nodes;
the second text further comprises at least one text unit corresponding to the at least one structure node, and the at least one text unit is used for explaining the first text;
sending the text structure of the second text to a client;
receiving an acquisition request, generated by the client, of a target text unit corresponding to the structure node;
and responding to the acquisition request, acquiring the target text unit and sending the target text unit to the client.
15. The method of claim 14, further comprising:
receiving a current version of the second text from the client;
determining version differences of the text structures of a plurality of second texts and sending the version differences to the client;
determining and sending the version difference of the text units of the second texts to the client.
16. A system for assisting in editing a document, comprising:
the first text acquisition module is used for acquiring a first text, wherein the first text comprises one or more discussions, and each discussion comprises one or more key points;
the text structure generating module is used for acquiring a text structure of a second text based on the first text;
the text structure of the second text is a tree structure and comprises at least one structure node corresponding to the one or more discussions and/or the one or more key points, and the structure node is generated through manual input or through a structure node generation model; the structure node generation model is a machine learning model, and the input characteristics comprise the content characteristics of the superior structure nodes of the structure nodes and the content characteristics of the level structure nodes;
the second text further comprises at least one text unit corresponding to the at least one structure node, and the at least one text unit is used for explaining the first text;
the text structure sending module is used for sending the text structure of the second text to the client;
a request receiving module, configured to receive an acquisition request, generated by the client, of a target text unit corresponding to the structure node;
and the text unit sending module is used for responding to the acquisition request, acquiring the target text unit and sending the target text unit to the client.
17. The system of claim 16, further comprising:
the second text acquisition module is used for receiving the current version of the second text from the client;
the version difference determining module is used for determining the version differences of the text structures of the plurality of second texts and sending the version differences to the client, and for determining the version differences of the text units of the plurality of second texts and sending the version differences to the client.
18. A computer-readable storage medium, wherein the storage medium stores computer instructions which, when executed by a processor, implement the method of any one of claims 14 to 15.
CN202010963770.1A 2020-09-14 2020-09-14 Method and system for assisting in editing document Active CN112084753B (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN202110710047.7A CN113255303B (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document
CN202110672721.7A CN113312884B (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document
CN202110674052.7A CN113221516B (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document
CN202110755500.6A CN114186534A (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document
CN202010963770.1A CN112084753B (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document
US17/447,576 US20220083724A1 (en) 2020-09-14 2021-09-13 Methods and systems for assisting document editing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010963770.1A CN112084753B (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document

Related Child Applications (4)

Application Number Title Priority Date Filing Date
CN202110674052.7A Division CN113221516B (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document
CN202110755500.6A Division CN114186534A (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document
CN202110672721.7A Division CN113312884B (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document
CN202110710047.7A Division CN113255303B (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document

Publications (2)

Publication Number Publication Date
CN112084753A true CN112084753A (en) 2020-12-15
CN112084753B CN112084753B (en) 2021-06-29

Family

ID=73737879

Family Applications (5)

Application Number Title Priority Date Filing Date
CN202110672721.7A Active CN113312884B (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document
CN202110710047.7A Active CN113255303B (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document
CN202110755500.6A Pending CN114186534A (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document
CN202010963770.1A Active CN112084753B (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document
CN202110674052.7A Active CN113221516B (en) 2020-09-14 2020-09-14 Method and system for assisting in editing document

Country Status (2)

Country Link
US (1) US20220083724A1 (en)
CN (5) CN113312884B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11615231B1 (en) * 2022-02-15 2023-03-28 Atlassian Pty Ltd. System for generating outline navigational interface for native mobile browser applications

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2642701A (en) * 2000-03-10 2001-09-13 Ezylaw Pty Ltd System for automated generation of professional documents
DE10162155A1 (en) * 2000-12-18 2002-07-25 Siemens Corp Res Inc Automated document generation system for production of structured documents from information held in a database, e.g. creation of SGML documents using a document type definition to structure information to a created template
CN103389970A (en) * 2012-05-08 2013-11-13 北京华宇软件股份有限公司 Real-time learning-based auxiliary word writing system and method
CN107368546A (en) * 2017-06-28 2017-11-21 武汉斗鱼网络科技有限公司 A kind of method and apparatus for generating outline
CN108369578A (en) * 2016-02-01 2018-08-03 微软技术许可有限责任公司 Automatic moulding plate based on previous document generates
CN110287785A (en) * 2019-05-20 2019-09-27 深圳壹账通智能科技有限公司 Text structure information extracting method, server and storage medium
CN111046645A (en) * 2019-12-11 2020-04-21 浙江大搜车软件技术有限公司 Method and device for generating article, computer equipment and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8918709B2 (en) * 2009-05-29 2014-12-23 Microsoft Corporation Object templates for data-driven applications
KR101243057B1 (en) * 2012-11-23 2013-03-26 한국과학기술정보연구원 An automated input system and method for producing xml full-text of journal articles
US10467295B1 (en) * 2014-07-31 2019-11-05 Open Text Corporation Binding traits to case nodes
JP6162909B2 (en) * 2015-07-31 2017-07-12 楽天株式会社 Tree structure data editing device, tree structure data editing method, and program
CN105677764B (en) * 2015-12-30 2020-05-08 百度在线网络技术(北京)有限公司 Information extraction method and device
CN106649223A (en) * 2016-12-23 2017-05-10 北京文因互联科技有限公司 Financial report automatic generation method based on natural language processing
CN109190098A (en) * 2018-08-15 2019-01-11 上海唯识律简信息科技有限公司 A kind of document automatic creation method and system based on natural language processing
CN110852044B (en) * 2018-08-20 2023-09-15 上海颐为网络科技有限公司 Text editing method and system based on structuring
CN111159982B (en) * 2019-12-24 2023-05-16 中信银行股份有限公司 Document editing method, device, electronic equipment and computer readable storage medium
CN111488743A (en) * 2020-04-10 2020-08-04 苏州七星天专利运营管理有限责任公司 Text auxiliary processing method and system

Also Published As

Publication number Publication date
CN113221516A (en) 2021-08-06
CN113255303B (en) 2022-03-25
CN112084753B (en) 2021-06-29
CN113312884A (en) 2021-08-27
CN114186534A (en) 2022-03-15
CN113255303A (en) 2021-08-13
CN113312884B (en) 2022-02-08
CN113221516B (en) 2021-11-30
US20220083724A1 (en) 2022-03-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant