
Document editing method and device and electronic equipment

Info

Publication number: CN112668283B
Authority: CN (China)
Prior art keywords: task, target, information, text, user
Legal status: Active
Application number: CN202011522442.4A
Other languages: Chinese (zh)
Other versions: CN112668283A
Inventor: 曾清
Current Assignee: Beijing Zitiao Network Technology Co Ltd
Original Assignee: Beijing Zitiao Network Technology Co Ltd
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202011522442.4A
Publication of CN112668283A
Application granted
Publication of CN112668283B

Abstract

The embodiments of the present disclosure disclose a document editing method, a document editing apparatus, and an electronic device. A specific implementation of the method includes: in response to receiving a task addition instruction for adding a target task to target text in a text line of a document, adding a task identifier to the text line; in response to a selection operation on a target text line among the text lines, displaying a task addition control for adding a task, where the task addition control includes a task information interaction window; and generating the corresponding target task in the text line according to the task information entered by a first user in the task information interaction window and the target text. Users can thus create and manage target tasks directly within a document, which improves the user experience.

Description

Document editing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method and an apparatus for editing a document, and an electronic device.
Background
In office scenarios, users often receive a wide variety of tasks. These may include routine tasks such as weekly and monthly reports, as well as other tasks assigned on an ad hoc basis. Users typically manage these tasks by means of a task management application.
Disclosure of Invention
This Summary is provided to introduce concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The embodiments of the present disclosure provide a document editing method and apparatus and an electronic device.
In a first aspect, an embodiment of the present disclosure provides a document editing method, including: in response to receiving a task addition instruction for adding a target task to target text in a text line of a document, adding a task identifier to the text line; in response to a selection operation on a target text line among the text lines, displaying a task addition control for adding a task, where the task addition control includes a task information interaction window; and generating the corresponding target task in the text line according to the task information entered by a first user in the task information interaction window and the target text.
In a second aspect, an embodiment of the present disclosure provides a document editing apparatus, including: an identifier adding module, configured to add a task identifier to a text line in response to receiving a task addition instruction for adding a target task to target text in the text line of a document; a task adding module, configured to display, in response to a selection operation on a target text line among the text lines, a task addition control for adding a task, where the task addition control includes a task information interaction window; and a task generating module, configured to generate the corresponding target task in the text line according to the task information entered by the first user in the task information interaction window and the target text.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the document editing method of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium having a computer program stored thereon which, when executed by a processor, implements the steps of the document editing method described in the first aspect.
According to the document editing method and apparatus and the electronic device provided by the embodiments of the present disclosure, in response to receiving a task addition instruction for adding a target task to target text in a text line, a task identifier is added to the text line; in response to a selection operation on a target text line among the text lines, a task addition control for adding a task is displayed, where the task addition control includes a task information interaction window; and the corresponding target task is generated in the text line according to the task information entered by the first user in the task information interaction window and the target text. Users can thus create and manage target tasks within a document, which improves the user experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flow diagram of one embodiment of a document editing method according to the present disclosure;
FIG. 2 is a schematic diagram of an embodiment of a document editing apparatus according to the present disclosure;
FIG. 3 is an exemplary system architecture to which the document editing method of one embodiment of the present disclosure may be applied;
FIG. 4 is a schematic diagram of the basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will understand that they should be read as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict.
Referring to FIG. 1, which shows a flowchart of one embodiment of a document editing method according to the present disclosure, the document editing method includes the following steps 101 to 103.
Step 101: in response to receiving, in a document, a task addition instruction for adding a target task to target text in a text line, add a task identifier to the text line.
The document may be an electronic document, for example one edited in a text editor application, which may be edited and presented on multiple terminal devices via services provided by a server.
The terminal device can respond to a user's document acquisition request by displaying the document the user wants to browse on its display interface. The display interface may be a touch screen, in which case the user edits the document by touching the corresponding display area; or it may be a display-only interface, in which case the user selects the corresponding display area with a device such as a mouse electrically connected to the terminal device and edits the document with a keyboard electrically connected to the terminal device.
The document may include one text line or a plurality of text lines. Each text line may or may not contain text information.
The target text may be all the content included in the text line, or may be a part of the content included in the text line.
The user may select the target text in the text line and add the target task to it in the document by issuing the task addition instruction described above. For example, after selecting the target text, the user may click a preset button for adding the target task, such as an "add task" button; when the user's click on the preset button is detected, the task addition instruction is considered received.
After the task addition instruction is received, a task identifier may be added to the text line. Optionally, the task identifier may be a mark such as a box, a circle, or an underline, added at the head or tail of the text line or at some position within the text; optionally, the task identifier may instead present the text of the line in a new display style, such as highlighting. In essence, the task identifier is a style or mark that visually distinguishes the text of the target task from other text information.
In some application scenarios, if there are multiple text lines in a document, when a task addition instruction for adding a target task to a target text is received, a task identifier may be added to the text line where the target text is located. In this way, each text line may correspond to a respective task identifier.
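To make the flow of step 101 concrete, the sketch below models a document as a list of text lines and attaches a checkbox-style task identifier to the line containing the selected target text when the task addition instruction arrives. This is a minimal illustration only; all names (DocLine, addTaskIdentifier) are hypothetical, and the patent does not prescribe any concrete API.

```typescript
// Minimal sketch of step 101 (hypothetical model, not the claimed implementation).
interface DocLine {
  id: number;
  text: string;
  // The "task identifier": a mark or style distinguishing task text.
  taskMarker?: "checkbox" | "circle" | "underline" | "highlight";
}

// Invoked when the user selects target text and clicks the preset
// "add task" button, i.e. when the task addition instruction is received.
function addTaskIdentifier(lines: DocLine[], lineId: number): void {
  const line = lines.find((l) => l.id === lineId);
  if (!line) throw new Error(`no text line with id ${lineId}`);
  line.taskMarker = "checkbox"; // each task-bearing line gets its own identifier
}

const doc: DocLine[] = [{ id: 1, text: "complete the requirement design" }];
addTaskIdentifier(doc, 1); // line 1 now carries a task identifier
```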
Step 102: in response to a selection operation on a target text line among the text lines, display a task addition control for adding a task, where the task addition control includes a task information interaction window.
The selection operation may include, for example, hovering the mouse cursor over the target text line or over the target text within it.
After the selection operation is received, the task addition control can be displayed to the user. The task addition control may include a task information interaction window, through which task information such as personnel information or time information related to the target task may be added.
The user can enter the task information of the target task in the task information interaction window, realizing the human-machine interaction needed to add the target task. In some application scenarios, prompts such as "please input personnel information" or "please input time information" may be displayed in the task information interaction window to guide the user toward successfully adding the target task.
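Continuing the same hypothetical model, step 102 could be rendered as a hover handler that reveals the task addition control only for task-marked lines; the prompt strings are the ones quoted above, and everything else is an illustrative assumption.

```typescript
// Sketch of step 102: the selection operation (hover) reveals the control.
interface TaskInfoWindow {
  prompts: string[]; // e.g. "please input personnel information"
  personnel?: string;
  dueDate?: string;
}

interface TaskAddControl {
  lineId: number;
  window: TaskInfoWindow;
}

function onLineHover(lineId: number, hasTaskMarker: boolean): TaskAddControl | null {
  if (!hasTaskMarker) return null; // only lines with a task identifier react
  return {
    lineId,
    window: {
      prompts: ["please input personnel information", "please input time information"],
    },
  };
}
```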
Step 103: generate the corresponding target task in the text line according to the task information entered by the first user in the task information interaction window and the target text.
After the task information entered by the user in the task information interaction window is received, the corresponding target task can be generated in the text line from the task information and the target text. For example, after receiving "Zhang San" entered by the user in the task information interaction window, a target task executed by Zhang San can be generated in the text line in combination with the selected target text "complete the requirement design".
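Step 103 then merges whatever the user entered in the interaction window with the selected target text. A minimal sketch, again with invented names:

```typescript
// Sketch of step 103: combine entered task information with the target text.
interface TargetTask {
  title: string;     // from the target text, e.g. "complete the requirement design"
  assignee?: string; // task information, e.g. "Zhang San"
  dueDate?: string;
}

function generateTargetTask(
  targetText: string,
  info: { personnel?: string; dueDate?: string },
): TargetTask {
  return { title: targetText, assignee: info.personnel, dueDate: info.dueDate };
}

const task = generateTargetTask("complete the requirement design", { personnel: "Zhang San" });
// => { title: "complete the requirement design", assignee: "Zhang San" }
```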
In this embodiment, a task identifier is first added to a text line in response to a task addition instruction for adding a target task to target text in the text line; then, in response to a selection operation on a target text line among the text lines, a task addition control for adding a task is displayed, where the task addition control includes a task information interaction window; finally, the corresponding target task is generated in the text line according to the task information entered by the first user in the task information interaction window and the target text. Users can thus create and manage target tasks within a document, which improves the user experience.
In the related art, tasks assigned in different application scenarios often must each be managed in the management application corresponding to that scenario. For example, tasks recorded in a meeting are typically kept in a document management application; ad hoc tasks assigned through a communication tool are kept in that tool or copied one by one into a task management application; and tasks such as weekly and monthly work reports are often managed separately through a spreadsheet application. With this approach of one management application per kind of task, the user has to shuttle among multiple management applications to manage different tasks, which makes unified task management inconvenient and degrades the user experience.
In some application scenarios of this embodiment, multiple task addition instructions may be received in one document, so that multiple target tasks are generated in that document. The user can therefore manage multiple target tasks in the same document without jumping among several management applications, which improves the user experience.
In some optional implementations, the text line includes first target text having no content, and step 103 may include:
Step 1031a: receive an editing instruction from the first user on the first target text in the text line, and generate task description information.
In some application scenarios, the user may edit first target text in a text line that has no content yet. Specifically, the user may issue an editing instruction for the first target text, after which the information content edited by the user is received and task description information is generated from it. The task description information can be used to explain or annotate the target task. For example, if the user issues an editing instruction for the first target text and enters the content "time is tight, please handle with priority", task description information "time is tight, please handle with priority" describing the target task may be generated from that content.
Step 1032a: generate a corresponding target task in the text line according to the task information entered by the first user in the task information interaction window and the task description information.
After the task description information is generated, a corresponding target task may be generated in the text line according to the task information entered by the user and the task description information. For example, suppose the task information entered by the user indicates that the target task is executed starting October 10, 2020, and the task description information is "submit the current month's workload on the 10th of each month". A target task that submits the monthly workload on the 10th of each month, executed from October 10, 2020, may then be generated.
Through steps 1031a and 1032a, the user can flexibly describe the target task to be generated, so that others can better understand it.
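For this empty-line variant, the description the user types becomes part of the generated task. A hedged sketch of steps 1031a/1032a, using the example above:

```typescript
// Sketch of steps 1031a/1032a: description typed into an empty first target text.
interface DescribedTask {
  description: string; // task description information
  startDate?: string;  // task information from the interaction window
}

function generateFromDescription(
  editedContent: string,
  info: { startDate?: string },
): DescribedTask {
  if (editedContent.trim() === "") throw new Error("no task description entered");
  return { description: editedContent, startDate: info.startDate };
}

generateFromDescription(
  "submit the current month's workload on the 10th of each month",
  { startDate: "2020-10-10" },
);
```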
In some optional implementations, the text line includes second target text containing content, and step 103 may include: generating a corresponding target task in the text line according to the task information entered by the first user in the task information interaction window and the second target text.
In some application scenarios, the user may generate a corresponding target task in the text line according to second target text in which information content is already recorded and the task information entered by the user. For example, suppose the text line records text information whose content is "develop application A". The user may select that text information, which then serves as the second target text. When the task information entered by the user designates Zhang San as the follower, a target task for developing application A, followed up by Zhang San, can be generated.
In this way, the user can select the target text to which a target task is to be added, and the corresponding target task is generated once the corresponding task information has been entered. The user may thus record the relevant information (i.e., the second target text) before the target task exists, and add the corresponding target task for the second target text later when tidying up, which is well suited to capturing tasks received in meetings or ad hoc conversations.
In some optional implementations, the second target text includes an execution object identifier, and step 103 may include: generating, in the text line, a target task for the execution object corresponding to the execution object identifier, according to the task information entered by the first user in the task information interaction window and the second target text.
The execution object identifier may be used to identify the execution object that will execute the target task. It may include, for example, a combination of a preset symbol and a person's name. The preset symbol may be, for example, the "@" symbol, so the combination of the preset symbol and the person's name may be "@Zhang San"; here, "@Zhang San" may be regarded as the execution object identifier.
When it is detected that the second target text selected by the user includes an execution object identifier, a target task for the execution object corresponding to that identifier may be generated. For example, suppose the information content of the second target text includes "please draft the document as soon as possible @Zhang Si"; when the task information entered by the user designates Zhang Si as the execution object, a target task with Zhang Si as the related person can be generated.
In this way, the execution object corresponding to the execution object identifier contained in the second target text can automatically be set as the execution object of the target task. The user can thus tag the execution object with the execution object identifier before the target task is generated, avoiding forgetting to add the related person when the target task is generated later.
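Detecting an execution object identifier such as "@Zhang San" inside the second target text can be approximated with a pattern match; the regular expression below is purely an illustrative assumption (a real editor would more likely resolve mentions against a user directory).

```typescript
// Sketch: extract execution object identifiers ("@" + name) from target text.
function extractExecutionObjects(targetText: string): string[] {
  // "@" followed by a run of letters (including CJK); illustrative only.
  const mentionPattern = /@(\p{L}+)/gu;
  return [...targetText.matchAll(mentionPattern)].map((m) => m[1]);
}

extractExecutionObjects("please draft the document as soon as possible @ZhangSi");
// => ["ZhangSi"]
```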
In some optional implementations, the task information interaction window includes at least one task selection item, and step 103 may include:
Step 1031b: determine a target task selection item according to a selection operation performed by the first user on the at least one task selection item.
Some commonly used tasks may be preset in the task information interaction window, so that the window includes at least one task selection item when presented to the user. The task selection items may include commonly used items such as weekly reports, monthly reports, and quarterly reports.
The user may perform a selection operation, such as a click or a long press, on the at least one task selection item. When the selection operation is detected, the task selection item selected by the user can be determined as the target task selection item.
Step 1032b: generate a corresponding target task in the text line according to the target task selection item and the target text.
After the target task selection item is determined, a corresponding target task can be generated from the target task selection item and the target text. For example, when the user selects the monthly report as the target task selection item, the target text can be used as the monthly report content to generate the corresponding target task.
With the at least one task selection item, the user can independently pick a target task selection item and have the target task generated in combination with the target text, which streamlines the operation of adding a target task and is quick and convenient.
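Steps 1031b/1032b amount to choosing from a preset template list; a sketch under the assumption that the selection items are plain strings:

```typescript
// Sketch of steps 1031b/1032b: preset task selection items.
const taskSelectionItems = ["weekly report", "monthly report", "quarterly report"];

function generateFromSelection(selectedIndex: number, targetText: string) {
  const item = taskSelectionItems[selectedIndex];
  if (item === undefined) throw new Error("no such task selection item");
  return { kind: item, content: targetText }; // target text becomes the report content
}

generateFromSelection(1, "October workload summary");
// => { kind: "monthly report", content: "October workload summary" }
```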
In some optional implementations, the task information interaction window includes a first task execution object control, and step 103 may include:
Step 1031c: in response to receiving a selection operation performed by the first user on the first task execution object control, display a first execution object information input window for entering the execution object that will execute the target task.
In some application scenarios, the task information interaction window may include a first task execution object control, which can be used to add an execution object for the target task.
After a selection operation such as a click or a long press performed by the user on the first task execution object control is received, the user may be presented with a first execution object information input window for entering the execution object that will execute the target task. The user may then enter the information of that execution object in the window; this information may include, for example, the execution object's name, contact details, department, and so on.
Step 1032c: determine the target execution object that will execute the target task according to the input operation performed by the first user in the first execution object information input window.
After the user's input of the execution object information is received, the target execution object that will execute the target task may be determined from that information. For example, if the execution object information received from the user is "Zhang San", then "Zhang San" may be determined as the target execution object.
Step 1033c: generate, in the text line, a target task that includes the information of the target execution object.
After the target execution object is determined, a target task including its information may be generated in the text line. For example, after "Zhang San" is determined as the target execution object, a target task whose execution object is "Zhang San" may be generated in the text line.
Through steps 1031c to 1033c, the user can enter the information of the target execution object independently, and a target task including that information is then generated; the user can thus add execution objects while adding the target task, which is convenient and practical.
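The three sub-steps (show the input window, resolve the entered object, generate the task) could be chained as below; this is a hypothetical rendering of the flow, not the claimed implementation.

```typescript
// Sketch of steps 1031c to 1033c: attach an execution object to a new task.
interface ExecutionObject {
  name: string;
  department?: string; // name, contact details, department etc. per the disclosure
}

// Step 1032c: in practice the input would be checked against a personnel directory.
function determineTargetExecutor(input: string): ExecutionObject {
  return { name: input.trim() };
}

// Step 1033c: generate the task carrying the executor's information.
function generateTaskWithExecutor(targetText: string, executorInput: string) {
  return { title: targetText, executor: determineTargetExecutor(executorInput) };
}

generateTaskWithExecutor("complete the requirement design", "Zhang San");
```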
In some optional implementations, the document editing method further includes: in response to receiving a selection operation performed by the first user on the first task execution object control, adding a directional identifier to the text line.
The user can perform a selection operation, such as a long press or a click, on the first task execution object control. When the selection operation is detected, a directional identifier may be added to the text line. The directional identifier may be, for example, the "@" symbol.
Accordingly, step 1032c may include: associating the target execution object with the directional identifier in the text line to generate an execution object identifier.
After the execution object information entered by the user is received, the target execution object may be determined and then associated with the directional identifier in the text line; the association generates an execution object identifier. For example, when the user enters "Zhang San" as the name of the execution object, the execution object "Zhang San" may be associated with the directional identifier "@" to generate the execution object identifier "@Zhang San". The content of the text line changes accordingly: the original "please decide on a writer" becomes "please decide on a writer @Zhang San".
In this way, once the target execution object has been entered, its object information is added to the text line via the directional identifier described above.
In some optional implementations, the task information interaction window includes a first task execution time control, and step 103 may include:
Step 1031d: in response to receiving a selection operation performed by the first user on the first task execution time control, display a first task execution time information input window for entering the execution time of the target task.
In some application scenarios, the task information interaction window may include a first task execution time control, which can be used to add the execution time of the target task.
After a selection operation such as a click or a long press performed by the user on the first task execution time control is received, the user may be presented with a first task execution time information input window for entering the time at which the target task is to be executed. The user may then enter the corresponding execution time in the window.
Step 1032d: determine the target execution time for executing the target task according to the input operation performed by the first user in the first task execution time information input window.
After the user's input of the execution time information is received, that information may be used to determine the target execution time for executing the target task. For example, after the user enters the execution time "2020.12.12" in the first task execution time information input window, December 12, 2020 may be determined as the target execution time.
Step 1033d: generate, in the text line, a target task that includes the target execution time information.
After the target execution time is determined, a target task including it may be generated in the text line. For example, after December 12, 2020 is determined as the target execution time, a target task with that execution time may be generated in the text line.
Through steps 1031d to 1033d, the user can enter the information of the target execution time independently, and a target task including that information is then generated; the user can thus add the execution time while adding the target task, which is convenient and practical.
In some optional implementations, the first task execution time information input window includes a first time list containing at least one time selection item, and step 1032d may include: determining the target execution time selected by the first user according to a selection operation performed by the first user in the first time list.
In some application scenarios, a first time list may be provided to give the user at least one time selection item. The first time list may, for example, be a calendar: when the selection operation described above is detected, a calendar for entering the execution time of the target task may be displayed to the user.
The user may select an execution time for the target task in the first time list. When the user's selection is detected, the selected time may be determined as the target execution time. For example, if the user selects December 15, 2020 in the calendar, that date may be determined as the target execution time for executing the target task.
Displaying the first time list lets the user pick the target execution time directly from it, which speeds up adding the target execution time and is convenient and practical.
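Steps 1031d to 1033d plus the calendar variant reduce to picking a date from a list; in the sketch the calendar is flattened into an array of ISO date strings for brevity, which is an assumption of convenience.

```typescript
// Sketch of the first time list: the calendar simplified to date strings.
const firstTimeList: string[] = ["2020-12-12", "2020-12-15", "2020-12-20"];

function determineTargetExecutionTime(selectedIndex: number): Date {
  const picked = firstTimeList[selectedIndex];
  if (picked === undefined) throw new Error("no such time selection item");
  return new Date(picked); // becomes the target execution time of the task
}

determineTargetExecutionTime(1); // => Date for December 15, 2020
```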
In some optional implementations, the document editing method may further include an operation of adding an execution object to the target task after the target task is generated.
Step A: in response to receiving a first preset operation performed on the target task for adding an execution object, display a second task execution object control, where the second task execution object control includes a second execution object information input window.
In some application scenarios, the user may perform a first preset operation for adding an execution object on a target task that has already been created. The first preset operation may include, for example, entering the directional identifier "@" in the text line corresponding to the target task; when the user is detected to have entered the "@" symbol, the first preset operation is considered received.
After the first preset operation is received, the second task execution object control is shown to the user. The second task execution object control may include a second execution object information input window, in which the user can enter the information of the corresponding execution object.
Step B: determine the target execution object for executing the target task according to the input operation performed by the first user in the second execution object information input window.
After the execution object information entered by the user is received, the execution object indicated by that information may be determined as the target execution object for executing the target task. For example, if the user enters "Zhang San" after the "@" symbol, so that the entered content is "@Zhang San", Zhang San may be determined as the target execution object.
Step C: add the information of the target execution object in the display area corresponding to the target task.
In some application scenarios, the target task has been added in a target text line; the information of the target execution object may then be added in another display area of that line. For example, if the task name of the target task is displayed at the head of the target text line, the information of the target execution object may be added after the task name.
Through steps A to C, the user can add the corresponding target execution object to a target task after the task has been created. Even if the user cannot determine or add the corresponding target execution object when adding the target task, it can still be added afterwards, which is convenient and practical.
In some optional implementations, the second execution object information input window includes a second candidate execution object list containing the object information of at least one candidate execution object, and step B may include: determining the target execution object selected by the first user according to a selection operation performed by the first user in the second candidate execution object list.
In some application scenarios, the second candidate execution object list may be presented to the user, who can directly see the object information of the candidate execution objects in it.
The user may then perform a selection operation in the second candidate execution object list to choose the execution object that will execute the target task. When the user is detected to have performed a selection operation on the object information of a candidate execution object, the object indicated by that information may be determined as the target execution object for executing the target task.
In this way, even when the user does not have the exact object information of the target execution object at hand, the target execution object can still be determined as the execution object of the target task, while the input errors inherent in manual entry are avoided.
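Choosing from the second candidate execution object list avoids free-text typos; one plausible sketch filters the candidates by the fragment typed after the "@" symbol (the filtering rule is an assumption).

```typescript
// Sketch of the second candidate execution object list.
const candidates = [
  { name: "Zhang San", department: "R&D" },
  { name: "Li Si", department: "Design" },
];

// Narrow the list as the user types after "@".
function filterCandidates(fragment: string) {
  const needle = fragment.toLowerCase();
  return candidates.filter((c) => c.name.toLowerCase().includes(needle));
}

filterCandidates("zh"); // => [{ name: "Zhang San", department: "R&D" }]
```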
In some optional implementations, the document editing method may further include an operation of adding an execution time to the target task after the target task is generated.
Step a: in response to receiving a second preset operation performed on the target task for adding an execution time, display a second task execution time control, where the second task execution time control includes a second task execution time information input window.
The user may perform a second preset operation for adding an execution time on a target task whose creation is complete. The second preset operation may include, for example, a click on an "add execution time" button.
After the second preset operation is received, the second task execution time control may be presented to the user, who may then enter the corresponding execution time in the second task execution time information input window.
Step b: determine the target execution time for executing the target task according to the execution time input operation performed by the first user in the second task execution time information input window.
When the execution time information entered by the user is received, it may be determined as the target execution time for executing the target task. For example, upon receiving the execution time information "2020.12.15" entered by the user for target task A, December 15, 2020 may be determined as the target execution time for executing target task A.
Step c: add the time information of the target execution time in the display area corresponding to the target task.
In some application scenarios, the target task has been added in a target text line; the information of the target execution time may then be added in another display area of that line, for example at the end of the line.
Through steps a to c, the user can add the corresponding target execution time to a target task after the task has been created. Even if the user cannot determine or add the corresponding target execution time when adding the target task, it can still be added afterwards, which is convenient and practical.
In some optional implementations, the second task execution time information input window includes a second time list containing at least one time selection item, and step b may include: determining the target execution time selected by the first user according to a selection operation performed by the first user in the second time list.
In some application scenarios, the user may be presented with a second time list offering at least one time selection item. The second time list may, for example, also be a calendar.
The user may select the corresponding time for executing the target task in the second time list; after a time selection item is detected to have been selected by the user, the time corresponding to it may be determined as the target execution time.
Displaying the second time list lets the user pick the target execution time directly from it, which speeds up adding the target execution time and is convenient and practical.
In some optional implementations, the document editing method further includes: sending, to the target execution object, target task prompt information prompting it to view the target task.
After the target execution object is determined, prompt information may be sent to it so that it views the target task. For example, after Zhang San is determined as the target execution object, a prompt such as "please view target task A assigned to you" may be sent to Zhang San.
In some application scenarios, the prompt information may include related information of the target task, such as the position in the document of the text describing the target task, the task name, the task execution time, and so on.
In this way, the target execution object can promptly view the information of the target task it is to execute, and can then execute the target task in time.
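The prompt information described in the two preceding paragraphs might be carried in a payload like the one below; every field name here is invented for illustration.

```typescript
// Sketch of target task prompt information sent to the target execution object.
interface TaskPrompt {
  recipient: string;        // target execution object, e.g. "Zhang San"
  message: string;          // e.g. "please view target task A assigned to you"
  documentPosition: string; // where in the document the task text lives
  taskName: string;
  executionTime?: string;
}

function buildTaskPrompt(recipient: string, taskName: string, position: string): TaskPrompt {
  return {
    recipient,
    taskName,
    documentPosition: position,
    message: `please view the target task "${taskName}" assigned to you`,
  };
}
```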
In some optional implementations, the document editing method further includes: in response to receiving a viewing operation performed by the target execution object on the text line of the document, displaying the information of the target execution object in a first style.
After receiving the prompt information, the target execution object can view the target task in the document accordingly. When a viewing operation performed by the target execution object on the text line in the document is received, the information of the target execution object may be displayed in a first style. The first style may include, for example, highlighting the information of the target execution object, such as setting the area of the text line containing that information to a highly noticeable color like red or yellow.
In some application scenarios, the target execution object may be recognized by a user identity code. For example, the identity codes of execution objects may be stored in advance, and after the user determines the target execution object, the target identity code of the target execution object may be stored. When the identity code of an object viewing the document is detected to match the pre-stored target identity code, that object can be determined to be the target execution object, and the information of the target execution object may then be displayed in the first style.
Displaying the information of the target execution object in the first style lets the target execution object clearly and intuitively see the target task it is to execute.
In some optional implementations, the document editing method further includes: in response to receiving a viewing operation performed by the target execution object on the text line of the document, displaying the information of execution objects other than the target execution object in a second style.
In some application scenarios, when the target execution object views a text line, whether information of other execution objects exists may be detected. When information of other execution objects is detected in the text line, it may be displayed in a second style, which may differ from the first style described above. For example, if the first style sets the area of the text line containing the information of the target execution object to red, the second style may set the area containing the information of other execution objects to yellow.
In this way, the target execution object can quickly identify the other execution objects.
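Resolving the first and second styles from the viewer's identity could look like the sketch below. The identity-code comparison follows the pre-stored-code idea described above; the concrete colors are only the examples given in the text.

```typescript
// Sketch: style for a piece of execution-object info, by viewer identity.
type Style = "first" | "second" | "default"; // first e.g. red, second e.g. yellow

// viewerId: identity code of whoever opened the document.
// objectId: identity code of the execution object whose info is rendered.
// targetId: pre-stored identity code of the target execution object.
function styleFor(viewerId: string, objectId: string, targetId: string): Style {
  if (viewerId !== targetId) return "default"; // viewer is not the target executor
  return objectId === viewerId ? "first" : "second";
}
```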
In some optional implementations, the document editing method further includes step 1: sending reminder information, at the time indicated by the target execution time information, to the execution object executing the target task.
After the target execution time information is determined, the reminder information may be sent at the time it indicates. For example, after the target execution time information is determined to be October 10, 2020, reminder information may be sent to the execution object on that day, so that the execution object receives it at the corresponding time. The reminder information may include, for example, the name of the target task, the execution time, and so on.
Sending reminder information to the execution object helps it execute the target task on time and ensures that the target task proceeds as planned.
In some optional implementations, step 1 may include the following step 11, step 12, or step 13.
Step 11: in response to information of an execution object existing in the text line, send the reminder information to that execution object.
In some application scenarios, whether information of an execution object exists in the text line may be determined first. For example, whether information such as a person's name or employee number exists in the text line may be recognized to decide whether an execution object is present there.
When information of an execution object is determined to exist in the text line, the reminder information may be sent to the execution object indicated by that information.
Step 12: in response to no information of an execution object existing in the text line, send the reminder information to the owner of the document.
When no information of an execution object is determined to exist in the text line, the reminder information may be sent to the document owner, who may be, for example, the user who created the document.
Step 13: in response to no information of an execution object existing in the text line, send the reminder information to the editor of the target text.
Alternatively, when no information of an execution object exists in the text line, the reminder information may be sent to the editor of the target text. For example, if Li Si edits the target text in a document created by Zhang San, the reminder information may be sent to Li Si.
Through step 11, step 12, or step 13, the reminder information reaches the people related to the document, so that at least one of them learns in time that the execution time of the target task has arrived and the target task can proceed as planned.
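Steps 11 to 13 form a simple recipient fallback chain; since the text leaves open whether the fallback goes to the document owner (step 12) or the editor of the target text (step 13), the sketch exposes both as options.

```typescript
// Sketch of steps 11 to 13: decide who receives the reminder information.
interface TaskLine {
  executor?: string;     // execution object info found in the text line, if any
  documentOwner: string; // user who created the document
  textEditor: string;    // user who edited the target text
}

function reminderRecipient(line: TaskLine, fallback: "owner" | "editor"): string {
  if (line.executor) return line.executor; // step 11
  return fallback === "owner" ? line.documentOwner : line.textEditor; // step 12 or 13
}

reminderRecipient({ documentOwner: "Zhang San", textEditor: "Li Si" }, "editor"); // => "Li Si"
```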
In some optional implementations, the task execution time control includes a time limit reminder selection item and a reminder time information input window, and the reminder time indicated by the time limit reminder selection item is determined as follows: the reminder time is determined according to the reminder time information input operation performed by the user in the reminder time information input window.
In some application scenarios, the user may choose whether to send a time limit reminder to the execution object. Specifically, if the user decides to send a time limit reminder to the execution object, the user may perform a selection operation on the time limit reminder selection item; if not, the user simply does not select it.
In these application scenarios, if the user performs a selection operation on the time limit reminder selection item, a reminder time information input window may be presented. This window is used to enter the time at which the time limit reminder information should be sent. For example, October 10, 2020 may be entered in the reminder time information input window, so that the execution object receives the time limit reminder information at the corresponding time.
In these application scenarios, the user may also enter a specific time of day for sending the time limit reminder information. For example, after entering October 10, 2020, the user may further enter 10 a.m.; the reminder time is then determined to be 10 a.m. on October 10, 2020, and the time limit reminder information can be sent to the execution object at that moment.
After the user selects the time limit reminder selection item and enters the corresponding reminder time through the reminder time information input window, it becomes easy to set a reminder time for a target task that needs a time limit reminder, which improves the user experience.
In some optional implementations, the document editing method further includes: displaying the target execution time in the display area of the target task in the document, where the display color of the target execution time is determined according to the time difference between the current time and the target execution time, and different time differences correspond to different display colors.
The target execution time may thus be displayed in different colors in the document, with the color determined by the time difference between the current time and the target execution time. For example, when the time difference is 1 day, the display color of the target execution time may be red; when it is 3 days, gray; and when it is 5 days, white.
Determining the display color of the target execution time from the time difference between the current time and the target execution time lets the user intuitively grasp the urgency of the target task, and then schedule its execution reasonably according to that urgency.
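The color mapping is a plain lookup on the day difference. The thresholds below mirror the 1-, 3-, and 5-day examples in the text; everything outside those sample points falls to a neutral default, which is an assumption.

```typescript
// Sketch: display color of the target execution time by days remaining.
function executionTimeColor(now: Date, executionTime: Date): string {
  const msPerDay = 24 * 60 * 60 * 1000;
  const daysLeft = Math.ceil((executionTime.getTime() - now.getTime()) / msPerDay);
  if (daysLeft <= 1) return "red";  // imminent
  if (daysLeft <= 3) return "gray";
  if (daysLeft <= 5) return "white";
  return "default"; // beyond the sampled range (assumed)
}

executionTimeColor(new Date("2020-12-11"), new Date("2020-12-12")); // => "red"
```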
In some optional implementations, the document editing method further includes: updating the display state of the target task in response to receiving a trigger operation on the task identifier.
In some application scenarios, the user may update the display state of the target task through a trigger operation, which may include, for example, clicking the task identifier. For instance, when the task identifier is a hollow box indicating that the target task is in progress, the user may click it; the hollow box may then change into a solid box, or into a box of a different color, representing the updated state of the target task.
In this way, the user can update the display state of the target task according to the actual situation, marking the execution state of the target task with a simple and convenient operation.
In some optional implementations, updating the display state of the target task includes: adding a strikethrough to the text information corresponding to the target task.
In some application scenarios, after the user performs the trigger operation on the task identifier, a strikethrough may be added to the text information corresponding to the target task. In these scenarios, the target task whose text information carries the strikethrough may be regarded as a task that no longer needs to be executed, such as one that is completed or cancelled.
Adding a strikethrough to the text information corresponding to the target task lets the user quickly mark completed or cancelled tasks, making target task management convenient.
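Toggling the task identifier and adding the strikethrough can be combined into one state update. The two-state model below (in progress vs. done) is one plausible reading of the hollow-box/solid-box example, not the only one.

```typescript
// Sketch: a trigger operation on the task identifier updates the display state.
interface TaskDisplay {
  marker: "hollow-box" | "solid-box";
  strikethrough: boolean;
}

function onIdentifierClick(display: TaskDisplay): TaskDisplay {
  const done = display.marker === "hollow-box"; // toggle the state
  return {
    marker: done ? "solid-box" : "hollow-box",
    strikethrough: done, // completed/cancelled tasks get a strikethrough
  };
}

onIdentifierClick({ marker: "hollow-box", strikethrough: false });
// => { marker: "solid-box", strikethrough: true }
```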
Referring to FIG. 2, which shows a schematic structural diagram of an embodiment of a document editing apparatus according to the present disclosure, the document editing apparatus includes an identifier adding module 201, a task adding module 202, and a task generating module 203. The identifier adding module 201 is configured to add a task identifier to a text line in response to receiving a task addition instruction for adding a target task to target text in the text line of a document; the task adding module 202 is configured to display, in response to a selection operation on a target text line among the text lines, a task addition control for adding a task, where the task addition control includes a task information interaction window; and the task generating module 203 is configured to generate the corresponding target task in the text line according to the task information entered by the first user in the task information interaction window and the target text.
It should be noted that, for the specific processing of the identifier adding module 201, the task adding module 202, and the task generating module 203 of the document editing apparatus and its technical effects, reference may be made to the descriptions of steps 101 to 103 in the embodiment corresponding to FIG. 1, which are not repeated here.
In some optional implementations of this embodiment, the text line includes first target text having no content, and the task generating module 203 is further configured to: receive an editing instruction from the first user on the first target text in the text line and generate task description information; and generate a corresponding target task in the text line according to the task information entered by the first user in the task information interaction window and the task description information.
In some optional implementations of this embodiment, the text line includes second target text containing content, and the task generating module 203 is further configured to: generate a corresponding target task in the text line according to the task information entered by the first user in the task information interaction window and the second target text.
In some optional implementations of this embodiment, the second target text includes an execution object identifier, and the task generating module 203 is further configured to: generate, in the text line, a target task for the execution object corresponding to the execution object identifier, according to the task information entered by the first user in the task information interaction window and the second target text.
In some optional implementations of this embodiment, the task information interaction window includes at least one task selection item, and the task generating module 203 is further configured to: determine a target task selection item according to a selection operation performed by the first user on the at least one task selection item; and generate a corresponding target task in the text line according to the target task selection item and the target text.
In some optional implementations of this embodiment, the task information interaction window includes a first task execution object control, and the task generating module 203 is further configured to: display, in response to receiving a selection operation performed by the first user on the first task execution object control, a first execution object information input window for entering the execution object that will execute the target task; determine the target execution object for executing the target task according to the input operation performed by the first user in the first execution object information input window; and generate, in the text line, a target task including the information of the target execution object.
In some optional implementations of this embodiment, the document editing apparatus further includes an execution object identifier generating module, configured to add a directional identifier to the text line in response to receiving a selection operation performed by the first user on the first task execution object control; and the task generating module 203 is further configured to: associate the target execution object with the directional identifier in the text line to generate an execution object identifier.
In some optional implementations of this embodiment, the task information interaction window includes a first task execution time control, and the task generating module 203 is further configured to: display, in response to receiving a selection operation performed by the first user on the first task execution time control, a first task execution time information input window for entering the execution time of the target task; determine the target execution time for executing the target task according to the input operation performed by the first user in the first task execution time information input window; and generate, in the text line, a target task including the target execution time information.
In some optional implementations of this embodiment, the first task execution time information input window includes a first time list, the first time list including at least one time selection item; and the task generating module 203 is further configured to: and determining the target execution time selected by the first user according to the selection operation executed by the first user in the first time list.
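The time list selection can be sketched as a lookup from preset items to time offsets; the preset names and offsets below are assumptions, since the embodiment only requires that selecting an item yields the target execution time.

```typescript
type TimeSelectionItem = "today" | "tomorrow" | "next week";

// Resolve a selected item in the first time list to a concrete
// target execution time, relative to "now".
function resolveTimeSelection(
  item: TimeSelectionItem,
  now: Date = new Date()
): Date {
  const day = 24 * 60 * 60 * 1000;
  const offsets: Record<TimeSelectionItem, number> = {
    "today": 0,
    "tomorrow": day,
    "next week": 7 * day,
  };
  return new Date(now.getTime() + offsets[item]);
}

console.log(resolveTimeSelection("tomorrow").toDateString());
```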
In some optional implementations of this embodiment, the document editing apparatus further includes a prompt module configured to: send, to the target execution object, target task prompt information prompting the target execution object to view the target task.
In some optional implementations of this embodiment, the document editing apparatus further includes a first viewing module configured to: display information of the target execution object in a first style in response to receiving a viewing operation performed by the target execution object on the text line of the document.
In some optional implementations of this embodiment, the document editing apparatus further includes a second viewing module configured to: display information of execution objects other than the target execution object in a second style in response to receiving a viewing operation performed by the target execution object on the text line of the document.
In some optional implementations of this embodiment, the document editing apparatus further includes a reminding module configured to: send reminding information, at the time indicated by the target execution time information, to an execution object executing the target task.
In some optional implementations of this embodiment, the reminding module is further configured to: send reminding information to the execution object in response to information of the execution object being present in the text line; send reminding information to an owner of the document in response to no information of the execution object being present in the text line; or send reminding information to an editor of the target text in response to no information of the execution object being present in the text line.
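The recipient fallback logic can be sketched as follows; the `preferEditor` policy flag is an assumption used to model the two claimed fallback alternatives, and the type names are illustrative:

```typescript
interface TaskLineInfo {
  executionObject?: string; // present when the text line records one
  documentOwner: string;
  targetTextEditor: string; // the user who edited the target text
}

// Resolve who receives the reminder: the execution object when one is
// recorded in the text line; otherwise the document owner or the
// editor of the target text, depending on policy.
function reminderRecipient(line: TaskLineInfo, preferEditor = false): string {
  if (line.executionObject) {
    return line.executionObject;
  }
  return preferEditor ? line.targetTextEditor : line.documentOwner;
}

console.log(
  reminderRecipient({ documentOwner: "dana", targetTextEditor: "eve" })
); // -> "dana" (no execution object recorded, owner fallback)
```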
In some optional implementations of this embodiment, the task execution time control includes a time-limited reminder selection item and a reminder time information input window, and the reminder time indicated by the time-limited reminder selection item is determined as follows: the reminder time is determined according to a reminder time information input operation performed by the user in the reminder time information input window.
In some optional implementations of this embodiment, the document editing apparatus further includes a display module configured to: display the target execution time in a display area of the target task in the document, where the display color of the target execution time is determined according to the time difference between the current time and the target execution time, and different time differences correspond to different display colors of the target execution time.
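A minimal sketch of such a color mapping, with hypothetical thresholds and color names (the embodiment only requires that different time differences map to different colors):

```typescript
// Map the gap between "now" and the target execution time to a
// display color for the execution time shown in the task area.
function executionTimeColor(target: Date, now: Date = new Date()): string {
  const hoursLeft = (target.getTime() - now.getTime()) / 3600000;
  if (hoursLeft < 0) {
    return "red"; // overdue
  }
  if (hoursLeft < 24) {
    return "orange"; // due within a day
  }
  return "gray"; // comfortably in the future
}

const due = new Date(Date.now() + 2 * 3600000); // two hours from now
console.log(executionTimeColor(due)); // "orange"
```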
In some optional implementations of this embodiment, the document editing apparatus further includes an update module configured to: update the display state of the target task in response to receiving a triggering operation on the task identifier.
In some optional implementations of this embodiment, the update module is further configured to: add a deletion line (strikethrough) to the text information corresponding to the target task.
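The state update and deletion line can be sketched as a toggle; the combining-character strikethrough below is one possible rendering, chosen here only for illustration.

```typescript
interface DisplayedTask {
  text: string;
  done: boolean;
}

// Triggering the task identifier toggles the display state; a finished
// task gets a deletion line, modeled by appending the Unicode
// combining long-stroke-overlay character to each character.
function toggleTaskState(task: DisplayedTask): DisplayedTask {
  const done = !task.done;
  const text = done
    ? [...task.text].map((c) => c + "\u0336").join("")
    : task.text.split("\u0336").join("");
  return { text, done };
}

console.log(toggleTaskState({ text: "Ship the build", done: false }));
```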
Referring to FIG. 3, an exemplary system architecture is shown in which the document editing method of one embodiment of the present disclosure may be applied.
As shown in FIG. 3, the system architecture may include terminal devices 301, 302, 303, a network 304, and a server 305. The network 304 serves as a medium for providing communication links between the terminal devices 301, 302, 303 and the server 305. The network 304 may include various connection types, such as wired or wireless communication links, or fiber optic cables. The terminal devices and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The terminal devices 301, 302, 303 may interact with a server 305 over a network 304 to receive or send messages or the like. The terminal devices 301, 302, 303 may have various client applications installed thereon, such as a video distribution application, a search-type application, and a news-information-type application.
The terminal devices 301, 302, 303 may be hardware or software. When the terminal devices 301, 302, 303 are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like. When the terminal devices 301, 302, 303 are software, they may be installed in the electronic devices listed above and implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. This is not specifically limited herein.
The server 305 may be a server that provides various services, for example, a server that receives a document acquisition request transmitted by the terminal devices 301, 302, 303, analyzes the request, and transmits the processing result (e.g., the document data corresponding to the request) back to the terminal devices 301, 302, 303.
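As a rough illustration of this request/response role, the following sketch uses Node's built-in HTTP module; the route shape, JSON payload, and in-memory document store are all assumptions, not part of the disclosure.

```typescript
import { createServer } from "node:http";

// An assumed in-memory document store standing in for server 305's data.
const documents: Record<string, string> = {
  "doc-1": "Collect Q4 metrics",
};

// Receive a document acquisition request, analyze it (here: parse the
// "id" query parameter), and return the corresponding document data.
createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const id = url.searchParams.get("id");
  res.setHeader("Content-Type", "application/json");
  if (id !== null && id in documents) {
    res.statusCode = 200;
    res.end(JSON.stringify({ id, content: documents[id] }));
  } else {
    res.statusCode = 404;
    res.end(JSON.stringify({ error: "document not found" }));
  }
}).listen(3000); // a terminal device would request e.g. /?id=doc-1
```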
It should be noted that the document editing method provided by the embodiment of the present disclosure may be executed by a server or a terminal device, and accordingly, the document editing apparatus may be disposed in the server or the terminal device.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 3 are merely illustrative. There may be any number of terminal devices, networks, and servers, as required by the implementation.
Referring now to FIG. 4, there is shown a schematic diagram of an electronic device (e.g., the server or a terminal device of FIG. 3) suitable for implementing embodiments of the present disclosure. The electronic device shown in FIG. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 4, the electronic device may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. Various programs and data necessary for the operation of the electronic device are also stored in the RAM 403. The processing device 401, the ROM 402, and the RAM 403 are connected to one another via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While FIG. 4 illustrates an electronic device having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to receiving a task adding instruction for adding a target task to target text in a text line in a document, adding a task identifier in the text line; responding to the selection operation of a target text line in the text lines, and displaying a task adding control for adding a task, wherein the task adding control comprises a task information interaction window; and generating the corresponding target task in the text line according to the task information and the target text input by the first user in the task information interaction window.
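Read together, the three enumerated steps could be exercised by a sketch like the following, in which every type and helper name is hypothetical and the checkbox-style marker merely stands in for the task identifier:

```typescript
interface EditableDocument {
  lines: string[];
}

// Step 1: a task adding instruction marks the text line.
function addTaskIdentifier(doc: EditableDocument, index: number): void {
  doc.lines[index] = "[ ] " + doc.lines[index];
}

// Step 3: combine the target text with the collected task information.
function generateTask(
  doc: EditableDocument,
  index: number,
  info: { assignee?: string; due?: Date }
): string {
  const text = doc.lines[index].replace(/^\[ \] /, "");
  return (
    `task: "${text}"` +
    (info.assignee ? ` -> ${info.assignee}` : "") +
    (info.due ? ` by ${info.due.toDateString()}` : "")
  );
}

const doc: EditableDocument = { lines: ["Collect Q4 metrics"] };
addTaskIdentifier(doc, 0);
// Step 2 (simulated): the task information interaction window yields:
const taskInfo = { assignee: "carol" };
console.log(generateTask(doc, 0, taskInfo));
```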
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a module does not, in some cases, constitute a limitation on the unit itself; for example, the identifier adding module 201 may also be described as "a module that adds a task identifier in a text line in response to receiving a task adding instruction in a document to add a target task to target text in the text line".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (21)

1. A document editing method, comprising:
in response to receiving a task adding instruction for adding a target task to target text in a text line in a document, adding a task identifier in the text line;
in response to a selection operation on a target text line in the text lines, displaying a task adding control for adding a task, wherein the task adding control comprises a task information interaction window; and
generating the corresponding target task in the text line according to the task information input by the first user in the task information interaction window and the target text.
2. The method of claim 1, wherein the line of text comprises a first target text without content, and
the generating of the corresponding target task in the text line according to the task information input by the first user in the task information interaction window and the target text comprises:
receiving an editing instruction from the first user on the first target text in the text line and generating task description information; and
generating a corresponding target task in the text line according to the task information input by the first user in the task information interaction window and the task description information.
3. The method of claim 1, wherein the line of text comprises a second target text comprising content, and
the generating of the corresponding target task in the text line according to the task information input by the first user in the task information interaction window and the target text comprises:
generating a corresponding target task in the text line according to the task information input by the first user in the task information interaction window and the second target text.
4. The method of claim 3, wherein the second target text includes an execution object identifier; and
the generating of the corresponding target task in the text line according to the task information input by the first user in the task information interaction window and the target text comprises:
generating, in the text line, a target task for the execution object corresponding to the execution object identifier according to the task information input by the first user in the task information interaction window and the second target text.
5. The method of claim 1, wherein the task information interaction window includes at least one task selection item; and
the generating of the corresponding target task in the text line according to the task information input by the first user in the task information interaction window and the target text comprises:
determining a target task selection item according to a selection operation performed by the first user on the at least one task selection item; and
generating a corresponding target task in the text line according to the target task selection item and the target text.
6. The method of claim 1, wherein the task information interaction window comprises a first task execution object control; and
the generating of the corresponding target task in the text line according to the task information input by the first user in the task information interaction window and the target text comprises:
in response to receiving a selection operation performed by the first user on the first task execution object control, displaying a first execution object information input window for inputting an execution object to execute the target task;
determining a target execution object for executing the target task according to an input operation of execution object information performed by the first user in the first execution object information input window; and
generating, in the text line, a target task including information of the target execution object.
7. The method of claim 6, wherein the method further comprises:
in response to receiving a selection operation performed by the first user on the first task execution object control, adding a directional identifier in the text line; and
the determining of a target execution object for executing the target task according to an input operation of execution object information performed by the first user in the first execution object information input window comprises:
associating the target execution object with the directional identifier in the text line to generate an execution object identifier.
8. The method of claim 1, wherein the task information interaction window comprises a first task execution time control; and
the generating of the corresponding target task in the text line according to the task information input by the first user in the task information interaction window and the target text comprises:
in response to receiving a selection operation performed by the first user on the first task execution time control, displaying a first task execution time information input window for inputting an execution time for executing the target task;
determining a target execution time for executing the target task according to an input operation of execution time information performed by the first user in the first task execution time information input window; and
generating, in the text line, a target task including the target execution time information.
9. The method of claim 8, wherein the first task execution time information input window comprises a first time list comprising at least one time selection item; and
the determining of a target execution time for executing the target task according to an input operation of execution time information performed by the first user in the first task execution time information input window comprises:
determining the target execution time selected by the first user according to a selection operation performed by the first user in the first time list.
10. The method according to any one of claims 6-7, further comprising:
sending, to the target execution object, target task prompt information for prompting the target execution object to view the target task.
11. The method according to any one of claims 6-7, wherein the method further comprises:
in response to receiving a viewing operation performed by the target execution object on the text line of the document, displaying information of the target execution object in a first style.
12. The method according to any one of claims 6-7, further comprising:
in response to receiving a viewing operation performed by the target execution object on the text line of the document, displaying information of execution objects other than the target execution object in a second style.
13. The method of claim 8, further comprising:
sending reminding information, at the time indicated by the target execution time information, to an execution object executing the target task.
14. The method of claim 13, wherein the sending of reminding information to an execution object executing the target task at the time indicated by the target execution time information comprises:
sending reminding information to the execution object in response to information of the execution object being present in the text line;
sending reminding information to an owner of the document in response to no information of the execution object being present in the text line; or
sending reminding information to an editor of the target text in response to no information of the execution object being present in the text line.
15. The method of claim 13, wherein the task execution time control comprises a time-limited reminder selection item and a reminder time information input window, and the reminder time indicated by the time-limited reminder selection item is determined based on the following step:
determining the reminder time according to a reminder time information input operation performed by the user in the reminder time information input window.
16. The method of any of claims 8-9 or 13-14, wherein the method further comprises:
displaying the target execution time in a display area of the target task in the document, wherein a display color of the target execution time is determined according to a time difference between the current time and the target execution time, and different time differences correspond to different display colors of the target execution time.
17. The method of claim 1, wherein the method further comprises:
updating the display state of the target task in response to receiving a triggering operation on the task identifier.
18. The method of claim 17, wherein the updating of the display state of the target task comprises:
adding a deletion line (strikethrough) to the text information corresponding to the target task.
19. A document editing apparatus comprising:
an identifier adding module configured to add a task identifier in a text line in response to receiving a task adding instruction for adding a target task to target text in the text line in a document;
a task adding module configured to display, in response to a selection operation on a target text line in the text lines, a task adding control for adding a task, the task adding control comprising a task information interaction window; and
a task generation module configured to generate the corresponding target task in the text line according to the task information input by the first user in the task information interaction window and the target text.
20. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-18.
21. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-18.
CN202011522442.4A 2020-12-21 2020-12-21 Document editing method and device and electronic equipment Active CN112668283B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011522442.4A CN112668283B (en) 2020-12-21 2020-12-21 Document editing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011522442.4A CN112668283B (en) 2020-12-21 2020-12-21 Document editing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112668283A CN112668283A (en) 2021-04-16
CN112668283B true CN112668283B (en) 2022-05-27

Family

ID=75407124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011522442.4A Active CN112668283B (en) 2020-12-21 2020-12-21 Document editing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112668283B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115249009A (en) * 2021-04-26 2022-10-28 北京字跳网络技术有限公司 Information editing processing method, device, equipment and medium
CN113741756A (en) * 2021-09-16 2021-12-03 北京字跳网络技术有限公司 Information processing method, device, terminal and storage medium
CN114493541A (en) * 2022-02-09 2022-05-13 北京字跳网络技术有限公司 Task creation method, task creation apparatus, electronic device, storage medium, and program product

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7076546B1 (en) * 1999-02-10 2006-07-11 International Business Machines Corporation Browser for use in accessing hypertext documents in a multi-user computer environment
US8381088B2 (en) * 2010-06-22 2013-02-19 Microsoft Corporation Flagging, capturing and generating task list items
US20130124605A1 (en) * 2011-11-14 2013-05-16 Microsoft Corporation Aggregating and presenting tasks
US9436717B2 (en) * 2013-12-19 2016-09-06 Adobe Systems Incorporated Method and apparatus for managing calendar entries in a document
US9639184B2 (en) * 2015-03-19 2017-05-02 Apple Inc. Touch input cursor manipulation
US20180136829A1 (en) * 2016-11-11 2018-05-17 Microsoft Technology Licensing, Llc Correlation of tasks, documents, and communications
CN109858813A (en) * 2019-02-01 2019-06-07 广州影子科技有限公司 Cultivate task management method and device, cultivation task management equipment and system
CN111400488B (en) * 2020-03-19 2023-04-25 抖音视界有限公司 Online document information processing method and device, electronic equipment and readable medium

Also Published As

Publication number Publication date
CN112668283A (en) 2021-04-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant