CN117151048A - Information processing method and device and electronic equipment - Google Patents

Information processing method and device and electronic equipment

Info

Publication number
CN117151048A
CN117151048A CN202311118830.XA
Authority
CN
China
Prior art keywords
document
editing
information
request information
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311118830.XA
Other languages
Chinese (zh)
Inventor
林梓欢
周王鹏
邹安宁
李林飞
胡昱松
黎一山
林广华
孟潇
陈思
刘金伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202311118830.XA
Publication of CN117151048A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • G06F40/186Templates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present disclosure provide an information processing method, an information processing apparatus, and an electronic device, wherein the method comprises the following steps: acquiring, according to received editing request information for editing a document, scene information corresponding to the editing request information; determining an editing object of the document corresponding to the editing request information based on the scene information; and sending the editing request information and the editing object to a natural language processing model.

Description

Information processing method and device and electronic equipment
Technical Field
The embodiments of the present disclosure relate to the field of computer and Internet technology, and in particular to an information processing method, an information processing apparatus, and an electronic device.
Background
The user may edit text content using a client that supports text content editing, for example editing an online document in an open online document editing client.
In text content editing, text content is typically input by a user, and the client supporting text content editing generates a document according to the text content input by the user.
Disclosure of Invention
The embodiment of the disclosure provides an information processing method, an information processing device and electronic equipment.
In a first aspect, an embodiment of the present disclosure provides an information processing method, including: acquiring, according to received editing request information for editing a document, scene information corresponding to the editing request information; determining an editing object of the document corresponding to the editing request information based on the scene information; and sending the editing request information and the editing object to a natural language processing model.
In a second aspect, an embodiment of the present disclosure provides an information processing apparatus including: the acquisition unit is used for acquiring scene information corresponding to the editing request information according to the received editing request information for editing the document; a determining unit configured to determine an editing object of the document corresponding to the editing request information based on the scene information; and the sending unit is used for sending the editing request information and the editing object to a natural language processing model.
In a third aspect, embodiments of the present disclosure provide a computer-readable storage medium having computer-executable instructions stored therein which, when executed by a processor, implement the method of the first aspect above and its various possible implementations.
In a fourth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the method of the first aspect above and its various possible implementations.
According to the information processing method, the information processing apparatus, and the electronic device of the embodiments of the present disclosure, scene information corresponding to received editing request information for editing a document is acquired; an editing object of the document corresponding to the editing request information is determined based on the scene information; and the editing request information and the editing object are sent to the natural language processing model. This simplifies the user operations required when editing a document with an auxiliary editing tool and can improve the efficiency of editing the document.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present disclosure, and that a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of an information processing method provided by the present disclosure;
FIG. 2 is a schematic diagram of an application scenario;
FIG. 3 is a schematic flow chart of an information processing method provided by the present disclosure;
FIG. 4 is a schematic diagram of an application scenario;
FIG. 5 is a schematic block diagram of an information processing apparatus provided by the present disclosure;
fig. 6 is a schematic hardware structure of an electronic device according to an embodiment of the disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this disclosure without inventive effort fall within the scope of this disclosure.
To improve the efficiency of document editing, an example of the present disclosure provides an auxiliary editing tool with which a user can edit the content of a document.
In the above example, the editing functions that may be implemented with the auxiliary editing tool include providing a composition template for a given topic, extracting an outline from a document, generating a summary of the document content, rewriting content, expanding content, abridging content, and the like.
However, for some editing functions of the auxiliary editing tool (such as generating a summary of text content, rewriting text content, expanding text content, or abridging text content), the content to be edited must first be provided to the auxiliary editing tool, which can then perform auxiliary editing on it. Specifically, the user selects the content to be edited, the content is sent to the auxiliary editing tool, and the auxiliary editing tool performs the editing operation on the selected content to generate an editing result.
In the above process, the user must manually select the content to be edited and then perform an operation that sends the selected content to the auxiliary editing tool. The process therefore requires many user operations, which hinders improving the efficiency of editing text content with the auxiliary editing tool.
According to the solution provided by the present disclosure, when editing request information for editing a document is received, the editing object of the document is determined from the scene information so that the auxiliary editing tool can edit that object. This simplifies the operations involved in text editing with the auxiliary editing tool and can improve text editing efficiency.
Referring to fig. 1, a schematic flowchart of an information processing method provided in the present disclosure is shown. As shown in fig. 1, the method includes:
s101: and acquiring scene information corresponding to the editing request information according to the received editing request information for editing the document.
In the present embodiment, the execution subject of the information processing method may be a client for performing an auxiliary text editing operation, such as an auxiliary editing tool client. The client may be a client that operates independently, or may be integrated with other application clients, for example, as a component of the other application clients. The auxiliary editing tool client may be communicatively coupled to the natural language processing model.
As one implementation, a user initiates an operation invoking the auxiliary editing tool in a client that presents document information. After the invocation operation is received, an auxiliary editing tool window can be displayed, in which the user can input the editing request information. The editing request information requests the auxiliary editing tool to perform an editing operation on the document.
The scene information corresponding to the editing request information describes the context in which the editing request information is received. After receiving the editing request information, the execution subject may acquire the scene information corresponding to it.
In one example, the execution body is an integrated client integrated with an application client that presents document information, and the execution body may directly obtain scene information from the application client.
In one example, the execution body is an independent client independent of the application client for displaying the document information, and the execution body may implement communication connection with the application client for displaying the document information through a preset interface. The execution body can acquire the scene information from the client for displaying the document information through a preset interface.
The above scene information includes, but is not limited to: application information of an application displaying document information, presentation carrier information of document content, event information related to a document, document content of a document. The application information of the application displaying the document information in the above-described scene information indicates in what application the above-described editing request information is received. And the display carrier information of the document content in the scene information is used for indicating the information of the display carrier of the document when the editing request information is received. The display carrier comprises a floating window or a popup window and the like.
Application clients that expose the above document information include, but are not limited to: an instant messaging application client and a document application client. The document application client comprises an online document editing application client, an online form editing application client, a mail client, a multimedia conference client and the like.
In the above-described instant messaging application client, the document information may be displayed in a session of the instant messaging application.
The session may be with any contact object. The above-described document information displayed in the session may illustratively include a document title, a document link, and the like.
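The scene information fields described above can be modeled as a simple record. The following is a minimal sketch; all field and class names are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional


# Hypothetical container for the scene information described above:
# application info, presentation carrier, document-related events,
# and (optionally) the document content itself.
@dataclass
class SceneInfo:
    app_id: str                                   # application displaying the document info
    carrier: str                                  # presentation carrier, e.g. "floating_window" or "popup"
    events: List[dict] = field(default_factory=list)  # document-related event records
    document_content: Optional[str] = None        # document content, if available


# Example: a request received in a popup of an instant-messaging client.
info = SceneInfo(app_id="im_client", carrier="popup")
```

A record like this lets the execution subject pass all context to the object-determination step in one value.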
S102: and determining an editing object of the document corresponding to the editing request information based on the scene information.
The above scene information includes, but is not limited to: application information of an application displaying document information, presentation carrier information of document content, event information related to a document, document content of a document.
The execution subject may determine the editing object corresponding to the editing request information based on the scene information.
The determined editing object includes one of: document links, at least part of the content of the document.
The determined editing object of the document may differ depending on the scene information, so the auxiliary editing tool can determine the editing object in different scenes. This reduces the operations in which the user selects the editing object and sends it to the auxiliary editing tool.
S103: and sending the editing request information and the editing object to the natural language processing model.
The execution body sends the editing object and the editing request information to the natural language processing model so that the natural language processing model executes corresponding editing operation on the editing object according to the editing request information.
In the embodiment, according to the received editing request information for editing the document, scene information corresponding to the editing request information is obtained; determining an editing object of a document corresponding to the editing request information based on the scene information; and sending the editing request information and the editing object to the natural language processing model, so that the user operation when the document is edited by using the auxiliary editing tool can be simplified, and the efficiency of editing the document is improved.
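The three steps S101–S103 above can be sketched as a small pipeline. This is an illustrative sketch only; the callable names and signatures are assumptions, not the disclosed implementation:

```python
def handle_edit_request(request_info, get_scene_info, determine_object, nlp_send):
    """Sketch of the disclosed flow: S101 -> S102 -> S103."""
    # S101: acquire scene information for the received editing request.
    scene = get_scene_info(request_info)
    # S102: determine the editing object of the document from the scene information.
    editing_object = determine_object(request_info, scene)
    # S103: send the request and the editing object to the natural language processing model.
    return nlp_send(request_info, editing_object)


# Usage with stub callables standing in for the client and the model:
result = handle_edit_request(
    "generate summary of document",
    get_scene_info=lambda req: {"app": "doc_app", "content": "hello world"},
    determine_object=lambda req, scene: scene["content"],
    nlp_send=lambda req, obj: (req, obj),
)
```

The stubs show only the data flow; in the disclosure the scene information comes from the presenting client and the final call goes to the natural language processing model.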
In some embodiments, the step S102 includes the following steps:
in response to receiving the edit request information in a session of the instant messaging application, a link of the document is taken as an edit object of the document.
In these embodiments, a user initiates an edit request for editing a document in a session of an instant messaging application.
Referring to fig. 2, an application scenario is shown. As shown in fig. 2, a session details interface 202 of user a and contact user B is displayed in the instant messaging application interface 201. Document information of the document is displayed in the above-described session detail interface 202. The above-mentioned document information may be a title of the document, and may also be a link of the document, for example, a link of the document a. The links of the above documents are used to open the document. The user may perform a selection operation on the above-described document information and then perform a calling operation requesting a document auxiliary editing tool. After receiving the call operation, the auxiliary editing tool display window 203 may be displayed. The user may input an edit request in the auxiliary editing tool display window to request the auxiliary editing tool to perform a corresponding editing operation. Illustratively, the edit request may be "generate summary of document". After receiving the editing request information, the auxiliary editing tool can acquire scene information from the instant messaging application, wherein the scene information comprises an instant messaging application identifier, selection operation information and document information indicated by the selection operation. The execution body may determine that the editing object of the document is a link of the document a based on the scene information. The execution body may send the link and the edit request information of the document a to the natural language processing model to acquire the content of the document a through the link by the natural language processing model, and execute the editing operation indicated by the edit request information on the content.
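The instant-messaging branch above, where the document link (rather than the document body) becomes the editing object, can be sketched as follows. The dictionary keys are hypothetical:

```python
def editing_object_for_im(scene):
    """If the edit request arrived in an instant-messaging session, the
    selected document information carries a link rather than the document
    body, so the link itself is the editing object; the natural language
    processing model later fetches the content through the link."""
    if scene.get("app") == "instant_messaging":
        return {"type": "link", "value": scene["selected_document_link"]}
    return None  # other scenes are handled by other branches


obj = editing_object_for_im(
    {"app": "instant_messaging",
     "selected_document_link": "https://example.com/doc-a"}
)
```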
Referring to fig. 3, a schematic flow chart of the information processing method provided by the present disclosure is shown.
As shown in fig. 3, the method comprises the steps of:
s301: and acquiring scene information corresponding to the editing request information according to the received editing request information for editing the document.
In the present embodiment, the execution subject of the information processing method may be a client for performing an auxiliary text editing operation, such as an auxiliary editing tool client. The client may be a client that operates independently, or may be integrated with other application clients.
The specific implementation of step S301 may refer to step S101 in the embodiment shown in fig. 1, which is not described herein.
The above scene information includes, but is not limited to: application information of an application displaying document information, presentation carrier information of document content, event information related to a document, document content of a document.
S302: in response to receiving edit request information in a document displayed by a document application, it is detected whether a content selection operation is performed in the document.
The execution subject may search the scene information for the application information corresponding to the editing request information, and determine from that application information that the document information is displayed in a document application. The document information includes the document content.
That is, it may be determined that the document content is displayed in the document application based on the application information in the scene information.
The document application may be an application for editing an online document, an application for editing an online form, a mail application for editing mail content, or a multimedia conference application for recording conference content.
Upon determining that the editing request information is received in a document displayed by the document application, it can be detected whether the user has performed a content selection operation in the document.
For example, the event information included in the scene information is searched for a user selection operation event within a time window matching the editing request information. If no selection operation event is found, it is determined that the user did not perform a content selection operation. If a selection operation event is found within that window, whether the user performed a content selection operation can be determined from the region on which the selection operation acted: if that region is a blank region, it is determined that the user did not perform a content selection operation; otherwise, it is determined that the user performed a content selection operation.
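The selection-detection rule just described can be sketched as follows. The event-record keys and the time window are illustrative assumptions:

```python
def content_selection_performed(events, request_time, window=5.0):
    """Return True if a content selection event matches the edit request in
    time and did not act on a blank region (per the rule described above)."""
    for ev in events:
        if ev.get("type") != "selection":
            continue
        # Only consider events within the time window matching the request.
        if abs(ev["time"] - request_time) > window:
            continue
        # A selection acting on a blank region does not count as selecting content.
        if ev.get("region") == "blank":
            return False
        return True
    # No matching selection event: no content selection operation.
    return False


selected = content_selection_performed(
    [{"type": "selection", "time": 10.0, "region": "text"}], request_time=11.0
)
```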
S303: in response to not detecting that the content selection operation is performed in the document, it is determined whether a target location is indicated in the document.
If no content selection operation by the user is detected in the document, it may further be determined whether the user has indicated a target position in the document.
For example, the execution subject may determine whether a cursor event occurred within a time window matching the editing request information in the scene information. Alternatively, cursor event information within that window may be read from the operating system, and whether a cursor appeared in the document within the window is determined from the cursor event information read from the operating system.
If it is determined that a cursor appeared within the time window matching the editing request information, it is determined that the user has indicated a target position in the document.
Illustratively, the target position may be at or before the start of the document content, within the document content, or at or after the end of the document content.
If it is determined that no cursor appeared within the time window matching the editing request information, it is determined that the user has not indicated a target position in the document.
S304: an editing object of the document is determined based on the determination result.
The editing object of the document may be determined according to a determination result of determining whether the user indicates the target position in the document.
Different target locations may correspond to different editing objects of a document.
In some embodiments, the step S304 includes:
in response to determining that the user does not indicate the target position in the document, the entire content of the document is taken as an editing object.
For example, in the above document, a user may indicate a target position in the above document using a pointing device such as a mouse, a stylus, or the like, and a symbol indicating the current position, such as a cursor symbol such as an arrow or a vertical line, may be displayed at the above target position.
If it is not detected that the user indicates the target position in the document, the entire content of the document may be set as an editing object.
In these embodiments, if the user does not indicate a target position in the document, it can be assumed that the user intends to edit the entire content of the document. Therefore, the entire content of the document is taken as the editing object to meet the user's need, sparing the user the operation of selecting the entire content as the editing object.
In some embodiments, the step S304 includes:
in response to determining that the user indicated a target location in the document, an editing object is determined from the target location.
In these embodiments, a cursor event may be generated when a user indicates a target location in the above-described document using a pointing device, such as a mouse, stylus, or the like. The cursor event information may include location information where the cursor is located and time information at which the event occurred. In some application scenarios, the above scenario information includes cursor event information. It may be determined from the cursor event information whether the user has indicated a target location in the document. In some application scenarios, the execution body may also determine from the read cursor event information of the operating system whether the user indicates the target position in the document.
An editing object may be determined from the indicated target position.
The target positions indicated by the users are different, and the determined editing objects can also be different.
By determining the editing object according to the target position, it is possible to reduce the user from performing a selection operation of selecting the editing object in the document.
In some application scenarios, determining an editing object from a target location includes: if the target position is the start position or the end position, the whole content of the document is set as the editing object.
In these application scenarios, if the target position indicated by the user is the start or end position of the document, the user has not selected local content in the document as the editing object, which suggests a need to edit the entire content. The entire content of the document is therefore taken as the editing object, meeting the user's need to some extent and sparing the user the operation of selecting the entire content as the editing object.
In some application scenarios, determining an editing object from a target location includes: if the target position is located in the content of the document, the content of the document before or after the target position is taken as an editing object.
Referring to fig. 4, an application scenario is shown. As shown in fig. 4, the content of the document B is displayed in the document application display interface 401. The user indicates a target position in the content, for example, moves the cursor 402 to the target position of the content. The user then issues editing request information for editing the document to the auxiliary editing tool. The auxiliary editing tool may acquire the scene information after receiving the editing request information. The above-described scene information may include document application information and cursor event information. The execution body may determine a target position indicated by the user in the document based on the document application information and the cursor event information. The document content 403 before the target position or the document content 404 after the target position may be taken as an editing object.
Whether the document content before the target position or the content after it is used as the editing object may be configured by the user or by the server.
In these application scenarios, the user can determine the editing object simply by indicating a target position in the document content. This makes it convenient to use local content of the document as the editing object and can improve document editing efficiency.
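The target-position rules from S303/S304 and the scenarios above can be combined into one sketch: no position or a boundary position yields the whole document, while a position inside the content yields the text before or after it. Function name, the `direction` parameter, and character-offset positions are assumptions for illustration:

```python
def editing_object_from_position(content, target_pos, direction="before"):
    """Pick the editing object from an (optional) cursor position, per the
    rules described above."""
    # No target position indicated: the whole document is the editing object.
    if target_pos is None:
        return content
    # Cursor at the start or end of the content: also the whole document.
    if target_pos <= 0 or target_pos >= len(content):
        return content
    # Cursor inside the content: take the text before or after it
    # (which side is used may be configured by the user or the server).
    return content[:target_pos] if direction == "before" else content[target_pos:]
```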
S305: and sending the editing request information and the editing object to the natural language processing model.
The specific implementation of step S305 may refer to the description of the relevant parts of the embodiment shown in fig. 1, which is not repeated here.
Compared with the embodiment shown in fig. 1, this embodiment describes how to determine the editing object of the document when the editing request information is received while the document application is displaying the document. The editing object indicated by the editing request information can thus be determined quickly in the document application, improving the efficiency of auxiliary document editing.
In some embodiments, the scene information indicates that edit request information is received in a document displayed by the document application, and the above-described information processing method further includes:
in response to determining that a selection operation performed by a user in a document is received, determining target content selected by the selection operation, and taking the target content as an editing object.
In these embodiments, the selection operation event information may be determined among event information of the scene information. The event information may include, for example, start position and end position information corresponding to the selection operation. And then determining the target content selected by the selection operation according to the starting position information and the ending position information indicated by the selection operation event information. And then the target content is taken as an editing object.
In these embodiments, the editing object may be determined according to a user's selection operation, so that the editing object may be accurately determined according to the user's selection operation.
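The selection-event branch above, in which the start and end positions recorded in the event determine the target content, can be sketched as follows (event keys are hypothetical):

```python
def target_content_from_selection(document_content, selection_event):
    """The selection operation event records the start and end offsets of
    the user's selection; the substring between them is the editing object."""
    start = selection_event["start"]
    end = selection_event["end"]
    return document_content[start:end]


target = target_content_from_selection("hello world", {"start": 6, "end": 11})
```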
Fig. 5 is a block diagram of an information processing apparatus according to an embodiment of the present disclosure, corresponding to the information processing method of the embodiment shown in fig. 1 above. For ease of illustration, only portions relevant to embodiments of the present disclosure are shown. Referring to fig. 5, the apparatus 50 includes an acquisition unit 501, a determination unit 502, and a transmission unit 503.
Wherein,
an obtaining unit 501, configured to obtain, according to edit request information that is received and is used to perform an editing operation on a document, scene information corresponding to the edit request information;
a determining unit 502 configured to determine an editing object of a document corresponding to the editing request information based on the scene information;
a transmitting unit 503 for transmitting the editing request information and the editing object to the natural language processing model.
In some embodiments, the determining unit 502 is further configured to:
in response to receiving the edit request information in a session of the instant messaging application, a link of the document is taken as an edit object of the document.
In some embodiments, the determining unit 502 is further configured to:
detecting, in response to receiving the edit request information in a document displayed by a document application, whether a content selection operation is performed in the document; and determining, in response to not detecting that a content selection operation is performed in the document, whether a target location is indicated in the document;
and determining the editing object of the document according to the determination result.
In some embodiments, the determining unit 502 is further configured to:
in response to determining that the user does not indicate the target location in the document, the entire content of the document is taken as the editing object.
In some embodiments, the determining unit 502 is further configured to:
in response to determining that the user indicated a target location in the document, an editing object is determined from the target location.
In some embodiments, the determining unit 502 is further configured to:
if the target position is the document start position or end position, the whole content of the document is taken as an editing object.
In some embodiments, the determining unit 502 is further configured to:
if the target position is located within the content of the document, the content of the document before or after the target position is taken as the editing object.
In some embodiments, the scenario information indicates that edit request information is received in a document displayed by the document application, and the determining unit 502 is further configured to:
in response to determining that a selection operation performed by a user in a document is received, determining target content selected by the selection operation, and taking the target content as an editing object.
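A minimal Python sketch consolidating the determination rules described in the embodiments above (instant-messaging session, explicit selection, and target position). All dictionary keys are hypothetical names chosen for illustration, and "before the target position" is chosen where the disclosure permits either before or after:

```python
def determine_editing_object(scene: dict, document: dict) -> str:
    """Illustrative consolidation of the determination rules (field names assumed)."""
    if scene.get("source") == "im_session":
        # Request received in an instant-messaging session: use the document link.
        return document["link"]
    if scene.get("selection") is not None:
        # A content selection operation was performed: use the selected content.
        sel = scene["selection"]
        return document["content"][sel["start"]:sel["end"]]
    pos = scene.get("cursor")
    content = document["content"]
    if pos is None or pos == 0 or pos == len(content):
        # No target position indicated, or position at document start/end:
        # take the entire content as the editing object.
        return content
    # Target position inside the content: take the content before the position
    # (the disclosure also allows the content after it).
    return content[:pos]
```

Each branch corresponds to one of the scene cases enumerated above, so the editing object sent to the natural language processing model matches the user's context.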
In order to achieve the above embodiments, the embodiments of the present disclosure further provide an electronic device.
Referring to fig. 6, a schematic structural diagram of an electronic device 600 suitable for implementing embodiments of the present disclosure is shown. The electronic device 600 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet computer (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), and an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and fixed terminals such as a digital TV and a desktop computer. The electronic device shown in fig. 6 is merely an example and should not impose any limitation on the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 602 or a program loaded from a storage device 608 into a Random Access Memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a liquid crystal display (Liquid Crystal Display, LCD for short), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 601.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program (computer-executable instructions) for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN for short) or a wide area network (Wide Area Network, WAN for short), or it may be connected to an external computer (e.g., connected via the internet using an internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of a unit does not in some cases constitute a limitation of the unit itself; for example, the acquisition unit may also be described as "a unit for acquiring, according to received edit request information for performing an editing operation on a document, scene information corresponding to the edit request information".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of the features described above, and also covers other embodiments formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, embodiments formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (11)

1. An information processing method, comprising:
acquiring, according to received editing request information for performing an editing operation on a document, scene information corresponding to the editing request information;
determining an editing object of the document corresponding to the editing request information based on the scene information;
and sending the editing request information and the editing object to a natural language processing model.
2. The method according to claim 1, wherein the determining an editing object of the document corresponding to the editing request information based on the scene information includes:
and in response to receiving the editing request information in a session of the instant messaging application, taking the link of the document as an editing object of the document.
3. The method according to claim 1, wherein the determining an editing object of the document corresponding to the editing request information based on the scene information includes:
detecting whether a content selection operation is performed in a document in response to receiving the editing request information in the document displayed by the document application;
determining whether a target location is indicated in the document in response to not detecting that a content selection operation is performed in the document;
and determining the editing object of the document according to the determination result.
4. A method according to claim 3, wherein said determining an editing object of the document according to the determination result comprises:
in response to determining that the user does not indicate the target location in the document, the entire content of the document is taken as an editing object.
5. The method according to claim 3, wherein the determining the editing object of the document according to the determination result further comprises:
responsive to determining that a user has indicated a target location in the document, the editing object is determined in accordance with the target location.
6. The method of claim 5, wherein said determining said editing object from said target location comprises:
and if the target position is the document starting position or ending position, taking the whole content of the document as an editing object.
7. The method of claim 5, wherein said determining said editing object from said target location comprises:
and if the target position is located within the content of the document, taking the content of the document before or after the target position as the editing object.
8. A method according to claim 3, characterized in that the method further comprises:
in response to determining that a selection operation performed by a user in the document is received, determining target content selected by the selection operation, and taking the target content as an editing object.
9. An information processing apparatus comprising:
the acquisition unit is used for acquiring scene information corresponding to the editing request information according to the received editing request information for editing the document;
a determining unit configured to determine an editing object of the document corresponding to the editing request information based on the scene information;
and the sending unit is used for sending the editing request information and the editing object to a natural language processing model.
10. An electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the method of any one of claims 1 to 8.
11. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor implement the method of any one of claims 1 to 8.
CN202311118830.XA 2023-08-31 2023-08-31 Information processing method and device and electronic equipment Pending CN117151048A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311118830.XA CN117151048A (en) 2023-08-31 2023-08-31 Information processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN117151048A true CN117151048A (en) 2023-12-01

Family

ID=88904047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311118830.XA Pending CN117151048A (en) 2023-08-31 2023-08-31 Information processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN117151048A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination