CN115934229A - Operation method and device of objects in page, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115934229A
CN115934229A · CN202211569910.2A
Authority
CN
China
Prior art keywords
target
page
document
target page
information
Prior art date
Legal status
Pending
Application number
CN202211569910.2A
Other languages
Chinese (zh)
Inventor
陈宇旋
区钺坚
杨钦鹏
许凌
Current Assignee
Beijing Kingsoft Office Software Inc
Zhuhai Kingsoft Office Software Co Ltd
Wuhan Kingsoft Office Software Co Ltd
Original Assignee
Beijing Kingsoft Office Software Inc
Zhuhai Kingsoft Office Software Co Ltd
Wuhan Kingsoft Office Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingsoft Office Software Inc, Zhuhai Kingsoft Office Software Co Ltd, Wuhan Kingsoft Office Software Co Ltd filed Critical Beijing Kingsoft Office Software Inc
Priority to CN202211569910.2A
Publication of CN115934229A

Classifications

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application relate to a method and apparatus for operating an object in a page, an electronic device, and a storage medium. The method includes: presenting a target page; detecting an object operation performed on the target page; determining, based on operation information of the object operation, an operation object of the object operation from multiple layers of page objects located in the operation area of the object operation; and performing the object operation on the determined operation object. In this way, the operation information of an object operation performed on the target page is used to select a single operation object from the layered page objects in the operation area, so that the operation acts on the intended object even when several page objects overlap.

Description

Operation method and device of objects in page, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an operation method and apparatus for an object in a page, an electronic device, and a storage medium.
Background
Currently, page interaction is a common way for an electronic device and a user to interact, and the page objects presented (such as pictures, documents, and toolbars) are increasingly rich. Taking documents as an example, some documents need to be distinguished from others by adding a watermark, a security mark, or the like.
However, operating on page objects in the prior art is inconvenient in many respects; for example, operation efficiency or operation security is low. In particular, some page areas may contain multiple layers of page objects. For example, when a security mark is added to a document, the page area where the mark is located usually also contains other document objects (such as pictures and text). Because the same area contains multiple layers of page objects, it is hard to address the intended object precisely, so both the accuracy and the efficiency of operating on page objects are low. In addition, in the prior art, some document objects in a document are easily deleted or tampered with by a user, leaving the document with a security hole.
Therefore, how to improve the convenience of operating on page objects is a significant technical problem.
Disclosure of Invention
In view of this, to solve some or all of the above technical problems, embodiments of the present application provide a method and an apparatus for operating an object in a page, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present application provides an operation method for an object in a page, where the method includes:
presenting a target page;
detecting an object operation performed on the target page;
determining an operation object of the object operation from a multi-layer page object located in an operation area of the object operation based on the operation information of the object operation;
and executing the object operation on the operation object.
In one possible implementation, the determining, based on the operation information of the object operation, an operation object of the object operation from a multi-layered page object located in an operation area of the object operation includes:
generating discrimination information indicating whether the object operation is an object moving operation based on the operation information of the object operation;
and determining, based on the discrimination information, the operation object of the object operation from the multiple layers of page objects located in the operation area of the object operation.
In one possible embodiment, before the presenting the target page, the method further includes:
acquiring page data for presenting a target page; wherein the page data includes target object data; during the presentation of the target page, a target page object represented by the target object data is presented on the top layer of the target page; and
the presentation target page comprises:
presenting the target page based on the page data; and
the determining, based on the discrimination information, an operation object of the object operation from a multi-layered page object located in an operation area of the object operation includes:
determining that the operation object of the object operation is the target page object in a case where the discrimination information indicates that the object operation is an object moving operation;
and determining that the operation object of the object operation is a lower-layer object of the target page object in a case where the discrimination information does not indicate that the object operation is an object moving operation.
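The routing rule above can be sketched as follows. This is an illustrative Python sketch; the class and function names are ours, not from the application, and the patent does not prescribe any particular data model:

```python
from dataclasses import dataclass

@dataclass
class PageObject:
    name: str
    layer: int  # larger value = rendered closer to the viewer (top layer)

def determine_operation_object(objects_in_area, is_move_operation):
    """Route an object operation to one of the layered page objects.

    Per the embodiment: if the discrimination information says the
    operation is an object moving operation, the top-layer target page
    object is the operation object; otherwise the operation falls
    through to the layer below it.
    """
    ordered = sorted(objects_in_area, key=lambda o: o.layer, reverse=True)
    if is_move_operation or len(ordered) == 1:
        return ordered[0]   # top layer: the target page object
    return ordered[1]       # lower-layer object beneath the target
```

For example, with a security mark on layer 2 above text on layer 1, a move operation resolves to the mark while any other operation resolves to the text.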
In a possible implementation manner, in a case that an operation object of the object operation is the target page object, the performing the object operation on the operation object includes:
determining a moving direction and a moving distance of the object moving operation;
and moving the presentation position of the target page object in the target page to the moving direction by the moving distance so as to update the presentation position.
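Updating the presentation position by the detected direction and distance is a simple vector addition. The helper below is a hypothetical illustration, not part of the application:

```python
def move_presentation_position(position, direction, distance):
    """Shift a page object's (x, y) presentation position.

    `direction` is a unit vector for the detected moving direction and
    `distance` is the detected moving distance of the object moving
    operation; the returned pair is the updated presentation position.
    """
    x, y = position
    dx, dy = direction
    return (x + dx * distance, y + dy * distance)
```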
In a possible implementation manner, in a case that an operation object of the object operation is the lower-layer object, the performing the object operation on the operation object includes:
and transmitting the object operation detected by the target page object to the lower-layer object so as to enable the lower-layer object to execute the object operation.
In one possible embodiment, the object operation includes a mouse key pressing operation and a mouse key releasing operation performed by a target mouse, the mouse key pressing operation being the pressing operation that immediately precedes the mouse key releasing operation; and
the generating, based on the operation information of the object operation, discrimination information indicating whether the object operation is an object moving operation includes:
determining a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation based on the operation information of the object operation;
determining whether the pressed position and the released position indicate different positions;
in a case where the pressing position and the releasing position indicate different positions, determining whether a target movement operation performed by the target mouse is detected, where the execution time of the target movement operation lies between the execution time of the mouse key pressing operation and that of the mouse key releasing operation;
and in a case where the target movement operation is detected, generating discrimination information indicating that the object operation is an object moving operation.
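The press/release discrimination can be sketched as below. The event representation is an assumption of ours; the application only describes the decision logic:

```python
def is_object_moving_operation(events):
    """Decide whether a mouse event sequence is an object moving operation.

    `events` is a time-ordered list of (kind, position) tuples with kind
    in {"press", "move", "release"}. Per the embodiment, the operation
    is a move only if (a) the press and release positions differ, and
    (b) at least one movement event occurred between press and release.
    """
    press = next((i for i, (k, _) in enumerate(events) if k == "press"), None)
    release = next((i for i, (k, _) in enumerate(events) if k == "release"), None)
    if press is None or release is None:
        return False
    if events[press][1] == events[release][1]:
        return False  # same position: a plain click, not a move
    return any(k == "move" for k, _ in events[press + 1:release])
```

A drag (press at one point, move, release elsewhere) is classified as a move; a click in place is not.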
In one possible embodiment, the determining, based on the operation information of the object operation, a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation includes:
determining whether an intersection region exists between a target region and the operation region or not based on the operation information of the object operation, wherein the target region is a region where the target page object is located in the target page;
and under the condition that the intersection area exists between the target area and the operation area, determining a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation based on the operation information of the object operation.
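The intersection check can be illustrated with a minimal axis-aligned rectangle test; the application does not specify the region geometry, so the representation below is an assumption:

```python
def regions_intersect(a, b):
    """Return True if two rectangular regions overlap.

    Each region is (left, top, right, bottom) in page coordinates with
    y growing downward; used to decide whether the operation region
    touches the target region where the target page object is located.
    """
    return (a[0] < b[2] and b[0] < a[2] and
            a[1] < b[3] and b[1] < a[3])
```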
In one possible implementation, the target page object indicates that the target page is a classified page; and
the method further comprises the following steps:
detecting a printing operation for the target page;
in a case where the printing operation is detected, determining whether the target object data is printing related data of the target page, where a page object indicated by the printing related data is to be presented in a print result of the target page;
and in a case where the target object data is not the printing related data, printing the target page based on the target object data, to obtain a print result of the target page that carries the target page object.
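The print-time safeguard can be sketched with a hypothetical data model in which the printing related data is a set of object identifiers that will appear in the print result (our assumption, for illustration only):

```python
def prepare_print_data(print_related_ids, target_object_id):
    """Ensure the target page object (e.g. a security mark) will print.

    If the target object data is not yet part of the page's printing
    related data, new printing related data is generated that includes
    it, so the print result carries the target page object.
    """
    if target_object_id in print_related_ids:
        return set(print_related_ids)  # already printed as-is
    return set(print_related_ids) | {target_object_id}  # updated data
```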
In one possible embodiment, the printing the target page based on the target object data includes:
generating new printing related data of the target page based on the target object data so as to update the printing related data of the target page;
and printing the target page based on the updated printing related data.
In one possible embodiment, the printing the target page based on the target object data includes:
rendering the presentation content of the target page including the target page object;
and printing the target page according to the presentation content.
In a second aspect, an embodiment of the present application provides an apparatus for operating an object in a page, where the apparatus includes:
the first presentation unit is used for presenting a target page;
the first detection unit is used for detecting an object operation performed on the target page;
a first determination unit, configured to determine, based on operation information of the object operation, an operation object of the object operation from a multi-layered page object located in an operation area of the object operation;
and the operation unit is used for executing the object operation on the operation object.
In one possible implementation, the determining, based on the operation information of the object operation, an operation object of the object operation from a multi-layered page object located in an operation area of the object operation includes:
generating discrimination information indicating whether the object operation is an object moving operation based on the operation information of the object operation;
and determining, based on the discrimination information, the operation object of the object operation from the multiple layers of page objects located in the operation area of the object operation.
In one possible embodiment, before the presenting the target page, the apparatus further includes:
the acquisition unit is used for acquiring page data for presenting a target page; wherein the page data includes target object data; during the presentation of the target page, presenting a target page object represented by the target object data on the top layer of the target page; and
the presentation target page comprises:
presenting the target page based on the page data; and
the determining, based on the discrimination information, an operation object of the object operation from a multi-layer page object located in an operation area of the object operation includes:
determining that the operation object of the object operation is the target page object in a case where the discrimination information indicates that the object operation is an object moving operation;
and determining that the operation object of the object operation is a lower-layer object of the target page object in a case where the discrimination information does not indicate that the object operation is an object moving operation.
In a possible implementation manner, in a case that an operation object of the object operation is the target page object, the performing the object operation on the operation object includes:
determining a moving direction and a moving distance of the object moving operation;
and moving the presentation position of the target page object in the target page to the moving direction by the moving distance so as to update the presentation position.
In one possible implementation, in a case that an operation object of the object operation is the lower layer object, the performing the object operation on the operation object includes:
and transmitting the object operation detected by the target page object to the lower-layer object so as to enable the lower-layer object to execute the object operation.
In one possible embodiment, the object operation includes a mouse key pressing operation and a mouse key releasing operation performed by a target mouse, the mouse key pressing operation being the pressing operation that immediately precedes the mouse key releasing operation; and
the generating, based on the operation information of the object operation, discrimination information indicating whether the object operation is an object moving operation includes:
determining a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation based on the operation information of the object operation;
determining whether the pressed position and the released position indicate different positions;
in a case where the pressing position and the releasing position indicate different positions, determining whether a target movement operation performed by the target mouse is detected, where the execution time of the target movement operation lies between the execution time of the mouse key pressing operation and that of the mouse key releasing operation;
and in a case where the target movement operation is detected, generating discrimination information indicating that the object operation is an object moving operation.
In one possible embodiment, the determining, based on the operation information of the object operation, a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation includes:
determining whether an intersection area exists between a target area and the operation area or not based on the operation information of the object operation, wherein the target area is an area where the target page object is located in the target page;
and under the condition that the intersection area exists between the target area and the operation area, determining a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation based on the operation information of the object operation.
In one possible implementation, the target page object indicates that the target page is a classified page; and
the device further comprises:
a second detection unit configured to detect a printing operation for the target page;
a second determination unit, configured to determine, in a case where the printing operation is detected, whether the target object data is printing related data of the target page, where a page object indicated by the printing related data is to be presented in a print result of the target page;
a first printing unit, configured to print, in a case where the target object data is not the printing related data, the target page based on the target object data, to obtain a print result of the target page that carries the target page object.
In one possible embodiment, the printing the target page based on the target object data includes:
generating new printing related data of the target page based on the target object data so as to update the printing related data of the target page;
and printing the target page based on the updated printing related data.
In one possible embodiment, the printing the target page based on the target object data includes:
rendering the presentation content of the target page including the target page object;
and printing the target page according to the presentation content.
In a third aspect, an embodiment of the present application provides an operation method of an object in a document, where the method includes:
presenting the target document; wherein a target region of the target document presents a target document object;
detecting an object operation performed on the target document;
determining whether an intersection region exists between the target region and the operation region of the object operation;
and in a case where the intersection region exists between the target region and the operation region, determining, based on operation information of the object operation, whether the object operation is allowed to be performed with respect to the target document object.
In one possible implementation, after the determining whether to allow the object operation to be performed on the target document object based on the operation information of the object operation, the method further includes:
in a case where it is determined that the object operation is prohibited from being performed with respect to the target document object, determining, based on the operation information of the object operation, the operation object of the object operation from the multiple layers of document objects located in the operation area of the object operation.
In one possible implementation, the target document object indicates that the target document is a confidential document; and
the determining whether to allow the object operation to be performed on the target document object based on the operation information of the object operation comprises:
determining whether the object operation is an object moving operation based on the operation information of the object operation;
if the object operation is an object moving operation, allowing the object operation to be performed with respect to the target document object;
and if the object operation is not an object moving operation, prohibiting the object operation from being performed with respect to the target document object.
In one possible implementation, in a case where the object operation is allowed to be performed with respect to the target document object, the method further includes:
determining a moving direction and a moving distance of the object moving operation;
and moving the presentation position of the target document object in the target document to the moving direction by the moving distance so as to update the presentation position.
In one possible implementation, the target document object is presented on top of the target area; and
in the case where the object operation is prohibited from being performed with respect to the target document object, the method further includes:
in a case where a lower-layer object of the target document object is included in the operation area, transmitting the object operation detected by the target document object to the lower-layer object, so that the lower-layer object performs the object operation.
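Combining the allow/prohibit rule with this pass-through behavior gives a small dispatcher. The handler interface below is ours, for illustration only:

```python
def dispatch_mark_operation(is_move_operation, mark_handler, lower_handler, operation):
    """Dispatch an object operation detected on the confidential mark.

    A moving operation is allowed and handled by the mark itself; any
    other operation is prohibited for the mark and is transmitted to
    the lower-layer object, which performs it instead. This is what
    keeps the mark from being deleted or tampered with.
    """
    if is_move_operation:
        return mark_handler(operation)
    return lower_handler(operation)
```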
In one possible embodiment, the object operation includes a mouse key pressing operation and a mouse key releasing operation performed by a target mouse, the mouse key pressing operation being the pressing operation that immediately precedes the mouse key releasing operation; and
the determining whether the object operation is an object moving operation based on the operation information of the object operation includes:
determining a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation based on the operation information of the object operation;
determining whether the pressed position and the released position indicate different positions;
in a case where the pressing position and the releasing position indicate different positions, determining whether a target movement operation performed by the target mouse is detected, where the execution time of the target movement operation lies between the execution time of the mouse key pressing operation and that of the mouse key releasing operation;
and in a case where the target movement operation is detected, determining that the object operation is an object moving operation.
In one possible embodiment, the method further comprises:
detecting a printing operation for the target document;
in a case where the printing operation is detected, determining whether target object data used for presenting the target document object is printing related data of the target document, where a document object indicated by the printing related data is to be presented in a print result of the target document;
printing the target document based on the target object data to obtain a printing result of the target document with the target document object if the target object data is not the printing related data.
In one possible embodiment, the printing the target document based on the target object data includes:
generating new printing related data of the target document based on the target object data so as to update the printing related data of the target document;
and printing the target document based on the updated printing related data.
In one possible embodiment, the printing the target document based on the target object data includes:
rendering presentation content of the target document including the target document object;
and printing the target document according to the presentation content.
In a fourth aspect, an embodiment of the present application provides an apparatus for operating an object in a document, where the apparatus includes:
a second presentation unit for presenting the target document; wherein a target region of the target document presents a target document object;
a third detection unit configured to detect an object operation performed on the target document;
a third determination unit, configured to determine whether an intersection region exists between the target region and the operation region of the object operation;
a fourth determination unit configured to determine whether to allow the object operation to be performed on the target document object based on operation information of the object operation in a case where the intersection region exists between the target region and the operation region.
In one possible implementation manner, after the determining whether to allow the object operation to be performed on the target document object based on the operation information of the object operation, the apparatus further includes:
a seventh determining unit, configured to determine, in a case where it is determined that the object operation is prohibited from being performed with respect to the target document object, the operation object of the object operation from the multiple layers of document objects located in the operation area of the object operation, based on the operation information of the object operation.
In one possible implementation, the target document object indicates that the target document is a confidential document; and
the determining whether to allow the object operation to be executed for the target document object based on the operation information of the object operation includes:
determining whether the object operation is an object moving operation based on the operation information of the object operation;
if the object operation is an object moving operation, allowing the object operation to be performed with respect to the target document object;
and if the object operation is not an object moving operation, prohibiting the object operation from being performed with respect to the target document object.
In one possible implementation, in a case where the object operation is allowed to be performed with respect to the target document object, the apparatus further includes:
a fifth determination unit configured to determine a movement direction and a movement distance of the object movement operation;
and the moving unit is used for moving the presentation position of the target document object in the target document to the moving direction by the moving distance so as to update the presentation position.
In one possible implementation, the target document object is presented at the top level of the target area; and
in the case where the object operation is prohibited from being performed with respect to the target document object, the apparatus further includes:
an execution unit, configured to transmit, in a case where a lower-layer object of the target document object is included in the operation area, the object operation detected by the target document object to the lower-layer object, so that the lower-layer object performs the object operation.
In one possible embodiment, the object operation includes a mouse key pressing operation and a mouse key releasing operation performed by a target mouse, the mouse key pressing operation being the pressing operation that immediately precedes the mouse key releasing operation; and
the determining whether the object operation is an object moving operation based on the operation information of the object operation includes:
determining a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation based on the operation information of the object operation;
determining whether the pressed position and the released position indicate different positions;
in a case where the pressing position and the releasing position indicate different positions, determining whether a target movement operation performed by the target mouse is detected, where the execution time of the target movement operation lies between the execution time of the mouse key pressing operation and that of the mouse key releasing operation;
and in a case where the target movement operation is detected, determining that the object operation is an object moving operation.
In one possible embodiment, the apparatus further comprises:
a fourth detection unit configured to detect a printing operation for the target document;
a sixth determining unit, configured to determine, in a case where the printing operation is detected, whether target object data used for presenting the target document object is printing related data of the target document, where a document object indicated by the printing related data is to be presented in a print result of the target document;
a second printing unit, configured to print the target document based on the target object data to obtain a print result of the target document with the target document object if the target object data is not the print-related data.
In one possible embodiment, the printing the target document based on the target object data includes:
generating new printing related data of the target document based on the target object data so as to update the printing related data of the target document;
and printing the target document based on the updated printing related data.
In one possible embodiment, the printing the target document based on the target object data includes:
rendering presentation content of the target document including the target document object;
and printing the target document according to the presentation content.
In a fifth aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing a computer program;
a processor, configured to execute the computer program stored in the memory, and when the computer program is executed, implement the method according to any embodiment of the method for operating an object in a page according to the first aspect of the present application, or implement the method according to any embodiment of the method for operating an object in a document according to the third aspect of the present application.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements a method according to any one of the embodiments of the method for operating an object in a page according to the first aspect described above, or implements a method according to any one of the embodiments of the method for operating an object in a document according to the third aspect described above.
In a seventh aspect, an embodiment of the present application provides a computer program including computer readable code that, when run on a device, causes a processor in the device to implement the method according to any embodiment of the method for operating an object in a page in the first aspect, or the method according to any embodiment of the method for operating an object in a document in the third aspect.
The method for operating an object in a page provided by the embodiments of the present application can present a target page, detect an object operation performed on the target page, determine the operation object of the object operation from the multiple layers of page objects located in the operation area of the object operation based on the operation information of the object operation, and then perform the object operation on that operation object. In this way, the operation information is used to select a single operation object from the layered page objects in the operation area, so that the operation acts on the intended object even when page objects overlap, which improves the accuracy, efficiency, and convenience of operating on page objects.
The method for operating an object in a document according to the embodiments of the present application can present a target document in which a target region presents a target document object, detect an object operation performed on the target document, determine whether an intersection region exists between the target region and the operation region of the object operation, and finally, in a case where such an intersection region exists, determine whether the object operation is allowed to be performed with respect to the target document object based on the operation information of the object operation. In this way, the operation information decides whether an operation may act on the target document object, which improves both the security of document operations and the convenience of operating on document objects.
Drawings
Fig. 1 is a schematic flowchart of a method for operating an object in a page according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another method for operating an object in a page according to an embodiment of the present application;
fig. 3A to fig. 3C are schematic flowcharts of yet another method for operating an object in a page according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for operating an object in a document according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an apparatus for operating an object in a page according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an apparatus for operating an object in a document according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of parts and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
It is understood by those skilled in the art that the terms "first", "second", etc. in the embodiments of the present application are only used for distinguishing different steps, devices or modules, etc., and do not represent any particular technical meaning or logical order therebetween.
It should also be understood that in the present embodiment, "a plurality" may mean two or more, and "at least one" may mean one, two or more.
It should also be understood that any reference to any component, data, or structure in the embodiments of the present application may be generally understood as one or more, unless explicitly defined otherwise or stated to the contrary hereinafter.
In addition, the term "and/or" in this application merely describes an association relationship between associated objects, indicating that three relationships may exist. For example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" in this application generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the embodiments of the present application emphasizes the differences between the embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. For the purpose of facilitating an understanding of the embodiments of the present application, reference will now be made in detail to the present application, examples of which are illustrated in the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Fig. 1 is a schematic flowchart of an operation method of an object in a page according to an embodiment of the present application. As shown in fig. 1, the method specifically includes:
step 101, presenting a target page.
In this embodiment, the target page may include a multi-layer page object. The target page may include, but is not limited to, the following pages: pages in document editing software, pages in drawing software, and game pages.
By way of example, the target page may include a document, the bottom layer of the document may include document content such as text and pictures, and the top layer of the document may include a confidential identifier. The confidential identifier may be generated by writing text in a text box of the document, or obtained by drawing text or a picture in a transparent cover layer.
As yet another example, multiple overlapping page objects (e.g., multiple overlapping pictures) may be included in the target page. Since the plurality of page objects overlap each other, a multi-layered page object can be formed.
Step 102, detecting an object operation executed by the target page.
In this embodiment, the object operation may be an operation performed through the target page. The operation region of the object operation may include a multi-layer page object, and each page object of the multi-layer page object is wholly or partially located in the operation region.
Further, the above object operation may be performed by an input-output device such as a mouse, a touch panel, a touch screen, a keyboard, or the like.
The operation area may be a page range having a certain area, or may be a point in a page. In the case where the operation region is a page range having a certain area, the operation region may be determined as follows: and determining a rectangular area (for example, a minimum rectangular area) containing an operation trajectory of the object operation in the target page as an operation area of the object operation.
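The bounding-rectangle rule above can be sketched in a few lines of JavaScript; the function name `boundingRect` and the point format are illustrative assumptions, not taken from this application:

```javascript
// Minimal sketch: compute the smallest rectangle containing every
// point of an operation trajectory, and use it as the operation
// region of the object operation described above.
function boundingRect(trackPoints) {
  const xs = trackPoints.map(p => p.x);
  const ys = trackPoints.map(p => p.y);
  const left = Math.min(...xs);
  const top = Math.min(...ys);
  return {
    left,
    top,
    width: Math.max(...xs) - left,
    height: Math.max(...ys) - top,
  };
}
```

A trajectory that is a single point yields a zero-area rectangle, which matches the "a point in a page" case above.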
Step 103, based on the operation information of the object operation, determining an operation object of the object operation from the multi-layer page objects located in the operation area of the object operation.
In this embodiment, the operation information may be any information of the object operation. As an example, the operation information may include, but is not limited to, at least one of the following: operation position, operation duration, operation track, operation times, operation type and the like.
The operation object may be the page object, determined from the multi-layer page object located in the operation region of the object operation, on which the object operation is to be performed.
The operation types may include: a select operation, a move operation, a call-out menu operation, an edit operation (including a delete operation, a copy operation, a cut operation, etc.).
Here, the above-described type of operation may be realized by one or a combination of a single-click operation, a slide operation, a double-click operation, and a long-press operation. Specifically, the corresponding type of operation may be determined by judging one or a combination of a single-click operation, a slide operation, a double-click operation, and a long-press operation. In addition, different systems or different software may have different settings.
The multi-layer page object may be a plurality of (at least two) page objects located in the respective layers (at least two layers) of the operation region of the object operation. That is, the multi-layer page object includes at least two layers of page objects; for example, at least two page objects may be selected from the page objects of the respective layers to obtain the multi-layer page object. Each page object of the multi-layer page object is located in a different layer of the operation region.
As an example, in a case where it is detected that an object operation (e.g., a one-click operation) is performed for the first time, the operation information of the object operation may indicate "the one-click operation is performed for the first time". In this case, a top page object in the multi-level page objects located in the operation region operated by the object may be determined as an operation object operated by the object. In the case of detecting that the object operation is executed for the second time (for example, the time interval between two executions is less than or equal to the preset time length, and/or no other object operation is executed in the middle of two executions), the operation information of the object operation may indicate "single click operation is executed for the second time". In this case, a page object located at a lower level than the above-mentioned top-level page object may be determined as an operation object operated by the object. And so on.
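A minimal JavaScript sketch of this layer-cycling behaviour follows. The wrap-around when the click count exceeds the number of layers, and all names, are assumptions for illustration ("and so on" in the text does not specify what happens past the bottom layer):

```javascript
// Hypothetical sketch: the first qualifying click selects the
// top-level page object in the operation region; each further
// qualifying click selects the next layer down, wrapping around
// (an assumption) after the bottom layer.
function pickOperationObject(layeredObjects, clickCount) {
  // layeredObjects is ordered top layer first.
  const index = (clickCount - 1) % layeredObjects.length;
  return layeredObjects[index];
}
```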
In the case where two single-click operations are detected, if the time interval between them is smaller than a preset duration threshold, they may be regarded as one double-click operation; if the time interval between them is greater than or equal to the preset duration threshold, they may be regarded as two single-click operations.
Specifically, if the time interval between the moment the mouse button is pressed (button down) and the moment it is released (button up) is smaller than a first preset duration threshold, the operation may be regarded as a single-click operation. If such an operation is detected, the same operation is detected again with no other operation in between, and the time interval between the two operations is smaller than or equal to a second preset duration threshold, the two operations may be regarded as a double-click operation. If the time interval between the press moment and the release moment is greater than or equal to the first preset duration threshold, the operation may be regarded as a heavy-press or long-press operation. If the time interval between the press moment and the release moment is greater than or equal to the first preset duration threshold, or the cursor position at press differs from the cursor position at release (i.e., a press operation, a move operation, and a release operation are performed in sequence), the operation may be regarded as a drag operation.
Here, pressing and releasing the mouse button form one group of operations, and within the group no other operation is detected between the press and the release. The durations represented by the first preset duration threshold and the second preset duration threshold may be equal or different.
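The timing rules above can be sketched as follows, assuming millisecond timestamps and illustrative values for the first and second preset duration thresholds (the concrete numbers and function names are assumptions):

```javascript
// T1 and T2 stand for the first and second preset duration
// thresholds; the values are placeholders, not from this application.
const T1 = 300;
const T2 = 400;

// A press/release pair shorter than T1 is a single click,
// otherwise a long press.
function classifyPressRelease(pressTime, releaseTime) {
  return releaseTime - pressTime < T1 ? 'click' : 'long-press';
}

// Two consecutive single clicks merge into one double-click when
// the gap between them does not exceed T2.
function mergeClicks(firstClickTime, secondClickTime) {
  return secondClickTime - firstClickTime <= T2
    ? 'double-click'
    : 'two single clicks';
}
```

The same logic applies unchanged to the touch pad case described below; only the input device differs.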
Further, the operation may be performed by a touch panel or the like, in addition to the mouse operation.
Specifically, if the time interval between the moment the touch pad is pressed and the moment it is released is smaller than the first preset duration threshold, the operation may be regarded as a single-click operation. If such an operation is detected, the same operation is detected again with no other operation in between, and the time interval between the two operations is smaller than or equal to the second preset duration threshold, the two operations may be regarded as a double-click operation. If the time interval between the press moment and the release moment is greater than or equal to the first preset duration threshold, the operation may be regarded as a heavy-press or long-press operation. If the time interval between the press moment and the release moment is greater than or equal to the first preset duration threshold, or the cursor position at press differs from the cursor position at release (i.e., a press operation, a move operation, and a release operation are performed in sequence), the operation may be regarded as a drag operation.
Here, pressing and releasing the touch pad form one group of operations, and within the group no other operation is detected between the press and the release. The durations represented by the first preset duration threshold and the second preset duration threshold may be equal or different.
Further, if a three-finger window-drag operation performed on the touch pad is detected, the corresponding window may be moved. For example, after the cursor is moved to the title bar of the window, sliding three fingers on the touch pad moves the window along with the fingers. If a three-finger drag operation for selecting text is detected on the touch pad, the corresponding text may be selected. For example, the cursor is first moved to the text, and dragging three fingers on the touch pad then selects the corresponding text.
Here, whether the object operation is an executable operation of the top-level page object in the multi-layer page object may be determined according to the operation information. If so, the operation object of the object operation is determined to be the top-level page object; if not, whether the object operation is an executable operation of the page object in the layer below the top-level page object is further determined according to the operation information. The executable operations may be determined according to preset rules; for example, if the top-level page object can only accept certain operations, other object operations are passed through to the next layer, or are blocked.
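This top-down "pass through or block" check can be sketched in JavaScript; the `executableOps` field and all names are illustrative assumptions:

```javascript
// Walk the layers from top to bottom and return the first page
// object whose executable-operation list contains the operation
// type; return null when every layer blocks the operation.
function resolveOperationObject(layeredObjects, operationType) {
  for (const obj of layeredObjects) {
    if (obj.executableOps.includes(operationType)) {
      return obj;
    }
  }
  return null; // the operation is blocked by every layer
}
```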
And 104, executing the object operation on the operation object.
As an example, in the case where the target page is a document page and the object operation is a selection operation, it may be detected that the cursor of the interaction device slides from a document position 1 to a document position 2 in order to change an object (e.g., text) in the area determined by document position 1 and document position 2 to a selected state. In this case, the operation information of the object operation may indicate that "the cursor of the interaction device slides from document position 1 to document position 2". Even if the cursor slides across the target area corresponding to the target page object (such as the confidential identifier) during the sliding, the cover layer (including the target page object) does not respond to the selection operation, because no move operation is performed on it. That is, an object in the document (a lower-layer object of the cover layer) may be determined as the operation object, and the selection operation is then performed on that object.
As yet another example, in the case where the target page is a document page, if a move operation is detected, the move operation may be performed on the cover layer (including the target page object). That is, the mask layer (including the target page object) may be determined as the operation object, and then the move operation may be performed on the mask layer.
The method for operating an object in a page provided by the embodiment of the application can present a target page, detect an object operation performed through the target page, determine an operation object of the object operation from the multi-layer page object located in the operation region of the object operation based on the operation information of the object operation, and then perform the object operation on the operation object. In this way, when the same region of a page includes a multi-layer page object, the operation object can be determined from the multi-layer page object through the operation information. This improves the accuracy and efficiency of operating page objects in the case where the same region includes multiple layers of page objects, and improves the convenience of operating page objects.
Fig. 2 is a schematic flowchart of another method for operating an object in a page according to an embodiment of the present disclosure. As shown in fig. 2, the method specifically includes:
step 201, presenting a target page.
In this embodiment, step 201 is substantially the same as step 101 in the embodiment corresponding to fig. 1, and is not described here again.
Step 202, detecting an object operation executed by the target page.
In this embodiment, step 202 is substantially the same as step 102 in the embodiment corresponding to fig. 1, and is not described herein again.
Step 203, generating discrimination information indicating whether the object operation is an object moving operation based on the operation information of the object operation.
In this embodiment, the object moving operation may be an operation for moving a page object in the target page. As an example, the object movement operation may be a mouse movement operation. As yet another example, the object movement operation may also be performed by inputting new location information of the page object.
Further, the above-described object moving operation may be performed by an input-output device such as a mouse, a touch panel, a touch screen, a keyboard, or the like.
For example, in a case where the operation information indicates that "the object operation is a mouse movement operation", discrimination information indicating that the object operation is an object moving operation may be generated. In a case where the operation information indicates that "the position indicated by the newly input position information of the page object differs from the current position of the page object", discrimination information indicating that the object operation is an object moving operation may also be generated. In cases other than those described above, discrimination information indicating that the object operation is not an object moving operation may be generated.
In practice, the discrimination information "true" may be used to indicate that the object operation is an object moving operation, and the discrimination information "false" to indicate that it is not. Alternatively, the discrimination information "1" may be used to indicate that the object operation is an object moving operation, and the discrimination information "0" to indicate that it is not. Alternatively, a piece of preset information may be associated with the moving operation, and when an interaction event including the preset information is received, it may be determined that an object moving operation is received.
Here, the above method of generating the discrimination information is merely exemplary, and practically, the discrimination information may be generated by other methods, which are not limited herein.
Step 204, based on the discrimination information, determining an operation object operated by the object from the multi-layer page object located in the operation area operated by the object.
In this embodiment, in a case that the discrimination information indicates that the object operation is the object moving operation, a page object capable of performing the object moving operation, among the multi-layer page objects located in the operation area of the object operation, may be determined as the operation object of the object operation.
Step 205, executing the object operation on the operation object.
In this embodiment, step 205 is substantially the same as step 104 in the corresponding embodiment of fig. 1, and is not described herein again.
In some optional implementations of this embodiment, before performing step 201, the following steps may also be performed:
page data for presenting a target page is acquired.
Wherein the page data includes target object data. During the presentation of the target page, a target page object (e.g., a target document object described later) represented by the target object data is presented on a top level of the target page. And when the page object at the top layer is not the transparent page object, the page object at the top layer presented in the target page can shield the lower layer object of the page object at the top layer.
On this basis, the above step 201 can be performed in the following manner:
and presenting the target page based on the page data.
For example, the page data may be displayed as a target page after being parsed, laid out, rendered, and displayed.
In one embodiment, the object moving operation is an object operation that the target page object can perform, while object operations such as a delete operation, a modify operation, and a copy operation are object operations that the target page object cannot perform.
Then, on this basis, the above step 204 can be performed in the following manner:
first, when the discrimination information indicates that the object operation is the object moving operation, it is determined that an operation object of the object operation is the target page object.
And secondly, determining that the operation object of the object operation is a lower-layer object of the target page object when the judgment information does not indicate that the object operation is the object moving operation.
It can be understood that, in the above optional implementation manner, whether the operation object of the object operation is the target page object (the top-level page object) or a lower-layer object of it is determined according to whether the discrimination information indicates that the object operation is the object moving operation. In this way, the top-level page object can only be moved and cannot be deleted, modified, or copied, which ensures that the top-level page object in the target page is neither deleted nor tampered with, while moving the top-level page object can reduce its blocking of lower-layer page objects.
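A minimal sketch of this routing rule, combining the discrimination step of step 203 with the selection of the operation object; the field names and event shapes are illustrative assumptions:

```javascript
// Generate a boolean discrimination flag from the operation info:
// a mouse movement, or newly input position information that differs
// from the current position, marks an object moving operation.
function discriminate(operationInfo) {
  return operationInfo.type === 'mouse-move'
      || (operationInfo.newPosition !== undefined
          && operationInfo.newPosition !== operationInfo.currentPosition);
}

// Route the operation: a move goes to the top-level target page
// object, anything else goes to a lower-layer object.
function operationObjectFor(operationInfo, targetPageObject, lowerObject) {
  return discriminate(operationInfo) ? targetPageObject : lowerObject;
}
```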
In some application scenarios in the above optional implementation manners, in a case that an operation object operated by the object is the target page object, the following steps may be adopted to perform the step 205:
first, a moving direction and a moving distance of the object moving operation are determined.
Wherein, the moving direction may include an x-axis direction and a y-axis direction in a preset coordinate system. The movement distance may represent a movement distance in the x-axis direction and a movement distance in the y-axis direction in the preset coordinate system.
And then, moving the presentation position of the target page object in the target page to the moving direction by the moving distance so as to update the presentation position.
In some cases, in a case that an operation object of the object operation is the target page object, the object moving operation may be performed only on the target page object (that is, the top-level page object), and the object moving operation may not be performed on the lower-level object.
It can be appreciated that the movement of the top page object is achieved in the manner described in the application scenario above.
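The position update above amounts to adding the movement distances along the two axes of the preset coordinate system; a trivial sketch, with an assumed `{x, y}` position format:

```javascript
// Shift the presented position of the target page object by the
// detected movement distances along the x and y axes.
function movePresentationPosition(position, moveX, moveY) {
  return { x: position.x + moveX, y: position.y + moveY };
}
```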
In some application scenarios in the above optional implementation manner, in a case that an operation object operated by the object is the lower layer object, the following steps may be adopted to perform the step 205:
and transmitting the object operation detected by the target page object to the lower layer object so as to enable the lower layer object to execute the object operation.
In practice, the object operation can be detected by monitoring touch events of a touch screen, mouse events and keyboard events. Since the top page object is displayed on the top of the target page, the object operation performed in the page area (i.e., the target area) where the top page object is located can be detected by the target page object. After the target page object detects the object operation, the object operation detected by the target page object may be transferred to a lower layer object through a programming language such as JavaScript.
In some cases, in a case that an operation object of the object operation is the lower object, the object operation may be performed only on the lower object of the top page object (i.e., the target page object), and not on the top page object. Alternatively, the above object operation may be performed on a lower object and a top page object of the top page object (i.e., the target page object).
It can be understood that in the application scenario, by transferring the object operation detected by the target page object to the lower layer object, the operation for the lower layer object is realized, so that the operation efficiency for the lower layer object in a scenario in which the same region includes multiple layers of page objects (for example, multiple layers of document objects described later) is improved.
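The pass-through behaviour can be sketched without a browser as plain objects. In a real page the cover layer would typically re-dispatch the DOM event to the element underneath, as the JavaScript mention above suggests; the hypothetical handlers below reduce the idea to self-contained code:

```javascript
// The cover layer handles only the operations it can perform
// (here, moves) and forwards everything else to the lower object.
function makeCoverLayer(lowerObject) {
  return {
    handle(operation) {
      if (operation.type === 'move') {
        return 'cover handled move';
      }
      // Not an operation the cover layer performs: pass it down.
      return lowerObject.handle(operation);
    },
  };
}
```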
In some application scenarios in the above-described alternative implementation, the object operation includes a mouse key pressing operation and a mouse key releasing operation performed by a target mouse, where the mouse key pressing operation is the pressing operation immediately preceding the mouse key releasing operation.
On this basis, the above step 203 can be performed in the following manner:
step one, determining a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation based on the operation information of the object operation.
And step two, determining whether the pressing position and the releasing position indicate different positions.
And step three, under the condition that the pressing position and the releasing position indicate different positions, determining whether the target movement operation executed by the target mouse is detected.
And the execution time of the target moving operation is positioned between the execution time of the mouse key pressing operation and the execution time of the mouse key releasing operation.
And step four, generating discrimination information indicating that the object operation is an object moving operation when the target moving operation is detected.
It is understood that, in the application scenario described above, it may be determined that the object movement operation is detected in a case where it is detected that the mouse has sequentially performed the pressing operation, the moving operation, and the releasing operation, and the release position and the pressing position are different, thereby improving the accuracy of determining whether the mouse has performed the object movement operation.
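Steps one to four above can be sketched as a single predicate; the event shapes (positions with timestamps, and a list of move-event times) are illustrative assumptions:

```javascript
// The operation counts as an object moving operation when the press
// and release positions differ AND a target movement operation was
// observed between the press time and the release time.
function isObjectMoveOperation(press, release, moveEventTimes) {
  const positionsDiffer = press.x !== release.x || press.y !== release.y;
  const movedBetween = moveEventTimes.some(
    t => t > press.time && t < release.time);
  return positionsDiffer && movedBetween;
}
```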
In some cases in the above application scenario, the step one may be performed as follows:
first, based on the operation information of the object operation, it is determined whether there is an intersection region between the target region and the operation region.
And the target area is the area where the target page object is located in the target page.
The intersection region may be an overlapping region between the target region and the operation region, that is, a page region (a page range or a point in a page with a certain area) indicated by an intersection between the target region and the operation region.
Here, whether there is an intersection region between the target region and the operation region may be determined by comparing the size and position of the operation region operated by the object in the operation information and the size and position of the target region.
Then, when the intersection region exists between the target region and the operation region, a pressing position corresponding to the mouse key pressing operation and a release position corresponding to the mouse key release operation are determined based on the operation information of the object operation.
Here, the pressed position corresponding to the mouse key pressing operation and the released position corresponding to the mouse key releasing operation may be determined based on the operation information of the object operation only when the intersection region exists between the target region and the operation region. If the target region does not include the operation region (for example, the target region does not contain the operation region at all, or contains only part of it), the pressed position and the released position need not be determined from the operation information.
It is to be understood that, when the target region does not include the operation region, the operation object of the object operation is not the target page object. Therefore, determining the pressed position and the released position from the operation information, and judging whether the object operation is an object moving operation, only when the intersection region exists can increase the speed of determining the operation object.
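The intersection test between the target region and the operation region can be sketched for axis-aligned rectangles; the `{left, top, width, height}` format is an assumption:

```javascript
// Two axis-aligned rectangles intersect when each starts before the
// other ends along both the x and y axes.
function hasIntersection(a, b) {
  return a.left < b.left + b.width
      && b.left < a.left + a.width
      && a.top < b.top + b.height
      && b.top < a.top + a.height;
}
```

A point-sized operation region can be modelled as a rectangle of zero width and height with the comparisons relaxed to `<=`, matching the "a point in a page" case described earlier.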
In some application scenarios in the above optional implementation manners, the target page object indicates that the target page is a security-involved page.
When the target page is used for presenting the document, the target page object may be a confidential identifier in the document, and the confidential identifier may indicate that the document is a confidential document.
On this basis, the following steps (including the first to third steps) may also be performed:
in a first step, a printing operation for the target page is detected.
Wherein the printing operation is used to print the target page. The target page may be printed on paper by performing the printing operation, or printed to a file in PDF (Portable Document Format).
And a second step of determining whether the target object data is print-related data of the target page in a case where the printing operation is detected.
Wherein the page object indicated by the printing association data is used for being presented in the printing result of the target page. Thus, when the target object data is the print-related data of the target page, the target page object may be included in the print result of the target page; and when the target object data is not the printing related data of the target page, the printing result of the target page does not include the target page object.
In practice, target object data used to draw the target page object in a hierarchical (masking layer) manner does not usually belong to the print-related data of the target page, whereas target object data of a target page object generated by inserting text or a picture into the document generally does belong to the print-related data of the target page.
And thirdly, printing the target page based on the target object data to obtain a printing result of the target page with the target page object under the condition that the target object data is not the printing related data.
Here, the target page may be printed based on the target object data with reference to the following description, which is not repeated for the time being.
It can be understood that, in the application scenario described above, when the target page object indicates that the target page is a confidential page, a print result bearing the target page object can be obtained regardless of whether the target object data is print-related data. Therefore, whether the printed content is confidential can be judged from the print result.
In some cases in the application scenarios described above, the target page may be printed based on the target object data in the following manner:
first, based on the target object data, new printing related data of the target page is generated to update the printing related data of the target page.
As an example, the target object data may be converted into print-related data of the target page. For example, in the case where the target object data is used to draw the target page object in a hierarchical manner, the target object data may be converted into new print-related data for generating text or pictures in a document. Wherein text or pictures in the document generated by the new print-related data visually coincide with a target page object rendered by the target object data.
And then, printing the target page based on the updated printing related data.
It is understood that, in the above case, when the target object data is not the print related data, the print related data of the target page may be updated based on the target object data, and the print result with the target page object may be obtained by printing.
In some cases in the above application scenarios, the target page may also be printed based on the target object data in the following manner:
first, rendering content of the target page including the target page object is rendered.
As an example, the target page, including the target page object, may be rendered as a file in PDF format.
And then, printing the target page according to the presentation content.
It can be understood that in the above case, the target page is printed according to the presentation content of the target page including the target page object, so that the printing result is more similar to the presentation format of the target page.
It should be noted that, in addition to the above-mentioned contents, the present embodiment may further include corresponding technical features described in the embodiment corresponding to fig. 1, so as to achieve the technical effect of the operation method of the object in the page shown in fig. 1.
According to the operation method of the object in the page, the operation object of the object operation is determined by judging whether the object operation is the object moving operation, and the accuracy of determining the operation object can be improved.
Fig. 3A to fig. 3C are schematic flow charts of another operation method for an object in a page according to an embodiment of the present application. The method can be applied to one or more electronic devices such as smart phones, notebook computers, desktop computers, portable computers and servers. The execution subject of the method may be hardware or software. When the execution subject is hardware, it may be one or more of the electronic devices; for example, a single electronic device may perform the method, or multiple electronic devices may cooperate with each other to perform it. When the execution subject is software, the method may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. No specific limitation is made here.
The following description illustrates embodiments of the present application. The embodiments may have the features described below, but the description should not be construed as limiting the scope of the embodiments of the present application.
Specifically, as shown in fig. 3A, the secret level identification information may be generated and stored in an encrypted manner as follows:
The first step is as follows: start. A security-marking operation is performed on a confidential document (a kind of confidential page). The user can mark the document with a security level and input the security level information of the document, thereby implementing the security-marking operation on the confidential document. Security level information of the document (a kind of target object data, representing the security level of the document) and related information such as the person who determined the security level are acquired.
The second step is as follows: the security-level-related information input by the user is processed to generate a plaintext string in json (JavaScript Object Notation) format. For a confidential document, the security-level-related information includes, but is not limited to, the security level, the security period, the drafter of the document, the person who determined the security level of the document, and the like. The security-level-related information may be stored in encrypted form within the document data. When the document is subsequently opened for display, the security-level-related information is read, and the security level identification information is drawn in a masking layer manner according to its content.
Optionally, after the document is opened and the security level identification information stored in the document data is read, an object capable of displaying text or picture information, such as a text box object or a picture object, may be inserted, and the security level identification information may be displayed on that object as text or a picture, as an alternative to drawing the security level identification information in a masking layer manner.
The third step: and encrypting the plain text character string in the json format generated in the second step by using a national encryption algorithm, such as an SM4 (block cipher standard) encryption algorithm, to generate a cipher information ciphertext.
The fourth step: and storing the cipher-grade identification information cipher text generated in the third step into document data (namely page data).
The fifth step: and generating and storing the security level identification information.
Referring next to fig. 3B, as shown in fig. 3B, the masking layer drawing flow of the security level identifier may include:
The first step is as follows: start.
The second step is that: the user opens a confidential document (a kind of target page).
The third step: a secret identification information ciphertext (a type of target object data) is read.
The fourth step: and decrypting the cipher grade identification information ciphertext to obtain the plaintext character string in the json format.
The fifth step: and analyzing the plaintext character string in the json format to acquire security level related information, such as coordinate information (pos _ x, pos _ y) of a display position (namely, the presentation position), security level information and the like. Here, the security level identification information may be displayed at a fixed position on the top page of the document, and the position information (pos _ x, pos _ y) is also position information for the top page.
The sixth step: a transparent masking layer is drawn at the position (pos_x, pos_y), and the security level identification information (a kind of target page object) is drawn on it.
In this way, the security level identification information can always be displayed on the first page of the document regardless of changes to the document's pages, for example switching to a multi-window comparison view. The masking layer can also be zoomed along with the page: during page zooming, the zoom ratios of the page's width and height are calculated in real time, and the width and height of the masking layer are multiplied by the corresponding ratios when it is drawn, achieving equal-scale zooming.
And a sixth step: and (6) ending.
As shown in fig. 3C, the masking layer drag flow of the security level identification information includes:
The first step is as follows: start.
The second step is that: the user operates the document by using the mouse, and moves the mouse to a certain position to perform operations such as clicking or dragging (i.e., the above object operations).
The third step: the click test module in the system can detect the current mouse position (including the pressing position and the releasing position) in real time and store the position information (namely the presenting position) of the password identification mask layer.
The fourth step: and judging whether the mouse is in the Mongolian range of the secret level identification, namely determining whether an intersection area exists between the target area and the operation area.
The fifth step: and if the mouse is in the range of the secret level identification cover layer, further judging whether the current mouse behavior is a dragging behavior (namely the object moving operation). If it is a normal click or double click behavior, the current drag flow does nothing.
Here, the range of the masking layer may be a rectangular area, determined by factors such as the text content and font size of the security level information, that is large enough to draw the text at the corresponding font size completely.
The process of recognizing a drag behavior may include: by monitoring mouse events in real time, judging whether the mouse position of the latest left-button press (LBUTTONDOWN) event is the same as that of the latest left-button release (LBUTTONUP) event. If the positions differ, it is judged whether a mouse movement (MOUSEMOVE) event occurred between the two events; if so, a dragging behavior can be considered to have occurred, and the start point and end point of the drag are the LBUTTONDOWN coordinates and the LBUTTONUP coordinates, respectively.
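The drag-recognition rule just described can be sketched over a recorded event stream. This is a hedged illustration: the event tuples and the function name `detect_drag` are assumptions, and a real implementation would hook live window messages rather than scan a list.

```python
def detect_drag(events):
    """Decide whether the most recent LBUTTONDOWN/LBUTTONUP pair is a drag:
    the two positions differ and a MOUSEMOVE occurred between them.
    Returns the (start, end) drag coordinates, or None for a plain click."""
    down = up = None
    moved = False
    for etype, pos in events:  # events ordered by time
        if etype == "LBUTTONDOWN":
            down, up, moved = pos, None, False  # restart tracking
        elif etype == "MOUSEMOVE" and down is not None:
            moved = True
        elif etype == "LBUTTONUP" and down is not None:
            up = pos
    if down and up and down != up and moved:
        return down, up
    return None
```

A press and release at the same point (an ordinary click) yields `None`, so the drag flow does nothing, matching the fifth step above.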
In addition, the masking layer can be placed on top as a UI (User Interface) layer, and mouse delivery for each UI layer is controlled by maintaining a mouse event adapter per layer. Which layer obtains control of a mouse event depends on its position in the hierarchy: if the masking layer is placed at the topmost layer and the mouse falls within its range, the masking layer obtains control of the mouse event, and after handling it, it can decide whether to pass the mouse information on to the next layer (namely, the lower-layer object). Here, after a drag mouse event is handled, the masking layer stops the mouse information from being passed downward, so a lower page object cannot be selected through the masking layer. In other words, if the mouse falls on the masking layer and is dragged, then even if there are other objects beneath the masking layer, it is the masking layer that is dragged, and the lower-layer objects are unaffected.
And a sixth step: and if the judgment is a dragging behavior, the dragging offsets dx and dy are recorded in real time.
The seventh step: when a drag-end event occurs, the final masking layer coordinates (pos_x0+dx, pos_y0+dy) are calculated from the current security level identification masking layer coordinates (pos_x0, pos_y0) and the drag offsets dx and dy.
The eighth step: the position information in the security level identification information is updated to (pos_x0+dx, pos_y0+dy).
The ninth step: redrawing the transparent masking layer and the secret level identification information at the new coordinate position (pos _ x0+ dx, pos _ y0+ dy);
The tenth step: end.
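Steps six through eight above reduce to a small coordinate calculation. The sketch below is illustrative; `finish_drag` is a hypothetical name, and positions are (x, y) tuples as in the flow.

```python
def finish_drag(mask_pos, drag_start, drag_end):
    """Compute the drag offsets (dx, dy) from the drag's start and end
    points, and return the new masking-layer coordinates
    (pos_x0 + dx, pos_y0 + dy) at which the layer is redrawn."""
    dx = drag_end[0] - drag_start[0]
    dy = drag_end[1] - drag_start[1]
    return (mask_pos[0] + dx, mask_pos[1] + dy)

# Mask at (40, 20); the user drags from (100, 100) to (130, 80),
# i.e. dx = 30, dy = -20.
new_pos = finish_drag((40, 20), (100, 100), (130, 80))
```

The new position would then be written back into the security level identification information and persisted to the document data, per the eighth step.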
In addition, for the fifth step in the masking layer drag flow, different mouse event behaviors can have different processing results. Because the security level identification masking layer is drawn on the top layer of the document view, when the mouse position falls on the masking layer during ordinary actions such as click, double-click (dblclick) or hover, no operation is required for the masking layer's drag function; the mouse event and mouse information at that moment can therefore be passed to the other UI elements beneath the masking layer. If the current mouse behavior is a dragging behavior, the security level identification masking layer intercepts it, so that the drag acts only on the masking layer and the underlying UI elements are unaffected.
Optionally, dragging of the security level identification masking layer may also be implemented as follows: target position information of the masking layer is set through an interface; the user inputs position information and clicks to confirm; the masking layer and the security level identification information are then redrawn at the position indicated by the new position information, while the position information within the security level identification information is updated and stored into the document data.
Further, when printing a confidential document whose security level identification information is drawn in the masking layer manner described above, the identifier itself does not belong to the content of the document (since it is drawn as a masking layer), so if the document is printed directly, the security level identification information will not be printed. This problem can be solved in the following two ways:
in the first mode, the security level information corresponding to the cover layer is converted into the document object described above and inserted into the document to obtain the print related data. Then, the document inserted with the object corresponding to the security information is transmitted to the printer, so that the printer can print the document with the security identification information.
In the second way, within the printer's printing flow, when drawing to the printer device, the drawing of the masking layer and the security level information is added to obtain the presentation content of the target page, so that the print result is consistent with the on-screen presentation.
It should be noted that, in addition to the above-mentioned contents, the present embodiment may further include the technical features described in the above embodiments, so as to achieve the technical effect of the operation method of the object in the page shown above.
The operation method for objects in a page described above solves problems such as drawing and displaying the security level identification masking layer and the security level identification information of a confidential document, and making the masking layer draggable. The security level identification information is encrypted and stored in the document data and displayed by drawing, so it enjoys high security: the identification information of the confidential document is protected and the reliability of its security level identification is enhanced, which is of great significance in the field of security-involved systems. Specifically, the security level identification data is stored in the document entity in encrypted form; after decryption, the security level information is read, a transparent masking layer is generated at the corresponding position on the top layer of the document according to the position information in the data, and the security level information is then drawn on the masking layer, achieving the drawing and display effect. Because the data is stored encrypted in the document data and the information is displayed by drawing, the security level identification information usually runs no risk of being copied, altered or deleted by the user.
Meanwhile, the method also improves the usability of operations on the security level identification object of a confidential document. The existing approach of inserting an object or a text box to display the security level identifier suffers from overlapping object positions: for example, when the security level identification object is already displayed and a new object is inserted at the same position, the new object covers the identification object and blocks it from view. The present method draws the security level identification information on the top layer of the document view, which reduces such occlusion between objects. During a drag operation with the mouse, the mouse position and mouse behaviors such as clicking, double-clicking and dragging can be detected in real time, so as to control the dragging of the security level identification masking layer area.
Fig. 4 is a flowchart illustrating an operation method of an object in a document according to an embodiment of the present application. As shown in fig. 4, the method specifically includes:
301, presenting a target document; wherein the target region of the target document presents a target document object.
In this embodiment, the target document may be any document on which the target document object is displayed. By way of example, the target document may include, but is not limited to, documents in the following formats: doc, docx, ppt, pptx, rtf (Rich Text Format), html (HyperText Markup Language), and the like. In addition, the target document may be an offline document stored locally or an online document stored in the cloud.
The target area may be an area in the target document where the target document object is present. As an example, the target area may be a smallest rectangular area in the target document including the target document object.
The target document object can be any document object in the target document. Specifically, the target document object may be a document object at any layer (e.g., top layer) in the target document. By way of example, the target document object may be a text box, a picture, a confidential identification object, and the like.
302, detecting an object operation performed by the target document.
In this embodiment, the object operation may be an operation performed by the target document.
And 303, determining whether an intersection region exists between the target region and the operation region operated by the object.
In this embodiment, the operation region may be a range having a certain area, or may be a single point. In the case where the operation region is a range having a certain area, the operation region may be determined in the following manner: a rectangular area (for example, a minimum rectangular area) including an operation trajectory of the target operation in the target document is determined as an operation area of the target operation.
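The minimum-rectangle determination just described can be sketched directly. This is an illustrative sketch; the function name and the (left, top, right, bottom) convention are assumptions.

```python
def operation_region(trajectory):
    """Smallest axis-aligned rectangle containing every point of the
    operation trajectory, as (left, top, right, bottom). A single-point
    trajectory yields a degenerate (zero-area) rectangle."""
    xs = [p[0] for p in trajectory]
    ys = [p[1] for p in trajectory]
    return (min(xs), min(ys), max(xs), max(ys))
```

For a single click the trajectory has one point and the operation region collapses to that point, matching the "single point" case above.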
304, in the case that the intersection region exists between the target region and the operation region, determining whether to allow the object operation to be performed on the target document object based on the operation information of the object operation.
In this embodiment, the operation information may be any information operated by the object. As an example, the operation information may include, but is not limited to, at least one of the following: operation position, operation duration, operation track, operation times, operation type and the like.
The operation types may include: selection operations, move operations, call-out menu operations, editing operations (including delete operations, copy operations, cut operations, etc.).
Here, the above-described type of operation can be realized by one or a combination of a single-click operation, a slide operation, a double-click operation, and a long-press operation. Specifically, the corresponding type of operation may be determined by judging one or a combination of a single-click operation, a slide operation, a double-click operation, and a long-press operation. In addition, different systems or different software may have different settings.
The intersection region may be an overlapping region between the target region and the operation region, that is, a page region (a page range or a point in a page with a certain area) indicated by an intersection between the target region and the operation region.
As an example, each document object in the target document (including the target document object) may correspond to one or more operation types. Thus, whether the object operation is allowed to be performed on the target document object can be determined by judging whether the operation type indicated by the operation information of the object operation is an operation type corresponding to the target document object.
For example, when a selection operation is performed on an object in a document, it is detected that the cursor of an interaction device slides from document position 1 to document position 2, changing the objects (for example, text) in the area determined by the two positions into a selected state. In this case, even if the slide passes through the target area corresponding to the target page object (the security level identifier), the masking layer (including the target page object) does not respond to the selection operation, because no movement operation was performed on it.
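Deciding whether the target area and the operation area intersect reduces to a rectangle-overlap test, which can be sketched as follows. The function name and the (left, top, right, bottom) convention are assumptions for illustration.

```python
def intersect(a, b):
    """Overlap of two (left, top, right, bottom) rectangles,
    or None when they are disjoint."""
    left, top = max(a[0], b[0]), max(a[1], b[1])
    right, bottom = min(a[2], b[2]), min(a[3], b[3])
    if left <= right and top <= bottom:
        return (left, top, right, bottom)
    return None
```

When the operation region is a single point, the same test works with a degenerate rectangle whose left equals its right and top equals its bottom.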
In some optional implementation manners of this embodiment, the target document object indicates that the target document is a confidential document.
On this basis, the step 304 can be executed in the following manner:
the method comprises the first step of determining whether the object operation is an object moving operation or not based on operation information of the object operation.
The object moving operation may be an operation for moving a document object (e.g., a target document object) in the target document. As an example, the object movement operation may be a mouse movement operation. As yet another example, the object movement operation may also be performed by inputting new position information of the document object.
For example, in a case where the operation information indicates that the "object operation is a mouse movement operation", it may be determined that the object operation is an object movement operation. In the case where the operation information indicates "the position indicated by the new position information of the input document object is different from the current position of the document object", it may be determined that the object operation is an object moving operation. In addition to the other cases described above, it may be determined that the object operation is not an object moving operation.
Here, the manner of determining whether the object operation is the object moving operation is merely exemplary, and in practice, other manners may also be adopted to determine whether the object operation is the object moving operation, which is not limited herein.
And a second step of allowing the object operation to be performed with respect to the target document object in a case where the object operation is the object moving operation.
And thirdly, if the object operation is not the object moving operation, prohibiting the object operation from being executed aiming at the target document object.
It is understood that, in the above alternative implementation manner, by determining whether the object operation is an object moving operation to determine whether to allow the object operation to be performed on the target document object, the security of the document object operation may be further improved.
In some application scenarios in the above-mentioned optional implementation manners, in a case where the object operation is allowed to be performed on the target document object, the following steps may also be performed:
first, a moving direction and a moving distance of the object moving operation are determined.
The moving direction may include an x-axis direction and a y-axis direction in a preset coordinate system. The movement distance may represent a movement distance in the x-axis direction and a movement distance in the y-axis direction in the preset coordinate system.
And then, moving the presentation position of the target document object in the target document to the moving direction by the moving distance so as to update the presentation position.
It will be appreciated that movement of the target document object is achieved in the manner described in the application scenarios above.
In some application scenarios in the above-described alternative implementation, the target document object is presented on top of the target area.
On this basis, in the case where the object operation is prohibited from being performed with respect to the target document object, the following steps may also be performed:
and under the condition that the lower layer object of the target document object is included in the operation area, transmitting the object operation detected by the target document object to the lower layer object so as to enable the lower layer object to execute the object operation.
The lower layer object may be a document object located at a lower layer of the target document object.
In practice, the object operation may be detected by monitoring a touch event of a touch screen, a mouse event, and a keyboard event. Since the target document object is displayed on top of the target area, the object operations performed in the target area are all detectable by the target document object. After the target document object detects the object operation, the object operation detected by the target document object may be transferred to the underlying object through a programming language such as JavaScript.
It can be understood that in the application scenario, the object operation detected by the target document object is transferred to the lower layer object, so that the operation on the lower layer object in the scenario that the target area includes multiple layers of document objects is realized, and the operation efficiency on the lower layer object in the scenario that the same area includes multiple layers of document objects is improved.
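The top-down delivery of a detected operation to a lower-layer object can be sketched with a minimal layer stack. Everything here is illustrative: the `Layer` class, `dispatch` function, and string-valued operations are assumptions standing in for real UI event adapters.

```python
class Layer:
    def __init__(self, name, handles_drag):
        self.name = name
        self.handles_drag = handles_drag
        self.received = []  # operations this layer has seen

    def handle(self, op):
        """Return True to consume the operation, False to pass it down."""
        self.received.append(op)
        return op == "drag" and self.handles_drag

def dispatch(layers, op):
    """Deliver op from the topmost layer down, stopping at the first
    consumer. A non-drag operation passes through the masking layer to
    the object beneath it, as described above."""
    for layer in layers:  # ordered top to bottom
        if layer.handle(op):
            return layer.name
    return None

mask = Layer("mask", handles_drag=True)       # top: identification mask
textbox = Layer("textbox", handles_drag=True)  # lower-layer object
stack = [mask, textbox]
```

A drag is consumed by the mask and never reaches the text box; a click falls through the mask so the lower-layer object can handle it.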
In some application scenarios of the above optional implementations, the object operation includes a mouse key pressing operation and a mouse key releasing operation performed with a target mouse, where the mouse key pressing operation is the pressing operation immediately preceding the mouse key releasing operation.
On this basis, the first step can be executed in the following way:
first, a pressing position corresponding to the mouse key pressing operation and a release position corresponding to the mouse key release operation are determined based on the operation information of the object operation.
Thereafter, it is determined whether the pressed position and the released position indicate different positions.
Then, in a case where the pressed position and the released position indicate different positions, it is determined whether or not a target movement operation performed by the target mouse is detected.
And the execution time of the target moving operation is positioned between the execution time of the mouse key pressing operation and the execution time of the mouse key releasing operation.
Finally, in the case that the target movement operation is detected, it is determined that the object operation is an object movement operation.
It is understood that, in the application scenario described above, it may be determined that the object movement operation is detected in a case where it is detected that the mouse has sequentially performed the pressing operation, the moving operation, and the releasing operation, and the release position and the pressing position are different, thereby improving the accuracy of determining whether the mouse has performed the object movement operation.
In some optional implementations of this embodiment, the following steps may also be performed:
first, a printing operation for the target document is detected.
Here, the printing operation is used to print the target document. By performing the printing operation, the target document may be printed on paper, or printed to a file in PDF (Portable Document Format).
Thereafter, in a case where the printing operation is detected, it is determined whether the target object data used to present the target document object is print-related data of the target document.
Wherein the document object indicated by the print association data is used for presentation in a print result of the target document. Thus, in the case that the target object data is the print-related data of the target document, the target document object may be included in the print result of the target document; and when the target object data is not the printing related data of the target document, the target document object is not included in the printing result of the target document.
In practice, target object data used to draw the target document object in a hierarchical (masking layer) manner does not generally belong to the print-related data of the target document, whereas a target document object generated by inserting text or a picture into the document generally does belong to the print-related data of the target document.
Then, in the case that the target object data is not the printing related data, the target document is printed based on the target object data to obtain a printing result of the target document with the target document object.
Here, the target document may be printed based on the target object data with reference to the following description, which is not repeated here.
It is to be understood that, in the above alternative implementation manner, in the case that the target document object indicates that the target document is a confidential document, whether the target object data is print-related data or not, the print result with the target document object may be obtained by printing. Therefore, whether the printed content is confidential can be judged according to the printing result.
In some application scenarios in the above-mentioned optional implementation manners, the target document may be printed based on the target object data in the following manner:
first, based on the target object data, new print-related data of the target document is generated to update the print-related data of the target document.
As an example, the target object data may be converted into print-related data of the target document. For example, in a case where the target object data is used to draw the target document object in a hierarchical manner, the target object data may be converted into new print-related data for generating text or a picture in the document. The text or picture generated from the new print-related data visually coincides with the target document object drawn from the target object data.
And then printing the target document based on the updated printing related data.
It can be understood that, in the above application scenario, when the target object data is not the print-related data, the print-related data of the target document may be updated based on the target object data, and then the print result with the target document object may be obtained by printing.
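As a hedged illustration of this first approach, the following Python sketch (all names hypothetical, not taken from the embodiment) converts an overlay-drawn object into an element of the document's print-related data, so that the subsequent print pass includes it:

```python
from dataclasses import dataclass, field

@dataclass
class OverlayObject:
    """Target object data drawn in a hierarchical (overlay) manner."""
    text: str
    x: int
    y: int

@dataclass
class Document:
    # print_elements stands in for the document's print-related data
    print_elements: list = field(default_factory=list)

def to_print_element(obj: OverlayObject) -> dict:
    # Convert the overlay object into an element the print pipeline
    # understands; visually it coincides with the on-screen overlay.
    return {"kind": "text", "content": obj.text, "pos": (obj.x, obj.y)}

def update_print_data(doc: Document, obj: OverlayObject) -> None:
    element = to_print_element(obj)
    if element not in doc.print_elements:  # avoid duplicate updates
        doc.print_elements.append(element)

doc = Document()
update_print_data(doc, OverlayObject("CONFIDENTIAL", 40, 60))
```

Printing then proceeds over `doc.print_elements`, so the mark appears in the output even though it was never part of the original print-related data.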
In some application scenarios in the above optional implementation manners, the target document may be printed based on the target object data in the following manner:
first, the presentation content of the target document including the target document object is rendered.
As an example, the target document, including the target document object, may be rendered as a PDF formatted file.
And then printing the target document according to the presentation content.
It can be understood that, in the above case, the target document is printed according to the presentation content of the target document including the target document object, so that the printing result can be made more similar to the presentation format of the target document.
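The second approach can be sketched as follows (hypothetical names; a minimal stand-in for a real rendering pipeline): the document body and the overlay object are flattened into one list of draw commands in screen z-order, and that flattened presentation content, rather than the raw print-related data, is what gets printed:

```python
def render_presentation(body_lines, overlay_objects):
    """Flatten the document body plus its overlay objects into draw
    commands, in the same z-order used on screen (body below,
    overlay on top)."""
    commands = [("text", line) for line in body_lines]
    commands += [("overlay", obj) for obj in overlay_objects]
    return commands

def print_from_presentation(commands):
    # Stand-in for handing the rendered content to a print driver or
    # serializing it as a PDF page.
    return "\n".join(f"{kind}: {payload}" for kind, payload in commands)

page = render_presentation(["Hello, world"], ["CONFIDENTIAL"])
output = print_from_presentation(page)
```

Because the printed output is derived from the on-screen presentation, it matches what the user sees, including the overlay object.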
In some optional implementations of this embodiment, after the step 304 is performed, the following steps may also be performed:
in a case where it is determined that the object operation is prohibited from being performed with respect to the target document object, an operation object of the object operation is determined from among the multi-layered document objects located in an operation area of the object operation based on operation information of the object operation.
In some application scenarios in the foregoing optional implementation manners, an operation object of the object operation may be determined from a multi-layer document object located in an operation area of the object operation based on operation information of the object operation in the following manner:
first, based on operation information of the object operation, discrimination information indicating whether or not the object operation is an object moving operation is generated.
And secondly, determining an operation object of the object operation from the multi-layer document objects located in the operation area of the object operation, based on the discrimination information.
In some cases in the above application scenarios, in a case where an operation object of the object operation is the lower layer object, the object operation may be performed on the operation object in the following manner:
and transmitting the object operation detected by the target document object to the lower-layer object so as to enable the lower-layer object to execute the object operation.
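The pass-through behavior can be illustrated with a small dispatcher (a sketch under assumed event and handler shapes, not the embodiment's actual interfaces): move operations are consumed by the top-layer object, while every other operation falls through to the lower-layer object:

```python
class Recorder:
    """Minimal stand-in for a document object that handles events."""
    def __init__(self):
        self.events = []

    def handle(self, event):
        self.events.append(event)

def dispatch(event, top_object, lower_object):
    # Move operations act on the top-layer (target) object; anything
    # else is passed through so the underlying content stays usable.
    if event["type"] == "move":
        top_object.handle(event)
        return "top"
    lower_object.handle(event)
    return "lower"

top, lower = Recorder(), Recorder()
dispatch({"type": "move", "dx": 5}, top, lower)         # handled by overlay
dispatch({"type": "click", "pos": (1, 2)}, top, lower)  # falls through
```

The design choice here is that the overlay never blocks the document underneath: only the one operation it must intercept (moving itself) is kept, everything else is forwarded.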
It should be noted that the target document described in this embodiment may be presented in the target page described in the operation method of the object in each page. Specifically, the target page described in fig. 1 to 3C may be the target document described in the present embodiment; the page object (e.g., target page object) described in FIGS. 1-3C may be a document object (e.g., target document object) described in this embodiment; the confidential pages described in fig. 1 to 3C may be the confidential documents described in this embodiment. Therefore, in addition to the above-mentioned contents, the present embodiment may further include corresponding technical features described in the embodiments corresponding to fig. 1 to fig. 3C, so as to further achieve the technical effect of the operation method of the object in the page shown in fig. 1 to fig. 3C, and for brevity, the related description is specifically referred to fig. 1 to fig. 3C, and is not repeated herein.
The method for operating an object in a document according to the embodiment of the present application presents a target document, where a target region of the target document presents a target document object; detects an object operation performed on the target document; determines whether an intersection region exists between the target region and the operation region of the object operation; and finally, in a case where the intersection region exists, determines whether to allow the object operation to be performed on the target document object, based on the operation information of the object operation. Thus, whether the object operation is performed on the target document object is decided from the operation information of the object operation, which improves both the security of document operations and the convenience of operating document objects.
Fig. 5 is a schematic structural diagram of an operating device for an object in a page according to an embodiment of the present application. The device specifically comprises:
a first presentation unit 401, configured to present a target page;
a first detecting unit 402, configured to detect an object operation performed by the target page;
a first determining unit 403, configured to determine, based on operation information of the object operation, an operation object of the object operation from a multi-layered page object located in an operation area of the object operation;
an operation unit 404, configured to perform the object operation on the operation object.
In one possible implementation, the determining, based on the operation information of the object operation, an operation object of the object operation from a multi-layered page object located in an operation area of the object operation includes:
generating discrimination information indicating whether the object operation is an object moving operation based on the operation information of the object operation;
and determining an operation object of the object operation from the multi-layer page object located in the operation area of the object operation, based on the discrimination information.
In one possible embodiment, before the presenting the target page, the apparatus further includes:
an acquisition unit (not shown in the figure) for acquiring page data for presenting a target page; wherein the page data comprises target object data; during the presentation of the target page, a target page object represented by the target object data is presented on the top layer of the target page; and
the presentation target page comprises:
presenting the target page based on the page data; and
the determining, based on the discrimination information, an operation object of the object operation from a multi-layer page object located in an operation area of the object operation includes:
determining that the operation object of the object operation is the target page object under the condition that the judgment information indicates that the object operation is the object moving operation;
and when the judgment information does not indicate that the object operation is the object moving operation, determining that an operation object of the object operation is a lower-layer object of the target page object.
In a possible implementation manner, in a case that an operation object of the object operation is the target page object, the performing the object operation on the operation object includes:
determining a moving direction and a moving distance of the object moving operation;
and moving the presenting position of the target page object in the target page to the moving direction by the moving distance so as to update the presenting position.
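A minimal sketch of this position update (coordinate convention assumed, not specified by the embodiment): the new presentation position is the old one displaced by the move distance along the move direction:

```python
def move_object(position, direction, distance):
    """Shift a page object's presentation position.

    `position` is an (x, y) pair; `direction` is a unit vector such
    as (1, 0) for rightward or (0, 1) for downward movement.
    """
    x, y = position
    dx, dy = direction
    return (x + dx * distance, y + dy * distance)

new_pos = move_object((100, 50), (1, 0), 30)  # drag 30 px to the right
```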
In one possible implementation, in a case that an operation object of the object operation is the lower layer object, the performing the object operation on the operation object includes:
and transmitting the object operation detected by the target page object to the lower-layer object so as to enable the lower-layer object to execute the object operation.
In one possible embodiment, the object operation includes a mouse key pressing operation and a mouse key releasing operation performed by a target mouse, and the mouse key pressing operation is the pressing operation immediately preceding the mouse key releasing operation; and
the generating, based on the operation information of the object operation, discrimination information indicating whether the object operation is an object moving operation includes:
determining a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation based on the operation information of the object operation;
determining whether the pressed position and the released position indicate different positions;
determining whether a target movement operation executed by the target mouse is detected or not under the condition that the pressing position and the releasing position indicate different positions, wherein the execution time of the target movement operation is between the execution time of the mouse key pressing operation and the execution time of the mouse key releasing operation;
in a case where the target moving operation is detected, discrimination information indicating that the object operation is an object moving operation is generated.
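These four steps amount to a click-versus-drag classifier. A sketch (timestamps and positions as plain tuples and floats; hypothetical shapes, not the embodiment's event model):

```python
def is_move_operation(press_pos, release_pos, move_times,
                      press_time, release_time):
    """Classify a mouse gesture as an object moving operation.

    The gesture counts as a drag when the press and release positions
    differ AND at least one mouse-move event occurred between the
    press and release timestamps; otherwise it is treated as a click.
    """
    if press_pos == release_pos:
        return False
    return any(press_time < t < release_time for t in move_times)

# Press at (10, 10), release at (60, 10), move event at t = 1.5: a drag.
drag = is_move_operation((10, 10), (60, 10), [1.5], 1.0, 2.0)
# Same press and release position: a click, not a drag.
click = is_move_operation((10, 10), (10, 10), [1.5], 1.0, 2.0)
```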
In one possible embodiment, the determining, based on the operation information of the object operation, a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation includes:
determining whether an intersection region exists between a target region and the operation region or not based on the operation information of the object operation, wherein the target region is a region where the target page object is located in the target page;
and under the condition that the intersection area exists between the target area and the operation area, determining a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation based on the operation information of the object operation.
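The guard in the first step is an axis-aligned rectangle overlap test; one way to write it (the rectangle layout `(left, top, right, bottom)` is an assumption):

```python
def has_intersection(a, b):
    """Return True when two axis-aligned rectangles overlap.

    Each rectangle is (left, top, right, bottom). Press and release
    positions are only examined when the operation region overlaps
    the target page object's region.
    """
    a_left, a_top, a_right, a_bottom = a
    b_left, b_top, b_right, b_bottom = b
    return (a_left < b_right and b_left < a_right and
            a_top < b_bottom and b_top < a_bottom)

target_region = (0, 0, 100, 40)       # where the target page object sits
operation_region = (90, 30, 120, 60)  # bounding box of the gesture
overlaps = has_intersection(target_region, operation_region)
```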
In one possible implementation, the target page object indicates that the target page is a classified page; and
the device further comprises:
a second detection unit (not shown in the figure) for detecting a printing operation for the target page;
a second determining unit (not shown in the figure) configured to determine whether the target object data is print-related data of the target page, in a case where the printing operation is detected, wherein a page object indicated by the print-related data is used for being presented in a print result of the target page;
a first printing unit (not shown in the figure) for printing the target page based on the target object data to obtain a printing result of the target page with the target page object if the target object data is not the printing related data.
In one possible embodiment, the printing the target page based on the target object data includes:
generating new printing related data of the target page based on the target object data so as to update the printing related data of the target page;
and printing the target page based on the updated printing related data.
In one possible embodiment, the printing the target page based on the target object data includes:
rendering the presentation content of the target page including the target page object;
and printing the target page according to the presentation content.
The operation device for the object in the page provided in this embodiment may be the operation device for the object in the page as shown in fig. 5, and may perform all the steps of the operation method for the object in each page, so as to achieve the technical effect of the operation method for the object in each page.
Fig. 6 is a schematic structural diagram of an operation apparatus for an object in a document according to an embodiment of the present application. The apparatus specifically comprises:
a second presentation unit 411 for presenting a target document; wherein a target region of the target document presents a target document object;
a third detecting unit 412, configured to detect an object operation performed by the target document;
a third determining unit 413, configured to determine whether there is an intersection region between the target region and the operation region operated by the object;
a fourth determining unit 414, configured to determine whether to allow the object operation to be performed on the target document object based on operation information of the object operation if the intersection region exists between the target region and the operation region.
In one possible implementation, the target document object indicates that the target document is a confidential document; and
the determining whether to allow the object operation to be executed for the target document object based on the operation information of the object operation includes:
determining whether the object operation is an object moving operation based on the operation information of the object operation;
if the object operation is the object moving operation, allowing the object operation to be executed on the target document object;
and if the object operation is not the object moving operation, prohibiting the object operation from being executed aiming at the target document object.
In one possible implementation, in a case that the object operation is allowed to be performed with respect to the target document object, the apparatus further includes:
a fifth determination unit (not shown in the figure) for determining a movement direction and a movement distance of the object movement operation;
a moving unit (not shown in the figure) for moving the presentation position of the target document object in the target document to the moving direction by the moving distance to update the presentation position.
In one possible implementation, the target document object is presented at the top level of the target area; and
in the case where the object operation is prohibited from being performed with respect to the target document object, the apparatus further includes:
an executing unit (not shown in the figure) for, in a case where a lower layer object of the target document object is included in the operation area, passing the object operation detected by the target document object to the lower layer object to cause the lower layer object to execute the object operation.
In one possible embodiment, the object operation includes a mouse key pressing operation and a mouse key releasing operation performed by a target mouse, and the mouse key pressing operation is the pressing operation immediately preceding the mouse key releasing operation; and
the determining whether the object operation is an object moving operation based on the operation information of the object operation includes:
determining a pressing position corresponding to the mouse key pressing operation and a releasing position corresponding to the mouse key releasing operation based on the operation information of the object operation;
determining whether the pressed position and the released position indicate different positions;
determining whether a target movement operation executed by the target mouse is detected or not under the condition that the pressing position and the releasing position indicate different positions, wherein the execution time of the target movement operation is between the execution time of the mouse key pressing operation and the execution time of the mouse key releasing operation;
determining that the object operation is an object moving operation when the target moving operation is detected.
In one possible embodiment, the apparatus further comprises:
a fourth detection unit (not shown in the figure) for detecting a printing operation for the target document;
a sixth determining unit (not shown in the figure) for determining that target object data for presenting the target document object is print-related data for the target document in a case where the printing operation is detected, wherein the document object indicated by the print-related data is used for presentation in a print result of the target document;
a second printing unit (not shown in the figure) for printing the target document based on the target object data to obtain a printing result of the target document with the target document object if the target object data is not the printing related data.
In one possible embodiment, the printing the target document based on the target object data includes:
generating new printing related data of the target document based on the target object data so as to update the printing related data of the target document;
and printing the target document based on the updated printing related data.
In one possible embodiment, the printing the target document based on the target object data includes:
rendering presentation content of the target document including the target document object;
and printing the target document according to the presentation content.
In one possible implementation manner, after the determining whether to allow the object operation to be performed on the target document object based on the operation information of the object operation, the apparatus further includes:
a seventh determining unit (not shown in the figure) for determining an operation object of the object operation from among the multi-layered document objects located in the operation area of the object operation based on the operation information of the object operation in a case where it is determined that the object operation is prohibited from being performed with respect to the target document object.
The operation device for the object in the document provided in this embodiment may be the operation device for the object in the document shown in fig. 6, and may perform all the steps of the operation method for the object in each document described above, so as to achieve the technical effect of the operation method for the object in each document described above.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device 500 shown in fig. 7 includes: at least one processor 501, memory 502, at least one network interface 504, and other user interfaces 503. The various components in the electronic device 500 are coupled together by a bus system 505. It is understood that the bus system 505 is used to enable connection communications between these components. The bus system 505 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 505 in FIG. 7.
The user interface 503 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen, among others).
It will be appreciated that the memory 502 in the embodiments of the present application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 502 described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 502 stores elements, executable units or data structures, or a subset thereof, or an expanded set thereof as follows: an operating system 5021 and application programs 5022.
The operating system 5021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application 5022 includes various applications, such as a Media Player (Media Player), a Browser (Browser), and the like, for implementing various application services. A program for implementing the method according to the embodiment of the present application may be included in the application 5022.
In this embodiment, by calling a program or an instruction stored in the memory 502, specifically, a program or an instruction stored in the application 5022, the processor 501 is configured to execute the method steps provided by the method embodiments, for example, including:
presenting a target page;
detecting object operation executed by the target page;
determining an operation object of the object operation from a multi-layer page object located in an operation area of the object operation based on the operation information of the object operation;
and executing the object operation on the operation object.
Alternatively,
presenting the target document; wherein a target region of the target document presents a target document object;
detecting an object operation executed by the target document;
determining whether an intersection region exists between the target region and an operation region operated by the object;
in a case where the intersection region exists between the target region and the operation region, it is determined whether the object operation is allowed to be performed with respect to the target document object, based on operation information of the object operation.
The method disclosed in the embodiments of the present application may be applied to the processor 501, or implemented by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in software form in the processor 501. The processor 501 may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software elements in the decoding processor. The software elements may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 502, and the processor 501 reads the information in the memory 502 and completes the steps of the method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing units may be implemented in one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by means of units performing the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The electronic device provided in this embodiment may be the electronic device shown in fig. 7, and may execute all the steps of the operation method for the object in each page, so as to achieve the technical effect of the operation method for the object in each page.
The embodiment of the application also provides a storage medium (computer readable storage medium). The storage medium herein stores one or more programs. Among others, storage media may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid state disk; the memory may also comprise a combination of the above kinds of memories.
When one or more programs in the storage medium are executable by one or more processors, the method for operating the objects in the page executed on the electronic device side is realized.
The processor is used for executing an operation program of the object in the page stored in the memory so as to realize the following steps of the operation method of the object in the page executed on the electronic equipment side:
presenting a target page;
detecting object operation executed by the target page;
determining an operation object of the object operation from a multi-layer page object located in an operation area of the object operation based on the operation information of the object operation;
and executing the object operation on the operation object.
Alternatively,
presenting the target document; wherein a target region of the target document presents a target document object;
detecting an object operation executed by the target document;
determining whether an intersection region exists between the target region and an operation region operated by the object;
in a case where the intersection region exists between the target region and the operation region, it is determined whether the object operation is allowed to be performed with respect to the target document object, based on operation information of the object operation.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments further describe the objects, technical solutions, and advantages of the present application in detail. It should be understood that the above-mentioned embodiments are only examples of the present application and are not intended to limit its scope; any modifications, equivalents, improvements, and the like made within the spirit and principle of the present application should be included in the scope of the present application. Moreover, while the foregoing embodiments are presented as a series of interrelated steps for clarity of description, those skilled in the art will appreciate that the present application is not limited by the order in which the acts are described, as some steps may occur in other orders or concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments, and that the acts and modules illustrated are not necessarily required to practice the present application.

Claims (14)

1. A method for operating an object in a page, the method comprising:
presenting a target page;
detecting object operation executed by the target page;
determining an operation object of the object operation from a multi-layer page object located in an operation area of the object operation based on the operation information of the object operation;
and executing the object operation on the operation object.
2. The method according to claim 1, wherein the determining an operation object of the object operation from a multi-layer page object located in an operation area of the object operation based on the operation information of the object operation comprises:
generating discrimination information indicating whether the object operation is an object moving operation based on the operation information of the object operation;
and determining an operation object of the object operation from the multi-layer page object located in the operation area of the object operation, based on the discrimination information.
3. The method of claim 2, wherein prior to said rendering the target page, the method further comprises:
acquiring page data for presenting a target page; wherein the page data comprises target object data; during the presentation of the target page, a target page object represented by the target object data is presented on the top layer of the target page; and
the presentation target page comprises:
presenting the target page based on the page data; and
the determining, based on the discrimination information, an operation object of the object operation from a multi-layered page object located in an operation area of the object operation includes:
determining that the operation object of the object operation is the target page object under the condition that the judgment information indicates that the object operation is the object moving operation;
and when the judgment information does not indicate that the object operation is the object moving operation, determining that an operation object of the object operation is a lower-layer object of the target page object.
4. The method according to claim 3, wherein in a case that an operation object of the object operation is the target page object, the performing the object operation on the operation object comprises:
determining a moving direction and a moving distance of the object moving operation;
and moving the presentation position of the target page object in the target page by the moving distance in the moving direction, so as to update the presentation position.
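The position update of claim 4 amounts to simple vector arithmetic. Encoding the moving direction as a unit vector is an assumed convention, not specified by the claim.

```python
# Illustrative sketch of claim 4: apply the detected moving direction
# and moving distance to the target page object's presentation position.

def move_object(position, direction, distance):
    """position: (x, y); direction: unit vector (dx, dy)."""
    x, y = position
    dx, dy = direction
    return (x + dx * distance, y + dy * distance)

print(move_object((10, 20), (1, 0), 5))  # (15, 20)
```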
5. The method according to claim 3, wherein, in a case where the operation object of the object operation is the lower-layer object, the performing the object operation on the operation object comprises:
and transmitting the object operation detected by the target page object to the lower-layer object, so that the lower-layer object executes the object operation.
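The pass-through behavior of claim 5 can be sketched as a top-layer object that forwards events it does not consume. The handler protocol below is a hypothetical illustration, not an interface defined by the patent.

```python
# Illustrative sketch of claim 5: the top-layer object does not consume
# a non-move operation; it transmits the event to the lower-layer object.

class LowerObject:
    def __init__(self):
        self.received = []

    def handle(self, operation):
        self.received.append(operation)

class TopLayerObject:
    def __init__(self, lower):
        self.lower = lower

    def handle(self, operation, is_move):
        if is_move:
            return "handled by top layer"
        self.lower.handle(operation)   # pass the event through
        return "forwarded to lower layer"

lower = LowerObject()
top = TopLayerObject(lower)
print(top.handle("click", is_move=False))  # forwarded to lower layer
print(lower.received)                      # ['click']
```

This mirrors overlay patterns in UI toolkits, where a transparent top layer intercepts only the gestures it owns and lets everything else reach the content beneath it.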
6. The method according to any one of claims 3-5, wherein the target page object indicates that the target page is a classified page; and
the method further comprises the following steps:
detecting a printing operation for the target page;
in a case where the printing operation is detected, determining whether the target object data is printing related data of the target page, wherein a page object indicated by the printing related data is to be presented in a printing result of the target page;
and in a case where the target object data is not the printing related data, printing the target page based on the target object data, so as to obtain a printing result of the target page with the target page object.
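The check in claim 6 and the update in claim 7 can be sketched together: if the target object data is not yet part of the page's printing related data, fold it in before printing so the printed result still shows the top-layer object. The list representation of printing related data is an assumption for illustration.

```python
# Illustrative sketch of claims 6-7: ensure the target object data is
# included in the printing related data before the page is printed.

def build_print_data(print_related_data, target_object_data):
    if target_object_data in print_related_data:
        return print_related_data                     # already printable as-is
    return print_related_data + [target_object_data]  # claim 7: update print data

page_print_data = ["body text", "footer"]
updated = build_print_data(page_print_data, "classification mark")
print(updated)  # ['body text', 'footer', 'classification mark']
```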
7. The method of claim 6, wherein said printing the target page based on the target object data comprises:
generating new printing related data of the target page based on the target object data so as to update the printing related data of the target page;
and printing the target page based on the updated printing related data.
8. The method of claim 6, wherein said printing the target page based on the target object data comprises:
rendering presentation content of the target page including the target page object;
and printing the target page according to the presentation content.
9. A method for manipulating an object in a document, the method comprising:
presenting the target document; wherein a target region of the target document presents a target document object;
detecting an object operation performed on the target document;
determining whether an intersection region exists between the target region and an operation region of the object operation;
and in a case where the intersection region exists between the target region and the operation region, determining, based on operation information of the object operation, whether the object operation is allowed to be performed on the target document object.
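The intersection test of claim 9 is a standard region-overlap check. Representing both regions as axis-aligned rectangles `(x1, y1, x2, y2)` is an assumption made for illustration.

```python
# Illustrative sketch of claim 9: test whether the operation area
# intersects the target region before deciding whether the operation
# may act on the target document object.

def regions_intersect(a, b):
    """a, b: axis-aligned rectangles (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

target_region = (0, 0, 10, 10)
print(regions_intersect(target_region, (5, 5, 15, 15)))    # True
print(regions_intersect(target_region, (20, 20, 30, 30)))  # False
```

Only when this check succeeds does the method go on to consult the operation information; otherwise the target document object is not a candidate operation object at all.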
10. The method of claim 9, wherein, after the determining whether the object operation is allowed to be performed on the target document object based on the operation information of the object operation, the method further comprises:
in a case where it is determined that the object operation is not allowed to be performed on the target document object, determining an operation object of the object operation from among multi-layer document objects located in an operation area of the object operation, based on the operation information of the object operation.
11. An apparatus for operating objects in a page, the apparatus comprising:
a first presentation unit, configured to present a target page;
a first detection unit, configured to detect an object operation performed on the target page;
a first determination unit, configured to determine, based on operation information of the object operation, an operation object of the object operation from among multi-layer page objects located in an operation area of the object operation;
and an operation unit, configured to execute the object operation on the operation object.
12. An apparatus for manipulating an object in a document, the apparatus comprising:
a second presentation unit, configured to present a target document, wherein a target region of the target document presents a target document object;
a third detection unit, configured to detect an object operation performed on the target document;
a third determination unit, configured to determine whether an intersection region exists between the target region and an operation region of the object operation;
and a fourth determination unit, configured to determine, in a case where the intersection region exists between the target region and the operation region, whether the object operation is allowed to be performed on the target document object, based on operation information of the object operation.
13. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program stored in the memory, wherein the computer program, when executed, implements the method of any one of claims 1-10.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of any one of the preceding claims 1 to 10.
CN202211569910.2A 2022-12-06 2022-12-06 Operation method and device of objects in page, electronic equipment and storage medium Pending CN115934229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211569910.2A CN115934229A (en) 2022-12-06 2022-12-06 Operation method and device of objects in page, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211569910.2A CN115934229A (en) 2022-12-06 2022-12-06 Operation method and device of objects in page, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115934229A (en) 2023-04-07

Family

ID=86650278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211569910.2A Pending CN115934229A (en) 2022-12-06 2022-12-06 Operation method and device of objects in page, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115934229A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117519888A (en) * 2024-01-05 2024-02-06 成都泰盟软件有限公司 Method and system for generating chat record document based on Web screenshot

Similar Documents

Publication Publication Date Title
US9805005B1 (en) Access-control-discontinuous hyperlink handling system and methods
US10068104B2 (en) Conditional redaction of portions of electronic documents
US7458038B2 (en) Selection indication fields
US9013438B2 (en) Touch input data handling
US7337389B1 (en) System and method for annotating an electronic document independently of its content
US7559033B2 (en) Method and system for improving selection capability for user interface
TWI473002B (en) Method for communication between a document editor in-space user interface and a document editor out-space user interface
US9348803B2 (en) Systems and methods for providing just-in-time preview of suggestion resolutions
US20070198561A1 (en) Method and apparatus for merging data objects
US9336753B2 (en) Executing secondary actions with respect to onscreen objects
EP3491506B1 (en) Systems and methods for a touchscreen user interface for a collaborative editing tool
US20060267958A1 (en) Touch Input Programmatical Interfaces
US20090327853A1 (en) Comparing And Selecting Form-Based Functionality
US20130067366A1 (en) Establishing content navigation direction based on directional user gestures
US7921370B1 (en) Object-level text-condition indicators
KR930001926B1 (en) Display control method and apparatus
US20120306749A1 (en) Transparent user interface layer
US9910835B2 (en) User interface for creation of content works
CN115934229A (en) Operation method and device of objects in page, electronic equipment and storage medium
CN102841993A (en) Electronic apparatus, program, and control method
US10437464B2 (en) Content filtering system for touchscreen devices
US9659338B2 (en) Method and system for adaptive content protection
US20120079404A1 (en) Method for creating and searching a folder in a computer system
KR101446075B1 (en) Method and apparatus for copying formatting between objects through touch-screen display input
CN108932054B (en) Display device, display method, and non-transitory recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination