CN114741016B - Operation method, device, electronic equipment and computer readable storage medium


Info

Publication number
CN114741016B
Authority
CN
China
Prior art keywords: display window, display, window, sub, current
Prior art date
Legal status
Active
Application number
CN202210543872.7A
Other languages
Chinese (zh)
Other versions
CN114741016A (en)
Inventor
郭充
王峣
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd
Priority to CN202210543872.7A
Publication of CN114741016A
Application granted
Publication of CN114741016B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, using icons
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486 Drag-and-drop

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An operation method, an apparatus, an electronic device, and a computer-readable storage medium. The operation method is applied to a display page; the display page includes a display area, the display area includes a plurality of sub-display areas, and the display page includes a current display window. The method includes: in response to the current display window overlapping at least partial regions of at least two sub-display areas of the plurality of sub-display areas, acquiring a resizing trigger operation performed on the current display window; in response to the resizing trigger operation, determining the window area occupied in the display page after the current display window is resized, the window area including at least parts of the at least two sub-display areas; and generating a first display window in the window area, the first display window replacing the current display window so that the current display window is visually resized. The method addresses the problem that the edge of the first display window cannot be controlled accurately for external reasons such as occlusion by the user's hand or other body parts.

Description

Operation method, device, electronic equipment and computer readable storage medium
Technical Field
Embodiments of the present disclosure relate to an operating method, apparatus, electronic device, and computer-readable storage medium.
Background
With the development of electronic and internet technologies, electronic devices provide increasingly rich functions, making people's lives increasingly intelligent.
For example, smart cities, smart parks, and smart enterprises are models that use new-generation information technologies such as the internet of things, cloud computing, big data, and spatial information integration to bring cities, parks, and enterprises into intelligent service, management, and construction. Smart-city, smart-park, and smart-enterprise systems can present data information or image information to users through a visual management and control platform.
Disclosure of Invention
At least one embodiment of the present disclosure provides an operation method applied to a display page, where the display page includes a display area, the display area includes a plurality of sub-display areas, and the display page includes a current display window. The method includes: in response to the current display window overlapping at least partial regions of at least two sub-display areas of the plurality of sub-display areas, acquiring a resizing trigger operation performed on the current display window; in response to the resizing trigger operation, determining a window area occupied in the display page after the current display window is resized, wherein the window area includes at least parts of the at least two sub-display areas; and generating a first display window in the window area, the current display window being replaced by the first display window to visually resize the current display window.
For example, in an operation method provided in an embodiment of the present disclosure, the resizing trigger operation includes a click operation performed on the current display window.
For example, in an operation method provided by an embodiment of the present disclosure, a current display window includes at least one element, where the at least one element includes a current window display area, and the current window display area is used for displaying an image from a signal source object bound to the current display window.
For example, in an operation method provided by an embodiment of the present disclosure, the at least one element further includes a title bar, the title bar is located on one side of the current window display area, and the method further includes: displaying image information of the image in the title bar.
For example, in an operation method provided by an embodiment of the present disclosure, at least one element further includes at least one control icon, and the at least one control icon is located in the title bar.
For example, in an operating method provided by an embodiment of the present disclosure, the first display window includes a first window display area, and the method further includes: binding the signal source object with the first display window; and in response to receiving an image from the source object, displaying the image in the first window display area.
For example, in an operation method provided by an embodiment of the present disclosure, the at least one element further includes a video display window, the video display window being located in the title bar, the video display window being used for displaying video from another signal source object bound to the video display window.
For example, in an operation method provided in an embodiment of the present disclosure, the method further includes: in response to acquiring a zoom trigger operation on the current display window, generating a second display window according to the zoom trigger operation to visually zoom the current display window.
For example, in an operation method provided by an embodiment of the present disclosure, the current display window includes at least one element, the at least one element further includes a title bar and at least one control icon, and the at least one control icon is located in the title bar. In this case, generating a second display window according to the zoom trigger operation to visually zoom the current display window includes: in a case where the size of the current display window is reduced below a size threshold in response to the zoom trigger operation, selecting, from the at least one control icon, a subset of control icons to be displayed in the second display window; and displaying the subset of control icons in the title bar to generate the second display window.
For example, in an operation method provided by an embodiment of the present disclosure, the zoom trigger operation includes a pull operation performed on a control point of the current display window, and in response to acquiring the zoom trigger operation on the current display window, according to the zoom trigger operation, generating a second display window to visually zoom the current display window includes: moving a first edge of the current display window to a first edge position in response to the pulling operation, and taking the first edge position as a second edge position of a second edge corresponding to the first edge in the second display window; and generating the second display window according to the second edge position.
For example, in an operation method provided by an embodiment of the present disclosure, the display area is divided into the plurality of sub-display areas by a plurality of grid lines, the plurality of grid lines including grid lines extending in a first direction and grid lines extending in a second direction, and the zoom trigger operation includes a pull operation performed on a first edge of the current display window, the first edge extending in the first direction. In this case, generating a second display window according to the zoom trigger operation to visually zoom the current display window includes: in response to the pull operation, moving the first edge along the second direction to a first edge position, and determining, according to the first edge position, a second edge position of a second edge that corresponds to the first edge in the second display window, wherein the second edge position is the grid line extending along the first direction that is closest to the first edge position.
For example, in an operation method provided by an embodiment of the present disclosure, the positions and sizes of the edges of the current display window other than the first edge are unchanged in the second display window; only the first edge is moved to the second edge position, so that the second edge replaces the first edge.
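As an illustration of the edge-snapping behaviour described above (not part of the claimed method), the second edge position may be found by picking the grid line nearest to the pulled first edge. The sketch below assumes the grid-line coordinates along the second direction are available in an array; the function name and parameters are hypothetical.

```js
// Illustrative sketch only: snap the pulled first edge to the nearest grid
// line extending along the first direction. `gridLinePositions` is an assumed
// array of grid-line coordinates measured along the second direction.
function snapEdgeToGrid(firstEdgePosition, gridLinePositions) {
  let secondEdgePosition = gridLinePositions[0];
  for (const p of gridLinePositions) {
    if (Math.abs(p - firstEdgePosition) < Math.abs(secondEdgePosition - firstEdgePosition)) {
      secondEdgePosition = p;
    }
  }
  return secondEdgePosition; // second edge position of the second display window
}
```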
For example, in an operation method provided by an embodiment of the present disclosure, the arrangement of the plurality of sub display regions is the same as the arrangement of the plurality of sub display screens interacting with the display page.
For example, in an operation method provided in an embodiment of the present disclosure, the display page includes a plurality of signal source objects, and the method further includes: acquiring a placement trigger operation for placing a target object, wherein the target object is selected from the plurality of signal source objects; in response to the placement trigger operation on the target object, creating an object display window in the sub-display area corresponding to the placement trigger operation; and binding the object display window to the target object, and displaying the image from the target object in the object display window.
For example, in an operation method provided in an embodiment of the present disclosure, the method further includes: in response to acquiring a drag trigger operation on the object display window, visually moving the object display window to a target position corresponding to the drag trigger operation.
For example, in an operation method provided in an embodiment of the present disclosure, the drag trigger operation includes: dragging the object display window until the object display window is dragged to the target position while the object display window is selected; and releasing control of the object display window in response to the object display window being dragged to the target position.
For example, in an operation method provided in an embodiment of the present disclosure, the resizing trigger operation includes a zoom-in trigger operation for zooming in a current display window or a zoom-out trigger operation for zooming out the current display window.
For example, in an operation method provided in an embodiment of the present disclosure, the resizing trigger operation includes a zoom-out trigger operation for zooming out the current display window, and determining, according to the positional relationship, a window area occupied in the display page after the resizing of the current display window includes: acquiring the overlapping areas of the current display window and a plurality of sub-display areas respectively; and determining that the window area occupied in the display page after the current display window is resized is the sub-display area with the largest overlapping area.
For example, in an operation method provided in an embodiment of the present disclosure, the resizing trigger operation includes a zoom-in trigger operation for zooming in the current display window, and determining, according to the positional relationship, a window area occupied in the display page after the resizing of the current display window includes: acquiring each sub-display area where the feature point of the current display window is located; and taking the maximum area formed by each sub-display area as the window area occupied in the display page after the size of the current display window is adjusted.
At least one embodiment of the present disclosure provides an operating device, applied to a display page, where the display page includes a display area, the display area includes a plurality of sub-display areas, and the display page includes a current display window, the device includes: a trigger operation acquisition unit configured to acquire a size adjustment trigger operation performed on the current display window in response to the current display window respectively overlapping with at least a partial area of at least two sub-display areas of the plurality of sub-display areas; the area determining unit is configured to respond to the size adjustment triggering operation, determine a window area occupied in the display page after the size of the current display window is adjusted, wherein the window area at least comprises parts of the at least two sub-display areas; and a window generating unit configured to generate a first display window in the window region, the current display window being replaced by the first display window to visually adjust a size of the current display window.
At least one embodiment of the present disclosure provides an electronic device including a processor and a memory, the memory storing one or more computer program instructions which, when executed by the processor, implement the operation method provided by any embodiment of the present disclosure.
At least one embodiment of the present disclosure provides a computer-readable storage medium having non-transitory computer-readable instructions stored thereon, which when executed by a processor, may implement a method of operation provided by any of the embodiments of the present disclosure.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments will be briefly introduced below, and it is apparent that the drawings in the following description relate only to some embodiments of the present disclosure and are not limiting to the present disclosure.
FIG. 1A illustrates a flow chart of a method of operation provided by at least one embodiment of the present disclosure;
fig. 1B illustrates a schematic diagram that a position of a current display window in a display page meets a preset condition according to at least one embodiment of the present disclosure;
FIG. 1C illustrates a flow chart of another method of operation provided by at least one embodiment of the present disclosure;
fig. 1D illustrates a flowchart of a method for adaptive rectangular window enlargement according to at least one embodiment of the present disclosure;
fig. 1E is a schematic diagram illustrating an effect of at least one embodiment of the present disclosure after a zoom-in trigger operation is performed on the current display window in fig. 1B;
FIGS. 1F and 1G are schematic diagrams illustrating a zoom-out trigger operation performed on a current display window according to at least one embodiment of the present disclosure;
fig. 2A illustrates a flowchart of a method of step S30 in fig. 1A according to at least one embodiment of the present disclosure;
FIG. 2B is a diagram illustrating a newly created first display window provided by at least one embodiment of the present disclosure;
fig. 2C is a schematic diagram illustrating a first display window after a signal source object is bound according to at least one embodiment of the present disclosure;
fig. 3 illustrates a schematic diagram of a zoom trigger operation performed on a current display window according to at least one embodiment of the present disclosure;
fig. 4A illustrates a flowchart of a method for generating a second display window according to a zoom trigger operation according to at least one embodiment of the present disclosure;
FIG. 4B illustrates a schematic diagram of four second display windows provided by at least one embodiment of the present disclosure;
fig. 4C illustrates a schematic diagram of step S401 in fig. 4A provided by at least one embodiment of the present disclosure;
FIG. 5A illustrates a flow chart of another method of operation provided by at least one embodiment of the present disclosure;
fig. 5B illustrates a flowchart of a method of step S40 in fig. 5A according to at least one embodiment of the present disclosure;
fig. 5C is a schematic diagram illustrating an effect of a plurality of sub-display regions provided by at least one embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating a method for selecting a signal source object from a signal source list and placing the signal source object in a certain sub-display area according to at least one embodiment of the present disclosure;
FIGS. 7A-7D are schematic diagrams of a display page provided by at least one embodiment of the present disclosure;
fig. 8 illustrates a schematic block diagram of an operating device provided by at least one embodiment of the present disclosure;
fig. 9 illustrates a schematic block diagram of an electronic device provided by at least one embodiment of the present disclosure;
fig. 10 illustrates a schematic block diagram of another electronic device provided by at least one embodiment of the present disclosure; and
fig. 11 illustrates a schematic diagram of a computer-readable storage medium provided by at least one embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. Also, the use of the terms "a," "an," or "the" and similar referents do not denote a limitation of quantity, but rather denote the presence of at least one. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
As the functions provided by electronic devices become richer, electronic devices interact with users more and more. For example, a visual management and control platform may include a scene editor in which a user can perform visual operations on various graphics. For example, the user may perform various window operations on a web page provided by the scene editor, such as moving a window, resizing a window, and so on; after each window operation is completed, a corresponding instruction is sent to the background server, so that a scene is generated according to the window operations executed by the user in the scene editor, and the scene created by the user in the scene editor can be stored as a thumbnail to be called up and displayed at any time.
For example, in a scene editor, if the user has to adjust the window size manually, it is difficult to adjust it accurately because, for example, the user's hand occludes the window edge.
At least one embodiment of the present disclosure provides an operation method, an apparatus, an electronic device, and a computer-readable storage medium. The operation method is applied to a display page, where the display page includes a display area, the display area includes a plurality of sub-display areas, the display page includes a current display window, and the position of the current display window in the display page meets a preset condition. The method includes: in response to acquiring a resizing trigger operation performed on the current display window, acquiring the positional relationship between the current display window and the plurality of sub-display areas; determining, according to the positional relationship, a window area occupied in the display page after the current display window is resized, wherein the window area includes at least some of the plurality of sub-display areas; and generating a first display window in the window area, the current display window being replaced by the first display window to visually resize the current display window. The method can accurately control the position of the edge of the first display window after the current display window is resized, which solves the technical problem that this edge cannot be controlled accurately for external reasons such as occlusion by the user's hand.
At least one embodiment of the present disclosure provides an operating device applied to a display page. The display page includes a display area, the display area includes a plurality of sub-display areas, and the display page includes a current display window. The device includes a trigger operation acquisition unit, an area determination unit, and a window generation unit. The trigger operation acquisition unit is configured to acquire a resizing trigger operation performed on the current display window in response to the current display window overlapping at least partial regions of at least two sub-display areas of the plurality of sub-display areas. The area determination unit is configured to determine, in response to the resizing trigger operation, a window area occupied in the display page after the current display window is resized, wherein the window area includes at least parts of the at least two sub-display areas. The window generation unit is configured to generate a first display window in the window area, the current display window being replaced by the first display window to visually resize the current display window. The device can accurately control the position of the edge of the first display window after the current display window is resized, which solves the technical problem that this edge cannot be controlled accurately for external reasons such as occlusion by the user's hand.
It should be noted that although the present disclosure uses window resizing in a scene editor as an example to illustrate the technical problem to be solved, this does not limit the operation method provided by the present disclosure, which may be applied to any scenario.
Fig. 1A illustrates a flow chart of a method of operation provided by at least one embodiment of the present disclosure.
As shown in FIG. 1A, the method may include steps S10-S40. The operation method is applied to a display page, the display page comprises a display area, the display area comprises a plurality of sub-display areas, and the display page comprises a current display window.
Step S10: in response to the position of the current display window in the display page meeting a preset condition, acquiring a resizing trigger operation performed on the current display window.
Step S20: in response to the resizing trigger operation, acquiring the positional relationship between the current display window and the plurality of sub-display areas.
Step S30: determining, according to the positional relationship, a window area occupied in the display page after the current display window is resized, wherein the window area includes at least some of the plurality of sub-display areas.
Step S40: a first display window is generated in the window area, and the current display window is replaced by the first display window to visually adjust the size of the current display window.
In some embodiments of the present disclosure, the preset condition is, for example, that the current display window overlaps at least partial regions of at least two sub-display regions of the plurality of sub-display regions, respectively. For example, the plurality of sub display regions includes a first sub display region where the first portion of the current display window is located and a second sub display region where the second portion of the current display window is located.
For another example, the preset condition may be that one vertex of the current display window is aligned with a vertex of a page container in the display page, and the like. The preset conditions are not limited in the present disclosure, and those skilled in the art can set the preset conditions according to actual requirements. The operation method provided by the present disclosure is described below by taking a preset condition as an example that the current display window is respectively overlapped with at least partial areas of at least two sub-display areas of the plurality of sub-display areas.
In some embodiments of the present disclosure, step S10 may be to acquire a resizing trigger operation performed on the current display window in response to the current display window overlapping at least a partial region of at least two sub-display regions of the plurality of sub-display regions, respectively.
Fig. 1B illustrates a schematic diagram that a position of a current display window in a display page meets a preset condition according to at least one embodiment of the present disclosure.
In the example of fig. 1B, the preset condition is that the current display window overlaps at least partial regions of at least two sub display regions of the plurality of sub display regions, respectively.
As shown in fig. 1B, the display page 100 includes a display area 1000, and the display area 1000 includes the current display window 11. The display area 1000 includes a plurality of sub display areas, for example, a sub display area 9, a sub display area 1, a sub display area 2, a sub display area 3, a sub display area 4, a sub display area 5, a sub display area 6, a sub display area 7, and a sub display area 8.
As shown in fig. 1B, the current display window 11 overlaps at least partial regions of the sub-display areas 4, 5, 7, and 8, respectively, and thus satisfies the preset condition.
With respect to step S10, in some embodiments of the present disclosure, the resizing trigger operation may include a zoom-in trigger operation for enlarging the current display window or a zoom-out trigger operation for shrinking the current display window.
In some embodiments of the present disclosure, the resizing trigger operation comprises a click operation on the currently displayed window. For example, performing a double-click operation on the current display window is performing an enlargement triggering operation on the current display window, and performing a triple-click operation on the current display window is performing a reduction triggering operation on the current display window. For example, the display page is displayed on the touch screen, and if the time length between two click operations performed on the current display window is less than a preset time length (e.g., 1 s), the operation is a double-click operation. Similarly, for example, if the total time length of three consecutive click operations is less than a preset time length (e.g., 2 s), the operation is a three-click operation. The size adjustment triggering operation may also be other operations, such as a sliding operation, and the size adjustment triggering operation is not limited by the present disclosure, and a person skilled in the art may set the size adjustment triggering operation by himself or herself.
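Purely as an illustration of the double-click / triple-click distinction described above (the disclosure does not prescribe an implementation), consecutive clicks on the current display window can be counted against the example thresholds; the 300 ms grace period for a possible third click is an added assumption.

```js
// Illustrative sketch only: distinguish a double-click (enlargement trigger)
// from a triple-click (reduction trigger) on the current display window.
// The 1 s and 2 s limits follow the example above.
let clickTimes = [];
let pendingDoubleClick = null;

function onWindowClick(onEnlarge, onShrink) {
  const now = Date.now();
  clickTimes = clickTimes.filter((t) => now - t <= 2000).concat(now);
  if (pendingDoubleClick !== null) clearTimeout(pendingDoubleClick);
  if (clickTimes.length >= 3) {
    clickTimes = [];
    onShrink(); // three clicks within 2 s: reduction trigger operation
  } else if (clickTimes.length === 2 && clickTimes[1] - clickTimes[0] < 1000) {
    pendingDoubleClick = setTimeout(() => {
      clickTimes = [];
      onEnlarge(); // two clicks within 1 s and no third click: enlargement trigger
    }, 300); // assumed grace period for a possible third click
  }
}
```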
Unless otherwise specified, the embodiments of the present disclosure are described below taking the resizing trigger operation to be an enlargement trigger operation.
For step S20: for example, in response to a double-click operation performed on the current display window, the positional relationship between the current display window and the plurality of sub display regions is acquired.
In some embodiments of the present disclosure, the position relationship includes a sub-display region in which at least part of the feature points in the current display window are located.
For example, the current display window is a rectangle, and at least some of the feature points may be three or four vertices of the rectangle; step S20 may then be to obtain the sub-display areas where these three or four vertices of the current display window are located.
For example, as shown in FIG. 1B, each sub-display region is traversed to find three sub-display regions in which three vertices A, B, C of the current display window (rectangle) are located.
The method for determining whether a sub-display area contains a vertex of the current display window is described taking vertex A in fig. 1B as an example. Assuming that the sub-display areas are also rectangular, that the top-left vertex of a given sub-display area has coordinates (gridX, gridY), its width is gridW and its height is gridH, and that the top-left vertex A of the current display window has coordinates (left, top), then vertex A falls within the sub-display area if any one of the following conditions (1) to (4) is satisfied:
(1) left == gridX and top == gridY;
(2) left > gridX and left < gridX + gridW and top > gridY and top < gridY + gridH;
(3) top == gridY and left > gridX and left < gridX + gridW;
(4) left == gridX and top > gridY and top < gridY + gridH.
Conditions (1) to (4) above also cover the case where the vertex falls on an edge of the sub-display area. The method for judging the top-right vertex B and the bottom-left vertex C of the current display window is similar to that for vertex A.
In some embodiments of the present disclosure, the sub-display areas are drawn, for example, using the API provided by fabric.js, and the vertex coordinates (gridX, gridY), the width gridW, and the height gridH may be obtained through the API provided by fabric.js. Fabric.js is an open-source drawing library based on canvas; it provides a JS API that simplifies native canvas operations, canvas being a mechanism provided by HTML5 for drawing graphics on web pages using scripts.
Of course, those skilled in the art may also use other determination methods to determine whether the sub-display area includes the vertex of the current display window, which is not limited in this disclosure.
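As an illustration only, conditions (1) to (4) above can be combined into a single check. The sketch below assumes each sub-display area is represented as a plain object { gridX, gridY, gridW, gridH }; points lying on the top or left edge of the sub-display area count as falling within it, as in the conditions above.

```js
// Sketch of conditions (1)-(4): does the vertex (left, top) fall within the
// sub-display area whose top-left corner is (gridX, gridY), width gridW and
// height gridH?
function vertexInSubArea(left, top, { gridX, gridY, gridW, gridH }) {
  const inX = left === gridX || (left > gridX && left < gridX + gridW);
  const inY = top === gridY || (top > gridY && top < gridY + gridH);
  return inX && inY;
}
```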
For step S30, for example, the size of the window region is determined according to the sub-display areas in which at least some of the feature points of the current display window are located, and the window region is then determined according to that size.
In some embodiments of the present disclosure, the window region includes at least portions of at least two sub-display regions. For example, the window region includes one or more of the at least two sub-display regions.
For example, in the case where the window region is rectangular and at least part of the feature points include at least part of the vertices of the rectangle, the width and height of the window region are determined according to the sub-display region in which at least part of the vertices are located.
For example, the width of the window area is determined according to at least one sub-display area where two width vertexes in the width direction of the current display window are located; and determining the height of the window area according to at least one sub-display area where the two height vertexes in the height direction of the current display window are located.
There may be duplicate vertices for the two width vertices and the two height vertices. For example, vertex a in fig. 1B may be one of the width vertices or one of the height vertices.
In some embodiments of the present disclosure, determining the width of the window region according to the sub-display areas where the two width vertices in the width direction of the current display window are respectively located includes: taking, as the width of the window region, the maximum distance in the width direction between the edges of the at least one sub-display area where the two width vertices are located. Determining the height of the window region according to the at least one sub-display area where the two height vertices in the height direction of the current display window are located includes: taking, as the height of the window region, the maximum distance in the height direction between the edges of the at least one sub-display area where the two height vertices are located.
For example, as shown in fig. 1B, at least some of the vertices include vertex a at the upper left corner of the current display window, vertex B at the upper right corner of the current display window, and vertex C at the lower left corner of the current display window, the width of the window area may be determined according to the sub-display area 4 where vertex a at the upper left corner is located and the sub-display area 5 where vertex B at the upper right corner is located, and the height of the window area may be determined according to the sub-display area 4 where vertex a at the upper left corner is located and the sub-display area 7 where vertex C at the lower left corner of the current display window is located.
For example, in the enlargement trigger operation, the two width vertices in the width direction of the current display window include vertex a in the upper left corner and vertex B in the upper right corner of the current display window 11. The distance between the top left corner vertex O of the sub-display area 4 where the top left corner vertex a is located and the top right corner vertex P of the sub-display area where the top right corner vertex B is located is the maximum distance of the edge of at least one sub-display area in the width direction, and the width of the window area is the distance between the vertex O and the vertex P. For example, the two height vertices in the height direction of the current display window include vertex a in the upper left corner and vertex C in the lower left corner of the current display window 11. The distance between the top left corner vertex O of the sub-display area 4 where the top left corner vertex a is located and the bottom left corner vertex M of the sub-display area where the bottom left corner vertex C is located is the maximum distance in the height direction of the edge of at least one sub-display area, and thus the height of the window area is the distance between the vertex O and the vertex M.
For step S30, after determining the width and height of the window region, the window region is determined based on the width and height of the window region. For example, in the example of fig. 1B, the width of the window region is the distance between the vertex O and the vertex P, and the height of the window region is the distance between the vertex O and the vertex M, so that the window region is a rectangular region formed by the vertex O, the vertex P, the vertex N, and the vertex M (hereinafter, referred to as "window region OPNM").
Of course, the current display window may have other shapes, such as a pentagon, a hexagon, etc., and the present disclosure does not limit the shapes of the current display window and the sub display region. One skilled in the art can devise suitable methods for determining the size of the window area, not listed here, depending on the shape of the current display window and the sub-display area.
For example, the resize trigger operation in step S20 includes a zoom-in trigger operation for zooming in the current display window, and step S30 includes: acquiring each sub-display area where the feature point of the current display window is located; and taking the maximum area formed by each sub-display area as a window area occupied in the display page after the size of the current display window is adjusted.
For example, as shown in fig. 1B, the sub-display regions in which the vertex a at the upper left corner, the vertex B at the upper right corner, the vertex C at the lower left corner, and the vertex D at the lower right corner of the feature point of the current display window are located are the sub-display region 4, the sub-display region 5, the sub-display region 7, and the sub-display region 8. The maximum area formed by the sub display area 4, the sub display area 5, the sub display area 7, and the sub display area 8 is OPNM, and therefore, the window area occupied in the display page after the current display window is resized is OPNM.
For another example, the resize trigger operation in step S20 includes a zoom-out trigger operation for shrinking the current display window, and step S30 includes: in response to the resizing trigger operation, acquiring the overlap area between the current display window and each of the plurality of sub-display areas; and determining the sub-display area with the largest overlap area as the window area occupied in the display page after the current display window is resized. Figs. 1F and 1G below describe the zoom-out trigger operation by way of example.
For step S40, for example, a first display window is obtained by redrawing the display window in the window region OPNM, and the original current display window 11 is deleted; the first display window displays the same content as the current display window 11, thereby replacing the current display window 11 and visually enlarging it.
Fig. 1C illustrates a flow chart of another method of operation provided by at least one embodiment of the present disclosure.
As shown in FIG. 1C, the method may include steps S110 to S130. The operation method is applied to a display page, the display page comprises a display area, the display area comprises a plurality of sub-display areas, and the display page comprises a current display window.
The method of operation shown in FIG. 1C is one embodiment of the method of operation shown in FIG. 1A.
Step S110: and acquiring a size adjustment triggering operation executed on the current display window in response to the fact that the current display window is respectively overlapped with at least partial areas of at least two sub-display areas in the plurality of sub-display areas. This step S110 is similar to step S10 described above with reference to fig. 1A, and please refer to the related description above.
Step S120: in response to the resizing trigger operation, determining a window area occupied in the display page after the current display window is resized, wherein the window area includes at least parts of the at least two sub-display areas.
This step S120 is similar to steps S20 and S30 described above with respect to fig. 1A, please refer to the related description above.
For example, the resizing trigger operation includes a zoom-in trigger operation for zooming in the current display window, and step S120 may include: acquiring each sub-display area where the feature point of the current display window is located; and taking the maximum area formed by each sub-display area as a window area occupied in the display page after the size of the current display window is adjusted.
For another example, the resizing trigger operation includes a zoom-out trigger operation for shrinking the current display window, and step S120 includes: in response to the resizing trigger operation, acquiring the overlap area between the current display window and each of the plurality of sub-display areas; and determining the sub-display area with the largest overlap area as the window area occupied in the display page after the current display window is resized. Figs. 1F and 1G below describe the zoom-out trigger operation by way of example.
Step S130: a first display window is generated in the window area, and the current display window is replaced by the first display window to visually adjust the size of the current display window. This step S130 is similar to step S40 described above with reference to fig. 1A, and please refer to the related description above.
Fig. 1D illustrates a flowchart of a method for adaptive rectangular window enlargement according to at least one embodiment of the present disclosure.
As shown in FIG. 1D, the method may include steps S101 to S111.
Step S101: an amplification trigger operation is obtained. For example, the double-click operation on the current display window in the rectangular shape in the display page in the touch screen is acquired.
Step S102: traversing a plurality of sub-display regions in the display region.
Step S103: and judging whether the sub-display area contains the top left vertex A of the current display window. For example, whether the vertex a falls within the sub-display area may be determined according to the conditions (1) to (4) described above. If the vertex a falls within the sub-display area, step S104 is executed; if the vertex a does not fall within the sub-display area, step S105 is executed.
Step S104: the sub-display area is recorded. For example, information such as the name, position coordinates, and included vertex a of the sub display area is recorded.
Step S105: and judging whether the sub-display area contains the top right vertex B of the current display window. For example, whether the vertex B falls within the sub-display area may be determined according to the conditions (1) to (4) described above. If the vertex B falls within the sub-display area, executing step S106; if the vertex B does not fall within the sub-display area, step S107 is performed.
Step S106: the sub-display area is recorded. For example, information such as the name, position coordinates, and included vertex B of the sub display area is recorded.
Step S107: and judging whether the sub-display area contains the lower left vertex C of the current display window. For example, whether the vertex C falls within the sub-display area may be determined according to the conditions (1) to (4) described above. If the vertex C falls within the sub-display area, go to step S108; if the vertex C does not fall within the sub-display area, step S109 is executed.
Step S108: the sub-display area is recorded. For example, information such as the name, position coordinates, and contained vertex C of the sub display area is recorded.
Step S109: and judging whether the sub-display areas where the three vertexes are respectively located are found. If the sub-display areas where the three vertexes are respectively located are found, executing step S110 and step S111; if the sub-display areas where the three vertices are located are not found, step S102 is executed to continue traversing the next sub-display area.
Step S110: and calculating the size of the window area according to the sub-display areas where the three vertexes are respectively positioned. The size of the calculation window may be determined, for example, according to the method described in step S30.
Step S111: drawing the first display window and deleting the current display window according to the size of the window area.
Fig. 1E is a schematic diagram illustrating an effect of performing a zoom-in trigger operation on the current display window in fig. 1B according to at least one embodiment of the present disclosure.
As shown in fig. 1B and 1E, the display content of the current display window 11 is displayed by the enlarged first display window, and the first display window fills the sub-display area 4, the sub-display area 5, the sub-display area 7, and the sub-display area 8, so that the effect of visually enlarging the current display window 11 is achieved.
According to the operation method provided by the present disclosure, the display area is divided into a plurality of sub-display areas, and in response to the resizing trigger operation the size of the current display window is adjusted adaptively according to the sub-display areas where the feature points of the current display window are located. This avoids the problem that the position of the resized edge cannot be controlled accurately when the current display window is resized by, for example, dragging its edge with a finger, and thus achieves the technical effect of accurately resizing the current display window.
In some embodiments of the present disclosure, the current display window includes at least one element. The at least one element may for example comprise a display area for displaying the image and/or a title bar or the like. As shown in FIG. 1B, the current display window may include a current window display area 11-1 for displaying an image and a title bar 11-2.
For example, the current window display area displays an image from a signal source object bound to the current display window. In some embodiments of the present disclosure, the method may further include acquiring a signal source object bound to the current window display area; and displaying an image in the current window display area in response to receiving the image from the source object. For example, acquiring a signal source object bound to the current window display area is performed before step S10; and displaying an image in the current window display area in response to receiving the image from the source object. The method enables the user to automatically bind the current window display area with the signal source object, and improves the user experience and the flexibility of the operation method.
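The disclosure does not specify an API for the binding itself; a minimal hypothetical sketch could keep a map from windows to their bound signal source objects and repaint the window display area whenever a new image frame arrives:

```js
// Hypothetical sketch only: bind a signal source object to a display window and
// redraw the window display area on every received image. `drawImage(win, img)`
// is supplied by the caller (e.g. it could update a fabric.js Image element);
// the `onImage` callback on the source object is an assumed interface.
const sourceBindings = new Map(); // display window -> bound signal source object

function bindSignalSource(win, sourceObject, drawImage) {
  sourceBindings.set(win, sourceObject);
  sourceObject.onImage = (image) => drawImage(win, image);
}
```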
In some embodiments of the present disclosure, for example, a title bar is located on one side of the current window display area. For example, in the example shown in FIG. 1B, the title bar 11-2 is located above the current window display area 11-1.
In some embodiments of the present disclosure, image information of an image displayed in the current window display area may be displayed in the title bar. The image information may include, for example, the name of the image, the resolution of the image, and the like.
In some embodiments of the present disclosure, the at least one element may further comprise a control icon, the control icon being located in the title bar. The control icons may include, for example, an unlock icon, a sound control icon, a close icon, and the like. One skilled in the art can set a plurality of control icons in the title bar according to actual needs.
Fig. 1F and 1G illustrate schematic diagrams of performing a zoom-out trigger operation on a current display window according to at least one embodiment of the present disclosure.
As shown in fig. 1F, the current display window 301 overlaps with the entire area of the sub display area 4 (see fig. 1B, reference numeral "4" is covered), overlaps with a partial area of the sub display area 5, overlaps with a partial area of the sub display area 7, and overlaps with a partial area of the sub display area 8.
For example, when a zoom-out trigger operation (for example, a triple-click) is performed on the current display window 301, the sub-display area with the largest overlap area is selected, from the sub-display areas that overlap at least a partial region of the current display window 301, as the region of the first display window after the zoom-out trigger operation is performed on the current display window 301.
As shown in fig. 1G, the overlapping area of the current display window 301 and the sub-display region 4 is the largest, and therefore, the sub-display region 4 is used as the area occupied by the first display window 302 after the zoom-out trigger operation is performed on the current display window 301.
According to this embodiment, the size of the current display window is reduced adaptively according to the overlap area between the current display window and each sub-display area. This avoids the problem that the position of the reduced edge cannot be controlled accurately when the current display window is shrunk by, for example, dragging its edge with a finger, and thus achieves the technical effect of accurately reducing the size of the current display window.
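The selection of the sub-display area with the largest overlap, as in Figs. 1F and 1G, can be sketched as follows, again treating the current display window and the sub-display areas as plain rectangles; the function names are illustrative.

```js
// Sketch of the zoom-out behaviour: compute the overlap area between the
// current display window and every sub-display area and return the sub-display
// area with the largest overlap, which becomes the region of the first display
// window after the reduction trigger operation.
function overlapArea(win, area) {
  const w = Math.min(win.left + win.width, area.gridX + area.gridW) - Math.max(win.left, area.gridX);
  const h = Math.min(win.top + win.height, area.gridY + area.gridH) - Math.max(win.top, area.gridY);
  return w > 0 && h > 0 ? w * h : 0;
}

function adaptiveShrinkRegion(win, subAreas) {
  let best = null;
  let bestOverlap = 0;
  for (const area of subAreas) {
    const o = overlapArea(win, area);
    if (o > bestOverlap) { bestOverlap = o; best = area; }
  }
  return best; // sub-display area with the largest overlap, or null if none overlap
}
```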
Fig. 2A illustrates a flowchart of a method of step S40 in fig. 1A according to at least one embodiment of the present disclosure.
As shown in fig. 2A, step S40 may include step S41 and step S42.
Step S41: the position of each of the at least one element in the window area is obtained.
Step S42: a first display window is rendered in the window area according to the position of each of the at least one element in the window area.
With respect to step S41, for example, the position of the rectangular element of the first display window in the window region, the position of the title bar in the window region, the position of the control icon in the window region, and the like are acquired.
For step S42, the first display window can be rendered using the Group-related API (i.e., combined graphics) provided by fabric.js. The fabric.js API for creating a combined graphics object is: fabric.Group([shape1, shape2, shape3, …]), where each shape is one of the graphics in the combination. In the above embodiment, the combined graphic may include three icon elements, namely the unlock button icon lockImg, the sound control button icon soundImg, and the close button icon closeImg, as well as the rectangular element header of the title bar and the rectangular element body of the first display window. For example, each individual element is created with its position (x, y) given relative to the top-left vertex of the whole window region, i.e., the top-left vertex of the window region is taken as the origin.
Assume that the width and height of the whole window region are width and height, respectively, that the height of the title bar is headerHeight, that the coordinates of the unlock button icon lockImg relative to the origin are (iconLeft, iconTop), that the width and height of each icon are both iconWidth, and that the horizontal spacing between two adjacent icons is iconLeft. From these, the positions and sizes of the other elements can be derived, as shown in Table 1 below.
Table 1

| Element | Abscissa (x) | Ordinate (y) | Width | Height |
| --- | --- | --- | --- | --- |
| Unlock button lockImg | iconLeft | iconTop | iconWidth | iconWidth |
| Sound control button soundImg | 2 × iconLeft + iconWidth | iconTop | iconWidth | iconWidth |
| Close button closeImg | width − iconWidth − iconLeft | iconTop | iconWidth | iconWidth |
| Rectangular element header of the title bar | 0 | 0 | width | headerHeight |
| Rectangular element body of the first display window | 0 | 0 | width | height |
When creating the combined graph, attention needs to be paid to the arrangement order of the elements in the API, and the sub-elements with large sizes need to be placed in front. For example, the size of the rectangular element body of the first display window is larger than the size of the rectangular element header of the title bar, so the rectangular element body of the first display window is placed in front of the rectangular element header of the title bar to avoid that the title bar and the icons of the windows cannot be correctly displayed, and therefore the creating function of the frame is (it should be noted that, as follows, only for the simplified code, many other attributes of the frame object actually need to be set):
var winShape = new fabric.Group([body, header, lockImg, soundImg, closeImg]);
canvas.add(winShape);
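As a minimal, non-authoritative sketch of how the elements in Table 1 might be created and combined with fabric.js, the following code may be referred to; the concrete size values, the use of fabric.Rect placeholders for the icon images, and the canvas element id 'c' are illustrative assumptions rather than part of the above embodiment:

// Assumes fabric.js is loaded as `fabric` and a <canvas id="c"> element exists.
var canvas = new fabric.Canvas('c');

var width = 400, height = 300;                 // size of the window region (example values)
var headerHeight = 32;
var iconLeft = 8, iconTop = 8, iconWidth = 16;

// Positions follow Table 1; coordinates are relative to the top-left vertex of the window region.
var body = new fabric.Rect({ left: 0, top: 0, width: width, height: height, fill: '#222' });
var header = new fabric.Rect({ left: 0, top: 0, width: width, height: headerHeight, fill: '#444' });
var lockImg = new fabric.Rect({ left: iconLeft, top: iconTop, width: iconWidth, height: iconWidth, fill: '#888' });
var soundImg = new fabric.Rect({ left: 2 * iconLeft + iconWidth, top: iconTop, width: iconWidth, height: iconWidth, fill: '#888' });
var closeImg = new fabric.Rect({ left: width - iconWidth - iconLeft, top: iconTop, width: iconWidth, height: iconWidth, fill: '#888' });

// Larger elements first, so the title bar and icons are drawn on top of the window body.
var winShape = new fabric.Group([body, header, lockImg, soundImg, closeImg]);
canvas.add(winShape);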
fig. 2B illustrates a schematic diagram of a newly created first display window provided by at least one embodiment of the present disclosure.
As shown in fig. 2B, the newly created first display window is a blank frame. The newly created first display window includes a title bar 201 and a first window display area 202. After the blank window is obtained, a signal source needs to be bound to the blank window, and after the synchronization operation is started, the video picture is played in the first window display area 202 of the first display window.
For example, determining the signal source object bound by the current display window; and in response to receiving the image from the source object, displaying the image in the first window display area. In some embodiments of the present disclosure, for example, the signal source object may be a video, i.e., a multi-frame image, and the first window display region may sequentially display the multi-frame images.
As shown in fig. 2B, control icons, such as an unlock icon 211, a sound control icon 221, a close icon 231, and the like, are distributed on the title bar 201. For example, the abscissa of the unlock icon 211 is iconLeft in Table 1, the ordinate is iconTop in Table 1, the width of the first display window is width in Table 1, and the height is height in Table 1.
After binding the signal source to the blank frame and starting the synchronization operation, for example, elements may be added to the blank window; these elements may include, for example, a video picture videoImg (graphic element), a title of the first display window (text element), and image information (text element). The API provided by fabric.js for adding elements to an existing combined graphic may be used: add(shape). When elements are added dynamically, the coordinates (x, y) of each element default to coordinates relative to the center point of the combined graphic; as shown in fig. 2C, the abscissa title(x) and the ordinate title(y) of the title are coordinates relative to the center point of the combined graphic. The coordinates of the title of the first display window and of the video picture are thus as shown in Table 2 below.
Table 2
Element | Abscissa (x) | Ordinate (y) | Width | Height
Title | 0 - (width/2 - 4×iconLeft - 2×iconWidth) | 0 - (height/2 - iconTop) | not specified, determined by the content | not specified, determined by the font size
Video picture videoImg | 0 - width/2 | 0 - (height/2 - headerHeight) | width | height - headerHeight
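The following is a hedged sketch of this dynamic addition step, reusing the variable names of the previous sketch; winShape is assumed to be the fabric.Group created above, the title text and placeholder colors are illustrative, and a real implementation would render the video picture with fabric.Image and appropriate scaling of the bound signal source frames:

// Coordinates of dynamically added children are relative to the center point of the group (Table 2).
var titleText = new fabric.Text('camera 1', {
  left: 0 - (width / 2 - 4 * iconLeft - 2 * iconWidth),
  top: 0 - (height / 2 - iconTop),
  fontSize: 14,
  fill: '#fff'
});

// Placeholder rectangle standing in for the video picture videoImg.
var videoImg = new fabric.Rect({
  left: 0 - width / 2,
  top: 0 - (height / 2 - headerHeight),
  width: width,
  height: height - headerHeight,
  fill: '#000'
});

winShape.add(titleText);
winShape.add(videoImg);
canvas.requestRenderAll();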
In some embodiments of the disclosure, the at least one element further comprises a video display window, the video display window located in the title bar, the video display window for displaying video from another signal source object bound to the video display window.
For example, in the example of fig. 1B, the current display window 11 binds the source object 1, and displays video (i.e., multi-frame images) from the source object 1 in the first display window display region of the first display window, the video display window located in the title bar may be bound with another source object (e.g., the source object 2), thereby displaying video from the source object 2 in the video display window.
In this embodiment, in addition to the signal source object bound to the current display window, another signal source object is simultaneously displayed in the title bar in the form of a thumbnail, so that the user can observe two signal source objects at the same time, which facilitates comparing the two signal source objects.
Fig. 2C is a schematic diagram illustrating a first display window after a signal source object is bound according to at least one embodiment of the present disclosure.
As shown in fig. 2C, after the signal source is bound, the first display window is added with a video picture videoImg 203 and a title 204 of the first display window, and the video picture videoImg 203 is displayed by the first window display area 202.
It can be understood that, in addition to the first display window, other display windows (e.g., the current display window, the second display window, etc.) may also be created according to the method described in fig. 2A to 2C, which is not repeated here.
In some embodiments of the present disclosure, in addition to the steps S10 to S40 illustrated in fig. 1A, the operation method may further include, in response to acquiring a zoom trigger operation on the current display window, generating a second display window according to the zoom trigger operation so as to visually zoom the current display window. The zoom trigger operation on the current display window may be performed, for example, before step S10.
This method enables a user to manually adjust the size of the current display window, for example by pressing and dragging a control point on the edge of the current display window with a mouse or a finger to enlarge or reduce the window; after the mouse or finger is lifted, the window is redrawn according to the modified size to obtain a second display window. The method improves the flexibility and diversity of control over display windows: the user can select a suitable operation according to the requirements, and can conveniently adjust a current display window that does not satisfy the preset condition into a display window that satisfies the preset condition, so that the resizing trigger operation can then be conveniently performed on the display window that satisfies the preset condition.
In some embodiments of the present disclosure, for example, the zoom trigger operation includes a pull operation on a control point of the currently displayed window. In response to acquiring a zoom trigger operation on the current display window, generating a second display window to visually zoom the current display window according to the zoom trigger operation, including: moving a first edge of the current display window to a first edge position in response to a pulling operation, and taking the first edge position as a second edge position of a second edge corresponding to the first edge in the second display window; and generating a second display window according to the second edge position. This embodiment is described below exemplarily in fig. 3.
In other embodiments of the present disclosure, the display area is divided into a plurality of sub-display areas by a plurality of grid lines, the plurality of grid lines include grid lines extending in a first direction and grid lines extending in a second direction, and the zoom trigger operation includes a pulling operation performed on a first edge of the current display window, the first edge extending in the first direction. In response to acquiring the zoom trigger operation on the current display window, generating the second display window to visually zoom the current display window according to the zoom trigger operation includes: moving the first edge to a first edge position along the second direction in response to the pulling operation, and determining, according to the first edge position, a second edge position of a second edge corresponding to the first edge in the second display window, wherein the second edge position is the grid line extending along the first direction that is closest to the first edge position; and generating the second display window according to the second edge position and the edges of the current display window other than the first edge. Fig. 4C below exemplarily illustrates moving the first edge to the first edge position in the second direction in response to the pulling operation and determining, according to the first edge position, the second edge position of the second edge corresponding to the first edge in the second display window, the second edge position being the grid line extending in the first direction that is closest to the first edge position. For this embodiment, please refer to the description of fig. 4C below.
Fig. 3 illustrates a schematic diagram of a zoom trigger operation on a current display window according to at least one embodiment of the present disclosure.
As shown in fig. 3, a current display window 11 is included in the schematic diagram. The current display window 11 overlaps at least some of the sub display regions 4, 5, 7, and 8, respectively.
For example, the finger may be dragged by holding down a control point (e.g., vertex B) of the edge of the current display window 11, and the current display window 11 visually enlarges as the finger is dragged. If the finger is lifted at point F, the window is redrawn in the modified size so that the current display window 11 is visually enlarged to the second display window 12. For example, if the finger is lifted at the point F, the side of the current display window 11 located above is moved to the position FQ (i.e., the first side position), and thus the second side position where the second side of the second display window corresponding to the first side is located is the position FQ. The other three sides of the current display window 11 are moved in a manner similar to the above side, and finally the current display window 11 is visually enlarged to the second display window 12.
As described above, if the enlargement trigger operation is performed directly on the current display window 11, the window area of the enlarged current display window 11 is the window area OPNM. If the zoom trigger operation is first performed on the current display window 11 so that the current display window 11 is enlarged to the second display window 12, and the enlargement trigger operation is then performed on the second display window 12, the window area of the further enlarged second display window 12 is, for example, UVNM, so that the current display window 11 is finally enlarged to fill the sub-display area 1, the sub-display area 2, the sub-display area 4, the sub-display area 5, the sub-display area 7, and the sub-display area 8. With this method, a user can first perform the zoom trigger operation to scale the current display window to an appropriate size as needed, and then perform the resizing trigger operation on the second display window, so that the size of the current display window is finally adjusted to the size required by the user.
Similarly, the current display window may first be reduced, and the enlargement trigger operation may then be performed on the reduced window, so that the sub-display areas finally filled by the current display window meet the requirement.
For example, another display window 13 overlaps at least part of each of the sub-display region 9, the sub-display region 1, the sub-display region 3, the sub-display region 4, the sub-display region 6, and the sub-display region 7. If the user wants the display window 13 to fill the sub-display region 3, the sub-display region 4, the sub-display region 6, and the sub-display region 7, the zoom trigger operation may be performed on the display window 13 to reduce it to a second display window 14 that overlaps at least part of each of the sub-display region 3, the sub-display region 4, the sub-display region 6, and the sub-display region 7, and the enlargement trigger operation may then be performed on the reduced second display window 14 so that it is enlarged to fill the sub-display region 3, the sub-display region 4, the sub-display region 6, and the sub-display region 7.
With this operation method, the user does not need to drag the edge of the current display window strictly to the target position; the user only needs to zoom the current display window to the approximate vicinity of the target position and then perform the resizing trigger operation on the zoomed display window, and the edge of the current display window is thereby accurately controlled to the target position.
Fig. 4A illustrates a flowchart of a method for generating a second display window according to a zoom trigger operation according to at least one embodiment of the present disclosure.
As shown in FIG. 4A, the method may include steps S401 to S403. The method is applied, for example, in the case where the current display window includes at least one element including a title bar and at least one control icon, the at least one control icon being located in the title bar.
Step S401: and responding to the acquired zooming trigger operation of the current display window, and determining the size of the second display window according to the zooming operation.
Step S402: at least a portion of the control icons displayed in the second display window are selected from the at least one control icon according to the size of the second display window.
Step S403: and generating the second display window according to the size of the second display window and at least part of the control icon.
For example, when the current display window is reduced, the entries in the title bar need to be hidden or displayed according to the adjusted size of the second display window, so as to avoid a plurality of icons overlapping because the second display window is too small.
For example, for step S401, the size of the second display window may be, for example, the height and width of the second display window.
With step S402, for example, in a case where the size of the currently displayed window is controlled to be reduced to less than the size threshold in response to the zoom trigger operation, a part of the control icons displayed in the second display window is selected from the at least one control icon. For example, in a case where the size of the current display window is controlled to be equal to or larger than the size threshold in response to the zoom trigger operation, at least one control icon is displayed in the second display window.
For example, if the title bar of the second display window extends in the width direction, that is, the plurality of icons are arranged in order in the width direction, at least a part of the elements located in the second display window may be selected from the at least one element according to the width of the second display window. For another example, if the title bar of the second display window extends in the height direction, that is, the plurality of icons are sequentially arranged in the height direction, at least some of the elements located in the second display window may be selected from the at least one element according to the height of the second display window.
In some embodiments of the present disclosure, at least a portion of the elements located in the second display window may be selected according to a priority of the plurality of elements. For example, the close icon has a higher priority than the unlock icon, which has a higher priority than the sound control icon.
For example, at least some of the elements in the second display window are selected from the at least one element according to the rule that the close icon has a higher priority than the unlock icon and the unlock icon has a higher priority than the sound control icon, so as to realize adaptive scaling of the window. In the following example, a plurality of icons are sequentially arranged in the width direction of the second display window (iconLeft denotes the horizontal distance between an icon and the frame of the second display window, and iconWidth denotes the width of an icon).
The minimum width to which the window can be zoomed is 2×iconLeft + iconWidth, that is, the minimum width of the display window is 2×iconLeft + iconWidth. If the width of the second display window is greater than or equal to 2×iconLeft + iconWidth and less than 3×iconLeft + 2×iconWidth, the second display window contains only 1 icon, which may be, for example, the close icon. If the width of the second display window is greater than or equal to 3×iconLeft + 2×iconWidth and less than 4×iconLeft + 3×iconWidth, the second display window may include two icons, for example the unlock icon and the close icon. If the width of the second display window is greater than or equal to 4×iconLeft + 3×iconWidth and less than 6×iconLeft + 3×iconWidth + titleWidth, the second display window may include three icons, namely the unlock icon, the sound control icon, and the close icon. If the width of the second display window is greater than or equal to 6×iconLeft + 3×iconWidth + titleWidth, at least 4 elements may be included, for example the three icons and the title.
When the window is redrawn after zooming, it is necessary to determine which icons are displayed in the title bar by comparing the width of the second display window with the width of 2 × iconLeft + iconWidth, 3 × iconLeft +2 × iconWidth, 4 × iconLeft +3 × iconWidth, and 6 × iconLeft +3 × iconWidth + titleWidth. In this embodiment, the size threshold may be, for example, 6 × iconLeft +3 × iconWidth + titleWidth, each control icon is displayed in the second display window when the width of the second display window is equal to or greater than 6 × iconLeft +3 × iconWidth + titleWidth, and the control icon displayed in the second display window is selected in accordance with the priority described above when the width of the second display window is less than 6 × iconLeft +3 × iconWidth + titleWidth.
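A minimal sketch of this threshold comparison is given below, assuming the variable names used above; the helper function name selectTitleBarIcons is illustrative and not part of the embodiment:

// Decide which title-bar elements to show for a given second-display-window width w,
// using the priority close > unlock > sound described above.
function selectTitleBarIcons(w, iconLeft, iconWidth, titleWidth) {
  if (w >= 6 * iconLeft + 3 * iconWidth + titleWidth) {
    return ['close', 'unlock', 'sound', 'title'];   // all icons plus the title text
  }
  if (w >= 4 * iconLeft + 3 * iconWidth) {
    return ['close', 'unlock', 'sound'];
  }
  if (w >= 3 * iconLeft + 2 * iconWidth) {
    return ['close', 'unlock'];
  }
  if (w >= 2 * iconLeft + iconWidth) {
    return ['close'];
  }
  return [];   // below the minimum window width; should not occur after clamping
}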
For step S403, the second display window is rendered according to the size of the second display window and the position of each of at least some of the elements. For example, the selected part of the control icons is displayed in the title bar to generate the second display window.
Fig. 4B illustrates a schematic diagram of four second display windows provided by at least one embodiment of the present disclosure.
As shown in fig. 4B (1), the width of the second display window 410 is greater than or equal to 2 × iconLeft + iconWidth and less than 3 × iconLeft +2 × iconWidth, and the title bar in the second display window 410 displays only the close icon.
As shown in fig. 4B (2), the width of the second display window 420 is greater than or equal to 3 × iconLeft +2 × iconWidth and less than 4 × iconLeft +3 × iconWidth, and the title bar in the second display window 420 displays the close icon and the unlock icon.
As shown in fig. 4B (3), the width of the second display window 430 is greater than or equal to 4 × iconLeft +3 × iconWidth and less than 6 × iconLeft +3 × iconWidth + titleWidth, and the title bar displays the close icon, the unlock icon, and the sound control icon in the second display window 430.
As shown in (4) of fig. 4B, the width of the second display window 440 is greater than or equal to 6×iconLeft + 3×iconWidth + titleWidth, and the title bar in the second display window 440 displays the close icon, the unlock icon, and the sound control icon, and also displays the title "camera 1".
It can be understood that, if the size adjustment triggering operation is a zoom-out triggering operation, the method for generating the first display window in response to the zoom-out triggering operation is similar to the method for generating the second display window according to the zoom-out triggering operation described in the foregoing fig. 4A and 4B, and the icon displayed in the first display window may also be selected according to the priority of the icon, which is not described in detail in this disclosure.
In some embodiments of the present disclosure, the display area is divided into a plurality of sub-display areas by a plurality of grid lines including a grid line extending in the first direction and a grid line extending in the second direction. The zooming trigger operation comprises a pulling operation performed on a first edge of the current display window, wherein the first edge extends along a first direction. Step S401 includes: and moving the first edge to a first edge position along the second direction in response to the pulling operation, and determining a second edge position of a second edge corresponding to the first edge in the second display window according to the first edge position, wherein the second edge position is a grid line which is closest to the first edge position and extends along the first direction.
As shown in FIG. 1B and FIG. 4C below, a plurality of grid lines divide the display area 1000 into sub-display areas 1-9. The plurality of grid lines include grid lines extending in the OP direction and grid lines extending in the MO direction.
Fig. 4C illustrates a schematic diagram of step S401 in fig. 4A provided by at least one embodiment of the present disclosure.
As shown in fig. 4C, for example, the zoom trigger operation is a pull operation performed on the side S above the currently displayed window 11, the side S extending in the OP direction. Side S is an example of a first side.
In response to the pulling operation, the first side S is moved in the second direction (i.e., the MO direction) to the first side position (e.g., the S′ position). According to the first side position, the grid line extending in the first direction that is closest to the S′ position, i.e., grid line R1R2, is determined as the second side position where the second side corresponding to the first side (i.e., the upper side of the second display window) is located. That is, in response to the pulling operation performed on the side S above the current display window 11, the first side S is moved to the S′ position, and the second display window is the region filled with oblique lines in fig. 4C.
In this embodiment, except that the first edge is replaced by the second edge located at the second edge position, the positions and sizes of the edges of the current display window other than the first edge remain unchanged in the second display window. As shown in fig. 4C, except for the side S above the current display window 11, the sizes and positions of the other three sides of the current display window 11 remain unchanged in the second display window. That is, compared with the current display window 11, the upper side S′ of the second display window is located at a different position from the upper side S, while the other three sides of the second display window coincide with the other three sides of the current display window 11.
This embodiment realizes accurate control of a single edge: the user only needs to pull the edge to an approximate position, and the edge is adaptively aligned with the nearest grid line, so that accurate control is achieved and the user can control the display window more freely and in more diverse ways.
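A hedged sketch of this snapping step follows, assuming gridYs is an array of coordinates of the grid lines extending in the first direction; the function and variable names are illustrative:

// Snap the released first-edge position to the closest grid line extending in the first direction.
function snapEdgeToGrid(draggedY, gridYs) {
  var best = gridYs[0];
  for (var i = 1; i < gridYs.length; i++) {
    if (Math.abs(gridYs[i] - draggedY) < Math.abs(best - draggedY)) {
      best = gridYs[i];
    }
  }
  return best;   // second edge position used for the second display window
}

// Example: with grid lines at y = 0, 200, 400, 600, an edge released at y = 215 snaps to 200.
// snapEdgeToGrid(215, [0, 200, 400, 600]) === 200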
Fig. 5A illustrates a flow chart of another method of operation provided by at least one embodiment of the present disclosure.
As shown in FIG. 5A, the operation method may further include a step S50 and a step S60 on the basis of the steps S10-S40 shown in FIG. 1A.
Step S50: a canvas is generated in a page container of a display page.
Step S60: the canvas is divided to obtain a plurality of sub-display areas.
With respect to step S50, in some embodiments of the present disclosure, a canvas is generated in the page container of the display page, for example, using canvas techniques.
For step S60, grid lines are drawn on the canvas to get a plurality of sub-display regions. For example, the canvas is used for simulating a display screen of the terminal device, and the display screen of the terminal device includes a plurality of sub-display screens, so that the grid lines can be drawn on the canvas according to the arrangement of the sub-display screens to obtain a plurality of sub-display areas.
Therefore, in some embodiments of the present disclosure, step S60 may include: obtaining arrangement information of the plurality of sub-display screens; and dividing the canvas according to the arrangement information of the sub-display screens to obtain the plurality of sub-display areas, so that the arrangement of the sub-display areas matches the arrangement of the sub-display screens. For example, the arrangement of the plurality of sub-display regions is the same as the arrangement of the plurality of sub-display screens interacting with the display page.
For example, the arrangement information of the plurality of sub-display screens is acquired by communicating with the terminal device, or the input arrangement information is received. For example, the arrangement information indicates that 8 screens are arranged in a 4 × 2 array, and thus 4 grid lines are drawn in the canvas, one grid line along the width direction of the canvas and three grid lines along the height direction of the canvas.
For example, after the canvas is divided into 8 rectangles using the grid lines, 8 rectangular objects are drawn in the 8 rectangles as container elements where the display window is placed.
For example, the total width and height of the sub-display screens are width and height, respectively, and the numbers of rows and columns of sub-display screens are row and column, respectively. The position coordinates (x, y) and the width and height (w, h) of each rectangle on the canvas (see fig. 5C) can then be calculated by the following formulas, where index is the index of each rectangle, counted from 0:
x = (index % column) × (width / column)
y = floor(index / column) × (height / row)
w = width / column
h = height / row
After obtaining the position coordinates (x, y) and the width and height (w, h) of each rectangle, the rectangle object can be drawn using the API provided by fabric.js: rect = new fabric.Rect({ left, top, width, height, ...options }). For example, the background color of the rectangle object is set to transparent, so that the rectangle object is not visible to the user and simply serves as a container in which a display window is placed.
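A minimal sketch combining the formulas above with fabric.js follows; the 4 × 2 arrangement and the variable names reuse the example values of this section and are illustrative:

// Draw one transparent container rectangle per sub-display screen (4 columns × 2 rows example).
var row = 2, column = 4;
var width = canvasWidth, height = canvasHeight;   // canvas size computed in step S52

var containers = [];
for (var index = 0; index < row * column; index++) {
  var rect = new fabric.Rect({
    left: (index % column) * (width / column),
    top: Math.floor(index / column) * (height / row),
    width: width / column,
    height: height / row,
    fill: 'transparent',       // invisible; only used as a container for display windows
    selectable: false
  });
  containers.push(rect);
  canvas.add(rect);
}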
Fig. 5B illustrates a flowchart of a method of step S50 in fig. 5A according to at least one embodiment of the present disclosure.
As shown in FIG. 5B, the step S50 may include steps S51-S53.
Step S51: the size of the display screen is acquired.
Step S52: the size of the canvas is determined according to the size of the display screen.
Step S53: the canvas is generated in the page container according to the size of the canvas.
For step S51, for example, the server interface is called to obtain the size of the display screen of the terminal device. For example, the width and height of the display screen are screen width and screen height, respectively.
For step S52, for example, in response to the width of the display screen being greater than or equal to the height of the display screen, the width of the canvas is set equal to the width of the page container, and the height of the canvas is determined according to the aspect ratio of the display screen; in response to the width of the display screen being less than the height of the display screen, the height of the canvas is set equal to the height of the page container, and the width of the canvas is determined according to the aspect ratio of the display screen.
If the display screen of the terminal device is the combination of the plurality of sub-display screens, the width of the display screen is the width obtained after the combination of the plurality of sub-display screens, and the height of the display screen is the height obtained after the combination of the plurality of sub-display screens.
In some embodiments of the present disclosure, the width and height of the page container may be obtained through the standard API getBoundingClientRect provided by the browser. getBoundingClientRect returns the size of an element and its position relative to the browser viewport.
For example, the page container where the canvas is located is container, and the width containerWidth and the height containerHeight of the page container can be obtained as follows:
containerWidth=container.getBoundingClientRect().width;
containerHeight=container.getBoundingClientRect().height;
For example, step S52 is described with the example that the width of the display screen is screenWidth and the height of the display screen is screenHeight. If screenWidth / screenHeight >= 1, then:
canvasW (i.e., the width of the canvas) = containerWidth;
canvasH (i.e., the height of the canvas) = (canvasW × screenHeight) / screenWidth.
That is, the width of the canvas is equal to the width of the page container (i.e., the browser viewport), and the height of the canvas is determined according to the aspect ratio of the display screen.
The canvas height canvasH at this time may be greater than the page container height containerHeight, so the scaling factor zoom needs to be calculated. If canvasH > containerHeight, then zoom = (containerHeight / canvasH) × ratio; if canvasH <= containerHeight, then zoom = ratio. Here, ratio is a scaling coefficient (for example, 0.9) used to ensure that the size of the canvas is smaller than the size of the page container.
If screenWidth / screenHeight < 1, then:
canvasH=containerHeight;
canvasW=(screenWidth×canvasH)/screenHeight。
That is, the height of the canvas is equal to the height of the page container (i.e., the browser viewport), and the width of the canvas is determined according to the aspect ratio of the display screen.
At this point, canvasW may be larger than the width containerWidth of the page container, and the scaling factor zoom needs to be calculated.
If canvasW > containerWidth, then zoom = (containerWidth / canvasW) × ratio; if canvasW <= containerWidth, then zoom = ratio.
Finally, the width and the height of the canvas are calculated as follows:
canvasWidth=canvasW×zoom;
canvasHeight = canvasH×zoom。
in some embodiments of the present disclosure, the method of operation may further comprise adjusting the position of the canvas such that the center of the canvas is located at the center of the page container.
For example, the canvas position calculated according to the above step S52 is located at the upper left corner of the page container, and the canvas needs to be translated horizontally and vertically by a certain distance to be placed at the center of the page container. The moving distances in the horizontal direction and the vertical direction can be calculated according to the following formula.
moveX=containerWidth/2–canvasWidth/2;
moveY=containerHeight/2–canvasHeight/2。
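The following is a minimal, non-authoritative sketch of steps S51 to S53 together with the centering adjustment, under the assumption that the canvas sits inside the page container and that the element and parameter names (container, ratio, etc.) are as used above:

// Size the canvas to the page container while keeping the display screen's aspect ratio,
// then compute the translation that places the canvas center at the page container center.
function fitCanvasToContainer(container, screenWidth, screenHeight, ratio) {
  var containerWidth = container.getBoundingClientRect().width;
  var containerHeight = container.getBoundingClientRect().height;
  var canvasW, canvasH, zoom;

  if (screenWidth / screenHeight >= 1) {
    canvasW = containerWidth;
    canvasH = (canvasW * screenHeight) / screenWidth;
    zoom = canvasH > containerHeight ? (containerHeight / canvasH) * ratio : ratio;
  } else {
    canvasH = containerHeight;
    canvasW = (screenWidth * canvasH) / screenHeight;
    zoom = canvasW > containerWidth ? (containerWidth / canvasW) * ratio : ratio;
  }

  var canvasWidth = canvasW * zoom;
  var canvasHeight = canvasH * zoom;
  var moveX = containerWidth / 2 - canvasWidth / 2;
  var moveY = containerHeight / 2 - canvasHeight / 2;
  return { canvasWidth: canvasWidth, canvasHeight: canvasHeight, moveX: moveX, moveY: moveY };
}

// Example: an 8-screen wall of 7680 × 2160 pixels, with ratio = 0.9 as suggested above.
// var size = fitCanvasToContainer(document.getElementById('container'), 7680, 2160, 0.9);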
Fig. 5C is a schematic diagram illustrating an effect of a plurality of sub-display regions according to at least one embodiment of the present disclosure.
As shown in FIG. 5C, a page container 510 in a page is displayed, the page container 510 having a canvas 521 painted thereon. The size of the canvas 521 may be as per step S52 in FIG. 5B, with the center of the canvas 521 at the center of the page container 510. Canvas 521 is divided into 8 rectangles by grid lines, and adding rectangular objects in each rectangle results in 8 sub-display areas.
As shown in FIG. 5A, the operation method may further include a step S70 on the basis of the steps S10-S60.
Step S70: binding a trigger event for the canvas, the trigger event including at least a resize trigger operation.
fabric.js provides a rich event system, so that various events and their handling functions can be bound to the canvas or to objects on the canvas. For example, the event handling function handler for a press event can be bound, for example, via canvas.on('mouse:down', handler); the event handling function handler for the double-click event can be bound via canvas.on('mouse:dblclick', handler); the event handling function handler for a lift event can be bound, for example, via canvas.on('mouse:up', handler); and an event handling function can likewise be bound for the event of placing an object onto the canvas. For example, in some embodiments of the present disclosure, the zoom-in trigger operation is a double-click event performed on the canvas.
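A brief sketch of such bindings is given below, assuming a fabric.Canvas instance named canvas; the handler bodies and the helper name resizeWindowToGrid are placeholders:

// Bind trigger events to the canvas; the double-click stands in for the
// resizing (e.g., enlargement) trigger operation described above.
canvas.on('mouse:down', function (opt) {
  // press: remember which display window (group), if any, was hit
  var pressedTarget = opt.target;
});

canvas.on('mouse:dblclick', function (opt) {
  if (opt.target) {
    // resizing trigger operation on the double-clicked display window
    // resizeWindowToGrid(opt.target);   // illustrative helper, not defined here
  }
});

canvas.on('mouse:up', function () {
  // lift: finish a drag or pull operation, then snap / redraw as needed
});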
In some embodiments of the present disclosure, the display page includes a plurality of signal source objects, and the triggering event further includes: a placement triggering operation of placing a target object selected from a plurality of signal source objects, the operation method further comprising: responding to a placement trigger operation for placing a target object, and creating an object display window in a sub display area corresponding to the placement trigger operation; and binding the object display window with the target object to display the image from the target object in the object display window. For example, the placement trigger operation may be performed before step S10 of fig. 1A, or may also be performed after step S40.
As shown in fig. 1B, 1E, and 3, the display page 100 may include a signal source list 2000 in addition to the display area 1000 formed by the page container, and the signal source list 2000 may sequentially arrange a plurality of signal source objects. For example, each signal source object is a video source. As shown in fig. 1B, the signal source list 2000 may be located on one side, e.g., the right side, of the display area 1000. The trigger event further includes a placement trigger operation to place a target object selected from the plurality of signal source objects. For example, a target object is selected from a plurality of signal source objects, and the target object is placed at a certain position of the display area.
For example, when a user selects a video source from the signal source list 2000 on the right and drags it to a sub-display area in the canvas, an object display window is automatically added in that grid. Each object display window includes, for example, a title bar, control icons (including unlock, sound control, and close), a window display area, and the like, and can be drawn using the combined-graphics API provided by fabric.js. The drawing of the object display window is performed, for example, according to the method described above with reference to fig. 2A. The position parameters of the object display window are determined by the grid in which the dragged object is placed. Since a rectangle has been drawn in advance in each grid, in the placement trigger operation on the canvas, the position parameters (x, y, w, h) of the rectangle to which the drop position belongs can be determined according to the parameters of the placement trigger operation, so that the object window graphic is drawn in that sub-display area according to the position parameters.
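A hedged sketch of determining the containing rectangle from the drop coordinates follows; containers is the array of sub-display-area rectangles drawn earlier, and the helper name findContainingRect is illustrative:

// Find the sub-display-area rectangle containing the drop point of the placement trigger
// operation, so the object display window can be drawn with that rectangle's (x, y, w, h).
function findContainingRect(containers, dropX, dropY) {
  for (var i = 0; i < containers.length; i++) {
    var r = containers[i];
    if (dropX >= r.left && dropX < r.left + r.width &&
        dropY >= r.top && dropY < r.top + r.height) {
      return { x: r.left, y: r.top, w: r.width, h: r.height };
    }
  }
  return null;   // dropped outside the display area
}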
Fig. 6 is a diagram illustrating a method for selecting a signal source object from a signal source list 2000 and placing the signal source object in a certain sub-display area according to at least one embodiment of the present disclosure.
As shown in fig. 6, the user selects the signal source object 2 from the signal source list 2000, and moves the signal source object 2 to the position where the sub-display area 9 is located, that is, the user performs the placement trigger operation on the signal source object 2. Some embodiments of the present disclosure provide an operation method that, in response to the placement trigger operation, adds an object display window 61 in the sub display area 9 and displays a screen of the signal source object 2 in the object display window 61.
In some embodiments of the present disclosure, the triggering event further comprises: and carrying out dragging triggering operation of dragging the display window in the display page. The drag trigger operation includes: dragging the object display window until the object display window is dragged to a target position while the object display window is selected; and releasing control of the object display window in response to the object display window being dragged to the target position.
The method of operation further comprises: and responding to the acquired dragging trigger operation of the object display window, and visually moving the object display window to a target position corresponding to the dragging trigger operation.
For example, in response to the object display window satisfying a preset condition at the target position, the object display window may be the current display window.
The object display window is an example of a display window in a display page, and a user can perform dragging triggering operation on any display window in the display page.
For example, drag trigger operations include: dragging the object display window until the object display window is dragged to the target position while the object display window is selected; and releasing control of the object display window in response to the object display window being dragged to the target position.
For example, in the example shown in fig. 6, the user may perform a drag trigger operation on the object display window 61, for example, drag the object display window 61 to the target position T. For example, the display window is redrawn at the target position T, the redrawn display window having the same content as the object display window 61, thereby visually causing the object display window 61 to be dragged to the target position T. For example, the redrawn display window at the target position T covers the sub display area 3, the sub display area 4, the sub display area 6, and the sub display area 7, respectively. Since the redrawn display window at the target position T satisfies the preset condition (for example, the current display window overlaps at least a part of at least two sub-display regions of the plurality of sub-display regions, respectively), the redrawn display window at the target position T may be used as the current display window, that is, the user may perform the resizing trigger operation described in fig. 1A above on the display window at the target position T.
FIGS. 7A-7D are schematic diagrams illustrating a display page provided by at least one embodiment of the disclosure. The operation method provided by the present disclosure is further described with reference to fig. 7A to 7D.
As shown in fig. 7A, the display page is a command center visualization control platform. For example, the command center visual control platform is used for monitoring and managing a smart city.
The display page 700 of the command center visualization control platform includes 4 page areas, namely page areas A1 to A4. As shown in fig. 7A, page area A1 is the display area, which is divided into 8 sub-display areas (sub-display areas 701 to 708) to simulate the tiling of 8 display screens (each individual display screen is identified by thicker lines in fig. 7A). Page area A2 is a user selection area, in which the user may choose to further split a display screen, dividing a single display screen into 1, 4, 9, or 16 sub-display units for display. For example, if the user selects 4, the single display screen is further divided into 4 sub-display units (as shown by the dotted lines in fig. 7A). Page area A3 shows the signal source object list; the user can see all the signal source objects as well as a real-time preview of their pictures. Page area A4 is a list of scenes saved after the user finishes editing page area A1, and supports one-key recall.
In other embodiments of the present disclosure, the arrangement strategy of the plurality of sub-display areas in the display area may be determined according to the actual arrangement of the plurality of sub-display screens, so as to simulate the real tiled display screen and restore its display image on the touch screen, thereby preventing the image from overflowing the display area. For example, a real tiled display screen includes 11 sub-display screens: 1 sub-display screen with the largest size, 8 sub-display screens with the smallest size, and 2 sub-display screens of medium size. The two medium-sized sub-display screens are arranged vertically in the shape of the Chinese character 'ri' (日), every four of the 8 smallest sub-display screens form a display screen group in the shape of the Chinese character 'tian' (田), and the two such groups are arranged vertically. The largest sub-display screen and the two 'ri'-shaped sub-display screens are respectively located on the two sides of the whole formed by the 8 smallest sub-display screens. Accordingly, the display area corresponding to the whole formed by the sub-display areas 701 to 704 in fig. 7A can be used to create the largest display window corresponding to the largest sub-display screen, the sub-display areas 705 and 706 can each be divided into 4 sub-display units corresponding to the 8 smallest sub-display screens, and the sub-display area 707 and the sub-display area 708 can respectively correspond to the two medium-sized sub-display screens.
Those skilled in the art can divide the display area according to the arrangement of the real tiled screen, so as to avoid the image picture overflowing the display area or the display area being underutilized because the image picture is too small, thereby meeting various display requirements of users.
As shown in fig. 7B, the video source 71 in a certain signal source object list in the page area A3 is dragged and dropped to the page area a 1. In some embodiments of the present disclosure, a display window 72 may be created directly at the released position of the video source 71 in the page area a1 for displaying the picture of the video source 71. In other embodiments of the present disclosure, a display window may be created in a certain sub-display area of the page area a1 according to the released position, so as to display the dragged signal source in the sub-display area corresponding to the large screen. The above-described operation of creating a display window in the page area a1 is simply referred to as "windowing" hereinafter.
In some approaches other than those described in the present disclosure, if it is desired to display a video source such as the signal source object 71 in a larger area, the edge of the "windowed" area in page area A1 may be clicked and dragged, for example, to zoom the display window 72 in or out; if it is desired to align the display window 72 with the auxiliary lines (i.e., the grid lines in the screen), the display window 72 may be dragged to the vicinity of the respective auxiliary lines so that it automatically snaps to them. However, this edge-snapping method is inconvenient to operate; especially when operating on a Pad, it is often difficult to precisely control the position of the edge of the display window because the touching finger occludes the view and the operation precision on a small screen is limited, so the windowing operation is less accurate.
The operation method provided by the present disclosure, for example as shown in fig. 1A, does not require dragging the edge of the display window. Instead, the display window is first moved to the area that it is intended to occupy so that it covers the relevant auxiliary lines, and the display window is then double-clicked, whereupon the display window automatically fills the area delimited by those auxiliary lines. This interaction is more convenient to operate on a Pad and is less error-prone.
For example, a drag-trigger operation is performed on the display window 72 shown in fig. 7B to visually move the display window 72 to an area where the display window needs to be filled. The drag trigger operation may be, for example, first clicking on the display window 72 and then dragging the display window 72.
As shown in FIG. 7C, visually, the display window 72 is moved to the region 722 by the region 721 (i.e., the region formed by the sub display units S4, S5, S7, and S8), covering at least a partial region of each of the sub display units S1 to S9, respectively, after the display window 72 is moved to the region 722. Thereafter, a zoom-in trigger operation may be performed on the display window located in the region 722.
As shown in FIGS. 7D and 7C, after the zoom-in trigger operation is performed on the display window located in the region 722, the retrieved first display window fills the sub-display units S1-S9.
fabric.js, an open-source canvas-based drawing library, is used in the above embodiments to implement the various operations on display windows. Those skilled in the art may adopt other ways of implementing interactive operations such as drawing and controlling display windows. For example, 2D planar graphics rendering and interaction can also be implemented on web pages using the native canvas and Scalable Vector Graphics (SVG) technologies.
The native canvas also provides various drawing APIs, such as drawing rectangles, polygons, circles, and the like, but it does not provide built-in event handling for the drawn shapes. To implement operations such as moving and zooming a rectangle and to obtain its size and position after the operation, the developer needs to implement the relevant logic and calculations, and the native canvas does not provide support for combining graphics.
SVG is a language for describing 2D graphics using XML. Its greatest advantages over canvas are that each rendered graphic is treated as an object to which an associated event handler may be added, and that SVG images do not suffer loss of quality when magnified or resized. However, it is not as convenient as canvas for saving images in formats such as png or jpg, a large amount of implementation logic and calculation still needs to be written for visualization operations such as dragging and panning, and it is not as convenient as fabric.js.
For the case where the resizing trigger operation is a zoom-out trigger operation, the method of operation is similar to the zoom-in trigger operation described above. For example, the current display window covers four sub-display areas of the plurality of sub-display areas, and in response to a zoom-out trigger operation, the window area occupied in the display page after the current display window is resized is determined to be three sub-display areas of the four sub-display areas, thereby generating a first display window among the three sub-display areas, and replacing the current display window with the first display window to visually zoom out the size of the current display window.
Fig. 8 illustrates a schematic block diagram of an operating device 800 provided by at least one embodiment of the present disclosure.
For example, as shown in fig. 8, the operation device 800 includes a trigger operation acquisition unit 810, a region determination unit 820, and a window generation unit 830.
The trigger operation obtaining unit 810 is configured to obtain a resizing trigger operation performed on the current display window in response to the current display window respectively overlapping at least a partial region of at least two sub-display regions of the plurality of sub-display regions.
The trigger operation acquisition unit 810 may perform, for example, step S110 described in fig. 1C.
The area determination unit 820 is configured to determine, in response to the resizing trigger operation, a window area occupied in the display page after the resizing of the current display window, where the window area includes at least a portion of the at least two sub-display areas.
The region determining unit 820 may perform step S120 described in fig. 1C, for example.
The window generating unit 830 is configured to generate a first display window in the window region, and replace the current display window by the first display window to visually adjust the current display window.
The window generating unit 830 may perform, for example, step S130 described in fig. 1C.
For example, the trigger operation acquisition unit 810, the area determination unit 820, and the window generation unit 830 may be hardware, software, firmware, and any feasible combination thereof. For example, the trigger operation acquiring unit 810, the area determining unit 820 and the window generating unit 830 may be dedicated or general circuits, chips or devices, and may also be a combination of a processor and a memory. The embodiments of the present disclosure are not limited in this regard to the specific implementation forms of the above units.
The device can accurately control the position of the edge of the first display window after the current display window is adjusted in size, and the technical problem that the edge of the first display window cannot be accurately controlled due to external reasons such as limb shielding is solved.
It should be noted that, in the embodiment of the present disclosure, each unit of the operation device 800 corresponds to each step of the operation method, and for the specific function of the operation device 800, reference may be made to the related description about the operation method, which is not described herein again. The components and configuration of the operator 800 shown in fig. 8 are exemplary only, and not limiting, and the operator 800 may include other components and configurations as desired.
At least one embodiment of the present disclosure also provides an electronic device comprising a processor and a memory, the memory including one or more computer program instructions. One or more computer program instructions are stored in the memory and configured to be executed by the processor, the one or more computer program instructions comprising instructions for implementing the method of operation described above. The electronic equipment can accurately control the position of the edge of the first display window after the current display window is adjusted in size, and the technical problem that the edge of the first display window cannot be accurately controlled due to external reasons such as limb shielding is solved.
Fig. 9 is a schematic block diagram of an electronic device provided in some embodiments of the present disclosure. As shown in fig. 9, the electronic device 900 includes a processor 910 and a memory 920. The memory 920 is used to store non-transitory computer-readable instructions (e.g., one or more computer program modules). The processor 910 is configured to execute non-transitory computer readable instructions, which when executed by the processor 910 may perform one or more of the steps of the above-described method of operation. The memory 920 and the processor 910 may be interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, the processor 910 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or other form of processing unit having data processing capabilities and/or program execution capabilities. For example, the Central Processing Unit (CPU) may be an X86 or ARM architecture or the like. The processor 910 may be a general-purpose processor or a special-purpose processor that may control other components in the electronic device 900 to perform desired functions.
For example, memory 920 may include any combination of one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory can include, for example, Random Access Memory (RAM), cache memory (or the like). The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), USB memory, flash memory, and the like. One or more computer program modules may be stored on the computer-readable storage medium and executed by processor 910 to implement various functions of electronic device 900. Various applications and various data, as well as various data used and/or generated by the applications, and the like, may also be stored in the computer-readable storage medium.
It should be noted that, in the embodiment of the present disclosure, reference may be made to the description about the operation method in the foregoing for specific functions and technical effects of the electronic device 900, and details are not described herein again.
Fig. 10 is a schematic block diagram of another electronic device provided by some embodiments of the present disclosure. The electronic device 1000 is, for example, suitable for implementing the operation methods provided by the embodiments of the present disclosure. The electronic device 1000 may be a terminal device or the like. It should be noted that the electronic device 1000 shown in fig. 10 is only one example, and does not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, electronic device 1000 may include a processing means (e.g., central processing unit, graphics processor, etc.) 1010 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 1020 or a program loaded from storage device 1080 into a Random Access Memory (RAM) 1030. In the RAM1030, various programs and data necessary for the operation of the electronic apparatus 1000 are also stored. The processing device 1010, the ROM 1020, and the RAM1030 are connected to each other by a bus 1040. An input/output (I/O) interface 1050 is also connected to bus 1040.
Generally, the following devices may be connected to the I/O interface 1050: input devices 1060 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, or the like; an output device 1070 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, or the like; storage 1080 including, for example, tape, hard disk, etc.; and a communication device 1090. The communication means 1090 may allow the electronic device 1000 to communicate wirelessly or by wire with other electronic devices to exchange data. While fig. 10 illustrates an electronic device 1000 having various means, it is to be understood that not all illustrated means are required to be implemented or provided, and that the electronic device 1000 may alternatively be implemented or provided with more or less means.
For example, the above-described operation method may be implemented as a computer software program according to an embodiment of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program comprising program code for performing the above-described method of operation. In such embodiments, the computer program may be downloaded and installed from a network through communication device 1090, or from storage device 1080, or from ROM 1020. When executed by the processing device 1010, the computer program may implement the functions defined in the operation method provided by the embodiments of the present disclosure.
At least one embodiment of the present disclosure also provides a computer-readable storage medium for non-transitory storage of computer-readable instructions, which when executed by a computer, may implement the above-described method of operation. By utilizing the computer readable storage medium, the position of the edge of the first display window after the current display window is adjusted in size can be accurately controlled, and the technical problem that the edge of the first display window cannot be accurately controlled due to external reasons such as limb occlusion and the like is solved.
Fig. 11 is a schematic diagram of a storage medium according to some embodiments of the present disclosure. As shown in fig. 11, storage medium 1100 is used to store non-transitory computer readable instructions 1110. For example, the non-transitory computer readable instructions 1110, when executed by a computer, may perform one or more steps according to the method of operation described above.
The storage medium 1100 may be applied to the electronic apparatus 900 described above, for example. For example, the storage medium 1100 may be the memory 920 in the electronic device 900 shown in fig. 9. For example, the related description about the storage medium 1100 may refer to the corresponding description of the memory 920 in the electronic device 900 shown in fig. 9, and is not repeated here.
The following points need to be explained:
(1) the drawings of the embodiments of the disclosure only relate to the structures related to the embodiments of the disclosure, and other structures can refer to common designs.
(2) Without conflict, embodiments of the present disclosure and features of the embodiments may be combined with each other to arrive at new embodiments.
The above description is only a specific embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and the scope of the present disclosure should be subject to the scope of the claims.

Claims (19)

1. An operation method applied to a display page, wherein the display page comprises a display area, the display area comprises a plurality of sub-display areas, the display page comprises a current display window, and the method comprises the following steps:
in response to the current display window respectively overlapping with at least partial areas of at least two sub-display areas of the plurality of sub-display areas, acquiring a resizing trigger operation performed on the current display window;
in response to the resizing trigger operation, determining a window area occupied in the display page after the current display window is resized, wherein the window area comprises at least a portion of the at least two sub-display areas; and
generating a first display window in the window area, the current display window being replaced by the first display window to visually resize the current display window,
wherein the method further comprises:
in response to obtaining a zoom trigger operation on the current display window, generating a second display window to visually zoom the current display window according to the zoom trigger operation,
the display area is divided into a plurality of sub-display areas by a plurality of grid lines, the grid lines comprise grid lines extending along a first direction and grid lines extending along a second direction, the zooming trigger operation comprises a pulling operation on a first edge of the current display window, wherein the first edge extends along the first direction,
in response to acquiring a zoom trigger operation on the current display window, generating a second display window to visually zoom the current display window according to the zoom trigger operation, including:
in response to the pulling operation, moving the first edge along the second direction to a first edge position, and determining, according to the first edge position, a second edge position of a second edge corresponding to the first edge in the second display window, wherein the second edge position is the position of the grid line, among the grid lines extending along the first direction, that is closest to the first edge position; and
generating the second display window according to the second edge position and the edges of the current display window except the first edge,
wherein, in response to the current display window being respectively overlapped with at least partial areas of at least two sub-display areas of the plurality of sub-display areas, acquiring a resizing trigger operation performed on the current display window, comprising:
acquiring a resizing trigger operation performed on the second display window in response to the second display window respectively overlapping with at least partial areas of at least two sub-display areas of the plurality of sub-display areas.
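As an informal illustration (not part of the claims), the following minimal TypeScript sketch shows the two grid-related steps recited in claim 1: checking that a window overlaps at least two sub-display areas before a resize trigger is accepted, and snapping a pulled edge to the nearest grid line extending along the first direction. All names (Rect, GridLayout, snapEdge, overlapsAtLeastTwoSubAreas) and the coordinate conventions are assumptions made for this example only.

```typescript
// Illustrative sketch only; not the patented implementation.
interface Rect { left: number; top: number; right: number; bottom: number; }

// Grid lines along the first direction (assumed horizontal, y coordinates)
// and the second direction (assumed vertical, x coordinates).
interface GridLayout { horizontalLines: number[]; verticalLines: number[]; }

// Snap a pulled first edge (horizontal, at y = pulledEdgeY) to the nearest
// grid line extending along the first direction, giving the second edge position.
function snapEdge(pulledEdgeY: number, grid: GridLayout): number {
  return grid.horizontalLines.reduce((best, line) =>
    Math.abs(line - pulledEdgeY) < Math.abs(best - pulledEdgeY) ? line : best
  );
}

// The resize trigger is only acquired when the window overlaps at least
// partial areas of at least two sub-display areas.
function overlapsAtLeastTwoSubAreas(win: Rect, subAreas: Rect[]): boolean {
  const overlapping = subAreas.filter(a =>
    Math.min(win.right, a.right) > Math.max(win.left, a.left) &&
    Math.min(win.bottom, a.bottom) > Math.max(win.top, a.top)
  );
  return overlapping.length >= 2;
}
```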
2. The method of claim 1, wherein the resizing trigger operation comprises a click operation on the current display window.
3. The method of claim 1, wherein the current display window comprises at least one element comprising a current window display area for displaying images from signal source objects bound to the current display window.
4. The method of claim 3, wherein the at least one element further comprises a title bar located to one side of the current window display area,
the method further comprising: displaying image information of the image in the title bar.
5. The method of claim 4, wherein the at least one element further comprises at least one control icon, the at least one control icon located in the title bar.
6. The method of claim 3, wherein the first display window comprises a first window display area, the method further comprising:
binding the signal source object with the first display window; and
in response to receiving an image from the source object, displaying the image in the first window display area.
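As a short illustration of the bind-then-display flow in claim 6, the sketch below binds a signal source object to the first display window and renders each received image in the first window display area. SignalSource, FirstWindow, and the onImage callback are hypothetical names introduced only for this example.

```typescript
// Illustrative sketch; the event API is an assumption, not the claimed interface.
interface Image { data: Uint8Array; }
interface SignalSource {
  id: string;
  onImage: (handler: (img: Image) => void) => void;  // invoked for each received image
}
interface FirstWindow {
  displayArea: { render: (img: Image) => void };      // the first window display area
  boundSource?: SignalSource;
}

// Bind the signal source object to the first display window, then display
// every image received from that source in the first window display area.
function bindSource(win: FirstWindow, source: SignalSource): void {
  win.boundSource = source;
  source.onImage(img => win.displayArea.render(img));
}
```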
7. The method of claim 4, wherein the at least one element further comprises a video display window,
wherein the video display window is located in the title bar,
the video display window is used for displaying video from another signal source object bound with the video display window.
8. The method of claim 1, wherein the current display window comprises at least one element, the at least one element comprising a title bar and at least one control icon, the at least one control icon being located in the title bar,
wherein, in response to acquiring a zoom trigger operation on the current display window, generating a second display window to visually zoom the current display window according to the zoom trigger operation includes:
in a case where, in response to the zoom trigger operation, the size of the current display window is reduced to be smaller than a size threshold, selecting, from the at least one control icon, a part of the control icons to be displayed in the second display window; and
displaying the part of the control icons in the title bar to generate the second display window.
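A hedged sketch of the icon-reduction step in claim 8 follows: when the zoomed window would be smaller than a size threshold, only a part of the control icons is kept in the title bar. The priority field, the comparison against the window width, and maxIconsWhenSmall are assumptions, not claim limitations.

```typescript
// Illustrative sketch; 'priority' (lower value = more important) is an assumption.
interface ControlIcon { name: string; priority: number; }

function iconsForWindow(
  icons: ControlIcon[],
  windowWidth: number,
  sizeThreshold: number,
  maxIconsWhenSmall: number
): ControlIcon[] {
  if (windowWidth >= sizeThreshold) return icons;   // window is large enough: keep all icons
  return [...icons]
    .sort((a, b) => a.priority - b.priority)        // keep the most important icons first
    .slice(0, maxIconsWhenSmall);                   // display only this subset in the title bar
}
```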
9. The method of claim 1, wherein, in the second display window, the positions and sizes of the edges of the current display window other than the first edge are unchanged, and the first edge of the current display window is moved to the second edge position so that the second edge replaces the first edge.
10. The method of claim 1, wherein the arrangement of the plurality of sub-display areas is the same as the arrangement of the plurality of sub-display screens interacting with the display page.
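To illustrate claim 10, the sketch below derives the sub-display areas of the display page from the same rows-by-columns arrangement as a wall of physical sub-display screens (for example, a tiled video wall). The parameters and the uniform cell sizes are assumptions made for this example.

```typescript
// Illustrative sketch; a real layout need not use uniform cells.
interface Rect { left: number; top: number; right: number; bottom: number; }

// Build one sub-display area per physical sub-display screen, in the same grid arrangement.
function buildSubAreas(rows: number, cols: number, pageWidth: number, pageHeight: number): Rect[] {
  const areas: Rect[] = [];
  const cellW = pageWidth / cols;
  const cellH = pageHeight / rows;
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      areas.push({
        left: c * cellW, top: r * cellH,
        right: (c + 1) * cellW, bottom: (r + 1) * cellH,
      });
    }
  }
  return areas;
}

// Example: a 2x2 wall of sub-display screens mirrored by a 2x2 grid of sub-display areas.
const subAreas = buildSubAreas(2, 2, 1920, 1080);
```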
11. The method of claim 1, wherein the display page includes a plurality of signal source objects,
the method further comprises the following steps:
obtaining a placement trigger operation for placing a target object, wherein the target object is selected from the plurality of signal source objects,
in response to the placement trigger operation on the target object, creating an object display window in a sub-display area corresponding to the placement trigger operation; and
binding the object display window with the target object, and displaying an image from the target object in the object display window.
12. The method of claim 11, further comprising:
in response to acquiring a drag trigger operation on the object display window, visually moving the object display window to a target position corresponding to the drag trigger operation.
13. The method of claim 12, wherein the drag trigger operation comprises:
dragging the object display window until the object display window is dragged to the target position while the object display window is selected; and
releasing control of the object display window in response to the object display window being dragged to the target position.
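The sketch below illustrates the place-bind-drag sequence of claims 11 to 13: an object display window is created in the sub-display area that received the placement trigger, bound to the selected signal source object, and later moved visually to the target position of a drag. The data shapes are hypothetical and serve only as an example.

```typescript
// Illustrative sketch; not the claimed apparatus or data model.
interface SignalSource { id: string; }
interface Rect { left: number; top: number; right: number; bottom: number; }
interface ObjectWindow {
  bounds: Rect;
  boundSource?: SignalSource;  // once bound, the window displays images from this source
}

// Create an object display window filling the sub-display area that received the
// placement trigger, and bind it to the selected target object.
function placeTargetObject(target: SignalSource, subArea: Rect): ObjectWindow {
  return { bounds: { ...subArea }, boundSource: target };
}

// Visually move the selected window to the target position; releasing the drag
// leaves it at the last position reached.
function dragTo(win: ObjectWindow, targetLeft: number, targetTop: number): void {
  const width = win.bounds.right - win.bounds.left;
  const height = win.bounds.bottom - win.bounds.top;
  win.bounds = {
    left: targetLeft, top: targetTop,
    right: targetLeft + width, bottom: targetTop + height,
  };
}
```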
14. The method of claim 1, wherein the resizing trigger operation comprises a zoom-in trigger operation to zoom in the current display window or a zoom-out trigger operation to zoom out the current display window.
15. The method of claim 1, wherein the resizing trigger operation comprises a zoom-out trigger operation for zooming out the current display window,
wherein, in response to the resizing trigger operation, determining the window area occupied in the display page after the current display window is resized comprises:
in response to the resizing trigger operation, acquiring the overlapping area of the current display window with each of the plurality of sub-display areas; and
determining that the window area occupied in the display page after the current display window is resized is the sub-display area having the largest overlapping area.
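A minimal sketch of the zoom-out rule of claim 15: the window area after resizing is the single sub-display area whose overlap with the current display window is largest. The rectangle representation and the overlap helper are assumptions for this example.

```typescript
// Illustrative sketch; assumes at least one sub-display area exists.
interface Rect { left: number; top: number; right: number; bottom: number; }

function overlapArea(a: Rect, b: Rect): number {
  const w = Math.min(a.right, b.right) - Math.max(a.left, b.left);
  const h = Math.min(a.bottom, b.bottom) - Math.max(a.top, b.top);
  return w > 0 && h > 0 ? w * h : 0;   // zero when the rectangles do not overlap
}

// Pick the sub-display area with the largest overlap with the current window.
function zoomOutTarget(current: Rect, subAreas: Rect[]): Rect {
  return subAreas.reduce((best, area) =>
    overlapArea(current, area) > overlapArea(current, best) ? area : best
  );
}
```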
16. The method of claim 1, wherein the resizing trigger operation comprises a zoom-in trigger operation for zooming in the current display window,
wherein, in response to the resizing trigger operation, determining the window area occupied in the display page after the current display window is resized comprises:
acquiring each sub-display area in which a feature point of the current display window is located; and
taking the maximum area formed by the acquired sub-display areas as the window area occupied in the display page after the current display window is resized.
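A corresponding sketch of the zoom-in rule of claim 16, assuming the feature points are the four corners of the current display window: the sub-display area under each corner is collected, and the maximum area they form is taken here as their combined bounding rectangle. Both the corner assumption and that reading of "maximum area" are this example's, not necessarily the claim's.

```typescript
// Illustrative sketch; assumes at least one corner lies inside some sub-display area.
interface Point { x: number; y: number; }
interface Rect { left: number; top: number; right: number; bottom: number; }

function subAreaAt(p: Point, subAreas: Rect[]): Rect | undefined {
  return subAreas.find(a =>
    p.x >= a.left && p.x <= a.right && p.y >= a.top && p.y <= a.bottom);
}

function zoomInTarget(current: Rect, subAreas: Rect[]): Rect {
  const corners: Point[] = [
    { x: current.left, y: current.top }, { x: current.right, y: current.top },
    { x: current.left, y: current.bottom }, { x: current.right, y: current.bottom },
  ];
  const hit = corners
    .map(c => subAreaAt(c, subAreas))
    .filter((a): a is Rect => a !== undefined);
  // The combined bounding rectangle of the sub-display areas under the corners.
  return hit.reduce((acc, a) => ({
    left: Math.min(acc.left, a.left), top: Math.min(acc.top, a.top),
    right: Math.max(acc.right, a.right), bottom: Math.max(acc.bottom, a.bottom),
  }));
}
```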
17. An operation device applied to a display page, wherein the display page comprises a display area, the display area comprises a plurality of sub-display areas, and the display page comprises a current display window, the device comprises:
a trigger operation acquisition unit configured to acquire a resizing trigger operation performed on the current display window in response to the current display window being respectively overlapped with at least partial areas of at least two sub-display areas of the plurality of sub-display areas;
the area determining unit is configured to respond to the size adjustment triggering operation and determine a window area occupied in the display page after the size of the current display window is adjusted, wherein the window area at least comprises parts of the at least two sub-display areas; and
a window generating unit configured to generate a first display window in the window region, the current display window being replaced by the first display window to visually adjust a size of the current display window,
wherein the operating device is further configured to generate a second display window to visually zoom the current display window according to a zoom trigger operation in response to acquiring the zoom trigger operation on the current display window,
the display area is divided into a plurality of sub-display areas by a plurality of grid lines, the grid lines comprise grid lines extending along a first direction and grid lines extending along a second direction, the zooming trigger operation comprises a pulling operation on a first edge of the current display window, wherein the first edge extends along the first direction,
wherein, in response to acquiring a zoom trigger operation on the current display window, generating a second display window to visually zoom the current display window according to the zoom trigger operation, comprises:
in response to the pulling operation, moving the first edge along the second direction to a first edge position, and determining, according to the first edge position, a second edge position of a second edge corresponding to the first edge in the second display window, wherein the second edge position is the position of the grid line, among the grid lines extending along the first direction, that is closest to the first edge position; and
generating the second display window according to the second edge position and the edges of the current display window except the first edge,
the trigger operation acquisition unit is configured to acquire a resizing trigger operation performed on the second display window in response to the second display window respectively overlapping with at least a partial area of at least two sub-display areas of the plurality of sub-display areas.
18. An electronic device, comprising:
a processor;
a memory comprising one or more computer program instructions;
wherein the one or more computer program instructions are stored in the memory and when executed by the processor implement the method of operation of any of claims 1 to 16.
19. A computer-readable storage medium, non-transitory, storing computer-readable instructions, wherein the computer-readable instructions, when executed by a processor, implement the method of operation of any of claims 1-16.
CN202210543872.7A 2022-05-19 2022-05-19 Operation method, device, electronic equipment and computer readable storage medium Active CN114741016B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210543872.7A CN114741016B (en) 2022-05-19 2022-05-19 Operation method, device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210543872.7A CN114741016B (en) 2022-05-19 2022-05-19 Operation method, device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN114741016A CN114741016A (en) 2022-07-12
CN114741016B true CN114741016B (en) 2022-09-16

Family

ID=82288151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210543872.7A Active CN114741016B (en) 2022-05-19 2022-05-19 Operation method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114741016B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116737022A (en) * 2022-09-14 2023-09-12 荣耀终端有限公司 Display method and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111198734B (en) * 2018-11-20 2024-03-15 西安诺瓦星云科技股份有限公司 Window setting method and device, electronic equipment and nonvolatile storage medium
CN110162288A (en) * 2019-06-03 2019-08-23 北京淳中科技股份有限公司 A kind of method, apparatus, equipment and the medium of determining display area
CN112612405B (en) * 2020-12-28 2022-05-24 北京梧桐车联科技有限责任公司 Window display method, device, equipment and computer readable storage medium
CN112947815B (en) * 2021-04-27 2022-11-25 北京仁光科技有限公司 Multi-window interaction method and system, readable storage medium and electronic device
CN113676766A (en) * 2021-09-02 2021-11-19 中国电信股份有限公司 Browser video display method and device, storage medium and electronic equipment
CN114201085A (en) * 2021-11-30 2022-03-18 北京城市网邻信息技术有限公司 Information display method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114741016A (en) 2022-07-12

Similar Documents

Publication Publication Date Title
US10936790B2 (en) Responsive grid layouts for graphic design
US10037184B2 (en) Systems, methods, and devices for manipulation of images on tiled displays
US7773101B2 (en) Fisheye lens graphical user interfaces
US9880727B2 (en) Gesture manipulations for configuring system settings
US20150046853A1 (en) Computing Device For Collaborative Project Management
US9026946B2 (en) Method and apparatus for displaying an image
US20090091547A1 (en) Information display device
CN111107418A (en) Video data processing method, video data processing device, computer equipment and storage medium
JP3705826B2 (en) Virtual three-dimensional window display control method
WO2013139089A1 (en) Screen content zoom-in and displaying method and terminal
JPS6232527A (en) Display picture control system
US11354027B2 (en) Automatic zoom-loupe creation, selection, layout, and rendering based on interaction with crop rectangle
CN114924824B (en) Visual object blurring method, visual object rendering method and computing device
US20100295869A1 (en) System and method for capturing digital images
US11039196B2 (en) Method and device for displaying a screen shot
WO2018198703A1 (en) Display device
CN114741016B (en) Operation method, device, electronic equipment and computer readable storage medium
US20140208246A1 (en) Supporting user interactions with rendered graphical objects
WO2023221041A1 (en) Operation method and apparatus, electronic device, and computer-readable storage medium
JP6191851B2 (en) Document presentation method and user terminal
JP2009129223A (en) Image editing device, image editing program, recording medium, and image editing method
EP2557562B1 (en) Method and apparatus for displaying an image
JPH0619663A (en) Automatic control method for multiwindow
CN111142754A (en) Screenshot processing method and device and storage medium
JP4089490B2 (en) Image display device, image display method, and image display system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant