CN116069198A - Floating window adjusting method and electronic equipment - Google Patents

Floating window adjusting method and electronic equipment

Info

Publication number
CN116069198A
Authority
CN
China
Prior art keywords
area
interface
user
application
widget
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111278442.9A
Other languages
Chinese (zh)
Inventor
胡诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Petal Cloud Technology Co Ltd
Original Assignee
Petal Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Petal Cloud Technology Co Ltd filed Critical Petal Cloud Technology Co Ltd
Priority to CN202111278442.9A
Priority to PCT/CN2022/123662 (published as WO2023071718A1)
Publication of CN116069198A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1407 General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A floating window adjusting method and an electronic device relate to the field of terminal technologies and can improve the efficiency of multitasking. The method is applied to a first electronic device that includes a display screen, and includes: displaying a first interface on the display screen, and displaying a floating window on an upper layer of the first interface, where the floating window is located in a first area of the display screen; and detecting that the first interface is switched to a second interface, and adjusting the floating window to a second area of the display screen based on behavior information of a first user.

Description

Floating window adjusting method and electronic equipment
Technical Field
The application relates to the technical field of terminals, in particular to a floating window adjusting method and electronic equipment.
Background
As technology advances, more and more electronic devices have multitasking capabilities. Currently, one way to multitask is application windowing: the electronic device displays the content of an application in a small window (widget). For example, as shown in fig. 1, a video application (which may be referred to as an upper layer application) is displayed in small-window form on the upper layer of an instant messaging application (which may be referred to as a lower layer application). In this way, the user can play a video in the video application while using the instant messaging application, thereby performing multiple tasks at the same time.
In the widget scenario described above, the electronic device provides a multitasking capability. However, in some cases the widget may block key information of the lower layer application, which not only affects the user's viewing of related content of the lower layer application, but also hinders the user's use of the lower layer application, reducing the efficiency with which the user operates the lower layer application and therefore the efficiency of multitasking. How to improve multitasking efficiency in the widget scenario is thus an urgent issue to be resolved.
Disclosure of Invention
The application provides a floating window adjusting method and electronic equipment, which can improve the multi-task processing efficiency.
In order to achieve the above purpose, the embodiment of the present application provides the following technical solutions:
in a first aspect, a floating window adjustment method is provided, applied to a first electronic device, where the first electronic device includes a display screen, and the method includes:
and displaying a first interface on the display screen, and displaying a floating window on the upper layer of the first interface, wherein the floating window is positioned in a first area of the display screen. And detecting that the first interface is switched to the second interface, and adjusting the floating window to a second area of the display screen based on the behavior information of the first user. According to the method, the first electronic equipment can automatically and intelligently adjust the floating window based on the behavior information of the user, on one hand, the time delay of multi-task processing caused by manually adjusting the position of the small window by the user can be reduced, the first electronic equipment can perform multi-task processing more smoothly, and the efficiency of the multi-task processing is improved. On the other hand, the floating window is adjusted based on the behavior information of the user, so that the adjusted floating window area is more in line with the use habit of the user, and the efficiency of multitasking is further improved.
Alternatively, the first region and the second region may have an overlapping portion, or the first region and the second region may not have an overlapping portion.
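As a non-limiting illustration of the method of the first aspect (not part of the claimed subject matter), the detect-and-adjust flow could be sketched roughly as follows; all class and function names, such as FloatingWindowController and predictRegion, are hypothetical.

```kotlin
// Hypothetical sketch of the claimed flow; names and APIs are illustrative only.
data class Region(val left: Int, val top: Int, val right: Int, val bottom: Int)

interface RegionPredictor {
    // Predicts a placement region for the floating window from the first
    // user's behavior information, given the interface now shown underneath.
    fun predictRegion(interfaceId: String): Region
}

class FloatingWindowController(private val predictor: RegionPredictor) {
    var currentRegion: Region = Region(0, 0, 0, 0)
        private set

    // Called when the lower-layer interface switches from the first
    // interface to the second interface.
    fun onInterfaceSwitched(secondInterfaceId: String) {
        val target = predictor.predictRegion(secondInterfaceId)
        moveFloatingWindow(target)   // adjust the floating window to the second area
        currentRegion = target
    }

    private fun moveFloatingWindow(target: Region) {
        // Platform-specific window update (for example, updating the floating
        // window's layout parameters) would go here.
    }
}
```

In this sketch the prediction step is deliberately abstract; the designs below describe several ways in which the second area could actually be chosen.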
In one possible design, the behavior information of the first user includes information obtained from the first user historically operating the floating window;
and/or the behavior information of the first user includes information obtained from the first user historically operating the second interface.
Optionally, the historical operations on the floating window may be operations performed within a certain time window, or may be all operations on the floating window after the floating window is generated.
Optionally, historically operating the second interface includes: performing operations on the second interface after historically switching to the second interface (or to an interface similar to the second interface, such as a third interface). Optionally, an interface similar to the second interface is one in which the types of interface elements, the layout of the elements, the sizes of the elements, and the like are the same or substantially the same as those of the second interface, so that the second interface and the third interface have consistent requirements on the placement area of the floating window.
Illustratively, when the lower layer interface A historically switched to interface B, the upper layer floating window was adjusted from a first position to a second position, and the floating window in the second position blocks less of the important area in the lower layer interface B. Subsequently, when the lower layer interface A is switched to interface C (interface C and interface B being similar interfaces), the first electronic device may adjust the floating window to the second position. Because interface B and interface C are similar in element types and layout, the floating window in the second position also blocks little of the important area in the lower layer interface C.
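A minimal sketch of how such historical adjustments might be recorded and reused for a similar lower-layer interface, assuming a simple signature-based similarity check and reusing the Region type from the earlier sketch; all names are illustrative, not taken from this application.

```kotlin
// Illustrative only: one record per historical widget adjustment.
data class AdjustmentRecord(
    val lowerInterfaceSignature: String, // e.g. a hash of element types + layout
    val chosenRegion: Region,            // Region type from the earlier sketch
    val timestampMs: Long
)

class AdjustmentHistory {
    private val records = mutableListOf<AdjustmentRecord>()

    fun add(record: AdjustmentRecord) { records += record }

    // Returns the region last chosen for this interface or a "similar" one.
    // Here similarity is simply an equal signature; a real system would
    // compare element types, layout and sizes as the description suggests.
    fun lastRegionFor(signature: String): Region? =
        records.filter { it.lowerInterfaceSignature == signature }
            .maxByOrNull { it.timestampMs }
            ?.chosenRegion
}
```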
In one possible design, historically operating the floating window includes: when the second interface was historically displayed, adjusting the position and/or the size of the floating window displayed on the upper layer of the second interface.
In one possible design, the information obtained from the first user historically operating the floating window includes: information about a third area for placing the floating window, obtained when the first user adjusted the floating window while the second interface was historically displayed; the second area is associated with the third area.
In the scheme of adjusting the floating window based on historical operation information of the floating window, as the number of the user's widget operations increases (that is, the longer the user has used the device), the electronic device predicts the placement areas of the upper layer floating window for different lower layer interfaces more and more accurately, and the result conforms more and more closely to the user's habits in the corresponding usage scenario. Compared with existing non-personalized schemes, the placement area of the floating window therefore better matches the user's habits and provides a personalized floating window experience.
Optionally, the related information of the third area includes coordinate information of the third area. Optionally, the second region is related to the third region, which may mean that the second region is located in the third region, or that there is a partial overlapping region between the second region and the third region.
In one possible design, operating the second interface includes: performing a sliding operation on the second interface.
In one possible design, the information obtained from the first user historically operating the second interface includes: a fourth area in which the change rate of the elements in the second interface satisfies a first condition, where the fourth area is used for placing the floating window; the second area is located within the fourth area.
In one possible design, the method further comprises, prior to adjusting the floating window to the second region of the display screen:
determining a plurality of candidate areas for placing the floating window, where the second area is the most recently obtained area among the plurality of candidate areas, and/or the second area is an area among the plurality of candidate areas whose frequency satisfies a second condition, and/or the second area is the area with the highest weighted score among the plurality of candidate areas.
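A hedged sketch of the three selection rules named above (most recent, frequency, weighted score), reusing the Region type from the earlier sketch; the particular weighting below is an arbitrary example, not a formula from this application.

```kotlin
data class CandidateArea(val region: Region, val lastUsedMs: Long, val useCount: Int)

fun pickSecondArea(candidates: List<CandidateArea>): Region? {
    // Option 1: the most recently obtained candidate area.
    val latest = candidates.maxByOrNull { it.lastUsedMs }
    // Option 2: the candidate whose frequency of use is highest
    // (one way the "second condition" could be instantiated).
    val mostFrequent = candidates.maxByOrNull { it.useCount }
    // Option 3: the candidate with the highest weighted score; this mix of
    // frequency and recency is chosen only for illustration.
    val now = System.currentTimeMillis()
    val bestScored = candidates.maxByOrNull {
        it.useCount - (now - it.lastUsedMs) / 60_000.0
    }
    return bestScored?.region ?: mostFrequent?.region ?: latest?.region
}
```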
In one possible design, adjusting the floating window to a second area of the display screen based on the behavior information of the first user of the first electronic device includes: adjusting the floating window to the second area of the display screen based on fusion information acquired from a server, where the fusion information is obtained by the server based on the behavior information of the first user and behavior information of one or more second users.
Predicting the floating window placement area based on the fusion information optimizes the prediction in the cold-start case (that is, when the user has not yet adjusted the floating window, or is using the corresponding application for the first time), while still fully preserving the user's personal habits.
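The following sketch illustrates one way, assumed purely for illustration and not the claimed implementation, in which a server could fuse the placement statistics of several users and a device could fall back to that fused result in the cold-start case; it reuses the Region type defined earlier.

```kotlin
data class PlacementStats(val region: Region, val weight: Double)

// Server side: merge the placement statistics reported by many users for the
// same lower-layer interface into one fused recommendation (weighted average
// of the region corners; a crude stand-in for real model fusion).
fun fuse(allUsers: List<PlacementStats>): PlacementStats {
    require(allUsers.isNotEmpty())
    val total = allUsers.sumOf { it.weight }
    fun avg(side: (Region) -> Int): Int =
        (allUsers.sumOf { side(it.region) * it.weight } / total).toInt()
    return PlacementStats(
        Region(avg { it.left }, avg { it.top }, avg { it.right }, avg { it.bottom }),
        total
    )
}

// Device side: prefer the personal statistics once the user has enough history
// of their own, otherwise fall back to the fused recommendation.
fun choose(local: PlacementStats?, fused: PlacementStats): Region =
    if (local != null && local.weight >= fused.weight) local.region else fused.region
```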
In one possible design, the rate of change between the presented content of the first interface and the presented content of the second interface satisfies a third condition. Optionally, the presented content of an interface may refer to the content in an interface container, window, or view. In this way, the upper layer floating window is adjusted only when the content of the lower layer first interface and second interface differs substantially, which reduces the frequency of floating window adjustments and avoids the power consumption caused by adjusting the floating window too frequently.
In one possible design, the behavior information of the first user is behavior information of the first user on the first electronic device, or the behavior information of the first user is behavior information of the first user on the second electronic device.
In one possible design, the first electronic device and the second electronic device are different electronic devices that log in to the same account.
This scheme is applicable to scenarios such as device replacement, backup, and multi-device use. In a cold-start scenario, the first electronic device can automatically and intelligently adjust the placement area of the upper layer floating window based on the user's behavior information on the second electronic device, thereby minimizing the blocking of important areas of the lower layer interface by the upper layer floating window.
In one possible design, the first region has a first size; the second region has a second size, and the first size is different from the second size.
In one possible design, the first interface and the second interface are interfaces of the same application, or the first interface and the second interface are interfaces of different applications.
In one possible design, displaying a floating window at an upper layer of the first interface includes: generating a floating window and displaying the floating window on the upper layer of the first interface; the floating window is generated by the first electronic device based on instructions entered by the user, or the floating window is automatically generated by the first electronic device.
In one possible design, the first region of the floating window is determined based on a preset area; or the first region is determined based on the behavior information of the first user; or the first region is determined based on fusion information obtained by the server based on the behavior information of the first user and the behavior information of one or more second users.
In one possible design, the preset area includes any one or more of the following: a virtual keyboard, a dial, a back button, a navigation button, and a search box; the first region is located in the preset area.
In a second aspect, the present application provides a first electronic device, the first electronic device comprising:
the display module is used for displaying the first interface, displaying a floating window on the upper layer of the first interface, and the floating window is positioned in the first area.
The processing module is used for detecting that the first interface is switched to the second interface and adjusting the floating window to the second area based on the behavior information of the first user.
Alternatively, the first region and the second region may have an overlapping portion, or the first region and the second region may not have an overlapping portion.
In one possible design, the behavior information of the first user includes information obtained from the first user historically operating the floating window;
and/or the behavior information of the first user includes information obtained from the first user historically operating the second interface.
In one possible design, historically operating the floating window includes: when the second interface was historically displayed, adjusting the position and/or the size of the floating window displayed on the upper layer of the second interface.
In one possible design, the information obtained from the first user historically operating the floating window includes: information about a third area for placing the floating window, obtained when the first user adjusted the floating window while the second interface was historically displayed; the second area is associated with the third area.
Optionally, the related information of the third area includes coordinate information of the third area.
In one possible design, operating the second interface includes: performing a sliding operation on the second interface.
In one possible design, the information obtained from the first user historically operating the second interface includes: a fourth area in which the change rate of the elements in the second interface satisfies a first condition, where the fourth area is used for placing the floating window; the second area is located within the fourth area.
In one possible design, the processing module is further configured to determine a plurality of candidate areas for placing the floating window before the floating window is adjusted to the second area, where the second area is the most recently obtained area among the plurality of candidate areas, and/or the second area is an area among the plurality of candidate areas whose frequency satisfies a second condition, and/or the second area is the area with the highest weighted score among the plurality of candidate areas.
In one possible design, adjusting the floating window to the second area based on the behavior information of the first user of the first electronic device includes: adjusting the floating window to the second area based on fusion information acquired from a server, where the fusion information is obtained by the server based on the behavior information of the first user and behavior information of one or more second users.
In one possible design, the rate of change between the presented content of the first interface and the presented content of the second interface satisfies the third condition.
In one possible design, the behavior information of the first user is behavior information of the first user on the first electronic device, or the behavior information of the first user is behavior information of the first user on the second electronic device.
In one possible design, the first electronic device and the second electronic device are different electronic devices that log in to the same account.
In one possible design, the first region has a first size; the second region has a second size, and the first size is different from the second size.
In one possible design, the first interface and the second interface are interfaces of the same application, or the first interface and the second interface are interfaces of different applications.
In one possible design, the processing module is further configured to generate a floating window, and control the display module to display the floating window on an upper layer of the first interface; the floating window is generated by the first electronic device based on instructions entered by the user, or the floating window is automatically generated by the first electronic device.
In one possible design, the first region of the floating window is determined based on a preset area; or the first region is determined based on the behavior information of the first user; or the first region is determined based on fusion information obtained by the server based on the behavior information of the first user and the behavior information of one or more second users.
In one possible design, the preset area includes any one or more of the following: a virtual keyboard, a dial, a back button, a navigation button, and a search box; the first region is located in the preset area.
In a third aspect, the present application provides an electronic device having a function of implementing the floating window adjustment method in any of the above aspects and any one of the possible implementations thereof. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function.
In a fourth aspect, the present application provides a computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform a floating window adjustment method as in any of the above aspects and any one of the possible implementations.
In a fifth aspect, the present application provides a computer program product which, when run on an electronic device, causes the electronic device to perform the floating window adjustment method of any of the above aspects and any one of the possible implementations thereof.
In a sixth aspect, there is provided circuitry comprising processing circuitry configured to perform the floating window adjustment method of any of the aspects and any one of the possible implementations described above.
In a seventh aspect, embodiments of the present application provide a chip system, including at least one processor and at least one interface circuit, where the at least one interface circuit is configured to perform a transceiving function and send an instruction to the at least one processor, and when the at least one processor executes the instruction, the at least one processor performs a floating window adjustment method in any of the above aspects and any one of possible implementation manners.
Drawings
FIG. 1 is an interface diagram showing an upper floating window and a lower interface;
FIG. 2 is an interface diagram involved in a floating window adjustment scheme;
FIG. 3 is a schematic diagram of a system architecture according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic diagram of a software architecture of an electronic device according to an embodiment of the present application;
fig. 6 is another schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 7 is an interface diagram provided by an embodiment of the present application;
FIGS. 8A-8C are interface diagrams provided in embodiments of the present application;
fig. 9 and 10 are interface diagrams provided in an embodiment of the present application;
FIGS. 11A and 11B are interface diagrams provided in embodiments of the present application;
FIGS. 12A and 12B are interface diagrams provided in embodiments of the present application;
FIG. 13 is a flowchart of a floating window adjustment method according to an embodiment of the present disclosure;
fig. 14A and 14B are interface diagrams provided in the embodiments of the present application;
FIG. 15 is a schematic diagram of a training and running process of a predictive model provided in an embodiment of the present application;
FIG. 16 is a schematic diagram of candidate areas for placing a floating window provided in an embodiment of the present application;
FIG. 17 is a schematic diagram of a local model and a fusion model according to an embodiment of the present disclosure;
FIG. 18 is a flowchart of another floating window adjustment method according to an embodiment of the present disclosure;
FIG. 19 is an interface diagram provided by an embodiment of the present application;
FIG. 20 is a flowchart of another floating window adjustment method according to an embodiment of the present disclosure;
FIG. 21 is a schematic view of an apparatus provided in an embodiment of the present application;
fig. 22 is a schematic diagram of a chip system according to an embodiment of the present application.
Detailed Description
Taking multitasking on a mobile phone as an example, fig. 2 shows the interaction between the user and the mobile phone in the current widget scenario. In the widget scenario, assume that the upper layer widget application is video application A and the lower layer application is video application B. The initial placement of the widget is shown in (1) of fig. 2: the widget blocks a key operation area of video application B, and the user has to manually move the widget out of the key operation area in order to operate video application B, for example to the area shown in (2) of fig. 2. Later, the user needs to search for a video in video application B; as shown in (3) of fig. 2, after detecting that the user taps the search box 201, the mobile phone pops up the virtual keyboard. The widget now blocks the virtual keyboard, so the user cannot tap the characters to be entered and must again manually move the widget out of the keyboard area, for example to the area shown in (4) of fig. 2. Thus, during the whole multitasking process, the widget repeatedly blocks key operation areas and the user has to adjust the widget position repeatedly. On the one hand, manually adjusting the widget position takes time, which generally lowers multitasking efficiency; on the other hand, these manual adjustments interrupt the user's operations and degrade the interaction experience.
To improve multitasking efficiency, this application provides a floating window adjusting method that automatically determines and adjusts the placement area of the widget. It can minimize the user's manual adjustments of the widget, and the adjusted widget usually does not block key areas of the lower layer application, so human-computer interaction efficiency can be improved.
The application program according to the embodiment of the present application may be, but is not limited to, any one or more of the following: an installation-free application (such as a quick application), a user-level application (such as a third party downloaded application), a system-level application (such as a system-preloaded application). The embodiments of the present application are not limited to the application form.
The application may be used on one electronic device or across devices. For example, in some scenarios, a cell phone may cooperate with an application on a tablet computer.
Fig. 3 shows an exemplary architecture of a system to which embodiments of the present application are applicable. The system includes one or more terminals (e.g., smart phone 302, tablet 303, notebook 304 as shown in fig. 3).
As one possible implementation, a terminal (such as a smartphone) may include an application management module and a widget management module.
Alternatively, the application management module may also be referred to as an application monitoring device (application monitoring device, AMD), and the widget management module may also be referred to as an intelligent widget region prediction device (SWLD), where each name does not constitute a limitation on the functions that the module and the device may implement.
Some applications in the terminal support windowing. These windowable applications may register with the application monitoring device at installation time (corresponding to interaction (2) shown in fig. 3). After an application generates a widget, it can send a widget-generated notification; the application monitoring device can analyze the notification in real time and obtain control of the widget (corresponding to interaction (2) shown in fig. 3) so as to intelligently adjust the widget's position. Alternatively, even before the application has generated the widget, if the application monitoring device detects that the user has entered an instruction for generating a widget, it may obtain control of the widget, determine the widget's initial position, and invoke the display driver to display the widget at the determined initial position.
The application monitoring device can be used to monitor widget generation and widget exit, to monitor the user's operations of scaling and/or moving the widget, to monitor interface changes of the application below the widget, and to scale and/or move the widget according to the widget size and/or placement area determined by the intelligent widget region prediction device.
The function of the application monitoring device can be turned on or off in the settings (corresponding to interaction (1) shown in fig. 3). After the function is turned off in the settings, the terminal does not detect or automatically adjust the widget position; after the function is turned on, the terminal has permission to monitor, detect, and adjust the application window.
The intelligent widget region prediction device can provide an interface for the application monitoring device to call (corresponding to interaction (3) shown in fig. 3). The device also has permission to store data, some of which can be stored in the storage space (corresponding to interaction (4) shown in fig. 3), for example, information such as the non-critical/critical areas of an application in different states and the user's records of operating the widget. The device may also train the prediction model based on the user's widget operation records.
Optionally, the device may be further configured to predict the target placement area of the widget in the corresponding application state according to the application, the state corresponding to the application, and the prediction model, and to send the information on the target placement area to the application monitoring device (corresponding to interaction (3) shown in fig. 3).
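Purely for illustration, the interaction between the application monitoring device and the intelligent widget region prediction device described above might look roughly like the following; the interface and method names are hypothetical, and the Region type is reused from the earlier sketch.

```kotlin
interface WidgetRegionPredictionService {
    // Given the application and its current state (for example, which interface
    // is displayed), return the target placement area for the widget.
    fun predictTargetArea(packageName: String, interfaceState: String): Region
}

class ApplicationMonitor(private val prediction: WidgetRegionPredictionService) {
    // Called when the monitor detects that the lower-layer application's state
    // has changed (interface refreshed, virtual keyboard pulled up, and so on).
    fun onLowerAppStateChanged(packageName: String, interfaceState: String) {
        val area = prediction.predictTargetArea(packageName, interfaceState)
        // Holding the widget control right, the monitor then scales and/or
        // moves the widget into the returned area.
        applyWidgetArea(area)
    }

    private fun applyWidgetArea(area: Region) {
        // Display-driver / window-manager call omitted in this sketch.
    }
}
```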
Optionally, the state corresponding to the application includes, but is not limited to, an interface displayed by the application.
In some cases, the state of the underlying application may change. Optionally, the state of the lower-layer application changes, which may mean that the interface of the application is refreshed, or the change rate of the presented content of the interface meets a certain condition, for example, the change rate of the presented content of the interface exceeds a threshold.
For example, (2) of fig. 8B shows the interface corresponding to the "home" tab of the lower layer video application, and (3) of fig. 8B shows the interface corresponding to the "my" tab; the display contents of the two interfaces are different, so the states of the video application are different.
Alternatively, that the state of the lower layer application changes may mean that the lower layer application receives a user operation instruction and the operation instruction triggers the lower layer application interface to refresh, or that the operation instruction triggers the lower layer application interface to change and the rate of change of the interface exceeds a threshold. The rate of change of the interface can be defined in a number of ways; for example, it may be represented by the area ratio of the interface elements that changed. The embodiments of the present application do not limit the specific meaning of the rate of change.
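For example, under the assumption that the change rate is defined as the area ratio of changed interface elements (one of the possible definitions mentioned above), it could be computed roughly as follows; the 0.3 threshold is an arbitrary example value, not taken from this application.

```kotlin
data class InterfaceElement(val id: String, val area: Int, val contentHash: Int)

// Change rate = area of changed elements / total interface area (assumed definition).
fun changeRate(before: List<InterfaceElement>, after: List<InterfaceElement>): Double {
    val beforeById = before.associateBy { it.id }
    val changedArea = after.sumOf { el ->
        val old = beforeById[el.id]
        if (old == null || old.contentHash != el.contentHash) el.area else 0
    }
    val totalArea = after.sumOf { it.area }.coerceAtLeast(1)
    return changedArea.toDouble() / totalArea
}

// Re-place the widget only when the change rate crosses the threshold.
fun shouldAdjustWidget(before: List<InterfaceElement>, after: List<InterfaceElement>) =
    changeRate(before, after) > 0.3
```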
Optionally, an element in an interface (an interface element for short) refers to one of a series of elements included in a software or system interface that can meet the interaction requirements of the user. Elements include, but are not limited to, one or more of the following: search boxes, input boxes, forms, menus, scroll bars, buttons, and the like.
Optionally, the user operation instruction may be, for example, a tap or slide on the interface, or a voice instruction. The embodiments of this application do not limit the user operation instruction that triggers refreshing of the lower layer application interface.
Illustratively, in the interface shown in (2) of fig. 8B, the mobile phone does not refresh the interface as long as it does not detect the user tapping the "my" tab; when it detects that the user taps the "my" tab, it refreshes the application interface, for example by jumping to the interface shown in (3) of fig. 8B.
Illustratively, the state of the underlying application changes, and may include, but is not limited to: pull up a virtual keyboard, switch applications, return other interfaces within an application, etc.
When the state of the lower layer application changes, the terminal can redetermine the placement area of the upper layer application window based on the state change of the lower layer application, so as to reduce the probability of shielding the key area of the lower layer application by the upper layer application window.
Optionally, the system may further comprise a server. The terminal may interact with the server through a communication network.
The server can be used to fuse and summarize the prediction models generated by the intelligent widget region prediction devices of the terminals to obtain a fusion model, and, in response to a query request from a terminal's intelligent widget region prediction device, to deliver the fusion model to that terminal.
Optionally, the intelligent widget region prediction device can operate in an end-side mode (i.e., data and models are stored and updated on the end side) and also support end-cloud collaborative mode operation.
In the end-cloud collaboration mode, optionally, the terminal invokes a model query interface opened by the server, which supports uploading the terminal's local model to the server for fusion (corresponding to interaction (5) shown in fig. 3). Optionally, the terminal may also call a model download interface of the server to download the latest version of the fusion model (corresponding to interaction (5) shown in fig. 3) and perform initialization or local model tuning according to the fusion model.
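A rough sketch of the end-cloud collaboration calls described above; the endpoint names and payload format are assumptions, not the actual server API.

```kotlin
interface ModelSyncApi {
    fun uploadLocalModel(deviceId: String, model: ByteArray)    // interaction (5): upload for fusion
    fun downloadFusionModel(localVersion: Long): ByteArray?     // null when already up to date
}

class ModelSyncClient(private val api: ModelSyncApi) {
    // Upload the local model, then initialize or tune it with the latest fusion
    // model, keeping the existing local model when it is already current.
    fun sync(deviceId: String, localModel: ByteArray, localVersion: Long): ByteArray {
        api.uploadLocalModel(deviceId, localModel)
        return api.downloadFusionModel(localVersion) ?: localModel
    }
}
```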
Optionally, the terminal may be a mobile phone, a tablet computer, a desktop, a laptop, a handheld computer, a notebook, a netbook, a personal digital assistant (personal digital assistant, PDA), a television, or other devices, and the specific form of the electronic device is not particularly limited in the embodiments of the present application.
The terms "first" and "second" and the like in the description and in the drawings of the present application are used for distinguishing between different objects or for distinguishing between different processes of the same object. The words "first," "second," and the like may distinguish between identical or similar items that have substantially the same function and effect. For example, the first device and the second device are merely for distinguishing between different devices, and are not limited in their order of precedence. It will be appreciated by those of skill in the art that the words "first," "second," and the like do not limit the amount and order of execution, and that the words "first," "second," and the like do not necessarily differ.
"at least one" means one or more,
"plurality" means two or more.
"and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a alone, a and B together, and B alone, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural.
Furthermore, references to the terms "comprising" and "having" and any variations thereof in the description of the present application are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the present application and in the drawings, "of" and "corresponding to" are sometimes used interchangeably; it should be noted that their meanings are consistent when the distinction is not emphasized.
The system architecture and the service scenario described in the present application are for more clearly describing the technical solution of the present application, and do not constitute a limitation to the technical solution provided in the present application, and those skilled in the art can know that, with the evolution of the system architecture and the appearance of a new service scenario, the technical solution provided in the present application is also applicable to similar technical problems.
Taking a mobile phone as an example of the terminal, refer to fig. 4, which is an exemplary structural diagram of the mobile phone according to an embodiment of the present application. As shown in fig. 4, the terminal 103 may include: a processor, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identity module (SIM) card interface 195, and the like.
The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, and a bone conduction sensor 180M.
It should be understood that the structure illustrated in the embodiments of the present invention does not constitute a specific limitation on the terminal 103. In other embodiments of the present application, terminal 103 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor may include one or more processing units, such as: the processors may include application processors (application processor, AP), modem processors, graphics processors (graphics processing unit, GPU), image signal processors (image signal processor, ISP), controllers, memories, video codecs, digital signal processors (digital signal processor, DSP), baseband processors, and/or neural network processors (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the terminal 103, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor for storing instructions and data. In some embodiments, the memory in the processor is a cache memory. The memory may hold instructions or data that the processor has just used or recycled. If the processor needs to reuse the instruction or data, it can be called directly from memory. Repeated access is avoided, and the waiting time of the processor is reduced, so that the efficiency of the system is improved.
In some embodiments, the processor may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiment of the present invention is only illustrative, and does not limit the structure of the terminal 103. In other embodiments of the present application, the terminal 103 may also use different interfacing manners in the foregoing embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters.
The wireless communication function of the terminal 103 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal 103 may be configured to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G or the like applied on the terminal 103. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in a processor. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. applied on the terminal 103. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor. The wireless communication module 160 may also receive a signal to be transmitted from the processor, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of terminal 103 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that terminal 103 may communicate with a network and other devices via wireless communication techniques. Wireless communication techniques may include global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The terminal 103 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal 103 may include 1 or N display screens 194, where N is a positive integer greater than 1.
Terminal 103 may implement shooting functions through an ISP, a camera 193, a video codec, a GPU, a display 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, terminal 103 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the terminal 103 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The terminal 103 may support one or more video codecs, so that the terminal 103 can play or record video in multiple coding formats, for example: moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the terminal 103 can be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the terminal 103. The external memory card communicates with the processor through the external memory interface 120 to implement a data storage function, for example, storing files such as music and videos in the external memory card.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The processor executes various functional applications of the terminal 103 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the terminal 103 (such as audio data, phonebook, etc.), etc. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
In some embodiments of the present application, the memory may be used to store training samples of the prediction model, such as application state information. Optionally, the memory may be further configured to store widget operation records corresponding to application states, and/or other information to be stored in the embodiments of the present application.
The terminal 103 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in a processor, or a portion of the functional modules of the audio module 170 may be provided in a processor.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The terminal 103 can listen to music through the speaker 170A or listen to hands-free calls.
The receiver 170B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the terminal 103 receives a call or a voice message, the voice can be heard by bringing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic" or "mike", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C. The terminal 103 may be provided with at least one microphone 170C.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The terminal 103 may receive key inputs, generating key signal inputs related to user settings of the terminal 103 as well as function controls.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, and may be used to indicate a charging state and a change in battery level, or to indicate a message, a missed call, a notification, and the like.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be brought into contact with or separated from the terminal 103 by being inserted into or removed from the SIM card interface 195. The terminal 103 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously, and the types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The terminal 103 interacts with a network through the SIM card to implement functions such as calls and data communication. In some embodiments, the terminal 103 uses an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the terminal 103 and cannot be separated from the terminal 103.
The software system of the terminal 103 may employ a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. The embodiments of the present application take a layered system architecture (for example, an Android system or a HarmonyOS system, etc.) as an example to illustrate the software structure of the terminal 103.
Fig. 5 is a software architecture block diagram of the terminal 103 provided in the embodiment of the present application. The layered architecture divides the software into several layers, each of which has a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the layered system may include three layers, which are, from top to bottom, an application layer (application layer for short), an application framework layer (framework layer for short), and a kernel layer (also referred to as a driver layer).
The application layer may include a series of application packages. For example, the application packages may be applications such as camera, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, short message, and desktop launcher (Launcher). In the embodiments of the present application, the applications include applications that can be windowed, such as video and memo applications.
The framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 5, the framework layer may include a window manager (window manager service, WMS) and an activity manager (activity manager service, AMS), etc. Optionally, the framework layer may also include a content provider, view system, telephony manager, resource manager, notification manager, etc. (not shown in the figures).
In embodiments of the present application, the framework layer may receive interaction events from the kernel layer.
In some embodiments of the present application, the framework layer may include an application monitoring module and an intelligent widget region prediction module. The functions of both are described above and will not be described in detail here.
In other embodiments, one or both of the application monitoring module and the intelligent widget region prediction module may be disposed in other levels, which embodiments of the present application do not limit.
The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver, and may further include input/output device drivers (e.g., for a keyboard, touch screen, headphones, speakers, microphones, handles, etc.), and so on.
The above-described fig. 5 is only one possible example of the software architecture of the terminal 103 and does not constitute a limitation on the software architecture of the terminal 103. It can be understood that the software architecture of the terminal 103 may also be arranged otherwise. For example, the layered software architecture may be further divided into more or fewer layers, and the specific function of each layer is not limited.
The foregoing illustrates the software and hardware structure of the electronic device in the embodiments of the present application by using the terminal 103 as an example; the embodiments of the present application do not limit the structure and form of the electronic device. By way of example, fig. 6 shows another exemplary structure of an electronic device. As shown in fig. 6, the electronic device includes a processor 501, a memory 502, and a transceiver 503. The processor 501 and the memory 502 may be implemented by the processor and the memory of the terminal 103. The transceiver 503 is used for the electronic device to interact with other devices, such as the electronic device 101. The transceiver 503 may be a device based on a communication protocol such as Wi-Fi or Bluetooth, or another type of transceiver.
In other embodiments of the present application, the electronic device may include more or fewer components than illustrated, some components may be combined or split, some components may be replaced, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The embodiments of the present application can be applied to a multitasking scenario. In a multitasking scenario, certain upper-layer floating windows may shield the interface content displayed by a lower-layer application. In the embodiments of the present application, the position of the upper-layer floating window can be adjusted automatically and intelligently, so that shielding of some important areas of the lower-layer application (which may be called key operation areas in the embodiments of the present application) is reduced.
In some scenarios, the user enters instructions into the handset to open the application widget. The instruction for opening the application widget may be various instructions, which is not limited in the embodiment of the present application. For example, the instruction to open an application widget may be an instruction for the user to click on a "widget play" option in video software (e.g., video). For another example, the instruction for opening the application widget may be an instruction corresponding to a specific gesture input by the user. Illustratively, as shown in fig. 7 (1), the mobile phone currently displays the interface 701 of the video application a, and when detecting the operation that the finger of the user slides leftwards on the interface 701, the mobile phone may pop up a window 702 shown in fig. 7 (2), which may be used to call up the application widget. Wherein window 702 may display information of applications capable of windowing, as shown in fig. 7 (2), window 702 includes information of applications 1-3. Window 702 may also include a control 703, control 703 may be used to add applications that can be windowed.
In the embodiments of the present application, the window may also be called a floating window, a small window, or another name, which does not constitute a limitation on the function of the window.
After detecting that the user clicks on application 1 in the window 702, the mobile phone determines that the user wants to call up the widget of application 1, then the mobile phone may determine a key operation area 705 and a key operation area 706 of the lower application (video application a) according to a preset policy, and determine a target placement area of the widget of application 1 according to the key operation area 705 and the key operation area 706. The specific implementation of the preset policy and determining the key operation area of the lower application according to the preset policy will be described in detail below.
The key operation area may be defined as an area where the user has a higher operation probability in the interface of the lower application. In other embodiments, the critical operating area may also be defined as having certain functions (or may be referred to as target functions, functions that the user desires to perform). Exemplary target functions include, but are not limited to, an interface skip function, an enter character function, a skip to specified interface function. Of course, the key operation area may have other defining manners, and embodiments of the present application are not limited.
For example, the area where the functions of the virtual keyboard, dial, back button, navigation button, search box, more options, etc. are located may be defined as a key operation area. These predefined critical operating areas may also be referred to as preset areas.
As another example, as shown in fig. 8B (2), the "my" tab is an area that the user is likely to click on; typically, the user can trigger the mobile phone to jump to the "my" interface shown in fig. 8B (3) only by clicking the "my" tab, i.e., the "my" tab area has an irreplaceable function.
As a possible implementation manner, the mobile phone determines that the target placement area of the small window is an area other than the key operation area 705 and the key operation area 706, so that the small window is prevented from shielding the key operation areas of the lower-layer application, and the problem of low multitasking efficiency caused by such shielding is avoided.
After determining the target placement area of the widget of application 1, the mobile phone may display the widget 704 of application 1 in the target placement area, as shown in fig. 7 (3). In the scheme shown in fig. 2, the small window shields the key operation area 706, so the user still needs to manually adjust the position of the small window, and the multitasking efficiency is low. In contrast, in the scheme corresponding to fig. 7, the mobile phone can automatically and intelligently adjust the position of the small window according to the preset policy, which reduces the multitasking delay caused by the user manually adjusting the position of the small window, allows the mobile phone to perform multitasking more smoothly, and improves the multitasking efficiency.
As another example, as shown in fig. 7 (4), after detecting the operation of clicking the search box 707 by the user, the mobile phone determines that the virtual keyboard needs to be called up. In this scenario, to avoid obstruction of the virtual keyboard by the widget 704, which may be a hindrance to the user's character input, the mobile phone may intelligently adjust the location of the widget 704, moving the widget 704 to a location outside the virtual keyboard area 708 and the key operation area 705.
Several widget usage scenarios, and the schemes for intelligently adjusting the widget in each scenario, are exemplified below.
Scene 1: a widget play scene in a video application, that is, the user performs other operations in the video player while playing a video through the video player widget, taking viewing recent promotions of the video player as an example.
Illustratively, as shown in fig. 8A (1), the mobile phone currently plays the video 801 through the video player landscape screen, and the user can click on the widget play option 802 to continue playing the video 801 in the widget. After detecting the operation of clicking the widget play option 802 by the user, the mobile phone determines that widget play is required, and then the mobile phone needs to determine the target placement area of the widget in the screen. As one possible implementation, the handset invokes the SWLD through AMD, which detects that the critical operation area includes an area 803 and an area 804 as shown in (2) of fig. 8A, and determines a target placement area of the widget according to the area 803 and the area 804. Thereafter, the SWLD returns the target placement area of the small window to the AMD, which adjusts the small window to the target placement area shown in fig. 8A (2). As shown in fig. 8A (2), the placement area of the widget is intelligently adjusted by the mobile phone, so that the widget does not block the key operation area 803 and the area 804, and the problem of low multi-task processing efficiency caused by the fact that the widget blocks the key operation area is avoided.
Then, as shown in (2) of fig. 8A, when the user is detected to click on the my tab at the bottom of the video player, the mobile phone can display the interface shown in (3) of fig. 8A, and the widget just obscures the option of "member center". The user can manually drag the widget 801 downward, and after detecting the operation of dragging the widget 801 by the user through the AMD, the mobile phone adjusts the placement area of the widget to the area shown in (4) of fig. 8A. In the interface shown in fig. 8A (4), a "member center" option is displayed, and the user can perform a subsequent operation based on this option. For example, the user may click on "Member center" to browse the Member center's preferential activity in the pop-up interface.
As a possible implementation, after the AMD detects an operation of dragging the widget 801 by the user, the operation information may be sent to the SWLD, which may store the operation information and use the operation information to update the prediction model. Optionally, the AMD may further send an application state corresponding to the operation information to the SWLD, where the SWLD may store the application state, and use the operation information and the corresponding application state to update the prediction model.
Optionally, the operation information for the widget operation (may be simply referred to as widget operation information) includes, but is not limited to, scaled size information, information of a drag position. The prediction model, which is a model provided by the embodiment of the application, can be used for predicting the target placement area of the small window. Specific training of the predictive model and methods of use are described in detail below.
Fig. 8B shows the process of the handset adjusting and displaying the widget in the subsequent widget play scene, provided that the handset has updated the predictive model according to the scene of fig. 8A. As shown in fig. 8B (1), in the case of playing the video 801 through the video player landscape screen, the mobile phone detects the operation of clicking the widget play option 802 by the user, and adjusts the widget to the position shown in fig. 8B (2) to avoid shielding the key operation area 803 and the key operation area 804.
After that, the user is detected to click on the 'My' tab at the bottom of the video player, the mobile phone determines that the state of the lower-layer video player is changed and the interface of the lower-layer video player needs to be refreshed, and then the mobile phone needs to redetermine the placement area of the small window 801 so as to avoid the small window 801 from shielding the key area of the lower-layer video player as far as possible. In some examples, the handset invokes the SWLD through AMD, which predicts that the target placement area of the widget is the last moved location of the user in my interface 805 (such as the location the user moved the widget to in the scenario of fig. 8A), i.e., the area shown in (3) of fig. 8B, according to the updated predictive model. And, the SWLD returns the target placement area of the small window to the AMD. Then, after the handset jumps to the "My" interface as shown in FIG. 8B (3), the AMD may control the movement of the widget to the target placement area. Subsequently, as shown in (3) of fig. 8B, the user can click on "member center" and browse the preferential activity of the video player in the displayed interface of "member center". In the scenario corresponding to fig. 8B, since the prediction model can be continuously updated based on the historical operation of the user on the widget, the target placement area of the widget is predicted according to the prediction model, and the widget position is intelligently and automatically adjusted, so that the user does not need to manually adjust the widget position in the whole process, the operation of the user can be simplified, and the efficiency of the multitasking is improved. Meanwhile, because the small window operation information is obtained based on the use habits of different users, the position of the small window preferred by the user can be predicted more accurately and individually according to the small window operation information, and the man-machine interaction performance is improved.
In other embodiments, as shown in fig. 8C (1) and (2), detecting that the user clicks on the my tab at the bottom of the video player, the phone jumps to the my interface as shown in fig. 8C (3), and then adjusts the placement area of the widget 801 from the area shown in fig. 8C (3) to the area shown in fig. 8C (4).
Fig. 9 shows a widget adjustment scheme in a widget play scene. As shown in fig. 9 (1)-(2), the user windows the video player, and the mobile phone places the widget in a default lower-right-corner position. As shown in fig. 9 (3), the user clicks the bottom "my" tab, triggering the mobile phone to display the interface shown in fig. 9 (4). In the interface shown in fig. 9 (4), the widget blocks the "member center" option, and the user needs to manually drag the widget down to the position shown in fig. 9 (5) so as to expose the "member center" option. It can be seen that in such an arrangement, the user needs to adjust the position of the widget twice. In the technical solution of the embodiment of the present application, for example in the scenario shown in fig. 8A, the key operation area can be determined, so shielding of the key operation area can be avoided when the window is miniaturized, and the operation of manually adjusting the position of the small window by the user shown in fig. 9 (2) is thus reduced. Therefore, compared with the technical scheme shown in fig. 9, which requires the small window to be adjusted manually multiple times, on the one hand, the frequency of the user manually adjusting the position of the small window can be reduced, and the human-computer interaction performance is improved. On the other hand, even if the user adjusts manually, the information of the window adjustment operation can be used as input for subsequently updating the prediction model, so that the window placement area can be predicted more accurately according to the continuously updated prediction model. In the same use scenario (for example, in the same application), the small window placement area can then be intelligently and automatically predicted and adjusted, the frequency of manually adjusting the small window in subsequent use is reduced, and the convenience of use is improved. The widget position and size are related to the position and size that the user previously actively adjusted.
For another example, in the scenario shown in fig. 8B, since the key operation region can be determined, shielding the key operation region can be avoided at the time of the window reduction, and further, the operation of manually adjusting the position of the window by the user shown in fig. 9 (2) is reduced. In addition, the corresponding small window target placement areas in different operation conditions can be predicted according to the prediction model, so that the operation of manually adjusting the small window positions by a user shown in (4) of fig. 9 is reduced.
Scene 2: simultaneous video playback using instant messaging applications
Illustratively, when the mobile phone displays the interface content of the video player in the form of widget play and detects the user's operation of windowing the instant messaging application (such as clicking the instant messaging application option in the control 702 shown in fig. 7 (2)), the mobile phone calls the SWLD through the AMD. Based on the prediction model, the SWLD predicts that, in the instant messaging application, the target placement area of the widget 1001 is the middle-lower part of the screen shown in fig. 10 (1), filling the screen in the horizontal direction. The SWLD returns the information of the target placement area of the widget 1001 to the AMD, and the AMD controls the widget 1001 to be adjusted and enlarged and then moved to the target placement area shown in fig. 10 (1). It can be seen that, on the one hand, the adjusted widget does not cover key operation areas such as the bottom tabs; on the other hand, the user does not need to manually adjust the widget from a position covering the bottom tabs (such as the lower right corner) to the target placement area shown in fig. 10 (1), which reduces the complexity of the user operation.
After that, upon detecting that the user clicks the bottom "found" tab 1002, the mobile phone can jump to the interface shown in fig. 10 (2). The user may click the "group" option 1003 to trigger the mobile phone to display group information, so as to view the group information.
The user comments on the group information of interest. As shown in fig. 10 (3), the mobile phone can pop up the comment control 1005 when detecting the user's click on the control 1004. Upon detecting the user clicking the comment control 1005, the mobile phone determines that a virtual keyboard needs to be called up. To avoid the widget 1001 obscuring the virtual keyboard after the virtual keyboard is pulled up, in some examples, the mobile phone may redetermine the placement area of the widget 1001 before the virtual keyboard is pulled up. After determining the placement area of the widget 1001, the mobile phone pulls up the virtual keyboard 1006 and adjusts the position of the widget 1001 to the redetermined placement area, such as the position shown in fig. 10 (3). In this manner, the user can directly input the characters required for the comment using the virtual keyboard 1006 without manually adjusting the position of the widget in order to input the characters.
In the above operations, the model has been fully trained on the user's historical behavior in WeChat, so the small window can be directly positioned at its optimal placement position in WeChat without manual adjustment, which facilitates the user's operation.
Scene 3: when the user wants to continuously share views on a book with friends while reading an electronic book, the chat application can be windowed, so that the user can chat with friends while reading in the electronic book application. Illustratively, upon detecting that the user clicks application 2 in the control 702 shown in fig. 11A (1), the mobile phone learns that the user wants to call up the widget of the instant messaging application 2, and the mobile phone can then display the windowed chat application page 1101 shown in fig. 11A (2). At this time, the chat window is large and located in the middle, and most of the text content is blocked. The user can manually shrink the chat page and move it to the upper-right-corner position shown in fig. 11A (3). The upper-right-corner position may be defined as a non-critical display area. In the embodiment of the present application, the mobile phone can record the window-shrinking and window-moving operations performed by the user, and update the prediction model according to the recorded operation information.
Subsequently, when the user opens the chat interface in the electronic book application again, the mobile phone can predict the target placement area of the small window according to the updated prediction model, so that the probability of manually adjusting the position of the small window by the user is reduced. By way of example, FIG. 11B illustrates a scenario in which the user opens the chat interface again in the e-book application. Specifically, upon detecting that the user clicks on application 2 in control 702 as shown in fig. 11B (1), the handset determines that the user wants to call up a widget of application 2. Then, the mobile phone may call the SWLD through the AMD, where the SWLD determines that the target placement area of the small window is upper right and the size is the minimum available value in the e-book application, and returns the information of the target placement area to the AMD, and the AMD controls to move the scaled-down chat window 1101 to the target placement area shown in (2) of fig. 11B.
Therefore, under the condition that the history operation record of the user on the small window exists, the electronic equipment can continuously update the prediction model based on the history operation of the user, and then the small window can be accurately adjusted to a target area (key operation areas such as a virtual keyboard are not blocked) according to the prediction model, manual adjustment of the user is not needed, and the operation of the user is facilitated.
Scene 4: when learning online class, for example, a user often has some thinking and ideas during learning online class, and needs to record quickly, so that the software for recording such as memos can be windowed, and the user can record own ideas while playing online class. Illustratively, as shown in fig. 12A (1), when the user plays the web lesson video 1202, detecting the user clicking on the operation of the memo application 3 in the control 702, the handset determines that the user wants to call up the widget of the memo application 3. Then the handset can display a small window 1201 of the memo as shown in (2) of fig. 12A on the video playing interface. At this point the memo widget 1201 obscures part of the page content. As shown in (2) of fig. 12A, the user can narrow the memo widget and move to the right as appropriate. The user can make fine adjustments as needed to maximize the exposure of the memo while not obscuring the video page content. After detecting the adjustment operation of the user on the widget 1201, the mobile phone may record information corresponding to the adjustment operation (such as, but not limited to, the adjusted widget size and widget position information), and adjust the reduced widget 1201 to the position shown in (3) of fig. 12A according to the user's widget adjustment operation. In this way, the user can view the complete video interface and record ideas and notes based on the video interface content.
In the scenario shown in fig. 12A, the user performs widget moving and scaling operations, and the mobile phone may record the user's operations on the widget, update the prediction model according to the operation information, use the updated prediction model to predict the target placement area of the widget in the same application state (for example, opening the memo widget again on the online class interface), and adjust the position of the widget to the target placement area.
Illustratively, FIG. 12B illustrates a process for adjusting the position of a widget using an updated predictive model. As shown in fig. 12B (1), when the mobile phone detects that the user clicks the memo application 3 in the control 702 while playing the web class video 1202, the mobile phone may call the SWLD by the AMD, and the SWLD predicts that the target placement area of the small window is the right edge (does not cause shielding of the web class interface) in the play state of the web class application by the user based on the prediction model, and the SWLD returns the information of the target placement area of the small window to the AMD, and the AMD controls the memo small window to the position shown in fig. 12B (2). Therefore, in the scene shown in fig. 12B, based on the prediction model, the mobile phone can automatically and intelligently zoom the memo widget to an accurate target area, and manual adjustment of a user is not needed, so that the operation of the user is simplified.
Of course, the widget scene is not limited to the above-listed types, and when the electronic device needs to display the UI, for the UI that may cause shielding to the key operation area of the lower layer application, the technical solution of the embodiment of the present application may be used to adjust the position and/or size of the UI, so as to reduce shielding to the key operation area of the lower layer application by the UI as much as possible.
As follows, specific technical details of the widget adjustment method related to each widget scene will be described. Taking an electronic device as a mobile phone as an example, as shown in fig. 13, a small window adjustment method in an embodiment of the present application includes the following steps:
s101, the electronic equipment displays a first interface.
Illustratively, as shown in fig. 7 (1), the mobile phone displays the first interface 701 of the video application in a full screen display manner.
S102, the electronic equipment detects a first instruction.
The first instruction is used to instruct calling up an upper-layer floating window.
Alternatively, in a multitasking scenario, a floating window may be displayed at an upper layer of some applications. Alternatively, the floating window and the underlying interface may belong to the same application or different applications. For example, as shown in fig. 7, the first interface 701 at the lower layer and the floating window 704 at the upper layer belong to the video application. For another example, as shown in fig. 10, the first interface of the lower layer is an interface of the chat application, and the floating window of the upper layer is a floating window of the video application.
Alternatively, the size of the upper floating window may be set by the system or adjusted by the user.
Alternatively, the first instruction may be an instruction corresponding to one or more operations.
For example, after detecting the user's operation to slide to the left as shown in fig. 7 (1), the mobile phone displays a control 702 as shown in fig. 7 (2). Upon detecting that the user clicks on the application 1 option in control 702, the handset determines that the user wants to call up the widget of application 1. Here, the left-hand slide operation by the user and the operation of clicking on the application 1 option correspond to the first instruction. The first instruction is used for indicating that a small window corresponding to the application 1 is called.
For another example, as shown in fig. 8A (1), after detecting that the user clicks on the widget play option 802, the handset determines that the user wants to call out the widget of the video player. Here, the operation of clicking the widget play option 802 by the user corresponds to the first instruction. The first instruction is for instructing to call up a widget of the video player, i.e. to play the video in the form of a widget.
As one possible implementation, after detecting the first instruction input by the user, the electronic device may send a notification indicating that the widget is generated to the AMD, and the AMD monitors the notification to determine that the user wants to call out the floating window.
S103, the electronic equipment obtains a first target placement area (which can be simply called a first area) of the floating window according to the first instruction.
Alternatively, the first target placement area of the widget may be a non-critical area in the screen. The non-critical areas include non-critical display areas and/or non-critical operation areas. Non-critical operating regions may refer to regions where the probability of operation is below a certain threshold. The non-critical display area may be defined as a display area of some non-important information, or as a display area not preferred by the user, or as an unnecessary display area. Typically, occlusion of non-critical display areas has less impact on the visual experience of the user. Conversely, the area except the key area in the screen may be a key area, and the key area may be defined as an area where the user needs to touch by using the lower layer application, or an area where the operation probability is higher than the threshold value when the user uses the lower layer application, or a necessary operation area.
Illustratively, as shown in fig. 7 (3), the probability that the user operates the region 705 and the region 706 is high, and the probability that regions outside the region 705 and the region 706 are operated is low; therefore, the regions outside the region 705 and the region 706 can be regarded as non-key operation regions, and the region 705 and the region 706 can be regarded as key operation regions. The mobile phone may select the first target placement area of the widget within a non-key operation area.
Note that, the non-critical operation area in the embodiment of the present application is not limited to a blank area in the interface, or an area with a large blank ratio. For example, as shown in fig. 8B, the blank area in the browsing area in the middle of the screen is smaller than the blank area in the key operation area 803, and the technical solution of the embodiment of the present application may adjust the small window to the middle browsing area, so as to avoid shielding the key operation area 803. It can be seen that, in the embodiments of the present application, the small window is not simply adjusted to the blank area in the interface, but the small window needs to be adjusted to the display area with less influence on the user operation.
Optionally, the key region may be further divided. For example, taking the division of the key operation area as an example, the key operation area can be further divided into regions of several levels, which represent the importance degree of each region, and different regions have different penalty coefficients. Illustratively, within the key operation regions, if the importance of the tabs at the bottom or top is higher (their operation probability is higher), the tab areas may be set as first-level key operation regions; if the importance of the return key is lower, the return key may be set as a second-level key operation region.
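For illustration only, such leveled key regions and their penalty coefficients could be represented as in the following sketch. The names, levels, penalty values, and coordinates are all assumptions made for the example; the embodiment does not prescribe any particular data structure.

```python
from dataclasses import dataclass

@dataclass
class Region:
    left: int
    top: int
    right: int
    bottom: int
    level: int = 0        # 0 = non-key region; 1 = first-level key region; 2 = second-level ...
    penalty: float = 0.0  # penalty coefficient paid if a widget covers this region

def overlaps(a: Region, b: Region) -> bool:
    return a.left < b.right and b.left < a.right and a.top < b.bottom and b.top < a.bottom

def occlusion_cost(widget: Region, key_regions: list[Region]) -> float:
    """Total penalty a candidate widget position would incur by covering key regions."""
    return sum(r.penalty for r in key_regions if overlaps(widget, r))

# Hypothetical example: the bottom/top tab bars are first-level key regions with a high
# penalty, while the return key is a second-level key region with a lower penalty.
bottom_tabs = Region(0, 2200, 1080, 2340, level=1, penalty=1.0)
top_tabs    = Region(0, 180, 1080, 300, level=1, penalty=1.0)
return_key  = Region(0, 60, 160, 180, level=2, penalty=0.4)
```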
In this embodiment of the present application, the manner in which the mobile phone determines the first target placement area of the upper floating window at least includes the following several ways.
Mode 1: the mobile phone determines a critical area and/or a non-critical area based on the prior data, and determines a first target placement area of the small window in the non-critical area.
As one possible implementation, the system opens a setup interface for the application for the critical operating area. For example, when an application is installed, the application informs the system of critical and/or non-critical areas of the application. The mobile phone can then determine a non-key area in the screen based on the area information notified by the application, and select a part of the area from the non-key area as a first target placement area of the small window, so as to reduce the probability that the small window shields the key area.
Alternatively, different interfaces of an application may correspond to different critical/non-critical areas. Illustratively, as shown in fig. 8B (2), in the "home" interface of the video player, the key operation area includes an area 803 and an area 804, and as shown in fig. 8B (3), in the "my" interface of the video player, the key operation area includes an area 806 and an area 803. When an application is installed, the application may inform the system of critical/non-critical area information for different interfaces. Accordingly, for the same lower application, the target placement area of the upper application widget may be different in the scene where the lower application displays different interfaces.
In this mode, because the key areas and/or non-key areas are set based on the notification of each application, the area information of each application tends to match the characteristics of that application, the misjudgment rate of key/non-key areas is lower, and the widget position obtained based on these areas is more accurate.
As another possible implementation, for each lower layer application, a partial region is preset as a critical region. Illustratively, the screen top region and/or the screen bottom region are preset as key regions. For example, as shown in fig. 7 (3), the bottom area 706 and the top area 705 of the screen may be preset as key areas, so that when determining the first target placement area of the widget, two key areas, i.e., the key area 705 and the key area 706, may be excluded, so as to avoid the widget from blocking the key areas.
It should be noted that the preset key area may be other, which is not limited in the embodiment of the present application.
Alternatively, different critical/non-critical areas may be set for different underlying applications. For example, as shown in (3) of fig. 7, for applications such as video players with bottom tabs, considering that the probability of the user operating the bottom tab is high, the region corresponding to the bottom tab may be taken as a key region. For another example, for applications without a bottom tab, the bottom area may not be considered a critical area.
For example, assuming that when the video player is installed, the video player informs the system that the key area of the video player is the bottom tab area and the top tab area, then, as shown in fig. 8B (1), after detecting the instruction (i.e., the first instruction) that the user clicks the widget play option 802, the mobile phone may determine the first target placement area of the widget 801 according to the key area 803 and the area 804 set by the video player, as shown in fig. 8B (2), so as to avoid the widget 801 blocking the key area 803 and the area 804 of the video player.
For example, the first area of the floating window may be determined based on the preset area (such as, but not limited to, a virtual keyboard, a dial, etc.).
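As a rough illustration of mode 1 only, a candidate widget position could be searched for on a grid so that it avoids the key regions declared by the lower-layer application or preset by the system. In the sketch below, the rectangle format, the grid step, the screen and widget sizes, and the fallback behaviour are assumptions, not part of the embodiment.

```python
def first_target_area(screen_w, screen_h, widget_w, widget_h, key_regions, step=40):
    """Mode 1 sketch: pick a widget area (left, top, right, bottom) that does not cover
    the key regions declared by the lower-layer application or preset by the system.
    Falls back to the least-blocking candidate if every position covers something."""
    def overlap(a, b):
        w = min(a[2], b[2]) - max(a[0], b[0])
        h = min(a[3], b[3]) - max(a[1], b[1])
        return max(w, 0) * max(h, 0)

    best, best_cost = None, float("inf")
    for top in range(0, screen_h - widget_h + 1, step):
        for left in range(0, screen_w - widget_w + 1, step):
            cand = (left, top, left + widget_w, top + widget_h)
            cost = sum(overlap(cand, k) for k in key_regions)  # total shielded key area
            if cost == 0:
                return cand
            if cost < best_cost:
                best, best_cost = cand, cost
    return best

# Hypothetical usage: a 360x640 widget on a 1080x2340 screen, avoiding a top tab bar
# (y 180-300) and a bottom tab bar (y 2200-2340) declared by the lower-layer application.
print(first_target_area(1080, 2340, 360, 640, [(0, 180, 1080, 300), (0, 2200, 1080, 2340)]))
```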
Mode 2: the display content characteristics of the underlying application determine non-critical/critical areas in the screen and determine a first target placement area for the widget from the non-critical areas.
In general, the content browsing area and the operation area of most applications are separated. The browsing area is mainly used for presenting rich contents, and the operation area is mainly used for realizing interactive operation functions such as returning, closing and switching. When a user browses content, the display content of the browsing area often changes significantly, while the display content of the operation area often changes little or no.
Based on these characteristics, a key-region judgment scheme based on image recognition can be designed: during multitasking, multiple images displayed on the screen are collected, and by comparing and analyzing the multiple images, an image region whose change rate is smaller than a certain threshold (such as 10%) is judged to be a key region, while an image region whose change rate is greater than or equal to the threshold is judged to be a non-key region. For example, as shown in fig. 14A (1) and (2), in the video player interface, the user typically performs slide-up and slide-down operations. The display content in the browsing area 809 is typically refreshed with the up-and-down sliding operation, whereas the display contents in the operation area 803 and the area 804 generally do not change with the up-and-down sliding operation. Therefore, the region 803 and the region 804 can be determined as key regions, and the region 809 can be determined as a non-key region.
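A minimal sketch of this image-based judgment is shown below, under several assumptions (grayscale screenshots available as NumPy arrays, a fixed grid of blocks, a 10% change-rate threshold); it is illustrative only and is not the embodiment's specific algorithm.

```python
import numpy as np

def classify_regions(frames: list, grid=(8, 4), threshold=0.10):
    """Split the screen into grid blocks and classify each block by its change rate.

    frames: grayscale screenshots (H x W uint8 arrays) sampled while the user browses
    the lower-layer interface. Blocks whose average change rate stays below `threshold`
    are treated as key (operation) regions; the rest are non-key (browsing) regions."""
    h, w = frames[0].shape
    rows, cols = grid
    bh, bw = h // rows, w // cols
    # Per-pixel change rate in [0, 1]: fraction of consecutive samples in which the pixel changed.
    diffs = [np.abs(frames[i + 1].astype(int) - frames[i].astype(int)) > 16
             for i in range(len(frames) - 1)]
    change = np.mean(diffs, axis=0)
    key_regions, non_key_regions = [], []
    for r in range(rows):
        for c in range(cols):
            rect = (c * bw, r * bh, (c + 1) * bw, (r + 1) * bh)  # (left, top, right, bottom)
            block = change[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            (key_regions if block.mean() < threshold else non_key_regions).append(rect)
    return key_regions, non_key_regions
```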
Still further exemplary, as shown in fig. 14A (1), detecting the user's operation of clicking on the top tab "movie" option on the "episode" interface, the handset displays the "movie" interface as shown in fig. 14A (2). The interface shown in fig. 14A (2) is not changed in the region 803 corresponding to the bottom tab, but is changed in the region 804 corresponding to the top tab, compared with the interface shown in fig. 14A (1).
Alternatively, different interfaces of an application may correspond to different critical/non-critical areas. Illustratively, as shown in fig. 8B (2), in the "home" interface of the video player, the key operation area includes an area 803 and an area 804, and as shown in fig. 8B (3), in the "my" interface of the video player, the key operation area includes an area 806 and an area 803. The key area/non-key area corresponding to the lower layer application under different interface conditions can be counted. Accordingly, for the same lower application, the target placement area of the upper application widget may be different in the scene where the lower application displays different interfaces.
In some embodiments, although the top tab area 804 changes, its change rate is small, and it may therefore be regarded as unchanged or not significantly changed. Alternatively, even though the top tab area 804 changes, it may be determined, based on multiple image sampling results, that in the interface shown in fig. 14B (2), if the user does not switch tabs (i.e., switches neither the bottom tab nor the top tab), the top tab 804 will not typically change during subsequent browsing (e.g., sliding up and down). Thus, the top tab region 804 may still be regarded as a key region, either based on the multiple image sampling results or based on the small change rate of the top tab region 804.
Further exemplary, as shown in fig. 14B (2), detecting the user's operation of clicking the bottom tab "education" option on the "home" interface, the mobile phone displays the "movie" interface as shown in fig. 14B (3). The interface shown in fig. 14B (3) is not significantly changed (i.e., the rate of change is smaller) in the region 803 corresponding to the bottom tab, and is changed in the region 804 corresponding to the top tab, as compared with the interface shown in fig. 14B (2). In this case, the image may be acquired a plurality of times, for example, a plurality of images displayed on the screen may be acquired while the user slides up and down in the refreshing interface in the interface shown in (3) of fig. 14B. Thereafter, it may be determined from the plurality of images that the top tab area 807 has not changed during the process such as sliding the refresh interface up and down as shown in fig. 14B (3), and then the mobile phone may use the top tab area 807 as a key area.
Therefore, through the scheme, the key operation area of the lower application is judged by analyzing the user behaviors and the image characteristics of the page, and the non-key operation area can be accurately identified, so that the accuracy of the small window placement area prediction under the corresponding lower application state is improved. Furthermore, the critical area/non-critical area can be corrected by collecting a plurality of images displayed on the screen, so that the probability of misjudging the critical area/non-critical area can be reduced, and more accurate critical area/non-critical area can be further obtained.
For example, assuming that the key area of the video player is determined to be the bottom tab area and the top tab area by collecting multiple images displayed on the screen for analysis, after detecting the instruction (i.e., the first instruction) that the user clicks the widget play option 802 as shown in fig. 8B (1), the mobile phone may determine the first target placement area of the widget 801 according to the key area 803 and the area 804 of the video player as shown in fig. 8B (2) to avoid the widget 801 blocking the key area 803 and the area 804 of the video player.
In other embodiments, the key operation area/the non-key operation area may be determined based on image analysis, or multiple images may be acquired and analyzed in the process of sliding the lower application interface left and right, where the interface area with no change or a small change rate (for example, less than a certain threshold value) in the sliding process is defined as the key operation area, and the interface area with a large change rate is defined as the non-key operation area.
The user of the electronic device may be referred to as a first user, and information historically obtained by the first user operating an interface (including but not limited to a second interface) may be referred to as behavior information of the first user. The behavior information of the first user may also include other information.
Wherein operating the second interface comprises: and performing sliding operation on the second interface.
Information obtained by the first user historically operating the second interface includes: a fourth area with the change rate of the elements in the second interface meeting the first condition, wherein the fourth area is used for placing a floating window; the second region is located within the fourth region. The first condition is used to characterize a higher rate of change of the element in the interface.
For example, as shown in fig. 14A (2), if the second interface is the interface 809, the interface 809 may be subjected to a sliding operation to determine a fourth region in which the change rate of the element in the interface 809 satisfies the first condition. In fig. 14A (2), the fourth region is a region other than the region 804 and the region 803, in which the change rate of the element is high, and can be used as a non-critical operation region. When the small window is placed, the small window can be placed in the non-key operation area or in an area which is partially overlapped with the non-key operation area.
Mode 3: and the mobile phone determines a key area/non-key area of the lower application according to the prediction model of the terminal side, and determines a first target placement area of the small window in the non-key area. The prediction model at the end side refers to a prediction model in the terminal.
The prediction model may be trained based on information of various historical operation behaviors of the user on the widget (or simply referred to as widget operation information) in various application states. In this way, when the terminal has a corresponding application state next time, the terminal can predict the first target placement area of the small window according to the prediction model. That is, the target placement position of the application widget may be determined according to the user's historical operation behavior to avoid the occlusion and influence on the underlying application as much as possible.
The training process of the prediction model in the embodiment of the present application is described first. As shown in fig. 15, training a prediction model for identifying the first target placement area of a widget requires providing N samples (N is a positive integer), each including a state of the lower-layer application. Optionally, a sample may further include the widget operation information corresponding to the lower-layer application state. Optionally, the training samples may further include labels corresponding to the various application states (representing the first target placement area of the widget corresponding to each application state). Optionally, the training samples may also include some prior data, such as preset key/non-key regions. The prediction model is obtained by training on the plurality of samples.
Optionally, before training the prediction model, data such as training samples may be processed, for example, smoothed, normalized. Wherein the normalization process may reduce the complexity of the algorithm. The smoothing process may further include operations such as noise reduction, fitting, etc., to reduce the effects of statistical errors.
Optionally, in order to improve the recognition accuracy of the prediction model, the prediction model may be evaluated and tested. When the recognition rate of the prediction model reaches a certain threshold, training of the prediction model is complete. When the recognition rate of the prediction model is low, the prediction model can continue to be trained until its recognition accuracy reaches the threshold.
Alternatively, the prediction model may be trained on the device side (such as a terminal like a mobile phone), on the cloud side (such as a server), or jointly by the device and the cloud. The training may be offline training or online training. The embodiment of the present application does not limit the specific training manner of the prediction model. Moreover, multiple types of models may be used for training, for example, a machine/deep learning model, a reinforcement learning model, or another type of prediction model. Subsequently, a lower-layer application state can be input into the trained prediction model, and the prediction model outputs the first target placement area of the widget in the corresponding lower-layer application state.
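For illustration only, the sketch below fits such a model on a handful of hypothetical samples using scikit-learn. The feature encoding of the application state, the k-nearest-neighbours regressor, and the sample values are all assumptions made for the example; the embodiment does not prescribe any particular model or library.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical training samples: each lower-layer application state is encoded as
# (app id, interface id, landscape flag, virtual-keyboard flag); the label is the
# widget area the user finally settled on, as (left, top, width, height) in [0, 1].
states = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [2, 0, 0, 1],
    [2, 0, 0, 0],
    [1, 0, 1, 0],
], dtype=float)
widget_areas = np.array([
    [0.60, 0.05, 0.35, 0.20],
    [0.60, 0.70, 0.35, 0.20],
    [0.05, 0.40, 0.30, 0.15],
    [0.05, 0.05, 0.30, 0.15],
    [0.60, 0.05, 0.35, 0.20],
])

# Normalization keeps the state features comparable and reduces algorithm complexity.
scaler = StandardScaler().fit(states)
model = KNeighborsRegressor(n_neighbors=3).fit(scaler.transform(states), widget_areas)

# Inference: given the current lower-layer application state, predict the first target
# placement area of the widget; the model is refit as new operation samples arrive.
current_state = scaler.transform([[1, 1, 1, 0]])
print("predicted (left, top, width, height):", model.predict(current_state)[0])
```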
Optionally, the prediction model may also be updated after the prediction model is trained. Alternatively, the predictive model may be updated periodically. Alternatively, the predictive model may be updated when there is new underlying application state and/or new widget operation information. The embodiments of the present application do not limit the timing and conditions of updating the prediction model. When updating the prediction model, it is also necessary to input a new sample into the prediction model. The newly added samples include the state of the underlying application. Optionally, the newly added sample further includes small window operation information corresponding to the lower layer application state. Optionally, the newly added sample may further include a tag corresponding to the lower layer application state.
Illustratively, the state of the lower-layer application is as shown in fig. 8A (3), i.e., the lower-layer application displays the interface corresponding to the "my" tab. In this lower-layer application state, when the mobile phone detects the user's drag operation on the upper-layer widget 801, the mobile phone can input the operation information of the dragged widget and the state of the lower-layer application (i.e., displaying the interface corresponding to the "my" tab) as a new sample into the prediction model to be updated, train the prediction model according to the flow shown in fig. 15, and obtain the updated prediction model. The updated prediction model has the ability to predict the first target placement area of the widget in the same application state (i.e., when the interface corresponding to the "my" tab is displayed).
Then, in the subsequent multitasking process, if the corresponding state of the lower-layer application is detected (for example, the interface corresponding to the my tab is displayed) or the lower-layer application is detected to have the corresponding state, the mobile phone can predict the first target placement area of the widget according to the updated prediction model. Illustratively, as shown in (2) of fig. 8B, the handset detects that the user clicks on the my option and determines that the user wants to call out the interface corresponding to the my tab. In order to reduce the occlusion of the widget to the critical area in the underlying application ("my" tab corresponding interface), the handset needs to determine the first target placement area of the widget according to the updated predictive model. In this example, as shown in fig. 8B (3), the handset determines the first target placement area of the widget as the location to which the user last moved the widget (e.g., the user-adjusted widget location in the scene of fig. 8A).
Alternatively, the prediction model provided by the embodiment of the application may be a parametric model or a non-parametric model.
Taking the prediction model as a non-parametric model as an example, the prediction model may include, but is not limited to, a combination of one or more of the following:
1. Recent-priority model: the first target placement area that the user adjusted most recently is taken as the prediction result. That is, the second area is the most recently obtained area among the multiple optional areas (such as the third area or the fourth area) for placing the floating window.
For example, in the same application state, for example, all interfaces a of the lower application are opened, the user adjusts the upper application window to the area 1 of the screen when opening the interface a of the lower application for the first time, and adjusts the upper application window to the area 2 of the screen when opening the interface a of the lower application for the second time. Then, when the user wants to open the interface a of the lower application for the third time, the handset can adjust the widget to the area 2 of the screen according to the predictive model.
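A minimal sketch of the recent-priority model is given below, under the assumption that the adjustment history is keyed by an (application, interface) state and stored newest-last; the key format and area tuples are hypothetical.

```python
from collections import defaultdict

# Hypothetical history of user-adjusted widget areas per lower-layer application state;
# each area is (left, top, right, bottom) and the newest adjustment is appended last.
history = defaultdict(list)
history[("video_player", "interface_a")] += [
    (600, 50, 960, 690),     # area 1: adjustment when interface A was opened the first time
    (600, 1500, 960, 2140),  # area 2: adjustment when interface A was opened the second time
]

def predict_recent_priority(state):
    """Recent-priority model: return the area the user adjusted the widget to most recently."""
    areas = history.get(state)
    return areas[-1] if areas else None

# The third time interface A is opened, the widget is placed in area 2.
print(predict_recent_priority(("video_player", "interface_a")))
```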
2. Most-frequent-priority model: if the user has adjusted the widget multiple times, a weighted score is calculated for the first target placement area obtained from each adjustment, and the first target placement area with the highest weighted score is selected as the prediction result. In this case, the second area is the area with the highest weighted score among the plurality of selectable areas. In other words, the second area is the area whose frequency satisfies a second condition among the plurality of selectable areas for placing the floating window, where the second condition characterizes a higher frequency.
Alternatively, for each adjusted first target placement area, a corresponding weighted score may be determined according to the following formula:
W_i = (Σ_j S_ij) / S_i
where W_i is the weighted score of the first target placement area i, S_ij is the area of the overlapping part of the first target placement area i and the first target placement area j, and S_i is the area of the first target placement area i.
By way of example, FIG. 16 shows an example of first target placement areas A-D and the calculation formulas of their respective weighted scores.
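As a non-limiting sketch, and assuming the reconstruction W_i = (Σ_j S_ij) / S_i above together with axis-aligned rectangular placement areas, the weighted scores can be computed as follows; the helper names are illustrative.

```python
# Sketch of the most-frequent-priority scoring, assuming
# W_i = (sum over j of S_ij) / S_i, where S_ij is the overlap area between
# placement areas i and j and S_i is the area of placement area i.
# Rectangles are (x1, y1, x2, y2); the sum here includes j == i, so an area
# always fully overlaps itself (an assumption about the formula).

def rect_area(r):
    return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

def overlap_area(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def weighted_scores(areas):
    scores = []
    for ri in areas:
        s_i = rect_area(ri)
        total_overlap = sum(overlap_area(ri, rj) for rj in areas)
        scores.append(total_overlap / s_i if s_i else 0.0)
    return scores


# The historically adjusted area with the highest score is the prediction result.
history = [(30, 20, 210, 150), (40, 30, 220, 160), (500, 400, 680, 530)]
scores = weighted_scores(history)
best_area = history[scores.index(max(scores))]
```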
For example, after detecting the instruction (i.e., the first instruction) that the user clicks on the widget play option 802 as shown in fig. 8B (1), the mobile phone may determine the first target placement area of the widget 801 according to the prediction model, as shown in fig. 8B (2), so as to avoid the widget 801 blocking the key area 803 and the area 804 of the video player. Alternatively, the first target placement area may be a position where the user placed the widget 801 last time, or may be a position where the user placed the widget 801 more frequently, or may be other placement areas of the widget 801 calculated according to the prediction model.
Alternatively, the mobile phone may determine the second area by combining the above ways. For example, the following three criteria can be judged in combination: the second area is the most recently obtained area among the multiple selectable areas; the second area is the area whose frequency satisfies the second condition among the multiple selectable areas; and the second area is the area with the highest weighted score among the multiple selectable areas. That is, the second area is the most recently obtained area of the plurality of selectable areas, and/or the area whose frequency satisfies the second condition, and/or the area with the highest weighted score. A sketch of such a combined decision follows.
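One possible way to combine the criteria is a weighted sum over per-criterion scores; the weights and scoring inputs below are illustrative assumptions, not values given in this application.

```python
# Sketch of combining recency, frequency, and weighted score when choosing
# the second area from the candidate areas. Candidate areas are rectangles
# (tuples), and each dict maps an area to a per-criterion score where larger
# is better. The weights are illustrative assumptions.

def choose_second_area(candidates, recency, frequency, weighted_score,
                       w_recent=0.3, w_freq=0.3, w_score=0.4):
    def combined(area):
        return (w_recent * recency.get(area, 0.0)
                + w_freq * frequency.get(area, 0.0)
                + w_score * weighted_score.get(area, 0.0))
    return max(candidates, key=combined)
```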
The user of the electronic device may be referred to as a first user, and information obtained by the first user historically operating the floating window may be referred to as behavior information of the first user. The behavior information of the first user may also include other information.
Historically operating the floating window includes: when the second interface (for example, the second interface 709 shown in (3) of fig. 7) was displayed historically, adjusting the position and/or size of the floating window displayed on the upper layer of the second interface.
The information obtained from the first user historically operating the floating window includes: the information, obtained when the second interface was displayed historically, about the third area for placing the floating window to which the first user adjusted the floating window; the second area is associated with the third area.
Illustratively, the first user has operated the floating window displayed on top of the interface 709 three times: the first time the floating window was adjusted to the third area shown in (4) of fig. 7, the second time to a third area in the upper left corner, and the third time again to the third area shown in (4) of fig. 7. Subsequently, when the lower-layer interface of the mobile phone is the second interface, the mobile phone can automatically adjust the floating window to an area associated with these third areas based on the user's behavior information, for example to an area within one of the third areas or to an area partially overlapping a third area.
In the embodiments of the present application, the prediction model can be continuously optimized based on the user's operation records on the widget. In addition, as the number of the user's widget operations increases (i.e., the longer the user has used the device), the prediction model predicts the target placement areas of the widget in different application states more and more accurately, increasingly matching the user's habits in the corresponding usage scenario. Therefore, compared with existing non-personalized schemes, the placement area of the widget better matches the user's habits, satisfying the user's personalized widget experience.
Illustratively, the first area is determined based on the behavior information of the first user on the electronic device, or based on the behavior information of the first user on an electronic device associated with the electronic device. Optionally, the associated electronic devices may be electronic devices that log in to the same account.
Mode 4: the mobile phone determines the critical/non-critical areas based on a prediction model acquired from the cloud side, and determines the first target placement area of the widget within the non-critical area. When the user authorizes and agrees to the end-cloud collaboration protocol, the terminal is allowed to download the cloud-side prediction model (cloud model for short), so that the terminal can predict the first target placement area of the upper-layer application widget according to the cloud model, reducing the occlusion of and impact on the lower-layer application by the upper-layer widget as much as possible.
Optionally, the cloud model includes, but is not limited to, the following two types:
1. Personalized model on the cloud side: this type is suitable for scenarios such as switching to a new device, backup, and multi-device use. The terminal may train the device-side prediction model based on the user's widget operation records, and the user can upload the trained personalized prediction model to the cloud side through a terminal (such as terminal A). Subsequently, the prediction model uploaded by terminal A may be downloaded from the cloud side and used on another terminal (such as terminal B). For the specific implementation of terminal B performing widget position/size prediction according to the downloaded prediction model, reference may be made to the above embodiments, which will not be described in detail.
Alternatively, the terminal B may update the local prediction model based on the small window operation behavior of the user on the new terminal, and may upload the updated prediction model to the cloud side. Optionally, the cloud side may fuse the prediction model uploaded by the terminal with the prediction models uploaded by other terminals.
As a possible implementation, when the terminal uploads the prediction model to the cloud side, it may set the download permission and/or fusion permission of the prediction model. For example, the uploaded prediction model may be set to be downloadable only by terminals logged in to the same account, or to be downloadable by all terminals. As a further example, the uploaded prediction model may be configured not to be fused with prediction models uploaded by terminals of other accounts, or configured to allow such fusion. The embodiments of the present application do not limit which specific permissions can be set.
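As a minimal sketch under assumed field names (none of which appear in the original disclosure), such permissions could accompany the upload request as follows.

```python
# Sketch of an upload request that carries the model together with its
# download and fusion permissions. Field names and values are assumptions.

from dataclasses import dataclass

@dataclass
class ModelUploadRequest:
    account_id: str
    model_blob: bytes
    # Who may download the model: "same_account" or "all_terminals".
    download_scope: str = "same_account"
    # Whether the cloud side may fuse this model with models from other accounts.
    allow_cross_account_fusion: bool = False


# Usage: upload a model that only same-account terminals may download and
# that must not be fused with other accounts' models.
request = ModelUploadRequest(account_id="user-001", model_blob=b"...")
```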
Terminal B may be referred to as a first electronic device, terminal A as a second electronic device, and the user of terminals A and B as the first user. When the first electronic device detects that the lower-layer interface is switched from the first interface to the second interface, it can adjust the area where the upper-layer floating window is located based on the behavior information of the first user on the first electronic device (terminal B), such as the floating window placement areas obtained historically, so as to reduce the occlusion of the key area in the lower-layer interface by the upper-layer floating window as much as possible. Alternatively, the first electronic device (terminal B) may obtain the user's behavior information on the second electronic device (terminal A) from the server, and when the first electronic device (terminal B) detects that the lower-layer interface is switched from the first interface to the second interface, it can adjust the area where the upper-layer floating window is located based on the behavior information of the first user on the second electronic device (terminal A). Optionally, the first electronic device and the second electronic device are different electronic devices that log in to the same account.
2. Fusion model on the cloud side: this type of model is applicable to multi-device scenarios. A plurality of terminals upload their local prediction models to the cloud side. Correspondingly, the cloud side fuses the prediction models from the multiple terminals that are authorized to be fused, obtaining a fusion model that optimizes prediction performance.
Optionally, when the terminal uploads the local prediction model, any one or more of the following information may be uploaded together with the prediction model: screen size, screen resolution, and widget operation information. The widget operation information includes, but is not limited to, any one or more of the following: the identification of the lower layer application, the state identification of the lower layer application, the predicted small window size and the predicted small window position information.
Alternatively, the cloud side may perform model fusion in any manner, for example based on a boosting algorithm or a bootstrap aggregating (bagging) algorithm, which is not limited herein. It can be appreciated that the fusion model can cover the various states of the multiple terminals, the various applications of the multiple terminals, and the historical widget operation behavior in the various application states.
Alternatively, taking the prediction model as a non-parametric model as an example, as shown in fig. 17, the format of the prediction model may be, but is not limited to, the following format:
application 1: application state (state) 1: [(30, 20, 210, 150), (…), …];
               application state 2: [(…), (…), …];
application 2: application state 1: [(…), (…), …]
Here, (30, 20, 210, 150) represents the coordinate position of the widget: 30, 20 are the coordinate values of one corner of the widget (such as the upper left corner), and 210, 150 are the coordinate values of the diagonally opposite corner (such as the lower right corner). Each application state may correspond to the coordinate positions of one or more widgets.
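For illustration only, this non-parametric format can be represented as a nested mapping from application to application state to a list of widget rectangles; the dictionary layout below is an assumption, not a format mandated by this application.

```python
# Sketch of the non-parametric model format: application -> application state
# -> list of candidate widget rectangles (x1, y1, x2, y2), where (x1, y1) is
# one corner and (x2, y2) the diagonally opposite corner.
prediction_model = {
    "application_1": {
        "state_1": [(30, 20, 210, 150)],   # one or more candidate positions
        "state_2": [],
    },
    "application_2": {
        "state_1": [],
    },
}

def lookup(model, app_id, state_id):
    # Returns the recorded widget rectangles for an application state, or an
    # empty list when there is no record (cold start).
    return model.get(app_id, {}).get(state_id, [])
```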
Optionally, when the cloud side performs model fusion, a fusion model can be generated according to the prediction models uploaded by terminals and the screen size and resolution corresponding to each terminal, and each fusion model covers the different application identifiers and application state identifiers. The fusion model obtained in this way includes the full data corresponding to terminals with the corresponding screen size and resolution, namely the widget positions adjusted by different users in different application states. For example, terminal A uploads prediction model B; after receiving prediction model B, the cloud side fuses and aggregates it with prediction model C uploaded by another terminal that also has a 6-inch screen, obtaining fusion model A. Fusion model A includes the widget positions for each application state in model B and the widget positions for each application state in model C.
In the embodiment of the application, the authorized terminal can download the fusion model from the cloud side, and the terminal can send a downloading request to the cloud side server. Optionally, the download request carries the screen size and resolution of the terminal. Optionally, after receiving the download request, the server may query the download authority of the terminal. In one example, if the terminal has the downloading authority, the server may automatically match a fusion model that is closest to parameters such as the screen size and resolution of the terminal, and send the fusion model to the terminal. The terminal predicts a first target placement area of an upper application widget in a corresponding lower application state based on the downloaded fusion model.
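As one possible sketch of the matching step described above, the server could pick the fusion model whose screen parameters are closest to those carried in the download request; the distance metric below is an assumption.

```python
# Sketch of the cloud side matching a fusion model to a download request by
# closest screen size and resolution. The distance metric (screen size
# dominates, resolution breaks ties) is an illustrative assumption.

def match_fusion_model(fusion_models, screen_inches, resolution):
    # fusion_models: list of dicts such as
    # {"screen_inches": 6.0, "resolution": (1080, 2340), "model": ...}
    def distance(m):
        ds = m["screen_inches"] - screen_inches
        dw = m["resolution"][0] - resolution[0]
        dh = m["resolution"][1] - resolution[1]
        return ds * ds + (dw * dw + dh * dh) * 1e-6
    return min(fusion_models, key=distance)["model"]
```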
Optionally, the terminal may update a local prediction model based on the small window operation behavior of the user on the terminal, and upload the updated prediction model to the cloud side for fusion.
Optionally, for any state of any application, the terminal's local prediction model stores all of the user's widget operation behavior information; when uploading, only the optimal area is uploaded to reduce the transmission volume. Likewise, the cloud side stores the optimal areas uploaded by all terminals, and only the optimal areas are delivered when a terminal downloads, again to reduce the transmission volume. For example, as shown in fig. 17, for a terminal with a 6-inch screen, the local model D includes multiple widget position coordinates in different application states, and there may be multiple selectable widget positions in each application state. When uploading the local model, for a given application state the terminal may select an optimal widget position (i.e., an optimal area) from the multiple selectable areas for placing the widget; for example, for state 1 of APP1, only the optimal coordinate position (30, 20, 210, 150) corresponding to that state is uploaded, reducing the transmission volume. Similarly, when the cloud side delivers the fusion model to the terminal, it may deliver only the coordinate information of the optimal widget position in the corresponding application state.
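A minimal sketch of this reduction is shown below; here the optimal area per state is taken to be the most recently adjusted one, which is an assumption (any local selection policy, such as the highest weighted-score area, would work equally well).

```python
# Sketch of reducing transmission volume: for each application state only one
# "optimal" area is uploaded instead of the full operation history.

def optimal_upload_payload(local_model):
    # local_model: app_id -> state_id -> chronological list of (x1, y1, x2, y2)
    payload = {}
    for app_id, states in local_model.items():
        # Keep only the most recently adjusted area per state (assumed policy).
        reduced = {state_id: [areas[-1]]
                   for state_id, areas in states.items() if areas}
        if reduced:
            payload[app_id] = reduced
    return payload


# e.g. for APP1/state1 only (30, 20, 210, 150) is uploaded, not every
# adjustment the user ever made in that state.
local_model_d = {"APP1": {"state1": [(10, 10, 190, 140), (30, 20, 210, 150)]}}
print(optimal_upload_payload(local_model_d))  # {'APP1': {'state1': [(30, 20, 210, 150)]}}
```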
Optionally, the data transmission process between the terminal and the cloud side may use compression coding encryption technology.
In the above-mentioned end-cloud collaborative embodiment, the prediction model is trained mainly by the end side and then uploaded to the cloud side. In other embodiments, in the end-cloud collaboration, the end side may upload the training samples to the cloud side, and the cloud side performs model training, and the cloud side issues the trained model to the terminal for use. Part of the training process can be realized on the end side, and part of the training process can be realized on the cloud side. The embodiment of the application does not limit the specific mode of end cloud collaboration.
The small window placement area prediction is performed based on the end cloud cooperation technology, so that the prediction effect under the condition of cold start (namely, the small window is not adjusted by the user or the corresponding application program is used for the first time) can be optimized, and meanwhile, the personalized habit of the user can be fully reserved.
Alternatively, the fusion model obtained by the electronic device from the server may be referred to as fusion information, and the electronic device may adjust the floating window to the second area of the display screen based on the fusion information obtained from the server. The fusion information is obtained by the server based on the behavior information of the first user and the behavior information of one or more second users, namely, the fusion information is determined by the server according to the behavior information of the plurality of users.
It should be noted that the embodiments of the present application may also determine the placement area and the key operation area of the floating window in other ways, and are not limited to the ways listed herein. Alternatively, the electronic device may determine the placement area of the floating window based on the invocation frequency of the interactive controls, their invocation characteristics, and the like. Optionally, the interactive controls include input controls and/or output controls; input controls may be used to input information to the electronic device, and output controls may be used to output information from the electronic device. By way of example, the region where the controls frequently operated by the user are located is taken as a key operation region, and these controls are not blocked when the widget is placed. As a further example, key operation regions are defined based on the user's operation habits.
In the embodiments of the present application, the mobile phone may determine the first target placement area of the upper-layer floating window in any one of modes 1 to 4. Alternatively, the mobile phone may combine these modes to predict the first target placement area of the widget, so as to display the upper-layer application widget in the area with the smallest impact on the lower-layer application, improving the human-computer interaction efficiency of the multitasking process. The embodiments of the present application do not limit the specific algorithm or manner of predicting the widget area. For example, the mobile phone may add a priori data (e.g., an application's preset critical/non-critical areas) when training the prediction model, so that the prediction model can be applied not only to scenarios with existing widget history (e.g., the user has operated the widget in a certain state of an application) but also to scenarios without widget history.
Illustratively, the first region is determined based on the converged information obtained by the server based on the behavior information of the first user (the user of the electronic device described above) and the behavior information of one or more second users (the users of other electronic devices).
S104, the electronic equipment displays a floating window in the first area.
Illustratively, as shown in (1) of fig. 12B, upon detecting a first instruction of the user clicking on the memo application 3 in the control 702, the handset determines that the user wants to call out a widget of the memo application 3. Then, the mobile phone may determine the first target placement area of the widget in any manner described above, and display the widget in the first target placement area as shown in (2) of fig. 12B, so as to reduce the shielding and the influence of the upper application widget on the lower application as much as possible.
In other embodiments, after determining the first target placement area of the initially called widget, that is, the initial placement area of the widget, in the subsequent multitasking process, the mobile phone may intelligently and automatically adjust the position of the widget according to the change of the application state of the lower layer, so as to meet the multitasking requirement of the user as much as possible. As shown in fig. 13, optionally, the technical solution of the embodiment of the present application further includes the following steps:
S105, the electronic device detects the first scene, and determines a second target placement area (may be simply referred to as a second area) of the widget according to the first scene.
Alternatively, the electronic device may determine a plurality of selectable regions for placing the floating window and determine the second region based on the plurality of selectable regions. Alternatively, the optional region may be the third region or the fourth region or other regions described above.
Optionally, the second area is the latest obtained area in the plurality of optional areas, and/or the second area is the area with the frequency meeting the second condition in the plurality of optional areas, and/or the second area is the area with the highest weighted score in the plurality of optional areas.
Optionally, the first scenario includes, but is not limited to, any one or more of the following: the lower-layer interface is switched (for example, the first interface is switched to the second interface); the rate of change between the presented content of the first interface and the presented content of the second interface satisfies a third condition (meaning that the presented content of the lower-layer interface changes significantly); or an operation instruction that can trigger the lower-layer interface to switch is detected (at this moment, the lower-layer interface has not yet been switched). A change of the lower-layer interface includes switching from one interface of the lower-layer application to another interface of the same application, or switching from an interface of the lower-layer application to an interface of another application. When the interface of the lower-layer application is refreshed or the presented content of the lower-layer interface changes significantly, the mobile phone needs to re-determine the position of the upper-layer floating window so as to reduce the probability that the widget occludes a key area of the lower-layer application.
The second interface and the floating window may belong to the same application or different applications.
It should be noted that the first scenario is not limited to the above listed items, and any condition for indicating that the lower application interface may be refreshed, or triggering the refresh of the lower application interface may be the first scenario.
Illustratively, as shown in fig. 7 (4), in the case that the widget 704 is called out, the mobile phone detects that the user clicks the search box 707 in the lower video player application, which means that the state of the video player changes, and the state change may cause the refresh of the subsequent interface (for example, the mobile phone may trigger the virtual keyboard 708 to be pulled up), so that the mobile phone needs to redetermine the placement area (i.e., the second target placement area) of the upper application widget 704 to avoid the widget 704 from blocking the virtual keyboard 708. For a specific implementation of the mobile phone determining the second target placement area of the widget 704, reference may be made to the relevant description of the mobile phone determining the first target placement area of the widget. For example, the interface to be refreshed, such as the second target placement area in the interface shown in (4) of fig. 7, of the lower layer application of the widget may be determined according to the preset critical area/non-critical area. For another example, the second target placement area of the widget in the interface shown in (4) of fig. 7 may be determined according to a predictive model. Thus, the shielding probability of the small window to the key area of the virtual keyboard can be reduced as much as possible.
Therefore, whenever it is detected that the state of the lower-layer application may change, or may change significantly (for example, the change rate is larger than a threshold), the process of predicting and adjusting the widget placement area is automatically triggered; that is, the widget adjustment automatically adapts to interface changes in the lower-layer application.
S106, the electronic equipment displays a floating window in the second area.
Optionally, the first area has a first size, the second area has a second size, and the first size is different from the second size. That is, in the embodiments of the present application, the electronic device can adjust not only the position of the floating window but also its size, so as to reduce the occlusion of key areas as much as possible and better fit the current usage scenario of the electronic device.
Illustratively, as shown in fig. 7 (4), after the mobile phone detects that the user clicks on the search box 707 in the underlying video player application in the case where the widget 704 has been invoked, the virtual keyboard 708 may be pulled up, and the widget 704 may be displayed in the determined second target placement area. It can be seen that the adjusted widget 704 does not obscure the virtual keyboard 708.
Fig. 18 illustrates interactions between modules in a cell phone during adjustment of the widget position. The interaction process comprises the following steps:
S1, the AMD listens for a first instruction. If the first instruction is detected, step S2 below is executed; otherwise, the AMD continues to listen for the first instruction.
Wherein the first instruction is for indicating that a widget is generated. Illustratively, as shown in fig. 8A (1), the first instruction is an operation in which the user clicks the widget play option 802.
S2, the AMD calls the SWLD, so that the SWLD determines a target placement area of the small window.
As one possible implementation, after detecting the first instruction, the AMD sends a request to the SWLD through an interface provided by the SWLD to invoke the SWLD to make the window placement area prediction.
As one possible implementation, the SWLD queries the AMD for the underlying application and the state of the underlying application, and then locally queries the window operation records in the underlying application state. If the history operation record of the user on the small window exists, the target placement area of the small window is predicted based on the history operation record of the small window. For example, the target placement area of the widget is predicted according to a prediction model (which is trained based on the user's historical operating record of the widget). Specifically, the state information of the lower application is input into a prediction model, and a small window target placement area output by the prediction model is obtained. For example, in the scenario shown in fig. 12B, a history of operation record of the widget 1201 by the user is stored in the mobile phone (the record is derived from scaling and moving of the widget by the user in the scenario shown in fig. 12A), then the mobile phone may determine, according to the history, that the target placement area of the widget 1201 is the area shown in (2) of fig. 12B in the same lower application state.
Otherwise, if the SWLD does not find a widget operation record for this lower-layer application state locally, it indicates that the user has not operated the widget in the same lower-layer application state before. In that case, the SWLD may determine the critical/non-critical areas of the lower-layer application in the current state based on the aforementioned mode 1 and/or mode 2, and determine the target placement area of the widget according to the critical/non-critical areas. Illustratively, one area is randomly selected from the non-critical areas as the target placement area of the widget. Alternatively, the probability of each non-critical area being operated by the user is calculated, and the non-critical area with the smallest probability of being operated is taken as the target placement area of the widget. Alternatively, an area is selected from the non-critical areas as the target placement area of the widget in other possible manners. A sketch of this fallback follows.
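A minimal sketch of this cold-start fallback, assuming the non-critical areas are available as rectangles and an optional per-area operation-probability estimate exists; both assumptions are illustrative.

```python
# Sketch of the cold-start fallback: with no widget history for the current
# application state, pick a non-critical area, preferring the one least
# likely to be operated by the user. The probability estimate is assumed.

import random

def fallback_placement(non_critical_areas, operation_probability=None):
    if not non_critical_areas:
        return None
    if operation_probability is None:
        # No usable statistics: fall back to a random non-critical area.
        return random.choice(non_critical_areas)
    # Otherwise choose the non-critical area with the lowest probability of
    # being operated by the user.
    return min(non_critical_areas,
               key=lambda area: operation_probability.get(area, 0.0))
```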
S3, the SWLD returns information of the target placement area of the small window to the AMD.
S4, the AMD controls the placement area of the widget to be the target placement area.
Alternatively, in a scenario where the location of the widget is initially determined, the AMD controls the widget to be displayed in the target placement area. Illustratively, as shown in (1) of FIG. 8B, upon detecting an instruction to call up the widget 801 (such as an instruction to click on the widget play option 802), the AMD may invoke a display driver to display the widget 801 in the area shown in (2) of FIG. 8B.
Optionally, in a scenario where a widget already exists and its position needs to be adjusted, the AMD evaluates the degree of overlap between the widget's current size/position and the target placement area. If the overlap exceeds a certain threshold, no position adjustment is performed; otherwise, if the overlap between the widget's current placement area and the target placement area is smaller than the threshold, the AMD controls the widget to move to the target placement area and/or scales the widget to the size of the target placement area. Illustratively, as shown in (2) of fig. 8B, when the user's click on the "my" tab is detected, it means that the state of the lower-layer video player has changed, and the handset needs to re-determine the placement area of the widget 801. After the handset re-determines the placement area of the widget 801, the AMD of the handset controls the widget 801 to move from the area shown in (2) of fig. 8B to the area shown in (3) of fig. 8B.
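As a minimal sketch of this decision, the overlap can be measured with an IoU-style ratio; both the ratio and the 0.8 threshold below are illustrative assumptions.

```python
# Sketch of the AMD decision in S4: only move/resize the widget when the
# overlap ratio between its current area and the target placement area falls
# below a threshold. Rectangles are (x1, y1, x2, y2).

def _area(r):
    return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

def _overlap(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def should_adjust(current_rect, target_rect, threshold=0.8):
    inter = _overlap(current_rect, target_rect)
    union = _area(current_rect) + _area(target_rect) - inter
    return (inter / union if union else 0.0) < threshold

# If True, the AMD moves the widget to the target area and/or scales it to
# the target size; otherwise the widget stays where it is.
```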
S5, the AMD monitors the operation of the user on the small window.
As one possible implementation, the AMD continues to monitor the user's operations on the widget throughout the lifetime of the widget.
As a possible implementation, starting from a user drag or resize of the widget, if the user does not operate the widget again within a preset period of time, the adjustment of the widget is considered complete. The AMD records the application, the state corresponding to the application, and the widget operation information in that application state, and sends the recorded information to the SWLD. Optionally, the widget operation information includes the adjusted widget area (e.g., a mapping of widget size and position).
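A minimal sketch of this quiet-period logic is shown below; the 2-second window, the `record_operation` call on the SWLD, and all other names are illustrative assumptions.

```python
# Sketch of S5: the adjustment is considered finished when the user has not
# touched the widget for a quiet period; the AMD then records the application,
# its state, and the adjusted widget area and hands them to the SWLD.

import time

class WidgetAdjustmentRecorder:
    QUIET_PERIOD_S = 2.0  # assumed quiet period

    def __init__(self, swld):
        self._swld = swld
        self._last_touch = None
        self._pending = None   # (app_id, state_id, widget_rect)

    def on_widget_touched(self, app_id, state_id, widget_rect):
        # Called on every drag/resize event of the widget.
        self._last_touch = time.monotonic()
        self._pending = (app_id, state_id, widget_rect)

    def on_tick(self):
        # Called periodically; flushes the record once the quiet period elapses.
        if self._pending and time.monotonic() - self._last_touch >= self.QUIET_PERIOD_S:
            app_id, state_id, rect = self._pending
            self._swld.record_operation(app_id, state_id, rect)  # hypothetical call
            self._pending = None
```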
Illustratively, as shown in (2) of fig. 11A, the AMD listens to the drag and resize operation of the widget 1101 by the user, and sends operation information on the widget to the SWLD so that the SWLD subsequently determines the placement area of the widget according to the widget operation information.
S6, AMD sends small window operation information to SWLD.
S7, AMD sends the status information of the lower application to SWLD.
Optionally, the AMD sends the SWLD an identification of the application to which the status information corresponds.
S8, the AMD monitors the state of the lower-layer application. If the state of the lower-layer application changes, S2 is executed; otherwise, the AMD continues to monitor the state of the lower-layer application.
It can be understood that if the state of the lower-layer application changes, the interface of the lower-layer application may be refreshed, or its rate of change may be relatively high. To avoid the upper-layer application widget occluding a key area of the lower-layer application after its interface changes, the AMD needs to call the SWLD to re-determine the placement area of the widget.
S9, the SWLD updates the prediction model according to the state information of the lower application and the small window operation information.
Alternatively, after the SWLD receives the application, the state information of the application, and the widget operation information corresponding to the application state from the AMD, the information may be stored.
Alternatively, the SWLD may aggregate according to the application and the corresponding application state, and if there are multiple small window operation records in the same state of the same application, these records may be stored sequentially according to a certain policy. For example, a plurality of small window operation records in the same application state are sequentially stored according to a time sequence.
The specific implementation of updating the prediction model may be referred to the related description of the above embodiment, which is not repeated here.
S10, the AMD monitors whether the small window exits.
It will be appreciated that if a widget exit is detected (e.g., an event is received that the widget application destroys the widget), the AMD executes S1, continuing to monitor for a first instruction indicating the generation of the widget. If the exit of the widget is not detected, the AMD continues to monitor whether the widget exits or not, and may continue to monitor the state of the underlying application.
In the above embodiments, after the first instruction (for calling up the upper-layer application widget) is detected, the mobile phone directly determines the placement area of the widget and displays the widget in that area. In other embodiments, the mobile phone may generate the widget after the first instruction is detected and display it in an initial area, and then automatically adjust the placement area of the widget to avoid the widget blocking key areas of the lower-layer application. Illustratively, as shown in (2) of fig. 19, after detecting the first instruction (for calling up the widget of application 1) of the user clicking the application 1 option in the control 702, the handset generates the widget 704 of application 1 shown in (3) of fig. 19. After generating the widget 704, application 1 may send a notification of the widget generation to the AMD; after listening to this notification, the AMD invokes the SWLD to adjust the location of the widget. The SWLD determines through its calculation that the new placement area of the widget is the area shown in (4) of fig. 19 and returns information of this area to the AMD, and the AMD controls the widget 704 to move from the area shown in (3) of fig. 19 to the area shown in (4) of fig. 19.
In other embodiments, there are multiple underlying applications, and thus, multitasking can be performed in multiple underlying applications.
In other embodiments, in the case that the user forgets to turn on the intelligent widget adjustment function, the user may be asked whether to turn on the intelligent widget adjustment function after the generated widget is detected, so as to improve the human-computer interaction performance in widget scenarios.
Fig. 20 shows a floating window adjustment method according to an embodiment of the present application. The method may be applied to a first electronic device comprising a display screen, the method comprising:
s201, displaying a first interface on a display screen.
In this embodiment, taking a mobile phone as the first electronic device as an example, the mobile phone displays a first interface 701 on a lower layer as shown in fig. 7 (3).
S202, displaying a floating window on the upper layer of the first interface, wherein the floating window is positioned in a first area of the display screen.
Illustratively, as shown in fig. 7 (3), the mobile phone displays a floating window 704 on the upper layer of the first interface 701.
S203, detecting that the first interface is switched to the second interface, and adjusting the floating window to a second area of the display screen based on the behavior information of the first user.
Optionally, the first interface and the second interface are interfaces of the same application program, or the first interface and the second interface are interfaces of different application programs.
Alternatively, there may be a partial region overlap of the first region and the second region. Alternatively, the first region and the second region do not overlap at all.
For example, as shown in fig. 7 (4), when the mobile phone detects that the lower interface is switched from the first interface 701 to the second interface 709, the mobile phone adjusts the floating window from the first area shown in fig. 7 (3) to the second area shown in fig. 7 (4) based on the behavior information of the user.
In other embodiments, part of the key operation area may also be occluded while the floating window is adjusted. For example, if the key operation area is a virtual keyboard, and it is determined based on an authorized user profile (such as user behavior) that the keys commonly used by the user are the English letter keys of the virtual keyboard while the number keys are rarely used, the electronic device may avoid occluding the English letter keys of the lower-layer virtual keyboard while allowing some number keys to be occluded when adjusting the position and size of the floating window.
It should be noted that the steps in the above method flow are only exemplary. Some of the steps may be replaced with other steps, or some of the steps may be added or subtracted.
Some operations in the flow of the method embodiments described above are optionally combined and/or the order of some operations is optionally changed.
The order of execution of the steps in each flow is merely exemplary; the steps are not limited to that order, and other orders of execution may be used. This is not intended to suggest that the described order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations herein. In addition, it should be noted that the details of other processes described herein in connection with other methods apply in a similar manner to the methods described above.
Alternatively, in an implementation of the present application, the floating window may be generated by the first electronic device based on an instruction (such as the first instruction) input by the user.
Or the first electronic device automatically generates the floating window. As a possible implementation manner, the first electronic device automatically generates the floating window when the preset scene is detected. For example, in the process that the user browses the electronic book by using the first electronic device, the first electronic device receives the message of the instant messaging software, and then the first electronic device can automatically generate the floating window of the instant messaging software, so that the user can process the message of the instant messaging software and browse the electronic book at the same time.
It should be noted that the interface shown in the drawings is only an exemplary interface. For interfaces that are not described in detail in some of the figures, reference may be made to corresponding descriptions in other figures. For example, the description of the interface shown in fig. 14B (1) may be referred to in fig. 14A (2), and the description of the interface shown in fig. 19 (1) may be referred to in fig. 7 (1).
Further embodiments of the present application provide an apparatus that may be an electronic device as described above (e.g., a folding screen phone). The apparatus may include: a display screen, a memory, and one or more processors. The display, memory, and processor are coupled. The memory is for storing computer program code, the computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device may perform the functions or steps performed by the mobile phone in the above-described method embodiments. The structure of the electronic device may refer to the electronic device shown in fig. 6 or fig. 4.
The core structure of the electronic device may be represented as the structure shown in fig. 21, and the core structure may include: processing module 1301, input module 1302, storage module 1303, display module 1304.
Processing module 1301 may include at least one of a Central Processing Unit (CPU), an application processor (Application Processor, AP), or a communication processor (Communication Processor, CP). Processing module 1301 may perform operations or data processing related to control and/or communication of at least one of the other elements of the electronic device. Specifically, the processing module 1301 may be configured to control the content displayed on the home screen according to a certain trigger condition, or to determine the content displayed on the screen according to preset rules. The processing module 1301 is further configured to process input instructions or data, and determine a display style according to the processed data.
The input module 1302 is configured to obtain an instruction or data input by a user, and transmit the obtained instruction or data to other modules of the electronic device. Specifically, the input mode of the input module 1302 may include touch, gesture, proximity screen, or voice input. For example, the input module may be a screen of an electronic device, acquire an input operation of a user, generate an input signal according to the acquired input operation, and transmit the input signal to the processing module 1301.
The storage module 1303 may include volatile memory and/or nonvolatile memory. The storage module is configured to store at least one relevant instruction or data in other modules of the user terminal device, and in some embodiments of the present application, the storage module may record a prediction model and a fusion model.
Display module 1304, which may include, for example, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, an Organic Light Emitting Diode (OLED) display, a microelectromechanical system (MEMS) display, or an electronic paper display. For displaying user viewable content (e.g., text, images, video, icons, symbols, etc.).
Optionally, the structure shown in fig. 21 may further include a communication module 1305 for supporting the electronic device to communicate with other electronic devices. For example, the communication module may be connected to a network via wireless communication or wired communication to communicate with other personal terminals or network servers. The wireless communication may employ at least one of cellular communication protocols, such as Long Term Evolution (LTE), long term evolution-advanced (LTE-a), code Division Multiple Access (CDMA), wideband Code Division Multiple Access (WCDMA), universal Mobile Telecommunications System (UMTS), wireless broadband (WiBro), or global system for mobile communications (GSM). The wireless communication may include, for example, short-range communication. The short-range communication may include at least one of wireless fidelity (Wi-Fi), bluetooth, near Field Communication (NFC), magnetic Stripe Transmission (MST), or GNSS.
It should be noted that, descriptions of steps in the method embodiment of the present application may be referred to modules corresponding to the apparatus, and are not described herein again.
Embodiments of the present application also provide a chip system, as shown in fig. 22, comprising at least one processor 1401 and at least one interface circuit 1402. The processor 1401 and the interface circuit 1402 may be interconnected by wires. For example, interface circuit 1402 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, interface circuit 1402 may be used to send signals to other devices (e.g., processor 1401). Illustratively, the interface circuit 1402 may read instructions stored in the memory and send the instructions to the processor 1401. The instructions, when executed by the processor 1401, may cause the electronic device to perform the various steps in the embodiments described above. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
The embodiment of the application also provides a computer storage medium, which comprises computer instructions, when the computer instructions run on the electronic device, the electronic device is caused to execute the functions or steps executed by the mobile phone in the embodiment of the method.
The embodiment of the application also provides a computer program product, which when run on a computer, causes the computer to execute the functions or steps executed by the mobile phone in the embodiment of the method.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A floating window adjustment method, characterized in that it is applied to a first electronic device, the first electronic device including a display screen, the method comprising:
displaying a first interface on the display screen;
displaying a floating window on the upper layer of the first interface, wherein the floating window is positioned in a first area of the display screen;
and detecting that the first interface is switched to a second interface, and adjusting the floating window to a second area of the display screen based on the behavior information of the first user.
2. The method of claim 1, wherein the behavioral information of the first user comprises information obtained from historical operation of the floating window by the first user;
and/or the behavior information of the first user comprises information obtained by historically operating the second interface by the first user.
3. The method of claim 2, wherein historically operating the floating window comprises: and when the second interface is historically displayed, adjusting the position and/or the size of the floating window displayed on the upper layer of the second interface.
4. A method according to claim 2 or 3, wherein the information historically obtained by the first user operating the floating window comprises: when the second interface is historically displayed, the first user adjusts the related information of the third area for placing the floating window, which is obtained by the floating window; the second region is associated with the third region.
5. The method of claim 2, wherein operating the second interface comprises: and performing sliding operation on the second interface.
6. The method of claim 2 or 5, wherein the information historically obtained by the first user operating the second interface comprises: a fourth area with the change rate of the element in the second interface meeting the first condition, wherein the fourth area is used for placing the floating window; the second region is located within the fourth region.
7. The method of any of claims 1-6, wherein prior to adjusting the floating window to the second region of the display screen, the method further comprises:
determining a plurality of selectable areas for placing the floating window, wherein the second area is the most recently obtained area among the plurality of selectable areas, and/or the second area is the area whose frequency satisfies a second condition among the plurality of selectable areas, and/or the second area is the area with the highest weighted score among the plurality of selectable areas.
8. The method of any of claims 1-7, wherein the adjusting the floating window to the second region of the display screen based on behavior information of the first user of the first electronic device comprises: and adjusting the floating window to a second area of the display screen based on fusion information acquired from a server, wherein the fusion information is obtained by the server based on the behavior information of the first user and the behavior information of one or more second users.
9. The method of any of claims 1-8, wherein a rate of change between the presented content of the first interface and the presented content of the second interface satisfies a third condition.
10. The method of any of claims 1-9, wherein the behavior information of the first user is behavior information of the first user on the first electronic device or the behavior information of the first user is behavior information of the first user on a second electronic device.
11. The method of claim 10, wherein the first electronic device and the second electronic device are different electronic devices that log on to the same account.
12. The method of any one of claims 1-11, wherein the first region has a first size; the second region has a second size, and the first size is different from the second size.
13. The method of any of claims 1-12, wherein the first interface and the second interface are interfaces of a same application or wherein the first interface and the second interface are interfaces of different applications.
14. The method of any one of claims 1-13, wherein displaying a floating window on top of the first interface comprises: generating the floating window and displaying the floating window on the upper layer of the first interface; the floating window is generated by the first electronic device based on instructions input by a user, or the floating window is automatically generated by the first electronic device.
15. The method of any one of claims 1-14, wherein the first region of the floating window is determined based on a preset region; or, the first area is determined based on behavior information of the first user; or, the first area is determined based on fusion information obtained by a server based on the behavior information of the first user and the behavior information of one or more second users.
16. The method of claim 15, wherein the preset area comprises any one or more of the following: the device comprises a virtual keyboard, a dial, a return button, a navigation button and a search box, wherein the first area is located in the preset area.
17. An electronic device, comprising:
one or more processors;
a memory;
and one or more computer programs, wherein the one or more computer programs are stored on the memory, which when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-16.
18. A computer readable storage medium, characterized in that the computer readable storage medium comprises a computer program or instructions which, when run on a computer, cause the computer to perform the method of any of claims 1-16.
19. A computer program product, the computer program product comprising: computer program or instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-16.


