CN113655929A - Interface display adaptation processing method and device and electronic equipment - Google Patents

Interface display adaptation processing method and device and electronic equipment

Info

Publication number
CN113655929A
Authority
CN
China
Prior art keywords
interface
area
input
virtual keys
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110986863.0A
Other languages
Chinese (zh)
Inventor
孙涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110986863.0A priority Critical patent/CN113655929A/en
Publication of CN113655929A publication Critical patent/CN113655929A/en
Priority to PCT/CN2022/113628 priority patent/WO2023025060A1/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces

Abstract

The application discloses an interface display adaptation processing method and device and an electronic device, and relates to the field of communication technology. The method comprises the following steps: receiving a first input of a user in a case that a first interface is displayed in a first area; and displaying a second interface in a second area in response to the first input. The first interface comprises S first virtual keys, the second interface comprises an image corresponding to a third area in the first interface, and the third area is an area corresponding to N first virtual keys among the S first virtual keys; the second interface comprises second virtual keys, and an association relationship is established between the second virtual keys and the N first virtual keys; S and N are positive integers, and S ≥ N ≥ 1.

Description

Interface display adaptation processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to an interface display adaptation processing method and device and electronic equipment.
Background
Mobile phones and tablets often share the same applications (APPs), even though their hardware configurations differ considerably. For example, most games run on the tablet side are actually games designed to run on the mobile phone side.
However, for an application designed to run on a mobile phone, the size of its interface (UI) is designed around the size of the phone screen, which makes operation on the phone more convenient. After equal-proportion adaptation on a tablet, however, problems such as oversized buttons, overly long sliding distances, and operation areas that a thumb cannot cover arise, which reduces the convenience of use on the tablet.
Disclosure of Invention
The embodiments of the application aim to provide an interface display adaptation processing method and device and an electronic device, which can solve the problem of reduced usability caused by a terminal not being adapted to an application program interface.
In a first aspect, an embodiment of the present application provides an adaptation processing method for interface display, where the method includes:
receiving a first input of a user under the condition that a first interface is displayed in a first area;
displaying a second interface in a second area in response to the first input;
the first interface comprises S first virtual keys, the second interface comprises an image corresponding to a third area in the first interface, and the third area is an area corresponding to N first virtual keys among the S first virtual keys; the second interface comprises second virtual keys, and an association relationship is established between the second virtual keys and the N first virtual keys; S and N are positive integers, and S ≥ N ≥ 1.
In a second aspect, an embodiment of the present application provides an apparatus for adapting an interface display, including:
the first receiving module is used for receiving a first input of a user under the condition that the first interface is displayed in the first area;
the display module is used for responding to the first input and displaying a second interface in a second area;
the first interface comprises S first virtual keys, the second interface comprises an image corresponding to a third area in the first interface, and the third area is an area corresponding to N first virtual keys among the S first virtual keys; the second interface comprises second virtual keys, and an association relationship is established between the second virtual keys and the N first virtual keys; S and N are positive integers, and S ≥ N ≥ 1.
In a third aspect, embodiments of the present application further provide an electronic device, which includes a processor, a memory, and a program or an instruction stored on the memory and executable on the processor, and when executed by the processor, the program or the instruction implements the steps of the method according to the first aspect.
In a fourth aspect, the present embodiments also provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, after a first input of a user is received, a second interface can be displayed in a second area in response to the first input. Because the second interface includes images of the areas corresponding to N first virtual keys in the first interface, as well as second virtual keys that have an association relationship with the N first virtual keys, the associated second virtual keys can be triggered through the images of the areas corresponding to the first virtual keys without operating the second virtual keys directly. This avoids the inconvenience caused by directly operating the virtual keys of the application interface and improves the convenience of the electronic device.
Drawings
Fig. 1 is a schematic flowchart of an adaptation processing method for interface display according to an embodiment of the present application;
FIG. 2 is a diagram of a screen display of an electronic device;
FIG. 3 is a second schematic diagram of a screen display of the electronic device;
FIG. 4 is a third schematic diagram of a screen display of the electronic device;
FIG. 5 is a fourth schematic diagram of a screen display of the electronic device;
FIG. 6 is a fifth diagram of a screen display of the electronic device;
fig. 7 is a schematic structural diagram of an adaptation processing apparatus for interface display according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. In addition, objects distinguished by "first", "second" and the like are usually of one kind, and the number of objects is not limited; for example, the first object may be one object or more than one object. Furthermore, in the specification and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
An adaptation processing method for interface display and an electronic device provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an adaptation processing method for interface display according to an embodiment of the present application includes:
step 101, receiving a first input of a user under the condition that a first interface is displayed in a first area.
Here, the first input is an input that triggers processing of the first interface currently displayed in the first area, that is, it triggers step 102. After the electronic device displays the first interface, the user can perform the first input through a preset input mode. Specifically, the first input may be an input based on a physical key or a virtual key, or an input through a biometric technology, such as voice, touch, infrared, a gesture, and the like. The other inputs in this embodiment are likewise preset and, like the first input, may have multiple implementations.
The first interface is an application interface. Displaying the first interface in the first area at a preset display proportion achieves a clearer display. Considering that the screen of the electronic device may not match the preset display proportion, displaying the first interface in the first area is performed after the user triggers the adaptation processing function for the application interface display.
Step 102, in response to the first input, displaying a second interface in a second area; the first interface comprises S first virtual keys, the second interface comprises an image corresponding to a third area in the first interface, and the third area is an area corresponding to N first virtual keys among the S first virtual keys; the second interface comprises second virtual keys, and an association relationship is established between the second virtual keys and the N first virtual keys; S and N are positive integers, and S ≥ N ≥ 1.
In this step, in response to the first input received in step 101, the second interface may be displayed in the second area, that is, in the second area, images of areas corresponding to N first virtual keys in the first interface are displayed, and second virtual keys having an association relationship with the N first virtual keys are displayed.
Here, the number of the second virtual keys is also N, and the second virtual keys correspond to the N first virtual keys one to one. The association relationship established between a second virtual key and the N first virtual keys is the association relationship between that second virtual key and the area of the second interface where the image of the corresponding first virtual key is located, so that when the user provides an input in the area where the image of the first virtual key is located, the corresponding second virtual key can be triggered based on the second area. For example, virtual key A in the second interface is associated with a specific area of the second interface, and that specific area is the area where the image of virtual key A from the first interface is located when the second interface is displayed.
In this way, after the electronic device receives the first input of the user according to step 101 above and responds to the first input, the second interface can be displayed in the second area. Because the second interface includes the images of the areas corresponding to the N first virtual keys in the first interface and the second virtual keys that have established an association relationship with the N first virtual keys, the associated second virtual keys can be triggered through the images of the areas corresponding to the first virtual keys without operating the second virtual keys directly, thereby avoiding the inconvenience caused by directly operating the virtual keys of the application interface and improving the convenience of the electronic device.
In this embodiment, a second virtual key of the second interface and the corresponding first virtual key in the first interface trigger the same function; only the size of the corresponding area differs when they are displayed. That is, the first interface and the second interface display the application interface at different display proportions, and optionally the size of the second interface is larger than that of the first interface.
Optionally, the preset display proportion is a display proportion designed for a standard screen, such as a 6-inch diagonal with an 18:9 aspect ratio.
Optionally, the first area is adapted to the optimal display scale, and the second area is a display area of the electronic device.
It should be appreciated that, in a scenario where an application B (e.g., a game program) is designed for a standard mobile phone screen, the user clicks, for example, "game operation adaptation" through a "game assistant" or another entry, and the interface of application B is then triggered to be displayed in the first area at the designed optimal display proportion. Afterwards, through the first input, the user can have the images of the areas corresponding to the N first virtual keys in the first interface, together with the second virtual keys associated with the N first virtual keys, displayed in the second area.
Optionally, in this embodiment, after step 102, the method further includes:
receiving a second input of the user to the second interface;
triggering a target virtual key in response to the second input;
and the target virtual key is a second virtual key associated with the first virtual key corresponding to the second input in the second interface.
Here, the second input is an input on the area of the second interface where the image of a first virtual key is located, such as a click on the image of a first virtual key displayed on the second interface (for example, the image of virtual key C from the first interface). Virtual key C is then triggered through the association relationship established between the second virtual key (virtual key C) in the second interface and the specific area (the area of the second interface where the image of the first virtual key is located).
That is, when a second virtual key in the second interface has established an association relationship with the area of the second interface where a specific image (the image of the area where the first virtual key is located in the first interface) is displayed, and a second input of the user on the area where that specific image is located is received, the corresponding second virtual key, i.e., the target virtual key in the second interface, is triggered in response to the second input.
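For readers implementing this kind of region-to-key association, the following is a minimal Kotlin sketch of one possible realisation on Android. The class and function names (KeyBinding, SecondInterfaceDispatcher, onSecondInput) are illustrative assumptions and not defined by this application; only android.graphics.Rect is a standard API.

```kotlin
import android.graphics.Rect

/**
 * Minimal sketch of the region-to-key association described above.
 * KeyBinding, secondKeyIdOf() and triggerKey() are illustrative names,
 * not APIs defined by this application or by Android.
 */
data class KeyBinding(
    val imageRegion: Rect,   // area in the second interface where the first-key image is shown
    val secondKeyId: Int     // identifier of the associated second virtual key
)

class SecondInterfaceDispatcher(private val bindings: List<KeyBinding>) {

    /** Returns the id of the target virtual key for a tap at (x, y), or null if none matches. */
    fun secondKeyIdOf(x: Int, y: Int): Int? =
        bindings.firstOrNull { it.imageRegion.contains(x, y) }?.secondKeyId

    /** Handles a "second input": a tap inside a first-key image triggers the bound second key. */
    fun onSecondInput(x: Int, y: Int, triggerKey: (Int) -> Unit): Boolean {
        val keyId = secondKeyIdOf(x, y) ?: return false
        triggerKey(keyId)   // e.g. forward the event to the real control; see the
        return true         // coordinate-offset sketch later in this description
    }
}
```

A simple list lookup is enough here because N is typically small; the actual triggering of the second virtual key can reuse the coordinate-offset mechanism described later in this description.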
It should also be appreciated that, in this embodiment, in view of the user's personalized needs and of making the adaptation quick, on the one hand, optionally, in step 102, displaying the second interface in the second area includes:
acquiring an image of the first interface;
identifying S first virtual keys in the image of the first interface;
acquiring initial images of N first virtual keys in the S first virtual keys in the image of the first interface;
synthesizing the initial image and the first interface processed according to the first display proportion to obtain the second interface;
displaying the second interface in the second area.
In this case, the received first input triggers the electronic device to automatically recognize, based on the image of the first interface, the S first virtual keys in the image of the first interface, and then to obtain the initial images of N of the S first virtual keys from the image of the first interface, so that the second interface is obtained by synthesizing the initial images with the first interface processed according to the first display proportion, thereby realizing rapid adaptation processing.
The first interface displays an application interface, and the image of the first interface can be understood as a screenshot of the first interface while it displays the application interface. As for the manner of identifying the virtual keys, as shown in fig. 2, the electronic device identifies the edges of the virtual keys in the image of the first interface 201 in the screen 1, so as to divide the image of the first interface 201 into: a first virtual key image area 202, a second virtual key image area 203, a third virtual key image area 204, and a non-virtual key image area 205. Here, the first virtual key image area 202, the second virtual key image area 203, and the third virtual key image area 204 constitute the third area.
The first display ratio is preset, and may be a display ratio of full-screen display of the application interface on the screen of the electronic device, or may be other values, which are not listed here.
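As a concrete illustration of this automatic path, the sketch below composes a candidate second interface on Android by cropping the recognised key regions out of a screenshot and drawing them over the first interface scaled to the first display proportion. It is only a sketch under assumptions: the key-region recogniser is taken as given, composeSecondInterface is an invented name, and only standard Bitmap/Canvas/Rect APIs are used.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Rect

/**
 * Hedged sketch: given a screenshot of the first interface and the rectangles of the N
 * recognised virtual keys (the recogniser itself, e.g. edge detection, is assumed to exist
 * elsewhere), crop the initial key images and composite them over the first interface
 * scaled to the first display proportion.
 */
fun composeSecondInterface(
    firstInterfaceShot: Bitmap,   // screenshot of the first interface
    keyRegions: List<Rect>,       // third area: regions of the N first virtual keys
    targetWidth: Int,             // size of the second area (e.g. the full screen)
    targetHeight: Int
): Bitmap {
    // First interface processed according to the first display proportion.
    val scaledBase = Bitmap.createScaledBitmap(firstInterfaceShot, targetWidth, targetHeight, true)
    val result = scaledBase.copy(Bitmap.Config.ARGB_8888, /* isMutable = */ true)
    val canvas = Canvas(result)

    // Overlay each initial key image at its original, unscaled position so the key keeps its
    // designed size; if the user later drags a crop toward the screen edge, only `dst` changes.
    for (region in keyRegions) {
        val crop = Bitmap.createBitmap(
            firstInterfaceShot, region.left, region.top, region.width(), region.height()
        )
        canvas.drawBitmap(crop, null, Rect(region), null)
        if (crop !== firstInterfaceShot) crop.recycle()
    }
    return result
}
```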
On the other hand, optionally, in step 102, the displaying the second interface in the second area includes:
dividing the first interface and obtaining an image of the third area;
merging the image of the third area with the first interface processed according to the first display proportion to obtain the second interface;
displaying the second interface in the second area.
That is, the received first input is a manual division of the first interface by the user, and the electronic device divides the first interface in response to the first input. Specifically, the first input of the user realizes the manual division into the virtual key image area and the non-virtual key image area. Then the image of the third area is obtained from the division and synthesized with the first interface processed according to the first display proportion to obtain the second interface, thereby realizing rapid adaptation processing.
For example, after the electronic device receives a first input from the user, as shown in fig. 3 and fig. 4, a first interface 301 is displayed on the screen 1, and the first interface is in the first area. Then, as shown in fig. 4, the user draws two dividing lines (the arc-shaped dotted lines in the figure) on the first interface 301 through the first input, and the image of the first interface 301 is divided into three image areas based on the dividing lines: a first image area 401, a second image area 402 and a third image area 403. The first image area 401 and the second image area 402 are third areas because they include virtual key images, and the third image area 403 is a non-virtual key image area because it does not include any virtual key image. Then, the first image area 401 and the second image area 402 are merged with the first interface processed at the first display proportion to obtain the second interface, and the second interface is displayed in the second area, such as the full-screen display area.
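One way to turn such a drawn dividing line into a testable area on Android is sketched below. It assumes the stroke starts and ends on screen edges and is closed against the nearest screen corner (as the arcs in fig. 4 suggest); regionFromDividingStroke is an illustrative name, while Path and Region are standard android.graphics classes.

```kotlin
import android.graphics.Path
import android.graphics.Region

/**
 * Sketch of the manual-division path: the stroke the user draws is closed against a screen
 * corner to form an enclosed area and converted to a Region, which can then be used both to
 * cut the corresponding image out of the first-interface screenshot and to hit-test later taps.
 * Closing against a corner is an assumption about how the arc in fig. 4 bounds an area.
 */
fun regionFromDividingStroke(
    stroke: Path,
    screenWidth: Int,
    screenHeight: Int,
    closeToLeftBottom: Boolean
): Region {
    val closed = Path(stroke)
    if (closeToLeftBottom) {
        closed.lineTo(0f, screenHeight.toFloat())                       // run the boundary through the bottom-left corner
    } else {
        closed.lineTo(screenWidth.toFloat(), screenHeight.toFloat())    // or through the bottom-right corner
    }
    closed.close()

    val clip = Region(0, 0, screenWidth, screenHeight)
    val region = Region()
    region.setPath(closed, clip)
    return region
}

// A later tap can then be classified with region.contains(x, y).
```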
In the second interface, the first image area and the second image area are each associated with the corresponding virtual keys in the second interface; when the user clicks the first image area or the second image area in the second interface, the second virtual key associated with the area where the click position is located is triggered to execute the corresponding operation. Here, the image corresponding to the area where the click position is located is the image of a first virtual key.
In addition, considering that the first area exists on the screen and does not occupy the full screen display area, in order to highlight the first interface, optionally, the method further includes:
and when the first area displays the first interface, displaying areas except the first area in the screen according to the target display parameters.
Here, the target display parameter may be a value obtained using Gaussian blur processing, which enables blurred display of the region of the screen (the non-image region) other than the first area. Specifically, as shown in fig. 3, after displaying the first interface 301 in the first area, the electronic device uses Gaussian blur processing to display the area 302 of the screen 1 outside the area where the first interface is located (i.e., the non-image area) in a blurred manner.
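On Android, one way such a blurred display could be realised is with RenderEffect, as sketched below. This is only an illustrative assumption about the implementation: the embodiment does not prescribe RenderEffect, the API requires Android 12 (API 31) or newer, and blurOutsideFirstArea and its fallback behaviour are invented for the example.

```kotlin
import android.graphics.RenderEffect
import android.graphics.Shader
import android.os.Build
import android.view.View

/**
 * One possible realisation of the "target display parameter": blur everything outside the
 * first area, assuming that region is rendered by a separate backgroundView.
 */
fun blurOutsideFirstArea(backgroundView: View, radiusPx: Float = 25f) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
        backgroundView.setRenderEffect(
            RenderEffect.createBlurEffect(radiusPx, radiusPx, Shader.TileMode.CLAMP)
        )
    } else {
        // On older versions a pre-blurred bitmap or a translucent scrim could be used instead.
        backgroundView.alpha = 0.4f
    }
}
```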
In addition, as can be seen from the above, the automatic recognition or the manual division divides the first interface into an area containing virtual key images and an area containing no virtual key images, and the user performs the second input on the displayed area containing the virtual key images in the second interface. In this embodiment, the merged second interface includes these two parts. Therefore, to prevent the image corresponding to the third area from obstructing the display of the first interface processed according to the first display proportion, optionally, the image corresponding to the third area of the first interface included in the second interface is draggable, so that it can be dragged to a position convenient for the user to operate. When the image is dragged into the edge area of the screen (for example, the area within 0-3 cm of the screen edge), it automatically snaps to the screen edge. After the dragging is finished, the result is merged with the first interface processed according to the first display proportion to serve as the second interface, and the second interface is displayed in the second area.
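A minimal Kotlin sketch of that snap-to-edge behaviour is shown below, assuming each dragged key image is an ordinary View and the 0-3 cm zone has already been converted into snapThresholdPx; the function name and the animation duration are illustrative, not part of the embodiment.

```kotlin
import android.view.View

/**
 * Sketch of drag-and-snap: when a draggable key-image view is released within
 * snapThresholdPx of the left or right screen edge, it animates flush against that edge.
 */
fun snapToEdgeIfClose(view: View, screenWidth: Int, snapThresholdPx: Float) {
    val leftGap = view.x
    val rightGap = screenWidth - (view.x + view.width)
    when {
        leftGap <= snapThresholdPx ->
            view.animate().x(0f).setDuration(150L).start()
        rightGap <= snapThresholdPx ->
            view.animate().x((screenWidth - view.width).toFloat()).setDuration(150L).start()
        // Otherwise the image stays where the user dropped it.
    }
}
```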
Of course, after the first interface is divided into the area including the virtual key image and the area not including the virtual key image, both the areas can be dragged freely. However, in view of the optimal display of the final second interface, in this embodiment, optionally, after step 102, the method further includes:
receiving a third input of the user on a remaining part obtained after the first interface is divided, wherein the remaining part is a part of the first interface except the third area;
concealing the remaining portion in response to the third input.
Here, the third input is used to hide the display of the remaining portion (the portion of the first interface other than the third area), so as to achieve the optimal display effect of the final second interface. Specifically, the third input may be a three-finger operation on the remaining portion by the user: after the three fingers slide inward and converge, the remaining portion disappears from the screen.
For example, for the first image area 401, the second image area 402 and the third image area 403 shown in fig. 4, as shown in fig. 5, after the user performs a three-finger inward converging slide on the third image area 403 in the screen 1, the third image area 403 disappears from the screen; thereafter, the user drags the first image area 401 and the second image area 402 to the screen edge area, respectively. After the dragging is completed, as shown in fig. 6, a second interface 601 is displayed in the screen 1, and the second interface 601 includes the first interface processed at the first display proportion as well as the first image area 401 and the second image area 402. For the first image area 401 and the second image area 402, an obvious dividing line may be generated between the image of the third area included in the second interface and the edge of the screen edge area, so as to avoid visual misperception.
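Detecting such a three-finger inward-converging slide could be done roughly as sketched below: the mean distance of the pointers from their centroid is compared between the moment the third finger lands and the moment fingers lift. The 0.6 shrink threshold and the class name are assumptions made for illustration; the embodiment only specifies the gesture, not its detection.

```kotlin
import android.view.MotionEvent
import kotlin.math.hypot

/** Rough sketch of a detector for the three-finger inward-converging slide (the third input). */
class ThreeFingerConvergeDetector(private val onConverge: () -> Unit) {

    private var startSpread = -1f

    fun onTouchEvent(event: MotionEvent): Boolean {
        when (event.actionMasked) {
            MotionEvent.ACTION_POINTER_DOWN ->
                if (event.pointerCount == 3) startSpread = spreadOf(event)
            MotionEvent.ACTION_POINTER_UP, MotionEvent.ACTION_UP -> {
                if (startSpread > 0 && event.pointerCount >= 3 &&
                    spreadOf(event) < startSpread * 0.6f
                ) {
                    onConverge()   // e.g. hide the remaining-portion view
                }
                startSpread = -1f
            }
        }
        return true
    }

    /** Mean distance of all pointers from their centroid. */
    private fun spreadOf(event: MotionEvent): Float {
        val n = event.pointerCount
        val cx = (0 until n).map { event.getX(it) }.average().toFloat()
        val cy = (0 until n).map { event.getY(it) }.average().toFloat()
        return (0 until n)
            .map { hypot(event.getX(it) - cx, event.getY(it) - cy) }
            .average().toFloat()
    }
}
```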
It should be appreciated that, after the second interface is displayed, the image of the third area, even though it may have been dragged earlier, may still block a key part of the first interface processed according to the first display proportion. Therefore, optionally, in this embodiment, after step 102, the method further includes:
receiving a fourth input of the user while the second interface is displayed;
in response to the fourth input, adjusting a size or a position of an image in the second interface corresponding to a third region of the first interface.
Here, the fourth input is used to wake up an adjustment state of the image corresponding to the third area, and after receiving the fourth input, the electronic device enters the adjustment state, where the user may zoom or move the image corresponding to the third area of the first interface.
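The adjustment state could, for example, let a pinch gesture resize the key-image overlay, as in the sketch below. ScaleGestureDetector is a standard Android class, while attachScaleAdjustment and the 0.5-2.5 scale clamp are illustrative assumptions; moving the overlay would be handled analogously with drag events.

```kotlin
import android.content.Context
import android.view.ScaleGestureDetector
import android.view.View

/**
 * Illustrative sketch of the adjustment state entered after the fourth input: pinch gestures
 * scale the key-image overlay, with the scale clamped to a plausible range. The clamp bounds
 * are assumptions; the embodiment itself does not prescribe them.
 */
fun attachScaleAdjustment(context: Context, overlay: View): ScaleGestureDetector {
    val detector = ScaleGestureDetector(context,
        object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
            override fun onScale(d: ScaleGestureDetector): Boolean {
                val s = (overlay.scaleX * d.scaleFactor).coerceIn(0.5f, 2.5f)
                overlay.scaleX = s
                overlay.scaleY = s
                return true
            }
        })
    // While the adjustment state is active, forward the overlay's touch events to the detector.
    overlay.setOnTouchListener { _, event -> detector.onTouchEvent(event); true }
    return detector
}
```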
In this embodiment, the association relationship may be established through coordinate association. Specifically, taking a 1920 × 1080 screen as an example, a 1920 × 1080 coordinate system may be established for the first interface, and the same coordinate system may be established for the second interface. After the first interface is divided and an area is moved, the movement distance, i.e., the offset, is recorded. Taking the movement of the first virtual key image area 202 in fig. 2 as an example, its movements in the horizontal and vertical directions are recorded as s1 and s2, respectively. Thereafter, for a click event in the first virtual key image area 202 whose click coordinates are (x, y), an identical click event is simulated at the coordinates (x + s1, y + s2) in the second interface, based on the coordinate system and the recorded offset. Therefore, even if the divided virtual key image has been dragged to another position, an operation on the area where the virtual key image is located can still be fed back to the virtual key in the second interface, which enables more convenient operation.
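The offset-based simulation of the click event can be sketched in Kotlin as follows, assuming the second interface is rendered in the same view hierarchy so a synthesized MotionEvent can simply be dispatched to it. forwardClickWithOffset and the 50 ms tap duration are illustrative choices; injecting events into another application's window would instead require accessibility or system-level privileges, which this sketch does not cover.

```kotlin
import android.os.SystemClock
import android.view.MotionEvent
import android.view.View

/**
 * Sketch of the coordinate association described above: the offset (s1, s2) recorded when a
 * key-image area is moved is applied to a tap at (x, y), and an equivalent down/up event pair
 * is injected at (x + s1, y + s2) on the view hosting the second interface.
 */
fun forwardClickWithOffset(target: View, x: Float, y: Float, s1: Float, s2: Float) {
    val now = SystemClock.uptimeMillis()
    val down = MotionEvent.obtain(now, now, MotionEvent.ACTION_DOWN, x + s1, y + s2, 0)
    val up = MotionEvent.obtain(now, now + 50, MotionEvent.ACTION_UP, x + s1, y + s2, 0)
    target.dispatchTouchEvent(down)
    target.dispatchTouchEvent(up)
    down.recycle()
    up.recycle()
}
```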
In summary, the method of the embodiment of the application can automatically adapt to the application interface, and solves the problem of inconvenient operation caused by the fact that the virtual key position design on the application interface is not adapted to the screen of the electronic device.
It should be noted that, in the adaptation processing method for interface display provided in the embodiment of the present application, the execution subject may be an adaptation processing device for interface display, or a control module for executing the adaptation processing method for interface display in the adaptation processing device for interface display. In the embodiment of the present application, an adaptation processing method for an interface display executed by an adaptation processing device for interface display is taken as an example, and the adaptation processing device for interface display provided in the embodiment of the present application is described.
Fig. 7 is a block diagram of an adaptation processing apparatus for interface display according to an embodiment of the present application. The adaptation processing apparatus for interface display shown in fig. 7 includes a first receiving module 710 and a display module 720.
A first receiving module 710, configured to receive a first input of a user when the first interface is displayed in the first area;
a display module 720, configured to display a second interface in a second area in response to the first input;
the first interface comprises S first virtual keys, the second interface comprises an image corresponding to a third area in the first interface, and the third area is an area corresponding to N first virtual keys among the S first virtual keys; the second interface comprises second virtual keys, and an association relationship is established between the second virtual keys and the N first virtual keys; S and N are positive integers, and S ≥ N ≥ 1.
Optionally, the apparatus further comprises:
the second receiving module is used for receiving a second input of the user to the second interface;
the first processing module is used for responding to the second input and triggering the target virtual key;
and the target virtual key is a second virtual key associated with the first virtual key corresponding to the second input in the second interface.
Optionally, the display module comprises:
the first obtaining submodule is used for obtaining an image of the first interface;
the identification submodule is used for identifying S first virtual keys in the image of the first interface;
the second obtaining submodule is used for obtaining initial images of N first virtual keys in the S first virtual keys in the image of the first interface;
the first processing submodule is used for synthesizing the initial image and the first interface processed according to the first display proportion to obtain the second interface;
a first display sub-module, used for displaying the second interface in the second area.
Optionally, the display module comprises:
the second processing submodule is used for dividing the first interface and obtaining an image of the third area;
the third processing submodule is used for merging the image of the third area with the first interface processed according to the first display proportion to obtain the second interface;
and the second display submodule is used for displaying the second interface in the second area.
Optionally, the apparatus further comprises:
a third receiving module, configured to receive a third input of a remaining part obtained after the first interface is divided by the user, where the remaining part is a part of the first interface other than the third area;
a second processing module to conceal the remaining portion in response to the third input.
Optionally, the apparatus further comprises:
a fourth receiving module, configured to receive a fourth input of the user when the second interface is displayed;
and the third processing module is used for responding to the fourth input and adjusting the size or the position of the image corresponding to the third area of the first interface in the second interface.
With the above apparatus, after a first input of a user is received, a second interface can be displayed in a second area in response to the first input. Because the second interface includes the images of the areas corresponding to the N first virtual keys in the first interface and the second virtual keys associated with the N first virtual keys, the associated second virtual keys can be triggered through the images of the areas corresponding to the first virtual keys without operating the second virtual keys directly, which avoids the inconvenience caused by directly operating the virtual keys of the application interface and improves the convenience of the electronic device.
The adaptation processing device of the interface display in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a PDA, or the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like, and the embodiments of the present application are not particularly limited.
The adaptation processing device of the interface display in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The adaptation processing device for interface display provided in the embodiment of the present application can implement each process implemented in the method embodiments of fig. 1 to fig. 6, and is not described here again to avoid repetition.
Optionally, as shown in fig. 8, an electronic device 800 is further provided in this embodiment of the present application, and includes a processor 801, a memory 802, and a program or an instruction stored in the memory 802 and executable on the processor 801, where the program or the instruction is executed by the processor 801 to implement each process of the foregoing embodiment of the interface display adaptation processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present application.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 910 through a power management system, so as to manage charging, discharging, and power consumption management functions through the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description thereof is omitted.
The input unit 904 is configured to receive a first input of a user;
a display unit 906 for displaying a second interface in a second area in response to the first input;
the first interface comprises S first virtual keys, the second interface comprises an image corresponding to a third area in the first interface, and the third area is an area corresponding to N first virtual keys among the S first virtual keys; the second interface comprises second virtual keys, and an association relationship is established between the second virtual keys and the N first virtual keys; S and N are positive integers, and S ≥ N ≥ 1.
After the electronic device receives the first input of the user, the second interface can be displayed in the second area in response to the first input. Because the second interface includes the images of the areas corresponding to the N first virtual keys in the first interface and the second virtual keys associated with the N first virtual keys, the associated second virtual keys can be triggered through the images of the areas corresponding to the first virtual keys without operating the second virtual keys directly, which avoids the inconvenience caused by directly operating the virtual keys of the application interface and improves the convenience of the electronic device.
Optionally, the input unit 904 is further configured to receive a second input to the second interface by the user;
a processor 910, configured to trigger a target virtual key in response to the second input;
wherein the target virtual key is a second virtual key associated with the first virtual key corresponding to the second input in the second interface.
Optionally, the processor 910 is further configured to obtain an image of the first interface; identifying S first virtual keys in the image of the first interface; acquiring initial images of N first virtual keys in the S first virtual keys in the image of the first interface; synthesizing the initial image and the first interface processed according to the first display proportion to obtain the second interface;
the display unit 906 is further configured to display the second interface in the second area.
Optionally, the processor 910 is further configured to divide the first interface and obtain an image of the third area; merging the image of the third area with the first interface processed according to the first display proportion to obtain the second interface;
the display unit 906 is further configured to display the second interface in the second area.
Optionally, the input unit 904 is further configured to receive a third input of a remaining part obtained after the first interface is divided by the user, where the remaining part is a part of the first interface except for the third area;
a processor 910, further configured to conceal the remaining portion in response to the third input.
Optionally, the input unit 904 is further configured to receive a fourth input of the user when the second interface is displayed;
the processor 910 is further configured to, in response to the fourth input, adjust a size or a position of an image in the second interface corresponding to a third area of the first interface.
It should be understood that, in the embodiment of the present application, the input unit 904 may include a graphics processing unit (GPU) 9041 and a microphone 9042, and the graphics processing unit 9041 processes image data of a still picture or a video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071 is also referred to as a touch screen. The touch panel 9071 may include two parts: a touch detection device and a touch controller. The other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 909 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 910 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may not be integrated into the processor 910.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the interface display adaptation processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the embodiment of the adaptation processing method for interface display, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order, depending on the functionality involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the above embodiment method can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better embodiment. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), and including instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the scope of the invention as defined by the appended claims.

Claims (10)

1. An adaptation processing method for interface display is characterized by comprising the following steps:
receiving a first input of a user under the condition that a first interface is displayed in a first area;
displaying a second interface in a second area in response to the first input;
the first interface comprises S first virtual keys, the second interface comprises an image corresponding to a third area in the first interface, and the third area is an area corresponding to N first virtual keys among the S first virtual keys; the second interface comprises second virtual keys, and an association relationship is established between the second virtual keys and the N first virtual keys; S and N are positive integers, and S ≥ N ≥ 1.
2. The method according to claim 1, wherein after the displaying the second interface in the second area in response to the first input, the method further comprises:
receiving a second input of the user to the second interface;
triggering a target virtual key in response to the second input;
and the target virtual key is a second virtual key associated with the first virtual key corresponding to the second input in the second interface.
3. The method of claim 1, wherein said displaying the second interface in the second area comprises:
acquiring an image of the first interface;
identifying S first virtual keys in the image of the first interface;
acquiring initial images of N first virtual keys in the S first virtual keys in the image of the first interface;
synthesizing the initial image and the first interface processed according to the first display proportion to obtain the second interface;
displaying the second interface in the second area.
4. The method of claim 1, wherein said displaying the second interface in the second area comprises:
dividing the first interface and obtaining an image of the third area;
merging the image of the third area with the first interface processed according to the first display proportion to obtain the second interface;
displaying the second interface in the second area.
5. The method according to claim 4, wherein after the dividing the first interface and obtaining the image of the third area, the method further comprises:
receiving a third input of the user on a remaining part obtained after the first interface is divided, wherein the remaining part is a part of the first interface except the third area;
concealing the remaining portion in response to the third input.
6. The method according to claim 1, wherein after the displaying the second interface in the second area in response to the first input, the method further comprises:
receiving a fourth input of the user while the second interface is displayed;
in response to the fourth input, adjusting a size or a position of an image in the second interface corresponding to a third area of the first interface.
7. An adaptation processing apparatus for interface display, comprising:
the first receiving module is used for receiving a first input of a user under the condition that the first interface is displayed in the first area;
the display module is used for responding to the first input and displaying a second interface in a second area;
the first interface comprises S first virtual keys, the second interface comprises an image corresponding to a third area in the first interface, and the third area is an area corresponding to N first virtual keys among the S first virtual keys; the second interface comprises second virtual keys, and an association relationship is established between the second virtual keys and the N first virtual keys; S and N are positive integers, and S ≥ N ≥ 1.
8. The apparatus of claim 7, further comprising:
the second receiving module is used for receiving a second input of the user to the second interface;
the first processing module is used for responding to the second input and triggering the target virtual key;
and the target virtual key is a second virtual key associated with the first virtual key corresponding to the second input in the second interface.
9. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the method of adapting an interface display according to any one of claims 1 to 6.
10. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the adaptation processing method of the interface display according to any one of claims 1 to 6.
CN202110986863.0A 2021-08-26 2021-08-26 Interface display adaptation processing method and device and electronic equipment Pending CN113655929A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110986863.0A CN113655929A (en) 2021-08-26 2021-08-26 Interface display adaptation processing method and device and electronic equipment
PCT/CN2022/113628 WO2023025060A1 (en) 2021-08-26 2022-08-19 Interface display adaptation processing method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110986863.0A CN113655929A (en) 2021-08-26 2021-08-26 Interface display adaptation processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113655929A true CN113655929A (en) 2021-11-16

Family

ID=78482120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110986863.0A Pending CN113655929A (en) 2021-08-26 2021-08-26 Interface display adaptation processing method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN113655929A (en)
WO (1) WO2023025060A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114217732A (en) * 2021-12-13 2022-03-22 深圳Tcl新技术有限公司 Display page switching method and device, storage medium and display equipment
CN114510194A (en) * 2022-01-30 2022-05-17 维沃移动通信有限公司 Input method, input device, electronic equipment and readable storage medium
WO2023025060A1 (en) * 2021-08-26 2023-03-02 维沃移动通信有限公司 Interface display adaptation processing method and apparatus, and electronic device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130017241A (en) * 2011-08-10 2013-02-20 삼성전자주식회사 Method and apparauts for input and output in touch screen terminal
CN105094609A (en) * 2015-07-20 2015-11-25 小米科技有限责任公司 Method and device for achieving key operation in single-handed mode
CN109224436B (en) * 2018-08-28 2021-12-14 努比亚技术有限公司 Virtual key definition method based on game interface, terminal and storage medium
CN111651109A (en) * 2020-05-11 2020-09-11 广州视源电子科技股份有限公司 Display interface layout adjusting method and device, electronic equipment and storage medium
CN112035052A (en) * 2020-09-27 2020-12-04 深圳市恒晋升科技有限公司 Computer device and method for operating game on touch computer device
CN112306351B (en) * 2020-10-30 2022-05-13 腾讯科技(深圳)有限公司 Virtual key position adjusting method, device, equipment and storage medium
CN113655929A (en) * 2021-08-26 2021-11-16 维沃移动通信有限公司 Interface display adaptation processing method and device and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023025060A1 (en) * 2021-08-26 2023-03-02 维沃移动通信有限公司 Interface display adaptation processing method and apparatus, and electronic device
CN114217732A (en) * 2021-12-13 2022-03-22 深圳Tcl新技术有限公司 Display page switching method and device, storage medium and display equipment
CN114510194A (en) * 2022-01-30 2022-05-17 维沃移动通信有限公司 Input method, input device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2023025060A1 (en) 2023-03-02

Similar Documents

Publication Publication Date Title
US20200204738A1 (en) Photographing method and mobile terminal
CN112135181B (en) Video preview method and device and electronic equipment
CN113655929A (en) Interface display adaptation processing method and device and electronic equipment
CN112162665B (en) Operation method and device
CN112099707A (en) Display method and device and electronic equipment
WO2022121790A1 (en) Split-screen display method and apparatus, electronic device, and readable storage medium
CN112433693B (en) Split screen display method and device and electronic equipment
CN113703624A (en) Screen splitting method and device and electronic equipment
CN112911147A (en) Display control method, display control device and electronic equipment
CN113794795A (en) Information sharing method and device, electronic equipment and readable storage medium
CN112783406B (en) Operation execution method and device and electronic equipment
CN112099714B (en) Screenshot method and device, electronic equipment and readable storage medium
CN112399010B (en) Page display method and device and electronic equipment
CN111796746A (en) Volume adjusting method, volume adjusting device and electronic equipment
CN114779977A (en) Interface display method and device, electronic equipment and storage medium
CN115291778A (en) Display control method and device, electronic equipment and readable storage medium
CN112148175B (en) Notification display mode setting method and device, electronic equipment and storage medium
CN114995713A (en) Display control method and device, electronic equipment and readable storage medium
CN114115639A (en) Interface control method and device, electronic equipment and storage medium
CN113703630A (en) Interface display method and device
CN112165584A (en) Video recording method, video recording device, electronic equipment and readable storage medium
CN113253884A (en) Touch method, touch device and electronic equipment
CN111857496A (en) Operation execution method and device and electronic equipment
CN111880702A (en) Interface switching method and device and electronic equipment
US20170168696A1 (en) Method and electronic device for adjusting video window based on multi-point control

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination