CN117687549A - Interaction method based on keys and electronic equipment

Info

Publication number: CN117687549A
Authority: CN (China)
Prior art keywords: key, interface, electronic device, screen, displaying
Legal status: Pending
Application number: CN202311132090.5A
Other languages: Chinese (zh)
Inventor: 齐宁
Current Assignee: Honor Device Co Ltd
Original Assignee: Honor Device Co Ltd

Classifications

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a key-based interaction method and an electronic device. The electronic device is provided with physical keys on its side frames, and various operations acting on the physical keys can control the electronic device in different ways. When it is inconvenient to touch the screen (for example, with wet hands or while wearing thick gloves in winter), the user can conveniently control the electronic device with the physical keys on the side frame, without any touch interaction with the screen. The scheme makes full use of the side portion of the electronic device and extends the functions of the side keys.

Description

Interaction method based on keys and electronic equipment
Technical Field
The application relates to the technical field of terminals, in particular to a key-based interaction method and electronic equipment.
Background
Electronic devices such as mobile phones and tablets are provided with input devices such as touch screens, as well as a small number of physical keys such as the power key and volume keys. The user inputs instructions to the electronic device through these input devices to operate it. However, the touch screen cannot be used when the user's hands are wet or gloved, and the few physical keys can only control a small number of functions of the electronic device.
Disclosure of Invention
The application provides a key-based interaction method and an electronic device, allowing a user to conveniently control the electronic device using physical keys on a side frame, without touch interaction between the user and the screen.
In a first aspect, a key-based interaction method is provided and applied to an electronic device; a frame of the electronic device is provided with a first key, and the first key is a physical key. The method may include: receiving a first operation acting on the first key, the first operation including a sliding operation on a first face of the first key; and controlling, in response to the first operation, an interface element displayed by the electronic device.
By implementing the method of the first aspect, a user can conveniently control the electronic device using the physical keys on the side frame, without touch interaction with the screen. The method allows the electronic device to be controlled when it is inconvenient for the user to touch the screen (for example, with wet hands or while wearing thick gloves in winter), makes full use of the side portion of the electronic device, and extends the functions of the side keys.
The direction of the sliding operation on the first face of the first key is not limited; it may be, for example, up, down, forward, or backward. Up may refer to the direction from the bottom of the screen toward the top, with down the opposite; backward may refer to the direction from the screen toward the rear housing, with forward the opposite.
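As an illustration only, the following Kotlin sketch shows one way a sliding operation detected on the first face of the first key could be dispatched to control a displayed interface element. All names (SlideDirection, KeySlideEvent, InterfaceController) and the distance-to-pixels scale are hypothetical, not taken from the application.

```kotlin
// Hypothetical sketch: dispatching a slide on the side key to interface control.
enum class SlideDirection { UP, DOWN, FORWARD, BACKWARD }

data class KeySlideEvent(val direction: SlideDirection, val distanceMm: Float)

interface InterfaceController {
    fun scrollVertically(amountPx: Int)   // e.g., switch page content up/down
    fun scrollHorizontally(amountPx: Int) // e.g., switch pages/pictures forward/backward
}

class KeyInteractionHandler(private val controller: InterfaceController) {
    // Scale factor from slide distance on the key to on-screen movement (assumption).
    private val pxPerMm = 40

    fun onKeySlide(event: KeySlideEvent) {
        val px = (event.distanceMm * pxPerMm).toInt()
        when (event.direction) {
            SlideDirection.UP -> controller.scrollVertically(px)
            SlideDirection.DOWN -> controller.scrollVertically(-px)
            SlideDirection.FORWARD -> controller.scrollHorizontally(px)
            SlideDirection.BACKWARD -> controller.scrollHorizontally(-px)
        }
    }
}
```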
With reference to the first aspect, in some embodiments, before receiving the first operation, the electronic device further displays a first interface, and controlling the interface element displayed by the electronic device specifically includes: switching the first interface to a second interface. This scheme enables switching of page content. The first interface may be an interface including any one of the following: a web page, an electronic book, a chat log, a short video, a picture, a desktop, and the like.
With reference to the first aspect, in some embodiments, the electronic device may further display a third interface, and controlling the interface element displayed by the electronic device specifically includes: controlling the position or size of an interface element in the third interface displayed by the electronic device.
In combination with the above embodiment, the position or size of the interface element in the third interface displayed by the electronic device may be controlled in the following ways (a code sketch follows this list):

when the third interface is a video playback interface, controlling the interface element in it includes any one of: adjusting the video playback progress bar, the screen brightness bar, or the video volume bar;

when the third interface is an audio playback interface, controlling the interface element in it includes any one of: adjusting the audio playback progress bar or the audio volume bar;

when the third interface includes a first scalable element, where the first scalable element includes a map, a picture, or a video, controlling the interface element in it includes: zooming the first scalable element in or out;

when the third interface is a shooting preview interface, controlling the interface element in it includes any one of: adjusting the selected focal length, adjusting the selected aperture, or adjusting the selected shooting mode;

when the third interface is a text editing interface, controlling the interface element in it includes adjusting the position of the cursor in the text editing interface;

when the third interface includes a prompt box of a first notification message, the first operation includes pressing the first key while sliding on its first face, and controlling the interface element in it includes: displaying a floating window showing the detailed content of the first notification message, or stopping displaying the prompt box of the first notification message.
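The following Kotlin sketch illustrates, under assumed names, how one slide gesture on the first key might be interpreted differently depending on which third interface is currently displayed. The enum values and stub actions are hypothetical, not taken from the application.

```kotlin
// Illustrative only: one slide gesture, interpreted per currently displayed interface.
enum class ThirdInterface { VIDEO_PLAYBACK, AUDIO_PLAYBACK, SCALABLE_CONTENT, SHOOTING_PREVIEW, TEXT_EDITING }

fun handleSlideOnFirstKey(current: ThirdInterface, delta: Float) {
    when (current) {
        ThirdInterface.VIDEO_PLAYBACK -> adjustProgressBar(delta)     // or brightness/volume bar
        ThirdInterface.AUDIO_PLAYBACK -> adjustProgressBar(delta)     // or audio volume bar
        ThirdInterface.SCALABLE_CONTENT -> zoomScalableElement(delta) // map, picture, or video
        ThirdInterface.SHOOTING_PREVIEW -> adjustFocalLength(delta)   // or aperture / shooting mode
        ThirdInterface.TEXT_EDITING -> moveCursor(delta.toInt())
    }
}

// Stub actions standing in for the real UI operations.
fun adjustProgressBar(delta: Float) = println("progress += $delta")
fun zoomScalableElement(delta: Float) = println("scale *= ${1 + delta / 100}")
fun adjustFocalLength(delta: Float) = println("focal length += $delta")
fun moveCursor(cells: Int) = println("cursor moved by $cells cells")
```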
With reference to the first aspect, in some embodiments, before receiving the first operation on the first key, the method may further include: displaying a fourth interface, the fourth interface including a first interactive element and a second interactive element; receiving a second operation acting on the first key; and highlighting the first interactive element in the fourth interface in response to the second operation. Controlling the interface element displayed by the electronic device then includes: stopping highlighting the first interactive element and highlighting the second interactive element. This scheme can switch the operation focus using the first key.
The first interactive element may be a default initial operating focus of the electronic device.
In some embodiments, after highlighting the second interactive element, the electronic device may further receive a pressing operation on the first key and, in response, execute the function represented by the second interactive element. This scheme can use the first key to trigger the function represented by the operation focus.
With reference to the first aspect, in some embodiments, the method may further include: displaying a prompt box of a second notification message; receiving a pressing operation on the first key; and displaying a user interface of a first application, where the detailed content of the second notification message is displayed in that user interface, and the first application is the provider of the second notification message. This scheme can use the first key to view the details of a notification message.
With reference to the first aspect, in some embodiments, the method may further include: displaying a fifth interface; receiving a third operation acting on the first key; in response to the third operation, starting the front camera and collecting, through the front camera, a first image containing human eyes; displaying an indicator on the fifth interface, or highlighting a third interactive element in the fifth interface, where both the indicator and the third interactive element are located at the gaze point of the human eyes on the screen, and the gaze point is determined according to the first image; receiving a pressing operation on the first key; and executing the function represented by the third interactive element. This scheme can determine the operation focus through eye tracking and trigger the function represented by the operation focus through the first key.
With reference to the first aspect, in some embodiments, the method may further include: displaying a sixth interface, the sixth interface including a second scalable element, where the second scalable element includes a map, a picture, or a video; receiving a third operation acting on the first key; in response to the third operation, starting the front camera and collecting, through the front camera, a second image containing human eyes; displaying an indicator on the sixth interface, where the indicator is located at the gaze point of the human eyes on the screen, and the gaze point is determined according to the second image; receiving a sliding operation on the first face of the first key; and zooming the second scalable element in or out centered on the gaze point. This scheme can determine a center point through eye tracking and scale a scalable element in the user interface through the first key.
In combination with the above two embodiments, the electronic device may further activate a sensor to collect data in response to the third operation, the data collected by the sensor being used to determine the gaze point of the human eyes on the screen. The sensor data can reflect the state of the electronic device; determining the gaze point in combination with the posture of the electronic device makes the determined gaze point more accurate.
With reference to the first aspect, in some embodiments, the electronic device may further receive a fourth operation acting on the first key, where the fourth operation includes a pressing operation on the first key, and execute a first function in response to the fourth operation; when any one or more of the position, duration, intensity, and count of the presses in the fourth operation differ, the first function differs. In other words, different operations acting on the first key may trigger the electronic device to perform different functions.
The first function may include, but is not limited to: enabling the silent mode, enabling the airplane mode, recording the screen, taking a screenshot, starting a camera application, turning on the flashlight, adjusting the system volume, lighting up the screen, locking the screen, powering off, and the like.
With reference to the first aspect, in some embodiments, the electronic device further outputs vibration feedback in response to the first operation. The electronic device may also output different vibration feedback for different types of first operations.
In some embodiments, the electronic device may output the vibration feedback via a motor disposed below the first key, so that the feedback acts directly on the finger touching the key.
With reference to the first aspect, in some embodiments, the first key is disposed on a first frame of the electronic device, and a first surface of the first key is parallel to the first frame.
The first frame may be any frame of the electronic device, such as the left, right, upper, or lower frame. The first face may be the face of the first key facing away from the electronic device, and it may be flat or curved.
With reference to the first aspect, in some embodiments, a pressure sensor is disposed below the first key, and the pressure sensor is configured to detect an operation acting on the first key.
With reference to the first aspect, in some embodiments, a plurality of first keys are provided. When the electronic device is provided with a plurality of first keys, different first keys can trigger different functions. For example, the first key on the left frame may be used to switch page content, and the first key on the right frame may be used to switch the operation focus. In other embodiments, different first keys may trigger the same function, so that the user can choose any one of them to trigger the function to be executed.
With reference to the first aspect, in some embodiments, the electronic device is further provided with a second key, where the second key is a physical key; the first key is used for receiving sliding operation in a first direction, and the second key is used for receiving sliding operation in a second direction, and the first direction is perpendicular to the second direction.
The sliding operation in the first direction on the first key and the sliding operation in the second direction on the second key may trigger the electronic device to execute different functions. This corresponds to configuring different functions for the first key and the second key.
In combination with the above embodiment, the second key may be disposed on a frame of the electronic device, or the second key may be disposed on a back surface of the electronic device, where the back surface of the electronic device is opposite to the display surface of the electronic device.
In combination with the above embodiment, the first key may be disposed on a right frame of the electronic device, and the second key may be disposed on a left frame of the electronic device.
In a second aspect, there is provided an electronic device comprising: a first key, a memory, one or more processors; the first key is a physical key arranged on a frame of the electronic equipment; the first key, memory, and one or more processors are coupled, the memory is for storing computer program code, the computer program code comprising computer instructions, the one or more processors invoking the computer instructions to cause the electronic device to perform the method as provided in the first aspect or any implementation of the first aspect.
In a third aspect, there is provided a computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform a method as provided in the first aspect or any implementation of the first aspect.
In a fourth aspect, there is provided a computer program product for, when run on a computer, causing the computer to perform the method as provided in the first aspect or any of the embodiments of the first aspect.
In a fifth aspect, there is provided a chip system comprising one or more processors configured to invoke computer instructions to cause performance of a method as provided in the first aspect or any implementation of the first aspect.
Drawings
FIGS. 1A to 1C show an electronic device provided in embodiments of the present application;
FIGS. 2A-2D illustrate user interfaces for switching page content via keys;
FIGS. 3A-3G illustrate user interfaces for implementing specific fine-grained operations via keys;
FIGS. 4A-4B illustrate user interfaces for switching and selecting an operation focus via keys;
FIG. 5A is a schematic view of a scenario in which a user uses an electronic device;
FIGS. 5B-5C illustrate user interfaces for controlling an electronic device via eye tracking and keys;
FIG. 6 is a flowchart of a key-based interaction method provided in an embodiment of the present application;
FIG. 7 is a block diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 8 shows a software architecture of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of this application provide an electronic device with physical keys and a human-computer interaction method suitable for the electronic device, providing users with a convenient human-computer interaction experience. The embodiments are described below with reference to the accompanying drawings. The drawings presented herein are only examples and do not limit the scope of this application.
The electronic device provided in this application is an intelligent terminal device and may be of various types; the specific type is not limited in this application. For example, the electronic device may be a mobile phone, a tablet computer, a desktop computer with a touch-sensitive surface or touch panel, a laptop computer, a handheld computer, a notebook computer, a smart screen, a wearable device (e.g., a smart watch or smart band), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, an in-vehicle device, a smart headset, a game console, an internet of things (IoT) device, or a smart home device such as a smart water heater, smart light fixture, or smart air conditioner.
Fig. 1A-1B schematically illustrate an electronic device provided in an embodiment of the present application.
As shown in fig. 1A and 1B, the electronic device 100 may include: screen 110, back case 120, side frame 130, keys 140, power key 150.
The screen 110 may be provided with a front camera 111, an earpiece (not shown), a proximity light sensor (not shown), and the like. The front camera 111 may include a depth camera that, based on 3D time-of-flight (3D ToF) imaging, senses depth, captures the eye position in real time, and so on.
The rear case 120 supports the screen 110 and constitutes the non-display surface of the device. The material of the rear case 120 may be metallic, such as an aluminum-magnesium alloy, or non-metallic. A rear camera may be disposed on the rear case 120. In addition to supporting the screen, the rear case 120 also protects internal components of the electronic device, such as a flexible printed circuit (FPC). The components included in the electronic device 100 are described in the following embodiments and are not detailed here.
The rear housing 120 and the side frames 130 may be integral or may be two separate parts.
The side frame 130 surrounds the screen 110. The side frame 130 may include an upper frame 130A, a lower frame 130B, a left frame 130C, and a right frame 130D; the four frames may be separate pieces or portions of a single frame. The material of the side frame 130 may be a metal material such as stainless steel, a non-metal material such as plastic, or a combination of metal and non-metal materials, such as a combination of titanium and a non-metal material. The lower frame may be provided with a charging port, a microphone, and the like.
The key 140 and the power key 150 are physical keys, also called entity keys, and may be made of plastic, metal, or other materials. The power key 150 is a pressure-sensitive key, below which a pressure sensor may be disposed. The power key 150 is used to light up or turn off the screen 110. The power key 150 may be disposed in the upper middle of the right frame, below the key 140.
Referring to fig. 1C, a side view of the electronic device 100 from the right is shown. As shown in fig. 1C, the key 140 is located in the upper middle of the right frame 130D, and the power key 150 is located below the key 140. The key 140 has a certain width, narrower than the width of the right frame 130D, and is elongated in the up-down direction (i.e., a rectangular strip). The four sides of the key 140 are edge 140A, edge 140B, edge 140C, and edge 140D. Edge 140A is the side closest to the top of the screen 110, edge 140B the side closest to the bottom of the screen 110, edge 140C the side closest to the screen 110, and edge 140D the side closest to the rear case 120. The key 140 may also be implemented in other shapes, not limited to the strip shape; this application is not limited in this respect.
The keys 140 may be disposed at other positions, for example, on the rear case 120, without being limited to the frame of the electronic device 100. For example, the key 140 may be disposed on the back of the electronic device 100, where the back of the electronic device 100 is opposite to the display surface of the electronic device 100, and the display surface refers to the plane where the screen is located.
One face of the key 140 remote from the electronic device 100 may be referred to as a first face. For example, when the key 140 is disposed on the right frame 130D as shown in fig. 1A and 1B, the first surface is a surface surrounded by the edge 140A, the edge 140B, the edge 140C, and the edge 140D. For another example, when the key 140 is disposed on the back surface of the electronic device 100, the first surface is a surface of the key 140 away from the electronic device 100. The first surface of the key 140 may be a plane or a curved surface, which is not limited herein.
The key 140 is a pressure-sensitive key. One or more pressure sensors may be disposed below the key 140; the pressure sensors sense pressure signals and convert them into electrical signals. There are many types of pressure sensors, such as resistive pressure sensors, electromagnetic pressure sensors (e.g., inductive, Hall, or eddy-current pressure sensors), capacitive pressure sensors, and the like. When a pressing operation acts on the key 140, the electronic device 100 can determine the intensity, position, duration, and count of the pressing based on the signals detected by the pressure sensors. The pressing intensity may be divided into a plurality of levels, for example three or more, which is not limited herein. The position on the key 140 may be classified into upper and lower, or into upper, middle, and lower, and so on. The press count refers to the number of presses within a short time window, for example 5 seconds; beyond this window, counting starts again.
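A minimal Kotlin sketch of how such press attributes might be classified from pressure-sensor readings. The thresholds, the three intensity levels, and all names are assumptions for illustration, not values from the application.

```kotlin
// Hypothetical classification of a press on the key from pressure-sensor samples.
enum class PressIntensity { LIGHT, MEDIUM, HEAVY }
enum class PressPosition { UPPER, MIDDLE, LOWER }

data class PressEvent(
    val intensity: PressIntensity,
    val position: PressPosition,
    val durationMs: Long,
    val count: Int, // presses within the counting window (e.g., 5 s)
)

fun classifyPress(peakForce: Float, normalizedY: Float, durationMs: Long, count: Int): PressEvent {
    val intensity = when {
        peakForce < 1.0f -> PressIntensity.LIGHT   // thresholds are assumed
        peakForce < 3.0f -> PressIntensity.MEDIUM
        else -> PressIntensity.HEAVY
    }
    val position = when {
        normalizedY < 0.33f -> PressPosition.UPPER // position along the key, 0 = top
        normalizedY < 0.66f -> PressPosition.MIDDLE
        else -> PressPosition.LOWER
    }
    return PressEvent(intensity, position, durationMs, count)
}
```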
The user may input a sliding operation on the key 140, i.e., the key 140 can receive a sliding operation of the user's finger. The sliding operation acts specifically on the first face of the key 140 and may be in at least four directions: up, down, forward, and backward. Up refers to the direction from edge 140B toward edge 140A; down, from edge 140A toward edge 140B; forward, from edge 140D toward edge 140C; and backward, from edge 140C toward edge 140D. The key 140 may receive sliding operations in more directions, not limited to these four. The electronic device 100 may calculate the pressure trend on the key 140 from the detection signals of the pressure sensors below it, thereby determining the sliding direction of the user operation detected on the key 140.
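The following sketch, reusing the SlideDirection enum from the earlier example, shows one plausible way to infer the sliding direction from the trend of the pressure centroid reported by the sensors; the coordinate conventions are assumptions, not specified by the application.

```kotlin
// Illustrative: inferring slide direction from the pressure-centroid trend.
// Assumed coordinates: y grows from edge 140B toward edge 140A,
// x grows from edge 140D toward edge 140C.
import kotlin.math.abs

data class Centroid(val x: Float, val y: Float)

fun inferSlideDirection(samples: List<Centroid>): SlideDirection? {
    if (samples.size < 2) return null
    val dx = samples.last().x - samples.first().x
    val dy = samples.last().y - samples.first().y
    return when {
        abs(dy) >= abs(dx) && dy > 0 -> SlideDirection.UP
        abs(dy) >= abs(dx) -> SlideDirection.DOWN
        dx > 0 -> SlideDirection.FORWARD
        else -> SlideDirection.BACKWARD
    }
}
```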
In the embodiment of the present application, different user operations acting on the key 140 may correspond to different operation instructions.
For example, the pressing operation applied to different positions of the key 140 may correspond to different operation instructions.
For another example, pressing operations of different intensities on the key 140 may correspond to different operation instructions.
For another example, the operations of the keys 140 with different pressing durations may correspond to different operation instructions.
For another example, the different number of pressing operations applied to the key 140 may correspond to different operation instructions.
For example, the sliding operation in different directions on the key 140 may correspond to different operation instructions.
Different combinations of the above user operations may also correspond to different operation instructions. A combination may consist of simultaneous operations or operations in a sequential order, such as sliding while pressing, or pressing for several seconds before inputting a sliding operation.
The correspondence between operations on the key 140 and operation instructions is described in the following embodiments. This correspondence may be set by default by the electronic device 100, for example preset at the factory. In some embodiments, the correspondence may be opened to user customization, so that users can conveniently set it according to their own needs.
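A hypothetical sketch of such a correspondence table, reusing the SlideDirection and PressIntensity types from the earlier sketches. The default bindings and instruction names are illustrative, not taken from the application.

```kotlin
// Hypothetical user-customizable mapping from key operations to instructions.
sealed class KeyOperation {
    data class Press(val intensity: PressIntensity, val count: Int = 1) : KeyOperation()
    data class Slide(val direction: SlideDirection) : KeyOperation()
    data class PressAndSlide(val direction: SlideDirection) : KeyOperation()
}

class KeyBindingTable {
    private val bindings = mutableMapOf<KeyOperation, String>()

    init { // factory defaults (assumed), which the user may later override
        bindings[KeyOperation.Slide(SlideDirection.UP)] = "switch_page_content"
        bindings[KeyOperation.Press(PressIntensity.LIGHT)] = "silent_mode"
        bindings[KeyOperation.PressAndSlide(SlideDirection.DOWN)] = "open_notification_floating_window"
    }

    fun rebind(op: KeyOperation, instruction: String) { bindings[op] = instruction } // user customization
    fun lookup(op: KeyOperation): String? = bindings[op]
}
```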
The electronic device 100 may only be provided with one key 140, and the one key 140 may be disposed, for example, in a position above the middle of the right frame 130D as shown in fig. 1B. The electronic device 100 may also be provided with a plurality of keys 140, for example, one key 140 may be provided on each of the left and right rims 130C and 130D.
In some embodiments, the electronic device 100 may be provided with both a volume button and a button 140, with the volume button being used to implement a function of adjusting volume, and the button 140 being used to implement some other function.
In some embodiments, the electronic device 100 may not provide a volume button, and the function of adjusting the volume may be implemented through the button 140.
In some embodiments, the electronic device 100 may not be provided with the power key 150, and the key 140 may be used to implement the function of turning off the screen 110.
A motor may be disposed below the key 140, under the key's pressure sensors, for outputting vibration feedback.
Next, how the electronic device 100 is controlled by the keys 140 is described.
User operations on the key 140 in this application can achieve the same functions as user operations currently applied to the screen 110 (such as sliding, tapping, two-finger pinch, and two-finger spread operations). When it is inconvenient to touch the screen 110 (for example, with wet hands or while wearing thick gloves in winter), the user can still operate the electronic device 100 using the key 140 on the side frame without touching the screen. This scheme makes full use of the side portion of the electronic device and extends the functions of the side keys.
The manner in which the electronic device 100 is controlled through the key 140 is described in detail below through several scenarios.
Scene 1
In this scenario, switching of page content is achieved through the keys 140.
If the page content extends vertically beyond the screen 110, the content beyond the screen 110 can be switched into view. Vertical refers to the top-to-bottom or bottom-to-top direction of the screen 110. Examples include: switching web page content when browsing a web page, turning pages when reading an electronic book, scrolling chat content when browsing a chat record, scrolling an application interface (such as a shopping or social interface), switching videos when watching short videos, scrolling a long picture to reveal more of it, and the like.
Fig. 2A shows a user interface for switching web page content by means of keys 140.
As shown in fig. 2A, the electronic device 100 displays a web page provided by a browser. After receiving an upward sliding operation on the key 140, it slides web page content not yet displayed into the screen 110 and slides content at the top of the screen 110 out of view. The distance the web page scrolls may be related to the sliding distance of the operation received on the key 140: the longer the slide on the key 140, the farther the web page scrolls. Not limited to sliding up, the electronic device 100 may also receive a downward sliding operation on the key 140 and scroll the web page down in response.
The up/down sliding operation of the key 140 in fig. 2A corresponds to the up/down sliding operation of the finger in the screen 110, both of which trigger the electronic device 100 to switch the web page content.
Fig. 2B shows a user interface for switching short videos via the key 140.

As shown in fig. 2B, the electronic device 100 first plays a short video provided by a video application; after receiving an upward sliding operation on the key 140, it switches to the next short video in response. Not limited to sliding up, the electronic device 100 may also receive a downward sliding operation on the key 140 and switch to the previous short video in response.

The up/down sliding operation on the key 140 in fig. 2B corresponds to an up/down sliding operation of a finger on the screen 110; both trigger the electronic device 100 to switch short videos.
If the page content extends laterally beyond the screen 110, the content beyond the screen 110 can likewise be switched into view. Lateral refers to the left-to-right or right-to-left direction of the screen 110. Examples include: switching pictures left or right when browsing pictures, turning pages when reading an electronic book, switching desktop pages left and right, switching pictures in a gallery, and the like.
Fig. 2C shows a user interface for switching desktops by key 140.
As shown in fig. 2C, the electronic device 100 displays the first page of the desktop; after receiving a forward sliding operation on the key 140, it switches to the second page of the desktop in response. Not limited to sliding forward, the electronic device 100 may also receive a backward sliding operation on the key 140 and switch to the previous page, for example the minus-one screen, in response.
The forward/backward sliding operation of the key 140 in fig. 2C corresponds to a left/right sliding operation of the finger in the screen 110, both of which trigger the electronic device 100 to switch the desktop page.
Fig. 2D shows a user interface for switching pictures by means of keys 140.
As shown in fig. 2D, the user interface displayed by the electronic device 100 includes a picture display area showing a first picture; after receiving a forward sliding operation on the key 140, the display area switches to a second picture in response. Not limited to sliding forward, the electronic device 100 may also receive a backward sliding operation on the key 140 and switch back to the previous picture in response.
The sliding operation of the key 140 forward/backward in fig. 2D corresponds to the sliding operation of the finger left/right in the picture display area, both triggering the electronic device 100 to switch pictures.
Scene 2
In this scenario, specific fine-grained operations are implemented through the key 140, for example: adjusting the volume, playback progress, screen brightness, or playback speed of a video in a video playback scenario; adjusting the playback progress, playback speed, or volume in an audio playback scenario (such as playing music or listening to audiobooks); adjusting the cursor position or operating on selected text in a text editing scenario; adjusting the focal length, aperture, or shooting mode in a shooting scenario; zooming a map when viewing a map; zooming a picture when viewing a picture; and performing various operations on a notification message when one is displayed.
Fig. 3A shows a user interface for adjusting video playback progress via keys 140.
As shown in fig. 3A, the video playback interface displayed by the electronic device 100 includes a video playback area, in which a video is played. Upon receiving a forward sliding operation on the key 140, the electronic device 100 advances the video playback progress in response. Not limited to sliding forward, the electronic device 100 may also receive a backward sliding operation on the key 140 and rewind the video playback progress in response.
The forward/backward sliding operation of the key 140 in fig. 3A corresponds to the leftward/rightward sliding operation of the finger in the video playing area, both of which trigger the electronic device 100 to adjust the video playing progress.
Alternatively, if the electronic device 100 receives an up/down sliding operation on the key 140 while displaying the video playing interface shown in fig. 3A, the video volume may be adjusted, or the screen brightness may be adjusted in response to the operation.
The electronic device 100 may display and adjust a video playing progress bar, a screen brightness bar, or a video volume bar in the video playing interface to prompt the user when adjusting the video playing progress, adjusting the video volume, or adjusting the screen brightness. As shown in fig. 3A, the electronic device 100 adjusts the play progress shown by the video play progress bar displayed in the video play area.
Similarly, in an audio playback scenario, such as when the electronic device 100 plays music or the audio of a book, the user may play the audio at a higher speed by long-pressing the key 140. The duration of the long press may be proportional to the audio playback speed. For another example, the user may trigger the electronic device 100 to adjust the audio playback progress by inputting a sliding operation on the key 140. In an audio playback scenario, whether the electronic device 100 is unlocked or locked, the user can operate the device through the key 140 without looking at it, which is simple and convenient.
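A minimal sketch of the proportionality described above; the 1x-3x range and the one-step-per-second granularity are assumptions for illustration.

```kotlin
// Illustrative: mapping long-press duration on the key to an audio playback speed.
fun playbackSpeedFor(pressDurationMs: Long): Float {
    val steps = (pressDurationMs / 1000).toInt()    // one step per second held (assumed)
    return (1.0f + 0.5f * steps).coerceAtMost(3.0f) // clamp to an assumed maximum of 3x
}
```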
Fig. 3B shows a user interface for zooming the map by means of keys 140.
As shown in fig. 3B, the electronic device displays a map interface; after receiving an upward sliding operation on the key 140, the electronic device 100 enlarges the displayed map in response. The electronic device 100 may zoom in centered on the center point of the display screen or on another point of the display screen. Not limited to sliding up, the electronic device 100 may also receive a downward sliding operation on the key 140 and shrink the map in response.
The up/down sliding operation of the key 140 in fig. 3B corresponds to a two-finger zoom-in/two-finger pinch operation of the finger in the screen 110, both of which trigger the electronic device 100 to zoom the map.
Fig. 3C shows a user interface for adjusting focus via keys 140.
As shown in fig. 3C, the electronic device starts the camera application, opens a camera (e.g., the rear camera), and brings up the focal length adjustment control. After receiving a forward sliding operation on the key 140, the electronic device 100 displays a focal length adjustment bar in the user interface and increases the focal length in response; thereafter, upon receiving a backward sliding operation on the key 140, the electronic device 100 decreases the focal length in response.
The sliding operation in forward/backward direction of the key 140 in fig. 3C corresponds to a sliding operation of a finger to the left/right direction on the focus adjustment control, both of which trigger the electronic device 100 to adjust the focus.
Fig. 3D shows a user interface for adjusting the cursor position by means of keys 140.
As shown in fig. 3D, the electronic device launches a note application, which provides a user interface containing a cursor. After receiving an upward sliding operation on the key 140, the electronic device 100 moves the cursor forward in response. The cursor can advance cell by cell; the total number of cells moved is related to the sliding distance of the operation: the longer the slide, the more cells the cursor moves. Not limited to sliding up, the electronic device 100 may also receive a downward sliding operation on the key 140 and move the cursor backward in response. While the cursor is moving, the few characters before and after it may be displayed enlarged, helping the user keep track of the cursor position.
The up/down sliding operation of the key 140 in fig. 3D corresponds to a sliding operation after the finger presses the cursor for a long time, both of which trigger the electronic device 100 to move the cursor.
Alternatively, not limited to the up/down sliding operation, the electronic device 100 may move the cursor forward/backward cell by cell in response to a forward/backward sliding operation on the key 140. In other embodiments, a forward/backward sliding operation on the key 140 triggers the electronic device 100 to move the cursor horizontally within a line, and an upward/downward sliding operation triggers it to move the cursor up or down between lines.
The key 140 may also be used to select text in a text editing interface. For example, referring to fig. 3D, after moving the cursor, the electronic device may detect a long-press operation on the key 140 and, in response, select a portion of text in the text editing interface. The selected text may be a piece of text near the cursor, such as text on both sides of the cursor, text before it, or text after it. As shown in fig. 3D, the text selected by the electronic device 100 is highlighted, and adjustment handles at its beginning and end allow the user to drag and adjust the selected content. Thereafter, the user may input a double-click operation on the key 140, in response to which the electronic device 100 may copy the selected text in the text editing interface. Further operations on the selected text may also be provided; for example, triple-clicking the key 140 may trigger the electronic device 100 to cut the selected text, and a long press on the key 140 may trigger it to translate the selected text.
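A hypothetical mapping of the key gestures described in this scenario to text-editing actions; the gesture strings and the TextEditor interface are illustrative, not from the application.

```kotlin
// Illustrative mapping of key gestures to text-editing actions.
interface TextEditor {
    fun selectAroundCursor() // select a piece of text near the cursor
    fun copySelection()
    fun cutSelection()
}

fun onTextEditingKeyGesture(editor: TextEditor, gesture: String) = when (gesture) {
    "long_press" -> editor.selectAroundCursor()
    "double_click" -> editor.copySelection()
    "triple_click" -> editor.cutSelection()
    else -> Unit // unrecognized gestures are ignored
}
```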
Fig. 3E-3G illustrate user interfaces that perform operations on notification messages via keys 140.
Fig. 3E shows a user interface for opening a floating window corresponding to a notification message via keys 140. As shown in fig. 3E, the electronic device displays a prompt box of the notification message on the top of the screen 110, and then receives an operation of continuously pressing the key 140 while sliding down on the key 140, and in response to the operation, the electronic device displays a floating window containing the details of the notification message in the screen 110. In fig. 3E, the operation of continuously pressing the key 140 and simultaneously sliding down on the key 140 corresponds to the operation of pressing the prompt box of the notification message by a finger and simultaneously sliding down, both of which trigger the electronic device 100 to open the floating window of the notification message.
Fig. 3F shows an application interface corresponding to the notification message being opened by the key 140. As shown in fig. 3F, the electronic device displays a prompt box of the notification message on top of the screen 110, and then receives an operation of pressing the key 140, in response to which the electronic device 100 starts an application that generates the notification message, and displays the details of the notification message in a user interface provided by the application. The operation of pressing the key 140 in fig. 3F corresponds to the operation of tapping or clicking the prompt box of the notification message by a finger, both of which trigger the electronic device 100 to start an application that generates the notification message, and display the details of the notification message in a user interface provided by the application.
Fig. 3G shows a user interface for dismissing a notification message via the key 140. As shown in fig. 3G, the electronic device displays a prompt box of the notification message at the top of the screen 110 and then receives an operation of pressing the key 140 continuously while sliding up on it; in response, the electronic device stops displaying the prompt box in the screen 110. In fig. 3G, pressing the key 140 continuously while sliding up on it corresponds to a finger sliding up on the prompt box; both trigger the electronic device 100 to dismiss the prompt box of the notification message.
A notification message is usually displayed at the top of the screen, where it is not easy for the user's finger to reach. By operating on the notification message through the key 140, the user can process it conveniently and quickly without reaching for the top of the screen.
Not limited to the top of the screen, in some embodiments the electronic device 100 may also display the prompt box of a notification message in the middle or upper-middle of the screen, for example on the lock screen. In this case, similarly, the electronic device 100 may open the floating window of the notification message after receiving an operation of pressing the key 140 and sliding down; display the detailed content page of the notification message after receiving a pressing operation on the key 140; and stop displaying the prompt box after receiving an operation of pressing the key 140 and sliding up/left.
Scene 3
In this scenario, selection and switching of the operation focus are achieved through the key 140. The operation focus may be an interactable element, selectable by the user, in a user interface displayed by the electronic device 100. An interactable element is an element for which, upon an input operation (e.g., a click or long press), the electronic device 100 gives feedback, such as launching the application indicated by the element or opening the element.

When the electronic device displays a user interface containing interactable elements, for example a desktop or a menu, selection and switching of the operation focus can be achieved through the key 140.
Fig. 4A shows a user interface for switching and selecting an operational focus in a desktop via keys 140.
As shown in fig. 4A, the electronic device 100 displays a desktop and highlights an initial operation focus in it, for example a memo icon. Thereafter, the electronic device 100 may detect a downward sliding operation on the key 140 and, in response, find a new operation focus downward from the memo icon in the vertical direction (e.g., the icon of the camera application below the memo icon) and highlight it. After the camera application icon is selected as the operation focus, the electronic device 100 may detect a pressing operation on the key 140 and start the camera application in response. The downward sliding and pressing operations on the key 140 shown in fig. 4A correspond to a finger tapping the camera application icon; both trigger the electronic device to start the camera application.
Fig. 4B shows a user interface for switching and selecting an operating mode by means of a key 140.
As shown in fig. 4B, the electronic device 100 displays a menu bar containing a plurality of operation modes and highlights an initial operation focus in it, for example the option for the do-not-disturb mode. Thereafter, the electronic device 100 may detect a downward sliding operation on the key 140 and, in response, find a new operation focus downward from the do-not-disturb option in the vertical direction (e.g., the option for the personal mode) and highlight it. After the personal-mode option is selected as the operation focus, the electronic device 100 may detect a pressing operation on the key 140 and enable the personal mode in response. The downward sliding and pressing operations on the key 140 shown in fig. 4B correspond to a finger tapping the personal-mode option; both trigger the electronic device to enable the personal mode.
The initial operation focus may be set by default by the electronic device 100; for example, it may be the first interactable element from top to bottom in the currently displayed user interface. Ways of highlighting the operation focus may include, but are not limited to: adding a background color, brightening, adding a border, and the like.
In some embodiments, the up/down sliding operation on the key 140 has the same effect as the forward/backward sliding operation: both trigger the electronic device 100 to switch the operation focus sequentially from top to bottom and left to right, or in the reverse order. In other embodiments, an up/down sliding operation on the key 140 switches the operation focus in the vertical direction, and a forward/backward sliding operation switches it in the lateral direction.
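A minimal Kotlin sketch of the focus traversal described above, assuming elements are ordered top-to-bottom and then left-to-right; all names are illustrative.

```kotlin
// Illustrative focus traversal: a slide on the key moves the highlighted focus.
data class Element(val name: String, val row: Int, val col: Int)

class FocusManager(elements: List<Element>) {
    private val ordered = elements.sortedWith(compareBy({ it.row }, { it.col }))
    private var index = 0 // default initial focus: first element from the top

    val focused: Element get() = ordered[index]

    fun next(): Element {      // e.g., slide down on the key
        index = (index + 1) % ordered.size
        return focused
    }

    fun previous(): Element {  // e.g., slide up on the key
        index = (index - 1 + ordered.size) % ordered.size
        return focused
    }
}
```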
Scene 4
Fig. 5A shows a schematic diagram of a user using the electronic device 100 in scenario 4. In this scenario, the electronic device 100 tracks the gaze point of the user's eyeball in the screen 110, the gaze point is the operation focus, the user can switch the operation focus by moving the gaze point, and the confirmation of the interactable element where the gaze point is located can be achieved through the keys 140.
When the electronic device 100 displays a user interface including interactable elements, the selection and switching of the operation focus can be achieved through the eye tracking and the keys 140. The interactable elements may include, but are not limited to, application icons, pictures in a desktop, options in a menu bar, various types of controls contained in an application interface, and the like.
Fig. 5B shows a user interface for determining the focus of an operation in a desktop by eye tracking and buttons 140.
As shown in fig. 5B, the electronic device 100 first detects a long-press operation on the key 140 (for example, a press lasting at least 3 seconds); in response, it activates the front camera and starts tracking the gaze point of the user's eyeball on the screen 110. As shown in fig. 5B, when the electronic device 100 tracks the gaze point to the camera application in the lower-left corner of the screen 110, it may highlight the interactable element at the gaze point (e.g., the icon of the camera application). The electronic device 100 then detects a pressing operation on the key 140 and, in response, may start the camera application represented by the interactable element at the current gaze point. After gaze tracking has started, gazing at the camera application icon and then pressing the key 140 corresponds to a finger tapping the icon; both trigger the electronic device to start the camera application.
Not limited to the desktop, in other user interfaces containing interactable elements, such as a menu bar with multiple options or an instant messaging interface with multiple chat boxes, the electronic device may also switch and confirm interactable elements through eye tracking and the key 140 in a manner similar to fig. 5B.
Instead of highlighting the interactable element at the gaze point as shown in fig. 5B, in other embodiments, the electronic device 100 may display an indicator (e.g. a circular pattern) at the gaze point without highlighting the interactable element at the gaze point, which may also have the effect of prompting the user about the gaze point.
Fig. 5C shows a user interface for zooming the map by eye-tracking and buttons 140.
As shown in fig. 5C, the electronic device 100 first detects a long-press operation on the key 140 (for example, a press lasting at least 3 seconds); in response, it activates the front camera and starts tracking the gaze point of the user's eyeball on the screen 110. As shown in fig. 5C, when the electronic device 100 tracks the gaze point to the upper-middle position of the screen 110, it may display an indicator (such as the circular pattern in fig. 5C) at the gaze point. The electronic device 100 then detects an upward sliding operation on the key 140 and may enlarge the map interface centered on the gaze point in response. Not limited to sliding up, the electronic device 100 may also detect a downward sliding operation on the key 140 after determining the gaze point and shrink the map interface in response. After gaze tracking has started, the up/down sliding operation on the key 140 shown in fig. 5C corresponds to a two-finger spread/pinch at the gaze point on the screen 110; both trigger the electronic device 100 to zoom the map centered on the gaze point.
Not limited to map interfaces, other scalable user interfaces, for example when the electronic device displays a picture, may also be scaled through eye tracking and the key 140 in a manner similar to fig. 5C.
Eye tracking need not be turned on by long-pressing the key 140 as shown in fig. 5B and fig. 5C; the electronic device 100 may also start eye tracking in response to multiple presses of the key 140, or in response to a voice command, which is not limited in this application. In some embodiments, the electronic device 100 may start eye tracking by default, without any user trigger.
After starting eye tracking, the electronic device 100 may also turn it off. Turning off eye tracking means turning off the front camera and stopping tracking the gaze point of the user's eyeball. The way of turning off eye tracking is not limited in this embodiment; for example, it may be turned off by long-pressing the key 140 or by a voice command.
An eye tracking algorithm that the electronic device 100 may use to determine the gaze point of the user's eyes on the screen 110 is described below. The electronic device 100 may collect an image containing the user's eyeball through the front camera (e.g., a depth camera), identify eyeball features in the image, and calculate the gaze point on the screen 110 in combination with 3D depth information. In some embodiments, the electronic device 100 may also acquire its own posture through a gyroscope, an acceleration sensor, and the like, and determine the gaze point in combination with the eyeball features, 3D depth information, and device posture. The eyeball features may include, for example, the position of the eye in the orbit, such as the positional relationship between the eye and the upper and lower eyelids and the left and right eye corners, and the size of the eye. The 3D depth information can reflect the distance between the user and the electronic device 100. The posture of the electronic device 100 reflects how the user holds it, such as the inclination of the screen 110, landscape or portrait orientation, and whether it is held upright or upside down.
In one embodiment, the eye tracking algorithm is a pre-trained neural network. Researchers may capture multiple groups of pictures containing human eyes with an electronic device in advance, collect the posture data of the device, and record the gaze point of the eyes on the screen. Eyeball features and 3D depth information are then extracted from the pictures, and the correspondences among the groups of eyeball features, 3D depth information, posture data, and gaze points are fed to the neural network for training. The trained network is then delivered to each electronic device of the same model.
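A hedged Kotlin sketch of this pipeline. The feature layout, the GazeModel interface standing in for the pre-trained on-device network, and the smoothing step are assumptions; the application does not specify them.

```kotlin
// Hypothetical gaze pipeline: eyeball features and 3D depth from the front camera,
// fused with device posture, fed to a pre-trained model that outputs screen coordinates.
data class EyeFeatures(val values: List<Float>)  // e.g., eye-corner/eyelid positions, eye size
data class DevicePose(val pitch: Float, val roll: Float, val yaw: Float)
data class GazePoint(val xPx: Int, val yPx: Int)

interface GazeModel { // stands in for the pre-trained neural network shipped on-device
    fun predict(features: EyeFeatures, depthMm: Float, pose: DevicePose): GazePoint
}

// Exponential smoothing over successive predictions to stabilize the reported point.
class GazeTracker(private val model: GazeModel, private val alpha: Float = 0.3f) {
    private var x = 0f
    private var y = 0f
    private var initialized = false

    fun update(features: EyeFeatures, depthMm: Float, pose: DevicePose): GazePoint {
        val p = model.predict(features, depthMm, pose)
        if (!initialized) {
            x = p.xPx.toFloat(); y = p.yPx.toFloat(); initialized = true
        } else {
            x += alpha * (p.xPx - x)
            y += alpha * (p.yPx - y)
        }
        return GazePoint(x.toInt(), y.toInt())
    }
}
```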
The eye tracking scheme of scene 4 spares the user the operation of locating a position on the display screen with a finger; the operation focus can be found more conveniently through eye gaze, providing a faster and more convenient human-machine interaction mode.
The eye tracking scheme of scene 4 relies on the front-facing camera of the electronic device 100, so it cannot be applied in scenes that also need the front-facing camera, such as front photographing, face unlocking, and face authentication. The core conflict is that the electronic device 100 cannot serve front photographing, face unlocking, face authentication, etc. and perform eye tracking at the same time. Therefore, in the embodiment of the present application, when entering a scene that also requires the front-facing camera, the electronic device 100 may prompt the user that the eye tracking scheme of scene 4 is currently unavailable. The prompt output by the electronic device 100 may take various forms, such as text on the display screen or a voice prompt, which is not limited herein.
Scene 5
In this scenario, some shortcuts are implemented through the key 140, such as activating the mute mode of the electronic device 100, activating airplane mode, recording audio, capturing the screen, launching the camera application, turning on the flashlight, adjusting the system volume, lighting the screen, locking the screen, and powering off.
For example, a light press on the key 140 may be used to activate the mute mode of the electronic device 100, a medium press may be used to start an audio recording, and a medium-heavy press may be used to launch the camera application.
For another example, a long press operation of more than 3 seconds on key 140 may be used to initiate a screen recording.
For another example, two pressing operations on the key 140 may be used for screen capturing.
For another example, a long press operation of more than 10 seconds on the key 140 may be used for shutdown.
Based on the several scenarios described above, the following describes optional embodiments of the key-based interaction method provided in this application in practical use.
Regarding conflicts
Some of the above scenes may conflict with one another.
For example, when the electronic device 100 displays a map interface whose content extends beyond the screen 110, the map interface can be moved by an up/down/forward/backward sliding operation on the key 140; at the same time the map interface is scalable, so it can also be zoomed by an up/down sliding operation on the key 140. The same operation then corresponds to two conflicting operation instructions. The embodiment of this application provides a solution: for a user interface that both extends beyond the screen 110 and is scalable, a customization scheme determines the operation instruction, and only one operation instruction is selected to correspond to that user interface. For example, for the map interface, if an up/down sliding operation is received on the key 140, the electronic device 100 only zooms the map interface in response and does not move it.
For another example, when the electronic device 100 displays a text editing page whose content extends beyond the screen 110, the page can be moved by an up/down sliding operation on the key 140; the page also contains a cursor, which can likewise be moved by an up/down sliding operation on the key 140, so the same operation again corresponds to two conflicting operation instructions. The embodiment of this application provides a solution: for a text editing page that both extends beyond the screen 110 and contains a cursor, a customization scheme determines the operation instruction, and only one operation instruction is selected to correspond to that page. For example, if an up/down sliding operation is received on the key 140, the electronic device 100 only moves the cursor in response and does not move the text editing page.
For another example, when the electronic device 100 displays a user interface whose content extends beyond the screen 110, the interface can be moved by an up/down/forward/backward sliding operation on the key 140; the interface also contains interactable elements, and the operation focus can be switched and selected by an up/down sliding operation on the key 140, so the same operation may again correspond to two conflicting operation instructions. The embodiment of this application provides a solution: for a user interface that both extends beyond the screen 110 and contains interactable elements, a trigger mechanism is added for one of the operation instructions, ensuring that the same operation corresponds to different operation instructions under different conditions. For example, when the electronic device displays a user interface whose content extends beyond the screen 110, the interface can be moved by an up/down/forward/backward sliding operation on the key 140; after the electronic device receives a double-click on the key 140, it enters an operation focus mode, determines an operation focus in the user interface, and switches and selects the operation focus according to operations on the key 140; after entering the operation focus mode, another double-click on the key 140 may be received, in response to which the operation focus mode is exited. The trigger mechanism or exit mechanism may take various forms: an operation on the key 140, an operation on the display screen, or a voice command, which is not limited herein.
In general, when a conflict between scenes occurs, that is, when the same operation corresponds to two conflicting operation instructions, either of the following approaches may be adopted: 1. a customization scheme under which only one type of operation instruction is recognized; 2. adding a trigger mechanism and an exit mechanism for one of the operation instructions. The electronic device may adopt either scheme to resolve a conflict, which is not specifically limited in this application. A minimal sketch of the trigger-mechanism approach follows.
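As an illustration of approach 2, the following Kotlin sketch shows a double-click toggling an operation focus mode, so that the same slide gesture resolves to different instructions depending on the mode; all names are illustrative assumptions.

```kotlin
// Sketch of the trigger/exit mechanism: the same slide maps to different
// instructions depending on whether operation-focus mode is active.

enum class SlideDirection { UP, DOWN, FORWARD, BACKWARD }

class KeyInstructionRouter {
    private var focusMode = false

    // A double-click on the key toggles (enters/exits) operation-focus mode.
    fun onDoubleClick() {
        focusMode = !focusMode
    }

    // Outside focus mode the slide scrolls the over-length page; inside it,
    // the same slide switches the highlighted operation focus.
    fun onSlide(direction: SlideDirection): String =
        if (focusMode) "SWITCH_FOCUS_$direction" else "SCROLL_$direction"
}
```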
Correspondence between operations on the key 140 and operation instructions
The several scenarios above illustrate, by way of example, how operations on the key 140 may correspond to operation instructions, and do not limit this application. In a specific implementation, the correspondence between operations on the key 140 and operation instructions may be set according to actual requirements.
Different operation instructions may correspond to different positions, durations, intensities, and counts of pressing operations on the key 140, and to different directions of sliding operations, among others. For example, a fourth operation on the key 140 triggers the electronic device 100 to perform a first function, and the first function differs when any one or more of the position, duration, intensity, and count of presses of the first key in the fourth operation differ.
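Purely for illustration, the scene 5 shortcuts could be resolved from these attributes as in the following Kotlin sketch; the thresholds mirror the examples above and are assumptions rather than fixed values.

```kotlin
// Sketch of resolving a key operation into a function using the scene-5
// examples: press count, duration and intensity select the shortcut.

data class KeyOperation(
    val durationMs: Long,
    val intensity: Float,   // normalized 0..1 from the pressure sensor
    val pressCount: Int
)

fun resolveShortcut(op: KeyOperation): String = when {
    op.pressCount == 2      -> "SCREENSHOT"        // two presses
    op.durationMs >= 10_000 -> "POWER_OFF"         // long press over 10 s
    op.durationMs >= 3_000  -> "SCREEN_RECORD"     // long press over 3 s
    op.intensity >= 0.66f   -> "LAUNCH_CAMERA"     // medium-heavy press
    op.intensity >= 0.33f   -> "START_RECORDING"   // medium press
    else                    -> "MUTE_MODE"         // light press
}
```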
In some embodiments, where multiple keys 140 are provided in the electronic device 100, different scenes and functions may be assigned to different keys 140.
For example, assuming that the electronic device 100 is provided with two keys 140, one on the left frame 130C and the other on the right frame 130D, the key 140 on the right frame 130D may be used to implement the switching of page content in scene 1 and some of the refined operations in scene 2, while the key 140 on the left frame 130C may be used to implement the switching of the operation focus in scene 3, the confirmation of interactable elements in scene 4, and some of the shortcuts in scene 5.
For another example, assume that the electronic device 100 is provided with two keys 140: one, referred to as the first key, may receive sliding operations in a first direction, and the other, the second key, may receive sliding operations in a second direction perpendicular to the first. The first direction may be the up-down direction and the second direction the front-back direction. Sliding operations in different directions can trigger different functions. For example, in the scenarios shown in fig. 4A and fig. 5B, an up-down slide on the first key may trigger the electronic device to switch the operation focus vertically, and a front-back slide on the second key may trigger it to switch the operation focus laterally. Likewise, in the cursor-adjustment scenario shown in fig. 3D, an up-down slide on the first key may move the cursor vertically and a front-back slide on the second key may move it laterally, as in the sketch below.
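The following Kotlin sketch shows the two-key cursor case; the key identifiers and step size are illustrative assumptions.

```kotlin
// Sketch: the first key's up/down slides move the cursor vertically and the
// second key's forward/back slides move it laterally.

data class Cursor(var x: Int, var y: Int)

const val FIRST_KEY = 1   // slides in the up-down direction
const val SECOND_KEY = 2  // slides in the front-back direction

fun moveCursor(cursor: Cursor, keyId: Int, positive: Boolean, step: Int = 1) {
    when (keyId) {
        FIRST_KEY  -> cursor.y += if (positive) -step else step  // up decreases y
        SECOND_KEY -> cursor.x += if (positive) step else -step  // forward increases x
    }
}
```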
When a plurality of keys 140 are provided on the electronic device 100, the keys 140 may also be used to resolve the conflicts between scenes mentioned above. For example, when the electronic device 100 displays a map interface that both extends beyond the screen 110 and is zoomable, the map interface can be moved by an up/down/forward/backward sliding operation on the key 140 in the left frame and zoomed by an up/down sliding operation on the key 140 in the right frame.
In some embodiments, the correspondence between operations on the key 140 and operation instructions may be exposed for user customization, so that users can conveniently configure it to their own needs. For example, a user may autonomously bind certain operations on the key 140 to particular shortcuts, such as setting a long press of more than 3 seconds on the key 140 to light up the screen.
Feedback on operations
In some embodiments of this application, the electronic device 100 may give vibration feedback through a motor after receiving a user operation on the key 140. In some embodiments, different vibration feedback may be given for different types of operations; for example, a press, a long press, and a slide on the key 140 may each produce distinct vibration feedback. Vibration feedback lets the user interact with the electronic device 100 more effectively.
The vibration feedback output by the electronic device 100 in response to a user operation on the key 140 may be generated by a motor under the key 140, which gives direct feedback to the finger touching the key 140; alternatively, it may be generated by a motor located elsewhere.
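On an Android-style system, per-operation vibration feedback could be produced with the platform Vibrator API, as in the following sketch; the effect durations are illustrative assumptions.

```kotlin
// Sketch of distinct vibration feedback per operation type using the
// standard Android Vibrator API (available since API level 26).

import android.os.VibrationEffect
import android.os.Vibrator

enum class KeyOperationType { PRESS, LONG_PRESS, SLIDE }

fun vibrateFor(vibrator: Vibrator, type: KeyOperationType) {
    val effect = when (type) {
        KeyOperationType.PRESS ->
            VibrationEffect.createOneShot(20, VibrationEffect.DEFAULT_AMPLITUDE)
        KeyOperationType.LONG_PRESS ->
            VibrationEffect.createOneShot(60, VibrationEffect.DEFAULT_AMPLITUDE)
        KeyOperationType.SLIDE ->  // short double pulse; -1 means do not repeat
            VibrationEffect.createWaveform(longArrayOf(0, 10, 30, 10), -1)
    }
    vibrator.vibrate(effect)
}
```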
Guidance on operations
When the key 140 is used for the first time, the electronic device 100 may also output usage instructions that teach the user how to manipulate the electronic device 100 through the key 140. The usage instructions may be visual elements displayed on the screen 110, voice instructions, or other forms.
Combined implementation for several scenarios
The above scenes 1 to 5 may be implemented in combination, and different situations within the same scene may also be combined, which is not limited in this application.
Fig. 6 is a flowchart of a key-based interaction method provided in the present application. The method is applied to the electronic equipment.
As shown in fig. 6, the method may include the steps of:
S101, the electronic device 100 receives a first operation acting on a first key, where the first key is a physical key provided on a frame of the electronic device 100, and the first operation includes a sliding operation on a first face of the first key.
The key 140 disposed on the frame of the electronic device 100 is the first key. The first key may be disposed on any one of the frames of the electronic device, such as the upper frame 130A, the lower frame 130B, the left frame 130C, or the right frame 130D. For example, if the first key is disposed on the right frame 130D, the first surface of the first key may be parallel to the plane on which the right frame 130D is located; if the first key is disposed on the left frame 130C, the first surface of the first key may be parallel to the plane of the left frame 130C.
The number of the first keys may be one or more.
The direction of the sliding operation on the first face of the first key included in the first operation is not limited; it may be, for example, upward, downward, forward, or backward.
S102, the electronic device 100 controls the interface element displayed in the electronic device 100 in response to the first operation.
The manner in which the electronic device 100 controls the displayed interface elements may include the following:
1. When the electronic device displays the first interface, it receives the first operation and, in response, switches the first interface to the second interface. This implements the switching of page content.
If there is more page content beyond the screen 110, the page content currently displayed by the electronic device 100 in the screen 110 is the first interface. Examples of the first interface may refer to picture 1 of any one of fig. 2A-2D, examples of the second interface may refer to picture 2 of any one of fig. 2A-2D, and examples of the first operation may refer to the relevant descriptions of fig. 2A-2D in scenario 1 above.
2. The electronic device controls the position or size of the interface element in the third interface.
When the third interface is different, the manner in which the electronic device controls the position or the size of the interface element in the third interface is also different, which may specifically include the following cases:
(1) When the third interface is a video playing interface, the electronic equipment adjusts any one of a video playing progress bar, a screen brightness bar or a video volume bar in the video playing interface. Examples of third interfaces may refer to the user interface shown in fig. 3A, and examples of first operations may include the related description of fig. 3A.
(2) When the third interface is an audio playing interface, the electronic equipment adjusts any one of an audio playing progress bar and an audio volume bar.
(3) When the third interface includes the first scalable element, the first scalable element includes a map or a picture or a video, and the electronic device zooms in or zooms out on the first scalable element. Examples of the third interface may include the user interface shown in fig. 3B, and examples of the first operation may include the associated description of fig. 3B.
(4) When the third interface is a shooting preview interface, the electronic device adjusts any one of the selected focal length, the selected aperture, or the selected shooting mode. Examples of the third interface may include the user interface shown in fig. 3C, and examples of the first operation may include the associated description of fig. 3C.
(5) When the third interface is a text editing interface, the electronic equipment adjusts the position of a cursor in the text editing interface. Examples of the third interface may include the user interface shown in fig. 3D, and examples of the first operation may include the associated description of fig. 3D.
(6) When the third interface includes a prompt box of the first notification message, the first operation includes a sliding operation of pressing the first key and on the first surface of the first key, and the control of the interface element in the third interface includes: and displaying a floating window, wherein the floating window displays the detailed content of the first notification message, or stops displaying the prompt box of the first notification message.
Referring to fig. 3E, the third interface may be the user interface shown in fig. 3E, and the electronic device displays the floating window when the first operation includes a downward sliding operation of pressing the first key and on the first face of the first key.
Referring to fig. 3G, the third interface may be the user interface shown in fig. 3G, and when the first operation includes an upward sliding operation of pressing the first key and on the first face of the first key, the electronic device displays a prompt box of the first notification message.
3. The electronic device first displays a fourth interface that includes a first interactive element and a second interactive element; receives a second operation acting on the first key; and, in response to the second operation, highlights the first interactive element in the second interface. The electronic device then receives the first operation and, in response, stops highlighting the first interactive element and highlights the second interactive element.
In some embodiments, after the electronic device highlights the second interactive element, a pressing operation of the first key may also be received, and in response to the pressing operation, a function characterized by the second interactive element is performed.
Mode 3 can be used to implement the selection and switching of the operation focus.
The second operation is an operation for triggering the electronic device to enter the focus mode, and may be, for example, a double-click on the first key.
An example of the fourth interface may include the user interface shown in fig. 4A, an example of the first interactive element may include the memo icon in fig. 4A, and an example of the second interactive element may include the camera application icon in fig. 4A.
Alternatively, examples of the fourth interface may include the user interface shown in fig. 4B, examples of the first interactive element may include the option of the do-not-disturb mode in fig. 4B, and examples of the second interactive element may include the option of the personal mode in fig. 4B.
In some implementations, the electronic device 100 may also output vibration feedback in response to the first operation.
After S102, the method shown in fig. 6 may further include any of the following steps (not shown in fig. 6):
S103, the electronic device displays a prompt box of a second notification message; receives a pressing operation on the first key; and displays a user interface of a first application, showing the detailed content of the second notification message in that user interface, where the first application is the provider of the second notification message.
Step S103 may refer to fig. 3F and the related operations. An example of the prompt box of the second notification message may be the notification prompt box shown in fig. 3F.
S104, the electronic equipment displays a fifth interface; receiving a third operation acting on the first key; responding to the third operation, starting the front camera, and acquiring a first image containing human eyes through the front camera; displaying an indicator on the fifth interface, or highlighting a third interaction element in the fifth interface, wherein the indicator and the third interaction element are both positioned at the gaze point of the human eye in the screen, and the gaze point of the human eye in the screen is determined according to the first image; receiving a pressing operation of a first key; and executing the function of the third interactive element characterization.
The third operation is used for triggering the electronic device to start eye tracking, and may be, for example, a long-press operation acting on the first key.
An example of a fifth interface may include the user interface shown in fig. 5B, and an example of a third interactive element may include an icon of the camera application in fig. 5B.
In some embodiments, the electronic device may further initiate a sensor to collect data in response to a third operation, the sensor collected data being used to determine a gaze point of the human eye in the screen.
S105, the electronic device displays a sixth interface, wherein the sixth interface comprises a second scalable element, and the second scalable element comprises a map or a picture or a video; receiving a third operation acting on the first key; responding to the third operation, starting the front camera, and acquiring a second image containing human eyes through the front camera; displaying an indicator on a sixth interface, wherein the indicator is positioned at a point of gaze of a human eye in the screen, and the point of gaze of the human eye in the screen is determined according to the second image; receiving a sliding operation on a first face of a first key; the second scalable element is zoomed in or zoomed out centered on the gaze point.
The third operation is used for triggering the electronic device to start eye tracking, and may be, for example, a long-press operation acting on the first key.
An example of a sixth interface may include the user interface shown in fig. 5C and an example of a second scalable element may include the map in fig. 5C.
In some embodiments, the electronic device may further initiate a sensor to collect data in response to a third operation, the sensor collected data being used to determine a gaze point of the human eye in the screen.
S103 and S104 may be implemented in combination, and S103 and S105 may be implemented in combination.
The electronic device provided by the embodiment of the application is described below.
Fig. 7 shows a schematic hardware structure of the electronic device 100 according to the embodiment of the present application. The electronic device 100 is configured to perform the key-based interaction method provided in the previous embodiments.
The electronic device 100 may include a processor 101, an internal memory 102, a wireless communication module 103, a mobile communication module 104, an antenna 103A, an antenna 104A, a power switch 105, a sensor module 106, a motor 107, a camera 108, a display screen 109, keys 110, and the like. The sensor module 106 may include, among others, a gyroscope sensor 106A, an acceleration sensor 106B, an ambient light sensor 106C, an image sensor 106D, a distance sensor 106E, a pressure sensor 106F, and the like. The wireless communication module 103 may include a WLAN communication module, a Bluetooth communication module, and the like. These components may transmit data over a bus.
The processor 101 may include one or more processing units, such as: the processor 101 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 101 for storing instructions and data. In some embodiments, the memory in the processor 101 is a cache. The memory may hold instructions or data that the processor 101 has just used or uses cyclically. If the processor 101 needs to reuse the instructions or data, it can call them directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 101, and thus improves system efficiency.
The wireless communication function of the electronic device 100 can be realized by an antenna 103A, an antenna 104A, a mobile communication module 104, a wireless communication module 103, a modem processor, a baseband processor, and the like.
The electronic device 100 implements display functions through the GPU, the display screen 109, the application processor, and the like. The GPU is a microprocessor for image processing, connected to the display screen 109 and the application processor, and is used to perform mathematical and geometric calculations for graphics rendering. The processor 101 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 109 is used to display images, videos, and the like. The display screen 109 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device may include 1 or N display screens 109, N being a positive integer greater than 1.
The display screen 109 may include the aforementioned screen 110.
The electronic device 100 may implement a photographing function through an ISP, a camera 108, a video codec, a GPU, a display 109, an application processor, and the like. The camera 108 may include the front-facing camera mentioned previously, which may be a depth camera.
The internal memory 102 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (NVM).
The random access memory may include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random-access memory (DDR SDRAM; for example, fifth-generation DDR SDRAM is commonly referred to as DDR5 SDRAM), etc.; the non-volatile memory may include disk storage devices and flash memory.
The gyro sensor 106A may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 106A. The gyro sensor 106A may be used for photographing anti-shake: for example, when the shutter is pressed, the gyro sensor 106A detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate for based on that angle, and lets the lens counteract the shake of the electronic device 100 through reverse motion. The gyro sensor 106A may also be used in navigation and somatosensory gaming scenarios.
The acceleration sensor 106B may detect the magnitude of the acceleration of the electronic device 100 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It may also be used to recognize the posture of the electronic device, for applications such as landscape/portrait switching and pedometers.
The distance sensor 106E may be used to measure distance. The electronic device 100 may measure the distance by infrared or laser. In some shooting scenarios, the electronic device 100 may range using the distance sensor 106E to achieve fast focus.
The pressure sensor 106F senses a pressure signal and may convert it into an electrical signal. In some embodiments, the pressure sensor 106F may be disposed below the display screen 109 and the keys 110. Pressure sensors come in many types, such as resistive pressure sensors, electromagnetic pressure sensors (e.g., inductive, Hall, and eddy-current pressure sensors), and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of conductive material; when a force acts on the pressure sensor 106F, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from that change. When a pressing or sliding operation acts on the key 110, the electronic device 100 determines the intensity, position, duration, and count of the pressing operation, or the sliding direction of the sliding operation, based on the pressure signal detected by the pressure sensor 106F, as in the sketch below.
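For illustration, operation recognition from the pressure signal could look like the following Kotlin sketch; the sample format and the travel threshold are assumptions.

```kotlin
// Sketch of deriving a press or slide from pressure-sensor samples taken
// along the key. Position is measured along the key's long axis (mm).

data class PressureSample(val positionMm: Float, val pressure: Float, val timeMs: Long)

sealed class KeyEvent {
    data class Press(val durationMs: Long, val peakPressure: Float) : KeyEvent()
    data class Slide(val forward: Boolean) : KeyEvent()
}

fun classify(samples: List<PressureSample>): KeyEvent? {
    if (samples.size < 2) return null
    val travel = samples.last().positionMm - samples.first().positionMm
    return if (kotlin.math.abs(travel) > 2.0f) {
        KeyEvent.Slide(forward = travel > 0)          // finger moved along the key
    } else {
        val duration = samples.last().timeMs - samples.first().timeMs
        KeyEvent.Press(duration, samples.maxOf { it.pressure })
    }
}
```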
The keys 110 are physical keys, and may include the aforementioned keys 140, and may also include the aforementioned power key 150. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 107 may generate vibration alerts. There may be more than one motor 107. The motor 107 may be used for incoming-call vibration alerts as well as touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects, and touching different areas of the display screen 109 may also correspond to different vibration feedback effects from the motor 107. Different application scenarios (such as time reminders, receiving messages, alarm clocks, games) may likewise correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The motor 107 may be disposed below the keys 110, or at other positions of the electronic device 100; motors disposed elsewhere in the electronic device may be referred to as whole-device motors. After the electronic device 100 detects a user operation on the keys 110, a vibration prompt may be generated by the motor 107 under the keys 110 or by a whole-device motor. Different types of user operations on the key 140 may correspond to different vibration prompts.
In the present application:
The internal memory 102 may be used to store a computer program implementing the key-based interaction method provided herein.
The processor 101 is configured to read the computer program in the internal memory 102 and call the corresponding modules in the electronic device to execute the key-based interaction method. Specifically, the processor 101 may determine the operation acting on the key 140 based on the pressure signal acquired by the pressure sensor disposed below the key 110, generate the operation instruction corresponding to that operation, and instruct the corresponding module to respond to the instruction. The processor 101 may also determine the gaze point of the user's eyeball on the display screen based on the image captured by the front camera and the device pose data captured by the corresponding sensors.
The gyro sensor 106A and the acceleration sensor 106B may be used to collect gesture data of the electronic device 100, and the distance sensor 106E may be used to detect a distance between the electronic device 100 and the user.
The display 109 may be used to display various types of user interfaces provided by the previous embodiments of the present application, such as various user interfaces referred to in scenario 1-scenario 5.
For the operations performed by the various components of the electronic device 100, reference may also be made to the descriptions in the foregoing method embodiments.
The structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic apparatus 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The present embodiment exemplifies the software structure of the electronic device 100 by taking a mobile operating system with a hierarchical architecture as an example.
Fig. 8 is a software configuration block diagram of the electronic device 100 of the embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the mobile operating system is divided into four layers, from top to bottom: the application layer, the application framework layer/core services layer, the system libraries and runtime, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 8, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 8, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, an event manager, and the like.
The window manager is used to manage window programs. The window manager can acquire the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and so on.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is for providing communication functions of the electronic device. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages that automatically disappear after a short stay without requiring user interaction; for example, the notification manager announces completed downloads, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system top status bar, such as a notification of an application running in the background, or in the form of a dialog window on the screen. For example, text may be prompted in the status bar, an alert tone may sound, the electronic device may vibrate, or an indicator light may blink.
The event manager is used to manage interaction events between the user and the electronic device 100, such as interaction events between the user and the keys 140, interaction events between the user and the display 109, voice commands input by the user, etc.
The runtime may refer to all code libraries, frameworks, etc. that are needed by the program to run. For example, for the C language, the runtime includes a series of libraries of functions that are required for the C program to run. For the Java language, the runtime includes virtual machines and the like required for running Java programs, in addition to core libraries. The core library may include function functions that the Java language needs to call.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The workflow of electronic device software and hardware is illustrated below in connection with various scenarios of the present application.
For scenario 1-scenario 3 and scenario 5, the workflow of the electronic device is as follows:
1. The user inputs an operation on the key 140 (e.g., a press or a slide); the pressure sensor 106F under the key 140 detects a pressure signal and sends it to the kernel layer.
2. The kernel layer encapsulates the user operation on the key 140 into an input event, which may include the touch position on the key 140, the touch duration, the touch count, the detected pressure value, a timestamp, and so on.
3. The event manager of the application framework layer obtains the input event from the kernel layer, identifies the operation instruction corresponding to the input event, and sends the instruction to the relevant application in the application layer. A sketch of this dispatch path follows the scenario examples below.
Taking scenario 1 as an example, the operation on the key 140 is a sliding operation of up/down/forward/backward, the event manager sends an operation instruction to the foreground application, triggering the foreground application to switch page contents.
Taking scene 2 as an example, the event manager sends an operation instruction to the foreground application, triggering it to perform certain refined operations, such as adjusting the volume, playback progress, or screen brightness of a video in a video playing scene; adjusting the cursor position in a text editing scene; adjusting the focal length, aperture, or shooting mode in a shooting scene; zooming the map when viewing a map; zooming the picture when viewing a picture; and executing various operations on a notification message when one is displayed.
Taking scenario 3 as an example, the event manager sends an operation instruction to the foreground application, triggering it to respond to the operation focus selected by the user. For example, referring to fig. 4A, when the icon of the camera application on the desktop is selected as the operation focus, if a pressing operation on the key 140 is received, the event manager sends the input event encapsulating that operation to the camera application; the camera application invokes an interface of the application framework layer to start, which in turn starts the camera driver through the kernel layer and captures a still image or video through the camera 108.
Taking scenario 5 as an example, the event manager sends an operation instruction to the application to be triggered, and triggers the application to execute the shortcut. For example, if the key 140 receives a long press operation exceeding 3 seconds, the event manager may send a screen recording instruction to the screen recording application, triggering the screen recording application to start the screen recording.
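The dispatch path of steps 1 to 3 can be sketched as follows; the event fields and instruction names are illustrative assumptions.

```kotlin
// Sketch of the event-manager dispatch: a kernel-level input event is
// mapped to an operation instruction and delivered to the foreground app.

data class KernelInputEvent(
    val positionMm: Float,
    val durationMs: Long,
    val pressCount: Int,
    val pressureValue: Float,
    val timestampMs: Long
)

fun interface ForegroundApp {
    fun handle(instruction: String)
}

class EventManager(private val foreground: ForegroundApp) {
    fun onInputEvent(event: KernelInputEvent) {
        val instruction = when {
            event.pressCount == 2     -> "ENTER_FOCUS_MODE"    // scene 3 trigger
            event.durationMs >= 3_000 -> "START_SCREEN_RECORD" // scene 5 shortcut
            else                      -> "SWITCH_PAGE"         // scene 1 slide
        }
        foreground.handle(instruction)
    }
}
```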
For scenario 4, the workflow of the electronic device is as follows:
1. The electronic device 100 starts the front camera and related sensors, and determines the gaze point of the human eye on the display screen 109 based on the image collected by the front camera and the data collected by the sensors.
2. The electronic device 100 highlights the interactable element at the gaze point, or displays an indicator at the gaze point position on the display screen 109.
3. The user inputs an operation on the key 140 (e.g., a press or a slide); the pressure sensor 106F under the key 140 detects a pressure signal and sends it to the kernel layer.
4. The kernel layer encapsulates the user operation on the key 140 into an input event, which may include the touch position on the key 140, the touch duration, the touch count, the detected pressure value, a timestamp, and so on.
5. The event manager of the application framework layer obtains the input event from the kernel layer, identifies the operation instruction corresponding to the input event, and sends the instruction to the relevant application in the application layer.
For example, referring to fig. 5C, after the electronic device 100 displays a map interface and a gaze point indicator is displayed in the map interface, if the key 140 receives an upward sliding operation, the event manager transmits an input event encapsulated by the operation to the map application, and the map application enlarges the map interface on the display screen.
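The fig. 5C interaction just described can be sketched as follows; the viewport model and zoom factors are illustrative assumptions.

```kotlin
// Sketch of scene-4 zooming: re-center the map on the tracked gaze point
// and scale it according to the slide direction on the key.

data class MapViewport(var centerX: Float, var centerY: Float, var scale: Float)

fun zoomAtGaze(view: MapViewport, gazeX: Float, gazeY: Float, slideUp: Boolean) {
    view.centerX = gazeX                       // zoom around the gaze point
    view.centerY = gazeY
    view.scale *= if (slideUp) 1.25f else 0.8f // up enlarges, down shrinks
}
```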
It should be understood that the steps in the above-described method embodiments may be accomplished by integrated logic circuitry in hardware in a processor or instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor or in a combination of hardware and software modules in a processor.
The application also provides an electronic device, which may include: memory and a processor. Wherein the memory is operable to store a computer program; the processor may be configured to invoke the computer program in the memory to cause the electronic device to perform the method performed by the electronic device side in any of the embodiments described above.
The present application also provides a chip system, which includes at least one processor, for implementing the functions related to the electronic device side in any of the foregoing embodiments.
In one possible design, the system on a chip further includes a memory to hold program instructions and data, the memory being located either within the processor or external to the processor.
The chip system may be formed of a chip or may include a chip and other discrete devices.
Alternatively, the processor in the system-on-chip may be one or more. The processor may be implemented in hardware or in software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented in software, the processor may be a general purpose processor, implemented by reading software code stored in a memory.
Alternatively, there may be one or more memories in the chip system. The memory may be integrated with the processor or disposed separately from the processor, which is not limited in the embodiments of this application. For example, the memory may be a non-transitory memory, such as a ROM, which may be integrated on the same chip as the processor or disposed on a different chip; the embodiments of this application do not specifically limit the type of memory or the manner in which the memory and the processor are arranged.
Illustratively, the system-on-chip may be a field programmable gate array (field programmable gate array, FPGA), an application specific integrated chip (application specific integrated circuit, ASIC), a system on chip (SoC), a central processing unit (central processor unit, CPU), a network processor (network processor, NP), a digital signal processing circuit (digital signal processor, DSP), a microcontroller (micro controller unit, MCU), a programmable controller (programmable logic device, PLD) or other integrated chip.
The present application also provides a computer program product comprising: a computer program (which may also be referred to as code, or instructions), which when executed, causes a computer to perform the method performed by the electronic device in any of the embodiments described above.
The present application also provides a computer-readable storage medium storing a computer program (which may also be referred to as code, or instructions). The computer program, when executed, causes a computer to perform the method performed by the electronic device in any of the embodiments described above.
In the description of the embodiments of this application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. The text "and/or" merely describes an association between associated objects, indicating that three relations may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone. In addition, in the description of the embodiments of this application, "plural" means two or more.
The terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as implying or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature, and in the description of embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
The embodiments of the present application may be arbitrarily combined to achieve different technical effects.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that all or part of the processes of the above method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.
In summary, the foregoing are merely specific embodiments of this application and are not intended to limit its protection scope. Any modification, equivalent replacement, or improvement made within the disclosure of this application shall fall within its protection scope.

Claims (18)

1. A key-based interaction method, applied to an electronic device, wherein a frame of the electronic device is provided with a first key and the first key is a physical key, the method comprising:
receiving a first operation acting on a first key, the first operation comprising a sliding operation on a first face of the first key;
and responding to the first operation, and controlling the interface element displayed by the electronic equipment.
2. The method according to claim 1, wherein the method further comprises:
displaying a first interface, and controlling interface elements displayed by the electronic equipment comprises: switching the first interface to a second interface;
or,
displaying a third interface, and controlling interface elements displayed by the electronic equipment includes: and controlling the position or the size of the interface element in the third interface.
3. The method according to claim 2, wherein:
when the third interface is a video playing interface, the control of the interface elements in the video playing interface comprises any one of the following steps: adjusting a video playing progress bar, a screen brightness bar and a video volume bar;
when the third interface is an audio playing interface, the control of the interface elements in the audio playing interface comprises any one of the following steps: adjusting an audio playing progress bar and an audio volume bar;
when the third interface includes a first scalable element, the first scalable element includes a map or a picture or a video, and the controlling of the interface element in the third interface includes: zooming in or out of the first scalable element;
when the third interface is a shooting preview interface, the control of the interface elements in the shooting preview interface comprises any one of the following: adjusting the selected focal length, adjusting the selected aperture, and adjusting the selected shooting mode;
when the third interface is a text editing interface, the control of the interface elements in the text editing interface comprises adjusting the position of a cursor in the text editing interface;
when the third interface includes a prompt box of a first notification message, the first operation includes a sliding operation of pressing the first key and on a first surface of the first key, and the control of the interface element in the third interface includes: and displaying a floating window, wherein the floating window displays the detailed content of the first notification message, or stops displaying the prompt box of the first notification message.
4. The method of claim 1, wherein prior to receiving the first operation on the first key, the method further comprises:
displaying a fourth interface, wherein the fourth interface comprises a first interaction element and a second interaction element;
receiving a second operation acting on the first key;
highlighting the first interactive element in the second interface in response to the second operation;
controlling an interface element displayed by the electronic device, including: and stopping highlighting the first interactive element and highlighting the second interactive element.
5. The method of claim 4, wherein after highlighting the second interactive element, the method further comprises:
receiving a pressing operation of the first key;
and executing the function of the second interactive element characterization.
6. The method according to any one of claims 1-5, further comprising:
displaying a prompt box of the second notification message;
receiving a pressing operation of the first key;
displaying a user interface of a first application, and displaying the detailed content of the second notification message in the user interface of the first application, wherein the first application is a provider of the second notification message.
7. The method according to any one of claims 1-6, further comprising:
displaying a fifth interface;
receiving a third operation acting on the first key;
responding to the third operation, starting a front camera, and acquiring a first image containing human eyes through the front camera;
displaying an indicator on the fifth interface, or highlighting a third interaction element in the fifth interface, wherein the indicator and the third interaction element are both positioned at the point of gaze of the human eye in the screen, and the point of gaze of the human eye in the screen is determined according to the first image;
receiving a pressing operation of the first key;
and executing the function of the third interactive element characterization.
8. The method according to any one of claims 1-6, further comprising:
displaying a sixth interface, the sixth interface comprising a second scalable element, the second scalable element comprising a map or picture or video;
receiving a third operation acting on the first key;
responding to the third operation, starting a front camera, and acquiring a second image containing human eyes through the front camera;
displaying an indicator on the sixth interface, wherein the indicator is positioned at the point of gaze of the human eye in the screen, and the point of gaze of the human eye in the screen is determined according to the second image;
receiving a sliding operation on a first face of the first key;
the second scalable element is zoomed in or zoomed out centered on the gaze point.
9. The method according to claim 7 or 8, characterized in that the method further comprises:
and responding to the third operation, starting a sensor to collect data, wherein the data collected by the sensor are used for determining the fixation point of the human eyes in a screen.
10. The method according to any one of claims 1-9, wherein the method further comprises:
receiving a fourth operation acting on the first key, wherein the fourth operation comprises a pressing operation of the first key;
executing a first function in response to the fourth operation;
and when any one or more of the position, the duration, the intensity and the times of pressing the first key in the fourth operation are different, the first function is different.
11. The method according to any one of claims 1-10, further comprising:
and outputting vibration feedback in response to the first operation.
12. The method of any of claims 1-11, wherein the first key is disposed on a first bezel of the electronic device, a first face of the first key being parallel to the first bezel.
13. The method according to any one of claims 1-12, wherein a pressure sensor is arranged below the first key, the pressure sensor being configured to detect operations acting on the first key.
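
(Illustrative note, not part of the claims: claim 13 places a pressure sensor under the key; one way its samples could be turned into the slide and press operations used in the other claims is sketched below. The sample format and thresholds are assumptions.)

```kotlin
import kotlin.math.abs

// Hypothetical sample from the pressure sensor under the key: contact
// position along the key face plus the force currently applied.
data class KeySample(val positionMm: Float, val force: Float, val timeMs: Long)

sealed class KeyGesture {
    data class Slide(val deltaMm: Float) : KeyGesture()
    object Press : KeyGesture()
}

// Classify a completed contact from its samples: large travel along the
// key reads as a slide, high peak force with little travel as a press.
// Thresholds are illustrative assumptions.
fun classify(samples: List<KeySample>): KeyGesture? {
    if (samples.isEmpty()) return null
    val travel = samples.last().positionMm - samples.first().positionMm
    val peakForce = samples.maxOf { it.force }
    return when {
        abs(travel) > 4f -> KeyGesture.Slide(travel)
        peakForce > 2.5f -> KeyGesture.Press
        else -> null
    }
}
```
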
14. The method of any one of claims 1-13, wherein the number of first keys is a plurality.
15. The method of any one of claims 1-13, wherein the electronic device is further provided with a second key, the second key being a physical key;
the first key is configured to receive a sliding operation in a first direction, the second key is configured to receive a sliding operation in a second direction, and the first direction is perpendicular to the second direction.
16. The method of claim 15, wherein the second key is arranged on a frame of the electronic device, or the second key is arranged on the back of the electronic device, the back of the electronic device being opposite to the display surface of the electronic device.
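
(Illustrative note, not part of the claims: with the two perpendicular keys of claims 15 and 16, one key supplies movement along one axis and the other along the perpendicular axis, which together allow 2D traversal of a grid of elements. A sketch follows; which key drives which axis is an assumption, since the claims only require the two slide directions to be perpendicular.)

```kotlin
// Sketch: combine the two perpendicular slide axes into 2D focus
// movement over a grid of interactive elements.
class GridFocus(private val cols: Int, private val rows: Int) {
    var col = 0
        private set
    var row = 0
        private set

    fun onFirstKeySlide(step: Int) {   // first key: first direction
        col = (col + step).coerceIn(0, cols - 1)
    }

    fun onSecondKeySlide(step: Int) {  // second key: perpendicular direction
        row = (row + step).coerceIn(0, rows - 1)
    }
}
```
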
17. An electronic device, comprising: a first key, a memory, and one or more processors; wherein the first key is coupled to the one or more processors, the memory is configured to store computer program code, the computer program code comprises computer instructions, and the one or more processors invoke the computer instructions to cause the electronic device to perform the method of any one of claims 1-16.
18. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1-16.
CN202311132090.5A, filed 2023-08-31: Interaction method based on keys and electronic equipment (published as CN117687549A; status: Pending)

Priority Applications (1)

Application Number: CN202311132090.5A
Priority Date: 2023-08-31
Filing Date: 2023-08-31
Title: Interaction method based on keys and electronic equipment

Publications (1)

Publication Number: CN117687549A
Publication Date: 2024-03-12

Family

ID=90125226

Family Applications (1)

Application Number: CN202311132090.5A
Title: Interaction method based on keys and electronic equipment
Priority Date: 2023-08-31
Filing Date: 2023-08-31
Status: Pending (published as CN117687549A)

Country Status (1)

Country: CN
Publication: CN117687549A


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination