WO2023222130A9 - Display method and electronic device - Google Patents

Display method and electronic device

Info

Publication number
WO2023222130A9
Authority
WO
WIPO (PCT)
Prior art keywords
interface
image
user
terminal
area
Prior art date
Application number
PCT/CN2023/095396
Other languages
French (fr)
Chinese (zh)
Other versions
WO2023222130A1 (en)
Inventor
邸皓轩
李丹洪
王春晖
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co., Ltd. (荣耀终端有限公司)
Publication of WO2023222130A1
Publication of WO2023222130A9


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/193: Preprocessing; Feature extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/197: Matching; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01: Indexing scheme relating to G06F3/01
    • G06F2203/011: Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Definitions

  • the present application relates to the field of terminals, and in particular, to a display method and electronic device.
  • the embodiment of the present application provides a display method.
  • the terminal device can detect the area where the user is looking at the screen, and then display an interface corresponding to the area. In this way, users can quickly obtain information in the above interface without touch operations.
  • this application provides a display method, which is applied to an electronic device.
  • the electronic device includes a screen, and the screen of the electronic device includes a first preset area.
  • the method includes: displaying a first interface; while displaying the first interface, collecting a first image; determining the user's first eyeball gaze area based on the first image, where the first eyeball gaze area indicates the screen area the user is looking at when gazing at the screen; and, when the first eyeball gaze area is within the first preset area, displaying a second interface.
  • the electronic device can collect images used to determine the user's eye gaze area when displaying an interface.
  • the electronic device can display an interface associated with the area. In this way, the user can quickly control the electronic device to display a certain interface through gaze operations, thereby quickly obtaining services or information provided by the interface.
  • the screen of the electronic device includes a second preset area, and the second preset area is different from the first preset area.
  • the method further includes: determining the user's second eyeball gaze area based on the first image, where the position of the second eyeball gaze area on the screen is different from the position of the first eyeball gaze area on the screen; when the second eyeball gaze area is within the second preset area, displaying a third interface, where the third interface is different from the second interface.
  • the electronic device can divide the screen into multiple preset areas. One area can correspond to one interface. When the electronic device detects which area the user is looking at, it can display an interface corresponding to the area. In this way, the user can quickly control the electronic device to display different interfaces by looking at different screen areas.
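As a minimal sketch of this area-to-interface mapping (all names and coordinates here are illustrative assumptions, not taken from the patent):

```python
# Hypothetical sketch: divide the screen into preset areas, associate each
# area with an interface, and resolve which interface a gaze point selects.

PRESET_AREAS = {
    # area name: (left, top, right, bottom) in screen pixels (assumed values)
    "upper_right": (540, 0, 1080, 400),  # e.g. associated with a payment interface
    "upper_left": (0, 0, 540, 400),      # e.g. associated with a health-code interface
}

AREA_TO_INTERFACE = {
    "upper_right": "payment_interface",
    "upper_left": "health_code_interface",
}

def interface_for_gaze(x, y):
    """Return the interface associated with the preset area containing the
    gaze point (x, y), or None if it falls outside every preset area."""
    for name, (left, top, right, bottom) in PRESET_AREAS.items():
        if left <= x < right and top <= y < bottom:
            return AREA_TO_INTERFACE[name]
    return None
```

A gaze point landing in a preset area selects that area's interface; a gaze anywhere else leaves the current display unchanged.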
  • the second interface and the third interface are interfaces provided by the same application, or the second interface and the third interface are interfaces provided by different applications.
  • the method further includes: displaying a fourth interface; while displaying the fourth interface, collecting a second image; determining the user's third eyeball gaze area based on the second image; and, when the third eyeball gaze area is within the first preset area, displaying a fifth interface, where the fifth interface is different from the second interface.
  • depending on the interface being displayed, the interface associated with a given screen area of the electronic device may also differ.
  • for example, when one interface is displayed, the interface associated with the upper right corner area of the electronic device may be the payment interface;
  • when another interface is displayed, the interface associated with the upper right corner area may be the ride code interface.
  • displaying the second interface specifically includes: when the first eyeball gaze area is within the first preset area and the duration of gazing at the first preset area reaches a first duration, displaying the second interface.
  • the electronic device can also monitor the user's gaze duration when detecting the user's eyeball gaze area. When the gaze duration meets the preset duration, the electronic device can display the corresponding interface.
  • the method further includes: when the first eyeball gaze area is within the first preset area and the duration of gazing at the first preset area reaches a second duration, displaying a sixth interface.
  • the electronic device can also associate a screen area with multiple interfaces, and determine which interface is specifically displayed based on the user's gaze duration.
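The duration-based selection above can be sketched as follows (the thresholds and interface names are illustrative assumptions, not values from the patent):

```python
# Hypothetical sketch: one preset area is associated with several interfaces,
# and the gaze duration decides which one is shown.

DURATION_RULES = [
    # (minimum gaze seconds, interface shown), sorted by threshold descending
    (3.0, "sixth_interface"),   # longer gaze -> the alternate interface
    (1.0, "second_interface"),  # shorter gaze -> the default interface
]

def interface_for_duration(gaze_seconds):
    """Pick the interface whose largest satisfied duration threshold applies,
    or None if the gaze was too short to trigger anything."""
    for threshold, interface in DURATION_RULES:
        if gaze_seconds >= threshold:
            return interface
    return None
```

Because the rules are checked from the largest threshold down, a long gaze selects the alternate interface rather than both.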
  • the first eyeball gaze area is a cursor point formed by one display unit on the screen, or a cursor point or cursor area formed by multiple display units on the screen.
  • the second interface is a non-private interface.
  • the method further includes: displaying an interface to be unlocked; while displaying the interface to be unlocked, collecting a third image; determining the user's fourth eyeball gaze area based on the third image; and, when the fourth eyeball gaze area is within the first preset area, displaying the second interface.
  • the third interface is a privacy interface, and the method further includes: when the fourth eyeball gaze area is within the second preset area, not displaying the third interface.
  • the electronic device can also set the privacy type of the associated interface.
  • when the associated interface is a non-private interface, in the lock-screen state, after recognizing that the user is gazing at the screen area corresponding to the non-private interface, the electronic device can directly display the above-mentioned non-private interface without unlocking. In this way, users can reach the non-private interface more quickly.
  • when the associated interface is a privacy interface, in the lock-screen state, after recognizing that the user is gazing at the screen area corresponding to the privacy interface, the electronic device may refrain from displaying the above-mentioned privacy interface. In this way, the electronic device can avoid privacy leaks while providing users with fast services, improving the user experience.
  • when both the second interface and the third interface are privacy interfaces, the electronic device does not enable the camera to acquire images while displaying the interface to be unlocked.
  • in other words, the electronic device can leave the camera off while the screen is locked, thereby saving power consumption.
  • a first control is displayed in the second preset area of the first interface, and the first control is used to indicate that the second preset area is associated with the third interface.
  • the electronic device can display prompt controls in the preset area of the associated interface during the process of detecting the user's eye gaze area.
  • This prompt control can be used to indicate to the user the interface associated with this area, as well as the services or information that the interface can provide.
  • users can intuitively see whether each area has an associated interface, and what services or information each associated interface can provide.
  • the user can decide which preset area to focus on and which associated interface to open.
  • the first control is not displayed in the second preset area of the interface to be unlocked.
  • when the interface associated with a certain preset area is a privacy interface, the electronic device does not display the prompt control indicating that privacy interface in the lock-screen state, which prevents the user from gazing at that preset area to no effect.
  • the first control is any one of the following: a thumbnail of the first interface, an icon of the application corresponding to the first interface, or an icon indicating a function or service that the first interface provides.
  • the duration for which the electronic device collects images is a first preset duration; the electronic device collecting the first image specifically means: the electronic device collects the first image within the first preset duration.
  • the terminal device will not detect the user's eyeball gaze area all the time, but will detect it within a preset period of time to save power consumption and avoid camera abuse from affecting user information security.
  • the first preset duration is the first 3 seconds of displaying the first interface.
  • the terminal device can detect the user's eyeball gaze area during the first 3 seconds of displaying the first interface, and determine whether the user is gazing at a preset area of the screen. This meets user needs in most scenarios while reducing power consumption as much as possible.
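The time-limited capture described above can be sketched as a small state machine (the class, method names, and injectable clock are illustrative assumptions for demonstration):

```python
import time

# Minimal sketch: restrict eye-gaze image collection to a preset window
# after an interface is shown (the patent uses the first 3 seconds).
GAZE_RECOGNITION_WINDOW_S = 3.0

class GazeCapture:
    def __init__(self, now=time.monotonic):
        self._now = now          # injectable monotonic clock, useful for testing
        self._shown_at = None    # when the interface was displayed

    def on_interface_shown(self):
        """Called when the interface appears; starts the recognition window."""
        self._shown_at = self._now()

    def should_capture(self):
        """Camera frames are only collected inside the recognition window;
        afterwards the camera can be turned off to save power."""
        if self._shown_at is None:
            return False
        return (self._now() - self._shown_at) <= GAZE_RECOGNITION_WINDOW_S
```

A monotonic clock is used so the window is unaffected by wall-clock adjustments.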
  • the electronic device collects the first image through a camera module.
  • the camera module includes: at least one 2D camera and at least one 3D camera.
  • the 2D camera is used to acquire two-dimensional images; the 3D camera is used to acquire images including depth information. The first image includes both a two-dimensional image and an image including depth information.
  • the camera module of the terminal device may include multiple cameras, and the multiple cameras include at least one 2D camera and at least one 3D camera.
  • the terminal device can obtain two-dimensional images and three-dimensional images indicating the gaze position of the user's eyeballs.
  • the combination of two-dimensional images and three-dimensional images can help improve the precision and accuracy with which the terminal device identifies the user's eyeball gaze area.
  • the first image acquired by the camera module is stored in the secure data buffer.
  • the method further includes: obtaining the first image from the secure data buffer in a trusted execution environment.
  • the terminal device can store the images collected by the camera module in the secure data buffer.
  • the image data stored in the secure data buffer can only be transmitted to the eye gaze recognition algorithm through the secure transmission channel provided by the security service, thereby improving the security of the image data.
  • the secure data buffer is set at the hardware layer of the electronic device.
  • determining the user's first eyeball gaze area based on the first image specifically includes: determining feature data of the first image, where the feature data includes one or more of a left eye image, a right eye image, a face image, and face grid data; and using an eyeball gaze recognition model to determine the first eyeball gaze area indicated by the feature data, where the eyeball gaze recognition model is established based on a convolutional neural network.
  • the terminal device can obtain the two-dimensional and three-dimensional images collected by the camera module, and from them obtain left eye images, right eye images, face images, and face mesh data respectively, so as to extract more features and improve recognition precision and accuracy.
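Assembling these model inputs can be sketched as below (a pure-Python illustration; the crop boxes, grid resolution, and function names are assumptions, not the patent's implementation):

```python
# Hypothetical sketch of assembling the feature inputs named above:
# left-eye crop, right-eye crop, face crop, and a binary "face grid"
# marking where the face sits within the camera frame.

GRID_SIZE = 25  # assumed face-grid resolution

def crop(frame, box):
    """frame is a list of pixel rows; box is (top, left, bottom, right)."""
    top, left, bottom, right = box
    return [row[left:right] for row in frame[top:bottom]]

def face_grid(frame_h, frame_w, face_box, grid=GRID_SIZE):
    """Binary grid: cells overlapped by the face bounding box are set to 1."""
    top, left, bottom, right = face_box
    g = [[0] * grid for _ in range(grid)]
    r0, r1 = top * grid // frame_h, -(-bottom * grid // frame_h)  # floor, ceil
    c0, c1 = left * grid // frame_w, -(-right * grid // frame_w)
    for r in range(r0, r1):
        for c in range(c0, c1):
            g[r][c] = 1
    return g

def extract_features(frame, face_box, left_eye_box, right_eye_box):
    """Assemble the four inputs the gaze-recognition model consumes."""
    h, w = len(frame), len(frame[0])
    return {
        "face": crop(frame, face_box),
        "left_eye": crop(frame, left_eye_box),
        "right_eye": crop(frame, right_eye_box),
        "face_grid": face_grid(h, w, face_box),
    }
```

The face grid gives the model coarse information about where the face is in the frame, which complements the tightly cropped eye and face images.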
  • determining the feature data of the first image specifically includes: performing face correction on the first image to obtain a face-corrected first image; and determining the feature data of the first image based on the face-corrected first image.
  • the terminal device can perform face correction on the images collected by the camera module to improve the accuracy of the left eye image, right eye image, and face image.
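One common face-correction step is rotating the image so the line through the two eye centers is horizontal; only the angle computation is sketched here (the landmark representation is an assumption, and applying the rotation would use an image library's affine warp):

```python
import math

# Hypothetical sketch: compute the roll angle of a tilted face from the
# two eye-center landmarks, so the image can be rotated upright before
# the eye and face crops are taken.

def roll_angle_deg(left_eye, right_eye):
    """Tilt angle in degrees of the line through the eye centers (x, y);
    rotating the image by the negative of this angle levels the eyes."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

Leveling the eyes before cropping makes the eye regions consistent across head poses, which is the stated goal of the correction step.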
  • the first interface is any one of the first desktop, the second desktop, or the negative screen; the fourth interface is any one of the first desktop, the second desktop, or the negative screen, and is different from the first interface.
  • main interfaces of the terminal device such as the first desktop, the second desktop, and the negative screen can each be configured with screen preset areas and their associated interfaces, and different main interfaces can reuse the same screen preset area.
  • the association between the first preset area and the second interface and the fifth interface is set by the user.
  • the user can, through an interface provided by the electronic device, set the interfaces associated with the different screen preset areas of each main interface, so as to meet personalized needs.
  • the present application provides an electronic device, which includes one or more processors and one or more memories; the one or more memories are coupled to the one or more processors and are used to store computer program code, and the computer program code includes computer instructions.
  • embodiments of the present application provide a chip system, which is applied to an electronic device. The chip system includes one or more processors, and the processor is used to call computer instructions to cause the electronic device to execute the method described in the first aspect and any possible implementation manner of the first aspect.
  • the present application provides a computer-readable storage medium, including instructions. When the instructions are run on an electronic device, they cause the electronic device to execute the method described in the first aspect and any possible implementation manner of the first aspect.
  • the present application provides a computer program product containing instructions. When the computer program product is run on an electronic device, it causes the electronic device to execute the method described in the first aspect and any possible implementation manner of the first aspect.
  • the electronic device provided by the second aspect, the chip system provided by the third aspect, the computer storage medium provided by the fourth aspect, and the computer program product provided by the fifth aspect are all used to execute the method provided by this application. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods, which will not be described again here.
  • Figure 1 is a schematic diagram of an eyeball gaze position provided by an embodiment of the present application.
  • Figures 2A-2I are a set of user interfaces provided by embodiments of the present application.
  • Figures 3A-3E are a set of user interfaces provided by embodiments of the present application.
  • Figures 4A-4D are a set of user interfaces provided by embodiments of the present application.
  • Figures 5A-5M are a set of user interfaces provided by embodiments of the present application.
  • Figures 6A-6I are a set of user interfaces provided by embodiments of the present application.
  • Figures 7A-7C are a set of user interfaces provided by embodiments of the present application.
  • Figure 8 is a flow chart of a display method provided by an embodiment of the present application.
  • Figure 9 is a schematic structural diagram of an eye gaze recognition model provided by an embodiment of the present application.
  • Figure 10 is a flow chart of a face correction method provided by an embodiment of the present application.
  • Figures 11A-11C are schematic diagrams of a set of face correction methods provided by embodiments of the present application.
  • Figure 12 is a structural diagram of a convolutional network of an eye gaze recognition model provided by an embodiment of the present application.
  • Figure 13 is a schematic diagram of a separable convolution technology provided by an embodiment of the present application.
  • Figure 14 is a schematic system structure diagram of the terminal 100 provided by the embodiment of the present application.
  • Figure 15 is a schematic diagram of the hardware structure of the terminal 100 provided by the embodiment of the present application.
  • the embodiment of the present application provides a display method. This method can be applied to terminal devices such as mobile phones and tablet computers. Terminal devices such as mobile phones and tablet computers that implement the above method can be recorded as terminal 100. In subsequent embodiments, the terminal 100 will be used to refer to the above-mentioned terminal devices such as mobile phones and tablet computers.
  • the terminal 100 can also be a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or smart city equipment.
  • the terminal 100 can display a shortcut window in the main interface after unlocking.
  • Applications frequently used by users can be displayed in the shortcut window, such as the icon, main interface, or common interface of the application.
  • the above-mentioned common interfaces refer to the pages that users frequently open.
  • the terminal 100 may detect the user's eyeball gaze position. When it is detected that the user's eyeball gaze position is within the above-mentioned shortcut window area, the terminal 100 may display the main interface or common interface of the application program displayed in the shortcut window.
  • the layer where the above shortcut window is located is above the layer of the main interface. Therefore, the content displayed in the shortcut window will not be obscured.
  • the above-mentioned user's eyeball gaze position refers to the position where the user's line of sight focuses on the screen of the terminal 100 when the user gazes at the terminal 100 .
  • a cursor point S may be displayed on the screen of the terminal 100.
  • the position where the user's sight focuses on the screen shown in Figure 1 is the cursor point S, that is, the position where the user's eyeballs focus is the cursor point S.
  • the cursor point S can be anywhere on the screen.
  • Figure 1 also shows the shortcut window W.
  • the terminal 100 can determine that the user's eyeball gaze position is within the shortcut window area W, that is, the user is looking at the shortcut window.
  • shortcut windows can also be divided into privacy categories and non-privacy categories.
  • Shortcut windows marked as private can only be displayed on the main interface after the unlock is successful.
  • Non-privacy shortcut windows can also be displayed on the interface to be unlocked before the unlock is successful.
  • the terminal 100 may display the main interface or common interface of the application program. Whether a shortcut window is private depends on the privacy requirements of the information displayed in the window.
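The privacy rule above can be sketched as a small decision function (the function and return values are illustrative assumptions, not the patent's implementation):

```python
# Hypothetical sketch: a shortcut window marked private only responds to
# gaze after unlocking succeeds; a non-private one also responds on the
# interface to be unlocked.

def gaze_result(window_is_private, device_unlocked):
    """What the terminal does when the user gazes at a shortcut window."""
    if window_is_private and not device_unlocked:
        return "ignore"          # do not reveal private content on the lock screen
    return "open_interface"      # display the associated common interface
```

The same check also governs whether the window's prompt control is shown on the lock screen, so users are not invited to gaze at an area that would be ignored.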
  • users can quickly open frequently used applications and common interfaces in commonly used applications, thereby saving user operations and improving user convenience.
  • the user can control whether the terminal 100 opens the above-mentioned common applications or common interfaces through the eye gaze position, further saving user operations.
  • the user controls the terminal 100 to perform a certain action through the eye gaze position, which provides the user with a new interactive control method and improves the user's experience.
  • the following is a detailed introduction to user scenarios in which the terminal 100 implements the above interaction method based on eye gaze recognition.
  • FIG. 2A exemplarily shows a user diagram of the terminal 100 in a screen-off state.
  • When the user is not using the terminal 100, the terminal 100 may be in a screen-off state. As shown in FIG. 2A, when the terminal 100 is in the screen-off state, the display of the terminal 100 sleeps and shows a black screen, but other devices and programs work normally. In other embodiments, when the user is not using the terminal 100, the terminal 100 may also be in the AOD (Always on Display) state.
  • the AOD (Always on Display) state refers to controlling part of the screen to light up without lighting up the entire screen, that is, lighting up part of the screen on the basis of the screen-off state.
  • the terminal 100 can light up the entire screen and display the interface to be unlocked as shown in Figure 2B.
  • the time and date can be displayed on the interface to be unlocked for the user to view.
  • the terminal 100 detects the user operation of waking up the mobile phone, including but not limited to: the user's operation of picking up the mobile phone, the user's operation of waking up the mobile phone through the voice assistant, etc. This embodiment of the present application does not limit this.
  • the terminal 100 can enable the camera to collect and generate image frames.
  • the image frame may include an image of the user's face. Then, the terminal 100 can use the above image frame to perform facial recognition and determine whether the facial image in the above image frame is the facial image of the owner, that is, determine whether the user performing the unlocking operation is the owner himself.
  • the terminal 100 may be provided with a camera module 210 .
  • the camera module 210 of the terminal 100 includes at least a 2D camera and a 3D camera.
  • 2D cameras refer to cameras that generate two-dimensional images, such as cameras commonly used on mobile phones that generate RGB images.
  • the above-mentioned 3D camera refers to a camera that can generate a three-dimensional image or a camera that can generate an image including depth information, such as a TOF camera.
  • the images generated by 3D cameras also include depth information, that is, the distance information between the object being photographed and the 3D camera.
  • the camera module 210 may also include multiple 2D cameras and multiple 3D cameras, which is not limited in the embodiments of the present application.
  • the camera used by the terminal 100 may be one of the cameras in the above-mentioned camera module 210 .
  • this camera is the 3D camera in the camera module 210.
  • the terminal 100 may display the user interface shown in Figures 2C-2D.
  • the terminal 100 may display the unlocking success interface shown in FIG. 2C.
  • the interface may display an icon 211.
  • the icon 211 can be used to prompt the user that the face unlock is successful.
  • the terminal 100 may display the user interface shown in FIG. 2D. This interface may be called the main interface of the terminal 100 .
  • the unlocking success interface shown in Figure 2C is optional. After confirming that the unlocking is successful, the terminal 100 may also directly display the main interface shown in Figure 2D.
  • the terminal 100 can also adopt password unlocking (graphic password, digital password), fingerprint unlocking and other unlocking methods. After the unlocking is successful, the terminal 100 can also display the main interface shown in Figure 2D.
  • the main interface may include a notification bar 221, a page indicator 222, a frequently used application icon tray 223, and other application icon trays 224.
  • the notification bar may include one or more signal strength indicators of mobile communication signals (also known as cellular signals), such as signal strength indicator 221A and signal strength indicator 221B, a wireless fidelity (Wi-Fi) signal strength indicator 221C, a battery status indicator 221D, and a time indicator 221E.
  • the page indicator 222 may be used to indicate the positional relationship of the currently displayed page to other pages.
  • the main interface of the terminal 100 may include multiple pages.
  • the interface shown in Figure 2D may be one of the above-mentioned multiple pages.
  • the main interface of the terminal 100 also includes other pages. This other page is not shown in Figure 2D.
  • the terminal 100 may display the above other pages, that is, switch pages.
  • the page indicator 222 will also change to different forms to indicate different pages. Subsequent embodiments will be introduced in detail.
  • the frequently used application icon tray 223 may include multiple common application icons (such as a camera application icon, an address book application icon, a phone application icon, and an information application icon), and the frequently used application icons remain displayed when the page is switched.
  • the above common application icons are optional and are not limited in this embodiment of the present application.
  • the other application icon tray 224 may include a plurality of general application icons, such as a settings application icon, an application market application icon, a gallery application icon, a browser application icon, etc.
  • General application icons may be distributed in other application icon trays 224 on multiple pages of the main interface.
  • the general application icons displayed in the other application icon tray 224 will be changed accordingly when the page is switched.
  • the icon of an application can be a general application icon or a commonly used application icon. When the above icon is placed in the common application icon tray 223, the above icon is a common application icon; when the above icon is placed in the other application icon tray 224, the above icon is a general application icon.
  • FIG. 2D only illustrates a main interface or a page of a main interface of the terminal 100, and should not be construed as limiting the embodiments of the present application.
  • the terminal 100 can also display a shortcut window 225 and a shortcut window 226 on top of the layer of the above-mentioned main interface.
  • the shortcut window 225 can display a thumbnail of the payment interface.
  • the shortcut window 226 can display a thumbnail of the health code interface. It can be understood that, in order to show more vividly how the terminal 100 displays the main interface and the shortcut windows in separate layers, this display process is shown in Figure 2D and Figure 2E respectively in the drawings of this application. However, from the user's perspective, after the unlocking is successful, the interface the user actually sees is the one shown in Figure 2E (the layer of the main interface and the layer of the shortcut windows displayed at the same time). It can be understood that the main interface may display more or fewer shortcut windows, such as 3, 4, or 1, and the embodiment of the present application does not limit this.
  • the first application program may be installed on the terminal 100 .
  • the first application can provide payment services to users.
  • the terminal 100 may display a payment interface.
  • the payment interface can include payment codes, such as payment QR codes, payment barcodes, etc. Users can complete the payment task by showing the above payment interface.
  • opening the first application refers to setting the first application as the foreground application.
  • the shortcut window 225 may display a thumbnail of the payment interface to prompt the user about the applications and commonly used interfaces associated with the shortcut window 225 .
  • a second application program can also be installed on the terminal 100 .
  • the terminal 100 may display the health code interface.
  • the health code reflecting the user's health status can be displayed on the health code interface. Users can complete the health check by showing the above health code interface.
  • the shortcut window 226 may display a thumbnail of the above health code interface.
  • the above payment interface can be called a common interface of the first application.
  • the above health code interface can be called a common interface of the second application.
  • the terminal 100 can collect the user's facial image through the camera module 210 .
  • the number of cameras used by the terminal 100 is two, including a 2D camera and a 3D camera. Of course, it is not limited to one 2D camera and one 3D camera.
  • the terminal 100 can also use more cameras to obtain more facial features of the user, especially eye features, so as to determine the user's eyeball gaze position more quickly and accurately.
  • if the 3D camera of the terminal 100 is already turned on, the terminal 100 only needs to turn on the 2D camera of the camera module 210.
  • if the cameras of the terminal 100 are turned off, the terminal 100 needs to turn on both the 2D camera and the 3D camera in the camera module 210.
  • the time when the terminal 100 collects the user's facial image through the camera module 210 (2D camera, 3D camera) can be recorded as the gaze recognition time.
  • the gaze recognition time is the first 3 seconds after the main interface is displayed following successful unlocking. After 3 seconds, the terminal 100 can turn off the camera module 210 to save power. If the gaze recognition time is set too short, for example 1 second, the terminal 100 may collect too few image frames containing the user's facial image, which may lead to inaccurate eye gaze recognition results. Moreover, it is difficult for the user to focus on a shortcut window within 1 second of the main interface being displayed. Setting the gaze recognition time too long, such as 7 or 10 seconds, results in excessive power consumption. Of course, the time is not limited to 3 seconds.
  • the gaze recognition time can also be set to other values, such as 2.5 seconds, 3.5 seconds, 4 seconds, etc., which are not limited in the embodiments of the present application. Subsequent introductions will take 3 seconds as an example.
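The gaze-recognition window described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `CameraModule` class, function names, and the explicit time arguments are assumptions; only the ~3-second duration comes from the text.

```python
# Illustrative sketch: keep the camera on only during the gaze recognition
# time, then turn it off to save power. All names here are assumptions.
GAZE_RECOGNITION_SECONDS = 3.0  # preferred duration from the description

class CameraModule:
    def __init__(self):
        self.active = False

    def turn_on(self):
        self.active = True

    def turn_off(self):
        self.active = False

def run_gaze_recognition(camera, start_time, now):
    """Return whether the camera is active at time `now` (seconds)."""
    if now - start_time < GAZE_RECOGNITION_SECONDS:
        camera.turn_on()
    else:
        camera.turn_off()  # gaze window elapsed: stop collecting frames
    return camera.active
```

In a real system the timestamps would come from a clock; passing them explicitly keeps the sketch deterministic.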
  • the terminal 100 may also display the shortcut window only within the gaze recognition time.
  • the camera module 210 is turned off, that is, when the terminal 100 no longer detects the user's eyeball gaze position, the terminal 100 no longer displays the shortcut window to avoid blocking the main interface for a long time and affecting the user experience.
  • the camera module 210 may continuously collect and generate image frames including the user's facial image.
  • the above image frames include two-dimensional images collected by the 2D camera and three-dimensional images collected by the 3D camera.
  • the terminal 100 can identify the user's eyeball gaze position and determine whether the user is gazing at the shortcut window 225 or the shortcut window 226 .
  • the terminal 100 may determine that the user is looking at the shortcut window 225 based on the collected image frames. In response to detecting the user action of the user looking at the shortcut window 225, the terminal 100 may open the first application program and display the payment interface corresponding to the shortcut window 225, see FIG. 2G. As shown in Figure 2G, the payment interface displays a payment QR code 231 and related information for providing payment services to users.
  • the terminal 100 may also determine that the user is looking at the shortcut window 226 based on the collected image frames. In response to detecting the user action of gazing at the shortcut window 226, the terminal 100 may open the second application and display the health code interface corresponding to the shortcut window 226, see FIG. 2I. As shown in Figure 2I, the health code interface displays the health code 232 required for the health check and its related information, so that the user can quickly complete the health check.
  • the terminal 100 can also display different interfaces by detecting the user's gaze on a certain area for different lengths of time. For example, referring to FIG. 2D , after entering the main interface, the terminal 100 may detect whether the user is looking at the upper right corner area of the screen. After detecting that the user is gazing at the upper right corner area for a first period of time, such as 2 seconds, the terminal 100 may display the shortcut window 225 . If it is detected that the user is still looking at the upper right corner area, and after reaching the second duration, for example, 3 seconds, the terminal 100 may switch the shortcut window 225 displayed in the upper right corner area to the shortcut window 226.
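The duration-based switching described above (gaze for a first duration to show shortcut window 225, keep gazing to a second duration to switch to shortcut window 226) can be sketched as a simple threshold function. The 2-second and 3-second values mirror the example; the function and window names are illustrative assumptions.

```python
# Illustrative sketch of duration-based window switching; names assumed.
FIRST_DURATION = 2.0   # seconds of gazing before window 225 appears
SECOND_DURATION = 3.0  # seconds of gazing before it switches to window 226

def window_for_gaze_duration(seconds_gazing):
    """Map how long the user has gazed at an area to the window to show."""
    if seconds_gazing >= SECOND_DURATION:
        return "shortcut_window_226"
    if seconds_gazing >= FIRST_DURATION:
        return "shortcut_window_225"
    return None  # not gazed long enough: show nothing
```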
  • the terminal 100 can detect the user's touch operation, blink control operation, or head-turn control operation on the above-mentioned window to determine whether to display the interface corresponding to the shortcut window 225 or the shortcut window 226.
  • the user can immediately obtain the commonly used interfaces of commonly used applications after opening the terminal 100, thereby quickly obtaining the services and information provided by those interfaces, such as the payment service provided by the above-mentioned payment interface, and the health code and related information provided by the health code interface.
  • the user can control the terminal 100 to display the interface corresponding to a shortcut window by looking at the shortcut window, without performing touch operations such as clicking, double-clicking, or long-pressing, thus avoiding the problem that the user cannot control the terminal 100 when the user's hands are occupied, providing convenience to users.
  • FIG. 3A exemplarily shows a main interface including multiple pages. Among them, each page can be called the main interface.
  • the main interface may include page 30, page 31, and page 32.
  • Page 30 can be called negative one screen.
  • Page 31 may be called the first desktop.
  • Page 32 may be called the second desktop.
  • the page layout of the second desktop is the same as that of the first desktop, which will not be described again here.
  • the number of desktops in the main interface can be increased or reduced according to the user's settings. Only the first desktop, the second desktop, etc. are shown in FIG. 3A.
  • the main interface displayed by the terminal 100 is actually the first desktop in the main interface shown in FIG. 3A .
  • the terminal 100 first displays the first desktop.
  • the terminal 100 may display the negative screen, the first desktop or the second desktop.
  • which one of the negative screen, the first desktop, or the second desktop the terminal 100 displays depends on the page the user stayed on when last exiting.
  • the terminal 100 may first display the second desktop or the negative screen, and display the shortcut window 225 and the shortcut window 226 on the layer where the second desktop or the negative screen is located; see Figures 3B and 3C.
  • the terminal 100 can also collect the user's facial image through the camera module 210 and identify whether the user is looking at the shortcut window 225 or the shortcut window 226.
  • the terminal 100 may also display the payment interface shown in FIG. 2G for the user to obtain the payment service provided by the first application.
  • the terminal 100 may also display the health code interface shown in FIG. 2I, so that the user can obtain the health code 232 and related information provided by the second application and quickly complete the health check.
  • the terminal 100 may also use smaller icons to replace the above shortcut window.
  • the terminal 100 may display icons 311 and 312.
  • the icon 311 may correspond to the aforementioned shortcut window 225
  • the icon 312 may correspond to the aforementioned shortcut window 226.
  • the terminal 100 may display a payment interface that provides payment services or a health code interface that displays a health code for the user to use.
  • the above-mentioned icons 311 and 312 not only serve as prompts, but also reduce the obstruction of the main interface, thereby improving the user experience.
  • the terminal 100 may also display icons of application programs installed on the terminal 100, such as the application icon 321 and the application icon 322 shown in FIG. 3E.
  • the above-mentioned applications are applications frequently used by users. After detecting the user's gaze action, the terminal 100 can open the application program, thereby providing the user with a service to quickly open the above-mentioned application program without requiring the user to perform a touch operation.
  • the terminal 100 can collect the user's facial image, identify whether the user is looking at the shortcut window, and then determine whether to display the commonly used interface corresponding to the shortcut window, so that the user can quickly and conveniently obtain the information in commonly used interfaces.
  • eye gaze recognition is turned off, the terminal 100 will not recognize whether the user is gazing at the shortcut window, and thus will not display the common interface corresponding to the shortcut window.
  • Figures 4A-4D exemplarily illustrate a set of user interfaces for enabling or disabling the eye gaze recognition function.
  • FIG. 4A exemplarily shows the setting interface on the terminal 100.
  • Multiple setting options may be displayed on the setting interface, such as account setting options 411, WLAN options 412, Bluetooth options 413, mobile network options 414, etc.
  • the setting interface also includes auxiliary function options 415. Accessibility option 415 can be used to set some shortcut operations.
  • Terminal 100 may detect user operations on accessibility options 415 .
  • the terminal 100 may display the user interface shown in FIG. 4B, which is referred to as the auxiliary function setting interface.
  • the interface may display multiple accessibility options, such as accessibility options 421, one-handed mode options 422, and so on.
  • the auxiliary function setting interface also includes quick start and gesture options 423. Quick start and gesture options 423 can be used to set some gesture actions and eye gaze actions to control interaction.
  • the terminal 100 can detect user operations on the quick launch and gesture options 423 .
  • the terminal 100 may display the user interface shown in FIG. 4C , which is denoted as the quick startup and gesture setting interface.
  • the interface can display multiple quick launch and gesture setting options, such as smart voice option 431, screenshot option 432, screen recording option 433, and quick call option 434.
  • the quick start and gesture setting interface also includes an eye gaze option 435.
  • the eye gaze option 435 can be used to set the area for eye gaze recognition and corresponding shortcut operations.
  • Terminal 100 may detect user operations on eye gaze option 435 .
  • the terminal 100 may display the user interface shown in FIG. 4D , which is referred to as the eye gaze recognition setting interface.
  • the interface can display multiple function options based on eye gaze recognition, such as payment code option 442 and health code option 443.
  • the payment code option 442 can be used to turn on or off the function of eye gaze control to display the payment code.
  • the terminal 100 can display the shortcut window 225 associated with the payment interface.
  • the terminal 100 can also confirm, based on the collected image frames containing the user's facial image, whether the user is looking at the shortcut window 225.
  • the terminal 100 can display the payment interface corresponding to the shortcut window 225 and obtain the payment code. In this way, users can quickly and easily obtain the payment code and complete the payment behavior, thus avoiding a large number of tedious user operations and obtaining a better user experience.
  • the health code option 443 can be used to turn on or off the function of eye gaze control to display the health code.
  • the terminal 100 can display the shortcut window 226 associated with the health code interface.
  • the terminal 100 can also confirm, based on the collected image frames containing the user's facial image, whether the user is looking at the shortcut window 226.
  • the terminal 100 may display a health code interface including the health code and related information. In this way, users can quickly and easily obtain health codes and complete health checks, thereby avoiding a large number of tedious user operations.
  • the eye gaze recognition setting interface shown in FIG. 4D may also include other shortcut function options based on eye gaze, such as notification bar option 444.
  • the terminal 100 may detect whether the user is looking at the notification bar area at the top of the screen. When detecting that the user looks at the notification bar, the terminal 100 may display a notification interface for the user to check notification messages.
  • users can customize the display area of the shortcut window according to their own usage habits and the layout of the main interface, so as to minimize the impact of the shortcut window on the main interface of the terminal 100.
  • the eye gaze recognition setting interface may also be shown in Figure 5A.
  • the terminal 100 may detect a user operation on the payment code option 442, and in response to the above operation, the terminal 100 may display the user interface (payment code setting interface) shown in FIG. 5B.
  • the interface may include buttons 511 and area selection controls 512 .
  • Button 511 can be used to turn on ("ON") or turn off ("OFF") the function of eye gaze control to display the payment code.
  • the area selection control 512 can be used to set the display area of the payment code shortcut window 225 on the screen.
  • the area selection control 512 may include a control 5121, a control 5122, a control 5123, and a control 5124.
  • the payment code shortcut window 225 is displayed in the upper right corner area of the screen, corresponding to the display area shown by control 5122.
  • the icon 5125 (selected icon) may be displayed in the control 5122, indicating the display area (upper right corner area) of the current payment code shortcut window 225 on the screen.
  • the icon 5126 (occupied icon) may be displayed in the control corresponding to the display area.
  • the display area shown in the control 5123 may correspond to the health code shortcut window 226. Therefore, the occupied icon may be displayed in the control 5123, indicating that the area in the lower left corner of the screen corresponding to the control 5123 is occupied and can no longer be used to set the payment code shortcut window 225.
  • the terminal 100 may detect a user operation on the control 5121. In response to the above operation, the terminal 100 may display a selection icon in the control 5121 to indicate the display area (upper left corner area) of the currently selected shortcut window 225 associated with the payment code on the screen. At this time, referring to Figure 5D, when successful unlocking is detected and the main interface is displayed, the shortcut window 225 corresponding to the payment code may be displayed in the upper left corner area above the layer of the main interface.
  • the terminal 100 can also set the display area of the health code shortcut window 226 according to user operations, which will not be described again here.
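The area-selection behavior in Figures 5B-5D (each corner hosts at most one shortcut window, and a corner used by another window shows the "occupied" icon) can be sketched as a small assignment table. The area names, window names, and function are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch of the corner-assignment logic; names assumed.
AREAS = ("top_left", "top_right", "bottom_left", "bottom_right")

def assign_area(assignments, window, area):
    """Try to place `window` in `area`.

    `assignments` maps area -> window. Returns True on success; returns
    False when the area is occupied by a different window (the UI would
    show the occupied icon 5126 in that case).
    """
    if area not in AREAS:
        raise ValueError(f"unknown area: {area}")
    owner = assignments.get(area)
    if owner is not None and owner != window:
        return False  # occupied by another shortcut window
    # release the window's previous area, then claim the new one
    for a, w in list(assignments.items()):
        if w == window:
            del assignments[a]
    assignments[area] = window
    return True
```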
  • the eye gaze recognition setting interface may also include a control 445.
  • Control 445 can be used to add more shortcut windows, thereby providing users with more services for quickly opening commonly used applications and/or commonly used interfaces.
  • terminal 100 may detect user operations on control 445.
  • the terminal 100 may display the user interface (add shortcut window interface) shown in FIG. 5F.
  • the interface may include multiple shortcut window options, such as option 521, option 522, and so on.
  • Option 521 may be used to set a shortcut window 227 associated with the health check record.
  • the terminal 100 may display an interface (third interface) including the user's health detection record.
  • the health detection record shortcut window can be referred to Figure 5G.
  • Option 522 can be used to set a shortcut window associated with the electronic ID card. This shortcut window can be associated with the interface that displays the user's electronic ID card, thereby providing a service for quickly opening the interface, which will not be described again here.
  • Terminal 100 may detect user operations on option 521. In response to the above operation, the terminal 100 may display the user interface (health detection record setting interface) shown in FIG. 5H. As shown in Figure 5H, button 531 can be used to open the shortcut window 227 associated with the health test record. Page controls (control 5321, control 5322, control 5323) can be used to set the display page of the shortcut window 227.
  • Page control 5321 can be used to indicate the negative screen of the main interface.
  • Page control 5322 may be used to indicate the first desktop of the main interface.
  • Page control 5323 may be used to indicate a second desktop of the main interface.
  • the shortcut window 227 can be set on the first desktop (when all four display areas of the first desktop are not occupied).
  • the page control 5322 may also display a check mark to indicate that the shortcut window 227 is currently set in the first desktop.
  • the shortcut window 227 can be set in the lower right corner area of the first desktop, corresponding to the area selection control 5334.
  • the terminal 100 may detect a user operation on the return control 534, and in response to the above operation, the terminal 100 may display an eye gaze recognition setting interface, see FIG. 5I.
  • the interface also includes a health monitoring record option 446, which corresponds to the function of controlling eye gaze to display health monitoring records.
  • the terminal 100 can also display the shortcut window 227 associated with the health monitoring record on top of the layer of the main interface.
  • referring to the shortcut window 227 in Figure 5J, based on the image frames containing the user's facial image collected during the gaze recognition time, the terminal 100 can identify the user's eyeball gaze position and determine whether the user is gazing at the shortcut window 227. In response to detecting the user's action of gazing at the shortcut window 227, the terminal 100 may display the third interface corresponding to the shortcut window 227 that displays the health detection record.
  • the terminal 100 can also change the display page and display area of the above-mentioned shortcut window 227 according to user operations to meet the user's personalized display needs, better fit the user's usage habits, and improve the user's usage experience.
  • the terminal 100 may detect a user operation acting on the page control 5323.
  • the "display area" corresponds to four display areas including the upper left corner and the upper right corner of the second desktop.
  • the terminal 100 may detect a user operation on the area selection control 5333.
  • the terminal 100 may determine to display the shortcut window 227 in the lower left corner area of the second desktop.
  • the terminal 100 can display the payment code shortcut window 225 and the health code shortcut window 226 on the layer of the first desktop; refer to Figure 5M.
  • the terminal 100 can also be on top of the layer of the second desktop within the preset gaze recognition time. Display shortcut window 227.
  • the terminal 100 can display the corresponding shortcut windows belonging to the page according to the page displayed after unlocking.
  • the terminal 100 can also set the privacy type (private and non-private) of various shortcut windows. For non-private shortcut windows, the terminal 100 can also display them on the interface to be unlocked.
  • the terminal 100 can detect the gaze position of the user's eyeballs on the interface to be unlocked, and determine whether the user is gazing at the non-private shortcut window. When detecting that the user looks at the non-private shortcut window, the terminal 100 may display a commonly used interface corresponding to the shortcut window. In this way, the user does not need to complete the unlocking operation, thereby further saving user operations and allowing the user to obtain commonly used applications and/or commonly used interfaces more quickly.
  • the payment code setting interface may also include a button 611.
  • Button 611 can be used to set the privacy type of the shortcut window 225 associated with the payment code. Button 611 being turned on ("ON") indicates that the payment code shortcut window 225 is private. Conversely, button 611 being turned off ("OFF") indicates that the shortcut window 225 is non-private. As shown in Figure 6A, the shortcut window 225 can be set to be private.
  • the shortcut window 226 associated with the health code can also be set to be private or non-private.
  • button 612 being turned off means that the shortcut window 226 is set to be non-private.
  • the option corresponding to the private shortcut window may be accompanied by a security display label 613 to remind the user that the shortcut window is private and will not be displayed on the screen before unlocking.
  • the terminal 100 may display the non-private health code shortcut window 226 on top of the layer of the interface to be unlocked.
  • the terminal 100 may collect the user's facial image.
  • the terminal 100 may recognize that the user is looking at the health code shortcut window 226 based on the collected image frame including the user's facial image.
  • the terminal 100 may display a health code interface corresponding to the health code shortcut window 226 that displays the health code, see FIG. 6G.
  • the terminal 100 will not display the shortcut window on the interface to be unlocked to avoid leakage of the payment code.
  • the terminal 100 can turn on the camera to identify the user's eyeball gaze position on the interface to be unlocked.
  • the terminal 100 may display the corresponding commonly used interface.
  • the terminal 100 may not display the corresponding common interface.
  • when only private shortcut windows are provided in the terminal 100, in the interface to be unlocked, the terminal 100 does not need to turn on the camera to collect the user's facial image and identify the user's eyeball gaze position.
  • the terminal 100 can display the main interface. After displaying the main interface, the terminal 100 can display both the non-private health code shortcut window 226 and the private payment code shortcut window 225. That is to say, the terminal 100 can display a private shortcut window after being unlocked.
  • the terminal 100 can display a non-private shortcut window before unlocking or display a non-private shortcut window after unlocking, providing users with a more convenient service of controlling and displaying commonly used applications and/or commonly used interfaces.
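The privacy behavior above (non-private shortcut windows may appear on the interface to be unlocked; private ones appear only after unlocking) can be sketched as a simple filter. The tuple representation and function name are illustrative assumptions.

```python
# Illustrative sketch of privacy-type filtering; representation assumed.
def visible_windows(windows, unlocked):
    """Return the names of shortcut windows that may be shown.

    `windows` is a list of (name, is_private) tuples. Before unlocking,
    only non-private windows are shown, to avoid leaking e.g. a payment
    code; after unlocking, all windows may be shown.
    """
    if unlocked:
        return [name for name, _ in windows]
    return [name for name, private in windows if not private]
```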
  • the terminal 100 can also set the display times of various shortcut windows. After the above display times are exceeded, the terminal 100 may not display the shortcut window, but the terminal 100 may still recognize the user's eyeball gaze position and provide services for quickly displaying applications and/or commonly used interfaces.
  • the health code setting interface may also include a control 711.
  • the control 711 can be used to set the number of times the shortcut window is displayed.
  • the "100 times" displayed in the control 711 can mean: for the first 100 times that the eye-gaze-controlled health code display function is triggered, the terminal 100 can display the shortcut window 226 corresponding to the health code, to prompt the user.
  • after that, the terminal 100 may not display the shortcut window 226 corresponding to the health code (the dotted-line box in Figure 7B represents the area where the user's eyeball gaze is located; the dotted-line box is not displayed on the screen).
  • the terminal 100 can still collect the user's facial image. If it is detected that the user looks at the lower left corner area of the first desktop, the terminal 100 can still display the corresponding health code interface displaying the health code, see FIG. 7C , so that the user can use the above health code interface to complete the health code verification.
  • the terminal 100 may not display the above shortcut window, thereby reducing the shortcut window's obstruction of the main interface and improving the user experience.
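The display-count behavior in Figures 7A-7C (draw the shortcut window thumbnail only for the first N activations, while gaze recognition keeps working afterwards) can be sketched as follows. The counter handling and return shape are assumptions; only the "100 times" value comes from control 711.

```python
# Illustrative sketch of the display-count limit; names assumed.
DISPLAY_LIMIT = 100  # the "100 times" value shown in control 711

def handle_activation(times_shown, gaze_hits_area):
    """Return (draw_window, open_interface, new_times_shown).

    The window thumbnail is drawn only for the first DISPLAY_LIMIT
    activations; gaze recognition still opens the commonly used
    interface after the limit is reached.
    """
    draw = times_shown < DISPLAY_LIMIT
    open_interface = gaze_hits_area  # gaze control works either way
    return draw, open_interface, times_shown + (1 if draw else 0)
```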
  • Figure 8 exemplarily shows a flow chart of a display method provided by an embodiment of the present application. The following is a detailed introduction to the process of the terminal 100 implementing the above display method with reference to FIG. 8 and the user interface introduced previously.
  • the terminal 100 detects that the trigger condition for turning on eye gaze recognition is met.
  • the terminal 100 may be preset with some scenarios for enabling eye gaze recognition.
  • when it is detected that the terminal 100 is in one of the above scenes, the terminal 100 will turn on the camera to collect the user's facial image.
  • the terminal 100 can turn off the camera and stop collecting the user's facial image to avoid occupying camera resources, save power consumption, and protect user privacy.
  • developers can determine the above-mentioned scenarios in which eye gaze recognition needs to be turned on by analyzing user habits in advance.
  • the above scene is usually the scene where the user picks up the mobile phone or just unlocks the mobile phone and enters the mobile phone.
  • the terminal 100 can provide the user with a service of quickly launching a certain application program (commonly used application program), so as to save the user operations and improve the user experience.
  • the terminal 100 can provide the user with a control method for controlling eye gaze to activate the above-mentioned applications, thereby avoiding the problem of inconvenience in performing touch operations in scenarios where the user's hands are occupied, and further improving the user experience.
  • the above scenarios include but are not limited to: the scenario of lighting up the mobile phone screen and displaying the interface to be unlocked, and the scenario of displaying the main interface (including the first desktop, second desktop, negative screen, etc.) after unlocking.
  • the triggering conditions for turning on eye gaze recognition include: detecting a user operation to wake up the phone, and detecting a user operation that completes unlocking and displays the main interface.
  • the user operation of waking up the mobile phone includes but is not limited to the operation of the user picking up the mobile phone, the operation of the user waking up the mobile phone through the voice assistant, etc.
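The trigger conditions of S101 can be sketched as membership in a set of wake/unlock events. The event names are illustrative assumptions standing in for the operations the text lists (picking up the phone, waking it via the voice assistant, completing unlock).

```python
# Illustrative sketch of the S101 trigger check; event names assumed.
TRIGGER_EVENTS = {
    "pick_up_phone",             # user picks up the mobile phone
    "voice_assistant_wake",      # user wakes the phone via the voice assistant
    "unlock_to_main_interface",  # unlock completes and the main interface shows
}

def should_start_gaze_recognition(event):
    """True when the event is one of the preset gaze-recognition triggers."""
    return event in TRIGGER_EVENTS
```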
  • the terminal 100 can display the main interface shown in Figure 2D. At the same time, the terminal 100 can also display, on top of the layer of the main interface, the shortcut windows 225 and 226 associated with commonly used applications or with commonly used interfaces in those applications.
  • the above operation of instructing the terminal 100 to display the user interfaces shown in FIGS. 2C to 2E may be referred to as detecting a user operation that completes unlocking and displays the main interface.
  • the terminal 100 can turn on the camera to collect the user's facial image, identify the user's eyeball gaze position, and then determine whether the user is looking at the above-mentioned shortcut window.
  • when it is detected that the terminal 100 is awakened but unlocking is not completed, the terminal 100 can also display, on top of the layer of the interface to be unlocked, the shortcut windows associated with commonly used applications or with commonly used interfaces in those applications. At this time, the terminal 100 can also turn on the camera to collect the user's facial image, identify the user's eyeball gaze position, and then determine whether the user is looking at the above-mentioned shortcut window.
  • the terminal 100 turns on the camera module 210 to collect the user's facial image.
  • the terminal 100 may determine to turn on the eye gaze recognition function.
  • the terminal 100 may display a shortcut window to prompt the user to open commonly used applications and common interfaces associated with the shortcut window by looking at the shortcut window.
  • the terminal 100 can turn on the camera module 210 and collect the user's facial image to identify whether and which shortcut window the user is looking at.
  • the camera module 210 of the terminal 100 includes at least a 2D camera and a 3D camera.
  • 2D cameras can be used to capture and generate two-dimensional images.
  • 3D cameras can be used to capture and generate three-dimensional images containing depth information.
  • the terminal 100 can obtain the two-dimensional image and the three-dimensional image of the user's face at the same time.
  • the terminal 100 can obtain richer facial features, especially eye features, so as to more accurately identify the user's eyeball gaze position, and more accurately determine whether the user is looking at a shortcut window and which shortcut window the user is looking at.
  • the terminal 100 will not always turn on the camera. Therefore, after turning on the camera module 210, the terminal 100 needs to set a time to turn off the camera module 210.
  • the terminal 100 can set the gaze recognition time.
  • the gaze recognition time starts at the moment when the terminal 100 detects the trigger condition described in S101.
  • the moment at which the gaze recognition time ends depends on the duration of the gaze recognition time.
  • the above duration is preset, such as the 2.5 seconds, 3 seconds, 3.5 seconds, or 4 seconds introduced in Figure 2F; among them, 3 seconds is the preferred gaze recognition duration.
  • the terminal 100 can turn off the camera module 210, that is, it will no longer recognize the user's eyeball gaze position.
  • the terminal 100 may no longer display the shortcut window to avoid blocking the main interface for a long time and affecting the user experience.
  • the terminal 100 determines the user's eyeball gaze position based on the collected image frames including the user's facial image.
  • the image frames collected and generated by the camera module 210 during the gaze recognition time may be called target input images.
  • the terminal 100 can identify the user's eyeball gaze position using the above-mentioned target input image. Referring to the introduction of FIG. 1 , when the user looks at the terminal 100 , the position where the user's line of sight focuses on the screen of the terminal 100 may be called the eyeball gaze position.
  • the terminal 100 may input the above image into the eye gaze recognition model.
  • the eye gaze recognition model is a model preset in the terminal 100 .
  • the eye gaze recognition model can determine the user's eye gaze position using an image frame containing the user's facial image, with reference to the cursor point S shown in Figure 1.
  • the eye gaze recognition model can output the position coordinates of the eye gaze position on the screen.
  • the subsequent Figure 9 will specifically introduce the structure of the eye gaze recognition model used in this application, which will not be expanded upon here.
  • based on the above position coordinates, the terminal 100 can determine whether the user is gazing at a shortcut window, and if so which one, and then determine whether to open the commonly used application or interface associated with that shortcut window.
  • the eye gaze recognition model can also output the user's eye gaze area.
  • An eye-gaze area can be contracted into an eye-gaze position, and an eye-gaze position can also be expanded into an eye-gaze area.
  • a cursor point formed by one display unit on the screen can be called an eye gaze position; correspondingly, a cursor area formed by multiple display units on the screen can be called an eye gaze area.
  • by judging the position of the eye gaze area on the screen, the terminal 100 can determine whether the user is looking at a shortcut window, and if so which one, and then determine whether to open the commonly used application or interface associated with that shortcut window.
  • the terminal 100 determines whether the user is gazing at the shortcut window based on the position coordinates of the eye gaze position and the current interface, and further determines whether to display commonly used applications and common interfaces associated with the above shortcut window.
  • after determining the position coordinates of the user's eyeball gaze position, the terminal 100, combined with its current interface, can determine whether the user is gazing at the shortcut window on the current interface.
  • the interface currently displayed by the terminal 100 may be called the current interface of the terminal 100.
  • the terminal 100 can determine the position coordinates of the user's eyeball gaze position. Therefore, the terminal 100 can determine the area or control corresponding to the eye gaze position according to the position coordinates.
  • when the eyeball gaze position is within the shortcut window 225, the terminal 100 can determine that the user is looking at the shortcut window 225; when the eyeball gaze position is within the shortcut window 226, the terminal 100 can determine that the user is looking at the shortcut window 226.
  • the terminal 100 may also determine that the user's eyeball gaze position corresponds to a certain application icon in the common application icon tray 223 or other application icon trays 224, such as the "Gallery" application and so on.
  • the above-mentioned eye gaze position can also be in a blank area on the screen, which does not correspond to the icons or controls in the main interface, nor to the shortcut window described in this application.
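The containment check described above can be sketched as follows; the window names and rectangle coordinates are hypothetical placeholders for the actual layout of shortcut windows 225 and 226:

```python
from typing import Optional, Tuple

# Hypothetical screen-coordinate rectangles (left, top, right, bottom) for the
# shortcut windows; the actual geometry depends on the interface layout.
SHORTCUT_WINDOWS = {
    "window_225": (60, 200, 340, 420),
    "window_226": (380, 200, 660, 420),
}

def gazed_window(gaze_pos: Tuple[int, int]) -> Optional[str]:
    """Return the shortcut window containing the eyeball gaze position, if any."""
    x, y = gaze_pos
    for name, (left, top, right, bottom) in SHORTCUT_WINDOWS.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None  # blank area: no icon, control, or shortcut window is gazed at

print(gazed_window((100, 300)))  # inside window_225
print(gazed_window((10, 10)))    # blank area -> None
```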
  • the terminal 100 may display the payment interface corresponding to the shortcut window 225 .
  • the payment interface is a commonly used interface set by the user.
  • the terminal 100 may display a health code interface corresponding to the shortcut window 226 for displaying the health code.
  • the health code interface is also a commonly used interface set by the user.
  • the terminal 100 may open the application corresponding to the application icon. For example, referring to FIG. 2F, when it is determined that the user is looking at the "Gallery" application icon, the terminal 100 may display the homepage of the "Gallery".
  • the terminal 100 can also display icons of commonly used applications (application icons 321 and 322).
  • when it is determined that the user is looking at the application icon 321 or the application icon 322, the terminal 100 can open the corresponding frequently used application, for example, displaying the home page of that frequently used application.
  • the terminal 100 may not perform any action until the eye gaze recognition time is over, and then turns off the eye gaze recognition function.
  • the terminal 100 may not display the shortcut window or icon when identifying the user's eyeball gaze position. However, the terminal 100 can still determine the specific area to which the eyeball gaze position belongs based on the position coordinates of the user's eyeball gaze position.
  • the above-mentioned specific areas are preset, such as the upper left corner area, the upper right corner area, the lower left corner area, the lower right corner area, etc. shown in FIG. 7A.
  • the terminal 100 can determine which application to open and which interface to display.
  • the terminal 100 can recognize that the user's eyeball gaze position is in the lower left corner area of the screen. Therefore, the terminal 100 can display a health code interface associated with the lower left corner area for displaying the health code. Refer to FIG. 7C .
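A minimal sketch of mapping a gaze position to one of the preset corner areas, assuming the screen is split into four equal quadrants with the origin at the top-left corner (the actual preset areas of Figure 7A may differ):

```python
def gaze_area(x: float, y: float, screen_w: float, screen_h: float) -> str:
    """Map an eyeball gaze position to one of the preset corner areas.

    Assumes the screen is split into four equal quadrants, with the origin
    (0, 0) at the top-left corner; the real preset areas may be shaped
    differently.
    """
    horizontal = "left" if x < screen_w / 2 else "right"
    vertical = "upper" if y < screen_h / 2 else "lower"
    return f"{vertical}_{horizontal}"

# A gaze point in the lower-left quadrant selects the area associated with
# the health code interface in the example above.
print(gaze_area(200, 2000, screen_w=1080, screen_h=2340))  # lower_left
```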
  • Figure 9 exemplarily shows the structure of the eye gaze recognition model.
  • the eye gaze recognition model used in the embodiment of the present application will be introduced in detail below with reference to Figure 9 .
  • the eye gaze recognition model is established based on convolutional neural networks (Convolutional Neural Networks, CNN).
  • the eye gaze recognition model may include: a face correction module, a dimensionality reduction module, and a convolutional network module.
  • the image frames collected by the camera module 210 and including the user's facial image may first be input into the face correction module.
  • the face correction module can be used to identify whether the facial image in the input image frame is straight. For image frames in which the facial image is not straight (such as head tilt), the face correction module can correct the image frame to make it straight, thereby avoiding subsequent impact on the eye gaze recognition effect.
  • Figure 10 shows the processing flow of the face correction module performing face correction on the image frames collected by the camera module 210.
  • S201 Use the facial key point recognition algorithm to determine the facial key points in the image frame T1.
  • the key points of the human face include the left eye, the right eye, the nose, the left lip corner, and the right lip corner.
  • existing face key point recognition algorithms, such as the Kinect-based face key point recognition algorithm, can be used and will not be described again here.
  • FIG. 11A exemplarily shows an image frame including a user's facial image, which is denoted as image frame T1.
  • the face correction module can use the face key point recognition algorithm to determine the key points of the face in the image frame T1: left eye a, right eye b, nose c, left lip corner d, and right lip corner e, and determine the coordinate position of each key point; refer to image frame T1 in Figure 11B.
  • S202 Use the face key points to determine the calibrated line of the image frame T1, and then determine the face deflection angle θ of the image frame T1.
  • when the face is straight, the left and right eyes are on the same horizontal line, so the straight line connecting the left-eye and right-eye key points (the calibrated line) is parallel to the horizontal line; that is, the face deflection angle θ (the angle between the calibrated line and the horizontal line) is 0.
  • the face correction module can use the recognized coordinate positions of the left eye a and the right eye b to determine the calibrated line L1. Then, based on L1 and the horizontal line, the face correction module can determine the face deflection angle ⁇ of the facial image in the image frame T1.
  • the face correction module can correct the image frame T1 to make the face in the image frame straight.
  • the face correction module can first use the coordinate positions of the left eye a and the right eye b to determine the rotation center point y, and then, taking point y as the rotation center, rotate the image frame T1 by θ° to obtain an image frame with a straight facial image, which is recorded as image frame T2.
  • in image frame T2, point A can represent the position of the rotated left eye a, point B the position of the rotated right eye b, point C the position of the rotated nose c, point D the position of the rotated left lip corner d, and point E the position of the rotated right lip corner e.
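The deflection-angle computation of S202 and the rotation described above can be sketched on the key-point coordinates alone (rotating the full image frame would normally be done with an image library); the key-point values below are hypothetical:

```python
import math

def deflection_angle(left_eye, right_eye):
    """Angle (radians) between the calibrated line a-b and the horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.atan2(dy, dx)

def rotate_point(p, center, angle):
    """Rotate point p about center by the given angle (radians)."""
    s, c = math.sin(angle), math.cos(angle)
    x, y = p[0] - center[0], p[1] - center[1]
    return (center[0] + c * x - s * y, center[1] + s * x + c * y)

# Hypothetical key points of a tilted face (image coordinates).
a, b = (100.0, 120.0), (200.0, 150.0)              # left eye a, right eye b
theta = deflection_angle(a, b)                     # face deflection angle
y_center = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)  # rotation center point y

# Rotating by -theta levels the calibrated line: A and B share one y value.
A = rotate_point(a, y_center, -theta)
B = rotate_point(b, y_center, -theta)
print(round(A[1], 6) == round(B[1], 6))  # True: eyes now on one horizontal line
```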
  • S205 Process the corrected image frame to obtain a left eye image, a right eye image, a facial image, and face grid data.
  • the face grid data can be used to reflect the position of the face image in the entire image.
  • centered on the face key points, the face correction module can crop the corrected image frame according to preset sizes, thereby obtaining the corresponding left eye image, right eye image, and face image.
  • the face correction module may determine face mesh data.
  • the face correction module can determine a rectangle of fixed size with the left eye A as the center.
  • the image covered by this rectangle is the left eye image.
  • the face correction module can determine the right eye image with the right eye B as the center, and the face image with the nose C as the center.
  • the size of the left eye image and the right eye image are the same, and the size of the face image and the left eye image are different.
  • the face correction module can correspondingly obtain the face grid data, that is, the position of the face image in the entire image.
  • the terminal 100 can obtain the corrected image frame of the facial image, and obtain the corresponding left eye image, right eye image, facial image and face mesh data from the above image frame.
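The cropping of S205 can be sketched as follows; the patch sizes and key-point centers are assumptions, since the actual preset sizes are not specified:

```python
import numpy as np

EYE_SIZE, FACE_SIZE = 32, 64  # assumed preset crop sizes (pixels)

def crop_centered(image: np.ndarray, center, size: int) -> np.ndarray:
    """Crop a size*size patch of the image centered on a key point."""
    cx, cy = center
    half = size // 2
    return image[cy - half:cy + half, cx - half:cx + half]

def face_grid(image: np.ndarray, face_center, face_size: int) -> np.ndarray:
    """Binary mask marking where the face image lies in the whole frame."""
    grid = np.zeros(image.shape[:2], dtype=np.uint8)
    cx, cy = face_center
    half = face_size // 2
    grid[cy - half:cy + half, cx - half:cx + half] = 1
    return grid

frame = np.zeros((480, 640), dtype=np.uint8)                # corrected frame T2
left_eye_img = crop_centered(frame, (220, 200), EYE_SIZE)   # centered on A
right_eye_img = crop_centered(frame, (320, 200), EYE_SIZE)  # centered on B
face_img = crop_centered(frame, (270, 240), FACE_SIZE)      # centered on C
grid = face_grid(frame, (270, 240), FACE_SIZE)

print(left_eye_img.shape, right_eye_img.shape, face_img.shape, int(grid.sum()))
```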
  • the face correction module can input the left eye image, right eye image, facial image and face mesh data output by itself into the dimensionality reduction module.
  • the dimensionality reduction module can be used to reduce the dimensionality of the input left eye image, right eye image, facial image and face grid data to reduce the computational complexity of the convolutional network module and improve the speed of eye gaze recognition.
  • the dimensionality reduction methods used by the dimensionality reduction module include but are not limited to principal component analysis (PCA), downsampling, 1*1 convolution kernel, etc.
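As one example of the dimensionality reduction mentioned above, simple 2x downsampling can be sketched as follows (PCA or a 1*1 convolution kernel would serve the same purpose of reducing the data fed to the convolutional network module):

```python
import numpy as np

def downsample(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Reduce image dimensionality by keeping every factor-th pixel."""
    return image[::factor, ::factor]

# A hypothetical 32*32 eye image is reduced to 16*16 before entering
# the convolutional network module.
eye_image = np.arange(32 * 32, dtype=np.float32).reshape(32, 32)
reduced = downsample(eye_image)
print(eye_image.shape, "->", reduced.shape)  # (32, 32) -> (16, 16)
```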
  • Each dimensionally reduced image (left eye image, right eye image, face image and face mesh data) can be input to the convolutional network module.
  • the convolutional network module can output the eye gaze position based on the above input image.
  • the structure of the convolutional network in the convolutional network module can be referred to Figure 12.
  • the convolution network may include convolution group 1 (CONV1), convolution group 2 (CONV2), and convolution group 3 (CONV3).
  • a convolution group includes: convolution kernel (Convolution), activation function PRelu, pooling kernel (Pooling) and local response normalization layer (Local Response Normalization, LRN).
  • the convolution kernel of CONV1 is a 7*7 matrix and its pooling kernel is a 3*3 matrix; the convolution kernel of CONV2 is a 5*5 matrix and its pooling kernel is a 3*3 matrix; the convolution kernel of CONV3 is a 3*3 matrix and its pooling kernel is a 2*2 matrix.
  • separable convolution technology refers to decomposing an n*n matrix into an n*1 column matrix and a 1*n row matrix for storage, thereby reducing storage space requirements. Therefore, the eye gaze recognition model used in this application has the advantages of small size and easy deployment, making it suitable for deployment on terminal electronic devices such as mobile phones.
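The storage saving of separable convolution can be illustrated as follows: a rank-1 n*n kernel is stored as an n*1 column and a 1*n row (2n values instead of n*n), and the full kernel is reconstructed on demand; the kernel values below are illustrative:

```python
import numpy as np

n = 7  # e.g. the 7*7 convolution kernel of CONV1

# A separable (rank-1) kernel is the outer product of a column and a row.
col = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0]).reshape(n, 1)
row = np.array([1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]).reshape(1, n)
kernel = col @ row  # full n*n kernel, reconstructed on demand

full_storage = kernel.size               # n*n = 49 values
separable_storage = col.size + row.size  # n + n = 14 values

print(np.allclose(kernel, np.outer(col.ravel(), row.ravel())))  # True
print(full_storage, separable_storage)  # 49 14
```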
  • the convolutional network may include connection layer 1 (FC1), connection layer 2 (FC2), and connection layer 3 (FC3).
  • FC1 may include a combination module (concat), a convolution kernel 1201, PRelu, and a fully connected module 1202. Among them, concat can be used to combine left eye images and right eye images.
  • the face image can be input into FC2 after passing through CONV1, CONV2, and CONV3.
  • FC2 may include a convolution kernel 1203, PRelu, a fully connected module 1204, and a fully connected module 1205.
  • FC2 can perform two full connections on face images.
  • the face mesh data can be input into FC3 after passing through CONV1, CONV2, and CONV3.
  • FC3 includes a fully connected module.
  • connection layers with different structures are constructed for different types of images (such as left eye, right eye, and face images), which can better capture the characteristics of each type of image, thereby improving the accuracy of the model, so that the terminal 100 can identify the user's eye gaze position more accurately.
  • the full connection module 1206 can perform another full connection on the left eye image, the right eye image, the face image, and the face grid data, and finally output the position coordinates of the eyeball gaze position.
  • the eyeball gaze position indicates the abscissa and ordinate of the focus of the user's sight on the screen, refer to the cursor point S shown in Figure 1.
  • when the eyeball gaze position falls on a control, the terminal 100 can determine that the user is gazing at that control.
  • the convolutional neural network used by the eye gaze model in this application has fewer parameters. Therefore, the time required to compute and predict the user's eye gaze position using the eye gaze model is relatively short; that is, the terminal 100 can quickly determine the user's eye gaze position, and then quickly determine whether the user intends to open commonly used applications and interfaces through eye gaze control.
  • the first preset area and the second preset area may be any two different areas of the upper left corner area, the upper right corner area, the lower left corner area, and the lower right corner area of the screen;
  • the first interface and the fourth interface can be any two different interfaces among the main interfaces such as the first desktop (page 31), the second desktop (page 32), the negative screen (page 30), etc.;
  • the second interface, the third interface, the fifth interface, and the sixth interface can be any of the following interfaces: the payment interface shown in Figure 2G, the health code interface shown in Figure 2I, the health detection record interface shown in Figure 5G, as well as various other commonly used user interfaces such as the ride code interface shown in the accompanying figures;
  • the shortcut window 225 displayed by the electronic device on the first desktop can be called the first control; referring to Figure 3D, the icon 331 can also be called the first control.
  • Figure 14 is a schematic system structure diagram of the terminal 100 according to the embodiment of the present application.
  • the layered architecture divides the system into several layers, and each layer has clear roles and division of labor.
  • the layers communicate through software interfaces.
  • the system is divided into five layers, which are, from top to bottom: the application layer, the application framework layer (framework layer), the hardware abstraction layer, the driver layer, and the hardware layer.
  • the application layer can include multiple applications, such as dial-up applications, gallery applications, and so on.
  • the application layer also includes an eye gaze SDK (software development kit).
  • the system of the terminal 100 and third-party applications installed on the terminal 100 can identify the user's eyeball gaze position by calling the eye gaze SDK.
  • the framework layer provides application programming interface (API) and programming framework for applications in the application layer.
  • the framework layer includes some predefined functions.
  • the framework layer may include a camera service interface and an eyeball gaze service interface.
  • the camera service interface is used to provide an application programming interface and programming framework for using the camera.
  • the eye gaze service interface provides an application programming interface and programming framework for using the eye gaze recognition model.
  • the hardware abstraction layer is the interface layer between the framework layer and the driver layer, providing a virtual hardware platform for the operating system.
  • the hardware abstraction layer may include a camera hardware abstraction layer and an eye gaze process.
  • the camera hardware abstraction layer can provide virtual hardware for camera device 1 (RGB camera), camera device 2 (TOF camera), or more camera devices.
  • the calculation process of identifying the user's eye gaze position through the eye gaze recognition module is performed during the eye gaze process.
  • the driver layer is the layer between hardware and software.
  • the driver layer includes drivers for various hardware.
  • the driver layer may include camera device drivers.
  • the camera device driver is used to drive the sensor of the camera to collect images and drive the image signal processor to preprocess the images.
  • the hardware layer includes sensors and secure data buffers.
  • the sensors include RGB camera (ie 2D camera) and TOF camera (ie 3D camera).
  • the cameras included in the sensors correspond one-to-one to the virtual camera devices included in the camera hardware abstraction layer.
  • RGB cameras capture and generate 2D images.
  • TOF camera is a depth-sensing camera that can collect and generate 3D images with depth information.
  • Data collected by the camera is stored in a secure data buffer.
  • the secure data buffer can also avoid abuse of the image data collected by the camera, which is why it is called a secure data buffer.
  • the software layers introduced above, and the modules or interfaces included in each layer, run in a rich execution environment (REE).
  • the terminal 100 also includes a trusted execution environment (TEE). Data communication in the TEE is more secure than in the REE.
  • the TEE can include an eye gaze recognition algorithm module, a trusted application (TA) module, and a security service module.
  • the eye gaze recognition algorithm module stores the executable code of the eye gaze recognition model.
  • TA can be used to safely send the recognition results output by the above model to the eye gaze process.
  • the security service module can be used to securely input the image data stored in the secure data buffer to the eye gaze recognition algorithm module.
  • the terminal 100 detects that the trigger condition for turning on eye gaze recognition is met. Accordingly, the terminal 100 may determine to perform the eye gaze recognition operation.
  • the terminal 100 can call the eye gaze service through the eye gaze SDK.
  • the eye gaze service can call the camera service of the frame layer to collect and obtain the user's facial image through the camera service.
  • the camera service can send instructions to start the RGB camera and TOF camera by calling camera device 1 (RGB camera) and camera device 2 (TOF camera) in the camera hardware abstraction layer.
  • the camera hardware abstraction layer sends this instruction to the camera device driver of the driver layer.
  • the camera device driver can start the camera according to the above instructions.
  • the instructions sent by camera device 1 to the camera device driver can be used to start the RGB camera.
  • the instructions sent by the camera device 2 to the camera device driver can be used to start the TOF camera.
  • the eye gaze service creates an eye gaze process and initializes the eye gaze recognition model.
  • Images (two-dimensional images and three-dimensional images) generated by the image signal processor can be stored in a secure data buffer.
  • the image data stored in the secure data buffer can be transmitted to the eye gaze recognition algorithm through the secure transmission channel provided by the security service in the TEE.
  • the eye gaze recognition algorithm can input the above image data into the eye gaze recognition model established based on CNN to determine the user's eye gaze position. Then, TA safely passes the above-mentioned eye gaze position back to the eye gaze process, and then returns it to the application layer eye gaze SDK through the camera service and eye gaze service.
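The TEE-side data flow described above can be sketched schematically; the classes below are purely illustrative stand-ins (with an invented fixed gaze position), not the actual TA, security service, or model interfaces:

```python
class SecureDataBuffer:
    """Holds camera frames so that only the security service can read them."""
    def __init__(self):
        self._frames = []

    def store(self, frame):
        self._frames.append(frame)

    def drain(self):
        frames, self._frames = self._frames, []
        return frames

class EyeGazeRecognitionAlgorithm:
    """Stand-in for the CNN-based model running inside the TEE."""
    def recognize(self, frames):
        # A real implementation runs the eye gaze recognition model; here we
        # just return a fixed, made-up gaze position for illustration.
        return (540, 1170)

class TrustedApplication:
    """TA: returns the recognition result to the REE-side eye gaze process."""
    def __init__(self, algorithm):
        self.algorithm = algorithm

    def handle_request(self, secure_buffer):
        frames = secure_buffer.drain()           # via the security service
        return self.algorithm.recognize(frames)  # gaze position back to REE

buffer = SecureDataBuffer()
buffer.store("frame-1")  # image from the image signal processor
ta = TrustedApplication(EyeGazeRecognitionAlgorithm())
print(ta.handle_request(buffer))  # (540, 1170)
```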
  • the eye gaze SDK can determine the area or icon, window and other controls that the user is looking at based on the received eye gaze position, and then determine the display action associated with the above area or control.
  • Figure 15 shows a schematic diagram of the hardware structure of the terminal 100.
  • the terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the terminal 100.
  • the terminal 100 may include more or fewer components than shown in the figures, or some components may be combined, or some components may be separated, or may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units can be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have been recently used or recycled by processor 110 . If the processor 110 needs to use the instructions or data again, it can be called directly from the memory. Repeated access is avoided and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationships between the modules illustrated in the embodiment of the present invention are only schematic illustrations and do not constitute a structural limitation on the terminal 100 .
  • the terminal 100 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the wireless communication function of the terminal 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied to the terminal 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the wireless communication module 160 can provide wireless communication solutions applied to the terminal 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and others.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the terminal 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the terminal 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the terminal 100 implements the display function through the GPU, the display screen 194, and the application processor.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc.
  • Display 194 includes a display panel.
  • the display panel can use a liquid crystal display (LCD).
  • the display panel can also use an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the terminal 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the terminal 100 uses the display functions provided by the GPU, the display screen 194, and the application processor to display the user interfaces shown in Figures 2A-2I, 3A-3E, 4A-4D, 5A-5M, 6A-6I, and 7A-7C.
  • the terminal 100 can implement the shooting function through the ISP, camera 193, video codec, GPU, display screen 194, application processor, etc.
  • the camera 193 includes an RGB camera (2D camera) that generates two-dimensional images and a TOF camera (3D camera) that generates three-dimensional images.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera sensor through the lens, the optical signal is converted into an electrical signal, and the camera sensor passes the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise and brightness. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
  • Camera 193 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
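As an illustration of this kind of format conversion, the following sketch applies the standard full-range BT.601 equations to one YUV sample (the DSP would apply the same arithmetic in hardware across every pixel of the image; this is a generic textbook formula, not the terminal's actual firmware):

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV sample to 8-bit RGB."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    # Clamp each channel to the valid 8-bit range.
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

# A neutral sample (U = V = 128) maps to equal R, G, B values.
print(yuv_to_rgb(128, 128, 128))  # (128, 128, 128)
```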
  • the terminal 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals.
  • Video codecs are used to compress or decompress digital video. Terminal 100 may support one or more video codecs.
  • NPU is a neural network (NN) computing processor.
  • the NPU can realize intelligent cognitive applications of the terminal 100, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • the terminal 100 collects the user's facial image through the shooting capability provided by the ISP and the camera 193 .
  • the terminal 100 can execute the eye gaze recognition algorithm through the NPU, and then identify the user's eye gaze position through the collected user facial image.
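The eye gaze recognition algorithm itself is not detailed at this point; as a hedged illustration of its final step, mapping a predicted gaze coordinate to one of the preset screen areas might look like the following (the region names and pixel geometry are assumptions for illustration only):

```python
# Hypothetical preset regions: name -> (left, top, right, bottom) in pixels,
# for an assumed 1080 x 2400 screen.
PRESET_REGIONS = {
    "top_right": (540, 0, 1080, 480),
    "top_left": (0, 0, 540, 480),
}

def region_for_gaze(x, y):
    """Return the name of the preset region containing gaze point (x, y),
    or None if the gaze falls outside every preset region."""
    for name, (l, t, r, b) in PRESET_REGIONS.items():
        if l <= x < r and t <= y < b:
            return name
    return None

print(region_for_gaze(900, 100))  # "top_right"
```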
  • the internal memory 121 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM).
  • Random access memory can include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random-access memory (DDR SDRAM; for example, fifth-generation DDR SDRAM is generally called DDR5 SDRAM), etc.
  • Non-volatile memory can include disk storage devices and flash memory.
  • the random access memory can be directly read and written by the processor 110, can be used to store executable programs (such as machine instructions) of the operating system or other running programs, and can also be used to store user and application data, etc.
  • the non-volatile memory can also store executable programs and user and application program data, etc., and can be loaded into the random access memory in advance for direct reading and writing by the processor 110.
  • the application code of the eye gaze SDK can be stored in a non-volatile memory.
  • the application code of the Eye Gaze SDK can be loaded into random access memory. Data generated when running the above code can also be stored in random access memory.
  • the external memory interface 120 can be used to connect an external non-volatile memory to expand the storage capability of the terminal 100 .
  • the external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement the data storage function. For example, music, video, and other files can be saved in the external non-volatile memory.
  • the terminal 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals.
  • Speaker 170A also called “speaker”
  • Receiver 170B also called “earpiece”
  • microphone 170C is used to convert sound signals into electrical signals.
  • when the terminal 100 is in the screen-off or always-on display (AOD) state, it can obtain the audio signal in the environment through the microphone 170C, and then determine whether the user's voice wake-up word is detected.
  • when making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C so that the sound signal is input into the microphone 170C.
  • the headphone interface 170D is used to connect wired headphones.
  • the pressure sensor 180A is used to sense pressure signals and can convert the pressure signals into electrical signals.
  • the gyro sensor 180B may be used to determine the angular velocity of the terminal 100 around three axes (ie, x, y, and z axes), and thereby determine the motion posture of the terminal 100.
  • the acceleration sensor 180E can detect the acceleration of the terminal 100 in various directions (generally along three axes), and can therefore be used to recognize the posture of the terminal 100. In this embodiment of the present application, when the screen is off or in the screen-off AOD state, the terminal 100 can detect whether the user picks up the phone through the acceleration sensor 180E and the gyro sensor 180B, and then determine whether to light up the screen.
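The application does not disclose the pick-up detection algorithm; the following is only a crude illustrative heuristic combining the two sensors, in which the phone is considered picked up when it shows rotational activity and ends up tilted rather than lying flat (all thresholds are assumptions):

```python
import math

def is_picked_up(accel, gyro, tilt_threshold_deg=30.0, gyro_threshold=0.5):
    """Crude pick-up heuristic: the phone rotates (gyro activity) and ends
    tilted toward the user rather than lying flat.

    accel: (ax, ay, az) in m/s^2; gyro: (gx, gy, gz) in rad/s.
    """
    ax, ay, az = accel
    # Angle between gravity and the phone's z axis: ~0 deg when lying flat.
    g = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, az / g))))
    rotating = max(abs(v) for v in gyro) > gyro_threshold
    return tilt > tilt_threshold_deg and rotating

# Flat on a table with no rotation: not picked up.
print(is_picked_up((0.0, 0.0, 9.81), (0.0, 0.0, 0.0)))  # False
```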
  • Air pressure sensor 180C is used to measure air pressure.
  • Magnetic sensor 180D includes a Hall sensor.
  • the terminal 100 may use the magnetic sensor 180D to detect the opening and closing of a flip cover. Therefore, in some embodiments, when the terminal 100 is a flip phone, the terminal 100 can detect the opening and closing of the flip cover based on the magnetic sensor 180D, and then determine whether to light up the screen.
  • Distance sensor 180F is used to measure distance.
  • Proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector.
  • the terminal 100 can use the proximity light sensor 180G to detect scenarios in which the user holds the terminal 100 close to the body, such as an earpiece call.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the terminal 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
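A simple illustrative mapping from sensed ambient illuminance to a backlight level might look like the sketch below (the logarithmic curve and the 1-10000 lux working range are assumptions, not the terminal's actual brightness policy):

```python
import math

def brightness_for_lux(lux, min_level=10, max_level=255):
    """Map ambient illuminance (lux) to a backlight level on a log scale,
    since perceived brightness is roughly logarithmic in luminance."""
    lux = max(1.0, min(10000.0, lux))   # clamp to an assumed working range
    frac = math.log10(lux) / 4.0        # 0.0 at 1 lux, 1.0 at 10000 lux
    return round(min_level + frac * (max_level - min_level))

print(brightness_for_lux(1))      # 10  (dim room -> dim screen)
print(brightness_for_lux(10000))  # 255 (direct sunlight -> full brightness)
```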
  • Fingerprint sensor 180H is used to collect fingerprints.
  • the terminal 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application lock and other functions.
  • Temperature sensor 180J is used to detect temperature.
  • Bone conduction sensor 180M can acquire vibration signals.
  • Touch sensor 180K also known as "touch device”.
  • the touch sensor 180K can be disposed on the display screen 194.
  • the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near the touch sensor 180K.
  • the touch sensor can pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the terminal 100 in a position different from that of the display screen 194 .
  • the terminal 100 uses the touch sensor 180K to detect whether there is a user operation on the screen, such as clicking, sliding and other operations. Based on the user operation on the screen detected by the touch sensor 180K, the terminal 100 can determine the actions to be performed subsequently, such as running a certain application program, displaying the interface of the application program, and so on.
  • the buttons 190 include a power button, a volume button, etc.
  • Key 190 may be a mechanical key. It can also be a touch button.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for vibration prompts for incoming calls and can also be used for touch vibration feedback.
  • the indicator 192 may be an indicator light, which may be used to indicate charging status, power changes, messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • the terminal 100 can support 1 or N SIM card interfaces.
  • the term "user interface (UI)" in the description, claims, and drawings of this application refers to a medium interface for interaction and information exchange between an application program or an operating system and a user; it implements the conversion between the internal form of information and a form acceptable to the user.
  • the user interface of an application is source code written in specific computer languages such as Java and extensible markup language (XML).
  • the interface source code is parsed and rendered on the terminal device, and finally presented as content that the user can recognize.
  • Control, also called widget, is the basic element of the user interface. Typical controls include toolbars, menu bars, text boxes, buttons, scroll bars (scrollbar), images, and text.
  • the properties and contents of controls in the interface are defined through tags or nodes.
  • XML specifies the controls contained in the interface through nodes such as <Textview>, <ImgView>, and <VideoView>.
  • a node corresponds to a control or property in the interface. After parsing and rendering, the node is rendered into user-visible content.
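To illustrate the parse-and-render idea, a toy layout fragment (invented here for illustration, not taken from any real application) can be parsed into its control nodes with a standard XML parser; each child node corresponds to one control in the interface:

```python
import xml.etree.ElementTree as ET

layout = """
<Layout>
  <Textview text="Pay" />
  <ImgView src="qr_code.png" />
  <VideoView src="intro.mp4" />
</Layout>
"""

# Each child node of the root corresponds to one control in the interface.
controls = [(node.tag, dict(node.attrib)) for node in ET.fromstring(layout)]
print([tag for tag, _ in controls])  # ['Textview', 'ImgView', 'VideoView']
```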
  • applications such as hybrid applications, often include web pages in their interfaces.
  • a web page also known as a page, can be understood as a special control embedded in an application interface.
  • a web page is source code written in a specific computer language, such as hypertext markup language (HTML), cascading style sheets (CSS), and JavaScript (JS).
  • web page source code can be loaded and displayed as user-recognizable content by a browser or a web page display component with functions similar to the browser.
  • the specific content contained in the web page is also defined through tags or nodes in the web page source code.
  • HTML defines the elements and attributes of the web page through tags such as <p>, <img>, <video>, and <canvas>.
  • the commonly used form of user interface is the graphical user interface (GUI), which refers to a user interface related to computer operations that is displayed graphically. It may be an icon, a window, a control, or another interface element displayed on the display screen of the terminal device.
  • the controls may include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in one computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center through wired (such as coaxial cable, optical fiber, or digital subscriber line) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the available media may be magnetic media (eg, floppy disk, hard disk, tape), optical media (eg, DVD), or semiconductor media (eg, solid state drive), etc.
  • the program can be stored in a computer-readable storage medium. When executed, the program can include the processes of the above method embodiments.
  • the aforementioned storage media include: ROM, random access memory (RAM), magnetic disks, optical disks and other media that can store program codes.

Abstract

The present application provides a display method. The method may be applied to a terminal device such as a mobile phone or a tablet computer. By implementing the method, the terminal device may display one or more shortcut windows in an unlocked main interface and/or an interface to be unlocked. Each shortcut window is associated with a commonly used interface set by a user. When it is detected that the user gazes at a certain shortcut window, the terminal device may display the commonly used interface associated with the shortcut window, such as a commonly used payment interface, a health code interface, etc. Therefore, the user may quickly acquire related information in the commonly used interface, and a touch operation is not required.

Description

A display method and electronic device
This application claims priority to the Chinese patent application filed with the China Patent Office on May 20, 2022, with application number 202210549347.6 and titled "A display method and electronic device", and to the Chinese patent application filed with the China Patent Office on June 30, 2022, with application number 202210761048.9 and titled "A display method and electronic device", the entire contents of which are incorporated into this application by reference.
Technical field
The present application relates to the field of terminals, and in particular, to a display method and an electronic device.
Background
With the rise of mobile terminals and the maturing of communication technology, people have begun to explore new human-computer interaction methods that do not rely on the mouse and keyboard, such as voice control and gesture recognition control, so as to provide users with a more diverse and convenient interaction experience and improve the user experience.
Summary of the invention
An embodiment of the present application provides a display method. By implementing this method, a terminal device can detect the area of the screen that the user is gazing at, and then display an interface corresponding to that area. In this way, the user can quickly obtain the information in that interface without any touch operation.
In a first aspect, the present application provides a display method applied to an electronic device. The electronic device includes a screen, and the screen includes a first preset area. The method includes: displaying a first interface; while displaying the first interface, collecting a first image; determining the user's first eye gaze area based on the first image, where the first eye gaze area indicates the screen area that the user looks at when gazing at the screen; and when the first eye gaze area is within the first preset area, displaying a second interface.
By implementing the method provided in the first aspect, the electronic device can, while displaying an interface, collect images used to determine the user's eye gaze area. When it is determined from these images that the user is gazing at a certain preset area, the electronic device can display the interface associated with that area. In this way, the user can quickly control the electronic device to display a certain interface through a gaze operation, and thereby quickly obtain the services or information provided by that interface.
In combination with the method provided in the first aspect, in some embodiments, the screen of the electronic device includes a second preset area that is different from the first preset area. The method further includes: determining the user's second eye gaze area based on the first image, where the position of the second eye gaze area on the screen is different from the position of the first eye gaze area on the screen; and when the second eye gaze area is within the second preset area, displaying a third interface, the third interface being different from the second interface.
By implementing the method provided in the above embodiment, the electronic device can divide the screen into multiple preset areas, each of which can correspond to one interface. When the electronic device detects which area the user is gazing at, it can display the interface corresponding to that area. In this way, the user can quickly control the electronic device to display different interfaces by gazing at different screen areas.
In combination with the method provided in the first aspect, in some embodiments, the second interface and the third interface are interfaces provided by the same application, or the second interface and the third interface are interfaces provided by different applications.
In combination with the method provided in the first aspect, in some embodiments, the method further includes: displaying a fourth interface; while displaying the fourth interface, collecting a second image; determining the user's third eye gaze area based on the second image; and when the third eye gaze area is within the first preset area, displaying a fifth interface, the fifth interface being different from the second interface.
By implementing the method provided in the above embodiment, when the electronic device displays different main interfaces, the interface associated with a given screen area of the electronic device can also differ. For example, on the first desktop, the interface associated with the upper-right area of the electronic device may be a payment interface, while on the second desktop, the interface associated with the upper-right area may be a ride-code interface. In this way, the user can set more interfaces associated with screen areas, satisfying the user's need to open those interfaces through gaze operations.
In combination with the method provided in the first aspect, in some embodiments, displaying the second interface when the first eye gaze area is within the first preset area includes: displaying the second interface when the first eye gaze area is within the first preset area and the duration of gazing at the first preset area is a first duration.
By implementing the method provided in the above embodiment, while detecting the user's eye gaze area, the electronic device can also monitor the user's gaze duration. When the gaze duration reaches a preset duration, the electronic device can display the corresponding interface.
In combination with the method provided in the first aspect, in some embodiments, the method further includes: displaying a sixth interface when the first eye gaze area is within the first preset area and the duration of gazing at the first preset area is a second duration.
By implementing the method provided in the above embodiment, the electronic device can also associate one screen area with multiple interfaces and determine which interface to display based on the user's gaze duration.
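Selecting an interface by gaze dwell time can be sketched as follows (the thresholds, region name, and interface names are illustrative assumptions, not values taken from the application):

```python
def interface_for_dwell(region, dwell_s):
    """Pick the interface associated with a gazed region by dwell time.

    Hypothetical mapping: a short gaze opens the first associated
    interface, a longer gaze the second.
    """
    associations = {
        # region: [(min_dwell_seconds, interface_name), ...], longest first
        "top_right": [(3.0, "ride_code_interface"),
                      (1.0, "payment_interface")],
    }
    for min_dwell, interface in associations.get(region, []):
        if dwell_s >= min_dwell:
            return interface
    return None  # gaze too short (or unknown region): do nothing

print(interface_for_dwell("top_right", 1.5))  # "payment_interface"
```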
In combination with the method provided in the first aspect, in some embodiments, the first eye gaze area is a cursor point formed by one display unit on the screen, or the first eye gaze area is a cursor point or cursor area formed by multiple display units on the screen.
In combination with the method provided in the first aspect, in some embodiments, the second interface is a non-privacy interface, and the method further includes: displaying an interface to be unlocked; while displaying the interface to be unlocked, collecting a third image; determining the user's fourth eye gaze area based on the third image; and when the fourth eye gaze area is within the first preset area, displaying the second interface.
In combination with the method provided in the first aspect, in some embodiments, the third interface is a privacy interface, and the method further includes: when the fourth eye gaze area is within the second preset area, not displaying the third interface.
By implementing the method provided in the above embodiment, the electronic device can also set the privacy type of an associated interface. When the associated interface is a non-privacy interface, in the locked-screen state, after recognizing that the user is gazing at the screen area corresponding to that non-privacy interface, the electronic device can display the non-privacy interface directly, without unlocking. In this way, the user can obtain the non-privacy interface more quickly. When the associated interface is a privacy interface, in the locked-screen state, after recognizing that the user is gazing at the screen area corresponding to that privacy interface, the electronic device may refrain from displaying the privacy interface. In this way, the electronic device can provide the user with quick services while avoiding privacy leaks, improving the user experience.
In combination with the method provided in the first aspect, in some embodiments, both the second interface and the third interface are privacy interfaces, and the electronic device does not enable the camera to acquire images while displaying the interface to be unlocked.
By implementing the method provided in the above embodiment, when all the associated interfaces are privacy interfaces, the electronic device need not turn on the camera in the locked-screen state, thereby saving power.
In combination with the method provided in the first aspect, in some embodiments, a first control is displayed in the second preset area of the first interface, and the first control is used to indicate that the second preset area is associated with the third interface.
By implementing the method provided in the above embodiment, while detecting the user's eye gaze area, the electronic device can display a prompt control in each preset area that has an associated interface. The prompt control can indicate to the user that the area has an associated interface, as well as the services or information that the interface can provide. In this way, the user can intuitively see whether each area has an associated interface and what services or information each interface can provide. On this basis, the user can decide which preset area to gaze at, and thus which associated interface to open.
In combination with the method provided in the first aspect, in some embodiments, the first control is not displayed in the second preset area of the interface to be unlocked.
By implementing the method provided in the above embodiment, when the interface associated with a certain preset area is a privacy interface, the electronic device does not display the prompt control indicating that privacy interface in the locked-screen state, preventing the user from gazing at that preset area to no effect.
In combination with the method provided in the first aspect, in some embodiments, the first control is any one of the following: a thumbnail of the first interface, an icon of the application corresponding to the first interface, or a function icon indicating the service provided by the first interface.
In combination with the method provided in the first aspect, in some embodiments, the duration for which the electronic device collects images is a first preset duration, and collecting the first image specifically means: collecting the first image within the first preset duration.
By implementing the method provided in the above embodiment, the terminal device does not detect the user's eye gaze area continuously, but only within a preset period of time, so as to save power and prevent camera misuse from compromising the security of user information.
In combination with the method provided in the first aspect, in some embodiments, the first preset duration is the first 3 seconds of displaying the first interface.
By implementing the method provided in the above embodiment, the terminal device can detect the user's eye gaze area during the first 3 seconds of displaying the first interface to determine whether the user is gazing at a preset area of the screen. This satisfies user needs in the vast majority of scenarios while reducing power consumption as much as possible.
In combination with the method provided in the first aspect, in some embodiments, the electronic device collects the first image through a camera module. The camera module includes at least one 2D camera and at least one 3D camera; the 2D camera is used to acquire two-dimensional images, and the 3D camera is used to acquire images that include depth information. The first image includes a two-dimensional image and an image that includes depth information.
By implementing the method provided in the above embodiment, the camera module of the terminal device can include multiple cameras, among which there is at least one 2D camera and at least one 3D camera. In this way, the terminal device can obtain both a two-dimensional image and a three-dimensional image indicating the gaze position of the user's eyes. Combining the two helps improve the precision and accuracy with which the terminal device identifies the user's eye gaze area.
In combination with the method provided in the first aspect, in some embodiments, the first image acquired by the camera module is stored in a secure data buffer, and before the user's first eye gaze area is determined based on the first image, the method further includes: obtaining the first image from the secure data buffer in a trusted execution environment.
By implementing the method provided in the above embodiment, before the terminal device processes the images collected by the camera module, it can store those images in a secure data buffer. The image data stored in the secure data buffer can be delivered to the eye gaze recognition algorithm only through a secure transmission channel provided by a security service, thereby improving the security of the image data.
In combination with the method provided in the first aspect, in some embodiments, the secure data buffer is located at the hardware layer of the electronic device.
In combination with the method provided in the first aspect, in some embodiments, determining the user's first eye gaze area based on the first image specifically includes: determining feature data of the first image, where the feature data includes one or more of a left-eye image, a right-eye image, a face image, and face mesh data; and using an eye gaze recognition model to determine the first eye gaze area indicated by the feature data, where the eye gaze recognition model is built on a convolutional neural network.
By implementing the method provided in the above embodiment, the terminal device can obtain the left-eye image, the right-eye image, the face image, and the face mesh data from the two-dimensional and three-dimensional images collected by the camera module, thereby extracting more features and improving recognition precision and accuracy.
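As a non-limiting sketch of assembling such feature data (pure Python over row-major pixel arrays; the bounding boxes are assumed to come from a face/eye detector, and the real preprocessing is not given in the application):

```python
def crop(img, box):
    """Crop a row-major image (list of pixel rows) to (left, top, right, bottom)."""
    l, t, r, b = box
    return [row[l:r] for row in img[t:b]]

def face_grid(frame_w, frame_h, face_box, grid=25):
    """Coarse binary mask marking which grid cells the face box covers;
    this plays the role of the 'face mesh/grid' input named in the text."""
    l, t, r, b = face_box
    mask = [[0] * grid for _ in range(grid)]
    for gy in range(t * grid // frame_h, b * grid // frame_h):
        for gx in range(l * grid // frame_w, r * grid // frame_w):
            mask[gy][gx] = 1
    return mask

# The model inputs: left-eye crop, right-eye crop, face crop, face grid.
frame = [[0] * 100 for _ in range(100)]          # dummy 100x100 frame
inputs = (crop(frame, (30, 30, 45, 40)),          # left eye
          crop(frame, (55, 30, 70, 40)),          # right eye
          crop(frame, (20, 20, 80, 80)),          # face
          face_grid(100, 100, (20, 20, 80, 80)))  # face grid
```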
In combination with the method provided in the first aspect, in some embodiments, determining the feature data of the first image specifically includes: performing face correction on the first image to obtain a first image in which the face is upright, and determining the feature data of the first image based on the corrected first image.
By implementing the method provided in the above embodiment, before acquiring the left-eye image, the right-eye image, and the face image, the terminal device can perform face correction on the images collected by the camera module, improving the accuracy of the left-eye image, the right-eye image, and the face image.
In combination with the method provided in the first aspect, in some embodiments, the first interface is any one of the first desktop, the second desktop, or the minus one screen; the fourth interface is any one of the first desktop, the second desktop, or the minus one screen, and is different from the first interface.
By implementing the method provided in the above embodiment, each main interface displayed by the terminal device, such as the first desktop, the second desktop, and the minus one screen, can be configured with its own preset screen areas and the interfaces associated with them. Different main interfaces can also reuse the same preset screen area.
In combination with the method provided in the first aspect, in some embodiments, the association between the first preset area and the second interface and the fifth interface is set by the user.
By implementing the method provided in the above embodiment, the user can, through an interface provided by the electronic device, set the interfaces associated with the different preset screen areas of each main interface, so as to meet the user's personalized needs.
In a second aspect, this application provides an electronic device. The electronic device includes one or more processors and one or more memories, where the one or more memories are coupled to the one or more processors. The one or more memories are used to store computer program code, and the computer program code includes computer instructions. When the one or more processors execute the computer instructions, the electronic device is caused to perform the method described in the first aspect or any possible implementation of the first aspect.
In a third aspect, an embodiment of this application provides a chip system. The chip system is applied to an electronic device and includes one or more processors, where the processors are configured to invoke computer instructions to cause the electronic device to perform the method described in the first aspect or any possible implementation of the first aspect.
In a fourth aspect, this application provides a computer-readable storage medium including instructions. When the instructions are run on an electronic device, the electronic device is caused to perform the method described in the first aspect or any possible implementation of the first aspect.
In a fifth aspect, this application provides a computer program product containing instructions. When the computer program product is run on an electronic device, the electronic device is caused to perform the method described in the first aspect or any possible implementation of the first aspect.
It can be understood that the electronic device provided in the second aspect, the chip system provided in the third aspect, the computer storage medium provided in the fourth aspect, and the computer program product provided in the fifth aspect are all used to perform the method provided by this application. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding method, and details are not repeated here.
Description of the Drawings
FIG. 1 is a schematic diagram of an eyeball gaze position provided by an embodiment of this application;
FIG. 2A to FIG. 2I are a set of user interfaces provided by an embodiment of this application;
FIG. 3A to FIG. 3E are a set of user interfaces provided by an embodiment of this application;
FIG. 4A to FIG. 4D are a set of user interfaces provided by an embodiment of this application;
FIG. 5A to FIG. 5M are a set of user interfaces provided by an embodiment of this application;
FIG. 6A to FIG. 6I are a set of user interfaces provided by an embodiment of this application;
FIG. 7A to FIG. 7C are a set of user interfaces provided by an embodiment of this application;
FIG. 8 is a flowchart of a display method provided by an embodiment of this application;
FIG. 9 is a schematic structural diagram of an eye gaze recognition model provided by an embodiment of this application;
FIG. 10 is a flowchart of a face correction method provided by an embodiment of this application;
FIG. 11A to FIG. 11C are schematic diagrams of a set of face correction methods provided by an embodiment of this application;
FIG. 12 is a structural diagram of a convolutional network of an eye gaze recognition model provided by an embodiment of this application;
FIG. 13 is a schematic diagram of a separable convolution technique provided by an embodiment of this application;
FIG. 14 is a schematic diagram of the system structure of the terminal 100 provided by an embodiment of this application;
FIG. 15 is a schematic diagram of the hardware structure of the terminal 100 provided by an embodiment of this application.
Detailed Description of Embodiments
The terms used in the following embodiments of this application are only for the purpose of describing specific embodiments and are not intended to limit this application.
An embodiment of this application provides a display method. The method can be applied to terminal devices such as mobile phones and tablet computers. A mobile phone, tablet computer, or other terminal device implementing the method may be denoted as the terminal 100. Subsequent embodiments use the terminal 100 to refer to such terminal devices.
Not limited to mobile phones and tablet computers, the terminal 100 may also be a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device. The embodiments of this application place no special restriction on the specific type of the terminal.
In a display method provided by an embodiment of this application, the terminal 100 can display a shortcut window on the main interface after unlocking. The shortcut window can show an application frequently used by the user, for example its icon, its main interface, or a frequently used interface of that application. A frequently used interface is a page that the user often opens. After detecting that unlocking has succeeded, the terminal 100 can detect the user's eyeball gaze position. When it detects that the user's eyeball gaze position falls within the area of the shortcut window, the terminal 100 can display the main interface or the frequently used interface of the application shown in that shortcut window.
The layer on which the shortcut window is drawn lies above the layer of the main interface, so the content displayed in the shortcut window is not obscured. The user's eyeball gaze position is the position on the screen of the terminal 100 at which the user's line of sight focuses when the user gazes at the terminal 100. As shown in FIG. 1, a cursor point S may be displayed on the screen of the terminal 100. When the user gazes at the cursor point S, the position at which the user's line of sight focuses on the screen shown in FIG. 1 is the cursor point S, that is, the user's eyeball gaze position is the cursor point S. The cursor point S can be anywhere on the screen. FIG. 1 also shows a shortcut window W. When the user's eyeball gaze position is the cursor point S', the terminal 100 can determine that the user's eyeball gaze position is within the shortcut window area W, that is, that the user is gazing at the shortcut window.
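The hit test described above, deciding whether an estimated gaze point such as S' falls inside the on-screen area W of a shortcut window, reduces to a point-in-rectangle check. A minimal sketch, with the (left, top, width, height) rectangle representation assumed for illustration:

```python
# Hit test: does the estimated gaze position lie inside a shortcut window
# area? Screen coordinates: x grows rightward, y grows downward.

def gaze_in_window(gaze, window):
    """Return True if gaze point (x, y) lies inside window (left, top, w, h)."""
    x, y = gaze
    left, top, w, h = window
    return left <= x < left + w and top <= y < top + h
```

The terminal would run this test for each displayed shortcut window against the gaze position predicted from the camera frames.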
In some embodiments, shortcut windows can further be divided into a privacy class and a non-privacy class. A shortcut window marked as privacy-class can only be displayed on top of the main interface after unlocking succeeds. A non-privacy-class shortcut window can also be displayed on the to-be-unlocked interface before unlocking succeeds. On the to-be-unlocked interface, when it is detected that the user is gazing at a non-privacy-class application shown in a shortcut window, the terminal 100 can display the main interface or the frequently used interface of that application. Whether a shortcut window is privacy-class depends on the privacy requirements of the information shown in the window.
By implementing the above method, the user can quickly open frequently used applications and the frequently used interfaces within them, which saves user operations and improves convenience. At the same time, the user can control, through the eyeball gaze position, whether the terminal 100 opens such an application or interface, further saving user operations. In particular, when the user's hands are occupied, controlling the terminal 100 to perform an action through the eyeball gaze position provides the user with a new interactive control method and improves the user experience.
The following describes in detail the user scenarios in which the terminal 100 implements the above interaction method based on eye gaze recognition.
FIG. 2A exemplarily shows a schematic diagram of the terminal 100 in a screen-off state.
When the user is not using the terminal 100, the terminal 100 may be in a screen-off state. As shown in FIG. 2A, when the terminal 100 is in the screen-off state, the display of the terminal 100 sleeps and shows a black screen, while the other components and programs operate normally. In other embodiments, when the user is not using the terminal 100, the terminal 100 may also be in a screen-off AOD (Always on Display) state. The screen-off AOD state is a state in which part of the screen is controlled to light up without lighting up the whole screen, that is, a state built on the screen-off state in which only a local area of the screen is lit.
When detecting a user operation for waking up the phone, the terminal 100 can light up the whole screen and display the to-be-unlocked interface shown in FIG. 2B. The to-be-unlocked interface can display the time and date for the user to view. The user operations for waking up the phone detected by the terminal 100 include but are not limited to: the user picking up the phone, the user waking up the phone through a voice assistant, and so on. The embodiments of this application place no restriction on this.
After displaying the to-be-unlocked interface, the terminal 100 can enable a camera to capture and generate image frames, which may include the user's facial image. The terminal 100 can then use these image frames to perform facial recognition and determine whether the facial image in the frames is that of the device owner, that is, whether the user performing the unlock operation is the owner.
Referring to FIG. 2B, the terminal 100 may be provided with a camera module 210. The camera module 210 of the terminal 100 includes at least one 2D camera and one 3D camera. A 2D camera is a camera that generates two-dimensional images, such as the RGB cameras commonly used on mobile phones. A 3D camera is a camera that can generate three-dimensional images, or images that include depth information, such as a TOF camera. Compared with a 2D camera, the image generated by a 3D camera also includes depth information, that is, information about the distance between the photographed object and the 3D camera. Optionally, the camera module 210 may also include multiple 2D cameras and multiple 3D cameras; the embodiments of this application place no restriction on this.
In the embodiments of this application, when performing face unlock verification, the camera used by the terminal 100 may be one camera in the camera module 210. Typically, this camera is the 3D camera in the camera module 210.
When face unlocking succeeds, that is, when the captured facial image matches the owner's facial image, the terminal 100 can display the user interfaces shown in FIG. 2C and FIG. 2D.
First, the terminal 100 can display the unlock-success interface shown in FIG. 2C. This interface may display an icon 211, which can be used to prompt the user that face unlocking has succeeded. The terminal 100 can then display the user interface shown in FIG. 2D, which may be called the main interface of the terminal 100.
It can be understood that the unlock-success interface shown in FIG. 2C is optional. After confirming that unlocking has succeeded, the terminal 100 may also directly display the main interface shown in FIG. 2D.
Not limited to the face unlocking introduced in FIG. 2C, the terminal 100 can also adopt unlocking methods such as password unlocking (pattern password, numeric password) and fingerprint unlocking. After unlocking succeeds, the terminal 100 can likewise display the main interface shown in FIG. 2D.
The main interface may include a notification bar 221, a page indicator 222, a frequently used application icon tray 223, and another application icon tray 224.
The notification bar may include one or more signal strength indicators of mobile communication signals (also known as cellular signals), for example signal strength indicator 221A and signal strength indicator 221B, a wireless fidelity (Wi-Fi) signal strength indicator 221C, a battery status indicator 221D, and a time indicator 221E.
The page indicator 222 can be used to indicate the positional relationship between the currently displayed page and the other pages. Generally, the main interface of the terminal 100 may include multiple pages, and the interface shown in FIG. 2D may be one of them. The main interface of the terminal 100 also includes other pages, which are not shown in FIG. 2D. When detecting the user's leftward or rightward swipe, the terminal 100 can display those other pages, that is, switch pages. At that point, the page indicator 222 also changes its form to indicate the different page. This is described in detail in subsequent embodiments.
The frequently used application icon tray 223 may include multiple frequently used application icons (for example, a camera application icon, a contacts application icon, a phone application icon, and a messages application icon), and these icons remain displayed when pages are switched. The above frequently used application icons are optional; the embodiments of this application place no restriction on them.
The other application icon tray 224 may include multiple general application icons, for example a settings application icon, an app market application icon, a gallery application icon, and a browser application icon. General application icons may be distributed across the other application icon trays 224 of the multiple pages of the main interface, and the general application icons displayed in the other application icon tray 224 change accordingly when pages are switched. The icon of an application may be a general application icon or a frequently used application icon: when it is placed in the frequently used application icon tray 223, it is a frequently used application icon; when it is placed in the other application icon tray 224, it is a general application icon.
It can be understood that FIG. 2D merely exemplarily shows one main interface, or one page of the main interface, of the terminal 100, and should not constitute a limitation on the embodiments of this application.
Referring to FIG. 2E, while displaying the main interface shown in FIG. 2D, that is, after unlocking succeeds, the terminal 100 can also display a shortcut window 225 and a shortcut window 226 on top of the layer of the main interface.
The shortcut window 225 can display a thumbnail of a payment interface, and the shortcut window 226 can display a thumbnail of a health code interface. It can be understood that, to show more vividly how the terminal 100 displays the main interface and the shortcut windows on separate layers, the drawings of this application present this display process in FIG. 2D and FIG. 2E respectively. From the user's perspective, however, the interface seen after unlocking succeeds is actually the one shown in FIG. 2E (the layer of the main interface and the layer of the shortcut windows are displayed at the same time). It can be understood that the main interface may display more or fewer shortcut windows, for example 3, 4, or 1; the embodiments of this application place no restriction on this.
Specifically, a first application may be installed on the terminal 100. The first application can provide the user with a payment service. After the first application is opened, the terminal 100 can display a payment interface. The payment interface may include a payment code, for example a payment QR code or a payment barcode, and the user can complete a payment task by presenting this payment interface. Opening the first application means setting the first application as the foreground application. As shown in FIG. 2E, before the first application is opened, the shortcut window 225 can display a thumbnail of the payment interface to indicate to the user the application and the frequently used interface associated with the shortcut window 225.
Similarly, a second application may also be installed on the terminal 100. After the second application is opened, the terminal 100 can display a health code interface, which can show a health code reflecting the user's health status. The user can complete a health check by presenting this health code interface. Likewise, before the second application is opened, the shortcut window 226 can display a thumbnail of the health code interface.
The payment interface may be called a frequently used interface of the first application, and the health code interface may be called a frequently used interface of the second application.
While displaying the main interface and the shortcut windows, the terminal 100 can collect the user's facial image through the camera module 210.
At this time, the terminal 100 uses two cameras: one 2D camera and one 3D camera. Of course, the terminal 100 is not limited to one 2D camera and one 3D camera; it can also use more cameras to obtain more of the user's facial features, especially eye features, so as to determine the user's eyeball gaze position more quickly and accurately later on.
In the face unlocking scenario, the 3D camera of the terminal 100 is already on, so at this point the terminal 100 only needs to turn on the 2D camera of the camera module 210. In the password unlocking and fingerprint unlocking scenarios, the cameras of the terminal 100 are off, so the terminal 100 needs to turn on both the 2D camera and the 3D camera in the camera module 210.
The time during which the terminal 100 collects the user's facial image through the camera module 210 (the 2D camera and the 3D camera) may be denoted as the gaze recognition time. Preferably, the gaze recognition time is the first 3 seconds of displaying the main interface after unlocking succeeds. After 3 seconds, the terminal 100 can turn off the camera module 210 to save power. If the gaze recognition time is set too short, for example 1 second, the image frames containing the user's facial image collected by the terminal 100 may be insufficient, leading to inaccurate eye gaze recognition results; moreover, it is difficult for the user to focus on a shortcut window within 1 second of the main interface being displayed. If the gaze recognition time is set too long, for example 7 or 10 seconds, power consumption becomes excessive. Of course, the gaze recognition time is not limited to 3 seconds and can also be set to other values, such as 2.5 seconds, 3.5 seconds, or 4 seconds; the embodiments of this application place no restriction on this. Subsequent descriptions all take 3 seconds as an example.
Correspondingly, the terminal 100 may also display the shortcut windows only within the gaze recognition time. When the camera module 210 is turned off, that is, when the terminal 100 no longer detects the user's eyeball gaze position, the terminal 100 also stops displaying the shortcut windows, so as to avoid obscuring the main interface for a long time and affecting the user experience.
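The gaze recognition time behaves as a fixed-length window: for its duration the camera module stays on and the shortcut windows stay visible; when it expires, both are closed. A minimal sketch of that timing logic, with the class name assumed for illustration:

```python
# Fixed-length gaze recognition window (3 s in the running example). While
# active(): keep the camera module on and the shortcut windows visible.
# Once inactive: close the camera to save power and hide the windows.
import time

class GazeRecognitionWindow:
    def __init__(self, duration_s=3.0):
        self.duration_s = duration_s
        self.start = None

    def open(self):
        """Called when the main interface is displayed after unlocking."""
        self.start = time.monotonic()

    def active(self):
        """True while gaze detection and the shortcut windows should stay on."""
        return (self.start is not None
                and time.monotonic() - self.start < self.duration_s)
```

A monotonic clock is used so the window is unaffected by wall-clock adjustments.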
During the gaze recognition time, the camera module 210 can continuously capture and generate image frames containing the user's facial image. These image frames include the two-dimensional images captured by the 2D camera and the three-dimensional images captured by the 3D camera.
Based on the image frames collected during the gaze recognition time, the terminal 100 can identify the user's eyeball gaze position and determine whether the user is gazing at the shortcut window 225 or the shortcut window 226.
As shown in FIG. 2F, the terminal 100 can determine, based on the collected image frames, that the user is gazing at the shortcut window 225. In response to detecting this user action, the terminal 100 can open the first application and display the payment interface corresponding to the shortcut window 225; refer to FIG. 2G. As shown in FIG. 2G, the payment interface displays the payment QR code 231, which provides the user with a payment service, and its related information.
As shown in FIG. 2H, the terminal 100 can also determine, based on the collected image frames, that the user is gazing at the shortcut window 226. In response to detecting this user action, the terminal 100 can open the second application and display the health code interface corresponding to the shortcut window 226; refer to FIG. 2I. As shown in FIG. 2I, the health code interface displays the health code 232 required for a health check and its related information, so that the user can quickly complete the health check.
In other embodiments, the terminal 100 can also display different interfaces by detecting how long the user gazes at a certain area. For example, referring to FIG. 2D, after entering the main interface, the terminal 100 can detect whether the user is gazing at the upper right corner area of the screen. After detecting that the user has gazed at the upper right corner area for a first duration, for example 2 seconds, the terminal 100 can display the shortcut window 225. If it detects that the user is still gazing at the upper right corner area after a second duration, for example 3 seconds, the terminal 100 can switch the shortcut window 225 displayed in the upper right corner area to the shortcut window 226.
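The dwell-time behavior in this embodiment maps continuous gaze duration on the corner area to what is shown: nothing before the first threshold, window 225 between the thresholds, and window 226 after the second. A sketch under those stated assumptions (the 2 s and 3 s values come from the example above; the function name is illustrative):

```python
# Dwell-time to displayed-window mapping for the upper-right-corner area.
def window_for_dwell(dwell_s, first_s=2.0, second_s=3.0):
    """Return which shortcut window is shown after dwell_s seconds of gaze."""
    if dwell_s < first_s:
        return None           # nothing displayed yet
    if dwell_s < second_s:
        return "window_225"   # first shortcut window appears
    return "window_226"       # switched to the second shortcut window
```

The dwell timer would reset whenever the gaze position leaves the corner area.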
In the scenario where the shortcut window 225 or the shortcut window 226 is displayed, the terminal 100 can detect a touch operation, a blink control operation, or a head-turn control operation performed by the user on the window, to determine whether to display the interface corresponding to the shortcut window 225 or the shortcut window 226.
By implementing the above method, the user can obtain the frequently used interface of a frequently used application immediately after opening the terminal 100, and thereby quickly obtain the services and information provided by that interface, for example the payment service provided by the payment interface, or the health code and its related information provided by the health code interface.
On the other hand, the user can control the terminal 100 to display the interface corresponding to a shortcut window through the action of gazing at the window, without performing touch operations such as tapping, double-tapping, or long-pressing. This avoids the problem that the terminal 100 cannot be controlled when the user's hands are occupied, and provides convenience for the user.
FIG. 3A exemplarily shows a main interface that includes multiple pages, each of which may be called a main interface.
As shown in FIG. 3A, the main interface may include a page 30, a page 31, and a page 32. The page 30 may be called the minus one screen, the page 31 may be called the first desktop, and the page 32 may be called the second desktop. The page layout of the second desktop is the same as that of the first desktop and is not described again here. The number of desktops in the main interface can be increased or decreased according to the user's settings; FIG. 3A shows only the first desktop, the second desktop, and so on.
In FIG. 2D, the main interface displayed by the terminal 100 is actually the first desktop of the main interface shown in FIG. 3A. In some embodiments, after unlocking succeeds, the terminal 100 always displays the first desktop first. In other embodiments, after unlocking succeeds, the terminal 100 may display the minus one screen, the first desktop, or the second desktop. Optionally, which of these the terminal 100 displays depends on the page it stayed on when it last exited.
因此,在显示图2C所示的解锁成功界面之后,终端100还可首先显示第二桌面或负一屏,并在上述第二桌面或负一屏所在的图层之上显示快捷窗口225和快捷窗口226,参考图3B和图3C。Therefore, after displaying the successful unlocking interface shown in FIG. 2C , the terminal 100 may first display the second desktop or the negative screen, and display the shortcut window 225 and the shortcut window 225 on the layer where the second desktop or the negative screen is located. Window 226, see Figures 3B and 3C.
在显示图3B(第二桌面)或图3C(负一屏)所示的主界面的前3秒内,终端100也可通过摄像头模组210采集用户的面部图像、识别用户是否注视快捷窗口225或快捷窗口226。当识别到用户注视快捷窗口225时,终端100也可显示图2G所示的支付界面,以供用户获取第一应用程序提供的支付服务。当识别到用户注视快捷窗口226时,终端100也可显示图2I所示的健康码界面,以供用户获取第二应用程序提供的健康码232及其相关信 息,以便于用户可以快速完成健康检查。Within the first 3 seconds of displaying the main interface shown in Figure 3B (second desktop) or Figure 3C (negative screen), the terminal 100 can also collect the user's facial image through the camera module 210 and identify whether the user is looking at the shortcut window 225 Or shortcut window 226. When it is recognized that the user is looking at the shortcut window 225, the terminal 100 may also display the payment interface shown in FIG. 2G for the user to obtain the payment service provided by the first application. When it is recognized that the user is looking at the shortcut window 226, the terminal 100 may also display the health code interface shown in FIG. 2I for the user to obtain the health code 232 and related information provided by the second application. information so that users can quickly complete the health check.
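The dispatch described above — estimating a gaze point from facial images sampled during a short window and mapping it to the shortcut window (if any) it falls in — can be sketched as follows. This is a minimal illustration only: the region coordinates, interface names, and the assumption that each window is an axis-aligned rectangle are all hypothetical, not taken from this application.

```python
# Minimal sketch of gaze-driven shortcut dispatch (all names and numbers are illustrative).
GAZE_RECOGNITION_SECONDS = 3  # the preset gaze recognition time described above

# Each shortcut window occupies a rectangular screen region: (x, y, width, height).
SHORTCUT_REGIONS = {
    "payment_interface": (800, 0, 280, 360),       # e.g. window 225, upper right (assumed)
    "health_code_interface": (0, 2000, 280, 360),  # e.g. window 226, lower left (assumed)
}

def hit_test(gaze_point, region):
    """Return True if the gaze point falls inside the region rectangle."""
    gx, gy = gaze_point
    x, y, w, h = region
    return x <= gx < x + w and y <= gy < y + h

def dispatch_gaze(gaze_point):
    """Map an estimated gaze point to the interface that should be shown, if any."""
    for interface, region in SHORTCUT_REGIONS.items():
        if hit_test(gaze_point, region):
            return interface
    return None  # the user is not gazing at any shortcut window
```

For example, a gaze point of `(850, 100)` falls inside the assumed upper-right region and maps to the payment interface.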
In this way, no matter which page of the main interface the terminal 100 displays after unlocking, the user can access the frequently used interface of a commonly used application and thus quickly obtain the services and information it provides to meet the user's needs.
In some embodiments, the terminal 100 may also use icons of smaller area in place of the above shortcut windows.
Referring to FIG. 3D, the terminal 100 may display an icon 311 and an icon 312. The icon 311 may correspond to the aforementioned shortcut window 225, and the icon 312 may correspond to the aforementioned shortcut window 226. When detecting that the user gazes at the icon 311 or the icon 312, the terminal 100 may display the payment interface that provides the payment service or the health code interface that displays the health code, for the user to use.
The icons 311 and 312 not only serve as prompts but also reduce occlusion of the main interface, improving the user experience.
Of course, the terminal 100 may also display icons of applications installed on the terminal 100, such as the application icon 321 and the application icon 322 shown in FIG. 3E. Generally, these are applications that the user uses frequently. After detecting the user's gaze, the terminal 100 can launch the corresponding application, thereby providing the user with a service of quickly opening that application without requiring any touch operation.
The user may choose to enable or disable the eye gaze recognition function. In a scenario where eye gaze recognition is enabled, after unlocking is completed, the terminal 100 can collect the user's facial image, identify whether the user is gazing at a shortcut window, and then determine whether to display the frequently used interface corresponding to that window, so that the user can quickly and conveniently obtain the information in it. Conversely, in a scenario where eye gaze recognition is disabled, the terminal 100 does not identify whether the user is gazing at a shortcut window, and accordingly does not display the corresponding frequently used interface.
FIG. 4A to FIG. 4D exemplarily show a set of user interfaces for enabling or disabling the eye gaze recognition function.
FIG. 4A exemplarily shows the settings interface on the terminal 100. The settings interface may display multiple setting options, such as an account setting option 411, a WLAN option 412, a Bluetooth option 413, and a mobile network option 414. In this embodiment of the present application, the settings interface further includes an auxiliary function option 415, which may be used to configure some shortcut operations.
The terminal 100 may detect a user operation acting on the auxiliary function option 415. In response to the operation, the terminal 100 may display the user interface shown in FIG. 4B, denoted the auxiliary function settings interface. This interface may display multiple auxiliary function options, such as an accessibility option 421, a one-handed mode option 422, and so on. In this embodiment of the present application, the auxiliary function settings interface further includes a quick launch and gestures option 423, which may be used to configure gesture actions and eye gaze actions for controlling interaction.
The terminal 100 may detect a user operation acting on the quick launch and gestures option 423. In response to the operation, the terminal 100 may display the user interface shown in FIG. 4C, denoted the quick launch and gestures settings interface. This interface may display multiple quick launch and gesture setting options, such as a smart voice option 431, a screenshot option 432, a screen recording option 433, and a quick call option 434. In this embodiment of the present application, the quick launch and gestures settings interface further includes an eye gaze option 435, which may be used to configure the regions for eye gaze recognition and the corresponding shortcut operations.
The terminal 100 may detect a user operation acting on the eye gaze option 435. In response to the operation, the terminal 100 may display the user interface shown in FIG. 4D, denoted the eye gaze recognition settings interface. As shown in FIG. 4D, this interface may display multiple function options based on eye gaze recognition, such as a payment code option 442 and a health code option 443.
The payment code option 442 may be used to enable or disable the function of displaying the payment code under eye gaze control. For example, in a scenario where the payment code option 442 is enabled ("ON"), when unlocking succeeds and the main interface is displayed, the terminal 100 may display the shortcut window 225 associated with the payment interface; at the same time, the terminal 100 may determine, from the collected image frames containing the user's facial image, whether the user is gazing at the shortcut window 225. When detecting that the user gazes at the shortcut window 225 on the screen, the terminal 100 may display the payment interface corresponding to the shortcut window 225, from which the payment code can be obtained. In this way, the user can quickly and conveniently obtain the payment code and complete the payment, avoiding numerous tedious operations and obtaining a better user experience.
The health code option 443 may be used to enable or disable the function of displaying the health code under eye gaze control. For example, in a scenario where the health code option 443 is enabled ("ON"), when unlocking succeeds and the main interface is displayed, the terminal 100 may display the shortcut window 226 associated with the health code interface; at the same time, the terminal 100 may determine, from the collected image frames containing the user's facial image, whether the user is gazing at the shortcut window 226. When detecting that the user gazes at the shortcut window 226, the terminal 100 may display the health code interface containing the health code and its related information. In this way, the user can quickly and conveniently obtain the health code and complete the health check, avoiding numerous tedious operations.
The eye gaze recognition settings interface shown in FIG. 4D may further include other eye-gaze-based shortcut function options, such as a notification bar option 444. When unlocking succeeds and the main interface is displayed, the terminal 100 may detect whether the user is gazing at the notification bar region at the top of the screen. When detecting that the user gazes at the notification bar, the terminal 100 may display the notification interface so that the user can review notification messages.
In some embodiments, the user can customize the display region of a shortcut window according to the user's own usage habits and the layout of the main interface, so as to minimize the impact of the shortcut window on the main interface of the terminal 100.
In some embodiments, the eye gaze recognition settings interface may also be as shown in FIG. 5A. The terminal 100 may detect a user operation acting on the payment code option 442, and in response to the operation, the terminal 100 may display the user interface (the payment code settings interface) shown in FIG. 5B.
As shown in FIG. 5B, this interface may include a button 511 and an area selection control 512. The button 511 may be used to enable ("ON") or disable ("OFF") the function of displaying the payment code under eye gaze control. The area selection control 512 may be used to set the display region of the payment code shortcut window 225 on the screen.
The area selection control 512 may in turn include a control 5121, a control 5122, a control 5123, and a control 5124. By default, when the function of displaying the payment code under eye gaze control is enabled, the payment code shortcut window 225 is displayed in the upper right corner region of the screen, corresponding to the display region indicated by the control 5122. At this time, an icon 5125 (a selected icon) may be displayed in the control 5122, indicating the current display region of the payment code shortcut window 225 on the screen (the upper right corner region).
If a display region is already used to display a shortcut window, an icon 5126 (an occupied icon) may be displayed in the control corresponding to that region. For example, the display region indicated by the control 5123 may correspond to the health code shortcut window 226. Therefore, the occupied icon may be displayed in the control 5123, indicating that the lower left corner region of the screen corresponding to the control 5123 is occupied and can no longer be used for the payment code shortcut window 225.
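The selected/occupied behavior of the area selection control can be illustrated with a small sketch. The corner identifiers and function names below are assumptions introduced for illustration; they mirror the four controls 5121 to 5124 but are not part of this application.

```python
# Sketch of the area-selection logic: a corner already occupied by another
# shortcut window cannot be chosen again (all names are illustrative).
CORNERS = ("top_left", "top_right", "bottom_left", "bottom_right")

def select_region(assignments, window, corner):
    """Assign `window` to `corner` unless another window already occupies it.

    `assignments` maps corner -> window name; returns the updated mapping."""
    if corner not in CORNERS:
        raise ValueError(f"unknown corner: {corner}")
    occupant = assignments.get(corner)
    if occupant is not None and occupant != window:
        # Corresponds to the occupied icon 5126: the choice is rejected.
        raise ValueError(f"{corner} is occupied by {occupant}")
    # Remove the window's previous placement, then record the new one
    # (corresponds to moving the selected icon 5125).
    assignments = {c: w for c, w in assignments.items() if w != window}
    assignments[corner] = window
    return assignments
```

For instance, with the health code window already in the lower left corner, selecting the lower left corner for the payment code window raises an error, while any free corner is accepted.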
Referring to FIG. 5C, the terminal 100 may detect a user operation acting on the control 5121. In response to the operation, the terminal 100 may display the selected icon in the control 5121, indicating the newly selected display region on the screen (the upper left corner region) of the shortcut window 225 associated with the payment code. Then, referring to FIG. 5D, when successful unlocking is detected and the main interface is displayed, the shortcut window 225 corresponding to the payment code may be displayed in the upper left corner region on a layer above the main interface.
Referring to the setting method shown in FIG. 5A to FIG. 5C, the terminal 100 may likewise set the display region of the health code shortcut window 226 according to user operations, which is not described again here.
As shown in FIG. 5E, the eye gaze recognition settings interface may further include a control 445. The control 445 may be used to add more shortcut windows, thereby providing the user with more services for quickly opening commonly used applications and/or frequently used interfaces.
As shown in FIG. 5E, the terminal 100 may detect a user operation acting on the control 445. In response to the operation, the terminal 100 may display the user interface (the add-shortcut-window interface) shown in FIG. 5F. This interface may include multiple shortcut window options, such as an option 521, an option 522, and so on. The option 521 may be used to set a shortcut window 227 associated with health detection records. After recognizing the user's action of gazing at the shortcut window 227, the terminal 100 may display an interface (a third interface) containing the user's health detection records; for the health detection record shortcut window, reference may be made to FIG. 5G. The option 522 may be used to set a shortcut window associated with an electronic identity card. That shortcut window may be associated with the interface that displays the user's electronic identity card, thereby providing a service of quickly opening that interface, which is not described again here.
The terminal 100 may detect a user operation acting on the option 521. In response to the operation, the terminal 100 may display the user interface (the health detection record settings interface) shown in FIG. 5H. As shown in FIG. 5H, a button 531 may be used to enable the shortcut window 227 associated with the health detection records, and page controls (a control 5321, a control 5322, and a control 5323) may be used to set the page on which the shortcut window 227 is displayed.
The page control 5321 may be used to indicate the negative one screen of the main interface, the page control 5322 the first desktop, and the page control 5323 the second desktop. By default, after the shortcut window 227 is enabled, it may be placed on the first desktop (provided that the four display regions of the first desktop are not all occupied). At this time, the page control 5322 may also display a selected mark to indicate that the shortcut window 227 is currently placed on the first desktop. Further, by default, the shortcut window 227 may be placed in the lower right corner region of the first desktop, corresponding to an area selection control 5334.
After the setting is completed, the terminal 100 may detect a user operation acting on a return control 534. In response to the operation, the terminal 100 may display the eye gaze recognition settings interface, with reference to FIG. 5I. At this time, this interface further includes a health monitoring record option 446, corresponding to the function of displaying the health detection records under eye gaze control.
In this way, when unlocking is completed and the main interface is displayed, the terminal 100 may also display, within the preset gaze recognition time, the shortcut window 227 associated with the health monitoring records on a layer above the main interface, with reference to the shortcut window 227 in FIG. 5J. Based on the image frames containing the user's facial image collected within the gaze recognition time, the terminal 100 can identify the user's eye gaze position and determine whether the user is gazing at the shortcut window 227. In response to detecting the user's action of gazing at the shortcut window 227, the terminal 100 may display the third interface, corresponding to the shortcut window 227, on which the health detection records are displayed.
It can be understood that the terminal 100 may also change the display page and display region of the above shortcut window 227 according to user operations, so as to meet the user's personalized display needs, better fit the user's usage habits, and improve the user experience.
Exemplarily, referring to FIG. 5K, the terminal 100 may detect a user operation acting on the page control 5323. At this time, the "display area" corresponds to the four display regions of the second desktop, such as its upper left corner and upper right corner. The terminal 100 may then detect a user operation acting on an area selection control 5333, whereupon the terminal 100 may determine to display the shortcut window 227 in the lower left corner region of the second desktop.
Referring to FIG. 5L, when unlocking is completed and the first desktop is displayed, the terminal 100 may display, within the preset gaze recognition time, the payment code shortcut window 225 and the health code shortcut window 226 on a layer above the first desktop. Referring to FIG. 5M, when unlocking is completed and the second desktop is displayed, the terminal 100 may also display, within the preset gaze recognition time, the shortcut window 227 on a layer above the second desktop.
It can be understood that, when the enabled shortcut windows are placed on different pages of the main interface, the terminal 100 can display, according to the page shown after unlocking, the shortcut windows belonging to that page.
In some embodiments, the terminal 100 may also set the privacy type (private or non-private) of each shortcut window. A non-private shortcut window may additionally be displayed on the to-be-unlocked interface.
The terminal 100 can detect the user's eye gaze position on the to-be-unlocked interface and determine whether the user is gazing at such a non-private shortcut window. When detecting the user's action of gazing at the non-private shortcut window, the terminal 100 may display the frequently used interface corresponding to that window. In this way, the user does not need to complete the unlocking operation, which further saves user operations and allows the user to reach commonly used applications and/or frequently used interfaces even more quickly.
Referring to FIG. 6A, the payment code settings interface may further include a button 611. The button 611 may be used to set the privacy type of the shortcut window 225 associated with the payment code. The button 611 being on ("ON") may indicate that the payment code shortcut window 225 is private; conversely, the button 611 being off ("OFF") may indicate that the shortcut window 225 is non-private. As shown in FIG. 6A, the shortcut window 225 may be set as private.
Referring to the above process, the shortcut window 226 associated with the health code may likewise be set as private or non-private. As shown in FIG. 6B, a button 612 is off, which means the shortcut window 226 may be set as non-private. Referring to FIG. 6C, in the eye gaze recognition settings interface, the option corresponding to a private shortcut window may carry a secure display label 613, prompting the user that this shortcut window is private and will not be displayed on the screen before unlocking.
As shown in FIG. 6D and FIG. 6E, when displaying the to-be-unlocked interface, the terminal 100 may display the non-private health code shortcut window 226 on a layer above the to-be-unlocked interface. While displaying the health code shortcut window 226, the terminal 100 may collect the user's facial image. Referring to FIG. 6F, the terminal 100 may recognize, based on the collected image frames containing the user's facial image, that the user is gazing at the health code shortcut window 226. In response to the user's action of gazing at the health code shortcut window 226, the terminal 100 may display the health code interface, corresponding to the health code shortcut window 226, on which the health code is displayed, with reference to FIG. 6G.
For a private shortcut window, such as the payment code shortcut window 225, the terminal 100 does not display that window on the to-be-unlocked interface, so as to avoid leaking the payment code.
When both private and non-private shortcut windows are configured on the terminal 100, the terminal 100 may turn on the camera on the to-be-unlocked interface to identify the user's eye gaze position. When the user's eye gaze position falls within a non-private shortcut window, the terminal 100 may display the corresponding frequently used interface. When the user's eye gaze position falls within a private shortcut window, the terminal 100 may refrain from displaying the corresponding frequently used interface.
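The gating just described — on the to-be-unlocked interface only non-private windows respond to gaze, while after unlocking all enabled windows do — might be expressed as in the sketch below. The window list and function names are illustrative assumptions, not taken from this application.

```python
# Sketch of privacy gating for gaze-triggered shortcut windows
# (the window list and all names are illustrative).
SHORTCUTS = [
    {"name": "payment_code", "private": True},   # e.g. window 225, set private
    {"name": "health_code", "private": False},   # e.g. window 226, set non-private
]

def visible_shortcuts(unlocked):
    """Windows that may be shown (and respond to gaze) in the current state."""
    if unlocked:
        return [s["name"] for s in SHORTCUTS]  # both types after unlocking
    # Before unlocking, only non-private windows appear on the lock screen.
    return [s["name"] for s in SHORTCUTS if not s["private"]]

def camera_needed(unlocked):
    """On the to-be-unlocked interface, the camera stays off when no
    window is eligible to respond to gaze."""
    return bool(visible_shortcuts(unlocked))
```

With this configuration, only the health code window responds to gaze before unlocking, matching the behavior in FIG. 6D to FIG. 6G.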
When only private shortcut windows are configured on the terminal 100, the terminal 100 may, on the to-be-unlocked interface, refrain from turning on the camera to collect the user's facial image or identify the user's eye gaze position.
Referring to FIG. 6H and FIG. 6I, if the terminal 100 first completes the unlocking operation, the terminal 100 may display the main interface. After displaying the main interface, the terminal 100 may display both the non-private health code shortcut window 226 and the private payment code shortcut window 225. That is to say, the terminal 100 can display private shortcut windows after unlocking, while non-private shortcut windows can be displayed both before and after unlocking, providing the user with a more convenient service of controlling the display of commonly used applications and/or frequently used interfaces.
In some embodiments, the terminal 100 may also set a display count for each shortcut window. After the display count is exceeded, the terminal 100 may stop displaying the shortcut window, while still identifying the user's eye gaze position and providing the service of quickly displaying applications and/or frequently used interfaces.
Specifically, referring to FIG. 7A, the health code settings interface may further include a control 711. The control 711 may be used to set the display count of the shortcut window. For example, the "100 times" displayed in the control 711 may mean that, for the first 100 activations of the function of displaying the health code under eye gaze control, the terminal 100 displays the shortcut window 226 corresponding to the health code as a prompt to the user.
As shown in FIG. 7B, after those 100 times, the terminal 100 may no longer display the shortcut window 226 corresponding to the health code (the dashed box in FIG. 7B indicates the region in which the user's eye gaze position falls; this dashed box is not actually displayed on the screen).
Within the gaze recognition time, although the terminal 100 no longer displays the shortcut window 226 corresponding to the health code, the terminal 100 can still collect the user's facial image. If the user's action of gazing at the lower left corner region of the first desktop is detected, the terminal 100 can still display the corresponding health code interface on which the health code is displayed, with reference to FIG. 7C, so that the user can use this health code interface to complete the health code check.
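The display-count behavior — draw the prompt window only for the first N activations while keeping the gaze region active indefinitely — can be sketched as follows. The class and attribute names are assumptions for illustration; only the "100 times" default comes from the description of control 711.

```python
# Sketch of the display-count behavior: the prompt window is drawn only for
# the first `limit` activations, but the gaze region keeps working afterwards.
class ShortcutWindow:
    def __init__(self, name, limit=100):  # "100 times" as shown in control 711
        self.name = name
        self.limit = limit
        self.shown = 0

    def activate(self):
        """Called each time the gaze feature is activated for this window.

        Returns whether the prompt window should be drawn this time; the
        gaze region responds to the user's gaze regardless of this value."""
        self.shown += 1
        return self.shown <= self.limit
```

After the limit is reached, `activate()` returns `False`, so the window is no longer drawn, yet gazing at its region still opens the associated interface.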
In this way, after using the eye gaze function for a long time, the user comes to know which region of which main interface page corresponds to which frequently used interface, without the terminal 100 having to display the corresponding shortcut window in that region. At that point, the terminal 100 may omit the shortcut window, thereby reducing its occlusion of the main interface and improving the user experience.
FIG. 8 exemplarily shows a flowchart of a display method provided by an embodiment of the present application. The following describes in detail, with reference to FIG. 8 and the user interfaces introduced above, the process by which the terminal 100 implements the above display method.
S101. The terminal 100 detects that a trigger condition for enabling eye gaze recognition is met.
Keeping the camera on for long periods to collect the user's facial image and identify the user's eye gaze position heavily occupies the resources of the terminal 100 (camera device resources and computing resources) and greatly increases its power consumption. At the same time, for privacy and security reasons, the camera of the terminal 100 is not kept on at all times.
Therefore, the terminal 100 may be preset with certain scenarios in which eye gaze recognition is enabled. Only when detecting that the terminal 100 is in such a scenario does the terminal 100 turn on the camera to collect the user's facial image. When the scenario ends, the terminal 100 can turn off the camera and stop collecting the user's facial image, so as to avoid occupying camera resources, save power, and protect the user's privacy.
Developers can determine, through prior analysis of user habits, the scenarios in which eye gaze recognition needs to be enabled. Generally, these are scenarios in which the user picks up the mobile phone or has just unlocked it. At such moments, the terminal 100 can provide the user with a service of quickly launching a certain application (a commonly used application), saving user operations and improving the user experience. Further, the terminal 100 can provide the user with an eye-gaze-based control method for launching that application, avoiding the inconvenience of performing touch operations when the user's hands are occupied, and further improving the user experience.
Therefore, the above scenarios include but are not limited to: the scenario of lighting up the mobile phone screen and displaying the to-be-unlocked interface, and the scenario of displaying the main interface (including pages such as the first desktop, the second desktop, and the negative one screen) after unlocking.
Corresponding to the above scenarios for enabling eye gaze recognition, the trigger conditions for enabling eye gaze recognition include: detecting a user operation that wakes the mobile phone, and detecting a user operation that completes unlocking and displays the main interface. The user operations that wake the mobile phone include but are not limited to the user picking up the mobile phone, the user waking the mobile phone through a voice assistant, and so on.
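The scene-gated camera control described above — turn the camera on only for preset trigger events and off when the scenario ends — can be sketched as a tiny state transition. The event names below are illustrative assumptions; the application does not enumerate them this way.

```python
# Sketch of scene-gated camera control for eye gaze recognition
# (event names are illustrative).
TRIGGERS = {"wake_device", "unlock_show_main_interface"}
SCENE_END = {"screen_off", "gaze_window_elapsed"}

def step(camera_on, event):
    """Advance the camera state for one event. The camera runs only inside
    a preset gaze-recognition scenario, saving power and protecting privacy."""
    if event in TRIGGERS:
        return True     # start collecting facial images
    if event in SCENE_END:
        return False    # stop collecting when the scenario ends
    return camera_on    # other events leave the state unchanged
```

For example, waking the device turns the camera on, and the camera turns back off once the screen goes dark or the preset gaze recognition time elapses.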
Referring to the user interfaces shown in FIG. 2C to FIG. 2E, after detecting that the unlocking operation is completed, the terminal 100 may display the main interface shown in FIG. 2D, and at the same time display, on a layer above that main interface, the shortcut window 225 and the shortcut window 226 associated with commonly used applications or with frequently used interfaces within those applications. The operation instructing the terminal 100 to display the user interfaces shown in FIG. 2C to FIG. 2E may be referred to as detecting a user operation that completes unlocking and displays the main interface. At this time, the terminal 100 may turn on the camera to collect the user's facial image, identify the user's eye gaze position, and then determine whether the user is gazing at one of the above shortcut windows.
Referring to the user interfaces shown in FIG. 6D and FIG. 6E, when detecting that the terminal 100 has been woken but not yet unlocked, the terminal 100 may also display, on a layer above the to-be-unlocked interface, the shortcut windows associated with commonly used applications or with frequently used interfaces within those applications. At this time, the terminal 100 may likewise turn on the camera to collect the user's facial image, identify the user's eye gaze position, and then determine whether the user is gazing at one of the above shortcut windows.
S102. The terminal 100 turns on the camera module 210 to capture the user's facial image.
After detecting a user operation that wakes up the phone, or a user operation that completes unlocking and displays the main interface, the terminal 100 may determine to enable the eye-gaze recognition function.
On the one hand, the terminal 100 may display the shortcut windows to prompt the user that the frequently used applications and interfaces associated with those windows can be opened by gazing at them. On the other hand, the terminal 100 may turn on the camera module 210 and capture the user's facial image, so as to identify whether the user is gazing at a shortcut window and, if so, which one.
Referring to the description of Figure 2B, the camera module 210 of the terminal 100 includes at least one 2D camera and one 3D camera. The 2D camera is used to capture and generate two-dimensional images; the 3D camera is used to capture and generate three-dimensional images containing depth information. In this way, the terminal 100 can simultaneously obtain a two-dimensional image and a three-dimensional image of the user's face. By combining the two, the terminal 100 can extract richer facial features, in particular eye features, and thus identify the user's eye-gaze position more accurately and more reliably determine whether the user is gazing at a shortcut window and which one.
As described in S101, the terminal 100 does not keep the camera on permanently. Therefore, after turning on the camera module 210, the terminal 100 sets a time at which the camera module 210 is to be turned off.
The terminal 100 may set a gaze recognition time. The gaze recognition time starts at the moment the terminal 100 detects the trigger condition described in S101; the moment it ends depends on its duration. This duration is preset, for example 2.5 seconds, 3 seconds, 3.5 seconds, or 4 seconds as described with reference to Figure 2F, with 3 seconds being the preferred duration. When the gaze recognition time expires, the terminal 100 may turn off the camera module 210, that is, stop identifying the user's eye-gaze position.
Correspondingly, after the gaze recognition time expires, the terminal 100 may stop displaying the shortcut windows, so as to avoid occluding the main interface for a long time and degrading the user experience.
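The timing behavior described above can be illustrated with a minimal sketch; the class and method names, and the use of the 3-second preferred duration, are assumptions for illustration and not part of this application:

```python
GAZE_RECOGNITION_DURATION_S = 3.0  # preferred duration; 2.5 s, 3.5 s, 4 s are also possible


class GazeRecognitionWindow:
    """Tracks the gaze recognition time that starts when a trigger
    condition (wake-up, or unlock completed) is detected."""

    def __init__(self, duration_s=GAZE_RECOGNITION_DURATION_S):
        self.duration_s = duration_s
        self.start_time = None

    def on_trigger(self, now):
        # Trigger condition detected: start timing; conceptually the
        # camera module is turned on and the shortcut windows are shown.
        self.start_time = now

    def is_active(self, now):
        # While active, facial images are captured and gaze is recognized;
        # once expired, the camera is turned off and the windows hidden.
        return self.start_time is not None and (now - self.start_time) < self.duration_s


w = GazeRecognitionWindow()
w.on_trigger(now=0.0)
print(w.is_active(now=1.0))  # True: still within the gaze recognition time
print(w.is_active(now=3.5))  # False: timer expired, camera module off
```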
S103. The terminal 100 determines the user's eye-gaze position based on the captured image frames containing the user's facial image.
The image frames captured and generated by the camera module 210 during the gaze recognition time may be called target input images. The terminal 100 can identify the user's eye-gaze position from these target input images. As introduced with reference to Figure 1, the position on the screen of the terminal 100 at which the user's line of sight focuses when gazing at the terminal may be called the eye-gaze position.
Specifically, after obtaining a target input image, the terminal 100 may feed it into an eye-gaze recognition model. The eye-gaze recognition model is a model preset in the terminal 100. Using an image frame containing the user's facial image, the model can determine the user's eye-gaze position, for example the cursor point S shown in Figure 1, and output the coordinates of that position on the screen. The structure of the eye-gaze recognition model used in this application is described in detail later with reference to Figure 9 and is not expanded upon here.
After obtaining the coordinates of the eye-gaze position, the terminal 100 can determine from them whether the user is gazing at a shortcut window and which one, and thereby decide whether to open the frequently used application or interface associated with that window.
Optionally, the eye-gaze recognition model may also output the user's eye-gaze area. An eye-gaze area can contract to an eye-gaze position, and an eye-gaze position can likewise expand to an eye-gaze area. In some examples, a cursor point formed by a single display unit on the screen may be called an eye-gaze position; correspondingly, a cursor point or cursor region formed by multiple display units may be called an eye-gaze area.
After an eye-gaze area is output, the terminal 100 can determine, from the position of that area on the screen, whether the user is gazing at a shortcut window and which one, and thereby decide whether to open the frequently used application or interface associated with that window.
S104. Based on the coordinates of the eye-gaze position and the current interface, the terminal 100 determines whether the user is gazing at a shortcut window, and thereby whether to display the frequently used application or interface associated with that window.
After determining the coordinates of the user's eye-gaze position, and in combination with its current interface, the terminal 100 can determine whether the user is gazing at a shortcut window on the current interface.
Referring to Figure 2F, when the terminal 100 displays the interface shown in Figure 2F, that interface may be called the current interface of the terminal 100. Based on the captured image frames containing the user's facial image, the terminal 100 can determine the coordinates of the user's eye-gaze position, and from those coordinates determine the area or control to which the eye-gaze position corresponds.
When the eye-gaze position falls within the shortcut window 225, the terminal 100 can determine that the user is gazing at the shortcut window 225; when it falls within the shortcut window 226, the terminal 100 can determine that the user is gazing at the shortcut window 226. In some embodiments, the terminal 100 may also determine that the user's eye-gaze position corresponds to an application icon in the frequently-used-application icon tray 223 or the other-application icon tray 224, for example the "Gallery" application. The eye-gaze position may also fall in a blank area of the screen, corresponding neither to an icon or control on the main interface nor to a shortcut window described in this application.
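The determination described above reduces to checking whether the gaze coordinates fall inside the screen rectangle of a control. A minimal sketch follows; the window rectangles and screen size are made-up values for illustration, not taken from this application:

```python
def hit_test(gaze_x, gaze_y, controls):
    """Return the name of the control whose rectangle contains the gaze
    position, or None if the gaze falls in a blank area.
    Each rectangle is (left, top, width, height) in screen pixels."""
    for name, (left, top, width, height) in controls.items():
        if left <= gaze_x < left + width and top <= gaze_y < top + height:
            return name
    return None  # blank area: take no action


# Hypothetical layout: two shortcut windows on a 1080x2400 screen.
controls = {
    "shortcut_window_225": (60, 300, 400, 240),
    "shortcut_window_226": (620, 300, 400, 240),
}
print(hit_test(200, 400, controls))   # shortcut_window_225
print(hit_test(800, 400, controls))   # shortcut_window_226
print(hit_test(540, 1800, controls))  # None -> blank area, no action
```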
Referring to Figures 2F-2G, when it is determined that the user is gazing at the shortcut window 225, the terminal 100 may display the payment interface corresponding to the shortcut window 225; the payment interface is a frequently used interface designated by the user. Referring to Figures 2H-2I, when it is determined that the user is gazing at the shortcut window 226, the terminal 100 may display the health code interface, corresponding to the shortcut window 226, for presenting the health code; the health code interface is likewise a frequently used interface designated by the user.
In some embodiments, when it is determined that the user is gazing at an application icon in the frequently-used-application icon tray 223 or the other-application icon tray 224, the terminal 100 may open the application corresponding to that icon. For example, referring to Figure 2F, when it is determined that the user is gazing at the "Gallery" application icon, the terminal 100 may display the home page of "Gallery".
Referring to Figure 3E, the terminal 100 may also display icons of frequently used applications (application icon 321 and application icon 322). When it is determined that the user is gazing at application icon 321 or application icon 322, the terminal 100 may open the frequently used application corresponding to that icon, for example by displaying its home page.
When the eye-gaze position falls in a blank area of the screen, the terminal 100 may take no action until the gaze recognition time ends and the eye-gaze recognition function is turned off.
In some embodiments, referring to Figures 7A-7C, the terminal 100 may identify the user's eye-gaze position without displaying any shortcut window or icon. The terminal 100 can still determine, from the coordinates of the eye-gaze position, the specific area to which that position belongs. These specific areas are preset, for example the upper-left, upper-right, lower-left, and lower-right corner areas shown in Figure 7A. Then, based on the frequently used application and interface associated with that specific area, the terminal 100 can determine which application to open and which interface to display.
For example, referring to Figure 7B, the terminal 100 may recognize that the user's eye-gaze position is within the lower-left corner area of the screen, and accordingly display the health code interface, associated with that lower-left corner area, for presenting the health code; see Figure 7C.
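The corner-area variant can be sketched as a simple mapping from gaze coordinates to a preset screen region; the screen size, the midline split, and the region-to-interface associations below are illustrative assumptions:

```python
def gaze_region(x, y, screen_w=1080, screen_h=2400):
    """Map a gaze position to one of four preset corner areas by
    splitting the screen at its vertical and horizontal midlines."""
    horizontal = "left" if x < screen_w / 2 else "right"
    vertical = "upper" if y < screen_h / 2 else "lower"
    return f"{vertical}_{horizontal}"


# Hypothetical association between preset areas and frequently used interfaces.
REGION_TO_INTERFACE = {
    "upper_left": "payment_interface",
    "upper_right": "ride_code_interface",
    "lower_left": "health_code_interface",  # as in the Figure 7B/7C example
    "lower_right": "scan_interface",
}

region = gaze_region(200, 2000)
print(region)                       # lower_left
print(REGION_TO_INTERFACE[region])  # health_code_interface
```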
Figure 9 shows an example structure of the eye-gaze recognition model. The eye-gaze recognition model used in the embodiments of this application is described in detail below with reference to Figure 9. In the embodiments of this application, the eye-gaze recognition model is built on a convolutional neural network (CNN).
As shown in Figure 9, the eye-gaze recognition model may include a face correction module, a dimensionality reduction module, and a convolutional network module.
(1) Face correction module.
The image frames containing the user's facial image captured by the camera module 210 are first input into the face correction module. The face correction module identifies whether the facial image in an input frame is upright. For a frame whose facial image is not upright (for example, the head is tilted), the face correction module corrects the frame to make it upright, so that the tilt does not degrade subsequent eye-gaze recognition.
Figure 10 shows the processing flow in which the face correction module performs face correction on an image frame captured by the camera module 210.
S201: Determine the facial key points in image frame T1 using a facial key point recognition algorithm.
In the embodiments of this application, the facial key points include the left eye, the right eye, the nose, the left lip corner, and the right lip corner. Existing facial key point recognition algorithms, such as Kinect-based ones, may be used and are not described further here.
Referring to Figure 11A, which shows an example image frame containing the user's facial image, denoted image frame T1: the face correction module can use the facial key point recognition algorithm to determine the facial key points in T1 (left eye a, right eye b, nose c, left lip corner d, right lip corner e) and the coordinates of each key point; see image frame T1 in Figure 11B.
S202: Use the facial key points to determine the calibration line of image frame T1, and from it the face deflection angle θ of T1.
In an upright facial image, the left and right eyes lie on the same horizontal line, so the straight line connecting the left-eye and right-eye key points (the calibration line) is parallel to the horizontal. That is, the face deflection angle θ (the angle between the calibration line and the horizontal) is 0.
As shown in Figure 11B, the face correction module can use the recognized coordinates of left eye a and right eye b to determine the calibration line L1, and then, from L1 and the horizontal, determine the face deflection angle θ of the facial image in frame T1.
S203: If θ = 0°, the facial image in frame T1 is upright and no correction is needed.
S204: If θ ≠ 0°, the facial image in frame T1 is not upright; rotation correction is then applied to frame T1 to obtain an image frame in which the facial image is upright.
In Figure 11B, θ ≠ 0, i.e. the facial image in frame T1 is not upright. The face correction module therefore corrects frame T1 so that the face in it becomes upright.
Specifically, the face correction module first determines the rotation center point y from the coordinates of left eye a and right eye b, and then rotates frame T1 by θ° about point y to obtain an image frame with an upright facial image, denoted image frame T2. As shown in Figure 11B, point A represents the position of left eye a after rotation, point B the position of right eye b after rotation, point C the position of nose c after rotation, point D the position of left lip corner d after rotation, and point E the position of right lip corner e after rotation.
It should be understood that when frame T1 is rotated, every pixel in the frame is rotated. Points A, B, C, D, and E above merely illustrate the rotation using the key points; the rotation is not applied to the facial key points alone.
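Steps S202-S204 can be sketched as follows: compute θ from the two eye key points, then rotate each point about the rotation center to undo the tilt. Taking the midpoint of the eyes as the rotation center y is an assumption for illustration, since the application only states that y is derived from the eye coordinates:

```python
import math


def face_deflection_angle(left_eye, right_eye):
    """Angle between the calibration line (left eye -> right eye)
    and the horizontal, in degrees."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))


def rotate_point(p, center, angle_deg):
    """Rotate point p by -angle_deg about center (undoing the tilt)."""
    a = math.radians(-angle_deg)
    x, y = p[0] - center[0], p[1] - center[1]
    return (center[0] + x * math.cos(a) - y * math.sin(a),
            center[1] + x * math.sin(a) + y * math.cos(a))


# Tilted face: the right eye sits 20 px lower than the left eye.
left_eye, right_eye = (100.0, 100.0), (200.0, 120.0)
theta = face_deflection_angle(left_eye, right_eye)  # about 11.3 degrees, so not upright
center = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)

A = rotate_point(left_eye, center, theta)
B = rotate_point(right_eye, center, theta)
# After correction the calibration line A-B is horizontal: theta becomes 0.
print(round(abs(face_deflection_angle(A, B)), 6))  # 0.0
```

In the full procedure this same rotation is applied to every pixel of frame T1, not only to the key points.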
S205: Process the corrected image frame with the upright facial image to obtain a left-eye image, a right-eye image, a face image, and face grid data. The face grid data reflects the position of the face image within the whole image.
Specifically, the face correction module can crop the corrected image frame around the facial key points at preset sizes, thereby obtaining the corresponding left-eye image, right-eye image, and face image. While determining the face image, the face correction module can also determine the face grid data.
Referring to Figure 11C, the face correction module can determine a rectangle of fixed size centered on left eye A; the image covered by this rectangle is the left-eye image. In the same way, it can determine the right-eye image centered on right eye B, and the face image centered on nose C. The left-eye and right-eye images have the same size, while the face image has a different size from the eye images. After determining the face image, the face correction module obtains the corresponding face grid data, i.e. the position of the face image within the whole image.
After face correction is completed, the terminal 100 obtains a corrected image frame with an upright facial image, and from it the corresponding left-eye image, right-eye image, face image, and face grid data.
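Step S205 can be sketched as fixed-size center crops around the key points plus a face-grid descriptor. The crop sizes, the image size, and the binary-grid representation of the face grid data are assumptions for illustration, since the application only states that the crops have preset sizes and that the grid reflects the face position:

```python
def center_crop_box(center, size):
    """Axis-aligned crop box of the given (w, h) size centered on a key point,
    returned as (left, top, width, height)."""
    cx, cy = center
    w, h = size
    return (cx - w // 2, cy - h // 2, w, h)


def face_grid(face_box, image_size, grid=25):
    """Coarse binary grid marking which cells the face box covers,
    reflecting where the face lies within the whole image."""
    img_w, img_h = image_size
    left, top, w, h = face_box
    mask = [[0] * grid for _ in range(grid)]
    for gy in range(grid):
        for gx in range(grid):
            # Cell center in image coordinates.
            x = (gx + 0.5) * img_w / grid
            y = (gy + 0.5) * img_h / grid
            if left <= x < left + w and top <= y < top + h:
                mask[gy][gx] = 1
    return mask


EYE_SIZE, FACE_SIZE = (60, 36), (160, 160)  # assumed preset crop sizes
left_eye, right_eye, nose = (99, 110), (201, 110), (150, 150)

left_eye_box = center_crop_box(left_eye, EYE_SIZE)   # same size as the right-eye box
right_eye_box = center_crop_box(right_eye, EYE_SIZE)
face_box = center_crop_box(nose, FACE_SIZE)          # different size from the eye boxes
grid = face_grid(face_box, image_size=(400, 400))
print(face_box)             # (70, 70, 160, 160)
print(sum(map(sum, grid)))  # number of grid cells covered by the face
```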
(2) Dimensionality reduction module.
The face correction module feeds the left-eye image, right-eye image, face image, and face grid data it outputs into the dimensionality reduction module. The dimensionality reduction module reduces the dimensionality of these inputs so as to lower the computational complexity of the convolutional network module and speed up eye-gaze recognition. The dimensionality reduction methods it may use include, but are not limited to, principal component analysis (PCA), downsampling, 1×1 convolution kernels, and so on.
(3) Convolutional network module.
The dimensionality-reduced inputs (left-eye image, right-eye image, face image, and face grid data) are fed into the convolutional network module, which outputs the eye-gaze position based on these inputs. In the embodiments of this application, the structure of the convolutional network in this module is shown in Figure 12.
As shown in Figure 12, the convolutional network may include convolution group 1 (CONV1), convolution group 2 (CONV2), and convolution group 3 (CONV3). Each convolution group includes a convolution kernel, a PReLU activation function, a pooling kernel, and a local response normalization (LRN) layer. The convolution kernel of CONV1 is a 7×7 matrix and its pooling kernel a 3×3 matrix; the convolution kernel of CONV2 is a 5×5 matrix and its pooling kernel a 3×3 matrix; the convolution kernel of CONV3 is a 3×3 matrix and its pooling kernel a 2×2 matrix.
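The kernel sizes above determine how the spatial resolution shrinks through the three groups. The sketch below propagates an input size through each stage, assuming stride-1 convolution without padding and non-overlapping pooling (stride equal to the pooling size); the strides and the 64×64 input size are assumptions, since the application only specifies the kernel sizes. PReLU and LRN do not change the spatial size:

```python
CONV_GROUPS = [
    {"conv": 7, "pool": 3},  # CONV1: 7x7 convolution kernel, 3x3 pooling kernel
    {"conv": 5, "pool": 3},  # CONV2: 5x5 convolution kernel, 3x3 pooling kernel
    {"conv": 3, "pool": 2},  # CONV3: 3x3 convolution kernel, 2x2 pooling kernel
]


def output_sizes(size, groups=CONV_GROUPS):
    """Spatial size after each convolution group."""
    sizes = []
    for g in groups:
        size = size - g["conv"] + 1                 # convolution: stride 1, no padding
        size = (size - g["pool"]) // g["pool"] + 1  # pooling: stride = kernel size
        sizes.append(size)
    return sizes


print(output_sizes(64))  # [19, 5, 1] for an assumed 64x64 input crop
```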
Separable convolution can reduce the storage required for the convolution and pooling kernels, thereby reducing the overall model's demand for storage space and making it possible to deploy the model on terminal devices.
Specifically, separable convolution here means decomposing an n×n matrix into an n×1 column matrix and a 1×n row matrix for storage, which reduces the storage space required. The eye-gaze model used in this application is therefore small and easy to deploy, making it well suited to terminal electronic devices such as mobile phones.
Specifically, referring to Figure 13, matrix A may represent a 3×3 convolution kernel. Storing matrix A directly requires 9 storage units. Matrix A can instead be split into a column matrix A1 and a row matrix A2 (column matrix A1 × row matrix A2 = matrix A), which together require only 6 storage units.
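The Figure 13 decomposition can be checked numerically: the 3×3 kernel is recovered as the outer product of its column and row factors, and storage drops from n² to 2n units. The kernel values below are made up for illustration:

```python
def outer_product(col, row):
    """Reconstruct matrix A from column matrix A1 (n x 1) and
    row matrix A2 (1 x n): A = A1 x A2."""
    return [[c * r for r in row] for c in col]


A1 = [1, 2, 3]  # column matrix A1: 3 storage units
A2 = [4, 5, 6]  # row matrix A2:    3 storage units
A = outer_product(A1, A2)

print(A)  # [[4, 5, 6], [8, 10, 12], [12, 15, 18]]
storage_full = len(A) * len(A[0])      # 9 units for the full 3x3 kernel
storage_separable = len(A1) + len(A2)  # 6 units for the two factors
print(storage_full, storage_separable)  # 9 6
```

Note that only rank-1 kernels factor exactly in this way; in practice, separable convolutions are trained or approximated so that the kernel admits such a factorization.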
After being processed by CONV1, CONV2, and CONV3, the different inputs are fed into different connection layers for full connection. As shown in Figure 12, the convolutional network may include connection layer 1 (FC1), connection layer 2 (FC2), and connection layer 3 (FC3).
After passing through CONV1, CONV2, and CONV3, the left-eye and right-eye images are input into FC1. FC1 may include a combination module (concat), convolution kernel 1201, PReLU, and fully connected module 1202; concat is used to combine the left-eye and right-eye images. After passing through CONV1, CONV2, and CONV3, the face image is input into FC2, which may include convolution kernel 1203, PReLU, fully connected module 1204, and fully connected module 1205; FC2 applies two full connections to the face image. After passing through CONV1, CONV2, and CONV3, the face grid data is input into FC3, which includes one fully connected module.
Connection layers with different structures are built for the different input types (left-eye, right-eye, and face images), which better captures the features of each type of input and thereby improves the accuracy of the model, allowing the terminal 100 to identify the user's eye-gaze position more accurately.
Finally, fully connected module 1206 applies one more full connection to the left-eye and right-eye images, the face image, and the face grid data, and outputs the coordinates of the eye-gaze position. The eye-gaze position indicates the horizontal and vertical coordinates at which the user's line of sight focuses on the screen; see the cursor point S shown in Figure 1. When the eye-gaze position falls within the area of a control (an icon, a window, or the like), the terminal 100 can determine that the user is gazing at that control.
In addition, the convolutional neural network of the eye-gaze model used in this application has relatively few parameters, so the time needed to compute and predict the user's eye-gaze position is small. The terminal 100 can therefore quickly determine the user's eye-gaze position, and in turn quickly determine whether the user is using eye-gaze control to open a frequently used application or interface.
In the embodiments of this application:
the first preset area and the second preset area may be any two different areas among the upper-left, upper-right, lower-left, and lower-right corner areas of the screen;
referring to Figure 3A, the first interface and the fourth interface may be any two different interfaces among main interfaces such as the first desktop (page 31), the second desktop (page 32), and the minus-one screen (page 30);
the second, third, fifth, and sixth interfaces may each be any one of the following: the payment interface shown in Figure 2G, the health code interface shown in Figure 2I, the health detection record interface indicated in Figure 5G, and other interfaces frequently used by the user, such as a ride code interface;
taking the payment interface set as private as an example: before the payment interface is displayed, the shortcut window 225 displayed by the electronic device on the first desktop (see Figure 2D) may be called the first control; the icon 331 (see Figure 3D) may also be called the first control; alternatively, a shortcut of the application providing the payment interface may also be called the first control.
Figure 14 is a schematic diagram of the system structure of the terminal 100 according to an embodiment of this application.
The layered architecture divides the system into several layers, each with a clear role and division of labor; the layers communicate through software interfaces. In some embodiments, the system is divided into five layers, from top to bottom: the application layer, the application framework layer (framework layer), the hardware abstraction layer, the driver layer, and the hardware layer.
The application layer may include multiple applications, such as a dialer application and a gallery application. In the embodiments of this application, the application layer also includes an eye-gaze SDK (software development kit). The system of the terminal 100, and third-party applications installed on the terminal 100, can identify the user's eye-gaze position by calling the eye-gaze SDK.
The framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer, and includes some predefined functions. In the embodiments of this application, the framework layer may include a camera service interface and an eye-gaze service interface. The camera service interface provides the API and programming framework for using the camera; the eye-gaze service interface provides the API and programming framework for using the eye-gaze recognition model.
The hardware abstraction layer is an interface layer between the framework layer and the driver layer, and provides a virtual hardware platform for the operating system. In the embodiments of this application, the hardware abstraction layer may include a camera hardware abstraction layer and an eye-gaze process. The camera hardware abstraction layer can provide virtual hardware for camera device 1 (the RGB camera), camera device 2 (the TOF camera), or further camera devices. The computation that identifies the user's eye-gaze position through the eye-gaze recognition module is performed in the eye-gaze process.
The driver layer lies between the hardware and the software and includes the drivers of the various hardware components. The driver layer may include a camera device driver, which drives the camera's sensor to capture images and drives the image signal processor to preprocess them.
硬件层包括传感器和安全数据缓冲区。其中,传感器包括RGB摄像头(即2D摄像头)、TOF摄像头(即3D摄像头)。传感器中包括的摄像头与相机硬件抽象层中包括的虚拟的相机设备一一对应。RGB摄像头可采集并生成2D图像。TOF摄像头即深感摄像头,可采集并生成带有深度信息的3D图像。The hardware layer includes sensors and secure data buffers. Among them, the sensors include RGB camera (ie 2D camera) and TOF camera (ie 3D camera). The camera included in the sensor corresponds to the virtual camera device included in the camera hardware abstraction layer one-to-one. RGB cameras capture and generate 2D images. TOF camera is a depth-sensing camera that can collect and generate 3D images with depth information.
Data collected by the cameras is stored in the secure data buffer. Any upper-layer process or application that needs the image data collected by the cameras must obtain it from the secure data buffer and cannot obtain it in any other way. The secure data buffer therefore prevents the image data collected by the cameras from being misused, which is why it is called a secure data buffer.
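The gatekeeping role of the secure data buffer can be sketched as follows. This is a minimal illustration, not the terminal's actual interface: the class and method names are hypothetical, and the point is simply that `get_frame()` is the sole read path offered to upper layers.

```python
# Minimal sketch (illustrative only): a secure data buffer that is the sole
# hand-off point for camera frames. Upper layers read only through
# get_frame(); no other accessor exists, so frames cannot be pulled from
# the camera path directly.

from collections import deque

class SecureDataBuffer:
    def __init__(self, capacity=4):
        self._frames = deque(maxlen=capacity)  # oldest frames are dropped

    def push(self, frame):
        """Called only by the camera driver path."""
        self._frames.append(frame)

    def get_frame(self):
        """The only read path offered to upper-layer consumers."""
        if not self._frames:
            return None
        return self._frames.popleft()

buf = SecureDataBuffer(capacity=2)
buf.push({"type": "2D", "data": b"\x00"})  # RGB frame
buf.push({"type": "3D", "data": b"\x01"})  # TOF frame
frame = buf.get_frame()  # oldest frame first
```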
The software layers introduced above, and the modules and interfaces included in each layer, run in a runnable execution environment (REE). The terminal 100 also includes a trusted execution environment (TEE). Data communication in the TEE is more secure than in the REE.
The TEE may include an eye-gaze recognition algorithm module, a trusted application (TA) module, and a security service module. The eye-gaze recognition algorithm module stores the executable code of the eye-gaze recognition model. The TA can be used to securely send the recognition result output by the model to the eye-gaze process. The security service module can be used to securely feed the image data stored in the secure data buffer into the eye-gaze recognition algorithm module.
The interaction method based on eye-gaze recognition in the embodiments of this application is described in detail below with reference to the foregoing hardware structure and system structure:
The terminal 100 detects that a trigger condition for enabling eye-gaze recognition is met. The terminal 100 can then determine to perform the eye-gaze recognition operation.
First, the terminal 100 can call the eye-gaze service through the eye-gaze SDK.
In one aspect, the eye-gaze service can call the camera service of the framework layer to collect the user's facial images. The camera service sends instructions to start the RGB camera and the TOF camera by calling camera device 1 (RGB camera) and camera device 2 (TOF camera) in the camera hardware abstraction layer. The camera hardware abstraction layer forwards these instructions to the camera device driver in the driver layer, which starts the cameras accordingly: the instruction sent through camera device 1 starts the RGB camera, and the instruction sent through camera device 2 starts the TOF camera. Once started, the RGB camera and the TOF camera collect light signals, and the image signal processor generates two-dimensional and three-dimensional images from the resulting electrical signals.
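The start-up chain above (camera service → camera hardware abstraction layer device → camera device driver → sensor) can be sketched schematically. All class and method names below are hypothetical stand-ins, not the real HAL or driver API:

```python
# Illustrative sketch of the layered start-up chain: the service opens two
# virtual camera devices in the HAL, and each open is translated into a
# driver call that starts the corresponding physical sensor.

class CameraDeviceDriver:
    def __init__(self):
        self.started = []

    def start_sensor(self, sensor_name):
        # In the real system this would power on the sensor and ISP path.
        self.started.append(sensor_name)

class CameraHal:
    """One virtual camera device per physical camera (one-to-one mapping)."""
    def __init__(self, driver):
        self._driver = driver
        self._devices = {1: "RGB", 2: "TOF"}  # device id -> physical sensor

    def open_device(self, device_id):
        self._driver.start_sensor(self._devices[device_id])

class CameraService:
    def __init__(self, hal):
        self._hal = hal

    def start_gaze_capture(self):
        # The eye-gaze service needs both the 2D and the 3D stream.
        self._hal.open_device(1)  # camera device 1: RGB camera
        self._hal.open_device(2)  # camera device 2: TOF camera

driver = CameraDeviceDriver()
CameraService(CameraHal(driver)).start_gaze_capture()
```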
In another aspect, the eye-gaze service creates the eye-gaze process and initializes the eye-gaze recognition model.
The images (two-dimensional and three-dimensional) generated by the image signal processor can be stored in the secure data buffer. After the eye-gaze process has been created and initialized, the image data stored in the secure data buffer can be delivered to the eye-gaze recognition algorithm through the secure transmission channel (into the TEE) provided by the security service. After receiving the image data, the eye-gaze recognition algorithm inputs it into the CNN-based eye-gaze recognition model to determine the user's eye-gaze position. The TA then securely passes the eye-gaze position back to the eye-gaze process, from which it is returned through the camera service and the eye-gaze service to the eye-gaze SDK in the application layer.
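The round trip described above can be condensed into a toy simulation: the security service is the only reader of the secure buffer, the TEE-side model produces a gaze position, and the TA posts the result back to the REE-side gaze process. Everything here is a hypothetical stub; in particular the "model" merely stands in for the CNN:

```python
# Purely illustrative round trip:
# secure buffer -> security service -> gaze model (TEE) -> TA -> gaze process.

def security_service_fetch(secure_buffer):
    """Simulates the secure channel: only the security service reads frames."""
    return secure_buffer.pop(0)

def gaze_model(image):
    # Stand-in for the CNN-based recognizer; returns a screen coordinate.
    return image["face_center"]

def ta_return(result, gaze_process_inbox):
    """The TA securely posts the result back to the REE-side gaze process."""
    gaze_process_inbox.append(result)

secure_buffer = [{"face_center": (540, 1170)}]  # one captured frame
inbox = []                                      # the gaze process's mailbox
image = security_service_fetch(secure_buffer)
ta_return(gaze_model(image), inbox)
```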
Finally, the eye-gaze SDK can determine, from the received eye-gaze position, the area the user is gazing at, or the icon, window, or other control being gazed at, and then determine the display action associated with that area or control.
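One natural way to realize this final step is a hit test of the reported gaze position against the bounding boxes of on-screen areas and controls. The layout below is hypothetical, not the SDK's actual API:

```python
# Hit-test a gaze position (x, y) against control rectangles to find which
# control, if any, the user is gazing at.

def hit_test(gaze_pos, controls):
    """controls: list of (name, (left, top, right, bottom)) rectangles."""
    x, y = gaze_pos
    for name, (l, t, r, b) in controls:
        if l <= x < r and t <= y < b:
            return name
    return None  # gaze falls outside every registered control

controls = [
    ("notification_bar", (0, 0, 1080, 120)),
    ("payment_code_window", (80, 300, 1000, 900)),
]
target = hit_test((540, 500), controls)  # gaze lands inside the window
```

The SDK would then look up the display action associated with `target` (for example, expanding the gazed-at window).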
Figure 15 is a schematic diagram of the hardware structure of the terminal 100.
The terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment of the invention does not constitute a specific limitation on the terminal 100. In other embodiments of this application, the terminal 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. Different processing units may be independent devices or may be integrated in one or more processors.
The controller can generate operation control signals based on instruction operation codes and timing signals to control instruction fetching and execution.
The processor 110 may also be provided with a memory for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This cache may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can fetch them directly from the cache. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of the invention are only schematic and do not constitute a structural limitation on the terminal 100. In other embodiments of this application, the terminal 100 may adopt interface connection methods different from those in the foregoing embodiments, or a combination of multiple interface connection methods.
The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is configured to connect the battery 142, the charging management module 140, and the processor 110.
The wireless communication functions of the terminal 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. The mobile communication module 150 can provide wireless communication solutions applied to the terminal 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify signals modulated by the modem processor and convert them into electromagnetic waves radiated through the antenna 1.
The modem processor may include a modulator and a demodulator. The modulator modulates the low-frequency baseband signal to be sent into a medium- or high-frequency signal. The demodulator demodulates the received electromagnetic wave signal into a low-frequency baseband signal.
The wireless communication module 160 can provide wireless communication solutions applied to the terminal 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated through the antenna 2.
In some embodiments, the antenna 1 of the terminal 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160, so that the terminal 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, among others. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
The terminal 100 implements display functions through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display screen 194 and the application processor. The GPU performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), or may be manufactured using an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and so on. In some embodiments, the terminal 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
In the embodiments of this application, the terminal 100 uses the display functions provided by the GPU, the display screen 194, and the application processor to display the user interfaces shown in Figures 2A-2I, 3A-3E, 4A-4D, 5A-5M, 6A-6I, and 7A-7C.
The terminal 100 can implement the photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like. In the embodiments of this application, the camera 193 includes an RGB camera (2D camera) that generates two-dimensional images and a TOF camera (3D camera) that generates three-dimensional images.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the light signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also algorithmically optimize the noise and brightness of the image, as well as parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object passes through the lens to produce an optical image projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal and then passes the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the terminal 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
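The YUV-to-RGB step mentioned above can be made concrete with the standard BT.601 full-range conversion formulas. This is one common convention; the actual DSP may use different coefficients or ranges:

```python
# Convert one 8-bit YUV sample (BT.601, full range) to 8-bit RGB.

def yuv_to_rgb(y, u, v):
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, round(c)))  # keep results in 0..255
    return clamp(r), clamp(g), clamp(b)

gray = yuv_to_rgb(128, 128, 128)  # neutral chroma -> mid gray
```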
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. Video codecs are used to compress or decompress digital video. The terminal 100 may support one or more video codecs.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it processes input information quickly and can also learn continuously by itself. Through the NPU, intelligent cognition applications of the terminal 100 can be implemented, such as image recognition, face recognition, speech recognition, and text understanding.
In the embodiments of this application, the terminal 100 collects the user's facial images through the shooting capability provided by the ISP and the camera 193. The terminal 100 can execute the eye-gaze recognition algorithm through the NPU, and thereby identify the user's eye-gaze position from the collected facial images.
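Since the gaze model is CNN-based, the kind of operation the NPU accelerates can be illustrated with a single 2D convolution step in pure Python. This is a toy example only; a real model stacks many such layers and runs them on dedicated hardware:

```python
# "Valid" 2D convolution (implemented as cross-correlation, as in most
# deep-learning frameworks) over nested lists.

def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

edge = conv2d_valid(
    [[0, 0, 1, 1],
     [0, 0, 1, 1],
     [0, 0, 1, 1]],
    [[-1, 1],
     [-1, 1]],  # responds to vertical intensity edges
)
```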
The internal memory 121 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM).
The random access memory may include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random-access memory (DDR SDRAM; for example, fifth-generation DDR SDRAM is generally called DDR5 SDRAM), and the like. The non-volatile memory may include magnetic disk storage devices and flash memory.
The random access memory can be read and written directly by the processor 110. It can be used to store executable programs (for example, machine instructions) of the operating system or other running programs, and can also be used to store data of users and applications. The non-volatile memory can also store executable programs and data of users and applications, which can be loaded into the random access memory in advance for the processor 110 to read and write directly.
In the embodiments of this application, the application code of the eye-gaze SDK can be stored in the non-volatile memory. When the eye-gaze SDK runs and calls the eye-gaze service, its application code can be loaded into the random access memory. Data generated while running this code can also be stored in the random access memory.
The external memory interface 120 can be used to connect an external non-volatile memory to expand the storage capacity of the terminal 100. The external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving files such as music and videos in the external non-volatile memory.
The terminal 100 can implement audio functions such as music playback and recording through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset interface 170D, the application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The speaker 170A, also called a "loudspeaker", converts audio electrical signals into sound signals. The terminal 100 can play music or a hands-free call through the speaker 170A. The receiver 170B, also called an "earpiece", converts audio electrical signals into sound signals. When the terminal 100 answers a call or plays a voice message, the voice can be heard by holding the receiver 170B close to the ear. The microphone 170C, also called a "mic" or "mouthpiece", converts sound signals into electrical signals. In the embodiments of this application, when the terminal 100 is in the screen-off or screen-off AOD (always-on display) state, it can capture audio signals from the environment through the microphone 170C and then determine whether the user's voice wake-up word is detected. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C. The headset interface 170D is used to connect a wired headset.
The pressure sensor 180A is used to sense pressure signals and can convert them into electrical signals. The gyroscope sensor 180B can be used to determine the angular velocity of the terminal 100 around three axes (that is, the x, y, and z axes), and thereby determine the motion posture of the terminal 100. The acceleration sensor 180E can detect the magnitude of acceleration of the terminal 100 in various directions (generally along three axes), so it can be used to recognize the posture of the terminal 100. In the embodiments of this application, when the screen is off or in the screen-off AOD state, the terminal 100 can detect through the acceleration sensor 180E and the gyroscope sensor 180B whether the user has picked up the phone, and then determine whether to light up the screen.
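The pick-up detection just described can be sketched as a simple heuristic over accelerometer and gyroscope readings. The thresholds and the decision rule here are hypothetical; a real implementation would fuse filtered sensor data over a time window:

```python
# Heuristic pick-up ("raise to wake") check: the phone went from lying flat
# (z-axis acceleration near gravity, ~9.8 m/s^2) to tilted, while rotating
# faster than a rotation threshold (rad/s).

def picked_up(accel_z_before, accel_z_after, gyro_magnitude,
              tilt_threshold=3.0, rotation_threshold=0.5):
    tilted = (accel_z_before - accel_z_after) > tilt_threshold
    rotating = gyro_magnitude > rotation_threshold
    return tilted and rotating

# Lying flat, then raised toward the face while rotating:
wake = picked_up(accel_z_before=9.8, accel_z_after=4.0, gyro_magnitude=1.2)
```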
The barometric pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a Hall sensor. The terminal 100 can use the magnetic sensor 180D to detect the opening and closing of a flip cover. Therefore, in some embodiments, when the terminal 100 is a flip phone, the terminal 100 can detect the opening and closing of the flip cover through the magnetic sensor 180D, and then determine whether to light up the screen.
The distance sensor 180F is used to measure distance. The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector. The terminal 100 can use the proximity light sensor 180G to detect scenarios in which the user holds the terminal 100 close to the body, for example, during an earpiece call. The ambient light sensor 180L is used to sense ambient light brightness. The terminal 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
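Adaptive brightness can be sketched as a mapping from the ambient illuminance (in lux) reported by the ambient light sensor to a backlight level. A logarithmic, clamped curve is a common choice, but the specific curve below is hypothetical:

```python
# Map roughly 1..10000 lux onto min_level..max_level logarithmically.

import math

def backlight_level(lux, min_level=10, max_level=255):
    if lux <= 1:
        return min_level  # dark room: keep the backlight at its minimum
    level = min_level + (max_level - min_level) * math.log10(lux) / 4.0
    return min(max_level, round(level))

dim = backlight_level(1)         # dark room -> minimum backlight
bright = backlight_level(10000)  # direct sunlight -> maximum backlight
```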
The fingerprint sensor 180H is used to collect fingerprints. The terminal 100 can use the collected fingerprint characteristics to implement functions such as fingerprint unlocking and accessing the application lock. The temperature sensor 180J is used to detect temperature. The bone conduction sensor 180M can acquire vibration signals.
The touch sensor 180K is also called a "touch device". The touch sensor 180K can be disposed on the display screen 194; together, the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch-controlled screen". The touch sensor 180K is used to detect touch operations on or near it. The touch sensor can pass a detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the terminal 100, at a position different from that of the display screen 194.
In the embodiments of this application, the terminal 100 detects through the touch sensor 180K whether there is a user operation acting on the screen, such as a tap or a swipe. Based on the user operation detected by the touch sensor 180K, the terminal 100 can determine the action to be performed subsequently, such as running a certain application or displaying an application's interface.
The buttons 190 include a power button, volume buttons, and the like. The buttons 190 may be mechanical buttons or touch buttons. The motor 191 can generate vibration prompts; it can be used for incoming-call vibration prompts and for touch vibration feedback. The indicator 192 may be an indicator light, which can be used to indicate the charging status and changes in battery level, and can also be used to indicate messages, missed calls, notifications, and so on.
The SIM card interface 195 is used to connect a SIM card. The terminal 100 can support 1 or N SIM card interfaces.
The term "user interface (UI)" in the specification, claims, and drawings of this application is the medium interface for interaction and information exchange between an application or the operating system and the user; it converts between the internal form of information and a form acceptable to the user. The user interface of an application is source code written in a specific computer language such as Java or the extensible markup language (XML). The interface source code is parsed and rendered on the terminal device and is finally presented as content the user can recognize, such as controls like images, text, and buttons. A control, also called a widget, is a basic element of the user interface; typical controls include toolbars, menu bars, text boxes, buttons, scroll bars, images, and text. The attributes and content of the controls in an interface are defined through tags or nodes. For example, XML specifies the controls an interface contains through nodes such as <Textview>, <ImgView>, and <VideoView>. A node corresponds to a control or an attribute in the interface, and after parsing and rendering, the node is presented as content visible to the user.
In addition, the interfaces of many applications, such as hybrid applications, usually also contain web pages. A web page, also called a page, can be understood as a special control embedded in an application interface. A web page is source code written in a specific computer language, such as the hypertext markup language (HTML), cascading style sheets (CSS), or JavaScript (JS). The web page source code can be loaded and displayed as user-recognizable content by a browser or by a web page display component with functions similar to a browser. The specific content contained in a web page is likewise defined through tags or nodes in the web page source code; for example, HTML defines the elements and attributes of a web page through <p>, <img>, <video>, and <canvas>.
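The node-based interface description above can be illustrated by parsing a small XML layout into its control tree. The node names mirror those mentioned in the text (<Textview>, <ImgView>) and the attributes are purely illustrative:

```python
# Parse a toy XML layout: each child node of the root describes one control.

import xml.etree.ElementTree as ET

layout = """
<Layout>
    <Textview id="title" text="Payment code"/>
    <ImgView id="qr_code" src="qr.png"/>
</Layout>
"""

root = ET.fromstring(layout)
controls = [(child.tag, child.get("id")) for child in root]
```

Rendering would then walk this tree and draw each control according to its tag and attributes.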
The most common form of user interface is the graphical user interface (GUI), a user interface related to computer operations that is displayed graphically. It may be an icon, a window, a control, or another interface element shown on the display of the terminal device, where controls include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
As used in the specification and appended claims of this application, the singular forms "a", "an", "the", "above", "said", and "this" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the listed items. As used in the above embodiments, the term "when" may be interpreted, depending on the context, to mean "if", "after", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "when it is determined" or "if (a stated condition or event) is detected" may be interpreted to mean "if it is determined", "in response to determining", "upon detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)".
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (such as coaxial cable, optical fiber, or digital subscriber line) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (for example, floppy disks, hard disks, or magnetic tapes), optical media (for example, DVDs), or semiconductor media (for example, solid-state drives).
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be completed by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. The aforementioned storage media include ROM, random access memory (RAM), magnetic disks, optical discs, and other media capable of storing program code.

Claims (24)

  1. A display method, applied to an electronic device comprising a screen, wherein the screen of the electronic device comprises a first preset area, and the method comprises:
    displaying a first interface;
    while displaying the first interface, collecting, by the electronic device, a first image;
    determining a first eye gaze area of a user based on the first image, the first eye gaze area indicating the screen area at which the user looks when the user gazes at the screen;
    when the first eye gaze area is within the first preset area, displaying a second interface.
  2. The method according to claim 1, wherein the screen of the electronic device comprises a second preset area different from the first preset area, and the method further comprises:
    determining a second eye gaze area of the user based on the first image, wherein the position of the second eye gaze area on the screen is different from the position of the first eye gaze area on the screen;
    when the second eye gaze area is within the second preset area, displaying a third interface, the third interface being different from the second interface.
  3. The method according to claim 2, wherein the second interface and the third interface are interfaces provided by the same application, or the second interface and the third interface are interfaces provided by different applications.
  4. The method according to any one of claims 1-3, further comprising:
    displaying a fourth interface;
    while displaying the fourth interface, collecting, by the electronic device, a second image;
    determining a third eye gaze area of the user based on the second image;
    when the third eye gaze area is within the first preset area, displaying a fifth interface, the fifth interface being different from the second interface.
  5. The method according to claim 1, wherein displaying the second interface when the first eye gaze area is within the first preset area comprises: displaying the second interface when the first eye gaze area is within the first preset area and the duration of gazing at the first preset area is a first duration.
  6. The method according to claim 5, further comprising: displaying a sixth interface when the first eye gaze area is within the first preset area and the duration of gazing at the first preset area is a second duration.
  7. The method according to any one of claims 1-6, wherein the first eye gaze area is a cursor point formed by a single display unit on the screen, or the first eye gaze area is a cursor point or cursor region formed by a plurality of display units on the screen.
  8. The method according to claim 2, wherein the second interface is a non-private interface, and the method further comprises: displaying an interface to be unlocked; while displaying the interface to be unlocked, collecting, by the electronic device, a third image;
    determining a fourth eye gaze area of the user based on the third image;
    when the fourth eye gaze position is within the first preset area, displaying the second interface.
  9. The method according to claim 8, wherein the third interface is a privacy interface, and the method further comprises:
    when the fourth eye gaze position is within the second preset area, not displaying the third interface.
  10. The method according to claim 2, wherein both the second interface and the third interface are privacy interfaces, and the electronic device does not enable a camera to collect images while displaying an interface to be unlocked.
  11. The method according to any one of claims 8-10, wherein a first control is displayed in the second preset area of the first interface, the first control indicating that the second preset area is associated with the third interface.
  12. The method according to claim 11, wherein the first control is not displayed in the second preset area of the interface to be unlocked.
  13. The method according to claim 11 or 12, wherein the first control is any one of the following: a thumbnail of the first interface, an icon of the application corresponding to the first interface, or a function icon indicating a service provided by the first interface.
  14. The method according to claim 1, wherein the duration for which the electronic device collects images is a first preset duration, and collecting the first image comprises: collecting, by the electronic device, the first image within the first preset duration.
  15. The method according to claim 14, wherein the first preset duration is the first 3 seconds of displaying the first interface.
  16. The method according to claim 1, wherein the electronic device comprises a camera module, and the first image is collected by the electronic device through the camera module; the camera module comprises at least one 2D camera and at least one 3D camera, the 2D camera being configured to acquire a two-dimensional image and the 3D camera being configured to acquire an image including depth information; and the first image comprises the two-dimensional image and the image including depth information.
  17. The method according to claim 16, wherein the first image acquired by the camera module is stored in a secure data buffer, and
    before determining the first eye gaze area of the user based on the first image, the method further comprises:
    obtaining the first image from the secure data buffer in a trusted execution environment.
  18. The method according to claim 17, wherein the secure data buffer is provided at the hardware layer of the electronic device.
  19. The method according to claim 1, wherein determining the first eye gaze area of the user based on the first image comprises:
    determining feature data of the first image, the feature data comprising one or more of a left-eye image, a right-eye image, a face image, and face grid data;
    determining, using an eye gaze recognition model, the first eye gaze area indicated by the feature data, wherein the eye gaze recognition model is built on a convolutional neural network.
  20. The method according to claim 19, wherein determining the feature data of the first image comprises:
    performing face correction on the first image to obtain a first image in which the face is upright;
    determining the feature data of the first image based on the first image in which the face is upright.
  21. The method according to claim 4, wherein the first interface is any one of a first desktop, a second desktop, or a minus-one screen; and the fourth interface is any one of the first desktop, the second desktop, or the minus-one screen, and is different from the first interface.
  22. The method according to claim 4, wherein the association of the first preset area with the second interface and the fifth interface is set by the user.
  23. An electronic device, comprising one or more processors and one or more memories, wherein the one or more memories are coupled to the one or more processors and are configured to store computer program code, the computer program code comprising computer instructions that, when executed by the one or more processors, cause the method according to any one of claims 1-22 to be performed.
  24. A computer-readable storage medium comprising instructions that, when run on an electronic device, cause the method according to any one of claims 1-22 to be performed.
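To make the claimed flow concrete, the following is a minimal Python sketch of the decision logic of claims 1, 5, 6, and 19: derive a face-grid feature from the camera frame, test the estimated gaze point against preset screen regions, and select an interface by dwell time. All coordinates, thresholds, and names are illustrative assumptions, not the patented implementation; in particular, the patent leaves gaze estimation to a convolutional neural network, which is not reproduced here, and the binary face grid follows the convention used by gaze-estimation models such as iTracker.

```python
from typing import Dict, Optional, Tuple

Rect = Tuple[int, int, int, int]  # (left, top, right, bottom), screen pixels

def face_grid(frame_w: int, frame_h: int,
              face_box: Tuple[int, int, int, int], grid: int = 25):
    """Binary face-grid feature: which coarse cells of the camera frame the face covers.

    face_box is (x, y, w, h) in pixels; the mask accompanies the eye and face
    crops as model input (cf. the "face grid data" of claim 19).
    """
    x, y, w, h = face_box
    mask = [[0] * grid for _ in range(grid)]
    for row in range(grid):
        for col in range(grid):
            cx = (col + 0.5) * frame_w / grid   # cell centre, frame coordinates
            cy = (row + 0.5) * frame_h / grid
            if x <= cx < x + w and y <= cy < y + h:
                mask[row][col] = 1
    return mask

def region_contains(region: Rect, gaze: Tuple[int, int]) -> bool:
    """True if the estimated gaze point falls inside a preset screen region."""
    gx, gy = gaze
    left, top, right, bottom = region
    return left <= gx < right and top <= gy < bottom

def interface_for_gaze(gaze: Tuple[int, int], dwell_s: float,
                       preset_regions: Dict[str, Rect],
                       first_duration: float = 1.0,
                       second_duration: float = 3.0) -> Optional[str]:
    """Map a gaze point plus dwell time to an interface, per claims 1, 5, and 6.

    Dwelling past the longer second duration selects an alternative interface
    (claim 6's sixth interface); the threshold values here are illustrative.
    """
    for interface, region in preset_regions.items():
        if region_contains(region, gaze):
            if dwell_s >= second_duration:
                return interface + "_long_dwell"
            if dwell_s >= first_duration:
                return interface
            return None  # gaze hit, but dwell too short to trigger anything
    return None

# Illustrative preset layout: two stacked regions on a 1080-pixel-wide screen.
regions = {"second_interface": (0, 0, 1080, 300),
           "third_interface": (0, 300, 1080, 600)}
```

A real implementation would feed the eye crops, face crop, and face grid into the CNN to obtain the gaze point, and would read the camera frames from the secure data buffer in a trusted execution environment as claims 17 and 18 require.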
PCT/CN2023/095396 (WO2023222130A1), priority date 2022-05-20, filed 2023-05-19: Display method and electronic device

Applications Claiming Priority (4)

- CN202210549347.6, priority date 2022-05-20
- CN202210549347, priority date 2022-05-20
- CN202210761048.9, priority date 2022-06-30
- CN202210761048.9A (CN116048243B), priority date 2022-05-20, filed 2022-06-30: Display method and electronic equipment

Publications (2)

- WO2023222130A1, published 2023-11-23
- WO2023222130A9, published 2024-02-15

Family

ID=86118708

Family Applications (1)

- PCT/CN2023/095396 (WO2023222130A1), priority date 2022-05-20, filed 2023-05-19: Display method and electronic device

Country Status (2)

Country Link
CN (1) CN116048243B (en)
WO (1) WO2023222130A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116048243B (en) * 2022-05-20 2023-10-20 荣耀终端有限公司 Display method and electronic equipment

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324290A (en) * 2013-07-04 2013-09-25 深圳市中兴移动通信有限公司 Terminal equipment and eye control method thereof
CN106233294B (en) * 2015-03-31 2021-01-12 华为技术有限公司 Mobile terminal privacy protection method and device and mobile terminal
CN104915099A (en) * 2015-06-16 2015-09-16 努比亚技术有限公司 Icon sorting method and terminal equipment
CN105338192A (en) * 2015-11-25 2016-02-17 努比亚技术有限公司 Mobile terminal and operation processing method thereof
CN105867603A (en) * 2015-12-08 2016-08-17 乐视致新电子科技(天津)有限公司 Eye-controlled method and device
CN105843383B (en) * 2016-03-21 2019-03-12 努比亚技术有限公司 Using starter and method
DE202017101642U1 (en) * 2017-03-21 2017-08-22 Readio Gmbh Application software for a mobile, digital terminal
CN107608514A (en) * 2017-09-20 2018-01-19 维沃移动通信有限公司 Information processing method and mobile terminal
CN107977586B (en) * 2017-12-22 2021-04-13 联想(北京)有限公司 Display content processing method, first electronic device and second electronic device
CN111131594A (en) * 2018-10-30 2020-05-08 奇酷互联网络科技(深圳)有限公司 Method for displaying notification content, intelligent terminal and storage medium
CN209805952U (en) * 2019-07-23 2019-12-17 北京子乐科技有限公司 Camera controlling means and smart machine
CN113723144A (en) * 2020-05-26 2021-11-30 华为技术有限公司 Face watching unlocking method and electronic equipment
CN111737775A (en) * 2020-06-23 2020-10-02 广东小天才科技有限公司 Privacy peep-proof method and intelligent equipment based on user eyeball tracking
CN114253491A (en) * 2020-09-10 2022-03-29 华为技术有限公司 Display method and electronic equipment
CN114445903A (en) * 2020-11-02 2022-05-06 北京七鑫易维信息技术有限公司 Screen-off unlocking method and device
CN112487888B (en) * 2020-11-16 2023-04-07 支付宝(杭州)信息技术有限公司 Image acquisition method and device based on target object
CN114466102B (en) * 2021-08-12 2022-11-25 荣耀终端有限公司 Method for displaying application interface, related device and traffic information display system
CN116048243B (en) * 2022-05-20 2023-10-20 荣耀终端有限公司 Display method and electronic equipment

Also Published As

Publication number Publication date
CN116048243B (en) 2023-10-20
WO2023222130A1 (en) 2023-11-23
CN116048243A (en) 2023-05-02


Legal Events

- Code 121 — Ep: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 23807079; country of ref document: EP; kind code of ref document: A1)