CN117148959A - Frame rate adjusting method for eye movement tracking and related device - Google Patents


Publication number
CN117148959A
Authority
CN
China
Prior art keywords
camera
frame rate
hal
image
terminal device
Prior art date
Legal status
Pending
Application number
CN202310231426.7A
Other languages
Chinese (zh)
Inventor
吕建明
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310231426.7A priority Critical patent/CN117148959A/en
Publication of CN117148959A publication Critical patent/CN117148959A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F 3/00 - G06F 13/00 and G06F 21/00
    • G06F 1/26 Power supply means, e.g. regulation thereof
    • G06F 1/32 Means for saving power
    • G06F 1/3203 Power management, i.e. event-based initiation of a power-saving mode
    • G06F 1/3206 Monitoring of events, devices or parameters that trigger a change in power modality
    • G06F 1/3231 Monitoring the presence, absence or movement of users
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F 3/00 - G06F 13/00 and G06F 21/00
    • G06F 1/26 Power supply means, e.g. regulation thereof
    • G06F 1/32 Means for saving power
    • G06F 1/3203 Power management, i.e. event-based initiation of a power-saving mode
    • G06F 1/3234 Power saving characterised by the action undertaken
    • G06F 1/325 Power saving in peripheral device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements

Abstract

The embodiment of the application provides a frame rate adjustment method for eye movement tracking and a related device, and relates to the technical field of terminals. The terminal device comprises a camera, and the method comprises the following steps: at a first moment, the terminal device receives a notification message, displays a first message window in a first area, and controls the camera to acquire images at a first frame rate, wherein the first message window comprises part or all of the content of the notification message; at a second moment, the terminal device detects, based on the image acquired by the camera, that the gaze point of the human eye is outside the first area, and controls the camera to acquire images at a second frame rate, wherein the second frame rate is less than the first frame rate and the second moment is later than the first moment; at a third moment, the terminal device detects, based on the image acquired by the camera, that the gaze point of the human eye is within the first area, and controls the camera to acquire images at the first frame rate, wherein the third moment is later than the second moment. Thus, by adjusting the frame rate of the camera, the power consumption of the terminal device can be reduced.

Description

Frame rate adjusting method for eye movement tracking and related device
Technical Field
The application relates to the technical field of terminals, in particular to a frame rate adjustment method for eye movement tracking and a related device.
Background
Eye tracking (eye tracking) technology tracks the movement of the eyeball by detecting the position of the gaze point of the eye or the movement of the eyeball relative to the head. Some terminal devices can predict the needs of the user through eye tracking technology and respond accordingly, so that the user can control the terminal device with the eyes.
In some implementations, when the terminal device interacts with the user through eye tracking technology, the terminal device may control its camera to collect a face image of the user, process the face image to determine the gaze point of the user's eyes, and perform the operation corresponding to selecting the icon at that position. For example, when the terminal device displays a settings interface and detects that the user's eyes are looking at a certain option in the settings interface, the terminal device may perform the operation of selecting that option.
However, the terminal device may consume a large amount of power when performing eye tracking.
Disclosure of Invention
The embodiment of the application provides a frame rate adjustment method for eye tracking and a related device. When the terminal device performs eye tracking and the gaze point of the human eye is not in the display area of a notification message, the camera can be controlled to collect images at a lower frame rate, so that the power consumption of the terminal device during eye tracking is reduced.
In a first aspect, an embodiment of the present application provides a frame rate adjustment method for eye tracking, which is applied to a terminal device, where the terminal device includes a camera, and the method includes: at a first moment, the terminal device receives a notification message, displays a first message window in a first area, and controls the camera to acquire images at a first frame rate, where the first message window includes part or all of the content of the notification message; at a second moment, the terminal device detects, based on the image acquired by the camera, that the gaze point of the human eye is outside the first area, and controls the camera to acquire images at a second frame rate, where the second frame rate is less than the first frame rate and the second moment is later than the first moment; at a third moment, the terminal device detects, based on the image acquired by the camera, that the gaze point of the human eye is within the first area, and controls the camera to acquire images at the first frame rate, where the third moment is later than the second moment.
In this way, when the gaze point of the human eye is not in the area where the notification message is displayed, the terminal device controls the camera to acquire images at a lower frame rate, which reduces the power consumption of the terminal device during eye tracking; when the gaze point of the human eye is in the area where the notification message is displayed, the terminal device controls the camera to acquire images at a higher frame rate, so that the terminal device can accurately determine the gaze point of the human eye based on the images acquired by the camera.
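As an illustration only (the patent text contains no code), the frame-rate switching behaviour described above can be sketched as follows in C++; the rectangular region check, the callback name setCameraFrameRate and the concrete frame-rate values are assumptions, since the patent only requires that the second frame rate be lower than the first.

```cpp
#include <cstdint>

// Hypothetical frame rates; the patent only requires that the second be lower than the first.
constexpr uint32_t kFirstFrameRateFps  = 30;   // used while the gaze is in the first area
constexpr uint32_t kSecondFrameRateFps = 10;   // used while the gaze is elsewhere

struct Rect  { int left, top, right, bottom; };
struct Point { int x, y; };

// First area: the region of the screen where the first message window is displayed.
static bool inRegion(const Rect& area, const Point& gaze) {
    return gaze.x >= area.left && gaze.x < area.right &&
           gaze.y >= area.top  && gaze.y < area.bottom;
}

// Called for every gaze estimate computed from a camera frame.
void onGazeEstimate(const Rect& firstArea, const Point& gazePoint,
                    void (*setCameraFrameRate)(uint32_t fps)) {
    if (inRegion(firstArea, gazePoint)) {
        // Gaze inside the notification window: sample faster for an accurate gaze point.
        setCameraFrameRate(kFirstFrameRateFps);
    } else {
        // Gaze outside the window: drop the frame rate to save power.
        setCameraFrameRate(kSecondFrameRateFps);
    }
}
```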
In one possible implementation, the terminal device includes a camera hardware abstraction layer HAL, a camera driver; the terminal equipment controls the camera to acquire images at a second frame rate, and the method comprises the following steps: the camera HAL determines that the camera acquires images at a second frame rate; the camera HAL transmits a first frame length corresponding to the second frame rate to the camera driver; the camera driver writes the first frame length into a register of the camera, and the camera captures images at a second frame rate. In this way, the terminal device can adjust the frame rate of the camera to a smaller frame rate through the camera HAL, so that the power consumption of the terminal device is reduced.
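A minimal sketch of this HAL-to-driver step, under the common assumption that the sensor's frame rate is set by writing a frame length (number of lines per frame) into a sensor register; the line time and register address below are hypothetical values, not taken from the patent.

```cpp
#include <cstdint>

// Assumed sensor timing: reading out one line takes 10 microseconds (hypothetical).
constexpr double kLineTimeUs = 10.0;

// Camera HAL side: convert the requested frame rate into a frame length in lines.
uint32_t frameLengthForFps(uint32_t fps) {
    const double framePeriodUs = 1e6 / fps;            // e.g. 10 fps -> 100000 us per frame
    return static_cast<uint32_t>(framePeriodUs / kLineTimeUs);
}

// Camera driver side: write the frame length into the sensor register.
// writeSensorRegister() stands in for the real register access (e.g. over I2C).
void applyFrameRate(uint32_t fps,
                    void (*writeSensorRegister)(uint16_t addr, uint32_t value)) {
    constexpr uint16_t kFrameLengthRegAddr = 0x0340;   // a common address, assumed here
    writeSensorRegister(kFrameLengthRegAddr, frameLengthForFps(fps));
}
```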
In one possible implementation, the terminal device further includes an intelligent perception hardware abstraction layer HAL, a camera service, an image preprocessing IFE module, and a secure memory; after the camera acquires images at the first frame rate, the method includes: the camera transmits the acquired image to the IFE module; the IFE module transmits the image to the secure memory for storage and obtains a file descriptor FD corresponding to the image; the IFE module transmits the FD to the intelligent perception HAL sequentially through the camera driver, the camera HAL and the camera service; the intelligent perception HAL invokes the image in the secure memory based on the FD. In this way, the terminal device stores the image acquired by the camera in the secure memory, and because the secure memory runs in the TEE, which has higher security, the security of the image is higher and the accuracy of eye tracking can be improved.
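Purely as an illustrative sketch of the FD hand-off described above: the image payload stays in the secure memory, and only a small handle containing the file descriptor travels up through the camera driver, camera HAL and camera service to the intelligent perception HAL. All type and function names here are assumptions.

```cpp
#include <cstdint>

// Metadata that travels through the normal (REE-side) camera stack;
// the pixel data itself never leaves the secure memory.
struct SecureFrameHandle {
    int      fd;        // file descriptor naming the secure buffer
    uint32_t width;
    uint32_t height;
    uint64_t frameId;
};

// IFE side: after the frame has been written to the secure memory, forward only the
// handle (camera driver -> camera HAL -> camera service -> intelligent perception HAL).
void forwardSecureFrame(const SecureFrameHandle& handle,
                        void (*sendUpwards)(const SecureFrameHandle&)) {
    sendUpwards(handle);
}

// Intelligent perception HAL side: ask the TEE to process the frame named by the FD.
// requestTeeProcessing() stands in for the call towards the intelligent perception TA.
void onSecureFrame(const SecureFrameHandle& handle,
                   void (*requestTeeProcessing)(int fd, uint64_t frameId)) {
    requestTeeProcessing(handle.fd, handle.frameId);
}
```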
In a possible implementation, the terminal device further includes an image processing engine IPE module, the IFE module includes a first IFE module, and the camera includes an RGB camera; the IFE module transmits the image to a secure memory for storage, including: the first IFE module transmits images acquired by the RGB camera to the IPE module; and the IPE module transmits the image acquired by the RGB camera to a safe memory for storage. Therefore, the images acquired by the RGB camera are stored in the safe memory through the IFE module and the IPE module, and the safety of the images can be improved.
In a possible implementation, the terminal device further includes a trusted camera service module, a smart aware trusted application TA; the intelligent perception HAL calls an image in a secure memory based on the FD, comprising: the intelligent perception HAL transmits a calling instruction to the intelligent perception TA based on the FD; the intelligent perception TA calls the image in the safe memory through the trusted camera service module based on the call instruction, and transmits the image to the intelligent perception HAL. Therefore, the intelligent sensing HAL can acquire the image acquired by the camera in the safe memory of the TEE, and the accuracy of eye movement tracking by the intelligent sensing HAL is high because the security of the image stored in the safe memory is high.
In a possible implementation, the terminal device further includes a shared memory; the intelligent perception TA transmitting the image to the intelligent perception HAL includes: the intelligent perception TA encrypts the image and transmits the encrypted image to the intelligent perception HAL through the shared memory. In this way, the intelligent perception TA encrypts the image, so that the security of the image during transmission is higher.
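A sketch of the shared-memory hand-over, assuming a generic in-place encryption routine; the patent does not specify a cipher, so encryptInPlace is a placeholder rather than a real TEE API.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical shared (CA/TA) buffer that both the TEE side and the REE side can see.
struct SharedMemory {
    uint8_t* data;
    size_t   capacity;
};

// TA side: encrypt the secure image and place only the ciphertext into the shared
// memory read by the intelligent perception HAL on the REE side.
// encryptInPlace() is a placeholder for whatever cipher the TA actually uses.
size_t publishEncryptedFrame(const uint8_t* plainImage, size_t imageSize,
                             SharedMemory& shared,
                             size_t (*encryptInPlace)(const uint8_t* in, size_t inLen,
                                                      uint8_t* out, size_t outCapacity)) {
    // Returns the ciphertext length; the HAL is notified of it out of band.
    return encryptInPlace(plainImage, imageSize, shared.data, shared.capacity);
}
```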
In a possible implementation, the terminal device further includes a notification application and an intelligent perception service; the terminal device receiving the notification message and controlling the camera to collect images at the first frame rate includes: the notification application receives the notification message and invokes the intelligent perception service to perform eye tracking; the intelligent perception service transmits a request for human eye tracking to the intelligent perception HAL; the intelligent perception HAL transmits first indication information to the camera HAL through the camera service, where the first indication information carries the identification of the camera and the first frame rate; the camera HAL transmits second indication information to the camera driver, where the second indication information carries the identification of the camera and a second frame length corresponding to the first frame rate, and the second frame length is used to configure the frame rate of the camera to the first frame rate; the camera driver drives the camera to collect images at the first frame rate based on the identification of the camera and the second frame length. In this way, the terminal device can track human eyes when receiving a notification message, realizing human-computer interaction.
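The two pieces of indication information can be pictured as small payloads passed down the stack; the field names below are illustrative only, and the frame-rate-to-frame-length conversion repeats the assumption of a fixed per-line readout time.

```cpp
#include <cstdint>
#include <string>

// First indication information: intelligent perception HAL -> camera service -> camera HAL.
struct FirstIndication {
    std::string cameraId;      // identification of the camera to be used (RGB or TOF)
    uint32_t    frameRateFps;  // the first frame rate requested for eye tracking
};

// Second indication information: camera HAL -> camera driver.
struct SecondIndication {
    std::string cameraId;
    uint32_t    frameLengthLines;  // second frame length that configures the first frame rate
};

// Camera HAL side: translate the frame-rate request into a driver-level request,
// assuming a fixed per-line readout time (lineTimeUs) for the sensor.
SecondIndication toDriverRequest(const FirstIndication& request, double lineTimeUs) {
    const double framePeriodUs = 1e6 / request.frameRateFps;
    return { request.cameraId,
             static_cast<uint32_t>(framePeriodUs / lineTimeUs) };
}
```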
In one possible implementation, the cameras include a time-of-flight TOF camera and an RGB camera; the intelligent perception HAL transmitting the first indication information to the camera HAL includes: the intelligent perception HAL determines the ambient illuminance of the terminal device; when the ambient illuminance is greater than a first value, the intelligent perception HAL transmits indication information carrying the identification of the RGB camera to the camera HAL; when the ambient illuminance is less than or equal to the first value, the intelligent perception HAL transmits indication information carrying the identification of the TOF camera to the camera HAL. In this way, the terminal device can use different cameras to acquire images according to the ambient illuminance, so that it can accurately track human eyes based on the acquired images.
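A sketch of this camera selection step, assuming a lux threshold and string camera identifiers; the patent only speaks of "a first value" and camera identifications, so both constants here are placeholders.

```cpp
#include <string>

// Hypothetical threshold; the patent only calls it "a first value".
constexpr float kFirstValueLux = 50.0f;

// Returns the identification of the camera the intelligent perception HAL should
// request; the string identifiers are placeholders.
std::string selectEyeTrackingCamera(float ambientIlluminanceLux) {
    if (ambientIlluminanceLux > kFirstValueLux) {
        return "front_rgb";   // bright environment: the RGB image is usable
    }
    return "front_tof";       // dark environment: the TOF camera provides its own IR illumination
}
```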
In a possible implementation, the first message window includes part of the content of the notification message, and after the third moment, the method includes: at a fourth moment, the terminal device detects, based on the image acquired by the camera, that the gaze point of the human eye is within the first area, and displays a second message window, where the second message window includes the whole content of the notification message, and the time interval between the fourth moment and the third moment is less than or equal to a first preset value. In this way, when the terminal device receives a long notification message, the notification message can be expanded according to the gaze point of the human eye without the user manually inputting an operation to expand it, improving the user experience. A sketch of this dwell condition follows below.
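The dwell condition can be read as a simple time comparison; the sketch below assumes millisecond timestamps and an arbitrary value for the "first preset value".

```cpp
#include <cstdint>

// Hypothetical dwell threshold; the patent calls it "a first preset value".
constexpr uint64_t kFirstPresetValueMs = 1500;

// Expand the first message window into the second (full-content) window only if the
// gaze is still within the first area at the fourth moment t4 and the interval since
// the third moment t3 does not exceed the preset value.
bool shouldExpandMessageWindow(uint64_t t3Ms, uint64_t t4Ms, bool gazeInFirstArea) {
    return gazeInFirstArea && (t4Ms - t3Ms) <= kFirstPresetValueMs;
}
```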
In a possible implementation, at a fourth time, the terminal device further includes: the terminal equipment controls the camera to collect images at a second frame rate. Thus, when the terminal equipment expands the message, the camera can be controlled to collect images at a smaller frame rate, and the power consumption of the terminal equipment is reduced.
In a possible implementation, the first message window includes the whole content of the notification message, and after the third moment, the method includes: at a fifth moment, the terminal device detects, based on the image acquired by the camera, that the gaze point of the human eye is within the first area, and displays a first interface, where the first interface is an interface of the application corresponding to the notification message, and the time interval between the fifth moment and the third moment is less than or equal to a first preset value. In this way, when the notification message is short, the interface of the application corresponding to the notification message can be displayed according to the gaze point of the human eye without the user manually inputting an operation to open that application, improving the user experience.
In a possible implementation, after the third moment, the method includes: at a sixth moment, the terminal device detects, based on the image acquired by the camera, that the gaze point of the human eye is outside the first area, and displays a first interface, where the first interface does not include the notification message. In this way, when the terminal device detects that the gaze point of the human eye is not on the notification message, the notification message can be collapsed without the user manually inputting an operation to collapse it, improving the user experience.
In a second aspect, an embodiment of the present application provides an apparatus for adjusting a frame rate of eye tracking, where the apparatus for adjusting a frame rate of eye tracking may be a terminal device, or may be a chip or a chip system in the terminal device. The device for adjusting the frame rate of eye tracking may include a processing unit and a display unit. The processing unit is configured to implement the first aspect or any method related to processing in any possible implementation manner of the first aspect. The display unit may be a display screen or the like, and the display unit may implement the first aspect or any step related to display in any one of the possible implementations of the first aspect based on the control of the processing unit. The processing unit may be a processor when the means for frame rate adjustment of eye tracking is a terminal device. The means for frame rate adjustment of eye tracking may further comprise a storage unit, which may be a memory. The storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the terminal device implements a method described in the first aspect or any one of possible implementation manners of the first aspect. The processing unit may be a processor when the means of frame rate adjustment of the eye tracking is a chip or a system-on-chip within the terminal device. The processing unit executes instructions stored by the storage unit to cause the terminal device to implement a method as described in the first aspect or any one of the possible implementations of the first aspect. The memory unit may be a memory unit (e.g., a register, a cache, etc.) in the chip, or a memory unit (e.g., a read-only memory, a random access memory, etc.) located outside the chip in the terminal device.
The display unit is used for receiving the notification message at the first moment, and displaying a first message window in a first area, wherein the first message window comprises part or all of the content of the notification message; and the processing unit is used for controlling the camera to acquire images at a first frame rate. The processing unit is also used for controlling the camera to acquire images at a second frame rate based on the images acquired by the camera to detect that the gaze point of the human eye is outside the first area at a second moment; the second frame rate is less than the first frame rate, and the second time is later than the first time; the processing unit is further configured to, at a third time, control the camera to collect images at the first frame rate, where the third time is later than the second time, based on the images collected by the camera detecting that the gaze point of the human eye is within the first region.
In a possible implementation, the processing unit includes a camera hardware abstraction layer HAL, a camera driver; the camera HAL is used for determining that the camera collects images at a second frame rate and transmitting a first frame length corresponding to the second frame rate to the camera driver; and the camera driver is used for writing the first frame length into a register of the camera. And the camera is used for acquiring images at a second frame rate.
In a possible implementation, the processing unit further comprises an intelligent aware hardware abstraction layer HAL, camera services, an image preprocessing IFE module, and a secure memory. And the camera is used for transmitting the image acquired by the camera to the IFE module. And the IFE module is used for transmitting the image to a safe memory for storage and acquiring a file descriptor FD corresponding to the image. And the IFE module is used for transmitting the FD to the intelligent perception HAL through the camera driver, the camera HAL and the camera service in sequence. The intelligent perception HAL is used for calling the image in the secure memory based on the FD.
In a possible implementation, the processing unit further includes an image processing engine IPE module, the IFE module includes a first IFE module, and the camera includes an RGB camera. And the first IFE module is used for transmitting the image acquired by the RGB camera to the IPE module. And the IPE module is used for transmitting the image acquired by the RGB camera to the safe memory for storage.
In a possible implementation, the processing unit further comprises a trusted camera service module, a smart aware trusted application TA. The intelligent perception HAL is used for transmitting a calling instruction to the intelligent perception TA based on the FD. The intelligent perception TA is used for calling the image in the safe memory through the trusted camera service module based on the calling instruction and transmitting the image to the intelligent perception HAL.
In one possible implementation, the intelligent perception TA is specifically configured to encrypt the image and transmit the encrypted image to the intelligent perception HAL through the shared memory.
In a possible implementation, the processing unit further includes a notification application and an intelligent perception service. The notification application is used to receive the notification message and invoke the intelligent perception service to perform eye tracking. The intelligent perception service is used to transmit a request for human eye tracking to the intelligent perception HAL. The intelligent perception HAL is used to transmit first indication information to the camera HAL through the camera service, where the first indication information carries the identification of the camera and the first frame rate. The camera HAL is used to transmit second indication information to the camera driver, where the second indication information carries the identification of the camera and a second frame length corresponding to the first frame rate, and the second frame length is used to configure the frame rate of the camera to the first frame rate. The camera driver is used to drive the camera to acquire images at the first frame rate based on the identification of the camera and the second frame length.
In one possible implementation, the cameras include time-of-flight TOF cameras and RGB cameras. The intelligent perception HAL is also used for determining the ambient light intensity of the terminal equipment; when the ambient illuminance is greater than a first value, the intelligent perception HAL transmits indication information carrying the identification of the RGB camera to the camera HAL; when the ambient illuminance is less than or equal to the first value, the intelligent perception HAL transmits indication information carrying the identity of the TOF camera to the camera HAL.
In a possible implementation, the first message window includes a part of the content of the notification message, and the processing unit is further configured to detect, at a fourth moment, that the gaze point of the human eye is within the first area based on the image acquired by the camera. The display unit is further used for displaying a second message window, the second message window comprises all contents of the notification message, and the time interval between the fourth moment and the third moment is smaller than or equal to the first preset value.
In a possible implementation, the processing unit is further configured to control the camera to acquire the image at the second frame rate at the fourth moment.
In a possible implementation, the first message window includes the entire content of the notification message, and the processing unit is further configured to detect, at a fifth moment, that the gaze point of the human eye is within the first area based on the image acquired by the camera. The display unit is further used for displaying a first interface, the first interface is an interface of an application corresponding to the notification message, and the time interval between the fifth moment and the third moment is smaller than or equal to a first preset value.
In a possible implementation, the processing unit is further configured to detect, at a sixth moment, that the gaze point of the human eye is outside the first area based on the image acquired by the camera. The display unit is further used for displaying a first interface, and the first interface does not comprise notification messages.
In a third aspect, an embodiment of the present application provides a terminal device, including: a processor and a memory; the memory stores computer-executable instructions; the processor executes the computer-executable instructions stored in the memory to cause the terminal device to perform a method as described in the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements a method as described in the first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when run, causes a computer to perform a method as described in the first aspect or any one of the possible implementations of the first aspect.
In a sixth aspect, an embodiment of the application provides a chip comprising a processor for invoking a computer program in a memory to perform a method as described in the first aspect or any one of the possible implementations of the first aspect.
It should be understood that, the second aspect to the sixth aspect of the present application correspond to the technical solutions of the first aspect of the present application, and the advantages obtained by each aspect and the corresponding possible embodiments are similar, and are not repeated.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic software structure of a terminal device according to an embodiment of the present application;
fig. 3 is a schematic diagram of man-machine interaction based on eye tracking according to an embodiment of the present application;
fig. 4 is a schematic diagram of an internal path of a terminal device for face recognition in some implementations according to an embodiment of the present application;
fig. 5 is a schematic software module interaction diagram of a terminal device according to an embodiment of the present application;
fig. 6 is a schematic interaction diagram of software modules for controlling a TOF camera to collect images by a terminal device according to an embodiment of the present application;
fig. 7 is a schematic interaction diagram of software modules for controlling and driving an RGB camera to collect images by a terminal device according to an embodiment of the present application;
fig. 8 is a schematic diagram of an internal path of a terminal device for performing eye tracking according to an embodiment of the present application;
fig. 9 is an interactive schematic diagram of a frame rate adjustment method for eye tracking according to an embodiment of the present application;
fig. 10 is an interface interaction schematic diagram of a terminal device when the content of a short message notification message provided by the embodiment of the present application is long;
fig. 11 is an interface interaction schematic diagram of a terminal device when the content of a short message notification message provided by the embodiment of the present application is short;
fig. 12 is a flowchart of a method for adjusting a frame rate of eye tracking for a notification message, which is a short message, according to an embodiment of the present application;
fig. 13 is a flowchart of a method for adjusting a frame rate of eye tracking in which a notification message is a WeChat message according to an embodiment of the present application;
fig. 14 is a schematic hardware structure diagram of another terminal device according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
In order to clearly describe the technical solutions of the embodiments of the present application, the following briefly describes some terms and techniques involved in the embodiments of the present application:
1. Frame rate: the number of images acquired by a camera per second. For example, a camera frame rate of 25 fps indicates that the camera can acquire 25 images per second, i.e. roughly one frame every 40 ms.
2. Eye movement tracking: tracking the movement of the eyeball by detecting the position of the gaze point of the eye or the movement of the eyeball relative to the head.
3. Time-of-flight (TOF) camera: a camera that emits infrared light or laser light, receives the light reflected back from an object, and calculates the time (or phase difference) from emission to reception to obtain a set of distance/depth data, thereby generating a depth image or a three-dimensional image. For example, a round-trip time of about 6.7 ns corresponds to a distance of about 1 m, since distance = speed of light x round-trip time / 2. The TOF camera may include an emitter, which may be used to emit infrared light or laser pulses, and a receiver, which receives the reflected light and forms an image.
4. Rich execution environment (rich execution environment, REE): also called the general, ordinary or untrusted operating environment, refers to the system operating environment of a mobile terminal, where operating systems such as Android, iOS and Linux can run.
5. A trusted execution environment (trusted execution environment, TEE), also known as a secure side or secure area, is an area that requires authorization to be accessed. In an operating environment where the TEE and the REE coexist in the electronic device, the TEE can be isolated from the REE through hardware support.
The REE+TEE architecture refers to an architecture that provides services for applications in combination with REE through the TEE. That is, the TEE is co-present with the REE in the electronic device.
6. Trusted application (trusted application, TA): refers to an application running in the TEE that is capable of providing security services for CAs running outside the TEE, such as entering passwords, generating transaction signatures, face recognition, etc.
7. Client application (client application, CA): refers to an application running in the REE. The CA may make a call to the TA through a Client (Client) application programming interface (application programming interface, API) and instruct the TA to perform the corresponding security operation.
8. Other terms
For purposes of clarity in describing the embodiments of the present application, the words "exemplary" or "such as" are used herein to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a alone, a and B together, and B alone, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-b-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
9. Terminal equipment
The terminal device of the embodiment of the application may also be an electronic device in any form; for example, the electronic device may include a handheld device with an image processing function, a vehicle-mounted device, and the like. For example, some electronic devices are: a mobile phone, a tablet computer, a palmtop computer, a notebook computer, a mobile internet device (mobile internet device, MID), a wearable device, a virtual reality (VR) device, an augmented reality (augmented reality, AR) device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a cellular phone, a cordless phone, a session initiation protocol (session initiation protocol, SIP) phone, a wireless local loop (wireless local loop, WLL) station, a personal digital assistant (personal digital assistant, PDA), a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a future communication network or a public land mobile network, and the like, which is not limited in the application.
By way of example and not limitation, in embodiments of the application, the electronic device may also be a wearable device. A wearable device, also called a wearable intelligent device, is a general term for devices that apply wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothes and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not only a hardware device; it can also realize powerful functions through software support, data interaction and cloud interaction. In a broad sense, wearable intelligent devices include devices that are full-featured and large-sized and can implement complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used together with other devices such as smartphones, for example, various smart bracelets and smart jewelry for vital-sign monitoring.
In addition, in the embodiment of the application, the electronic device may also be a terminal device in an internet of things (internet of things, IoT) system. IoT is an important part of the development of future information technology; its main technical feature is connecting things to a network through communication technology, thereby realizing an intelligent network of human-machine interconnection and interconnection of things.
The electronic device in the embodiment of the application may also be referred to as: a terminal device, a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, a user equipment, or the like.
In an embodiment of the present application, the electronic device or each network device includes a hardware layer, an operating system layer running on top of the hardware layer, and an application layer running on top of the operating system layer. The hardware layer includes hardware such as a central processing unit (central processing unit, CPU), a memory management unit (memory management unit, MMU), and a memory (also referred to as a main memory). The operating system may be any one or more computer operating systems that implement business processing through processes (processes), such as a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, a HarmonyOS system, or a Windows operating system. The application layer includes applications such as a browser, an address book, word processing software, instant messaging software, and notifications.
By way of example, fig. 1 shows a schematic diagram of an electronic device.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device. In other embodiments of the application, the electronic device may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc., respectively, through different I2C bus interfaces. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to implement the touch function of the electronic device 100.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
It should be understood that the connection relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device. In other embodiments of the present application, the electronic device may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The electronic device implements display functions via a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device may implement shooting functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, an ISP may include an IFE module, an IFE lite module, an IPE module, and the like.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format.
In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1. For example, the electronic device 100 may include 2 front cameras and 4 rear cameras, where the front cameras may include a TOF camera and an RGB camera. The TOF camera includes an emitter that can be used to emit an optical signal (infrared light or laser pulses) and a receiver that can be used to receive the reflected light for imaging. The emitter may be, for example, an infrared light emitter, and the receiver may be a photosensitive element.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, eye tracking, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device (e.g., audio data, phonebook, etc.), and so forth. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor. For example, the frame rate adjustment method of the embodiment of the present application may be performed.
The ambient light sensor 180L is for sensing ambient light illumination. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. In an embodiment of the present application, the electronic device 100 may determine whether to use a TOF camera to capture an image or an RGB camera to capture an image based on the ambient light level perceived by the ambient light sensor 180L. For example, when the ambient light level is greater than a first value, the RGB camera may be controlled to capture an image, and when the ambient light level is less than or equal to the first value, the TOF camera may be controlled to capture an image. The first value may be a preset ambient illuminance, which is not limited in the embodiment of the present application.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 2 is a software configuration block diagram of a terminal device according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, from top to bottom: an application layer, an application framework layer, Android runtime (Android Runtime) and system libraries, a hardware abstraction layer (hardware abstraction layer, HAL), and a kernel layer. It should be noted that the embodiment of the present application is illustrated with an Android system; in other operating systems (such as a HarmonyOS system, an iOS system, etc.), the scheme of the present application can be implemented as long as the functions implemented by the respective functional modules are similar to those in the embodiment of the present application.
The application layer may include a series of application packages.
As shown in fig. 2, the application packages may include applications for cameras, calendars, phones, maps, games, settings, notifications, and the like. For example, in the embodiment of the present application, the display of the interface and the interface interaction of the user may be implemented at the application layer.
The setting application has the function of inputting a face, and the input face is used for face unlocking and eye tracking. The notification application has the function of receiving and displaying notification messages, and in the embodiment of the application, the notification application can also have the function of starting human eye tracking when displaying the notification. When the notification application starts eye tracking and the eye gaze point is within the display area of the notification message, the terminal device may expand the notification message to display the entire content of the notification message or display an interface of the application corresponding to the notification message.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. For example, in the embodiment of the application, the system side can provide the bottom layer implementation of the shortcut application card, including related operations of creating, managing, removing and the like of the stack of the application program.
As shown in FIG. 2, the application framework layer may include a window manager, a resource manager, a notification manager, a view system, a camera service, an intelligent perception service, graphics drawing, graphics rendering, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and so on. For example, in an embodiment of the present application, the window manager may be used to implement operations related to interface display.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages, which can automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and so on. The notification manager may also provide notifications that appear in the system top status bar in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is played, the terminal device vibrates, an indicator light flashes, and so on.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
Graphics drawing is used to draw graphics.
Graphics rendering is used to render the drawn graphics.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), graphics processing Libraries (e.g., openGL ES), graphics engines (e.g., SGL), graphics composition, etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
Graphics compositing is used to composite one or more rendered views into a display interface.
Android Runtime includes core libraries and a virtual machine. Android Runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The HAL layer is an encapsulation of the Linux kernel drivers; it provides interfaces upward and hides the implementation details of the underlying hardware.
The HAL layer may include Wi-Fi HAL, audio (audio) HAL, camera HAL (Camera HAL) and smart perception HAL, algorithm module, etc.
Wherein the Camera HAL is the core software framework of Camera. The intelligent perception HAL is the core software framework/application of eye tracking.
The intelligent perception TA (trusted application) is an application running in the TEE environment. The intelligent perception TA can be used to execute and process the eagle-eye service in the TEE environment, and is responsible for secure image acquisition, image encryption, communication with the intelligent perception HAL on the REE side, and so on.
The shared memory may be a CA/TA secure buffer, which may be used to pass the encrypted image to the intelligent perception HAL on the REE side.
The kernel layer is a layer between hardware and software. The kernel layer may include display drivers, camera drivers, audio drivers, central processor drivers, and the like.
The camera driver is the driver layer of the camera device and is mainly responsible for interacting with the hardware.
A trusted camera service (trusted camera) module is an application running in a TEE environment. In embodiments of the present application, a trusted camera service module may be used for the transmission of secure images.
The hardware layer may include a display, a TOF camera, an RGB camera, an Image Front End (IFE) module, an image processing engine (Image processing engine, IPE) module, a Secure memory (Secure Buffer), and the like.
The secure memory may be a memory running in a TEE environment and having a secure protection function, and may be used for storing raw data acquired by the TOF camera.
The IFE module may be configured to forward the image data without processing the image data during the forwarding process. In an embodiment of the present application, the IFE module may include a first IFE module and a second IFE module, where the second IFE module may be an IFE lite module. The first IFE module is used for forwarding the image acquired by the RGB camera, and the IFE lite module is a lightweight image preprocessing module and can be used for forwarding the image acquired by the TOF camera.
For example, when forwarding the image collected by the RGB camera, the first IFE module may convert the raw image data collected by the RGB camera into YUV data.
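For reference, one commonly used conversion of this kind is the full-range BT.601 (JPEG/JFIF) mapping shown below; the exact matrix used by the IFE hardware is not specified in this embodiment, so the coefficients are given only as an illustration of the raw-to-YUV step.

```latex
\begin{aligned}
Y &= 0.299\,R + 0.587\,G + 0.114\,B\\
U &= -0.169\,R - 0.331\,G + 0.500\,B + 128\\
V &= 0.500\,R - 0.419\,G - 0.081\,B + 128
\end{aligned}
```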
The IPE module may be used to scale the image so that the resolution of the image meets the application requirements. For example, in the present application, the IPE module may perform scaling processing on the image so that the resolution of the image satisfies the requirements of the intelligent perception HAL.
The notification application in the terminal device may be configured to display a notification message on the screen of the terminal device, where the notification message may include a short message, a WeChat message, a system message, and the like. In some implementations, because the length of notification message content varies while the size of the area for displaying the notification message is preset, when the terminal device receives a notification message with long content it may display only part of that content. If the user wants to view the entire content of the notification message, the user clicks the control corresponding to the notification message, and the terminal device displays the entire content of the notification message in response to the operation input by the user.
In some implementations, the terminal device may interact with the user through an eye tracking technology, and fig. 3 shows a schematic diagram of man-machine interaction based on eye tracking.
As shown in fig. 3, the terminal device may control the RGB camera to collect the face image of the user at a certain frame rate, and process the face image of the user to determine the gaze point of the eyes of the user, so that the terminal device performs an operation corresponding to the icon at the selected gaze point. For example, when the terminal device displays a setting interface and the terminal device detects that the user's eyes look at a certain option in the setting interface, the terminal device may perform an operation of selecting the option.
Fig. 4 illustrates an internal path diagram of a terminal device for face recognition in some implementations.
As shown in fig. 4, in some implementations, when the terminal device performs eye tracking, the terminal device typically controls the RGB camera to collect images. The image collection process may be as follows: after the RGB Camera sensor of the terminal device collects a face image, the raw face image data may be transmitted to the IFE module; the IFE module may convert the raw face image data into YUV data and transmit the YUV data to the IPE module, and the IPE module transmits the YUV data to the Camera HAL. When the IPE module transmits the YUV data to the Camera HAL, a face detection module (Face Detection) in the terminal device may transmit information of the face image to the automatic exposure control module (auto exposure control, AEC) of the Camera HAL, so that the Camera HAL obtains the face information. After the automatic exposure control module obtains the face information, it can process the face information and calculate the exposure time of the next frame, and the Camera HAL can control the RGB Camera to collect images according to the calculated exposure time.
However, when the camera of the terminal device collects face images, the frame rate of the camera may remain high, so that the power consumption of the terminal device during eye tracking is high.
In view of this, an embodiment of the present application provides a frame rate adjustment method for eye tracking: when the terminal device displays a notification message and performs eye tracking, the terminal device adjusts the frame rate of the camera. When the gaze point of the user's eye is outside the area where the notification message is displayed, indicating that the user is unlikely to view the notification message, the terminal device may control the camera to collect images at a smaller frame rate. In this way, the power consumption of the terminal device can be reduced.
The following describes software modules and interactions between modules involved in a frame rate adjustment method for eye tracking according to an embodiment of the present application with reference to fig. 1 to 3. Fig. 5 is a schematic diagram of software module interaction of a terminal device according to an embodiment of the present application.
As shown in fig. 5, when the notification application of the terminal device receives a notification message and displays the notification message on the display screen of the terminal device, the notification application in the application layer may interact with the smart perception service in the application framework layer through a preset application programming interface (application programming interface, API) for banner notifications, and the smart perception service may interact with the smart perception HAL in the HAL layer. The smart perception HAL may determine the camera used to collect images based on the ambient light level sensed by the ambient light sensor 180L. The smart perception HAL may control a camera (Camera) control module to interact with the camera service in the application framework layer through a VNDK (vendor native development kit) interface, and transmit the identity of the camera and an initial frame rate to the camera HAL in the HAL layer through the camera service. The camera HAL may interact with the camera driver in the kernel layer, and the camera driver may, based on the identity of the camera, drive the TOF camera or the RGB camera in the hardware layer to collect images at the initial frame rate.
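As a rough illustration of this dispatch, the sketch below shows how a smart perception HAL might pick the camera from the ambient light level and hand the camera identity plus an initial frame rate to the camera service. All class, method, and constant names (SmartPerceptionHal, CameraControl, the 5 lux threshold, the 30 fps initial rate) are assumptions made for this example; the threshold direction (TOF camera below the threshold, RGB camera above it) follows the detailed flow described later and is not a statement about the actual product.

```java
// Hypothetical sketch: camera selection in the smart perception HAL.
// Names and threshold/frame-rate values are illustrative assumptions only.
public class SmartPerceptionHal {
    private static final String TOF_CAMERA_ID = "tof_front";
    private static final String RGB_CAMERA_ID = "rgb_front";
    private static final float AMBIENT_LIGHT_THRESHOLD_LUX = 5.0f; // "first value", assumed
    private static final int INITIAL_FRAME_RATE_FPS = 30;          // initial frame rate, assumed

    private final CameraControl cameraControl; // stands in for the VNDK camera-service path

    public SmartPerceptionHal(CameraControl cameraControl) {
        this.cameraControl = cameraControl;
    }

    /** Called when the smart perception service requests eye tracking. */
    public void startEyeTracking(float ambientLightLux) {
        // Pick the camera from the ambient light level reported by sensor 180L.
        String cameraId = (ambientLightLux <= AMBIENT_LIGHT_THRESHOLD_LUX)
                ? TOF_CAMERA_ID   // low light: infrared TOF camera still works
                : RGB_CAMERA_ID;  // bright scene: ordinary RGB camera
        // Hand the camera identity and the initial frame rate to the camera HAL
        // via the camera service.
        cameraControl.requestStream(cameraId, INITIAL_FRAME_RATE_FPS);
    }
}

/** Minimal interface standing in for the camera-service call path. */
interface CameraControl {
    void requestStream(String cameraId, int frameRateFps);
}
```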
For example, when the camera driver drives the TOF camera to collect images at the initial frame rate, the TOF camera may transmit the collected images to the IFE lite module, and the IFE lite module may transmit the images collected by the TOF camera to the secure memory (Secure buffer) for storage. The IFE lite module may also obtain the file descriptor FD corresponding to the image in the secure memory.
For example, when the camera driver drives the RGB camera to collect images at the initial frame rate, the RGB camera may transmit the collected images to the IFE module, and the IFE module may transmit the images to the IPE module, and the IPE module transmits YUV image data collected by the RGB camera to the secure memory for storage. The IPE module may also obtain a file descriptor FD corresponding to the image in the secure memory.
As shown in fig. 5, when the camera driver drives the TOF camera to collect images at the initial frame rate, the IFE lite module may transmit the file descriptor FD to the smart perception HAL sequentially through the camera driver, the kernel, the camera HAL, and the camera service. When the camera driver drives the RGB camera to collect images at the initial frame rate, the IPE module may transmit the file descriptor FD to the smart perception HAL sequentially through the camera driver, the kernel, the camera HAL, and the camera service.
The smart perception HAL may invoke the data stored in the secure memory through the smart perception TA based on the FD.
As shown in fig. 5, when the smart sensor HAL invokes the data stored in the secure memory through the smart sensor TA based on the FD, the smart sensor HAL transmits the FD to the smart sensor TA, and the smart sensor TA may obtain the image corresponding to the FD from the secure memory through the Trusted Camera service module (Trusted Camera service) according to the FD. The intelligent perception TA can encrypt the image and transmit the encrypted image to a CA/TA shared memory (CA/TA Secure buffer), the CA/TA shared memory transmits the encrypted image to the intelligent perception CA, the intelligent perception CA transmits the encrypted image to the intelligent perception HAL, and the intelligent perception HAL can call an algorithm module to process the image.
It should be noted that, when the image is an image acquired by the TOF camera, the intelligent sensing HAL may also acquire TOF calibration data so as to process the TOF image. The TOF calibration data may include noise, temperature drift, and the like.
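The FD-based retrieval described above can be pictured with the following sketch, in which the REE-side HAL asks the TA for an encrypted copy of the frame and decrypts it locally before handing it to the algorithm module. The client interface, the record type, and the use of AES-GCM with a pre-negotiated session key are all assumptions; real TEE client APIs and the actual cipher are platform specific and are not disclosed in this embodiment.

```java
// Hypothetical sketch of the FD-based secure image fetch described above.
// SmartPerceptionTaClient stands in for the CA that talks to the smart
// perception TA; all names and the AES-GCM choice are assumptions.
public final class SecureFrameFetcher {
    private final SmartPerceptionTaClient taClient;
    private final javax.crypto.SecretKey sessionKey; // negotiated with the TA (assumed)

    public SecureFrameFetcher(SmartPerceptionTaClient taClient,
                              javax.crypto.SecretKey sessionKey) {
        this.taClient = taClient;
        this.sessionKey = sessionKey;
    }

    /** Returns the decrypted frame for the given secure-memory file descriptor. */
    public byte[] fetchFrame(int fileDescriptor) throws Exception {
        // 1. Ask the TA to read the frame from secure memory (via the trusted
        //    camera service), encrypt it, and place it in the CA/TA shared buffer.
        EncryptedFrame frame = taClient.requestEncryptedFrame(fileDescriptor);

        // 2. Decrypt on the REE side before handing the data to the algorithm module.
        javax.crypto.Cipher cipher = javax.crypto.Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(javax.crypto.Cipher.DECRYPT_MODE, sessionKey,
                new javax.crypto.spec.GCMParameterSpec(128, frame.iv()));
        return cipher.doFinal(frame.ciphertext());
    }

    /** Placeholder for the CA-to-TA call; the real interface is platform specific. */
    public interface SmartPerceptionTaClient {
        EncryptedFrame requestEncryptedFrame(int fd) throws Exception;
    }

    public record EncryptedFrame(byte[] iv, byte[] ciphertext) {}
}
```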
As shown in fig. 5, the algorithm module may include image processing, an eye movement recognition algorithm, eye movement calibration data, and a face recognition algorithm, and the algorithm module may determine coordinates of a gaze point of a human eye by processing the image.
Illustratively, when the algorithm module determines the coordinates of the gaze point of the human eye, the smart perception HAL may transmit the coordinates of the gaze point of the human eye to the notification application via the smart perception service, which determines whether the gaze point of the human eye is within the area in which the notification message is located based on the coordinates of the gaze point of the human eye. In this way, the notification application can control the intelligent perception HAL to adjust the frame rate at which the camera captures images, depending on whether the gaze point of the human eye is within the area in which the notification message is located.
Illustratively, when the notification application detects that the gaze point of the human eye is outside the area where the notification message is located, the smart perception service may be invoked by the method described in the above embodiments, and the smart perception service interacts with the smart perception HAL. The smart perception HAL then controls, through the camera service, the camera HAL, the kernel, and the camera driver, the TOF camera to collect images at a second frame rate.
The second frame rate is smaller than the first frame rate, and the embodiment of the application does not specifically limit the first frame rate and the second frame rate.
In one possible implementation, when the TOF camera captures images at the second frame rate and the notification application detects that the gaze point of the human eye is within the region where the notification message is located, the terminal device may control the TOF camera to capture images at the first frame rate. The method for the terminal device to control the TOF camera to acquire the image at the first frame rate may be described in the above embodiment, and will not be described herein.
It should be noted that, as shown in fig. 5, the user may enroll a face image through the settings application, so that the terminal device can perform man-machine interaction through face recognition, for example face unlocking. When the settings application enrolls the face image, the workflow of each software module in the terminal device is similar to the process in which the notification application invokes the smart perception service to perform eye tracking, and details are not repeated here.
Based on the internal logic diagram shown in fig. 5, the internal logic when the terminal device controls the TOF camera to collect an image and controls the RGB camera to collect an image will be described below.
Fig. 6 is a schematic interaction diagram of software modules for controlling a TOF camera to collect images by a terminal device according to an embodiment of the present application.
As shown in fig. 6, when the notification application receives the notification message, a smart sensor service may be invoked, which may interact with the smart sensor HAL. The intelligent sensing HAL may acquire the ambient light level sensed by the ambient light sensor 180L, and when the ambient light level is greater than a first value, the intelligent sensing HAL may control the TOF camera to capture images at a first frame rate.
Illustratively, the intelligent perception HAL may interact with the camera service through a camera control module, which in turn, through the camera HAL, kernel, camera drive, controls the TOF camera to capture images at a first frame rate. The specific process of controlling the TOF camera to acquire images at the first frame rate may be described in the above embodiments, and will not be described herein.
The process of receiving the FD corresponding to the image acquired by the TOF camera by the intelligent perception HAL and invoking the image based on the FD can be described in the above embodiments, and the embodiments of the present application are not repeated.
Note that, solid arrows in fig. 6 may be used to represent control flows, and broken arrows may be used to represent data flows.
Fig. 7 is an interaction schematic diagram of software modules for controlling and driving an RGB camera to collect images by a terminal device according to an embodiment of the present application.
As shown in fig. 7, when the notification application receives the notification message, the smart perception service may be invoked, and the smart perception service may interact with the smart perception HAL. The smart perception HAL may acquire the ambient light level sensed by the ambient light sensor 180L, and when the ambient light level is less than or equal to the first value, the smart perception HAL may control the camera driver to drive the RGB camera to collect images at the first frame rate.
By way of example, the smart perception HAL may interact with the Camera service through Camera control, and further through Camera HAL, kernel, camera driver, control the RGB Camera to capture images at a first frame rate. The specific process of controlling the RGB camera to collect images at the first frame rate can be described in the above embodiments, and will not be described herein.
The process in which the smart perception HAL receives the FD corresponding to the image collected by the RGB camera and invokes the image based on the FD can be referred to the description in the above embodiments, and is not repeated here.
Note that, solid arrows in fig. 7 may be used to represent control flows, and broken arrows may be used to represent data flows.
Based on the internal logic shown in fig. 7, the path followed by the terminal device when controlling the RGB camera to collect images can be seen in fig. 8. Fig. 8 is a schematic diagram of an internal path of a terminal device performing eye tracking according to an embodiment of the present application.
As shown in fig. 8, after the RGB camera sensor of the terminal device collects a face image, the raw face image data may be transmitted to the IFE module, the IFE module may convert the raw face image data into YUV data and transmit the YUV data to the IPE module, and the IPE module transmits the YUV data to the trusted application TA, where the trusted application TA may be the intelligent perception TA. The trusted application TA transmits the YUV data to the intelligent perception HAL. The intelligent perception HAL can process the YUV data, for example perform face detection through a face detection algorithm, so as to obtain face frame information. The intelligent perception HAL may transmit the face frame information to the camera HAL through the camera service. After the automatic exposure control module obtains the face information, it can process the face information and calculate the exposure time of the next frame. In this way, the camera HAL can control the RGB camera to collect images according to the calculated exposure time.
It should be noted that, the process of transmitting the YUV data to the trusted application TA by the IPE module may include transmitting the YUV data to the secure memory for storage by the IPE module, and transmitting the YUV data to the trusted application TA when the secure memory receives the call of the trusted application TA.
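Regarding the exposure-feedback step described above, the following is a minimal sketch of how an automatic exposure control module could turn face-region statistics into the exposure time of the next frame. The proportional update rule, the target luma, and the exposure bounds are assumptions chosen only to illustrate the feedback loop; production AEC algorithms are far more elaborate.

```java
// Hypothetical sketch of the auto-exposure feedback: the AEC module receives the
// face-region statistics and derives the exposure time for the next frame. The
// simple damped proportional rule below is an assumption, not the real algorithm.
public final class FaceAutoExposureControl {
    private static final double TARGET_FACE_LUMA = 128.0;    // assumed mid-grey target
    private static final long MIN_EXPOSURE_NS = 100_000L;    // 0.1 ms, assumed bound
    private static final long MAX_EXPOSURE_NS = 33_000_000L; // ~33 ms, assumed bound

    private long currentExposureNs = 10_000_000L; // 10 ms starting point, assumed

    /**
     * @param meanFaceLuma mean luma (0..255) inside the detected face frame
     * @return exposure time, in nanoseconds, to program for the next frame
     */
    public long nextExposureNs(double meanFaceLuma) {
        if (meanFaceLuma <= 0) {
            return currentExposureNs; // no usable statistics; keep the previous value
        }
        double ratio = TARGET_FACE_LUMA / meanFaceLuma;
        // Damp the correction so exposure does not oscillate frame to frame.
        double damped = 1.0 + 0.5 * (ratio - 1.0);
        long next = (long) (currentExposureNs * damped);
        currentExposureNs = Math.max(MIN_EXPOSURE_NS, Math.min(MAX_EXPOSURE_NS, next));
        return currentExposureNs;
    }
}
```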
It will be appreciated that the process by which the intelligent perception HAL obtains the YUV data is as shown in fig. 7, i.e. the YUV data passes through the TEE before being transmitted to the REE.
Based on fig. 4 above, in some implementations, the data obtained by the camera HAL is output directly by the IPE module to the camera HAL and may not pass through the TEE. Because the REE has good openness and extensibility but low security, the security of the transmitted YUV data is poor, which affects the result of eye tracking. In the embodiment of the present application, the YUV data passes through the TEE before being transmitted to the REE. Because the TEE has its own running space and defines strict protection measures, it can resist the software attacks that the REE side is prone to; for example, the TEE can protect assets (assets) such as data and software in the TEE from software attacks. Its security is higher, so that the security of the YUV data is higher.
In order to facilitate understanding, the following describes in detail the frame rate adjustment method for eye tracking according to the embodiment of the present application in conjunction with the internal logic described in the above embodiment.
Fig. 9 is an interactive schematic diagram of a frame rate adjustment method for eye tracking according to an embodiment of the present application.
As shown in fig. 9, the frame rate adjustment method of eye tracking may include the steps of:
S901, the notification application receives the notification message and invokes the intelligent perception service.
The notification message may be a short message, a WeChat message, a system update message, or the like, which is not limited in the embodiment of the present application.
S902, the intelligent perception service transmits a request for human eye tracking to the intelligent perception HAL.
In embodiments of the present application, the request for eye tracking may be used to request the intelligent sensory HAL for eye tracking.
S903, the intelligent perception HAL transmits first indication information to the camera service.
For example, the first indication information may carry an identifier of the camera and a first frame rate. The identification of the camera is the identification of the TOF camera.
It will be appreciated that when the smart sensor HAL receives a request for eye tracking transmitted by the smart sensor service, it may obtain the ambient light level sensed by the ambient light sensor 180L, and when the ambient light level is less than or equal to the first value, the smart sensor HAL generates the first indication information.
S904, the camera service transmits the first indication information to the camera HAL.
S905, the camera HAL transmits second instruction information to the camera driver.
The second indication information may carry a second frame length (frame length line, FLL) corresponding to the first frame rate and the identification of the camera, where the second frame length is used to configure the frame rate of the camera to be the first frame rate.
In one possible implementation, the second indication information may carry a second VTS (vertical total size) corresponding to the first frame rate and the identification of the camera, where the second VTS may also be used to configure the frame rate of the camera to be the first frame rate.
For example, when the camera HAL receives the first indication information transmitted by the camera service, the camera HAL may calculate a second frame length corresponding to the first frame rate according to the first frame rate in the first indication information, and generate the second indication information based on the second frame length.
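A small sketch of that calculation is given below. It relies on the common sensor timing relation frame rate = pixel clock / (line length × frame length lines); the pixel clock, line length, and minimum FLL values in the example are invented for illustration and do not correspond to any particular sensor.

```java
// Hypothetical sketch of how a camera HAL could derive the frame length lines
// (FLL, also called VTS) for a requested frame rate, using
//   fps = pixel_clock / (line_length_pck * frame_length_lines).
// The concrete timing values below are made up for the example.
public final class FrameLengthCalculator {
    private final long pixelClockHz;     // sensor pixel clock
    private final int lineLengthPck;     // line length in pixel-clock cycles
    private final int minFrameLengthLines;

    public FrameLengthCalculator(long pixelClockHz, int lineLengthPck, int minFrameLengthLines) {
        this.pixelClockHz = pixelClockHz;
        this.lineLengthPck = lineLengthPck;
        this.minFrameLengthLines = minFrameLengthLines;
    }

    /** Frame length lines needed so the sensor outputs at most {@code targetFps} frames/s. */
    public int frameLengthLinesFor(double targetFps) {
        double fll = pixelClockHz / (lineLengthPck * targetFps);
        // A larger FLL lengthens the vertical blanking and lowers the frame rate,
        // so round up and never go below the sensor's minimum.
        return Math.max(minFrameLengthLines, (int) Math.ceil(fll));
    }

    public static void main(String[] args) {
        // Example with assumed sensor timing: 600 MHz pixel clock, 5000-cycle lines.
        FrameLengthCalculator calc = new FrameLengthCalculator(600_000_000L, 5_000, 1_250);
        System.out.println("FLL for 30 fps: " + calc.frameLengthLinesFor(30.0)); // 4000
        System.out.println("FLL for 10 fps: " + calc.frameLengthLinesFor(10.0)); // 12000
    }
}
```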
S906, the camera driver drives the TOF camera to collect images at the first frame rate.
S907, the TOF camera transmits images acquired by the TOF camera to the IFE lite module.
S908, the IFE lite module transmits the image to the secure memory for storage, and obtains the file descriptor FD corresponding to the image.
S909, the IFE lite module transmits the FD corresponding to the image to the camera driver.
It should be noted that, each image acquired by the TOF camera corresponds to one FD.
S910, the camera driver transmits the FD corresponding to the image to the camera HAL.
S911, the camera HAL transmits the FD corresponding to the image to the camera service.
S912, the camera service transmits the FD corresponding to the image to the smart perception HAL.
S913, the intelligent perception HAL transmits a call instruction to the intelligent perception TA based on the FD.
S914, the intelligent perception TA sends a calling instruction to the trusted camera service module.
S915, the trusted camera service module acquires the corresponding image in the secure memory based on the FD and transmits the image to the intelligent perception TA.
S916, the intelligent perception TA encrypts the image and transmits the encrypted image to the CA/TA shared memory.
S917, CA/TA shared memory transmits the encrypted image to the intelligent perception HAL.
S918, the intelligent perception HAL decrypts the encrypted image, and performs eye tracking processing on the decrypted image to obtain the eye point of the human eye.
For example, the intelligent perception HAL may perform eye tracking processing on the decrypted image according to the algorithm module shown in fig. 5, so as to obtain the gaze point of the human eye.
In one possible implementation, the gaze point of the human eye obtained by the intelligent perception HAL may be the coordinate position of the gaze point of the human eye on the display screen of the terminal device.
S919, the intelligent perception HAL transmits the gaze point of the human eye to the intelligent perception service.
S920, the intelligent perception service transmits the gaze point of the human eye to the notification application.
S921, the notification application determines that the gaze point of the human eye is outside the first area, and invokes the intelligent perception service.
For example, the notification application may determine whether the gaze point of the human eye is within the first region based on coordinates of the gaze point of the human eye and coordinates of the first region.
In one possible implementation, the notification application may determine that the gaze point of the human eye is within the first region when the gaze point of the human eye is at an edge of the first region.
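A minimal sketch of this containment test is shown below, treating the first area as an axis-aligned rectangle in screen coordinates and, per the possible implementation above, counting a point on the edge as inside. The class and record names are assumptions made for the example.

```java
// Hypothetical sketch of the containment test used by the notification
// application. Coordinates are screen pixels; a gaze point lying exactly on
// the edge of the first area is treated as inside it.
public final class GazeRegionChecker {
    /** Axis-aligned first area, in screen coordinates. */
    public record Region(int left, int top, int right, int bottom) {}

    public static boolean isGazeInside(int gazeX, int gazeY, Region firstArea) {
        return gazeX >= firstArea.left() && gazeX <= firstArea.right()
                && gazeY >= firstArea.top() && gazeY <= firstArea.bottom();
    }

    public static void main(String[] args) {
        Region banner = new Region(0, 0, 1080, 300); // assumed banner bounds
        System.out.println(isGazeInside(540, 150, banner)); // true: inside
        System.out.println(isGazeInside(540, 300, banner)); // true: on the edge
        System.out.println(isGazeInside(540, 900, banner)); // false: outside
    }
}
```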
S922, the intelligent perception service transmits a request for adjusting the frame rate of the camera to the intelligent perception HAL.
S923, the intelligent perception HAL transmits third indication information to the camera service.
The third indication information may include an identifier of the camera and a second frame rate, where the identifier of the camera is an identifier of the TOF camera.
S924, the camera service transmits third indication information to the camera HAL.
S925, the camera HAL transmits fourth instruction information to the camera driver.
In one possible implementation, the fourth indication information may carry a first frame length corresponding to the second frame rate and the identification of the camera, where the first frame length may be used to configure the frame rate of the camera to be the second frame rate.
In another possible implementation, the fourth indication information may carry the identification of the camera and a first VTS corresponding to the second frame rate, where the first VTS is used to configure the frame rate of the camera to be the second frame rate.
S926, the camera driver drives the TOF camera to collect images at the second frame rate.
In this way, when the terminal device performs eye tracking and the gaze point of the human eye is not within the area where the notification message is displayed, the terminal device can control the TOF camera to collect images at a lower frame rate, and the power consumption of the terminal device can be reduced.
In the embodiment of the present application, after step S926, the notification application may detect that the gaze point of the human eye is within the first area; in this case, the notification application may control the TOF camera to collect images at the first frame rate by the method described in the foregoing embodiments.
As described in connection with the above embodiment, in one possible implementation of the above step S903, when the smart sensor HAL receives the request for eye tracking transmitted by the smart sensor service, the smart sensor HAL may acquire the ambient light level sensed by the ambient light sensor 180L, and when the ambient light level is greater than the first value, the smart sensor HAL may transmit the indication information carrying the identification of the RGB camera to the camera HAL, so that the RGB camera captures the image at the first frame rate.
When the RGB camera collects images at the first frame rate, the steps different from those shown in fig. 9 are as follows: the RGB camera transmits the collected image to the IFE module, the IFE module transmits the image to the IPE module, and the IPE module transmits the image to the secure memory for storage and obtains the file descriptor FD corresponding to the image. The IPE module transmits the obtained FD to the camera driver. The other steps are similar to those in fig. 9 and are not repeated here.
Figs. 5 to 9 above describe, from the perspective of the internal implementation logic of the terminal device, how the terminal device displays a notification message and controls the frame rate at which the camera collects images. The following takes the terminal device displaying a short message notification message as an example, and describes in detail, in conjunction with the user interface, possible interface interactions when the notification message is displayed according to the embodiments of the present application.
For example, since the content of a short message notification message may be long or short, there are two cases in which the terminal device displays the short message notification message: when the content of the short message notification message is long, the terminal device displays part of the content; when the content is short, the terminal device displays the entire content.
The following describes an interactive interface of the terminal device under two conditions that the content of the short message notification message is longer and the content of the short message notification message is shorter.
Fig. 10 is an interface interaction schematic diagram of a terminal device when the content of a short message notification message provided by the embodiment of the present application is long.
By way of example, the interface shown in a of fig. 10 may be an interface for a video application, which may include video a and controls associated with video a, e.g., profile controls, selection controls, etc.
When the terminal device displays an interface as shown in a of fig. 10 and the terminal device receives a short message notification message, the terminal device may display an interface as shown in b of fig. 10. In the interface shown as b in fig. 10, the terminal device may display a first message window in a first area above the interface of the video application in a banner manner. The first message window may include a part of content of the sms notification message.
It can be understood that the terminal device may also display a short message window when displaying a desktop or other application interfaces, and the embodiment of the present application is only described by taking the interface of the terminal device for displaying a video application as an example.
The sizes of the first areas corresponding to different types of notification messages may be the same or different; for example, the size of the first area corresponding to a short message may be greater than the size of the preset area corresponding to a WeChat message.
Note that, in the interface shown in b in fig. 10, the short message notification message may also be displayed in a capsule, and the embodiment of the present application is illustrated by only displaying in a banner manner.
Based on the above embodiment, when the terminal device displays the interface shown as b in fig. 10, the terminal device performs eye tracking, and when the terminal device detects that the gaze point of the human eye is within the first area based on the image captured by the camera, the terminal device may display the interface shown as c in fig. 10. A second message window may be included in the interface shown as c in fig. 10, where the second message window includes the entire content of the sms notification message.
By way of example, the case where the terminal device displays an interface as shown in c in fig. 10 may include two possible implementations:
in a possible implementation, when the terminal device displays an interface as shown in b in fig. 10, the terminal device may control the camera to capture images at the first frame rate, and the terminal device may detect whether the gaze point of the human eye is within the first area based on the images captured by the camera. When the terminal device detects whether the gaze point of the human eye is within the first area based on the image acquired by the camera, and the duration of the gaze point of the human eye within the first area is greater than or equal to a first preset value, the terminal device displays an interface as shown in c in fig. 10.
The first preset value may be, for example, 3 seconds or 5 seconds, which is not limited in the embodiment of the present application.
Therefore, when the gaze point of the human eye is in the first area and the duration time is longer than or equal to the first preset value, the possibility that the user wants to check the notification message is high, and at the moment, the terminal equipment displays the whole content of the notification message, so that the user does not need to perform manual operation, and the user experience is improved.
In another possible implementation, when the terminal device displays the interface shown as b in fig. 10, the terminal device may control the camera to collect images at the first frame rate and detect, based on the collected images, whether the gaze point of the human eye is within the first area. When the terminal device detects, based on the image collected by the camera, that the gaze point of the human eye is outside the first area, the terminal device may control the camera to collect images at the second frame rate and continue to detect whether the gaze point of the human eye is within the first area. When the terminal device then detects that the gaze point of the human eye is within the first area, the terminal device may control the camera to collect images at the first frame rate again and continue the detection. When the terminal device detects, based on the image collected by the camera, that the gaze point of the human eye is within the first area and the duration for which the gaze point stays within the first area is greater than or equal to the first preset value, the terminal device displays the interface shown as c in fig. 10.
Thus, when the gaze point of the human eye is located outside the first area, the terminal device controls the camera to acquire images at a smaller frame rate, and power consumption of the terminal device can be reduced. And when the terminal equipment detects that the gaze point of the human eye is in the first area, the terminal equipment controls the camera to acquire images again at a larger frame rate, so that the terminal equipment can more accurately determine the gaze point of the human eye.
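The two behaviors just described, switching to the second frame rate while the gaze is outside the first area and expanding the message once the gaze has dwelt inside it long enough, can be sketched as follows. The listener interface, the concrete frame rates, and the 3-second dwell threshold are assumptions used only to make the control flow concrete.

```java
// Hypothetical sketch of the gaze-driven frame-rate/dwell logic described above.
// The interface and the numeric values are illustrative assumptions.
public final class GazeFrameRateController {
    public interface Callbacks {
        void setCameraFrameRate(int fps);
        void expandNotification();
    }

    private static final int FIRST_FRAME_RATE_FPS = 30;   // assumed
    private static final int SECOND_FRAME_RATE_FPS = 10;  // assumed, lower than the first
    private static final long EXPAND_DWELL_MS = 3_000;    // "first preset value", assumed

    private final Callbacks callbacks;
    private int currentFps;
    private long insideSinceMs = -1;
    private boolean expanded;

    public GazeFrameRateController(Callbacks callbacks) {
        this.callbacks = callbacks;
        setFrameRate(FIRST_FRAME_RATE_FPS); // banner just shown: start precise tracking
    }

    /** Called for every processed frame with the latest gaze-containment result. */
    public void onGazeSample(boolean insideFirstArea, long nowMs) {
        if (insideFirstArea) {
            setFrameRate(FIRST_FRAME_RATE_FPS);          // track precisely while watched
            if (insideSinceMs < 0) insideSinceMs = nowMs;
            if (!expanded && nowMs - insideSinceMs >= EXPAND_DWELL_MS) {
                callbacks.expandNotification();           // dwelt long enough: show full text
                expanded = true;
            }
        } else {
            setFrameRate(SECOND_FRAME_RATE_FPS);          // save power while gaze is away
            insideSinceMs = -1;
        }
    }

    private void setFrameRate(int fps) {
        if (fps != currentFps) {
            currentFps = fps;
            callbacks.setCameraFrameRate(fps);
        }
    }
}
```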
Specifically, the internal implementation by which the terminal device goes from the interface shown as b in fig. 10 to the interface shown as c in fig. 10, including but not limited to the terminal device controlling the camera to collect images at the second frame rate, is described in the above embodiments and is not repeated here.
As shown in fig. 10, when the terminal device displays the interface shown as b in fig. 10, and the terminal device detects, based on the image collected by the camera, that the gaze point of the human eye is outside the first area and the duration for which the gaze point stays outside the first area reaches a second preset value, the terminal device may display the interface shown as d in fig. 10. The interface shown as d in fig. 10 is the same as the interface shown as a in fig. 10, and does not include the short message notification message.
The second preset value may be 1 second or 2 seconds, which is not limited in the embodiment of the present application.
Therefore, when the terminal device detects that the duration for which the gaze point of the human eye stays outside the first area reaches the second preset value, the possibility that the user wants to view the notification message is small, and the terminal device no longer displays the notification message. In this way, the notification message can be collapsed without the user manually inputting an operation to collapse the short message notification message, which improves the user experience.
It should be noted that, when the terminal device displays the interface shown as c in fig. 10, the terminal device may continue to detect the gaze point of the human eye, and when the terminal device detects that the gaze point of the human eye is outside the first area, the terminal device may display the interface shown as d in fig. 10.
Therefore, when the terminal device displays the entire content of the short message notification message and detects that the gaze point of the human eye is outside the first area, it can be considered that the user has finished viewing the short message notification message. At this time, the terminal device collapses the short message notification message without the user manually inputting a collapse operation, which improves the user experience.
In the embodiment of the present application, when the terminal device displays the interface shown as c in fig. 10, the terminal device may control the camera to capture images at the second frame rate. In this way, the power consumption of the terminal device can be reduced.
In the interface shown in fig. 10, the terminal device receives the sms notification message and displays a part of the content of the sms notification message on the interface, and in one possible implementation, when the content of the sms notification message received by the terminal device is short, the terminal device may display the whole content of the sms notification message. Fig. 11 shows an interface interaction schematic diagram of a terminal device when the content of a short message notification message of the terminal device is short.
A in fig. 11 is similar to a in fig. 10, and a detailed description of a in fig. 11 is omitted in the embodiments of the present application.
As shown in fig. 11, when the terminal device displays an interface as shown in a in fig. 11 and the terminal device receives a short message notification message, the terminal device may display an interface as shown in b in fig. 11. In the interface shown as b in fig. 11, the terminal device may display a second message window in a first area above the interface of the video application in a banner manner. The second message window may include all contents of the sms notification message.
Based on the above embodiment, when the terminal device displays the interface shown as b in fig. 11, the terminal device performs eye tracking, and when the terminal device detects that the gaze point of the human eye is within the first area based on the image collected by the camera, the terminal device may display an interface of the short message application, where the interface may include the entire content of the short message notification message.
The case where the terminal device displays the interface of the short message application is similar to the case where the terminal device displays the interface shown as c in fig. 10, and will not be described herein.
Therefore, when the gaze point of the human eye is within the first area, the terminal device displays the interface of the short message application, so that the user can view and reply to the content of the short message notification message on the interface of the short message application, which improves the user experience.
As shown in fig. 11, when the terminal device displays the interface shown as b in fig. 11, and the terminal device detects, based on the image collected by the camera, that the gaze point of the human eye is outside the first area and the duration for which the gaze point stays outside the first area reaches the second preset value, the terminal device may display the interface shown as c in fig. 11. The interface shown as c in fig. 11 is the same as the interface shown as a in fig. 11, and does not include the short message notification message.
The frame rate adjustment method for eye tracking provided by the embodiment of the present application is described below in conjunction with the internal implementation logic and interface interactions of the terminal device described in the above embodiments, taking the notification message being a short message and a WeChat message as examples respectively.
Fig. 12 is a flowchart of a frame rate adjustment method for eye tracking when the notification message is a short message.
As shown in fig. 12, the method for adjusting the frame rate of eye tracking may include the steps of:
S1201, the terminal device receives the short message and starts gaze detection.
For example, when the terminal device receives the short message, an interface shown as b in fig. 10 or an interface shown as b in fig. 11 may be displayed. The process of gaze detection may be a detection process of determining a gaze point of a human eye based on eye tracking, and the process of starting gaze detection by the terminal device may refer to the process of the terminal device controlling the camera to collect images at the first frame rate described in the above embodiment, which is described in the above embodiment and will not be repeated here.
It will be appreciated that the process of starting the eye gaze point detection by the terminal device may take 0.5 seconds or less, and embodiments of the present application are not limited thereto.
S1202, the terminal device continues to detect for 1 second in the high frame rate detection mode, and determines whether or not gazing is detected.
For example, the high frame rate detection mode is a mode in which the terminal device controls the camera to collect the image at the first frame rate in the above embodiment, which is described in the above embodiment and will not be described herein.
The present application is described by taking a continuous detection for 1 second as an example, and is not limited in any way.
In the embodiment of the present application, the process by which the terminal device determines whether gazing is detected may be the process, described in the above embodiments, by which the terminal device detects whether the gaze point of the human eye is located within the first area based on the image collected by the camera, and is not repeated here. The first area is the area for displaying the short message.
When the terminal device detects gazing, the following step S1205 may be performed, and when the terminal device detects non-gazing, the following step S1203 may be performed.
S1203, the terminal device enters a low frame rate detection mode, continues to detect for 3.5 seconds, and determines whether fixation is detected.
For example, the low frame rate detection mode is a mode in which the terminal device controls the camera to collect the image at the second frame rate as described in the above embodiment, which is described in the above embodiment and will not be described herein.
The present application is described by taking continuous detection for 3.5 seconds as an example, which is not a limitation.
When the terminal device detects gazing, the following step S1204 may be performed, and when the terminal device detects non-gazing, the following step S1209 may be performed.
S1204, the terminal equipment enters a high frame rate detection mode.
S1205, the terminal device determines whether the line of sight has left and whether the duration of the departure exceeds 1 second.
Illustratively, the line of sight leaving refers to the situation in which the gaze point of the human eye is outside the first area.
The present application is described by taking the determination of whether the duration of departure exceeds 1 second as an example, which is not a limitation.
When the terminal device determines that the line of sight is out, and the duration of the out-of-sight is longer than 1 second, the terminal device may perform step S1209 described below, otherwise, perform step S1206 described below.
S1206, the terminal device determines whether the gazing duration exceeds 3 seconds.
The gaze duration is, for example, a duration in which the gaze point of the human eye is within the first region. The present application is described by taking the example of judging whether the gazing time length is longer than 3 seconds, and the present application is not limited in any way.
When the gazing duration is greater than 3 seconds, the following step S1207 may be performed, and if not, the following step S1205 is continued.
S1207, the terminal equipment expands the short message and enters a low frame rate detection mode.
For example, the interface displayed when the terminal device expands the short message may refer to the interface shown as c in fig. 10.
S1208, the terminal device determines whether the line of sight has left and whether the duration of the departure exceeds 2 seconds.
The present application is described by taking the case of judging whether the duration of the line of sight departure is longer than 2 seconds as an example, and is not limited in any way.
When the terminal device determines that the line of sight has left and the duration of the departure exceeds 2 seconds, step S1209 below may be performed; otherwise, the terminal device continues to perform step S1208 until it determines that the line of sight has left.
S1209, the terminal device collapses the short message and cancels the gaze detection call.
For example, the gaze detection call may refer to the call for eye tracking in the above embodiment, and will not be described herein.
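For illustration, steps S1201 to S1209 can be condensed into a small state machine such as the sketch below. The durations come from the example values in the steps above (1-second high-rate probe, 3.5-second low-rate probe, 3-second gaze-to-expand, 1-second and 2-second leave timeouts); the state names and the structure itself are assumptions, not the actual implementation.

```java
// Hypothetical sketch of the S1201–S1209 flow as a small state machine.
// Durations follow the example values in the steps; everything else is assumed.
public final class SmsGazeFlow {
    enum State { HIGH_RATE_PROBE, LOW_RATE_PROBE, WATCHING, EXPANDED, COLLAPSED }

    private State state = State.HIGH_RATE_PROBE;
    private long stateSinceMs;
    private long watchedForMs;
    private long awayForMs;

    public SmsGazeFlow(long nowMs) { stateSinceMs = nowMs; }

    /** Feed one gaze sample; returns the (possibly unchanged) state. */
    public State onSample(boolean gazeOnMessage, long nowMs, long frameIntervalMs) {
        long inState = nowMs - stateSinceMs;
        switch (state) {
            case HIGH_RATE_PROBE:                        // S1202: 1 s at the first frame rate
                if (gazeOnMessage) move(State.WATCHING, nowMs);
                else if (inState >= 1_000) move(State.LOW_RATE_PROBE, nowMs);
                break;
            case LOW_RATE_PROBE:                         // S1203: 3.5 s at the second frame rate
                if (gazeOnMessage) move(State.WATCHING, nowMs);           // S1204
                else if (inState >= 3_500) move(State.COLLAPSED, nowMs);  // S1209
                break;
            case WATCHING:                               // S1205 / S1206
                watchedForMs = gazeOnMessage ? watchedForMs + frameIntervalMs : 0;
                awayForMs    = gazeOnMessage ? 0 : awayForMs + frameIntervalMs;
                if (awayForMs > 1_000) move(State.COLLAPSED, nowMs);        // left > 1 s
                else if (watchedForMs > 3_000) move(State.EXPANDED, nowMs); // S1207
                break;
            case EXPANDED:                               // S1208: low rate while reading
                awayForMs = gazeOnMessage ? 0 : awayForMs + frameIntervalMs;
                if (awayForMs > 2_000) move(State.COLLAPSED, nowMs);        // S1209
                break;
            case COLLAPSED:                              // gaze detection cancelled
                break;
        }
        return state;
    }

    private void move(State next, long nowMs) {
        state = next; stateSinceMs = nowMs; watchedForMs = 0; awayForMs = 0;
    }
}
```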
In summary, according to the method for adjusting the frame rate of eye tracking provided by the embodiment of the application, when the terminal device displays the short message, the terminal device can perform eye tracking, and can dynamically perform frame rate adjustment during eye tracking, so that the power consumption of the terminal device is reduced.
Fig. 13 is a flowchart of a frame rate adjustment method for eye tracking when the notification message is a WeChat message.
As shown in fig. 13, the method for adjusting the frame rate of eye tracking may include the steps of:
S1301, the terminal device receives the WeChat message and starts gaze detection.
S1302, the terminal device continues to detect for 1 second in the high frame rate detection mode, and determines whether or not gazing is detected.
When the terminal device detects gazing, the following step S1305 may be performed, and when the terminal device detects non-gazing, the following step S1303 may be performed.
S1303, the terminal equipment enters a low frame rate detection mode, continuously detects for 3.5 seconds, and judges whether fixation is detected.
The present application is described by taking continuous detection for 3.5 seconds as an example, which is not a limitation.
When the terminal device detects gazing, the following step S1304 may be performed, and when the terminal device detects non-gazing, the following step S1309 may be performed.
S1304, the terminal device enters a high frame rate detection mode.
S1305, the terminal device determines whether the line of sight has left and whether the duration of the departure exceeds 1 second.
Illustratively, the line of sight leaving refers to the situation in which the gaze point of the human eye is outside the first area.
The present application is described by taking the determination of whether the duration of departure exceeds 1 second as an example, which is not a limitation.
When the terminal device determines that the line of sight is away and the duration of the departure exceeds 1 second, the terminal device may perform step S1309 described below, otherwise, perform step S1306 described below.
S1306, the terminal device determines whether the gazing duration exceeds 3 seconds.
The gaze duration is, for example, a duration in which the gaze point of the human eye is within the first region. The present application is described by taking the example of judging whether the gazing time length is longer than 3 seconds, and the present application is not limited in any way.
When the gazing duration exceeds 3 seconds, the following step S1307 may be performed; otherwise, step S1305 is continued.
S1307, the terminal equipment expands the WeChat message and enters a low frame rate detection mode.
Illustratively, the interface displayed when the terminal device expands the WeChat message is similar to the interface shown as c in fig. 10.
S1308, the terminal device determines whether the line of sight has left and whether the duration of the departure exceeds 2 seconds.
When the terminal device determines that the line of sight has left and the duration of the departure exceeds 2 seconds, step S1309 below may be performed; otherwise, the terminal device continues to perform step S1308 until it determines that the line of sight has left.
S1309, the terminal device collapses the WeChat message and cancels the gaze detection call.
In summary, according to the method for adjusting the frame rate of eye tracking provided by the embodiment of the application, when the terminal device displays the WeChat message, the terminal device can perform eye tracking, and can dynamically perform frame rate adjustment during eye tracking, so that the power consumption of the terminal device is reduced.
The foregoing description of the solution provided by the embodiments of the present application has been presented mainly in terms of the method. To implement the above functions, the terminal device includes corresponding hardware structures and/or software modules that perform the respective functions. Those skilled in the art will readily appreciate that the method steps of the examples described in connection with the embodiments disclosed herein may be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional modules of the device for realizing the frame rate adjustment method of eye tracking according to the method example, for example, each functional module can be divided corresponding to each function, and two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
Fig. 14 is a schematic hardware structure of another terminal device according to an embodiment of the present application, as shown in fig. 14, where the terminal device includes a processor 1401, a communication line 1404 and at least one communication interface (the communication interface 1403 is illustrated in fig. 14 as an example).
The processor 1401 may be a general purpose central processing unit (central processing unit, CPU), microprocessor, application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application.
Communication line 1404 may include circuitry for communicating information between the components described above.
Communication interface 1403 uses any transceiver-like device for communicating with other devices or communication networks, such as ethernet, wireless local area network (wireless local area networks, WLAN), etc.
Possibly, the terminal device may also comprise a memory 1402.
Memory 1402 may be, but is not limited to, a read-only memory (read-only memory, ROM) or other type of static storage device that can store static information and instructions, a random access memory (random access memory, RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (electrically erasable programmable read-only memory, EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be separate and coupled to the processor via communication line 1404. The memory may also be integrated with the processor.
Wherein the memory 1402 is used for storing computer-executable instructions for performing aspects of the present application, and is controlled for execution by the processor 1401. The processor 1401 is configured to execute computer-executable instructions stored in the memory 1402, thereby implementing the frame rate adjustment method for eye tracking provided by the embodiment of the present application.
Possibly, the computer-executable instructions in the embodiments of the present application may also be referred to as application program codes, which are not limited in particular.
In a particular implementation, processor 1401 may include one or more CPUs, such as CPU0 and CPU1 of FIG. 14, as an example.
In a specific implementation, as an embodiment, the terminal device may include multiple processors, such as processor 1401 and processor 1405 in fig. 14. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
Fig. 15 is a schematic structural diagram of a chip according to an embodiment of the present application. Chip 150 includes one or more (including two) processors 151, communication lines 152, communication interfaces 153, and memory 154.
In some implementations, the memory 154 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
The method described in the above embodiments of the present application may be applied to the processor 151 or implemented by the processor 151. Processor 151 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 151 or by instructions in the form of software. The processor 151 may be a general purpose processor (e.g., a microprocessor or a conventional processor), a digital signal processor (digital signal processing, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), a field-programmable gate array (field-programmable gate array, FPGA) or other programmable logic device, discrete gates, transistor logic, or discrete hardware components, and the processor 151 may implement or perform the methods, steps, and logic diagrams associated with the various processes disclosed in embodiments of the application.
The steps of the method disclosed in connection with the embodiments of the present application may be embodied as being directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a read-only memory, a programmable read-only memory, or an electrically erasable programmable read-only memory (electrically erasable programmable read only memory, EEPROM). The storage medium is located in the memory 154, and the processor 151 reads the information in the memory 154 and completes the steps of the above method in combination with its hardware.
The processor 151, the memory 154, and the communication interface 153 may communicate with each other via the communication line 152.
In the above embodiments, the instructions stored by the memory for execution by the processor may be implemented in the form of a computer program product. The computer program product may be written in the memory in advance, or may be downloaded in the form of software and installed in the memory.
Embodiments of the present application also provide a computer program product comprising one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (digital subscriber line, DSL)) or wireless means (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may also be a semiconductor medium (e.g., a solid state disk (solid state disk, SSD)) or the like.
The embodiment of the application also provides a computer readable storage medium. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. Computer readable media can include computer storage media and communication media and can include any medium that can transfer a computer program from one place to another. The storage media may be any target media that is accessible by a computer.
As one possible design, the computer-readable medium may include a compact disc read-only memory (CD-ROM), RAM, ROM, EEPROM, or other optical disk storage; the computer-readable medium may also include magnetic disk storage or other magnetic storage devices. Moreover, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or the wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems), and computer program products according to the embodiments of the application. It should be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing apparatus to produce a machine, such that the instructions, executed via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

It should be noted that the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, displayed data, etc.) involved in the present application are information and data authorized by the user or fully authorized by all parties; the collection, use, and processing of such data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entries are provided for the user to choose to authorize or refuse.

Claims (15)

1. A frame rate adjustment method for eye tracking, applied to a terminal device, wherein the terminal device comprises a camera, characterized in that the method comprises:
at a first moment, the terminal device receives a notification message, displays a first message window in a first area, and controls the camera to acquire images at a first frame rate, wherein the first message window comprises part or all of the content of the notification message;
at a second moment, the terminal device detects, based on the image acquired by the camera, that the gaze point of the human eyes is outside the first area, and controls the camera to acquire images at a second frame rate, wherein the second frame rate is less than the first frame rate, and the second moment is later than the first moment;
at a third moment, the terminal device detects, based on the image acquired by the camera, that the gaze point of the human eyes is located within the first area, and controls the camera to acquire images at the first frame rate, wherein the third moment is later than the second moment.
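By way of a non-limiting illustration only, and not as part of the claims, the gaze-dependent frame-rate selection of claim 1 can be sketched in C++ as follows; all type names, function names, and frame-rate values are assumptions introduced for this sketch and do not describe the actual implementation of the terminal device.

```cpp
// Illustrative sketch of the frame-rate policy in claim 1; names and values are assumed.
#include <cstdio>

struct Rect {
    int x, y, w, h;
    bool contains(int px, int py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};

struct GazePoint { int x, y; bool valid; };

class EyeTrackingFrameRatePolicy {
public:
    EyeTrackingFrameRatePolicy(Rect firstArea, int firstFps, int secondFps)
        : area_(firstArea), firstFps_(firstFps), secondFps_(secondFps) {}

    // Called for every frame analysed by the eye-tracking pipeline; returns the
    // frame rate the camera should be configured to use next.
    int onGazeEstimated(const GazePoint& gaze) const {
        // Gaze inside the first area (the message window): keep the first frame rate.
        // Gaze outside the first area: drop to the lower second frame rate to save power.
        return (gaze.valid && area_.contains(gaze.x, gaze.y)) ? firstFps_ : secondFps_;
    }

private:
    Rect area_;
    int firstFps_;   // first frame rate, e.g. 30 fps (assumed)
    int secondFps_;  // second frame rate, e.g. 10 fps (assumed)
};

int main() {
    EyeTrackingFrameRatePolicy policy({0, 0, 400, 200}, 30, 10);
    std::printf("gaze in window -> %d fps\n", policy.onGazeEstimated({100, 50, true}));
    std::printf("gaze elsewhere -> %d fps\n", policy.onGazeEstimated({500, 900, true}));
}
```

In this sketch the policy only decides the target frame rate; how that decision reaches the camera is the subject of claim 2.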
2. The method according to claim 1, wherein the terminal device comprises a camera hardware abstraction layer (HAL) and a camera driver; and
the terminal device controlling the camera to acquire images at the second frame rate comprises:
the camera HAL determines that the camera is to capture images at the second frame rate;
the camera HAL transmits, to the camera driver, a first frame length corresponding to the second frame rate;
the camera driver writes the first frame length into a register of the camera, and the camera acquires images at the second frame rate.
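For illustration only, the frame-length path of claim 2 might look like the following sketch; the register address, pixel-clock value, frame-length formula, and function names are invented assumptions rather than the actual camera HAL or driver code.

```cpp
// Hypothetical sketch of claim 2; register address, clock values, the frame-length
// formula, and function names are assumptions, not the actual camera HAL or driver.
#include <cstdint>
#include <cstdio>

constexpr uint32_t kPixelClockHz   = 120000000;  // assumed sensor pixel clock
constexpr uint32_t kLineLengthPck  = 4000;       // assumed line length in pixel clocks
constexpr uint16_t kRegFrameLength = 0x0340;     // assumed frame-length register address

// Camera driver side: in a real driver this would be an I2C write to the image sensor.
void camera_driver_write_register(uint16_t reg, uint32_t value) {
    std::printf("driver: write register 0x%04X = %u\n", (unsigned)reg, (unsigned)value);
}

// Camera HAL side: a larger frame length (more lines per frame) gives a lower frame rate.
uint32_t frame_length_for_fps(uint32_t fps) {
    uint32_t linesPerSecond = kPixelClockHz / kLineLengthPck;  // 30000 lines/s with these values
    return linesPerSecond / fps;
}

void apply_second_frame_rate(uint32_t secondFps) {
    uint32_t firstFrameLength = frame_length_for_fps(secondFps);
    // The camera HAL hands the first frame length to the camera driver, which programs the sensor.
    camera_driver_write_register(kRegFrameLength, firstFrameLength);
}

int main() { apply_second_frame_rate(10); }  // e.g. drop to an assumed 10 fps second frame rate
```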
3. The method according to claim 1 or 2, wherein the terminal device further comprises an intelligent perception hardware abstraction layer (HAL), a camera service, an image pre-processing (IFE) module, and a secure memory; and
after the camera acquires images at the first frame rate, the method further comprises:
the camera transmits the acquired image to the IFE module;
the IFE module transmits the image to the secure memory for storage and obtains a file descriptor (FD) corresponding to the image;
the IFE module transmits the FD to the intelligent perception HAL through the camera driver, the camera HAL, and the camera service in sequence;
the intelligent perception HAL retrieves the image from the secure memory based on the FD.
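The FD-based hand-off of claim 3 can be illustrated, under assumed names and with the secure memory modelled as a simple in-process store, by the sketch below; it is not the actual secure-memory or camera-pipeline implementation, and only the descriptor, never the image data, travels along the driver/HAL/service chain.

```cpp
// Minimal sketch of the FD hand-off in claim 3 (hypothetical types; the real path is
// IFE -> camera driver -> camera HAL -> camera service -> intelligent perception HAL).
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

using Image = std::vector<unsigned char>;

// Stand-in for the secure memory: the image itself never leaves this store; only the
// file descriptor (modelled here as an integer key) is passed between components.
class SecureMemory {
public:
    int store(Image img) { int fd = nextFd_++; buffers_[fd] = std::move(img); return fd; }
    const Image& lookup(int fd) const { return buffers_.at(fd); }
private:
    int nextFd_ = 100;
    std::map<int, Image> buffers_;
};

int ife_store_frame(SecureMemory& mem, Image frame) {
    // The IFE module writes the frame into secure memory and obtains the corresponding FD.
    return mem.store(std::move(frame));
}

void smart_perception_hal_consume(const SecureMemory& mem, int fd) {
    // The intelligent perception HAL retrieves the image via the FD only.
    const Image& img = mem.lookup(fd);
    std::printf("eye-tracking input: %zu bytes (fd=%d)\n", img.size(), fd);
}

int main() {
    SecureMemory secureMem;
    int fd = ife_store_frame(secureMem, Image(640 * 480, 0));  // assumed frame size
    smart_perception_hal_consume(secureMem, fd);               // FD passed along the chain
}
```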
4. The method according to claim 3, wherein the terminal device further comprises an image processing engine (IPE) module, the IFE module comprises a first IFE module, and the camera comprises an RGB camera; and
the IFE module transmitting the image to the secure memory for storage comprises:
the first IFE module transmits the image acquired by the RGB camera to the IPE module;
the IPE module transmits the image acquired by the RGB camera to the secure memory for storage.
5. The method according to claim 3 or 4, characterized in that the terminal device further comprises a trusted camera service module and an intelligent perception trusted application (TA); and
the intelligent perception HAL retrieving the image from the secure memory based on the FD comprises:
the intelligent perception HAL transmits an invocation instruction to the intelligent perception TA based on the FD;
based on the invocation instruction, the intelligent perception TA retrieves the image from the secure memory through the trusted camera service module and transmits the image to the intelligent perception HAL.
6. The method according to claim 5, wherein the terminal device further comprises a shared memory, and the intelligent perception TA transmitting the image to the intelligent perception HAL comprises:
the intelligent perception TA encrypts the image and transmits the encrypted image to the intelligent perception HAL through the shared memory.
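As a toy illustration of claim 6 only: the sketch below stands in for the encrypted transfer through the shared memory. The XOR operation is merely a placeholder for whatever cipher the intelligent perception TA actually uses, and the key, buffer contents, and names are invented.

```cpp
// Toy sketch of the encrypted hand-off in claim 6; XOR is a placeholder, not the real cipher.
#include <cstdint>
#include <cstdio>
#include <vector>

using Buffer = std::vector<uint8_t>;

Buffer xor_cipher(const Buffer& data, uint8_t key) {
    Buffer out(data.size());
    for (size_t i = 0; i < data.size(); ++i) out[i] = data[i] ^ key;  // placeholder only
    return out;
}

int main() {
    const uint8_t key = 0x5A;   // assumed key shared between the TA and the HAL
    Buffer image(16, 0x42);     // stand-in for the image read from the secure memory

    // Intelligent perception TA side: encrypt, then place the result in the shared memory.
    Buffer sharedMemory = xor_cipher(image, key);

    // Intelligent perception HAL side: read from the shared memory and decrypt.
    Buffer received = xor_cipher(sharedMemory, key);
    std::printf("HAL received %zu bytes, first byte 0x%02X\n",
                received.size(), (unsigned)received.front());
}
```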
7. The method according to any one of claims 1-6, wherein the terminal device further comprises a notification application and an intelligent perception service; and
the terminal device receiving the notification message and controlling the camera to acquire images at the first frame rate comprises:
the notification application receives the notification message and invokes the intelligent perception service to perform human eye tracking;
the intelligent perception service transmits a request for human eye tracking to the intelligent perception HAL;
the intelligent perception HAL transmits first indication information to the camera HAL through the camera service, wherein the first indication information carries an identification of the camera and the first frame rate;
the camera HAL transmits second indication information to the camera driver, wherein the second indication information carries the identification of the camera and a second frame length corresponding to the first frame rate, and the second frame length is used to configure the frame rate of the camera to the first frame rate;
the camera driver drives the camera to acquire images at the first frame rate based on the identification of the camera and the second frame length.
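A hypothetical end-to-end sketch of the start-up path in claim 7 is given below; the camera service hop is omitted, and every class, function, identifier, and the frame-length formula are assumptions made for this illustration rather than the real notification application, intelligent perception service, or camera HAL interfaces.

```cpp
// Hypothetical sketch of the start-up path in claim 7 (camera service omitted);
// all names, identifiers, and the frame-length formula are invented for illustration.
#include <cstdio>
#include <string>

struct FirstIndication  { std::string cameraId; int frameRateFps;     };
struct SecondIndication { std::string cameraId; int frameLengthLines; };

void camera_driver_start(const SecondIndication& ind) {
    // The camera driver configures the sensor identified by cameraId with the frame length.
    std::printf("driver: start %s with frame length %d lines\n",
                ind.cameraId.c_str(), ind.frameLengthLines);
}

void camera_hal_on_first_indication(const FirstIndication& ind) {
    // The camera HAL converts the first frame rate into the second frame length
    // (30000 lines/s is an assumed sensor timing) and forwards it to the camera driver.
    camera_driver_start(SecondIndication{ind.cameraId, 30000 / ind.frameRateFps});
}

void smart_perception_hal_request_eye_tracking() {
    // The intelligent perception HAL selects the camera and the first frame rate and
    // sends the first indication information towards the camera HAL.
    camera_hal_on_first_indication(FirstIndication{"front_rgb", 30});
}

void notification_app_on_message(const std::string& text) {
    std::printf("notification received: %s\n", text.c_str());
    // The notification application invokes the intelligent perception service, which
    // forwards the eye-tracking request to the intelligent perception HAL.
    smart_perception_hal_request_eye_tracking();
}

int main() { notification_app_on_message("new message"); }
```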
8. The method according to any one of claims 1-7, wherein the camera comprises a time-of-flight (TOF) camera and an RGB camera; and
the intelligent perception HAL transmitting the first indication information to the camera HAL comprises:
the intelligent perception HAL determines the ambient light illuminance of the terminal device;
when the ambient light illuminance is greater than a first value, the intelligent perception HAL transmits, to the camera HAL, indication information carrying an identification of the RGB camera;
when the ambient light illuminance is less than or equal to the first value, the intelligent perception HAL transmits, to the camera HAL, indication information carrying an identification of the TOF camera.
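The camera selection of claim 8 reduces to a threshold comparison, sketched below; the lux threshold and the enumeration names are assumptions, since the application does not fix the first value here.

```cpp
// Sketch of the camera selection in claim 8; the lux threshold and names are assumed.
#include <cstdio>

enum class CameraId { RGB, TOF };

// Returns which camera the intelligent perception HAL should identify in the
// indication information, based on the measured ambient light illuminance (lux).
CameraId select_camera(float ambientLux, float firstValue = 10.0f /* assumed threshold */) {
    // Bright scene: the RGB camera yields usable images for eye tracking.
    // Dark scene: fall back to the TOF camera, which does not depend on visible light.
    return (ambientLux > firstValue) ? CameraId::RGB : CameraId::TOF;
}

int main() {
    std::printf("500 lux -> %s\n", select_camera(500.0f) == CameraId::RGB ? "RGB" : "TOF");
    std::printf("  2 lux -> %s\n", select_camera(2.0f)   == CameraId::RGB ? "RGB" : "TOF");
}
```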
9. The method according to any one of claims 1-8, wherein the first message window comprises a portion of the content of the notification message, and after the third moment, the method further comprises:
at a fourth moment, the terminal device detects, based on the image acquired by the camera, that the gaze point of the human eyes is located within the first area, and displays a second message window, wherein the second message window comprises all of the content of the notification message, and the time interval between the fourth moment and the third moment is less than or equal to a first preset value.
10. The method according to claim 9, wherein at the fourth moment, the method further comprises: the terminal device controls the camera to acquire images at the second frame rate.
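For illustration, the timing condition of claims 9 and 10 can be sketched as follows; the two-second preset value and all names are assumptions, and the printed lines only stand in for displaying the second message window and switching the frame rate.

```cpp
// Sketch of the timing condition in claims 9 and 10; the preset value and names are assumed.
#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

struct NotificationUiState {
    Clock::time_point thirdMoment;                                // gaze re-entered the first area
    Clock::duration firstPresetValue = std::chrono::seconds(2);   // assumed first preset value

    // Called at the fourth moment, when gaze is again detected inside the first area.
    void onGazeStillInArea(Clock::time_point fourthMoment) {
        if (fourthMoment - thirdMoment <= firstPresetValue) {
            // Claim 9: expand to the second message window with the full notification content.
            std::printf("display second message window (full content)\n");
            // Claim 10: at the fourth moment the camera returns to the second (lower) frame rate.
            std::printf("camera: acquire images at the second frame rate\n");
        }
    }
};

int main() {
    NotificationUiState state;
    state.thirdMoment = Clock::now();
    state.onGazeStillInArea(Clock::now() + std::chrono::milliseconds(500));
}
```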
11. The method according to any one of claims 1-8, wherein the first message window comprises the entire content of the notification message, and after the third moment, the method further comprises:
at a fifth moment, the terminal device detects, based on the image acquired by the camera, that the gaze point of the human eyes is located within the first area, and displays a first interface, wherein the first interface is an interface of an application corresponding to the notification message, and the time interval between the fifth moment and the third moment is less than or equal to a first preset value.
12. The method according to any one of claims 1-8, characterized in that, after the third moment, the method further comprises:
at a sixth moment, the terminal device detects, based on the image acquired by the camera, that the gaze point of the human eyes is outside the first area, and displays a first interface, wherein the first interface does not include the notification message.
13. A terminal device, comprising: a memory for storing a computer program, and a processor for executing the computer program to perform the method according to any one of claims 1-12.
14. A computer-readable storage medium storing instructions that, when executed, cause a computer to perform the method according to any one of claims 1-12.
15. A computer program product comprising a computer program which, when run, causes an electronic device to perform the method according to any one of claims 1-12.
CN202310231426.7A 2023-02-27 2023-02-27 Frame rate adjusting method for eye movement tracking and related device Pending CN117148959A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310231426.7A CN117148959A (en) 2023-02-27 2023-02-27 Frame rate adjusting method for eye movement tracking and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310231426.7A CN117148959A (en) 2023-02-27 2023-02-27 Frame rate adjusting method for eye movement tracking and related device

Publications (1)

Publication Number Publication Date
CN117148959A true CN117148959A (en) 2023-12-01

Family

ID=88903333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310231426.7A Pending CN117148959A (en) 2023-02-27 2023-02-27 Frame rate adjusting method for eye movement tracking and related device

Country Status (1)

Country Link
CN (1) CN117148959A (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011111711A1 (en) * 2010-03-09 2011-09-15 株式会社Hdt Color display device and method
CN104853668A (en) * 2012-09-27 2015-08-19 感官运动仪器创新传感器有限公司 Tiled image based scanning for head position for eye and gaze tracking
US9775512B1 (en) * 2014-03-19 2017-10-03 Christopher W. Tyler Binocular eye tracking from video frame sequences
CN106797460A (en) * 2014-09-22 2017-05-31 三星电子株式会社 The reconstruction of 3 D video
US20210169417A1 (en) * 2016-01-06 2021-06-10 David Burton Mobile wearable monitoring systems
WO2017124899A1 (en) * 2016-01-20 2017-07-27 努比亚技术有限公司 Information processing method, apparatus and electronic device
CN107203270A (en) * 2017-06-06 2017-09-26 歌尔科技有限公司 VR image processing methods and device
CN107608514A (en) * 2017-09-20 2018-01-19 维沃移动通信有限公司 Information processing method and mobile terminal
CN209514548U (en) * 2018-12-04 2019-10-18 塔普翊海(上海)智能科技有限公司 AR searcher, the articles search system based on AR searcher
CN109327626A (en) * 2018-12-12 2019-02-12 Oppo广东移动通信有限公司 Image-pickup method, device, electronic equipment and computer readable storage medium
CN110245601A (en) * 2019-06-11 2019-09-17 Oppo广东移动通信有限公司 Eyeball tracking method and Related product
CN110221696A (en) * 2019-06-11 2019-09-10 Oppo广东移动通信有限公司 Eyeball tracking method and Related product
CN110399039A (en) * 2019-07-03 2019-11-01 武汉子序科技股份有限公司 A kind of actual situation scene fusion method based on eye-tracking
CN112104905A (en) * 2020-07-06 2020-12-18 聚好看科技股份有限公司 Server, display equipment and data transmission method
CN114302088A (en) * 2020-09-22 2022-04-08 Oppo广东移动通信有限公司 Frame rate adjusting method and device, electronic equipment and storage medium
US11501419B1 (en) * 2021-06-03 2022-11-15 Baylor University System and method for displaying super saturated color
CN113485546A (en) * 2021-06-29 2021-10-08 歌尔股份有限公司 Control method of wearable device, wearable device and readable storage medium
CN115016869A (en) * 2021-10-22 2022-09-06 荣耀终端有限公司 Frame rate adjusting method, terminal equipment and frame rate adjusting system
CN114047964A (en) * 2022-01-13 2022-02-15 麒麟软件有限公司 Method for enabling Android-supported camera to be hot-plugged when Linux is compatible with Android system
CN115079886A (en) * 2022-07-21 2022-09-20 荣耀终端有限公司 Two-dimensional code recognition method, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
WO2021120914A1 (en) Interface element display method and electronic device
EP3958117A1 (en) User interface layout method and electronic device
EP4224797A1 (en) Method and apparatus for processing push message
WO2020107463A1 (en) Electronic device control method and electronic device
WO2022095744A1 (en) Vr display control method, electronic device, and computer readable storage medium
CN115079886B (en) Two-dimensional code recognition method, electronic device, and storage medium
CN112991494A (en) Image generation method and device, computer equipment and computer readable storage medium
WO2022160991A1 (en) Permission control method and electronic device
CN113723397B (en) Screen capturing method and electronic equipment
WO2023005751A1 (en) Rendering method and electronic device
WO2022042163A1 (en) Display method applied to electronic device, and electronic device
CN117148959A (en) Frame rate adjusting method for eye movement tracking and related device
CN113763517B (en) Facial expression editing method and electronic equipment
CN116382896B (en) Calling method of image processing algorithm, terminal equipment, medium and product
CN114637392A (en) Display method and electronic equipment
CN115623318B (en) Focusing method and related device
CN116204059B (en) Frame rate adjustment method and device for eye movement tracking
CN116688494B (en) Method and electronic device for generating game prediction frame
CN116672707B (en) Method and electronic device for generating game prediction frame
US20240020152A1 (en) Method for loading component of application and related apparatus
CN114816311B (en) Screen movement method and device
CN116700655B (en) Interface display method and electronic equipment
US20230368429A1 (en) Electronic apparatus and controlling method thereof
WO2023072113A1 (en) Display method and electronic device
WO2023078133A1 (en) Video playback method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination