CN113688019B - Response time duration detection method and device - Google Patents

Response time duration detection method and device

Info

Publication number
CN113688019B
Authority
CN
China
Prior art keywords
image frame
user operation
operation event
occurrence time
indication information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110912153.3A
Other languages
Chinese (zh)
Other versions
CN113688019A (en)
Inventor
夏兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202110912153.3A
Publication of CN113688019A
Application granted
Publication of CN113688019B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409 Recording or statistical evaluation of computer activity for performance assessment
    • G06F11/3419 Recording or statistical evaluation of computer activity for performance assessment by assessing time
    • G06F11/3438 Recording or statistical evaluation of user activity: monitoring of user actions

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

The present application provides a response duration detection method and apparatus. In the method, the occurrence time of a user operation event is written into the image frame information corresponding to an image frame, so that a detection module can obtain the occurrence time of the user operation event from the image frame information. The detection module can then detect the response duration between the user operation and the image frame display based on the occurrence time of the user operation event and the drawing completion time of the first image frame corresponding to that event. This realizes automatic detection of the response duration between a user operation and the image frame display and improves the device's ability to check its own performance.

Description

Response time duration detection method and device
Technical Field
The present application relates to the field of terminal devices, and in particular, to a response duration detection method and apparatus.
Background
Terminal devices place increasing emphasis on the user experience. For a touch-screen terminal such as a touch-screen mobile phone, the response duration between a user's touch operation and the corresponding screen display is an important indicator of the phone's performance. At present, this response duration usually has to be measured with the help of external equipment.
Disclosure of Invention
To address the above problem, the present application provides a response duration detection method and apparatus. In the method, the response duration detection apparatus determines the response duration between a user operation and the display of an image frame based on the acquired occurrence time of the operation event and the time at which the image frame finishes drawing, thereby providing a convenient and fast way to detect the response duration automatically.
In a first aspect, the present application provides a response duration detection apparatus. The apparatus comprises a sensor module, a sensing module, a view basic capability implementation module, and a detection module. The sensor module is configured to output, in response to a received first user operation, a first user operation event to the sensing module and the view basic capability implementation module. The sensing module is configured to acquire, in response to the acquired first user operation event, the occurrence time of the first user operation event, and is further configured to write the occurrence time of the first user operation event into first image frame information. The view basic capability implementation module is configured to draw a first image frame in response to the acquired first user operation event, where the first image frame corresponds to the first image frame information, and, after the first image frame is drawn, to output first indication information to the detection module, the first indication information indicating that drawing of the first image frame is complete. The detection module is configured to acquire, in response to the received first indication information, the drawing completion time of the first image frame, and to acquire the occurrence time of the first user operation event from the first image frame information. The detection module is further configured to send second indication information to a server when it detects that the difference between the occurrence time of the first user operation event and the drawing completion time of the first image frame is greater than a set threshold, where the second indication information indicates that the response duration of the electronic device is abnormal. In this way, the apparatus writes the occurrence time of the user operation event into the image frame information to record the correspondence between the user operation event and the image frame. The apparatus can therefore obtain the response duration between the user operation and the image frame display, namely the difference between the occurrence time of the user operation event and the drawing completion time of the image frame, from the occurrence time carried in the image frame information and the drawing completion time of the image frame. This provides a convenient and fast detection mode, realizes automatic detection of the response duration without external equipment, and improves the device's ability to check its own response performance.
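To make the interaction above concrete, the following Java sketch illustrates, under stated assumptions, how a sensing module might stamp the operation event occurrence time into image frame information and how a detection module might compare it against the frame's drawing completion time. All class names, method names, and the 300 ms threshold are illustrative assumptions; they are not taken from the patent or from any platform API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only; module boundaries and names are assumptions.
public class ResponseDurationSketch {
    /** Image frame information shared between the modules, keyed by frame id. */
    static class FrameInfo {
        volatile long operationEventOccurrenceTimeMs; // written by the sensing module
    }

    static final long SET_THRESHOLD_MS = 300; // hypothetical "set threshold"
    static final Map<Long, FrameInfo> frameInfoTable = new ConcurrentHashMap<>();

    /** Sensing module: write the occurrence time of the user operation event into the frame info. */
    static void onUserOperationEvent(long frameId, long eventOccurrenceTimeMs) {
        frameInfoTable.computeIfAbsent(frameId, id -> new FrameInfo())
                      .operationEventOccurrenceTimeMs = eventOccurrenceTimeMs;
    }

    /** Detection module: called when the view module indicates that the frame finished drawing. */
    static void onFrameDrawComplete(long frameId, long drawCompletionTimeMs) {
        FrameInfo info = frameInfoTable.remove(frameId);
        if (info == null) {
            return; // frame was not triggered by a user operation
        }
        long responseDurationMs = drawCompletionTimeMs - info.operationEventOccurrenceTimeMs;
        if (responseDurationMs > SET_THRESHOLD_MS) {
            // corresponds to sending the "second indication information" to a server
            System.out.println("Response duration abnormal: " + responseDurationMs + " ms");
        }
    }

    public static void main(String[] args) {
        onUserOperationEvent(1L, 1_000L); // user operation event occurred at t = 1000 ms
        onFrameDrawComplete(1L, 1_450L);  // first frame drawn at t = 1450 ms -> 450 ms > threshold
    }
}
```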
Illustratively, the second indication information may include the specific value of the response duration.
Illustratively, if the response duration is less than the set threshold, detection continues with the response duration of the next image frame.
For example, when the first user operation is a click operation, the occurrence time of the first user operation event is the time corresponding to the finger-lift (touch-up) action.
According to the first aspect, the sensing module is specifically configured to determine that a time when the first user operation event is received is an occurrence time of the first user operation event. In this way, the device can acquire the specific time point to be detected.
According to the first aspect, or any implementation manner of the first aspect, the first image frame information includes an operation event occurrence time field, and the sensing module is specifically configured to write the occurrence time of the first user operation event into the operation event occurrence time field of the first image frame information in the memory. In this way, each module in the device can read or write the first image frame information from or into the memory.
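The field-in-memory mechanism can be pictured as a fixed record layout that the sensing module writes and the detection module later reads. The layout below is a purely hypothetical example; the patent does not specify offsets, field sizes, or any concrete memory format.

```java
import java.nio.ByteBuffer;

// Hypothetical layout of one image-frame-information record; offsets and sizes are assumptions.
public final class FrameInfoRecord {
    private static final int OFFSET_FRAME_ID = 0;              // 8 bytes
    private static final int OFFSET_EVENT_OCCURRENCE_TIME = 8; // 8 bytes: operation event occurrence time field
    public static final int RECORD_SIZE = 16;                  // total record size in bytes

    private final ByteBuffer memory; // backing buffer shared by the modules (capacity >= RECORD_SIZE)

    public FrameInfoRecord(ByteBuffer sharedMemory) {
        this.memory = sharedMemory;
    }

    /** Sensing module: write the occurrence time of the user operation event. */
    public void writeEventOccurrenceTime(long timeMs) {
        memory.putLong(OFFSET_EVENT_OCCURRENCE_TIME, timeMs);
    }

    /** Detection module: read the same field once the frame has been drawn. */
    public long readEventOccurrenceTime() {
        return memory.getLong(OFFSET_EVENT_OCCURRENCE_TIME);
    }
}
```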
According to the first aspect, or any implementation manner of the first aspect, the detection module is specifically configured to, in response to the received first indication information, obtain the occurrence time of the first user operation event from the operation event occurrence time field of the first image frame information in the memory. In this way, the detection module can read the image frame information from the memory and read the occurrence time of the first user operation event from its operation event occurrence time field, which reduces data transmission between the modules and thus the transmission bandwidth they occupy.
According to the first aspect, or any implementation manner of the first aspect, the sensing module is further configured to write the occurrence time of the first user operation event into second image frame information. The view basic capability implementation module is further configured to draw a second image frame, where the second image frame corresponds to the second image frame information, and, after the second image frame is drawn, to output third indication information to the detection module, the third indication information indicating that drawing of the second image frame is complete. The detection module is further configured to acquire, in response to the received third indication information, the drawing completion time of the second image frame and the occurrence time of the first user operation event from the second image frame information. When the detection module detects that the occurrence time of the first user operation event in the second image frame information is the same as the occurrence time of the first user operation event in the first image frame information, the detection module does not detect the response duration of the second image frame. In this way, the sensing module can write the occurrence time of the first user operation event into the image frame information of every image frame corresponding to that user operation. Correspondingly, when the detection module detects that the operation event occurrence time in the image frame information of a frame is the same as the occurrence time it acquired previously, it does not process that frame. Optionally, not processing means that the detection module skips the response duration of the current image frame and waits to detect that of the next image frame. In other words, the detection module only detects the response duration of the first image frame corresponding to a user operation and does not detect the response duration of the other image frames corresponding to that operation.
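The rule that only the first image frame of a user operation is measured can be expressed as a small filter keyed on the occurrence time carried in each frame's information. The sketch below is an assumption about one possible implementation, not the patent's own code.

```java
// Sketch of the de-duplication rule: measure only the first frame drawn for each user operation.
public class FirstFrameFilter {
    private long lastMeasuredEventTimeMs = -1; // occurrence time most recently measured

    /** Returns true only for the first frame whose frame info carries this occurrence time. */
    public boolean shouldMeasure(long eventOccurrenceTimeMs) {
        if (eventOccurrenceTimeMs == lastMeasuredEventTimeMs) {
            return false; // same user operation, later frame: skip response duration detection
        }
        lastMeasuredEventTimeMs = eventOccurrenceTimeMs;
        return true;      // new user operation event: detect its response duration
    }
}
```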
According to the first aspect, or any implementation manner of the first aspect, the sensor module is further configured to output, in response to the received second user operation, a second user operation event to the sensing module and the view basic capability implementation module. The sensing module is further configured to acquire, in response to the acquired second user operation event, the occurrence time of the second user operation event, and to write the occurrence time of the second user operation event into third image frame information. The view basic capability implementation module is configured to draw a third image frame in response to the acquired second user operation event, where the third image frame corresponds to the third image frame information, and, after the third image frame is drawn, to output fourth indication information to the detection module, the fourth indication information indicating that drawing of the third image frame is complete. The detection module is configured to acquire, in response to the received fourth indication information, the drawing completion time of the third image frame, and to acquire the occurrence time of the second user operation event from the third image frame information. The detection module is further configured to detect that the occurrence time of the second user operation event in the third image frame information is different from the occurrence time of the first user operation event in the second image frame information, and to detect whether the difference between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than a set threshold. The detection module is further configured to send fifth indication information to the server when it detects that the difference between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than the set threshold, where the fifth indication information indicates that the response duration of the electronic device is abnormal. In this way, the device can detect the response duration between each user operation and the first image frame displayed after that operation, so that response delays of the device can be monitored in real time.
According to the first aspect, or any implementation manner of the first aspect, the detection module is specifically configured to determine that the time when the first indication information is received is the drawing completion time of the first image frame. In this way, the view basic capability implementation module can inform the detection module of the drawing completion time of the image frame by way of a triggering signal.
According to the first aspect, or any one of the above implementation manners of the first aspect, the first indication information includes the drawing completion time of the first image frame. In this way, the view basic capability implementation module can inform the detection module of the drawing completion time of the image frame by carrying that time in the indication information.
According to a first aspect, or any implementation manner of the first aspect above, the first user operation is a click operation, a slide operation, a zoom operation, or a double click operation. Therefore, the response duration detection mode in the application can be applied to different user operation scenes.
In a second aspect, the present application provides a response duration detection apparatus. The apparatus comprises a sensor module, a view basic capability implementation module, and a detection module. The sensor module is configured to output, in response to a received first user operation, a first user operation event to the view basic capability implementation module. The view basic capability implementation module is configured to draw a first image frame in response to the acquired first user operation event and to acquire the occurrence time of the first user operation event, and, after the first image frame is drawn, to output first indication information to the detection module, where the first indication information includes the occurrence time of the first user operation event and indicates that drawing of the first image frame is complete. The detection module is configured to acquire, in response to the received first indication information, the drawing completion time of the first image frame and the occurrence time of the first user operation event, and to send second indication information to a server when it detects that the difference between the occurrence time of the first user operation event and the drawing completion time of the first image frame is greater than a set threshold, where the second indication information indicates that the response duration of the electronic device is abnormal. In this way, the present application provides a convenient and fast response duration detection mode that realizes automatic detection of the response duration without external equipment, and thus improves the device's ability to check its own response performance.
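One possible reading of this second aspect is that the event time travels inside the draw-complete indication rather than inside the image frame information. The Java 16+ sketch below illustrates that variant; the record type, field names, and the threshold value are invented for illustration and are not defined by the patent.

```java
// Sketch of the second aspect: the indication itself carries the operation event occurrence time.
public class IndicationBasedDetector {
    /** Hypothetical indication emitted by the view basic capability implementation module after drawing. */
    public record DrawCompleteIndication(long eventOccurrenceTimeMs, long drawCompletionTimeMs) {}

    private static final long SET_THRESHOLD_MS = 300; // assumed threshold
    private long lastEventTimeMs = -1;

    public void onIndication(DrawCompleteIndication indication) {
        if (indication.eventOccurrenceTimeMs() == lastEventTimeMs) {
            return; // not the first frame of this user operation
        }
        lastEventTimeMs = indication.eventOccurrenceTimeMs();
        long responseDurationMs = indication.drawCompletionTimeMs() - indication.eventOccurrenceTimeMs();
        if (responseDurationMs > SET_THRESHOLD_MS) {
            System.out.println("Response duration abnormal: " + responseDurationMs + " ms");
        }
    }
}
```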
According to the second aspect, the view basic capability implementation module is specifically configured to determine that the time when the first user operation event is received is the occurrence time of the first user operation event.
According to the second aspect, or any implementation manner of the second aspect above, the view basic capability implementation module is further configured to draw a second image frame, where the second image frame corresponds to second image frame information, and, after the second image frame is drawn, to output third indication information to the detection module, where the third indication information includes the occurrence time of the first user operation event and indicates that drawing of the second image frame is complete. The detection module is further configured to acquire, in response to the received third indication information, the drawing completion time of the second image frame and the occurrence time of the first user operation event. When the detection module detects that the occurrence time of the first user operation event included in the third indication information is the same as the occurrence time of the first user operation event included in the first indication information, the detection module does not detect the response duration of the second image frame.
According to the second aspect, or any implementation manner of the second aspect above, the sensor module is further configured to output a second user operation event to the view basic capability implementation module in response to the received second user operation. The view basic capability implementation module is configured to draw a third image frame in response to the acquired second user operation event and to acquire the occurrence time of the second user operation event, and, after the third image frame is drawn, to output fourth indication information to the detection module, where the fourth indication information includes the occurrence time of the second user operation event and indicates that drawing of the third image frame is complete. The detection module is configured to acquire, in response to the received fourth indication information, the drawing completion time of the third image frame and the occurrence time of the second user operation event, to detect whether the occurrence time of the second user operation event included in the fourth indication information is different from the occurrence time of the first user operation event included in the third indication information, and to detect whether the difference between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than a set threshold. The detection module sends fifth indication information to the server when it detects that the difference between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than the set threshold, where the fifth indication information indicates that the response duration of the electronic device is abnormal.
According to the second aspect, or any implementation manner of the second aspect above, the detection module is specifically configured to determine that the time when the first indication information is received is a drawing completion time of the first image frame.
According to a second aspect, or any implementation manner of the second aspect above, the first indication information includes a drawing completion time of the first image frame.
According to a second aspect, or any implementation manner of the second aspect above, the first user operation is a click operation, a slide operation, or a zoom operation.
The second aspect and any implementation manner of the second aspect correspond, respectively, to the first aspect and any implementation manner of the first aspect. For the technical effects corresponding to the second aspect and any of its implementation manners, reference may be made to the technical effects of the first aspect and the corresponding implementation manners; details are not repeated here.
In a third aspect, the present application provides a response duration detection method. The method is applied to a response duration detection device, the device comprises a sensor module, a sensing module, a view basic capability implementation module and a detection module, and the method comprises the following steps: the sensor module responds to the received first user operation and outputs a first user operation event to the sensing module and the view basic capability implementation module; the sensing module responds to the acquired first user operation event and acquires the occurrence time of the first user operation event; writing the occurrence time of the first user operation event into the first image frame information; the view basic capability implementation module responds to the acquired first user operation event and draws a first image frame; the first image frame corresponds to first image frame information; after the first image frame is drawn, the view basic capability implementation module outputs first indication information to the detection module, wherein the first indication information is used for indicating that the first image frame is drawn; the detection module responds to the received first indication information, acquires the drawing completion time of a first image frame, and acquires the occurrence time of a first user operation event from the first image frame information; the detection module detects that a difference value between the occurrence time of the first user operation event and the drawing completion time of the first image frame is larger than a set threshold value, and sends second indication information to the server, wherein the second indication information is used for indicating that the response duration of the electronic equipment is abnormal.
According to a third aspect, the acquiring, by the sensing module, an occurrence time of the first user operation event in response to the acquired first user operation event includes: the sensing module determines that the time of receiving the first user operation event is the occurrence time of the first user operation event.
According to the third aspect, or any implementation manner of the third aspect above, the first image frame information includes an occurrence time field of an operation event, and the writing, by the sensing module, the occurrence time of the first user operation event into the first image frame information includes: the sensing module writes the occurrence time of the first user operation event into an operation event occurrence time field of the first image frame information in the memory.
According to the third aspect, or any implementation manner of the third aspect above, the acquiring, by the detection module, the occurrence time of the first user operation event from the first image frame information includes: the detection module responds to the received first indication information and obtains the occurrence time of the first user operation event from the operation event occurrence time field of the first image frame in the memory.
According to the third aspect, or any one of the above implementation manners of the third aspect, the method further includes: the sensing module writes the occurrence time of the first user operation event into the second image frame information; the view basic capability implementation module draws a second image frame, and the second image frame corresponds to the information of the second image frame; after the second image frame is drawn, the view basic capability implementation module outputs third indication information to the detection module, wherein the third indication information is used for indicating that the second image frame is drawn; the detection module responds to the received third indication information, acquires the drawing completion time of the second image frame, and acquires the occurrence time of the first user operation event from the second image frame information; the detection module detects that the occurrence time of the first user operation event in the second image frame information is the same as the occurrence time of the first user operation event in the first image frame information, and the detection module does not detect the response duration of the second image frame.
According to the third aspect, or any one of the above implementation manners of the third aspect, the method further includes: the sensor module responds to the received second user operation and outputs a second user operation event to the sensing module and the view basic capability implementation module; the sensing module responds to the acquired second user operation event and acquires the occurrence time of the second user operation event; the sensing module writes the occurrence time of the second user operation event into the third image frame information; the view basic capability implementation module responds to the acquired second user operation event and draws a third image frame; the third image frame corresponds to the third image frame information; after the third image frame is drawn, the view basic capability implementation module outputs fourth indication information to the detection module, wherein the fourth indication information is used for indicating that the third image frame is drawn; the detection module responds to the received fourth indication information, acquires the drawing completion time of a third image frame, and acquires the occurrence time of a second user operation event from the third image frame information; the detection module detects that the occurrence time of a second user operation event in the third image frame information is different from the occurrence time of a first user operation event in the second image frame information, and detects whether a difference value between the occurrence time of the second user operation event and the drawing completion time of the third image frame is larger than a set threshold value. And the detection module detects that the difference value between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than a set threshold value, and sends fifth indication information to the server, wherein the fifth indication information is used for indicating that the response time length of the electronic equipment is abnormal.
According to the third aspect, or any implementation manner of the third aspect above, the obtaining, by the detection module, a drawing completion time of the first image frame in response to the received first indication information includes: the detection module determines that the time when the first indication information is received is the drawing completion time of the first image frame.
According to the third aspect, or any implementation manner of the third aspect above, the first indication information includes a drawing completion time of the first image frame.
According to a third aspect, or any implementation manner of the third aspect above, the first user operation is a click operation, a slide operation, a zoom operation, or a double click operation.
In a fourth aspect, the present application provides a response duration detection method. The method is applied to a response duration detection device, the device comprises a sensor module, a view basic capability implementation module and a detection module, and the method comprises the following steps: the sensor module responds to the received first user operation and outputs a first user operation event to the view basic capability implementation module; the view basic capability implementation module responds to the acquired first user operation event, draws a first image frame and acquires the occurrence time of the first user operation event; after the first image frame is drawn, the view basic capability implementation module outputs first indication information to the detection module, wherein the first indication information comprises the occurrence time of a first user operation event, and the first indication information is used for indicating that the first image frame is drawn completely; the detection module responds to the received first indication information and acquires the drawing completion time of the first image frame and the occurrence time of a first user operation event; the detection module detects that a difference value between the occurrence time of the first user operation event and the drawing completion time of the first image frame is larger than a set threshold value, and sends second indication information to the server, wherein the second indication information is used for indicating that the response duration of the electronic equipment is abnormal.
According to the fourth aspect, the acquiring, by the view basic capability implementation module, of the occurrence time of the first user operation event includes: the view basic capability implementation module determines that the time of receiving the first user operation event is the occurrence time of the first user operation event.
According to the fourth aspect, or any implementation manner of the fourth aspect above, the view basic capability implementing module draws a second image frame, where the second image frame corresponds to second image frame information; after the second image frame is drawn, the view basic capability implementation module outputs third indication information to the detection module, wherein the third indication information comprises the occurrence time of the first user operation event and is used for indicating that the second image frame is drawn; the detection module responds to the received third indication information and acquires the drawing completion time of the second image frame and the occurrence time of the first user operation event; the detection module detects that the occurrence time of the first user operation event included in the third indication information is the same as the occurrence time of the first user operation event included in the first indication information, and the detection module does not detect the response time length of the second image frame.
According to the fourth aspect, or any implementation manner of the fourth aspect above, the sensor module outputs a second user operation event to the view basic capability implementation module in response to the received second user operation; the view basic capability implementation module draws a third image frame in response to the acquired second user operation event and acquires the occurrence time of the second user operation event; after the third image frame is drawn, the view basic capability implementation module outputs fourth indication information to the detection module, where the fourth indication information includes the occurrence time of the second user operation event and indicates that drawing of the third image frame is complete; the detection module acquires, in response to the received fourth indication information, the drawing completion time of the third image frame and the occurrence time of the second user operation event; the detection module detects that the occurrence time of the second user operation event included in the fourth indication information is different from the occurrence time of the first user operation event included in the third indication information, and detects whether the difference between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than a set threshold. When the detection module detects that the difference between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than the set threshold, it sends fifth indication information to the server, where the fifth indication information indicates that the response duration of the electronic device is abnormal.
According to the fourth aspect, or any implementation manner of the fourth aspect above, the acquiring, by the detection module, in response to the received first indication information, of the drawing completion time of the first image frame and the occurrence time of the first user operation event includes: the detection module determines that the time when the first indication information is received is the drawing completion time of the first image frame.
According to a fourth aspect, or any implementation manner of the fourth aspect above, the first indication information includes a drawing completion time of the first image frame.
According to a fourth aspect or any implementation manner of the fourth aspect above, the first user operation is a click operation, a slide operation, or a zoom operation.
In a fifth aspect, the present application provides a computer readable medium for storing a computer program comprising instructions for performing the method of the third aspect or any possible implementation manner of the third aspect.
In a sixth aspect, the present application provides a computer readable medium for storing a computer program comprising instructions for performing the method of the fourth aspect or any possible implementation manner of the fourth aspect.
In a seventh aspect, the present application provides a computer program comprising instructions for carrying out the method of the third aspect or any possible implementation manner of the third aspect.
In an eighth aspect, the present application provides a computer program comprising instructions for carrying out the method of the fourth aspect or any possible implementation manner of the fourth aspect.
In a ninth aspect, the present application provides a chip comprising a processing circuit, a transceiver pin. Wherein the transceiver pin and the processing circuit are in communication with each other via an internal connection path, and the processing circuit performs the method of the third aspect or any possible implementation manner of the third aspect to control the receiving pin to receive signals and to control the transmitting pin to transmit signals.
In a tenth aspect, the present application provides a chip comprising a processing circuit, a transceiver pin. Wherein the transceiver pin and the processing circuit are in communication with each other via an internal connection path, and the processing circuit performs the method of the fourth aspect or any possible implementation manner of the fourth aspect to control the receiving pin to receive signals and to control the sending pin to send signals.
Drawings
Fig. 1 is a schematic diagram of a hardware configuration of an exemplary electronic device;
FIG. 2 is a schematic diagram of a software structure of an exemplary electronic device;
FIG. 3 is an exemplary user interface diagram;
FIG. 4 is an exemplary illustration of a user operation;
FIG. 5 is an exemplary user interface diagram;
FIG. 6 is a schematic diagram illustrating an exemplary response duration detection process;
FIGS. 7a-7b are exemplary module interaction diagrams;
FIG. 8 is an exemplary interaction diagram of a mobile phone and a cloud;
FIG. 9 is an exemplary module interaction diagram;
FIG. 10 is a schematic diagram illustrating an exemplary response duration detection process;
FIG. 11 is an exemplary module interaction diagram;
FIG. 12 is an exemplary module interaction diagram;
FIG. 13 is an exemplary module interaction diagram;
FIG. 14 is a schematic structural diagram of an exemplary apparatus.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first" and "second," and the like, in the description and in the claims of the embodiments of the present application are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first target object and the second target object, etc. are specific sequences for distinguishing different target objects, rather than describing target objects.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the embodiments of the present application, the meaning of "a plurality" means two or more unless otherwise specified. For example, a plurality of processing units refers to two or more processing units; the plurality of systems refers to two or more systems.
Fig. 1 shows a schematic structural diagram of an electronic device 100. It should be understood that the electronic device 100 shown in fig. 1 is only one example of an electronic device, and that the electronic device 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
For example, in the embodiment of the present application, the electronic device 100 may communicate with the cloud through the mobile communication module 150 or the wireless communication module 160. For example, the electronic device 100 may send the corresponding delay time to the cloud through the mobile communication module 150. The cloud may be a server cluster consisting of a plurality of servers.
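For illustration, an abnormal response report could be uploaded to such a cloud endpoint over HTTP as in the sketch below; the endpoint URL and the JSON payload are placeholders invented for this example and are not specified by the patent.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of reporting an abnormal response duration to the cloud; URL and payload are placeholders.
public class CloudReporter {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    /** Sends a small JSON report and returns the HTTP status code. */
    public static int reportAbnormalResponse(long responseDurationMs) throws Exception {
        String body = "{\"event\":\"response_duration_abnormal\",\"durationMs\":" + responseDurationMs + "}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/perf/report")) // placeholder endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode();
    }
}
```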
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a variety of types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon opening the flip cover can then be set according to the detected opening and closing state of the holster or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The acceleration sensor may also be used to recognize the posture of the electronic device, and is applied in landscape/portrait switching, pedometers, and other applications.
The distance sensor 180F is used for measuring distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, in a photographing scenario, the electronic device 100 may use the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light to the outside through the light emitting diode. The electronic device 100 detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in a holster mode or a pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access an application lock, take a photograph with a fingerprint, answer an incoming call with a fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, electronic device 100 implements a temperature processing strategy using the temperature detected by temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 performs a reduction in performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold to avoid the low temperature causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 performs boosting on the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor may pass the detected touch operation to the application processor to determine a touch event type, which may include, for example, a swipe, a click, a long press, and the like. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input, and generate a key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. Touch operations applied to different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application.
The layered architecture of the electronic device 100 divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures. For example, in the embodiment of the present application, the view system may be further configured to detect a response duration between a user operation and an interface display in the process of displaying the application interface.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, and can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll bar text in the status bar at the top of the system, such as a notification of an application running in the background, or present a notification on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functional interfaces that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, etc. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, a sensor driver, and the like. For example, the sensor driver may be used to output a detection signal of a sensor (e.g., a touch sensor) to the view system, such that the view system displays a corresponding application interface in response to the detection signal.
It is to be understood that the components contained in the system framework layer, the system library and the runtime layer shown in fig. 2 do not constitute a specific limitation of the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
Fig. 3 is an exemplary user interface diagram. Referring to FIG. 3, display interface 301 illustratively includes one or more controls. Controls include, but are not limited to: network controls, power controls, application icon controls, and the like. Exemplary application icon controls include, but are not limited to: a video application icon control, a weather application icon control, a settings application icon control 302, and the like. In the embodiment of the present application, the description takes as an example the case where the user clicks the settings icon control and the interface display response time is detected.
Illustratively, the user clicks on the settings icon control 302. Referring to fig. 4, the user clicks the settings icon control 302 and then lifts his hand. Referring to fig. 5, the mobile phone illustratively displays a setup application interface 501 in response to the received user click operation. It should be noted that there may be a certain response time between the user click operation and the display of the setup application interface 501. The embodiment of the application provides a detection manner that can be used to detect the response time between the user click operation and the display of the setup application interface 501, so as to accurately locate performance problems of the mobile phone. In the embodiment of the present application, only the user click operation is described as an example. In other embodiments, the detection manner in the embodiment of the present application may be applied to a slide operation, a pinch operation (also referred to as a zoom operation), a double-click operation, and the like, to detect the response delay between a user operation and an interface display. Optionally, the detection manner in the embodiment of the present application may also be applied to a scenario in which the electronic device is operated with mid-air gestures (e.g., a mid-air pinch gesture, a mid-air slide gesture, etc.), to detect the response delay between a mid-air gesture and an interface display.
Fig. 6 is a schematic diagram illustrating an exemplary flow of detecting the response time duration in conjunction with the scenarios shown in fig. 3 to 5. Referring to fig. 6, the method specifically includes:
S101, the touch sensor transmits a detection signal to the inputDispacher.
For example, in the embodiment of the present application, the view system includes but is not limited to: an input message handler module, a view basic capability implementation module, a detection module, and a reporting module.
As shown in fig. 3-4, the user clicks on the settings icon control 302 and raises his hand to trigger the settings application. Referring to fig. 7a, the touch sensor illustratively outputs a detection signal to the sensor driver in response to a received user operation. The sensor driver outputs a detection signal to a view system (specifically, an inputDispacher) in response to the received detection signal.
S102, the inputDispacher detects that the user operation is a click event.
For example, the inputDispacher may determine a touch event corresponding to the user operation based on a detection signal input by the touch sensor. For example, in this embodiment, the inputDispacher may determine that the current user operation is a click event based on a detection signal input by the touch sensor.
It should be noted that, as shown in fig. 3 and 4, there is a time difference between the user clicking the setting icon control 302 and the user raising his hand, which may be 500ms, for example. In the embodiment of the present application, the inputDispacher detects the user raising event, which may also be understood as the end time of the current click event, and then triggers the subsequent processes, for example, execute S103 and S105.
In one possible implementation, the sensor driver may also output the detection signal to the inputDispacher and ViewRootimpl. After the inputDispacher determines the corresponding event type based on the detection signal, S105 is performed. Illustratively, the ViewRootimpl may perform the steps performed by the inputDispacher in S102 in response to the received detection signal, that is, the ViewRootimpl may determine that the user operation is a click event based on the detection signal, and further obtain the occurrence time of the lift event.
S103, the inputDispacher sends a hand raising event to the ViewRootimpl.
Illustratively, as described above, the inputDispacher sends a lift event to the ViewRootimpl after detecting a user lift. The hand-up event is used for indicating that a user hand-up event currently exists so as to trigger the ViewRootimpl to execute a subsequent response duration detection process.
S104, the ViewRootimpl records the occurrence time of the hand raising event.
Illustratively, after the ViewRootimpl receives the hand raising event input by the inputDispacher, the time of receiving the hand raising event is recorded as the occurrence time of the hand raising event. It should be noted that there may be a certain delay in data transmission between ViewRootimpl and inputDispacher, and the delay effect is small and negligible.
In another example, the inputDispacher sends the lift event and the occurrence time corresponding to the lift event to the ViewRootimpl after detecting the user lift. The occurrence time of the lift event is the moment at which the inputDispacher detects that the user lifts his hand. Accordingly, the ViewRootimpl can acquire the occurrence time of the lift event from the inputDispacher and record the acquired occurrence time.
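For ease of understanding, the following is a minimal sketch of how the recording described above might be implemented. The class and method names (e.g., LiftEventRecorder) are hypothetical, introduced only for illustration, and do not correspond to an actual interface of the embodiment or of the Android framework.

    // Hypothetical sketch: the two ways the ViewRootimpl may obtain the
    // occurrence time of the hand-up (lift) event described above.
    public class LiftEventRecorder {
        private long liftEventTimeMillis = -1;

        // Variant 1: the dispatched event carries no timestamp,
        // so the time of receipt is recorded as the occurrence time.
        public void onLiftEventReceived() {
            liftEventTimeMillis = System.currentTimeMillis();
        }

        // Variant 2: the dispatcher attaches the moment it detected the lift,
        // and that timestamp is stored directly.
        public void onLiftEventReceived(long occurrenceTimeMillis) {
            liftEventTimeMillis = occurrenceTimeMillis;
        }

        public long getLiftEventTime() {
            return liftEventTimeMillis;
        }
    }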
It should be noted that, as described above, the detection manner in the embodiment of the present application is also applicable to response time length detection in a scene of a slide operation, a pinch operation, a double-click operation, or the like. For example, for a swipe operation, which is a continuous operation, the inputDispacher may determine a swipe event based on a detection signal sent by the touch sensor. The inputDispacher sends a slip event to the ViewRootimpl after detecting the slip event. After the ViewRootimpl receives the sliding event input by the inputDispacher, the subsequent processing procedure is the same as the processing of the hand-raising event, and is not described herein again.
S105, the inputDispacher sends a click event to the setting application.
Illustratively, in response to a detected user click event, the inputDispacher sends a click event to an application clicked by the user, e.g., a setup application, to indicate that there is currently a click event corresponding to the application (i.e., the setup application).
It should be noted that the execution order of S103 and S105 is not sequential, and the present application is not limited thereto.
S106, the setting application requests the ViewRootimpl to refresh the interface.
Illustratively, in response to the click event input by the inputDispacher, the setup application may send a request signal to the ViewRootimpl to request the ViewRootimpl to refresh the interface. In this embodiment, requesting the ViewRootimpl to refresh the interface may be understood as requesting the ViewRootimpl to display the interface of the setup application, i.e., to display the setup application interface 501 as shown in fig. 5.
S107, ViewRootimpl generates image frame 1.
Illustratively, in the embodiment of the present application, the frame rate (i.e., the number of image frames displayed per second) is 60 fps. That is, the ViewRootimpl can generate 60 image frames per second when image frames are to be displayed. For example, referring to fig. 5, in the process of displaying the setting application interface 501, the display interface of the mobile phone is switched from the desktop (i.e., the display interface 301) to the setting application interface 501, and the switching process is optionally presented as a display interface switching animation. For example, the setting application interface 501 may optionally be displayed in a manner that gradually expands upward from the bottom of the display interface 301. Therefore, in displaying the setting application interface 501, a plurality of image frames are actually played to present the switching animation of the setting application interface 501. Correspondingly, the ViewRootimpl generates the plurality of image frames corresponding to the interface switching effect. For example, the interface switching effect may include 60 image frames.
Illustratively, each time an image frame is generated, the ViewRootimpl performs S108, that is, the detection module is triggered to detect the occurrence time of the hand-up event.
Illustratively, referring to fig. 7b, after the ViewRootimpl generates image frame 1, the view system outputs the image frame to the display driver. The display driver can correspondingly process the image frame and output the image frame to the display. The display displays the image corresponding to the image frame 1 on the display interface.
S108, the ViewRootimpl sends the occurrence time of the hand-raising event to the detection module.
Illustratively, after the ViewRootimpl generates the image frame 1, an indication signal is sent to the detection module. The indication signal may include the time of occurrence of the hand-up event. The detection module receives the indication signal and determines that the time when the indication signal is received is the drawing completion time of the image frame 1. Accordingly, the detection module may acquire the occurrence time of the hand-up event and the drawing completion time of the image frame 1.
Optionally, the instruction signal sent by ViewRootimpl to the detection module may include the occurrence time of the raising event and the drawing completion time of the image frame 1. The detection module may acquire the occurrence time of the hand-lifting event and the drawing completion time of the image frame 1 based on the received indication signal.
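As an illustration only, the hand-off in S108 might be sketched as follows, covering both variants described above. All class, interface, and method names are assumptions introduced for illustration.

    // Hypothetical sketch of S108: notifying the detection module after an
    // image frame has been drawn.
    public class FrameDrawNotifier {
        public interface DetectionSink {
            // Called once per generated image frame.
            void onFrameDrawn(long liftEventTimeMillis, long drawCompletedTimeMillis);
        }

        private final DetectionSink detectionModule;

        public FrameDrawNotifier(DetectionSink detectionModule) {
            this.detectionModule = detectionModule;
        }

        // Variant 1: only the hand-up time is forwarded; the detection module
        // treats the moment the signal arrives as the drawing completion time.
        public void notifyFrameDrawn(long liftEventTimeMillis) {
            detectionModule.onFrameDrawn(liftEventTimeMillis, System.currentTimeMillis());
        }

        // Variant 2: both the hand-up time and the drawing completion time are
        // carried in the indication signal.
        public void notifyFrameDrawn(long liftEventTimeMillis, long drawCompletedTimeMillis) {
            detectionModule.onFrameDrawn(liftEventTimeMillis, drawCompletedTimeMillis);
        }
    }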
Note that, as described above, ViewRootimpl optionally outputs the image frame 1 to the display driver. The order between this step and S108 is not limited.
And S109, the detection module detects that the occurrence time of the hand-lifting event is different from the occurrence time of the hand-lifting event acquired last time.
For example, as described above, each time the ViewRootimpl generates an image frame, the occurrence time of the hand-up event is sent to the detection module. In the embodiment of the present application, when detecting the response duration, the detection module obtains the time difference between the occurrence time of the hand-up event and the drawing completion time of the first image frame (for example, image frame 1) displayed after the hand-up event. Correspondingly, the detection module needs to detect whether the occurrence time of the hand-up event acquired this time is the same as the occurrence time of the hand-up event acquired last time, so as to determine whether the currently drawn image frame is the first image frame after the hand-up.
In one example, the occurrence time of the hand-up event acquired this time by the detection module is different from the occurrence time of the hand-up event acquired last time; that is, the image frame whose drawing has just completed is the first image frame after the hand-up, and its drawing completion time is the one to be compared with the occurrence time of the hand-up event. Accordingly, the detection module performs S110. For example, the last click event of the user corresponds to a video application, that is, the user clicked the video application. The occurrence time of the last hand-up event recorded by the detection module is the time after the user clicked the video application and raised his hand, and is different from the occurrence time of the hand-up event acquired this time. Optionally, the detection module may store the recorded occurrence time of the hand-up event in the memory, and update the stored occurrence time each time a new occurrence time of the hand-up event is acquired. The newly acquired occurrence time may be the same as or different from the recorded occurrence time; the specific reasons can be found in the above description and are not repeated here.
In another example, if the occurrence time of the hand-up event acquired this time by the detection module is the same as the occurrence time of the hand-up event acquired last time by the detection module, the detection module does not process the hand-up event. Specific examples will be described in the following embodiments.
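The judgment in S109 can be summarized by the following illustrative sketch: a frame is treated as the first image frame after a hand-up only when the newly acquired occurrence time differs from the stored one. The class and method names are hypothetical.

    // Hypothetical sketch of S109: deciding whether the current image frame is
    // the first frame drawn after a new hand-up event.
    public class FirstFrameDetector {
        private long lastSeenLiftEventTime = -1;

        // Returns true only for the first frame that follows a new hand-up event.
        public boolean isFirstFrameAfterLift(long liftEventTimeMillis) {
            if (liftEventTimeMillis == lastSeenLiftEventTime) {
                // Same hand-up event as before: a later frame of the same
                // animation, so no further processing is needed.
                return false;
            }
            // A new hand-up event: remember it and trigger the duration check.
            lastSeenLiftEventTime = liftEventTimeMillis;
            return true;
        }
    }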
S110, the detection module detects that a difference between the drawing completion time of the image frame 1 and the occurrence time of the hand-up event is greater than a preset threshold.
For example, after the detection module detects that the occurrence time of the hand-up event acquired this time is different from the occurrence time of the hand-up event acquired last time, the detection module acquires a difference between the drawing completion time of the image frame 1 and the occurrence time of the hand-up event, where the difference is the above-mentioned response duration.
In an example, when the detection module detects that the response duration is greater than the preset threshold, S111 is executed, that is, the reporting procedure is triggered. In the embodiment of the present application, the preset threshold may be 1.5 s. The preset threshold in the embodiment of the present application is only an illustrative example, and may be set according to actual requirements, and the present application is not limited.
In another example, if the detection module detects that the response time duration is less than the preset threshold, no processing is performed.
In a possible implementation manner, the detection module may set a preset interval, for example, the preset interval may be greater than or equal to 500ms and less than 1.5 s. For example, if the response duration acquired by the detection module is within the interval (for example, the response duration is 800ms), the detection module may record that the response duration is within a preset interval. If the detection module detects that the response duration is within the preset interval for multiple times (for example, 10 times may be set according to actual requirements, and the present application is not limited), the detection module may also trigger a subsequent reporting procedure.
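The threshold judgment in S110, together with the optional preset interval described above, might be sketched as follows. The values 1.5 s, 500 ms, and 10 are only the illustrative values mentioned above, and all names are assumed for illustration.

    // Hypothetical sketch of S110: comparing the response duration against a
    // preset threshold, with an optional counter for the preset interval.
    public class ResponseDurationChecker {
        private static final long THRESHOLD_MS = 1500;        // example preset threshold
        private static final long INTERVAL_LOW_MS = 500;       // example lower bound of the preset interval
        private static final int  INTERVAL_REPORT_COUNT = 10;  // example repeat count

        private int slowCount = 0;

        // Returns true when a report to the background should be triggered.
        public boolean shouldReport(long liftEventTimeMillis, long drawCompletedTimeMillis) {
            long responseDuration = drawCompletedTimeMillis - liftEventTimeMillis;
            if (responseDuration > THRESHOLD_MS) {
                return true; // S111: trigger the reporting procedure directly
            }
            if (responseDuration >= INTERVAL_LOW_MS) {
                // The response duration fell into the preset interval; report
                // only after this has happened a configured number of times.
                slowCount++;
                if (slowCount >= INTERVAL_REPORT_COUNT) {
                    slowCount = 0;
                    return true;
                }
            }
            return false;
        }
    }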
And S111, the detection module sends an indication signal to the reporting module.
For example, after detecting that a difference (i.e., a response duration) between the drawing completion time of the image frame 1 and the occurrence time of the hand-up event is greater than a preset threshold (e.g., 1.5s), the detection module sends an indication signal to the reporting module, where the indication signal is used to indicate the reporting module to report the response delay event to the cloud. Alternatively, the difference between the drawing completion time of the image frame 1 and the occurrence time of the hand-up event, that is, the response time period may be included in the instruction signal.
Optionally, as described above, the detection module may also send an indication signal to the reporting module when it is counted that the multiple response time durations are within the preset interval.
And S112, the reporting module reports the response delay event to the background.
For example, as shown in fig. 8, the reporting module reports the response delay event to a background server (which may be called a cloud, or a server cluster, a host, etc.) in response to the received indication signal sent by the detecting module, so as to indicate that the mobile phone has the response delay event. For example, the response delay event reported by the reporting module to the background server may include a difference between the drawing completion time of the image frame 1 and the occurrence time of the hand-up event, that is, a response duration. The response delay event may also include an application corresponding to the click event, such as a setup application.
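As a purely illustrative sketch, the content of a reported response delay event might be organized as follows. The field names are assumptions, and the device model field is included only as an assumption to support the server-side statistics mentioned below.

    // Hypothetical sketch of S112: the payload a reporting module might send
    // to the background server.
    public class ResponseDelayEvent {
        public final long responseDurationMillis; // drawing completion time minus hand-up occurrence time
        public final String applicationName;      // application targeted by the click, e.g. the setup application
        public final String deviceModel;          // assumed field, allowing the server to aggregate by model

        public ResponseDelayEvent(long responseDurationMillis, String applicationName, String deviceModel) {
            this.responseDurationMillis = responseDurationMillis;
            this.applicationName = applicationName;
            this.deviceModel = deviceModel;
        }
    }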
Optionally, the interaction between the mobile phone and the cloud may be based on a mobile network or a wireless network, which is not limited in this application.
For example, an operator may acquire the response delay events of mobile phones from the background server and analyze them. For example, based on the response delay events reported by multiple terminals (which may be mobile phones, tablets, and the like), the background server may detect whether most (e.g., 80%) of the response delay events are directed at the same application. For example, 80% of the reported response delay events may be directed at the same video application. For another example, the background server may also count the distribution of manufacturers, models, and the like of the devices on which the response delay events occur, so as to evaluate the performance of each type of device.
Illustratively, as described above, the ViewRootimpl generates a plurality of image frames, and the step in S108 is performed for each image frame. For example, as shown in fig. 9, referring to fig. 9, after ViewRootimpl outputs image frame 1 to the display driver, ViewRootimpl generates image frame 2. After the image frame 2 is generated by the ViewRootimpl, the occurrence time of the hand-up event is sent to the detection module. The hand-lifting event is still the hand-lifting event time recorded by ViewRootimpl, that is, the occurrence time of the hand-lifting event acquired by ViewRootimpl in S104. For example, the detection module detects whether the occurrence time of the currently acquired hand-lifting event is the same as the occurrence time of the last acquired hand-lifting event. For example, the occurrence time of the hand-lifting event acquired last time is the occurrence time of the hand-lifting event acquired by the detection module in S108. Correspondingly, if the detection module detects that the occurrence time of the hand-lifting event acquired this time is the same as the occurrence time of the hand-lifting event acquired last time, the detection module does not process the hand-lifting event.
Continuing to refer to fig. 9, exemplary ViewRootimpl generates image frame 3 after outputting image frame 2 to the display driver. After the image frame 3 is generated by the ViewRootimpl, the occurrence time of the hand-up event is sent to the detection module. The occurrence time of the hand-up event is still the hand-up event time recorded by the ViewRootimpl, that is, the occurrence time of the hand-up event acquired by the ViewRootimpl in S104.
For example, the detection module detects whether the occurrence time of the currently acquired hand-lifting event is the same as the occurrence time of the last acquired hand-lifting event. Illustratively, the occurrence time of the hand-up event last acquired by the detection module is the occurrence time of the hand-up event sent to the detection module after the image frame 2 is generated by the ViewRootimpl. Correspondingly, if the detection module detects that the occurrence time of the hand-lifting event acquired this time is the same as the occurrence time of the hand-lifting event acquired last time, the detection module does not process the hand-lifting event.
In fig. 9, only the processing procedure of the image frames 2 and 3 is described as an example. Optionally, ViewRootimpl may continue to generate a plurality of image frames, and processing of each image frame may refer to the processing of image frame 2 or image frame 3. For example, the ViewRootimpl corresponds to the click event, and the number of the generated image frames is 60, that is, the processing procedure of the image frames 4 to 60 can be described with reference to the processing procedure of the image frame 2.
Illustratively, continuing with FIG. 5, if the user clicks any option on the settings application interface 501, for example, clicks the notification option, the handset repeatedly executes the steps in S101 to S112 in response to the received user operation. In step S109, the occurrence time of the last hand-up event recorded by the detection module is the occurrence time sent to the detection module after the ViewRootimpl generated image frame 60, which is the same as the occurrence time of the hand-up event corresponding to image frame 1. The occurrence time of the hand-up event acquired by the detection module this time is acquired after the user clicks the notification option. Accordingly, since the occurrence time of the current hand-up event is different from the occurrence time of the last hand-up event, the detection module continues to perform the subsequent process, that is, performs S110.
Fig. 10 is a flowchart illustrating another response duration detection method. Referring to fig. 10, the method specifically includes:
S201, the touch sensor transmits a detection signal to the inputDispacher.
S202, the inputDispacher detects that the user operation is a click event.
S203, the inputDispacher sends a hand raising event to the perception module.
Illustratively, the inputDispacher sends a lift event to the perception module after detecting a user lift. The hand-up event is used for indicating that a user hand-up event currently exists so as to trigger the sensing module to execute a subsequent response duration detection process.
And S204, the sensing module records the occurrence time of the hand raising event.
Illustratively, the sensing module receives a hand raising event input by the inputDispacher. And the sensing module records the occurrence time of the acquired hand lifting event. The occurrence time of the hand raising event is the time when the sensing module receives the hand raising event input by the inputDispacher. In another example, if the hand raising event input by the inputDispacher includes the occurrence time of the hand raising event, the sensing module saves the occurrence time of the hand raising event acquired from the inputDispacher into a storage (e.g., a memory).
Optionally, if the sensing module has recorded the occurrence time of the hand-lifting event, the sensing module updates the recorded occurrence time of the hand-lifting event after acquiring the occurrence time of the new hand-lifting event.
The specific details of S201 to S204 can refer to the related descriptions of S101 to S104, and are not described herein again.
S205, the sensing module writes the occurrence time of the raising event into Frameinfo (image frame information).
Illustratively, in the embodiment of the present application, each image frame corresponds to one Frameinfo, and the Frameinfo includes related information describing the image frame. Frameinfo includes one or more fields. Optionally, the fields in Frameinfo include, but are not limited to: a Flags field, an IntendedVsync field, a Vsync field, an OldestInputEvent field, a NewestInputEvent field, a HandleInputStart field, an AnimationStart field, a PerformTraversalsStart field, a FrameCompleted field, a DequeueBufferDuration field, a GpuCompleted field, a hand-raising event occurrence time field, etc. The hand-raising event occurrence time field is used for indicating the occurrence time of the hand-raising event corresponding to the image frame.
Referring to fig. 11, for example, after the sensing module obtains the occurrence time of the hand-up event, the occurrence time of the hand-up event is written into the hand-raising event occurrence time field of Frameinfo. Illustratively, Frameinfo may be contained in the memory. It is understood that other modules may also perform operations such as writing and reading on the Frameinfo fields by accessing Frameinfo in the memory.
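For illustration, a simplified per-frame record mirroring the Frameinfo fields described above might look as follows. This is only a sketch with assumed names; it is not the actual Android Frameinfo implementation.

    // Hypothetical, simplified per-frame record modeled on the Frameinfo
    // fields listed above. Only fields relevant to this example are kept.
    public class FrameRecord {
        public long flags;
        public long intendedVsync;
        public long vsync;
        public long handleInputStart;
        public long animationStart;
        public long performTraversalsStart;
        public long frameCompleted;
        public long dequeueBufferDuration;
        public long gpuCompleted;
        // Field added in this embodiment: the occurrence time of the hand-up
        // (user operation) event associated with this image frame.
        public long liftEventOccurrenceTime;

        // S205: the sensing module writes the occurrence time of the hand-up event.
        public void writeLiftEventTime(long occurrenceTimeMillis) {
            this.liftEventOccurrenceTime = occurrenceTimeMillis;
        }
    }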
S206, the inputDispacher sends a click event to the setting application.
S207, the setup application requests the ViewRootimpl to refresh the interface.
S208, ViewRootimpl generates image frame 1.
The specific details of S206 to S208 can refer to the descriptions of S105 to S107, and are not described herein again.
S209, ViewRootimpl generates Frameinfo.
Illustratively, referring to fig. 12, as mentioned above, Frameinfo includes a plurality of fields, and in S205 the sensing module has completed writing the hand-raising event occurrence time field. In this step, the ViewRootimpl generates Frameinfo; specifically, the ViewRootimpl writes the remaining fields of Frameinfo in the memory to obtain a complete Frameinfo. Optionally, the image frame 1 generated by the ViewRootimpl is optionally contained in the memory. It should be noted that the order of S209 and S208 is not fixed.
It should be noted that, corresponding to the current click event, ViewRootimpl optionally generates a plurality of image frames, for example, image frame 2 to image frame 60. ViewRootimpl generates a corresponding Frameinfo for each image frame 2 through 60.
For example, the sensing module optionally writes the occurrence time of the hand raising event into the hand raising event occurrence time field in the Frameinfo corresponding to each image frame. That is to say, the occurrence time of the hand raising event in the hand raising event occurrence time field in Frameinfo corresponding to each image frame after the click event is the same. Until the next user operation such as a click event, a slide event, a pinch event and the like is received, the information in the hand raising event occurrence time field in Frameinfo of the image frame is not changed.
S210, the ViewRootimpl sends a first indication signal to the detection module.
Illustratively, ViewRootimpl sends a first indication signal to the detection module after generating image frame 1. The first indication signal is used to indicate that the image frame 1 has been completely drawn. That is, similarly to in S108, the time at which the detection module receives the first instruction signal is recorded as the drawing completion time of the image frame 1.
Alternatively, the drawing completion time of the image frame 1 may be included in the first indication signal. Accordingly, after receiving the first indication signal, the detection module may obtain the drawing completion time of the image frame 1 from the first indication signal.
S211, the detecting module detects that the occurrence time of the hand-up event is different from the occurrence time of the hand-up event acquired last time.
Illustratively, the detection module receives a first indication signal sent by the ViewRootimpl. Referring to fig. 13, for example, the detection module records a time when the first indication signal is received as a drawing completion time of the image frame 1 in response to the received first indication signal.
Illustratively, the detection module reads the occurrence time of the hand-up event in the hand-up event occurrence time field of Frameinfo of the image frame 1 from the memory in response to the received first indication signal.
It should be noted that, each time the ViewRootimpl generates an image frame, the ViewRootimpl sends a first indication signal to the detection module. In the embodiment of the present application, when detecting the response duration, the detection module obtains the time difference between the occurrence time of the hand-up event and the drawing completion time of the first image frame (for example, image frame 1) displayed after the hand-up event occurs. Therefore, the detection module needs to detect whether the occurrence time of the hand-up event acquired this time is the same as the occurrence time of the hand-up event acquired last time.
In one example, the occurrence time of the hand-up event acquired this time by the detection module is different from the occurrence time of the hand-up event acquired last time by the detection module, and the detection module executes S212. For example, the last click event of the user corresponds to clicking a video application. The occurrence time of the last hand-up event recorded by the detection module is the time after the user clicked the video application and raised his hand, and is different from the occurrence time of the hand-up event in Frameinfo of the current image frame 1, that is, the occurrence time of the hand-up event acquired by the detection module this time. As another example, the ViewRootimpl generates a plurality of image frames up to image frame 60, and the processing flow for each image frame is consistent with that for image frame 1. That is, the occurrence time of the hand-up event currently acquired by the detection module (i.e., occurrence time A of the hand-up event) is acquired from Frameinfo of image frame 60, and is the same as the occurrence time of the hand-up event acquired last time, for example, from Frameinfo of image frame 50; in that case the detection module performs no further processing. If the user then clicks any option in the setting application interface, the sensing module receives a hand raising event sent by the inputDispacher. The sensing module records the occurrence time of the hand-up event (for example, occurrence time B of the hand-up event), and writes occurrence time B of the hand-up event into Frameinfo. Accordingly, the ViewRootimpl responds to the request of the setting application, draws the corresponding image frame, and triggers the detection module to execute the subsequent process. The detection module acquires occurrence time B of the hand raising event from the Frameinfo. The occurrence time of the hand-up event last acquired by the detection module (i.e., occurrence time A of the hand-up event) is different from occurrence time B of the hand-up event acquired this time, and accordingly, the detection module executes S212.
In another example, if the occurrence time of the hand-up event acquired this time by the detection module is the same as the occurrence time of the hand-up event acquired last time by the detection module, the detection module does not process it. For example, the sensing module writes the occurrence time of the hand-up event into the hand-up event occurrence time field in Frameinfo of image frame 2. Since the sensing module has not received a new hand-up event, the occurrence time of the hand-up event in Frameinfo of image frame 2 is the same as the occurrence time of the hand-up event in Frameinfo of image frame 1. After the ViewRootimpl generates image frame 2, it sends a first indication signal to the detection module. The detection module, in response to the received first indication signal, reads Frameinfo of image frame 2 from the memory and obtains the occurrence time of the hand-up event from the hand-up event occurrence time field in the Frameinfo. The occurrence time of the hand-up event acquired last time by the detection module is the occurrence time acquired from the Frameinfo of image frame 1, and is the same as the occurrence time acquired from the Frameinfo of image frame 2 this time.
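Building on the per-frame record sketched after S205, the judgment in S211 might be expressed as follows. The names are illustrative only.

    // Hypothetical sketch of S211: on each first indication signal, read the
    // hand-up occurrence time from the frame's record and decide whether this
    // frame is the first one after a new user operation. Uses the FrameRecord
    // sketch given earlier.
    public class FrameInfoDetector {
        private long lastSeenLiftEventTime = -1;

        // Returns the response duration when this frame is the first frame after
        // a new hand-up event, or -1 when no further processing is needed.
        public long onFirstIndicationSignal(FrameRecord frameInfo, long drawCompletedTimeMillis) {
            long liftTime = frameInfo.liftEventOccurrenceTime;
            if (liftTime == lastSeenLiftEventTime) {
                return -1; // a later frame of the same operation: nothing to do
            }
            lastSeenLiftEventTime = liftTime;
            // S212/S213: threshold comparison and reporting would follow, as in
            // the ResponseDurationChecker sketch above.
            return drawCompletedTimeMillis - liftTime;
        }
    }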
S212, the detecting module detects that the difference between the drawing completion time of the image frame 1 and the occurrence time of the hand-up event is greater than a preset threshold.
For example, after the detection module detects that the occurrence time of the hand-up event acquired this time is different from the occurrence time of the hand-up event acquired last time, the detection module acquires a difference between the drawing completion time of the image frame 1 and the occurrence time of the hand-up event, where the difference is the above-mentioned response duration.
In one example, if the detection module detects that the response duration is greater than the preset threshold, S213 is executed, that is, the reporting procedure is triggered. In the embodiment of the present application, the preset threshold may be 1.5 s. The preset threshold in the embodiment of the present application is only an illustrative example, may be set according to actual requirements, and is not limited in the present application.
In another example, if the detection module detects that the response duration is less than the preset threshold, no processing is performed.
In a possible implementation manner, the detection module may set a preset interval, for example, the preset interval may be greater than or equal to 500ms and less than 1.5 s. For example, if the response time duration is within the interval (e.g., the response time duration is 800ms), the detection module may record that the response time duration is within the preset interval. If the detection module detects that the response duration is within the preset interval for multiple times (for example, 10 times may be set according to actual requirements, and the present application is not limited), the detection module may also trigger a subsequent reporting procedure.
S213, the detection module sends a second indication signal to the reporting module.
For example, after detecting that the difference (i.e., the response time duration) between the drawing completion time of the image frame 1 and the occurrence time of the hand-up event is greater than a preset threshold (e.g., 1.5s), the detection module sends a second indication signal to the reporting module, where the second indication signal is used to indicate the reporting module to report the response delay event to the cloud. Alternatively, the difference between the drawing completion time of the image frame 1 and the occurrence time of the hand-up event may be included in the indication signal.
S214, the reporting module reports the response delay event to the background.
For example, the reporting module reports the response delay event to a background server (which may be called a cloud, or a server cluster, a host, etc.) in response to the received indication signal sent by the detecting module, so as to indicate that the mobile phone has the response delay event. For example, the response delay event reported by the reporting module to the background server may include a difference between the drawing completion time of the image frame 1 and the occurrence time of the hand-up event, that is, a response duration. The response delay event may also include an application corresponding to the click event, such as a setup application.
In one possible implementation manner, the Frameinfo may include an image frame drawing start time field, an image frame drawing completion time field, and the like, in addition to the Flag field and the raising event occurrence time field described above. For example, after the sensing module writes the occurrence time of the hand raising event into Frameinfo, ViewRootimpl may write the image drawing start time into the image frame drawing start time field during the generation of Frameinfo, and write the image frame drawing completion time into the image frame drawing completion time field after the image frame is drawn. The ViewRootimpl sends a first indication signal to the detection module for indicating that the image frame has been drawn. The detection module responds to the received first indication signal, reads an image drawing completion time field and a hand-up event occurrence time field in Frameinfo to acquire the drawing completion time of the image frame and the occurrence time of the hand-up event so as to execute the subsequent detection steps.
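As an illustrative sketch of this implementation manner, the per-frame record might additionally carry the drawing start time and the drawing completion time, so that the detection module reads everything it needs from the record. All names are assumed for illustration.

    // Hypothetical extension of the FrameRecord sketch for the variant above:
    // the drawing times are also stored per frame.
    public class TimedFrameRecord {
        public long liftEventOccurrenceTime; // written by the sensing module
        public long drawStartTime;           // written by the ViewRootimpl when drawing begins
        public long drawCompletedTime;       // written by the ViewRootimpl when drawing finishes

        // The response duration is then computed entirely from the record.
        public long responseDuration() {
            return drawCompletedTime - liftEventOccurrenceTime;
        }
    }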
It should be noted that, in the embodiment of the present application, only the click event is taken as an example for description. In other embodiments, the user sliding, long pressing, and the like may all trigger the response duration detection process in the embodiment of the present application. For example, referring to fig. 3, if the user slides the screen downwards from the upper edge of the screen, the mobile phone displays the pull-down menu in the display interface 301 in response to the received user operation, and by using the response duration detection method in the embodiment of the present application, the mobile phone can detect the response duration between the time when the user slides the screen and the time when the pull-down menu is displayed.
It should be further noted that, in the embodiment of the present application, a click event is taken as an example for description, and correspondingly, the name of the hand raising event occurrence time field in Frameinfo is merely used to conveniently describe the association between the field and the information it carries. In other embodiments, the hand raising event occurrence time field may be named in other ways. For example, the field may be referred to as an event occurrence time field, an event trigger time field, etc., and the present application is not limited thereto.
It will be appreciated that the electronic device, in order to implement the above-described functions, comprises corresponding hardware and/or software modules for performing the respective functions. The present application can be implemented in hardware or in a combination of hardware and computer software in conjunction with the exemplary algorithm steps described in connection with the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application in combination with the embodiments, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In one example, fig. 14 shows a schematic block diagram of an apparatus 1400 according to an embodiment of the present application. The apparatus 1400 may comprise: a processor 1401 and transceiver/transceiver pins 1402, and optionally a memory 1403.
The various components of the device 1400 are coupled together by a bus 1404, where the bus 1404 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration the various busses are referred to in the drawings as busses 1404.
Optionally, memory 1403 may be used for the instructions in the foregoing method embodiments. The processor 1401 is operable to execute instructions in the memory 1403 and to control the receive pin to receive signals and the transmit pin to transmit signals.
The apparatus 1400 may be an electronic device or a chip of an electronic device in the above method embodiments.
All relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The present embodiment further provides a computer storage medium, where computer instructions are stored in the computer storage medium, and when the computer instructions are run on an electronic device, the electronic device is caused to execute the relevant method steps to implement the response duration detection method in the foregoing embodiments.
The present embodiment also provides a computer program product, which when running on a computer, causes the computer to execute the relevant steps described above, so as to implement the response duration detection method in the above embodiments.
In addition, an apparatus, which may be specifically a chip, a component or a module, may include a processor and a memory connected to each other; when the device runs, the processor can execute the computer execution instructions stored in the memory, so that the chip can execute the response time length detection method in the above-mentioned method embodiments.
The electronic device, the computer storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, and therefore, the beneficial effects that can be achieved by the electronic device, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; for example, the division into modules or units is only one kind of logical function division, and other division manners may be used in actual implementation; for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
Any content of the various embodiments of the present application, as well as any content of the same embodiment, can be freely combined. Any such combination is within the scope of the present application.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially or partially contributed to by the prior art, or all or part of the technical solutions may be embodied in the form of a software product, where the software product is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The steps of a method or algorithm described in connection with the disclosure of the embodiments of the application may be embodied in hardware or in software instructions executed by a processor. The software instructions may be composed of corresponding software modules, which may be stored in a Random Access Memory (RAM), a flash memory, a Read Only Memory (ROM), an Erasable Programmable ROM (EPROM), an Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc Read Only Memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (34)

1. A response duration detection apparatus, characterized by comprising a sensor module, a sensing module, a view basic capability implementation module, and a detection module;
the sensor module is configured to:
in response to the received first user operation, outputting a first user operation event to the sensing module and the view basic capability implementation module;
the sensing module is configured to:
in response to the acquired first user operation event, acquiring the occurrence time of the first user operation event;
writing the occurrence time of the first user operation event into first image frame information;
the view basic capability implementation module is configured to:
drawing a first image frame in response to the acquired first user operation event, wherein the first image frame corresponds to the first image frame information, and the first image frame is the first image frame to be displayed by the electronic device after the first user operation;
after the first image frame is completely drawn, outputting first indication information to the detection module, wherein the first indication information is used for indicating that the first image frame is completely drawn;
the detection module is configured to:
acquiring the drawing completion time of the first image frame in response to the received first indication information, and acquiring the occurrence time of the first user operation event from the first image frame information;
and sending second indication information to a server when detecting that a difference value between the occurrence time of the first user operation event and the drawing completion time of the first image frame is greater than a set threshold value, wherein the second indication information is used for indicating that the response duration of the electronic device for the first user operation event is abnormal.
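To make the data flow of claim 1 concrete, the following is a minimal, hypothetical Kotlin sketch of the four modules. All class, function, and field names (ImageFrameInfo, SensingModule, DetectionModule, and so on) and the 300 ms threshold are illustrative assumptions and do not come from the patent itself.

// Illustrative sketch only: names and threshold are assumptions, not the patent's implementation.
data class ImageFrameInfo(var operationEventOccurrenceTime: Long? = null)

class DetectionModule(private val thresholdMs: Long, private val report: (String) -> Unit) {
    // "First indication information": the view module tells the detector that a frame finished drawing.
    fun onDrawComplete(frameInfo: ImageFrameInfo, drawCompleteTimeMs: Long) {
        val eventTimeMs = frameInfo.operationEventOccurrenceTime ?: return
        val delta = drawCompleteTimeMs - eventTimeMs
        if (delta > thresholdMs) {
            // "Second indication information": report an abnormal response duration to a server.
            report("response duration abnormal: $delta ms")
        }
    }
}

class SensingModule {
    // Writes the occurrence time of the user operation event into the image frame information.
    fun onUserOperationEvent(eventTimeMs: Long, frameInfo: ImageFrameInfo) {
        frameInfo.operationEventOccurrenceTime = eventTimeMs
    }
}

class ViewBasicCapabilityModule(private val detection: DetectionModule) {
    // Draws the frame, then signals drawing completion to the detection module.
    fun drawFrame(frameInfo: ImageFrameInfo) {
        // ... actual drawing of the image frame would happen here ...
        detection.onDrawComplete(frameInfo, System.currentTimeMillis())
    }
}

fun main() {
    val detection = DetectionModule(thresholdMs = 300) { println(it) }
    val sensing = SensingModule()
    val view = ViewBasicCapabilityModule(detection)

    // Sensor module: a first user operation event arrives; its occurrence time is recorded.
    val frameInfo = ImageFrameInfo()
    sensing.onUserOperationEvent(System.currentTimeMillis(), frameInfo)

    Thread.sleep(350)          // simulate slow drawing of the first image frame
    view.drawFrame(frameInfo)  // prints the abnormal-response report
}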
2. The apparatus according to claim 1, wherein the sensing module is specifically configured to:
determining that the time of receiving the first user operation event is the occurrence time of the first user operation event.
3. The apparatus according to claim 1, wherein the first image frame information comprises an operation event occurrence time field, and the sensing module is specifically configured to:
and writing the occurrence time of the first user operation event into the operation event occurrence time field of the first image frame information in the memory.
4. The apparatus according to claim 3, wherein the detection module is specifically configured to:
and in response to the received first indication information, acquiring the occurrence time of the first user operation event from the operation event occurrence time field of the first image frame information in the memory.
5. The apparatus of claim 1,
the sensing module is further configured to:
writing the occurrence time of the first user operation event into second image frame information;
the view basic capability implementation module is further configured to:
drawing a second image frame, wherein the second image frame corresponds to the second image frame information;
after the second image frame is completely drawn, outputting third indication information to the detection module, wherein the third indication information is used for indicating that the second image frame is completely drawn;
the detection module is further configured to:
acquiring the drawing completion time of the second image frame in response to the received third indication information, and acquiring the occurrence time of the first user operation event from the second image frame information;
when detecting that the occurrence time of the first user operation event in the second image frame information is the same as the occurrence time of the first user operation event in the first image frame information, the detection module does not perform response duration detection on the second image frame.
6. The apparatus of claim 5,
the sensor module is further configured to:
in response to the received second user operation, outputting a second user operation event to the sensing module and the view basic capability implementation module;
the sensing module is further configured to:
in response to the acquired second user operation event, acquiring the occurrence time of the second user operation event;
writing the occurrence time of the second user operation event into third image frame information;
the view basic capability implementation module is configured to:
drawing a third image frame in response to the acquired second user operation event; the third image frame corresponds to the third image frame information;
after the third image frame is completely drawn, outputting fourth indication information to the detection module, wherein the fourth indication information is used for indicating that the third image frame is completely drawn;
the detection module is configured to:
acquiring the drawing completion time of the third image frame in response to the received fourth indication information, and acquiring the occurrence time of the second user operation event from the third image frame information;
detecting that the occurrence time of the second user operation event in the third image frame information is different from the occurrence time of the first user operation event in the second image frame information, and detecting whether a difference value between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than the set threshold value;
and sending fifth indication information to the server when detecting that the difference value between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than the set threshold value, wherein the fifth indication information is used for indicating that the response duration of the electronic device for the second user operation event is abnormal.
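Claims 5 and 6 together describe a de-duplication rule: a frame that carries an already-handled occurrence time is skipped, while a frame carrying a new occurrence time triggers detection again. A minimal sketch under the same naming assumptions as above:

// Hypothetical sketch of the skip/re-check behaviour of claims 5 and 6.
class DedupingDetectionModule(private val thresholdMs: Long, private val report: (Long) -> Unit) {
    private var lastHandledEventTimeMs: Long? = null

    fun onDrawComplete(eventTimeMs: Long, drawCompleteTimeMs: Long) {
        if (eventTimeMs == lastHandledEventTimeMs) return  // same occurrence time: skip this frame (claim 5)
        lastHandledEventTimeMs = eventTimeMs               // new occurrence time: detect again (claim 6)
        val delta = drawCompleteTimeMs - eventTimeMs
        if (delta > thresholdMs) report(delta)             // abnormal response duration
    }
}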
7. The apparatus according to claim 1, wherein the detection module is specifically configured to:
determining a time when the first indication information is received as a drawing completion time of the first image frame.
8. The apparatus according to claim 1, wherein the first indication information includes a drawing completion time of the first image frame.
9. The apparatus of claim 1, wherein the first user operation is a click operation, a slide operation, a zoom operation, or a double click operation.
10. A response duration detection apparatus, characterized by comprising a sensor module, a view basic capability implementation module, and a detection module;
the sensor module is configured to:
in response to the received first user operation, outputting a first user operation event to the view basic capability implementation module;
the view basic capability implementation module is configured to:
drawing a first image frame in response to the acquired first user operation event, and acquiring the occurrence time of the first user operation event, wherein the first image frame is the first image frame to be displayed by the electronic device after the first user operation;
after the first image frame is completely drawn, outputting first indication information to the detection module, wherein the first indication information comprises the occurrence time of the first user operation event, and the first indication information is used for indicating that the first image frame is completely drawn;
the detection module is configured to:
acquiring the drawing completion time of the first image frame and the occurrence time of the first user operation event in response to the received first indication information;
and sending second indication information to a server when detecting that a difference value between the occurrence time of the first user operation event and the drawing completion time of the first image frame is greater than a set threshold value, wherein the second indication information is used for indicating that the response duration of the electronic device for the first user operation event is abnormal.
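Claim 10 differs from claim 1 in that there is no separate sensing module: the view basic capability implementation module stamps the occurrence time itself and carries it inside the drawing-completion indication. A minimal sketch, again with assumed names:

// Hypothetical sketch of the claim-10 variant: the occurrence time travels inside the indication itself.
data class DrawCompleteIndication(val eventOccurrenceTimeMs: Long, val drawCompleteTimeMs: Long)

// Returns true when the response duration exceeds the set threshold, i.e. is abnormal.
fun isResponseAbnormal(indication: DrawCompleteIndication, thresholdMs: Long): Boolean =
    indication.drawCompleteTimeMs - indication.eventOccurrenceTimeMs > thresholdMs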
11. The apparatus according to claim 10, wherein the view basic capability implementation module is specifically configured to:
determining that the time of receiving the first user operation event is the occurrence time of the first user operation event.
12. The apparatus of claim 10,
the view basic capability implementation module is further configured to:
drawing a second image frame, wherein the second image frame corresponds to the second image frame information;
after the second image frame is completely drawn, outputting third indication information to the detection module, wherein the third indication information comprises the occurrence time of the first user operation event and is used for indicating that the second image frame is completely drawn;
the detection module is further configured to:
acquiring the drawing completion time of the second image frame and the occurrence time of the first user operation event in response to the received third indication information;
when detecting that the occurrence time of the first user operation event included in the third indication information is the same as the occurrence time of the first user operation event included in the first indication information, the detection module does not perform response duration detection on the second image frame.
13. The apparatus of claim 12,
the sensor module is further configured to:
in response to the received second user operation, outputting a second user operation event to the view basic capability implementation module;
the view basic capability implementation module is configured to:
drawing a third image frame in response to the acquired second user operation event, and acquiring the occurrence time of the second user operation event;
after the third image frame is completely drawn, outputting fourth indication information to the detection module, wherein the fourth indication information comprises the occurrence time of the second user operation event, and the fourth indication information is used for indicating that the third image frame is completely drawn;
the detection module is configured to:
acquiring the drawing completion time of the third image frame and the occurrence time of the second user operation event in response to the received fourth indication information;
detecting that the occurrence time of the second user operation event included in the fourth indication information is different from the occurrence time of the first user operation event included in the third indication information, and detecting whether a difference value between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than the set threshold value;
and sending fifth indication information to the server when detecting that the difference value between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than the set threshold value, wherein the fifth indication information is used for indicating that the response duration of the electronic device for the second user operation event is abnormal.
14. The apparatus according to claim 10, wherein the detection module is specifically configured to:
determining a time when the first indication information is received as a drawing completion time of the first image frame.
15. The apparatus according to claim 10, wherein the first indication information includes a drawing completion time of the first image frame.
16. The apparatus of claim 10, wherein the first user operation is a click operation, a swipe operation, or a zoom operation.
17. A response duration detection method, applied to a response duration detection apparatus, wherein the apparatus comprises a sensor module, a sensing module, a view basic capability implementation module, and a detection module, and the method comprises the following steps:
the sensor module responds to the received first user operation and outputs a first user operation event to the sensing module and the view basic capability implementation module;
the sensing module responds to the acquired first user operation event, acquires the occurrence time of the first user operation event, and writes the occurrence time of the first user operation event into first image frame information;
the view basic capability implementation module draws a first image frame in response to the acquired first user operation event, wherein the first image frame corresponds to the first image frame information, and the first image frame is the first image frame to be displayed by the electronic device after the first user operation;
after the first image frame is completely drawn, the view basic capability implementation module outputs first indication information to the detection module, wherein the first indication information is used for indicating that the first image frame is completely drawn;
the detection module acquires the drawing completion time of the first image frame in response to the received first indication information, and acquires the occurrence time of the first user operation event from the first image frame information;
the detection module detects that a difference value between the occurrence time of the first user operation event and the drawing completion time of the first image frame is greater than a set threshold value, and sends second indication information to a server, wherein the second indication information is used for indicating that the response duration of the electronic device for the first user operation event is abnormal.
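As a worked example of the threshold check in claim 17 (the numbers are illustrative only; the patent does not fix a threshold value): an event occurring at 1 000 ms whose first frame finishes drawing at 1 450 ms, checked against a 300 ms threshold, yields a 450 ms response duration and would be reported as abnormal.

fun main() {
    val eventOccurrenceTimeMs = 1_000L   // occurrence time of the first user operation event
    val drawCompleteTimeMs = 1_450L      // drawing completion time of the first image frame
    val setThresholdMs = 300L            // illustrative threshold value
    val difference = drawCompleteTimeMs - eventOccurrenceTimeMs            // 450 ms
    println(if (difference > setThresholdMs) "abnormal" else "normal")     // prints "abnormal"
}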
18. The method of claim 17, wherein the acquiring, by the sensing module, the occurrence time of the first user operation event in response to the acquired first user operation event comprises:
the sensing module determines that the time of receiving the first user operation event is the occurrence time of the first user operation event.
19. The method of claim 17, wherein the first image frame information comprises an operation event occurrence time field, and the writing, by the sensing module, the occurrence time of the first user operation event into the first image frame information comprises:
and the sensing module writes the occurrence time of the first user operation event into the operation event occurrence time field of the first image frame information in the memory.
20. The method of claim 19, wherein the acquiring, by the detection module, the occurrence time of the first user operation event from the first image frame information comprises:
the detection module responds to the received first indication information and obtains the occurrence time of the first user operation event from the operation event occurrence time field of the first image frame information in the memory.
21. The method of claim 17, further comprising:
the sensing module writes the occurrence time of the first user operation event into second image frame information;
the view basic capability implementation module draws a second image frame, wherein the second image frame corresponds to the second image frame information;
after the second image frame is completely drawn, the view basic capability implementation module outputs third indication information to the detection module, wherein the third indication information is used for indicating that the second image frame is completely drawn;
the detection module acquires, in response to the received third indication information, the drawing completion time of the second image frame, and acquires the occurrence time of the first user operation event from the second image frame information;
when the detection module detects that the occurrence time of the first user operation event in the second image frame information is the same as the occurrence time of the first user operation event in the first image frame information, the detection module does not perform response duration detection on the second image frame.
22. The method of claim 21, further comprising:
the sensor module responds to the received second user operation and outputs a second user operation event to the sensing module and the view basic capability implementation module;
the sensing module responds to the acquired second user operation event and acquires the occurrence time of the second user operation event;
the sensing module writes the occurrence time of the second user operation event into third image frame information;
the view basic capability implementation module responds to the acquired second user operation event and draws a third image frame; the third image frame corresponds to the third image frame information;
after the third image frame is completely drawn, the view basic capability implementation module outputs fourth indication information to the detection module, wherein the fourth indication information is used for indicating that the third image frame is completely drawn;
the detection module acquires the drawing completion time of the third image frame in response to the received fourth indication information, and acquires the occurrence time of the second user operation event from the third image frame information;
the detection module detects that the occurrence time of the second user operation event in the third image frame information is different from the occurrence time of the first user operation event in the second image frame information, and detects whether the difference value between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than the set threshold value;
the detection module detects that a difference value between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than the set threshold value, and sends fifth indication information to the server, wherein the fifth indication information is used for indicating that the response duration of the electronic device for the second user operation event is abnormal.
23. The method of claim 17, wherein the acquiring, by the detection module, the drawing completion time of the first image frame in response to the received first indication information comprises:
the detection module determines that the time when the first indication information is received is the drawing completion time of the first image frame.
24. The method according to claim 17, wherein the first indication information includes a drawing completion time of the first image frame.
25. The method of claim 17, wherein the first user operation is a click operation, a slide operation, a zoom operation, or a double click operation.
26. A response duration detection method, applied to a response duration detection apparatus, wherein the apparatus comprises a sensor module, a view basic capability implementation module, and a detection module, and the method comprises the following steps:
the sensor module outputs a first user operation event to the view basic capability implementation module in response to the received first user operation;
the view basic capability implementation module, in response to the acquired first user operation event, draws a first image frame and acquires the occurrence time of the first user operation event, wherein the first image frame is the first image frame to be displayed by the electronic device after the first user operation;
after the first image frame is completely drawn, the view basic capability implementation module outputs first indication information to the detection module, wherein the first indication information comprises the occurrence time of the first user operation event, and the first indication information is used for indicating that the first image frame is completely drawn;
the detection module responds to the received first indication information and acquires the drawing completion time of the first image frame and the occurrence time of the first user operation event;
the detection module detects that a difference value between the occurrence time of the first user operation event and the drawing completion time of the first image frame is greater than a set threshold value, and sends second indication information to a server, wherein the second indication information is used for indicating that the response duration of the electronic device for the first user operation event is abnormal.
27. The method of claim 26, wherein the acquiring, by the view basic capability implementation module, the occurrence time of the first user operation event comprises:
the view basic capability implementation module determines that the time of receiving the first user operation event is the occurrence time of the first user operation event.
28. The method of claim 26,
the view basic capability implementation module draws a second image frame, wherein the second image frame corresponds to the second image frame information;
after the second image frame is completely drawn, the view basic capability implementation module outputs third indication information to the detection module, wherein the third indication information comprises the occurrence time of the first user operation event and is used for indicating that the second image frame is completely drawn;
the detection module responds to the received third indication information and acquires the drawing completion time of the second image frame and the occurrence time of the first user operation event;
when the detection module detects that the occurrence time of the first user operation event included in the third indication information is the same as the occurrence time of the first user operation event included in the first indication information, the detection module does not perform response duration detection on the second image frame.
29. The method of claim 28,
the sensor module outputs a second user operation event to the view basic capability implementation module in response to the received second user operation;
the view basic capability implementation module responds to the acquired second user operation event, draws a third image frame and acquires the occurrence time of the second user operation event;
after the third image frame is completely drawn, the view basic capability implementation module outputs fourth indication information to the detection module, wherein the fourth indication information comprises the occurrence time of the second user operation event, and the fourth indication information is used for indicating that the third image frame is completely drawn;
the detection module responds to the received fourth indication information and acquires the drawing completion time of the third image frame and the occurrence time of the second user operation event;
the detection module detects that the occurrence time of the second user operation event included in the fourth indication information is different from the occurrence time of the first user operation event included in the third indication information, and detects whether a difference value between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than the set threshold value;
the detection module detects that a difference value between the occurrence time of the second user operation event and the drawing completion time of the third image frame is greater than the set threshold value, and sends fifth indication information to the server, wherein the fifth indication information is used for indicating that the response duration of the electronic device for the second user operation event is abnormal.
30. The method of claim 26, wherein the acquiring, by the detection module, the drawing completion time of the first image frame in response to the received first indication information comprises:
the detection module determines that the time when the first indication information is received is the drawing completion time of the first image frame.
31. The method according to claim 26, wherein the first indication information includes a drawing completion time of the first image frame.
32. The method of claim 26, wherein the first user operation is a click operation, a swipe operation, or a zoom operation.
33. A computer-readable storage medium comprising a computer program, which, when run on an electronic device, causes the electronic device to perform the method of any one of claims 17-25.
34. A computer-readable storage medium comprising a computer program, which, when run on an electronic device, causes the electronic device to perform the method of any one of claims 26-32.
CN202110912153.3A 2021-08-10 2021-08-10 Response time duration detection method and device Active CN113688019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110912153.3A CN113688019B (en) 2021-08-10 2021-08-10 Response time duration detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110912153.3A CN113688019B (en) 2021-08-10 2021-08-10 Response time duration detection method and device

Publications (2)

Publication Number Publication Date
CN113688019A CN113688019A (en) 2021-11-23
CN113688019B true CN113688019B (en) 2022-08-09

Family

ID=78579416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110912153.3A Active CN113688019B (en) 2021-08-10 2021-08-10 Response time duration detection method and device

Country Status (1)

Country Link
CN (1) CN113688019B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327127B (en) * 2021-11-27 2022-12-23 荣耀终端有限公司 Method and apparatus for sliding frame loss detection
CN116662130A (en) * 2022-11-21 2023-08-29 荣耀终端有限公司 Method for counting application use time length, electronic equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1976306A (en) * 2006-11-08 2007-06-06 华为技术有限公司 Testing method and testing device for media request/response time
CN105302701A (en) * 2014-06-23 2016-02-03 中兴通讯股份有限公司 Method, apparatus and device for testing reaction time of terminal user interface
CN107797904A (en) * 2017-09-12 2018-03-13 福建天晴数码有限公司 A kind of method and terminal for measuring the response time
CN110058997A (en) * 2019-03-12 2019-07-26 平安普惠企业管理有限公司 Application response time test method, device, computer equipment and storage medium
CN111090570A (en) * 2019-12-13 2020-05-01 Oppo广东移动通信有限公司 Method and device for measuring response time of terminal screen and terminal equipment
CN111338934A (en) * 2020-02-13 2020-06-26 北京字节跳动网络技术有限公司 Page refreshing test method and device, computer equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2728481A1 (en) * 2012-11-04 2014-05-07 Rightware Oy Evaluation of page load performance of web browser
CN107102936B (en) * 2017-05-27 2021-06-15 腾讯科技(深圳)有限公司 Fluency evaluation method, mobile terminal and storage medium
CN111858318B (en) * 2020-06-30 2024-04-02 北京百度网讯科技有限公司 Response time testing method, device, equipment and computer storage medium
CN112817831A (en) * 2021-01-13 2021-05-18 中国工商银行股份有限公司 Application performance monitoring method, device, computer system and readable storage medium


Also Published As

Publication number Publication date
CN113688019A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN112130742B (en) Full screen display method and device of mobile terminal
WO2021017889A1 (en) Display method of video call appliced to electronic device and related apparatus
CN109766036B (en) Message processing method and electronic equipment
CN113645351B (en) Application interface interaction method, electronic device and computer-readable storage medium
WO2021000881A1 (en) Screen splitting method and electronic device
WO2021169337A1 (en) In-screen fingerprint display method and electronic device
CN113254120B (en) Data processing method and related device
CN114363462B (en) Interface display method, electronic equipment and computer readable medium
CN113704205B (en) Log storage method, chip, electronic device and readable storage medium
WO2021238370A1 (en) Display control method, electronic device, and computer-readable storage medium
CN113688019B (en) Response time duration detection method and device
CN114995715B (en) Control method of floating ball and related device
CN113641271A (en) Application window management method, terminal device and computer readable storage medium
WO2022166435A1 (en) Picture sharing method and electronic device
CN112449101A (en) Shooting method and electronic equipment
CN115016697A (en) Screen projection method, computer device, readable storage medium, and program product
CN116048831B (en) Target signal processing method and electronic equipment
CN113438366A (en) Information notification interaction method, electronic device and storage medium
CN110609650A (en) Application state switching method and terminal equipment
CN115119048A (en) Video stream processing method and electronic equipment
CN114489469B (en) Data reading method, electronic equipment and storage medium
CN113050864A (en) Screen capturing method and related equipment
CN112286596A (en) Message display method and electronic equipment
WO2024109573A1 (en) Method for floating window display and electronic device
CN115016666A (en) Touch processing method, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant