CN116415951B - Method for displaying two-dimensional code and electronic equipment - Google Patents

Method for displaying two-dimensional code and electronic equipment

Info

Publication number
CN116415951B
CN116415951B CN202210109243.3A
Authority
CN
China
Prior art keywords
action
data
wrist
signal
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210109243.3A
Other languages
Chinese (zh)
Other versions
CN116415951A (en)
Inventor
邸皓轩
张�成
李丹洪
张晓武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to US18/001,751 priority Critical patent/US20240127218A1/en
Priority to PCT/CN2022/113601 priority patent/WO2023124129A1/en
Priority to EP22821834.3A priority patent/EP4227876A4/en
Publication of CN116415951A publication Critical patent/CN116415951A/en
Application granted granted Critical
Publication of CN116415951B publication Critical patent/CN116415951B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/30 Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q 20/32 Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
    • G06Q 20/327 Short range or proximity payments by means of M-devices
    • G06Q 20/3274 Short range or proximity payments by means of M-devices using a pictured code, e.g. barcode or QR-code, being displayed on the M-device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Strategic Management (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application provides a method for displaying a two-dimensional code and electronic equipment, wherein the method is executed by the electronic equipment and comprises the following steps: acquiring first sensor data, wherein the first sensor data are acquired at a first moment; if the first action of the user corresponding to the first moment is determined to be the preset action according to the first sensor data, determining whether the user has wrist turning action within a preset duration range from the first moment; and displaying a preset two-dimensional code page under the condition that the user has wrist turning action within the preset duration range. The method can make the process of presenting the two-dimensional code by the user simple and efficient.

Description

Method for displaying two-dimensional code and electronic equipment
Technical Field
The application relates to the technical field of electronics, in particular to a method for displaying a two-dimensional code and electronic equipment.
Background
With the wide application of electronic devices such as smartphones, people's lives have become more and more convenient. For example, cash no longer needs to be carried when purchasing goods, since the payment two-dimensional code in the mobile phone can be used directly; likewise, a bus card no longer needs to be carried when taking public transport, since the riding two-dimensional code in the mobile phone can be used directly, and so on.
Currently, when a user needs to pay or take a bus by using a two-dimensional code, a corresponding application (APP) is usually opened on the mobile phone, and then the two-dimensional code corresponding to the APP is opened. For example, in a payment scenario, the user must first open the corresponding payment APP, and only by clicking the "Payment" option in the APP can the corresponding payment two-dimensional code be called out; the operation steps are complicated and time-consuming.
Disclosure of Invention
The application provides a method for displaying a two-dimensional code and electronic equipment, which can enable a process of presenting the two-dimensional code by a user to be simple and efficient.
In a first aspect, the present application provides a method for displaying a two-dimensional code, the method being performed by an electronic device, including: acquiring first sensor data, wherein the first sensor data are acquired at a first moment; if the first action of the user corresponding to the first moment is determined to be the preset action according to the first sensor data, determining whether the user has wrist turning action within a preset duration range from the first moment; and displaying a preset two-dimensional code page under the condition that the user has wrist turning action within the preset duration range.
Alternatively, the first sensor data may be data acquired from at least one of a gyroscope sensor, an acceleration sensor, or a pressure sensor, such as at least one of gyroscope signal data, acceleration signal data, or pressure signal data.
Optionally, the wrist turning action may include a wrist turning action after extension, and the wrist turning action after extension may include a vertical-screen wrist turning action, a horizontal-screen wrist turning action, a reverse-buckle wrist turning action, an angled wrist turning action, or a hand-lifting inward wrist turning action.
When the first sensor data are acquired, the first time (i.e., the first acquisition time) at which the data are acquired may also be recorded. Then, when it is determined that the gesture action of the user at the first acquisition time is the preset action, the electronic device monitors whether a wrist turning action occurs within a preset duration range from the first acquisition time (for example, within 3 seconds), and if so, displays a two-dimensional code page. In this implementation, the displayed two-dimensional code page may be a default two-dimensional code, for example the payment two-dimensional code of a default payment APP; alternatively, the user may set a commonly used two-dimensional code, for example the payment two-dimensional code of a particular payment APP, in which case that payment two-dimensional code is displayed.
In this implementation, when the electronic device recognizes a preset action of the user from the acquired sensor data, it monitors whether a wrist turning action occurs within the preset duration range, and displays the two-dimensional code only when the wrist turning action occurs. This reduces the operation steps needed to call out the two-dimensional code and makes the process of presenting the two-dimensional code simple and efficient; at the same time, because the preset action and the wrist turning action are combined to jointly decide whether to display the two-dimensional code, the accuracy of the electronic device's display decision can be improved.
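The overall decision flow described above can be illustrated with the following Python sketch; the loop structure, function names, and callback hooks are illustrative assumptions rather than part of the claimed method, and only the 3-second window follows the example given above.

```python
import time

PRESET_WINDOW_S = 3.0  # assumed preset duration range (the text gives 3 seconds as an example)

def run_detection_loop(read_sensor_frame, is_preset_action, is_wrist_turn, show_qr_page):
    """Minimal sketch: watch for a preset action, then a wrist turn within the window.

    All four callables are hypothetical hooks standing in for the sensor pipeline,
    the preset-action model, the wrist-turn recognizer, and the UI layer.
    """
    while True:
        first_data = read_sensor_frame()          # first sensor data at the first moment
        if not is_preset_action(first_data):
            continue
        t_first = time.monotonic()                # the first acquisition time
        # Monitor for a wrist turning action within the preset duration range.
        while time.monotonic() - t_first <= PRESET_WINDOW_S:
            second_data = read_sensor_frame()     # second sensor data at the second moment
            if is_wrist_turn(second_data):
                show_qr_page()                    # display the preset two-dimensional code page
                break
```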
With reference to the first aspect, in some implementation manners of the first aspect, the determining whether the user has a wrist turning motion within a preset duration range from the first moment includes: acquiring second sensor data, wherein the second sensor data are acquired at a second moment, and the second moment is within a preset duration range after the first moment; and determining whether the second action of the user corresponding to the second moment is wrist turning action or not according to the second sensor data.
The second sensor data may include gyroscope signal data and acceleration signal data, and the acquisition time of the second sensor data is denoted as the second time (i.e., the second acquisition time). The second acquisition time is within a preset duration range after the first acquisition time; that is, the electronic device determines whether the user performs a wrist turning action within that range by analyzing the sensor data acquired within the preset duration range after the first acquisition time. In this way, as soon as the wrist turning action is determined, the electronic device can display the two-dimensional code page, which improves the efficiency of presenting the two-dimensional code to the user.
With reference to the first aspect, in some implementations of the first aspect, determining, according to the second sensor data, whether the second action of the user corresponding to the second moment is a wrist turning action includes: preprocessing the second sensor data, and determining whether the second motion is a suspected wrist turning motion, wherein the probability of the suspected wrist turning motion being the wrist turning motion is greater than or equal to a preset probability threshold; if the second motion is a suspected wrist turning motion, determining whether the second motion is a wrist turning motion.
When the electronic device analyzes the second sensor data, it may first preprocess the data to determine whether the second action is a suspected wrist turning action, and only when it is, further determine whether the second action is a wrist turning action after extension. In this way, the electronic device first screens out suspected wrist turning actions, which are relatively likely to be real wrist turning actions, and then determines whether a suspected wrist turning action is a real one, which improves the accuracy of the final recognition result.
With reference to the first aspect, in some implementations of the first aspect, the second sensor data includes gyroscope signal data and acceleration signal data, and the preprocessing the second sensor data to determine whether the second motion is a suspected wrist-turning motion includes: acquiring first sub-data from the gyroscope signal data, wherein the first sub-data is continuous data of a preset frame number in the gyroscope signal data; acquiring second sub-data from the acceleration signal data, wherein the second sub-data is a continuous signal with a preset frame number in the acceleration signal data, and the position of the first sub-data in the gyroscope signal data is the same as the position of the second sub-data in the acceleration signal data; if the first sub-data and the second sub-data meet the first preset condition, determining that the second action is a suspected wrist turning action.
For example, the electronic device may take the first frame of the gyroscope signal data as a starting point and intercept 100 frames as the first sub-data, take the first frame of the acceleration signal data as a starting point and intercept 100 frames as the second sub-data, and then determine whether the first sub-data and the second sub-data meet the first preset condition; if they do, the second action is determined to be a suspected wrist turning action. If they do not, the electronic device takes the second frame of the gyroscope signal data as the starting point and intercepts 100 frames as the first sub-data, takes the second frame of the acceleration signal data as the starting point and intercepts 100 frames as the second sub-data, and judges again, and so on; a sliding-window sketch of this interception is given below.
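A minimal sketch of this sliding-window interception, assuming the gyroscope and acceleration streams are sampled synchronously; the 100-frame window follows the example above, while the function name and the check_first_preset_condition callback are hypothetical.

```python
import numpy as np

WINDOW = 100  # preset frame count (about 1 second of data, per the example above)

def find_suspected_wrist_turn(gyro: np.ndarray, accel: np.ndarray, check_first_preset_condition):
    """Slide a 100-frame window over both streams at the same positions.

    gyro, accel: arrays of shape (n_frames, 3) with x/y/z angular velocity and acceleration;
    check_first_preset_condition: hypothetical callable implementing conditions 1-6.
    """
    n = min(len(gyro), len(accel))
    for start in range(0, n - WINDOW + 1):
        first_sub = gyro[start:start + WINDOW]     # first sub-data
        second_sub = accel[start:start + WINDOW]   # second sub-data, same position
        if check_first_preset_condition(first_sub, second_sub):
            return start, first_sub, second_sub    # suspected wrist turning action found
    return None
```

In a full pipeline, check_first_preset_condition could simply combine the end-state and main-peak checks sketched later in this description.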
Optionally, the first preset condition includes at least one of the following conditions:
condition 1: the angular velocity module value corresponding to the last frame signal of the first sub data is in a first interval;
condition 2: the acceleration module value corresponding to the last frame signal of the second sub data is in a second interval;
condition 3: the z-axis acceleration value corresponding to the last frame signal of the second sub-data is in a third interval or smaller than a first threshold value;
condition 4: the first sub-data has a main peak signal;
condition 5: the main peak signal is positioned in the middle area of the first sub data;
condition 6: the signal distribution before the main peak signal is in a monotonically increasing trend, the signal distribution after the main peak signal is in a monotonically decreasing trend, or the signal distribution before the main peak signal is in a monotonically decreasing trend, and the signal distribution after the main peak signal is in a monotonically increasing trend.
The first interval may be an [N1, N2] interval, such as [0, 3]; the second interval may be an [N3, N4] interval, such as [0.8, 1.2]; the third interval may be an [N5, N6] interval, such as [-0.707, 0.707]; and the first threshold may be -0.707. In this way, the first preset condition screens whether the gesture action corresponding to the first sub-data and the second sub-data is likely to be a wrist turning action, which improves the accuracy of the final recognition result.
With reference to the first aspect, in some implementations of the first aspect, the determining whether the second motion is a wrist turning motion includes: identifying the first sub data and the second sub data through a preset first model to obtain a first identification result; and under the condition that the first identification result is a first preset result, determining the second motion as the wrist turning motion.
The first model may be any one of a recurrent neural network (recurrent neural network, RNN) model, a long short-term memory (long short-term memory, LSTM) network model, and a gated recurrent unit (gated recurrent unit, GRU) network model. After determining that the second action is a suspected wrist turning action, the electronic device may continue to process the first sub-data and the second sub-data; for example, it may perform feature extraction on the first sub-data and the second sub-data to obtain a feature set, and then input the feature set into the first model to obtain the first recognition result. Alternatively, the first recognition result may use 0 to indicate a non wrist turning action, 1 to indicate a wrist turning action after extension, and 2 to indicate a wrist turning action after retraction; then, when the first recognition result is 1, it can be determined that the second action is the wrist turning action after extension. The electronic device thus first screens out suspected wrist turning actions that are likely to be real wrist turning actions, and then determines whether each suspected wrist turning action is a real one, which improves the accuracy of the final recognition result.
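Purely as an illustration, a first model of the GRU kind could be sketched with PyTorch as follows; the layer sizes and the per-frame feature dimension are assumptions (feature_dim=9 only loosely reflects the nine feature classes of Table 1), and only the output labels 0/1/2 follow the convention above.

```python
import torch
import torch.nn as nn

class WristTurnGRU(nn.Module):
    """Sketch of a first model: classifies a windowed feature sequence as
    0 = non wrist turn, 1 = wrist turn after extension, 2 = wrist turn after retraction."""

    def __init__(self, feature_dim: int = 9, hidden: int = 32, num_classes: int = 3):
        super().__init__()
        self.gru = nn.GRU(feature_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, feature_dim) features extracted from the first and second sub-data
        _, h_n = self.gru(x)        # h_n: (num_layers, batch, hidden), final hidden state
        return self.head(h_n[-1])   # logits over the three recognition results

# Hypothetical usage on one 100-frame window of extracted features:
model = WristTurnGRU()
features = torch.randn(1, 100, 9)
first_recognition_result = int(model(features).argmax(dim=-1))
```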
With reference to the first aspect, in some implementation manners of the first aspect, determining, according to the first sensor data, a first action of a user corresponding to the first time as a preset action includes: identifying the first sensor data through a preset second model to obtain a second identification result; and determining the first action as a preset action under the condition that the second identification result is a second preset result.
Optionally, the preset action may include a double click action, a triple click action, or a shake action of the electronic device.
The second model may be a decision tree model, and the second model is used to recognize the first sensor data, so as to obtain the second recognition result. Alternatively, the second recognition result may use 0 to represent a non-gesture action, 1 to represent a double-click action on the back of the electronic device, 2 to represent a triple-click action on the back of the electronic device, and 3 to represent a shaking action of the handheld electronic device. Then, when the second recognition result is 1, 2 or 3, the first action is determined to be the preset action. Judging the first sensor data through such a model can therefore improve the accuracy of the recognition result.
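A sketch of how such a second model might be used with scikit-learn; the feature dimension and the training data are placeholders rather than anything specified in the application, and only the label convention 0-3 follows the text.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Placeholder training set: one feature vector per labelled sample of first sensor data.
# Labels follow the convention above: 0 non-gesture, 1 double click, 2 triple click, 3 shake.
X_train = np.random.rand(200, 12)        # hypothetical features (e.g. accel/gyro statistics)
y_train = np.random.randint(0, 4, 200)   # hypothetical labels

second_model = DecisionTreeClassifier(max_depth=6).fit(X_train, y_train)

def first_action_is_preset(first_sensor_features: np.ndarray) -> bool:
    """A second recognition result of 1, 2 or 3 means the first action is a preset action."""
    result = int(second_model.predict(first_sensor_features.reshape(1, -1))[0])
    return result in (1, 2, 3)
```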
With reference to the first aspect, in some implementations of the first aspect, displaying the preset two-dimensional code page includes: if the first action is a double-click action of the user on the back of the electronic equipment, displaying a first two-dimensional code page; or if the first action is a three-click action of the user on the back of the electronic equipment, displaying a second two-dimensional code page; or if the first action is the shaking action of the electronic equipment held by the user, displaying a third two-dimensional code page.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: displaying a first interface comprising display setting controls, wherein the display setting controls comprise a setting control for double-click actions, a setting control for triple-click actions and a setting control for shaking actions; receiving a first operation of a user on a first interface for displaying a setting control; in response to the first operation, the first two-dimensional code page is displayed when the user performs a double-click action on the back of the electronic device, the second two-dimensional code page is displayed when the user performs a triple-click action on the back of the electronic device, and the third two-dimensional code page is displayed when the user performs a shaking action on the electronic device.
The electronic device can map different gesture actions to different two-dimensional code pages, so that when the user wants to show a particular two-dimensional code page, the user only needs to perform the corresponding gesture action. For example, the double-click action on the back of the electronic device may correspond to the payment two-dimensional code of one payment APP, the triple-click action on the back of the electronic device may correspond to the payment two-dimensional code of another payment APP, and the shaking action of the electronic device may correspond to the health code. The electronic device can also provide a setting interface (i.e., the first interface) so that the user can conveniently configure these mappings for different requirements, which improves the user experience.
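One simple way such a user-configurable mapping could be represented is sketched below; the page identifiers are invented placeholders.

```python
from typing import Optional

# Hypothetical mapping from the second recognition result (preset action) to a QR page.
# A real device would store deep links or activity names rather than these placeholder strings.
qr_page_for_action = {
    1: "payment_qr_page_app_a",   # double click on the back -> first two-dimensional code page
    2: "payment_qr_page_app_b",   # triple click on the back -> second two-dimensional code page
    3: "health_code_page",        # shake action             -> third two-dimensional code page
}

def page_for(second_recognition_result: int) -> Optional[str]:
    return qr_page_for_action.get(second_recognition_result)
```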
With reference to the first aspect, in some implementations of the first aspect, after displaying the preset two-dimensional code page, the method further includes: acquiring third sensor data; processing the third sensor data to obtain a third identification result; and displaying a first page under the condition that a third action of a user corresponding to the third sensor data is represented by a third identification result as a wrist turning action after retraction, wherein the first page is displayed by the electronic equipment before the two-dimensional code page is displayed.
Because the electronic device continuously acquires the gyroscope signal data and the acceleration signal data and performs the recognition processing, there may be a moment when the recognition result of the wrist turning action obtained by the electronic device is 2 (i.e., the third action represented by the third recognition result is the wrist turning action after retraction). In this scenario, the electronic device may obtain the data of the current display frame on the display screen from the video memory, and if a two-dimensional code page is currently displayed, the electronic device may close the two-dimensional code page and display the page (i.e., the first page) that was displayed before the two-dimensional code page. Therefore, after the user finishes scanning the code and retracts the electronic device, the electronic device can automatically close the two-dimensional code page, which makes closing the two-dimensional code simpler for the user.
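A small sketch of this close-on-retraction behaviour; get_current_page, show_page, and the page names are hypothetical UI hooks, while the result codes 1/2 follow the convention given earlier.

```python
WRIST_TURN_AFTER_EXTENSION = 1
WRIST_TURN_AFTER_RETRACTION = 2

def on_third_recognition_result(result: int, get_current_page, show_page, first_page: str):
    """If the retraction wrist turn is recognised while a two-dimensional code page is on
    screen, close it and restore the page that was shown before (the first page)."""
    if result == WRIST_TURN_AFTER_RETRACTION and get_current_page() == "qr_code_page":
        show_page(first_page)
```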
In a second aspect, the present application provides an apparatus, which is included in an electronic device, the apparatus having a function of implementing the above first aspect and the behavior of the electronic device in the possible implementation manners of the above first aspect. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above. Such as a receiving module or unit, a processing module or unit, etc.
In a third aspect, the present application provides an electronic device, including: a processor, a memory, and an interface; the processor, the memory and the interface cooperate with each other such that the electronic device performs any one of the methods of the technical solutions of the first aspect.
In a fourth aspect, the present application provides a chip comprising a processor. The processor is configured to read and execute a computer program stored in the memory to perform the method of the first aspect and any possible implementation thereof.
Optionally, the chip further comprises a memory, and the memory is connected with the processor through a circuit or a wire.
Further optionally, the chip further comprises a communication interface.
In a fifth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, which when executed by a processor causes the processor to perform any one of the methods of the first aspect.
In a sixth aspect, the application provides a computer program product comprising: computer program code which, when run on an electronic device, causes the electronic device to perform any one of the methods of the solutions of the first aspect.
Drawings
Fig. 1 is a schematic diagram of a process of presenting a two-dimensional code page in the related art;
fig. 2 is a schematic structural diagram of an example of an electronic device according to an embodiment of the present application;
FIG. 3 is a block diagram of a software architecture of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic diagram of various wrist turning actions of an example code scanning scene provided by the embodiment of the application;
FIG. 5 (a) is a diagram illustrating an example of an operation interface for turning on the intelligent sensing function according to the embodiment of the present application;
FIG. 5 (b) is a diagram illustrating another example of an operation interface for turning on the intelligent sensing function according to the present application;
FIG. 6 is a flowchart illustrating an exemplary method for displaying two-dimensional codes according to an embodiment of the present application;
FIG. 7 is a signal diagram of exemplary gyroscope signal data and acceleration signal data provided by an embodiment of the present application;
FIG. 8 is a signal diagram of the modulus of the three-axis angular velocity radian values and the modulus of the three-axis acceleration values according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an exemplary GRU model according to an embodiment of the present application;
FIG. 10 is a flowchart illustrating another method for displaying two-dimensional codes according to an embodiment of the present application;
FIG. 11 is a signal distribution diagram of acceleration signal data for a back of an electronic device according to an embodiment of the present application;
FIG. 12 is a flowchart illustrating a method for displaying two-dimensional codes according to another embodiment of the present application;
FIG. 13 is a diagram illustrating an operation interface for turning on the intelligent sensing function according to another embodiment of the present application;
fig. 14 is a flowchart illustrating another method for displaying a two-dimensional code according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. Wherein, in the description of the embodiments of the present application, unless otherwise indicated, "/" means or, for example, a/B may represent a or B; "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more than two.
The terms "first," "second," "third," and the like, are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", or a third "may explicitly or implicitly include one or more such feature.
At present, electronic devices such as mobile phones are increasingly widely used, and a plurality of APPs can be installed on them, for example payment APPs, health code APPs, etc. Suppose the user currently needs to pay with a payment APP. As shown in fig. 1, the user may tap the APP icon to enter the application interface, then tap the payment option on that interface, after which the payment two-dimensional code of the APP is displayed on the display interface of the mobile phone. At this time, the user can aim the payment two-dimensional code at the code scanning port provided by the merchant to complete the code-scanning payment; after the payment is finished, the user also needs to close the page of the payment two-dimensional code or close the APP. For other types of APP, similar operation steps are needed when the user wants to use a two-dimensional code. Therefore, the operation steps for calling out the two-dimensional code are complicated and time-consuming.
In view of this, the embodiment of the application provides a method for displaying a two-dimensional code, which can determine whether to display the two-dimensional code by identifying a gesture action when a user operates a mobile phone, and display a corresponding two-dimensional code page when determining to display the two-dimensional code, so as to reduce an operation step of calling the two-dimensional code by the user, and make a process of presenting the two-dimensional code by the user simple and efficient. It should be noted that, the method for displaying the two-dimensional code provided by the embodiment of the application can be applied to mobile phones, tablet computers, wearable devices and other electronic devices capable of installing APP or having corresponding two-dimensional code functions, and the embodiment of the application does not limit the specific type of the electronic device.
Fig. 2 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processing unit (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The structures of the antennas 1 and 2 in fig. 2 are only one example. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light-emitting diode, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A is of various types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a capacitive pressure sensor comprising at least two parallel plates with conductive material. The capacitance between the electrodes changes when a force is applied to the pressure sensor 180A. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the touch operation intensity according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions. For example: and executing an instruction for checking the short message when the touch operation with the touch operation intensity smaller than the first pressure threshold acts on the short message application icon. And executing an instruction for newly creating the short message when the touch operation with the touch operation intensity being greater than or equal to the first pressure threshold acts on the short message application icon.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. It can also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it may be determined that there is an object in the vicinity of the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object in the vicinity of the electronic device 100. The electronic device 100 can detect that the user holds the electronic device 100 close to the ear by using the proximity light sensor 180G, so as to automatically extinguish the screen for the purpose of saving power. The proximity light sensor 180G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
In one embodiment, the electronic device 100 also includes a Sensor Hub, also known as a sensor hub or sensor co-processor, which is primarily connected to the sensor module 180 and processes its data with low power consumption. The Sensor Hub may include, but is not limited to, low-power processing modules or processing circuits such as application processors, coprocessors, microprocessors (micro-programmed control unit, MCU), and the like. In general, the Sensor Hub can process data of sensors such as the above-described pressure sensor 180A, gyro sensor 180B, air pressure sensor 180C, magnetic sensor 180D, acceleration sensor 180E, distance sensor 180F, proximity sensor 180G, fingerprint sensor 180H, temperature sensor 180J, touch sensor 180K, ambient light sensor 180L, bone conduction sensor 180M, and the like, and perform fusion processing of the respective sensor data.
According to different electronic devices and service scene requirements, the current Sensor Hub is mainly divided into three types: one is to place Sensor Hub as a separate chip between the application processor and various sensors; the other is to combine Sensor Hub with various sensors, receive the data of various sensors for fusion, and then provide the fused data to an application processor; and the application processor integrates the Sensor Hub, and various sensors provide data to the Sensor Hub in the application processor after the fusion processing of the Sensor Hub.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Fig. 3 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each with distinct roles and branches. The layers communicate with each other through a software interface. In some embodiments, the Android system may be divided into several layers, from top to bottom, an application layer, a framework layer, a Sensor Hub layer, a system layer, and a hardware layer, respectively.
As shown in fig. 3, the application layer may include various applications installed in the electronic device 100, such as payment APPs, a health code APP, etc. The framework layer may include a decision module for making an instruction decision according to data transmitted by the upper layer or the lower layer, and instructing the upper layer or the lower layer to execute a corresponding instruction action. The algorithm module in the Sensor Hub layer is used for calling the sensor algorithm (SensorAlg) to process the sensor data from the hardware-layer sensors, and the processing result is sent to the decision module for decision-making. The display driver of the system layer can receive display data transmitted by the upper layer and transmit it to the display screen for display. The hardware layer may include various hardware modules in the electronic device 100, such as a gyroscope sensor, an acceleration sensor, a display screen, and the like.
For easy understanding, the following embodiments of the present application will take an electronic device having a structure shown in fig. 2 and fig. 3 as an example, and specifically describe a method for displaying a two-dimensional code provided by the embodiments of the present application in conjunction with the accompanying drawings and application scenarios.
When the user aims the called-out two-dimensional code at the code scanning port provided by the merchant, as shown in fig. 4, the direction of the code scanning port may be vertically forward, horizontally upward, or obliquely upward, and the user accordingly needs to hold the display screen of the electronic device vertically, horizontally, face down, or at an angle toward the code scanning port. Therefore, in the embodiment of the application, based on the gesture action of the user operating the mobile phone during code scanning, the sensor data of the gyroscope sensor and the acceleration sensor in the electronic device are analyzed to determine whether the user performs a wrist turning action corresponding to code scanning, for example the vertical-screen wrist turning action, the horizontal-screen wrist turning action, the reverse-buckle wrist turning action, or the angled wrist turning action shown in fig. 4; when the electronic device determines that the user performs such a wrist turning action, it can automatically pop up and display the corresponding two-dimensional code. In the embodiment of the application, the vertical-screen, horizontal-screen, reverse-buckle, or angled wrist turning action performed while the user holds the electronic device is referred to as a wrist turning action after extension, and the action of retracting the wrist afterwards is referred to as a wrist turning action after retraction; the following embodiment describes the method of displaying a two-dimensional code by taking the wrist turning action after extension as an example.
The function of automatically popping up and displaying the two-dimensional code can be customized by the user: if the user wants to use the function, its switch can be turned on through a settings path. As shown in fig. 5, the settings page has a "smart sensing" option; after the user taps the option, the phone jumps to the switch page of the smart sensing function (i.e., the function of automatically popping up and displaying the two-dimensional code), and on that page the user can turn on the smart sensing function by tapping the switch control. In one embodiment, as shown in fig. 5 (a), when the user turns on the function, a default two-dimensional code is used, for example the payment two-dimensional code of a default payment APP, which the electronic device will automatically pop up and display later. In another embodiment, as shown in fig. 5 (b), when the user turns on the function, the user can select a two-dimensional code that the user uses more commonly as the two-dimensional code automatically popped up and displayed by the electronic device; for example, if the payment two-dimensional code of a particular payment APP is selected, that is the two-dimensional code the electronic device will automatically pop up and display later.
After the user opens the functions in the electronic device, the electronic device can continuously acquire the sensor data of the gyroscope sensor and the acceleration sensor to determine whether the user has wrist turning action corresponding to the code scanning, and then the corresponding two-dimensional code is displayed. Specifically, fig. 6 is a flowchart of a method for displaying a two-dimensional code according to an embodiment of the present application, where the method is executed by an electronic device and includes:
s101, acquiring sensor data.
Alternatively, the sensor data may include sensor data a and sensor data B. The sensor data a may be data acquired by a gyro sensor, for example, gyro signal data, and the sensor data B may be data acquired by an acceleration sensor, for example, acceleration signal data. The gyroscope signal data is typically the angular velocity of three axes (i.e., x, y, and z axes) as the electronic device moves, where the three axes are the coordinate axes of the gyroscope sensor's own coordinate system; the acceleration signal data is typically the acceleration of the electronic device in three axes (i.e., x, y, and z axes) as it moves, where the three axes are coordinate axes of the acceleration sensor's own coordinate system. For example, the signal distribution diagrams of the gyroscope signal data and the acceleration signal data acquired by the electronic device may be referred to fig. 7, where the horizontal axis represents the number of signal frames, the vertical axis represents the magnitudes of the x, y, and z-axis angular velocities corresponding to each frame of signal in the gyroscope signal data diagram of fig. 7, and the horizontal axis represents the number of signal frames in the acceleration signal data diagram of fig. 7, and the vertical axis represents the magnitudes of the x, y, and z-axis accelerations corresponding to each frame of signal.
The electronic device continuously acquires the sensor data at a certain frequency, and continuously executes the following steps when the sensor data is acquired. In one embodiment, because the user typically has a requirement of presenting the two-dimensional code after the electronic device is unlocked, the electronic device may acquire the sensor data a and the sensor data B when the electronic device is in the unlocked and bright screen state.
S102, preprocessing the sensor data, and determining whether gesture actions of a user are suspected wrist turning actions. If yes, S103 is executed.
The suspected wrist turning action may be understood as that the probability of determining the wrist turning action is greater than or equal to a preset probability threshold (for example, 90%). In this embodiment, the electronic device may perform preprocessing on sensor data, for example, the sensor data a and the sensor data B, for example, intercept first sub-data in the sensor data a and second sub-data in the sensor data B, so as to determine whether the gesture motion of the corresponding user is a suspected wrist-turning motion according to the first sub-data and the second sub-data. That is, the electronic device screens out the suspected wrist turning motion with relatively high possibility of the wrist turning motion, and then subsequently determines whether the suspected wrist turning motion is a real wrist turning motion, thereby improving the accuracy of the final recognition result.
The process of the electronic device determining whether the gesture action of the user is a suspected wrist-turning action may be as follows:
Taking the sensor data a as gyroscope signal data and the sensor data B as acceleration signal data as examples, after the electronic device obtains the gyroscope signal data and the acceleration signal data, the electronic device can perform filtering processing on the gyroscope signal data and the acceleration signal data to remove noise in the signal data. In one embodiment, the electronic device may filter the gyroscope signal data and the acceleration signal data using a mean filtering method or other filtering methods. After obtaining the filtered gyroscope signal data and the acceleration signal data, the electronic device may analyze the gyroscope signal data and the acceleration signal data to determine whether a suspected wrist-turning motion is performed.
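A minimal NumPy sketch of the mean filtering mentioned above, implemented as a moving average; the 5-frame window length is an arbitrary illustrative choice.

```python
import numpy as np

def mean_filter(signal: np.ndarray, win: int = 5) -> np.ndarray:
    """Moving-average filter applied independently to each axis of an (n_frames, 3) signal."""
    kernel = np.ones(win) / win
    # mode="same" keeps the output aligned with the input frame indices.
    return np.stack(
        [np.convolve(signal[:, k], kernel, mode="same") for k in range(signal.shape[1])],
        axis=1,
    )

# Hypothetical usage on raw streams:
# gyro_filtered = mean_filter(gyro_raw)
# accel_filtered = mean_filter(accel_raw)
```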
As one implementation, the electronic device may determine whether it is a suspected wrist-turning action in the following manner: for the gyroscope signal data, the electronic device takes a first frame signal as a starting point, intercepts a signal of a preset frame number (for example, 100 frames, namely, 1 second signal data) as a first signal segment A (namely, first sub data); for the acceleration signal data, the electronic device also takes the first frame signal as a starting point, intercepts 100 frame signals as a first signal segment B (i.e. second sub-data), and performs analysis and judgment on the first signal segment a and the first signal segment B, where a specific analysis process may be as follows:
Because the electronic device is in a relatively static state when the user holds it to scan the code, the electronic device takes the last frame signal of the first signal segment. Assuming that the three-axis angular velocity radian values corresponding to the last frame signal of the first signal segment A are (x1, y1, z1), in units of rad/s, the electronic device may calculate the modulus value M1 of the three-axis angular velocity radian values from (x1, y1, z1), for example according to M1 = √(x1² + y1² + z1²), and then determine whether M1 is within the [N1, N2] interval (condition 1); for example, the [N1, N2] interval may be the [0, 5] interval, or alternatively the [0, 4] or [0, 3] interval, and the modulus M1 is approximately 0 when the electronic device is at rest.
Assuming that the triaxial acceleration values corresponding to the last frame signal of the first signal segment B are (x2, y2, z2), in units of m/s², the electronic device may determine the modulus M2 of the triaxial acceleration values from (x2, y2, z2), for example according to M2 = √(x2² + y2² + z2²), then normalize M2 (e.g., by the gravitational acceleration, so that the value at rest is close to 1), and determine whether the normalized M2 is in the [N3, N4] interval (condition 2); for example, the [N3, N4] interval may be the [0.6, 1.5] interval, or optionally the [0.8, 1.2] interval, and the normalized modulus M2 is approximately 1 when the electronic device is at rest.
Meanwhile, as shown in fig. 4, when the direction of the code scanning port is vertically forward, the display screen of the electronic device generally faces forward, and the front-back or left-right inclination angle of the display screen is within 45°; the electronic device may therefore further determine whether z2 in the triaxial acceleration values is in the [N5, N6] interval (condition 3), for example the [-0.707, 0.707] interval. When the direction of the code scanning port is horizontally upward, the display screen of the electronic device generally faces downward, and the inclination angle of the display screen is within 45°; the electronic device may then further determine whether z2 in the three-axis acceleration values is less than a first threshold (condition 3), for example -0.707.
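Conditions 1 to 3 on the end state can be sketched as follows; the interval bounds follow the examples in the text, and the assumption that the acceleration values have already been normalized so that 1.0 corresponds to roughly 1 g is mine.

```python
import numpy as np

def check_end_state(first_sub: np.ndarray, second_sub: np.ndarray) -> bool:
    """Conditions 1-3, evaluated on the last frame of the 100-frame windows.

    first_sub: (100, 3) gyroscope angular velocities in rad/s.
    second_sub: (100, 3) accelerations, assumed already normalized so that 1.0 ~ 1 g.
    """
    x1, y1, z1 = first_sub[-1]
    x2, y2, z2 = second_sub[-1]
    m1 = np.sqrt(x1**2 + y1**2 + z1**2)   # modulus of the last-frame angular velocity
    m2 = np.sqrt(x2**2 + y2**2 + z2**2)   # modulus of the last-frame (normalized) acceleration
    cond1 = 0.0 <= m1 <= 3.0              # condition 1: [N1, N2] = [0, 3]
    cond2 = 0.8 <= m2 <= 1.2              # condition 2: [N3, N4] = [0.8, 1.2]
    # condition 3: screen roughly facing forward ([-0.707, 0.707]) or facing downward (< -0.707)
    cond3 = (-0.707 <= z2 <= 0.707) or (z2 < -0.707)
    return cond1 and cond2 and cond3
```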
Conditions 1 to 3 above are determined from the end state of the electronic device; in addition, the analysis can also draw on the process of the user extending the electronic device to scan the code. Since the user generally neither extends the electronic device extremely quickly nor extremely slowly when scanning a code, the first signal segment A (i.e., the gyroscope signal data) will generally have a main peak signal (condition 4), the position of the main peak signal will generally be located in the middle area of the first signal segment A (condition 5), and the signal distribution before the main peak signal will have a monotonically increasing trend while the signal distribution after it has a monotonically decreasing trend, or the signal distribution before the main peak signal will have a monotonically decreasing trend while the signal distribution after it has a monotonically increasing trend (condition 6).
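A rough sketch of how conditions 4 to 6 might be checked on the gyroscope window, using the per-frame angular-velocity modulus; the notion of a main peak, the "middle area" bounds, and the tolerance on monotonicity are illustrative assumptions, and only the rising-then-falling form of condition 6 is shown.

```python
import numpy as np

def check_main_peak(first_sub: np.ndarray) -> bool:
    """Conditions 4-6: one dominant peak, roughly centered, with monotonic flanks."""
    mag = np.linalg.norm(first_sub, axis=1)       # per-frame angular-velocity modulus
    peak = int(np.argmax(mag))
    n = len(mag)
    cond4 = mag[peak] > 2.0 * mag.mean()          # assumed notion of "has a main peak"
    cond5 = n // 4 <= peak <= 3 * n // 4          # assumed "middle area" of the window
    if not (cond4 and cond5):
        return False
    before = np.diff(mag[: peak + 1])
    after = np.diff(mag[peak:])
    # condition 6: rising before the peak and falling after it, tolerating a few outlier frames;
    # the mirror-image pattern described in the text would be handled analogously.
    return float((before >= 0).mean()) > 0.8 and float((after <= 0).mean()) > 0.8
```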
In summary, the embodiment of the application provides 6 judgment conditions:
condition 1: whether M1 is in the [ N1, N2] interval;
condition 2: whether M2 is in the interval [ N3, N4] interval;
condition 3: whether z2 is in the [ N5, N6] interval, or whether z2 is less than a first threshold;
condition 4: the first signal segment A has a main peak signal;
Condition 5: the position of the main peak signal is located in the middle area of the first signal section A;
condition 6: the signal distribution before the main peak signal is in a monotonically increasing trend, the signal distribution after the main peak signal is in a monotonically decreasing trend, or the signal distribution before the main peak signal is in a monotonically decreasing trend, and the signal distribution after the main peak signal is in a monotonically increasing trend.
Then, when condition 4, condition 5, condition 6, condition 1, condition 2, and condition 3 are all satisfied, it may be determined that the gesture action of the user corresponding to the first signal segment A and the first signal segment B is a suspected wrist turning action. For example, the electronic device may output 1 when the action is a suspected wrist turning action, and output 0 when it is not.
Fig. 8 is a schematic diagram of the modulus of the triaxial angular velocity radian values corresponding to each frame of the gyroscope signal and the modulus of the triaxial acceleration values corresponding to each frame of the acceleration signal; in both diagrams the horizontal axis represents the signal frame number and the vertical axis represents the corresponding modulus value. In the gyroscope signal data and the acceleration signal data shown in fig. 7, the signal data corresponding to the rectangular dashed-line frames may be the first signal segment A and the first signal segment B, respectively, and the modulus of the triaxial angular velocity radian values of the first signal segment A and the modulus of the triaxial acceleration values of the first signal segment B may be the data in the rectangular dashed-line frames in fig. 8. As can be seen from fig. 7, the z value of the last frame signal of the first signal segment B is less than -0.707, the first signal segment A has a main peak signal located in the middle area of the first signal segment A, the signal distribution before the main peak signal has a monotonically decreasing trend, and the signal distribution after the main peak signal has a monotonically increasing trend; as can be seen from fig. 8, the modulus of the last frame signal of the first signal segment A is in the interval [0, 3], and the modulus of the last frame signal of the first signal segment B is in the interval [0.8, 1.2]. That is, the first signal segment A and the first signal segment B satisfy the above conditions, so it is determined that the current gesture action of the user is a suspected wrist turning action.
When determining whether the above conditions are satisfied, the electronic device may determine in the order of conditions 4, 5, 6, 1,2, and 3, or may determine in the order of 5, 4, 1, 6, 2, and 3, that is, the determination order of the conditions is not limited. And if a certain condition of the sequential judgment is not met, the judgment of the subsequent condition can be omitted; for example, with respect to the order of the conditions 4, 5, 6, 1,2, 3, if the condition 4 is not satisfied, the condition judgment after the condition 5 is no longer performed, or if the condition 4 is satisfied but the condition 5 is not satisfied, the condition judgment after the condition 6 is no longer performed, and so on.
Through the above analysis and judgment process, the electronic device can determine whether the gesture action of the user is a suspected wrist-turning action according to the first signal segment A and the first signal segment B. If not, for the gyroscope signal data, the electronic device takes the second frame signal as the starting point and intercepts 100 frames of signals as a second signal segment C; for the acceleration signal data, the electronic device also takes the second frame signal as the starting point and intercepts 100 frames of signals as a second signal segment D. The two second signal segments are then analyzed in the same way to determine whether the corresponding gesture action of the user is a suspected wrist-turning action, and so on, as sketched below.
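A sketch of this one-frame-at-a-time sliding window over the two sensor streams, assuming the streams are NumPy arrays of equal frame count and reusing the is_suspected_wrist_turn helper from the sketch above:

```python
def sliding_segments(gyro_data, acc_data, window=100):
    """Yield aligned 100-frame segments, advancing the start frame by one each time."""
    n = min(len(gyro_data), len(acc_data))
    for start in range(0, n - window + 1):
        yield gyro_data[start:start + window], acc_data[start:start + window]

# usage: stop at the first segment pair that looks like a suspected wrist turn
for seg_a, seg_b in sliding_segments(gyro_data, acc_data):   # gyro_data, acc_data: assumed sensor buffers
    if is_suspected_wrist_turn(seg_a, seg_b):
        break
```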
It should be noted that the moments corresponding to the i-th signal segment of the gyroscope signal data and the i-th signal segment of the acceleration signal data (i is a natural number) are the same; therefore, the gesture actions of the user corresponding to the first signal segment A and the first signal segment B are the same action.
S103, determining whether the suspected wrist turning action is an extended wrist turning action, if so, executing S104.
After the electronic device determines the suspected wrist turning action, the electronic device may continue to process signal data corresponding to the suspected wrist turning action to determine whether the suspected wrist turning action is an extended wrist turning action.
Assuming that the signal data corresponding to the suspected wrist-turning action are the first signal segment A and the first signal segment B, then for the j-th frame signal of the first signal segment A and the j-th frame signal of the first signal segment B (where j is greater than or equal to 1 and less than or equal to 100), the electronic device may perform feature extraction respectively to obtain a corresponding feature set. The extracted features may be used to characterize the attitude change information of the electronic device, so that the electronic device can further determine whether the user has performed a wrist-turning action.
Illustratively, when the electronic device performs feature extraction on the j-th frame signal of the first signal segment A and the j-th frame signal of the first signal segment B, the 9 classes of features shown in Table 1 may be obtained.
TABLE 1
Taking j = 1 as an example, assume that the triaxial angular velocity radian value corresponding to the 1st frame signal of the first signal segment A is (x3, y3, z3), and the triaxial acceleration value corresponding to the 1st frame signal of the first signal segment B is (x4, y4, z4). The modulus of the triaxial angular velocity radian value is √(x3² + y3² + z3²), and the modulus of the triaxial acceleration value is √(x4² + y4² + z4²). In addition, since the acceleration sensor has its own coordinate system (including an x-axis, a y-axis, and a z-axis), the direction of its z-axis is generally not perpendicular to the ground; therefore, the electronic device can determine the acceleration in the direction perpendicular to the ground according to the triaxial acceleration values (x4, y4, z4). For example, the gravitational acceleration in the direction perpendicular to the ground can be obtained by mapping between the coordinate system of the electronic device (i.e., the attitude of the mobile phone or tablet) and the reference coordinate system (the earth coordinate system). For the triaxial velocity, the velocity of any point on an object performing fixed-axis circular motion is called the linear velocity, and the electronic device can determine the triaxial velocity according to the triaxial angular velocity radian values (x3, y3, z3); for example, the triaxial linear velocity is determined from the relation (x3, y3, z3) × r, where r is the radius of the circular motion. For the distribution of gravity on the three axes, the embodiment of the present application can obtain Euler angles by referring to a six-axis fusion algorithm, and map the standard gravitational acceleration onto the three axes according to the Euler angles to obtain the distribution of gravity on the three axes. For the rotation matrix, a matrix that, when multiplied by a vector, changes the direction of the vector but not its magnitude and preserves chirality is called a rotation matrix; the embodiment of the present application calculates the rotation matrix from the reference coordinate system (the earth coordinate system) to the coordinate system of the electronic device. For the quaternion, each quaternion is a linear combination of 1, m, n, and k, i.e., a quaternion can generally be expressed as a + bm + cn + dk, where a, b, c, and d are real numbers; the geometric meaning of m, n, and k can be understood as rotations, and the embodiment of the present application calculates the rotation from the reference coordinate system (the earth coordinate system) to the coordinate system of the electronic device, where the m rotation represents a rotation from the positive z-axis toward the positive y-axis in the plane where the z-axis and y-axis intersect, the n rotation represents a rotation from the positive x-axis toward the positive z-axis in the plane where the x-axis and z-axis intersect, and the k rotation represents a rotation from the positive y-axis toward the positive x-axis in the plane where the y-axis and x-axis intersect.
Then, for a 100-frame signal, there are 100 × 28 features in total, and these 100 × 28 features are referred to herein as a feature set. It should be noted that the features extracted by the electronic device are not limited to the above 9 classes of features; there may be more or fewer than these 9 classes, which is not limited by the embodiment of the present application.
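A minimal per-frame sketch of some of these features, assuming NumPy and an assumed rotation radius r; the vertical acceleration, Euler angles, rotation matrix, and quaternion would come from a six-axis fusion algorithm and are only stubbed here with a crude gravity estimate:

```python
import numpy as np

def frame_features(gyro_xyz, acc_xyz, r=0.1):
    """gyro_xyz = (x3, y3, z3) in rad/s, acc_xyz = (x4, y4, z4) in g; r is an assumed radius."""
    gyro = np.asarray(gyro_xyz, dtype=float)
    acc = np.asarray(acc_xyz, dtype=float)

    gyro_mod = np.linalg.norm(gyro)          # sqrt(x3^2 + y3^2 + z3^2)
    acc_mod = np.linalg.norm(acc)            # sqrt(x4^2 + y4^2 + z4^2)
    linear_v = gyro * r                      # per-axis linear velocity from omega * r

    # crude stand-in for the gravity distribution on three axes; a full implementation
    # would map standard gravity through Euler angles from a six-axis fusion algorithm
    gravity = acc / max(acc_mod, 1e-9)

    return np.concatenate([[gyro_mod, acc_mod], linear_v, gravity])
```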
After the feature set is obtained, the electronic device may input the feature set into a preset first model for recognition, so as to determine whether the suspected wrist-turning action is a real wrist-turning action.
In one embodiment, the first model may be an RNN, an LSTM, a GRU, or another network model. Taking the GRU as an example, the GRU is a variant of the LSTM network and is structurally simpler than the LSTM network; typically, three gate functions are introduced in an LSTM, namely an input gate, a forget gate, and an output gate, to control the input, memory, and output values, whereas a GRU has only two gates, an update gate and a reset gate; its specific structure can be seen in fig. 9. z_t and r_t in fig. 9 represent the update gate and the reset gate, respectively. The update gate controls how much state information from the previous time is brought into the current state; the larger the value of the update gate, the more state information from the previous time is brought in. The reset gate controls how much state information from the previous time is written into the current candidate set; the smaller the value of the reset gate, the less information from the previous state is written. After the electronic device inputs the feature set into the GRU network, a wrist-turning action recognition result can be obtained, where 0 may represent a non-wrist-turning action, 1 represents the extended backward wrist-turning action, and 2 represents the retracted backward wrist-turning action. Therefore, when the output result is 1, it can be determined that the suspected wrist-turning action is an extended backward wrist-turning action, and the two-dimensional code then needs to be displayed.
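A minimal sketch of such a three-class GRU recognizer in PyTorch, assuming the 100 × 28 feature set described above; the hidden size and linear head are illustrative choices, not values from this embodiment:

```python
import torch
import torch.nn as nn

class WristTurnGRU(nn.Module):
    """Three-way classifier: 0 = no wrist turn, 1 = extend-and-turn, 2 = retract-and-turn."""
    def __init__(self, feat_dim=28, hidden=64, num_classes=3):
        super().__init__()
        self.gru = nn.GRU(input_size=feat_dim, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                 # x: (batch, 100, feat_dim) feature set
        _, h_n = self.gru(x)              # h_n: (1, batch, hidden), last hidden state
        return self.head(h_n[-1])         # class logits

model = WristTurnGRU()
logits = model(torch.randn(1, 100, 28))   # one 100-frame feature set (random placeholder input)
pred = logits.argmax(dim=-1).item()       # 1 -> display the two-dimensional code
```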
It will be appreciated that the electronic device typically also needs to train the first model before using it for recognition, so as to improve the accuracy of the recognition result. In this embodiment of the present application, positive samples and negative samples are both considered in the training process of the first model, and the first model is trained by taking a data set composed of the positive samples and the negative samples as training data. The positive samples are gyroscope signal data and acceleration signal data collected when the user holds the electronic device and performs wrist-turning actions with different initial states and different termination states, and the negative samples are gyroscope signal data and acceleration signal data collected when the user holds the electronic device and performs non-wrist-turning actions in different states. For example, a positive sample may be the signal data collected while the user, holding the electronic device, turns it leftward from some state into a landscape orientation, and a negative sample may be the signal data collected while the user picks up the electronic device to read. After the positive and negative samples are collected, the electronic device may use them as training data to train the first model.
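A hedged sketch of that supervised training step, assuming a PyTorch DataLoader that yields (feature set, label) pairs built from the positive and negative samples; the optimizer, learning rate, and epoch count are assumptions:

```python
import torch
from torch import nn, optim

def train(model, loader, epochs=20, lr=1e-3):
    """loader yields (features, label) batches; features: (batch, 100, 28), label in {0, 1, 2}."""
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feats, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(feats), labels)   # classify each feature set
            loss.backward()
            opt.step()
    return model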
In order to match the user's usage habits as closely as possible, when positive samples are collected, the target object that the electronic device is brought close to should be light-colored where possible and kept on the right side of the body; the device is held still for 1-2 seconds after approaching the target object, the state is then changed, the device is held still for another 1-2 seconds after the state change, and the next action is performed; each set of data for each scene is collected at least 10 times, with the optical sensor of the electronic device close to the target object. When negative samples are collected, each scene is collected for 15 minutes; when data are collected in the same scene, the behaviors may be varied; after each action is performed, the device is kept still for 1-2 seconds before the next action is performed.
S104, displaying the two-dimensional code page.
Through the above process, if it is determined that the current user has performed the extended backward wrist-turning action, the electronic device may display the corresponding two-dimensional code page. As can be seen from the description of fig. 5, the electronic device may display a default two-dimensional code; for example, if the default is the payment two-dimensional code of a certain payment application, the payment two-dimensional code of that application is displayed at this moment. Alternatively, the electronic device may display the commonly used two-dimensional code set by the user; for example, if the payment two-dimensional code of a certain payment application has been set, the payment two-dimensional code of that application is displayed at this moment.
In one embodiment, before the two-dimensional code page is displayed, any other page may be displayed on the display screen of the electronic device; when the electronic device determines that the two-dimensional code page needs to be displayed, that page is switched to the corresponding two-dimensional code page.
According to the method for displaying the two-dimensional code provided by this embodiment, the electronic device recognizes, from the collected sensor data, the gesture action of the user when operating the mobile phone, so as to determine whether the two-dimensional code needs to be displayed, and displays the corresponding two-dimensional code page when it determines that the two-dimensional code should be displayed. This reduces the operation steps needed for the user to call up the two-dimensional code, making the process of presenting the two-dimensional code simple and efficient.
As described in the above embodiment, the electronic device determines, from the collected sensor data, whether the user has performed the extended backward wrist-turning action, so as to display the two-dimensional code. In daily use, however, the user may also extend and turn the wrist in scenarios that do not involve code scanning. Therefore, to improve the accuracy of the electronic device's judgment on whether the two-dimensional code needs to be displayed, the electronic device may further combine other gesture actions of the user (hereinafter referred to as preset actions) with the extended backward wrist-turning action when making the judgment.
In one embodiment, the preset action combined with the wrist-turning action may be a double-click action or a triple-click action on the electronic device, or a shake action of the electronic device. By analyzing the sensor data of the gyroscope sensor and the acceleration sensor in the electronic device, it is determined whether the user has performed both the preset action and the extended backward wrist-turning action corresponding to a code-scanning action, and the corresponding two-dimensional code is automatically displayed when it is determined that these gesture actions have occurred. The following describes, with reference to the embodiment shown in fig. 10, a specific process in which the electronic device determines to display the two-dimensional code page by combining the preset action and the extended backward wrist-turning action of the user, including:
S201, acquiring first sensor data, wherein the acquisition time of the first sensor data is the first acquisition time.
Optionally, the first sensor data may be data collected by a gyroscope sensor, such as gyroscope signal data; data collected by an acceleration sensor, such as acceleration signal data; or data collected by a pressure sensor, such as pressure signal data; the first sensor data may also include several of these kinds of signal data at the same time.
S202, determining whether the gesture action of the user is a preset action according to the first sensor data, and if so, executing S206.
After the electronic device obtains the first sensor data, the electronic device may first perform filtering processing on the first sensor data to remove noise in the signal data. The electronic device may then analyze the first sensor data to determine whether the gesture action of the user is a preset action. The preset actions may include a double click action, a triple click action, or a shake action of the electronic device.
As one implementation, taking the first sensor data as acceleration signal data as an example, the process of analyzing the acceleration signal data to determine whether the action is the preset action may include: starting from the first frame of the acceleration signal data, which is sampled at about 1000 Hz, the electronic device may determine in sequence whether there are two consecutive peaks of the acceleration signal that satisfy the following conditions: the difference between the moments corresponding to the two peaks is within a preset threshold (for example, the two peaks occur within 1 second), the peak values of the two peaks are in an [N7, N8] interval, and the peak widths are in an [N9, N10] interval; for example, for the x-axis data of the acceleration signal data, the [N7, N8] interval may be the [5, 10] interval, and the [N9, N10] interval may be the [50, 100] interval. If two peaks satisfying these conditions exist, the electronic device determines that the current gesture action of the user is a double-click action on the back of the electronic device, i.e., the current gesture action is the preset action.
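A minimal sketch of that peak test, assuming SciPy, a roughly 1000 Hz x-axis acceleration stream, and the example intervals [5, 10] (peak value) and [50, 100] (peak width in samples):

```python
from scipy.signal import find_peaks

def is_double_tap(acc_x, fs=1000, peak_lo=5, peak_hi=10, width_lo=50, width_hi=100):
    """acc_x: x-axis acceleration samples at roughly fs Hz."""
    peaks, _ = find_peaks(acc_x, height=(peak_lo, peak_hi), width=(width_lo, width_hi))
    for i in range(len(peaks) - 1):
        if peaks[i + 1] - peaks[i] <= fs:      # two qualifying peaks within one second
            return True
    return False
```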
For example, the signal distribution of the acceleration signal data for a tapping action on the back of the electronic device can be seen in fig. 11. In the x-axis data diagram in fig. 11, the horizontal axis represents the number of signal frames and the vertical axis represents the magnitude of the x-axis acceleration value corresponding to each frame of signal; in the y-axis data diagram in fig. 11, the horizontal axis represents the number of signal frames and the vertical axis represents the magnitude of the y-axis acceleration value corresponding to each frame of signal; in the z-axis data diagram in fig. 11, the horizontal axis represents the number of signal frames and the vertical axis represents the magnitude of the z-axis acceleration value corresponding to each frame of signal. As can be seen from fig. 11, within the rectangular dashed box of the x-axis data there are two consecutive peaks whose peak values are within the [5, 10] interval and whose peak widths are within the [50, 100] interval, so the gesture action of the user corresponding to this piece of data is a double-click action.
As another implementation, the process of analyzing the first sensor data to determine whether the action is the preset action may include: the electronic device may input the first sensor data into a second model for recognition, so as to obtain a preset action recognition result. Optionally, the second model may be a decision tree model; in the obtained recognition result, 0 may represent no gesture action, 1 represents a double-click action on the back of the electronic device, 2 represents a triple-click action on the back of the electronic device, and 3 represents a shake action of the electronic device. If the output result is 1, 2, or 3, the electronic device may determine that the gesture action of the user is the preset action.
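A minimal sketch of such a classifier with scikit-learn; X_train and y_train are assumed, hypothetical arrays of window-level features and the 0/1/2/3 labels described above:

```python
from sklearn.tree import DecisionTreeClassifier

# X_train: one row of hand-crafted features per sensor window (assumed),
# y_train: 0 = none, 1 = double tap, 2 = triple tap, 3 = shake (assumed labels)
clf = DecisionTreeClassifier(max_depth=6).fit(X_train, y_train)

def preset_action(features):
    """features: one window's feature vector; returns True if any preset action is recognized."""
    label = int(clf.predict(features.reshape(1, -1))[0])
    return label in (1, 2, 3)
```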
It will be appreciated that the electronic device also needs to train the second model before using the second model to perform recognition, so as to improve accuracy of the second model recognition result. In this case, the embodiment of the application comprehensively considers the positive sample and the negative sample in the training process of the second model, and trains the second model by taking the data set consisting of the positive sample and the negative sample as training data. The positive sample is acceleration signal data of knocking actions and shaking actions performed by the user for many times, and the negative sample is acceleration signal data of actions such as walking, running, jumping, putting down, picking up, lifting hands and the like of the user in daily use. After the positive and negative samples are collected, the electronic device may train the second model as training data.
S203, acquiring second sensor data, wherein the acquisition time of the second sensor data is the second acquisition time.
Wherein the second sensor data may be data collected by a gyro sensor, such as gyro signal data; data acquired by an acceleration sensor, such as acceleration signal data; both gyroscope signal data and acceleration signal data may also be included.
S204, preprocessing the second sensor data, and determining whether the gesture of the user is a suspected wrist turning action. If yes, S205 is executed.
S205, determining whether the suspected wrist turning action is a wrist turning action after extension, if yes, executing S206.
The implementation process of S203-S205 is similar to the implementation process of S101-S103 described above, and will not be described here again.
S206, if the first acquisition time is within a preset duration range before the second acquisition time, displaying the two-dimensional code page.
The preset duration range may be 3 seconds; that is, the embodiment of the present application requires the occurrence time of the preset action to be within 3 seconds before the occurrence time of the extended backward wrist-turning action. If it is determined in S202 that the gesture action corresponding to the first acquisition time is the preset action, and it is determined in S205 that the gesture action corresponding to the second acquisition time is the extended backward wrist-turning action, then the electronic device displays the corresponding two-dimensional code interface provided that the first acquisition time is within 3 seconds before the second acquisition time.
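A trivial sketch of that timing test, assuming the two acquisition times are expressed in seconds on a common clock and that the 3-second window is configurable:

```python
def should_display(first_time, second_time, window_s=3.0):
    """first_time: moment of the preset action; second_time: moment of the extend-and-turn action."""
    return 0.0 <= (second_time - first_time) <= window_s
```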
It will be appreciated that the electronic device may display a default two-dimensional code, such as the payment two-dimensional code of a default payment application; or the electronic device may display the commonly used two-dimensional code set by the user, such as the payment two-dimensional code of a payment application selected by the user.
It may be further understood that if the gesture in S202 is not a preset action, or the gesture in S205 is not a wrist-turning-after-extension action, or the first acquisition time is not within a preset duration range before the second acquisition time, the electronic device does not display the two-dimensional code interface.
According to the method for displaying the two-dimensional code provided by this embodiment, the electronic device recognizes the gesture actions of the user from the collected sensor data to determine whether the two-dimensional code needs to be displayed, and displays the corresponding two-dimensional code page when it determines that the two-dimensional code should be displayed, which reduces the operation steps needed for the user to call up the two-dimensional code and makes the process of presenting the two-dimensional code simple and efficient. Meanwhile, the displayed two-dimensional code is determined jointly by combining the preset action and the wrist-turning action of the user, which can improve the accuracy of the electronic device's judgment on whether to display the two-dimensional code.
The following describes another specific process of the electronic device in combination with the preset action and the extended wrist-turning action of the user to determine to display the two-dimensional code page according to the embodiment shown in fig. 12, including:
S301, acquiring first sensor data, wherein the acquisition time of the first sensor data is the first acquisition time.
The first sensor data may be data acquired from at least one of a gyroscope sensor, an acceleration sensor, or a pressure sensor, such as at least one of acceleration signal data, gyroscope signal data, or pressure signal data.
S302, determining whether the gesture action of the user is a preset action according to the first sensor data, and if so, executing S303.
The implementation process of this step may be referred to the description of S202 above, and will not be repeated here.
S303, monitoring whether second sensor data in a preset duration range corresponds to suspected wrist turning action or not from the first acquisition moment; if so, S304 is performed.
Wherein the second sensor data may include gyroscope signal data and acceleration signal data.
S304, determining whether the suspected wrist turning action is a wrist turning action after extension, if yes, executing S305.
The preset duration range may be 3 seconds, that is, after the user performs the preset action on the electronic device, the electronic device starts to monitor whether there is a wrist turning action after extending within 3 seconds, for example, a monitoring thread may be started to monitor. For the process of determining whether the suspected wrist turning action is the extended wrist turning action in S303 and S304, the process of S102-S103 may be referred to, and will not be described herein.
S305, displaying a two-dimensional code page.
In this step, the electronic device may display the two-dimensional code page when the suspected wrist-turning action is the extended backward wrist-turning action. For example, the electronic device may display a default two-dimensional code, such as the payment two-dimensional code of a default payment application; or the electronic device may display the commonly used two-dimensional code set by the user, such as the payment two-dimensional code of a payment application selected by the user.
In one implementation, the listening operation may be started when the preset action determined by the electronic device is any one of a double-click action or a triple-click action on the back of the electronic device, or a shake action of the electronic device. When the extended backward wrist-turning action is then detected, the two-dimensional code page is displayed; the displayed two-dimensional code may be a default two-dimensional code, such as the payment two-dimensional code of a default payment application, or a commonly used two-dimensional code set by the user, such as the payment two-dimensional code of a payment application selected by the user.
In another implementation, because there are many kinds of two-dimensional codes in use, in order to enable the electronic device to accurately display the corresponding two-dimensional code in different scenarios, as shown in fig. 13, when the user turns on the switch of the intelligent sensing function, the two-dimensional codes corresponding to different gesture actions can be set. For example, the double-click action on the back of the electronic device is set to correspond to the payment two-dimensional code of a first payment application, the triple-click action on the back of the electronic device corresponds to the payment two-dimensional code of a second payment application, and the shake action of the electronic device corresponds to the health code. On the page of fig. 13, the user may click the triangle icon behind the corresponding action and select the desired two-dimensional code from a pop-up two-dimensional code list (not shown in the figure).
Then, in this implementation, when the preset action determined by the electronic device is a double-click action of the user on the back of the electronic device and the extended backward wrist-turning action is then detected, the payment two-dimensional code of the first payment application may be displayed; when the preset action determined by the electronic device is a triple-click action of the user on the back of the electronic device and the extended backward wrist-turning action is detected, the payment two-dimensional code of the second payment application may be displayed; and when the preset action determined by the electronic device is a shake action of the electronic device by the user and the extended backward wrist-turning action is detected, the health code may be displayed.
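A hedged sketch of that mapping, where the label values follow the 1/2/3 encoding of the preset actions above and the page identifiers are purely hypothetical names:

```python
# hypothetical mapping from the recognized preset action to the page to display
GESTURE_TO_PAGE = {
    1: "payment_qr_app_a",     # double tap on the back
    2: "payment_qr_app_b",     # triple tap on the back
    3: "health_code",          # shake
}

def page_for(preset_label, wrist_turn_detected):
    """Return the page identifier to display, or None if nothing should be shown."""
    if wrist_turn_detected and preset_label in GESTURE_TO_PAGE:
        return GESTURE_TO_PAGE[preset_label]
    return None
```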
It can be understood that if the electronic device does not monitor the extending wrist turning action within the preset time range from the first acquisition time, the two-dimensional code interface is not displayed.
In one embodiment, after the electronic device displays the two-dimensional code page, it may continue to monitor whether a retracted backward wrist-turning action occurs, close the two-dimensional code page when the retracted backward wrist-turning action occurs, and display the page that was shown before the two-dimensional code page. It should be noted that, if the two-dimensional code currently displayed by the electronic device is a payment-type two-dimensional code, the two-dimensional code page needs to be closed after the user enters the password and the payment succeeds; or, when a payment application has been set to password-free payment, the two-dimensional code page may be closed directly. For the process of determining whether the retracted backward wrist-turning action occurs, refer to the processes of S102-S103; for example, when the wrist-turning action recognition result obtained by the electronic device in S103 is 2 (i.e., the retracted backward wrist-turning action), it may be determined that the retracted backward wrist-turning action occurs.
According to the method for displaying the two-dimensional code provided by this embodiment, when the electronic device recognizes from the collected sensor data that the user has performed the preset action, it monitors whether the user performs the extended backward wrist-turning action within the preset duration range, and determines to display the two-dimensional code when the extended backward wrist-turning action occurs, which reduces the operation steps needed for the user to call up the two-dimensional code and makes the process of presenting the two-dimensional code simple and efficient. Meanwhile, the displayed two-dimensional code is determined jointly by combining the preset action and the wrist-turning action of the user, which can improve the accuracy of the electronic device's judgment on whether to display the two-dimensional code.
The following describes the process of the embodiment shown in fig. 14 with reference to the software architecture of the electronic device shown in fig. 3 above. Fig. 14 is a timing chart of a method for displaying a two-dimensional code according to an embodiment of the present application, taking the first sensor data as acceleration signal data and the second sensor data as gyroscope signal data and acceleration signal data as an example, and the method includes:
S401, the acceleration sensor collects acceleration signal data, and the collection time of the acceleration signal data is the first collection time.
S402, the acceleration sensor sends acceleration signal data to the algorithm module.
S403, the algorithm module calls SensorAlg to process the acceleration signal data, determines whether the gesture action of the user is a preset action, and if yes, executes S404.
S404, the algorithm module starts a monitoring thread for monitoring whether the wrist turning-over action after extending out exists within a preset time range from the first acquisition time.
S405, the algorithm module acquires gyroscope signal data from a gyroscope sensor and acquires acceleration signal data from an acceleration sensor.
S406, the algorithm module preprocesses the gyroscope signal data and the acceleration signal data and determines whether the gesture motion of the user is a suspected wrist turning motion. If yes, S407 is executed.
S407, the algorithm module determines that the suspected wrist turning action is the wrist turning action after stretching out.
S408, the monitoring thread monitors that the wrist turning action after extending is performed.
S409, the monitoring thread sends the monitoring result to the decision module.
S410, the decision module sends a display instruction to the corresponding application (in fig. 14, a payment application is taken as an example for illustration).
It can be appreciated that the monitoring result may include an identifier of the corresponding application, so that the decision module can accurately send the display instruction to the corresponding application.
S411, the payment application sends the page data of the payment two-dimensional code to the decision module according to the received display instruction.
S412, the decision module sends the page data of the payment two-dimensional code to the display driver and instructs the display driver to call the display screen to display the payment two-dimensional code page.
S413, the display driver calls the display screen to display the payment two-dimensional code page.
S414, displaying the payment two-dimensional code page on the display screen.
The above describes in detail an example of a method for displaying a two-dimensional code provided by the embodiment of the present application. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional modules of the electronic device according to the method example, for example, each function can be divided into each functional module, for example, a detection unit, a processing unit, a display unit, and the like, and two or more functions can be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
It should be noted that, all relevant contents of each step related to the above method embodiment may be cited to the functional description of the corresponding functional module, which is not described herein.
The electronic device provided in this embodiment is configured to execute the method for displaying a two-dimensional code, so that the same effect as that of the implementation method can be achieved.
In case an integrated unit is employed, the electronic device may further comprise a processing module, a storage module and a communication module. The processing module can be used for controlling and managing the actions of the electronic equipment. The memory module may be used to support the electronic device to execute stored program code, data, etc. And the communication module can be used for supporting the communication between the electronic device and other devices.
The processing module may be a processor or a controller, and may implement or execute the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination that implements computing functions, for example a combination including one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may be a device that interacts with other electronic devices, such as a radio frequency circuit, a Bluetooth chip, or a Wi-Fi chip.
In one embodiment, when the processing module is a processor and the storage module is a memory, the electronic device according to this embodiment may be a device having the structure shown in fig. 2.
The embodiment of the application also provides a computer readable storage medium, in which a computer program is stored, which when executed by a processor, causes the processor to execute the method for displaying the two-dimensional code according to any of the above embodiments.
The embodiment of the application also provides a computer program product, which when running on a computer, causes the computer to execute the related steps so as to realize the method for displaying the two-dimensional code in the embodiment.
In addition, embodiments of the present application also provide an apparatus, which may be embodied as a chip, component or module, which may include a processor and a memory coupled to each other; the memory is used for storing computer-executed instructions, and when the device is operated, the processor can execute the computer-executed instructions stored in the memory, so that the chip executes the method for displaying the two-dimensional code in the method embodiments.
The electronic device, the computer readable storage medium, the computer program product or the chip provided in this embodiment are used to execute the corresponding method provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding method provided above, and will not be described herein.
It will be appreciated by those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (16)

1. A method of displaying a two-dimensional code, the method performed by an electronic device, the method comprising:
acquiring first sensor data, wherein the first sensor data are acquired at a first moment;
if the first action of the user corresponding to the first moment is determined to be a preset action according to the first sensor data, determining whether the user has wrist turning action within a preset duration range from the first moment;
Displaying a preset two-dimensional code page under the condition that the user has wrist turning action within the preset duration range;
after the two-dimensional code page is displayed, acquiring third sensor data;
and if the third action of the user corresponding to the third sensor data is determined to be the wrist turning action after retraction according to the third sensor data, displaying a first page, wherein the first page is displayed by the electronic equipment before the two-dimensional code page is displayed.
2. The method of claim 1, wherein the determining whether the user has a wrist-turning motion within a preset time period from the first time comprises:
acquiring second sensor data, wherein the second sensor data are acquired at a second moment, and the second moment is within a preset duration range after the first moment;
And determining whether a second action of the user corresponding to the second moment is a wrist turning action according to the second sensor data.
3. The method according to claim 2, wherein determining whether the second action of the user corresponding to the second moment is a wrist turning action according to the second sensor data includes:
Preprocessing the second sensor data, and determining whether the second motion is a suspected wrist turning motion, wherein the probability of the suspected wrist turning motion being the wrist turning motion is greater than or equal to a preset probability threshold;
and if the second motion is a suspected wrist turning motion, determining whether the second motion is a wrist turning motion.
4. The method of claim 3, wherein the second sensor data comprises gyroscope signal data and acceleration signal data, wherein the preprocessing the second sensor data to determine whether the second motion is a suspected wrist-turning motion comprises:
Acquiring first sub-data from the gyroscope signal data, wherein the first sub-data are continuous data of a preset frame number in the gyroscope signal data;
Acquiring second sub-data from the acceleration signal data, wherein the second sub-data is a signal with a continuous preset frame number in the acceleration signal data, and the position of the first sub-data in the gyroscope signal data is the same as the position of the second sub-data in the acceleration signal data;
if the first sub-data and the second sub-data meet a first preset condition, determining that the second action is a suspected wrist turning action;
If the first sub-data and the second sub-data do not meet a first preset condition, determining that the second motion is not a suspected wrist turning motion.
5. The method of claim 4, wherein the first preset condition comprises at least one of:
condition 1: the angular velocity module value corresponding to the last frame signal of the first sub-data is in a first interval;
condition 2: the acceleration module value corresponding to the last frame signal of the second sub data is in a second interval;
Condition 3: the z-axis acceleration value corresponding to the last frame signal of the second sub-data is in a third interval or smaller than a first threshold value;
condition 4: the first sub-data has a main peak signal;
Condition 5: the main peak signal is positioned in the middle area of the first sub data;
Condition 6: the signal distribution before the main peak signal is in a monotonically increasing trend, the signal distribution after the main peak signal is in a monotonically decreasing trend, or the signal distribution before the main peak signal is in a monotonically decreasing trend, and the signal distribution after the main peak signal is in a monotonically increasing trend.
6. The method of claim 4, wherein the determining whether the second action is a wrist flip action comprises:
identifying the first sub data and the second sub data through a preset first model to obtain a first identification result;
determining the second motion as a wrist turning motion under the condition that the first identification result is a first preset result;
and under the condition that the first identification result is not a first preset result, determining that the second action is not a wrist turning action.
7. The method according to any one of claims 1-6, wherein the determining, according to the first sensor data, that the first action of the user corresponding to the first time is a preset action includes:
identifying the first sensor data through a preset second model to obtain a second identification result;
And under the condition that the second identification result is a second preset result, determining the first action as a preset action.
8. The method of claim 7, wherein the preset action comprises a double click action, a triple click action, or a shake action of the user on the back of the electronic device.
9. The method of claim 8, wherein displaying the preset two-dimensional code page comprises:
If the first action is a double-click action of the user on the back of the electronic equipment, a first two-dimensional code page is displayed; or,
If the first action is a three-click action of the user on the back of the electronic equipment, displaying a second two-dimensional code page; or,
And if the first action is the shaking action of the electronic equipment held by the user, displaying a third two-dimensional code page.
10. The method according to claim 9, wherein the method further comprises:
Displaying a first interface comprising display setting controls, wherein the display setting controls comprise a setting control for the double-click action, a setting control for the triple-click action and a setting control for the shaking action;
Receiving a first operation of the user acting on the display setting control on the first interface;
Responding to the first operation, displaying a first two-dimensional code page when the user performs double-click action on the back of the electronic equipment, displaying a second two-dimensional code page when the user performs triple-click action on the back of the electronic equipment, and displaying a third two-dimensional code page when the user performs shaking action on the electronic equipment.
11. The method of any one of claims 1-6, wherein the wrist-turning motion comprises an extended backward wrist-turning motion, the extended backward wrist-turning motion comprising a vertical screen wrist-turning motion, a horizontal screen wrist-turning motion, a reverse wrist-turning motion, an angled wrist-turning motion, or a hand-up inward wrist-turning motion.
12. The method of any of claims 1-6, wherein the first sensor data is data acquired from at least one of a gyroscope sensor, an acceleration sensor, or a pressure sensor.
13. The method of claim 6, wherein the first model is any one of an RNN model, an LSTM model, and a GRU model.
14. The method of claim 7, wherein the second model is a decision tree model.
15. An electronic device, comprising:
One or more processors;
one or more memories;
the memory stores one or more programs that, when executed by the processor, cause the electronic device to perform the method of any of claims 1-14.
16. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, causes the processor to perform the method of any of claims 1 to 14.
CN202210109243.3A 2021-12-28 2022-01-28 Method for displaying two-dimensional code and electronic equipment Active CN116415951B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US18/001,751 US20240127218A1 (en) 2021-12-28 2022-08-19 Method for Displaying Two-Dimensional Code and Electronic Device
PCT/CN2022/113601 WO2023124129A1 (en) 2021-12-28 2022-08-19 Method for displaying two-dimensional code, and electronic device
EP22821834.3A EP4227876A4 (en) 2021-12-28 2022-08-19 Method for displaying two-dimensional code, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111633073 2021-12-28
CN2021116330730 2021-12-28

Publications (2)

Publication Number Publication Date
CN116415951A CN116415951A (en) 2023-07-11
CN116415951B true CN116415951B (en) 2024-06-07

Family

ID=87055282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210109243.3A Active CN116415951B (en) 2021-12-28 2022-01-28 Method for displaying two-dimensional code and electronic equipment

Country Status (1)

Country Link
CN (1) CN116415951B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013132505A (en) * 2011-12-27 2013-07-08 Daiichi Shokai Co Ltd Game machine
CN107037955A (en) * 2016-10-24 2017-08-11 阿里巴巴集团控股有限公司 A kind of method and device of display image information
WO2018058605A1 (en) * 2016-09-30 2018-04-05 华为技术有限公司 Method and apparatus for obtaining distance threshold value when performing user related operation
CN109146463A (en) * 2018-07-25 2019-01-04 南昌努比亚技术有限公司 Method of mobile payment, mobile terminal and computer readable storage medium
CN110032313A (en) * 2019-02-28 2019-07-19 努比亚技术有限公司 A kind of screen switching, terminal and computer readable storage medium
CN110536004A (en) * 2019-07-23 2019-12-03 华为技术有限公司 Multisensor is applied to the method and electronic equipment of the electronic equipment with flexible screen
CN110554768A (en) * 2018-05-31 2019-12-10 努比亚技术有限公司 intelligent wearable device control method and device and computer readable storage medium
CN110989852A (en) * 2019-10-15 2020-04-10 华为终端有限公司 Touch screen, electronic equipment and display control method
CN111596751A (en) * 2020-05-19 2020-08-28 歌尔智能科技有限公司 Display control method and device for wrist-worn device, wrist-worn device and storage medium
CN113283493A (en) * 2021-05-19 2021-08-20 Oppo广东移动通信有限公司 Sample acquisition method, device, terminal and storage medium
CN113572896A (en) * 2021-06-23 2021-10-29 荣耀终端有限公司 Two-dimensional code display method based on user behavior model and related equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109917956B (en) * 2019-02-22 2021-08-03 华为技术有限公司 Method for controlling screen display and electronic equipment

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013132505A (en) * 2011-12-27 2013-07-08 Daiichi Shokai Co Ltd Game machine
WO2018058605A1 (en) * 2016-09-30 2018-04-05 华为技术有限公司 Method and apparatus for obtaining distance threshold value when performing user related operation
CN107037955A (en) * 2016-10-24 2017-08-11 阿里巴巴集团控股有限公司 A kind of method and device of display image information
CN110554768A (en) * 2018-05-31 2019-12-10 努比亚技术有限公司 intelligent wearable device control method and device and computer readable storage medium
CN109146463A (en) * 2018-07-25 2019-01-04 南昌努比亚技术有限公司 Method of mobile payment, mobile terminal and computer readable storage medium
CN110032313A (en) * 2019-02-28 2019-07-19 努比亚技术有限公司 A kind of screen switching, terminal and computer readable storage medium
CN110536004A (en) * 2019-07-23 2019-12-03 华为技术有限公司 Multisensor is applied to the method and electronic equipment of the electronic equipment with flexible screen
CN110989852A (en) * 2019-10-15 2020-04-10 华为终端有限公司 Touch screen, electronic equipment and display control method
CN111596751A (en) * 2020-05-19 2020-08-28 歌尔智能科技有限公司 Display control method and device for wrist-worn device, wrist-worn device and storage medium
CN113283493A (en) * 2021-05-19 2021-08-20 Oppo广东移动通信有限公司 Sample acquisition method, device, terminal and storage medium
CN113572896A (en) * 2021-06-23 2021-10-29 荣耀终端有限公司 Two-dimensional code display method based on user behavior model and related equipment

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Object-Based Activity Recognition with Heterogeneous Sensors on Wrist; Takuya Maekawa et al.; Pervasive Computing, Proceedings; Vol. 6030; pp. 246-264 *
Smartphone based car-searching system for large parking lot; Junhuai Li et al.; 2016 IEEE 11th Conference on Industrial Electronics and Applications (ICIEA); pp. 1994-1998 *
Development of an automatic reservoir water level monitor based on image recognition; Zhang Cheng; China Master's Theses Full-text Database, Engineering Science and Technology II (No. 1); p. C037-134 *
Research on factors affecting the user experience of smartphone application interfaces; Zhang Yuwan; Design (No. 18); pp. 102-103 *
Design of a health monitoring system for the elderly; Ji Kaixin et al.; Journal of Zhaoqing University; Vol. 38 (No. 5); pp. 22-26 *

Also Published As

Publication number Publication date
CN116415951A (en) 2023-07-11

Similar Documents

Publication Publication Date Title
CN112717370B (en) Control method and electronic equipment
CN110321047B (en) Display control method and device
CN113645351B (en) Application interface interaction method, electronic device and computer-readable storage medium
CN111127509B (en) Target tracking method, apparatus and computer readable storage medium
CN110401768B (en) Method and device for adjusting working state of electronic equipment
CN110839128B (en) Photographing behavior detection method and device and storage medium
WO2021008589A1 (en) Application running mehod and electronic device
WO2021000943A1 (en) Method and apparatus for managing fingerprint switch
CN110012153A (en) It goes out and shields the method and electronic equipment of display
CN113490943A (en) Integrated chip and method for processing sensor data
CN110705614A (en) Model training method and device, electronic equipment and storage medium
CN113971271A (en) Fingerprint unlocking method and device, terminal and storage medium
CN110058729B (en) Method and electronic device for adjusting sensitivity of touch detection
CN113391775A (en) Man-machine interaction method and equipment
CN111582184B (en) Page detection method, device, equipment and storage medium
CN111524528B (en) Voice awakening method and device for preventing recording detection
CN114384465A (en) Azimuth angle determination method and device
CN111723124B (en) Data collision analysis method and device, electronic equipment and storage medium
CN111931712A (en) Face recognition method and device, snapshot machine and system
CN111353513B (en) Target crowd screening method, device, terminal and storage medium
CN116737290B (en) Finger joint knocking event identification method and electronic equipment
CN116415951B (en) Method for displaying two-dimensional code and electronic equipment
WO2023124129A1 (en) Method for displaying two-dimensional code, and electronic device
CN115390738A (en) Scroll screen opening and closing method and related product
CN113936240A (en) Method, device and equipment for determining sample image and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant