CN115718913A - User identity identification method and electronic equipment


Publication number
CN115718913A
Authority
CN
China
Prior art keywords
measurement data
user
electronic device
mobile phone
preset
Prior art date
Legal status
Granted
Application number
CN202310027093.6A
Other languages
Chinese (zh)
Other versions
CN115718913B (en)
Inventor
门慧超
刘兴宇
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202310027093.6A
Publication of CN115718913A
Application granted
Publication of CN115718913B
Legal status: Active

Landscapes

  • Telephone Function (AREA)

Abstract

A user identity recognition method and an electronic device relate to the technical field of terminals and enable the electronic device to automatically determine the user identity from measurement data generated by an inertial measurement unit while the user operates the electronic device. In the method, the electronic device receives a first touch operation of a user on a touch screen; detects whether a preset identification model corresponding to the application controlled by the first touch operation exists; if the preset identification model exists, acquires first measurement data generated by the inertial measurement unit according to the first touch operation; and inputs the first measurement data into the preset identification model to output the identity of the user.

Description

User identity identification method and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of user identification, in particular to a user identity identification method and electronic equipment.
Background
With the continuous development of electronic devices (e.g., mobile phones), the identification technology is widely applied to the usage scenarios of the electronic devices. For example, before the mobile phone performs the payment operation, the mobile phone may determine whether to perform the payment operation by recognizing the identity of the user.
In the related art, the electronic device may obtain biometric information, a digital password, or a fingerprint input by the user, where the biometric information may include face recognition information and voiceprint recognition information, and then identify the user identity by using such information. However, in some cases this information is inconvenient for the user to provide or cannot be accurately obtained by the electronic device, so that the electronic device cannot identify the user identity, and normal use of the electronic device is affected.
Disclosure of Invention
In view of this, the present application provides a user identity identification method and an electronic device, which enable the electronic device to automatically determine the user identity by using measurement data generated by an inertial measurement unit while the user operates the electronic device. The method imposes fewer limiting conditions when identifying the user identity, thereby improving the generalization of user identification.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in a first aspect, a user identity recognition method is provided, which is applied to an electronic device including a touch screen and an inertial measurement unit. In the method, the electronic device receives a first touch operation of a user on the touch screen. Then, the electronic device detects whether a preset recognition model corresponding to the application controlled by the first touch operation exists. If the preset identification model exists in the electronic device, first measurement data generated by the inertial measurement unit according to the first touch operation are acquired, and the first measurement data are input into the preset identification model to output the identity of the user. The preset identification model is obtained by training in advance with second measurement data generated by the inertial measurement unit according to a second touch operation of the user on the touch screen and the corresponding user identity.
Therefore, when the user operates the electronic device, the user identity can be automatically recognized through the preset recognition model; the method imposes fewer limiting conditions when recognizing the user identity, which improves the generalization of user recognition. Moreover, each application corresponds to its own preset identification model, so that when the electronic device receives the first touch operation it can use the preset identification model corresponding to the manipulated application, which further improves the accuracy of user identity identification.
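By way of illustration only, the overall flow of the first aspect can be sketched in Python as below; the object and field names (model_registry, imu, event.app_id, and so on) are assumptions made for readability rather than anything defined in this application.

```python
# Illustrative sketch only: the entity names (model_registry, imu, event fields)
# are assumptions for explanation and are not defined by the patent.
def on_first_touch(event, model_registry, imu):
    """Handle a first touch operation on the touch screen."""
    model = model_registry.get(event.app_id)      # preset recognition model of the manipulated application
    if model is None:
        return None                               # no model yet: keep receiving touch operations
    # First measurement data generated by the inertial measurement unit for this touch.
    first_measurement_data = imu.read(start=event.touch_down_ts, end=event.touch_up_ts)
    return model.predict(first_measurement_data)  # positive-class or negative-class user
```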
In an implementation manner of the first aspect, if the preset recognition model does not exist in the electronic device, the electronic device continues to perform receiving of the first touch operation of the user on the touch screen. Therefore, the electronic equipment can automatically identify the user identity by continuously receiving the first touch operation of the user on the touch screen in the background.
In one implementation form of the first aspect, the first measurement data measured by the inertial measurement unit includes measurement data within a time period starting with the initial timestamp and ending with the ending timestamp. The initial timestamp is determined by the electronic device according to the detected initial time when the hand of the user contacts the touch screen, and the end timestamp is determined by the electronic device according to the detected time when the hand of the user leaves the touch screen.
Due to limitations of the inertial measurement unit itself, there may be a delay between the electronic device receiving the touch operation and the inertial measurement unit measuring the data caused by that touch operation. To improve the accuracy of user identity identification, the initial timestamp and the end timestamp of the measurement data of the inertial measurement unit are determined from the time at which the electronic device detects that the user's hand touches the touch screen and the time at which it detects that the user's hand leaves the touch screen. In this way, the measurement data of the inertial measurement unit between the initial timestamp and the end timestamp are the data measured while the electronic device receives the touch operation, and inputting these data into the preset identification model improves the accuracy of the output user identity. In addition, in the embodiment of the application, the data measured by the inertial measurement unit are independent of the type of touch operation: the inertial measurement unit produces measurement data whether the touch operation is a tap, a slide, or any other operation, so it can obtain undifferentiated data under any touch operation, which improves the generalization of the preset identification model.
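A minimal sketch of selecting the measurement data between the initial timestamp and the end timestamp follows; the sample format (a list of dictionaries with a "timestamp" key) is an assumption for illustration.

```python
def window_measurement_data(samples, initial_ts, end_ts):
    """Keep only the IMU samples measured while the user's hand was on the touch screen."""
    return [s for s in samples if initial_ts <= s["timestamp"] <= end_ts]

# Example: samples outside the touch window are discarded before model input.
samples = [{"timestamp": t, "acc": (0.0, 0.0, 9.8)} for t in range(100, 200, 10)]
first_measurement_data = window_measurement_data(samples, initial_ts=120, end_ts=170)
```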
In an implementation form of the first aspect, the inertial measurement unit comprises at least one or more of an acceleration sensor, a linear acceleration sensor, a gravitational acceleration sensor, a magnetometer, a gyroscope sensor, a rotation vector sensor, and an orientation meter.
In one implementation form of the first aspect, the inertial measurement unit includes a predetermined measurement unit, and the predetermined measurement unit uses the earth as an absolute reference frame. The first measurement data measured by the preset measurement unit includes at least two measurement data. The process of inputting the first measurement data into the preset identification model by the electronic device may be regarded as determining, by the electronic device, a difference between two adjacent first measurement data in the first measurement data measured by the preset measurement unit, and obtaining the first relative measurement data. And then, inputting the first relative measurement data into a preset identification model to output the user identity. Therefore, the difference value of the two first measurement data can remove the part of the first measurement data affected by the geographic position, and the electronic equipment inputs the first relative measurement data into the preset identification model, so that the accuracy of the electronic equipment in identifying the user identity can be improved.
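A sketch of computing the first relative measurement data, assuming the data from an earth-referenced sensor (for example, the magnetometer) is arranged as an N x 3 array of consecutive samples:

```python
import numpy as np

def to_relative(measurements: np.ndarray) -> np.ndarray:
    """Difference of each pair of adjacent samples; removes the part tied to the geographic location."""
    return np.diff(measurements, axis=0)   # shape (N-1, 3): first relative measurement data
```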
In an implementation form of the first aspect, the predetermined measurement unit comprises one or more of a magnetometer, a rotation vector sensor, and a direction meter.
In an implementation manner of the first aspect, the process of inputting the first relative measurement data into the preset recognition model by the electronic device may be regarded as the process of calculating the statistical characteristic value of the first relative measurement data by the electronic device. Then, the statistical characteristic value is input into a preset recognition model to output the user identity. Therefore, the electronic equipment calculates the statistical characteristic value of the first relative measurement data, the characteristics of the first relative measurement data can be highlighted, and the accuracy of the preset identification model for identifying the user identity is improved.
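This excerpt does not enumerate which statistical characteristic values are used; the per-axis statistics below (mean, standard deviation, minimum, maximum, range) are a common, assumed choice used only for illustration.

```python
import numpy as np

def statistical_features(relative: np.ndarray) -> np.ndarray:
    """Summarize the first relative measurement data as a fixed-length feature vector."""
    return np.concatenate([
        relative.mean(axis=0),
        relative.std(axis=0),
        relative.min(axis=0),
        relative.max(axis=0),
        relative.max(axis=0) - relative.min(axis=0),   # per-axis range
    ])
```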
In an implementation manner of the first aspect, the electronic device inputting the first measurement data into the preset identification model to output the user identity may be regarded as a process of inputting the first measurement data into a high-dimensional single-class classification model to output the user identity. The user identities comprise positive class users and negative class users, the high-dimensional single-class classification model comprises key parameters, and the key parameters are determined by a bionic intelligent optimization algorithm or a conventional intelligent optimization algorithm. Thus, the electronic device can output the user identity more accurately by using the high-dimensional single-class classification model.
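Fig. 9 suggests the high-dimensional single-class model is an SVDD. scikit-learn offers no SVDD estimator, so the closely related OneClassSVM with an RBF kernel is used below as a stand-in; this substitution, and nu and gamma standing in for the "key parameters" that the optimization algorithm determines, are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def fit_single_class_model(positive_features: np.ndarray, nu: float, gamma: float) -> OneClassSVM:
    """Train on positive-class samples only; the boundary encloses the owner's touch behaviour."""
    return OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(positive_features)

def identify(model: OneClassSVM, features: np.ndarray) -> str:
    """+1 means inside the learned boundary (positive-class user), -1 means negative-class user."""
    return "positive" if model.predict(features.reshape(1, -1))[0] == 1 else "negative"
```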
In one implementation form of the first aspect, the bionic intelligent optimization algorithm comprises a multi-objective decision function, and the key parameters of the high-dimensional single-class classification model are determined according to the bionic intelligent optimization algorithm comprising the multi-objective decision function. By constructing a multi-objective decision function, the bionic intelligent optimization algorithm can determine the key parameters of the high-dimensional single-class classification model more accurately, so as to improve the accuracy of identifying the user identity.
In an implementation manner of the first aspect, the bionic intelligent optimization algorithm comprising a multi-objective decision function is a particle swarm optimization algorithm. The particle swarm optimization algorithm comprises a learning factor c1, a learning factor c2 and a particle swarm. The learning factors c1 and c2 change dynamically: as the number of particle swarm iterations increases, the curves corresponding to c1 and c2 each follow a gradually varying nonlinear form whose slope changes from steep to gentle, so as to determine the key parameters of the high-dimensional single-class classification model. The method further comprises: calculating the value of the multi-objective decision function corresponding to each round of particle swarm iteration; and determining the key parameters of the high-dimensional single-class classification model according to the value of the multi-objective decision function corresponding to each round of particle swarm iteration. Because the learning factors c1 and c2 change dynamically over multiple iterations, the key parameters of the high-dimensional single-class classification model can be determined more accurately.
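The excerpt only states that c1 and c2 vary nonlinearly with the iteration count, with the slope going from steep to gentle; the cosine schedule and parameter ranges in the sketch below are illustrative assumptions, with the multi-objective decision function Y as the fitness being evaluated each round.

```python
import numpy as np

def learning_factors(t: int, t_max: int, c_start: float = 2.5, c_end: float = 0.5):
    """Nonlinear schedule: steep change in early iterations, gentle change later (assumed form)."""
    frac = 0.5 * (1.0 - np.cos(np.pi * t / t_max))
    c1 = c_start - (c_start - c_end) * frac   # c1 decays: less pull toward each particle's own best
    c2 = c_end + (c_start - c_end) * frac     # c2 grows: more pull toward the global best
    return c1, c2

def pso_step(x, v, pbest, gbest, t, t_max, w=0.7):
    """One velocity/position update of the particle swarm over candidate key parameters."""
    c1, c2 = learning_factors(t, t_max)
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# After every iteration, the multi-objective decision function Y is evaluated for each
# particle; the particle with the best Y over all rounds yields the key parameters.
```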
In an implementation manner of the first aspect, the method further includes: the electronic equipment receives a second touch operation of the user on the touch screen; determining an application of the second touch operation manipulation; acquiring a measurement data set generated by the inertial measurement unit according to the second touch operation; and training to obtain a preset identification model by taking the data in the measurement data set as sample data. Therefore, the electronic equipment trains to obtain the preset identification model corresponding to the application by using the measurement data set measured by the inertia measurement unit during the second touch operation of the user on the touch screen, and the accuracy of determining the identity of the user when the preset identification model is used by the subsequent electronic equipment can be improved.
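Tying the earlier sketches together, training the per-application preset identification model from the measurement data set could look like the following; this is again an assumption-level sketch that reuses to_relative, statistical_features, and fit_single_class_model defined above.

```python
import numpy as np

def train_preset_model(measurement_dataset, nu: float, gamma: float):
    """measurement_dataset: one (N_i, 3) array per second touch operation by the positive-class user."""
    features = np.stack([statistical_features(to_relative(m)) for m in measurement_dataset])
    return fit_single_class_model(features, nu=nu, gamma=gamma)
```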
In a second aspect, an electronic device is provided, comprising a touch screen, a memory, a processor, and an inertial measurement unit, where the touch screen, the memory, and the inertial measurement unit are coupled to the processor. The memory stores computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the user identity identification method of any one of the implementations of the first aspect.
In a third aspect, a computer-readable storage medium is provided, which includes computer instructions, when the computer instructions are executed on an electronic device, the electronic device executes the user identification method of any one of the first aspect.
In a fourth aspect, a computer program product is provided, which when run on a computer causes the computer to perform the user identification method of any of the first aspect above.
Drawings
Fig. 1 is a schematic view of a user interface of a mobile phone according to an embodiment of the present application;
fig. 2 is a schematic view of a user interface of another mobile phone according to an embodiment of the present application;
fig. 3 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application;
fig. 5 is a flowchart of a user identity recognition method according to an embodiment of the present application;
fig. 6 is a schematic view of a user interface of another mobile phone according to an embodiment of the present application;
fig. 7 is a schematic diagram of a touch position where a user clicks a touch screen according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating a result of data measured by an acceleration sensor in a mobile phone according to an embodiment of the present disclosure;
FIG. 9 is a diagram illustrating results of an SVDD model according to an embodiment of the present application;
FIG. 10 is a diagram illustrating a variation of a Y-value curve with the number of iteration rounds according to an embodiment of the present application;
fig. 11 is a schematic diagram illustrating a principle of determining an optimal preset recognition model according to an embodiment of the present application;
fig. 12 is a scene diagram of a user identity recognition method according to an embodiment of the present application;
fig. 13 is a schematic view of a user interface of another mobile phone according to an embodiment of the present application;
fig. 14 is a schematic view of a user interface of another mobile phone provided in the embodiment of the present application;
fig. 15 is a schematic view of a user interface of another mobile phone according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings. In the description of the present application, unless otherwise stated, "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. Also, in the description of the present application, "a plurality" means two or more unless otherwise specified. "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple. In addition, to facilitate a clear description of the technical solutions of the embodiments of the present application, terms such as "first" and "second" are used to distinguish between identical or similar items having substantially the same functions and effects. Those skilled in the art will appreciate that the terms "first", "second", and so on do not limit quantity or order of execution, nor do they denote relative importance. Also, in the embodiments of the present application, the word "exemplary" or "such as" is used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "such as" is not to be construed as preferred or advantageous over other embodiments or designs; rather, the use of these words is intended to present relevant concepts in a concrete fashion for ease of understanding.
With the continuous development of electronic devices such as mobile phones, the identification technology is widely applied to various use scenes of the electronic devices. For example, the usage scenario may be a payment or unlocking scenario.
In the related art, the electronic device identifies the user identity through acquired information such as a digital password, a fingerprint, a user face image, a voiceprint, a graphic pattern, and the like (referred to as a perceptible identity-verification method).
But such information may in some cases be inconvenient for the user to use or may not be accurately acquired by the electronic device. The description will be given by taking a digital password and a fingerprint as examples.
In one example, referring to FIG. 1, a digital password input interface may be displayed on the electronic device, and digital controls are displayed on this interface. The electronic device may pre-store a preset digital password set by the owner user. The electronic device determines the numbers input by the user by receiving the operations of the user touching the digital controls. When the electronic device recognizes that the numbers input by the user are the same as the preset digital password, the user identity is determined to be the owner user.
It can be understood that when the electronic device identifies the user identity by using the digital password, the electronic device needs to receive the operation of touching the digital control by the user. In the process of user operation, the input number is easily exposed to others, so that the input number is leaked, and therefore, the digital password is inconvenient to use by the user in some cases.
In some embodiments, the pre-stored preset digital password and the user identity corresponding to the preset digital password may also be pre-stored in the server. In this way, the electronic device uploads the input number to the server according to the received operation of the user clicking the digital control. The server searches for the preset digital password with the same number, and feeds back the user identity corresponding to the preset digital password with the same number to the electronic equipment.
In another example, referring to fig. 2, an electronic device includes a touch screen on which a fingerprint sensor for collecting a fingerprint may be mounted. The electronic device collects a fingerprint of a user using a fingerprint sensor. The electronic device may pre-store a preset fingerprint and a user identity corresponding to the preset fingerprint. And if the similarity between the acquired fingerprint and the preset fingerprint is greater than the preset fingerprint similarity, the electronic equipment determines the user identity of the user as a preset stored user identity corresponding to the preset fingerprint.
It can be understood that when the electronic device uses the fingerprint to identify the user identity, if there are smudges on the touch screen or on the user's hand, the user's fingerprint cannot be accurately acquired by the electronic device.
In order to avoid the problem that the information used for identifying the user identity is inconvenient for the user or cannot be accurately acquired by the electronic device, an embodiment of the present application provides a user identity identification method. In the method, the electronic device receives a first touch operation of a user on a touch screen, where the first touch operation includes any touch operation such as a tap or a slide; the electronic device then searches for a preset identification model corresponding to the application controlled by the first touch operation, and if the preset identification model is found, acquires first measurement data generated by the inertial measurement unit according to the first touch operation. The electronic device inputs the first measurement data into the preset identification model to output the user identity. In the embodiment of the application, because the first measurement data generated by the inertial measurement unit differ when different users perform the first touch operation on an application, the user identity can be recognized in the background with the preset recognition model while the user is using the electronic device (referred to as an imperceptible identity-verification method). This avoids the problem that the information used for identifying the user identity is inconvenient for the user or cannot be accurately acquired by the electronic device, and improves the generalization of user identity identification.
In the embodiment of the application, the user identity results output by the preset identification model are two types, one type is a positive type user, and the other type is a negative type user. In one example, the positive class user that can be output by the preset recognition model is an owner user, and the negative class user is a non-owner user. In another example, the preset recognition model may output that the positive class of users are adults and the negative class of users are minors. In yet another example, the positive class user and the negative class user that the preset recognition model can output are male users and female users, respectively.
Certainly, the positive class user and the negative class user in the embodiment of the present application are not limited to the content disclosed above, and the user identities may also be set according to a user-defined requirement.
The electronic device in the embodiment of the present application may be, for example, a mobile phone, a portable computer, a tablet computer, a notebook computer, a Personal Computer (PC), a wearable electronic device (e.g., a smart watch), an Augmented Reality (AR)/Virtual Reality (VR) device, an in-vehicle computer, or the like; the following embodiments do not particularly limit the specific form of the electronic device.
Take the above electronic device as a mobile phone as an example. Fig. 3 shows a schematic structural diagram of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like.
Wherein the sensor module 180 may include a pressure sensor, a gyroscope sensor 180A, a barometric sensor, a magnetic sensor, an acceleration sensor 180B, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor 180C, an ambient light sensor, a bone conduction sensor, a magnetometer 180D, a linear acceleration sensor 180E, a gravitational acceleration sensor 180F, a rotation vector sensor 180G, a direction meter 180H, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., Wireless Fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing and is connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. In some embodiments, the display screen is a touch screen. The electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a user takes a picture, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, an optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and converting into an image visible to the naked eye. The ISP can also carry out algorithm optimization on noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to be converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
In some embodiments, the NPU may process the input information by referring to a preset recognition model to obtain the user identity.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in the external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a voice signal to the microphone 170C by uttering a voice signal close to the microphone 170C through the mouth of the user. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and perform directional recording.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The gyro sensor 180A may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180A. The gyro sensor 180A may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyro sensor 180A detects a shake angle of the electronic device 100, calculates a distance to be compensated for the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyro sensor 180A may also be used for navigation, somatosensory gaming scenes.
The acceleration sensor 180B may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
The proximity light sensor may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light to the outside through the light emitting diode and detects infrared light reflected from a nearby object using the photodiode. When sufficient reflected light is detected, the electronic device 100 can determine that there is an object nearby; when insufficient reflected light is detected, the electronic device 100 can determine that there is no object nearby. The electronic device 100 can use the proximity light sensor to detect that the user is holding the electronic device 100 close to the ear during a call, so as to automatically turn off the screen and save power. The proximity light sensor can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The fingerprint sensor is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access the application lock, take photos with the fingerprint, answer incoming calls with the fingerprint, and so on.
The touch sensor 180C is also referred to as a "touch panel". The touch sensor 180C may be disposed on the display screen 194, and the touch sensor 180C and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180C is used to detect a touch operation applied thereto or nearby. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180C may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The magnetometer 180D can be used to measure magnetic field strength and direction and to determine the orientation of the electronic device. Its principle is similar to that of a compass: it can measure the angles between the current orientation of the electronic device and the four directions of east, south, west, and north.
The linear acceleration sensor 180E provides the data obtained by removing the influence of gravity from the acceleration sensor data.
The gravitational acceleration sensor 180F can sense changes in the acceleration force, which is the force acting on the electronic device while it is accelerating.
The rotation vector of the rotation vector sensor 180G represents the orientation of the electronic device and is obtained by combining a rotation axis and a rotation angle.
The direction meter 180H can measure an azimuth, a pitch angle, and a roll angle, where the azimuth is the angle between the Y-axis and magnetic north, the pitch angle is the angle between the x-axis and the horizontal plane, and the roll angle is the angle between the y-axis and the horizontal plane.
When the user identification method in the embodiment of the present application is implemented based on the electronic device 100 shown in fig. 3, the electronic device 100 receives a touch operation of a user on a touch screen through the touch sensor 180C. The processor 110 of the electronic device determines an application manipulated by the touch operation through instructions run in the internal memory 121, and searches for a preset recognition model corresponding to the application. The processor 110 of the electronic device obtains measurement data generated by an inertial measurement unit according to the touch operation, wherein the inertial measurement unit at least comprises an acceleration sensor, a linear acceleration sensor, a gravitational acceleration sensor, a magnetometer, a gyroscope sensor, a rotation vector sensor and/or an orientation meter. The electronic device 100 processes the measurement data through the NPU, that is, inputs a preset identification model, and outputs a user identity.
The keys 190 include a power-on key, a volume key, and the like. The motor 191 may generate a vibration cue. Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc. The SIM card interface 195 is used to connect a SIM card.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the invention, the software structure of the electronic device 100 is exemplarily described by taking the hierarchical Android system as an example.
Fig. 4 is a block diagram of the software configuration of the electronic device 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers from top to bottom: an application layer, an application framework layer, the Android Runtime and system libraries, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 4, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 4, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a brief dwell, and does not require user interaction. Such as a notification manager used to notify download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scrollbar text in a status bar at the top of the system, such as a notification of a running application in the background, or a notification that appears on the screen in the form of a dialog window. For example, prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, flashing an indicator light, etc.
The Android Runtime comprises a core library and a virtual machine. The Android Runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), two-dimensional graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The two-dimensional graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
Based on the electronic device 100 shown in fig. 4, to implement the user identity recognition method in the embodiment of the present application, after the touch sensor 180C receives the touch operation, the application framework layer detects whether a preset recognition model corresponding to the application controlled by the touch operation exists. And if so, calling the kernel layer to acquire first measurement data generated by the inertial measurement unit according to the touch operation. And the application program framework layer inputs the first measurement data into a preset identification model to obtain the user identity.
The embodiment of the application provides a user identity identification method, which can be applied to electronic equipment, wherein the electronic equipment comprises a touch screen and at least one inertia measurement unit. Taking the above-mentioned electronic device as a mobile phone as an example, as shown in fig. 5, the user identification method may include S501-S506.
S501, the mobile phone receives the operation that the user starts the user identity recognition function.
In the embodiment of the application, whether the mobile phone executes the user identity identification method can be determined according to user settings so as to meet the use requirements of users.
Referring to fig. 6, a mobile phone navigation page is displayed on the touch screen of the mobile phone. The navigation page includes a query message asking whether the user wants the mobile phone to execute the user identity identification method. The query message 600 may include "whether the mobile phone is allowed to automatically identify the user's identity", and the query message 600 also includes a rejection control 601 and a consent control 602.
When the mobile phone receives an operation of selecting the rejection control 601 by the user, the mobile phone does not execute the method for identifying the user identity. When the mobile phone receives an operation of the user selecting the consent control 602, that is, when the mobile phone receives a control selected by the user to confirm the use of the user identification, the mobile phone may execute the method of user identification.
Of course, in the embodiment of the present application, the mobile phone may also directly default that the mobile phone that the user agrees to use may execute the method for identifying the user identity without setting the query message.
S502, the mobile phone receives a first touch operation of a user on the touch screen.
In the embodiment of the application, the mobile phone achieves the purpose that the application in the mobile phone is controlled by the user by receiving the first touch operation of the user on the touch screen. The first touch operation includes any touch operation such as a point touch or a slide of the user on the touch screen.
In one example, the mobile phone may receive an operation of the user touching a control on the user interface, and perform the corresponding operation according to the control. For example, the mobile phone may start the application corresponding to an application control after receiving an operation of the user touching that application control. For another example, after receiving an operation of the user tapping an album control in an album application, the mobile phone may display the pictures in the album corresponding to that album control.
In another example, the mobile phone may receive an operation of sliding on the touch screen by the user, and perform the related operation. For example, when a mobile phone is playing a video by using a video application, an operation of sliding upwards by a user may be received to switch the played video. For another example, when a mobile phone runs a game application in the foreground, the operation of sliding the user in different directions can be received, so as to change the moving direction of the target object in the game.
In the embodiment of the application, the mobile phone can complete user identity recognition by using the preset recognition model in the using process of the user.
S503, the mobile phone detects whether a preset identification model corresponding to the application controlled by the first touch operation exists, wherein the preset identification model is obtained by second measurement data generated by the inertia measurement unit according to a second touch operation of the user on the touch screen in advance and corresponding user identity training.
In some embodiments, the preset identification model corresponding to the application is preset in the mobile phone at the time of factory shipment.
In other embodiments, the predetermined recognition model corresponding to the application is trained using sample data of the owner user. The mobile phone needs to spend a certain time for collecting sample data related to the owner user, so that a preset identification model corresponding to the application does not exist in the mobile phone within the time.
In other embodiments, since there is no scene for identifying the user identity when the mobile phone runs the application, the corresponding preset identification model is not set for the application.
Therefore, there may be cases where no preset recognition model corresponding to the application manipulated by the first touch operation exists in the mobile phone.
In the embodiment of the application, when the mobile phone displays the user interface of an application, it can receive the first touch operation of the user on the touch screen, so that the application can be controlled. The application can be a third-party application installed on the mobile phone, or a system application. Exemplarily, the third-party application may be WeChat, Xiaohongshu, or the like.
The mobile phone can process the received first touch operation into an original input event, wherein the original input event comprises an application identifier of an application controlled by the first touch operation. The application identification indicates a unique one of the applications, and the application identification may include numbers, letters, and/or symbols. And the mobile phone determines the application through the original input event corresponding to the first touch operation.
In the embodiment of the application, because the first touch operations performed when the user controls different applications on the mobile phone may differ, a corresponding preset recognition model is set for each application so that every application has a best-adapted preset recognition model; this makes the recognition result of the preset recognition model more accurate. Therefore, when the mobile phone receives the first touch operation, it can identify the user identity with the preset identification model corresponding to the application, thereby improving the accuracy of user identity identification.
The touch operations performed when the user operates different applications on the mobile phone, and the operation areas on the user interface, differ greatly from application to application, so the measurement data measured by the inertial measurement unit when the touch operations are received can also differ greatly; therefore a corresponding preset identification model needs to be set for each application.
In one example, when the user manipulates an application such as Genshin Impact or Honor of Kings, the mobile phone recognizes that the operation regions on the user interface are mainly the lower left region and the lower right region, and the touch operations are mainly rotational sliding and tapping. In another example, the mobile phone recognizes that the user performs mostly tapping actions when manipulating text-adventure and idle-type game applications, such as the Nikki series or Tears of Themis. In yet another example, when the user manipulates a short-video or live-streaming application, such as Douyin or Kuaishou, the mobile phone recognizes that the user mostly slides upward and taps the lower right portion of the user interface while manipulating the application. In yet another example, when the user manipulates a news application, the mobile phone recognizes that the user mostly performs up-and-down sliding operations.
In summary, it can be seen that the difference between the touch operations of the users identified by the mobile phone when the users operate different applications may be large.
In addition, when different users operate the same application on the mobile phone, the touch operations received by the mobile phone and the data fed back by the inertia measurement unit have different characteristics. Therefore, in the embodiment of the application, when the mobile phone receives a touch operation, the preset identification model corresponding to the application is established using the measurement data measured by the inertia measurement unit, and the user identity can then be accurately identified using that preset identification model.
As shown in fig. 7, (a) in fig. 7 is a schematic diagram of a position of a point touch on the touch screen when the user a manipulates the application a. Fig. 7 (B) is a schematic diagram of a touch position on the touch screen when the user B manipulates the application a. It can be seen that the touch positions of the user a and the user B identified by the mobile phone are not completely the same when using the same application, but the difference is not obvious.
As shown in fig. 8, (a) in fig. 8 is a schematic diagram of x-y axis data acquired by an acceleration sensor when a user a performs a point-and-touch operation while manipulating an application a. Fig. 8 (b) is a schematic diagram of y-z axis data acquired by the acceleration sensor when the user a performs the point-and-touch operation while manipulating the application a. Fig. 8 (c) is a schematic diagram of x-y axis data acquired by the acceleration sensor when the user B performs the point-touch operation while using the application a. Fig. 8 (d) is a schematic diagram of y-z axis data acquired by the acceleration sensor when the user B performs the point-and-touch operation while using the application a. Therefore, when the mobile phone recognizes that the difference of the touch positions is not obvious when the user performs the touch operation, the data acquired by the acceleration sensor still have great difference. Therefore, the mobile phone establishes the preset identification model corresponding to the application by using the measurement data measured by the inertia measurement unit, and can accurately identify the identity of the user.
In some embodiments, each application may be provided with a corresponding predetermined recognition model.
In other embodiments, multiple applications of the same type may have a corresponding predetermined recognition model.
The division of applications into the same type may be determined according to whether the operation areas on the user interface are substantially the same and whether the touch operations in those areas are substantially the same when the user manipulates the applications. For example, the same preset recognition model may be set for the above-mentioned Genshin Impact and Honor of Kings applications. As another example, the same preset recognition model may be set for both Douyin and Kuaishou.
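As an illustration only (not part of the patent disclosure), grouping applications of the same type so that they share one preset recognition model could be implemented as a simple mapping from application identifiers to model keys; all package names and type keys below are hypothetical:

```python
# Hypothetical grouping of applications of the same type (sketch only).
APP_TYPE = {
    "com.example.joystick_game_a": "joystick_game",  # lower-left/lower-right operation regions
    "com.example.joystick_game_b": "joystick_game",
    "com.example.short_video_a": "short_video",      # mostly upward slides and point touches
    "com.example.short_video_b": "short_video",
    "com.example.news_reader": "news",               # mostly up-and-down slides
}

def model_key_for(app_id: str) -> str:
    """Applications of the same type share one preset recognition model;
    an ungrouped application falls back to its own per-application model."""
    return APP_TYPE.get(app_id, app_id)
```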
The preset identification model may be preset in the mobile phone when the mobile phone leaves the factory. In this case, the mobile phone manufacturer trains the preset identification model on sample data collected in advance and presets the trained preset identification model directly in the mobile phone.
It should be noted that the manner of directly presetting the preset identification model by the mobile phone is suitable for the situation that the mobile phone does not need to collect sample data of the owner user for determining the preset identification model. If the user identity identified by the preset identification model is adult or minor, or male or female, the preset identification model can be directly preset in the mobile phone.
In the embodiment of the application, when the user identities to be identified are minors and adults, the preset identification model can be preset in the mobile phone when the mobile phone leaves a factory. The mobile phone manufacturer can search sample data in advance when the application is used by minors and adults, and learn to obtain the preset identification model by using the sample data.
When the user identities to be identified are male and female, the preset identification model can be preset in the mobile phone when the mobile phone leaves a factory. A mobile phone manufacturer can search sample data in application for men and women in advance, and train and learn by using the sample data to obtain a preset identification model.
Alternatively, the preset identification model may be obtained by the mobile phone itself through learning and training on sample data collected by the mobile phone. This manner is suitable for the situation in which the mobile phone needs to use sample data of the owner user to determine the preset identification model. For example, if the user identities identified by the preset identification model are the owner user and non-owner users, positive sample data of the owner user needs to be collected to determine the preset identification model.
In some embodiments, the method for obtaining the preset recognition model by the mobile phone through learning training by using the collected sample data about the owner and the user includes:
and the mobile phone receives a second touch operation of the user on the touch screen.
In this embodiment, the second touch operation may be an operation of a user touching or sliding on the touch screen.
And the mobile phone determines the application controlled by the second touch operation. This step is the same as the above-mentioned process of determining the application of the first touch operation manipulation, and is not described again.
And the mobile phone acquires a measurement data set generated by the inertia measurement unit according to the second touch operation.
In some embodiments, before the mobile phone obtains the measurement data set generated by the inertia measurement unit according to the second touch operation, the mobile phone may determine whether a preset identification model corresponding to an application controlled by the second touch operation exists. And if the preset identification model corresponding to the application controlled by the second touch operation does not exist in the mobile phone, executing the step of obtaining the measurement data set generated by the inertia measurement unit according to the second touch operation by the mobile phone.
In addition, when the mobile phone does not have a preset identification model corresponding to the application controlled by the second touch operation, it may further determine whether a preset identification model needs to be generated for that application. This can be determined according to the application: some applications do not need the user identity identification function, so the mobile phone does not need to generate a corresponding preset identification model for them; other applications do need the user identity identification function, so the mobile phone needs to generate a corresponding preset identification model for them. Therefore, when the mobile phone determines that a preset identification model needs to be generated for the application controlled by the second touch operation, it performs the step of acquiring the measurement data set generated by the inertia measurement unit according to the second touch operation. When the mobile phone determines that no preset identification model needs to be generated for that application, it does not perform this step.
If the preset identification model corresponding to the application controlled by the second touch operation already exists in the mobile phone, the step of acquiring the measurement data set generated by the inertia measurement unit according to the second touch operation is not performed. This is because a preset identification model corresponding to that application may already have been preset in the mobile phone when it left the factory, and in this case there is no need to collect user data, that is, no need to generate the preset identification model from the measurement data set generated by the inertia measurement unit according to the second touch operation.
In the embodiment of the application, the mobile phone can receive a second touch operation of the user on the touch screen. And the inertia measurement unit generates second measurement data when the mobile phone receives a second touch operation. The handset may combine the second measurement data over a period of time into a measurement data set. The mobile phone can train the data in the measurement data set as sample data to obtain a preset identification model.
Referring again to fig. 3, the inertial measurement unit includes a gyroscope sensor 180A, an acceleration sensor 180B, a magnetometer 180D, a linear acceleration sensor 180E, a gravitational acceleration sensor 180F, a rotation vector sensor 180G, and/or a direction meter 180H.
In some embodiments, the second measurement data includes measurement data in a time period in the inertial measurement unit with an initial time stamp as a starting point and an end time stamp as an ending point, where the initial time stamp is determined by the mobile phone according to an initial time when the detected finger of the user contacts the touch screen, and the end time stamp is determined by the mobile phone according to a leaving time when the detected finger of the user leaves the touch screen.
Due to the limitation of the inertia measurement unit, there is a time delay between the mobile phone receiving the second touch operation and the inertia measurement unit measuring the second measurement data due to the second touch operation. In order to improve the accuracy of sample data of a preset identification model, in practical application, the time for acquiring second measurement data measured by the inertia measurement unit is determined according to the initial time for the user finger to contact the touch screen and the leaving time for the user finger to leave the touch screen, which are detected by the mobile phone.
In the embodiment of the application, in order to improve the accuracy of the electronic device for identifying the user identity, the initial timestamp and the termination timestamp of the measurement data of the inertia measurement unit are determined according to the initial time when the hand of the user detected by the electronic device contacts the touch screen and the departure time when the hand of the user leaves the touch screen, so that the measurement data of the inertia measurement unit between the initial timestamp and the termination timestamp are the measurement data measured when the electronic device receives touch operation, and when the measurement data is input into the preset identification model, the accuracy of outputting the user identity can be improved.
Illustratively, the mobile phone starts to acquire the second measurement data 10 ms after detecting that the user's finger touches the touch screen, and finishes acquiring the second measurement data 10 ms after detecting that the user's finger leaves the touch screen. In this way, the mobile phone can accurately acquire the second measurement data generated by the inertia measurement unit due to the second touch operation, and the second measurement data can reflect the influence of the user's second touch operation on the data generated by the inertia measurement unit.
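A minimal sketch (an assumption, not the patent's implementation) of how the measurement window could be selected using the initial and termination timestamps, with the 10 ms delay taken from the example above; the data layout and helper name are illustrative:

```python
from typing import List, Tuple

Sample = Tuple[float, Tuple[float, float, float]]  # (timestamp in ms, (x, y, z) reading)

def window_measurements(samples: List[Sample],
                        touch_down_ms: float,
                        touch_up_ms: float,
                        delay_ms: float = 10.0) -> List[Sample]:
    """Keep the inertia-measurement-unit samples between the initial timestamp
    (touch-down time plus delay) and the termination timestamp (touch-up time
    plus delay), so the window reflects data actually caused by the touch."""
    start = touch_down_ms + delay_ms
    end = touch_up_ms + delay_ms
    return [s for s in samples if start <= s[0] <= end]
```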
In addition, in the embodiment of the application, the measurement data measured by the inertia measurement unit is irrelevant to the type of touch operation, and the inertia measurement unit can measure the measurement data no matter whether the touch operation is point touch or sliding operation or the like, so that the inertia measurement unit can obtain undifferentiated data under any touch operation, and the use generalization of the preset identification model is improved.
In some embodiments, the second measurement data and the corresponding user identity in the measurement data set may be data of a positive type sample, and the preset third measurement data and the corresponding user identity are data of a negative type sample, where the user identity corresponding to the second measurement data is a positive type user, and the user identity corresponding to the third measurement data is a negative type user.
And training to obtain a preset recognition model corresponding to the application by using the positive sample and the negative sample.
In this embodiment of the application, the negative type sample may be data preset in the mobile phone. When the positive type sample is the owner user sample, the negative type sample, namely the non-owner sample, can be obtained in the mobile phone through factory presetting of the mobile phone.
By default, the touch operations within the preset time period of the initial use of the mobile phone are executed by the owner user; that is, the user identity corresponding to the second measurement data is the owner user by default.
In order to highlight the characteristics of the second measurement data, the mobile phone may perform calculation on the second measurement data to obtain second statistical characteristic data. The second statistical characteristic data then replace the second measurement data for training the preset recognition model.
Statistical features include, but are not limited to, one or more of the following: maximum, minimum, mean, kurtosis, variance, skewness, discrete Fourier transform, median absolute deviation, first quartile (25th percentile), third quartile (75th percentile), signal magnitude area, and the like.
If the mobile phone identifies that the second measurement data contains the three-axis data, the second measurement data corresponding to each axis are calculated respectively to obtain a second statistical characteristic value.
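For illustration, the per-axis statistical features listed above could be computed roughly as follows (a NumPy/SciPy sketch; the exact feature set used in practice may differ, and the signal magnitude area is computed over all three axes together):

```python
import numpy as np
from scipy import stats

def axis_features(x: np.ndarray) -> dict:
    """Statistical characteristic values for one axis of measurement data."""
    return {
        "max": float(np.max(x)),
        "min": float(np.min(x)),
        "mean": float(np.mean(x)),
        "variance": float(np.var(x)),
        "kurtosis": float(stats.kurtosis(x)),
        "skewness": float(stats.skew(x)),
        "median_abs_dev": float(np.median(np.abs(x - np.median(x)))),
        "q25": float(np.percentile(x, 25)),
        "q75": float(np.percentile(x, 75)),
        "dft_peak": float(np.max(np.abs(np.fft.rfft(x)))),  # dominant DFT magnitude
    }

def signal_magnitude_area(xyz: np.ndarray) -> float:
    """Signal magnitude area over an (N, 3) window of tri-axial data."""
    return float(np.mean(np.sum(np.abs(xyz), axis=1)))
```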
Correspondingly, when the sample data of the preset identification model is the calculated second statistical characteristic value, the mobile phone also needs to calculate the first measurement data to obtain the first statistical characteristic value in the process of actually using the preset identification model. And inputting the first statistical characteristic data into a preset recognition model to output the user identity.
In some embodiments, the inertial measurement unit includes a preset measurement unit that measures with the earth as an absolute reference frame, so that its second measurement data carry geographic characteristics. When the mobile phone recognizes that the same user performs the same first touch operation at different geographic positions, the preset measurement unit outputs different second measurement data. Therefore, in order to avoid the influence of geographic position on the second measurement data of the preset measurement unit, the mobile phone takes the difference between each pair of adjacent second measurement data measured by the preset measurement unit to obtain second relative measurement data, so that the second relative measurement data are free of the influence of geographic position.
In some embodiments, the predetermined measurement unit comprises a magnetometer, a rotation vector sensor, or an orientation meter.
For example, the preset measurement unit is a magnetometer. Suppose that the second measurement data measured on the x axis, arranged in measurement order, are measurement data A, measurement data B, measurement data C, and measurement data D. The mobile phone subtracts each pair of adjacent second measurement data to obtain the second relative measurement data, namely measurement data B − measurement data A, measurement data C − measurement data B, and measurement data D − measurement data C.
The arrangement order of the second relative measurement data is set according to the original arrangement order of the second measurement data. If the mobile phone has two second relative measurement data, when the first second relative measurement data is obtained by calculating the first second measurement data and the second measurement data, and the second relative measurement data is obtained by calculating the second measurement data and the third second measurement data, the first second relative measurement data is arranged before the second relative measurement data.
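As a sketch of the differencing step (consistent with the B − A, C − B, D − C example above, but otherwise an assumption), NumPy's diff preserves the original measurement order:

```python
import numpy as np

def relative_measurements(measurements: np.ndarray) -> np.ndarray:
    """Difference each pair of adjacent measurements (B-A, C-B, D-C, ...),
    removing the absolute, geography-dependent component while keeping
    the original measurement order."""
    return np.diff(measurements, axis=0)

# Example: magnetometer x-axis readings A, B, C, D -> [B-A, C-B, D-C]
rel = relative_measurements(np.array([30.1, 30.4, 30.0, 29.7]))
```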
In some embodiments, the mobile phone may directly use the second relative measurement data to replace the second measurement data, and train to obtain the preset recognition model.
Correspondingly, when the sample data of the preset identification model is the second relative measurement data obtained by calculating the second measurement data, the mobile phone also needs to calculate the corresponding first relative measurement data for the first measurement data in the process that the mobile phone uses the preset identification model. And inputting the first relative measurement data into a preset identification model to output the identity of the user.
In other embodiments, after the mobile phone calculates the second relative measurement data, a third statistical characteristic value is calculated for the second relative measurement data; and replacing the second measurement data with the third statistical characteristic value, and training to obtain a preset recognition model corresponding to the application.
Correspondingly, when the sample data of the preset identification model is the third statistical characteristic value obtained by calculating the second relative measurement data, the mobile phone also needs to calculate the first relative measurement data to obtain the first relative measurement data, and calculate the first relative measurement data to obtain the fourth statistical characteristic value in the process of actually using the preset identification model. And inputting the fourth statistical characteristic value into a preset recognition model to output the user identity.
In the embodiment of the application, the preset identification model is modeled using a high-dimensional single-class classification model. Illustratively, the high-dimensional single-class classification model includes a Support Vector Data Description model (SVDD), a One-Class Support Vector Machine (One-Class SVM), an isolation forest model (iForest), and the like.
The core problem of the embodiment of the application is identification of the user identity. In terms of classification, the user identity has only two classes: positive-class users and negative-class users. Because the data of users outside the positive class cannot be grouped under a single data feature — that is, the negative class may actually contain data of uncertain quantity and type — a traditional two-class or multi-class model cannot be used. The preset identification model needs to separate the characteristics of positive-class users from those of negative-class users, so that the positive-class user and generalized, varied negative-class users can be effectively identified, which is why a high-dimensional single-class classification model is adopted for modeling. Therefore, the mobile phone can output the user identity more accurately by using the high-dimensional single-class classification model.
As shown in FIG. 9, the SVDD model is a typical high-dimensional single-class classification model, and the principle is that positive-class data is wrapped by constructing a high-dimensional sphere, and negative-class data is outside the sphere.
SVDD models generally use a kernel function to map the data into a higher-dimensional space. Illustratively, the kernel function includes a Radial Basis Function kernel (RBF kernel), an exponential kernel, a polynomial kernel (Poly kernel), or a Laplacian kernel. The penalty parameter C of the SVDD model itself and the parameter value of the corresponding kernel function are the key parameters for constructing the SVDD model, and therefore these two types of parameters need to be set. The parameter value of the kernel function may be, for example, the gamma parameter of the RBF kernel.
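As a rough stand-in only: scikit-learn does not provide an SVDD estimator, but its One-Class SVM with an RBF kernel is closely related (for the RBF kernel the two formulations coincide up to parameterization), so a single-class model with a gamma kernel parameter and a nu parameter playing a role analogous to the penalty C could be sketched as follows; the training data and parameter values are placeholders:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Placeholder positive-class training data: feature vectors derived from the
# owner user's measurement data (shape: samples x features).
X_pos = np.random.rand(200, 10)

# One-Class SVM with an RBF kernel as an SVDD-like high-dimensional
# single-class model; gamma and nu are the key parameters to be tuned.
model = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05).fit(X_pos)

# predict() returns +1 for samples inside the learned boundary (positive-class
# user) and -1 for samples outside it (negative-class user).
label = model.predict(np.random.rand(1, 10))
```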
In the embodiment of the application, a bionic intelligent optimization algorithm or a conventional intelligent optimization algorithm can be used to determine the key parameters of the model. Bionic intelligent optimization algorithms include Particle Swarm Optimization (PSO), bee colony, ant colony, fish swarm, or artificial immune algorithms. Conventional intelligent optimization algorithms include genetic algorithms or grid search.
Bionic intelligent optimization algorithms are currently a large class of intelligent optimization algorithms; compared with conventional intelligent optimization methods, they have the advantages of higher efficiency and more accurate optimization results.
In the following, the optimization of the key parameters of the SVDD model by using the PSO algorithm by the mobile phone is taken as an example for introduction.
In the embodiment of the application, the key parameters of the SVDD model are optimized by using an improved PSO algorithm, and an objective function is constructed according to the problem of user identity identification, so that an optimal preset identification model is constructed. Therefore, the bionics intelligent optimization algorithm can more accurately determine the key parameters of the high-dimensional single classification model by constructing a multi-objective strategy function so as to improve the accuracy of identifying the user identity.
The specific method is as follows: first, the particle swarm is automatically divided into several sub-populations by a kernel fuzzy clustering method, and a particle distribution in which the sub-populations partially overlap is designed, that is, a small number of particles are shared among different sub-populations to increase information interaction between them, while the number of shared particles is controlled to prevent the sub-populations from tending toward a local optimum.
Meanwhile, according to the actual roles of the learning factor c1 (controlling global search ability) and the learning factor c2 (controlling local search ability) in the PSO algorithm, and in order to further prevent the particle swarm algorithm from falling into a local optimum, the learning factors c1 and c2 are designed in a dynamically changing form: the curves corresponding to c1 and c2 follow a gradually changing nonlinear form whose slope goes from high to low as the number of particle swarm iterations increases. The reason is that a larger slope of the learning factor c1 in the early stage lets the particles diffuse quickly, while a smaller slope in the later stage enters the stable sub-population optimization stage; if the search were to stop at the stable stage close to a boundary value without reaching the optimum, diffusing again after several stable rounds helps find the optimal value better.
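The exact curves are not spelled out here, but one plausible reading of the description above is a nonlinear schedule whose slope is large early and small late, with c1 decaying (global search weakening) and c2 growing (local search strengthening); the shapes and constants below are assumptions for illustration:

```python
import math

def learning_factors(t: int, t_max: int,
                     c1_hi: float = 2.5, c1_lo: float = 0.5,
                     c2_lo: float = 0.5, c2_hi: float = 2.5):
    """Assumed dynamic schedule: both curves change steeply in early iterations
    and flatten later; c1 falls from c1_hi to c1_lo while c2 rises from c2_lo
    to c2_hi as iteration t approaches t_max."""
    r = t / t_max
    c1 = c1_lo + (c1_hi - c1_lo) * (1.0 - math.sin(0.5 * math.pi * r))
    c2 = c2_lo + (c2_hi - c2_lo) * math.sin(0.5 * math.pi * r)
    return c1, c2
```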
In the decision-function part of the bionic intelligent optimization algorithm, the embodiment of the application sets a multi-objective decision function. Its weight parameters can be designed manually and further optimized. The decision function contains only two targets, the positive-class user and the negative-class user, and the two targets can be fused by weighting, for example:
y_n = (1 − x1_accuracy,n) × k_1 + (1 − x2_accuracy,n) × k_2;
Y = min[y_1, y_2, …, y_n];
where x1_accuracy,n is the recognition probability of the positive-class user in the n-th round, k_1 is the weight of the positive-class user, x2_accuracy,n is the recognition probability of the negative-class user in the n-th round, k_2 is the weight of the negative-class user, k_1 and k_2 can be set according to actual needs, y_n is the decision value of the n-th round, and n is the number of iteration rounds.
When the SVDD model undergoes multiple rounds of iterative learning, each iteration outputs the values of the positive-class user probability and the negative-class user probability, and the Y value is calculated from these values. The lowest Y value indicates that the SVDD model recognizes positive-class and negative-class users best; the key parameters of the SVDD model at the lowest Y value are therefore taken as the key parameters of the final SVDD model, and the final SVDD model is thereby determined.
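Assuming each iteration round reports the positive-class and negative-class recognition accuracies described above, the multi-objective decision value could be evaluated as in the following sketch (the accuracies and weights are placeholders):

```python
def decision_value(pos_accuracy: float, neg_accuracy: float,
                   k1: float = 0.5, k2: float = 0.5) -> float:
    """y_n = (1 - x1_accuracy,n) * k1 + (1 - x2_accuracy,n) * k2."""
    return (1.0 - pos_accuracy) * k1 + (1.0 - neg_accuracy) * k2

# One y_n per iteration round; the round with the lowest value, Y = min(y_n),
# supplies the key parameters of the final SVDD model.
rounds = [(0.91, 0.88), (0.95, 0.92), (0.93, 0.90)]  # placeholder accuracies
Y = min(decision_value(p, n) for p, n in rounds)
```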
In the embodiment of the application, the value of the multi-objective decision function corresponding to each round of particle swarm iteration is calculated, and the key parameters of the high-dimensional single-class classification model are determined according to these values. In this way, because the learning factors c1 and c2 change dynamically over multiple iterations, more accurate key parameters of the high-dimensional single-class classification model can be determined.
Fig. 10 shows how the Y value varies with the number of iteration rounds, with the abscissa representing the number of iteration rounds and the ordinate representing the Y value. It can be seen that the Y value tends to plateau as the number of iteration rounds increases.
When improving the bionic intelligent optimization algorithm, the setting of its key parameters is studied according to the characteristics of the specific bionic algorithm, a mutually overlapping multi-population method better suited to optimization is designed, and the related parameters and population trend functions are made to change dynamically with the number of iterations, so that the optimization algorithm can find a suitable decision result faster and better.
In the embodiment of the application, the multi-objective decision function of the bionic intelligent optimization algorithm is designed according to the actual user identity recognition problem, so that a better optimal single-class classification model is solved for user identity recognition.
Through the improved bionic optimization-seeking intelligent optimization algorithm of the embodiment, the key parameters of the SVDD model in the embodiment of the application are solved, and then the optimal boundary of the high-dimensional sphere is solved. The method is applied to user type identification, and an optimal identification model is solved.
As shown in fig. 11, in the embodiment of the present application, the optimal values of the key parameters of the high-dimensional single-class classification model are determined using the bionic intelligent optimization algorithm, and the optimal preset identification model is finally determined. The bionic intelligent optimization algorithm is obtained by improving the dynamic change trend of the learning factors and the multi-objective fusion decision function.
S504, if the preset identification model exists in the mobile phone, first measurement data generated by the inertia measurement unit according to the first touch operation are obtained.
In the embodiment of the application, if the mobile phone has the preset identification model, the user identity can be identified by using the preset identification model. Because the input data of the preset identification model is the first measurement data measured by the inertia measurement unit, the first measurement data generated by the inertia measurement unit according to the first touch operation is acquired after the preset identification model exists in the mobile phone.
In some embodiments, there is a delay between the inertial measurement unit measuring the first measurement data and the cell phone receiving the first touch operation due to device limitations of the inertial measurement unit. In order to improve the accuracy of the preset identification model, in practical application, the first measurement data measured by the inertial measurement unit may be acquired with a certain time delay.
The first measurement data comprises measurement data in a time period which takes the initial time stamp as a starting point and takes the ending time stamp as an ending point in the inertial measurement unit; the initial timestamp is determined by the electronic device according to the detected initial time when the hand of the user contacts the touch screen, and the final timestamp is determined by the electronic device according to the detected time when the hand of the user leaves the touch screen. The first measurement data and the second measurement data are obtained by the same method, and the method for obtaining the second measurement data is described above, so the method for obtaining the first measurement data is not described again.
As shown in table 1, table 1 shows that when the first touch operation is a point touch, the first measurement data of the inertia measurement unit is obtained by delaying different times, and the optimal positive class acceptance rate and the optimal negative class rejection rate after the preset identification model is input.
Specifically, the tap operation includes an initial timestamp and an end timestamp. It can be seen that the initial timestamp and the final timestamp are delayed by 20ms and 10ms respectively to obtain the first measurement data of the inertial measurement unit, and the optimal positive class acceptance rate and the optimal negative class rejection rate of the output result after the preset identification model is input are both at a higher level.
[Table 1]
As shown in table 2, table 2 shows that the first touch operation is a sliding operation, the first measurement data of different inertia measurement units is obtained with a delay of 10ms, and the average optimal positive class acceptance rate and the average optimal negative class rejection rate are obtained after the preset identification model is input. It can be seen that the average optimal positive class reception rate obtained after the first measurement data of the single inertial measurement unit is input into the preset identification model is at a higher level, and the average optimal positive class reception rate and the average optimal negative class rejection rate obtained after the first measurement data of all the inertial measurement units is input into the preset identification model are also at a higher level.
[Table 2]
And S505, the mobile phone inputs the first measurement data into a preset identification model to output the identity of the user.
It should be noted that, when the sample data of the preset identification model is the calculated second statistical characteristic value, the mobile phone also needs to calculate the first measurement data to obtain the first statistical characteristic value in the process of actually using the preset identification model. And inputting the first statistical characteristic data into a preset recognition model to output the user identity.
When the sample data of the preset identification model is the second relative measurement data obtained by calculating the second measurement data, the mobile phone also needs to calculate the corresponding first relative measurement data for the first measurement data in the process of using the preset identification model by the mobile phone. And inputting the first relative measurement data into a preset identification model to output the identity of the user.
Therefore, the difference value of the two first measurement data can remove the part of the first measurement data affected by the geographic position, and the electronic equipment inputs the first relative measurement data into the preset identification model, so that the accuracy of the electronic equipment in identifying the user identity can be improved.
When the sample data of the preset identification model is the third statistical characteristic value obtained by calculating the second relative measurement data, the mobile phone also needs to calculate the first relative measurement data to obtain the first relative measurement data, and calculate the first relative measurement data to obtain the fourth statistical characteristic value in the process of actually using the preset identification model. And inputting the fourth statistical characteristic value into a preset recognition model to output the user identity. The above-mentioned content has been described in detail above and will not be described again.
As shown in fig. 12, the feature engineering module receives the first measurement data generated by the inertia measurement unit when the user performs a first touch operation on the mobile phone, and performs calculation on the first measurement data. For example, relative measurement data and statistical feature data are calculated from the first measurement data. As another example, only statistical feature data are calculated. As yet another example, only relative measurement data are calculated. The feature engineering module then sends the calculation result to the corresponding preset recognition model.
In one example, if a first touch operation received by the mobile phone is used for operating the application a, the feature engineering module obtains first measurement data, calculates the first measurement data, sends a calculation result to a preset identification model a corresponding to the application a, and finally outputs the user identity. In another example, the first touch operation received by the mobile phone is used for operating the application B, and the mobile phone sends the calculation result to the preset recognition model B corresponding to the application B, and finally outputs the user identity.
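A minimal sketch of the per-application dispatch shown in fig. 12, under the assumption that feature vectors have already been computed and that each registered model exposes a predict() method (class and method names are illustrative, not part of the patent):

```python
from typing import Dict, Optional, Sequence

class IdentityDispatcher:
    """Routes the computed features of a first touch operation to the preset
    recognition model of the application being manipulated."""

    def __init__(self) -> None:
        self.models: Dict[str, object] = {}  # application identifier -> trained model

    def register(self, app_id: str, model: object) -> None:
        self.models[app_id] = model

    def identify(self, app_id: str, features: Sequence[float]) -> Optional[int]:
        model = self.models.get(app_id)
        if model is None:
            return None  # no preset recognition model exists for this application
        return int(model.predict([list(features)])[0])  # e.g. +1 owner / -1 non-owner
```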
In some embodiments, the first measurement data measured by the inertial measurement unit includes measurement data for a time period beginning with the initial timestamp and ending with the ending timestamp. The initial timestamp is determined by the electronic device according to the detected initial time when the hand of the user contacts the touch screen, and the end timestamp is determined by the electronic device according to the detected time when the hand of the user leaves the touch screen.
Due to the device limitation of the inertia measurement unit, there may be a time delay between the electronic device receiving the touch operation and the inertia measurement unit measuring the measurement data due to the touch operation. In order to improve the accuracy of the electronic equipment for identifying the user identity, an initial time stamp and a termination time stamp of the measurement data of the inertia measurement unit are determined according to the initial time of the electronic equipment for detecting that the hand of the user is in contact with the touch screen and the time of the electronic equipment for detecting that the hand of the user is away from the touch screen, so that the measurement data of the inertia measurement unit between the initial time stamp and the termination time stamp is the measurement data measured when the electronic equipment receives touch operation, and when the measurement data is input into a preset identification model, the accuracy of outputting the user identity can be improved. In addition, in the embodiment of the application, the measurement data measured by the inertia measurement unit is irrelevant to the type of touch operation, and the inertia measurement unit can measure the measurement data no matter whether the touch operation is point touch or sliding operation or the like, so that the inertia measurement unit can obtain undifferentiated data under any touch operation, and the use generalization of the preset identification model is improved.
In some embodiments, the inertial measurement unit includes a predetermined measurement unit, the predetermined measurement unit being based on the earth as an absolute reference frame; the first measurement data measured by the preset measurement unit comprises at least two measurement data; the step of inputting the first measurement data into the preset identification model by the mobile phone comprises the following steps: the mobile phone determines a difference value between two adjacent first measurement data in the first measurement data measured by a preset measurement unit to obtain first relative measurement data; and inputting the first relative measurement data into a preset identification model to output the identity of the user.
Therefore, the difference value of the two first measurement data can remove the part of the first measurement data affected by the geographic position, and the electronic equipment inputs the first relative measurement data into the preset identification model, so that the accuracy of the electronic equipment in identifying the user identity can be improved.
In some embodiments, the step of inputting the first relative measurement data into the preset identification model by the mobile phone comprises: calculating a statistical feature value for the first relative measurement data; and inputting the statistical characteristic value into a preset recognition model to output the user identity.
Therefore, the electronic equipment calculates the statistical characteristic value of the first relative measurement data, the characteristics of the first relative measurement data can be highlighted, and the accuracy of the preset identification model for identifying the user identity is improved.
In some embodiments, the preset identification model is a high-dimensional single-class classification model, and the step of inputting the first measurement data into the preset identification model by the mobile phone to output the user identity includes:
inputting the first measurement data into the high-dimensional single-class classification model to output the user identity; the user identity comprises a positive-class user and a negative-class user; the high-dimensional single-class classification model includes key parameters determined by a bionic intelligent optimization algorithm or a conventional intelligent optimization algorithm. Thus, the electronic equipment can output the user identity more accurately by using the high-dimensional single-class classification model.
In some embodiments, after the mobile phone obtains the user identity output by the preset recognition model, the mobile phone may prompt the user with the result of the user identity.
In one example, when the mobile phone is operated by a user, the mobile phone background can identify, by using the preset identification model, that the user identity of the current user is a minor. When the mobile phone runs a game application, a prompt interface can be displayed, on which "Minors are prohibited from playing this game" can be displayed, as shown in fig. 13, or "Parental authorization is required to continue the game" can be displayed, as shown in fig. 14.
In another example, when the mobile phone is operated by the user, the mobile phone background recognizes that the user identity of the current user is a non-owner user by using a preset recognition model. When the mobile phone needs to pay money, a prompt message may be displayed, as shown in fig. 15, where the prompt message may be "non-owner user may not pay".
In another example, when the mobile phone is operated by a user, the mobile phone background recognizes the user identity of the current user as an owner user by using a preset recognition model. When the mobile phone needs to pay money, the mobile phone can directly complete the payment without displaying a prompt message.
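The prompting behaviour in these examples amounts to gating actions on the recognized identity; a schematic sketch (identity labels, action names, and prompt texts are illustrative only, not the patent's UI strings):

```python
def gate_action(identity: str, action: str) -> str:
    """Decide how the phone responds after the preset recognition model has
    output the current user's identity (illustrative rules only)."""
    if action == "game" and identity == "minor":
        return "prompt: minors are prohibited / parental authorization required"
    if action == "payment" and identity != "owner":
        return "prompt: non-owner users may not pay"
    return "proceed"  # e.g. the owner user paying: complete the payment directly
```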
S506, if the preset recognition model does not exist in the mobile phone, the first touch operation of the user on the touch screen is continuously received.
In the embodiment of the present application, if the preset identification model is not found, the mobile phone continues to execute step S502.
Therefore, the electronic equipment can automatically identify the user identity by continuously receiving the first touch operation of the user on the touch screen in the background.
The embodiment of the application provides a method for identifying user identity. In the method, the electronic equipment receives a first touch operation of a user on a touch screen, the first touch operation comprises any touch operation such as point touch or sliding, then a preset identification model corresponding to an application controlled by the first touch operation is searched, and if the preset identification model is searched, first measurement data generated by an inertia measurement unit according to the first touch operation is acquired. The electronic equipment inputs the first measurement data into a preset identification model so as to output the identity of the user. In the embodiment of the application, in the process that the electronic device is used by a user, according to the fact that different users perform first touch operation on the application, the first measurement data generated by the inertia measurement unit are different, the identification of the user identity can be completed in the background by using the preset identification model. In this way, although the motion of the user performing the first touch operation varies when using the mobile phone, in the embodiment of the present application, an undifferentiated point touch or sliding motion is used as an identification source, and the first measurement data generated by the inertia measurement unit when the point touch or sliding motion occurs is input into the preset identification model to determine the identity of the user. The mobile phone does not distinguish the types of point touch or sliding actions, and the generalization is stronger.
When the electronic equipment is operated by a user, the user identity can be automatically recognized through the preset recognition model, the limitation conditions are less when the user identity is recognized by using the method, and the generalization of user recognition is improved. Moreover, each application corresponds to the respective preset identification model, so that the electronic equipment can utilize the corresponding preset identification model of the touch application when receiving the first touch operation, and further, the accuracy of user identity identification can be improved.
In addition, according to the user identity identification method in the embodiment of the application, the operation track generated when the user performs the first touch operation is not relied on, and the mobile phone can determine the user identity by adopting the first measurement data generated by the inertia measurement unit when the user performs the first touch operation under the scene that the user characteristics cannot be effectively constructed by the operation track.
An embodiment of the present application provides an electronic device, which may include: the touch screen, memory, and one or more processors described above. The electronic device may further include an inertial measurement unit. The touch screen, the inertial measurement unit, the memory, and the processor are coupled. The memory is for storing computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device may perform various functions or steps performed by the mobile phone in the above-described method embodiments. The structure of the electronic device may refer to the structure of the electronic device 100 shown in fig. 3.
The embodiment of the present application further provides a computer storage medium, where the computer storage medium includes computer instructions, and when the computer instructions are run on an electronic device, the electronic device is enabled to execute each function or step executed by a mobile phone in the foregoing method embodiments.
Embodiments of the present application further provide a computer program product, which when running on a computer, causes the computer to execute each function or step executed by the mobile phone in the foregoing method embodiments.
Through the description of the above embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially or partially contributed to by the prior art, or all or part of the technical solutions may be embodied in the form of a software product, where the software product is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A user identity recognition method is applied to electronic equipment, and the electronic equipment comprises a touch screen and an inertia measurement unit and comprises the following steps:
receiving a first touch operation of a user on the touch screen;
detecting whether a preset identification model corresponding to the application controlled by the first touch operation exists or not;
if the preset identification model exists, acquiring first measurement data generated by the inertia measurement unit according to the first touch operation;
inputting the first measurement data into the preset identification model to output the user identity; the preset identification model is obtained by the inertia measurement unit according to second measurement data generated by a second touch operation of the user on the touch screen and corresponding user identity training.
2. The method of claim 1, wherein the first measurement data comprises measurement data for a time period in the inertial measurement unit starting with an initial timestamp and ending with an ending timestamp; the initial timestamp is determined by the electronic device according to the initial time when the detected hand of the user contacts the touch screen, and the final timestamp is determined by the electronic device according to the leaving time when the detected hand of the user leaves the touch screen.
3. The method of claim 2, wherein the inertial measurement unit comprises a predetermined measurement unit with the earth as an absolute reference frame; the first measurement data measured by the preset measurement unit comprises at least two measurement data;
the step of inputting the first measurement data into the preset identification model comprises:
determining a difference value between two adjacent first measurement data in the first measurement data measured by the preset measurement unit to obtain first relative measurement data;
and inputting the first relative measurement data into the preset identification model to output the user identity.
4. The method of claim 3, wherein the step of inputting the first relative measurement data into the predetermined identification model comprises:
calculating a statistical feature value for the first relative measurement data;
and inputting the statistical characteristic value into a preset recognition model to output the user identity.
5. The method of claim 1, wherein the inertial measurement unit comprises at least one or more of an acceleration sensor, a linear acceleration sensor, a gravitational acceleration sensor, a magnetometer, a gyroscope sensor, a rotation vector sensor, and a direction meter.
6. The method of claim 1, wherein the predetermined recognition model is a high-dimensional single-class classification model, and the step of inputting the first measurement data into the predetermined recognition model to output the user identity comprises:
inputting the first measurement data into the high-dimensional single-class classification model to output the user identity; the user identities comprise positive class users and negative class users; the high-dimensional single-class classification model includes key parameters that are determined by a bionic intelligent optimization algorithm or a conventional intelligent optimization algorithm.
7. The method of claim 6, wherein the bionic intelligent optimization algorithm comprises a multi-objective decision function; the key parameters of the high-dimensional single-class classification model are determined according to the bionic intelligent optimization algorithm comprising the multi-objective decision function.
8. The method according to claim 7, wherein the bionic intelligent optimization algorithm comprising a multi-objective decision function is a particle swarm optimization algorithm; the particle swarm optimization algorithm comprises a learning factor c1, a learning factor c2 and a particle swarm; the learning factor c1 and the learning factor c2 change dynamically, and the change trends of the curves corresponding to the learning factor c1 and the learning factor c2 are each a gradually changing nonlinear form in which the slope of the curve goes from high to low as the number of particle swarm iterations increases;
the method further comprises the following steps:
calculating the numerical value of a multi-target decision function corresponding to each round of particle swarm iteration;
and determining key parameters of the high-dimensional single-class classification model according to the numerical value of the multi-target decision function corresponding to each round of particle swarm iteration.
9. The method of claim 1, further comprising:
receiving a second touch operation of a user on the touch screen;
determining an application of the second touch operation control;
acquiring a measurement data set generated by the inertial measurement unit according to the second touch operation;
and training to obtain the preset recognition model by taking the data in the measurement data set as sample data.
10. The method of claim 1, further comprising:
and if the preset recognition model does not exist, continuing to execute the first touch operation of the receiving user on the touch screen.
11. An electronic device, comprising: a touch screen, a memory, and a processor; the electronic device further comprises an inertial measurement unit; the touch screen, the memory, the inertial measurement unit and the processor are coupled; wherein the memory has stored therein computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any of claims 1-10.
12. A computer-readable storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-10.
13. A computer program product, characterized in that, when run on a computer, causes the computer to perform the method according to any one of claims 1-10.
CN202310027093.6A 2023-01-09 2023-01-09 User identity recognition method and electronic equipment Active CN115718913B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310027093.6A CN115718913B (en) 2023-01-09 2023-01-09 User identity recognition method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310027093.6A CN115718913B (en) 2023-01-09 2023-01-09 User identity recognition method and electronic equipment

Publications (2)

Publication Number Publication Date
CN115718913A true CN115718913A (en) 2023-02-28
CN115718913B CN115718913B (en) 2023-07-14

Family

ID=85257898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310027093.6A Active CN115718913B (en) 2023-01-09 2023-01-09 User identity recognition method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115718913B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116502203A (en) * 2023-06-28 2023-07-28 荣耀终端有限公司 User identity recognition method and electronic equipment
CN117609972A (en) * 2024-01-17 2024-02-27 中国人民解放军战略支援部队航天工程大学 VR system user identity recognition method, system and equipment
CN117608424A (en) * 2024-01-24 2024-02-27 江苏锦花电子股份有限公司 Touch knob screen management and control system and method based on Internet of things


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222730A (en) * 2019-05-16 2019-09-10 华南理工大学 Method for identifying ID and identification model construction method based on inertial sensor
CN110348186A (en) * 2019-05-28 2019-10-18 华为技术有限公司 A kind of display methods and electronic equipment based on user identity identification
CN111414970A (en) * 2020-03-27 2020-07-14 西安迅和电气科技有限公司 Wind power gear box abnormal data classification method
CN112699971A (en) * 2021-03-25 2021-04-23 荣耀终端有限公司 Identity authentication method and device
CN114272612A (en) * 2021-12-14 2022-04-05 杭州逗酷软件科技有限公司 Identity recognition method, identity recognition device, storage medium and terminal

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116502203A (en) * 2023-06-28 2023-07-28 荣耀终端有限公司 User identity recognition method and electronic equipment
CN117609972A (en) * 2024-01-17 2024-02-27 中国人民解放军战略支援部队航天工程大学 VR system user identity recognition method, system and equipment
CN117609972B (en) * 2024-01-17 2024-04-12 中国人民解放军战略支援部队航天工程大学 VR system user identity recognition method, system and equipment
CN117608424A (en) * 2024-01-24 2024-02-27 江苏锦花电子股份有限公司 Touch knob screen management and control system and method based on Internet of things
CN117608424B (en) * 2024-01-24 2024-04-12 江苏锦花电子股份有限公司 Touch knob screen management and control system and method based on Internet of things

Also Published As

Publication number Publication date
CN115718913B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN109299315B (en) Multimedia resource classification method and device, computer equipment and storage medium
CN110825469A (en) Voice assistant display method and device
CN115718913B (en) User identity recognition method and electronic equipment
CN111666119A (en) UI component display method and electronic equipment
CN113163470A (en) Method and electronic equipment for identifying specific position on specific route
CN110471606B (en) Input method and electronic equipment
CN112907725B (en) Image generation, training of image processing model and image processing method and device
CN113705823A (en) Model training method based on federal learning and electronic equipment
CN113572896B (en) Two-dimensional code display method based on user behavior model, electronic device and readable storage medium
CN112383664B (en) Device control method, first terminal device, second terminal device and computer readable storage medium
CN111103922A (en) Camera, electronic equipment and identity verification method
CN110138999B (en) Certificate scanning method and device for mobile terminal
CN114090102A (en) Method, device, electronic equipment and medium for starting application program
CN112584037B (en) Method for saving image and electronic equipment
CN115032640B (en) Gesture recognition method and terminal equipment
CN114283195B (en) Method for generating dynamic image, electronic device and readable storage medium
CN113343709B (en) Method for training intention recognition model, method, device and equipment for intention recognition
CN113489895B (en) Method for determining recommended scene and electronic equipment
CN113380240B (en) Voice interaction method and electronic equipment
CN115437601A (en) Image sorting method, electronic device, program product, and medium
CN114120987B (en) Voice wake-up method, electronic equipment and chip system
CN114079725B (en) Video anti-shake method, terminal device, and computer-readable storage medium
CN115393676A (en) Gesture control optimization method and device, terminal and storage medium
CN114911400A (en) Method for sharing pictures and electronic equipment
CN111488895A (en) Countermeasure data generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant