CN113641981A - Authentication method and electronic equipment

Authentication method and electronic equipment

Info

Publication number
CN113641981A
Authority
CN
China
Prior art keywords
authentication
user
electronic device
service
security
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110313313.2A
Other languages
Chinese (zh)
Inventor
方习文
马小双
吕鑫
王旭
孟阿猛
龙全君
李志超
李林斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN113641981A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/44 - Program or device authentication
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 - User authentication
    • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/45 - Structures or tools for the administration of authentication
    • G06F 21/46 - Structures or tools for the administration of authentication by designing passwords or checking the strength of passwords

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The application provides an authentication method and an electronic device. The authentication method is executed by a first electronic device and includes the following steps: the first electronic device receives an authentication request, where the authentication request is used to request authentication of a first service; the first electronic device determines a risk security level corresponding to the first service; the first electronic device then determines, according to the risk security level, an authentication mode that satisfies the risk security level; and finally the first electronic device schedules M electronic devices to authenticate the first service according to the authentication mode, where M is a positive integer. Because the first service is authenticated in a manner that satisfies the corresponding risk security level, the security of the authentication result can be improved.

Description

Authentication method and electronic equipment
Cross Reference to Related Applications
This application claims priority to Chinese patent application No. 202110162842.7, entitled "An authentication method and electronic device", filed with the Chinese Patent Office on February 5, 2021; to Chinese patent application No. 202010393895.5, entitled "An authentication method, device and readable storage medium", filed with the Chinese Patent Office on May 11, 2020; to Chinese patent application No. 202110155795.3, entitled "A cross-device authentication method and electronic device", filed with the Chinese Patent Office on February 4, 2021; to Chinese patent application No. 202110185361.8, entitled "A data association method and electronic device", filed with the Chinese Patent Office on February 10, 2021; to Chinese patent application No. 202011063402.8, entitled "A cross-device authentication method and related device", filed with the Chinese Patent Office on September 30, 2020; and to Chinese patent application No. 202011070212.9, entitled "Authentication method, device and system with multi-device cooperation", filed with the Chinese Patent Office on September 30, 2020. The entire contents of these applications are incorporated herein by reference.
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an authentication method and an electronic device.
Background
With the development of biometric recognition and graphic image recognition technologies, a user can be authenticated not only in the traditional way based on a username and a password, but also through the user's biometric information (such as a face, a fingerprint, or a voiceprint).
In practical use, when a user relies on the smart device currently being operated to perform local authentication for the service being accessed, the authentication may fail because the smart device cannot successfully collect the user's identity information locally. When the identity authentication fails, the smart device refuses to execute the service, so the user cannot access the service normally, which degrades the user experience.
Disclosure of Invention
The embodiments of the application provide an authentication method and an electronic device, which are used to implement cross-device authentication, improve the convenience of cross-device authentication, and effectively improve the user experience.
In a first aspect, the method is applied to a communication system including at least two electronic devices and can be performed by a first electronic device of the communication system, the first electronic device being connected to a second electronic device. The method includes: the first electronic device receives a first operation and, in response to the first operation, performs local authentication on the first operation; in response to the local authentication result of the first electronic device being that the authentication fails, the first electronic device initiates cross-device authentication, where the cross-device authentication is used to authenticate the first electronic device through the second electronic device; the first electronic device obtains a cross-device authentication result; and if the first electronic device determines that the cross-device authentication result is that the authentication passes, it executes the instruction corresponding to the first operation; otherwise, it does not execute the instruction corresponding to the first operation.
In the embodiments of the application, this method enables cross-device authentication, improves the convenience of cross-device authentication, and effectively improves the user experience.
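For illustration only, the following sketch shows one way the local-then-cross-device fallback described above could be structured. All type and function names (LocalAuthenticator, CrossDeviceAuthenticator, handleOperation, etc.) are hypothetical and are not defined by this disclosure.

```kotlin
// Hypothetical sketch of the local-then-cross-device authentication fallback.
// All names are illustrative; the patent does not define a concrete API.

enum class AuthResult { PASS, FAIL }

interface LocalAuthenticator {
    fun authenticate(operation: String): AuthResult
}

interface CrossDeviceAuthenticator {
    // Asks the second electronic device for an authentication result.
    fun authenticate(operation: String): AuthResult
}

class FirstDevice(
    private val local: LocalAuthenticator,
    private val crossDevice: CrossDeviceAuthenticator
) {
    // Returns true when the instruction corresponding to the operation may be executed.
    fun handleOperation(operation: String): Boolean {
        // Step 1: try local authentication first.
        if (local.authenticate(operation) == AuthResult.PASS) return true
        // Step 2: local authentication failed, so start cross-device authentication.
        return crossDevice.authenticate(operation) == AuthResult.PASS
    }
}
```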
In one possible design, the first electronic device initiating cross-device authentication includes: the first electronic device sends a first request message to the second electronic device, where the first request message is used to request the local authentication result of the second electronic device. The first electronic device obtaining the cross-device authentication result includes: the first electronic device receives a first response message from the second electronic device, where the first response message includes the cross-device authentication result, and the cross-device authentication result is the local authentication result of the second electronic device.
In the embodiments of the application, with this method the first electronic device can obtain, from the second electronic device, the second electronic device's authentication result for the first operation, which solves the problem that the authentication fails because the first electronic device cannot successfully collect the identity information of the user.
In one possible design, the first electronic device initiating cross-device authentication includes: the first electronic device sends a second request message to the second electronic device, where the second request message is used to request the identity authentication information of the second electronic device; the first electronic device then receives a second response message from the second electronic device, where the second response message includes the identity authentication information of the second electronic device. The first electronic device obtaining the cross-device authentication result includes: the first electronic device authenticates the first operation according to the identity authentication information of the second electronic device to generate the cross-device authentication result.
In the embodiments of the application, with this method the first electronic device can obtain from the second electronic device the identity authentication information used to authenticate the first operation, such as face information, and authenticate the first operation with that information, which solves the problem that the authentication fails because the first electronic device cannot successfully collect the identity information of the user.
In one possible design, preset information is stored in the first electronic device; the first electronic device can match the identity authentication information sent by the second electronic device against the preset information to generate a matching result; when the matching result is greater than a first preset threshold, the first electronic device determines that the local authentication result of the second electronic device is that the authentication passes; otherwise, the first electronic device determines that the local authentication result of the second electronic device is that the authentication fails.
In the embodiments of the application, the first electronic device can obtain the identity authentication information from the second electronic device, which solves the problem that the first electronic device cannot successfully collect the identity information of the user.
In one possible design, the first electronic device may perform local authentication on the first operation as follows: the first electronic device stores preset information; in response to the first operation, the first electronic device collects identity authentication information of the user who inputs the first operation; the first electronic device then matches the identity authentication information of the user against the preset information; and the first electronic device determines, according to the matching result, whether the local authentication result of the first electronic device passes. For example, if the first electronic device fails to collect the identity authentication information of the user who inputs the first operation, the local authentication result of the first electronic device may be that the authentication fails.
In one possible design, the first electronic device determining, according to the matching result, whether the local authentication result of the first electronic device passes includes: the first electronic device compares the matching degree in the matching result with a second preset threshold; when the matching degree is greater than the second preset threshold, the first electronic device determines that the local authentication result of the first electronic device is that the authentication passes; otherwise, the first electronic device determines that the local authentication result of the first electronic device is that the authentication fails. For example, if the first electronic device collects the identity authentication information of the user who inputs the first operation only incompletely, the matching degree may be smaller than the second preset threshold, and the local authentication result of the first electronic device is that the authentication fails.
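A minimal sketch of the threshold comparison described above is given below, assuming the matching degree is expressed as a score between 0 and 1; the names and the example threshold value are illustrative only and are not specified by the disclosure.

```kotlin
// Illustrative threshold check; the patent does not fix a concrete score scale.
data class MatchResult(val matchingDegree: Double)

fun localAuthenticationPasses(
    result: MatchResult,
    secondPresetThreshold: Double = 0.8 // example value, not specified in the patent
): Boolean {
    // Authentication passes only when the matching degree exceeds the threshold.
    return result.matchingDegree > secondPresetThreshold
}
```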
In one possible design, the identity authentication information includes any one or more of face information, fingerprint information, voiceprint information, iris information, and touch screen behavior information; the preset information comprises any one or more of face information, fingerprint information, voiceprint information, iris information and touch screen behavior information.
In one possible design, the first operation is any one of an operation of unlocking a screen, an operation of unlocking an application, or an operation acting on a function control in the application.
In one possible design, the first electronic device and the second electronic device log in the same user account, and the user account is one of an instant messaging account, an email account and a mobile phone number.
In one possible design, the method further includes: the first electronic device detects the distance between the first electronic device and the second electronic device; and when the first electronic device determines that the cross-device authentication result is that the authentication passes and detects that the distance between the first electronic device and the second electronic device is smaller than a first preset distance, the first electronic device executes the instruction corresponding to the first operation.
In one possible design, the first electronic device detects the distance between the first electronic device and the second electronic device through Bluetooth positioning technology, ultra-wideband (UWB) positioning technology, or WiFi positioning technology.
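The distance gate could be combined with the cross-device result as sketched below. The positioning mechanism is abstracted behind an interface because the disclosure only names Bluetooth, UWB, and WiFi positioning as options; the interface, function names, and the example distance are assumptions.

```kotlin
// Hypothetical distance gate: the instruction is executed only when the
// cross-device authentication passed AND the two devices are close enough.
interface DistanceProvider {
    fun distanceMeters(): Double // e.g. backed by Bluetooth, UWB, or WiFi positioning
}

fun mayExecuteInstruction(
    crossDeviceAuthPassed: Boolean,
    distanceProvider: DistanceProvider,
    firstPresetDistanceMeters: Double = 2.0 // example value, not specified in the patent
): Boolean {
    return crossDeviceAuthPassed &&
        distanceProvider.distanceMeters() < firstPresetDistanceMeters
}
```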
In one possible design, the first electronic device being connected to the second electronic device includes: the first electronic device establishes a connection with the second electronic device through a short-range wireless communication protocol; the short-range wireless communication protocol includes one or more of a WiFi communication protocol, a UWB communication protocol, a Bluetooth communication protocol, a Zigbee communication protocol, or an NFC protocol.
In one possible design, before the first electronic device initiates cross-device authentication, the method further includes: the first electronic device receives a second operation, where the second operation is used to trigger enabling of the cross-device authentication function; and in response to the second operation and in response to detecting that the local authentication result of the first electronic device is that the authentication fails, the first electronic device initiates cross-device authentication.
In one possible design, the method further includes: the first electronic device obtains security state information of the second electronic device; and when the first electronic device determines that the cross-device authentication result is that the authentication passes and determines that the security state information of the second electronic device indicates that the second electronic device is in a secure state, the first electronic device executes the instruction corresponding to the first operation. Illustratively, the first electronic device obtains a detection result from a security application of the second electronic device; if the detection result shows that the second electronic device carries no viruses such as Trojan horses and the second electronic device additionally has a secure execution chip, the second electronic device can be determined to be a secure device, and the information obtained from it is also secure.
In a second aspect, an embodiment of the application provides an authentication method that is applied to a communication system composed of at least two electronic devices and can be executed by a first electronic device of the communication system. The method includes: the first electronic device receives an authentication request, where the authentication request is used to request authentication of a first service; the first electronic device then determines an authentication mode corresponding to the first service and schedules M electronic devices to authenticate the first service according to the authentication mode, where M is a positive integer.
In this method, at least two electronic devices use one or more authentication factors to cooperatively authenticate the same service, so the security of the authentication result can be improved.
In one possible design, the first electronic device determining the authentication mode corresponding to the first service includes: the first electronic device determines a risk security level corresponding to the first service, and then determines, according to the risk security level, an authentication mode that satisfies the risk security level.
In this method, the manner in which the first service is authenticated satisfies the corresponding risk security level, so the security of the authentication result can be improved; in addition, the same service can be authenticated using authentication factors provided by at least two electronic devices, which ensures the reliability of the authentication result and raises the authentication security level of the devices. For example, for a door-opening service with a high security level, a camera and a door lock are invoked to perform face authentication and fingerprint authentication cooperatively, which ensures the reliability of the authentication result and raises the authentication security level of the devices.
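One way to read the level-to-mode mapping described above is sketched below. The three levels follow the low/medium/high split used later in this disclosure, while the concrete factor combinations per level are illustrative assumptions, not requirements of the disclosure.

```kotlin
// Illustrative mapping from risk security level to an authentication mode,
// i.e. the set of authentication factors that must be used together.
enum class RiskSecurityLevel { LOW, MEDIUM, HIGH }

enum class AuthFactor { PASSWORD, FACE, FINGERPRINT, VOICEPRINT, IRIS }

data class AuthMode(val factors: Set<AuthFactor>)

fun authModeFor(level: RiskSecurityLevel): AuthMode = when (level) {
    // Example combinations only; the patent leaves the concrete choice open.
    RiskSecurityLevel.LOW -> AuthMode(setOf(AuthFactor.VOICEPRINT))
    RiskSecurityLevel.MEDIUM -> AuthMode(setOf(AuthFactor.FACE))
    RiskSecurityLevel.HIGH -> AuthMode(setOf(AuthFactor.FACE, AuthFactor.FINGERPRINT))
}
```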
In one possible design, the first electronic device determining, according to the risk security level, an authentication mode that satisfies the risk security level includes: the first electronic device determines the available authentication factors and the available acquisition capabilities associated with the available authentication factors, and then determines an authentication mode that satisfies the risk security level based on the risk security level, the available authentication factors, and the available acquisition capabilities associated with the available authentication factors.
In this method, the available authentication factors and their associated available acquisition capabilities are screened while determining the authentication mode, which avoids acquisition failures caused by selecting an authentication factor whose acquisition means is unavailable.
In one possible design, when the authentication request includes a biometric feature, the biometric feature is recognized and the user corresponding to the biometric feature is determined; it is then determined whether that user has the authority to execute the first service. If so, the first electronic device schedules the M electronic devices to authenticate the first service according to the authentication mode; otherwise, the M electronic devices are not scheduled to authenticate the first service.
In this method, the authority of the operating user can additionally be checked, which prevents users without operation authority from accessing the first service and guarantees the access security of the first service.
In one possible design, the first electronic device determining, according to the risk security level, an authentication mode that satisfies the risk security level includes: the first electronic device determines the available authentication factors associated with the user, together with the authentication capabilities and available acquisition capabilities associated with those factors, and determines an authentication mode that satisfies the risk security level based on the risk security level, the available authentication factors, the available authentication capabilities, and the available acquisition capabilities.
In this method, the available authentication factors and their associated acquisition and authentication capabilities are screened while determining the authentication mode, which avoids acquisition failures caused by selecting an authentication factor whose acquisition means is unavailable, as well as authentication failures caused by selecting an authentication factor whose authentication means is unavailable.
In one possible design, the first electronic device determining the authentication mode according to the risk security level includes: the first electronic device determines, using a decision policy, an authentication mode that satisfies the risk security level, where the decision policy includes, but is not limited to, at least one of the following: preferentially using authentication factors that have already been collected; preferentially collecting authentication factors with acquisition capabilities that are imperceptible to the user; and preferentially collecting authentication factors with the acquisition capabilities of a near-end device of the user, where the near-end device is at least one of the M electronic devices.
In this method, the authentication mode determined by the first electronic device according to the decision policy is better suited to the current authentication scenario, which can effectively improve authentication efficiency and the user experience.
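The decision policy could be expressed as an ordering over candidate acquisition options, as in the hypothetical sketch below; the scoring weights and field names are assumptions used only to illustrate the stated preferences.

```kotlin
// Hypothetical ranking of acquisition options according to the decision policy:
// prefer already-collected factors, then imperceptible acquisition, then
// acquisition by a device near the user.
data class AcquisitionOption(
    val factor: String,
    val alreadyCollected: Boolean,
    val imperceptibleToUser: Boolean,
    val onNearEndDevice: Boolean
)

fun rankByDecisionPolicy(options: List<AcquisitionOption>): List<AcquisitionOption> =
    options.sortedByDescending { option ->
        var score = 0
        if (option.alreadyCollected) score += 4   // highest preference
        if (option.imperceptibleToUser) score += 2
        if (option.onNearEndDevice) score += 1
        score
    }
```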
In one possible design, after the first electronic device schedules the M electronic devices to authenticate the first service, the method further includes: the first electronic device obtains the authentication results of the M electronic devices from the M electronic devices, and then aggregates the authentication results of the M electronic devices to generate a final authentication result.
In this method, when the first electronic device uses at least two authentication factors to perform superimposed authentication on the same service, the final authentication result can be obtained by aggregating at least two authentication results, which improves the reliability of the authentication result.
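A simple aggregation of the M per-device results is sketched below; the disclosure does not prescribe an aggregation rule, so the "all devices must pass" rule shown here is only one possible choice.

```kotlin
// Illustrative aggregation rule: the final result passes only if every
// scheduled device reported that its authentication passed.
data class DeviceAuthResult(val deviceId: String, val passed: Boolean)

fun aggregate(results: List<DeviceAuthResult>): Boolean =
    results.isNotEmpty() && results.all { it.passed }
```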
In one possible design, after the first electronic device schedules the M electronic devices to authenticate the first service, the method further includes: if the authentication passes, the first electronic device instructs an operating device to execute the first service, where the operating device is at least one of the first electronic device and the M electronic devices.
In this method, the first electronic device instructs the operating device to execute the first service only after the authentication passes, so the security of the accessed first service can be guaranteed.
In one possible design, the first electronic device further synchronizes the resources of the M electronic devices to obtain a synchronized resource pool, where the resource pool includes the authentication factors, acquisition capabilities, and authentication capabilities of the M electronic devices;
the first electronic device determining, according to the risk security level, an authentication mode that satisfies the risk security level specifically includes:
the first electronic device determines, according to the risk security level and the resource pool, an authentication mode that satisfies the risk security level.
In this method, by maintaining the resource pool the first electronic device can learn the authentication factors, acquisition capabilities, and authentication capabilities of every electronic device in the device network, and can therefore determine an authentication mode that satisfies the risk security level. This enriches the possible authentication modes to a certain extent and makes it convenient for the first electronic device to authenticate the first service with a more suitable authentication mode.
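The synchronized resource pool could be modeled as below; the field names and the string-based capability descriptions are illustrative assumptions rather than structures defined by the disclosure.

```kotlin
// Hypothetical model of the synchronized resource pool: for each device in the
// network, the authentication factors it holds, its acquisition capabilities
// (e.g. camera, fingerprint sensor, microphone), and its authentication capabilities.
data class DeviceResources(
    val deviceId: String,
    val authenticationFactors: Set<String>,
    val acquisitionCapabilities: Set<String>,
    val authenticationCapabilities: Set<String>
)

class ResourcePool {
    private val resources = mutableMapOf<String, DeviceResources>()

    // Called whenever a device in the network reports (or updates) its resources.
    fun synchronize(device: DeviceResources) {
        resources[device.deviceId] = device
    }

    // Devices able to acquire a given factor (e.g. "face").
    fun devicesThatCanAcquire(factor: String): List<DeviceResources> =
        resources.values.filter { factor in it.acquisitionCapabilities }
}
```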
In one possible design, the first electronic device is any one of the M electronic devices, or the first electronic device does not belong to the M electronic devices.
In one possible design, the M electronic devices are all connected to the same local area network, and/or the M electronic devices are all pre-bound with the same user account.
In one possible design, the first service is a door-opening service, and the risk security level corresponding to the door-opening service is a high risk security level;
or the operation that triggers access to the first service is a first operation on a smart home device, the first operation does not involve personal privacy data, and the risk security level corresponding to the first service is a low risk security level;
or the operation that triggers access to the first service is a second operation on a smart home device, the second operation involves personal privacy data but the associated risk is moderate, and the risk security level corresponding to the first service is a medium risk security level;
or the operation that triggers access to the first service is a third operation on a smart home device, the third operation involves personal privacy data and the associated risk is high, and the risk security level corresponding to the first service is a high risk security level. One way to organize these examples is sketched after this list.
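The examples above could be captured in a lookup such as the following hypothetical sketch; the operation categories and flags are illustrative placeholders, and the RiskSecurityLevel enum mirrors the earlier sketch.

```kotlin
// Illustrative mapping of the example services/operations to risk security levels.
enum class RiskSecurityLevel { LOW, MEDIUM, HIGH }

sealed class ServiceKind {
    object DoorOpening : ServiceKind()
    data class SmartHomeOperation(
        val involvesPrivacyData: Boolean,
        val highRisk: Boolean
    ) : ServiceKind()
}

fun riskLevelOf(service: ServiceKind): RiskSecurityLevel = when (service) {
    is ServiceKind.DoorOpening -> RiskSecurityLevel.HIGH
    is ServiceKind.SmartHomeOperation -> when {
        !service.involvesPrivacyData -> RiskSecurityLevel.LOW
        service.highRisk -> RiskSecurityLevel.HIGH
        else -> RiskSecurityLevel.MEDIUM
    }
}
```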
In one possible design, the first electronic device receiving the authentication request includes: the first electronic device receives a target operation, where the target operation is used to trigger generation of the authentication request. The first electronic device determining the authentication mode corresponding to the first service includes: the first electronic device determines a target security value required for executing the target operation, where the target operation is used to trigger execution of the first service; and the first electronic device determines M1 authentication devices, where M1 is a positive integer, the M1 authentication devices are devices having the capability to authenticate user information, and the M1 authentication devices are included in the M electronic devices. The first electronic device scheduling the M electronic devices to authenticate the first service according to the authentication mode includes: the first electronic device obtains the authentication result of at least one of the M1 authentication devices; the first electronic device determines a total authentication security value according to the correspondence between the authentication modes and the authentication security values of the at least one authentication device and according to the authentication results; and if the total authentication security value is not less than the target security value, the authentication passes. When the authentication passes, the method further includes: triggering the operating device to execute the target operation.
In one possible embodiment, the operating device is configured to receive a target operation request that requests execution of the target operation. Because the total authentication security value is determined from the authentication result of at least one of the M1 authentication devices and from the correspondence between the authentication modes and the authentication security values of that at least one authentication device, and the operating device is triggered to execute the target operation only when the total authentication security value is not less than the target security value required for executing the target operation, the identity authentication level required by the target operation is provided.
In one possible embodiment, when M1 equals 1, the apparatus receives a first authentication request and determines the target security value required to perform the target operation; the apparatus determines an authentication device and determines the authentication result of that authentication device; the apparatus then determines the authentication security value corresponding to the authentication device according to the correspondence between the authentication mode and the authentication security value of the authentication device and according to the authentication result. If the authentication security value corresponding to the authentication device is not less than the target security value, the apparatus triggers the operating device to execute the target operation. When M1 equals 1, the authentication security value corresponding to the authentication device determined by the apparatus is the total authentication security value mentioned in the foregoing method embodiment of the first aspect.
In one possible embodiment, the apparatus determining the M1 authentication devices includes: determining an authentication policy group, where the authentication policy group includes one or more authentication policies, the total authentication security value corresponding to all of the authentication policies included in the group is not less than the target security value, and the authentication device corresponding to each authentication policy in the group is determined as one of the M1 authentication devices. Here the M1 authentication devices include a first authentication device; in other words, one of the M1 authentication devices is referred to as the first authentication device. The authentication policy group includes a first authentication policy, the first authentication policy includes the first authentication device and a first authentication mode corresponding to the first authentication device, and the first authentication policy corresponds to a first authentication security value. In this way, the apparatus can authenticate the user information by determining a single authentication policy, or determine a combination of several authentication policies that cooperatively authenticate the user information, thereby providing the identity authentication level required by the target operation.
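The policy-group selection could be sketched as below. Greedily adding the strongest remaining policies until their combined security value reaches the target is an assumption about the selection strategy, which the disclosure leaves open, as is the use of summation to combine security values.

```kotlin
// Hypothetical selection of an authentication policy group: pick a set of
// (device, authentication mode) policies whose combined security value is at
// least the target security value. Greedy selection is an illustrative choice.
data class AuthPolicy(
    val deviceId: String,
    val authMode: String,          // e.g. "face", "fingerprint"
    val securityValue: Int
)

fun selectPolicyGroup(
    candidates: List<AuthPolicy>,
    targetSecurityValue: Int
): List<AuthPolicy>? {
    val group = mutableListOf<AuthPolicy>()
    var total = 0
    // Greedily add the strongest remaining policies until the target is met.
    for (policy in candidates.sortedByDescending { it.securityValue }) {
        group += policy
        total += policy.securityValue
        if (total >= targetSecurityValue) return group
    }
    return null // no reachable combination satisfies the target
}
```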
In one possible embodiment, in the process of determining the M1 authentication devices, the apparatus determines that among the authentication devices currently in a communication-reachable state there is one authentication device for which, when it authenticates in a certain authentication mode, the authentication security value corresponding to that authentication mode is greater than the target security value. In this case that authentication device alone may be determined as the M1 authentication devices, that is, M1 is 1 in this embodiment, and the authentication policy group determined by the apparatus can be said to include only one authentication policy.
In one possible implementation, the authentication policy group further satisfies that the authentication mode corresponding to each authentication policy included in the group is a biometric authentication mode. In this way the user can complete authentication purely through biometric authentication, which spares the cumbersome process of entering a password.
In one possible embodiment, the authentication security value of at least one of the authentication policies included in the authentication policy group is lower than the target security value, and/or the authentication policy group further satisfies that the authentication mode corresponding to each of its authentication policies is a biometric authentication mode. When the user completes authentication purely through biometric authentication, the cumbersome process of entering a password is avoided. When the authentication security value corresponding to at least one authentication policy is lower than the target security value, authentication devices with relatively low authentication capability can be combined to complete an authentication that requires a relatively high authentication level; thus, when a user needs to perform an operation with relatively high identity authentication requirements, authentication devices with relatively low authentication capability can also be used for cooperative authentication, which improves the security of the identity authentication.
In one possible embodiment, the authentication security value of every authentication policy included in the authentication policy group is lower than the target security value, and/or the authentication policy group further satisfies that the authentication mode corresponding to each of its authentication policies is a biometric authentication mode. When the user completes authentication purely through biometric authentication, the cumbersome process of entering a password is avoided. When the authentication security value corresponding to each authentication policy is lower than the target security value, authentication devices that each have only relatively low authentication capability can still jointly complete an authentication that requires a relatively high authentication level; thus, when a user needs to perform an operation with relatively high identity authentication requirements, cooperative authentication can be performed with authentication devices of relatively low authentication capability, which improves the security of the identity authentication.
In one possible embodiment, the target operation and the authentication policy group have a preset correspondence, and/or the authentication policy group further satisfies that the authentication mode corresponding to each of its authentication policies is a biometric authentication mode. When the user completes authentication purely through biometric authentication, the cumbersome process of entering a password is avoided. When the target operation and the authentication policy group have a preset correspondence, the user can customize authentication policies for particular operations, which improves the flexibility of the scheme.
In one possible embodiment, the authentication security value of at least one of the authentication policies included in the authentication policy group is not lower than the target security value, and/or the authentication policy group further satisfies that the authentication mode corresponding to each of its authentication policies is a biometric authentication mode. When the user completes authentication purely through biometric authentication, the cumbersome process of entering a password is avoided. When the authentication security value corresponding to at least one of the authentication policies included in the group is not lower than the target security value, the probability that the total authentication security value is not lower than the target security value is increased.
To increase the flexibility of the scheme, in another possible embodiment, the apparatus determining the M1 authentication devices includes: the apparatus determines, as the M1 authentication devices, the authentication devices that satisfy a preset condition, namely that the apparatus and the first authentication device are in a communication-reachable state. This also simplifies the apparatus's scheme for determining the M1 authentication devices. In one possible embodiment, in the process of determining the M1 authentication devices, the apparatus determines that only one authentication device is currently in a communication-reachable state, and the apparatus then determines that authentication device as the M1 authentication devices; in this embodiment M1 is 1.
In one possible embodiment, the apparatus is the operating device, or the apparatus is a router, or the apparatus is one of the M1 authentication devices, or the operating device is one of the M1 authentication devices and the apparatus is one of the M1 authentication devices other than the operating device, or the apparatus is located at a server.
In one possible embodiment, when the apparatus is one of the M1 authentication devices, determining the authentication result of at least one of the M1 authentication devices includes: the apparatus authenticates the user information and determines the authentication result corresponding to the apparatus itself; the apparatus receives a second authentication response returned by each of the M1 authentication devices other than the apparatus, and determines the authentication result of at least one of the M1 authentication devices according to the second authentication responses. The second authentication response is sent to the apparatus after the at least one authentication device completes authentication of the user information, in which case the authentication result may be authentication success or authentication failure; or the second authentication response is sent to the apparatus after the at least one authentication device has failed to complete authentication of the user information within a predetermined time, in which case the authentication result is authentication failure. For example, the predetermined time may be a period starting from receipt of the second authentication request, such as 10 seconds or 2 minutes after the second authentication request is received. The second authentication response returned by the first authentication device indicates the authentication result of the first authentication device. That is, the apparatus performing the authentication method may itself be one of the M1 authentication devices, in which case the apparatus also authenticates the user information.
In one possible embodiment, when the apparatus is not one of the M1 authentication devices, for example when it is a server, a router, or an operating device without authentication capability, the apparatus receives a second authentication response returned by each of the M1 authentication devices and determines the authentication result of at least one of the M1 authentication devices according to the second authentication responses. The second authentication response is sent to the apparatus after the at least one authentication device completes authentication of the user information, in which case the authentication result may be authentication success or authentication failure; or the second authentication response is sent to the apparatus after the at least one authentication device has failed to complete authentication of the user information within a predetermined time, in which case the authentication result is authentication failure. The second authentication response returned by the first authentication device indicates the authentication result of the first authentication device.
In one possible embodiment, when a first condition is satisfied, that is, when the apparatus is the operating device, the apparatus triggering the operating device to perform the target operation includes: the apparatus performs the target operation itself.
In another possible embodiment, when the first condition is not satisfied and the apparatus is not the operating device, for example when the apparatus is a router, one of the M1 authentication devices, one of the M1 authentication devices other than the operating device (the operating device itself being one of the M1 authentication devices), or a server, the apparatus triggering the operating device to perform the target operation includes: the apparatus sends a first authentication success response to the operating device, where the first authentication success response indicates that the apparatus has successfully authenticated the target operation. After receiving the first authentication success response, the operating device may perform the target operation.
In one possible embodiment, after the target security value required for performing the target operation is determined and before the M1 authentication devices are determined, the method further includes: when the operating device has authentication capability, the apparatus determines that the authentication security value of the operating device is smaller than the target security value. In this way, the help of other authentication devices is sought only when the authentication capability of the operating device is insufficient, which improves the rationality of the scheme.
In another possible embodiment, after the apparatus determines the target security value required to perform the target operation, the method further includes: when the operating device has authentication capability, the apparatus determines that the authentication security value of the operating device is not less than the target security value. If the first condition is satisfied, the method further includes: the apparatus authenticates the user information, and the target operation is performed if the apparatus successfully authenticates the user information. If the first condition is not satisfied, the method further includes: the apparatus sends a third authentication request to the operating device, where the third authentication request requests that the operating device authenticate the user information and perform the target operation if the user information is successfully authenticated. In this way, when the authentication capability of the operating device is sufficient, the operating device performs the authentication, which improves the rationality of the scheme.
In one possible embodiment, the apparatus determining the target security value required to perform the target operation includes: according to a preset correspondence between operations and security values, the apparatus takes the security value corresponding to the target operation as the target security value. In this way, the correspondence between operations and security values can be preset, thereby providing authentication capability of the corresponding security level for each operation.
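A minimal sketch of such a preset correspondence is shown below; the operation names and numeric security values are invented examples, not values taken from the disclosure.

```kotlin
// Illustrative preset correspondence between operations and required security values.
val presetSecurityValues: Map<String, Int> = mapOf(
    "unlock_screen" to 2,   // example entries; not from the patent
    "open_door" to 5,
    "make_payment" to 8
)

// Returns the target security value for an operation, or null if none is preset.
fun targetSecurityValueFor(operation: String): Int? = presetSecurityValues[operation]
```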
In a possible embodiment, the authentication security value of an authentication device has an association relationship with the root key storage environment of the authentication device and the authentication method adopted by the authentication device. In this way, the authentication security value of the authentication mode of an authentication device can be determined according to the root key storage environment and the authentication mode, and the authentication security value can reflect the authentication capability of the authentication device more comprehensively.
In one possible embodiment, before the total authentication security value is determined according to the correspondence between the authentication modes and the authentication security values of the at least one authentication device and according to the authentication results, the method further includes: determining the root key storage environment score corresponding to the first authentication device according to a preset correspondence between root key storage environments and root key storage environment scores and according to the root key storage environment of the first authentication device; determining the authentication mode score corresponding to the authentication mode of the first authentication device according to a preset correspondence between authentication modes and authentication mode scores and according to the authentication mode of the first authentication device; and calculating the authentication security value of the authentication mode of the first authentication device according to a first calculation rule, the root key storage environment score of the first authentication device, and the authentication mode score corresponding to the authentication mode. In this way, the authentication security value of an authentication device's authentication mode is determined from both the root key storage environment and the authentication mode, so the authentication security value reflects the authentication capability of the authentication device more comprehensively.
In one possible embodiment, the apparatus determining the total authentication security value according to the correspondence between the authentication modes and the authentication security values of the at least one authentication device and according to the authentication results includes: when the authentication result corresponding to the first authentication device is authentication success, determining the authentication security value of the authentication mode of the first authentication device according to the correspondence between the authentication mode of the first authentication device and the authentication security value; when the authentication result corresponding to the first authentication device is authentication failure, determining that the authentication security value of the authentication mode of the first authentication device is 0; and calculating the total authentication security value from the authentication security values of the authentication modes of the M1 authentication devices according to a second calculation rule. In this manner, the total authentication security value reflects the total authentication capability of the M1 authentication devices currently performing authentication, so whether to permit the target operation can be decided according to that total authentication capability.
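The disclosure leaves the first and second calculation rules open; the sketch below assumes, purely for illustration, that the first rule takes the minimum of the two scores and that the second rule sums the per-device values (with failed devices contributing 0, as stated above). The type and field names are hypothetical.

```kotlin
// Hypothetical realization of the two calculation rules.

// Per-device scores derived from the preset correspondence tables.
data class AuthDeviceProfile(
    val rootKeyStorageEnvScore: Int, // e.g. TEE-backed storage scores higher
    val authModeScore: Int,          // e.g. 3D face > 2D face > PIN
    val authenticationSucceeded: Boolean
)

// First calculation rule (assumed): a chain is only as strong as its weaker part.
fun authenticationSecurityValue(profile: AuthDeviceProfile): Int =
    minOf(profile.rootKeyStorageEnvScore, profile.authModeScore)

// Second calculation rule (assumed): failed devices contribute 0, the rest add up.
fun totalAuthenticationSecurityValue(profiles: List<AuthDeviceProfile>): Int =
    profiles.sumOf { if (it.authenticationSucceeded) authenticationSecurityValue(it) else 0 }
```

A caller would then compare the total against the target security value before triggering the operating device.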
In one possible embodiment, after the apparatus determines the M1 authentication devices and before the apparatus determines the authentication result of at least one of the M1 authentication devices, the method further includes: when the apparatus is not an authentication device, for example an operating device, a server, or a router without authentication capability, the apparatus sends to each of the M1 authentication devices a second authentication request for requesting that the authentication device authenticate the user information; alternatively, when the apparatus is one of the M1 authentication devices, the apparatus sends the second authentication request to each of the M1 authentication devices except itself.
In one possible embodiment, the M1 authentication devices include a first authentication device, and the apparatus sends a second authentication request to each of the M1 authentication devices, including: the device sends the second authentication request to the first authentication device, wherein the second authentication request carries indication information for indicating the first authentication mode of the first authentication device. In this way, the first authentication device may authenticate the user information in the first authentication manner indicated in the second authentication request.
In one possible embodiment, before the apparatus sends the second authentication request to the first authentication device, the apparatus determines the first authentication mode of the first authentication device in any one of the following ways (the last of which is sketched after this list):
determining all or part of the authentication modes supported by the first authentication device as the first authentication mode of the first authentication device;
determining all or part of the biometric authentication modes supported by the first authentication device as the first authentication mode of the first authentication device, so that a cumbersome password input process can be avoided;
determining the authentication mode with the highest authentication security value among all authentication modes supported by the first authentication device as the first authentication mode of the first authentication device, so that the security level of the authentication of the user information can be improved; or
determining the authentication mode with the highest authentication security value among all biometric authentication modes supported by the first authentication device as the first authentication mode of the first authentication device, so that a cumbersome password input process can be avoided while the security level of the authentication of the user information is improved.
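An illustrative pick of the last option listed above is sketched here: among the biometric authentication modes supported by the first authentication device, choose the one with the highest authentication security value. The data structure is an assumption.

```kotlin
// Hypothetical selection of the first authentication mode for the first
// authentication device: highest-security biometric mode, if any is supported.
data class SupportedMode(
    val name: String,        // e.g. "face", "fingerprint", "pin"
    val isBiometric: Boolean,
    val securityValue: Int
)

fun chooseFirstAuthMode(supported: List<SupportedMode>): SupportedMode? =
    supported.filter { it.isBiometric }.maxByOrNull { it.securityValue }
```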
In one possible embodiment, to improve the flexibility of the scheme, before the apparatus determines the authentication result of at least one of the M1 authentication devices, the method further includes: the apparatus receives a first message sent by the first authentication device, where the first message carries indication information indicating the authentication modes supported by the first authentication device; or the apparatus sends a query request to the first authentication device, where the query request is used to query the authentication modes supported by the first authentication device, and the apparatus receives a query response returned by the first authentication device, where the query response carries indication information indicating the authentication modes supported by the first authentication device.
In a possible embodiment, the apparatus is a router, an operating device, one of the M1 authentication devices, or the apparatus is located at a server. When the device is the router, the operating device and the M1 authentication devices are in the same local area network, so that data among the device, the operating device and the authentication devices can be transmitted through the same local area network, and the data transmission speed can be improved.
In one possible design, the first electronic device receiving the authentication request includes: the first electronic device receives a first operation, where the first operation is used to trigger generation of the authentication request. Determining the authentication mode corresponding to the first service includes: determining a first authentication mode corresponding to the first service.
The method further includes: in response to the first operation received according to the first authentication mode, the first electronic device detects whether the local authentication result of the first electronic device passes; and in response to detecting that the local authentication result of the first electronic device fails, the first electronic device determining the authentication mode corresponding to the first service further includes: determining a second authentication mode corresponding to the first service.
The method further includes: the first electronic device sends, according to the second authentication mode, a request to the second electronic device for the local authentication result of the second electronic device; the first electronic device receives the local authentication result of the second electronic device sent by the second electronic device; in response to receiving the local authentication result of the second electronic device, the first electronic device detects whether the local authentication result of the second electronic device passes; and in response to detecting that the local authentication result of the second electronic device passes, the first electronic device executes the instruction corresponding to the first operation.
In this way, identity authentication on the first electronic device can be completed through the local authentication result of the second electronic device, which effectively improves the convenience of cross-device authentication and creates a better user experience.
In one possible implementation, after the first electronic device receives the first operation, the method further includes: the first electronic device detects whether the first operation triggers a locked low-risk application; and in response to detecting that the first operation triggers a locked low-risk application, the first electronic device detects whether the local authentication result of the first electronic device passes. In this way, for a locked low-risk application, identity authentication on the first electronic device can be completed through the local authentication result of the second electronic device, which effectively improves the convenience of controlling locked low-risk applications.
In one possible implementation, when the first operation is a first voice instruction, before the first electronic device detects whether its local authentication result passes, the method further includes: the first electronic device detects whether the voiceprint feature in the first voice instruction matches the voiceprint feature of a preset user; and in response to detecting that the voiceprint feature in the first voice instruction matches the voiceprint feature of the preset user, the first electronic device detects whether its local authentication result passes. In this way, the first electronic device can implement voice control based on the local authentication result of the second electronic device, which improves the convenience of voice control.
In one possible implementation manner, the performing, by the first electronic device, the local persistent authentication and generating the local authentication result of the first electronic device are performed while the first electronic device receives the first operation or after the first electronic device receives the first operation, where the performing, by the first electronic device, the local persistent authentication includes at least one of: face identification authentication, iris identification authentication and touch screen behavior identification authentication; the local authentication result of the first electronic device may characterize whether the first electronic device authenticates the identity of the user.
In a possible implementation, while or before the first electronic device receives the first operation, the second electronic device performs local continuous authentication and generates the local authentication result of the second electronic device, where the local continuous authentication performed by the second electronic device includes at least one of: face recognition authentication, iris recognition authentication, and touchscreen behavior recognition authentication. The local authentication result of the second electronic device may indicate whether the second electronic device has authenticated the identity of the user.
In a possible implementation, before the first electronic device executes the instruction corresponding to the first operation, the method further includes: the first electronic device detects the distance between the first electronic device and the second electronic device; and when that distance is detected to be less than a first preset distance, the first electronic device executes the instruction corresponding to the first operation. In this way, cross-device authentication improves the convenience of identity authentication, while the distance limit between the first electronic device and the second electronic device safeguards the security of cross-device authentication.
In a possible implementation, before the first electronic device executes the instruction corresponding to the first operation, the method further includes: the first electronic device detects whether it is in a secure state; and when the first electronic device is detected to be in the secure state, the first electronic device executes the instruction corresponding to the first operation. In this way, cross-device authentication improves the convenience of identity authentication, while confirming that the device is in a secure state safeguards the security of cross-device authentication.
In a possible implementation, before the first electronic device executes the instruction corresponding to the first operation, the method further includes: the first electronic device detects whether the priority of the local continuous authentication of the second electronic device is lower than that of the local continuous authentication of the first electronic device; and in response to detecting that the priority of the local continuous authentication of the second electronic device is not lower than that of the first electronic device, the first electronic device executes the instruction corresponding to the first operation. In this way, cross-device authentication improves the convenience of identity authentication, while confirming that the priority of the second electronic device's local continuous authentication is not lower than that of the first electronic device safeguards the security of cross-device authentication.
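As a rough illustration of the three guard conditions above (distance, secure state, and continuous-authentication priority), the following sketch combines them into a single check; the parameter names and the 2-meter default are assumptions made for the example, not values from the embodiments.

```python
def may_execute(distance_m, first_device_secure,
                first_auth_priority, second_auth_priority,
                first_preset_distance_m=2.0):
    """Return True only when every guard condition for cross-device auth holds."""
    return (distance_m < first_preset_distance_m            # distance check
            and first_device_secure                          # secure-state check
            and second_auth_priority >= first_auth_priority)  # priority check


# Example: devices are 1.2 m apart, the first device is secure, and the second
# device's continuous authentication has priority 3 versus the first device's 2.
print(may_execute(1.2, True, first_auth_priority=2, second_auth_priority=3))  # True
```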
In one possible design, the first electronic device receives an authentication request, including:
the first electronic device receives a target operation acting on a first interface of the first electronic device, where the target operation is used to trigger access to the first service, and the first service is associated with the second electronic device. That the first electronic device determines the authentication mode corresponding to the first service includes: the first electronic device obtains a target authentication mode corresponding to the first service. That the first electronic device schedules the M electronic devices to authenticate the first service according to the authentication mode includes: the first electronic device collects authentication information according to the target authentication mode.
The first electronic device sends an authentication request to the second electronic device, where the authentication request includes the authentication information and is used to request the second electronic device to authenticate the first service, and the second electronic device is one of the M electronic devices.
The first service may be a service in the second electronic device, or the first service is associated with sensitive data of the second electronic device, or the first service is a service of the second electronic device.
With this method, the first electronic device collects the authentication information and the second electronic device authenticates it, so the authentication information is collected across devices. This improves the convenience of the authentication operation, avoids requiring the user to operate on multiple electronic devices, and improves the user experience. In addition, because the first electronic device and the second electronic device cooperatively authenticate the first service, the security of the authentication result can be improved, which addresses the problem that the authentication result of a single electronic device has low security due to hardware limitations or insufficient authentication and collection capabilities.
In one possible design, the method further includes: the first electronic device receives the authentication result sent by the second electronic device, and then responds to the target operation according to the authentication result. For example, in a non-multi-screen-collaboration scenario, the first electronic device may respond to a payment operation with payment success or payment failure according to the authentication result of the second electronic device.
In one possible design, the method further includes: the first electronic device receives the authentication result sent by the second electronic device; and in response to the authentication result, the first electronic device switches from displaying the first interface to displaying a second interface, where the second interface includes the result of triggering the first service. For example, in a multi-screen collaboration scenario, after the second electronic device authenticates the payment operation, it switches its interface and synchronizes it to the first electronic device, so the first electronic device also switches its displayed interface.
Here, the multi-screen collaboration scenario means that the first electronic device and the second electronic device perform multi-screen collaboration, the target operation acts on a first object in a first window of the first interface, the first window is a display window of the second electronic device, and the first service is a service of the second electronic device.
In one possible design, the acquiring, by the first electronic device, a target authentication manner corresponding to the first service includes:
the first electronic device obtains the target authentication mode corresponding to the first service locally. It should be understood that, before this, the first electronic device needs to synchronize resources with the second electronic device; that is, the two devices synchronize the authentication modes corresponding to different services or different operations, so that the first electronic device can determine the target authentication mode corresponding to the first service. If the first electronic device has a secure execution environment, this approach can improve the security of the authentication result to a certain extent.
In one possible design, the first electronic device may obtain the target authentication mode corresponding to the first service from the second electronic device; for example, the first electronic device sends a request message to the second electronic device, and the second electronic device then sends the authentication mode to the first electronic device. In this method, the target authentication mode corresponding to the first service is decided by the first electronic device. Optionally, the first electronic device may also synchronize resources with the second electronic device, that is, the two devices synchronize the authentication modes corresponding to different services or different operations, so that the first electronic device determines the target authentication mode corresponding to the first service; for example, when the first electronic device has the collection capability, the first electronic device is preferentially used for collection.
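The two designs above differ only in where the target authentication mode comes from. Below is a minimal sketch of the lookup, assuming a locally synchronized table with a fallback query to the second device; both the table layout and the function names are invented for illustration.

```python
local_auth_table = {}   # service id -> authentication mode, filled by resource sync


def get_target_auth_mode(service_id, query_second_device):
    """Return the target authentication mode for a service."""
    mode = local_auth_table.get(service_id)
    if mode is None:
        # Fall back to asking the second electronic device for the mode.
        mode = query_second_device(service_id)
        local_auth_table[service_id] = mode  # cache it for later operations
    return mode


# Example with a stand-in for the second device's reply.
print(get_target_auth_mode("payment", lambda _sid: "fingerprint"))  # fingerprint
```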
In addition, an authentication method may be applied to a second electronic device, where the first electronic device and the second electronic device may be connected in a wired or wireless manner, and the method includes:
the second electronic device receives a request message from the first electronic device, where the request message is used to request the target authentication mode corresponding to the first service, and the first service is associated with the second electronic device; and the second electronic device sends the target authentication mode corresponding to the first service to the first electronic device. The second electronic device may also receive an authentication request from the first electronic device, where the authentication request includes authentication information, and then authenticate the first service according to the authentication information to generate an authentication result.
In this method, the second electronic device decides the target authentication mode corresponding to the first service and authenticates using the authentication information obtained from the first electronic device, so cooperative authentication between the first electronic device and the second electronic device is completed. This can improve the security and reliability of the authentication result and addresses the problem that the authentication result of a single device has low security due to hardware limitations or insufficient authentication and collection capabilities.
In one possible design, the method further includes: the second electronic device sends the authentication result to the first electronic device, where the authentication result is used to trigger the first electronic device to respond to the target operation that triggers the first service.
In one possible design, the second electronic device may, according to the authentication result, switch from the first interface to a second interface in response to the target operation that triggers the first service, where the second interface includes the result of triggering the first service; the second interface is then synchronized to the first electronic device.
In one possible design, the first service is associated with a second electronic device, including:
the first service is a service in the second electronic device, or the first service is associated with sensitive data of the second electronic device, or the first service is a service of the second electronic device.
In a possible design, the first electronic device and the second electronic device perform multi-screen collaboration, and the target operation for triggering access to the first service acts on a first object in a first window, where the first window is a display window of the second electronic device, and the first service is a service of the second electronic device.
In one possible design, an embodiment of the present application provides a cross-device authentication method, which may be applied to a first electronic device, where the first electronic device is connected to a second electronic device, and the method includes:
The first electronic device receives a target operation performed by a user on the first electronic device, where the target operation is used to trigger access to the first service, and the first service is associated with the second electronic device; the first electronic device determines, according to a resource pool, the authentication mode corresponding to the first service, where the resource pool includes the authentication modes corresponding to operations of the second electronic device and templates of authentication information; the first electronic device collects authentication information according to the authentication mode; and the first service is then authenticated using the authentication information to generate an authentication result.
Optionally, the first electronic device may also send the authentication result to the second electronic device for the second electronic device to respond.
With this method, the first electronic device collects the authentication information and performs authentication using the authentication mode obtained from the second electronic device, which improves the convenience of the authentication operation, avoids requiring the user to operate on multiple electronic devices, and improves the user experience. In addition, because the first electronic device and the second electronic device cooperatively authenticate the first service, the security of the authentication result can be improved, which addresses the problem that the authentication result of a single electronic device has low security due to hardware limitations or insufficient authentication and collection capabilities.
In one possible design, before the first electronic device receives the target operation of the user on the first electronic device, the method further includes:
the first electronic device synchronizes resources from the second electronic device and generates the resource pool, where the resource pool further includes the authentication modes corresponding to operations of the first electronic device and templates of authentication information. In this way, the first electronic device can determine the authentication mode from the resource pool and authenticate the collected authentication information against the templates of authentication information.
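Purely as an illustration of what such a resource pool might hold, the following sketch uses a plain dictionary keyed by operation; the field names and example entries are assumptions, not the embodiments' data format.

```python
resource_pool = {
    # operation identifier -> how to authenticate it and with which template
    "pay_in_remote_app": {"auth_mode": "fingerprint",
                          "template": "<template synced from the second device>",
                          "owner": "second_device"},
    "unlock_screen":     {"auth_mode": "face",
                          "template": "<template enrolled on the first device>",
                          "owner": "first_device"},
}


def auth_mode_for(operation):
    """Look up the authentication mode recorded for an operation."""
    return resource_pool[operation]["auth_mode"]


print(auth_mode_for("pay_in_remote_app"))  # fingerprint
```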
In one possible design, before the first electronic device receives the authentication request, the method further includes:
the first electronic device receives an operation of a user, where the operation includes feature information entered by the user and the feature information is associated with a user identifier of the user; the first electronic device matches the feature information against a first feature template in the first electronic device to generate a first matching result; the first electronic device then sends the feature information to the second electronic device; the second electronic device matches the feature information against a second feature template in the second electronic device to obtain a second matching result, and the first electronic device obtains the second matching result from the second electronic device. When both the first matching result and the second matching result indicate a successful match, the first electronic device establishes an association among the first feature template, the second feature template, and the user identifier.
This method associates the feature templates of the same user across multiple electronic devices, so that in a device replacement scenario the user can obtain, according to the association, the feature templates belonging to the same user on each device, and then distribute the feature templates from each old device to the new electronic device accordingly, achieving one-key migration of feature templates between old and new devices.
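The following sketch outlines the association step under the assumption that each device exposes a matcher returning a success flag and a template identifier; all function and field names are hypothetical.

```python
def associate_templates(feature_info, user_id,
                        match_locally, match_on_second_device, records):
    """Record the association only when both devices match the same feature info."""
    first_ok, first_template_id = match_locally(feature_info)
    second_ok, second_template_id = match_on_second_device(feature_info)
    if first_ok and second_ok:
        # Both templates describe the same user, so link them to the user id.
        records.append({"user_id": user_id,
                        "first_template": first_template_id,
                        "second_template": second_template_id})
        return True
    return False


# Example with stand-in matchers that both succeed.
records = []
associate_templates("fingerprint-sample", "user-1",
                    match_locally=lambda f: (True, "tpl-local-7"),
                    match_on_second_device=lambda f: (True, "tpl-remote-3"),
                    records=records)
print(records)
```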
In one possible design, the method further includes: the first electronic device obtains usage constraints of the first feature template and the second feature template, and establishes an association among the first feature template, the second feature template, and the usage constraints. The usage constraints may include at least one of the following: (1) constraints on the usage permission of a feature template; (2) constraints on the device environment in which a feature template is applicable; (3) constraints on the services to which a feature template is applicable; and (4) constraints on the security level of a feature template. For example, the user configures the usage constraints of the first feature template and the second feature template on the first electronic device, such as the applicable services, so that the first electronic device can obtain the usage constraints according to the user's configuration. For another example, the first electronic device may obtain the usage constraints of the first feature template and the second feature template from a cloud server or another device. By associating the feature templates with the usage constraints, the usage scenarios of the feature templates can be restricted, preventing the feature templates from being misused.
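For illustration, the four kinds of usage constraints listed above could be represented and checked as in the following sketch; the class name, fields, and example values are assumptions made for the example.

```python
from dataclasses import dataclass, field


@dataclass
class UsageConstraint:
    allowed_permissions: set = field(default_factory=set)   # (1) usage permissions
    allowed_environments: set = field(default_factory=set)  # (2) device environments
    allowed_services: set = field(default_factory=set)      # (3) applicable services
    min_security_level: int = 0                             # (4) required security level

    def permits(self, permission, environment, service, security_level):
        """Check whether a proposed use of the template satisfies every constraint."""
        return (permission in self.allowed_permissions
                and environment in self.allowed_environments
                and service in self.allowed_services
                and security_level >= self.min_security_level)


# Example: a template pair usable only for unlocking on home devices.
constraint = UsageConstraint({"unlock"}, {"home"}, {"screen_unlock"}, 2)
print(constraint.permits("unlock", "home", "screen_unlock", 3))  # True
```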
In one possible design, the method further includes: the first electronic device may share first record information and/or second record information with the second electronic device, where the first record information includes the association among the first feature template, the second feature template, and the user identifier, and the second record information includes the association among the first feature template, the second feature template, and the usage constraints. In this way, other electronic devices can provide personalized services for the user according to the record information, such as cross-device authentication or cooperative device authentication.
In one possible design, the feature information includes user secret data and/or biometric data, the first feature template includes a template of user secret data and/or a template of biometric data, and the second feature template includes a template of the user secret data and/or a template of the biometric data.
In a possible design, the second electronic device and the first electronic device are connected to the same local area network, and/or the second electronic device and the first electronic device are pre-bound with the same user account.
In addition, an embodiment of this application provides a data association method, which may be applied to a first electronic device and includes: the first electronic device receives a first operation of a user, where the first operation is used to request enrollment of a first feature template; in response to the first operation, the first electronic device authenticates the identity of the user using an existing second feature template, where the second feature template is associated with a user identifier of the user; and after the authentication passes, the first electronic device receives the first feature template entered by the user and establishes an association between the first feature template and the user identifier.
In this method, because the second feature template is bound to the user identifier, the user's identity can be authenticated using the second feature template, and when the authentication passes, the association between the first feature template and the user identifier can be established, so that the features belonging to the same user on the electronic device are associated together. The association can be shared with other electronic devices so that they can provide personalized services for the user according to the record information, such as cross-device authentication or cooperative device authentication.
In one possible design, the electronic device receives a second operation of the user, where the second operation is used to trigger association of an already-enrolled third feature template with the user identifier; in response to the second operation, the electronic device establishes an association between the third feature template and the user identifier. With this method, a user can manually associate a feature template that has already been enrolled on the electronic device.
In a possible design, before the electronic device receives the second operation of the user, if the user cannot easily tell from the naming of the feature templates which ones belong to the same user, the method may further include a process of identifying the feature templates. Specifically, the electronic device may further receive feature information entered by the user, match that feature information against at least one feature template in the first electronic device, and determine the third feature template that matches the features entered by the user.
In one possible design, the method further includes: the electronic device obtains the usage constraints corresponding to the third feature template and establishes an association between the third feature template and the usage constraints. The usage constraints may include at least one of the following: (1) constraints on the usage permission of the feature template; (2) constraints on the device environment in which the feature template is applicable; (3) constraints on the services to which the feature template is applicable; and (4) constraints on the security level of the feature template.
In this method, after the association between the feature template and the usage constraints is established, the user can share the record information including the usage constraints with other trusted devices, such as other electronic devices or a hub device in a device group network, so that those devices can provide personalized services for the user according to the record information, such as cross-device authentication or cooperative device authentication.
In one possible design, after the first electronic device receives the first feature template entered by the user, the method further includes: the first electronic device sends the first feature template to a second electronic device connected to the first electronic device; the second electronic device matches the feature information against a fourth feature template in the second electronic device to obtain a matching result, and the first electronic device obtains the matching result from the second electronic device; and when the matching results all indicate a successful match, an association is established among the fourth feature template, the first feature template, and the user identifier.
This method associates the feature templates of the same user across multiple electronic devices, so that in a device replacement scenario the user can obtain, according to the association, the feature templates belonging to the same user on each device, and then distribute the feature templates from each old device to the new electronic device accordingly, achieving one-key migration of feature templates between old and new devices.
In one possible design, the method further includes: the electronic device shares the record information with the second electronic device, where the record information includes the association among the fourth feature template, the first feature template, and the user identifier. In this way, other electronic devices can provide personalized services for the user according to the record information, such as cross-device authentication or cooperative device authentication.
In one possible design, the feature information includes user secret data and/or biometric data, the first feature template includes a template of user secret data and/or a template of biometric data, and the second feature template includes a template of the user secret data and/or a template of the biometric data.
In a possible design, the second electronic device and the first electronic device are connected to the same local area network, and/or the second electronic device and the first electronic device are pre-bound with the same user account.
In a third aspect, an embodiment of the present application provides a first electronic device, including a processor and a memory, where the memory is used to store one or more computer programs; the one or more computer programs stored in the memory, when executed by the processor, enable the first electronic device to implement any of the possible design methods described above as being performed by the first electronic device in the first aspect.
In a fourth aspect, an embodiment of the present application provides a second electronic device, including a processor and a memory, where the memory is used to store one or more computer programs; the one or more computer programs stored in the memory, when executed by the processor, enable the second electronic device to implement any of the possible design methods described above as being performed by the second electronic device in the first aspect.
In a fifth aspect, a communication device is provided, which comprises a receiving device and a processor, so as to perform any implementation manner of any method of the first aspect.
The processor may be configured to retrieve and execute the computer program or instructions from the memory, and when the processor executes the computer program or instructions in the memory, the communication device may be configured to perform any of the embodiments of any of the methods of the first aspect.
Optionally, the processor may be one or more.
The receiving device is configured to perform functions related to receiving, and may be a receiving unit. In one design, the communication device may be a communication chip, and the receiving device may be an input circuit or port of the communication chip. In another design, the receiving device may also be a receiver.
In a possible design, the communication device may further include a transmitting device configured to perform functions related to transmission. The transmitting device may be a transmitting unit. In one design, the communication device may be a communication chip, and the transmitting device may be an output circuit or port of the communication chip. In another design, the transmitting device may also be a transmitter.
In a sixth aspect, a communication device is provided, which may be the device for performing the foregoing authentication method. It includes a processor and a memory. Optionally, the communication device further includes a communication interface. The memory is configured to store a computer program or instructions, and the processor is configured to call and run the computer program or instructions from the memory; when the processor executes the computer program or instructions in the memory, the communication device can execute any implementation of any method of the first aspect through the communication interface.
Alternatively, the processor may be one or more, and the memory may be one or more.
Alternatively, the memory may be integrated with the processor, or may be provided separately from the processor.
Alternatively, the communication interface may be an input/output circuit or port, or a transmitter and a receiver.
In a seventh aspect, a communications apparatus is provided that includes a processor. The processor is coupled to the memory and is operable to perform the method of any of the first aspects and any possible implementation of the first aspects. Optionally, the communication device further comprises a memory. Optionally, the communication device further comprises a communication interface, the processor being coupled to the communication interface.
In another implementation, the communication device may be a device for performing any of the methods of the first aspect described above. The communication interface may be a transceiver, or an input/output interface. Alternatively, the transceiver may be a transmit-receive circuit. Alternatively, the input/output interface may be an input/output circuit.
In yet another implementation, the communication device may also be a chip or a system of chips. When the communication device is a chip or a system of chips, the communication interface may be an input/output interface, an interface circuit, an output circuit, an input circuit, a pin or related circuit, etc. on the chip or the system of chips. A processor may also be embodied as a processing circuit or a logic circuit.
In an eighth aspect, an embodiment of this application provides an authentication system, where the authentication system includes a plurality of electronic devices connected to each other, and the plurality of electronic devices include an acquisition device, an authentication device, and a decision device;
the decision device is configured to receive an authentication request, where the authentication request is used to request authentication of a first service, and to determine an authentication mode corresponding to the first service;
the decision device is further configured to schedule, according to the authentication mode, the acquisition device to collect authentication factors and the authentication device to perform authentication;
the acquisition device is used for acquiring at least one authentication factor and sending the at least one authentication factor to the authentication device;
the authentication device is configured to authenticate the at least one authentication factor to obtain at least one authentication result, and send the at least one authentication result to the decision device;
the decision device is configured to process the at least one authentication result to obtain an authentication result of the first service.
In a possible design, when determining the authentication manner corresponding to the first service, the decision device is specifically configured to:
determining a risk security level corresponding to the first service;
and determining, according to the risk security level, an authentication mode that meets the risk security level.
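A minimal sketch of this decision step, assuming invented numeric levels and mode strengths (the embodiments do not prescribe these values): the decision device picks an authentication mode whose strength is at least the service's risk security level.

```python
# Illustrative mapping from authentication mode to an assumed strength value.
AUTH_MODE_STRENGTH = {
    "pin": 1,
    "fingerprint": 2,
    "face": 2,
    "fingerprint_plus_face": 3,
}


def pick_auth_mode(risk_security_level):
    """Choose the weakest authentication mode that still meets the required level."""
    candidates = [(strength, mode) for mode, strength in AUTH_MODE_STRENGTH.items()
                  if strength >= risk_security_level]
    if not candidates:
        raise ValueError("no authentication mode meets the risk security level")
    return min(candidates)[1]


print(pick_auth_mode(3))  # fingerprint_plus_face
```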
In one possible design, the system further includes a service device;
the service device is configured to receive a target operation, where the target operation is used to trigger generation of the authentication request, and to send the authentication request to the decision device;
when receiving the authentication request, the decision device is specifically configured to: receive the authentication request from the service device;
when determining the authentication mode corresponding to the first service, the decision device is specifically configured to: determine a target security value required for executing the target operation, where the target operation is used to trigger execution of the first service; and determine M1 authentication devices, where M1 is a positive integer, the M1 authentication devices are devices capable of authenticating user information, and the M1 authentication devices are included in the M electronic devices;
when processing the at least one authentication result to obtain the authentication result of the first service, the decision device is specifically configured to:
obtain the authentication result of at least one of the M1 authentication devices;
determine a total authentication security value according to the correspondence between the authentication mode of the at least one authentication device and its authentication security value, and according to the authentication result;
and determine that the authentication passes if the total authentication security value is not less than the target security value;
the decision device is further configured to trigger the operation device to execute the target operation.
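As a rough illustration of the aggregation rule above, the following sketch sums the security values of the authentication modes that passed and compares the total with the target security value; the numeric values are assumptions for the example only.

```python
# Assumed per-mode security values; the embodiments do not fix these numbers.
SECURITY_VALUE = {"voiceprint": 20, "fingerprint": 40, "face": 40}


def authentication_passes(results, target_security_value):
    """results: list of (auth_mode, passed) pairs from the M1 authentication devices."""
    total = sum(SECURITY_VALUE[mode] for mode, passed in results if passed)
    return total >= target_security_value


# Example: fingerprint and face pass, voiceprint fails; the target value is 70.
print(authentication_passes(
    [("fingerprint", True), ("face", True), ("voiceprint", False)], 70))  # True
```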
In a possible design, when receiving the authentication request, the decision device is specifically configured to: receive a first operation, where the first operation is used to trigger generation of the authentication request;
when determining the authentication mode corresponding to the first service, the decision device is specifically configured to:
determine a first authentication mode corresponding to the first service;
the decision device is further configured to: in response to receiving the first operation under the first authentication mode, detect whether the local authentication result of the decision device passes;
in response to detecting that the local authentication result of the decision device does not pass, determine a second authentication mode corresponding to the first service and send the authentication device a request for obtaining its local authentication result;
here, the authentication result is the local authentication result that the authentication device sends to the decision device;
the decision device is further configured to, in response to receiving the local authentication result of the authentication device, detect whether that result passes; and in response to detecting that the local authentication result of the authentication device passes, the authentication device executes an instruction corresponding to the first operation.
In one possible design, the system further includes a service device;
the service device is configured to receive a target operation acting on a first interface of a first electronic device, where the target operation is used to trigger access to the first service, and the first service is associated with a second electronic device;
when receiving the authentication request, the decision device is specifically configured to: receive the authentication request from the service device;
when determining the authentication mode corresponding to the first service, the decision device is specifically configured to: obtain a target authentication mode corresponding to the first service;
the decision device is further configured to: collect authentication information according to the target authentication mode, and send an authentication request including the authentication information to the authentication device;
and the authentication device is configured to authenticate the first service according to the authentication request.
In one possible implementation, the plurality of electronic devices include a first device and a second device, where the first device and the second device are any two of the plurality of electronic devices. The first device is configured to send first information to the second device before the acquisition device collects the at least one authentication factor, where the first information includes at least one of: the acquisition capability of the first device, the authentication capability of the first device, and the decision capability of the first device. The acquisition capability of the first device includes the types of authentication factors the first device can collect, the authentication capability of the first device includes the types of authentication factors the first device can authenticate, and the decision capability of the first device indicates whether the first device can obtain the at least one aggregation result from at least one authentication result.
In a possible implementation, at least two of the plurality of electronic devices are configured to negotiate and determine the type of the at least one authentication factor before the acquisition device collects the at least one authentication factor; or at least one of the plurality of electronic devices is configured to determine, in response to an input operation of the user, the type of the at least one authentication factor before the acquisition device collects the at least one authentication factor.
In this embodiment of this application, resource information may be synchronized among the plurality of electronic devices, where the resource information includes at least one of: acquisition capability, authentication capability, and decision capability. After the resource information is synchronized, any one electronic device can obtain the resource information of the other electronic devices. Accordingly, the plurality of electronic devices can jointly confirm, based on the obtained resource information, the type of the at least one authentication factor used in the user identity authentication process, so that the acquisition device, the authentication device, and the decision device together implement an authentication process that is secure, reliable, and has little impact on power consumption.
In one possible implementation, the plurality of electronic devices are further configured to: receiving a first aggregation result and a second aggregation result sent by the decision device, wherein the first aggregation result and the second aggregation result are obtained by the decision device at different time points.
In this embodiment, the aggregation results obtained by the plurality of electronic devices may include aggregation results obtained at different times. Any one electronic device can perform continuous authentication of the user identity according to the aggregation results obtained at different times, which provides higher security and reliability.
In a possible implementation manner, the manner of interconnecting the plurality of electronic devices specifically includes: the plurality of electronic devices are connected to the same local area network and/or the plurality of electronic devices log in to the same user account.
In this embodiment of this application, the connection among the plurality of electronic devices can be secure and trusted, and implementing the user identity authentication process across the plurality of electronic devices can further improve the security and reliability of the authentication process.
In a possible implementation, when authenticating the at least one authentication factor to obtain the at least one authentication result, the authentication device is specifically configured to: compare the at least one authentication factor with at least one pre-stored template authentication factor to obtain the similarity between them, where the at least one authentication result is the similarity between the at least one authentication factor and the at least one template authentication factor; and when the similarity between the at least one authentication factor and the at least one template authentication factor is greater than a first threshold, the at least one authentication result indicates that the user is legitimate.
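For illustration, the comparison step could look like the following sketch; the choice of cosine similarity and the 0.9 first threshold are assumptions, not the matching algorithm of the embodiments.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def authenticate_factor(factor, template_factors, first_threshold=0.9):
    """Return the best similarity and whether it exceeds the first threshold."""
    best = max(cosine_similarity(factor, t) for t in template_factors)
    return best, best > first_threshold


# Example: the collected factor closely matches the first stored template.
best, ok = authenticate_factor([0.1, 0.9], [[0.1, 0.88], [0.7, 0.2]])
print(round(best, 3), ok)
```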
In one possible implementation, the plurality of electronic devices further include a usage device, where the usage device is configured to: execute a first operation when the usage device receives the at least one aggregation result and the at least one aggregation result indicates that the user is legitimate.
In this embodiment of this application, the usage device can execute the corresponding user operation according to the at least one aggregation result obtained by the plurality of electronic devices, achieving an authentication process that is secure, reliable, and has little impact on power consumption without the user perceiving it, so the user's normal use of the usage device is not affected.
In one possible implementation, that the at least one aggregation result indicates that the user is legitimate includes at least one of the following: when any one of the at least one authentication result indicates that the user is legitimate, the at least one aggregation result indicates that the user is legitimate; and when the number of authentication results indicating that the user is legitimate among the at least one authentication result is greater than a second threshold, the at least one aggregation result indicates that the user is legitimate.
In this embodiment of this application, there are multiple ways to determine whether the at least one aggregation result indicates that the user is legitimate, and different ways can be used in different application scenarios, which provides greater flexibility and a wider range of application scenarios.
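The two aggregation rules above can be sketched as follows; the function names are illustrative only.

```python
def aggregate_any(results):
    """Rule 1: the user is legitimate if any single authentication result passes."""
    return any(results)


def aggregate_count(results, second_threshold):
    """Rule 2: legitimate if more than `second_threshold` results pass."""
    return sum(1 for r in results if r) > second_threshold


# Example: three authenticators, two of which recognised the user.
results = [True, True, False]
print(aggregate_any(results))       # True
print(aggregate_count(results, 1))  # True: 2 passing results > threshold 1
```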
In one possible implementation, the usage device is further configured to: after receiving the at least one aggregation result and before executing the first operation, detect a second operation acting on the usage device; and in response to detecting the second operation, run a first application.
In a possible implementation manner, the plurality of electronic devices include a third device, a fourth device, a fifth device, and a sixth device, where the acquisition device is the third device, the authentication device is the fourth device, the decision device is the fifth device, and the usage device is the sixth device; or, the plurality of electronic devices include a seventh device and an eighth device, where the acquisition device, the authentication device, and the usage device are the seventh device, and the decision device is the eighth device.
In this embodiment of this application, the acquisition device, the authentication device, the decision device, and the usage device may be four different devices or fewer than four devices. Any one electronic device may act as at least one of the acquisition device, the authentication device, the decision device, and the usage device; that is, a single electronic device may take on multiple roles, which makes the implementation more flexible and the application scenarios wider. Even a system with a small number of devices can achieve an authentication process that is secure, reliable, and has little impact on power consumption.
In a possible implementation, the plurality of electronic devices include a ninth device, a tenth device, an eleventh device, a twelfth device, a thirteenth device, a fourteenth device, a fifteenth device, and a sixteenth device, where the acquisition device includes the ninth device and the tenth device, the authentication device includes the eleventh device and the twelfth device, the decision device includes the thirteenth device and the fourteenth device, and the usage device includes the fifteenth device and the sixteenth device.
In this embodiment of this application, none of the acquisition device, the authentication device, the decision device, or the usage device is limited to a single device, which enables resource integration and comprehensive scheduling of the acquisition, authentication, and decision capabilities across multiple electronic devices, reduces the processing pressure on any single device, reduces the impact on power consumption, and improves usability.
In a ninth aspect, a computer program product is provided, including a computer program (which may also be referred to as code or instructions) that, when executed, causes a computer to perform the method in any one of the possible implementations of the first aspect or the second aspect.
In a tenth aspect, a computer-readable storage medium is provided, storing a computer program (which may also be referred to as code or instructions) that, when run on a computer, causes the computer to perform the method in any one of the possible implementations of the first aspect or the second aspect.
In an eleventh aspect, a processing apparatus is provided, including an input circuit, an output circuit, and a processing circuit. The processing circuit is configured to receive a signal through the input circuit and transmit a signal through the output circuit, so that the method in any one of the first aspect and its possible implementations, or the method in any one of the second aspect and its possible implementations, is implemented.
In a specific implementation, the processing apparatus may be a chip, the input circuit may be an input pin, the output circuit may be an output pin, and the processing circuit may be transistors, gate circuits, flip-flops, various logic circuits, and the like. An input signal received by the input circuit may be received and input by a receiver, a signal output by the output circuit may be output to and transmitted by a transmitter, and the input circuit and the output circuit may be the same circuit that serves as the input circuit and the output circuit at different times. The embodiments of this application do not limit the specific implementations of the processor and the various circuits.
For the technical effects that can be achieved by the various designs in any one of the second aspect to the eleventh aspect, refer to the description of the technical effects of the various designs in the first aspect; details are not repeated here.
Drawings
Fig. 1A is a schematic diagram of a communication system according to an embodiment of the present application;
fig. 1B is a schematic diagram of an intelligent home communication system provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a mobile phone according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an authentication method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another authentication method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an apparatus according to an embodiment of the present disclosure;
fig. 6A is a schematic diagram of a resource synchronization method according to an embodiment of the present application;
fig. 6B is a schematic diagram of another resource synchronization method according to an embodiment of the present application;
fig. 7A is a schematic view of a door opening scene according to an embodiment of the present application;
fig. 7B is a schematic flowchart of an authentication method in a door opening scenario according to an embodiment of the present application;
fig. 7C is a schematic view of another door opening scenario provided in the embodiment of the present application;
fig. 7D is a schematic view of another door opening scenario provided in the embodiment of the present application;
FIG. 8 is a schematic diagram of a set of interfaces provided by an embodiment of the present application;
fig. 9A is a schematic view of a scene of a voice-operated smart television according to an embodiment of the present application;
fig. 9B is a schematic flowchart of an authentication method in a scenario of operating a smart television with voice according to an embodiment of the present application;
fig. 10A is a schematic view of another scenario of a voice-operated smart television according to an embodiment of the present application;
fig. 10B is a schematic flowchart of an authentication method in another scenario of voice-operated smart tv according to an embodiment of the present application;
fig. 11A is a schematic view of another scenario of a voice-operated smart television according to an embodiment of the present application;
fig. 11B is a schematic flowchart of an authentication method in another scenario of voice-operated smart tv according to an embodiment of the present application;
FIG. 12A is a schematic view of a voice operated microwave oven according to an embodiment of the present application;
fig. 12B is a schematic flowchart illustrating an authentication method in another scenario of operating a microwave oven with voice according to an embodiment of the present application;
fig. 13 is a schematic flowchart of an authentication method according to an embodiment of the present application;
fig. 14 is a schematic flowchart of another authentication method according to an embodiment of the present application;
fig. 15A is a schematic diagram of a possible application scenario provided in the embodiment of the present application;
Fig. 15B is a schematic diagram of an application scenario provided in the embodiment of the present application;
fig. 15C is a schematic diagram of an application scenario provided in the embodiment of the present application;
fig. 16 is a schematic flowchart of an authentication method according to an embodiment of the present application;
fig. 17 is a schematic flowchart of an authentication method according to an embodiment of the present application;
FIG. 18 is a schematic diagram of an apparatus according to an embodiment of the present disclosure;
FIG. 19 is a schematic diagram of another apparatus provided in an embodiment of the present application;
fig. 20 is a flowchart illustrating a cross-device authentication method according to an embodiment of the present application;
fig. 21 is a schematic system architecture diagram of a communication system according to a third implementation manner of the present application;
fig. 22 is a schematic interface diagram for starting cross-device authentication according to an embodiment of the present application;
fig. 23A to 23D are schematic diagrams of a voice control scenario provided in an embodiment of the present application;
fig. 24A to 24G are schematic diagrams of an interface for application locking according to an embodiment of the present application;
fig. 24H to 24M are schematic diagrams of an interface for locking application functions according to an embodiment of the present application;
fig. 25A to 25J are schematic diagrams of a voice control scenario provided in an embodiment of the present application;
fig. 26A to 26C are schematic diagrams of interfaces for setting a low-risk application according to an embodiment of the present application;
Fig. 27A to 27D are schematic diagrams of a voice control scenario provided in an embodiment of the present application;
fig. 28A to 28H are schematic diagrams of interfaces for triggering screen projection according to an embodiment of the present disclosure;
figs. 28I to 28J are schematic interface diagrams of a screen projection control provided by an embodiment of the present application;
fig. 29A to 29I are schematic interface diagrams of a screen projection control provided in an embodiment of the present application;
fig. 30A to 30C are schematic views of interfaces of a screen projection control provided in the embodiment of the present application;
fig. 31A to 31B are schematic views of interfaces of a screen projection control provided in an embodiment of the present application;
FIGS. 32A-32E are schematic diagrams of an interface for adding an authorized user according to an embodiment of the present application;
fig. 33A to 33C are schematic views of interfaces of a screen projection control provided in the embodiment of the present application;
fig. 34A to 34C are schematic diagrams of a cross-device authentication system according to an embodiment of the present application;
fig. 35 is a schematic flowchart of a cross-device authentication method in a voice control scenario according to an embodiment of the present application;
fig. 36 is a schematic flowchart of a cross-device authentication method in a screen projection control scenario according to an embodiment of the present application;
fig. 37 is a schematic diagram of a cross-device authentication method according to an embodiment of the present application;
fig. 38A and 38B are schematic diagrams illustrating an apparatus authentication method according to an embodiment of the present application;
FIG. 39A is a schematic interface diagram of a PC according to an embodiment of the present application;
fig. 39B is a schematic diagram of face authentication performed by cooperation of a PC and a mobile phone according to an embodiment of the present application;
FIG. 39C is a schematic diagram of an interface of another PC according to an embodiment of the present application;
FIG. 39D is a schematic diagram of another PC interface provided in an embodiment of the present application;
fig. 40 is a schematic diagram of another cross-device authentication method according to an embodiment of the present application;
fig. 41 is a schematic diagram of another cross-device authentication method according to an embodiment of the present application;
fig. 42 is a schematic view of a payment scenario of a smart television according to an embodiment of the present application;
fig. 43 is a schematic view of a driving scene according to an embodiment of the present application;
FIG. 44 is a schematic structural diagram of an apparatus according to an embodiment of the present disclosure;
fig. 45 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application;
FIGS. 46A and 46B are a set of schematic diagrams of an interface provided by an embodiment of the present application;
FIG. 47 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIGS. 48A and 48B are schematic views of another set of interfaces provided by embodiments of the present application;
FIGS. 49A and 49B are schematic views of another set of interfaces provided by embodiments of the present application;
FIG. 50 is a schematic view of another set of interfaces provided by embodiments of the present application;
fig. 51A is a schematic diagram of a fingerprint distribution method according to an embodiment of the present application;
fig. 51B is a schematic view of a face distribution method according to an embodiment of the present application;
fig. 51C is a schematic diagram of a data association manner provided in the embodiment of the present application;
fig. 52 is a schematic view of an association scenario provided in the embodiment of the present application;
FIG. 53 is a schematic view of another set of interfaces provided by embodiments of the present application;
fig. 54 is a schematic diagram of a data association method according to an embodiment of the present application;
FIG. 55 is a schematic diagram of another data association method provided in the embodiments of the present application;
fig. 56A to 56D are schematic structural views of further electronic devices according to embodiments of the present disclosure;
FIG. 57 is a flowchart illustrating a resource synchronization process according to an embodiment of the present application;
fig. 58 is a schematic flowchart of a multi-device cooperative authentication method according to an embodiment of the present application;
FIGS. 59-60 are flow diagrams of some of the decision processes provided by embodiments of the present application;
FIGS. 61-77 are flow diagrams of some further multi-device collaborative authentication methods provided by embodiments of the present application;
fig. 78 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application.
Fig. 79 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. In the description of the embodiments of the present application, the terminology used in the following embodiments is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in the specification of the present application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural form "one or more" unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "at least one" and "one or more" mean one, two, or more. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like in various places throughout this specification are not necessarily all referring to the same embodiment, but may also refer to other embodiments. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections unless otherwise noted. "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as being preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete manner.
Before technical solutions of embodiments of the present application are introduced, some terms in the present application are explained to facilitate understanding by those skilled in the art.
Some concepts related to embodiments of the present application are presented below:
(1) Authentication device and operation device
In the embodiments of the present application, a terminal device having authentication capability is referred to as an authentication device. For example, a smart speaker has the ability to authenticate a user's voiceprint, so the smart speaker is an authentication device.
In the embodiments of the present application, a device that executes a service is referred to as an operation device. For example, if the device that performs a door opening service is a door lock, the operation device is the door lock; if the device that performs a service of heating for five minutes is a microwave oven, the operation device is the microwave oven.
(2) Authentication method and correspondence between scores of authentication methods
Examples of authentication methods include authenticating a user using a biometric feature (e.g., a fingerprint, a voiceprint, or an iris) and authenticating a user using a user name and a password. In one possible case, if the security authentication level of the service the user requests to access is low, the electronic device may authenticate a single authentication factor, such as a fingerprint; in another possible case, if the security authentication level of the service is high, the electronic device may authenticate multiple authentication factors, such as performing fingerprint authentication on a fingerprint and face authentication on a face.
Reasonable use of biometric features to authenticate a user's identity can simplify user operations. For example, in a payment scenario, a user who wants to pay with a mobile phone would otherwise be required to enter a password, whereas a mobile phone that uses biometric authentication can avoid the trouble of entering a password by authenticating the user's fingerprint or facial information.
The False Acceptance Rate (FAR) and the False Rejection Rate (FRR) are evaluation indexes used to evaluate the performance of an algorithm that authenticates a user by means of biometric features.
The false acceptance rate is, simply put, the proportion of cases in which samples that should not match are treated as matching. The false rejection rate is, simply put, the proportion of cases in which samples that should match successfully are treated as non-matching.
The following describes the false acceptance rate and the false rejection rate by taking fingerprint recognition as an example.
In fingerprint recognition, the false acceptance rate may refer to the proportion of comparisons, when a fingerprint recognition algorithm is tested on a standard fingerprint database, in which the matching score between different fingerprints exceeds a given threshold, so that different fingerprints are considered to be the same fingerprint; simply put, fingerprints that should not match are treated as matching.
The false rejection rate may refer to the proportion of comparisons in which the matching score between images of the same fingerprint falls below the given threshold, so that the same fingerprint is considered to be different fingerprints; simply put, fingerprints that should match successfully are treated as non-matching.
For example, assume there are 110 persons and the fingerprint database contains 8 pictures of each person's thumb, i.e., 110 x 8 = 880 fingerprint pictures in total, organized as 110 classes with 8 pictures per class. Ideally, any two pictures within the same class match successfully, and any two pictures from different classes fail to match. Each picture is matched against every other picture in the library, and the false acceptance rate and the false rejection rate are calculated as follows.
False acceptance rate: assume that, because of the performance of the fingerprint recognition algorithm, 1000 comparisons that should fail are wrongly judged as successful matches. Theoretically, comparisons between images of the same fingerprint should succeed, which amounts to 7 x 8 x 110 = 6160 comparisons, and the total number of comparisons is 880 x (880 - 1) = 773520. The number of comparisons that should fail is therefore 773520 - 6160 = 767360. The false acceptance rate is FAR = 1000/767360 x 100% = 0.13%.
False rejection rate: assume that, because of the performance of the fingerprint recognition algorithm, 160 comparisons that should succeed are wrongly judged as match failures. The false rejection rate is FRR = 160/6160 = 2.6%.
In the embodiments of the present application, the score of an authentication method may be determined according to its false rejection rate and false acceptance rate. In general, the smaller the false rejection rate and the false acceptance rate of an authentication method, the higher its score; the larger they are, the lower its score.
The correspondence between several authentication manners and scores of the authentication manners is exemplified by table 1.
TABLE 1
Authentication method | Score of authentication method
Identifying a password | 90
Identifying a user name and password | 90
Iris recognition | 90
Fingerprint identification | 90
3D face recognition (structured light) | 90
3D face recognition (binocular) | 80
2D face recognition | 70
Bone voiceprint recognition | 70
Voiceprint recognition | 20
(3) A secure environment of an authentication device may include: a hardware security element (inSE) level, a Trusted Execution Environment (TEE) level, a white box, and a key segment.
In the embodiments of the present application, the hardware security unit may refer to an independent security unit built into the main chip, which provides functions such as secure storage of private information and secure execution of important programs. Protecting the root key at the inSE level provides a higher security level and can prevent hardware tampering.
The TEE in the embodiments of the present application may refer to a trusted execution environment, which is a hardware-isolated security area of the main processor and provides functions such as confidentiality and integrity protection of code and data and secure access to external devices. Protecting the root key at the TEE level also provides a higher security level and can reach hardware-level security.
(4) A distributed storage system stores data in a distributed manner on a plurality of independent devices. A traditional network storage system uses a centralized storage server to store all data; the storage server becomes a bottleneck for system performance and a focal point of reliability and security concerns, and cannot meet the needs of large-scale storage applications. A distributed network storage system adopts an expandable architecture, uses multiple storage servers to share the storage load, and uses location servers to locate stored information, which not only improves the reliability, availability, and access efficiency of the system but also makes it easy to expand.
(5) The authentication factor may include: user secret data, biometric data, and the like. The user secret data may include a screen locking password of the user, a protection password of the user, and the like. The biometric data may include one or more of: physical biometric, behavioral biometric, soft biometric. The physical biometric characteristics may include: human face, fingerprint, iris, retina, deoxyribonucleic acid (DNA), skin, hand, vein. The behavioral biometric characteristics may include: voiceprint, signature, gait. Soft biometrics may include: gender, age, height, weight, etc.
(6) A service is a transaction, that is, a process performed by a device in order to implement a function or provide a service. For example, the service may be an unlocking service, a payment service, a door opening service, an Artificial Intelligence (AI) computing service, various application services, a distribution service, and the like.
At present, identity authentication on a device is basically a single-device authentication process. For example, when a user opens the privacy cabinet application of a mobile phone, the interface prompts the user to perform fingerprint authentication, the user enters a fingerprint on the mobile phone, and the mobile phone then performs fingerprint authentication and generates an authentication result. For cooperative authentication among multiple devices, no related authentication scheme exists at present.
To implement cooperative authentication between devices, the authentication method provided in the embodiments of the present application can, in one possible manner, enable at least two electronic devices to jointly authenticate the same service. For example, a PC collects a face and sends it to a mobile phone, and the mobile phone uses the face collected by the PC to authenticate a payment service triggered by the user. In this way, cross-device collection of authentication factors and cooperative authentication of the same service are achieved, which improves the convenience of authentication. In another possible manner, the method can enable at least two electronic devices to jointly authenticate the same service using at least two authentication factors. For example, a door lock collects a fingerprint, a camera on the door collects a face, and a smart speaker uses the fingerprint obtained from the door lock and the face obtained from the camera, superposing the two authentication factors to authenticate the door opening service.
The authentication method provided in the embodiment of the present application may be applied to the schematic diagram of the communication system architecture shown in fig. 1A, and as shown in fig. 1A, the communication system architecture may at least include: electronic device 100 and electronic device 200.
The electronic device 100 and the electronic device 200 may establish connection in a wired or wireless manner. In this embodiment, the wireless communication protocol used when the electronic device 100 and the electronic device 200 establish a connection in a wireless manner may be a wireless fidelity (Wi-Fi) protocol, a Bluetooth (Bluetooth) protocol, a ZigBee protocol, a Near Field Communication (NFC) protocol, various cellular network protocols, and the like, and is not limited in particular.
In a specific implementation, the electronic device 100 and the electronic device 200 may each be a mobile phone, a tablet computer, a handheld computer, a Personal Computer (PC), a cellular phone, a Personal Digital Assistant (PDA), a wearable device (e.g., a smart watch), a smart home device (e.g., a television), a vehicle-mounted computer, a game console, an Augmented Reality (AR) / Virtual Reality (VR) device, or the like; the specific device configurations of the electronic device 100 and the electronic device 200 are not particularly limited in this embodiment. In this embodiment, the electronic device 100 and the electronic device 200 may have the same device form, for example, both being mobile phones, or different device forms, for example, the electronic device 100 being a PC and the electronic device 200 being a mobile phone.
The electronic device 100 and the electronic device 200 may be touch screen devices or non-touch screen devices. In this embodiment, the electronic device 100 and the electronic device 200 are both terminals that can run an operating system, install applications, and have a display (or a display screen).
In one possible embodiment of the present application, the system architecture may further include a server 300, and the electronic device 100 may establish a connection with the electronic device 200 through the server 300 in a wired or wireless manner.
As shown in fig. 1B, the communication system may be a smart home system, and the smart home devices in the system may include: a mobile phone 11, a smart camera 12, a smart television 13, a smart speaker 14, a smart band 15, a smart lock 16, and the like. In particular implementations, the number of devices may be greater or smaller. The multiple devices may be connected and communicate through a network, either by wire (e.g., USB, twisted pair, coaxial cable, and/or optical fiber) or wirelessly (e.g., wireless fidelity (Wi-Fi), Bluetooth, and/or a mobile network). That is, the network may include, but is not limited to, at least one of: communication lines such as wired and wireless links, gateway devices such as routers and Access Points (APs), and a cloud server. For example, the multiple devices may access the same local area network through the communication lines and the gateway devices and communicate over that local area network.
In some embodiments, the multiple devices connected via the network are trusted devices with respect to one another. For example, a user terminal device such as a smartphone or a tablet computer among the multiple devices may have an application program installed for implementing communication (for example, but not limited to, a smart home application). The application can log in to an account and may subsequently be referred to as the account application. The user terminal devices among the multiple devices can log in to the same account or related accounts through the account application and thus communicate via the account application server. The devices other than the user terminal devices can connect to the account application server wirelessly, such as over Bluetooth or Wi-Fi, or by wire, such as over USB (for example, the user can manually add smart home devices to the smart home application over Bluetooth). In this way, the devices can identify the identity of a communication object through the account application server (for example, a device logged in or registered in the smart home application is a trusted device; otherwise, it is an untrusted device), so that high-security communication is carried out through the account application server. The other devices mentioned above may include, but are not limited to, smart home devices such as smart televisions and smart cameras, and wearable devices such as smart bands, smart watches, and smart glasses.
In a specific implementation, without being limited to the above cases, the multiple devices may each access the same Wi-Fi network after passing authentication. The devices can then identify the identity of a communication object through the Wi-Fi network (for example, a device that has passed the password authentication is a trusted device; otherwise, it is an untrusted device), so that high-security communication can be performed through the Wi-Fi network. The embodiments of the present application do not limit this.
The authentication method provided by the embodiments of the present application can be applied to an electronic device. In some embodiments, the electronic device may be a portable terminal that includes functionality such as a personal digital assistant and/or a music player, for example a mobile phone, a tablet computer, a wearable device with wireless communication capability (e.g., a smart watch), or an in-vehicle device. Exemplary embodiments of the portable terminal include, but are not limited to, a portable terminal running the Harmony (Hongmeng) operating system or another operating system. The portable terminal may also be a laptop computer with a touch-sensitive surface (e.g., a touch panel). It should also be understood that, in other embodiments, the terminal may be a desktop computer with a touch-sensitive surface (e.g., a touch panel).
Fig. 2 shows a schematic structural diagram of an electronic device 200.
The electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, a Universal Serial Bus (USB) interface 230, a charge management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, keys 290, a motor 291, an indicator 292, a camera 293, a display 294, and a Subscriber Identification Module (SIM) card interface 295, and so forth. The sensor module 280 may include a pressure sensor 280A, a gyroscope sensor 280B, an air pressure sensor 280C, a magnetic sensor 280D, an acceleration sensor 280E, a distance sensor 280F, a proximity light sensor 280G, a fingerprint sensor 280H, a temperature sensor 280J, a touch sensor 280K, an ambient light sensor 280L, a bone conduction sensor 280M, a pulse sensor 280N, a heart rate sensor 280P, and the like.
The operation of the pulse sensor 280N and the heart rate sensor 280P will be exemplified as follows.
The pulse sensor 280N can detect a pulse signal. In some embodiments, the pulse sensor 280N can detect pressure changes generated during the pulsatility of the artery and convert them into electrical signals. The pulse sensor 280N is of various types, for example, a piezoelectric pulse sensor, a piezoresistive pulse sensor, a photoelectric pulse sensor, and the like. The piezoelectric pulse sensor and the piezoresistive pulse sensor can convert the pressure process of pulse pulsation into signal output through a micro-pressure type material (such as a piezoelectric sheet, a bridge and the like). The photoelectric pulse sensor can convert the change of light transmittance of the blood vessel in the pulse process into a signal output in a reflection or transmission mode, for example, a pulse signal is acquired by photoplethysmography (PPG).
The heart rate sensor 280P may detect a heart rate signal. In some embodiments, heart rate sensor 280P may acquire a heart rate signal through PPG. The heart rate sensor 280P may convert the change in the vascular dynamics, such as the change in the blood pulse rate (heart rate) or the blood volume (cardiac output), into a signal output by reflection or transmission. In some embodiments, the heart rate sensor 280P may measure signals of electrical activity induced in heart tissue via electrodes attached to the skin of the human body, i.e., acquiring heart rate signals via Electrocardiography (ECG).
In some embodiments, the pulse sensor 280N and the heart rate sensor 280P may be packaged in one pulse-heart rate sensor. The pulse and heart rate sensor may acquire a pulse signal and a heart rate signal through PPG.
Processor 210 may include one or more processing units, such as: the processor 210 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The electronic device 200 implements display functions via the GPU, the display screen 294, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.
The electronic device 200 may implement a shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 294, and the application processor, etc.
The SIM card interface 295 is used to connect a SIM card. The SIM card can be attached to and detached from the electronic device 200 by being inserted into the SIM card interface 295 or being pulled out from the SIM card interface 295. The electronic device 200 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 295 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. Multiple cards can be inserted into the same SIM card interface 295 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 295 may also be compatible with different types of SIM cards. The SIM card interface 295 may also be compatible with external memory cards. The electronic device 200 interacts with the network through the SIM card to implement functions such as communication and data communication. In some embodiments, the electronic device 200 employs an eSIM, that is, an embedded SIM card.
The wireless communication function of the electronic device 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 200 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 250 may provide a solution including 2G/3G/4G/5G wireless communication applied on the electronic device 200. The mobile communication module 250 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 250 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 250 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be provided in the same device as at least some of the modules of the processor 210.
The wireless communication module 260 may provide a solution for wireless communication applied to the electronic device 200, including Wireless Local Area Networks (WLANs), such as wireless fidelity (Wi-Fi) networks, bluetooth (bluetooth), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR) technologies, and the like. The wireless communication module 260 may be one or more devices integrating at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 200 is coupled to mobile communication module 250 and antenna 2 is coupled to wireless communication module 260, such that electronic device 200 may communicate with networks and other devices via wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), code division multiple access (CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. In this embodiment, the electronic device 200 may perform resource synchronization of the authentication factor, the collection capability, and the authentication capability with other electronic devices in the device networking through the mobile communication module 250 and/or the wireless communication module 260, so that multiple devices in the device networking subsequently each perform their own functions and cooperatively authenticate the user's identity. The collection capability means that the electronic device can collect an authentication factor used to identify the user's identity. The authentication capability means that the electronic device can authenticate the collected authentication factors to obtain an authentication result. Different authentication factors may correspond to different collection capabilities, and different authentication factors may correspond to different authentication capabilities. After the resource synchronization is performed among the multiple devices, the synchronized resource information may be stored in the internal memory 221, may be uploaded to a connected cloud server, may be stored in a device (hereinafter referred to as a hub device) that implements resource integration in the multiple-device group, or may be stored in an external storage device connected to the device, which is not limited in this embodiment of the present application.
It is to be understood that the components shown in fig. 2 do not constitute a specific limitation on the electronic device 200, and that the electronic device 200 may also include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components. In addition, the combination/connection relationship between the components in fig. 2 may also be modified.
The software system of the electronic device may employ a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. In the embodiments of the present application, a layered architecture is taken as an example, where the layered architecture may include the Harmony (Hongmeng) operating system or another operating system. The authentication method provided in the embodiments of the present application is applicable to a terminal integrating such an operating system.
It is to be appreciated that the electronic device 200 can be a different type of device, for example, the electronic device 200 is a smart phone, tablet, or other user terminal device. For another example, the electronic device 200 is an intelligent camera or other intelligent home devices. As another example, the electronic device 200 is a smart band or other wearable device.
An authentication method may be executed by a first electronic device. The first electronic device may be any electronic device in the communication system, such as an electronic device with authentication capability, an electronic device with collection capability, an electronic device with both collection and authentication capability, or a hub device with scheduling capability. As shown in fig. 3, the method includes the following steps:
S301, the first electronic device receives an authentication request, where the authentication request is used to request authentication of the first service.
In this step, the first electronic device may receive an operation from a user, in response to which the first electronic device generates an authentication request; or, the user performs an operation on the second electronic device, the second electronic device generates an authentication request and transmits the authentication request to the first electronic device in response to the operation, and then the first electronic device receives the authentication request from the second electronic device.
S302, the first electronic device determines an authentication mode corresponding to the first service.
In one possible manner, in this step, if the first service is a service with a low security level, the authentication manner corresponding to the first service may be authentication by using a single authentication factor; if the first service is a service with a high security level, the authentication mode corresponding to the first service may be authentication by using at least two authentication factors.
In this step, the first electronic device may decide the authentication mode corresponding to the first service by itself; or, the first electronic device may obtain, from another electronic device, the authentication mode corresponding to the first service that was decided by that other electronic device, and determine the authentication mode accordingly.
And S303, the first electronic equipment schedules M electronic equipment to authenticate the first service according to the authentication mode, wherein M is a positive integer.
In this step, for example, the first electronic device may, according to the authentication mode, schedule electronic device A (which may be the first electronic device itself or one of the M electronic devices) to collect an authentication factor and schedule electronic device B to authenticate the authentication factor collected by electronic device A and generate an authentication result, thereby achieving cross-device collection of the authentication factor and cooperative authentication of the same service by multiple devices. In another example, the first electronic device may schedule electronic device A to collect a first authentication factor and electronic device B to collect a second authentication factor, schedule electronic device C to authenticate the first authentication factor and generate a first authentication result, and schedule electronic device D to authenticate the second authentication factor and generate a second authentication result, so that authentication of the first service is completed by combining the two authentication results.
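As an illustration only, the following sketch outlines the S301 to S303 flow described above; the device names, the service-to-mode mapping, and the function names are assumptions and are not prescribed by the embodiment.

```python
# Hypothetical sketch of the S301-S303 flow. Device names, factors, and the
# decision table are illustrative assumptions, not a prescribed API.

COLLECTORS = {"fingerprint": "door lock", "face": "camera"}            # who collects
VERIFIERS = {"fingerprint": "smart speaker", "face": "smart speaker"}  # who verifies
MODE_BY_SERVICE = {"open weather app": ["voiceprint"],
                   "open door": ["fingerprint", "face"]}               # assumed mapping

def authenticate_service(service: str, verify) -> bool:
    """S301: request received; S302: decide the mode; S303: schedule devices."""
    mode = MODE_BY_SERVICE.get(service, ["password"])
    results = []
    for factor in mode:
        collector = COLLECTORS.get(factor, "first device")
        verifier = VERIFIERS.get(factor, "first device")
        # Cross-device collection: the collector gathers the factor and the
        # verifier authenticates it; 'verify' stands in for that exchange.
        results.append(verify(factor, collector, verifier))
    return all(results)   # execute the first service only if all factors pass

# Usage: a stub verifier that accepts everything, for illustration only.
ok = authenticate_service("open door", lambda factor, collector, verifier: True)
```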
Specifically, there may be several different implementations of the authentication method described in fig. 3, which are further described in detail below in the embodiments of the present application.
Implementation mode one
Based on the above steps shown in fig. 3, in the above step S302, a specific manner of determining, by the first electronic device, the authentication method corresponding to the first service may be: the method comprises the steps that first electronic equipment determines a risk security level corresponding to a first service; and then the first electronic equipment determines an authentication mode meeting the safety risk level according to the risk safety level.
In the conventional technology, a user generally holds multiple terminals, such as a mobile phone and a smart watch, whose authentication capabilities differ, and a security risk may exist when a service is authenticated only by the authentication modes a single terminal can provide (such as a voiceprint or a 2D face). For example, a smart door lock can currently authenticate the identity of a user by the user's fingerprint and open the door when the authentication passes; however, the door lock may then be illegally opened by an illegitimate user who steals the owner's fingerprint, which shows that the traditional authentication mode is still not sufficiently secure. In the authentication method provided in the first implementation manner of the embodiment of the present application, the authentication mode of the first service satisfies the corresponding risk security level, so the security of the authentication result can be improved. The method can use multiple authentication factors on multiple electronic devices to perform superposed authentication of the same service, thereby ensuring the reliability of the authentication result and raising the authentication security level of the devices; for example, for a high-security-level door opening service, a camera and a door lock are invoked to perform face authentication and fingerprint authentication in cooperation.
As shown in fig. 4, a specific flow of the authentication method according to the first implementation includes the following steps.
S401, the first electronic device receives an authentication request, and the authentication request is used for requesting authentication of the first service.
The first electronic device may receive an operation by a user on the first electronic device, generate an authentication request, or the first electronic device may receive an authentication request from another electronic device.
Illustratively, if the user issues a voice command to open the gallery application and the first electronic device is a smart speaker, the smart speaker receives the wake-up voice from the user, the wake-up voice serves as the authentication request, and the first service may refer to opening the gallery application. Alternatively, the first electronic device may receive an authentication request forwarded by another electronic device; for example, if the user performs a door opening operation, the door lock forwards an authentication request related to the door opening operation to the smart speaker. In this example, the first service may refer to a door opening service.
S402, the first electronic device determines a risk security level corresponding to the execution of the first service.
Illustratively, as shown in table 8 below, different wake-up voices/operations correspond to different risk security levels. If the first service is "open the weather application", the corresponding risk security level is determined to be the first level because the service or operation does not involve personal data and carries no risk; if the first service is "open the gallery application", the corresponding risk security level is determined to be the third level because the service or operation involves personal data and carries a medium risk; a service or operation such as "open the safe application" involves sensitive personal data and carries a high risk, so the corresponding risk security level is determined to be the fourth level.
And S403, the first electronic device determines an authentication mode meeting the security risk level according to the risk security level.
In this step, the authentication mode satisfying the security risk level may be authentication for one authentication factor, or may be authentication for two or more authentication factors. When the authentication method is an authentication of two or more authentication factors, it is also called an authentication combination method.
In one possible implementation manner, the first electronic device determines at least one authentication manner meeting the security risk level by using a decision policy; wherein the decision policy includes, but is not limited to, at least one of:
preferentially using authentication factors that have already been collected; preferentially using collection capabilities that are imperceptible to the user to collect authentication factors; and preferentially collecting authentication factors using the collection capability of a device near the user. For specific examples, refer to the embodiments described above.
The available authentication factors and the available collection capabilities associated with them may be determined in advance between the first electronic device and the M electronic devices, and the authentication mode meeting the security risk level is then determined according to the risk security level, the available authentication factors, and the available collection capabilities associated with those factors. For example, if neither the first electronic device nor the M electronic devices support voiceprint authentication (that is, the voiceprint capability is not available), while the first electronic device has fingerprint collection capability and fingerprint authentication capability, the voiceprint authentication mode is not adopted and the fingerprint authentication mode is adopted instead.
In another possible implementation manner, if the authentication request includes a biometric feature, the biometric feature is identified and the user corresponding to it is determined; then the available authentication factors associated with that user, together with the available authentication capabilities and available collection capabilities associated with those factors, are determined; and an authentication mode meeting the security risk level is determined according to the risk security level, the available authentication factors, the available authentication capabilities, and the available collection capabilities. Illustratively, in combination with the first embodiment, the smart speaker acquires the authentication request from the door lock. If the authentication request includes the user's fingerprint, the smart speaker first determines that the user corresponding to the fingerprint is the owner Alisa, and then learns that the available authentication factors associated with Alisa include a face, touch screen behavior, and the like. According to the decided 3D face authentication mode, the processor of the smart speaker determines the 3D face collection capability and the 3D face authentication capability from the available authentication units and collection units associated with the available authentication factors. For example, the 3D face collection capability belongs to the camera, and the 3D face authentication capability belongs to the smart speaker. That is, the smart speaker schedules the camera to collect a face image, and the 3D face authentication unit of the smart speaker authenticates the collected face image.
In another possible implementation manner, resources in the M electronic devices are synchronized in advance between the first electronic device and the M electronic devices to obtain a synchronized resource pool, where the resource pool includes authentication factors, acquisition capabilities, and authentication capabilities in the M electronic devices, and at least one authentication manner that satisfies the security risk level is determined according to the risk security level and the resource pool. That is, the first electronic device may obtain the currently available authentication factor, acquisition capability, and authentication capability, thereby selecting the acquisition capability and the authentication capability that satisfy the security level, and determine at least one authentication manner according to the selected acquisition capability and the selected authentication capability.
For example, when the first service is at a low security risk level, authentication may be performed by using an authentication method with a weak reliability, for example, voiceprint authentication, password authentication, or the like; when the first service is of a medium security risk level, authentication can be performed in an authentication mode with intermediate credibility, such as face authentication, or voiceprint authentication and touch screen behavior authentication; when the first service is at a high security risk level, authentication can be performed in an authentication mode with high reliability, such as fingerprint authentication and voiceprint authentication, or 3D face authentication and fingerprint authentication.
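As an illustration only, the following sketch shows how an authentication mode meeting a risk security level might be chosen from such a synchronized resource pool; the level names, candidate modes, and availability flags are assumptions for illustration.

```python
# A sketch, under assumed data, of choosing an authentication mode that
# satisfies a risk security level from the synchronized resource pool.

RESOURCE_POOL = {          # factor -> (collection available, authentication available)
    "voiceprint":  (True,  True),
    "fingerprint": (True,  True),
    "3d_face":     (False, True),   # e.g. no depth camera currently online
}

# Candidate modes per level (assumed, cf. the examples in the text above).
CANDIDATES = {
    "low":    [["voiceprint"], ["password"]],
    "medium": [["face"], ["voiceprint", "touch_behavior"]],
    "high":   [["fingerprint", "voiceprint"], ["3d_face", "fingerprint"]],
}

def usable(mode):
    # A mode is usable only if every factor can be both collected and authenticated.
    return all(RESOURCE_POOL.get(f, (False, False)) == (True, True) for f in mode)

def pick_mode(level: str):
    for mode in CANDIDATES.get(level, []):
        if usable(mode):
            return mode
    return None

print(pick_mode("high"))   # ['fingerprint', 'voiceprint']
```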
It should be noted that, in a possible case, the first electronic device may also determine at least one authentication manner corresponding to the first service according to a preset configuration table.
S404, the first electronic device schedules the M electronic devices to authenticate the first service according to the authentication mode.
In this embodiment of the present application, the first electronic device may belong to M electronic devices, or may not belong to M electronic devices. Illustratively, the first electronic device schedules the second electronic device to authenticate the first service according to the at least one authentication mode; or the first electronic device schedules the first electronic device and the second electronic device to authenticate the first service according to the at least one authentication mode. And if the final authentication result is that the authentication is passed, triggering the operation equipment to execute the first service, otherwise, not executing the first service.
In a possible implementation manner, when the authentication request includes a biometric feature, the first electronic device first identifies the biometric feature and determines the user corresponding to it, then judges whether the user has the authority to execute the first service, and, if so, further schedules the M electronic devices to authenticate the first service. Illustratively, in combination with the first embodiment, the smart speaker acquires the authentication request from the door lock. If the authentication request includes the user's fingerprint, the smart speaker first determines that the user corresponding to the fingerprint is the owner Alisa, then determines whether Alisa has the authority for the door opening operation, and, if she does, further schedules the camera to collect a face and performs 3D face authentication using the face image. It is noted that, in another possible approach, the first electronic device may also adjust the authentication mode based on the available authentication factors associated with the user and the available authentication capabilities and collection capabilities associated with those factors. For example, according to the security risk level of the first service, the decision unit of the smart speaker decides that at least one authentication mode is 3D face authentication, but the 3D face collection capability associated with the owner Alisa is unavailable; the decision can then be adjusted to use voiceprint authentication and pulse authentication.
Still further, in other possible implementations, the processor of the first electronic device may determine at least one authentication mode based on the security risk level of the first service, the authentication factors associated with the user, and the available authentication capabilities and collection capabilities associated with those factors. Illustratively, as shown in embodiment one, the door opening service is a high-security-level operation, and the available authentication factors associated with the owner Alisa include a 3D face, touch screen behavior, a fingerprint, and the like. Assuming that the 3D face collection capability associated with the owner Alisa is not available, the processor of the smart speaker may decide to use voiceprint authentication and pulse authentication.
In a possible embodiment, the first electronic device may further filter out, according to the user's location, electronic devices that are far from the user, and/or filter out, according to the service scenario, electronic devices that are not suitable for the current service.
In another possible embodiment, the first electronic device may further filter out, according to the security level and the current security state of a device, electronic devices whose security environments do not meet the requirements, for example, electronic devices on which a trojan or virus is present.
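As an illustration only, the following sketch shows one way such filtering might be expressed; the fields, device names, and the distance threshold are assumptions and not part of the embodiment.

```python
# A sketch of the device filtering described above (distance from the user,
# service scenario, and security state). Fields and thresholds are assumed.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    distance_m: float        # distance from the user's location
    secure: bool             # security environment meets the requirements
    scenarios: tuple         # scenarios the device is suitable for

def filter_devices(devices, scenario, max_distance_m=10.0):
    return [d for d in devices
            if d.distance_m <= max_distance_m     # near the user
            and d.secure                          # e.g. no trojan/virus detected
            and scenario in d.scenarios]          # suitable for the current service

candidates = filter_devices(
    [Device("smart speaker", 2.0, True, ("door opening", "voice control")),
     Device("bedroom TV", 15.0, True, ("screen projection",))],
    scenario="door opening")
# -> only the smart speaker remains
```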
In a possible embodiment, the first electronic device may also determine the authentication mode of the first service according to a preset configuration table. That is to say, the preset configuration table is configured with a priori constraint condition preset by a developer, that is, different authentication combination modes corresponding to the first service, such as fingerprint authentication and face authentication corresponding to door opening operation, are artificially configured in the preset configuration table. Since the preset configuration table takes the security risk level into consideration during configuration, the first electronic device may not determine the security risk level according to the first service.
It should be noted that, in the first implementation manner of the embodiment of the present application, the electronic device 200 may perform resource synchronization of the authentication factor, the acquisition capability, and the authentication capability with other electronic devices in the device networking through the mobile communication module 250 and/or the wireless communication module 260, so that multiple devices in the subsequent device networking respectively perform their own functions and perform authentication of the user identity in a coordinated manner. The acquisition capability refers to that the electronic equipment can acquire an authentication factor for identifying the identity of the user. The authentication capability means that the electronic device can authenticate the collected authentication factors to obtain an authentication result. Different authentication factors may correspond to different acquisition capabilities, and different authentication factors may correspond to different authentication capabilities. After the resource synchronization is performed among the multiple devices, the synchronized resource information may be stored in the internal memory 221, may also be uploaded to the connected cloud server, may also be stored in a device (hereinafter referred to as a hub device) for implementing resource integration in the multiple device group, or may also be stored in an external storage device connected to the device, which is not limited in this embodiment of the present application.
Referring to fig. 5, fig. 5 shows a schematic structural diagram of a communication device 500. The communication apparatus 500 may include an acquisition unit 501, an authentication unit 502, a resource management unit 503, a decision unit 504, and a scheduling unit 505. Wherein:
The acquisition unit 501 is configured to acquire an authentication factor.
Specifically, the communication apparatus 500 may comprise at least one collecting unit 501, wherein one collecting unit 501 may be configured to collect at least one type of authentication factor (hereinafter simply referred to as an authentication factor). The embodiment of the present application takes an example in which an acquisition unit acquires an authentication factor. The authentication factor may be a fingerprint, a face, a heart rate, a pulse, a behavior habit or a device connection state, etc. For example, the face acquisition unit may be used to acquire a face, and the face acquisition unit may refer to the camera 293 shown in fig. 2; the gait acquisition unit can be used for acquiring gait, and the gait acquisition unit can be a camera 293 shown in fig. 2; the pulse acquisition unit can be used for acquiring pulses, and the pulse acquisition unit can be a pulse sensor 280N shown in fig. 2; the heart rate acquisition unit can be used for acquiring a heart rate, and the heart rate acquisition unit can be referred to as a heart rate sensor 280P shown in fig. 2; the acquisition unit of the touch screen behavior may be used to acquire the touch screen behavior, and the acquisition unit of the touch screen behavior may refer to the display screen 294 shown in fig. 2; the acquisition unit of the trusted device may be configured to acquire a connection status and/or a wearing status of the wearable device.
The authentication unit 502 is configured to perform authentication according to the authentication factor and generate an authentication result. In a software implementation, the authentication unit 502 is generally an authentication service integrated in the operating system; the authentication service may be a process running on the computer, and the computer program corresponding to the authentication service may be stored in the internal memory 221.
The communication device 500 may comprise at least one authentication unit 502, wherein one authentication unit 502 may be configured to authenticate at least one authentication factor to obtain an authentication result. The embodiments of the present application take an example in which an authentication unit authenticates an authentication factor. For example, the authentication unit of the face may be configured to authenticate the face to obtain an authentication result of the face; the gait authentication unit can be used for authenticating the gait information to obtain the gait authentication result; the pulse authentication unit can be used for authenticating the collected pulse to obtain an authentication result of the pulse; the heart rate authentication unit can be used for authenticating the collected heart rate to obtain an authentication result of the heart rate; the authentication unit of the touch screen behavior can be used for authenticating the collected touch screen behavior information to obtain an authentication result of the touch screen behavior; the authentication unit of the trusted device may be configured to authenticate the acquired connection state and/or wearing state of the wearable device to obtain an authentication result of the trusted device.
The resource management unit 503 is configured to invoke a synchronization service mechanism to perform resource synchronization with other devices in the device networking, generate a resource pool, or maintain or manage the resource pool. The resource management unit 503 includes a resource pool, and the resource in the resource pool may be an authentication factor (or information of the authentication factor), a collection capability of the device, an authentication capability of the device, and the like. The resource management unit 503 may refer to the processor 210 shown in fig. 2, and the resource pool in the resource management unit 503 may refer to a resource pool stored in the internal memory 221.
Specifically, the acquisition unit 501 may actively report (this operation may be referred to as registration) the acquired authentication factor (or report information of the acquired authentication factor) and the acquisition capability of the device to the resource management unit 503. In addition, the resource management unit 503 may also actively acquire the acquired authentication factor (or report the information of the acquired authentication factor) and the acquisition capability from the acquisition unit 501.
The authentication unit 502 may actively report (this operation may be subsequently referred to as registration) the authentication capability to the resource management unit 503. In addition, the resource management unit 503 may also actively acquire the authentication capability from the device in which the authentication unit 502 is located.
Illustratively, the resource pool maintained by the resource management unit 503 may be as shown in table 2.
TABLE 2
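As an illustration only, a resource pool of the kind maintained by the resource management unit 503 could be represented as sketched below; the device names and capability entries are assumptions and do not reproduce the actual table entries.

```python
# Assumed representation of a synchronized resource pool: for each device in
# the networking, the authentication factors it holds and the collection /
# authentication capabilities it registered with the resource management unit.

RESOURCE_POOL = {
    "phone":         {"factors": ["fingerprint", "face"],
                      "collect": ["fingerprint", "face", "touch_behavior"],
                      "authenticate": ["fingerprint", "face"]},
    "smart speaker": {"factors": ["voiceprint"],
                      "collect": ["voiceprint"],
                      "authenticate": ["voiceprint", "face"]},
    "door camera":   {"factors": [],
                      "collect": ["face"],
                      "authenticate": []},
}

def devices_that_can(capability: str, factor: str):
    """List devices whose registered capability covers the given factor."""
    return [name for name, entry in RESOURCE_POOL.items()
            if factor in entry[capability]]

print(devices_that_can("collect", "face"))        # ['phone', 'door camera']
print(devices_that_can("authenticate", "face"))   # ['phone', 'smart speaker']
```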
In a possible implementation, it is considered that devices in the device networking may belong to different users, authentication factors of different users are stored in different devices, or the same device in the device networking may be used by different users, and templates of the authentication factors of different users may be stored in the same device, and for this reason, the resource management unit 503 also manages information of the templates of the authentication factors associated with each user. It should be understood that the template of the authentication factor of each user stored by each device in the device group network needs to be associated in advance, and specifically, one possible way of associating may be: the user selects the authentication factor template of the home user from the equipment, and then establishes a corresponding relation between the selected authentication factor template and the user identification; another possible way of association may be: when the device receives a template (such as a secret template or a biological characteristic template of a user) of an authentication factor newly entered by the user, a corresponding relationship between the biological characteristic template (such as a fingerprint characteristic template) entered by the user and the template of the authentication factor newly entered by the user is established, and because the entered biological characteristic template (such as a fingerprint characteristic template) has a one-to-one corresponding relationship with the user identifier and the feature template newly entered by the user has a corresponding relationship with the biological characteristic template entered by the user, the feature template newly entered by the user also has a corresponding relationship with the user identifier. In this way, the resource management unit 503 can acquire the template of the authentication factor corresponding to the user identifier by querying the user identifier, thereby generating information of the template of the authentication factor associated with each user.
Illustratively, a template of the authentication factor of each user managed by the resource management unit 503 may be as shown in table 3.
TABLE 3
(Table 3 is provided as an image in the original publication and is not reproduced here.)
It should be noted that the resource pool and the authentication factor associations maintained by the resource management unit 503 may change dynamically. For example, when a device in the device networking is powered off or goes offline, the authentication capability and the collection capability related to that device may be marked as unavailable. For another example, when the software version of a device is upgraded, the device may gain a new authentication capability. For another example, when the set of devices in the device networking changes (e.g., the third electronic device is deleted), the authentication factor, the authentication capability, and the collection capability of the third electronic device are no longer included in the authentication resource pool.
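The dynamic maintenance described above can be pictured with a short sketch, assuming a simple dictionary-based resource pool (the layout and device names are illustrative, not from the original text): when a device goes offline its capabilities are marked unavailable, and when it is removed from the networking its entries leave the pool.

```python
# Sketch of dynamic resource-pool maintenance; the dictionary layout is assumed.
resource_pool = {
    "camera": {"capabilities": ["3d_face_collection"], "available": True},
    "smart_door_lock": {"capabilities": ["fingerprint_collection"], "available": True},
}

def mark_offline(device_id):
    # Device powered off or offline: its capabilities become unavailable but stay listed.
    if device_id in resource_pool:
        resource_pool[device_id]["available"] = False

def remove_device(device_id):
    # Device deleted from the networking: its resources leave the pool entirely.
    resource_pool.pop(device_id, None)

mark_offline("camera")
remove_device("smart_door_lock")
print(resource_pool)  # only the camera remains, marked unavailable
```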
A decision unit 504, configured to determine, according to the resource in the resource pool maintained by the resource management unit 503, an authentication combination mode that meets the security risk level by using a decision policy. The decision unit 504 may refer to the processor 210 shown in fig. 2, that is, the actions performed by the decision unit are executed and processed by the processor 210.
The decision policy of the decision unit 504 may include, but is not limited to, at least one of the following:
Decision policy 1: preferentially use an authentication factor already stored on the device. For example, when the resource pool maintained by the resource management unit 503 of the device includes a template of touch-screen behavior, and the device has collected the touch-screen behavior of the user during historical use, the touch-screen behavior is preferentially used as the authentication factor.
Decision policy 2: preferentially use an acquisition unit that works without user perception (or has a non-interrupting operation characteristic) to collect the authentication factor. For example, when the resource pool maintained by the resource management unit 503 of the device includes a template of the user's face, a face acquisition unit is preferentially used to collect the user's face.
Decision policy 3: preferentially use an authentication unit that works without user perception (or has a non-interrupting operation characteristic) to authenticate. For example, when the resource pool maintained by the resource management unit 503 of the device includes a template of the user's face and the device has collected the user's face, the face authentication unit is preferentially used to authenticate the user's face.
Decision policy 4: preferentially collect authentication factors with an acquisition unit on a device near the user.
Decision policy 5: preferentially authenticate with an authentication unit on a device near the user. For example, when the user is in the living room, the device in the living room (e.g., the smart speaker) is preferentially used to collect the user's voice or perform voiceprint authentication, rather than a device in the bedroom.
Illustratively, one possible case is: the decision policy may be a pre-established mapping table containing one or more authentication combination manners, where each authentication combination manner is associated with a corresponding security risk level, so the decision unit 504 may select from the mapping table the authentication combination manner corresponding to the risk level of the current service. Another possible case is: the decision policy may be a pre-established mapping table containing one or more authentication manners and their scores, with index scores pre-configured for the various risk levels; the decision unit 504 then selects from the mapping table, according to the index score of the risk level corresponding to the current service, an authentication combination manner whose summed scores meet that index score.
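The first case (a pre-established mapping table from risk security level to authentication combination) might look like the following sketch. The levels and combinations shown are illustrative placeholders, not values from the original mapping table.

```python
# Sketch of selecting an authentication combination from a pre-established
# mapping table keyed by risk security level; entries are illustrative.
RISK_TO_COMBINATION = {
    "low": ["voiceprint"],
    "medium": ["voiceprint", "pulse"],
    "high": ["fingerprint", "3d_face"],
}

def select_combination(risk_level):
    """Return the authentication combination configured for the given risk level."""
    return RISK_TO_COMBINATION[risk_level]

print(select_combination("high"))  # ['fingerprint', '3d_face']
```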
The scheduling unit 505 is configured to determine, according to the authentication combination manner determined by the decision unit 504, the target authentication factor to be collected, the target acquisition unit corresponding to the target authentication factor, and the target authentication unit corresponding to the target authentication factor. The scheduling unit 505 schedules the target acquisition unit to collect the target authentication factor, and schedules the target authentication unit to authenticate the collected target authentication factor, so as to generate an authentication result. The target acquisition unit is the acquisition unit that collects the target authentication factor, and the target authentication unit is the authentication unit that authenticates the target authentication factor. In terms of software implementation, the scheduling unit 505 calls a specific open interface, such as a function interface exposed by the camera to obtain a face image, or the function interface of a face authentication service to obtain the face authentication result. The scheduling unit 505 may refer to the processor 210 shown in fig. 2.
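A minimal sketch of the scheduling step follows, assuming each acquisition unit and authentication unit is reachable through a callable interface; the function names stand in for the open interfaces mentioned above (for example, a camera capture interface or a face authentication service) and are not real APIs.

```python
# Sketch of scheduling: collect each target factor, then authenticate it.
def schedule(combination, collectors, authenticators):
    """Return a per-factor authentication result for the chosen combination."""
    results = {}
    for factor in combination:
        sample = collectors[factor]()                      # schedule the target acquisition unit
        results[factor] = authenticators[factor](sample)   # schedule the target authentication unit
    return results

# Placeholder interfaces standing in for real acquisition/authentication services.
collectors = {"fingerprint": lambda: "fingerprint_sample", "3d_face": lambda: "face_image"}
authenticators = {"fingerprint": lambda s: True, "3d_face": lambda s: True}
print(schedule(["fingerprint", "3d_face"], collectors, authenticators))
```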
The decision unit 504 is further configured to generate a final aggregated authentication result according to the authentication result obtained from the at least one target authentication unit.
It should be noted that the embodiments of the present application do not limit how the units are integrated into electronic devices: the units may all be integrated into one electronic device, or only some of them may be. Different types of electronic devices also differ. Some electronic devices have no secure execution environment, that is, no authentication capability, and therefore integrate an acquisition unit but no authentication unit. Some electronic devices are restricted by their hardware components; for example, a smart speaker without a camera or fingerprint sensor integrates an authentication unit but no acquisition unit. Some electronic devices have both the hardware and a secure execution environment, and therefore integrate both an acquisition unit and an authentication unit.
In a possible case, the acquisition unit 501, the authentication unit 502, the resource management unit 503, the decision unit 504, and the scheduling unit 505 may all be integrated into each electronic device in the device networking.
In another possible case, the resource management unit 503 and the at least one authentication unit 502 are integrated in a third electronic device in the device networking, the resource management unit 503 and the at least one acquisition unit 501 are integrated in a second electronic device in the device networking, and the resource management unit 503, the decision unit 504, and the scheduling unit 505 are integrated in a first electronic device in the device networking.
Illustratively, as shown in fig. 6A, the first electronic device is a hub device with decision-making capability; generally, the hub device has a secure execution environment and/or is a normally-on device. The first electronic device includes a resource management unit 503, a decision unit 504, and a scheduling unit 505. The second electronic device is an electronic device with acquisition capability and includes at least one acquisition unit 501 and a resource management unit 503. The third electronic device is an electronic device with authentication capability and includes at least one authentication unit 502 and a resource management unit 503. Specifically, if there is no unified data synchronization service in the operating system of the electronic device, a synchronization service function may be added to the resource management unit, and the resource synchronization process between the devices may include the following steps:
s601a, the at least one acquisition unit on the second electronic device sends the registration information to the resource management unit of the second electronic device.
Illustratively, the acquisition unit may correspond to the camera 293, the sensor module 280, the microphone 270C, and the like. The resource management unit may correspond to the processor 210 shown in fig. 2 described above. In the process of starting up the second electronic device, the acquisition units such as the camera and the sensor notify the processor of the states of the acquisition units so that the processor 210 determines the acquisition units in the available state, and then stores the state information of the acquisition units in the available state in the internal memory 221.
Illustratively, the registration information may be the acquisition capability of the second electronic device, the acquired authentication factor or information of the acquired authentication factor, etc., e.g., the registration information may include status information and an index of heart rate, the status information indicating that the second electronic device has heart rate acquisition capability.
S602a, the resource management unit of the second electronic device stores the authentication factor and the collection capability of the local device.
Illustratively, the resource management unit of the second electronic device saves the authentication factor and the acquisition capability of the local device into a storage unit, such as the internal memory 221. The internal memory 221 in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, without being limited to, these and any other suitable types of memories.
S603a, the resource management unit of the second electronic device calls the synchronization service to perform data synchronization with the resource management unit of the first electronic device.
That is, the resource management unit of the second electronic device sends the currently maintained information to the first electronic device, so that the information on the second electronic device is synchronized to the resource management unit of the first electronic device. For example, the state information of the second electronic device and the index of the heart rate are sent to the resource management unit of the first electronic device, so that the resource management unit on the second electronic device and the resource management unit on the first electronic device both store the same state information and the same index of the heart rate.
S604a, the at least one authentication unit on the third electronic device sends the registration information to the resource management unit of the third electronic device.
Illustratively, the registration information may be an authentication capability of the third electronic device, e.g., the registration information may include status information indicating that the third electronic device has an authentication capability of a heart rate.
S605a, the resource management unit of the third electronic device maintains local authentication capability.
S606a, the resource management unit of the third electronic device reports the authentication capability, and the resource management unit of the third electronic device invokes the synchronization service to perform data synchronization with the resource management unit of the first electronic device.
That is, the resource management unit of the third electronic device synchronizes the currently maintained information to the resource management unit of the first electronic device. Such as synchronizing the state information of the third electronic device to the resource management unit of the first electronic device.
S607a, the resource management unit of the first electronic device maintains a resource pool including an authentication factor, an acquisition capability, and an authentication capability.
S608a, the resource management unit of the first electronic device invokes the synchronization service to synchronize the resource pool to the resource management unit of the second electronic device.
S609a, the resource management unit of the first electronic device invokes the synchronization service to synchronize the resource pool to the resource management unit of the third electronic device.
That is, the resource management unit of the first electronic device manages the authentication factors of the plurality of electronic devices in the device networking, the capability of the acquisition unit, and the authentication capability of the authentication unit, for example, the resource management unit of the first electronic device completes resource aggregation between the devices. After the resources are summarized, any device in the device networking may obtain, from the resource management unit of the first electronic device, information of the acquisition unit or the authentication unit deployed by any other device in the device networking.
It should be noted that the above-mentioned S601a to S603a may also occur after S604a to S606a, or the above-mentioned S601a to S603a may be executed in parallel with S604a to S606a, which is not limited in this embodiment.
Alternatively, the devices may synchronize with each other through a network to which they are connected (e.g., through a cloud server). The synchronization mechanism of the synchronization service may be, but is not limited to, at least one of the following: timed synchronization (e.g., once per minute), triggered synchronization (e.g., synchronizing once in response to a user operation), and update synchronization (e.g., synchronizing once when the information of an acquisition unit or authentication unit changes).
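The three synchronization triggers can be sketched as follows. The SyncService class, its period, and the callback hooks are assumptions made for illustration; the original text only names the three mechanisms.

```python
# Sketch of the three synchronization mechanisms: timed, triggered, and update-driven.
import threading

class SyncService:
    def __init__(self, sync_fn, period_s=60):
        self._sync = sync_fn      # function that performs one resource synchronization
        self._period = period_s   # period for timed synchronization (e.g. once per minute)

    def start_timed(self):
        # Timed synchronization: run once per period.
        def loop():
            self._sync()
            threading.Timer(self._period, loop).start()
        loop()

    def on_user_operation(self):
        # Triggered synchronization: run once in response to a user operation.
        self._sync()

    def on_unit_info_changed(self):
        # Update synchronization: run once when acquisition/authentication unit info changes.
        self._sync()

svc = SyncService(lambda: print("resource pool synchronized"))
svc.on_unit_info_changed()
```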
Optionally, if the operating system of each electronic device provides a unified data synchronization service, the resource management unit may implement resource synchronization by using that existing data synchronization service without adding a synchronization service function; for details, refer to the example described in fig. 6A.
As shown in fig. 6B, the first electronic device is a hub device with decision-making capability and includes a resource management unit 503, a decision unit 504, and a scheduling unit 505. The second electronic device is an electronic device with acquisition capability and includes at least one acquisition unit 501. The third electronic device is an electronic device with authentication capability and includes at least one authentication unit 502. The resource synchronization process between the devices may include the following steps:
S601b, the at least one acquiring unit of the second electronic device sends the first registration information to the resource managing unit of the first electronic device.
For example, the first registration information may be the acquisition capability of the second electronic device, the acquired authentication factor or information of the acquired authentication factor, etc., e.g., the first registration information may include status information and an index of heart rate, and the status information indicates that the second electronic device has the heart rate acquisition capability.
S602b, the at least one authentication unit of the third electronic device sends the second registration information to the resource management unit of the first electronic device.
For example, the second registration information may be the authentication capability of the third electronic device; e.g., the second registration information may include status information indicating that the third electronic device has heart rate authentication capability.
S603b, the first electronic device locally maintains a resource pool including an authentication factor, an acquisition capability, and an authentication capability.
That is, the resource management unit of the first electronic device manages the authentication factors of the plurality of electronic devices in the device networking, the acquisition capability of the acquisition unit, and the authentication capability of the authentication unit, that is, the resource management unit of the first electronic device completes the resource aggregation among the devices. After the resources are summarized, any device in the device networking may obtain, from the resource management unit of the first electronic device, information of the acquisition unit or the authentication unit deployed by any other device in the device networking.
For example, referring to fig. 1B, the camera 12 in the device networking shown in fig. 1B actively reports the collected face biometric template to the smart television 13, and reports that its acquisition unit is a face acquisition unit and that this face acquisition unit is in an available state. Alternatively, the camera 12 actively reports indication information to the smart television 13, where the indication information indicates that the camera 12 stores a face biometric template, and reports that the face acquisition unit is in an available state. After receiving the reported information, the resource management unit 503 of the smart television 13 maintains or updates a resource pool, where the resource pool includes the face biometric template of the camera 12 (or indication information indicating that the face biometric template is stored in the camera 12) and the information that the face acquisition unit of the camera 12 is in an available state.
The method provided by the embodiments of the present application is exemplarily described below, and the embodiments are explained by taking the smart home system shown in fig. 1B as an example.
Example one
Fig. 7A to fig. 8 relate to the first embodiment. In the scenario shown in fig. 7A, the smart door lock, the camera, and the smart speaker are wirelessly connected. When the user returns home and performs a door opening operation, and the smart door lock receives the door opening operation, the smart door lock, the camera, and the smart speaker are triggered to cooperatively authenticate the identity of the user.
Referring to fig. 7B, a specific flow of the authentication method in this scenario includes the following steps.
And S701, the intelligent door lock receives the operation input by the user and collects the fingerprint of the user.
For example, as shown in fig. 7A, when the user returns home, the user performs a door opening operation. The door opening operation may specifically be the user's finger touching the fingerprint sensor on the door lock; the fingerprint acquisition unit (e.g., the fingerprint sensor) on the door lock collects the fingerprint entered by the user and triggers generation of an authentication request, where the authentication request is used to request identity authentication and includes the collected fingerprint. It should be noted that the following steps are described by taking the door opening operation as fingerprint unlocking and an authentication request including the fingerprint of the user as an example.
S702, the intelligent door lock sends an authentication request to the intelligent sound box, and the authentication request comprises the fingerprint of the user.
That is to say, in this scenario, the smart speaker is used as a central device for scheduling the networking device in the smart home system to perform the cooperative authentication on the identity of the user.
And S703, the intelligent sound box authenticates the fingerprint to generate an authentication result of the fingerprint.
Specifically, the smart speaker is provided with a fingerprint authentication unit. The fingerprint authentication unit in the smart speaker obtains a fingerprint template and authenticates the fingerprint with that template. If the fingerprint passes the authentication, the generated fingerprint authentication result includes information that the fingerprint passes the authentication; optionally, the authentication result may further include an identifier of the target user corresponding to the fingerprint that passes the authentication. If the fingerprint does not pass the authentication, the authentication result includes information that the fingerprint fails the authentication. Illustratively, with the smart speaker acting as the hub device, the fingerprint authentication unit in the smart speaker may perform fingerprint authentication by using an existing fingerprint template; if the authentication passes, the generated fingerprint authentication result includes that the authentication passes and that the target user matching the fingerprint in the authentication request is the householder Alisa.
S704, the smart speaker determines whether the fingerprint authentication result is that the authentication passes; if not, S714 is subsequently executed; if so, S705 is subsequently executed.
S705, when the fingerprint authentication result is that the authentication passes, the smart speaker may further determine the target user corresponding to the fingerprint and determine whether the target user has permission to execute the door opening service; if so, the unlocking function is executed, or S706 is executed; otherwise, S714 is executed.
Illustratively, the smart speaker may determine that the householder Alisa has permission to perform the door opening service by querying the resource pool, for example as shown in table 4 below.
TABLE 4
(Table 4 is provided as an image in the original publication and is not reproduced here.)
S706, using a preset decision policy, the smart speaker determines that the target authentication factor associated with the target user is the 3D face, that the target authentication capability associated with the target authentication factor is the 3D face authentication capability on the smart speaker, and that the target acquisition capability is the 3D face acquisition capability on the camera.
The specific content of the decision policy may refer to decision policies 1 to 4 of the decision unit 504 corresponding to fig. 5.
In detail, by querying the resource management unit and using a decision policy, the decision unit of the smart speaker can determine that the target authentication factor associated with the target user is the 3D face, that the target authentication unit related to the target authentication factor is the 3D face authentication unit, and that the target acquisition unit is the 3D face acquisition unit.
In an implementation manner, the resource management unit of the smart speaker may synchronize the acquisition capabilities and authentication capabilities of the smart door lock and the camera in advance according to the method steps shown in fig. 4. For example, after synchronization, the resource pool maintained by the resource management unit of the smart speaker includes the fingerprint acquisition capability of the smart door lock, the 3D face acquisition capability of the camera, and the 3D face authentication unit and fingerprint authentication unit of the smart speaker.
Illustratively, the smart speaker queries the resource management unit for the authentication factors associated with the householder Alisa, and the query result is shown in table 5.
TABLE 5
(Table 5 is provided as an image in the original publication and is not reproduced here.)
Further, the smart speaker, acting as the hub device, may determine according to table 6 that the risk level corresponding to the door opening operation is the high risk level.
TABLE 6
(Table 6 is provided as an image in the original publication and is not reproduced here.)
Then, according to the authentication factors associated with the householder Alisa, and given that the fingerprint of the user has already been collected by the fingerprint door lock in S701, the smart speaker uses the decision policy to preferentially keep the existing fingerprint authentication manner, superimposes the user-imperceptible 3D face acquisition and 3D face authentication manners on it, and finally determines to authenticate the identity of the user with the 3D face + fingerprint authentication combination corresponding to the high security level. That is, using the decision policy, the decision unit of the smart speaker determines that the target authentication factor associated with the target user is the 3D face, that the target authentication unit associated with the target authentication factor is the 3D face authentication unit, and that the target acquisition unit is the 3D face acquisition unit.
For example, the decision policy for the smart sound box to determine the authentication combination corresponding to the risk level requirement may include:
decision strategy 1: and a mapping table is established in advance, and a decision strategy of an authentication combination mode corresponding to the risk level requirement is determined according to the mapping table. The mapping table may be exhaustive by developers of all possible authentication combination modes, and corresponding risk levels and corresponding operations are configured for each authentication combination mode. The definition source and the distribution basis of the mapping table are as follows: a. an empirical value; b. experimental test results, such as testing each combination with a large data sample. The device integration mapping table may be preset in the device at the beginning of the development or downloaded from a server.
Decision strategy 2: the precondition can be set, that is, the lowest scores required by FAR and FRR are assigned for different risk levels of high, medium and low, then the sum of the scores of all combinations of currently available authentication modes is calculated according to a formula for summing the scores of various authentication modes, and the authentication combination mode larger than or equal to the lowest score is selected from the sum. For example: currently available authentication methods are: 2D face authentication, voiceprint authentication and fingerprint authentication, wherein different authentication modes have a quantized score for describing the credibility of authentication, namely the score of the voiceprint is 60; the score of the fingerprint is 80; the score of the 3D face is 95, and the authentication combination ways that can satisfy the lowest score by calculating the currently available combination ways are: authentication combination scheme 1) face + voiceprint; authentication combination scheme 2) face + fingerprint; authentication combination scheme 3) voiceprint + fingerprint. The assumption is a high risk operation: the lowest score is required to reach 160, so the decision policy decides to use authentication combination scheme 2) for authentication of high risk level services.
The smart speaker then queries table 7 below in the resource management unit, determines the available authentication unit and acquisition unit for the 3D face and the available authentication unit for the fingerprint, and finally determines, using the decision policy, that the target authentication factors are the fingerprint and the 3D face, that the target acquisition units are the fingerprint acquisition unit and the 3D face acquisition unit, and that the target authentication units are the fingerprint authentication unit and the 3D face authentication unit. For the example shown in fig. 7A, it is finally determined that the fingerprint + 3D face authentication combination is used: the fingerprint acquisition unit of the smart door lock collects the fingerprint, the 3D face acquisition unit of the camera collects the 3D face, the fingerprint authentication unit of the smart speaker authenticates the fingerprint, and the 3D face authentication unit of the smart speaker authenticates the 3D face.
TABLE 7
(Table 7 is provided as an image in the original publication and is not reproduced here.)
Optionally, when there are multiple available fingerprint acquisition units (or 3D face acquisition units), or multiple available fingerprint authentication units (or 3D face authentication units), the smart speaker filters out unsuitable acquisition units and/or authentication units according to the service scene and/or the position of the user. For example, because the current service is the door opening service, the fingerprint acquisition unit set on the door is preferred to collect the fingerprint of the user, or the camera set at the door is preferred to collect the 3D face of the user, instead of selecting indoor smart home devices to collect the fingerprint and the 3D face.
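The filtering by service scene mentioned above might look like the following sketch. The unit records, the "location" attribute, and the preference for units at the door are assumptions used for illustration.

```python
# Sketch of filtering candidate acquisition units by service scene.
units = [
    {"name": "door_lock_fingerprint_sensor", "factor": "fingerprint", "location": "door"},
    {"name": "phone_fingerprint_sensor", "factor": "fingerprint", "location": "indoor"},
    {"name": "door_camera", "factor": "3d_face", "location": "door"},
]

def filter_by_scene(units, factor, preferred_location):
    """Prefer units at the location implied by the service scene (here, the door)."""
    preferred = [u for u in units if u["factor"] == factor and u["location"] == preferred_location]
    return preferred or [u for u in units if u["factor"] == factor]

print(filter_by_scene(units, "fingerprint", "door"))
# [{'name': 'door_lock_fingerprint_sensor', 'factor': 'fingerprint', 'location': 'door'}]
```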
And S707, the intelligent sound box sends a collecting instruction to the camera, and the collecting instruction is used for indicating the collection of the 3D face.
And S708, the camera collects a face image.
And S709, the camera sends the collected face image to the intelligent sound box.
It should be noted that, in this method embodiment, the camera may actively report the collected 3D face to the smart speaker, or the smart speaker may actively acquire the 3D face from the camera.
And S710, the intelligent sound box authenticates the face image to generate an authentication result of the 3D face.
That is to say, the authentication unit of the 3D face of the smart speaker authenticates the 3D face collected by the camera by using the stored 3D face template, and generates an authentication result of the 3D face.
And S711, the intelligent sound box judges whether the authentication result of the face is that the face passes the authentication, if so, S712 is executed, and if not, S714 is executed.
And S712, when the face authentication is passed, the intelligent sound box sends an unlocking instruction to the intelligent door lock.
Illustratively, when the final authentication result is that the authentication is passed, the processor of the smart speaker sends a door opening instruction to the smart door lock, and when the final authentication result is that the authentication is failed, the processor of the smart speaker sends a door opening refusing instruction to the smart door lock.
And S713, controlling the door lock to be opened by the intelligent door lock according to the received unlocking instruction.
S714, when the fingerprint authentication fails, the target user does not have permission to execute the door opening service, or the face authentication fails, the smart speaker sends an instruction refusing to unlock to the smart door lock.
And S715, controlling the door lock not to be opened by the intelligent door lock according to the received unlocking refusing instruction.
It can be seen from the above embodiment that, because the door opening service belongs to a high security level, the smart door lock, the smart speaker, and the camera cooperatively authenticate the fingerprint and the 3D face of the user. Because the combined fingerprint and 3D face authentication results are more reliable, the security problem caused by performing identity authentication with the user's fingerprint alone can be mitigated, and the reliability of the authentication result is improved.
In a possible embodiment, in the above S701, when the smart door lock has the fingerprint authentication capability, the smart door lock may authenticate the fingerprint of the user first, generate an authentication result of the fingerprint, and perform the subsequent steps after the authentication is successful, otherwise, not perform the subsequent steps.
In a possible embodiment, if the fingerprint sensor of the smart door lock is damaged so that the fingerprint of the user cannot be collected normally, the smart speaker may authenticate the identity of the user based on the 3D face authentication alone, and unlock when the authentication passes.
In a possible embodiment, if the battery of the smart door lock is exhausted so that the fingerprint of the user cannot be collected normally, the user may choose to send the voice wake-up instruction "Xiaoyi, please open the door". As shown in fig. 7C, a microphone arranged at the door may send the collected wake-up voice of the user to the smart speaker, and the smart speaker performs voiceprint authentication. It should be noted that the microphone and the smart door lock are powered independently. Alternatively, the indoor smart speaker collects the wake-up voice of the user and performs voiceprint authentication. In addition, after the voiceprint authentication passes, the 3D face of the user may be collected according to the above method for 3D face authentication, and finally the smart door lock is instructed to open the door, or to refuse to open the door, according to the 3D face authentication result and the voiceprint authentication result.
In a possible embodiment, as shown in fig. 7D, if the camera at the door collects the faces of multiple users in a set area outside the door, and some of the faces pass the authentication while others fail, then in one possible case the smart speaker may directly instruct the smart door lock to unlock. In another possible case, to improve security, the smart speaker may further send the collected face image to the user's mobile phone for the user to confirm whether to unlock, or the smart speaker instructs the camera at the door to send the collected face image to the user's mobile phone. Illustratively, as shown in fig. 8, the mobile phone displays an interface 800 as shown in (a) of fig. 8, which displays a notification from the smart life application; the notification indicates that a door opening request has been received and asks the user to confirm whether to open the door. When the user taps to open the notification, the mobile phone displays an interface 810 as shown in (b) of fig. 8. The interface 810 displays an image 811 and a prompt box 812, and the prompt box includes a door opening control 813 and a rejection control 814. When the user determines from the image 811 that the current user is a trusted user, the user may tap the door opening control 813; otherwise, the user may tap the rejection control 814. This method can further improve the reliability of the authentication result and the security of the authentication result.
Example two
The second embodiment relates to fig. 9A and 9B. In the scenario shown in fig. 9A, a wireless connection is established between the smart television and the smart speaker. When the user operates the smart television and sends the voice wake-up instruction "Xiaoyi, open the weather application", identity authentication of the user is triggered.
Referring to fig. 9B, a specific flow of the authentication method in this scenario includes the following steps.
S901, the smart speaker collects the wake-up voice of the user, where the wake-up voice sent by the user is "Xiaoyi, open the weather application".
Illustratively, the smart speaker collects sounds in the surrounding environment in real time, and processes the sounds in real time to obtain the wake-up voice of the user.
S902, the smart speaker determines that the operation of opening the weather application is an operation with a low security risk level, and then determines, using a decision policy, that the authentication manner corresponding to the operation may be voiceprint authentication.
The specific content of the decision policy may refer to decision policies 1 to 4 of the decision unit 504 corresponding to fig. 5.
Specifically, in a possible case, the decision unit of the smart speaker maintains a preset mapping table, which may enumerate various possible authentication manners provided by developers, with a corresponding risk security level configured for each operation. For example, as shown in table 8 below, the mapping table configures the risk security level and the authentication trust level requirement corresponding to different wake-up voices or operations.
TABLE 8
(Table 8 is provided as an image in the original publication and is not reproduced here.)
As shown in table 8, when the user sends the wake-up voice of "open weather application", the risk security level corresponding to the service is the first level, and the authentication manner for the operation needs to meet the requirement of the authentication trust level of the first level, specifically, the authentication manner may select voiceprint authentication.
And S903, the intelligent sound box performs voiceprint authentication on the awakening voice of the user to generate a voiceprint authentication result.
Illustratively, the scheduling unit of the smart speaker schedules the voiceprint authentication unit of the smart speaker to perform voiceprint authentication on the wake-up instruction of the user according to the authentication mode determined by the decision unit, and generates an authentication result.
S904, the smart speaker determines whether the voiceprint authentication result is that the authentication passes; if the authentication passes, S906 is executed; if the authentication does not pass, S905 is executed.
And S905, when the authentication result is authentication failure, the intelligent sound box determines not to instruct the intelligent television to open the weather application.
Illustratively, the smart sound box may also reply to the user authentication failing by voice.
S906, when the authentication result is that the authentication is passed, the intelligent sound box indicates the intelligent television to open the weather application.
As can be seen, in this embodiment, when the operation of the user is an operation with a low security risk level, the electronic device may perform authentication in a voiceprint authentication manner corresponding to the security risk level of the operation.
EXAMPLE III
The third embodiment relates to fig. 10A and 10B. In the scenario shown in fig. 10A, a wireless connection is established among the smart television, the smart speaker, and the smart bracelet. When the user operates the smart television and sends the wake-up voice "Xiaoyi, open the gallery application", identity authentication of the user is triggered.
Referring to fig. 10B, a specific flow of the authentication method in this scenario includes the following steps.
S1001, the smart speaker collects the wake-up voice of the user, where the wake-up voice of the user is "Xiaoyi, open the gallery application".
S1002, the smart speaker determines that the operation of opening the gallery application is an operation with a medium security risk level, and then determines, using a decision policy, that the authentication manner corresponding to the operation may be the authentication combination of voiceprint authentication and pulse authentication.
The specific content of the decision policy may refer to decision policies 1 to 4 of the decision unit 504 corresponding to fig. 5.
Specifically, in a possible case, the decision unit of the smart speaker may determine, according to a maintained preset mapping table, that when the operation is a wake-up voice of "opening a gallery application", the risk security level corresponding to the operation is determined to be a third level, and an authentication manner for the operation needs to meet an authentication trust level requirement of the third level, and specifically, the authentication manner may select voiceprint authentication + pulse authentication.
And S1003, the intelligent sound box performs voiceprint authentication on the awakening voice of the user to generate a voiceprint authentication result.
Illustratively, the scheduling unit of the smart speaker instructs the voiceprint authentication unit of the smart speaker to perform voiceprint authentication on the wake-up instruction of the user according to the authentication mode determined by the decision unit.
S1004, the smart speaker authenticates the pulse of the user by using the pulse information acquired in advance from the smart bracelet, and generates a pulse authentication result.
Illustratively, the scheduling unit of the smart speaker acquires the acquired pulse information from the resource management unit, and authenticates the pulse of the user by using the authentication unit of the pulse in the smart speaker.
And S1005, the intelligent sound box aggregates the voiceprint authentication result and the pulse authentication result to generate a final authentication result.
Illustratively, the decision unit of the smart speaker acquires the voiceprint authentication result from the voiceprint authentication unit of the smart speaker, acquires the pulse authentication result from the pulse authentication unit, and aggregates the two authentication results to generate a final authentication result. The specific aggregation method may be that if any one or more authentication results fail, the authentication result does not pass; and if all the authentication results are authentication pass, the authentication result is authentication pass.
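The aggregation rule described above (the final result passes only when every individual result passes) can be written as a one-line check. The result encoding below is an assumption for illustration.

```python
# Sketch of the aggregation rule: all individual results must pass.
def aggregate(results):
    """Return True only when every individual authentication result is True."""
    return all(results.values())

print(aggregate({"voiceprint": True, "pulse": True}))   # True  -> authentication passes
print(aggregate({"voiceprint": True, "pulse": False}))  # False -> authentication fails
```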
S1006, the smart speaker determines whether the final authentication result is that the authentication passes; if the authentication passes, S1008 is executed; if not, S1007 is executed.
S1007, when the authentication result is authentication failure, the smart speaker determines not to instruct the smart television to open the gallery application.
Illustratively, the smart sound box may also reply to the user authentication failing by voice.
And S1008, when the authentication result is that the authentication is passed, the intelligent sound box indicates the intelligent television to open the gallery application.
Optionally, when the smart television receives an instruction to open the gallery application, the smart television may open and display the pictures stored in the local or external storage disk.
As can be seen, in this embodiment, when the operation of the user is an operation with a medium security risk level, the electronic device may perform authentication in an authentication combination manner of voiceprint authentication and pulse authentication corresponding to the security risk level of the operation.
Example four
The fourth embodiment relates to fig. 11A and 11B. In the scenario shown in fig. 11A, the smart television is externally connected to a storage disk that stores sensitive data of the user, and the smart television, the mobile phone, and the smart speaker are wirelessly connected. When the user operates the smart television and sends the voice wake-up instruction "Xiaoyi, open the safe application", identity authentication of the user is triggered.
Referring to fig. 11B, a specific flow of the authentication method in this scenario includes the following steps.
S1101, the smart speaker collects the wake-up voice of the user, where the wake-up voice of the user is "Xiaoyi, open the safe application".
The data in the application of the safe may come from a storage disk, and the storage disk may store sensitive data of the user, such as financial statements, physical examination results and the like.
And S1102, the intelligent sound box determines that the operation of opening the safe box application belongs to the operation with high security risk level, and then the intelligent sound box determines that the authentication mode corresponding to the operation can be the authentication combination mode of 3D face authentication and voiceprint authentication by utilizing a decision strategy.
The specific content of the decision policy may refer to decision policies 1 to 4 of the decision unit 504 corresponding to fig. 5.
Specifically, in a possible case, the decision unit of the smart speaker may determine, according to a maintained preset mapping table, that the risk security level corresponding to the wake-up voice operated as "opening the safe application" is a fourth level, and an authentication mode for the operation needs to meet an authentication trust level requirement of the fourth level, and specifically, the authentication mode may select 3D face authentication + voiceprint authentication.
And S1103, the intelligent sound box performs voiceprint authentication on the awakening voice of the user to generate a voiceprint authentication result.
Illustratively, the scheduling unit of the smart speaker instructs the voiceprint authentication unit of the smart speaker to perform voiceprint authentication on the wake-up instruction of the user according to the authentication manner determined by the decision unit; the voiceprint authentication unit of the smart speaker generates a voiceprint authentication result, and the decision unit of the smart speaker then obtains the voiceprint authentication result from the voiceprint authentication unit.
And S1104, the intelligent sound box indicates the intelligent television to collect the face image.
Illustratively, the scheduling unit of the smart sound box instructs a camera of the smart television to collect the face image according to the authentication mode determined by the decision unit.
S1105, the camera of the smart television collects the face image.
Illustratively, in one possible case, a camera of the smart television acquires a face image of the user without being perceived by the user; in another possible case, the smart television may display a prompt message on the display screen, where the prompt message is used to prompt the user to stand within a shooting range of the camera of the smart television, for example, stand in front of the smart television facing the smart television, so that the camera can accurately acquire a face image of the user.
And S1106, the smart television sends the acquired face image to the smart sound box.
S1107 the smart sound box obtains the face image from the smart television, authenticates the face of the user and generates the authentication result of the face.
Illustratively, the authentication unit of the face of the smart speaker authenticates the face image, and generates an authentication result of the 3D face.
And S1108, the intelligent sound box aggregates the voiceprint authentication result and the 3D face authentication result to generate a final authentication result.
Illustratively, the decision unit of the smart speaker acquires the voiceprint authentication result from the voiceprint authentication unit of the smart speaker, acquires the 3D face authentication result from the face authentication unit, and aggregates the two authentication results to generate a final authentication result. The specific aggregation method may be that if any one or more authentication results fail, the authentication result does not pass; and if all the authentication results are authentication pass, the authentication result is authentication pass.
S1109, the smart speaker determines whether the final authentication result is that the authentication passes; if the authentication passes, S1111 is executed; if not, S1110 is executed.
And S1110, when the authentication result is authentication failure, the intelligent sound box determines not to instruct the intelligent television to open the application of the safe.
Illustratively, the smart sound box may also reply that the user authentication failed and instruct the smart television to refuse to open the safe application.
S1111, when the authentication result is that the authentication is passed, the intelligent sound box indicates the intelligent television to open the application of the safe case.
As can be seen, in this embodiment, when the operation of the user is an operation with a high security risk level, the electronic device may perform authentication in an authentication combination manner of voiceprint authentication and 3D face authentication corresponding to the security risk level of the operation.
EXAMPLE five
The fifth embodiment relates to fig. 12A and 12B. In the scenario shown in fig. 12A, the microwave oven, the smart speaker, and the mobile phone are wirelessly connected. When the user operates the microwave oven and sends the voice wake-up instruction "Xiaoyi, microwave oven, heat on high for five minutes", identity authentication of the user is triggered.
Referring to fig. 12B, a specific flow of the authentication method in this scenario includes the following steps.
S1201, the smart speaker collects the wake-up voice of the user, where the wake-up voice of the user is "Xiaoyi, microwave oven, heat on high for five minutes".
S1202, the smart speaker determines that the operation of having the microwave oven heat on high for five minutes is an operation with a high security risk level, and then determines, using a decision policy, that the authentication manner corresponding to the operation may be the authentication combination of fingerprint authentication and voiceprint authentication.
The specific content of the decision policy may refer to decision policies 1 to 4 of the decision unit 504 corresponding to fig. 5.
Specifically, in a possible case, the decision unit of the smart speaker may determine, according to a maintained preset mapping table, that the risk security level corresponding to the operation related to the microwave oven is a fourth level, and an authentication mode for the operation needs to meet an authentication trust level requirement of the fourth level, and specifically, the authentication mode may select fingerprint authentication + voiceprint authentication.
S1203, the intelligent sound box conducts voiceprint authentication on the awakening voice of the user to generate a voiceprint authentication result.
Illustratively, the scheduling unit of the smart speaker instructs the voiceprint authentication unit of the smart speaker to perform voiceprint authentication on the wake-up instruction of the user according to the authentication manner determined by the decision unit; the voiceprint authentication unit of the smart speaker generates a voiceprint authentication result, and the decision unit of the smart speaker then obtains the voiceprint authentication result from the voiceprint authentication unit.
And S1204, the smart sound box indicates the mobile phone to collect the fingerprint of the user.
And S1205, the mobile phone receives the fingerprint input by the user.
Illustratively, the smart life application in the mobile phone displays prompt information to prompt a user to input a fingerprint, the user actively touches the fingerprint sensor after receiving the prompt, and the fingerprint sensor in the mobile phone acquires the fingerprint of the user.
Optionally, the step S1205 may also be executed by the smart speaker and the mobile phone, for example, the smart speaker sends a voice prompt to prompt the user to input a fingerprint on the mobile phone, then the user operates on the fingerprint sensor of the mobile phone, and the fingerprint acquisition unit in the mobile phone acquires the fingerprint of the user.
S1206, the mobile phone sends the collected fingerprints to the intelligent sound box.
S1207, the mobile phone authenticates the fingerprint input by the user and generates an authentication result of the fingerprint.
Illustratively, the fingerprint authentication unit in the mobile phone authenticates the received fingerprint, and generates an authentication result of the fingerprint.
And S1208, aggregating the voiceprint authentication result and the fingerprint authentication result by the intelligent sound box to generate a final authentication result.
S1209, the smart speaker determines whether the final authentication result is that the authentication passes; if the authentication passes, S1211 is executed; otherwise, S1210 is executed.
S1210, when the authentication result is authentication failure, the smart speaker determines not to instruct the microwave oven to start heating.
Illustratively, the smart speaker may also respond with a voice that the user authentication failed and no indication is made to the microwave.
S1211, when the authentication result is that the authentication passes, the smart speaker instructs the microwave oven to start heating on high for five minutes.
As can be seen, in this embodiment, when the operation of the user is an operation with a high security risk level, the electronic device may perform authentication in an authentication combination manner of voiceprint authentication and fingerprint authentication corresponding to the security risk level of the operation.
In a possible embodiment, after the voiceprint authentication in S1203 passes and before the subsequent steps are executed, it may further be determined whether the user corresponding to the voiceprint has permission to operate the microwave oven; if so, the subsequent steps are executed, otherwise they are not. For example, if the user corresponding to the voiceprint that passes the authentication is the householder Alisa, the subsequent steps may be executed; if the user corresponding to the voiceprint that passes the authentication is a child, the subsequent steps are not executed, and the microwave oven is instructed to refuse to start heating, or the user is prompted that the authentication failed.
In a possible embodiment, the acquisition/authentication capabilities on insecure devices may be further filtered out, taking into account the security level and current security state of each device. Assume that, in the scenario shown in the fifth embodiment, the acquisition capability, authentication capability, and current device state of the smart speaker and the mobile phone are as shown in table 9.
TABLE 9
| Unit name | Authentication method | Current state | Device | Current device state |
| --- | --- | --- | --- | --- |
| Voiceprint authentication unit | Voiceprint | Available | Smart speaker | Trojan threat |
| Voiceprint authentication unit | Voiceprint | Available | Mobile phone | Secure |
| Fingerprint authentication unit | Fingerprint | Available | Mobile phone | Secure |
As can be seen from table 9, the security state of the smart speaker does not satisfy the authentication requirement because of the trojan threat, so the voiceprint authentication unit on the smart speaker is filtered out, and both the voiceprint and the fingerprint are authenticated on the mobile phone. In that case, the above S1203 may be: the mobile phone acquires the collected wake-up voice from the smart speaker, and the voiceprint authentication unit of the mobile phone performs voiceprint authentication on the wake-up voice to generate a voiceprint authentication result. Optionally, the above S1207 may be: the mobile phone acquires the fingerprint authentication result from its fingerprint authentication unit and the voiceprint authentication result from its voiceprint authentication unit, and aggregates the voiceprint authentication result and the fingerprint authentication result to generate a final authentication result.
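The filtering by device security state in the table 9 scenario can be sketched as follows; the record layout and state labels are illustrative assumptions.

```python
# Sketch of filtering out authentication units hosted on devices whose current
# security state does not meet the requirement (table 9 scenario).
units = [
    {"unit": "voiceprint_authentication", "device": "smart_speaker", "device_state": "trojan_threat"},
    {"unit": "voiceprint_authentication", "device": "mobile_phone", "device_state": "secure"},
    {"unit": "fingerprint_authentication", "device": "mobile_phone", "device_state": "secure"},
]

def filter_secure(units):
    """Keep only units hosted on devices currently considered secure."""
    return [u for u in units if u["device_state"] == "secure"]

print(filter_secure(units))  # only the units on the mobile phone remain
```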
In a possible embodiment, unsuitable collection/authentication capabilities may be further filtered out according to the service scenario or the user's current location. Assume that in the scenario shown in example five, the collection capability, authentication capability, and current device location of the smart speaker and the mobile phone are as shown in Table 10.
TABLE 10
Unit name | Authentication method | Current state | Device where the unit is located | Current device location
Voiceprint authentication unit | Voiceprint | Available | Smart speaker | Living room
Voiceprint authentication unit | Voiceprint | Available | Mobile phone | Bedroom
Fingerprint authentication unit | Fingerprint | Available | Mobile phone | Bedroom
As can be seen from Table 10, if the user is currently in the bedroom, the smart speaker is relatively far from the user. Based on the user's current location, the mobile phone can be selected both to collect the user's voiceprint and to collect the user's fingerprint, so that fingerprint authentication and voiceprint authentication are finally completed on the mobile phone.
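A similar sketch for the location-based selection of Table 10; the dictionary keys and the fallback behaviour are assumptions made for illustration.

```python
def select_by_location(units, user_location):
    """Prefer collection/authentication units whose host device is in the same
    room as the user; fall back to all units if none is nearby."""
    nearby = [u for u in units if u["device_location"] == user_location]
    return nearby if nearby else units

units = [
    {"method": "voiceprint", "device": "smart speaker", "device_location": "living room"},
    {"method": "voiceprint", "device": "mobile phone", "device_location": "bedroom"},
    {"method": "fingerprint", "device": "mobile phone", "device_location": "bedroom"},
]
chosen = select_by_location(units, "bedroom")  # only the mobile phone's units are selected
```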
It should be understood that in this embodiment the locations of the devices and of the user need to be determined in advance. One possible way to determine a device's location is for the user to actively mark the location of a fixed device, such as a smart television, a smart speaker, or a camera; when the user starts using the device, its location can be marked in an application.
In addition, there are various ways to determine the user's position. One possibility is that an image acquisition device is provided with a user position detection module, which updates the user's current position according to images acquired in real time. For example, if the baby room has a monitoring camera, images can be acquired there in real time, and when that camera captures the user, the user's current position is updated to the baby room. Likewise, if the smart screen in the living room has a camera and captures the user, the user's current position is updated to the living room. In another possible case, because a bedroom may not have a camera, if the smart screen does not detect the user, the user can be prompted to come to the living room for authentication when authentication is initiated.
Alternatively, another way to determine the user's position is that a user position detection module in an electronic device continuously or periodically senses user signals, and when a user is sensed, the current position of that device is taken as the user's current position.
Based on the above embodiments, it can be seen that the authentication method provided in the embodiments of the present application allows at least two authentication factors provided by two electronic devices to jointly authenticate the same service; that is, multiple authentication factors on multiple electronic devices are superimposed to authenticate one service. For example, for a high-security-level door-opening service, a camera and a door lock are called to cooperatively perform face authentication and fingerprint authentication, which ensures the reliability of the authentication result and raises the authentication security level of the devices.
Implementation mode two
Based on the steps shown in fig. 3: in S301, the receiving of an authentication request by the first electronic device includes the first electronic device receiving a target operation, where the target operation is used to trigger generation of the authentication request. In S302, the first electronic device may determine the authentication manner corresponding to the first service as follows: the first electronic device queries the correspondence between the target operation and a target security value and determines the target security value required to execute the target operation, where the target operation is used to trigger execution of the first service; the first electronic device then determines M1 authentication devices, where M1 is a positive integer, the M1 authentication devices are devices with the capability of authenticating user information, and the M1 authentication devices are included in the M electronic devices. In S303, the first electronic device schedules the M electronic devices to authenticate the first service according to the authentication manner as follows: the first electronic device obtains an authentication result from at least one of the M1 authentication devices; it determines a total authentication security value according to the correspondence between the authentication method and the authentication security value of the at least one authentication device and according to the authentication result; if the total authentication security value is not less than the target security value, the authentication passes, otherwise it does not. Optionally, when the authentication passes, the method further includes: the first electronic device triggers the operating device to execute the target operation.
In the conventional technology, a user may perform some relatively sensitive operations (such as unlocking or payment) that generally require strict identity authentication. If the device can only authenticate the user with a weak authentication method of its own (such as 2D face recognition), the security of the authentication result is likely to be low. Compared with this, the authentication method of implementation mode two triggers the operating device to execute the target operation only when the total authentication security value is not less than the target security value required for the target operation, so the first service receives authentication at the required identity-authentication level and the security of identity authentication is improved.
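The overall flow of implementation mode two can be sketched as follows. This is an illustrative Python sketch only: the operation names, the security values, and in particular the aggregation rule (example four's sum-and-scale rule, since the patent's own formulas are reproduced as images) are assumptions.

```python
# Hypothetical correspondence between operations and required security values (cf. Table 13).
OPERATION_SECURITY_VALUES = {
    "micropayment (< 300 yuan)": 85,
    "unlock screen": 75,
    "identify user type": 20,
}

def authenticate_target_operation(operation, auth_results):
    """auth_results: list of (authentication security value, passed) per device/method.
    The target operation may be triggered only when the total value reaches the target."""
    target = OPERATION_SECURITY_VALUES[operation]
    passed_values = [value for value, ok in auth_results if ok]
    # Placeholder aggregation rule (see example four later): sum and scale by a coefficient.
    total = sum(passed_values) * 0.95
    return total >= target

# A 76-point 2D-face result plus a 20-point voiceprint result satisfies an 85-point target.
ok = authenticate_target_operation("micropayment (< 300 yuan)", [(76, True), (20, True)])
```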
It should be noted that, for the sake of understanding, some concepts related to the second implementation will be described below:
(a) Correspondence between the root key storage environment of an authentication device and the score of the root key storage environment
The root key may refer to the key that encrypts the stored authentication credentials. The higher the security level of the root key storage environment, the higher the score of the root key storage environment of the authentication device.
The root key storage environment of an authentication device may be: a hardware secure element (inSE) level, a trusted execution environment (TEE) level, a white box, or key segmentation.
The white box in the embodiment of the application refers to white-box cryptography. Its main design idea is to obfuscate the cryptographic algorithm so that an attacker cannot learn the specific operation flow of the algorithm. The root key can be hidden in the software that implements the white-box cryptography, and the whole algorithm execution is represented by lookup tables, so the attacker cannot obtain any information about the root key from the software or from the cryptographic operation flow, which effectively protects the root key.
In the key segmentation technology in the embodiment of the application, the key components that form the root key are stored in the system in a scattered manner, and the root key is dynamically generated from the key components only when needed. All key components are required to generate the root key, and each key component is stored independently in its own logical entity. This approach avoids "hard coding" of the root key and can guarantee the security of the root key to a certain extent. A minimal illustrative sketch is given below.
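As an illustration of the key-segmentation idea (not the patent's concrete scheme), the root key can be split into XOR shares that are stored separately and recombined only when the key is needed; every share is required.

```python
import secrets

def xor_all(parts):
    out = bytes(len(parts[0]))
    for p in parts:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

def split_key(root_key, n_shares):
    """Split the root key into n_shares components; all of them are needed to recover it."""
    shares = [secrets.token_bytes(len(root_key)) for _ in range(n_shares - 1)]
    last = bytes(a ^ b for a, b in zip(root_key, xor_all(shares)))
    return shares + [last]

def recover_key(shares):
    """Dynamically regenerate the root key from all of its components."""
    return xor_all(shares)

root_key = secrets.token_bytes(32)
components = split_key(root_key, 3)   # store each component in a separate logical entity
assert recover_key(components) == root_key
```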
Table 11 Correspondence between the root key storage environment of a terminal device and the score of the root key storage environment
Root key storage environment | Score of root key storage environment (points)
inSE level | 100
TEE level | 90
White box | 20
Key segmentation | 10
(b) Correspondence between authentication device, authentication method, and authentication security value
In the embodiment of the present application, a correspondence between authentication device, authentication method, and authentication security value may be established based on the score of the root key storage environment of the authentication device and the score of the authentication method. Table 12 illustrates an example of this correspondence. The authentication security value may be calculated according to a first calculation rule; in one possible embodiment, the first calculation rule is a weighted addition of the score of the root key storage environment and the score of the authentication method. In Table 12, the weight of the root key storage environment score is 0.3 and the weight of the authentication method score is 0.7.
Table 12 correspondence between authentication device, authentication method, and authentication security value
[Table 12 is reproduced as an image in the original publication and is not available in this text. From the examples discussed later, 2D face recognition with a TEE root key storage environment corresponds to an authentication security value of 76 points, and voiceprint recognition on the smart speaker corresponds to 20 points.]
The authentication security value in the embodiment of the present application may be understood as the authentication security level of the authentication device: the higher the authentication security value corresponding to an authentication device and authentication method, the higher the authentication security level of that device when it uses that method; conversely, the lower the value, the lower the level.
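A minimal sketch of the first calculation rule with the 0.3/0.7 weights of Table 12. The per-method scores are assumptions: a 2D-face score of 70 is chosen only because, combined with the TEE score of 90, it reproduces the 76 points quoted later for the smart television; the other scores are purely illustrative.

```python
ROOT_KEY_ENV_SCORES = {"inSE": 100, "TEE": 90, "white box": 20, "key segmentation": 10}
AUTH_METHOD_SCORES = {"2D face": 70, "voiceprint": 20, "fingerprint": 80}  # assumed values

def authentication_security_value(root_key_env, method, w_env=0.3, w_method=0.7):
    """First calculation rule: weighted addition of the two scores."""
    return w_env * ROOT_KEY_ENV_SCORES[root_key_env] + w_method * AUTH_METHOD_SCORES[method]

tv_value = authentication_security_value("TEE", "2D face")  # 0.3*90 + 0.7*70 = 76 points
```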
(c) Operating device
In the embodiment of the present application, the terminal device that receives the user's requested operation is referred to as the operating device.
The operating device may or may not have the capability of authenticating user information.
For example, if a user requests a payment operation on a smart television, the apparatus provided in the embodiment of the application may call the smart television to perform face recognition on the user. In this example, the smart television is the operating device, and because it also needs to perform 2D face recognition on the user, it is an authentication device as well.
(d) Correspondence between operation and security value
The correspondence between an operation and a security value indicates the security value required to execute that operation. An operation is considered successfully authenticated, and may then be executed, only when the obtained authentication security value, or the total authentication security value calculated from the obtained authentication security values, is not less than the security value required by the operation; otherwise authentication fails and the operation cannot be executed.
In a possible embodiment, the correspondence between operations and security values may be preset. Specifically, when the apparatus provided in the embodiment of the present application is a server deployed in the cloud, the correspondence may be stored in that server; when the apparatus is a router, it may be stored in the router; and when the apparatus is a terminal device, it may be stored in the terminal device. In another possible embodiment, the correspondence may also be set by the user.
The security value may be a score or a rating. Table 13 illustrates the correspondence between several operations and security values, taking the security value as a score. As shown in Table 13, higher security values (e.g., 95, 90, or 85 points) may be set for highly sensitive operations such as payment and unlocking; moderate security values (e.g., 75 points) may be set for moderately sensitive operations such as unlocking the screen or logging in to an account; and a lower security value (e.g., 20 points) may be set for operations such as identifying the type of user.
In one application scenario, before the user watches the smart television, the smart television needs to perform an "identify the user type" operation (which may be requested actively by the user or triggered automatically after the television is turned on) so that it can play content according to the user type. For example, when the user is a parent, a television program list for parents is provided and the playing time is not limited; when the user is a child, a program list for children is provided and playback stops automatically after 20 minutes. Compared with payment or unlocking, this operation does not require a high security level, so a lower security value such as the 20 points in Table 13 may be set. As shown in Table 13, a micropayment operation may be defined as a payment operation with an amount of less than 300 yuan, and a large-amount payment operation as one with an amount of not less than 300 yuan.
TABLE 13 correspondence of operations to security values
[Table 13 is reproduced as images in the original publication and is not available in this text. From the examples in the text, a micropayment operation (amount less than 300 yuan) corresponds to 85 points and the "identify the user type" operation corresponds to 20 points.]
The authentication method provided by the embodiment of the present application is introduced below with reference to an actual application scenario.
Based on the above, fig. 13 shows a flowchart of an authentication method provided in an embodiment of the present application. Fig. 13 is described below with reference to a scenario in which a user requests a micropayment operation on a smart television. Fig. 13 illustrates an authentication scheme in which the apparatus provided in the embodiment of the present application acts as the server and the operating device has authentication capability; because the operating device has authentication capability, it is also identified as authentication device b2 in fig. 13.
It should be noted that the authentication scheme shown in fig. 13 is described taking the apparatus provided in the embodiment of the present application as the server, so the execution subject "server" in the description of fig. 13 below may be replaced with the "apparatus" provided in the embodiment of the present application. When the apparatus provided in the embodiment of the present application is a router, the scheme it executes is similar to the scheme executed when the apparatus is a server; the execution subject "server" in fig. 13 only needs to be replaced with "router", which is not described in detail here.
As shown in fig. 13, the method includes:
S1300, the operating device receives a request from a user to perform a target operation on the operating device.
In the scenario where the user requests a micropayment operation on a smart television, the operating device in S1300 is the smart television. As shown in fig. 15A, the user wants to watch a drama on the smart television, and the video application of the smart television is in child mode, so the smart television displays "This drama requires payment; 10 yuan buys the complete collection. Purchase now?". If the user clicks "Buy now", the smart television receives a request to execute a "10-yuan payment operation". In other words, in this scenario the target operation is the "10-yuan payment operation".
S1301, the operation equipment generates a first authentication request and sends the first authentication request to the server. The first authentication request is used for requesting the server to authenticate the target operation. The first authentication request may include first indication information indicating a target operation.
Correspondingly, the server receives a first authentication request.
In the scenario where the user requests a micropayment operation on the smart television, in S1301 the smart television sends a first authentication request to the server, where the first authentication request is used to request authentication of the "10-yuan payment operation".
S1302, the server determines the security value corresponding to the target operation according to the preset correspondence between operations and security values. For convenience of description in the embodiments of the present application, the security value corresponding to the target operation is referred to as the target security value. The preset correspondence between operations and security values may be as shown in Table 13 above.
In the scenario where the user requests a micropayment operation on the smart television, the target operation is the "10-yuan payment operation". If Table 13 defines the security value of a micropayment operation of less than 300 yuan as 85 points, then by querying the preset correspondence between operations and security values the server can determine, according to Table 13, that the security value corresponding to the "10-yuan payment operation" (that is, a "micropayment operation of less than 300 yuan" in Table 13) is 85 points.
S1303, when the operating device supports authentication of user information, the server determines one or more authentication security values corresponding to the operating device according to the correspondence between authentication device, authentication method, and authentication security value.
Optionally, the operating device may send its supported authentication methods and its root key storage environment to the server before S1303, so that the server can establish the correspondence between the operating device, the authentication methods, and the authentication security values. In one possible implementation, the operating device sends its supported authentication methods and root key storage environment to the server when it first accesses the network, so that the server can store them. In another possible embodiment, after S1302 and before S1303, that is, after receiving the first authentication request, the server may ask the operating device whether it has authentication capability; if it does, the operating device sends its supported authentication methods and root key storage environment to the server. Illustratively, the operating device (the smart television) has 2D face recognition capability and reports its capability to the server before S1303.
It should be noted that one operating device may have one or more authentication capabilities; accordingly, the server may obtain all authentication methods supported by the operating device and determine the authentication security value corresponding to each.
In the scenario where the user requests a micropayment operation on the smart television, the smart television sends its supported authentication method and root key storage environment to the server before S1303. Therefore, in S1303 the server can determine that the authentication method adopted by the smart television is 2D face recognition, that its root key storage environment is the TEE, and that the corresponding authentication security value is 76 points.
S1304, the server determines whether the authentication security value corresponding to the operating device is smaller than the target security value.
In S1304, the following cases are classified.
In the first case, the operating device supports an authentication mode.
In this case, if it is found that the authentication security value corresponding to the operating device is smaller than the target security value, S1305 is executed. If not, S1416 in fig. 14 (which will be described later and will not be described again here) may be executed.
In the second case, the operating device supports multiple authentication modes.
In a first possible implementation, when the operating device supports multiple authentication methods, if the maximum of all authentication security values corresponding to the operating device is smaller than the target security value, it can be determined that the authentication security value corresponding to the operating device is smaller than the target security value, and S1305 is executed. If it is not smaller, that is, at least one of the authentication methods supported by the operating device can satisfy the authentication of the target operation, S1416 in fig. 14 (described below) may be executed.
In a second possible implementation, when the operating device supports multiple authentication methods, the server may combine the multiple authentication methods supported by the operating device and calculate a total value from the corresponding authentication security values; if the result is smaller than the target security value, it can be determined that the authentication security value corresponding to the operating device is smaller than the target security value, and S1305 is executed. If it is not smaller, S1416 in fig. 14 may be executed. The calculation over multiple authentication security values is discussed in S1310 below and is not described here.
In a third possible implementation, when the operating device supports multiple authentication methods and these include both a password authentication method and biometric authentication methods, the total value of the authentication security values corresponding to the biometric authentication methods of the operating device may be calculated; if the result is smaller than the target security value, it can be determined that the authentication security value corresponding to the operating device is smaller than the target security value, and S1305 is executed. If it is not smaller, S1416 in fig. 14 may be executed. In this way the user can be authenticated with biometric methods alone, the user is not required to enter a password, and the convenience of user authentication is improved. The calculation over multiple authentication security values is discussed in S1310 below. Further, in this case, if the operating device is later determined in S1306 to be one of the M1 authentication devices, indication information indicating a biometric authentication method of the operating device may be carried in the second authentication request sent to the operating device, so that the operating device authenticates the user information using only the biometric authentication method indicated in the second authentication request.
In a possible embodiment, the server may not perform the above S1303 to S1304, that is, after receiving the first authentication request of S1301, the server directly performs S1305.
Taking the first case as an example, in the scenario where the user requests a micropayment operation on the smart television, in S1304 the smart television supports only one authentication method, 2D face recognition, and its authentication security value of 76 points is less than the target security value of 85 points, so S1305 is executed.
S1305, the server determines M1 authentication devices, where M1 is a positive integer and the M1 authentication devices may be all or some of the M electronic devices in the embodiment of the method shown in fig. 3.
Optionally, each authentication device may send its supported authentication methods and its root key storage environment to the server before S1305, so that the server can establish the correspondence between authentication device, authentication method, and authentication security value (as in Table 12 above). In one possible implementation, each authentication device sends a first message to the server; the first message reported by a first authentication device carries indication information indicating the authentication methods supported by that device, where the M1 authentication devices include the first authentication device (that is, one of the M1 authentication devices is referred to as the first authentication device). For example, each authentication device may send its supported authentication methods to the server when it first accesses the network (each device reporting one first message), and optionally each device may also report its root key storage environment to the server (for example, carried in the same first message), so that the server stores it. In another possible embodiment, after S1302 and before S1305, the apparatus may send a query request to the first authentication device to ask which authentication methods it supports, and the apparatus receives a query response carrying indication information indicating the supported authentication methods. Optionally, after receiving the first authentication request, the server may query each authentication device about the authentication methods it supports and may also query each authentication device about its root key storage environment (in this example the query request is also used to ask about the root key storage environment). Illustratively, the authentication device (the smart speaker) has voiceprint recognition capability and has reported its capability to the server before S1305. A sketch of such a capability report is given below.
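The capability report might look like the following sketch; the JSON field names, the registry structure, and the smart speaker's root key environment shown here are assumptions introduced for illustration.

```python
import json

def build_first_message(device_id, methods, root_key_env):
    """First message an authentication device reports when it first accesses the network."""
    return json.dumps({
        "device_id": device_id,
        "supported_authentication_methods": methods,   # e.g. ["voiceprint"]
        "root_key_storage_environment": root_key_env,  # e.g. "TEE", "white box"
    })

def register(registry, message):
    """Server side: store the reported capabilities so that a Table-12-style
    correspondence can be built later."""
    info = json.loads(message)
    registry[info["device_id"]] = info

registry = {}
register(registry, build_first_message("smart_speaker", ["voiceprint"], "white box"))
register(registry, build_first_message("smart_tv", ["2D face"], "TEE"))
```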
In S1305, in one possible implementation, the M1 authentication devices may be all the authentication devices in the communication reachable state that the current server can search for. For example, if there are K authentication devices registered in advance on the server, where K is an integer not less than M1, then M1 authentication devices are part or all of the K authentication devices. There are various ways to select M1 authentication devices from the K authentication devices, which will be described in detail later and will not be described here.
In the scenario where the user requests a micropayment operation on the smart television, before S1305 the server has already obtained the authentication method and root key storage environment of the smart television as well as those of the smart speaker. In S1305, the authentication devices in a communication-reachable state that the server can currently discover are the smart speaker and the smart television (other devices may be powered off, damaged, and so on). In this case, the server may determine the smart television and the smart speaker as the M1 authentication devices and execute S1306. In fig. 13, authentication device b1 is the smart speaker and authentication device b2 is the smart television.
S1306, the server transmits a second authentication request to each of the M1 authentication devices. The second authentication request is used for requesting the authentication device to authenticate the user information.
Correspondingly, each of the M1 authentication devices receives the second authentication request sent by the server.
For a second authentication request received by an authentication device that supports multiple authentication methods, in one possible implementation the server may determine the authentication method to be adopted by that device and carry indication information indicating that method in the second authentication request, so that the device authenticates using the indicated method. How the server determines the authentication method adopted by an authentication device is described later and is not repeated here.
In another possible embodiment, the second authentication request does not carry indication information indicating an authentication method; the authentication device either decides by itself which method to use or authenticates with all the methods it supports. The way the authentication device decides which method to use is similar to the way, described above, in which the server determines the method to be used by the device. Optionally, the authentication device may store the calculation rule for the authentication security value, so that it can calculate its own authentication security value from that rule, its authentication method, and its root key storage environment. In the embodiment of the present application, after the hardware and/or software of the authentication device is updated, the authentication security value or the calculation rule stored on the device may be updated accordingly.
In combination with a scenario that a user requests to perform a small payment operation on the smart television, for example, the server sends second authentication requests to the smart speaker and the smart television respectively, and the second authentication requests do not carry indication information for indicating an authentication mode.
S1307, the authenticating device that receives the second authentication request sent by the server authenticates the user information, and generates a second authentication response.
In S1307, an authentication device first acquires the user information and then authenticates it. There are various ways for an authentication device to acquire user information: it may collect the information itself, for example by using the camera of the smart television to collect the user's face information, or the server may schedule a camera in the room to collect the user's face information and send it to the smart television.
In the scenario where the user requests a micropayment operation on the smart television, as shown in fig. 15B, the smart television receives the second authentication request sent by the server. The request does not carry indication information indicating an authentication method, so the smart television decides which method to use. Because it supports only 2D face recognition, it authenticates the user with 2D face recognition and displays a prompt on its screen asking the user to look at the camera. When the user looks at the camera of the smart television, the face information collected by the camera is shown on the display. Face information for authentication is pre-stored on the smart television; the collected face information is compared with the stored information, and authentication succeeds if the comparison succeeds, otherwise it fails. After authentication is completed, the smart television generates a second authentication response indicating whether authentication succeeded or failed.
Meanwhile, the smart speaker also receives the second authentication request sent by the server. Because the request does not carry indication information indicating an authentication method, the smart speaker decides which method to use. Since it supports only voiceprint recognition, it authenticates the user by voiceprint: it plays the prompt "Please confirm whether you agree to the payment", and the user may answer "agree". The smart speaker then authenticates the collected voice "agree" in two respects: on one hand, it checks that the user answered "agree" rather than "disagree"; on the other hand, it determines whether the user's voiceprint matches the voiceprint information the user pre-stored on the smart speaker for authentication. If the voiceprint matches and semantic analysis determines that the input is "agree", authentication succeeds; otherwise it fails. After authentication is completed, the smart speaker generates a second authentication response indicating whether authentication succeeded or failed.
S1308, each of the M1 authentication devices returns a second authentication response to the server. The second authentication response may carry an identifier of the authentication device, the authentication method it used, and indication information indicating whether authentication succeeded. Optionally, when the authentication device supports only one authentication method, the second authentication response need not carry indication information indicating the method it used. Optionally, when the second authentication request already indicated the authentication method to use, the second authentication response returned by the device also need not indicate the method it used.
Each of the M1 authentication devices may return one or more second authentication responses (for example, a device that authenticates with two methods may return a single response carrying both authentication results, or two responses each carrying the result of one method), or only some of the M1 authentication devices may return second authentication responses, for example because a device's link fails or for other reasons.
In a possible embodiment, the second authentication response is sent to the apparatus after the authentication device completes authentication of the user information; in this case, the authentication result indicated in the response may be either success or failure.
In another possible embodiment, the second authentication response is sent to the apparatus after the authentication device has failed to authenticate the user information within a predetermined time; in this case, the authentication result indicated in the response is authentication failure. The predetermined time may be a period starting from receipt of the second authentication request, such as 10 seconds or 2 minutes. If the authentication device has not authenticated the user information within 10 seconds or 2 minutes of receiving the second authentication request, it determines the result to be authentication failure and sends a second authentication response indicating failure to the apparatus.
In another possible scenario, the authentication device receives the second authentication request but the user does not complete identity authentication on the device, or the device does not authenticate the user information within the preset time; the device then either does not return a second authentication response or returns one indicating that authentication failed.
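The predetermined-time behaviour can be sketched as follows; the polling loop, timeout value, and response fields are illustrative assumptions.

```python
import time

def handle_second_authentication_request(authenticate, timeout_s=10.0):
    """authenticate() returns True/False once the user finishes, or None while pending.
    On timeout, the result reported in the second authentication response is failure."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = authenticate()
        if result is not None:
            return {"authenticated": result}
        time.sleep(0.2)
    return {"authenticated": False, "reason": "timeout"}
```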
In combination with the scenario that the user requests to perform the micropayment operation on the smart television, in S1307, taking as an example that both the smart television and the smart speaker are successfully authenticated, the second authentication response returned by the smart speaker may include: and the indication information is used for indicating the successful authentication and the identification of the intelligent loudspeaker box. Optionally, the second authentication response returned by the smart sound box may further include: and the indication information is used for indicating the intelligent sound box to adopt the voiceprint recognition authentication mode. The second authentication response returned by the smart television may include: indication information used for indicating the success of the authentication and identification of the intelligent television. Optionally, the second authentication response returned by the smart television may further include: and the indication information is used for indicating the intelligent television to adopt the authentication mode of 2D face recognition.
For example, if the authentication manner adopted by each authentication device has been specified in the second authentication request, the second authentication response may not carry indication information for indicating the authentication manner adopted by the authentication device. Namely, the second authentication response returned by the smart sound box includes: and the indication information is used for indicating the successful authentication and the identification of the intelligent loudspeaker box. The second authentication response returned by the smart television comprises: indication information used for indicating the success of the authentication and identification of the intelligent television.
In one possible implementation of S1308, the second authentication response returned by an authentication device may itself carry the authentication security value corresponding to that device. In this case, the correspondence between the authentication method and the authentication security value, or the calculation rule relating them, needs to be stored on the authentication device, and the server does not need to execute S1309 below but proceeds directly to S1310. For example, the second authentication response returned by the smart speaker would be: authentication succeeded, the identifier of the smart speaker, and an authentication security value of 20 points; the second authentication response returned by the smart television would be: authentication succeeded, the identifier of the smart television, and an authentication security value of 76 points. The format of the message in the second authentication response is not limited in the embodiment of the present application.
In the case where the authentication security value is not carried in the second authentication response, the server performs S1309.
S1309, the server determines, according to the preset correspondence between the authentication devices, the authentication manner, and the authentication security values, and the second authentication response, the authentication security values corresponding to the M1 authentication devices.
For a second authentication response returned by an authentication device: if the response indicates that authentication succeeded, the authentication security value corresponding to that device is determined according to the preset correspondence between authentication device, authentication method, and authentication security value; if the response indicates that authentication failed, the authentication security value corresponding to that device is 0 points. If an authentication device returns no second authentication response at all, its authentication security value is likewise 0 points.
In the scenario where the user requests a micropayment operation on the smart television, in S1309 the server learns from the second authentication response returned by the smart speaker that the voiceprint authentication succeeded, and can then look up in Table 12 that the authentication security value corresponding to the smart speaker is 20 points. The server learns from the second authentication response returned by the smart television that the 2D face recognition succeeded, and can look up in Table 12 that the authentication security value corresponding to the smart television is 76 points.
In a possible variant, if in S1308 the second authentication response returned by any of the M1 authentication devices already carries the authentication security value corresponding to the authentication method that device adopted, the server can obtain the authentication security values of the M1 authentication devices directly and execute S1310 without querying the preset correspondence between authentication device, authentication method, and authentication security value.
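A sketch of S1309 under the assumption that a Table-12-style (device, method) to value mapping is available on the server; the device identifiers and values shown are illustrative.

```python
SECURITY_VALUE_TABLE = {
    ("smart_tv", "2D face"): 76,
    ("smart_speaker", "voiceprint"): 20,
}

def security_values_from_responses(responses):
    """A failed authentication, or a device that never responds, contributes 0 points;
    a successful one contributes the value from the correspondence table."""
    values = []
    for resp in responses:
        if resp.get("authenticated"):
            values.append(SECURITY_VALUE_TABLE.get((resp["device_id"], resp["method"]), 0))
        else:
            values.append(0)
    return values

values = security_values_from_responses([
    {"device_id": "smart_speaker", "method": "voiceprint", "authenticated": True},
    {"device_id": "smart_tv", "method": "2D face", "authenticated": True},
])  # -> [20, 76]
```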
S1310, the server calculates a total authentication security value according to the authentication security values corresponding to the M1 authentication devices.
In S1310, the server may calculate a total authentication security value according to a second calculation rule. There are various methods for calculating the total authentication security value, which are exemplified below.
Example one: with two authentication security values, the total authentication security value may be calculated according to formula (1).
[Formula (1) is reproduced as an image in the original publication and is not available in this text; from the examples below, substituting 76 points and 20 points into it yields 86 points.]
In formula (1), x is one authentication security value, y is the other authentication security value, and z is the total authentication security value. Formula (1) is one example of the second calculation rule.
In the scenario where the user requests a micropayment operation on the smart television, the two authentication security values in S1310 are 76 points and 20 points; substituting them into formula (1) gives a total authentication security value of 86 points (the substituted expression is shown as an image in the original publication).
The embodiment of the application also provides other methods for calculating the total authentication security value, such as:
Example two: if there are more than two authentication security values, one possible scheme (another possible second calculation rule) is to apply formula (1) iteratively: with three authentication security values, two of them are first combined with formula (1), and the result is then combined with the third value using formula (1) again; the value obtained is the total authentication security value.
For example, with three authentication security values of 76 points, 20 points, and 20 points: 76 and 20 are taken as the parameters x and y in formula (1), giving 86; then 86 and 20 are substituted into formula (1) again, and the value obtained is the total authentication security value (the substituted expression is shown as an image in the original publication).
The above illustrates how to calculate the total authentication security value when there are two or three authentication security values; the case of four or more values is handled by analogy with the three-value case and is not described again.
Example three: with multiple authentication security values, the total authentication security value may be calculated according to formula (2).
[Formula (2) is reproduced as an image in the original publication and is not available in this text.]
In formula (2), i is an index that takes values in sequence, Fi is the i-th authentication security value, j is the total number of authentication security values, and a1, a2, n, c, and M1 are constants, where a1 and a2 may be equal or different and their specific values may be chosen according to the actual situation; × denotes multiplication and z is the total authentication security value. Formula (2) is another example of the second calculation rule.
Example four: in addition to the schemes provided in examples one to three above, there are other ways to determine the total authentication security value; for example, the authentication security values may be added and the sum multiplied by a preset value. For instance, if the two authentication security values are 76 points and 20 points, the total authentication security value is (76 + 20) × 0.95 = 91.2.
The ways of calculating the total authentication security value from multiple authentication security values shown in examples one to four above are merely examples and are not intended to be limiting.
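Since formulas (1) and (2) are reproduced only as images in the original, the sketch below implements just the example-four rule (sum the values and scale by a preset coefficient) as a stand-in for the second calculation rule.

```python
def total_security_value_example_four(values, coefficient=0.95):
    """Example-four aggregation: add the authentication security values and scale."""
    return sum(values) * coefficient

total = total_security_value_example_four([76, 20])  # (76 + 20) * 0.95 = 91.2
```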
In a scenario where the user requests a micropayment operation on the smart television, for example, the server calculates the total authentication security value to be 86 points by using formula (1).
S1311, the server determines whether the total authentication security value is smaller than a target security value required for a target operation.
If the total authentication security value is not less than the target security value required for the target operation, S1312 is performed; if the total authentication security value is less than the target security value required for the target operation, S1314 is performed.
In S1311, in the scenario where the user requests a micropayment operation on the smart television, the total authentication security value of 86 points calculated with formula (1) is greater than the target security value of 85 points, so S1312 is executed.
S1312, the server returns a first authentication success response to the operating device.
Correspondingly, the operating device receives a first authentication success response, and the first authentication success response carries indication information for indicating authentication success.
In combination with the scenario that the user requests the micropayment operation on the smart television, in S1312, the server returns a first authentication success response to the smart television.
S1313, the operating device executes the target operation when receiving the first authentication success response.
In the scenario where the user requests a micropayment operation on the smart television, in S1313 the smart television executes the "10-yuan payment operation" requested by the user when it receives the first authentication success response. As shown in fig. 15C, "Purchase successful, now available to watch" is displayed on the screen of the smart television, and the user can select the episode to watch with the remote control.
S1314, the server returns a first authentication failure response to the operating device.
Correspondingly, the operating device receives a first authentication failure response, wherein the first authentication failure response carries indication information for indicating authentication failure.
S1315, the operating device refuses to execute the target operation when receiving the first authentication failure response.
From the scenario in which the user requests a micropayment operation on the smart television, it can be seen that if only the smart television performed 2D face recognition on the user, the corresponding authentication security value would be only 76 points, lower than the 85 points required by the target operation; that is, the authentication capability of the smart television alone is not sufficient for the micropayment operation, and its security is poor. The user might also refuse, for security reasons, to pay through the smart television, causing the payment to fail. In the scheme of fig. 13, several devices with weak authentication capability (small authentication security values) can authenticate cooperatively, so that whether authentication succeeds is determined comprehensively from multiple authentication results. Therefore, even without an authentication device of strong capability, the present application can combine several weakly capable authentication devices to perform cooperative authentication and thereby satisfy operations with high security requirements (high security values). In addition, because the user information can be authenticated by one or more authentication devices using one or more authentication methods, the requirement on the authentication capability of the operating device is reduced, which in turn reduces the requirement on a single terminal device and the manufacturing cost of the operating device. Furthermore, when the apparatus provided in the embodiment of the present application is a router and the router, the operating device, and the authentication devices belong to the same local area network, the signaling between the router and the operating device and between the router and the authentication devices can be transmitted over the local area network, which greatly increases the transmission speed and speeds up the data processing flow.
Fig. 14 illustrates one possible implementation of fig. 13 in which the authentication security value corresponding to the operating device is determined not to be less than the target security value in step 1304.
It should be noted that the authentication scheme shown in fig. 14 is described taking the apparatus provided in the embodiment of the present application as the server; the execution subject "server" in the description of fig. 14 below may be replaced with the "apparatus" provided in the embodiment of the present application. When the apparatus provided in the embodiment of the present application is a router, the scheme it executes is similar to the scheme executed when the apparatus is a server, and the execution subject "server" in fig. 14 only needs to be replaced with "router", which is not described again here.
As shown in fig. 14, the authentication method includes the steps of:
S1400 to S1404 are the same as S1300 to S1304 above.
When it is determined in S1404 that the authentication security value corresponding to the operating device is not less than the target security value, S1416 is executed.
S1416, the server sends a third authentication request to the operating device.
Correspondingly, the operating device receives the third authentication request sent by the server.
In the first case, the operating device supports an authentication mode.
In this case, the third authentication request is used to request the operating device to authenticate the user, and may or may not carry indication information indicating the authentication method used.
In the second case, the operating device supports multiple authentication modes.
When the operating device supports multiple authentication methods, in one possible implementation the third authentication request may carry indication information indicating all the authentication methods supported by the operating device, or it may carry no indication of the authentication method; the operating device then authenticates using all the authentication methods it supports.
When the operating device supports multiple authentication methods, in another possible implementation the server determines that the operating device should adopt only some of the authentication methods it supports, and the third authentication request carries indication information indicating the authentication methods to be adopted by the operating device.
The server determines which authentication method or authentication methods are used by the operating device, and the following possible schemes are provided:
in the first scheme, if an authentication security value corresponding to an authentication method supported by the operating device is greater than a target security value, the authentication method is determined as an authentication method required by the operating device.
And in the second scheme, the server determines a plurality of authentication modes to be adopted by the operating equipment, wherein the plurality of authentication modes meet the following conditions: "the total authentication security value corresponding to the plurality of authentication methods is greater than the target security value". The total authentication security value corresponding to the plurality of authentication modes can be calculated by the formula (2).
In the second scheme, optionally, if the authentication modes supported by the operating device include both a password authentication mode and multiple biometric authentication modes, the biometric authentication modes may be used for authentication, so that the user does not need to enter a password, which simplifies the user's operation and improves convenience.
When the operating device supports multiple authentication modes, the third authentication request sent by the server may also carry no indication information indicating the authentication modes; in that case the operating device determines the authentication modes to use by itself, and the determination process may be similar to the way the server determines the authentication modes used by the device, as sketched below.
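For illustration only, the following Python sketch shows one way such a selection could be carried out; it is not the patented implementation. The mode names and security values are hypothetical, and the capped sum merely stands in for formula (2), which is defined earlier in the specification.

```python
from itertools import combinations

# Hypothetical per-mode authentication security values for one operating device.
SUPPORTED_MODES = {"2d_face": 76, "fingerprint": 82, "pin_password": 88}
BIOMETRIC_MODES = {"2d_face", "fingerprint"}

def combined_value(values):
    # Placeholder for formula (2) of the specification: here simply a sum capped at 100.
    return min(sum(values), 100)

def select_modes(target_value, prefer_biometric=True):
    """Pick authentication mode(s) whose (combined) security value is not
    less than the target value (first scheme, then second scheme)."""
    # First scheme: a single mode already meets the target.
    for mode, value in SUPPORTED_MODES.items():
        if value >= target_value and (not prefer_biometric or mode in BIOMETRIC_MODES):
            return [mode]
    # Second scheme: try combinations; biometric-only combinations are tried first
    # so that the user does not have to enter a password (optional refinement).
    pools = [BIOMETRIC_MODES, set(SUPPORTED_MODES)] if prefer_biometric else [set(SUPPORTED_MODES)]
    for pool in pools:
        for size in range(2, len(pool) + 1):
            for combo in combinations(sorted(pool), size):
                if combined_value(SUPPORTED_MODES[m] for m in combo) >= target_value:
                    return list(combo)
    return []  # the operating device alone cannot meet the target

print(select_modes(85))  # -> ['2d_face', 'fingerprint'] with the hypothetical values above
```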
In step 1417, the operating device authenticates the user information and determines whether the authentication is successful; if not, step 1418 is performed; if successful, step 1419 is performed.
In step 1418, the operation device rejects the execution of the target operation.
In step 1419, the operation device performs the target operation.
As can be seen from the scheme shown in fig. 14, if the operating device has authentication capability and that capability meets the requirement of the target operation, the operating device can perform the authentication itself, which simplifies the authentication process and improves the convenience of the operation.
As another possible embodiment, the apparatus provided in the embodiment of the present application may also be an authentication device. Fig. 16 is a schematic flowchart of the authentication method in the case where the apparatus provided in the embodiment of the present application is an authentication device; as shown in fig. 16, the apparatus is authentication device b1, and the method includes:
S1600 to S1605 may refer to the aforementioned steps 1300 to 1305 in fig. 13, where the executing entity "server" is replaced with "authentication device b1" or "the apparatus"; other contents are not described herein again.
Since the apparatus provided in the embodiment of the present application is itself one of the M1 authentication devices, the apparatus does not need to send a second authentication request to itself. Based on this, in S1606, the apparatus transmits the second authentication request to each of the M1 authentication devices except authentication device b1. The second authentication request is used to request the authentication device to authenticate the user information.
S1607, the authentication device (authentication device b2) which received the second authentication request transmitted from the authentication device b1 authenticates the user information, and generates a second authentication response.
S1608, when the authentication device b1 is one of the M1 authentication devices, the authentication device b1 authenticates the user information and generates a second authentication response.
It should be noted that in S1608, since the apparatus provided in the embodiment of the present application is authentication device b1, the apparatus does not need to send the second authentication request to authentication device b1; authentication device b1 authenticates the user information and generates the second authentication response directly, and does not need to return the second authentication response to the apparatus. Based on this, in S1609, the apparatus receives the second authentication responses returned by the authentication devices other than authentication device b1.
S1610 to S1616 may refer to the aforementioned steps 1309 to 1315 in fig. 13, where the executing entity "server" is replaced with "authentication device b1" or "the apparatus"; other contents are not described herein again.
It should be noted that fig. 16 merely illustrates a flowchart of an authentication method when the apparatus provided in the embodiment of the present application is the authentication device b1, and in this figure, reference may be made to the related content in fig. 13 for each possible implementation manner of the steps, and details are not described here again.
For the related steps, reference may be made to the related contents of fig. 13, which are not described herein again. As can be seen from fig. 16, when the apparatus provided in the embodiment of the present application is an authentication device, the difference with respect to the scheme shown in fig. 13 is that, when the apparatus determines itself to be one of the M1 authentication devices, it does not need to send a second authentication request to itself in step 1606, and after authenticating the user information it does not need to return a second authentication response to itself; instead, the apparatus obtains its own authentication result locally. Therefore, when the apparatus provided in the embodiment of the present application is an authentication device, the signaling interaction between that authentication device and the apparatus is reduced, which saves resources and accelerates the execution of the scheme.
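The following Python sketch illustrates, under stated assumptions, the coordination difference described above: the apparatus (authentication device b1) authenticates the user information itself and sends the second authentication request only to the remaining devices. The class and method names, and the in-process simulation of the request/response exchange, are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class AuthResult:
    device_id: str
    mode: str
    passed: bool
    security_value: int

class LocalAuthenticator:
    """Stands in for authentication device b1 itself (the apparatus)."""
    def authenticate(self, user_info: dict) -> AuthResult:
        # Hypothetical local check; a real device would invoke its own authenticator.
        return AuthResult("b1", "2d_face", user_info.get("face_match", 0.0) >= 0.9, 76)

class RemoteAuthenticator:
    """Stands in for another one of the M1 authentication devices."""
    def __init__(self, device_id: str, mode: str, value: int):
        self.device_id, self.mode, self.value = device_id, mode, value
    def request_authentication(self, user_info: dict) -> AuthResult:
        # In fig. 16 this would be a second authentication request / second
        # authentication response exchange over the network; simulated in-process here.
        return AuthResult(self.device_id, self.mode, True, self.value)

def collect_results(local: LocalAuthenticator, others: list, user_info: dict) -> list:
    # b1 authenticates the user information itself: no second authentication request
    # is sent to b1 and no second authentication response is returned by it.
    results = [local.authenticate(user_info)]
    # The second authentication request goes only to the remaining devices.
    results += [dev.request_authentication(user_info) for dev in others]
    return results

if __name__ == "__main__":
    others = [RemoteAuthenticator("b2", "voiceprint", 70)]
    print(collect_results(LocalAuthenticator(), others, {"face_match": 0.95}))
```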
As a possible embodiment, in addition to being a server or a router, the apparatus provided in the embodiment of the present application may be an operating device. Fig. 17 exemplarily shows a flowchart of the authentication method in the case where the apparatus provided in the embodiment of the present application is an operating device; as shown in fig. 17, in this case the operating device has authentication capability. The method includes the following steps:
S1720 may refer to the aforementioned step 1300 in fig. 13, where the executing entity "server" is replaced with "operating device" or "the apparatus"; other contents are not described herein again.
Because the apparatus provided in the embodiment of the present application is the operating device, the operating device does not need to send the first authentication request to a server. After S1720, S1721 to S1723 executed by the apparatus may refer to the aforementioned steps 1302 to 1304 in fig. 13, where the executing entity "server" is replaced with "operating device" or "the apparatus"; other contents are not described herein again.
When it is determined in S1723 that the authentication security value corresponding to the operating device is not less than the target security value, since the apparatus provided in the embodiment of the present application is the operating device, the apparatus does not need to send a third authentication request to the operating device and directly performs S1732. S1732 may refer to the aforementioned step 1317 in fig. 13, where the executing entity "server" is replaced with "operating device" or "the apparatus"; other contents are not described herein again.
S1724 can refer to the aforementioned part of step 1305 in fig. 13, and is not described herein again.
Since the apparatus provided in the embodiment of the present application is the operating device, and in fig. 17 the operating device is also one of the M1 authentication devices, it is not necessary to transmit a second authentication request to the operating device. Based on this, in S1725, the apparatus transmits the second authentication request to each of the M1 authentication devices except the operating device. The second authentication request is used to request the authentication device to authenticate the user information.
As shown in fig. 17, the operating device transmits the second authentication request to authentication device b1.
S1726, the authentication device (authentication device b1) that received the second authentication request sent by the operating device authenticates the user information and generates a second authentication response.
S1727, when the operation device is one of the M1 authentication devices, the operation device authenticates the user information and generates a second authentication response.
It should be noted that in S1727, since the apparatus provided in the embodiment of the present application is the operating device, the apparatus does not need to send a second authentication request to the operating device; the operating device authenticates the user information and generates a second authentication response directly, and does not need to return the second authentication response to the apparatus. In S1728, the operating device receives the second authentication responses returned by the authentication devices other than the operating device among the M1 authentication devices.
S1729 to S1731 may refer to the aforementioned steps 1309 to 1311 in fig. 13, where the executing entity "server" is replaced with "operating device" or "the apparatus"; other contents are not described herein again.
Since the apparatus provided in the embodiment of the present application is the operating device, after determining in S1731 whether the total authentication security value is smaller than the target security value, the step of returning a first authentication response (the first authentication response refers to a first authentication success response or a first authentication failure response) to the operating device does not need to be performed. Instead, when the apparatus determines in S1731 that the total authentication security value is not less than the target security value required for the target operation, S1734 is performed; if the total authentication security value is less than the target security value required for the target operation, S1733 is performed.
S1733, the operation device refuses to perform the target operation.
S1734, the operation device executes the target operation.
It should be noted that fig. 17 merely illustrates the flowchart of the authentication method when the apparatus provided in the embodiment of the present application is the operating device; for possible implementations of the individual steps, reference may be made to the relevant contents of fig. 13, with the executing entity "server" replaced with "operating device" or "the apparatus"; other contents are not described herein again.
As can be seen from the above flow, when the apparatus provided in the embodiment of the present application is the operating device, the differences with respect to the scheme shown in fig. 13 are as follows. First, the operating device does not need to send a first authentication request to a server; instead, the operating device can determine the target security value required to perform the target operation after receiving the target operation in step S1720. Second, after the apparatus determines the M1 authentication devices, if the operating device is determined to be one of the M1 authentication devices, the operating device does not need to send a second authentication request to itself, and after authenticating the user information it does not need to return a second authentication response to the apparatus; instead, the operating device obtains its own authentication result locally. Third, after the apparatus confirms the relationship between the total authentication security value and the target security value, it does not need to feed back to the operating device whether authentication is successful; the operating device itself determines whether authentication is successful according to the total authentication security value and the target security value, and then determines whether to execute the target operation. Therefore, when the apparatus provided in the embodiment of the present application is the operating device, the signaling interaction between the operating device and the apparatus is reduced, which saves resources and accelerates the execution of the scheme.
In step 205, the apparatus provided in the embodiment of the present application may first determine the M1 authentication devices. Specific manners of determining the M1 authentication devices are described below, taking the case where the apparatus provided in the embodiment of the present application is a server as an example. When the apparatus is a router or a terminal device, the manner of determining the M1 authentication devices is similar to that described below, with the executing entity "server" replaced by "the apparatus"; other contents are not described herein again.
The apparatus may determine the M1 authentication devices in any one of the following modes.
In the first mode, the second authentication request is sent in step 206 to all the authentication devices registered in the server. For example, K authentication devices are registered in the server in advance, where K is a positive integer not less than M1. In the first mode, the M1 authentication devices are the K authentication devices. Some of the K authentication devices may be in a non-communication-reachable state, for example a smart television that is not turned on; an authentication device among the K authentication devices that is not online may simply not respond to the second authentication request.
In the second mode, the server sends a first message to the K authentication devices, or to the authentication devices among the K authentication devices that are in an online state (e.g., discoverable via the network), where the first message is used to query whether the authentication device is in a communication-reachable state. The first message may carry an identifier of the server.
The server receives first message responses and determines the authentication devices corresponding to the first message responses as the M1 authentication devices. A first message response may carry an identifier of the authentication device that sent it. That is, in the second mode, the M1 authentication devices that are in the communication-reachable state are found by sending the first message, and the second authentication request is then sent to those M1 authentication devices.
Optionally, in addition to the second mode, there are other ways of finding the M1 authentication devices in a communication-reachable state among the K authentication devices. For example, if the apparatus provided in the embodiment of the present application is a terminal device, the apparatus may query the authentication devices that are in the same local area network as the terminal device, and the queried authentication devices are the M1 authentication devices mentioned above. For another example, if the apparatus provided in the embodiment of the present application is a terminal device, the apparatus may first find the authentication devices in the local area network, send the first message to those devices, and determine the authentication devices from which first message responses are received as the M1 authentication devices mentioned above.
In the third mode, priorities may be set for the K authentication devices, for example according to the user's preference, or by sorting according to the authentication security value (for example, the higher the highest authentication security value corresponding to an authentication device, the higher the priority of that device), and so on. The server then sends the first message to the K authentication devices one by one in order of priority.
The server receives first message responses and determines the authentication device corresponding to each first message response as one of the M1 authentication devices, until M1 authentication devices are determined or all authentication devices have been polled. In the third mode, the value of M1 may be preset.
In the fourth mode, before step 205, the server has already obtained the authentication mode and the root key storage environment supported by each authentication device, and has established a correspondence among authentication devices, authentication modes and authentication security values. According to this preset correspondence, the server determines the authentication devices whose authentication security values are not less than the target security value as the M1 authentication devices. In this mode, a higher level of authentication can be provided for operations with higher security requirements.
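A minimal sketch of the fourth mode is given below, assuming a hypothetical correspondence table between (authentication device, authentication mode) pairs and authentication security values; the entries and values are illustrative only.

```python
# Hypothetical correspondence table: (authentication device, authentication mode) -> security value.
CORRESPONDENCE = {
    ("phone_a1", "3d_face"): 95,
    ("phone_a1", "fingerprint"): 88,
    ("smart_tv", "2d_face"): 76,
    ("smart_speaker", "voiceprint"): 70,
}

def devices_meeting_target(target_value: int) -> set:
    """Return the devices that support at least one mode whose security value
    is not less than the target value; these form the M1 authentication devices."""
    return {device for (device, _mode), value in CORRESPONDENCE.items() if value >= target_value}

print(devices_meeting_target(85))  # -> {'phone_a1'} with the hypothetical values above
```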
In the fifth mode, the server determines, from the preset K authentication devices, the authentication devices meeting a preset condition as the M1 authentication devices, where K is a positive integer not less than M1. The preset condition includes: the server and the authentication device are in a communication-reachable state, and/or the position of the authentication device is within a preset distance of the user's current position. The communication-reachable state mentioned in the embodiment of the present application means that the server and the authentication device can communicate with each other, for example based on the aforementioned technologies such as NFC, Wi-Fi, Bluetooth and 5G. The preset distance may be set relatively short, for example 0.3 m. When the position of the authentication device is within the preset distance of the user's current position, the authentication device can collect the user's information. For example, when the user unlocks a door outdoors with a fingerprint, the distance between an indoor smart speaker and the user is likely outside the preset distance range, so the indoor smart speaker is not selected as an authentication device. This also better fits the actual situation: when the user unlocks the door outdoors with a fingerprint, the indoor smart speaker cannot in fact collect the user's voiceprint, so not using the smart speaker as an authentication device better matches the practical application scenario. In this scenario, the server may infer the user's position from the target operation the user needs to perform; for example, when the user performs an unlocking operation, it can be inferred that the user is currently outside the door, in which case it can be inferred that the distances between some indoor smart devices, such as the smart speaker and the smart television, and the user are beyond the preset distance, and these authentication devices are therefore not enabled to authenticate the user.
In the above first to fifth modes, if the server needs to determine, for one of the K authentication devices, the authentication mode to be used by that device, any one of the following may be selected as the authentication mode used by the authentication device (a minimal selection sketch follows this list):
all or part of the authentication modes supported by the authentication device;
all or part of the biometric authentication modes supported by the authentication device;
the single authentication mode with the highest authentication security value among all authentication modes supported by the authentication device;
the single authentication mode with the highest authentication security value among all biometric authentication modes supported by the authentication device.
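The selection sketch referred to above is given below; the capability data for the single device is hypothetical and the helper function is illustrative only.

```python
# Choosing the authentication mode(s) for a single device from the four options listed above.
DEVICE_MODES = {"fingerprint": (88, True), "2d_face": (76, True), "pin_password": (85, False)}
# mode -> (authentication security value, is_biometric)

def highest_value_mode(biometric_only=False):
    candidates = {m: v for m, (v, bio) in DEVICE_MODES.items() if bio or not biometric_only}
    return max(candidates, key=candidates.get)

all_modes = list(DEVICE_MODES)                                        # option 1: all supported modes
biometric_modes = [m for m, (_, bio) in DEVICE_MODES.items() if bio]  # option 2: all biometric modes
best_mode = highest_value_mode()                                      # option 3: highest-value mode overall
best_biometric_mode = highest_value_mode(biometric_only=True)         # option 4: highest-value biometric mode
print(best_mode, best_biometric_mode)  # -> fingerprint fingerprint with the values above
```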
In the sixth mode, for the K authentication devices, one authentication device authenticating in one authentication mode is called an authentication policy. That is, an authentication policy comprises an authentication device and the authentication mode used by that device. For example, if one authentication device corresponds to two authentication modes, the authentication device corresponds to two authentication policies; the authentication device included in the two policies is the same, but the authentication modes included in them are two different authentication modes. In the embodiment of the present application, one authentication policy corresponds to one authentication security value.
In the sixth mode, all the authentication policies may be prioritized according to their authentication security values, with policies having higher authentication security values receiving higher priorities (note that in the third mode the K authentication devices are prioritized, whereas in the sixth mode the authentication policies are prioritized). The server then sends the first message to the authentication devices corresponding to the authentication policies one by one in order of policy priority; the first message is used to query whether the authentication device is in a communication-reachable state. The server receives first message responses and determines the authentication device corresponding to each first message response as one of the M1 authentication devices, until M1 authentication devices are determined or all authentication policies have been polled. For each selected policy, the authentication mode included in the policy is determined as the authentication mode to be adopted by the authentication device corresponding to that policy.
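The following sketch illustrates, with hypothetical policies, how the sixth mode could poll authentication devices policy by policy in descending order of authentication security value until M1 devices respond; the data structure and the reachability callback are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthPolicy:
    device: str
    mode: str
    security_value: int

# Hypothetical policies; one device may appear in several policies.
POLICIES = [
    AuthPolicy("phone_a1", "3d_face", 95),
    AuthPolicy("phone_a1", "fingerprint", 88),
    AuthPolicy("smart_tv", "2d_face", 76),
    AuthPolicy("smart_speaker", "voiceprint", 70),
]

def poll_by_policy_priority(is_reachable, m1: int) -> dict:
    """Send the first message policy by policy, highest security value first,
    until M1 reachable authentication devices are found or all policies are polled.
    Returns {device: mode to be adopted by that device}."""
    selected = {}
    for policy in sorted(POLICIES, key=lambda p: p.security_value, reverse=True):
        if len(selected) == m1:
            break
        if policy.device in selected:
            continue  # device already selected with a higher-priority policy
        if is_reachable(policy.device):  # stands in for the first message / first message response
            selected[policy.device] = policy.mode
    return selected

print(poll_by_policy_priority(lambda d: d != "smart_tv", m1=2))
# -> {'phone_a1': '3d_face', 'smart_speaker': 'voiceprint'} with the hypothetical data above
```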
In the seventh mode, the server combines the authentication policies corresponding to the authentication devices that are currently in the communication-reachable state to obtain one or more authentication policy groups (see the sixth mode above for the definition of an authentication policy). The server calculates a total authentication security value for each authentication policy group, selects an authentication policy group whose total authentication security value is higher than the target security value, and determines the authentication devices in that authentication policy group as the M1 authentication devices, where the authentication mode used by each authentication device is the authentication mode included in the corresponding authentication policy in the group.
For example, the target security value required by the target operation the user needs to perform may be high, while the authentication capabilities of the devices currently available for authentication are all low, so that if a single authentication device performs the authentication, its single authentication security value is lower than the target security value. The total authentication security value may be calculated by the above formula (1) or formula (2). It can be seen that, even when no authentication device with strong authentication capability is available, the embodiment of the present application can combine multiple authentication devices with weak authentication capabilities for cooperative authentication, so as to satisfy an operation with a high security level requirement (a high security value).
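For illustration, the following sketch enumerates groups of currently reachable authentication policies and selects a group whose total authentication security value reaches the target value, in the spirit of the seventh mode; the policy data is hypothetical and the capped sum only stands in for formula (1)/formula (2) defined earlier.

```python
from itertools import combinations

# Currently reachable policies: (device, mode, security value). Values are hypothetical.
REACHABLE = [("smart_tv", "2d_face", 76), ("smart_speaker", "voiceprint", 70), ("watch", "heart_rate", 60)]

def total_value(values):
    # Placeholder for formula (1)/(2) of the specification; here a sum capped at 100.
    return min(sum(values), 100)

def select_policy_group(target_value: int, biometric_only: bool = False):
    """Pick a group of policies whose total authentication security value
    is not less than the target value; its devices are the M1 authentication devices."""
    pool = [p for p in REACHABLE if not biometric_only or p[1] != "password"]
    for size in range(1, len(pool) + 1):
        for group in combinations(pool, size):
            if total_value(v for _, _, v in group) >= target_value:
                return group
    return None  # no reachable policy group can meet the target

print(select_policy_group(85))
# -> (('smart_tv', '2d_face', 76), ('smart_speaker', 'voiceprint', 70)) with the data above
```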
Optionally, in the seventh mode, when the server combines the authentication policies corresponding to the authentication devices currently in the communication-reachable state, it may combine only the authentication policies corresponding to biometric authentication modes. In this way, the user does not need to enter a password, which improves the convenience of the user's operation.
In the eighth mode, the server determines, according to a preset correspondence between operations and authentication policies, the authentication devices included in the authentication policies corresponding to the target operation as the M1 authentication devices. For each authentication policy corresponding to the target operation, the authentication mode included in the policy is determined as the authentication mode to be used by the authentication device included in that policy.
In this mode, authentication devices and the authentication modes they adopt may be preset for particular operations. For example, for fingerprint unlocking, the preset authentication devices may be the user's mobile phone a1 performing 2D face recognition and the smart door lock performing fingerprint recognition. In this way, the user can set one or more authentication devices for an operation according to personal preferences and habits, which improves the flexibility of the scheme.
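A minimal sketch of the eighth mode is given below; the preset mapping from operations to authentication policies is hypothetical.

```python
# Hypothetical preset mapping from an operation to the authentication policies used for it.
OPERATION_POLICIES = {
    "fingerprint_unlock": [("phone_a1", "2d_face"), ("smart_door_lock", "fingerprint")],
    "micropayment": [("phone_a1", "3d_face")],
}

def policies_for(target_operation: str):
    """Return the preset (device, mode) policies for the target operation;
    the devices listed are the M1 authentication devices."""
    return OPERATION_POLICIES.get(target_operation, [])

print(policies_for("fingerprint_unlock"))
```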
According to the foregoing method, fig. 18 is a schematic diagram of a system architecture provided in the embodiment of the present application, where the system architecture includes an operating device 5100, an authentication device 6100, and an apparatus 7100. The apparatus 7100 in fig. 18 is a schematic structural diagram of an apparatus for performing the authentication method according to an embodiment of the present disclosure, and as shown in fig. 18, the apparatus may be a server, a router, a terminal device, a chip or a circuit, for example, a chip or a circuit that may be disposed in a server, a router, or a terminal device.
Fig. 18 illustrates the case where the apparatus 7100 is independent of the operating device 5100 and of the authentication device 6100. If the apparatus 7100 and the operating device 5100 are the same device, the operating device includes not only the two modules of the operating device 5100 but also the modules of the apparatus 7100. Similarly, if the apparatus 7100 and the authentication device 6100 are the same device, the authentication device includes not only the two modules of the authentication device 6100 but also the modules of the apparatus 7100. Likewise, if the operating device 5100 also has authentication capability and can perform the authentication operation as an authentication device, the operating device includes, in addition to the two modules of the operating device 5100 shown in fig. 18, the two modules of the authentication device 6100.
As shown in fig. 18, the operating device 5100 may include an operation interception and execution module 5101 and a device connection module 5102. The device connection module 5102 may be used to transmit data between devices or within a device, and the operation interception and execution module 5101 may be used to perform the target operation, generate the first authentication request, and the like.
As shown in fig. 18, the authentication device 6100 may include an authenticator 6101 and a device connection module 6102. The device connection module 6102 may be used to transfer data between devices or within a device, and the authenticator 6101 may be used to authenticate the user information.
As shown in fig. 18, the apparatus 7100 may include an authentication security level evaluation module 7101, an authentication capability discovery module 7102, an authentication scheme decision 7103, an authenticator scheduling module 7104, and a device connection module 7105. The authentication security level evaluation module 7101 may be used to determine the target security value required for the target operation and to calculate the total authentication security value. The authentication capability discovery module 7102 may be used to acquire the authentication capability of each authentication device, and the like. The authentication scheme decision 7103 may be used to determine a group of authentication policies, or to determine the M1 authentication devices in the aforementioned step 1305, and the like. The authenticator scheduling module 7104 may be used to send the second authentication request to the respective authentication devices. The device connection module 7105 may be used to transfer data between devices or within a device.
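For illustration only, the modules of the apparatus 7100 can be pictured as the following minimal Python classes; the method names, the data shapes and the trivial decision logic are assumptions and do not reflect the actual implementation.

```python
class AuthenticationSecurityLevelEvaluation:           # module 7101
    def target_security_value(self, operation: str) -> int:
        return {"micropayment": 85}.get(operation, 60)  # hypothetical table
    def total_security_value(self, results: list) -> int:
        return min(sum(v for passed, v in results if passed), 100)  # stands in for formula (1)/(2)

class AuthenticationCapabilityDiscovery:                # module 7102
    def query(self, devices: dict) -> dict:
        return devices                                   # devices report mode -> security value

class AuthenticationSchemeDecision:                      # module 7103
    def decide(self, target: int, capabilities: dict) -> list:
        return list(capabilities)                        # trivial placeholder: use every available device

class AuthenticatorScheduling:                           # module 7104
    def dispatch(self, devices: list, capabilities: dict) -> list:
        # A real implementation would send second authentication requests through
        # the device connection module 7105; here every device simply "passes".
        return [(True, capabilities[d]) for d in devices]

evaluation = AuthenticationSecurityLevelEvaluation()
capabilities = AuthenticationCapabilityDiscovery().query({"smart_tv": 76, "smart_speaker": 70})
target = evaluation.target_security_value("micropayment")
selected = AuthenticationSchemeDecision().decide(target, capabilities)
results = AuthenticatorScheduling().dispatch(selected, capabilities)
print(evaluation.total_security_value(results) >= target)  # True with these hypothetical values
```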
The scheme provided by the embodiment of the present application is further described below by taking fig. 18 as an example. Fig. 19 illustrates a possible embodiment based on fig. 18; as shown in fig. 19, the method includes:
In step 1900, the operation device 5100 receives a target operation through the operation interception and execution module 5101. The related content of this step can be referred to the aforementioned step 1300 of fig. 13, and is not described herein again.
In step 1901, the operation interception and execution module 5101 generates a first authentication request, and transmits it to the device connection module 7105 of the apparatus 7100 through the device connection module 5102. Step 1901 can be referred to the related content of step 1301 of fig. 13.
After the apparatus 7100 receives the first authentication request through the device connection module 7105, the first authentication request may be transmitted within the apparatus 7100 to the authentication security level evaluation module 7101.
At step 1902, the authentication security level evaluation module 7101 is used to determine a target security value required for the target operation. Step 1902 can be seen in relation to step 1302 of fig. 13, supra.
At step 1903, authentication security level evaluation module 7101 may be configured to send a target security value to authentication scheme decision 7103.
The following schemes of steps 1905 to 1908 are used to exemplarily describe the foregoing embodiment of how M1 authentication devices are determined in step 1305 in fig. 13. Other embodiments related to the foregoing method embodiments for determining M1 authentication devices may also be performed by the apparatus 7100, and are not described herein again.
At step 1904, authentication scheme decision 7103 sends a signaling to authentication capability discovery module 7102 to query for available authentication devices.
At step 1905, the authentication capability discovery module 7102 transmits a signaling for inquiring the authentication capability of the authentication device 6100 to the authentication device 6100. For example, the signaling may be transmitted through the device connection module 7105 of the apparatus 7100 and the device connection module 6102 of the authentication device 6100.
In step 1906, the authentication device 6100 may report its authentication capability to the authentication capability discovery module 7102 of the apparatus 7100. The authentication capabilities in this example may include the authentication modes and key storage environments supported by the authentication device 6100.
The above-mentioned steps 1905 and 1906 may be referred to as a capability discovery procedure, and the capability discovery procedure may be performed after the step 1904, or before the step 1904, for example, when the authentication device 6100 accesses the network. For a description of the authentication capability reported by an authentication device, reference may be made to the content of the foregoing method embodiment, and details are not described herein again.
At step 1907, authentication capability discovery module 7102 determines the available authentication devices and the authentication capabilities of each authentication device and returns an identification of the available authentication devices and the authentication capabilities of each authentication device to authentication scheme decision 7103.
In step 1908, the authentication scheme decision 7103 decides the authentication scheme based on the target security value and the authentication capabilities of the available authentication devices. The decided authentication scheme may be a group of authentication policies and includes the M1 authentication devices. For the relevant content of the M1 authentication devices in step 1908, reference may be made to the relevant content of step 1305, which is not described herein again.
In step 1909, the authentication scheme decision 7103 sends the authentication scheme, which includes the identifiers of the M1 authentication devices, to the authenticator scheduling module 7104. In the example of fig. 19, the M1 authentication devices are illustrated as a single authentication device, namely the authentication device 6100.
In step 1910, the authenticator scheduling module 7104 performs coordinated scheduling, and specifically, may send a second authentication request to the authentication device 6100 through the device connection module 7105. The content of the second authentication request can refer to the content of the foregoing step 1306, and is not described herein again.
In step 1911, after the authentication device 6100 receives the second authentication request through the device connection module 6102, the user authentication may be performed through the authenticator 6101, and the authentication result may be returned to the apparatus 7100 through the device connection module 6102.
In step 1912, after the apparatus 7100 receives the authentication result returned by the authentication device 6100 through the device connection module 7105, the authentication result is transmitted to the authentication security level evaluation module 7101.
In step 1913, the authentication security level evaluation module 7101 calculates the total authentication security value according to the authentication result and the authentication security value corresponding to the authentication mode of the authentication device, and generates the first authentication response according to the relationship between the total authentication security value and the target security value. The first authentication success response may be transmitted when the authentication security level evaluation module 7101 of the apparatus 7100 determines that the total authentication security value is not less than the target security value, and the first authentication failure response may be transmitted when the total authentication security value is determined to be less than the target security value. In this embodiment, the first authentication success response and the first authentication failure response may both be referred to as a first authentication response. Step 1913 may refer to the relevant contents of step 1310, step 1311, step 1312 and step 1314.
At step 1914, the authentication security level evaluation module 7101 of the apparatus 7100 transmits the generated first authentication response to the operation interception and execution module 5101 through the device connection module 7105 and the device connection module 5102.
The operation interception and execution module 5101 of the operation device 5100 is configured to perform a target operation if the first authentication response indicates authentication success, and not perform the target operation if the first authentication response indicates authentication failure.
Fig. 19 is only described in conjunction with the partial embodiment shown in fig. 13 on the basis of fig. 18. In this embodiment, the operating device 5100 may further perform other related schemes related to the operating device in the above method embodiment, the authentication device 6100 may further perform other related schemes related to the authentication device in the above method embodiment, and the apparatus 7100 may further perform other related schemes related to the apparatus in the above method embodiment.
It should be understood that the above division of the units is only a division of logical functions, and the actual implementation may be wholly or partially integrated into one physical entity, or may be physically separated.
As can be seen from the foregoing embodiments, a total authentication security value is determined according to the authentication result of at least one of the M1 authentication devices and the correspondence between the authentication mode of the at least one authentication device and the authentication security value, and the operating device is triggered to execute the target operation only when the total authentication security value is not less than the target security value required for executing the target operation. This provides the required identity authentication level for the target operation and thus improves the security of the authentication result.
Implementation mode three
Based on the steps shown in fig. 3, in S301, the receiving, by the first electronic device, of an authentication request includes: the first electronic device receives a first operation, where the first operation is used to trigger generation of the authentication request. In S302, the determining, by the first electronic device, of the authentication mode corresponding to the first service includes: the first electronic device determines a first authentication mode corresponding to the first service. The third implementation further includes: in response to receiving the first operation, the first electronic device detects, according to the first authentication mode, whether the local authentication result of the first electronic device passes. In response to detecting that the local authentication result of the first electronic device does not pass, the determining of the authentication mode corresponding to the first service in S302 further includes: the first electronic device determines a second authentication mode corresponding to the first service. The third implementation further includes: the first electronic device initiates cross-device authentication according to the second authentication mode, where the cross-device authentication is used for the first electronic device to authenticate through a second electronic device; the first electronic device obtains a cross-device authentication result; and when the first electronic device determines that the cross-device authentication result is that the authentication passes, it executes an instruction corresponding to the first operation.
In consideration of the conventional technology, when a user uses a mobile phone, the mobile phone may continuously perform identity authentication on the user by using technical means such as face recognition, fingerprint recognition, touch screen behavior recognition and the like based on collected biometric information (e.g., face image, fingerprint, touch input on a touch screen) of the user. When the identity authentication is not passed, the mobile phone can not respond to the input operation of the current user. When the user intends to use the mobile phone, but the mobile phone cannot acquire the biometric information of the user, the mobile phone cannot respond to the input operation of the user. Compared with the prior art, the authentication mode provided by the third implementation mode can realize cross-device authentication, can improve the convenience of cross-device authentication, and effectively improves the user experience.
In one possible implementation, the first electronic device initiates cross-device authentication as follows: the first electronic device sends a first request message to the second electronic device, where the first request message is used to request the local authentication result of the second electronic device; the first electronic device then receives a first response message from the second electronic device, where the first response message includes a cross-device authentication result, and the cross-device authentication result is the local authentication result of the second electronic device.
In another possible implementation, the first electronic device initiates cross-device authentication as follows: the first electronic device sends a second request message to the second electronic device, where the second request message is used to request the identity authentication information of the second electronic device; the first electronic device receives a second response message from the second electronic device, where the second response message includes the identity authentication information of the second electronic device; and the first electronic device then authenticates the first operation according to the identity authentication information of the second electronic device to generate the cross-device authentication result.
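The two exchanges described above can be sketched as follows; the message format, the transport and the matching rule are hypothetical placeholders, and the remote device is simulated in-process.

```python
class SecondDevice:
    """Stands in for the second electronic device (e.g., a bracelet); simulated in-process."""
    def handle(self, request: dict) -> dict:
        if request["type"] == "first_request":
            return {"local_auth_passed": True}                   # its own local authentication result
        return {"identity_auth_info": {"heart_rate_match": 0.93}}

class FirstDevice:
    """Stands in for the first electronic device (e.g., a mobile phone)."""
    def matches_preset_user(self, info: dict) -> bool:
        return info.get("heart_rate_match", 0.0) >= 0.9          # hypothetical matching rule

def cross_device_auth_via_result(second: SecondDevice) -> bool:
    # Implementation 1: first request message -> first response message carrying
    # the second device's local authentication result.
    return bool(second.handle({"type": "first_request"})["local_auth_passed"])

def cross_device_auth_via_info(first: FirstDevice, second: SecondDevice) -> bool:
    # Implementation 2: second request message -> second response message carrying
    # identity authentication information, which the first device checks itself.
    return first.matches_preset_user(second.handle({"type": "second_request"})["identity_auth_info"])

print(cross_device_auth_via_result(SecondDevice()))
print(cross_device_auth_via_info(FirstDevice(), SecondDevice()))
```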
The first electronic device establishes a connection with the second electronic device through a short-range communication protocol. For example, the second electronic device is a bracelet, the first electronic device is a mobile phone, and the mobile phone establishes a connection with the bracelet through the Bluetooth communication protocol. In one possible implementation, if the mobile phone detects that the bracelet is not being worn, the mobile phone stops performing cross-device authentication through the bracelet, so as to improve the security of the authentication result.
Exemplarily, fig. 20 illustrates a cross-device authentication method provided in an embodiment of the present application. The above-mentioned cross-device authentication method includes, but is not limited to, steps S2001 to S2006, in which:
S2001, the first electronic device receives the first operation.
In the embodiment of the present application, the first electronic device may be the electronic device 100 in the following embodiments. In some embodiments, the first operation may be a voice instruction received by the electronic device 100 in a voice control scenario described below. Illustratively, as shown in fig. 25E below, the electronic device 100 receives the voice instruction "Xiaoyi, send 'I'm home' to Anna with application 1". In some embodiments, the first operation may be a screen-casting operation received by the electronic device 100 in a screen projection control scenario described below. Illustratively, the first operation may be the user clicking option 401A of the electronic device 200 as shown in fig. 28B. In some embodiments, the first operation may be a touch operation indirectly received by the electronic device 100 through the electronic device 200 in the screen projection control scenarios 2 to 4 described below. It should be noted that, after receiving the touch operation of the user, the electronic device 200 may send the touch parameters of the touch operation to the electronic device 100, and the electronic device 100 determines the trigger event corresponding to the touch operation and then performs the corresponding response operation. Illustratively, the first operation may be the user clicking the music icon 404C in the screen projection window 403 as shown in fig. 29A.
S2002, in response to receiving the first operation, the first electronic device detects whether a local authentication result of the first electronic device passes.
Specifically, the first electronic device collects the biometric information of the user in the detection range, determines whether the collected biometric information matches with the biometric information of the preset user, and if the collected biometric information matches with the biometric information of the preset user, the local authentication result of the first electronic device is passed. The implementation manner of determining whether the collected biometric information matches the biometric information of the preset user may refer to the following description of the embodiments; the preset user may refer to the following description of the user 1 and the authorized user 3 in the following embodiments, which are not described herein again.
In some embodiments, after the first electronic device receives the first operation, the method further includes: the first electronic device detects whether the first operation triggers a locked low-risk application; and in response to detecting that the first operation triggers a locked low-risk application, the first electronic device detects whether the local authentication result of the first electronic device passes. Referring to the following description of voice control scenario 3 and screen projection control scenario 4, in some embodiments of the present application, the locked applications may include locked low-risk applications. The setting of a locked application may refer to the related description of fig. 24A to 24G, and the setting of a locked low-risk application may refer to the related description of fig. 26A to 26C.
In some embodiments, when the first operation is a first voice instruction, before the first electronic device detects whether a local authentication result of the first electronic device passes, the method further includes: the first electronic equipment detects whether the voiceprint features in the first voice command accord with the voiceprint features of a preset user or not; and in response to detecting that the voiceprint feature in the first voice command accords with the voiceprint feature of the preset user, the first electronic equipment detects whether the local authentication result of the first electronic equipment passes. The first voice instruction may refer to a voice instruction received by the electronic device 100 in a voice control scenario described below, for example, the voice instruction 1.
In some embodiments, when the matching degree of the voiceprint feature in the first voice command and the voiceprint feature of the preset user reaches a preset threshold 2, the voiceprint feature in the first voice command conforms to the voiceprint feature of the preset user. For example, the preset threshold 2 is equal to 95%.
In some embodiments, the first electronic device performs local continuous authentication and generates the local authentication result of the first electronic device at the same time as or after receiving the first operation, where the local continuous authentication performed by the first electronic device includes at least one of the following: face recognition authentication, iris recognition authentication, and touch screen behavior recognition authentication; the local authentication result of the first electronic device may characterize whether the first electronic device's identity authentication of the user passes.
And S2003, in response to detecting that the local authentication result of the first electronic device does not pass, the first electronic device sends a first request message to the second electronic device, wherein the first request message is used for requesting to acquire the local authentication result of the second electronic device.
In some embodiments, the second electronic device performs local continuous authentication and generates the local authentication result of the second electronic device at the same time as or before the first electronic device receives the first operation; the local continuous authentication performed by the second electronic device includes at least one of the following: face recognition authentication, iris recognition authentication, and touch screen behavior recognition authentication; the local authentication result of the second electronic device may represent whether the second electronic device's identity authentication of the user passes.
In the embodiment of the present application, the second electronic device may be the electronic device 200.
S2004, the first electronic device receives a first response message from the second electronic device, where the first response message includes a cross-device authentication result, and the cross-device authentication result is a local authentication result of the second electronic device.
S2005, in response to receiving the local authentication result of the second electronic device, the first electronic device detects whether the local authentication result of the second electronic device passes.
And S2006, in response to the fact that the local authentication result of the second electronic device passes, the first electronic device executes an instruction corresponding to the first operation.
Optionally, the method may further include: and in response to detecting that the local authentication result of the second electronic equipment does not pass, the first electronic equipment does not execute the instruction corresponding to the first operation.
In this embodiment of the application, the first operation may be voice instruction 1 in the embodiment related to fig. 35, and the instruction corresponding to the first operation may be the response operation corresponding to voice instruction 1 in the embodiment related to fig. 35. The first operation may also be touch operation 1 in the embodiment related to fig. 36, and the instruction corresponding to the first operation may be the response operation corresponding to touch operation 1 in the embodiment related to fig. 36. For example, voice instruction 1 is the instruction shown in fig. 25E, "Xiaoyi, send 'I'm home' to Anna with application 1", and the response operation corresponding to voice instruction 1 is that the first electronic device (i.e., the electronic device 100) sends the message "I'm home" to the contact "Anna" in application 1. For example, if touch operation 1 is the user clicking the music icon 404C in the screen projection window 403 as shown in fig. 29A, the corresponding response operation is that the first electronic device (i.e., the electronic device 100) starts the music application and projects the interface content of the music to the second electronic device (i.e., the electronic device 200).
In some embodiments, before the first electronic device executes the instruction corresponding to the first operation, the method further includes: the first electronic device detects the distance between the first electronic device and the second electronic device; and when it is detected that the distance between the first electronic device and the second electronic device is less than a first preset distance, the first electronic device executes the instruction corresponding to the first operation. Here, the first preset distance may also be referred to as preset distance 1. For how to measure the distance between the first electronic device and the second electronic device, reference may be made to the description, in the embodiment of fig. 35, of the electronic device 100 measuring the distance between itself and the electronic device 200, which is not described herein again.
In some embodiments, before the first electronic device executes the instruction corresponding to the first operation, the method further includes: the first electronic device detects whether the first electronic device is in a safe state; and when it is detected that the first electronic device is in the safe state, the first electronic device executes the instruction corresponding to the first operation. Determining whether the electronic device is in the safe state may be determining whether the first electronic device is infected by a Trojan virus or whether it has a secure execution environment, so that electronic devices whose security environment does not meet the requirement, such as electronic devices infected by a Trojan virus, can be filtered out. For the safe state in this embodiment, reference may be made to the related description of the safe state above, which is not described herein again.
In some embodiments, before the first electronic device executes the instruction corresponding to the first operation, the method further includes: the first electronic device detects whether the priority of the local continuous authentication of the second electronic device is lower than that of the local continuous authentication of the first electronic device; and in response to detecting that the priority of the local continuous authentication of the second electronic device is not lower than that of the local continuous authentication of the first electronic device, the first electronic device executes the instruction corresponding to the first operation. The priority of local continuous authentication is the priority of the authentication mode used for the local continuous authentication, which may refer to the following description of the embodiments and is not described herein again. For example, the authentication modes of local continuous authentication include face recognition authentication, iris recognition authentication, heart rate detection authentication, gait recognition authentication, and touch screen behavior recognition authentication; sorted from high to low, their priorities are face recognition authentication and iris recognition authentication (which have the same priority), followed by heart rate detection authentication, then gait recognition authentication, and finally touch screen behavior recognition authentication.
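The priority check in this step can be sketched as follows; the numeric ranks are only an encoding of the ordering given in the example above.

```python
# Encoding of the example ordering: face = iris > heart rate > gait > touch screen behavior.
PRIORITY = {"face": 4, "iris": 4, "heart_rate": 3, "gait": 2, "touch_behavior": 1}

def second_device_priority_acceptable(first_mode: str, second_mode: str) -> bool:
    """True when the second electronic device's local continuous authentication
    priority is not lower than that of the first electronic device, so that the
    instruction corresponding to the first operation may be executed."""
    return PRIORITY[second_mode] >= PRIORITY[first_mode]

print(second_device_priority_acceptable("touch_behavior", "heart_rate"))  # True
print(second_device_priority_acceptable("face", "gait"))                  # False
```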
The cross-device authentication method provided by the embodiment of the present application is introduced below for a voice control scenario.
In some embodiments of the present application, user 1 is not within detection range of local authentication by electronic device 100. When the electronic device 100 receives the voice instruction 1, the electronic device 100 may obtain the identity authentication information of at least one electronic device connected to the electronic device 100; when it is determined that the identity authentication information of the electronic device 200 in the at least one electronic device matches the preset information, the electronic device 100 determines that the cross-device identity authentication is passed; then, the electronic device 100 may respond to the voice instruction 1 to start the function 1 triggered by the voice instruction 1, that is, execute the voice instruction 1 to trigger a corresponding response operation. It can be understood that, in the embodiment of the present application, the electronic device 100 can implement voice control on the electronic device 100 based on the identity authentication information of the electronic device 200, so that convenience of the voice control is improved.
In some embodiments of the present application, the electronic device 100 requires the user to manually turn on the cross-device authentication function before using the cross-device authentication. In some embodiments of the present application, the electronic device 100 may default to turning on the cross-device authentication function without the user turning on.
First, related concepts related to the cross-device authentication method provided in the third implementation manner of the embodiment of the present application are introduced.
In a third implementation manner of the embodiment of the application, in addition to locking the screen of the electronic device, the application in the electronic device may also be locked, and the function of the application in the electronic device may also be locked.
Locking the screen: locking the screen may also be referred to as screen locking; it can protect the privacy of the electronic device, prevent accidental operation of the touch screen, and save power without closing the system software. The user may select the type of screen-lock password, which may include but is not limited to: a face image, a fingerprint, and a numeric password. It can be understood that after the user sets screen locking and the electronic device enters the screen-locked state, the user needs to unlock the screen with the password in order to use the electronic device normally.
Application locking and application function locking: application locking refers to locking a particular application (e.g., payment software, a mailbox, etc.). Application function locking refers to locking a specific application function (e.g., the payment function in payment software, or the personal center of instant messaging software) within a specific application (e.g., payment software, instant messaging software). The user may select the password types for application locking and application function locking, which may include but are not limited to: a face image, a fingerprint, and a numeric password. Application locking and/or application function locking may use the screen-lock password or a custom password.
In an implementation manner of the third embodiment of this application, application locking or application function locking may be set by the user, may be set by default by the electronic device, or may be adaptively determined by the electronic device based on the usage scenario, which is not specifically limited herein.
Local continuous authentication: local continuous authentication means that the electronic device can continuously perform identity authentication on a user within its detection range by technical means such as face recognition, iris recognition and touch screen behavior recognition. In addition to authentication modes such as face recognition, fingerprint recognition and touch screen behavior recognition, the third implementation manner of the embodiment of this application may also continuously authenticate the user's identity through other authentication modes, such as gait recognition and heart rate recognition. In the third implementation manner of the embodiment of this application, a single round of identity authentication may also be referred to as local authentication. The local authentication result may refer to: the result of the identity authentication performed by the electronic device using one or more authentication modes such as face recognition, fingerprint recognition and touch screen behavior recognition, which indicates whether the identity authentication passes. In the third implementation manner of the embodiment of this application, the identity authentication information may include the local authentication result, and may also include biometric information acquired by the electronic device using one or more authentication modes such as face recognition, fingerprint recognition and touch screen behavior recognition. In some embodiments, the electronic device may periodically perform local continuous authentication, and the result of one round of local authentication may include whether that round passes and a timestamp of that round.
For example, the electronic device 100 prestores biometric information 1 of user 1, and biometric information 1 is used to verify the identity of user 1. During local continuous authentication, when the matching degree between the biometric information acquired by the electronic device 100 through technical means such as face recognition, fingerprint recognition or touch screen behavior recognition and biometric information 1 reaches a preset threshold 1, the electronic device 100 may determine that the local authentication of user 1 passes. For example, preset threshold 1 is equal to 90%. One such round of local authentication is sketched below.
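The sketch below shows one periodic round of local continuous authentication against the 90% threshold from the example above. The similarity function is a placeholder assumption; a real device would compare biometric templates (face, fingerprint, touch screen behavior, and so on) rather than raw bytes.

```python
import time
from dataclasses import dataclass

PRESET_THRESHOLD_1 = 0.90  # the 90% preset threshold 1 from the example above

@dataclass
class LocalAuthResult:
    passed: bool       # whether this round of local authentication passed
    timestamp: float   # when this round was performed

def match_degree(collected: bytes, enrolled: bytes) -> float:
    # Placeholder similarity measure for illustration only; real biometric
    # matching is far more involved than byte-wise comparison.
    same = sum(a == b for a, b in zip(collected, enrolled))
    return same / max(len(enrolled), 1)

def one_round_of_local_authentication(collected: bytes, enrolled: bytes) -> LocalAuthResult:
    """One periodic round of local continuous authentication: the round passes
    when the match degree reaches the preset threshold."""
    passed = match_degree(collected, enrolled) >= PRESET_THRESHOLD_1
    return LocalAuthResult(passed=passed, timestamp=time.time())
```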
It should be noted that, in some embodiments, after the user unlocks the screen, the electronic device may perform local continuous authentication. When the user enters a locked application or a locked application function, if the identity authentication of the local continuous authentication of the electronic device passes, the electronic device can enter the application or the application function directly, without receiving an unlocking operation from the user on the unlocking interface of that application or application function.
The following describes a communication system 210 according to a third implementation of the embodiment of the present application. Fig. 21 schematically illustrates a communication system 210 provided in a third implementation manner of the embodiment of the present application. As shown in fig. 21, the communication system 210 includes the electronic apparatus 100 and the electronic apparatus 200. The electronic devices in the communication system 210 may establish a wired or wireless connection through one or more connection means.
In an embodiment of this application, the electronic device 100 may be directly connected with the electronic device 200 through a short-range wireless communication connection or a local wired connection. For example, the electronic device 100 and the electronic device 200 may each have one or more of a wireless fidelity (WiFi) communication module, an ultra wide band (UWB) communication module, a Bluetooth communication module, a near field communication (NFC) communication module, and a ZigBee communication module. Taking the electronic device 100 as an example, the electronic device 100 may detect and scan electronic devices near the electronic device 100 (e.g., the electronic device 200) by transmitting signals through a short-range communication module (e.g., a Bluetooth communication module), so that the electronic device 100 can discover nearby electronic devices through a short-range wireless communication protocol (e.g., the Bluetooth wireless communication protocol), establish wireless communication connections with them, and transmit data to them. For example, the electronic device 100 and the electronic device 200 may also be connected directly through a WiFi peer-to-peer (Wi-Fi P2P) communication protocol.
In some embodiments, in a short-range communication scenario, the electronic device 100 may also measure the distance of the electronic device 200 through a positioning technology such as bluetooth positioning technology, UWB positioning technology, or WiFi positioning technology.
In an embodiment of the present application, the electronic device 100 and the electronic device 200 may be connected to a Local Area Network (LAN) through the electronic device 300 based on a wired or wireless fidelity (WiFi) connection. The electronic device 100 and the electronic device 200 are indirectly connected through the electronic device 300. For example, the electronic device 300 may be a router, a gateway, a smart device controller, or other third party device. For example, the electronic device 300 may transmit data to the electronic device 100 and/or the electronic device 200 through a network, and may also receive data transmitted by the electronic device 100 and/or the electronic device 200 through the network.
In the third implementation manner of the embodiment of the present application, the electronic device 100 and the electronic device 200 may also be indirectly connected through at least one network device in a wide area network. For example, the electronic device 100 and the electronic device 200 establish an indirect connection through the electronic device 400. The electronic device 400 may be a hardware server, or a cloud server embedded in a virtualized environment; for example, the cloud server may include a virtual machine executing on a hardware server that may host at least one other virtual machine. For example, the electronic device 400 may transmit data to the electronic device 100 and/or the electronic device 200 through the network, and may also receive data transmitted by the electronic device 100 and/or the electronic device 200 through the network. In some embodiments, the electronic device 100 and the electronic device 200 may be electronic devices that log in to the same account through the electronic device 400, or electronic devices that log in to different accounts through the electronic device 400. For example, the electronic device 100 logs in to a first account through the electronic device 400, and the electronic device 200 logs in to a second account through the electronic device 400. The electronic device 100 logged in to the first account may establish a connection relationship between the first account and the second account through a connection request initiated via the electronic device 400 to the electronic device 200 logged in to the second account. The first account and the second account may be an instant messaging account, an email account, a mobile phone number, and the like. The first account and the second account may belong to different operator networks or to the same operator network, which is not specifically limited herein.
In some embodiments of the present application, taking the electronic device 100 as an example, after the electronic device 100 and the electronic device 200 establish a connection (the connection establishment manners described in this application include the foregoing types, which are not repeated here), the electronic device 100 may grant permissions to the electronic device 200, designating the electronic device 200 to control the electronic device 100. In some embodiments, the permissions of the electronic device 200 include that the electronic device 200 can obtain the identity authentication information of the electronic device 100. In some embodiments, the permissions of the electronic device 200 further include that the electronic device 200 can transmit the interface content of its user interface to the electronic device 100 (e.g., screen projection), and that the user interface of the electronic device 200 can be reversely controlled on the electronic device 100. In some embodiments, the permissions of the electronic device 200 further include that the electronic device 200 can actively acquire the interface content of the user interface of the electronic device 100, and that the user interface of the electronic device 100 can be reversely controlled on the electronic device 200. Similarly, after the electronic device 100 and the electronic device 200 are connected, the electronic device 200 may also grant permissions to the electronic device 100, designating the electronic device 100 to control the electronic device 200 (for example, to obtain the identity authentication information of the electronic device 200), which is not repeated here.
In a third implementation manner of the embodiment of the present application, at least one of the electronic device 100 and the electronic device 200 may have a local persistent authentication capability. The electronic equipment has local continuous authentication capability, which means that the electronic equipment can carry out local continuous authentication through at least one authentication mode.
It is to be understood that the system configuration shown in fig. 21 does not constitute a specific limitation on the communication system 210. In other embodiments of the present application, the communication system 210 may further include more electronic devices, and for the connection relationship between any two devices in the communication system 210, reference may be made to the electronic device 100 and the electronic device 200, which is not repeated here.
It should be noted that the third implementation manner of the present application does not specifically limit the types of the electronic device 100 and the electronic device 200. In some embodiments, the electronic device in the third implementation manner of the present application may be a mobile phone, a wearable device (e.g., a smart band, a smart watch), a tablet computer, a laptop computer (laptop), a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or another portable device. Exemplary embodiments of the electronic device include, but are not limited to, portable devices running the operating systems indicated by the figure placeholders in the original (BDA0002990820150000701, BDA0002990820150000702) or other operating systems.
Based on the foregoing hardware structure, system and related concepts, the following describes a cross-device authentication method provided in an embodiment of the present application with reference to the accompanying drawings for different usage scenarios.
In the cross-device authentication method according to the third implementation manner of this embodiment of the application, the electronic device 100 may establish a connection with at least one electronic device, and after the connection is established, the electronic device 100 may obtain the identity authentication information of the at least one electronic device, where the at least one electronic device includes the electronic device 200. In this embodiment, when the local authentication of the electronic device 100 does not pass, the electronic device 100 may start cross-device authentication, that is, determine whether the identity authentication information of the at least one electronic device (e.g., the electronic device 200) matches preset information. When the identity authentication information matches the preset information, the electronic device 100 determines that the cross-device identity authentication passes; when it does not match, the electronic device 100 determines that the cross-device identity authentication does not pass. In this way, the identity authentication of the device can be implemented based on the local authentication result or biometric information of another trusted device, which effectively improves the convenience of identity authentication and provides a better user experience.
In some embodiments, the identity authentication information may include biometric information acquired by the electronic device 200 through one or more authentication modes such as face recognition, fingerprint recognition and touch screen behavior recognition, and the preset information is the same type of biometric information of a preset user of the electronic device 100. For example, the biometric information sent by the electronic device 200 to the electronic device 100 is face image 1 captured by the electronic device 200, and the preset information is face image 2 of the preset user pre-stored in the electronic device 100. It can be understood that when the matching degree between the face features in face image 1 and the face features in face image 2 is greater than the preset threshold 1, the electronic device 100 may determine that the face features in face image 1 match those in face image 2, and the cross-device identity authentication passes; otherwise, the electronic device 100 determines that the face features in face image 1 and face image 2 do not match, and the cross-device identity authentication does not pass. For example, preset threshold 1 is equal to 90%.
In some embodiments, the electronic device 100 and the electronic device 200 have the same preset user (e.g., user 1), and each of the electronic device 100 and the electronic device 200 prestores biometric information 1 of the preset user. The identity authentication information may include a local authentication result obtained by the electronic device 200 performing identity authentication on a user within its detection range using one or more authentication modes such as face recognition, fingerprint recognition and touch screen behavior recognition, and the preset information may be a local authentication result indicating that the identity authentication passes. It can be understood that when the electronic device 200 determines that the matching degree between the biometric information collected by the electronic device 200 and biometric information 1 of the preset user is greater than the preset threshold 1, the local authentication result is that the authentication passes. For example, the preset information is equal to 1; a local authentication result of the electronic device 200 equal to 1 indicates that the identity authentication passes, and a local authentication result equal to 0 indicates that it does not pass. It can be understood that when the local authentication result sent by the electronic device 200 is equal to the preset information, the electronic device 100 determines that the identity authentication information of the electronic device 200 matches the preset information and the cross-device identity authentication passes; otherwise, the electronic device 100 determines that the identity authentication information of the electronic device 200 does not match the preset information and the cross-device identity authentication does not pass. Both matching forms are sketched below.
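The following sketch combines the two forms of identity authentication information described above: a local authentication result compared with the preset information, and biometric information whose match degree is compared with preset threshold 1. The type-based dispatch and the function name cross_device_authentication are illustrative assumptions, not elements defined by this application.

```python
PRESET_THRESHOLD_1 = 0.90   # 90%, as in the example above
PRESET_PASS_VALUE = 1       # preset information for "local authentication passed"

def cross_device_authentication(identity_auth_info, preset_info=PRESET_PASS_VALUE) -> bool:
    """Return True when the identity authentication information reported by the
    second device matches the preset information held by the first device."""
    if isinstance(identity_auth_info, int):
        # Case 1: the second device sent its local authentication result
        # (1 = passed, 0 = not passed).
        return identity_auth_info == preset_info
    # Case 2: the second device sent biometric information; identity_auth_info
    # here stands for the match degree the first device computed against the
    # preset user's pre-stored biometric information.
    return identity_auth_info >= PRESET_THRESHOLD_1

assert cross_device_authentication(1) is True      # local result equals preset info
assert cross_device_authentication(0) is False
assert cross_device_authentication(0.95) is True   # match degree above 90%
assert cross_device_authentication(0.80) is False
```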
The cross-device authentication method provided by the embodiment of the present application is introduced below for a voice control scenario.
In some embodiments of the present application, user 1 is not within detection range of local authentication by electronic device 100. When the electronic device 100 receives the voice instruction 1, the electronic device 100 may obtain the identity authentication information of at least one electronic device connected to the electronic device 100; when it is determined that the identity authentication information of the electronic device 200 in the at least one electronic device matches the preset information, the electronic device 100 determines that the cross-device identity authentication is passed; then, the electronic device 100 may respond to the voice instruction 1 to start the function 1 triggered by the voice instruction 1, that is, execute the voice instruction 1 to trigger a corresponding response operation. It can be understood that, in the embodiment of the present application, the electronic device 100 can implement voice control on the electronic device 100 based on the identity authentication information of the electronic device 200, so that convenience of the voice control is improved.
In some embodiments of the present application, the electronic device 100 requires the user to manually turn on the cross-device authentication function before cross-device authentication can be used. In some embodiments of the present application, the electronic device 100 may turn on the cross-device authentication function by default, without requiring the user to turn it on.
Exemplarily, (a) in fig. 22 shows a user interface 2211 on the electronic device 100 for presenting an application installed by the electronic device 100. The user interface 2211 may include: status field 2101, calendar indicator 2102, weather indicator 2103, tray 2104 with frequently used application icons, and other application icons 2105. Wherein:
the tray 2104 with frequently used application icons may display: a phone icon, a contacts icon, a messages icon, and a camera icon. The other application icons 2105 may display: an icon of application 1, an album icon, a music icon, a smart home icon, a mailbox icon, a cloud sharing icon, a memo icon, and a settings icon 2105A. The user interface 2211 may also include a page indicator 2106. The other application icons may be distributed across multiple pages, and the page indicator 2106 may be used to indicate which page of application icons the user is currently viewing. The user may slide left or right in the area of the other application icons to view the application icons on other pages.
It is to be understood that (a) in fig. 22 only illustrates the user interface on the electronic device 100, and should not be construed as limiting the embodiment of the present application.
As shown in (b) in fig. 22, when an input operation of sliding downward on the status field 2101 is detected, the electronic device 100 may display the notification bar interface 2212 in response to the sliding input. The notification bar interface 2212 includes a window 2107, and the window 2107 is used to display switch controls for a plurality of shortcut functions. As shown in (b) in fig. 22, a "cross-device" switch control 2107A may be displayed in the window 2107, and switch controls for other shortcut functions (e.g., Wi-Fi, Bluetooth, flashlight, ringer, auto-rotate, flight mode, mobile data, location information, screenshot, etc.) may also be displayed.
When the electronic device 100 receives an input operation (e.g., a touch operation) acting on the switch control 2107A, the electronic device 100 may turn on cross-device authentication in response to the input operation. As shown in (c) in fig. 22, the electronic device 100 may indicate that cross-device authentication has been turned on by changing the appearance of the switch control 2107A.
The user is not limited to turning on cross-device authentication in the window 2107 and may also turn it on in other ways, which is not specifically limited herein. For example, the user may also turn on cross-device authentication in the system settings.
Based on the features of the function triggered by the voice command 1, different voice control scenarios are introduced below.
Voice control scenario 1: in this scenario, the electronic device 100 cannot recognize the identity of user 1 from voice instruction 1. Regardless of whether voice instruction 1 triggers a locked function or an unlocked function, the electronic device 100 starts cross-device authentication, and starts the function triggered by the voice instruction after the identity authentication of the cross-device authentication passes. In the embodiment of the present application, the locked function may be entering a locked application, entering a locked application function, or opening a locked file (e.g., a picture, a document, a video, etc.), which is not specifically limited herein.
Specifically, regarding the case in which the electronic device 100 cannot recognize the identity of user 1 from voice instruction 1: in one implementation, the electronic device 100 has the capability of recognizing a user's identity by voice, but does not prestore the voiceprint feature of user 1; in another implementation, the electronic device 100 does not have the capability of recognizing a user's identity by voice.
In some embodiments, in scenario 1, when the electronic device 100 receives voice instruction 1, the electronic device 100 may be in the screen-locked state. In some embodiments, in scenario 1, when the electronic device 100 receives voice instruction 1, the electronic device 100 may be in the screen-unlocked state. The state is not specifically limited herein.
For example, as shown in fig. 23A to 23D, when user 1 is not within the detection range of the local continuous authentication of the electronic device 100, the electronic device 100 cannot acquire the biometric information of user 1, and the local authentication of the electronic device 100 does not pass.
As shown in fig. 23A, user 1 is not within the detection range of the local authentication of the electronic device 100; user 1 is using the electronic device 200 and issues a voice instruction 1 "Xiaoyi Xiaoyi, play song 1", where voice instruction 1 includes the wake-up word "Xiaoyi Xiaoyi" of the electronic device 100. After receiving and recognizing the voice instruction, the electronic device 100 starts cross-device authentication to obtain the identity authentication information of the electronic device 200. When it determines that the identity authentication information of the electronic device 200 matches the preset information, the electronic device 100 plays song 1 in response to voice instruction 1 and may issue a voice response "OK, playing song 1", as shown in fig. 23B. At this time, the electronic device 100 may display the playing interface of song 1, or may be in the screen-locked state, which is not specifically limited herein.
As shown in fig. 23C, user 1 is not within the local authentication detection range of either the electronic device 100 or the electronic device 200, and user 1 issues a voice instruction 1 "Xiaoyi Xiaoyi, play song 1", where voice instruction 1 includes the wake-up word "Xiaoyi Xiaoyi" of the electronic device 100. After receiving and recognizing the voice instruction, the electronic device 100 starts cross-device authentication to obtain the identity authentication information of the electronic device 200. Since the electronic device 200 does not detect the biometric information of user 1, the identity authentication information of the electronic device 200 does not match the preset information. Therefore, the electronic device 100 does not play song 1 and may issue a voice response "Please unlock", as shown in fig. 23D. At this time, the electronic device 100 may be in the screen-locked state. In some embodiments, the electronic device 100 in fig. 23D may also display the screen unlocking interface.
Voice control scenario 2: in this scenario, the electronic device 100 can recognize the identity of the user from the user's voice. In some embodiments, after the electronic device 100 receives voice instruction 1 of the user, when it determines through voiceprint recognition that the user is the preset user 1 and determines that voice instruction 1 triggers an unlocked function, the electronic device 100 starts the function triggered by voice instruction 1 in response to voice instruction 1. In some embodiments, after the electronic device 100 receives voice instruction 1, when it determines through voiceprint recognition that the user is the preset user 1 and determines that voice instruction 1 triggers a locked function, the electronic device 100 starts cross-device authentication; after determining that the identity authentication of the cross-device authentication passes, the electronic device 100 starts the function triggered by voice instruction 1 in response to voice instruction 1. This decision is sketched below.
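A minimal sketch of the scenario 2 decision, under the assumption that the cross-device authentication flow can be represented as a callable returning whether it passed; the function name handle_voice_instruction is illustrative.

```python
from typing import Callable

def handle_voice_instruction(voiceprint_matches_preset_user: bool,
                             function_is_locked: bool,
                             cross_device_auth: Callable[[], bool]) -> bool:
    """Return True when the function triggered by the voice instruction may be
    started in voice control scenario 2."""
    if not voiceprint_matches_preset_user:
        return False                    # scenario 2 assumes the voiceprint matches preset user 1
    if not function_is_locked:
        return True                     # unlocked function: respond to the instruction directly
    return cross_device_auth()          # locked function: start only if cross-device auth passes

# Playing music (an unlocked application) starts directly; sending a message with
# a locked application starts only after cross-device authentication passes.
assert handle_voice_instruction(True, False, lambda: False) is True
assert handle_voice_instruction(True, True, lambda: True) is True
assert handle_voice_instruction(True, True, lambda: False) is False
```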
Illustratively, FIGS. 24A-24G show the user interfaces involved in locking application 1.
As shown in fig. 24A, the other application icons 105 of the user interface 2411 include a settings icon 105A. The electronic device 100 may receive an input operation (e.g., a touch operation) acting on the settings icon 105A, and in response to the input operation, the electronic device may display the settings user interface 2413 shown in fig. 24B.
As shown in fig. 24B, the user interface 2413 may include a security and privacy settings entry 201, and may further include a Huawei account settings entry, a wired and network settings entry, a device connection settings entry, an application and notification settings entry, a battery settings entry, a display settings entry, a sound settings entry, a storage settings entry, a user and account settings entry, and the like. Among other things, the security and privacy settings entry 201 may be used to set a face unlock, a fingerprint, a lock screen password, an application lock, and so on. The electronic device 100 may receive an input operation (e.g., a touch operation) acting on the security and privacy setting item 201, and in response to the input operation, the electronic device may display a user interface 2414 for setting security and privacy as shown in fig. 24C.
As shown in fig. 24C, the user interface 2414 may include an application lock settings entry 202, and may also include an emergency help settings entry, a location services settings entry, a biometric and password settings entry, a privacy space settings entry, a security detection settings entry. The application lock settings entry 202 may be used to lock the privacy application, effectively preventing unauthorized access by others. The electronic device 100 may receive an input operation (e.g., a touch operation) acting on the application lock setting entry 202, and in response to the input operation, the electronic device may display a user interface 2415 of the application lock as shown in fig. 24D.
As shown in fig. 24D, the user interface 2415 may include an application search box and at least one application setting bar. The at least one application setting bar may include an application 1 setting bar 203, an application 2 setting bar 204, a music setting bar, a payment setting bar, an album setting bar, a memo setting bar, and the like. A switch control is displayed on each setting bar and can be used to turn on or off the protection lock on the application's access entrance. Illustratively, a switch control 203A is displayed on the application 1 setting bar 203, and a switch control 204A is displayed on the application 2 setting bar 204. When a switch control is in the on (ON) state, the user needs to verify their identity when accessing the application, that is, the application needs to be unlocked before it can be accessed. When the switch control is in the off (OFF) state, the user can access the application directly without verifying their identity.
Illustratively, as shown in fig. 24D, the switch control 203A is in an off state. The electronic apparatus 100 may receive an input operation (e.g., a touch operation) that acts on the application switch control 203A, and in response to the input operation, the electronic apparatus may switch the state of the switch control 203A to the open state shown in fig. 24E.
As shown in fig. 24E, the user interface 2415 may also include a setting control 205 for the application lock. The electronic device 100 may receive an input operation (e.g., a touch operation) acting on the setting control 205, and in response to the input operation, the electronic device may display the setting interface 2416 of the application lock shown in fig. 24F. The setting interface 2416 may include password type setting bars for the application lock, such as a lock screen password setting bar, a custom password setting bar, a face recognition setting bar 206, and a fingerprint setting bar. In some embodiments of the present application, the password types may further include voiceprint recognition, touch screen behavior feature recognition, and the like (not shown in fig. 24F). A switch control is displayed on each setting bar and can be used to enable or disable that password type. Illustratively, as shown in fig. 24F, the switch control 206A is in the off state. The electronic device 100 may receive an input operation (e.g., a touch operation) acting on the switch control 206A, and in response to the input operation, the electronic device 100 may switch the state of the switch control 206A to the on state shown in fig. 24G.
The user is not limited to locking the application in the manner shown in fig. 24A to 24G and may also lock the application in other manners, which is not specifically limited herein. For example, application 1 may be locked in the setting interface of application 1.
For example, fig. 24H to 24M show user interfaces involved in locking application functions of the application 2.
As shown in fig. 24H, the other application icons 105 of the user interface 11 include an icon 105B of the application 2. Referring to fig. 24E, application 2 is an unlocked application. The electronic apparatus 100 may receive an input operation (e.g., a touch operation) acting on the icon 105B of the application 2, and in response to the input operation, the electronic apparatus may display the user interface 17 of the application 2 as shown in fig. 24I.
As shown in fig. 24I, the user interface 17 may include a menu tray 207, and the menu tray 207 may include a plurality of menu options, such as a records option 207A, a friends option, a group option, and a personal center option 207B. The content displayed in the user interface 17 is associated with the currently selected option in the menu tray 207. As shown in fig. 24I, the records option 207A in the menu tray 207 is currently selected, and the user interface 17 is used to present a plurality of "chat record" entries. The electronic device 100 may receive an input operation (e.g., a touch operation) acting on the personal center option 207B, and in response to the input operation, the electronic device may display the personal center interface 18 of application 2 shown in fig. 24J.
As shown in fig. 24J, the personal center interface 18 includes a settings entry 208 of application 2, and may also include other setting entries such as a my favorites setting entry, a my album setting entry, and a my wallet setting entry. The electronic device 100 may receive an input operation (e.g., a touch operation) acting on the settings entry 208, and in response to the input operation, the electronic device may display the setting interface 19 of application 2 shown in fig. 24K.
As shown in fig. 24K, the setting interface 19 may include an application function security setting entry 209 of the application 2, and may further include other setting entries such as a message alert setting entry, a do-not-disturb mode setting entry, a general purpose setting entry, a switch account setting entry, and an exit login setting, and the like. The electronic apparatus 100 may receive an input operation (e.g., a touch operation) acting on the setting item 209, and in response to the input operation, the electronic apparatus may display the setting interface 20 as shown in fig. 24L to which the function security is applied.
As shown in fig. 24L, the setting interface 20 may include setting bars for at least one application function of application 2 and a password type setting bar 211 for the application functions of application 2. As shown in fig. 24L, the enabled password types for the application functions of application 2 may include face recognition and fingerprint. Illustratively, the setting bars for the at least one application function may include a send message setting bar 210, a personal center setting bar, a my favorites setting bar, a my album setting bar, a my wallet setting bar, and the like. A switch control is displayed on each setting bar and can be used to turn on or off the protection lock on the access entrance of the application function. Illustratively, a switch control 210A is displayed on the send message setting bar 210. As shown in fig. 24L, the switch control 210A is in the off state; the electronic device 100 may receive an input operation (e.g., a touch operation) acting on the switch control 210A, and in response to the input operation, the electronic device may switch the state of the switch control 210A to the on state shown in fig. 24M. As shown in fig. 24M, the user may also set the personal center of application 2 as a locked application function.
The user is not limited to locking the application function in the manner shown in fig. 24H to 24M and may also lock the application function in other manners, which is not specifically limited herein.
For example, as shown in fig. 25A to 25J, when the user 1 is not in the detection range of the local authentication performed by the electronic device 100, the electronic device 100 cannot acquire the biometric information of the user 1.
As shown in fig. 25A, user 1 is not within the local authentication detection range of either the electronic device 100 or the electronic device 200, and user 1 issues a voice instruction 1 "Xiaoyi Xiaoyi, play song 1", where voice instruction 1 includes the wake-up word "Xiaoyi Xiaoyi" of the electronic device 100. After the electronic device 100 receives and recognizes voice instruction 1, recognizes that the voiceprint feature of voice instruction 1 matches the voiceprint feature of user 1, and determines that music is an unlocked application as shown in fig. 24E, the electronic device 100 plays song 1 in response to the voice instruction of user 1 and issues a voice response "OK, playing song 1", as shown in fig. 25B. At this time, the electronic device may display the playing interface of song 1, or may be in the screen-locked state, which is not specifically limited herein.
As shown in fig. 25C, user 1 is not within the local authentication detection range of either the electronic device 100 or the electronic device 200, and user 1 issues a voice instruction 1 "Xiaoyi Xiaoyi, use application 1 to send 'I'm home' to Anna", where voice instruction 1 includes the wake-up word "Xiaoyi Xiaoyi" of the electronic device 100. After the electronic device 100 receives and recognizes voice instruction 1, recognizes that the voiceprint feature of voice instruction 1 matches the voiceprint feature of user 1, and determines that application 1 is a locked application as shown in fig. 24E, the electronic device 100 starts cross-device authentication to acquire the identity authentication information of the electronic device 200. Since user 1 is not within the detection range of the electronic device 200, the identity authentication information of the electronic device 200 does not match the preset information. Therefore, the electronic device 100 does not execute the voice instruction and may issue a voice response "Please unlock", as shown in fig. 25D. In some embodiments, at this time, the electronic device 100 may display the unlocking interface 21 of application 1 illustrated in fig. 25D. As can be seen from fig. 24G, the password types of application 1 include face recognition, fingerprint and lock screen password; accordingly, the unlocking interface 21 shown in fig. 25D may include a face recognition control 212, a fingerprint control 213 and a lock screen password control 214. In some embodiments, the electronic device 100 may also be in the screen-off state, which is not limited herein.
As shown in fig. 25E, user 1 is not within the detection range of the local authentication of the electronic device 100; user 1 is using the electronic device 200 and issues a voice instruction 1 "Xiaoyi Xiaoyi, use application 1 to send 'I'm home' to Anna", where voice instruction 1 includes the wake-up word "Xiaoyi Xiaoyi" of the electronic device 100. After the electronic device 100 receives and recognizes voice instruction 1, recognizes that the voiceprint feature of voice instruction 1 matches the voiceprint feature of user 1, and determines that application 1 is a locked application as shown in fig. 24E, the electronic device 100 starts cross-device authentication to obtain the identity authentication information of the electronic device 200. When the electronic device 100 determines that the identity authentication information of the electronic device 200 matches the preset information, the electronic device 100 sends the message "I'm home" to "Anna" with application 1 in response to voice instruction 1, and may issue a voice response "'I'm home' has been sent to Anna", as shown in fig. 25F.
As shown in fig. 25G, user 1 is not within the local authentication detection range of either the electronic device 100 or the electronic device 200, and user 1 issues a voice instruction 1 "Xiaoyi Xiaoyi, use application 2 to send 'I'm at work' to Lisa", where voice instruction 1 includes the wake-up word "Xiaoyi Xiaoyi" of the electronic device 100. After the electronic device 100 receives and recognizes voice instruction 1, recognizes that the voiceprint feature of the voice instruction matches the voiceprint feature of user 1, and determines that the "send message" application function of application 2 is a locked application function as shown in fig. 24M, the electronic device 100 starts cross-device authentication to acquire the identity authentication information of the electronic device 200. Since user 1 is not within the detection range of the electronic device 200, the identity authentication information of the electronic device 200 does not match the preset information. Therefore, the electronic device 100 does not execute the voice instruction and may issue a voice response "Please unlock", as shown in fig. 25H. In some embodiments, the electronic device 100 may display the unlocking interface 22 of the "send message" application function of application 2 shown in fig. 25H. As can be seen from fig. 24M, the password types of the application functions of application 2 include face recognition and fingerprint; accordingly, the unlocking interface 22 shown in fig. 25H includes a face recognition control 215 and a fingerprint control 216.
As shown in fig. 25I, user 1 is not within the detection range of the local authentication of the electronic device 100; user 1 is using the electronic device 200 and issues a voice instruction 1 "Xiaoyi Xiaoyi, use application 2 to send 'I'm at work' to Lisa", where voice instruction 1 includes the wake-up word "Xiaoyi Xiaoyi" of the electronic device 100. After the electronic device 100 receives and recognizes voice instruction 1, recognizes that the voiceprint feature of the voice instruction matches the voiceprint feature of user 1, and determines that the "send message" application function of application 2 is a locked application function as shown in fig. 24M, the electronic device 100 starts cross-device authentication to acquire the identity authentication information of the electronic device 200. When the electronic device 100 determines that the identity authentication information of the electronic device 200 matches the preset information, the electronic device 100 may send "I'm at work" to Lisa with application 2 in response to voice instruction 1, and may issue a voice response "'I'm at work' has been sent to Lisa with application 2", as shown in fig. 25J. In some embodiments, the electronic device 100 may display the messaging interface 23 with Lisa in application 2, or may remain in the screen-locked state, which is not specifically limited herein.
In the embodiment of this application, the locked applications may include locked low-risk applications and locked high-risk applications, and the locked application functions may likewise include locked low-risk application functions and locked high-risk application functions. The locked low-risk functions may include entering a locked low-risk application and entering a locked low-risk application function. The locked high-risk functions may include entering a locked high-risk application and entering a locked high-risk application function.
Voice control scenario 3: in this scenario, the electronic device 100 can recognize the identity of the user from the user's voice. After the electronic device 100 receives voice instruction 1, if it identifies the user as the preset user 1 through voiceprint recognition and determines that voice instruction 1 triggers a locked low-risk function 1, the electronic device 100 starts cross-device authentication. When the electronic device 100 determines that the identity authentication information of the electronic device 200 matches the preset information, the electronic device 100 starts the function triggered by voice instruction 1 in response to voice instruction 1. After the electronic device 100 receives voice instruction 1, when it determines that voice instruction 1 triggers a locked high-risk function, the electronic device 100 does not start the function triggered by voice instruction 1.
It can be understood that in voice control scenario 3, a locked high-risk function does not support cross-device authentication; a locked high-risk function can be started only after the user performs an unlocking operation locally on the electronic device 100.
It should be noted that in voice control scenario 3, when voice instruction 1 triggers an unlocked function, after the electronic device 100 receives voice instruction 1, the electronic device 100 may start cross-device authentication and, after determining that the identity authentication passes, start the function triggered by voice instruction 1 in response to voice instruction 1; alternatively, the electronic device 100 may start the function triggered by voice instruction 1 in response to voice instruction 1 without starting cross-device authentication. This is not specifically limited herein.
In the embodiment of the present application, a locked low-risk (or high-risk) application may be set by the user or set by default by the electronic device. Likewise, a locked high-risk (or low-risk) application function may be set by the user or set by default by the electronic device. This is not specifically limited herein. For example, the electronic device may by default treat application functions such as transfer, payment and red packet as high-risk application functions. The risk-based decision of scenario 3 is sketched below.
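A minimal sketch of the scenario 3 gating, assuming the triggered function has already been classified as unlocked, locked low-risk, or locked high-risk; the enum and function names are illustrative.

```python
from enum import Enum

class FunctionState(Enum):
    UNLOCKED = "unlocked"
    LOCKED_LOW_RISK = "locked_low_risk"
    LOCKED_HIGH_RISK = "locked_high_risk"

def scenario3_may_start(state: FunctionState, cross_device_auth_passed: bool) -> bool:
    """Locked high-risk functions never start via cross-device authentication;
    locked low-risk functions start only when cross-device authentication passes;
    unlocked functions may start directly."""
    if state is FunctionState.LOCKED_HIGH_RISK:
        return False
    if state is FunctionState.LOCKED_LOW_RISK:
        return cross_device_auth_passed
    return True

# A transfer (high risk by default in this embodiment) is refused even when the
# second device reports a passing authentication result.
assert scenario3_may_start(FunctionState.LOCKED_HIGH_RISK, True) is False
assert scenario3_may_start(FunctionState.LOCKED_LOW_RISK, True) is True
```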
For example, fig. 26A to 26C illustrate user interfaces involved when the user sets a low-risk application from among the locked applications. In the embodiment of this application, the locked applications other than the low-risk applications are high-risk applications, and high-risk applications do not support cross-device authentication.
As shown in fig. 26A, the user interface 2616 may also include a low risk settings bar 301 and settings bars for added low risk applications. For example, the setting column of the added low-risk application includes an album setting column 302, and a removal control 302A is set on the album setting column 302, and the removal control 302A is used for removing the album from the added low-risk application. An add control 301A is provided on the low risk setting bar 301. Electronic device 100 may receive an input operation (e.g., a touch operation) that acts on add control 301A, in response to which the electronic device may display window 303, as shown in fig. 26B, on user interface 2616. The window 303 includes a plurality of settings fields for locked applications, such as an application 1 setting field 304, a payment setting field, an album setting field, a memo setting field, and a mailbox setting field. Wherein a switch control is displayed on the setting bar of each locked application, and the switch control can be used for setting the application as a low-risk application. Illustratively, the application 1 settings bar 304 displays a switch control 304A. The electronic device 100 can receive an input operation (e.g., a touch operation) that acts on the application switch control 304A, and in response to the input operation, the electronic device can display the setting bar 305 of the application 1 as shown in fig. 26C on the user interface 2616. A removal control 305A is arranged on the setting bar 305 of the application 1, and the removal control 305A is used for removing the application 1 from the added low-risk application.
Referring to fig. 25E and 25F, the user 1 is present at the electronic apparatus 200 and intends to control the electronic apparatus 100 to send a message to "Anna" with the application 1 by voice instruction 1. When the electronic device 100 receives and recognizes the voice instruction 1 of the user 1 and determines that the application 1 is the locked low-risk application shown in fig. 26C, the electronic device 100 starts cross-device authentication to acquire the identity authentication information of the electronic device 200. When the electronic apparatus 100 determines that the authentication information of the electronic apparatus 200 matches the preset information, the electronic apparatus 100 transmits a message to "Anna" with the application 1 in response to the above-mentioned voice instruction 1.
For example, as shown in fig. 27A to 27D, when the user 1 is not in the detection range of the local authentication performed by the electronic device 100, the electronic device 100 cannot acquire the biometric information of the user 1.
As shown in fig. 27A, user 1 is using the electronic device 200 and issues a voice instruction 1 "Xiaoyi Xiaoyi, open the payment application", where voice instruction 1 includes the wake-up word "Xiaoyi Xiaoyi" of the electronic device 100. After the electronic device 100 receives and recognizes voice instruction 1, recognizes that the voiceprint feature of the voice instruction matches the voiceprint feature of user 1, and determines that the payment application is a high-risk application (i.e., not a low-risk application as shown in fig. 26C), the electronic device 100 does not execute voice instruction 1 and may issue a voice response "Please unlock", as shown in fig. 27B.
In some embodiments, the electronic device 100 by default treats the transfer function in all applications as a high-risk application function. As shown in fig. 27C, user 1 is using the electronic device 200 and issues a voice instruction 1 "Xiaoyi Xiaoyi, transfer 10 yuan to Anna with application 1", where voice instruction 1 includes the wake-up word "Xiaoyi Xiaoyi" of the electronic device 100. After the electronic device 100 receives and recognizes voice instruction 1, when it determines that the voiceprint feature of the voice instruction matches the voiceprint feature of user 1 and that transfer is a high-risk application function, the electronic device 100 does not execute the voice instruction and may issue a voice response "Please unlock", as shown in fig. 27D. It can be understood that a high-risk application function does not support cross-device authentication even if it belongs to a locked low-risk application or an unlocked application.
In some embodiments of the present application, in the three voice control scenarios, the electronic device 100 receives the voice instruction 1, and when starting cross-device authentication, the electronic device 100 acquires the identity authentication information of the electronic device 200; when the electronic device 100 determines that the identity authentication information of the electronic device 200 matches the preset information and determines that the distance between the electronic device 200 and the electronic device 100 is smaller than the preset distance value, the electronic device 100 starts the function 1 triggered by the voice instruction 1. It can be understood that the electronic device 100 determines that the identity authentication information of the electronic device 200 is a secure and trusted authentication result only when the distance between the electronic device 200 and the electronic device 100 is smaller than the preset distance value.
In some embodiments of the present application, in the three voice control scenarios, the electronic device 100 receives voice instruction 1, and when starting cross-device authentication, the electronic device 100 acquires the identity authentication information of the electronic device 200; when the electronic device 100 determines that the identity authentication information of the electronic device 200 matches the preset information and determines that the electronic device 200 is in a secure state, the electronic device 100 starts function 1 corresponding to voice instruction 1. In some embodiments, the electronic device 200 being in a secure state may mean that the electronic device 200 is not rooted, has no trojan or virus, and shows no anomaly in traffic monitoring. It can be understood that the electronic device 100 regards the identity authentication information of the electronic device 200 as a secure and trusted authentication result only when the electronic device 200 is in a secure state.
In some embodiments of the present application, in the three voice control scenarios, the electronic device 100 receives the voice instruction 1, and when starting cross-device authentication, the electronic device 100 acquires the identity authentication information of the electronic device 200; when the electronic device 100 determines that the identity authentication information of the electronic device 200 matches the preset information and determines that the priority of the authentication mode of the local continuous authentication of the electronic device 200 is not lower than that of the electronic device 100, the electronic device 100 starts the function 1 corresponding to the voice instruction 1. It can be understood that the electronic device 100 determines that the identity authentication information of the electronic device 200 is a secure and trusted authentication result only when the priority of the authentication mode of the local persistent authentication of the electronic device 200 is not lower than that of the electronic device 100. For example, the priorities of the authentication modes of the local persistent authentication are sorted from large to small as follows: face recognition (iris recognition), heart rate detection, gait recognition and touch screen behavior recognition.
In some embodiments of the present application, in the three voice control scenarios, the electronic device 100 receives voice instruction 1, and when starting cross-device authentication, the electronic device 100 acquires the identity authentication information of the electronic device 200; when the electronic device 100 determines that the identity authentication information of the electronic device 200 matches the preset information and determines that the electronic device 200 satisfies at least two of the three conditions "the distance from the electronic device 100 is smaller than the preset distance value", "is in a secure state", and "the priority of the authentication mode of the local continuous authentication is not lower than that of the electronic device 100", the electronic device 100 starts the function triggered by voice instruction 1. It can be understood that the electronic device 100 regards the identity authentication information of the electronic device 200 as a secure and trusted authentication result when at least two of the above three conditions are satisfied, as sketched below.
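The sketch below shows the "at least two of three conditions" check described above; the default preset distance value and the integer priority encoding are illustrative assumptions.

```python
def auth_result_is_trusted(distance_m: float,
                           remote_in_secure_state: bool,
                           remote_priority: int,
                           local_priority: int,
                           preset_distance_m: float = 1.0) -> bool:
    """Return True when at least two of the three trust conditions hold for the
    second device's authentication result."""
    conditions = [
        distance_m < preset_distance_m,     # close enough to the first device
        remote_in_secure_state,             # not rooted, no trojan/virus, traffic normal
        remote_priority >= local_priority,  # authentication-mode priority not lower
    ]
    return sum(conditions) >= 2

# A nearby device in a secure state is trusted even if its local continuous
# authentication mode has a lower priority than the first device's.
assert auth_result_is_trusted(0.5, True, remote_priority=1, local_priority=4) is True
assert auth_result_is_trusted(5.0, False, remote_priority=4, local_priority=1) is False
```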
In addition, in the above three voice control scenarios, in some embodiments, when the electronic device 100 detects the biometric information of a non-preset user, the electronic device 100 maintains the screen-locked state regardless of whether the identity authentication information sent by the electronic device 200 matches the preset information. In some embodiments, when the electronic device 200 detects the biometric information of a non-preset user, the identity authentication information sent by the electronic device 200 to the electronic device 100 may indicate that the electronic device 200 has detected the biometric information of a non-preset user, and the electronic device 100 maintains the screen-locked state after receiving this identity authentication information.
The cross-device authentication method provided by the embodiment of the application is introduced below for a screen projection control scenario.
For example, fig. 28A to 28J illustrate a screen projection manner in which the electronic device 100 actively projects a screen to the electronic device 200.
As shown in fig. 28A, notification bar interface 12 also includes a screen-cast control 107B. When the electronic device may receive an input operation (e.g., a touch operation) acting on the screen projection control 107B, the electronic device 100 may display a window 401 as shown in fig. 28B in response to the input operation.
As shown in fig. 28B, window 401 includes at least one selection of a screen projection device, such as a selection of notebook XXX, a selection 401A of electronic device 200, a selection of smart television XXX, and so forth. When the electronic device may receive an input operation (e.g., a touch operation) acting on the option 401A of the electronic device 200, the electronic device 100 may transmit a screen-casting request to the electronic device 200 in response to the input operation.
As shown in fig. 28C, the electronic apparatus 200 displays a window 402 on the user interface 31 based on the screen-casting request transmitted by the electronic apparatus 100. The window 402 includes a confirm control 402A and a reject control 402B. The electronic apparatus 200 may receive an input operation (e.g., a touch operation) acting on the rejection control 402B, in response to which the electronic apparatus 200 may close the window 402, and may also transmit a rejection response of the screen-casting request to the electronic apparatus 100. When the electronic device 200 may receive an input operation (e.g., a touch operation) applied to the confirmation control 402A, in response to the input operation, the electronic device 200 may display a screen projection window 403 shown in fig. 28D on the user interface 31, where screen projection content sent by the electronic device 100, that is, the user interface 32, is displayed in the screen projection window 403.
In some embodiments, after the electronic device 200 receives the screen projection request sent by the electronic device 100, the screen projection window 403 may be displayed according to the screen projection content of the electronic device 100 without receiving an input operation for confirming screen projection by the user.
In some embodiments, the user interface 32 transmitted by the electronic device 100 includes some or all of the interface (user interface 11) currently displayed by the electronic device 100. As shown in FIG. 28D, the user interface 32 includes interface content in the user interface 11 of the electronic device 100 other than the status bar, including icons 404 of other applications. For example, the icons 404 of the other application programs include an icon 404A of application 1, an icon 404B of an album, an icon 404C of music, an icon 404D of application 2, an icon 404E of a mailbox, an icon of cloud sharing, an icon of a memo, and an icon of setting.
In some embodiments, the user interface 31 may be a main interface of the electronic device 200, and it is understood that fig. 28C and 28D only illustrate the user interface on the electronic device 200, and should not be construed as limiting the embodiments of the present application.
For example, fig. 28E to 28H illustrate a screen-casting manner in which the electronic device 200 actively acquires screen-casting content of the electronic device 100.
Fig. 28E shows the user interface 31 on the electronic device 200 for presenting the applications installed on the electronic device 200. As shown in fig. 28E, the user interface 31 includes icons 405 of a plurality of applications, such as an icon of cloud sharing, an icon of stock, an icon of settings, an icon 405A of screen sharing, an icon of mail, an icon of music, an icon of video, an icon of browser, and an icon of gallery. When the electronic device 200 receives an input operation (e.g., a touch operation) acting on the icon 405A of screen sharing, the electronic device 200 may display the user interface 33 of screen sharing as shown in fig. 28F in response to the input operation.
As shown in fig. 28F, the user interface 33 includes an account login control group 406 and a return control 407. The account login control group 406 includes an account input box 406A, a password input box 406B, and a login control 406C, and may further include a register account control for registering a new account and a forgot password control for retrieving the password of a registered account. When the electronic device 200 receives an input operation (e.g., a touch operation) acting on the return control 407, the electronic device 200 may display the previous page of the current page.
As shown in fig. 28F, after the user inputs an account in the account input box 406A and inputs the password corresponding to the account in the password input box 406B, when the electronic device 200 receives an input operation (e.g., a touch operation) acting on the login control 406C, the electronic device 200 may display the user interface 34 of the account as shown in fig. 28G in response to the input operation.
As shown in fig. 28G, the user interface 34 may include an avatar and account 408 of the user, a return control 409, and at least one device option 410 of devices logged in to the same account, such as an option of notebook XXX, an option 410A of the electronic device 100, and so on. The return control 409 is used to return to the previous page of the current page. When the electronic device 200 receives an input operation (e.g., a touch operation) acting on the option 410A of the electronic device 100, in response to the input operation, the electronic device 200 may acquire the screen projection content (i.e., the user interface 32) from the electronic device 100 and display a screen projection window 411 as shown in fig. 28H on the user interface 34, where the display content of the screen projection window 411 is the user interface 32. The display content of the screen projection window 411 may be part or all of the main interface (user interface 11) of the electronic device 100, and may also be part or all of the currently displayed interface of the electronic device 100, which is not specifically limited herein.
In this embodiment, the electronic device 100 or the electronic device 200 may further select the user interface of one application from the applications running on the electronic device 100 for screen projection, which is not specifically limited herein.
In addition to the above screen projection modes, in the embodiment of the present application, the screen projection content of the electronic device 100 may be displayed on the electronic device 200 by other screen projection modes, which is not limited in this embodiment.
It should be noted that, in the following screen projection scenarios, in some embodiments, the electronic device 100 pre-stores the biometric information 1 of the user 1 and does not pre-store the biometric information of the user 2. In some embodiments, the electronic device 100 and the electronic device 200 have the same preset user: both pre-store the biometric information 1 of the user 1 and neither pre-stores the biometric information of the user 2, so that the electronic device 200 can locally authenticate the user 1.
Screen projection control scene 1: the electronic device 200 receives and displays the screen projection content 1 transmitted by the electronic device 100. The electronic device 100 acquires the identity authentication information of the local continuous authentication of the electronic device 200 in real time, and when the electronic device 100 determines that the identity authentication information of the electronic device 200 does not match the preset information, the electronic device 100 stops sending the screen projection content to the electronic device 200. In this way, the electronic device 100 controls screen projection based on the identity authentication information of the electronic device 200, so that unauthorized people are prevented from seeing the screen projection content of the electronic device 100, and the screen projection safety is improved.
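For illustration, the real-time control logic of screen projection control scene 1 can be sketched as follows (a minimal Python sketch; get_auth_info, send_frame, and stop_projection are hypothetical stand-ins for the cross-device authentication channel and the screen projection channel, not part of this application):

```python
# Minimal sketch of screen projection control scene 1: the electronic device 100
# keeps checking the identity authentication information of the electronic device 200
# and stops sending screen projection content once it no longer matches.
import time

def projection_control_scene_1(get_auth_info, preset_info,
                               send_frame, stop_projection,
                               poll_interval=1.0):
    # get_auth_info(): returns the identity authentication information of the electronic device 200
    # send_frame(): sends one frame of screen projection content
    # stop_projection(): stops sending screen projection content
    while True:
        if get_auth_info() != preset_info:
            stop_projection()
            break
        send_frame()
        time.sleep(poll_interval)
```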
Illustratively, as shown in fig. 28I, the user 1 is using the electronic device 200, and the screen projection window 403 of the electronic device 200 displays the user interface 32. As shown in fig. 28I and 28J, when the user of the electronic device 200 is changed from user 1 to user 2, the electronic device 200 stops displaying the screen projection content of the electronic device 100, and may display an exit control 412 and a prompt message "screen projection interrupted" on the screen projection window 403. The user may click the exit control 412, and in response to detecting the above-described click operation, the electronic device 200 may stop displaying the screen projection window 403.
In some embodiments, when the user of the electronic device 200 is changed from the user 1 to the user 2, the electronic device 100 determines that the identity authentication information of the electronic device 200 does not match the preset information, and the electronic device 100 may further project the screen unlocking interface of the electronic device 100 to the electronic device 200 for display. It should be noted that the user can continue the screen projection after unlocking the screen of the electronic device 100. For example, the user may input the password of the electronic device 100 on the screen unlocking interface of the electronic device 100 displayed by the electronic device 200, the electronic device 200 sends the password to the electronic device 100, and when the password is determined to be correct, the electronic device 100 may continue to send the screen projection content.
In some embodiments, when the user of the electronic device 200 is changed from user 1 to user 2, and the user 2 is not a preset user of the electronic device 200, the electronic device 200 may further display the screen unlock interface of the electronic device 200 in a full screen. It should be noted that the user can continue to use the electronic device 200 and view the screen-shot content of the electronic device 100 after unlocking the screen of the electronic device 200.
In some embodiments of the present application, the electronic device 200 receives and displays the screen projection content 1 of the electronic device 100. The electronic device 200 may receive a touch operation 1 applied to the screen projection content 1, and send the touch parameter of the touch operation 1 to the electronic device 100. After receiving the touch parameter of the touch operation 1, the electronic device 100 may start cross-device authentication to obtain the identity authentication information of the electronic device 200. When the electronic device 100 determines that the authentication passes based on the identity authentication information of the electronic device 200, the electronic device 100 may start the function 2 triggered by the touch operation 1 in response to the touch operation 1, and draw the screen projection content 2 corresponding to the function 2. The electronic device 100 sends the screen projection content 2 to the electronic device 200, and the electronic device 200 receives and displays the screen projection content 2 in the screen projection window. When the electronic device 100 determines that the identity authentication information of the electronic device 200 does not match the preset information, the electronic device 100 may stop sending the screen projection content to the electronic device 200. It can be understood that, in the embodiment of the present application, the electronic device 100 may implement screen projection control of the electronic device 100 based on the identity authentication information of the electronic device 200, so that the screen projection security is improved.
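For illustration, the reverse-control flow described above can be sketched as follows (a minimal Python sketch with hypothetical helper names; the actual screen projection and authentication channels are implementation details of the devices):

```python
# Minimal sketch: the electronic device 100 handles a touch parameter forwarded by
# the electronic device 200, starts cross-device authentication, and either draws
# and sends screen projection content 2 or stops the projection.
def handle_forwarded_touch(touch_params, get_auth_info, preset_info,
                           render_for_touch, send_content, stop_projection):
    auth_info = get_auth_info()                   # start cross-device authentication
    if auth_info == preset_info:                  # authentication passed
        content_2 = render_for_touch(touch_params)  # draw screen projection content 2
        send_content(content_2)                   # displayed by the electronic device 200 in the projection window
    else:
        stop_projection()                         # stop sending screen projection content
```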
Based on different characteristics of the function 2 triggered by the touch operation 1, different screen projection control scenes are introduced below.
Screen projection control scene 2: in this scenario, when the function 2 triggered by the touch operation 1 is a locked function or an unlocked function, the electronic device 100 needs to perform cross-device authentication, and only after the authentication is passed, the function 2 triggered by the touch operation is started, and corresponding screen projection content is drawn and sent to the electronic device 200.
It is understood that, unlike screen projection control scene 2, in screen projection control scene 1 the electronic device 100 acquires the identity authentication information of the electronic device 200 in real time. Regardless of whether a touch operation acting on the screen projection content is received, when the electronic device 100 determines that the identity authentication information of the electronic device 200 does not match the preset information, the electronic device 100 stops sending the screen projection content to the electronic device 200.
For example, as shown in fig. 29A to 29D, when the user 1 is not in the detection range of the electronic device 100 for local authentication, the electronic device 100 cannot acquire the biometric information of the user 1.
As shown in fig. 29A, the user 1 is using the electronic device 200. The screen projection window 403 displayed by the electronic device 200 includes an icon 404C of music. Referring to fig. 24E, music is an unlocked application. The electronic device 200 may receive a touch operation 1 (e.g., a click operation) acting on the icon 404C of music and notify the electronic device 100 of the touch operation 1. After receiving the notification from the electronic device 200, the electronic device 100 obtains the identity authentication information of the electronic device 200, and when it is determined that the identity authentication information of the electronic device 200 matches the preset information, the electronic device 100 may draw the home interface 35 of the music application in response to the touch operation 1. The electronic device 100 sends the home interface 35 to the electronic device 200, and the electronic device 200 receives and displays the home interface 35 shown in fig. 29B in the screen projection window 403.
It should be noted that, when the electronic device 200 displays the screen projection content sent by the electronic device 100, the electronic device 100 may be in a screen locking state, may also display the screen projection content, and may also display other application interfaces, which is not specifically limited herein.
As shown in fig. 29C, the user 2 is using the electronic apparatus 200, and the electronic apparatus 200 does not pass the authentication of the user 2. The electronic apparatus 200 may receive a touch operation 1 (e.g., a single-click operation) acting on the icon 404C of music and notify the electronic apparatus 100 of the touch operation. The electronic apparatus 100 starts cross-apparatus authentication after receiving the notification of the electronic apparatus 200. When it is determined that the authentication information of the electronic device 200 does not match the preset information, the electronic device 100 stops sending the screen projection content to the electronic device 200, and the electronic device 200 displays a prompt message "screen projection interrupted" as shown in fig. 29D on the screen projection window 403.
As shown in fig. 29E, the user 1 is using the electronic device 200. The screen projection window displayed by the electronic device 200 further includes an icon 404B of the album. Referring to fig. 24E, the album is a locked application. The electronic device 200 may receive a touch operation 1 (e.g., a single-click operation) acting on the icon 404B of the album and notify the electronic device 100 of the touch operation. After receiving the notification from the electronic device 200, the electronic device 100 starts cross-device authentication, and when it is determined that the identity authentication information of the electronic device 200 matches the preset information, the electronic device 100 may draw the user interface 36 of the album in response to the touch operation 1. The electronic device 100 sends the user interface 36 to the electronic device 200, and the electronic device 200 receives and displays the user interface 36 shown in fig. 29F in the screen projection window 403.
In some embodiments of the present application, when the electronic device 200 displays the user interface of the locked application through the screen projecting window 403, the electronic device 100 obtains the identity authentication information of the electronic device 200 in real time, and when it is determined that the identity authentication information of the electronic device 200 does not match the preset information, the electronic device 100 stops sending the screen projecting content, and the screen projecting window 403 of the electronic device 200 stops displaying the screen projecting content. For example, as shown in fig. 29F and 29G, the electronic device 200 displays the user interface 36 of the album through the screen-casting window 403, and after the user of the electronic device 200 is switched from user 1 to user 2, the screen-casting window 403 of the electronic device 200 stops displaying the screen-casting content.
As shown in fig. 29H, the user 2 is using the electronic device 200. The electronic device 200 may receive a touch operation 1 (e.g., a single-click operation) acting on the icon 404B of the album and notify the electronic device 100 of the touch operation. After receiving the notification from the electronic device 200, the electronic device 100 starts cross-device authentication and determines that the identity authentication information of the electronic device 200 does not match the preset information; the electronic device 100 stops sending the screen projection content to the electronic device 200, and the electronic device 200 displays a prompt message "screen projection interrupted" on the screen projection window 403 as shown in fig. 29I.
Screen projection control scene 3: in this scenario, after the electronic device 100 receives the touch operation 1 through the electronic device 200, when it is determined that the touch operation 1 triggers an unlocked function 2, the electronic device 100 draws the screen projection content corresponding to the function 2 in response to the touch operation 1 and sends the screen projection content to the electronic device 200. After receiving the touch operation 1 through the electronic device 200, when it is determined that the touch operation 1 triggers a locked function 2, the electronic device 100 starts cross-device authentication, and only after it is determined that the cross-device authentication passes, draws the screen projection content corresponding to the function 2 in response to the touch operation 1 and sends the screen projection content to the electronic device 200.
Referring to fig. 24A to 24G, there are provided related interfaces for locking an application according to an embodiment of the present application. Fig. 24H to fig. 24M are related interfaces for locking application functions according to an embodiment of the present application.
In some embodiments of the present application, during the screen projection process, the electronic device 100 obtains the identity authentication information of the electronic device 200 in real time. When the screen-projecting window of the electronic device 200 displays the user interface of the locked application or the user interface of the locked application function, if it is determined that the identity authentication information of the electronic device 200 is not matched with the preset information, the electronic device 100 stops sending the screen-projecting content to the electronic device 200.
Referring to fig. 24E, application 2 is an unlocked application and referring to fig. 24M, the personal center of application 2 is a locked application function. As shown in fig. 30A, the user 2 is using the electronic apparatus 200. The user interface 32 displayed by the screen projection window 403 includes an icon 404D for application 2. The electronic apparatus 200 may receive a touch operation 1 (e.g., a click operation) acting on the icon 404D of the application 2 and notify the electronic apparatus 100 of the touch operation. After receiving the notification from the electronic apparatus 200, the electronic apparatus 100 draws the user interface 37 of the application 2 in response to the touch operation 1. The electronic device 100 transmits the user interface 37 to the electronic device 200, and the electronic device 200 receives and displays the user interface 37 shown in fig. 30B in the screen projection window 403. As shown in fig. 30B, user interface 37 includes a personal hub option 413. The electronic apparatus 200 may receive the touch operation applied to the personal center option 413 and notify the electronic apparatus 100 of the touch operation. After receiving the notification of the electronic device 200, the electronic device 100 starts cross-device authentication, and when it is determined that the identity authentication information of the electronic device 200 does not match the preset information, the electronic device 100 stops sending screen projection content to the electronic device 200. As shown in fig. 30C, the electronic apparatus 200 stops displaying the screen-projected content of the electronic apparatus 100 in the screen-projection window 403.
Referring to fig. 24E, the album is a locked application. Referring to fig. 29E to 29I, for the locked album application, the electronic device 100 may implement screen projection control of the album based on the identity authentication information of the electronic device 200. As shown in fig. 29E and 29F, when the user 1 uses the electronic device 200 and the electronic device 100 determines that the album on which the touch operation acts is a locked application, cross-device authentication is started, and when it is determined that the cross-device authentication passes, the electronic device 200 may display the screen projection content of the electronic device 100, that is, the user interface 36 of the album. As shown in fig. 29H and 29I, when the user 2 uses the electronic device 200 and the electronic device 100 determines that the album on which the touch operation acts is a locked application, the electronic device 100 starts cross-device authentication and determines that the cross-device authentication does not pass; the electronic device 100 stops sending the screen projection content to the electronic device 200, and the electronic device 200 stops displaying the screen projection content of the electronic device 100. As shown in fig. 29F and 29G, when the user interface 36 of the album is displayed in the screen projection window 403 of the electronic device 200 and the user of the electronic device 200 is switched from the user 1 to the user 2, the identity authentication information of the electronic device 200 does not match the preset information, and the electronic device 200 stops displaying the screen projection content of the electronic device 100.
Screen projection control scene 4: in this scenario, after the electronic device 100 receives the touch operation 1, when it is determined that the touch operation 1 triggers a locked low-risk function 2, the electronic device 100 starts cross-device authentication, and when it is determined that the cross-device authentication passes, the electronic device 100 draws the user interface of the function 2 triggered by the touch operation 1 in response to the touch operation 1. After the electronic device 100 receives the touch operation 1, when it is determined that the touch operation 1 triggers a locked high-risk function 2, the electronic device 100 stops sending the screen projection content to the electronic device 200, and the electronic device 200 stops displaying the screen projection content of the electronic device 100.
In screen projection control scene 4, for an unlocked function 2, after the electronic device 100 receives the touch operation 1 through the electronic device 200, the electronic device 100 may start cross-device authentication, and when it is determined that the cross-device authentication passes, respond to the touch operation 1, draw the user interface of the function 2 triggered by the touch operation 1, and project the user interface to the electronic device 200; the electronic device 100 may also, without starting cross-device authentication, draw the user interface of the function 2 triggered by the touch operation 1 in response to the touch operation 1 and project the user interface to the electronic device 200. This is not specifically limited herein.
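For illustration, the decision logic of screen projection control scene 4 can be sketched as follows (a minimal Python sketch with hypothetical names; the TriggeredFunction type is introduced only for this example):

```python
# Minimal sketch of screen projection control scene 4: unlocked functions are
# projected directly, locked low-risk functions require cross-device authentication,
# and locked high-risk functions interrupt the projection.
from dataclasses import dataclass

@dataclass
class TriggeredFunction:
    locked: bool
    risk: str = "low"   # "low" or "high"; only meaningful when locked is True

def handle_touch_scene_4(func, cross_device_auth_passed,
                         render, send_content, stop_projection):
    if not func.locked:
        send_content(render(func))         # unlocked: draw and project directly
    elif func.risk == "low":
        if cross_device_auth_passed():
            send_content(render(func))     # locked low-risk: project after authentication passes
        else:
            stop_projection()              # authentication failed
    else:
        stop_projection()                  # locked high-risk: unlock on the electronic device 100 itself
```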
Referring to fig. 26A-26C, there are provided related interfaces for setting a locked low risk application in accordance with an embodiment of the present application.
Referring to fig. 26C, the album is a locked low-risk application and supports cross-device authentication. Referring to fig. 29E to 29I, for the locked low-risk album application, the electronic device 100 may implement screen projection control of the album based on the identity authentication information of the electronic device 200. As shown in fig. 29E and 29F, when the user 1 uses the electronic device 200 and the electronic device 100 determines that the album triggered by the touch operation is a locked low-risk application, cross-device authentication is started and determined to pass, and the electronic device 200 may display the screen projection content of the electronic device 100, that is, the user interface 36 of the album. As shown in fig. 29H and 29I, when the user 2 uses the electronic device 200 and the electronic device 100 determines that the album triggered by the touch operation is a locked low-risk application, cross-device authentication is started and determined not to pass; the electronic device 100 stops sending the screen projection content to the electronic device 200, and the electronic device 200 stops displaying the screen projection content of the electronic device 100. As shown in fig. 29F and 29G, when the user of the electronic device 200 switches from the user 1 to the user 2 while the user interface 36 of the album is displayed in the screen projection window 403 of the electronic device 200, the electronic device 200 stops displaying the screen projection content of the electronic device 100.
Referring to fig. 31A, the mailbox is a locked high-risk application. Illustratively, as shown in fig. 31A, the user 1 is using the electronic device 200. The user interface 32 displayed in the screen projection window 403 includes an icon 404E of the mailbox. The electronic device 200 may receive a touch operation 1 (e.g., a single-click operation) acting on the icon 404E of the mailbox and notify the electronic device 100 of the touch operation. When the electronic device 100 determines that the mailbox on which the touch operation acts is a locked high-risk application, it stops sending the screen projection content to the electronic device 200. As shown in fig. 31B, the electronic device 200 may stop displaying the screen projection content of the electronic device 100 and display a prompt message "High-risk operation, please unlock on the electronic device 100".
In some embodiments of the present application, the user 1 is the owner of the electronic device 100 and the electronic device 200, and the electronic device 200 may add other authorized users in addition to the user 1 and store the biometric information of the other authorized users. The other authorized users may have the right to unlock the screen, unlock locked applications, and unlock locked application functions. The identity authentication information of the other authorized users also supports cross-device authentication. It can be understood that when the electronic device 200 collects the biometric information of another authorized user, the local authentication result of the electronic device 200 may also pass.
For example, fig. 32A to 32E show the relevant interfaces of the electronic device for adding the biometric information (e.g., facial features) of the authorized user.
As shown in fig. 32A, the user interface 14 includes a setting entry 501 for biometrics and password. The electronic device 100 may receive an input operation (e.g., a touch operation) acting on the setting entry 501, and in response to the input operation, the electronic device may display the setting interface 38 for biometrics and password as shown in fig. 32B. As shown in fig. 32B, the setting interface 38 includes a face recognition setting entry 502, a fingerprint setting entry, a setting entry for changing the lock screen password, and a setting entry for disabling the lock screen password. The electronic device 100 may receive an input operation (e.g., a touch operation) acting on the face recognition setting entry 502, and in response to the input operation, the electronic device may display the setting interface 39 for face recognition as shown in fig. 32C. As shown in fig. 32C, the setting interface 39 may include a control 503 for deleting face data 1, a control 504 for adding a spare face, and a permission setting field for face recognition. Here, the face data 1 is the face data of the user 1, and the control 503 for deleting the face data 1 may be used to delete the face data entered by the user 1. The electronic device 100 may receive an input operation (e.g., a touch operation) acting on the control 504, and in response to the input operation, the electronic device 100 may display the face entry interface 40 as shown in fig. 32D. A window 505 in the face entry interface 40 is used to display the face image captured by the camera. After the face entry of the user 3 is completed, as shown in fig. 32E, the electronic device 100 may display a control 506 for deleting face data 2 in the setting interface 39. The control 506 for deleting the face data 2 may be used to delete the face data entered by the user 3. It can be understood that the face data 2 is the face data of the user 3, and after the face data of the user 3 is added, the user 3 becomes an authorized user for face recognition.
In addition to the manner of adding authorized users shown in fig. 32A to fig. 32E, in the embodiment of the present application, authorized users may also be added in other manners, which is not limited in this embodiment. In addition, the electronic device 200 may also collect other biometric information of the user 3, such as iris, touch screen behavior characteristics, heart rate characteristics, and the like, and the steps and the flow thereof are similar to those in fig. 32A to 32E, and are not limited in detail here.
In some embodiments, for the above four screen-projection control scenarios, after the user adds the face data of the authorized user 3, the electronic device 100 may send the face data of the user 3 to the electronic device 200, so that the user 3 also becomes the authorized user of the electronic device 200. If the user 3 is also an authorized user of the electronic device 200, when the electronic device 200 acquires the face image of the user 1 or the authorized user 3 during the local continuous authentication process by using the face recognition, the local authentication of the electronic device 200 is passed. If the user 3 is not an authorized user of the electronic device 200, the electronic device 100 may obtain the face image of the user 1 or the authorized user 3 collected by the electronic device 200, and the electronic device 100 performs the identity authentication.
For example, in the screen projection control scenario 1, as shown in fig. 33A, the user 1 is using the electronic device 200, and the screen projection window 403 of the electronic device 200 displays the user interface 32. As shown in fig. 33A and 33B, when the user of the electronic device 200 becomes the authorized user 3 from the user 1, the electronic device 100 determines that the cross-device authentication of the electronic device 200 is passed, the electronic device 100 continues to transmit the screen-shot content to the electronic device 200, and the electronic device 200 continues to display the screen-shot content of the electronic device 100. As shown in fig. 33B and 33C, when the user of the electronic device 200 is changed from the authorized user 3 to the unauthorized user 2, the electronic device 100 determines that the cross-device authentication of the electronic device 200 does not pass, the electronic device 100 stops transmitting the screen projection content to the electronic device 200, and the electronic device 200 stops displaying the screen projection content of the electronic device 100.
It should be noted that, in the screen projection control scenarios, the electronic device 100 starts cross-device authentication, and when the electronic device 100 determines that the identity authentication information of the electronic device 200 does not match the preset information, the electronic device 100 stops sending the screen projection content to the electronic device 200; the electronic device 200 may close the screen projection window, or display the screen unlocking interface of the electronic device 200 in full screen, or display the screen unlocking interface of the electronic device 100 in the screen projection window, or display a prompt message "screen projection interrupted" on the screen projection window, which is not specifically limited herein.
Optionally, in some embodiments of the application, in the four screen projection control scenarios, when the electronic device 100 starts cross-device authentication, the electronic device 100 acquires the identity authentication information of the electronic device 200; when the electronic device 100 determines that the identity authentication information of the electronic device 200 matches the preset information and determines that the distance between the electronic device 200 and the electronic device 100 is smaller than the preset distance value, the electronic device 100 continues to send the screen projection content to the electronic device 200.
In some embodiments of the present application, in the above four screen projection control scenarios, when the electronic device 100 starts cross-device authentication, the electronic device 100 acquires the identity authentication information of the electronic device 200; when it is determined that the authentication information of the electronic device 200 matches the preset information and it is determined that the electronic device 200 is in the secure state, the electronic device 100 continues to transmit the screen projection content to the electronic device 200.
In some embodiments of the present application, in the above four screen projection control scenarios, when the electronic device 100 starts cross-device authentication, the electronic device 100 acquires the identity authentication information of the electronic device 200; when it is determined that the identity authentication information of the electronic device 200 matches the preset information and it is determined that the priority of the authentication mode of the local continuous authentication of the electronic device 200 is not lower than that of the electronic device 100, the electronic device 100 continues to send the screen-shot content to the electronic device 200.
In some embodiments of the present application, in the above four screen projection control scenarios, when the electronic device 100 starts cross-device authentication, the electronic device 100 acquires the identity authentication information of the electronic device 200; when it is determined that the identity authentication information of the electronic device 200 matches the preset information and it is determined that the electronic device 200 satisfies at least two of the three conditions of "the distance from the electronic device 100 is smaller than the preset distance value", "is in a secure state", and "the priority of the authentication mode of the local continuous authentication is not lower than that of the electronic device 100", the electronic device 100 continues to send the screen projection content to the electronic device 200. It is understood that the electronic device 100 may determine that the identity authentication information of the electronic device 200 is secure and trusted only when the electronic device 200 satisfies at least two of the above three conditions.
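For illustration, the combined check described in the above embodiments can be sketched as follows (a minimal Python sketch with hypothetical parameter names; distances and priorities are assumed to be comparable numbers):

```python
# Minimal sketch: continue projection only when the identity authentication
# information matches and at least two of the three auxiliary conditions hold.
def should_continue_projection(auth_matches, distance, preset_distance,
                               peer_in_secure_state, peer_auth_priority,
                               local_auth_priority):
    conditions = [
        distance < preset_distance,                  # distance smaller than the preset distance value
        peer_in_secure_state,                        # the electronic device 200 is in a secure state
        peer_auth_priority >= local_auth_priority,   # authentication mode priority not lower than that of 100
    ]
    return auth_matches and sum(conditions) >= 2
```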
In addition to the voice control scenario and the screen projection scenario, the cross-device authentication method provided in the embodiment of the present application may also be applied to cross-device authentication in other scenarios, which is not specifically limited herein.
The foregoing is a scenario in which the cross-device authentication method may be implemented in the present application, and a cross-device authentication system related to the cross-device authentication method provided in the embodiments of the present application is described below.
Illustratively, as shown in fig. 34A, the system includes an electronic device 100 and an electronic device 200 (hereinafter collectively referred to as electronic devices). The continuous authentication modes of the electronic device 100 and the electronic device 200 may include a local continuous authentication mode and a cross-device continuous authentication mode. Continuous authentication refers to authentication by using an authentication factor with a sustainable collection characteristic; the local continuous authentication mode refers to authentication by using an authentication factor with a sustainable collection characteristic collected on the device itself, and the cross-device continuous authentication mode refers to authentication by using an authentication factor with a sustainable collection characteristic collected on another electronic device. The authentication factors with sustainable collection characteristics include information such as touch screen behaviors, human faces, heart rates, and pulses. The electronic device 100 and the electronic device 200 each include a continuous feature collection module, a continuous feature authentication module, a local authentication result management module, an authentication mode management module, and a cross-device authentication information acquisition module. Wherein:
the continuous feature collection module is used for continuously collecting biometric information of a user within the detection range, such as facial features, iris features, and touch screen behavior features; the biometric information is used for authentication modes such as face recognition, iris recognition, and touch screen behavior recognition.
The continuous feature authentication module is used for matching the biometric information collected by the continuous feature collection module with the biometric information pre-stored in the electronic device; when the matching degree reaches a preset threshold 1, the electronic device may determine that the local authentication result is passed.
For example, the electronic device 100 performs local continuous authentication by using face recognition: the electronic device 100 collects facial feature information through the continuous feature collection module, and uses the continuous feature authentication module to determine the matching degree between the collected facial feature information and the facial feature information of the preset user; when the matching degree reaches the preset threshold 1, the local authentication result is passed. For example, the preset threshold 1 is equal to 90%.
It should be noted that there are two cases in which the local authentication does not pass. One case is that the feature collection is interrupted, that is, the continuous feature collection module does not collect any biometric information. For example, the authentication mode of the local continuous authentication is face recognition, no user is within the detection range of the electronic device, and the continuous feature collection module does not collect facial feature information. The other case is that the feature collection is not interrupted, but the matching degree does not reach the preset threshold 1. For example, the authentication mode of the local continuous authentication is face recognition, an unauthorized user is within the detection range of the electronic device, and the matching degree between the facial feature information collected by the continuous feature collection module and the facial feature information of the preset user does not reach the preset threshold 1.
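For illustration, the matching performed by the continuous feature authentication module and the two failure cases can be sketched as follows (a minimal Python sketch with hypothetical names; the matching function depends on the biometric algorithm used):

```python
# Minimal sketch: local authentication passes only when a feature was collected
# and its matching degree reaches preset threshold 1.
PRESET_THRESHOLD_1 = 0.90   # e.g. 90%, as in the example above

def local_authentication_result(collected_feature, pre_stored_feature, matching_degree):
    # matching_degree(a, b) is an assumed function returning a value in [0, 1]
    if collected_feature is None:
        return "not passed: feature collection interrupted"
    if matching_degree(collected_feature, pre_stored_feature) < PRESET_THRESHOLD_1:
        return "not passed: matching degree below preset threshold 1"
    return "passed"
```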
The local authentication result management module is used for managing the local authentication result generated by the continuous feature authentication module. When the local authentication result changes from passed to not passed (i.e., the local continuous authentication is interrupted), the local authentication result management module may notify the authentication mode management module to switch the continuous authentication mode to the cross-device continuous authentication mode. When the local authentication result changes from not passed to passed, the local authentication result management module may notify the authentication mode management module to switch the continuous authentication mode to the local continuous authentication mode.
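For illustration, the mode switching driven by the local authentication result management module can be sketched as follows (a minimal Python sketch; the class and method names are hypothetical):

```python
# Minimal sketch: the authentication mode management module switches between the
# local and cross-device continuous authentication modes when notified of a
# change in the local authentication result.
class AuthenticationModeManager:
    def __init__(self):
        self.mode = "local_continuous"

    def on_local_result_changed(self, passed: bool):
        if not passed and self.mode == "local_continuous":
            self.mode = "cross_device_continuous"   # local continuous authentication interrupted
        elif passed and self.mode == "cross_device_continuous":
            self.mode = "local_continuous"          # local authentication recovered, switch back
```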
In some embodiments of the present application, the local authentication result of the continuous feature authentication module may also be used for screen unlocking, application unlocking, and application function unlocking of the device itself.
The cross-device authentication information acquisition module is used for acquiring the identity authentication information of another connected electronic device when the continuous authentication mode is switched to the cross-device continuous authentication mode. In some embodiments, the module is configured to obtain the local authentication result from the local authentication result management module of the connected other electronic device. In some embodiments, the module is configured to obtain the biometric information collected by the continuous feature collection module of the connected other electronic device. The electronic device can implement voice control, screen projection control, and the like of the device based on the identity authentication information of the other electronic device. It should be noted that, when the electronic device operates in the cross-device continuous authentication mode, the continuous feature collection module, the continuous feature authentication module, and the local authentication result management module still operate, so that when the local authentication result management module determines that the local authentication result of the device passes, it may notify the authentication mode management module to switch back to the local continuous authentication mode.
Illustratively, as shown in fig. 34A, the user 1 is using the electronic device 100, and the user 1 is within the detection range of the local continuous authentication of the electronic device 100 and outside the detection range of the local continuous authentication of the electronic device 200. At this time, the local authentication result of the electronic device 100 is passed, and the continuous authentication mode of the electronic device 100 is the local continuous authentication mode. As shown in fig. 34B, the user 1 switches the device in use to the electronic device 200, and the user 1 is within the detection range of the local continuous authentication of the electronic device 200 and outside the detection range of the local continuous authentication of the electronic device 100. At this time, the local authentication result of the electronic device 100 does not pass, and the local authentication result management module of the electronic device 100 notifies the authentication mode management module to switch the continuous authentication mode to the cross-device continuous authentication mode; the electronic device 100 may then acquire the identity authentication information of the electronic device 200 through the cross-device authentication information acquisition module.
In some embodiments of the present application, as shown in fig. 34C, the electronic device 100 may not have the local persistent authentication capability, and the electronic device 100 includes a feature collection module, a feature authentication module, and a cross-device authentication information acquisition module. The feature collection module and the feature authentication module may be used to implement screen unlocking, application unlocking, and application function unlocking of the electronic device 100. The electronic device 100 may acquire the authentication information of the electronic device 200 through the cross-device authentication information acquisition module, and the electronic device 200 cannot acquire the authentication information of the electronic device 100. The electronic device 100 can still implement voice control and screen projection control of the electronic device 100 based on the identity authentication information of the electronic device 200.
In some embodiments of the present application, neither electronic device 100 nor electronic device 200 may have local persistent authentication capabilities. For example, when the electronic device 100 starts cross-device authentication, it sends an acquisition request to the electronic device 200 to acquire the identity authentication information of the electronic device 200; the electronic device 200 acquires the biometric information after receiving the acquisition request and transmits the biometric information to the electronic device 100, or the electronic device 200 acquires the biometric information and determines a local authentication result after receiving the acquisition request and transmits the local authentication result to the electronic device 100.
In the embodiment of the present application, the communication between different modules may adopt at least one of the following implementations.
Implementation mode 1: and broadcasting notification among systems. For example, the local authentication result management module sends a broadcast to other modules of the device to inform that the local persistent authentication is interrupted.
Implementation mode 2: and calling an interface between the modules for notification. For example, an interface 1 exists between the local authentication result management module and the authentication mode management module, and the local authentication result management module can inform the authentication mode management module of local continuous authentication interruption by calling the interface 1.
Implementation mode 3: the information is written into a storage module (such as a configuration file, a database and the like), and the receiving module actively reads the information from the storage module. For example, the local authentication result management module writes the state of the local continuous authentication interruption into a preset configuration file, and the authentication mode management module determines the local continuous authentication interruption of the device by periodically reading the preset configuration file.
In the embodiment of the present application, at least one of the following implementation manners may be adopted to obtain the authentication information between the electronic device 100 and the electronic device 200. The following description takes the example that the electronic device 100 obtains the local authentication result of the electronic device 200.
Implementation mode 4: the electronic device 200 writes the local authentication result into a distributed database, in which one or more electronic devices (e.g., the electronic device 100) connected to the electronic device 200 can read the local authentication result of the electronic device 200. It should be noted that one or more electronic devices connected to the electronic device 200 can write and read the distributed database.
Implementation mode 5: the electronic device 200 continuously broadcasts the local authentication result of the device to other devices and continuously listens for the broadcast of the authentication result of other devices. The electronic device 100 may obtain the local authentication result of the electronic device 200 by continuously monitoring the authentication result broadcasted by the other device.
Implementation mode 6: the persistent authentication query interface of the electronic device 200 is opened, and one or more electronic devices (e.g., the electronic device 100) connected to the electronic device 200 can query the local authentication result of the electronic device 200 through the query interface.
Based on the cross-device system shown in fig. 34A to 34C, the cross-device authentication method provided by the embodiment of the present application is described below.
For example, fig. 35 shows a cross-device authentication method in a voice control scenario provided in an embodiment of the present application. The above cross-device authentication method includes, but is not limited to, steps S35101 to S35111, wherein:
S35101, the electronic device 200 conducts local continuous authentication and obtains identity authentication information.
In this embodiment, the electronic device 200 may perform local continuous authentication by using one or more authentication modes, such as face recognition, iris recognition, and touch screen behavior recognition. For example, the authentication mode of the local continuous authentication of the electronic device 200 is face recognition. The electronic device 200 may capture an image by using a low-power-consumption camera and perform face recognition on the image; when it is determined through face recognition that the image includes the face of a preset user, the local authentication passes. The preset user may be the user 1 in the foregoing embodiment, or may be the authorized user 3 in the foregoing embodiment. In some embodiments, the electronic device 200 may periodically capture images by using the low-power-consumption camera. In some embodiments, the electronic device 200 may also capture an image by using the low-power-consumption camera upon receiving a designated touch operation of the user (e.g., a touch operation acting on a locked application icon). In some embodiments, the electronic device 200 may also capture an image by using the low-power-consumption camera upon receiving a cross-device authentication request from the electronic device 100. This is not specifically limited herein.
In this embodiment of the application, the electronic device 100 may or may not have a local persistent authentication capability, and is not specifically limited herein.
S35102, the electronic device 100 receives the voice command 1 of the user.
It is understood that the user speaks the voice instruction 1 when the user intends to control the electronic device 100 by voice. For example, referring to fig. 23A, 23C, and 25A, when the user intends to control the electronic device 100 to play song 1, the user may issue the voice instruction 1, i.e., "Xiaozuoxian, play song 1". Referring to fig. 25C and 25E, when the user intends to control the electronic device 100 to send a message to Anna by using the application 1, the user may issue the voice instruction 1, i.e., "Xiaozuoxian, use application 1 to send a message to Anna". In some embodiments, the voice instruction 1 may not include a wake-up word, which is not specifically limited herein.
In the embodiment of the application, the electronic device 100 may receive and recognize the voice instruction 1. It should be noted that the electronic device 100 may be an electronic device capable of performing voice interaction, and the electronic device 100 is provided with a microphone and a speaker, and the microphone is usually kept powered on at all times so as to receive a voice instruction of a user at any time. The electronic device 100 also has voice recognition capability to perform voice recognition on the collected environmental sound. In some embodiments, an Application Processor (AP) of the electronic device 100 remains powered on and the microphone may send the collected voice information (e.g., voice instruction 1) to the AP. The AP recognizes the voice information and can start the function corresponding to the voice information. In some embodiments, the microphone of the electronic device is connected to the microprocessor, the microprocessor remains powered on, and the AP of the electronic device is not powered on. The microphone sends collected voice information (such as a voice instruction 1) to the microprocessor, and the microprocessor recognizes the voice information and determines whether to awaken the AP or not according to the voice information, namely, the AP is powered on. For example, when the microprocessor recognizes that the voice message includes a preset wake-up word, it wakes up the AP. In some embodiments, the AP performs a response operation corresponding to the received voice message after recognizing a preset wakeup word in the voice message. The preset wake-up word may be a default set of the electronic device before leaving a factory, or may be preset in the electronic device by the user according to the user's own needs, and is not specifically limited herein.
S35103, the electronic device 100 recognizes whether the voice command 1 conforms to the preset voiceprint characteristics of the user. If the voice command 1 matches the preset voiceprint characteristics of the user, then S35104 is executed.
In some embodiments of the present application, the preset user includes a user 1, the electronic device 100 prestores voiceprint features recorded by the user 1, the electronic device 100 can match the voiceprint features of the voice instruction 1 with the voiceprint features of the user 1, and when the matching degree reaches a preset threshold 2, the electronic device 100 determines that the voice instruction 1 conforms to the voiceprint features of the user 1. For example, the preset threshold 2 is equal to 95%.
In this embodiment, when the electronic device 100 recognizes that the voice command 1 does not conform to the voiceprint feature of the preset user, the electronic device 100 may discard the relevant data of the voice command 1 and not perform the response operation corresponding to the voice command 1.
In some embodiments, step S35103 may be an optional step, and the electronic device 100 may instead directly determine, after receiving the voice instruction 1, whether the voice instruction 1 triggers a locked low-risk application or a locked low-risk application function.
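For illustration, the voiceprint check in step S35103 can be sketched as follows (a minimal Python sketch; the similarity function is an assumption, and preset threshold 2 follows the 95% example above):

```python
# Minimal sketch of step S35103: the voiceprint feature of voice instruction 1 is
# matched against the preset user's voiceprint and compared with preset threshold 2.
PRESET_THRESHOLD_2 = 0.95   # e.g. 95%, as in the example above

def voiceprint_matches(instruction_voiceprint, preset_voiceprint, similarity) -> bool:
    # similarity(a, b) is an assumed function returning a matching degree in [0, 1]
    return similarity(instruction_voiceprint, preset_voiceprint) >= PRESET_THRESHOLD_2
```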
S35104, the electronic device 100 determines whether the voice instruction 1 triggers a locked low-risk application (or a locked low-risk application function). If the voice command 1 triggers a low risk application for locking (or a low risk application function for locking), S35105 is performed.
In the embodiment of the present application, the locked low-risk application (or the locked low-risk application function) may be set by the user or may be set by the electronic device 100 by default. For the interface implementation of locking an application in the embodiment of the present application, reference may be made to the relevant descriptions in fig. 24A to fig. 24G, and details are not described here again. For the interface implementation of locking an application function in the embodiment of the present application, reference may be made to the relevant descriptions in fig. 24H to fig. 24M, and details are not described here again. For the interface implementation of locking a low-risk application in the embodiment of the present application, reference may be made to the relevant descriptions in fig. 26A to 26C, and details are not described here again. As shown in fig. 26C, application 1 and the album are locked low-risk applications. For example, the electronic device 100 sets by default the payment, transfer, red packet, and similar application functions of all applications as high-risk application functions.
In some embodiments, step S35104 may be an optional step, and the electronic device 100 may perform step S35105 after recognizing that the voice instruction 1 conforms to the voiceprint feature of the preset user, that is, after determining that the local continuous authentication of the electronic device 100 does not pass, start cross-device authentication and acquire the identity authentication information of the electronic device 200. In some embodiments, steps S35103 and S35104 may both be optional steps, and the electronic device 100 may perform step S35105 to start cross-device authentication after receiving the voice instruction 1.
In some embodiments, when electronic device 100 determines that voice command 1 triggers a high-risk locked application (or a high-risk locked application function), electronic device 100 may discard the relevant data of voice command 1 and not perform the response operation corresponding to voice command 1. For example, referring to fig. 27A to 27B, the user intends to control the electronic device 100 to open the payment application through the voice command 1, and since the payment application is a locked high-risk application, the electronic device 100 does not open the payment application, and may issue the prompt message "please unlock" shown in fig. 27B. Referring to fig. 27C to 27D, the user intends to control the electronic device 100 to transfer money through the voice command 1, and since the transfer is a high risk application function of locking, the electronic device 100 does not perform the transfer and may issue a prompt message "please unlock" as shown in fig. 27D.
In some embodiments of the present application, the electronic device 100 determines whether the voice instruction 1 triggers a locked application (or a locked application function), and if the voice instruction 1 triggers a locked application (or a locked application function), the electronic device 100 executes step S35105. For example, referring to the voice control scenario 2 shown in fig. 25A to 25J, when the electronic device 100 determines that the voice instruction 1 triggers a locked application (or a locked application function), the electronic device starts cross-device authentication to acquire the identity authentication information of the electronic device 200. As shown in fig. 24E, application 1 and the album may be locked applications, and application 2 may be an unlocked application; as shown in fig. 24M, the personal center and "send message" of application 2 may be locked application functions.
S35105, when determining that the local persistent authentication of the electronic device 100 does not pass, the electronic device 100 starts cross-device authentication and sends an obtaining request 1 to the electronic device 200, where the obtaining request 1 is used to obtain the identity authentication information of the electronic device 200.
In this embodiment of the present application, when the preset user is not in the detection range of the local persistent authentication of the electronic device 100, the electronic device 100 cannot collect the biometric information of the preset user, and in this case the local authentication result of the electronic device 100 is that the authentication does not pass. For example, if the authentication mode of the local persistent authentication is face recognition, the detection range of the local persistent authentication of the electronic device 100 refers to the shooting range of the low-power-consumption camera used to acquire the face image. For example, referring to fig. 23A to 23D and fig. 25A to 25J, user 1 is the preset user; user 1 is not in the detection range of the electronic device 100, so the local persistent authentication of the electronic device 100 does not pass.
In some embodiments, if the preset user is within the detection range of the local persistent authentication of the electronic device 100, and the electronic device 100 determines that the local authentication result of the electronic device 100 is that the authentication passes, the electronic device 100 may perform the response operation corresponding to the voice instruction 1.
S35106, in response to the obtaining request 1, the electronic device 200 sends the identity authentication information of the electronic device 200 to the electronic device 100, and the electronic device 100 receives the identity authentication information of the electronic device 200.
In this embodiment, the electronic device 100 may also obtain the identity authentication information of the electronic device 200 in other manners. In some embodiments, when performing local persistent authentication, the electronic device 200 broadcasts its identity authentication information in real time, and the electronic device 100 may acquire the identity authentication information of the electronic device 200 by monitoring the authentication information broadcast by other devices (e.g., the electronic device 200) when the local persistent authentication of the electronic device 100 does not pass. In some embodiments, when performing local persistent authentication, the electronic device 200 writes its identity authentication information into a distributed database in real time, and the electronic device 100 may read the identity authentication information of the electronic device 200 from the distributed database when the local persistent authentication of the electronic device 100 does not pass.
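To make the two acquisition alternatives above concrete, the following is a minimal Python sketch in which a plain dictionary stands in for the distributed database (or the broadcast channel) shared by the trusted devices; the record fields, names, and freshness window are illustrative assumptions, not part of the original design.

```python
import time

# A plain dictionary stands in for the distributed database shared by the
# trusted devices; in a real system this would be a synchronized store.
distributed_db = {}

def publish_local_auth_result(device_id: str, passed: bool, method: str) -> None:
    """Device 200 writes its continuous-authentication result in real time."""
    distributed_db[device_id] = {
        "passed": passed,
        "method": method,          # e.g. "face", "touch_behavior"
        "timestamp": time.time(),  # lets the reader discard stale results
    }

def read_peer_auth_result(device_id: str, max_age_s: float = 5.0):
    """Device 100 reads the peer's result when its own local authentication fails."""
    record = distributed_db.get(device_id)
    if record is None or time.time() - record["timestamp"] > max_age_s:
        return None  # no fresh identity authentication information available
    return record

# Example: device 200 keeps authenticating user 1 by face recognition;
# device 100 falls back to this record when its own authentication fails.
publish_local_auth_result("device_200", passed=True, method="face")
print(read_peer_auth_result("device_200"))
```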
S35107, the electronic device 100 determines whether the authentication information of the electronic device 200 matches the preset information. If the authentication information of the electronic device 200 matches the preset information, S35108 is performed.
In this embodiment, when the electronic device 100 determines that the identity authentication information of the electronic device 200 does not match the preset information, the electronic device 100 may discard the relevant data of the voice command 1, and not perform the response operation corresponding to the voice command 1. For example, referring to fig. 23C and 23D, fig. 25C and 25D, and fig. 25G and 25H, when the user 1 is not in the detection range of the electronic device 200, the local persistent authentication of the electronic device 200 is not passed; when the electronic device 100 determines that the local persistent authentication of the electronic device 200 does not pass, a voice response "please unlock" may be issued.
S35108, when the identity authentication information of the electronic device 200 is the local authentication result of the electronic device 200, the electronic device 100 determines whether the authentication method of the electronic device 200 has a lower priority than the authentication method of the local persistent authentication of the electronic device 100. If the priority of the authentication method of the electronic device 200 is not lower, S35109 is executed.
In one implementation, authentication methods of a plurality of devices connected to the electronic device 100 are prestored in the electronic device 100. In another implementation, the electronic device 100 may send an inquiry request of an authentication method to the electronic device 200 and receive an identification of the authentication method sent by the electronic device 200. In another implementation manner, the local authentication result sent by the electronic device 200 also carries an identifier of an authentication manner of the electronic device 200, and the electronic device 100 may know the authentication manner of the electronic device 200 based on the local authentication result of the electronic device 200.
In this embodiment of the present application, the priority of the authentication mode of the local persistent authentication may be set by the electronic device 100 or may be set by the user. For example, the priorities of the authentication modes of the local persistent authentication, sorted from highest to lowest, are: face recognition (iris recognition), heart rate detection, gait recognition, and touch screen behavior recognition. Optionally, in some embodiments of the present application, combinations of authentication modes may also have different priorities; for example, the priority of face recognition + fingerprint recognition is higher than the priority of gait recognition + touch screen behavior recognition. For example, the authentication mode of the electronic device 100 is face recognition and the authentication mode of the electronic device 200 is touch screen behavior recognition; after the electronic device 100 starts cross-device authentication, if it determines that the priority of the authentication mode of the electronic device 200 is lower than the priority of the authentication mode of the electronic device 100, the electronic device 100 may determine that the cross-device authentication of the electronic device 200 does not pass.
It can be understood that when the priority of the authentication mode of the electronic device 200 is lower, the electronic device 100 may determine that the identity authentication information of the electronic device 200 is not safe, and the electronic device 100 may discard the data related to the voice instruction 1 and not perform the response operation corresponding to the voice instruction 1.
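The priority comparison in S35108 can be sketched as follows; the numeric priority values and the rule of summing priorities for combined authentication modes are assumptions made for illustration only.

```python
# Illustrative priority table; a higher number means a higher priority. The
# exact ordering and the rule for combinations are assumptions in this sketch.
AUTH_PRIORITY = {
    "face": 4,
    "iris": 4,
    "fingerprint": 3,
    "heart_rate": 2,
    "gait": 1,
    "touch_behavior": 0,
}

def combined_priority(methods):
    """Score a combination of authentication modes by summing priorities."""
    return sum(AUTH_PRIORITY[m] for m in methods)

def peer_priority_acceptable(local_methods, peer_methods) -> bool:
    """Cross-device authentication is rejected if the peer's authentication
    mode has a lower priority than the local persistent authentication."""
    return combined_priority(peer_methods) >= combined_priority(local_methods)

# Device 100 authenticates by face, device 200 only by touch-screen behavior:
# the peer result is considered insufficient and cross-device auth fails.
print(peer_priority_acceptable(["face"], ["touch_behavior"]))   # False
print(peer_priority_acceptable(["gait", "touch_behavior"],
                               ["face", "fingerprint"]))        # True
```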
S35109, the electronic device 100 determines whether the distance from the electronic device 200 is less than the preset distance 1. If the distance from the electronic device 200 is less than the preset distance 1, S35110 is performed.
In the embodiment of the present application, the electronic device 100 may measure the distance between the electronic device 100 and the electronic device 200 by using a positioning technology such as Bluetooth positioning, UWB positioning, or Wi-Fi positioning.
For example, the electronic device 100 measures the distance to the electronic device 200 using Bluetooth positioning. Specifically, in one implementation, the electronic device 100 sends a measurement request to the electronic device 200, and the electronic device 200 sends a measurement response to the electronic device 100 after a preset time period based on the received measurement request. The electronic device 100 may determine the one-way flight time of the signal based on the sending time of the measurement request, the receiving time of the measurement response, and the preset time period, and may further determine the distance between the electronic device 200 and the electronic device 100 based on the one-way flight time and the propagation speed of the electromagnetic wave.
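A small sketch of the distance estimate described above, assuming the peer replies exactly after the agreed preset delay so that the remaining round-trip time corresponds to two one-way flights of the signal; the function and variable names are illustrative.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def estimate_distance_m(t_send_s: float, t_receive_s: float,
                        preset_delay_s: float) -> float:
    """Estimate the distance to the peer from one request/response exchange.

    The peer answers exactly `preset_delay_s` after receiving the measurement
    request, so the remaining time is two one-way flights of the radio signal.
    """
    round_trip_s = t_receive_s - t_send_s
    one_way_flight_s = (round_trip_s - preset_delay_s) / 2.0
    return max(0.0, one_way_flight_s * SPEED_OF_LIGHT_M_PER_S)

# Example: a 200 ms agreed delay and a measured round trip of ~200.000020 ms
# correspond to roughly 3 m between the two devices.
print(round(estimate_distance_m(0.0, 0.200_000_020, 0.200), 1))
```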
It can be understood that when the electronic device 100 determines that the distance of the electronic device 200 is greater than or equal to the preset distance 1, the electronic device 100 determines that the identity authentication information of the electronic device 200 is unsafe, and the electronic device 100 may discard the data related to the voice command 1 and not perform the response operation corresponding to the voice command 1.
S35110, the electronic device 100 determines whether the electronic device 200 is in a secure state. If the electronic apparatus 200 is in the safe state, S35111 is performed.
In one implementation, the electronic device 100 may send an inquiry request to the electronic device 200, the inquiry request being used to inquire whether the electronic device 200 is in a secure state; for example, when the electronic apparatus 200 determines that the electronic apparatus 200 is in the non-root state, an inquiry response indicating that the electronic apparatus 200 is in the security state is transmitted to the electronic apparatus 100. In another implementation manner, the identity authentication information sent by the electronic device 200 also carries an identifier of the security state of the electronic device 200, and the electronic device 100 may learn the security state of the electronic device 200 based on the identity authentication information of the electronic device 200. The determination as to whether the electronic device is in the safe state may refer to the foregoing definition related to the safe state, and is not described herein again.
It is understood that when the electronic device 100 determines that the electronic device 200 is not in the secure state, the electronic device 100 determines that the authentication information of the electronic device 200 is not secure, and the electronic device 100 may discard the data related to the voice command 1 and not perform the response operation corresponding to the voice command 1.
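A minimal sketch of the secure-state check in S35110; the status flags (root state, verified boot, open debug bridge) are illustrative assumptions about what a device might report, not fields defined in this application.

```python
def is_device_in_secure_state(device_status: dict) -> bool:
    """Sketch of the secure-state check: the device reports a few flags and
    the verifier treats it as secure only if none of them indicate tampering.
    The flag names are illustrative."""
    return (not device_status.get("rooted", False)
            and device_status.get("verified_boot", True)
            and not device_status.get("debug_bridge_open", False))

# Device 200 reports a non-root state with verified boot: treated as secure,
# so its identity authentication information may be trusted.
print(is_device_in_secure_state({"rooted": False, "verified_boot": True}))
```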
In the embodiment of the present application, the execution sequence of S35108 to S35110 is not specifically limited. For example, the electronic device 100 may also perform S35108 to S35110 simultaneously. For example, the electronic device 100 may first determine the security state of the electronic device 200; when determining that the electronic device 200 is in the secure state, determine the priority of the authentication mode of the electronic device 200; when the priority of the authentication mode of the electronic device 200 is not lower, then determine the distance to the electronic device 200; and when the distance to the electronic device 200 is smaller than the preset distance 1, the electronic device 100 executes the response operation corresponding to the voice instruction 1.
In some embodiments, at least one of steps S35108 to S35110 is an optional step. For example, steps S35108 to S35110 are optional steps: after the electronic device 100 receives the identity authentication information of the electronic device 200, when it is determined that the identity authentication information of the electronic device 200 matches the preset information, the response operation corresponding to the voice instruction 1 is executed. For another example, S35108 is an optional step: after the electronic device 100 receives the identity authentication information of the electronic device 200, when it is determined that the identity authentication information of the electronic device 200 matches the preset information, that the distance to the electronic device 200 is smaller than the preset distance 1, and that the electronic device 200 is in the secure state, the electronic device 100 determines that the identity authentication information of the electronic device 200 is secure and reliable, and the electronic device 100 performs the response operation corresponding to the voice instruction 1.
S35111, the electronic device 100 executes response operation corresponding to the voice command 1.
For example, in the aforementioned voice control scenario 1, music is an unlocked application. Referring to fig. 23A and 23B, the voice instruction 1 is "Xiaoyi, play song 1"; when the electronic device 100 determines that the cross-device authentication passes, the electronic device 100 plays song 1 and may issue the voice response "OK, playing song 1" shown in fig. 23B. In the aforementioned voice control scenario 2, application 1 is a locked application. Referring to fig. 25E and fig. 25F, the voice instruction 1 is "Xiaoyi, use application 1 to send 'Are you home' to Anna"; when the electronic device 100 determines that application 1 is a locked application and that the cross-device authentication passes, the electronic device 100 sends the message "Are you home" to Anna through application 1 and may issue the voice response "'Are you home' has been sent to Anna" shown in fig. 25F. In the aforementioned voice control scenario 3, application 1 is a locked low-risk application. Referring to fig. 25E and 25F, the voice instruction 1 is "Xiaoyi, use application 1 to send 'Are you home' to Anna"; when the electronic device 100 determines that application 1 is a locked low-risk application and that the cross-device authentication passes, the electronic device 100 sends the message "Are you home" to Anna through application 1 and may issue the voice response "'Are you home' has been sent to Anna" shown in fig. 25F.
For example, fig. 36 shows a cross-device authentication method in a screen-projection control scenario provided in an embodiment of the present application. The cross-device authentication method includes, but is not limited to, steps S36201 to S36208, wherein:
S36201, the electronic device 200 performs local persistent authentication to obtain identity authentication information.
In this embodiment of the present application, for the specific implementation in which the electronic device 200 performs local persistent authentication and acquires the identity authentication information, reference may be made to the description of S35101 in the method embodiment of fig. 35, and details are not described herein again.
S36202, the electronic device 100 receives a screen-projecting operation of the user.
The related content of the screen projection setting can refer to the related description of fig. 28A to 28H, and is not described herein again. For example, fig. 28A to 28D are diagrams illustrating a screen projection manner of the electronic device 100 actively projecting the screen to the electronic device 200 according to an embodiment of the present application, where the screen projection operation may be that the user clicks an option 401A of the electronic device 200 shown in fig. 28B. Fig. 28E to fig. 28H are diagrams illustrating a screen-casting manner in which the electronic device 200 actively obtains screen-casting content of the electronic device 100 according to an embodiment of the present application, where the screen-casting operation may be that the user clicks the option 410A of the electronic device 100 shown in fig. 28G.
S36203, in response to the screen-projecting operation, the electronic apparatus 100 transmits the screen-projecting content 1 to the electronic apparatus 200.
S36204, after receiving the screen projection content 1, the electronic device 200 displays a screen projection window 1, where the screen projection window 1 displays the screen projection content 1.
For example, the screen projection window 1 may be the screen projection window 403 shown in fig. 28D, and the screen projection content 1 may be the user interface 32 shown in fig. 28D. The screen projection window 1 may be a screen projection window 411 shown in fig. 28G, and the screen projection content 1 may be the user interface 32 shown in fig. 28G.
S36205, the electronic device 100 sends an obtaining request 2 to the electronic device 200, where the obtaining request 2 is used to obtain the identity authentication information of the electronic device 200.
S36206, in response to the received obtaining request 2, the electronic device 200 sends the identity authentication information of the electronic device 200 to the electronic device 100.
In this embodiment of the application, a specific implementation that the electronic device 200 sends the identity authentication information of the electronic device 200 to the electronic device 100 may refer to the description related to S35106 in the method embodiment of fig. 35, and is not described herein again.
S36207, when the electronic device 100 determines that the identity authentication information of the electronic device 200 does not match the preset information, the electronic device 100 stops sending the screen projection content to the electronic device 200.
S36208, the electronic device 200 stops displaying the screen projection content 1.
For example, referring to fig. 28I and 28J, after the user of the electronic device 200 is switched from the user 1 to the user 2, the cross-device authentication is not passed, the electronic device 200 stops displaying the screen projection content of the electronic device 100, and the prompt message "screen projection interruption" shown in fig. 28J may be displayed on the screen projection window 403.
In some embodiments of the present application, after step S36203, the cross-device authentication method described above may further include, but is not limited to, at least one of steps S36209 to S36221. Specifically:
S36209, the electronic device 200 receives a touch operation 1 performed by the user on the screen projection content 1.
For example, referring to fig. 29A and 29C, the touch operation 1 may be that the user clicks an icon 404C of music; referring to fig. 29E and 29G, the touch operation 1 may be that the user clicks an icon 404B of the album; referring to fig. 30A, the touch operation 1 may be that the user clicks an icon 404D of the application 2; referring to fig. 31A, the touch operation 1 may be that the user clicks an icon 404E of the mailbox.
S36210, the electronic device 200 sends the touch parameter of the touch operation 1 to the electronic device 100, and the electronic device 100 receives the touch parameter of the touch operation 1.
S36211, based on the touch parameter of the touch operation 1, the electronic device 100 determines whether the touch operation 1 triggers a locked low-risk application or application function. If the touch operation 1 does not trigger a locked low-risk application or application function, S36212 is executed; if the touch operation 1 triggers a locked low-risk application or application function, S36213 is executed.
In some embodiments, the electronic device 200 determines a touch parameter of the touch operation 1 on the screen projection content 1 and sends the touch parameter to the electronic device 100, and the electronic device 100 determines a trigger event corresponding to the touch operation 1 based on the touch parameter of the touch operation 1. The touch parameter may include a touch coordinate, a touch duration, and the like. For example, referring to fig. 29E, the electronic device 200 acquires the touch coordinates and the touch duration of the user on the user interface 32, and the electronic device 200 sends the touch coordinates and the touch duration to the electronic device 100; based on the touch coordinates and the touch duration of the touch operation 1, the electronic device 100 determines that the trigger event corresponding to the touch operation 1 is a single click operation on the icon 404B of the album. Referring to fig. 26C, the photo album is a locked low-risk application, and the electronic device 100 determines that the touch operation 1 triggers the locked low-risk application. The touch parameter may include other parameters besides the touch coordinate and the touch duration, and is not limited herein.
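The mapping from touch parameters to a trigger event, and the resulting branch among S36211/S36212/S36213, can be sketched as follows; the icon layout, the gesture threshold, and the sets of locked applications are assumptions for illustration.

```python
# Illustrative hit-testing of the touch parameters reported by device 200.
# The icon layout and the sets of locked applications are assumptions.
ICON_BOUNDS = {                        # name: (x_min, y_min, x_max, y_max)
    "music":   (0,   0, 100, 100),
    "album":   (120, 0, 220, 100),
    "mailbox": (240, 0, 340, 100),
}
LOCKED_LOW_RISK = {"album"}
LOCKED_HIGH_RISK = {"mailbox"}

def resolve_trigger(x: float, y: float, duration_ms: float):
    """Map touch coordinates/duration to a trigger event on the projected UI."""
    gesture = "click" if duration_ms < 500 else "long_press"
    for name, (x0, y0, x1, y1) in ICON_BOUNDS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            if name in LOCKED_HIGH_RISK:
                return gesture, name, "stop_projection"          # S36212
            if name in LOCKED_LOW_RISK:
                return gesture, name, "start_cross_device_auth"  # S36213
            return gesture, name, "execute_directly"
    return gesture, None, "ignore"

print(resolve_trigger(150, 50, 120))  # ('click', 'album', 'start_cross_device_auth')
```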
In some embodiments, step S36211 may be an optional step, and after the electronic device 100 receives the touch parameter of the touch operation 1, step S36213 is executed, that is, cross-device authentication is started to obtain the identity authentication information of the electronic device 200.
S36212, the electronic device 100 stops transmitting the screen projection content to the electronic device 200.
Based on the touch parameter of the touch operation 1, when the electronic device 100 determines that the touch operation 1 triggers a locked high-risk application or application function, the electronic device 100 stops sending screen projection content to the electronic device 200, and the electronic device 200 stops displaying the screen projection content of the electronic device 100. For example, referring to fig. 31A, based on the touch parameter of the touch operation 1, the electronic device 100 may determine that the trigger event corresponding to the touch operation 1 is a single-click operation of the icon 404E acting on the mailbox. The mailbox is a locked high-risk application, and the electronic device 100 determines that the touch operation 1 triggers the locked high-risk application.
S36213, the electronic device 100 sends an obtaining request 3 to the electronic device 200, where the obtaining request 3 is used to obtain the identity authentication information of the electronic device 200.
S36214, in response to the received obtaining request 3, the electronic device 200 sends the authentication information of the electronic device 200 to the electronic device 100, and the electronic device 100 receives the authentication information of the electronic device 200.
In this embodiment of the application, a specific implementation that the electronic device 200 sends the identity authentication information of the electronic device 200 to the electronic device 100 may refer to the description related to S35106 in the method embodiment of fig. 35, and is not described herein again.
S36215, the electronic device 100 determines whether the authentication information of the electronic device 200 matches the preset information. If the identity authentication information of the electronic device 200 matches the preset information, S36216 is executed; if the authentication information of the electronic device 200 does not match the preset information, S36212 is performed.
In this embodiment, when the electronic device 100 determines that the identity authentication information of the electronic device 200 does not match the preset information, the electronic device 100 stops sending the screen projection content to the electronic device 200, and the electronic device 200 stops displaying the screen projection content of the electronic device 100. For example, for the aforementioned screen projection control scenario 2, referring to fig. 29C and 29D and fig. 29H and 29I, an unauthorized user 2 is using the electronic device 200, so the identity authentication information of the electronic device 200 does not match the preset information; the electronic device 100 stops sending the screen projection content to the electronic device 200, and the electronic device 200 stops displaying the screen projection content of the electronic device 100 in the screen projection window 403, or the electronic device 200 closes the screen projection window 403.
S36216, the electronic device 100 determines whether the authentication method of the electronic device 200 has a lower priority than the authentication method of the local persistent authentication of the electronic device 100. If the priority of the authentication method of the electronic device 200 is not lower, S36217 is performed; if the priority of the authentication method of the electronic apparatus 200 is lower, S36212 is executed.
In this embodiment of the application, the specific implementation that the electronic device 100 determines whether the priority of the authentication mode of the electronic device 200 is lower may refer to the related description of S35108 in the method embodiment of fig. 35, which is not described herein again.
S36217, the electronic device 100 determines whether the distance to the electronic device 200 is less than a preset distance 1. If the distance to the electronic device 200 is less than the preset distance 1, performing S36218; if the distance from the electronic apparatus 200 is not less than the preset distance 1, S36212 is performed.
In the embodiment of the present application, the specific implementation that the electronic device 100 determines whether the distance to the electronic device 200 is less than the preset distance 1 may refer to the description of S35109 in the method embodiment of fig. 35, which is not described herein again.
S36218, the electronic device 100 determines whether the electronic device 200 is in a safe state. If the electronic device 200 is in the safe state, performing S36219; if the electronic apparatus 200 is not in the safe state, S36212 is executed.
In this embodiment of the application, for specific implementation of determining whether the electronic device 200 is in the safe state by the electronic device 100, reference may be made to the description of S35110 in the method embodiment of fig. 35, which is not described herein again.
In the embodiment of the present application, the execution sequence of step S36216, step S36217, and step S36218 is not particularly limited. The electronic device 100 may also perform step S36216 to step S36218 simultaneously. In some embodiments, at least one of step S36216 to step S36218 is an optional step. For example, steps S36216 to S36218 are optional steps: after the electronic device 100 receives the identity authentication information of the electronic device 200, when it is determined that the identity authentication information of the electronic device 200 matches the preset information, the electronic device 100 executes the function triggered by the touch operation 1 and draws the screen projection content 2 corresponding to the execution of the function. For example, step S36216 is an optional step: after the electronic device 100 receives the identity authentication information of the electronic device 200, when it is determined that the identity authentication information of the electronic device 200 matches the preset information, that the distance to the electronic device 200 is smaller than the preset distance 1, and that the electronic device 200 is in the secure state, the electronic device 100 determines that the identity authentication information of the electronic device 200 is secure and reliable, executes the function triggered by the touch operation 1, and draws the screen projection content 2 corresponding to the execution of the function.
S36219, the electronic device 100 executes a response operation corresponding to the touch operation 1, and draws corresponding screen projection content 2.
S36220, the electronic device 100 sends the screen projection content 2 to the electronic device 200, and the electronic device 200 receives the screen projection content 2 sent by the electronic device 100.
S36221, the electronic device 200 displays the screen projection content 2 in the screen projection window 1.
For the screen projection control scenario 3, the album is a locked application; for the screen projection control scenario 4, the album is a locked low-risk application. For example, referring to fig. 29E and 29F, the electronic device 100 determines that the touch operation 1 is a single-click operation on the icon 404B of the album and that the album triggered by the touch operation 1 is a locked application (or a locked low-risk application); when the electronic device 100 acquires the identity authentication information of the electronic device 200 and determines that the cross-device authentication passes, the electronic device 100 draws the user interface 36 of the album and sends the user interface 36 to the electronic device 200, and as shown in fig. 29F, the electronic device 200 displays the user interface 36 in the screen projection window 403. The screen projection window 1 may refer to the screen projection window 403 shown in fig. 29F, and the screen projection content 2 may refer to the user interface 36 shown in fig. 29F.
Based on the above method, when the local authentication of the electronic device 100 fails, the electronic device 100 may initiate cross-device authentication, that is, determine whether the authentication information of the at least one electronic device (e.g., the electronic device 200) matches with the preset information, when the authentication information matches, the electronic device 100 determines that the cross-device authentication passes, and when the authentication information does not match, the electronic device 100 determines that the cross-device authentication fails. Therefore, the identity authentication of the equipment can be realized through the local authentication result or the biological characteristic information of other credible equipment, the convenience of the identity authentication is effectively improved, and better user experience is created.
Implementation mode four
Based on the steps shown in fig. 3, in S301, the receiving, by the first electronic device, an authentication request includes: the first electronic device receives a target operation acting on a first interface of the first electronic device, the target operation is used for triggering access to a first service, and the first service is associated with a second electronic device. In the above S302, the determining, by the first electronic device, the authentication manner corresponding to the first service includes: the first electronic device obtains a target authentication mode corresponding to the first service. In the above S303, the scheduling, by the first electronic device, the M electronic devices to authenticate the first service according to the authentication method includes: the first electronic equipment collects authentication information according to a target authentication mode; and then the first electronic device sends an authentication request to the second electronic device, wherein the authentication request comprises authentication information, the authentication request is used for requesting the second electronic device to authenticate the first service, and the second electronic device is contained in the M electronic devices.
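A minimal sketch of the flow just described, with the first electronic device collecting the authentication information and the second electronic device performing the authentication; the class and field names are illustrative, and the byte-comparison matcher merely stands in for a real biometric comparison.

```python
from dataclasses import dataclass

@dataclass
class AuthRequest:
    service: str          # the first service, e.g. "unlock" or "payment"
    method: str           # target authentication mode, e.g. "face"
    auth_info: bytes      # authentication information collected locally

class FirstDevice:
    """Sketch of the first-device side of implementation mode four."""

    def __init__(self, second_device, collectors, auth_mode_table):
        self.second_device = second_device        # proxy for the second device
        self.collectors = collectors              # method -> collect() callable
        self.auth_mode_table = auth_mode_table    # service -> method

    def on_target_operation(self, service: str):
        method = self.auth_mode_table[service]        # obtain target auth mode
        auth_info = self.collectors[method]()         # collect locally
        request = AuthRequest(service, method, auth_info)
        return self.second_device.authenticate(request)  # send to second device

class SecondDevice:
    def __init__(self, templates):
        self.templates = templates                # method -> enrolled template

    def authenticate(self, request: AuthRequest) -> bool:
        # Placeholder matcher: a real device compares biometrics, not raw bytes.
        return self.templates.get(request.method) == request.auth_info

phone = SecondDevice(templates={"face": b"enrolled-face"})
pc = FirstDevice(phone,
                 collectors={"face": lambda: b"enrolled-face"},
                 auth_mode_table={"unlock": "face"})
print(pc.on_target_operation("unlock"))  # True when the collected face matches
```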
The authentication method provided in the fourth implementation manner of the embodiment of the present application is exemplarily described below with reference to the first to third specific embodiments.
Example one
Fig. 37 to 39D are related to the first embodiment.
Fig. 37 is a flowchart illustrating a cross-device authentication method according to an embodiment of the present application. In this embodiment, the first electronic device is a PC and the second electronic device is a mobile phone, and this example is used below to describe the cross-device authentication method provided in this embodiment in detail. Illustratively, the mobile phone and the PC perform multi-screen cooperative office, the input device of the PC is a mouse, and the PC further has a camera. It should be noted that the cross-device authentication method shown in this embodiment is also applicable to other types of electronic devices.
As shown in fig. 37, the cross-device authentication method provided by the embodiment of the present application may include the following S3701-S3705.
S3701, the PC receives a target operation of a user on a first object in a first window, wherein the first window is a display window projected from the mobile phone to the PC.
The target operation is used for triggering the execution of the first service. The first service is a service of a mobile phone. For example, the first service is an unlocking service of the mobile phone. Illustratively, as shown in fig. 39A, a first window of the mobile phone is displayed in the PC interface, and the first window is a lock screen interface 3902. Assuming that the user wants to unlock the mobile phone, the user may click the face unlocking control 3903 in the screen locking interface by operating the mouse of the PC. That is, the PC may receive a click operation of the user on the face unlocking control 3903 in the lock screen interface 3902.
Optionally, the PC may also trigger unlocking of the lock screen interface 3902 without receiving the target operation of the user. For example, the PC may monitor the user through the camera 3901 and execute the subsequent steps after detecting the face of the user.
S3702, in response to the target operation, the PC obtains a target authentication method corresponding to the first service.
Specifically, the first service is a service in the second electronic device, such as unlocking, payment, and the like.
In one possible mode, the PC sends an authentication request to the mobile phone, where the authentication request includes information about the first service that is currently accessed; the mobile phone determines the authentication mode corresponding to the first service, and the PC then obtains the authentication mode corresponding to the first service from the mobile phone. For example, the mobile phone determines that the target authentication mode of the face unlocking service is face authentication.
Optionally, the mobile phone may further query the resource pool and determine that the PC side has a face acquisition capability. The mobile phone therefore determines that the authentication capability corresponding to the face unlocking operation is the face authentication capability on the mobile phone side, and that the acquisition capability corresponding to the face unlocking operation is the face acquisition capability on the PC side. The mobile phone may send the determined face authentication mode, together with information such as the face acquisition capability and the face authentication capability associated with the face authentication mode, to the PC, so that the PC schedules the face acquisition capability to perform face acquisition.
In another possible mode, the PC and the mobile phone may synchronize resources in advance to obtain a resource pool, where the resource pool includes the authentication modes corresponding to various operations in the mobile phone, so that the PC can query the local resource pool and determine the authentication mode corresponding to the target operation. Optionally, the resource pool may further include templates of various kinds of authentication information, such as a fingerprint template and a face template.
Specifically, in this step, the specific manner in which the PC or the mobile phone determines the target authentication manner corresponding to the first service may be multiple manners:
First, as shown in fig. 38A, the resource pool in the PC or the mobile phone may include a preset configuration table, where the preset configuration table includes the correspondence between various operations/services and authentication modes. The PC or the mobile phone may determine the target authentication mode according to the first service by querying the local configuration table; for example, if the target operation is face unlocking of the mobile phone screen, the target authentication mode is face authentication, and if the first service is a payment service, the target authentication mode is face authentication or fingerprint authentication.
It should be understood that, in this embodiment, in the case that the target authentication manner is determined by the PC, the PC may obtain a preset configuration table from the mobile phone before executing the above S3701, and then determine the target authentication manner corresponding to the first service by using the configuration table obtained from the mobile phone. Or, the mobile phone may upload a preset configuration table to the server, and the PC acquires the preset configuration table from the server, and then determines a target authentication mode corresponding to the first service by using the configuration table acquired from the server. In one possible case, the preset configuration table may be generated according to the configuration of a user using the mobile phone, and in another possible case, the preset configuration table may also be generated according to the configuration of a service requirement by a developer.
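The first manner (a preset configuration table) can be sketched as a simple lookup; the table contents and the rule of filtering by the collection capabilities available on the querying device are assumptions for illustration.

```python
# Illustrative preset configuration table (cf. fig. 38A); the service names
# and the admissible authentication modes are assumptions for this sketch.
PRESET_CONFIG_TABLE = {
    "screen_unlock": ["face"],
    "payment":       ["face", "fingerprint"],
    "open_door":     ["face", "fingerprint"],
    "turn_on_light": ["password"],
}

def target_auth_modes(service: str, available_collectors: set):
    """Pick the admissible authentication modes that the querying device can
    actually collect, in the order listed in the configuration table."""
    return [m for m in PRESET_CONFIG_TABLE.get(service, [])
            if m in available_collectors]

# A PC with only a camera resolves the payment service to face authentication.
print(target_auth_modes("payment", {"face"}))          # ['face']
print(target_auth_modes("payment", {"fingerprint"}))   # ['fingerprint']
```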
In the second mode, as shown in fig. 38B, the PC or the mobile phone may determine the security risk level corresponding to the operation/service, and then determine the authentication mode corresponding to the security risk level.
It should be understood that, in one possible scenario, a developer may predefine a complete set of operations/services (e.g., opening a door, turning on a light, paying, accessing lockers), and then establish a fixed mapping between the actions in the complete set of operations/services and the security risk levels. For example, the security risk level corresponding to the door opening operation is defined as a high security risk level, and the security risk level corresponding to the light turning-on operation is defined as a low security risk level. Illustratively, for a complete set of predefined operations (opening a door, turning on a light, paying, accessing secure files, accessing general files, etc.) in the PC or the mobile phone, a mapping table between operations and security risk levels is established, as shown in table 15 below.
Table 15

Operation/service           Security risk level
Door opening                High
Turning on the light        Low
Payment                     High
Accessing secure documents  High
Accessing general files     Medium
In another possible scenario, the developer may dynamically determine the security risk level corresponding to the first service according to an analysis policy. For example, the analysis policy may be: determine a correlation coefficient between the first service and the private data of the user; when the correlation coefficient is low, determine that the security risk level corresponding to the target operation is a low security risk level; when the correlation coefficient is medium, determine that the security risk level corresponding to the target operation is a medium security risk level; and when the correlation coefficient is high, determine that the security risk level corresponding to the target operation is a high security risk level. For example, the PC or the mobile phone determines, according to the analysis policy, that the correlation coefficient between the door opening (unlocking) operation shown in fig. 38B and the private data of the user is large, and therefore determines that the security risk level of the door opening operation is a high security risk level. Illustratively, the PC analyzes the degree of correlation between the target operation and the private data of the user by using artificial intelligence, and dynamically determines the security risk level of the currently executed service action according to the data analysis result, for example, as shown in table 16 below.
Table 16

Operation/service     Data involved                 Security risk level
Door opening          Home data                     High
Payment               Payment data, user password   High
Turning on the light  Home data                     Low
In addition, it should be understood that in the second mode, the correspondence between security risk levels and authentication modes needs to be established in advance. A developer may define the reliability of different authentication modes in advance and then match the security risk levels with the authentication modes according to the reliability: a higher security risk level requires an authentication mode with higher reliability, and a lower security risk level allows an authentication mode with lower reliability. In this way, since the first service is a service of the mobile phone, the mobile phone may first determine the security risk level corresponding to the operation/service and then determine the authentication mode corresponding to the security risk level, and the mobile phone may send the determined target authentication mode corresponding to the first service to the PC.
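The second manner can be sketched as a two-step lookup, first mapping the service to a security risk level and then selecting authentication modes whose reliability meets that level; all table values below are illustrative assumptions rather than values defined in this application.

```python
# Second manner: map the service to a security risk level, then pick the
# authentication modes whose reliability is high enough. Both tables are
# illustrative assumptions.
RISK_LEVEL = {"open_door": "high", "payment": "high",
              "turn_on_light": "low", "access_general_files": "medium"}

RELIABILITY = {"face": 3, "fingerprint": 3, "voiceprint": 2,
               "gait": 1, "password": 2}

REQUIRED_RELIABILITY = {"high": 3, "medium": 2, "low": 1}

def auth_modes_for(service: str):
    """Return every authentication mode reliable enough for the service."""
    needed = REQUIRED_RELIABILITY[RISK_LEVEL.get(service, "high")]
    return sorted(m for m, r in RELIABILITY.items() if r >= needed)

print(auth_modes_for("payment"))        # ['face', 'fingerprint']
print(auth_modes_for("turn_on_light"))  # every listed mode qualifies
```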
And S3703, the PC collects the authentication information of the user according to the target authentication mode.
In this embodiment, the authentication information is user information that needs to be authenticated by the target authentication method, for example, information such as a fingerprint of a user, a face of the user, or a password input by the user.
Exemplarily, as shown in fig. 39A, when the PC receives a click operation of a user on the face unlocking control 3903 in the lock screen interface 3902, the PC determines that the target authentication mode is face authentication, and the authentication information of the user to be acquired is a face image. The PC invokes the camera 3901 of the PC to capture facial images of the user.
S3704, the PC sends an authentication request message to the mobile phone, where the authentication request message includes authentication information, and the authentication request is used to request authentication of the first service.
Illustratively, in the scenario shown in fig. 39A, the PC acquires a face image from the camera, and transmits a face authentication request message including the acquired face image to the mobile phone, the face authentication request message being used to request face authentication for a face unlock operation.
S3705, the mobile phone receives the authentication request message, obtains the authentication information from the authentication request message, authenticates the target operation using the authentication information, and generates an authentication result.
Illustratively, the mobile phone receives the face image from the PC, performs face authentication using the face image and the face template stored in the mobile phone, and generates a face authentication result. When the face authentication result is that the authentication passes, the mobile phone responds, that is, unlocks the screen and displays the unlocked mobile phone interface, and the PC synchronously displays the unlocked mobile phone interface. When the face authentication result is that the authentication fails, the mobile phone responds, that is, displays an unlocking failure interface, and the PC synchronously displays the unlocking failure interface.
In a possible embodiment, the method may further include step S3706, in which the mobile phone sends a response message for the authentication request message to the PC, where the response message includes the authentication result. In this embodiment, the mobile phone sends the authentication result to the PC; if the authentication result is that the authentication succeeds, the PC may prompt the user in the interface that the authentication succeeded, and if the authentication fails, the PC may prompt the user in the interface that the authentication failed.
To describe the above method more systematically, the mobile phone screen unlocking process shown in fig. 39A is described below with reference to fig. 39B. The software architecture of the PC includes a resource management unit and a scheduling unit, and the PC further includes a face acquisition unit. When the PC receives a face unlocking operation of the user, a service layer of the PC generates a face unlocking request. A resource management unit in the PC then determines the acquisition capability related to face authentication; when the resource management unit determines that the face acquisition capability in the PC is available, the resource management unit instructs the face acquisition unit to perform face acquisition, and the face acquisition unit calls the camera to acquire the face of the user. A scheduling unit in the PC then transmits the face image acquired by the face acquisition unit to the mobile phone, and the scheduling unit of the mobile phone schedules the face authentication unit to perform face authentication. Each functional unit in fig. 39B is a service located in the PC or the mobile phone and may be implemented in the internal memory 221 in fig. 2; one or more corresponding functional units may be integrated, which is not limited herein.
In a possible embodiment, if the PC determines that the target authentication method corresponding to the target operation is a combination of at least two different authentication methods in S3702, the PC may collect at least two kinds of authentication information corresponding to the target authentication method at the PC, and then transmit the at least two kinds of authentication information to the mobile phone at S3703. And the mobile phone authenticates the authentication information acquired by the PC by using the template of the authentication information locally stored in the mobile phone.
As still another example, in the interface 3911 shown in fig. 39C, the interface 3911 prompts the user to pay for purchasing a video. When the user decides to purchase, the user may click the buy now control 3912. When the PC receives the click operation performed by the user on the buy now control 3912, the PC may determine that the target authentication manner corresponding to the payment operation includes face authentication and voiceprint authentication. As shown in fig. 39D, the PC calls the camera 3901 of the PC and locally captures the face image corresponding to the face authentication mode, and the display window 3921 in the PC displays a preview of the face image captured by the camera. In addition, the user sends a voice instruction "Xiaoyi, please pay" according to the instruction in the display window 3921, and the PC calls the microphone 3922 and locally collects the voiceprint corresponding to the voiceprint authentication mode. The PC then sends the face image and the voiceprint to the mobile phone. The mobile phone authenticates the face image by using the face template locally stored in the mobile phone, authenticates the voiceprint by using the voiceprint template locally stored in the mobile phone, and obtains the authentication result of the payment operation by combining the face authentication result and the voiceprint authentication result.
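A minimal sketch of combining the two factors collected for the payment operation; the score thresholds are illustrative assumptions, and a real device would match the collected samples against its locally stored templates rather than receive precomputed scores.

```python
FACE_THRESHOLD = 0.80
VOICEPRINT_THRESHOLD = 0.75

def authenticate_payment(face_score: float, voiceprint_score: float) -> bool:
    """Combine the two factors required for the payment operation: each
    collected sample is matched against the locally stored template, and the
    payment is authorized only if both comparisons pass. Thresholds are
    illustrative."""
    return (face_score >= FACE_THRESHOLD
            and voiceprint_score >= VOICEPRINT_THRESHOLD)

print(authenticate_payment(0.91, 0.82))  # True  -> payment proceeds
print(authenticate_payment(0.91, 0.40))  # False -> payment rejected
```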
It can be seen from the foregoing embodiments that, in a multi-screen collaborative scenario, a user accesses services of other electronic devices (e.g., a mobile phone) on an electronic device (e.g., a PC), and when identity authentication is required, the user does not need to operate on the other devices, and the acquisition of authentication information can be completed on the electronic device currently operated by the user, so that convenience of authentication operation can be improved.
Example two
The difference between the second embodiment and the first embodiment is that the collecting and authenticating actions can both be performed on the same device. As shown in fig. 40, the cross-device authentication method provided by the embodiment of the present application may include the following S4001-S4008.
S4001 to S4003 are the same as S3701 to S3703 described above.
S4004, the PC sends a request message to the mobile phone, wherein the request message is used for requesting to acquire the template of the authentication information.
In this step, the template of the authentication information requested by the request message may be a password template (e.g., a lock screen password) or a biometric feature template (e.g., a fingerprint template or a face template). Optionally, the request message includes the type of the authentication information to be requested, so that the mobile phone determines the template of the authentication information according to the type of the authentication information.
S4005, the mobile phone sends a response message of the request message to the PC. The response message to the request message includes a template of authentication information.
It should be noted that, in a possible case, the PC and the mobile phone establish a secure channel in advance, and the mobile phone may send the response message to the PC through the secure channel. In another possible case, the PC and the mobile phone may negotiate a key in advance; the mobile phone encrypts the template of the authentication information with the negotiated key before transmission, and the PC decrypts the template of the authentication information with the negotiated key.
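A minimal sketch of the second case, protecting the template of the authentication information in transit with a previously negotiated symmetric key. It assumes the third-party Python cryptography package (Fernet); in practice the key would be derived from the devices' key negotiation rather than generated on one side.

```python
# Sketch only: requires the third-party `cryptography` package, and the
# "negotiated" key is generated locally here purely for illustration.
from cryptography.fernet import Fernet

negotiated_key = Fernet.generate_key()   # stands in for the negotiated key

def phone_send_template(template: bytes) -> bytes:
    """Mobile phone encrypts the template before returning it to the PC."""
    return Fernet(negotiated_key).encrypt(template)

def pc_receive_template(ciphertext: bytes) -> bytes:
    """PC decrypts the template with the same negotiated key."""
    return Fernet(negotiated_key).decrypt(ciphertext)

wire_data = phone_send_template(b"face-template-bytes")
print(pc_receive_template(wire_data) == b"face-template-bytes")  # True
```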
It should be understood that if the PC and the mobile phone perform resource synchronization and the resource pool in the PC includes the template of the authentication information on the mobile phone, the above S4004 and S4005 may not be performed, that is, S4004 and S4005 are optional steps and are not necessarily performed steps. Illustratively, the PC acquires a face image, and performs face authentication by using the face image and a face template stored in the PC resource pool to generate an authentication result of the face. And when the authentication result of the face is that the authentication is passed, unlocking the screen by the mobile phone, displaying the unlocked mobile phone interface, and synchronously displaying the unlocked mobile phone interface by the PC.
S4006, the PC receives the template of the authentication information, and the PC authenticates the first service by using the authentication information and the template of the authentication information to generate an authentication result.
S4007, the PC sends the authentication result to the mobile phone.
Optionally, the method further includes S4008: the mobile phone receives the authentication result from the PC and responds to the target operation according to the authentication result. Optionally, after responding, the mobile phone may further send a response message of the authentication request message to the PC, where the response message includes the authentication result. In this embodiment, the mobile phone sends the authentication result to the PC; if the authentication result is that the authentication succeeds, the PC may prompt the user in the interface that the authentication succeeded, and if the authentication fails, the PC may prompt the user in the interface that the authentication failed.
Illustratively, as shown in fig. 39A, a first window of the mobile phone is displayed in the PC interface, and the first window is the lock screen interface 3902. Assuming that the user wants to unlock the mobile phone, the user can click the face unlocking control 3903 in the lock screen interface by operating the mouse of the PC. That is, the PC can receive a click operation of the user on the face unlocking control 3903 in the lock screen interface 3902, which triggers the PC to acquire the authentication mode (face authentication) from the mobile phone and then call the camera to collect the face. In addition, the PC acquires the face template from the mobile phone, so that the PC authenticates the collected face information and generates an authentication result. For a detailed interface example, reference may be made to the above embodiments, and details are not repeated here.
It can be seen from the foregoing embodiments that, in a multi-screen coordination scenario, a user accesses a service of another electronic device (e.g., a mobile phone) on an electronic device (e.g., a PC), when the device where the service is located needs to perform identity authentication on the user, the user does not need to operate on the accessed device, and the acquisition and authentication of authentication information can be completed on the electronic device currently operated by the user, so that convenience of authentication operation can be improved.
Example three
The third embodiment differs from the two embodiments described above in that it is not limited to a multi-screen collaborative scenario: the user may trigger, on the first electronic device, a service request related to the security of the second electronic device, thereby initiating cross-device authentication. In this embodiment, the first electronic device is a smart television and the second electronic device is a mobile phone, and this example is used below to describe the cross-device authentication method provided in this embodiment in detail. Illustratively, the input device of the smart television is a remote controller, and the smart television further has a camera. It should be noted that the cross-device authentication method shown in this embodiment is also applicable to other types of electronic devices.
As shown in fig. 41, the cross-device authentication method provided in the embodiment of the present application may include the following S4101-S4106.
S4101, the smart television receives target operation acted on a display window of the smart television by a user.
The target operation is used to trigger execution of the first service. The first service is a service in the smart television, but the first service is associated with the mobile phone. In this embodiment, the first service is associated with sensitive data in the mobile phone.
Illustratively, as shown in fig. 42, a display window 4200 of a video application, including a buy now control 4201, is displayed in the smart television interface. Assuming that the user wants to pay to watch the full video, the user can click the buy now control 4201 in the display window of the video application by operating the remote control. That is, the smart television may receive a click operation by the user on the buy now control 4201 in the display window 4200 in fig. 42. After the payment service is executed, the user data in the payment-related APP in the mobile phone changes; because this user data belongs to the sensitive data in the mobile phone, the payment service in the smart television is associated with the sensitive data in the mobile phone. In this embodiment of the present application, the sensitive data differs according to the scenario; for example, the sensitive data may be user data in the device, authentication information of the user, device information, and the like.
S4102, responding to the target operation, and the smart television acquires a target authentication mode corresponding to the first service.
Illustratively, the resource pool of the smart television includes the authentication modes corresponding to the respective operations, and the smart television may determine, by querying the resource pool, that the target authentication mode corresponding to the click operation on the buy now control 4201 is face authentication. Optionally, the smart television may further determine, by querying the resource pool, that the face acquisition unit corresponding to the face authentication mode on the smart television is available.
In addition, optionally, the smart television may request the mobile phone to acquire the target authentication method.
S4103, the smart television collects authentication information of the user according to the target authentication mode.
In this embodiment, the authentication information is user information that needs to be authenticated by the target authentication method, for example, information such as a fingerprint of a user, a face of the user, or a password input by the user.
For example, as shown in fig. 42, when the smart television determines that the target authentication mode is face authentication, the authentication information of the user to be collected is a face image. The smart television calls a camera 1003 of the smart television to acquire a face image of the user.
S4104, the smart television sends an authentication request message to the mobile phone, where the authentication request message includes the authentication information.
Illustratively, in the scene shown in fig. 42, the smart television acquires a face image from the camera and transmits the acquired face image to the mobile phone.
S4105, the mobile phone receives the authentication request message, acquires the authentication information from the authentication request message, authenticates the target operation by using the authentication information, and generates an authentication result.
See S3705 above for specific examples.
S4106, the mobile phone sends a response message of the authentication request message to the smart television, where the response message includes the authentication result.
In this embodiment, the mobile phone sends the authentication result to the smart television, and if the authentication result is that the authentication is successful, the smart television can prompt the user that the authentication is successful in the interface. If the authentication fails, the smart television can prompt the user that the authentication fails in the interface.
S4107, the smart television makes a response corresponding to the target operation according to the authentication result.
Illustratively, if the authentication is successful, the smart television displays that the payment is successful; and if the authentication fails, the smart television displays that the payment fails.
Alternatively, the authentication corresponding to steps S4104 to S4107 may be performed on the smart television side, in which case the smart television acquires the authentication information template from the mobile phone; for the specific procedure, refer to steps S4004 to S4008 in the foregoing embodiment, and details are not described herein again.
In this embodiment, when the first service triggered by the user on the smart television is associated with the mobile phone, the authentication process of the service needs to be executed by the mobile phone, so that the device security of the mobile phone or the security of sensitive data of the mobile phone can be ensured, and the mobile phone sends the authentication result to the smart television, so that the smart television responds according to the authentication result.
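To make the message flow of S4101 to S4107 easier to follow, the following is a minimal sketch in Python. It is an illustration only: the message fields, class names, and the simulated face matching are assumptions made for this sketch, not the actual implementation described in this application.

```python
from dataclasses import dataclass

# Hypothetical message formats; field names are illustrative assumptions.
@dataclass
class AuthRequest:
    service_id: str        # e.g. the payment service triggered by the buy now control
    auth_mode: str         # target authentication mode, e.g. "face"
    auth_info: bytes       # authentication information collected by the smart TV

@dataclass
class AuthResponse:
    success: bool

class SmartTV:
    def __init__(self, phone, resource_pool):
        self.phone = phone
        self.resource_pool = resource_pool       # operation -> target authentication mode

    def on_target_operation(self, operation):                 # S4101
        auth_mode = self.resource_pool[operation]             # S4102: query the resource pool
        auth_info = self.collect(auth_mode)                   # S4103: e.g. call the camera
        request = AuthRequest("payment", auth_mode, auth_info)
        response = self.phone.handle_auth_request(request)    # S4104 / S4106
        return "payment succeeded" if response.success else "payment failed"   # S4107

    def collect(self, auth_mode):
        # Placeholder for invoking the camera or another sensor of the smart TV.
        return b"face-image-of-current-user"

class Phone:
    def __init__(self, face_template):
        self.face_template = face_template

    def handle_auth_request(self, request):                   # S4105
        # Simulated matching of the received face image against the local template.
        ok = request.auth_mode == "face" and request.auth_info == self.face_template
        return AuthResponse(success=ok)

# Usage: a click on the buy now control triggers the cross-device flow.
phone = Phone(face_template=b"face-image-of-current-user")
tv = SmartTV(phone, resource_pool={"click_buy_now": "face"})
print(tv.on_target_operation("click_buy_now"))   # -> payment succeeded
```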
As another example, in a driving scenario, as shown in fig. 43, the in-vehicle terminal and the mobile phone may cooperate to complete cross-device authentication. Specifically, a user issues a voice command "open the VIP of the music application" in the cockpit; after receiving the voice command, the in-vehicle terminal sends a payment authentication request to the mobile phone, and the mobile phone determines that the authentication mode corresponding to the payment operation is face authentication. Alternatively, after the mobile phone determines that the authentication mode corresponding to the payment operation is face authentication, the in-vehicle terminal acquires from the mobile phone the authentication mode, namely face authentication, together with the face template, and the in-vehicle terminal performs the face authentication to generate an authentication result. That is, the cross-device authentication method shown in the above embodiment is also applicable to cooperative authentication between the in-vehicle terminal and the mobile phone.
Referring to fig. 44, fig. 44 shows a schematic structural diagram of a communication system. The communication system may include a first communication device 4400 and a second communication device 4410, where the first communication device 4400 includes a first transceiving unit 4401 and a collecting unit 4402; wherein:
the transceiving unit 4401 is configured to receive a target operation that acts on a first interface of a first electronic device, where the target operation is used to trigger access to a first service, and the first service is associated with the second electronic device.
The transceiving unit 4401 is further configured to acquire a target authentication manner corresponding to the first service.
The collection unit 4402 is configured to collect authentication information.
Specifically, the first communication device 4400 may include at least one collection unit 4402, where one collection unit 4402 may be configured to collect at least one type of authentication information. The embodiment of the present application takes an example in which one collection unit collects one type of authentication information. The authentication information may be a fingerprint, a face, a heart rate, a pulse, a behavior habit, a device connection state, or the like. For example, the face collection unit may be used to collect a face, and the face collection unit may refer to the camera 293 shown in fig. 2; the gait collection unit may be used to collect gait, and the gait collection unit may be the camera 293 shown in fig. 2; the pulse collection unit may be used to collect a pulse, and the pulse collection unit may be the pulse sensor 280N shown in fig. 2; the heart rate collection unit may be used to collect a heart rate, and the heart rate collection unit may refer to the heart rate sensor 280P shown in fig. 2; the collection unit of the touch screen behavior may be used to collect the touch screen behavior, and may refer to the display screen 294 shown in fig. 2; the collection unit of the trusted device may be configured to collect a connection status and/or a wearing status of the wearable device.
The second communication device 4410 includes a second transceiving unit 4411, an authentication unit 4412, and a decision unit 4413. Wherein:
a second transceiving unit 4411 configured to receive an authentication request, where the authentication request includes authentication information. Optionally, the second transceiving unit 4411 is further configured to receive a request message from the first electronic device, where the request message is used to request a target authentication manner corresponding to the first service.
An authentication unit 4412 configured to perform authentication according to the authentication information and generate an authentication result. The authentication unit 4412 is generally an authentication service in a software implementation, and is integrated in the operating system.
The second communication device 4410 may include at least one authentication unit 4412, where one authentication unit 4412 may be configured to authenticate at least one type of authentication information to obtain an authentication result. The embodiment of the present application takes an example in which one authentication unit authenticates one type of authentication information. For example, the authentication unit of the face may be configured to authenticate the face to obtain an authentication result of the face; the authentication unit of the gait may be used to authenticate the gait information to obtain an authentication result of the gait; the authentication unit of the pulse may be used to authenticate the collected pulse to obtain an authentication result of the pulse; the authentication unit of the heart rate may be used to authenticate the collected heart rate to obtain an authentication result of the heart rate; the authentication unit of the touch screen behavior may be used to authenticate the collected touch screen behavior information to obtain an authentication result of the touch screen behavior; the authentication unit of the trusted device may be configured to authenticate the acquired connection state and/or wearing state of the electronic device to obtain an authentication result of the trusted device.
The decision unit 4413 is configured to determine a target authentication manner corresponding to the first service.
Optionally, the first communication apparatus and the second communication apparatus may further include a resource management unit 4420, configured to perform resource synchronization with other devices in the device networking, generate a resource pool, or maintain or manage the resource pool. Wherein the resource management unit 4420 includes a resource pool, and the resource in the resource pool may be an authentication factor (or information of the authentication factor), an acquisition capability of the device, an authentication capability of the device, and the like. The resource management unit 4420 may refer to the internal memory 221 shown in fig. 2, and the internal memory 221 stores a resource pool.
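As a rough structural sketch of the units in fig. 44, the following Python fragment models the collection units, authentication units, decision unit, and resource pool as pluggable components keyed by authentication type. All class names, field names, and example resource-pool entries are assumptions made for illustration, not the defined interfaces of this application.

```python
# Sketch of the unit layout of fig. 44; names are illustrative assumptions.
class FirstCommunicationDevice:                      # e.g. device 4400
    def __init__(self, collection_units):
        # one collection unit per authentication type, e.g. {"face": camera_fn}
        self.collection_units = collection_units     # collection unit 4402

    def collect(self, auth_type):
        return self.collection_units[auth_type]()

class SecondCommunicationDevice:                     # e.g. device 4410
    def __init__(self, authentication_units, resource_pool):
        self.authentication_units = authentication_units   # authentication unit 4412
        self.resource_pool = resource_pool                  # resource management unit 4420

    def decide_auth_mode(self, service):             # decision unit 4413
        return self.resource_pool["service_to_mode"][service]

    def authenticate(self, auth_type, auth_info):    # reached via transceiving unit 4411
        return self.authentication_units[auth_type](auth_info)

# Example resource pool content: authentication factors and per-device capabilities.
resource_pool = {
    "service_to_mode": {"payment": "face", "unlock": "fingerprint"},
    "collection_capabilities": {"tv-01": ["face"], "phone-01": ["face", "fingerprint"]},
    "authentication_capabilities": {"phone-01": ["face", "fingerprint"]},
}

first = FirstCommunicationDevice({"face": lambda: b"face-image"})
second = SecondCommunicationDevice({"face": lambda info: info == b"face-image"}, resource_pool)
mode = second.decide_auth_mode("payment")               # "face"
print(second.authenticate(mode, first.collect(mode)))   # True
```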
Therefore, based on the method, the first electronic device can collect the authentication information, the second electronic device can authenticate the authentication information, cross-device collection of the authentication information is achieved, convenience of authentication operation is improved, a user is prevented from operating on a plurality of electronic devices, and user experience is improved. In addition, the first electronic device and the second electronic device cooperatively authenticate the first service, so that the security of the authentication result can be improved, and the problem that the security of the authentication result is low due to the limitation of hardware or insufficient authentication capability and acquisition capability of a single electronic device is solved.
Implementation mode five
Based on the steps shown in fig. 3, in this fifth implementation manner, before S301, the method may further include: the first electronic device receives a first operation of a user, where the first operation is used to request entry of a first feature template; in response to the first operation, the first electronic device authenticates the identity of the user by using an existing second feature template, where the second feature template is associated with the user identifier of the user; after the authentication is passed, the first electronic device receives the first feature template entered by the user; and then an association relationship between the first feature template and the user identifier is established.
In the conventional technology, biometric technology identifies and authenticates the identity of a user by biometric features, for example, by using feature information such as a face, a fingerprint, a finger vein, an iris, or a palm vein. Different biological characteristics of the same user may be stored in a traditional single device, and the user usually distinguishes the entered feature templates depending on the naming information of the biometric templates, so that the user can associate a feature template in the device with the user to whom it belongs according to its naming information. However, the naming information of some feature templates may be relatively simple, such as fingerprint 1 and fingerprint 2, so that the user to whom a feature template belongs cannot be distinguished depending on the naming information. Compared with the conventional technology, the data association method provided in the embodiment of the present application can associate the feature templates of a same user across multiple electronic devices. In this way, in a device replacement scenario, a user can obtain, according to the association relationship, the feature templates belonging to the same user on each old device, and then distribute these feature templates to the new electronic device correspondingly, so as to implement one-key migration of the feature templates between the old and new devices. Alternatively, the association relationship may be shared with other electronic devices, so that the other electronic devices may provide personalized services for the user according to the record information, for example, services such as cross-device authentication or device collaborative authentication.
Before describing the technical solution of the fifth implementation manner of the embodiment of the present application, some terms in the present application are explained to facilitate understanding by those skilled in the art.
Some concepts related to embodiments of the present application are presented below:
(I) sensitive data may include: user secret data, biometric data, and the like. The user secret data may include a screen locking password of the user, a protection password of the user, and the like. The biometric data may include one or more of: physical biometrics, behavioral biometrics, soft biometrics. The physical biometric characteristics may include: face, fingerprint, iris, retina, deoxyribonucleic acid (DNA), skin, hand, vein. The behavioral biometric may include: voiceprint, signature, gait. Soft biometrics may include: gender, age, height, weight, etc.
(II), the feature template may include: templates of user secret data, templates of biometric data, etc.
Currently, in a smart home scenario, different types of biological features (such as fingerprints, faces, and voiceprints) entered by different members of a family may be stored on different types of devices (such as a smart camera, a smart speaker, a mobile phone, and a tablet). In a case where the fingerprint template of the user Alisa is stored on the mobile phone and the voiceprint template of the user Alisa is stored on the smart speaker, the mobile phone and the smart speaker can currently only use the biological features existing on the respective device to perform identity authentication. If user Alisa instructs the smart speaker by voice to make a payment, fingerprint authentication is required for the payment service; because the smart speaker has no fingerprint template and does not support fingerprint collection, the smart speaker has to reply to user Alisa that the payment cannot be completed.
With the rapid development of the internet of things field, the cooperative integration of a plurality of electronic devices has become an industry consensus. In order to realize collaboration among a plurality of electronic devices, user data and device data need to be able to flow and be shared among the plurality of electronic devices. If the mobile phone can share, to the smart speaker, the state information indicating that the fingerprint template of user Alisa is stored locally, the smart speaker can forward the fingerprint authentication request to the mobile phone when fingerprint authentication is needed for the payment service; the mobile phone authenticates the payment service by using the fingerprint template, and then sends the fingerprint authentication result to the smart speaker.
Therefore, the data association method can associate the different biological characteristics of a same user that are stored on the devices in the device networking, and can also associate the biological characteristics of different users on the devices in the device networking. Each device in the device networking can share the association relationship, and each device in the device networking can provide personalized services for the user according to the association relationship, such as cross-device authentication or device collaborative authentication.
Referring to fig. 45, a schematic diagram of a software architecture provided in the embodiment of the present application is shown. As shown in fig. 45, the software architecture of each of the electronic device 100 and the electronic device 200 may include: application layer and framework layer (FWK).
In some embodiments, the application layer may include various applications installed in the electronic device. For example, applications installed in the electronic device may include settings, calculators, cameras, short messages, music players, file managers, galleries, browsers, memos, news, video players, mail, and the like. The applications may be system applications of the electronic device, or may also be third-party applications, and the embodiments of the present application are not specifically limited herein. For example, the application layer of the electronic device 100 may include various applications installed in the electronic device 100, such as a file manager, a calculator, a music player, a video player, and the like. As another example, the application layer of the electronic device 200 may include a file manager, gallery, memo, video player, mail, and the like.
In some embodiments, the framework layer includes a window management module to enable windowing of the display interface. The framework layer may include a resource management module, in addition to the window management module, for managing the feature information on the electronic device 100 and the electronic device 200, and the constraint policy bound to the feature information; the framework layer may further include a data synchronization module, configured to implement sharing of association relationships between different features of the same user, stored on each device in the device networking, to each device, or sharing of association relationships between features of different users on each device in the device networking, or simultaneously sharing of a constraint policy bound to the feature information to each device while sharing the association relationships.
It should be noted that the software architecture illustrated in this embodiment does not specifically limit the electronic device 100 and the electronic device 200. In other embodiments, the electronic device 100 and/or the electronic device 200 may include more or fewer layers than those shown, or more or fewer modules, or a combination of certain modules, or a different arrangement of modules, and the embodiments are not limited in this respect. For example, the software architecture shown above may include other layers, such as a kernel layer (not shown in fig. 45), in addition to the application layer and the framework layer described above. The kernel layer is a layer between hardware and software. The kernel layer may include at least a display driver, a camera driver, an audio driver, a sensor driver, and the like.
The data association method provided by the embodiment of the present application is described below with reference to specific embodiments.
In a first scenario, for a single device, during the process of entering a new feature template, an association relationship between the feature template in the device and a user identifier may be established. Alternatively, after the feature template is entered, the user binds the entered feature template to a user identifier, so as to establish the association relationship between the feature template and the user identifier. In a second scenario, for multiple devices, a plurality of electronic devices may initiate authentication of the same feature of the same user, and the devices having the feature of the user are determined according to the authentication results fed back by the plurality of devices, so that the association relationship between the features on the multiple devices and the user identifier is established. These two scenarios are described below separately with reference to the accompanying drawings.
Scene one
In the first mode, in the process of inputting a new characteristic template, a user establishes a binding relationship between the newly input characteristic template and a target user identifier.
That is, when user Alisa enters a new feature template in the electronic device, the electronic device may display prompt information during the entry process, where the prompt information is used to ask user Alisa whether to bind the feature template being entered to a user. When the user confirms that binding is needed, the electronic device further displays a user list so that user Alisa can select a target user identifier from the user list. Alternatively, when the user confirms that binding is needed, the electronic device acquires the user account of the current user from the device, such as the user account name, so as to determine the user identifier of the current user. The electronic device may also provide a related control for user Alisa to create a new target user identifier corresponding to the feature template, so that a binding relationship is established between the newly entered feature template and the target user identifier. Optionally, on the same device, a plurality of different user identifiers may be stored under the same account; after the user logs in to the user account, a plurality of users may be created in the electronic device, where different users correspond to different user names.
Illustratively, as shown in (a) of fig. 46A, the user interface 4610 is a display interface of setting items of biometrics and password, and the user interface 4610 includes a face recognition 4611. Assuming that the user needs to enter a face, when the mobile phone detects an operation of the user on the face recognition 4611 control, the mobile phone displays an interface 4620 as shown in (b) in fig. 46A. The interface 4620 includes an enter face control 4621. When the cellular phone detects a click operation by the user on the enter face 4621, the cellular phone displays an interface 4630 as shown in (c) in fig. 46A. The interface 4630 is used for displaying the preview effect of the face captured by the camera. When the user performs a shooting operation according to the animation prompt in the interface 4630 and the mobile phone correctly acquires a face, the mobile phone displays the interface 4640 as shown in (d) in fig. 46A. The interface is used for prompting the user that the face entry is successful.
Thereafter, the cellular phone also displays an interface 4650 as shown in (e) in fig. 46B. The interface 4650 includes a prompt box for prompting the user whether to establish a binding relationship between the entered face and a user. When the user clicks on the confirmation control 4651, the handset detects this and displays an interface 4660 as shown in (f) of fig. 46B. The interface 4660 includes an existing user list. If the currently entered face is the face of user Alisa, the user may select user Alisa in the user list, and when the mobile phone detects that the user performs a click operation on the completion control 4661, an association relationship between the face template and user Alisa may be established. For another example, if the currently entered face is the face of user Lucy and the existing user list does not have this option, the current user may create a new user, so as to establish an association relationship between the face template and user Lucy.
In a second mode, in the process of entering the new feature template, the electronic device prompts the user to input the specified biological feature, the specified biological feature is used for authenticating the identity of the user, and the user is allowed to continue entering the new feature template only after the authentication is passed, so that the binding relationship between the new feature template and the specified biological feature can be completed. The specified biological characteristics belong to the target user, so that the association relationship between the new characteristic template and the target user identification is established.
Illustratively, in the process of inputting a new face template, the identity of the user is verified by using the fingerprint, and the face template is allowed to be input after the verification is passed, so that the association relationship between the face template and the fingerprint template can be established, and the association relationship between the face template and the user identifier of the user Alisa is established because the fingerprint template belongs to the user Alisa. Such as the user interface 4710 shown in (a) of fig. 47. The user interface 4710 is a display interface of setting items of biometrics and password, and the user interface 4710 includes a face recognition 4711. Assuming that the user needs to enter a face, when the cellular phone detects an operation of the user on the face recognition 4711 control, the cellular phone displays an interface 4720 as shown in (b) of fig. 47. The interface 4720 prompts the user to enter the user's fingerprint. After the user inputs the fingerprint, the mobile phone matches the fingerprint input by the user by using the fingerprint template of the user of the mobile phone, if the authentication is passed, the user is the owner, or the owner inputs the fingerprint to authorize the current user to input the face. When the fingerprint authentication is passed, the cellular phone displays an interface 4730 as shown in (c) of fig. 47. The user can operate the face input control 4731 to complete the input of the face.
In a third mode, aiming at the existing feature template in the electronic equipment, the user can manage the existing feature template and select the feature template needing to be associated to the same user from the existing feature template.
In one possible implementation, the user manages the existing feature templates according to the naming information of each feature template.
Illustratively, see the user interface 4810 shown in (a) of fig. 48A. The user interface 4810 is a display interface of setting items of biometrics and password, and the user interface 4810 includes a fingerprint 4811. When the cellular phone detects an operation of the user on the fingerprint control 4811, the cellular phone may display an interface 4820 as shown in (b) of fig. 48A. The interface 4820 includes a fingerprint management menu, and the management of fingerprints includes renaming, deleting, and binding a user. When the handset detects that the user acts on the bound user control 4821, the handset may display an interface 4830 as shown in (c) of fig. 48A. The interface 4830 includes a list of the fingerprints that the user already has. Because the user has renamed the entered fingerprints in advance, the current user can determine the fingerprints belonging to the same user according to the renaming information of the fingerprints. As shown in (c) in fig. 48A, the user can select fingerprint 1 of Alisa and fingerprint 2 of Alisa. When the mobile phone detects an operation of the user on the completion control 4831, the mobile phone displays an interface 4840 shown in (d) in fig. 48B, where the interface 4840 is used to inquire whether the user confirms binding the selected fingerprints to a user. When the user clicks the confirmation control 4841, the mobile phone may display an interface 4850 shown in (e) in fig. 48B, where the interface 4850 includes an existing user list. The user may select user Alisa in the user list, and when the mobile phone detects a click operation of the user on the completion control 4851, the association relationship among fingerprint 1 of Alisa, fingerprint 2 of Alisa, and user Alisa may be established.
In another possible implementation manner, when the user cannot distinguish the belonging user according to the naming information of each feature template, the existing feature templates may be identified first, and the feature templates belonging to the same user may be associated to the list of the belonging users according to the identification result.
As still another example, the cellular phone displays a user interface 4910 as shown in (a) in fig. 49A. The user interface 4910 is a display interface for setting items of biometrics and password, and the user interface 4910 includes a fingerprint 4911. When the handset detects an operation by the user on the fingerprint control 4911, the handset may display an interface 4920 as shown in (b) of fig. 49A. Included in the interface 4920 is a fingerprint recognition control 4921. When the mobile phone detects a click operation of the user on the fingerprint recognition control 4921, the mobile phone displays a pop-up box 4931 as shown in (c) of fig. 49A. The pop-up box 4931 displays prompt information for prompting the user to place a finger on the fingerprint sensor, and the matched entered fingerprint is highlighted after it is identified. If user Alisa places the index finger of the right hand on the fingerprint sensor, the mobile phone recognizes that the fingerprint matches the entered fingerprint 2, and therefore fingerprint 2 is highlighted.
Further, in the above manner, user Alisa can identify, from the feature templates of the fingerprints recorded in the mobile phone, that the feature templates belonging to user Alisa include fingerprint 1 and fingerprint 2, as shown in (c) in fig. 49A, so that the user can associate fingerprint 1 and fingerprint 2 in the fingerprint list. When the user performs a click operation on the fingerprint list 4921 shown in (b) of fig. 49A, the cellular phone displays an interface 4940 shown in (d) of fig. 49B, and the user can select fingerprint 1 and fingerprint 2 belonging to Alisa. When the mobile phone detects that the user acts on the completion control 4941, the mobile phone displays an interface 4950 as shown in (e) in fig. 49B, where the interface 4950 is used to inquire whether the user confirms binding the selected fingerprints to a user. When the user clicks on the confirmation control 4951, the mobile phone may display an interface 4960 as shown in (f) in fig. 49B, where the interface 4960 includes an existing user list. The user may select user Alisa in the user list, and when the mobile phone detects that the user acts on the completion control 4961, the mobile phone may establish an association relationship among fingerprint 1, fingerprint 2, and the user identifier Alisa.
It should be noted that, in the above fingerprint management process, an authentication process for the identity of the user may also be included, for example, the user is prompted to input a screen locking password (not shown in fig. 49A), and after the authentication is passed, the above fingerprint management operation is allowed to be performed, so as to prevent sensitive data from being modified by an illegal user.
The mobile phone may store record information of association between the device identifier, the feature template, and the user identifier, where the record information may be in a list form or in a code form such as XML, and this is not limited in this embodiment of the present application. Illustratively, the handset stores therein record information as shown in table 17 below.
TABLE 17
(Table 17 records the association among the device identifier, the feature templates, and the user identifier.)
Optionally, the mobile phone may also present the established association relationship to the user in the form of a list. For example, a biometric association table control 5001 shown in (a) of fig. 50 may be provided in the interface 5000 for biometrics and password of the mobile phone, and when the mobile phone detects a click operation of the user on the control 5001, the mobile phone may display an interface 5010 shown in (b) of fig. 50. When the handset detects that the user acts on the user identifier Alisa control 5011, the handset displays an interface 5020 as shown in (c) of fig. 50. The interface 5020 includes the feature templates belonging to user Alisa, such as fingerprint 1, fingerprint 2, and the face.
It should be noted that when a user newly enters a feature template, an update of the existing record information is triggered; similarly, when the user deletes an existing feature template, an update of the existing record information is also triggered. In this way, the record information always stores the latest association relationship among the device identifier, the feature templates, and the user identifiers on the device.
Optionally, after the association relationship between the feature templates of the same user and the user identifier is established in the mobile phone, the user may share the record information of the association among the device identifier, the feature templates, and the user identifier to other trusted devices. For example, after the mobile phone detects that the user performs a click operation on the sharing control 5021 shown in (c) of fig. 50, the mobile phone may share the record information including the association relationship between user Alisa and the feature templates belonging to user Alisa to other electronic devices or the hub device in the device networking, so that the other electronic devices or the hub device may provide personalized services for the user according to the record information, such as cross-device authentication or device collaborative authentication.
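For illustration, the record information sketched in Table 17 can be viewed as a per-device mapping from user identifier to feature templates, updated whenever a template is entered or deleted. The structure and function names below are assumptions made for this sketch, not a prescribed format.

```python
# Hypothetical in-memory form of the record information sketched in Table 17.
record_info = {
    "device_id": "phone-01",
    "associations": {
        # user identifier -> feature template identifiers on this device
        "Alisa": ["fingerprint_1", "fingerprint_2", "face_1"],
    },
}

def enroll_template(record, user_id, template_id):
    """Entering a new feature template triggers an update of the record information."""
    record["associations"].setdefault(user_id, []).append(template_id)

def delete_template(record, user_id, template_id):
    """Deleting an existing feature template also triggers an update."""
    templates = record["associations"].get(user_id, [])
    if template_id in templates:
        templates.remove(template_id)

enroll_template(record_info, "Lucy", "face_2")
delete_template(record_info, "Alisa", "fingerprint_2")
print(record_info["associations"])
```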
Scene two
It is contemplated that different biometrics of a user may be stored on multiple devices of the user, and the same biometrics of the user may also be stored on different devices. Therefore, the embodiment of the application provides a data association mode, which can associate the biological characteristics belonging to the same user on different devices.
In method A, for a plurality of devices, authentication of the same feature of the same user may be initiated on the plurality of electronic devices; which of the devices have the feature of the user is located according to the authentication results fed back by the plurality of devices, and the association relationship among the user identifier, the feature template, and the device identifier is established according to the matching results.
Illustratively, as shown in fig. 51A, user Alisa may input a fingerprint on the mobile phone, and after receiving the fingerprint, the mobile phone synchronously distributes the fingerprint to the PC and the tablet device in the device networking. Then, the mobile phone, the PC, and the tablet each match the fingerprint of the user. Specifically, the mobile phone matches the fingerprint currently input by the user against the fingerprint templates existing on the mobile phone, and if the matching passes, the mobile phone prompts the user that the matching is successful. In addition, the PC matches the received fingerprint against the fingerprints existing on the PC; if the matching fails, the PC prompts the user that the matching fails, or the PC sends the matching result to the mobile phone, and the mobile phone prompts the user that the fingerprint at the PC end does not match. Similarly, the tablet matches the received fingerprint against the fingerprints existing on the tablet; if the matching passes, the tablet prompts the user that the matching is successful, or the tablet sends the matching result to the mobile phone, and the mobile phone prompts the user that the fingerprint at the tablet end matches.
As shown in fig. 51B, user Alisa may input a face on the mobile phone, and after receiving the face, the mobile phone may synchronously distribute the face to the PC and the tablet in the device networking. Then, the mobile phone, the PC, and the tablet each match the face of the user. Specifically, the mobile phone matches the currently received face against the faces existing on the mobile phone, and if the matching passes, the mobile phone prompts the user that the matching is successful. In addition, the PC matches the received face against the faces existing on the PC; if the matching passes, the PC prompts the user that the matching passes, or the PC sends the matching result to the mobile phone, and the mobile phone prompts the user that the face at the PC end matches. Similarly, the tablet matches the received face against the faces existing on the tablet; if the matching fails, the tablet prompts the user that the matching fails, or sends the matching result to the mobile phone, and the mobile phone prompts the user that the face at the tablet end does not match.
Optionally, after the face is entered on the mobile phone, the mobile phone may broadcast a request message, where the request message is used to request the devices located in the same local area network or the devices logged in to the same account (such as the PC and the tablet) to report all user identifiers to the mobile phone. After the mobile phone receives the user identifiers, it matches its local user identifiers against the user identifiers received from the PC and the tablet, further obtains all the feature templates under the corresponding user identifier from the PC and the tablet, and establishes the association relationship between the user identifier and the feature templates. Optionally, the PC and the tablet may send the user identifier together with the corresponding feature templates, and after the mobile phone performs matching according to the user identifier, the association relationship between the user identifier and the feature templates is established in the mobile phone. After the association relationship is established, the mobile phone may share the record information of the association among the device identifier, the feature templates, and the user identifier to other trusted devices, such as other electronic devices or the hub device in the device networking.
In method B, for a plurality of devices, the user first establishes, on each electronic device, the association relationship between the user identifier and the feature templates, and keeps the association locally; for the specific establishment method, refer to scenario one. Then, the user may initiate authentication of one feature of the user on a first electronic device to identify the user identifier of the current user; the first electronic device then sends a request message including the user identifier to the other electronic devices, and after receiving the request message, the other electronic devices send the record information of the feature templates bound to the user identifier to the first electronic device, so that the first electronic device establishes the association relationship among the user identifier, the feature templates, and the device identifiers according to the record information.
For example, as shown in fig. 51C, user Alisa may input a fingerprint on the mobile phone; after receiving the fingerprint, the mobile phone authenticates the fingerprint, identifies that the user corresponding to the fingerprint is user Alisa, and then sends a first request message to the PC and the tablet device in the device networking, where the first request message includes the user identifier Alisa and is used to request feedback of the information of the feature templates associated with user Alisa. Then, the PC and the tablet both search and match by using the information of user Alisa, and locally acquire the information of the feature templates associated with user Alisa. Next, the PC sends a first response message to the mobile phone, where the first response message includes record information I, and record information I is the information of the face template associated with user Alisa on the PC. The tablet sends a second response message to the mobile phone, where the second response message includes record information II, and record information II is the information of the fingerprint template associated with user Alisa on the tablet. Therefore, the mobile phone can establish the association relationship table of the feature templates associated with user Alisa according to the record information of the feature templates associated with user Alisa stored by the mobile phone itself, together with record information I and record information II acquired from the PC and the tablet.
Optionally, the first electronic device may also send the authentication result to the other electronic devices, so that the other electronic devices can determine that the accessing user is legitimate. When the request message received by another electronic device includes an authentication result indicating that the authentication passes, that device sends the record information associated with the user identifier that it stores to the first electronic device; otherwise, it does not send the record information. For example, as shown in fig. 51C, the request messages sent by the mobile phone to the PC and the tablet may further include the authentication result indicating that the fingerprint authentication passes, after which the tablet and the PC send response messages to the mobile phone.
Optionally, after the first electronic device receives the record information from the other electronic devices, the first electronic device may further provide an interface for the user to filter the information of the feature templates that need to be associated, and the first electronic device establishes, according to the selection of the user, the association relationship between the user identifier and the set of feature templates on the first electronic device. Illustratively, after the mobile phone receives record information I and record information II from the other electronic devices, the mobile phone determines the information of the feature templates associated with user Alisa on the three electronic devices, namely the mobile phone, the PC, and the tablet; the mobile phone then displays the information of all these feature templates in an interface, from which the user can further filter the feature templates that need to be associated with user Alisa.
Then, the mobile phone may store the record information of the association among the device identifier, the feature template, and the user identifier according to the matching results of user Alisa's fingerprint and face obtained from each device, where the record information may be in a list form or in a code form such as XML; this is not limited in this embodiment of the present application. Illustratively, the handset stores therein record information as shown in Table 18 below.
TABLE 18
(Table 18 records the association among the device identifier, the feature template, and the user identifier established in scenario two.)
Optionally, after the association relationship is established between the feature template of the same user and the electronic device to which the feature template belongs in the mobile phone, the user may share the record information of the association relationship between the device identifier, the feature template, and the user identifier on the mobile phone to other trusted devices, such as other electronic devices or a hub device in the device group network, so that the other electronic devices may provide personalized services for the user according to the record information, such as providing services of cross-device authentication or device collaborative authentication.
It should be noted that, in this scenario, each time the user initiates identity authentication, the electronic devices may be triggered to perform authentication on the same feature of the same user, locate which of the devices have the feature of the user according to the authentication results fed back by the devices, and establish the association relationship among the user identifier, the feature template, and the device identifier according to the matching results. Optionally, the user may also periodically initiate authentication of the same feature of the same user on the plurality of electronic devices, and update the record information shown in Table 18 according to the authentication results fed back by the plurality of devices; this is not limited in this embodiment of the application.
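The query-and-merge flow of method B can be sketched as follows. The message shapes, class names, and the assumption that the first device's fingerprint authentication has already passed are all illustrative, not part of the described method.

```python
# Sketch of method B: the first device identifies the user locally, then asks peer
# devices for the feature-template records bound to that user identifier.
class PeerDevice:
    def __init__(self, device_id, local_records):
        self.device_id = device_id
        self.local_records = local_records   # user identifier -> template identifiers

    def handle_request(self, user_id, auth_passed):
        # Only return record information if the requester proved the user's identity.
        if not auth_passed:
            return None
        return {"device_id": self.device_id,
                "templates": self.local_records.get(user_id, [])}

def build_association(first_device_id, user_id, local_templates, peers, auth_passed=True):
    table = [{"device_id": first_device_id, "templates": local_templates}]
    for peer in peers:
        response = peer.handle_request(user_id, auth_passed)
        if response is not None:
            table.append(response)
    return {user_id: table}

pc = PeerDevice("pc-01", {"Alisa": ["face_template"]})
tablet = PeerDevice("tablet-01", {"Alisa": ["fingerprint_template"]})
print(build_association("phone-01", "Alisa", ["fingerprint_1"], [pc, tablet]))
```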
Optionally, if an administrator is set in a device networking system formed by a plurality of devices, the administrator may associate the different types of feature templates collected by the different devices in the networking system. For example, as shown in fig. 52, face 1 and fingerprint 1 of user Alisa are stored in the mobile phone. The PC stores the face and fingerprint 1 of user Alisa, and face 2 and the voiceprint of user Lucas. The administrator may control the mobile phone to obtain the record information from the PC and generate, in the mobile phone, record information including the association relationship between the mobile phone and the PC. An example is shown in Table 19.
TABLE 19
(Table 19 records the association between the device identifiers of the mobile phone and the PC and the different feature templates stored on them.)
That is, the method may generate total record information, and the total record information may record the association relationships between the device identifiers of different electronic devices and the different feature templates on those electronic devices. The total record information may also be shared with other electronic devices within the device networking. Therefore, in a device replacement scenario, an administrator can obtain, according to the association relationship, the feature templates on each device maintained by the administrator, and then distribute the feature templates on each device to the new electronic device correspondingly, so as to implement one-key migration of sensitive data between the old and new devices.
In a possible embodiment, based on the association relationship among the feature template, the user identifier, and the device identifier, the embodiment of the present application may further add a usage constraint condition to the feature template, where the usage constraint condition may include at least one of the following constraints: 1. a constraint on the usage authority of the feature template; 2. a constraint on the device environment to which the feature template is applicable; 3. a constraint on the services to which the feature template is applicable; 4. a constraint on the security level of the feature template.
For example, in addition to the association relationship shown in table 17, the mobile phone may also record the usage constraint of the face template, the usage constraint of the fingerprint 1, and the usage constraint of the fingerprint 2. Specifically, the use constraint conditions of the face template are as follows: the user who has the use authority of the face template is user Alisa, the equipment environment suitable for the face template is TEE, and the service suitable for the face template is payment service. The use constraints of the fingerprint 1 template and the fingerprint 2 template are as follows: the user who has the use authority of the fingerprint 1 template and the fingerprint 2 template is user Alisa, the equipment environment suitable for the fingerprint 1 template and the fingerprint 2 template is TEE, the service suitable for the fingerprint 1 template is unlocking service, and the service suitable for the fingerprint 2 template is screen unlocking.
Illustratively, the handset may record therein record information as shown in table 20, which holds an association relationship between a user identifier, a feature identifier, a device identifier, and a usage constraint.
TABLE 20
(Table 20 records the association among the user identifier, the feature identifier, the device identifier, and the usage constraint.)
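For illustration, the four kinds of usage constraints can be represented as a small structure that is checked before a feature template is used. The field names and example values below are assumptions made for this sketch rather than a defined format.

```python
from dataclasses import dataclass

# Hypothetical representation of the four kinds of usage constraints.
@dataclass
class UsageConstraint:
    allowed_users: set           # 1. usage authority of the feature template
    allowed_environments: set    # 2. applicable device environments, e.g. {"TEE"}
    allowed_services: set        # 3. applicable services, e.g. {"payment"}
    min_security_level: int = 0  # 4. required security level

def may_use_template(constraint, user_id, environment, service, security_level):
    """Return True only if every constraint bound to the template is satisfied."""
    return (user_id in constraint.allowed_users
            and environment in constraint.allowed_environments
            and service in constraint.allowed_services
            and security_level >= constraint.min_security_level)

face_constraint = UsageConstraint({"Alisa"}, {"TEE"}, {"payment"}, min_security_level=3)
print(may_use_template(face_constraint, "Alisa", "TEE", "payment", 3))  # True
print(may_use_template(face_constraint, "Lucy", "TEE", "payment", 3))   # False
```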
In a possible embodiment, the user may configure the usage constraint condition of each feature template in the feature association relationship list of an existing user. Specifically, as shown in fig. 53, the step of creating the usage constraint condition of a feature template by the user may be as follows. For the feature templates in the interface 5020 shown in (c) in fig. 50, the user can manage the feature templates in the list. As shown in (a) in fig. 53, the mobile phone receives a click operation of the user on the fingerprint 1 control 5301 and displays an interface 5310 as shown in (b) in fig. 53, where the interface 5310 includes selectable service types. After user Alisa selects the service type applicable to the feature template, for example the door opening service as shown in (b) of fig. 53, the mobile phone receives an operation of the user on the completion control 5311. In the above manner, user Alisa completes the configuration of the usage constraint condition for fingerprint 1. It should be noted that, when user Alisa initiates the configuration of the usage constraint condition of a feature template, the mobile phone needs to authenticate the identity of the user first, for example, prompt the user to input a fingerprint, and allow the user to further configure the usage constraint condition of the feature template only after the fingerprint authentication is passed.
It should be noted that user Alisa may initiate the configuration of usage constraint conditions for a plurality of feature templates in the feature association list of user Alisa; for example, user Alisa may simultaneously configure the same usage constraint condition for a plurality of feature templates in the feature template list shown in (c) in fig. 50. Alternatively, the administrator may initiate the configuration of usage constraint conditions for the feature templates of different users in the mobile phone, for example, configure the feature template list shown in (b) of fig. 50 at the same time. Alternatively, the usage constraint condition may be configured at user granularity, that is, all the biometric features of user Alisa can perform the door opening service; or the usage constraint condition may be configured at the granularity of a feature template of the user, that is, the fingerprint biometric feature of user Alisa can only execute the door opening service.
Optionally, after the feature template and the usage constraint condition establish an association relationship, the user may share the record information including the usage constraint condition to other trusted devices, such as other electronic devices or a hub device in the device group network, so that the other electronic devices may provide personalized services for the user according to the record information, such as providing services like cross-device authentication or device cooperation authentication.
Based on the embodiment described in the first scenario, the present application provides a data association method, as shown in fig. 54, which may include the following steps:
S5401, the electronic device receives a first operation of a user, where the first operation is used to request entry of a first feature template.
Illustratively, the first operation may be an operation on the face recognition 4711 control shown in (a) in fig. 47, which is an operation for requesting entry of a face.
S5402, in response to the first operation, the electronic device authenticates the identity of the user by using an existing second feature template, wherein the second feature template is associated with the user identification of the user.
Exemplarily, as shown in (b) of fig. 47, the mobile phone prompts the user to input a fingerprint for fingerprint authentication; the fingerprint template is a feature template previously entered by user Alisa, so the fingerprint template has an association relationship with user Alisa.
S5403, after the authentication is passed, the electronic equipment receives the first feature template input by the user.
For example, the mobile phone may receive an operation of the user on the face input control 4731 shown in (c) in fig. 47, and the mobile phone calls the camera to capture the face of the current user; that is, the first feature template may be the face captured by the camera of the mobile phone.
S5404, the electronic device establishes an association relationship between the first feature template and the user identifier.
Illustratively, the mobile phone establishes an association relationship between the face template and the fingerprint template. Because the fingerprint template and the user identification have an association relationship, the association relationship is established between the face template and the user identification.
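A compact sketch of S5401 to S5404 is given below; the function names are hypothetical and the authentication step is simulated by a simple lookup of the existing second feature template.

```python
# Sketch of S5401-S5404: authenticate with an existing template, then enter a new
# template and associate it with the same user identifier. Names are illustrative.
def enroll_first_template(device, entered_feature, new_template):
    # S5402: authenticate the user with the existing second feature template.
    user_id = None
    for uid, templates in device["associations"].items():
        if entered_feature in templates:     # the second template is bound to this user
            user_id = uid
            break
    if user_id is None:
        return "authentication failed, entry rejected"

    # S5403: receive the first feature template entered by the user.
    # S5404: associate the first feature template with the same user identifier.
    device["associations"][user_id].append(new_template)
    return f"{new_template} associated with user {user_id}"

phone = {"associations": {"Alisa": ["fingerprint_template"]}}
print(enroll_first_template(phone, "fingerprint_template", "face_template"))
```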
In a possible implementation, the electronic device may further receive a second operation of the user, where the second operation is used to trigger association of an already entered third feature template with the user identifier; then, in response to the second operation, an association relationship between the third feature template and the user identifier is established. Specifically, refer to the third manner in scenario one, that is, for the existing feature templates in the electronic device, the user may manage the existing feature templates and select the feature templates that need to be associated with the same user. In one possible implementation, the user manages the existing feature templates according to the naming information of each feature template. Optionally, in another possible implementation, when the user cannot distinguish the belonging user according to the naming information of each feature template, the existing feature templates may be identified first, and the feature templates belonging to the same user may be associated to the list of the belonging user according to the identification result. That is, the electronic device may receive feature information input by the user, match the feature information input by the user against at least one feature template in the electronic device, and determine the third feature template that matches the feature input by the user.
In a possible implementation, the electronic device may further obtain a usage constraint corresponding to the third feature template; and establishing an association relation between the third feature template and the use constraint condition. For a specific example, refer to the embodiment shown in fig. 11.
In one possible implementation, the first electronic device may further send the first feature template to a second electronic device, where the second electronic device is connected to the first electronic device. The second electronic device matches the received feature information against a fourth feature template in the second electronic device to obtain a matching result, and the first electronic device obtains the matching result from the second electronic device; when the matching result indicates success, an association relationship among the fourth feature template, the first feature template, and the user identifier is established.
The method can associate the feature templates of the same user across a plurality of electronic devices, so that in a device replacement scenario, the user can obtain, according to the association relationship, the feature templates belonging to the same user on each old device, and then distribute the feature templates on each old device to the new electronic device correspondingly, so as to implement one-key migration of the feature templates between the old and new devices.
In one possible embodiment, the method further includes: the electronic device may also share the record information with the second electronic device, where the record information includes the association relationship among the fourth feature template, the first feature template, and the user identifier. In this way, the other electronic devices can provide personalized services for the user according to the record information, such as cross-device authentication or device collaborative authentication.
Based on the embodiment shown in scenario two, the present application further provides a data association method. Fig. 55 shows a schematic diagram of the data association method provided in the embodiment of the present application, and the process mainly includes the following steps.
S5501, the first electronic device receives an operation of a user, where the operation includes feature information input by the user.
Illustratively, as shown in FIG. 51A, a user, Alisa, may enter a fingerprint on a cell phone.
S5502, the first electronic device matches the feature information by using a first feature template in the first electronic device to generate a first matching result.
Illustratively, the mobile phone matches the fingerprint input by the user by using the fingerprint template in the mobile phone, and generates a matching result.
S5503, the first electronic device distributes the feature information to the second electronic device.
Continuing the previous example, after receiving the fingerprint, the mobile phone synchronously distributes the fingerprint to the PC in the device networking.
S5504, the second electronic device uses a second feature template in the second electronic device to match with the feature information to obtain a second matching result, and the second electronic device sends the second matching result to the first electronic device.
Illustratively, the handset may obtain a matching result of the fingerprint from the PC.
S5505, when the first matching result and the second matching result are both matched successfully, the first electronic device establishes an association relationship among the first feature template, the second feature template and the user identifier according to the first matching result and the second matching result.
Optionally, the first electronic device may further establish an association relationship among the first feature template, the second feature template, the user identifier, and the device identifier according to the first matching result and the second matching result. For example, the mobile phone may store record information associating the device identifiers, the feature templates, and the user identifier according to the matching results of the fingerprint and the face of the user Alisa obtained from each device, as shown in table 18.
Optionally, this embodiment may further include: the first electronic device acquires the usage constraint conditions of the first feature template and the second feature template, and establishes an association among the first feature template, the second feature template, and the usage constraint conditions. The usage constraints may include at least one of the following: 1. constraints on the usage authority of the feature template; 2. constraints on the device environment to which the feature template is applicable; 3. constraints on the services to which the feature template is applicable; 4. constraints on the security level of the feature template. A specific example can be seen in fig. 53.
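For illustration only, the following Python sketch shows how such an association record could be assembled once both matching results succeed; all identifiers, field names, and constraint strings are hypothetical assumptions rather than the implementation of this application.

```python
# Hypothetical sketch of the association record built in S5501-S5505 plus the
# optional usage constraints; all names and values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class AssociationRecord:
    user_id: str                 # user identifier, e.g. "Alisa"
    templates: Dict[str, str]    # device identifier -> feature template identifier
    constraints: List[str] = field(default_factory=list)  # optional usage constraints


def associate(user_id: str,
              first_result: bool, first_template: str, first_device: str,
              second_result: bool, second_template: str, second_device: str,
              constraints: Optional[List[str]] = None) -> Optional[AssociationRecord]:
    """Establish the association only when both matching results succeed (S5505)."""
    if not (first_result and second_result):
        return None
    return AssociationRecord(
        user_id=user_id,
        templates={first_device: first_template, second_device: second_template},
        constraints=list(constraints or []),
    )


# Example: the phone's fingerprint template and the PC's fingerprint template are
# both matched successfully, so they are associated with the same user "Alisa".
record = associate("Alisa",
                   True, "phone_fingerprint_1", "phone",
                   True, "pc_fingerprint_1", "pc",
                   constraints=["usage authority: unlocking only",
                                "security level: >= 2"])
print(record)
```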
In a possible implementation, the electronic device shares, with other trusted devices, first record information including the association relationship among the first feature template, the second feature template, and the user identifier, and/or second record information including the association relationship among the first feature template, the second feature template, and the usage constraints.
It should be noted that the sharing of the record information may be performed by a data synchronization service module in the electronic device. That is, the electronic device may invoke the data synchronization service module to initiate data synchronization within the device network.
In the first manner, when the operating system of the device provides a unified data synchronization service, each service (for example, the biometric association management service executing the method in this case) can call the data synchronization service module of the local device, thereby completing data synchronization between devices. The unified data synchronization service can complete data writing, distribution among multiple devices, de-duplication, and the like.
In the second manner, if the operating system of the device does not provide a unified data synchronization service module, a data synchronization function may be added to the service itself (for example, a data synchronization function is added to the biometric association management service executing the method in this case), and the service can call this built-in data synchronization function, thereby completing data synchronization between devices. The data synchronization function added to the service can complete data writing, distribution among multiple devices, de-duplication, and the like.
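As an illustration of the two manners above, the following Python sketch prefers a unified operating-system synchronization service when one exists and otherwise falls back to a service-local synchronization function; the class and method names are assumptions made for this sketch only, not an actual operating-system API.

```python
# Illustrative sketch of the two synchronization manners described above; the
# class and method names are assumptions, not an actual operating-system API.
from typing import Optional


class UnifiedDataSyncService:
    """Stands in for a unified, operating-system-level data synchronization service."""

    def sync(self, record: dict) -> None:
        # The unified service handles writing, multi-device distribution and de-duplication.
        print("OS sync service distributing record:", record)


class BiometricAssociationService:
    def __init__(self, os_sync_service: Optional[UnifiedDataSyncService] = None) -> None:
        self.os_sync_service = os_sync_service

    def _local_sync(self, record: dict) -> None:
        # Second manner: the service carries its own data synchronization function.
        print("service-local sync distributing record:", record)

    def share_record(self, record: dict) -> None:
        # First manner: prefer the unified data synchronization service when available.
        if self.os_sync_service is not None:
            self.os_sync_service.sync(record)
        else:
            self._local_sync(record)


BiometricAssociationService(UnifiedDataSyncService()).share_record({"user": "Alisa"})
BiometricAssociationService().share_record({"user": "Alisa"})
```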
Therefore, based on this method, multiple pieces of biometric data belonging to the same user are associated so that personalized services can be provided for the user, and the first to fourth implementation modes can complete cross-device authentication or multi-device cooperative authentication based on this data association method.
Implementation mode six
Based on the steps shown in fig. 3, in S302, a specific method for determining, by the first electronic device, the authentication mode corresponding to the first service includes: the first electronic device determines an authentication mode corresponding to the first service, where the authentication mode supports authentication of at least two sustainable authentication factors. In the above S303, the scheduling, by the first electronic device, of the M electronic devices to authenticate the first service according to the authentication mode specifically includes: the first electronic device obtains at least two sustainable authentication factors from at least two electronic devices according to the authentication mode, authenticates the at least two sustainable authentication factors, and aggregates the authentication results of the at least two sustainable authentication factors.
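A minimal Python sketch of this flow is given below, assuming a hypothetical mapping from risk security level to required sustainable authentication factors and hypothetical per-factor authentication callables; it is not the concrete scheduling logic of this application.

```python
# A minimal sketch, under assumed names, of S302/S303 in implementation mode six:
# the first device selects an authentication mode that requires at least two
# sustainable authentication factors and aggregates their authentication results.
from typing import Callable, Dict, List

# Hypothetical mapping from risk security level to required sustainable factors.
MODE_BY_RISK_LEVEL: Dict[str, List[str]] = {
    "high": ["touch_screen_behavior", "face", "trusted_device"],
    "medium": ["touch_screen_behavior", "face"],
}


def authenticate_service(risk_level: str,
                         factor_sources: Dict[str, Callable[[], bool]]) -> bool:
    """factor_sources maps a factor name to a callable that obtains and
    authenticates that factor on some electronic device in the group."""
    required = MODE_BY_RISK_LEVEL.get(risk_level, ["touch_screen_behavior", "face"])
    results = [factor_sources[name]() for name in required if name in factor_sources]
    # Aggregate the per-factor results; here all of at least two factors must pass.
    return len(results) >= 2 and all(results)


ok = authenticate_service("medium", {
    "touch_screen_behavior": lambda: True,  # e.g. authenticated on the mobile phone
    "face": lambda: True,                   # e.g. authenticated on the smart camera
})
print("first service authenticated:", ok)
```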
In the conventional technology, during use of a terminal device, the device continuously judges whether the touch screen behavior of the current user is consistent with that of the expected user; if not, the current actual user may not be the legitimate user. The reliability of touch screen behavior authentication depends on sample quality: when the number of samples is too small, a reliable judgment cannot be given and the misjudgment rate is too high, so the effect of continuous authentication cannot be achieved in practice. In contrast, to solve the problem that the continuous authentication and acquisition capabilities of a single device are limited, the sixth implementation manner of the embodiment of this application cooperatively uses the sustainable authentication capabilities and sustainable acquisition capabilities of a plurality of devices, so that at least two sustainable authentication factors are obtained from at least two electronic devices and then authenticated, thereby achieving continuous authentication of the same service.
Currently, when a user uses any one of the above devices, the device may perform a process of authenticating the identity of the user. For example, the smartphone may perform continuous authentication of the touch screen behavior while the user is using the smartphone. Specifically, the smart phone may continuously obtain touch screen behavior information (e.g., a position and an area of a touch area, a timestamp, a touch frequency, a pressure, etc.) of a current user, and then compare the touch screen behavior information with a pre-established user touch screen behavior sample and determine whether the touch screen behavior information is consistent with the pre-established user touch screen behavior sample. Under the condition of consistency, the smart phone confirms that the current user is a legal user; and under the inconsistent condition, the smart phone confirms that the current user is not a legal user. The user touch screen behavior sample can be obtained by the smart phone through repeated touch screen behavior information acquisition and learning, and can be used for representing the touch screen behavior information of a legal user. The reliability of the authentication mode is closely related to the quality of the user touch screen behavior sample, if the number of samples is small or the quality of the samples is poor, the reliability of the authentication result is low, and the misjudgment rate is high. For another example, during the process of using the smart phone by the user, the smart phone may perform continuous authentication of the face information. Specifically, the smart phone can continuously acquire face information of the current user, and judge whether the user in the image acquisition area changes according to the face information. The authentication mode is single, and is only used for judging whether the user in the image acquisition area changes, so that the security is low. In addition, the two authentication modes of the above example have high requirements on power consumption of terminal devices such as smart phones, and are likely to cause too fast power consumption of the terminal devices, which results in poor practicability.
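The following Python sketch illustrates the single-device touch screen behavior comparison described above, assuming a hypothetical feature vector, sample, distance metric, and threshold; the real comparison logic and features may differ.

```python
# Illustrative sketch (not the patented algorithm) of comparing the current
# touch screen behavior information with a pre-established user sample; the
# feature names, sample values, distance metric and threshold are assumptions.
from math import sqrt
from typing import Dict

USER_SAMPLE: Dict[str, float] = {"area": 1.8, "frequency": 2.1, "pressure": 0.62}
THRESHOLD = 0.5  # assumed decision threshold


def is_consistent(current: Dict[str, float],
                  sample: Dict[str, float] = USER_SAMPLE) -> bool:
    """Return True when the current behavior is close enough to the sample."""
    distance = sqrt(sum((current[key] - sample[key]) ** 2 for key in sample))
    return distance <= THRESHOLD


print(is_consistent({"area": 1.7, "frequency": 2.0, "pressure": 0.60}))  # likely the legal user
print(is_consistent({"area": 3.0, "frequency": 0.9, "pressure": 0.20}))  # likely another user
```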
To solve the above problem, in the embodiments of the present application, the user identity may be authenticated in a multi-device cooperation manner. The collection and authentication of user characteristic information and the aggregation of at least one authentication result are not limited to a single device; devices that have established connections can perform these operations cooperatively, realizing resource integration among multiple devices with high usability.
In this embodiment, the plurality of devices connected and communicating via the network may be referred to as a multi-device group. When a user uses any one or more devices (which may be referred to as a using device hereinafter) in the multi-device group, at least one collecting device in the multi-device group may continuously collect an authentication factor for identifying the identity of the user. The authentication factors are of various types, such as but not limited to biometric factors such as fingerprints, voiceprints, faces, gait, heart rate and pulse, user behavior factors such as touch screen behavior and key behavior, and trusted device factors such as connection state and/or wearing state of the wearable device.
Then, at least one authentication device in the multi-device group may authenticate the authentication factors collected by the at least one collection device, thereby obtaining at least one authentication result. Finally, at least one decision device in the multi-device group may aggregate the at least one authentication result to obtain at least one aggregated result. The at least one aggregated result may be used to characterize whether the user using at least one of the above-mentioned using devices is legitimate. The at least one aggregated result may be synchronized to any device in the multi-device group. Therefore, even a device in the multi-device group that does not participate in collection, authentication, or aggregation can obtain the aggregated result, and the influence on power consumption is greatly reduced. In addition, the at least one aggregated result is obtained according to at least one authentication factor, that is, a multi-factor authentication mode is adopted, which improves the security and reliability of the authentication result.
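As a hedged illustration of how a decision device might aggregate the authentication results, the following Python sketch uses an assumed weighted-score rule; the weights, threshold, and rule are assumptions and are not prescribed by this application.

```python
# Hedged sketch of how a decision device might aggregate at least one
# authentication result into an aggregated result; the weights, threshold and
# weighted-score rule are assumptions and are not prescribed by this application.
from typing import Dict

# Per-factor authentication results reported by the authentication devices.
results: Dict[str, bool] = {"face": True, "gait": True, "trusted_device": False}

# Assumed per-factor weights reflecting how strongly each factor identifies the user.
weights: Dict[str, float] = {"face": 0.5, "gait": 0.3, "trusted_device": 0.2}


def aggregate(results: Dict[str, bool],
              weights: Dict[str, float],
              threshold: float = 0.6) -> bool:
    score = sum(weights[name] for name, passed in results.items() if passed)
    return score >= threshold


print("user is legitimate:", aggregate(results, weights))  # 0.5 + 0.3 = 0.8 >= 0.6 -> True
```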
It should be noted that the above-mentioned using device, collecting device, authenticating device and decision device are only used to distinguish the role of at least one device in the multi-device group performing the user identity authentication process. Wherein the using device is a device used by a user. The acquisition device is a device capable of acquiring an authentication factor for identifying the identity of a user. The authentication device is a device capable of authenticating the authentication factor acquired by the acquisition device to obtain an authentication result. The decision device is a device capable of making an authentication decision, i.e. the decision device may process the at least one authentication result to obtain the at least one aggregated result. In a specific implementation, a device may have multiple roles, for example, a device may be both a use device and an acquisition device. Multiple devices may also be in the same role, for example, three devices are all acquisition devices. The role of any device in the multi-device group may be determined according to an actual situation, which is not limited in the embodiment of the present application.
Referring to fig. 56A, fig. 56A shows a schematic structural diagram of another apparatus. The electronic device 200 shown in fig. 56A may be any one of the devices in the multi-device group shown in fig. 1.
As shown in fig. 56A, the electronic device 200 may include a resource management unit 56301 and a scheduling unit 56302. Wherein, the detailed description of each unit is as follows:
the resource management unit 56301 is configured to perform resource synchronization with other devices in the multi-device group. The resource management unit 56301 may correspond to the processor 210 in fig. 2 above.
The scheduling unit 56302 is configured to coordinate with at least one other device in the multi-device group to determine at least one authentication factor, so as to determine the corresponding acquiring device and the corresponding authenticating device in the multi-device group. Scheduling unit 56302 may correspond to processor 210 in fig. 2, above.
In particular, the resource management unit 56301 may be configured to perform resource synchronization of the acquisition capability, the authentication capability, and the decision capability with at least one other device in the multi-device group. The device with the collection capability is used for collecting an authentication factor for identifying the identity of the user. The device is provided with authentication capability, which means that the device can be used for authenticating the collected authentication factors to obtain an authentication result. The decision-making capability of the device means that the device can be used to aggregate at least one authentication result to obtain an aggregated result.
Specifically, the at least one authentication factor confirmed through coordination between the scheduling unit 56302 and the at least one other device in the multi-device group is the authentication factor that needs to be used in the user identity authentication process. The collecting device can be used for collecting the authentication factors in the user identity authentication process. The authentication device can be used for authenticating the authentication factors acquired by the collecting device in the user identity authentication process, thereby obtaining an authentication result.
Optionally, in some embodiments of this application, the authentication factor may be selected by the user; for example, the authentication factor for a specific operation may be one specific authentication factor, two authentication factors, or multiple authentication factors. The specific number of authentication factors is not limited.
In a possible implementation, the electronic device 200 may further include at least one acquisition unit 5603, whose configuration is shown in fig. 56B; that is, the electronic device 200 may be provided with acquisition capability. One acquisition unit 5603 may be used for collecting at least one type of authentication factor (hereinafter referred to as an authentication factor). Optionally, one acquisition unit 5603 may belong to one acquisition module included in the electronic device 200 shown in fig. 2. The embodiment of this application is described by taking, as an example, the case where one acquisition unit is used for acquiring one authentication factor.
For example, the face acquisition unit may be configured to acquire a biometric factor of face information.
For example, the gait acquisition unit may be used to acquire biometric factors of gait information.
For example, the pulse acquisition unit may be used to acquire biometric factors of pulse information.
For example, the heart rate acquisition unit may be used to acquire biometric factors of heart rate information.
For example, the touch screen behavior collecting unit may be configured to collect a user behavior factor of the touch screen behavior information.
For example, the trusted device acquisition unit may be configured to acquire a trusted device factor of a connection state and/or a wearing state of the wearable device.
In one possible implementation, the electronic device 200 may further include at least one authentication unit 56304. The specific structure is shown in fig. 56C. I.e. the electronic device 200 may be provided with authentication capabilities. An authentication unit 56304 may be configured to authenticate at least one authentication factor to obtain an authentication result. The embodiment of the present application takes an example that one authentication unit is used for authenticating one authentication factor to obtain a corresponding authentication result.
For example, the face authentication unit may be configured to authenticate the collected face information to obtain a face authentication result.
For example, the gait authentication unit may be configured to authenticate the collected gait information to obtain a gait authentication result.
For example, the pulse authentication unit may be configured to authenticate the collected pulse information to obtain a pulse authentication result.
For example, the heart rate authentication unit may be configured to authenticate the collected heart rate information to obtain a heart rate authentication result.
For example, the touch screen behavior authentication unit may be configured to authenticate the collected touch screen behavior information to obtain a touch screen behavior authentication result.
For example, the trusted device authentication unit may be configured to authenticate the acquired connection state and/or wearing state of the wearable device to obtain a trusted device authentication result.
In some embodiments, the resource management unit 56301, when configured to perform resource synchronization of the acquisition capability and the authentication capability with at least one other device in the multi-device group, may specifically be configured to: and carrying out resource synchronization of the acquisition unit and the authentication unit with at least one other device in the multi-device group.
Specifically, the acquisition unit and the authentication unit included in the electronic device 200 may actively report respective information to the resource management unit 56301 (this operation may be referred to as registration subsequently). The resource management unit 56301 may also actively acquire information of the acquisition unit and the authentication unit included in the electronic device 200. The implementation of the active acquisition is similar to that of the active reporting, and the active reporting is taken as an example to be described next.
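For illustration, the following Python sketch shows units registering their name, the handled authentication factor, and their current state with a local resource management unit; the class, method names, and example state values are assumptions.

```python
# Illustrative registration sketch: each acquisition or authentication unit
# reports its name, the authentication factor it handles and its current state
# to the local resource management unit; the class and values are assumptions.
from typing import Dict, List


class ResourceManagementUnit:
    def __init__(self) -> None:
        self.acquisition_units: List[Dict[str, str]] = []
        self.authentication_units: List[Dict[str, str]] = []

    def register_acquisition_unit(self, name: str, factor: str, state: str) -> None:
        self.acquisition_units.append({"name": name, "factor": factor, "state": state})

    def register_authentication_unit(self, name: str, factor: str, state: str) -> None:
        self.authentication_units.append({"name": name, "factor": factor, "state": state})


rmu = ResourceManagementUnit()
# Registration on the first device, expressed with assumed state values.
rmu.register_acquisition_unit("trusted device acquisition unit",
                              "connection/wearing state of wearable device", "available")
rmu.register_authentication_unit("trusted device authentication unit",
                                 "connection/wearing state of wearable device", "available")
print(rmu.acquisition_units)
print(rmu.authentication_units)
```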
In some embodiments, there is at least one aggregation device in the multi-device group, and the at least one aggregation device may include the electronic device 200. The method for determining the aggregation device is not limited in the embodiment of this application; for example, the electronic device with the strongest processing performance in the multi-device group may be used as the aggregation device. The resource management unit 56301 may send the information reported by the acquisition unit and the authentication unit to the aggregation device, and the aggregation device aggregates the information of the acquisition units and authentication units of the multiple electronic devices in the multi-device group, that is, completes resource synchronization. After the resource synchronization, any device in the multi-device group may obtain, from the aggregation device, information of the acquisition units and authentication units deployed on any other device in the multi-device group.
In some embodiments, the resource management unit 56301 may first aggregate information of the acquisition unit and the authentication unit included in the electronic device 200. Then, the resource management unit 56301 performs resource synchronization of the acquisition unit and the authentication unit with the resource management units of other devices in the multi-device group. Alternatively, resource synchronization may be performed over a connected network (e.g., a cloud server therein). Wherein, the synchronization mechanism of the resource synchronization may be but is not limited to at least one of the following: timing synchronization (e.g., once per minute), triggering synchronization (e.g., performing synchronization once in response to user action), updating synchronization (e.g., performing synchronization once when information of the acquisition unit or the authentication unit changes).
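The following Python sketch illustrates the three synchronization mechanisms mentioned above (timing, trigger, and update synchronization); the class name, method names, and scheduling details are assumptions made only for illustration.

```python
# Sketch of the three synchronization mechanisms mentioned above (timing,
# trigger and update synchronization); class name, method names and scheduling
# details are assumptions made only for illustration.
import time


class ResourceSynchronizer:
    def __init__(self, period_seconds: float = 60.0) -> None:
        self.period_seconds = period_seconds
        self._last_sync = float("-inf")
        self._last_snapshot = None

    def _synchronize(self, reason: str) -> None:
        self._last_sync = time.monotonic()
        print(f"synchronizing unit information with the multi-device group ({reason})")

    def tick(self) -> None:
        # Timing synchronization: e.g. once per minute.
        if time.monotonic() - self._last_sync >= self.period_seconds:
            self._synchronize("timing")

    def on_user_operation(self) -> None:
        # Trigger synchronization: performed once in response to a user operation.
        self._synchronize("trigger")

    def on_unit_info_changed(self, snapshot) -> None:
        # Update synchronization: performed once when acquisition/authentication
        # unit information changes.
        if snapshot != self._last_snapshot:
            self._last_snapshot = snapshot
            self._synchronize("update")


sync = ResourceSynchronizer()
sync.tick()
sync.on_user_operation()
sync.on_unit_info_changed({"face acquisition unit": "available"})
```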
After the resource synchronization, any device in the multi-device group may acquire information of the acquisition unit and the authentication unit deployed by any other device in the multi-device group, that is, any device in the multi-device group may acquire information of the acquisition unit and the authentication unit deployed by all devices in the multi-device group. The specific implementation of the resource synchronization process can be referred to the embodiment shown in fig. 57, and will not be described in detail for the moment.
In some embodiments, the scheduling unit 56302 may confirm at least one target acquisition unit and at least one target authentication unit corresponding to the target acquisition unit in a one-to-one manner according to a preset rule based on the information of the acquisition unit and the authentication unit of any one device in the multi-device group, which is acquired by the resource management unit 56301. If the authentication factor acquired by the acquisition unit is the same as the authentication factor used by the authentication unit for authentication, the acquisition unit corresponds to the authentication unit. For example, the authentication factor acquired by the trusted device acquisition unit and the authentication factor authenticated by the trusted device authentication unit are both the connection state and/or the wearing state of the wearable device, and then the trusted device acquisition unit corresponds to the trusted device authentication unit. Therefore, the authentication factor acquired by the at least one target acquisition unit is the at least one authentication factor, that is, the authentication factor for the at least one target authentication unit to authenticate is also the at least one authentication factor. The equipment where the target acquisition unit is located is the acquisition equipment, and the equipment where the target authentication unit is located is the authentication equipment. The execution process of the scheduling unit 56302 can be specifically referred to the description of S5803 in fig. 58, and will not be described in detail for the moment.
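A possible reading of this preset rule is sketched below in Python: an available acquisition unit is paired with an available authentication unit that handles the same authentication factor; the data shapes and example values are assumptions.

```python
# Sketch (with assumed data shapes) of the pairing rule described above: an
# available acquisition unit corresponds to an available authentication unit
# that handles the same authentication factor.
from typing import Dict, List, Tuple

Unit = Dict[str, str]  # {"device": ..., "name": ..., "factor": ..., "state": ...}


def select_targets(acquisition_units: List[Unit],
                   authentication_units: List[Unit]) -> List[Tuple[Unit, Unit]]:
    pairs: List[Tuple[Unit, Unit]] = []
    for acq in acquisition_units:
        if acq["state"] != "available":
            continue
        for auth in authentication_units:
            if auth["state"] == "available" and auth["factor"] == acq["factor"]:
                # acq's device is an acquisition device, auth's device an authentication device
                pairs.append((acq, auth))
                break
    return pairs


acq_units = [
    {"device": "2", "name": "face acquisition unit", "factor": "face information", "state": "available"},
    {"device": "3", "name": "heart rate acquisition unit", "factor": "heart rate information", "state": "unavailable"},
]
auth_units = [
    {"device": "3", "name": "face authentication unit", "factor": "face information", "state": "available"},
]
print(select_targets(acq_units, auth_units))
```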
If the electronic device 200 is a collection device confirmed through coordination between the scheduling unit 56302 and other devices in the multi-device group, the acquisition unit 5603 included in the electronic device 200 may be used for collecting the authentication factor in the user identity authentication process. If the electronic device 200 is an authentication device confirmed through coordination between the scheduling unit 56302 and other devices in the multi-device group, the authentication unit 56304 included in the electronic device 200 may be used to authenticate the collected authentication factors in the user identity authentication process.
In one possible implementation, the electronic device 200 may further include a decision unit 56305. The specific structure is shown in fig. 56D. I.e., the electronic device 200 may be decision-making capable. The decision unit 56305 may be configured to obtain an aggregated result based on the at least one authentication result. The implementation of the decision unit 56305 can be specifically referred to the description of S5806 in fig. 58, and will not be described in detail for the moment.
In some embodiments, when the resource management unit 56301 is used for resource synchronization of decision-making capability with at least one other device in the multi-device group, it may specifically be configured to: and carrying out resource synchronization of the decision unit with at least one other device in the multi-device group.
Specifically, the resource management unit 56301 may first summarize information of the decision unit included in the electronic device 200. For example, the decision unit included in the electronic device 200 may report respective information to the resource management unit 56301 after the electronic device 200 is powered on and started. Alternatively, the electronic apparatus 200 may receive a user operation (for example, a pressing operation of a user on a power key), and in response to the user operation, the resource management unit 56301 actively acquires and aggregates information of the decision unit included in the electronic apparatus 200.
Then, the resource management unit 56301 performs resource synchronization of the decision unit with at least one other device in the multi-device group. Optionally, data synchronization may be performed between multiple devices in the multi-device group through a connected network (e.g., a cloud server therein). The resource synchronization mechanism is similar to the resource synchronization mechanism of the acquisition unit and the authentication unit, and is not described again. After the resource synchronization, any device in the multi-device group may obtain information of the decision unit of any device in the multi-device group.
In some embodiments, the scheduling unit 56302 may also be used to identify decision devices in a multi-device group. The decision device may be configured to process at least one authentication result obtained by the authentication device to obtain an aggregation result.
In some embodiments, based on the information of the decision unit of any device in the multi-device group acquired by the resource management unit 56301, the scheduling unit 56302 may confirm the target decision unit according to a preset rule, so as to confirm that the device where the target decision unit is located is the decision device. The preset rule for confirming the target decision unit may be, but is not limited to: the device comprising a decision unit is the decision device, the device comprising a decision unit whose capability is stronger than a preset threshold is the decision device, or the device comprising the decision unit with the strongest capability is the decision device, and so on. The capability of the decision unit is measured by, for example, the number of authentication results processed within a preset time period, the number of aggregation results obtained, and the like.
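The following Python sketch illustrates one of these preset rules (choosing the available decision unit with the strongest assumed capability metric); the metric and data layout are assumptions for illustration.

```python
# Sketch of one of the preset rules above: pick the available decision unit
# with the strongest capability; the capability metric (aggregation results
# processed per minute) and the data layout are assumptions.
from typing import Dict, List, Optional

decision_units: List[Dict[str, object]] = [
    {"device": "4", "state": "available", "results_per_minute": 120},
    {"device": "1", "state": "unavailable", "results_per_minute": 300},
]


def pick_decision_device(units: List[Dict[str, object]]) -> Optional[str]:
    available = [u for u in units if u["state"] == "available"]
    if not available:
        return None
    strongest = max(available, key=lambda u: int(u["results_per_minute"]))
    return str(strongest["device"])


print("decision device:", pick_decision_device(decision_units))  # -> "4"
```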
If the electronic device 200 is a decision device confirmed through coordination between the scheduling unit 56302 and other devices in the multi-device group, the decision unit 56305 of the electronic device 200 may be configured to process at least one authentication result obtained by the authentication unit 56304 to obtain an aggregated result. The aggregated result may be synchronized to other devices in the multi-device group. After the results are synchronized, any device in the multi-device group may obtain and use the aggregated result. The synchronization mechanism of the result synchronization is similar to the synchronization mechanism of the resource synchronization of the acquisition unit and the authentication unit, and is not described again.
It should be noted that the implementation of each operation may also correspond to the corresponding description of the method embodiments shown in fig. 57 to fig. 77 described below. The electronic device 200 may be any one of the group of devices in fig. 57-77.
In this embodiment, the processor 210 of the electronic device 200 shown in fig. 2 may include at least one of the resource management unit 56301, the scheduling unit 56302, the acquisition unit 5603, the authentication unit 56304, and the decision unit 56305 of the electronic device 200 illustrated in fig. 56A to 56D. Alternatively, the sensor module of the electronic device 200 depicted in fig. 2 may include the acquisition unit 5603 of the electronic device 200 depicted in fig. 56A to 56D.
The following is an exemplary description of a resource synchronization process between multiple devices in a multi-device group.
Referring to fig. 57, fig. 57 is a flowchart illustrating a resource synchronization method according to an embodiment of this application. Fig. 57 takes four devices in a multi-device group as an example: a first device, a second device, a third device, and a fourth device. In fig. 57, the first device includes a resource management unit, a trusted device acquisition unit, and a trusted device authentication unit. The second device includes a resource management unit, two acquisition units (a face acquisition unit and a gait acquisition unit), and a touch screen behavior authentication unit. The third device includes a resource management unit, a heart rate acquisition unit, and two authentication units (a face authentication unit and a gait authentication unit). The fourth device includes a resource management unit, a face acquisition unit, and a pulse authentication unit. For example, the first device is a mobile phone, the second device is a tablet computer, the third device is a smart band, and the fourth device is a smart camera.
The resource synchronization method shown in fig. 57 may include, but is not limited to, the following steps:
S5701: The first device obtains the acquisition unit and the authentication unit available to the first device itself.
First, the acquisition unit and the authentication unit of any one device in the multi-device group may report respective information to the resource management unit of the device (i.e., the registration operation described in fig. 56A to 56D). The process of registration may include, but is not limited to, the following steps:
Step a 1: the trusted device acquisition unit of the first device sends the information of the trusted device acquisition unit to the resource management unit of the first device, thereby completing the registration operation.
Specifically, the information reported when the acquisition unit is registered may include, but is not limited to, an acquisition unit name, an acquired authentication factor, a current state, and the like. An example of the information reported by the acquisition unit of the first device is shown in table 19 below. In a specific implementation, the information reported by the acquisition unit when registering may be more or less.
Table 19 Acquisition unit of the first device
(Table content is provided as an image in the original publication; it lists the trusted device acquisition unit, the collected authentication factor, and its current state.)
The current state may be available or unavailable. When the current state of the acquisition unit is available, the acquisition unit can acquire the user characteristic information (i.e., the acquired authentication factor in table 19) used for identifying the user identity; otherwise, the acquisition unit is unavailable.
For example, when the first device may communicate with at least one other device in the multi-device group, and the trusted device acquisition unit may start up normally (i.e., may detect the connection state of the wearable device), the current state of the trusted device acquisition unit is available, otherwise it is not available. Or, when the first device may communicate with at least one other device in the multi-device group, and the trusted device acquisition unit may start normally (i.e., when the trusted device acquisition unit detects that the connection state of the wearable device is connected, and the trusted device acquisition unit may detect the wearing state of the wearable device), the current state of the trusted device acquisition unit is available, otherwise, it is not available.
Step a 2: the trusted device authentication unit of the first device sends the information of the trusted device authentication unit to the resource management unit of the first device, so that the registration operation is completed.
Specifically, the information reported when the authentication unit is registered may include, but is not limited to, an authentication unit name, an authentication factor of the authentication, a current state, and the like. An example of the information reported by the authentication unit of the first device is shown in table 20 below. In a specific implementation, the information of the registration report of the authentication unit may be more or less.
Table 20 Authentication unit of the first device
(Table content is provided as an image in the original publication; it lists the trusted device authentication unit, the authentication factor it authenticates, and its current state.)
Wherein the current state may be available or unavailable. The authentication unit may authenticate the collected user characteristic information (i.e., the authentication factor authenticated in table 20) when the current state corresponding to the authentication unit is available, otherwise, the authentication unit is unavailable.
For example, when the first device may communicate with at least one other device in the multi-device group, and the trusted device authentication unit may start up normally (that is, when it is detected that the connection state of the wearable device is connected), the current state of the trusted device authentication unit is available, and the authentication result obtained by the trusted device authentication unit is also legitimate; otherwise, the authentication result obtained by the trusted device authentication unit is not legal. Or when the first device can communicate with at least one other device in the multi-device group and the trusted device authentication unit can be started normally (that is, when it is detected that the wearable device is connected and the wearable device is worn), the current state of the trusted device authentication unit is available, and the authentication result obtained by the trusted device authentication unit is legal; otherwise, the authentication result obtained by the trusted device authentication unit is invalid.
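The following Python sketch expresses the availability conditions and authentication result of the trusted device units described above; the function names, parameters, and the lenient/strict variants are assumptions used only to restate the two cases in code.

```python
# Sketch of the availability and authentication result of the trusted device
# units described above; function names, parameters and the lenient/strict
# variants are assumptions used only to restate the two cases in code.
def trusted_device_unit_state(can_communicate_with_group: bool,
                              unit_starts_normally: bool) -> str:
    """Current state of the trusted device acquisition/authentication unit."""
    if can_communicate_with_group and unit_starts_normally:
        return "available"
    return "unavailable"


def trusted_device_authentication_result(wearable_connected: bool,
                                          wearable_worn: bool,
                                          require_wearing: bool = False) -> bool:
    """Legal (True) when the wearable device is connected and, in the stricter
    variant, also worn."""
    if require_wearing:
        return wearable_connected and wearable_worn
    return wearable_connected


print(trusted_device_unit_state(True, True))                     # available
print(trusted_device_authentication_result(True, False))         # True  (lenient variant)
print(trusted_device_authentication_result(True, False, True))   # False (strict variant)
```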
S5702: The second device obtains the acquisition units and the authentication unit available to the second device itself.
The S5702 may specifically include the following steps:
step b 1: and the face acquisition unit of the second equipment sends the information of the face acquisition unit to the resource management unit of the second equipment, so that the registration operation is completed.
Step b 2: and the gait acquisition unit of the second equipment sends the information of the gait acquisition unit to the resource management unit of the second equipment so as to complete the registration operation.
Specifically, steps b1 to b2 are similar to step a1, and the detailed description can be referred to the corresponding description in step a 1. An example of the information reported by the acquisition unit of the second device is shown in table 21 below.
Table 21 Acquisition units of the second device
Name of acquisition unit | Collected authentication factors | Current state
Face acquisition unit | Face information | Available
Gait acquisition unit | Gait information | Available
For example, when the second device can communicate with at least one other device in the multi-device group, and the face acquisition unit and the gait acquisition unit can be started normally, that is, when the corresponding user characteristic information (i.e., the acquired authentication factors in table 21) can be detected, the current states of the face acquisition unit and the gait acquisition unit are available, otherwise, they are not available.
Step b 3: the touch screen behavior authentication unit of the second device sends the information of the touch screen behavior authentication unit to the resource management unit of the second device, so that the registration operation is completed.
Specifically, step b3 is similar to step b2, and the specific description can be referred to the corresponding description in step b 2. An example of the information reported by the authentication unit of the second device is shown in table 22 below.
Table 22 Authentication unit of the second device
Authentication unit name | Authentication factor for authentication | Current state
Touch screen behavior authentication unit | Touch screen behavior information | Available
For example, when the second device may communicate with at least one other device in the multi-device group, and the touch screen behavior authentication unit may be normally activated, that is, may authenticate the collected user characteristic information (i.e., the authentication factor authenticated in table 22), the current state of the touch screen behavior authentication unit is available, otherwise, the current state is unavailable.
S5703: the third device obtains the acquisition unit and the authentication unit available to the second device itself.
The S5703 may specifically include the following steps:
Step c 1: the heart rate acquisition unit of the third device sends the information of the heart rate acquisition unit to the resource management unit of the third device, so that the registration operation is completed.
Specifically, step c1 is similar to step a1, and the detailed description can be referred to the corresponding description in step a 1. An example of the information reported by the acquisition unit of the third device is shown in table 23 below. The example of the current state of the acquisition unit of the third device corresponding to table 23 is similar to that of the second device, and reference may be specifically made to the examples of step b 1-step b2, which are not described again.
Table 23 Acquisition unit of the third device
Name of acquisition unit | Collected authentication factors | Current state
Heart rate acquisition unit | Heart rate information | Unavailable
Step c 2: the face authentication unit of the third device sends the information of the face authentication unit to the resource management unit of the third device, so that the registration operation is completed.
Step c 3: the gait authentication unit of the third device sends the information of the gait authentication unit to the resource management unit of the third device, thereby completing the registration operation.
Specifically, steps c 2-c 3 are similar to step b2, and the detailed description can be found in the corresponding description of step b 2. An example of the information reported by the authentication unit of the third device is shown in table 24 below. The example of the current state of the authentication unit of the third device corresponding to table 24 is similar to that of the second device, and reference may be specifically made to the example of step b3, which is not described again.
Table 24 Authentication units of the third device
Authentication unit name | Authentication factor for authentication | Current state
Face authentication unit | Face information | Available
Gait authentication unit | Gait information | Available
S5704: the fourth device obtains the acquisition unit and the authentication unit available to the second device itself.
The S5704 may specifically include the following steps:
Step d 1: the face acquisition unit of the fourth device sends the information of the face acquisition unit to the resource management unit of the fourth device, so that the registration operation is completed.
Specifically, step d1 is similar to step a1, and the detailed description can be referred to the corresponding description in step a 1. An example of the information reported by the acquisition unit of the fourth device is shown in table 25 below. The example of the current state of the acquisition unit of the fourth device corresponding to table 25 is similar to that of the second device, and specific reference may be made to the examples of step b 1-step b2, which is not described again.
Table 25 Acquisition unit of the fourth device
Name of acquisition unit | Collected authentication factors | Current state
Face acquisition unit | Face information | Unavailable
Step d 2: the pulse authentication unit of the fourth device sends the information of the pulse authentication unit to the resource management unit of the fourth device, so that the registration operation is completed.
Specifically, step d2 is similar to step a2, and the detailed description can be referred to the corresponding description in step a 2. An example of the information reported by the authentication unit of the fourth device is shown in table 26 below. The example of the current state of the authentication unit of the fourth device corresponding to table 26 is similar to that of the second device, and may specifically refer to the example of step b3, and is not described again.
Table 26 Authentication unit of the fourth device
Authentication unit name | Authentication factor for authentication | Current state
Pulse authentication unit | Pulse information | Available
The explanation of the current state is not limited to the above. In a specific implementation, whether the position information of the device where the acquisition unit is located allows the acquisition unit to acquire the corresponding user feature information may also be considered when determining the current state, which is not limited in this embodiment of this application.
Specifically, after the registration is completed, the resources of the acquisition unit and the authentication unit may be synchronized between the multiple devices in the multiple device group through their respective resource management units. This process of resource synchronization may include, but is not limited to, the following steps:
s5705: the first device, the second device, the third device and the fourth device perform resource synchronization.
Specifically, the first device, the second device, the third device, and the fourth device perform resource synchronization of the acquisition unit and the authentication unit.
In some embodiments, it is assumed that there is at least one aggregation device, such as a first device, in the multi-device group. The resource management unit of the second device may send the information reported by the acquisition unit and the authentication unit of the second device to the resource management unit of the first device. And the resource management unit of the third device sends the information reported by the acquisition unit and the authentication unit of the third device to the resource management unit of the first device. And the resource management unit of the fourth device sends the information reported by the acquisition unit and the authentication unit of the fourth device to the resource management unit of the first device.
Then, the resource management unit of the first device may summarize the information of the acquisition units and the authentication units of the first device, the second device, the third device, and the fourth device. For example, the first device may obtain an acquisition unit summary table and an authentication unit summary table of the multi-device group; specific examples are shown in tables 27 and 28 below. The second device, the third device, or the fourth device may obtain information of any acquisition unit and authentication unit in the multi-device group from the first device, that is, complete resource synchronization.
In some embodiments, the resource management units of the respective first, second, third and fourth devices may first aggregate information for the respective acquisition units and authentication units. Then, resource synchronization of the acquisition unit and the authentication unit is performed among the resource management units of the first device, the second device, the third device and the fourth device. The synchronization mechanism of resource synchronization can refer to the synchronization mechanism of resource synchronization of the acquisition unit and the authentication unit shown in fig. 56A-56D. For example, after the resource synchronization, a collection unit summary table and an authentication unit summary table of the multi-device group may be obtained, and specific examples thereof are shown in tables 27 and 28 below.
After the resource synchronization, the first device, the second device, the third device, and the fourth device may all obtain information of any one acquisition unit and authentication unit in the multi-device group.
Table 27 Summary of acquisition units of the multi-device group
(Table content is provided as an image in the original publication; it lists each acquisition unit in the multi-device group together with the identifier of the device where it is located, the collected authentication factor, and its current state.)
The identifier of the first device is 1, the identifier of the second device is 2, the identifier of the third device is 3, and the identifier of the fourth device is 4.
Table 28 Summary of authentication units of the multi-device group
(Table content is provided as an image in the original publication; it lists each authentication unit in the multi-device group together with the identifier of the device where it is located, the authenticated authentication factor, and its current state.)
In a specific implementation, when a plurality of devices in the multi-device group perform resource synchronization of the acquisition unit and the authentication unit, location information between the devices may also be synchronized, which is not limited in the embodiment of the present application.
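For illustration, the following Python sketch merges the unit information reported by each device into a summary list keyed by the device identifier, in the spirit of tables 27 and 28; the data structure is an assumption, and the example values follow tables 21, 23, and 25 where known.

```python
# Sketch, in the spirit of tables 27 and 28, of the aggregation device merging
# the unit information reported by each device into a summary keyed by the
# device identifier; example values follow tables 21, 23 and 25, others omitted.
from typing import Dict, List


def build_summary(per_device_units: Dict[str, List[Dict[str, str]]]) -> List[Dict[str, str]]:
    summary: List[Dict[str, str]] = []
    for device_id, units in per_device_units.items():
        for unit in units:
            summary.append({"device": device_id, **unit})
    return summary


acquisition_reports = {
    "2": [{"name": "face acquisition unit", "factor": "face information", "state": "available"},
          {"name": "gait acquisition unit", "factor": "gait information", "state": "available"}],
    "3": [{"name": "heart rate acquisition unit", "factor": "heart rate information", "state": "unavailable"}],
    "4": [{"name": "face acquisition unit", "factor": "face information", "state": "unavailable"}],
}
for row in build_summary(acquisition_reports):
    print(row)
```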
Not limited to the resource synchronization process shown in fig. 57, in a specific implementation, the resource management unit of any device in the multi-device group may also actively acquire information of the acquisition unit and the authentication unit included in the device, so as to perform resource synchronization between the acquisition unit and the authentication unit with other devices in the multi-device group.
The process of performing resource synchronization of the decision unit by the respective resource management unit between the multiple devices in the multiple device group is similar to the resource synchronization process shown in fig. 57, and is not described again. After the resource synchronization, any device in the multi-device group may obtain information of any decision unit in the multi-device group. For example, after resource synchronization, any device in the multi-device group may obtain the total decision unit table of the multi-device group, which is specifically shown in table 29 below. It should be noted that, in table 29, the first device, the second device, and the third device do not include a decision unit, and only the fourth device includes a decision unit.
Table 29 Summary of decision units of the multi-device group
Identifier of the device where the decision unit is located | Current state
4 | Available
The identifier of the first device is 1, the identifier of the second device is 2, the identifier of the third device is 3, and the identifier of the fourth device is 4. The current state may be available or unavailable. The decision unit may process at least one authentication result to obtain an aggregated result when the current state corresponding to the decision unit is available, otherwise, the current state is unavailable.
The content is not limited to the example listed in table 29; in a specific implementation, a measure of the authentication decision capability may also be included, which is not limited by this embodiment of the application.
Illustratively, a user enters an area where any one or more electronic devices in a multi-device group may be present. When the user uses any electronic device in the multi-device group in the area, the electronic devices in the multi-device group in the area can cooperatively authenticate the user identity, so as to judge whether the user is legitimate. For example, when a user uses a mobile phone in a living room at home, devices such as the mobile phone, a camera, a smart television, and a smart sound box can exist in the living room and can form a multi-device group. In the process of the user using the mobile phone, the devices in the multi-device group can cooperate with each other to perform user identity authentication and obtain at least one aggregation result. The mobile phone can obtain the at least one aggregation result obtained at different moments to determine whether the user is legitimate. When the user is legitimate, the mobile phone can provide the corresponding service for the user; otherwise, when the user is illegitimate, the mobile phone can block the user's access to the mobile phone.
The following describes in detail a process of cooperatively authenticating the user identity by a plurality of electronic devices in the multi-device group.
Referring to fig. 58, fig. 58 is a diagram illustrating an authentication method for multi-device cooperation according to an embodiment of the present disclosure. The method can be applied to the multi-device cooperative authentication system shown in fig. 1, and in detail, the method can be applied to the multi-device group shown in fig. 1.
It should be noted that fig. 58 only shows the use device, the collection device, the authentication device, and the decision device that perform the user identity authentication process, but this does not mean that no other device is included in the multi-device group. For example, the multi-device group may further include at least one device that does not perform the user authentication process. For the descriptions of the using device, the collecting device, the authenticating device, and the deciding device, reference may be specifically made to the descriptions of the using device, the collecting device, the authenticating device, and the deciding device in fig. 1-2, fig. 56A-56D, and fig. 57, which are not described again.
The method includes, but is not limited to, the steps of:
S5801: A connection is established among the using device, the collecting device, the authentication device, and the decision device.
Specifically, the connection mode among the usage device, the acquisition device, the authentication device, and the decision device may specifically refer to the description of the connection mode among multiple devices in the multiple device group in fig. 1, and is not described again.
S5802: Resource synchronization is performed among the using device, the collecting device, the authentication device, and the decision device.
Specifically, under the condition that a first preset condition is met, resource synchronization of acquisition capability, authentication capability and decision capability can be performed among a plurality of devices in the multi-device group. For the description of the resource synchronization of the acquisition capability, the authentication capability and the decision capability, reference may be specifically made to the description about the acquisition capability and the authentication capability in fig. 1-2, fig. 56A-56D and fig. 57, which is not described herein again.
In some embodiments, the structure of any one device in a multi-device group may be as shown in fig. 56A-56D above. Then, resource synchronization of acquisition capability, authentication capability and decision capability is performed among the multiple devices in the multiple device group, which may specifically be: and resource synchronization of the acquisition unit, the authentication unit and the decision capability is performed among the multiple devices in the multiple device group through respective resource management units. For the description of the resource synchronization process, refer to the description of the resource synchronization process in fig. 1-2, fig. 56A-56D, and fig. 57, which is not repeated herein. After the resources are synchronized, any device in the multi-device group can acquire information of any acquisition unit, authentication unit and decision unit in the multi-device group.
Specifically, the first preset condition may include at least one of: any one device in the multi-device group receives user operation, the used device in the multi-device group is in a first preset state, the state of any one device in the multi-device group is changed, and the like. The user operation may be, but is not limited to, an operation such as a touch, a press, or a slide operation on a display screen of the user apparatus, an operation on a key of the user apparatus, a voice signal, a gesture operation, a user electroencephalogram signal, or the like. The first preset state may include at least one of: the state of establishing connection with other devices in the multi-device group, the state of being bright and not receiving user operation, the state of displaying a preset application interface and not receiving user operation and the like. The change of the state of any one device in the multi-device group may include at least one of the following: the device is reconnected to the multi-device group, the connection is cancelled and the multi-device group is exited, and the current state of the acquisition unit, the authentication unit or the decision unit of the device is changed.
For example, the using device receives a pressing operation of the user on the power key of the using device; in response to the user operation, the using device is powered on and started, and S5801-S5802 are executed.
For example, the using device receives a click operation of the user on a bank application icon displayed on the display screen of the using device. In response to the user operation, the using device triggers authentication of the user identity, and S5802 is performed.
For example, when the using device confirms that the screen has been unlocked and the device is in a bright-screen state, authentication of the user identity may be triggered, and S5802 is performed.
For example, the preset application is a first payment application. When the using device confirms that the currently displayed user interface is an application interface of the first payment application (for example, Alipay), authentication of the user identity may be triggered, and S5802 is executed.
For example, the camera in the living room includes a face acquisition unit, and the camera can turn off the face acquisition unit after the user leaves the living room, that is, when the face information of the user is not detected, the current state of the face acquisition unit is unavailable. After the user enters the living room, the camera receives a connection establishment request of the mobile phone of the user. In response to the request, the camera may turn the face acquisition unit on and the current state of the face acquisition unit may be changed from unavailable to available. At this time, a resource synchronization procedure may be performed between the devices in the multi-device group, i.e., S5802 is performed.
S5803: At least one authentication factor is confirmed through coordination among the using device, the collecting device, the authentication device, and the decision device, so as to confirm the collecting device and the authentication device in the multi-device group.
Specifically, in the case that a second preset condition is met, at least one authentication factor is confirmed in coordination among the usage device, the acquisition device, the authentication device, and the decision device. Wherein the second preset condition may include at least one of: any one device in the multi-device group receives user operation, the used device in the multi-device group is in a second preset state, the current state of the acquisition unit, the authentication unit or the decision unit of any one device in the multi-device group is changed, and the like. The user operation is similar to the user operation in S5802, and is not described again. The second preset state may include at least one of: the method comprises the steps of completing resource synchronization with other equipment in a multi-equipment group, lighting the screen but not receiving user operation, displaying a preset application interface but not receiving user operation and the like. Examples of the second preset condition are similar to those of the first preset condition in S5802, and the examples are not repeated, and are exemplified below in conjunction with the first preset condition and the second preset condition.
Optionally, in some embodiments of the present application, the first preset condition and the second preset condition may be different. For example, the user apparatus receives a pressing operation (i.e., satisfies a first preset condition) on a power key of the user apparatus, in response to the user operation, the user apparatus is powered on, and S5801-S5802 are executed. Then, the user performs a payment operation by using a payment application on the device, the device receives a click operation (that is, a second preset condition is satisfied) on an immediate payment control displayed on the display screen, and in response to the click operation, authentication of the user identity is triggered, and S5803 is performed.
Optionally, in some embodiments of the present application, the first preset condition and the second preset condition may also be the same. For example, when the user confirms that the screen unlocking is successful and the user device is in a bright screen state (i.e., the first preset condition and the second preset condition are met), the authentication of the user identity may be triggered, and S5802-S5803 is executed.
Specifically, the authentication factor is an authentication factor that needs to be used in the user identity authentication process. The acquisition device may be configured to acquire the authentication factor in the user identity authentication process. The authentication device may be configured to authenticate the authentication factor acquired by the acquisition device in the user identity authentication process.
In some embodiments, the structure of any one device in a multi-device group may be as shown in fig. 56A-56D above. After resource synchronization, any one device in the multi-device group can obtain the information of every acquisition unit and authentication unit in the multi-device group, and confirms, according to a preset rule, at least one target acquisition unit and at least one target authentication unit that correspond to the target acquisition units one to one. An acquisition unit corresponds to an authentication unit if the authentication factor collected by the acquisition unit is the same as the authentication factor used by the authentication unit for authentication. Therefore, the authentication factors collected by the at least one target acquisition unit are the at least one authentication factor, and the authentication factors authenticated by the at least one target authentication unit are also the at least one authentication factor. The device where a target acquisition unit is located is an acquisition device, and the device where a target authentication unit is located is an authentication device. The preset rule may be: when the current state of an acquisition unit is available and the current state of the authentication unit corresponding to that acquisition unit is also available, the acquisition unit is determined to be a target acquisition unit and the authentication unit is determined to be a target authentication unit.
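The pairing logic of the preset rule above can be sketched as follows. This is a minimal Python sketch built on hypothetical data structures (the real embodiment works on the summary tables synchronized in S5802, whose format is not reproduced here); it only shows that an acquisition unit becomes a target acquisition unit when both it and an authentication unit for the same authentication factor are currently available.

```python
# Minimal sketch (hypothetical data model, for illustration only): pair every
# available acquisition unit with an available authentication unit that
# handles the same authentication factor.
from dataclasses import dataclass

@dataclass
class UnitInfo:
    device: str       # device where the unit is located
    factor: str       # authentication factor, e.g. "face" or "gait"
    available: bool   # current state after resource synchronization

def confirm_targets(acquisition_units, authentication_units):
    """Return (target acquisition unit, target authentication unit) pairs."""
    pairs = []
    for acq in acquisition_units:
        if not acq.available:
            continue
        for auth in authentication_units:
            # the units correspond when they handle the same authentication factor
            if auth.factor == acq.factor and auth.available:
                pairs.append((acq, auth))
                break
    return pairs

# Example mirroring the second-device/third-device scenario discussed below:
# face/gait acquisition units on one device, face/gait authentication units
# on another, all currently available.
acq = [UnitInfo("second device", "face", True), UnitInfo("second device", "gait", True)]
auth = [UnitInfo("third device", "face", True), UnitInfo("third device", "gait", True)]
for a, b in confirm_targets(acq, auth):
    print(f"{a.factor}: acquire on {a.device}, authenticate on {b.device}")
```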
The confirmed acquisition devices and authentication devices can have various mapping relationships: one-to-one, one-to-many, many-to-one, and many-to-many. The one-to-one mapping relationship may specifically be: the multi-device group coordinates and confirms one acquisition device and one authentication device, and at least one acquisition unit of the one acquisition device corresponds one to one to at least one authentication unit of the one authentication device. The one-to-many mapping relationship may specifically be: the multi-device group coordinates and confirms one acquisition device and a plurality of authentication devices, and a plurality of acquisition units of the one acquisition device correspond one to one to authentication units of the plurality of authentication devices. The many-to-one mapping relationship may specifically be: the multi-device group coordinates and confirms a plurality of acquisition devices and one authentication device, and acquisition units of the plurality of acquisition devices correspond one to one to a plurality of authentication units of the one authentication device. The many-to-many mapping relationship may specifically be: the multi-device group coordinates and confirms a plurality of acquisition devices and a plurality of authentication devices, and at least one acquisition unit of the plurality of acquisition devices corresponds one to one to at least one authentication unit of the plurality of authentication devices.
For example, it is assumed that the multi-device group includes the four devices shown in fig. 57, and the acquisition units and authentication units included in the four devices are as shown in tables 19 to 28 above. The following can then be obtained from the collection unit summary table shown in table 27 and the authentication unit summary table shown in table 28: the current state of the face acquisition unit of the second device is available, and the current state of the face authentication unit of the third device corresponding to the face acquisition unit of the second device is also available; the current state of the gait acquisition unit of the second device is available, and the current state of the gait authentication unit of the third device corresponding to the gait acquisition unit of the second device is also available. Therefore, the face acquisition unit and the gait acquisition unit of the second device are the confirmed target acquisition units, and the face authentication unit and the gait authentication unit of the third device are the confirmed target authentication units.
Therefore, the authentication factor collected by the face acquisition unit of the second device (that is, the authentication factor authenticated by the face authentication unit of the third device) and the authentication factor collected by the gait acquisition unit of the second device (that is, the authentication factor authenticated by the gait authentication unit of the third device) are both confirmed authentication factors. In other words, the face information and the gait information are the confirmed authentication factors. The second device is an acquisition device, and the third device is an authentication device. In this case, the confirmed acquisition device and authentication device are in a one-to-one mapping relationship: the face acquisition unit of the second device (the acquisition device) corresponds to the face authentication unit of the third device (the authentication device), and the gait acquisition unit of the second device corresponds to the gait authentication unit of the third device.
The example of the one-to-one mapping relationship is not limited to the one listed above; examples of the one-to-one mapping relationship can also be found in the embodiments shown in fig. 61-62. Examples of the one-to-many mapping relationship can be found in the embodiments shown in fig. 63-64. Examples of the many-to-one mapping relationship can be found in the embodiments shown in fig. 65-66. Examples of the many-to-many mapping relationship can be found in the embodiments shown in fig. 67-68.
Not limited to the above example of the preset rule, in a specific implementation, the preset rule may also be: and under the condition that the current state of the authentication unit corresponding to the acquisition unit with the available current state is also available, determining whether the acquisition unit acquires the biological characteristic information of the user, if so, determining that the acquisition unit is a target acquisition unit, and determining that the authentication unit corresponding to the acquisition unit is a target authentication unit. For example, if the collection capability summary table of the multi-device group further includes location information of the device where the collection unit is located, it may also be considered whether the location information can support the collection unit to collect the authentication factor to confirm the collection device. For example, when the mobile phone is placed in a pocket of a user, the face acquisition unit (e.g., a camera) of the mobile phone cannot acquire face information of the user, and even if the face acquisition unit and the face authentication units of other devices in the mobile phone or the multi-device group are both available, the face acquisition unit of the mobile phone is not determined to be the target acquisition unit. However, when the mobile phone is placed in the pocket of the user, if the gait acquisition unit of the mobile phone and the gait authentication unit of the other devices in the mobile phone or the multi-device group are both available, the gait acquisition unit of the mobile phone can acquire the gait information of the user, so that the gait acquisition unit of the mobile phone can be confirmed as the target acquisition unit, and the gait authentication unit can be confirmed as the target authentication unit. The embodiment of the present application does not limit the specific content of the preset rule.
Optionally, in some embodiments, after S5802, the method may further include: the decision device is coordinated and confirmed among the using device, the acquisition device, the authentication device, and the decision device.
In particular, the decision device may be configured to process at least one authentication result obtained by the authentication device to obtain an aggregated result.
In some embodiments, the structure of any one device in a multi-device group may be as shown in fig. 56A-56D above. Any one device in the multi-device group can acquire the information of any one decision unit in the multi-device group after resource synchronization, and confirms the decision device according to a preset rule. Wherein, the preset rule may be but is not limited to: the device comprising the decision unit is a decision device, the device comprising the decision unit and having the capability of the decision unit stronger than the preset threshold value is a decision device, or the device comprising the decision unit and having the strongest capability is a decision device, and the like.
For example, it is assumed that the multi-device group includes the four devices shown in fig. 57, and the decision units included in the four devices are as shown in table 29 above. In this case, the preset rule may be that a device including a decision unit is the decision device. The following can then be obtained from the decision unit summary table shown in table 29: the fourth device, which includes a decision unit, is the decision device.
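As a rough illustration of the preset rules for confirming the decision device listed above, the following sketch assumes hypothetical capability scores (Table 29 itself is not reproduced here); a device that includes no decision unit is never selected.

```python
# Minimal sketch (assumed data, for illustration only): confirm the decision
# device either as a device whose decision-unit capability exceeds a preset
# threshold or, when no threshold is given, as the device with the strongest
# decision-unit capability.
def confirm_decision_device(decision_capabilities, threshold=None):
    """decision_capabilities: device name -> capability score, or None if the
    device does not include a decision unit."""
    candidates = {d: c for d, c in decision_capabilities.items() if c is not None}
    if not candidates:
        return None
    if threshold is None:
        return max(candidates, key=candidates.get)
    for device, capability in candidates.items():
        if capability > threshold:
            return device
    return None

# Example matching the case above: only the fourth device includes a decision
# unit (the capability score is an assumed value), so it is the decision device.
print(confirm_decision_device({"first": None, "second": None,
                               "third": None, "fourth": 1.0}))   # -> fourth
```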
S5804: the acquisition device acquires at least one authentication factor.
In some embodiments, the structure of any one device in a multi-device group may be as shown in fig. 56A-56D above. The acquisition device can acquire the corresponding authentication factor through a target acquisition unit of the acquisition device.
S5805: the authentication device authenticates the collected at least one authentication factor to obtain at least one authentication result.
In some embodiments, the structure of any one device in a multi-device group may be as shown in fig. 56A-56D above. The authentication device can authenticate the corresponding authentication factor through a target authentication unit of the authentication device to obtain a corresponding authentication result. The authentication factor authenticated by the target authentication unit is the authentication factor acquired by the target acquisition unit corresponding to the target authentication unit.
For example, it is assumed that the authentication factors coordinated and confirmed in S5803 are face information and gait information, the acquisition device is the second device, and the authentication device is the third device. The acquisition device may collect the face information through its face acquisition unit and collect the gait information through its gait acquisition unit. The authentication device may authenticate the face information collected by the face acquisition unit through the face authentication unit of the authentication device to obtain a corresponding face authentication result, and may authenticate the gait information collected by the gait acquisition unit through the gait authentication unit of the authentication device to obtain a corresponding gait authentication result.
In some embodiments, an orchestration device is included in the multi-device group, and the orchestration device may be used for comprehensively scheduling the acquisition device to perform S5804, the authentication device to perform S5805, and the decision device to perform S5806. Optionally, the orchestration device may send instruction information to the acquisition device, where the instruction information instructs the acquisition device to execute S5804 and send the collected authentication factor to the authentication device. Optionally, the orchestration device may send instruction information to the authentication device, where the instruction information instructs the authentication device to execute S5805 and send the obtained authentication result to the decision device. Optionally, the orchestration device may send instruction information to the decision device, where the instruction information instructs the decision device to perform S5806. Optionally, the orchestration device may send instruction information to at least one device in the multi-device group to cause the at least one device to perform one or more of S5801, S5802, S5803, and S5807.
In some embodiments, the acquisition device may send the collected authentication factors to an orchestration device in the multi-device group, and the orchestration device uniformly distributes the collected authentication factors to the corresponding authentication devices for authentication. The orchestration device may be any device in the multi-device group, and is optionally a device with stronger processing capability in the multi-device group. For example, assume that the orchestration device in the multi-device group is the using device. The acquisition device may send the collected authentication factor to the using device, and the using device sends the authentication factor collected by the acquisition device to the authentication device for authentication. The description of the orchestration device may specifically refer to the description of the orchestration device in fig. 61-68, and is not detailed here.
In some embodiments, the acquisition device may also acquire information of a target authentication unit corresponding to a target acquisition unit included in the acquisition device, thereby acquiring information of an authentication device in which the target authentication unit is located. Then, the collection device may send the collected authentication factor to the authentication device for authentication according to the obtained information of the authentication device. For example, the acquisition device may send the acquired authentication factor to the authentication device for authentication according to the address information of the authentication device after acquiring the address information of the authentication device.
S5806: The decision device obtains an aggregation result according to the at least one authentication result.
Specifically, the decision device may process the obtained at least one authentication result to obtain an aggregation result. The aggregation result may be used to characterize whether the user using the above-described using device (subsequently referred to as the target user) is legitimate. Whether the target user is legitimate may include, but is not limited to, at least one of the following situations: whether the target user is the owner of the using device, whether the target user is the owner of any device in the multi-device group other than the using device, and whether the target user is a legitimate user preset for the multi-device group.
The manners of obtaining the aggregation result include, but are not limited to, the following three cases.
In case one, when the at least one authentication result all indicates that the target user is legal, the obtained aggregation result indicates that the target user is legal.
In case two, when the number of authentication results that indicate the target user is legal among the at least one authentication result is greater than a preset legal threshold, the obtained aggregation result indicates that the target user is legal.
In case three, each of the at least one authentication result corresponds to a weight, and the weight may be determined according to the corresponding authentication factor. An aggregation result W is obtained based on each authentication result r_i and its corresponding weight w_i according to a preset calculation formula f, where i ranges over [1, N], i is a positive integer, and N is the total number of the at least one authentication result. The aggregation result W can be expressed as:
W = f(w_i, r_i)
When W is greater than or equal to a preset legal threshold, the obtained aggregation result indicates that the target user is legal; otherwise, when W is smaller than the preset legal threshold, the obtained aggregation result indicates that the target user is illegal.
For example, it is assumed that the authentication factors coordinated and confirmed in S5803 are face information and gait information, the acquisition device is the second device, and the authentication device is the third device. Suppose the weight corresponding to the face authentication factor is w_1 = 0.7, the weight corresponding to the gait authentication factor is w_2 = 0.3, the face authentication result obtained by the authentication device is r_1 = 0.8, the gait authentication result is r_2 = 0.6, and the preset legal threshold is 0.8. The calculation by which the decision device obtains the aggregation result W from the face authentication result and the gait authentication result may then be:
W = f(w_i, r_i) = w_1 × r_1 + w_2 × r_2 = 0.7 × 0.8 + 0.3 × 0.6 = 0.74
Since the aggregation result W = 0.74 is smaller than the preset legal threshold 0.8, the aggregation result W indicates that the target user is not legal.
In some embodiments, for different services, weights and preset legal thresholds corresponding to different authentication factors may be different, that is, may be determined according to an application scenario. For example, a payment transaction with a larger payment amount (e.g., greater than ten thousand dollars) may have a higher predetermined legal threshold (e.g., 0.9). A payment transaction with a smaller payment amount (e.g., less than one hundred dollars) may have a lower predetermined legal threshold (e.g., 0.7). Or the preset legal threshold corresponding to the business for checking the balance of the bank card is higher (such as 0.7). The preset legal threshold corresponding to the service for viewing the video viewing history is low (e.g. 0.5). Therefore, the preset legal threshold is flexibly adjusted to provide the identity authentication services with different business risk levels for the user, and the user experience is improved. In a specific implementation, the weights corresponding to the different authentication factors and the preset legal threshold may also be set manually by the user, which is not limited in this embodiment of the present application.
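The weighted aggregation of case three, together with a per-service legal threshold, can be summarized in the short sketch below. The function names and the per-service threshold table are illustrative assumptions; the weight and threshold values simply repeat the examples given above.

```python
# Minimal sketch of case three: W is a weighted sum of the authentication
# results and is compared against a preset legal threshold that may differ
# per service (all names and the threshold table are illustrative).
def aggregate(results, weights):
    # W = f(w_i, r_i); here f is the weighted sum used in the worked example above
    return sum(w * r for w, r in zip(weights, results))

def target_user_is_legal(results, weights, legal_threshold):
    return aggregate(results, weights) >= legal_threshold

# Worked example above: face (w1 = 0.7, r1 = 0.8), gait (w2 = 0.3, r2 = 0.6).
results, weights = [0.8, 0.6], [0.7, 0.3]
print(aggregate(results, weights))                  # 0.74
print(target_user_is_legal(results, weights, 0.8))  # False: target user not legal

# Illustrative per-service legal thresholds, as discussed above.
service_thresholds = {
    "large_payment": 0.9,       # payment amount greater than ten thousand
    "small_payment": 0.7,       # payment amount less than one hundred
    "check_card_balance": 0.7,
    "view_watch_history": 0.5,
}
print(target_user_is_legal(results, weights,
                           service_thresholds["view_watch_history"]))  # True
```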
In some embodiments, each authentication device in the multi-device group may uniformly report the obtained authentication result to one decision device for authentication decision. Specific examples can be found in the embodiment shown in fig. 59 below.
In some embodiments, there may be multiple decision devices in a multi-device group, and aggregation may be performed among the multiple decision devices in a decentralized manner, thereby relieving the processing pressure on a single decision device. Specific examples can be found in the embodiment shown in fig. 60 below.
S5807: The aggregation result is synchronized among the using device, the acquisition device, the authentication device, and the decision device.
Specifically, synchronization of aggregation results may be performed among a plurality of devices in a multi-device group. After the results are synchronized, any one device in the multi-device group can obtain and use the aggregated results.
In this embodiment of the application, when a user uses any one device in the multi-device group, the multiple devices in the multi-device group may perform continuous authentication of the user identity, that is, at least one of S5802-S5807 may be executed cyclically. Therefore, the aggregation result obtained and used by any one electronic device may be at least one aggregation result obtained by the multi-device group at different times. Optionally, the aggregation result obtained and used by any one electronic device may also be at least one aggregation result synchronized by the multi-device group at different times.
In the embodiment of the application, any one device in the multi-device group can query and obtain the at least one aggregation result obtained at different times, for example, but not limited to, by reading the at least one aggregation result from a memory, or by querying and/or downloading the at least one aggregation result from a connected cloud server. Any one device in the multi-device group may then display a corresponding user interface according to the obtained at least one aggregation result.
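The continuous-authentication loop and result query described above can be pictured with the following sketch. Every name in it is a placeholder standing in for the steps S5802-S5807 and is an assumption for illustration only; the sketch just shows that rounds are repeated over time and that the timestamped aggregation results can be queried later.

```python
# Minimal sketch (placeholder callables, for illustration only): the group
# repeats collect -> authenticate -> decide -> synchronize, and keeps the
# timestamped aggregation results so any device can query them later.
import time

history = []  # (timestamp, aggregation result) obtained at different times

def run_one_round(collect, authenticate, decide, synchronize):
    factors = collect()                               # S5804
    results = [authenticate(f) for f in factors]      # S5805
    aggregation_result = decide(results)              # S5806
    synchronize(aggregation_result)                   # S5807
    history.append((time.time(), aggregation_result))

def query_results():
    """Any device in the group may query the aggregation results, e.g. read
    from memory or from a connected cloud server."""
    return list(history)

# Example with trivial stand-ins for the collection/authentication/decision steps.
run_one_round(collect=lambda: ["face", "gait"],
              authenticate=lambda factor: 0.8 if factor == "face" else 0.6,
              decide=lambda rs: sum(rs) / len(rs),
              synchronize=lambda result: None)
print(query_results())
```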
In the method shown in fig. 58, a multi-device group collaborates to perform an authentication process of a user identity to obtain at least one aggregation result. The collection, authentication and authentication decision of the authentication factor are not limited to single equipment, so that the resource integration and comprehensive scheduling of the multi-equipment group are realized, and the usability is high. The aggregation result can be obtained according to a plurality of authentication results, namely, the continuous authentication of the user identity is carried out through multi-factor fusion, so that the safety and the reliability of the authentication result are effectively improved. And, the above-mentioned aggregation result can be synchronized to any one device in the multi-device group. Even if the equipment sensitive to power consumption does not directly participate in the user identity authentication process, the at least one aggregation result can be obtained, and the influence on the power consumption is greatly reduced.
Next, the decision process of S5806 of fig. 58 will be exemplarily described with reference to fig. 59 and 60.
Referring to fig. 59, fig. 59 is a flowchart illustrating a decision process according to an embodiment of the present disclosure. The method may be applied to the multi-device group described in fig. 1. Fig. 59 illustrates an example in which the fifth device is a decision device, and the sixth device and the seventh device are authentication devices. Note that fig. 59 shows only the fifth device, the sixth device, and the seventh device, but this does not mean that no other device is included in the multi-device group.
The decision process includes, but is not limited to, the following steps:
S5901: The sixth device sends the first authentication result to the fifth device.
S5902: The seventh device sends the second authentication result to the fifth device.
S5903: The fifth device obtains an aggregation result according to the first authentication result and the second authentication result.
Specifically, the sixth device and the seventh device may obtain address information of the fifth device, and then send the authentication result to the fifth device for authentication decision according to the obtained address information of the fifth device.
Referring to fig. 60, fig. 60 is a schematic flow chart illustrating another decision process according to an embodiment of the present disclosure. The method may be applied to the multi-device group described in fig. 1. Fig. 60 illustrates an example in which the eighth device, the ninth device, and the tenth device are all decision devices. Note that fig. 60 shows only the eighth device, the ninth device, and the tenth device, but this does not mean that no other device is included in the multi-device group.
The decision process includes, but is not limited to, the following steps:
S6001: The eighth device receives the third authentication result and the fourth authentication result.
S6002: the ninth device receives the fifth authentication result.
S6003: the tenth device receives the sixth authentication result.
S6004: The eighth device obtains a first aggregation result according to the third authentication result and the fourth authentication result.
S6005: The eighth device sends the first aggregation result to the ninth device.
S6006: The ninth device obtains a second aggregation result according to the first aggregation result and the fifth authentication result.
S6007: The ninth device transmits the second aggregation result to the tenth device.
S6008: The tenth device obtains a third aggregation result according to the second aggregation result and the sixth authentication result.
S6009: The eighth device, the ninth device, and the tenth device synchronize the third aggregation result.
Specifically, there may be a plurality of authentication devices in the multi-device group, and the plurality of decision devices may each receive the authentication result sent by the plurality of authentication devices. For example, the third authentication result and the fourth authentication result are transmitted from the first authentication device to the eighth device, the fifth authentication result is transmitted from the second authentication device to the ninth device, and the sixth authentication result is transmitted from the third authentication device to the tenth device.
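The chained, decentralized decision flow of fig. 60 can be pictured with the sketch below. How two partial results are combined is not specified by this embodiment, so the simple averaging used here is only an assumed placeholder; the point of the sketch is the chaining of decision devices (eighth, ninth, tenth) before the final result is synchronized.

```python
# Minimal sketch (assumed combine rule, for illustration only): each decision
# device folds the authentication results it received into the partial
# aggregation result handed along the chain.
def combine(partial, local_results):
    values = ([partial] if partial is not None else []) + local_results
    return sum(values) / len(values)   # placeholder: a plain average

def chained_decision(results_per_decision_device):
    """results_per_decision_device: one list of authentication results per
    decision device, in chain order (eighth, ninth, tenth, ...)."""
    aggregation_result = None
    for local_results in results_per_decision_device:
        aggregation_result = combine(aggregation_result, local_results)  # S6004/S6006/S6008
    return aggregation_result  # then synchronized among the decision devices (S6009)

# Eighth device received the third and fourth results, ninth the fifth,
# tenth the sixth (illustrative values).
print(chained_decision([[0.8, 0.7], [0.9], [0.6]]))
```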
Not limited to the examples shown in fig. 59 and fig. 60, in a specific implementation, each authentication device in the multi-device group may also send the obtained authentication result to an orchestration device in the multi-device group, and the orchestration device collects the authentication results and then reports them collectively to at least one decision device in the multi-device group for an authentication decision. This is not limited in this embodiment of the present application.
In the embodiment of the present application, the acquisition device and the authentication device that coordinate and confirm among a plurality of devices in a multi-device group may have a plurality of mapping relationships: one-to-one, one-to-many, many-to-one, many-to-many. See, in particular, the description of S5803 of fig. 58. When the mapping relationships are different, the authentication processes of the user identities cooperatively performed among the multiple devices in the multiple device group may be different. Specific examples are shown in fig. 61-68 below.
For convenience of description, fig. 61 to 68 illustrate an example in which a device for centralized scheduling (hereinafter referred to as an orchestration device) exists in the multi-device group. Specifically, the orchestration device may send instruction information to at least one device in the multi-device group to cause the at least one device to perform one or more of the steps shown in fig. 58. Optionally, the orchestration device may be configured to confirm the at least one authentication factor, and thereby confirm the acquisition device and the authentication device (e.g., perform S5803 of fig. 58). Optionally, the orchestration device may send instruction information to the acquisition device to cause the acquisition device to collect the authentication factor. Optionally, the orchestration device may send instruction information to the authentication device to cause the authentication device to authenticate the authentication factor. Optionally, the orchestration device may send instruction information to the decision device, so that the decision device processes the authentication results and obtains an aggregation result. The operations performed by the orchestration device may subsequently be referred to as orchestration operations.
Not limited to the above-listed orchestration operations, in a specific implementation, the orchestration device may be further configured to obtain the acquisition capabilities, the authentication capabilities, and the authentication decision capabilities of the multiple devices in the multi-device group, and then assemble and initiate the resource synchronization (e.g., perform S5802 of fig. 58). The embodiment of the present application does not limit this.
Terms such as orchestration device are likewise used only to distinguish the roles of the devices. A device may have multiple roles; for example, a device may be both an orchestration device and an acquisition device. Multiple devices may also have the same role; for example, multiple devices may all be orchestration devices.
First, an example of the user identity authentication process when the acquisition device and the authentication device coordinated and confirmed by the multi-device group are in a one-to-one mapping relationship is described, as specifically shown in fig. 61 and 62. In fig. 61 and 62, the case where the acquisition device and the authentication device are the same device, namely the eleventh device, is used as an example for description. The eleventh device may be an orchestration device in the multi-device group. The eleventh device may also be a decision device in the multi-device group.
It should be noted that, although fig. 61 and 62 only show the eleventh device, this does not mean that no other device is included in the multi-device group.
Referring to fig. 61, fig. 61 is a schematic flowchart of another authentication method for multi-device cooperation according to an embodiment of the present application. The method can be applied to the multi-device cooperative authentication system shown in fig. 1. The method may be applied to the eleventh apparatus. The method includes, but is not limited to, the steps of:
S6101: The eleventh device performs resource synchronization with other devices in the multi-device group.
S6102: the eleventh device validates the at least one authentication factor.
S6103: the eleventh device collects at least one authentication factor.
S6104: The eleventh device authenticates the collected at least one authentication factor to obtain at least one authentication result.
S6105: The eleventh device obtains an aggregation result according to the at least one authentication result.
S6106: The eleventh device synchronizes the aggregation result with other devices in the multi-device group.
Specifically, the execution processes of S6101 to S6106 are similar to S5802 to S5807 of fig. 58, and are not described again.
In some embodiments, before S6101, the method may further comprise: the eleventh device establishes a connection with other devices in the multi-device group. See, in particular, the description of S5801 of fig. 58.
In one possible implementation, the eleventh apparatus may be configured as shown in fig. 56A-56D above. Then, the embodiment shown in fig. 61 may also be embodied as the following flow shown in fig. 62.
Referring to fig. 62, fig. 62 is a schematic flowchart of another authentication method for multi-device cooperation according to an embodiment of the present application. It should be noted that fig. 62 illustrates an eleventh device including a resource management unit, a scheduling unit, a first acquisition unit, a second acquisition unit, a first authentication unit, a second authentication unit, and a decision unit as an example. And the first acquisition unit and the second acquisition unit are both target acquisition units, and the first authentication unit and the second authentication unit are both target authentication units. The first acquisition unit corresponds to the first authentication unit, and the second acquisition unit corresponds to the second authentication unit.
The method includes, but is not limited to, the steps of:
S6201: The first acquisition unit of the eleventh device sends the information of the first acquisition unit to the resource management unit of the eleventh device, thereby completing the registration operation.
S6202: The second acquisition unit of the eleventh device sends the information of the second acquisition unit to the resource management unit of the eleventh device, thereby completing the registration operation.
S6203: the first authentication unit of the eleventh device transmits information of the first authentication unit to the resource management unit of the eleventh device, thereby completing the registration operation.
S6204: the second authentication unit of the eleventh device transmits information of the second authentication unit to the resource management unit of the eleventh device, thereby completing the registration operation.
S6205: and resource synchronization of the acquisition unit and the authentication unit is performed between the resource management unit of the eleventh device and other devices in the multi-device group.
Specifically, S6101 of FIG. 61 may include S6201-S6205. The order of S6201, S6202, S6203, and S6204 is not limited.
S6206: the scheduling unit of the eleventh device acquires the information after resource synchronization from the resource management unit of the eleventh device.
S6207: the scheduling unit of the eleventh device determines at least one authentication factor based on the obtained resource-synchronized information.
Specifically, S6102 of FIG. 61 may include S6206-S6207.
S6208: The scheduling unit of the eleventh device instructs the first acquisition unit of the eleventh device to acquire the corresponding authentication factor.
S6209: The scheduling unit of the eleventh device instructs the second acquisition unit of the eleventh device to acquire the corresponding authentication factor.
Specifically, S6103 of FIG. 61 may include S6208-S6209. It should be noted that the order of S6208 and S6209 is not limited.
S6210: the scheduling unit of the eleventh device instructs the first authentication unit of the eleventh device to authenticate the authentication factor collected by the first collection unit of the eleventh device to obtain a corresponding authentication result.
S6211: The scheduling unit of the eleventh device instructs the second authentication unit of the eleventh device to authenticate the authentication factor collected by the second acquisition unit of the eleventh device, so as to obtain a corresponding authentication result.
Specifically, S6104 of FIG. 61 can include S6210-S6211. The order of S6210 and S6211 is not limited.
Optionally, in this embodiment of the application, before S6210, the first acquiring unit and the second acquiring unit may send the authentication factors acquired by the first acquiring unit and the second acquiring unit to the scheduling unit, and then the scheduling unit distributes the authentication factors to the corresponding first authentication unit and the corresponding second authentication unit for authentication.
Optionally, in some embodiments, before S6210, the first acquisition unit and the second acquisition unit may also send the collected authentication factors to the corresponding first authentication unit and second authentication unit for authentication. Thus, after S6208 and before S6210, the method may further include: the first acquisition unit of the eleventh device sends the authentication factor collected by the first acquisition unit to the first authentication unit of the eleventh device. After S6209 and before S6211, the method may further include: the second acquisition unit of the eleventh device sends the authentication factor collected by the second acquisition unit to the second authentication unit of the eleventh device.
S6212: the decision unit of the eleventh device obtains an aggregation result according to the authentication result obtained by the first authentication unit of the eleventh device and the authentication result obtained by the second authentication unit of the eleventh device.
Specifically, S6105 of fig. 61 may include S6212. In some embodiments, before S6212, the first authentication unit and the second authentication unit may send the authentication results obtained by the first authentication unit and the second authentication unit to the scheduling unit, and the scheduling unit may report the authentication results to the decision unit for decision making. In some embodiments, before S6212, the first authentication unit and the second authentication unit may also send the obtained authentication results to the decision unit for decision making.
S6213: The decision unit of the eleventh device synchronizes the aggregation result with other devices in the multi-device group.
Specifically, S6106 of fig. 61 may include S6213.
Optionally, in some embodiments, before S6201, the method may further include: the eleventh device establishes a connection with the other devices in the multi-device group. See, in particular, the description of S5801 of fig. 58.
Exemplarily, the eleventh device is a smart phone and is a device in use by a user. The first acquisition unit is a face acquisition unit, and the second acquisition unit is a touch screen behavior acquisition unit. The first authentication unit is a face authentication unit, and the second authentication unit is a touch screen behavior authentication unit. In the embodiment shown in fig. 62, when the user uses the eleventh device, the eleventh device acquires face information through the face acquisition unit of the eleventh device, and determines whether the acquired face information is changed or legal through the face authentication unit of the eleventh device, so as to obtain a face authentication result. And the eleventh device acquires the touch screen behavior information through a touch screen behavior acquisition unit of the eleventh device, and judges whether the acquired touch screen behavior information changes or is legal through a touch screen behavior authentication unit of the eleventh device to obtain a touch screen behavior authentication result. Then, the decision unit of the eleventh device may obtain an aggregation result according to the face authentication result and the touch screen behavior authentication result. The aggregation result may be used to indicate whether the user using the eleventh device at this time is legitimate.
In the methods shown in fig. 61 and 62, the above-described aggregation result may be a result obtained from a plurality of authentication results. That is, the present application can realize authentication of the user identity through multi-factor fusion, which alleviates the low reliability of authentication results brought by single-factor authentication and greatly improves security. Moreover, the aggregation result can be synchronized to any one device in the multi-device group, so even a device sensitive to power consumption that does not directly participate in the user identity authentication process can obtain the at least one aggregation result, which reduces the impact on power consumption.
Next, an example of the user identity authentication process when the acquisition device and the authentication devices coordinated and confirmed among the multiple devices in the multi-device group are in a one-to-many mapping relationship is described, as specifically shown in fig. 63 and 64. In fig. 63 and 64, one acquisition device and two authentication devices are taken as an example: the twelfth device is the acquisition device, and the twelfth device and the thirteenth device are the authentication devices. The twelfth device may also be an orchestration device in the multi-device group. The thirteenth device may also be a decision device in the multi-device group.
It should be noted that, although fig. 63 and fig. 64 only show the twelfth device and the thirteenth device, this does not mean that no other device is included in the multi-device group.
Referring to fig. 63, fig. 63 is a flowchart illustrating a further multi-device cooperative authentication method according to an embodiment of the present application. The method can be applied to the multi-device cooperative authentication system shown in fig. 1. The method includes, but is not limited to, the steps of:
S6301: The twelfth device and the thirteenth device perform resource synchronization.
S6302: the twelfth device validates the at least one authentication factor.
S6303: the twelfth device collects at least one authentication factor.
Specifically, the execution process of S6301-S6303 is similar to that of S5802-S5804 of fig. 58, and is not described again.
S6304: the twelfth device transmits, to the thirteenth device, the authentication factor collected by the twelfth device for authentication of the thirteenth device.
S6305: The twelfth device authenticates the authentication factor that is collected by the twelfth device and is to be authenticated by the twelfth device, to obtain a corresponding authentication result.
S6306: The thirteenth device authenticates the authentication factor that is collected by the twelfth device and is to be authenticated by the thirteenth device, to obtain a corresponding authentication result.
Specifically, the execution process of S6305-S6306 is similar to S5805 of fig. 58 and will not be described again.
The sequence of S6304 and S6305 is not limited, and the sequence of S6305 and S6306 is not limited. But S6304 precedes S6306.
S6307: the twelfth device transmits the authentication result obtained by the twelfth device to the thirteenth device.
S6308: the thirteenth device obtains the aggregation result according to at least one authentication result obtained by the twelfth device and the thirteenth device.
S6309: The twelfth device and the thirteenth device perform synchronization of the aggregation result.
Specifically, the execution process of S6308-S6309 is similar to that of S5806-S5807 of fig. 58, and is not described in detail.
In some embodiments, prior to S6301, the method may further comprise: a connection is established between a plurality of devices in a multi-device group. See, in particular, the description of S5801 of fig. 58.
In one possible implementation, the structures of the twelfth device and the thirteenth device may be as shown in fig. 56A to 56D above. The embodiment shown in fig. 63 can then also be as shown in fig. 64 below.
Referring to fig. 64, fig. 64 is a schematic flowchart of another authentication method for multi-device cooperation according to an embodiment of the present application. In fig. 64, the twelfth device includes a resource management unit, a scheduling unit, a third acquisition unit, a fourth acquisition unit, and a third authentication unit. The thirteenth device includes a resource management unit, a fourth authentication unit, and a decision unit. And the third acquisition unit and the fourth acquisition unit are both target acquisition units, and the third authentication unit and the fourth authentication unit are both target authentication units. The third acquisition unit corresponds to the third authentication unit, and the fourth acquisition unit corresponds to the fourth authentication unit. The method includes, but is not limited to, the steps of:
S6401: the twelfth device acquires the acquisition unit and the authentication unit available to the twelfth device itself.
The step may specifically include the steps of:
Step M1: The third acquisition unit of the twelfth device sends the information of the third acquisition unit to the resource management unit of the twelfth device, thereby completing the registration operation.
Step M2: The fourth acquisition unit of the twelfth device sends the information of the fourth acquisition unit to the resource management unit of the twelfth device, thereby completing the registration operation.
Step M3: the third authentication unit of the twelfth device sends the information of the third authentication unit to the resource management unit of the twelfth device, thereby completing the registration operation.
S6402: the thirteenth electronic device acquires an authentication unit available to the thirteenth electronic device itself.
Specifically, the fourth authentication unit of the thirteenth device transmits information of the fourth authentication unit to the resource management unit of the thirteenth device, thereby completing the registration operation.
S6403: Resource synchronization is performed between the twelfth device and the thirteenth device.
Specifically, resource synchronization of the acquisition unit and the authentication unit is performed between the resource management unit of the twelfth device and the resource management unit of the thirteenth device.
Specifically, S6301 of FIG. 63 may include S6401-S6403. The order of S6401 and S6402 is not limited.
S6404: the twelfth device determines at least one authentication factor, collects the corresponding authentication factor, and authenticates the collected authentication factor to obtain a corresponding first authentication result.
Specifically, this step S6404 includes performing actions of:
step N1: the scheduling unit of the twelfth device acquires the information after resource synchronization from the resource management unit of the twelfth device.
Step N2: The scheduling unit of the twelfth device confirms at least one authentication factor based on the obtained resource-synchronized information.
Step N3: The scheduling unit of the twelfth device instructs the third acquisition unit of the twelfth device to acquire the corresponding authentication factor.
Step N4: The scheduling unit of the twelfth device instructs the fourth acquisition unit of the twelfth device to acquire the corresponding authentication factor.
The order of step N3 and step N4 is not limited.
Step N5: The scheduling unit of the twelfth device instructs the third authentication unit of the twelfth device to authenticate the authentication factor collected by the third acquisition unit of the twelfth device, so as to obtain a corresponding authentication result.
S6405: the twelfth device instructs the thirteenth device to authenticate.
Specifically, the scheduling unit of the twelfth device instructs the fourth authentication unit of the thirteenth device to authenticate the authentication factor collected by the fourth collection unit of the twelfth device, so as to obtain a corresponding authentication result.
Specifically, before S6405, the third acquisition unit and the fourth acquisition unit of the twelfth device may send the authentication factors they collect to the scheduling unit of the twelfth device, and the scheduling unit of the twelfth device distributes the authentication factors to the corresponding third authentication unit and fourth authentication unit for authentication.
In some embodiments, the third acquiring unit and the fourth acquiring unit may also send the acquired authentication factors to the corresponding third authenticating unit and fourth authenticating unit for authentication. Thus, the method may further comprise: and the third acquisition unit sends the authentication factor acquired by the third acquisition unit to the third authentication unit. The method may further comprise: and the fourth acquisition unit sends the authentication factor acquired by the fourth acquisition unit to the fourth authentication unit.
S6406: The thirteenth device authenticates the collected authentication factor to obtain a corresponding second authentication result.
S6407: The thirteenth device aggregates the first authentication result and the second authentication result to obtain an aggregation result.
Specifically, the decision unit of the thirteenth device obtains the aggregation result according to the authentication result obtained by the third authentication unit of the twelfth device and the authentication result obtained by the fourth authentication unit of the thirteenth device.
In some embodiments, the third authentication unit may send the obtained authentication result to the scheduling unit of the twelfth device, and the scheduling unit of the twelfth device reports the authentication result to the decision unit of the thirteenth device for a decision. Optionally, the third authentication unit may also send the obtained authentication result directly to the decision unit of the thirteenth device for a decision.
After obtaining the aggregation result, the decision unit of the thirteenth device synchronizes the aggregation result with the other devices in the multi-device group.
In some embodiments, the method may further comprise: a plurality of devices in a multi-device group establish a connection.
Exemplarily, the twelfth device is a smart band, and the user wears the smart band. The thirteenth device is a smartphone. The third acquisition unit is a heart rate acquisition unit, and the fourth acquisition unit is a gait acquisition unit. The third authentication unit is a heart rate authentication unit, and the fourth authentication unit is a gait authentication unit. In the embodiment shown in fig. 64, the twelfth device acquires the heart rate information through the heart rate acquisition unit of the twelfth device, and determines whether the acquired heart rate information is changed or legal through the heart rate authentication unit of the twelfth device, so as to obtain a heart rate authentication result. And the twelfth device acquires the gait information through the gait acquisition unit of the twelfth device, and the thirteenth device obtains the gait authentication result through the gait authentication unit of the thirteenth device according to the gait information acquired by the twelfth device. Then, the decision unit of the thirteenth device may obtain an aggregated result according to the heart rate authentication result and the gait authentication result. The aggregation result may be used to indicate whether the user wearing the twelfth device at this time is legitimate.
The gait authentication result obtained by the smartphone may be a comprehensive training result obtained by combining at least one of the following: historical gait information obtained by the smartphone from a memory, a connected cloud server, or other connected wearable devices; gait information collected by other wearable devices connected to the smartphone that do not belong to the multi-device group; or gait information obtained by the smartphone in other manners. That is to say, the authentication samples used by an authentication device in the multi-device group to obtain an authentication result may come from more than a single acquisition device, and the authentication device may implement identity authentication through a comprehensive training mode in which authentication samples are pooled. This provides high usability and also improves the reliability of the authentication result.
In the methods shown in fig. 63 and 64, the authentication device used to authenticate the authentication factor is not limited to a single device. And, any one of the authentication results may be a comprehensive training result obtained by authentication samples from different devices. According to the embodiment of the application, the authentication of the user identity is realized through a multi-factor fusion mode and a comprehensive training mode for gathering the authentication samples, and the safety and the reliability of the authentication result are greatly improved. Moreover, the aggregation result can be synchronously sent to any one device in the multi-device group, and even if the device sensitive to power consumption does not directly participate in the user identity authentication process, at least one aggregation result can be obtained, so that the influence on the power consumption is greatly reduced.
Next, an example of the user identity authentication process when the acquisition devices and the authentication device coordinated and confirmed in the multi-device group are in a many-to-one mapping relationship is described, as specifically shown in fig. 65 and fig. 66. In fig. 65 and 66, two acquisition devices and one authentication device are taken as an example: the fourteenth device is the authentication device, and the fifteenth device and the sixteenth device are the acquisition devices. The fourteenth device may also be an orchestration device in the multi-device group. The fourteenth device may also be a decision device in the multi-device group.
It should be noted that, although fig. 65 and fig. 66 only show the fourteenth device, the fifteenth device, and the sixteenth device, this does not mean that no other device is included in the multi-device group.
Referring to fig. 65, fig. 65 is a schematic flowchart of another authentication method for multi-device cooperation according to an embodiment of the present application. The method can be applied to the multi-device cooperative authentication system shown in fig. 1. The method includes, but is not limited to, the steps of:
S6501: The fourteenth device, the fifteenth device, and the sixteenth device perform resource synchronization.
S6502: the fourteenth device validates the at least one authentication factor.
Specifically, the execution process of S6501-S6502 is similar to that of S5802-S5803 of fig. 58, and is not described in detail.
S6503: the fifteenth device collects an authentication factor.
S6504: the sixteenth device collects an authentication factor.
Specifically, S6503-S6504 are similar to S5804 of fig. 58 and will not be described in detail. The order of S6503 and S6504 is not limited.
S6505: The fifteenth device sends the authentication factor collected by the fifteenth device to the fourteenth device.
S6506: The sixteenth device sends the authentication factor collected by the sixteenth device to the fourteenth device.
The order of S6505 and S6506 is not limited.
S6507: The fourteenth device authenticates the at least one authentication factor collected by the fifteenth device and the sixteenth device to obtain at least one authentication result.
Specifically, S6507 is similar to S5805 of fig. 58 and is not described in detail.
S6508: the fourteenth device obtains an aggregation result according to the at least one authentication result.
S6509: the fourteenth device, the fifteenth device, and the sixteenth device perform synchronization of the aggregation result.
Specifically, the execution process of S6508-S6509 is similar to that of S5806-S5807 of fig. 58, and is not described in detail.
In some embodiments, prior to S6501, the method may further comprise: a connection is established between a plurality of devices in a multi-device group. See, in particular, the description of S5801 of fig. 58.
In one possible implementation, the structures of the fourteenth device, the fifteenth device, and the sixteenth device may be as shown in fig. 56A-56D above. The embodiment shown in fig. 65 may then also be as shown in fig. 66 below.
Referring to fig. 66, fig. 66 is a schematic flowchart of another authentication method for multi-device cooperation according to an embodiment of the present application. In fig. 66, the fourteenth device includes a resource management unit, a scheduling unit, a fifth authentication unit, a sixth authentication unit, and a decision unit. The fifteenth device includes a resource management unit and a fifth acquisition unit. The sixteenth device includes a resource management unit and a sixth acquisition unit. And the fifth acquisition unit and the sixth acquisition unit are both target acquisition units, and the fifth authentication unit and the sixth authentication unit are both target authentication units. The fifth acquisition unit corresponds to the fifth authentication unit, and the sixth acquisition unit corresponds to the sixth authentication unit.
The method includes, but is not limited to, the steps of:
S6601: The fourteenth device acquires the authentication unit usable by the fourteenth device itself.
Specifically, the fifth authentication unit of the fourteenth device transmits information of the fifth authentication unit to the resource management unit of the fourteenth device, thereby completing the registration operation. The sixth authentication unit of the fourteenth device transmits information of the sixth authentication unit to the resource management unit of the fourteenth device, thereby completing the registration operation.
S6602: the fifteenth device acquires an acquisition unit available to the fifteenth device itself.
Specifically, the fifth acquisition unit of the fifteenth device sends information of the fifth acquisition unit to the resource management unit of the fifteenth device, thereby completing the registration operation.
S6603: the sixteenth device acquires an acquisition unit available to the sixteenth device itself.
S6604: the fourteenth device, the fifteenth device, and the sixteenth device perform resource synchronization.
Specifically, the sixth acquisition unit of the sixteenth device sends the information of the sixth acquisition unit to the resource management unit of the sixteenth device, thereby completing the registration operation. And resource synchronization of the acquisition unit and the authentication unit is performed among the resource management unit of the fourteenth device, the resource management unit of the fifteenth device and the resource management unit of the sixteenth device. The scheduling unit of the fourteenth device acquires the information after resource synchronization from the resource management unit of the fourteenth device.
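As an illustration of the registration and resource synchronization steps S6601-S6604, the following minimal Python sketch assumes that each unit is described by a small record and that synchronization simply merges the records registered on every device; the data layout and names are illustrative assumptions.

    class ResourceManager:
        def __init__(self, device_name):
            self.device_name = device_name
            self.local_units = []               # units registered on this device

        def register(self, unit_kind, unit_name):
            # e.g. register("acquisition", "fifth acquisition unit")
            self.local_units.append({"device": self.device_name,
                                     "kind": unit_kind,
                                     "unit": unit_name})

    def synchronize(managers):
        # After synchronization every resource manager can expose the union of all
        # registered units; this merged view is what the scheduling unit reads.
        return [unit for manager in managers for unit in manager.local_units]

    rm14, rm15, rm16 = (ResourceManager(n) for n in ("fourteenth", "fifteenth", "sixteenth"))
    rm14.register("authentication", "fifth authentication unit")
    rm14.register("authentication", "sixth authentication unit")
    rm15.register("acquisition", "fifth acquisition unit")
    rm16.register("acquisition", "sixth acquisition unit")
    synced_view = synchronize([rm14, rm15, rm16])
    print(len(synced_view))                     # 4 units visible to the scheduling unit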
S6605: the fourteenth device determines at least one authentication factor, such as the first authentication factor and the second authentication factor.
Specifically, the scheduling unit of the fourteenth device determines at least one authentication factor based on the obtained information after resource synchronization.
S6606: the fourteenth device instructs the fifteenth device to collect the first authentication factor.
Specifically, the scheduling unit of the fourteenth device instructs the fifth collecting unit of the fifteenth device to collect the corresponding authentication factor.
S6607, the fourteenth device instructs the sixteenth device to collect the second authentication factor.
Specifically, the scheduling unit of the fourteenth device instructs the sixth acquisition unit of the sixteenth device to acquire the corresponding authentication factor.
S6608, the fifteenth device sends the first authentication factor to the fourteenth device.
S6609, the sixteenth device sends the second authentication factor to the fourteenth device.
S6610: the fourteenth equipment authenticates the collected first authentication factor to obtain a corresponding first authentication result; and authenticating the collected second authentication factor to obtain a second authentication result.
Specifically, the scheduling unit of the fourteenth device instructs the fifth authentication unit of the fourteenth device to authenticate the authentication factor collected by the fifth collection unit of the fifteenth device, so as to obtain a corresponding authentication result.
The scheduling unit of the fourteenth device instructs the sixth authentication unit of the fourteenth device to authenticate the authentication factor collected by the sixth collection unit of the sixteenth device, so as to obtain a corresponding authentication result.
Specifically, the fifth acquiring unit and the sixth acquiring unit may send the authentication factors acquired by the fifth acquiring unit and the sixth acquiring unit to the scheduling unit of the fourteenth device, and then the scheduling unit of the fourteenth device distributes the authentication factors to the corresponding fifth authenticating unit and the corresponding sixth authenticating unit for authentication.
In some embodiments, the fifth acquiring unit and the sixth acquiring unit may also send the acquired authentication factors to the corresponding fifth authenticating unit and the corresponding sixth authenticating unit for authentication. Thus, the method may further comprise: the fifth acquiring unit of the fifteenth device sends the authentication factor acquired by the fifth acquiring unit to the fifth authenticating unit of the fourteenth device. The method may further comprise: and the sixth acquisition unit sends the authentication factor acquired by the sixth acquisition unit to the sixth authentication unit.
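The two dispatch options described above can be illustrated with the following minimal Python sketch, in which the correspondence between acquisition units and authentication units is expressed as a table; the table contents and function names are illustrative assumptions.

    # Correspondence between acquisition units and authentication units (assumed).
    CORRESPONDENCE = {
        "fifth acquisition unit": "fifth authentication unit",
        "sixth acquisition unit": "sixth authentication unit",
    }

    def dispatch_via_scheduler(collected):
        """collected: list of (acquisition_unit_name, factor) pairs routed by the scheduler."""
        routed = {}
        for unit, factor in collected:
            routed[CORRESPONDENCE[unit]] = factor   # scheduling unit picks the peer unit
        return routed

    def dispatch_directly(unit, factor):
        # Direct path: the acquisition unit already knows its peer authentication unit.
        return CORRESPONDENCE[unit], factor

    print(dispatch_via_scheduler([("fifth acquisition unit", "face-frame"),
                                  ("sixth acquisition unit", "heart-rate-series")]))
    print(dispatch_directly("fifth acquisition unit", "face-frame"))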
S6611: and the fourteenth equipment carries out aggregation on the first authentication result and the second authentication result to obtain an aggregation result.
And the decision unit of the fourteenth device obtains an aggregation result according to the authentication result obtained by the fifth authentication unit and the authentication result obtained by the sixth authentication unit.
The method further comprises the following steps: and synchronizing the aggregation result between the decision unit of the fourteenth device and other devices in the multi-device group.
In some embodiments, the method may further comprise: a plurality of devices in a multi-device group establish a connection.
Illustratively, the fourteenth device is an intelligent large-screen device (e.g., a monitoring large-screen device), the fifteenth device is an intelligent camera, and the sixteenth device is an intelligent watch. The fifth acquisition unit is a face acquisition unit, and the sixth acquisition unit is a heart rate acquisition unit. The fifth authentication unit is a face authentication unit, and the sixth authentication unit is a heart rate authentication unit. In the embodiment shown in fig. 66, the fifteenth device acquires face information through the face acquisition unit of the fifteenth device, and the fourteenth device determines, through the face authentication unit of the fourteenth device, whether the face information acquired by the fifteenth device has been altered and whether it is legitimate, so as to obtain a face authentication result. The sixteenth device acquires heart rate information through the heart rate acquisition unit of the sixteenth device, and the fourteenth device determines, through the heart rate authentication unit of the fourteenth device, whether the heart rate information acquired by the sixteenth device has been altered and whether it is legitimate, so as to obtain a heart rate authentication result. Then, the decision unit of the fourteenth device may obtain an aggregation result according to the face authentication result and the heart rate authentication result. The aggregation result may be used to indicate whether the user using the fourteenth device is legal at this time.
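The decision step in this example can be illustrated with the following minimal Python sketch; the fusion rule in which both authentication results must pass is an illustrative assumption, since the embodiment does not limit the aggregation policy.

    def aggregate(face_result: bool, heart_rate_result: bool) -> bool:
        # True means "the user currently using the device is legal".
        # Assumed rule: both factors must pass; the embodiment leaves the policy open.
        return face_result and heart_rate_result

    print(aggregate(face_result=True, heart_rate_result=True))    # legal
    print(aggregate(face_result=True, heart_rate_result=False))   # not legal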
In the methods shown in fig. 65 and 66, the acquisition device for acquiring the authentication factor is not limited to a single device. The embodiment of the application can comprehensively schedule the acquisition capacity of a plurality of devices in the multi-device group, and has high availability. Moreover, the aggregation result can be obtained according to a plurality of authentication results, namely, the authentication of the user identity is realized in a multi-factor fusion mode, so that the safety and the reliability of the authentication result are greatly improved. Moreover, the aggregation result can be synchronously sent to any one device in the multi-device group, and even if the device sensitive to power consumption does not directly participate in the user identity authentication process, at least one aggregation result can be obtained, so that the influence on the power consumption is greatly reduced.
Next, an example of the user identity authentication process is described for the case where the acquisition devices and the authentication devices confirmed through coordination among the multiple devices in the multi-device group are in a many-to-many mapping relationship, as specifically shown in fig. 67 and 68. In fig. 67 and 68, two acquisition devices and two authentication devices are illustrated as an example. The eighteenth device and the nineteenth device are acquisition devices, and the twentieth device and the twenty-first device are authentication devices. The seventeenth device may be a coordinating device in the multi-device group, and may also be a decision device in the multi-device group.
It should be noted that although fig. 67 shows only the seventeenth device, the eighteenth device, the nineteenth device, the twentieth device, and the twenty-first device, this does not mean that no other device is included in the multi-device group.
Referring to fig. 67, fig. 67 is a schematic flowchart of another multi-device cooperative authentication method according to an embodiment of the present application. The method can be applied to the multi-device cooperative authentication system shown in fig. 1. The method includes, but is not limited to, the steps of:
s6701: the seventeenth device, the eighteenth device, the nineteenth device, the twentieth device, and the twenty-first device perform resource synchronization.
S6702: the seventeenth device validates at least one authentication factor.
Specifically, the execution process of S6701-S6702 is similar to that of S5802-S5803 of fig. 58, and is not described in detail.
S6703: the eighteenth device collects the authentication factor.
S6704: the nineteenth device collects the authentication factor.
Specifically, S6703-S6704 are similar to S5804 of fig. 58 and will not be described in detail. The order of S6703 and S6704 is not limited.
S6705: and the eighteenth device sends the authentication factor collected by the eighteenth device to the twentieth device.
S6706: and the nineteenth device sends the authentication factor collected by the nineteenth device to the twenty-first device.
S6707: and the twentieth equipment authenticates the authentication factor acquired by the eighteenth equipment to obtain a corresponding authentication result.
S6708: and the twenty-first equipment authenticates the authentication factor acquired by the nineteenth equipment to obtain a corresponding authentication result.
Specifically, S6707-S6708 are similar to S5805 of fig. 58 and will not be described in detail. The order of S6705 and S6706 is not limited. The order of S6707 and S6708 is not limited.
S6709: the twentieth device transmits the authentication result obtained by the twentieth device to the seventeenth device.
S6710: and the twenty-first device sends the authentication result obtained by the twenty-first device to the seventeenth device.
The order of S6709 and S6710 is not limited.
S6711: and the seventeenth device obtains an aggregation result according to at least one authentication result obtained by the twentieth device and the twenty-first device.
S6712: and the seventeenth device, the eighteenth device, the nineteenth device, the twentieth device and the twenty-first device synchronize the aggregation results.
Specifically, the execution process of S6711-S6712 is similar to S5806-S5807 of fig. 58 and will not be described again.
In some embodiments, prior to S6701, the method may further comprise: a connection is established between a plurality of devices in a multi-device group. See, in particular, the description of S5801 of fig. 58.
In one possible implementation, the structures of the seventeenth device, the eighteenth device, the nineteenth device, the twentieth device, and the twenty-first device may be as shown in fig. 56A-56D above. Then, the embodiment shown in fig. 67 may also be as shown in fig. 68 below.
Referring to fig. 68, fig. 68 is a schematic flowchart of another authentication method for multi-device cooperation according to an embodiment of the present application. Note that, in fig. 68, the seventeenth device includes a resource management unit, a scheduling unit, and a decision unit. The eighteenth device includes a resource management unit and a seventh acquisition unit. The nineteenth device includes a resource management unit and an eighth acquisition unit. The twentieth device includes a resource management unit and a seventh authentication unit. The twenty-first device includes a resource management unit and an eighth authentication unit. And the seventh acquisition unit and the eighth acquisition unit are both target acquisition units, and the seventh authentication unit and the eighth authentication unit are both target authentication units. The seventh acquisition unit corresponds to the seventh authentication unit, and the eighth acquisition unit corresponds to the eighth authentication unit.
The method includes, but is not limited to, the steps of:
s6801: the eighteenth device acquires an acquisition unit available to the eighteenth device itself.
Specifically, the seventh acquisition unit of the eighteenth device sends the information of the seventh acquisition unit to the resource management unit of the eighteenth device, thereby completing the registration operation.
S6802: the nineteenth device acquires the acquisition units available to the nineteenth device itself.
Specifically, the eighth acquisition unit of the nineteenth device sends information of the eighth acquisition unit to the resource management unit of the nineteenth device, thereby completing the registration operation.
S6803: the twentieth device acquires the acquisition units available to the twentieth device itself.
Specifically, the seventh authentication unit of the twentieth device transmits information of the seventh authentication unit to the resource management unit of the twentieth device, thereby completing the registration operation.
S6804: the twenty-first device obtains an acquisition unit available to the twenty-first device itself.
Specifically, the eighth authentication unit of the twenty-first device sends information of the eighth authentication unit to the resource management unit of the twenty-first device, thereby completing the registration operation.
S6805: the seventeenth device, the eighteenth device, the nineteenth device, the twentieth device, and the twenty-first device perform resource synchronization.
Specifically, resource synchronization of the acquisition unit and the authentication unit is performed between the resource management unit of the seventeenth device, the resource management unit of the eighteenth device, the resource management unit of the nineteenth device, the resource management unit of the twentieth device, and the resource management unit of the twenty-first device. The scheduling unit of the seventeenth device acquires the information after resource synchronization from the resource management unit of the seventeenth device.
S6806: the seventeenth device validates at least one authentication factor.
Specifically, the scheduling unit of the seventeenth device determines at least one authentication factor based on the obtained information after resource synchronization.
S6807: the seventeenth device instructs the eighteenth device to collect the first authentication factor.
Specifically, the scheduling unit of the seventeenth device instructs the seventh acquiring unit of the eighteenth device to acquire the corresponding authentication factor.
S6808: the seventeenth device instructs the nineteenth device to collect the second authentication factor.
Specifically, the scheduling unit of the seventeenth device instructs the eighth acquiring unit of the nineteenth device to acquire the corresponding authentication factor.
S6809: the eighteenth device transmits the first authentication factor to the twentieth device.
Specifically, the scheduling unit of the seventeenth device instructs the seventh authentication unit of the twentieth device to authenticate the authentication factor acquired by the seventh acquisition unit of the eighteenth device, so as to obtain a corresponding authentication result.
S6810: the nineteenth device transmits the second authentication factor to the twenty-first device.
Specifically, the scheduling unit of the seventeenth device instructs the eighth authentication unit of the twenty-first device to authenticate the authentication factor acquired by the eighth acquisition unit of the nineteenth device, so as to obtain a corresponding authentication result.
Specifically, the seventh acquisition unit and the eighth acquisition unit may send the authentication factors acquired by the seventh acquisition unit and the eighth acquisition unit to the scheduling unit of the seventeenth device, and the scheduling unit of the seventeenth device distributes the authentication factors to the corresponding seventh authentication unit and eighth authentication unit for authentication.
In some embodiments, the seventh acquisition unit and the eighth acquisition unit may also send the acquired authentication factors to the corresponding seventh authentication unit and eighth authentication unit for authentication. Thus, the method may further comprise: the seventh acquisition unit sends the authentication factor acquired by the seventh acquisition unit to the seventh authentication unit. The method may further comprise: the eighth acquisition unit sends the authentication factor acquired by the eighth acquisition unit to the eighth authentication unit.
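The routing in this many-to-many case can be illustrated with the following minimal Python sketch, in which the scheduling unit of the seventeenth device decides which authenticating device receives each collected factor; the routing table and message format are illustrative assumptions.

    # (collecting device, acquisition unit) -> (authenticating device, authentication unit);
    # the table itself is an illustrative assumption.
    ROUTING = {
        ("eighteenth device", "seventh acquisition unit"):
            ("twentieth device", "seventh authentication unit"),
        ("nineteenth device", "eighth acquisition unit"):
            ("twenty-first device", "eighth authentication unit"),
    }

    def route_factor(collecting_device, acquisition_unit, factor):
        target_device, target_unit = ROUTING[(collecting_device, acquisition_unit)]
        # In a real system this would be a message sent over the established connection.
        return {"send_to": target_device, "unit": target_unit, "factor": factor}

    print(route_factor("eighteenth device", "seventh acquisition unit", "face-frame"))
    print(route_factor("nineteenth device", "eighth acquisition unit", "heart-rate-series"))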
S6811: The twentieth device authenticates the collected first authentication factor to obtain a corresponding first authentication result.
S6812: The twenty-first device authenticates the collected second authentication factor to obtain a corresponding second authentication result.
S6813: The twentieth device sends the first authentication result to the seventeenth device.
S6814: The twenty-first device sends the second authentication result to the seventeenth device.
S6815: The seventeenth device aggregates the first authentication result and the second authentication result to obtain an aggregation result.
Specifically, the decision unit of the seventeenth device obtains the aggregation result according to the authentication result obtained by the seventh authentication unit of the twentieth device and the authentication result obtained by the eighth authentication unit of the twenty-first device.
In some embodiments, the seventh authentication unit and the eighth authentication unit may send the authentication results they obtain to the decision unit of the seventeenth device for decision. The decision unit of the seventeenth device then synchronizes the aggregation result with the other devices in the multi-device group.
In some embodiments, the method may further comprise: a plurality of devices in a multi-device group establish a connection.
Illustratively, the seventeenth device is a smart phone and is the device currently in use by the user. The eighteenth device is an intelligent camera, the nineteenth device is an intelligent watch, the twentieth device is an intelligent large-screen device, and the twenty-first device is a tablet computer. The seventh acquisition unit is a face acquisition unit, and the eighth acquisition unit is a heart rate acquisition unit. The seventh authentication unit is a face authentication unit, and the eighth authentication unit is a heart rate authentication unit. In the embodiment shown in fig. 68, when the seventeenth device is used by the user, the eighteenth device acquires face information through the face acquisition unit of the eighteenth device. The twentieth device determines, through the face authentication unit of the twentieth device, whether the face information acquired by the face acquisition unit of the eighteenth device has been altered and whether it is legitimate, so as to obtain a face authentication result. The nineteenth device acquires heart rate information through the heart rate acquisition unit of the nineteenth device, and the twenty-first device determines, through the heart rate authentication unit of the twenty-first device, whether the heart rate information acquired by the heart rate acquisition unit of the nineteenth device has been altered and whether it is legitimate, so as to obtain a heart rate authentication result. Then, the decision unit of the seventeenth device may obtain an aggregation result according to the face authentication result and the heart rate authentication result. The aggregation result may be used to indicate whether the user using the seventeenth device is legal at this time.
In the methods shown in fig. 67 and 68, the collection and authentication of the authentication factor for implementing the user identity authentication are not limited to a single device. That is to say, the embodiment of the application can realize the collection and synchronization of resources such as the acquisition capability and the authentication capability of a plurality of devices in a multi-device group, and the resources can be comprehensively scheduled and used. Therefore, the authentication of the user identity can be effectively realized, and the usability is higher. Moreover, the aggregation result can be obtained according to a plurality of authentication results, namely, the authentication of the user identity is realized in a multi-factor fusion mode, so that the safety and the reliability of the authentication result are greatly improved.
Moreover, the aggregation result can be synchronously sent to any one device in the multi-device group, and even if the device sensitive to power consumption does not directly participate in the user identity authentication process, the at least one aggregation result can be obtained, so that the influence on the power consumption is greatly reduced.
Not limited to the case of one orchestration device listed above, in a specific implementation, there may be multiple orchestration devices in a multi-device group. In this case, the plurality of orchestration devices may coordinate performing at least one orchestration operation. Alternatively, no orchestration device may exist in the multi-device group. In this case, the authentication process of the user identity can be performed through self-coordination among a plurality of devices in the multi-device group.
In the embodiment of the application, in the process that the user uses any one device in the multi-device group, the multiple devices in the multi-device group can perform continuous authentication of the user identity, that is, the identity authentication process can be executed repeatedly in a loop. The aggregation results finally obtained by the continuous authentication are at least one aggregation result obtained by the identity authentication process executed at different moments. Optionally, the aggregation results finally obtained by the continuous authentication may also be at least one aggregation result synchronized by the multi-device group at different moments. The identity authentication process may be at least one of the embodiments shown in fig. 58 and fig. 61-fig. 68.
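The continuous authentication described above can be illustrated with the following minimal Python sketch, in which the cooperative identity authentication process is repeated over time and each run produces a timestamped aggregation result; the fixed interval and in-memory history are illustrative assumptions.

    import time

    def continuous_authentication(run_once, rounds=3, interval_s=1.0):
        # run_once stands for one execution of the cooperative identity authentication
        # process (e.g. the flow of fig. 58 or figs. 61-68); here it is a stub.
        history = []                            # (timestamp, aggregation result) pairs
        for _ in range(rounds):
            history.append((time.time(), run_once()))
            time.sleep(interval_s)              # assumed fixed interval between runs
        return history

    print(continuous_authentication(lambda: True, rounds=2, interval_s=0.1))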
In the embodiment of the present application, the process of performing user identity authentication by the multi-device group (specifically, as shown in the embodiments illustrated in fig. 58 and fig. 61-68) is imperceptible to the user and does not affect the user's normal use of any device in the multi-device group.
In the embodiment of the application, any one device in the multi-device group can query and obtain at least one aggregation result obtained at different moments. For example, the device may read the at least one aggregation result from its memory, or query and download the at least one aggregation result from a connected cloud server, but the manner is not limited thereto. An application program on any one device in the multi-device group can provide a corresponding application service according to the at least one aggregation result, that is, display a corresponding user interface. The application program on the device may be an application program provided by the system when the device leaves the factory (hereinafter referred to as a system application), or may be an application program installed subsequently (hereinafter referred to as a third-party application). Typically, a third-party application needs to connect to a corresponding application server in the network, so as to provide the application service for the user through the corresponding application server.
The following describes an application scenario and an embodiment of a user interface in the scenario related to the embodiment of the present application.
In the following description, the twenty-second device is taken as an example of a device used by a user. The twenty-second device may be any one of the devices in the multi-device group shown in fig. 1. And, a plurality of applications may be installed on the twenty-second device.
For example, a twenty-second device may have a file management application and a first payment application installed thereon. The file management application is a system application, and can provide application service for a user without connecting a network. The user may view and edit files stored in the internal memory of the twenty-second device or files stored in memory additionally integrated with the twenty-second device (e.g., secure digital card (SD card)) by the file management application (hereinafter collectively referred to as files on the twenty-second device). The first payment application is a third-party application, and needs to connect to a corresponding application server in the network (which may be referred to as a first payment application server in the following) so as to provide the application service for the user through the first payment application server. The user can make payment and the like through the first payment application.
Referring to fig. 69, fig. 69 is a diagram illustrating a comparison of the user interfaces of the file management application installed on the twenty-second device before and after the user identity authentication fails. The user interface 6910 shown in fig. 69 (a) is a user interface before the user identity authentication fails, and the user interface 6920 shown in fig. 69 (b) is a user interface after the user identity authentication fails.
As shown in fig. 69 (a), user interface 6910 may include a first navigation bar 6911, a search bar 6912, and a sort list 6913. Wherein:
the first navigation bar 6911 may include a sort control, a local control, and a cloud collection control. The primary interface of the file management application may be the user interface 6910. The user interface 6910 may be an interface displayed by the twenty-second device when a user operation on the sort control in the first navigation bar 6911 is detected on any user interface of the file management application.
Search bar 6912 may be used for a user to search for files on a twenty-second device. The twenty-second device may detect a user operation (e.g., a click operation) that acts on search bar 6912, and in response to this operation, the twenty-second device may display an input box. The twenty-second device may search for a file on the twenty-second device according to a part or all of the file name input by the user through the input box, and display a corresponding search result.
The sort list 6913 may include picture controls, audio controls, video controls, document controls, compressed package controls, collection controls, application game controls, and a safe control 6913A. The twenty-second device may detect a user operation (e.g., a click operation) on any of the controls in the sort list 6913, in response to which the twenty-second device may display a corresponding interface. For example, the twenty-second device may detect a click operation on a picture control, and in response to the operation, the twenty-second device may display information of files of the picture type (e.g., files in the joint photographic experts group (jpeg) or portable network graphics (png) file format) among the files on the twenty-second device.
As shown in fig. 69, the twenty-second device may detect a click operation acting on the safe control 6913A in the user interface 6910, and in response to the operation, the twenty-second device may obtain at least one aggregation result obtained by the multi-device group at different times. When the number of aggregation results indicating that the user using the twenty-second device is legal in the at least one aggregation result is smaller than the preset legal threshold, the twenty-second device confirms that the user identity authentication does not pass, and therefore displays the user interface 6920 for identity verification.
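The check performed when the safe control is tapped can be illustrated with the following minimal Python sketch, which counts how many of the recent aggregation results indicate a legal user and compares the count with the preset legal threshold; the threshold value and the handling of the equal-to-threshold case are illustrative assumptions.

    PRESET_LEGAL_THRESHOLD = 3                  # assumed value

    def identity_authenticated(aggregation_results):
        # Each result is True when it indicates the user is legal.
        legal_count = sum(1 for result in aggregation_results if result)
        # The embodiment compares the count with the threshold ("smaller than" fails,
        # "greater than" passes); the equal-to-threshold case is treated here as a pass.
        return legal_count >= PRESET_LEGAL_THRESHOLD

    recent = [True, True, False, True]          # results obtained at different times
    if identity_authenticated(recent):
        print("display the safe's file list, e.g. user interface 7010")
    else:
        print("display the identity verification interface, e.g. user interface 6920")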
The safe function in the file management application can provide the user with a service for storing files in encrypted form. When the user identity verification passes, the user can view and edit the encrypted files through the safe function in the file management application; otherwise, the encrypted files cannot be viewed or edited. The user identity verification method may include, but is not limited to, fingerprint verification, password verification, face verification, and the like. The user interface 6920 is illustrated by taking fingerprint verification, one of the user identity verification methods described above, as an example.
As shown in fig. 69 (b), the user interface 6920 may include a first prompt 6921, a fingerprint icon 6922, a second prompt 6923, and a first selection control 6924. Wherein:
the first prompt 6921 may be the text "please verify the fingerprint". Through the first prompt 6921 and the fingerprint icon 6922, the user can learn that the user identity verification method is fingerprint verification. In some embodiments, the area in which the fingerprint icon 6922 is located may be the recognition area of an under-display fingerprint sensor.
The user can learn the status of the fingerprint verification through the second prompt 6923. When no fingerprint information is detected, the second prompt 6923 may be the text "no fingerprint detected". When fingerprint information is being detected and/or verified, the second prompt 6923 may be the text "fingerprint verification in progress". When the fingerprint information fails or passes verification, the second prompt 6923 may be the text "fingerprint verification failed" or "fingerprint verification succeeded", respectively.
The first selection control 6924 may be used for user selection of other ways of user authentication as described above. The twenty-second device may detect a user operation (e.g., a click operation) acting on the first selection control 6924, in response to which the twenty-second device may display a user interface for password authentication, face authentication, and other authentication means.
In the case where the user identity verification passes (e.g., the fingerprint verification passes or the password verification passes), the twenty-second device may display the file information in the first-level directory that is stored in encrypted form by the safe function, for example, the user interface 7010 shown in fig. 70 below.
Referring to fig. 70, fig. 70 is a diagram illustrating a comparison of the user interfaces of the file management application installed on the twenty-second device before and after the user identity authentication passes. The user interface 6910 shown in fig. 70 (a) is a user interface before the user identity authentication passes, and the user interface 7010 shown in fig. 70 (b) is a user interface after the user identity authentication passes. For a description of the user interface 6910 shown in fig. 70 (a), reference may be made to the description of the user interface 6910 shown in fig. 69 (a), and details thereof are not repeated.
As shown in fig. 70, the twenty-second device may detect a click operation on the safe control 6913A in the user interface 6910, in response to which the twenty-second device may obtain at least one aggregation result obtained by the multi-device group at different times. When the number of aggregation results indicating that the user using the twenty-second device is legal in the at least one aggregation result is greater than the preset legal threshold, the twenty-second device may confirm that the user identity authentication passes, and thus may display the user interface 7010. That is, when the twenty-second device confirms that the user identity authentication passes, the twenty-second device may confirm that the user identity verification of the safe function in the file management application passes, and the twenty-second device may display the user interface 7010 without displaying the user interface 6920 for identity verification. In this case, the user does not need to manually perform the user identity verification process, which greatly facilitates the user.
As shown in fig. 70 (b), the user interface 7010 may include a first file list 7011. The user may obtain, through the user interface 7010, file information in the first-level directory stored in encrypted form by the safe function. The first file list 7011 may include, among other things, a folder 111 control, a folder 222 control, and a folder 333 control. The twenty-second device can detect a user operation (e.g., a click operation) acting on any one of the controls in the first file list 7011, and in response thereto, the twenty-second device can display a corresponding interface. For example, the twenty-second device may detect a click operation on a control of the folder 111, and in response to this operation, the twenty-second device may display file information under the first-level directory of the folder 111 (e.g., the user interface 7110 shown in (a) of fig. 71).
In some embodiments, during the process of using the safe function in the file management application on the twenty-second device by the user, for example, when the user views the file information in the first-level directory of the folder 111, the multiple devices in the multi-device group still cooperate to perform the authentication of the user identity and obtain at least one aggregation result. The twenty-second device may also obtain the at least one aggregation result.
Referring to fig. 71, fig. 71 is a diagram illustrating a comparison of the user interfaces of the file management application installed on the twenty-second device before and after the user identity authentication fails. The user interface 7110 shown in fig. 71 (a) is a user interface before the user identity authentication fails, and the user interface 7120 shown in fig. 71 (b) is a user interface after the user identity authentication fails.
As shown in (a) of fig. 71, the user interface 7110 may include a second file list 7111. The user can view, through the user interface 7110, the file information under the first-level directory of the folder 111 that is stored in encrypted form by the safe function. The second file list 7111 may include a picture folder control, an audio folder control, and other folder controls. The twenty-second device may detect a user operation (e.g., a click operation) acting on any one of the controls in the second file list 7111, and in response to the operation, the twenty-second device may display a corresponding interface. For example, the twenty-second device may detect a click operation acting on the picture folder control, in response to which the twenty-second device may display the file information under the picture folder of the folder 111.
As shown in fig. 71, in the process of viewing the file information in the first-level directory of the folder 111 through the user interface 7110 shown in fig. 71 (a), for example, the twenty-second device detects a click operation acting on the other folder controls in the second file list 7111, and in response to the operation, the twenty-second device may acquire at least one aggregation result obtained by the multi-device group at different times. When there is an aggregation result indicating that the user currently using the twenty-second device is not legal among the at least one aggregation result, the twenty-second device may block the user's access to the safe function through the file management application, that is, may display the user interface 7120 shown in (b) of fig. 71.
In contrast to the user interface 7110 shown in fig. 71 (a), the user interface 7120 shown in fig. 71 (b) may also include a warning prompt 7121. The warning prompt 7121 may be "No authority to access! Click the screen to return to the file management application". The twenty-second device may detect a user operation (e.g., a click operation) acting on an arbitrary area of the user interface 7120 shown in (b) of fig. 71, and in response to the operation, the twenty-second device may display the main interface of the file management application, e.g., the user interface 6910 shown in (a) of fig. 69 above. If the user still needs to access the files stored in encrypted form by the safe function, the user needs to perform user identity verification again, and the specific process is similar to the processes shown in fig. 69 to 71, and is not described again.
Not limited to the above-mentioned application scenarios and user interface diagrams, in a specific implementation, when the user uses the safe function in the file management application on the twenty-second device, if the file accessed by the user is a file with a higher authority requirement and a higher security level, even if the user identity authentication passes, the twenty-second device may still display the identity verification user interface (e.g., the user interface 6920 shown in fig. 69). In the case that the verification passes, the user can access the file normally. If the file accessed by the user is a file with a lower authority requirement and a lower security level, even if the at least one aggregation result acquired by the twenty-second device contains an aggregation result indicating that the current user is illegal, the user can still access the file normally as long as the number of aggregation results indicating that the current user is illegal is smaller than a preset threshold. The embodiment of the present application does not limit this.
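The security-level-dependent access rule described above can be illustrated with the following minimal Python sketch; the level names, the threshold value, and the returned actions are illustrative assumptions.

    ILLEGAL_RESULT_THRESHOLD = 2                # assumed value

    def file_access_decision(security_level, aggregation_results):
        illegal_count = sum(1 for result in aggregation_results if not result)
        if security_level == "high":
            # High-security files always require an explicit verification step,
            # even when the cooperative identity authentication already passed.
            return "display the identity verification interface"
        # Low-security files stay accessible as long as too few results flag the
        # current user as illegal.
        if illegal_count < ILLEGAL_RESULT_THRESHOLD:
            return "allow access"
        return "block access"

    print(file_access_decision("high", [True, True, True]))
    print(file_access_decision("low", [True, False, True]))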
Referring to fig. 72, fig. 72 is a diagram illustrating a comparison of the user interface of the first payment application installed on the twenty-second device before and after the user authentication fails. The user interface 7210 shown in fig. 72 (a) is a user interface before user authentication fails, and the user interface 7220 shown in fig. 72 (b) is a user interface after user authentication fails.
As shown in fig. 72 (a), the user interface 7210 may include payment amount information 7211, a payment method selection control 7212, and an immediate payment control 7213. Wherein:
the payment amount information 7211 is used to display specific amount information that the user needs to pay through the first payment application: 25.6 yuan.
The payment method selection control 7212 can be used for the user to select a specific payment method. The payment method may be a bank card, a credit card, or another type of payment account. The twenty-second device may detect a user operation (e.g., a click operation) on the payment method selection control 7212, in response to which the twenty-second device may display the payment methods that can be used by the first payment application.
An immediate payment control 7213 can be used for user confirmation of the payment operation. The twenty-second device may detect a user action (e.g., a click action) on the immediate payment control 7213, in response to which the twenty-second device may engage in data interaction with the first payment application server and display a corresponding user interface based on information returned by the first payment application server. For example, a user interface for payment verification (such as the user interface 7220 shown in (b) of fig. 72) or a user interface for payment result (such as the user interface 7310 shown in (a) of fig. 73).
As shown in fig. 72, the twenty-second device may detect a click operation on the immediate payment control 7213 of the user interface 7210, in response to which the twenty-second device may obtain at least one aggregation result obtained by the multi-device group at different times. When the number of aggregation results indicating that the user using the twenty-second device is legal in the at least one aggregation result is smaller than the preset legal threshold, the twenty-second device confirms that the user identity authentication does not pass, and the twenty-second device may report the result that the user identity authentication does not pass to the first payment application server. The first payment application server confirms that the payment amount (i.e., 25.6 yuan) is less than a first preset threshold (e.g., 100 yuan), that is, the current application scenario is a low-risk small-amount payment. In addition, the first payment application server confirms that the user identity authentication does not pass. Thus, the first payment application server may instruct the twenty-second device to trigger the normal payment verification process through the first payment application. The twenty-second device may display the user interface 7220 for payment verification.
The payment verification method may include, but is not limited to, fingerprint verification, password verification, face verification, and the like. The user interface 7220 is illustrated by taking fingerprint verification, one of the payment verification methods described above, as an example.
As shown in (b) of fig. 72, the user interface 7220 may include a third prompt 7221, a fingerprint icon 7222, a fourth prompt 7223, and a second selection control 7224. The third prompt 7221, the fingerprint icon 7222, and the fourth prompt 7223 are similar to the first prompt 6921, the fingerprint icon 6922, and the second prompt 6923 in the user interface 6920 shown in fig. 69 (b); specific reference may be made to the description of the user interface 6920 shown in fig. 69 (b).
The second selection control 7224 can be used for the user to select another payment verification method described above. The twenty-second device may detect a user operation (e.g., a click operation) acting on the second selection control 7224, in response to which the twenty-second device may display a user interface for other payment verification methods such as password verification and face verification.
In the case that the payment verification passes (e.g., the fingerprint verification or the password verification passes), the twenty-second device may report the result of the payment verification to the first payment application server, and the first payment application server may indicate that the payment of the twenty-second device for the first payment application is successful. The twenty-second device may display a user interface indicating that the payment is successful, such as the user interface 7320 shown in (b) of fig. 73 below. Otherwise, the first payment application server may indicate that the payment of the twenty-second device for the first payment application failed, and the twenty-second device may display a user interface indicating that the payment failed.
Referring to fig. 73, fig. 73 is a diagram illustrating a comparison of the user interfaces of the first payment application installed on the twenty-second device before and after the user identity authentication passes. The user interface 7310 shown in fig. 73 (a) is a user interface before the user identity authentication passes, and the user interface 7320 shown in fig. 73 (b) is a user interface after the user identity authentication passes. For a description of the user interface 7310 shown in fig. 73 (a), reference may be made to the description of the user interface 7210 shown in fig. 72 (a), and details thereof are not repeated.
As shown in fig. 73, the twenty-second device may detect a click operation on the immediate payment control 7213 of the user interface 7210, in response to which the twenty-second device may obtain at least one aggregation result obtained by the multi-device group at different times. When the number of aggregation results indicating that the user using the twenty-second device is legal in the at least one aggregation result is greater than the preset legal threshold, the twenty-second device confirms that the user identity authentication passes, and the twenty-second device may report the result that the user identity authentication passes to the first payment application server. The first payment application server confirms that the payment amount (i.e., 25.6 yuan) is less than the first preset threshold (e.g., 100 yuan), that is, the current application scenario is a small-amount payment. In addition, the first payment application server confirms that the user identity authentication passes. Thus, the first payment application server may indicate that the payment of the twenty-second device for the first payment application is successful. The twenty-second device may display the user interface 7320 indicating that the payment is successful.
That is, in a low-risk small-amount payment scenario, when the first payment application server confirms that the user identity authentication passes, the first payment application server may confirm that the payment verification of the first payment application passes, and may then indicate that the payment of the twenty-second device for the first payment application is successful. In this case, the user does not need to manually perform the payment verification process, which greatly facilitates the user.
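The small-amount rule described above can be illustrated with the following minimal Python sketch of a hypothetical server-side decision; the threshold value and the returned actions are illustrative assumptions.

    FIRST_PRESET_THRESHOLD = 100.0              # yuan, assumed value

    def small_payment_decision(amount, identity_authentication_passed):
        if amount < FIRST_PRESET_THRESHOLD and identity_authentication_passed:
            return "payment successful"                   # e.g. user interface 7320
        return "trigger the normal payment verification"  # e.g. user interface 7220

    print(small_payment_decision(25.6, True))   # low-risk small payment, authentication passed
    print(small_payment_decision(25.6, False))  # small payment, authentication not passed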
As shown in fig. 73 (b), the user interface 7320 may include a prompt box 7321. Prompt box 7321 may be used to display the text "payment successful". The user can get the result information of successful payment through the prompt box 7321.
Referring to fig. 74, fig. 74 is a diagram illustrating user interfaces of the first payment application installed on the twenty-second device during a payment process. The user interface 7410 shown in fig. 74 (a) is a user interface before payment verification, the user interface 7220 shown in fig. 74 (b) is a user interface during payment verification, and the user interface 7320 shown in fig. 74 (c) is a user interface after the payment verification passes. For descriptions of the user interface 7220 shown in fig. 74 (b) and the user interface 7320 shown in fig. 74 (c), reference may be made to the descriptions of the user interface 7220 shown in fig. 72 (b) and the user interface 7320 shown in fig. 73 (b), which are not repeated.
As shown in fig. 74 (a), the user interface 7410 may include payment amount information 7411, a payment means selection control 7412, and an immediate payment control 7413. The payment amount information 7411 is used to display specific amount information that the user needs to pay through the first payment application: 28000.9 yuan. The description of the payment method selection control 7412 is similar to that of the payment method selection control 7212 shown in fig. 72 (a) and will not be repeated.
An immediate payment control 7413 may be used for user confirmation of the payment operation. The twenty-second device may detect a user operation (e.g., a click operation) acting on the immediate payment control 7413, in response to which the twenty-second device may display a user interface for payment verification, such as the user interface 7220 shown in (b) of fig. 74.
As shown in (b) of fig. 74, the user may perform payment verification through the user interface 7220. The twenty-second device may obtain a result of the payment verification. Fig. 74 is described taking this payment verification pass as an example.
As shown in fig. 74, the twenty-second device may also obtain at least one aggregation result obtained by the multi-device group at different times in response to the above user operation (e.g., a click operation) acting on the immediate payment control 7413 in the user interface 7410. When the number of aggregation results indicating that the user using the twenty-second device is legal in the at least one aggregation result is greater than the preset legal threshold, the twenty-second device confirms that the user identity authentication passes. The twenty-second device may report the result of the payment verification performed by the user through the user interface 7220 (i.e., payment verification passed) and the result of the user identity authentication to the first payment application server.
The first payment application server confirms that the payment amount (i.e., 28000.9 yuan) is greater than a second preset threshold (e.g., 10000 yuan), that is, the current application scenario is a higher-risk large-amount payment. In addition, the first payment application server confirms that the payment verification passes and the user identity authentication passes. Thus, the first payment application server may indicate that the payment of the twenty-second device for the first payment application is successful. The twenty-second device may display a user interface indicating that the payment is successful, such as the user interface 7320 shown in (c) of fig. 74.
Referring to fig. 75, fig. 75 is a diagram illustrating a user interface of a first payment application installed on a twenty-second device during a payment process. The user interface 7410 shown in fig. 75 (a) is a user interface before payment verification, the user interface 7220 shown in fig. 75 (b) is a user interface in first payment verification, and the user interface 7520 shown in fig. 75 (c) is a user interface in second payment verification. The description of the user interface 7410 shown in fig. 75 (a) and the user interface 7220 shown in fig. 75 (b) may specifically refer to the description of the user interface 7410 shown in fig. 74 (a) and the user interface 7220 shown in fig. 72 (b), and will not be repeated.
As shown in fig. 75, in response to a user operation (e.g., a click operation) acting on the immediate payment control 7413 in the user interface 7410 shown in (a) of fig. 75, the twenty-second device may display a user interface for payment verification, such as the user interface 7220 shown in (b) of fig. 75. The user may perform payment verification through the user interface 7220. The twenty-second device may obtain a result of the payment verification. Fig. 75 is described by taking this payment verification pass as an example.
As shown in fig. 75, the twenty-second device may also obtain at least one aggregated result obtained by the multi-device group at different times in response to a user operation (e.g., a click operation) acting on the immediate payment control 7413 in the user interface 7410 shown in fig. 75 (a). And when the number of the aggregation results which represent that the user using the twenty-second device is legal in the at least one aggregation result is less than a preset legal threshold, the twenty-second device confirms that the user identity authentication is not passed. The twenty-second device may report a result of the payment verification performed by the user through the user interface 7220 (i.e., the payment verification is passed) and a result of the user authentication failure to the first payment application server.
The first payment application server confirms that the payment amount (i.e., 28000.9 yuan) is greater than the second preset threshold (e.g., 10000 yuan), that is, the current application scenario is a higher-risk large-amount payment. In addition, the first payment application server confirms that the payment verification passes but the user identity authentication does not pass. Thus, the first payment application server may instruct the twenty-second device to trigger the flow of the secondary payment verification through the first payment application. The twenty-second device may display a user interface for the secondary payment verification, such as the user interface 7520 shown in (c) of fig. 75.
As shown in fig. 75 (c), the user interface 7520 may include a fifth prompt 7521, a display area 7522, a numeric control 7523, a reacquisition control 7524, and a determination control 7525. Wherein:
the fifth prompt 7521 may include the text "the verification code has been sent to your mobile phone" and "please enter the verification code". Through the fifth prompt 7521, the user can learn that the currently displayed user interface 7520 is a user interface for verifying the mobile phone verification code.
The display area 7522 may be used to display the six-digit verification code entered by the user. The numeric control 7523 can be used for the user to select digits to enter the verification code. The twenty-second device may detect a user operation (e.g., an operation of clicking the number 2) acting on the numeric control 7523, and in response to the operation, the twenty-second device may display the entered digit, or a special symbol that hides the digit, in a box of the display area 7522.
When the mobile phone bound to the first payment application does not receive the verification code, or the verification code is input incorrectly and the verification code needs to be reacquired, the user can click on the reacquire control 7524. The twenty-second device may detect a user operation (e.g., a click operation) on reacquisition control 7524, and in response to this operation, the twenty-second device may perform a data interaction with the first payment application server, thereby triggering a process of resending the authentication code.
When the display area 7522 displays six digits, or six special symbols hiding the digits, the user can click the determination control 7525 for verification. The twenty-second device can detect a user operation (e.g., a click operation) acting on the determination control 7525, in response to which the twenty-second device can transmit the six-digit verification code in the display area 7522 to the first payment application server. The first payment application server may validate the six-digit verification code sent by the twenty-second device. For example, when the six-digit verification code sent by the twenty-second device is consistent with the six-digit verification code that the first payment application server triggered to be sent, the first payment application server confirms that the secondary payment verification passes. The first payment application server may then indicate that the payment of the twenty-second device for the first payment application is successful. The twenty-second device may display a user interface indicating that the payment is successful, such as the user interface 7320 shown in fig. 73 (b).
That is, in a higher-risk large-amount payment scenario, even if the first payment verification passes (for example, the result of the payment verification performed by the user through the user interface 7220 is a pass), if the user identity authentication does not pass, the first payment application server does not yet indicate that the payment of the twenty-second device for the first payment application is successful. Instead, the first payment application server may instruct the twenty-second device to trigger the flow of the secondary payment verification through the first payment application. In the case that the secondary payment verification passes, the first payment application server indicates that the payment of the twenty-second device for the first payment application is successful; otherwise, the first payment application server indicates that the payment of the twenty-second device for the first payment application failed. Therefore, for payment scenarios with a higher risk degree, the security of payment is greatly improved.
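The large-amount rule described above can be illustrated with the following minimal Python sketch of a hypothetical server-side decision covering the direct-success, secondary-verification, and failure branches; the threshold value and the returned actions are illustrative assumptions.

    SECOND_PRESET_THRESHOLD = 10000.0           # yuan, assumed value

    def large_payment_decision(amount, payment_verification_passed,
                               identity_authentication_passed,
                               secondary_verification_passed=None):
        if amount <= SECOND_PRESET_THRESHOLD:
            return "use the small-amount or normal payment flow"
        if payment_verification_passed and identity_authentication_passed:
            return "payment successful"
        if secondary_verification_passed is None:
            return "trigger the secondary payment verification"   # e.g. user interface 7520
        return "payment successful" if secondary_verification_passed else "payment failed"

    print(large_payment_decision(28000.9, True, True))            # direct success
    print(large_payment_decision(28000.9, True, False))           # ask for the SMS code
    print(large_payment_decision(28000.9, True, False, True))     # SMS code accepted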
Not limited to the application scenarios and user interface diagrams listed above, in a specific implementation, in the large-amount payment scenario, if the result of the payment verification performed by the user through the user interface 7220 is a failure, even if the result of the user identity authentication is also a failure, the first payment application server may still instruct the twenty-second device to trigger the flow of the secondary payment verification through the first payment application. In the case that the secondary payment verification passes, the first payment application server may indicate that the payment of the twenty-second device for the first payment application is successful. The embodiment of the present application does not limit this.
In this embodiment of the present application, any device in the multi-device group may enable the user identity authentication function by default, so as to perform user identity authentication in cooperation with the other devices in the multi-device group and obtain at least one aggregation result (i.e., execute the method shown in fig. 58). Any device in the multi-device group can also turn the user identity authentication function on or off in response to a user operation. An example in which the user turns the user identity authentication function on or off is given below.
Referring to fig. 76, fig. 76 illustrates a user interface 76230 on a twenty-second device. The user interface 76230 may include a first functionality control 76231 and a specific settings list 76232. Wherein:
the first functionality control 76231 may be used for a user to turn on or off the functionality of user authentication. The twenty-second device may detect a user operation (e.g., a click or slide operation) acting on the first function control 76231, in response to which the twenty-second device may turn on or off the function of user authentication. The first functionality control 76231 shown in the user interface 76230 indicates that the functionality for user authentication has been turned on. If the twenty-second device detects a click operation on the first function control 76231 at this time, the twenty-second device may turn off the function of user authentication in response to the operation.
The specific setting list 76232 may be used for user selection to turn on or off the function of user authentication in different applications. The twenty-second device may detect a user operation (e.g., a click or slide operation) acting on a function control corresponding to any one of the applications in the specific setting list 76232, and in response to the operation, the twenty-second device may turn on or off a function of user authentication in the application.
Illustratively, the function control corresponding to the file management application shown in the specific setting list 76232 of the user interface 76230 indicates that the function of user authentication is turned on in the file management application. Therefore, in the file management application, the twenty-second device may display a corresponding user interface according to at least one aggregation result obtained by the multi-device group at different times; for a specific example, refer to the description of fig. 69 to fig. 71.
Illustratively, the function control corresponding to the first video application shown in the specific setting list 76232 of the user interface 76230 indicates that the function of user authentication has been turned off in the first video application. Therefore, in the first video application, the twenty-second device may not display the corresponding user interface according to at least one aggregation result obtained by the multi-device group at different times, but may display the corresponding user interface in a normal manner.
Illustratively, the function control corresponding to the first payment application shown in the specific setting list 76232 of the user interface 76230 indicates that the function of user authentication is turned on in the first payment application. Therefore, in the first payment application, the twenty-second device may display a corresponding user interface according to at least one aggregation result obtained by the multi-device group at different times; for a specific example, refer to the description in fig. 72 to fig. 75 above.
In some embodiments, the twenty-second device may also detect a user operation (e.g., a click operation) by which the user acts on the name of any one of the applications in the specific setting list 76232, and in response to this operation, the twenty-second device may display an interface of details of the user authentication function in that application. For example, the twenty-second device displays the user interface 77240 shown in fig. 77 in response to a click operation by the user on the first payment application in the specific setting list 76232.
Referring to fig. 77, fig. 77 illustrates a user interface 77240 on the twenty-second device. The user interface 77240 may be an interface displayed after a twenty-second device detects a click operation on a first payment application in the specific settings list 76232. The user interface 77240 may include a second functionality control 77241 and a specific settings region 77242. Wherein:
The second function control 77241 may be used for the user to turn on or off the function of user authentication in the first payment application. The twenty-second device may detect a user operation (e.g., a click or slide operation) acting on the second function control 77241, and in response to the operation, the twenty-second device may turn on or off the function of user authentication in the first payment application. The second function control 77241 shown in the user interface 77240 indicates that the function of user authentication has been turned on in the first payment application. If the twenty-second device detects a click operation on the second function control 77241 at this time, the twenty-second device may turn off the function of user authentication in the first payment application in response to the operation.
The specific setting area 77242 may be used for the user to set specific details of the user authentication function in the first payment application. The specific setting area 77242 may include a third function control 772421 and an input control 772422. The third function control 772421 may be used for the user to turn on or off the password-free payment function. The twenty-second device may detect a user operation (e.g., a click or slide operation) acting on the third function control 772421, and in response to the operation, the twenty-second device may turn the password-free payment function on or off. The third function control 772421 shown in the user interface 77240 indicates that the password-free payment function has been turned on in the first payment application. If the twenty-second device detects a click operation on the third function control 772421 at this time, the twenty-second device may turn off the password-free payment function in response to the operation.
The input control 772422 may be used for the user to enter information of a first preset amount (in units of yuan). The twenty-second device may detect a user operation (e.g., a click operation) acting on the input control 772422, and in response to the operation, the twenty-second device may display a numeric keypad so that the user may input the information of the first preset amount based on the numeric keypad. The input control 772422 shown in the user interface 77240 indicates that the first preset amount is currently set to 100 yuan.
After the password-free payment function is turned on, when the amount paid in the first payment application is smaller than the first preset amount and the number of legal results in the obtained at least one aggregation result is larger than a preset legal threshold (that is, the user identity authentication is passed), the user does not need to manually perform the payment verification operation in the first payment application. The legal result is specifically used to indicate that the user using the twenty-second device is legitimate. The first payment application server may directly indicate that the payment of the twenty-second device for the first payment application was successful. For specific examples, refer to the description of fig. 72 and fig. 73 above.
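As a rough illustration of the password-free condition described above, the sketch below checks the two conditions (the amount is below the first preset amount, and enough legal results appear among the aggregation results); the function name, the "legal"/"illegal" strings and the example values are assumptions introduced only for this example.

def password_free_payment_allowed(amount: float,
                                  first_preset_amount: float,
                                  aggregation_results: list,
                                  legal_threshold: int) -> bool:
    # A "legal" aggregation result indicates that the user currently using the
    # twenty-second device is the legitimate user.
    legal_count = sum(1 for result in aggregation_results if result == "legal")
    return amount < first_preset_amount and legal_count > legal_threshold

# Example: a 68-yuan payment with three "legal" results and a threshold of 2
# would be completed without a manual payment verification step.
print(password_free_payment_allowed(68, 100, ["legal", "legal", "illegal", "legal"], 2))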
It is to be understood that the examples of the function of customizing the turning on or off of the user identity authentication shown in fig. 76 and 77 are only used for explaining the embodiments of the present application, and should not be construed as limiting. The setting interface may further include other options for self-defining the function of opening or closing the user identity authentication, which is not limited in the embodiment of the present application.
Based on the method provided by the sixth implementation, the acquisition device acquires at least one authentication factor and sends the at least one authentication factor to the authentication device; the authentication device authenticates the received at least one authentication factor to obtain at least one authentication result, and sends the at least one authentication result to the decision device; the decision device processes the received at least one authentication result to obtain at least one aggregation result, and synchronizes the at least one aggregation result to the plurality of electronic devices. By using the embodiments of the present application to authenticate at least two sustainable authentication factors, a secure and reliable identity authentication process is realized while the impact on the power consumption of the electronic device is reduced, and the usability is high.
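A minimal sketch of this acquisition-authentication-decision pipeline is given below; the class and method names are invented for illustration, and the matching logic is stubbed out, so this is only an outline of the division of roles rather than a concrete implementation of the sixth implementation.

class AcquisitionDevice:
    def collect_factors(self):
        # e.g. heart-rate features from a watch and gait features from a phone
        return [{"type": "heart_rate", "data": "..."},
                {"type": "gait", "data": "..."}]

class AuthenticationDevice:
    def authenticate(self, factors):
        # Match each received factor against its enrolled template (stubbed here).
        return [True for _ in factors]

class DecisionDevice:
    def __init__(self, group):
        self.group = group
    def aggregate_and_sync(self, results):
        aggregation = "legal" if results and all(results) else "illegal"
        for device in self.group:          # synchronize the aggregation result to every device
            print(f"sync {aggregation!r} to {device}")
        return aggregation

factors = AcquisitionDevice().collect_factors()
results = AuthenticationDevice().authenticate(factors)
DecisionDevice(["phone", "watch", "tablet"]).aggregate_and_sync(results)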
Referring to fig. 78, fig. 78 is a block diagram illustrating the software structure of an electronic device according to an exemplary embodiment of the present application. The electronic device may be the electronic device 100 or the electronic device 200; the electronic device 100 is taken as an example below. The electronic device 100 can implement voice control and screen projection control of the device based on the identity authentication information of an electronic device (e.g., the electronic device 200) connected to the electronic device 100, so that the convenience and security of cross-device authentication are effectively improved, and user experience is improved.
As shown in fig. 78, the layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system may be divided into an application layer, an application framework layer, a protocol stack, a Hardware Abstraction Layer (HAL) layer, and a kernel layer (kernel) from top to bottom. Wherein:
the application layer includes a series of application packages, such as application 1, application 2, music, photo albums, mailboxes, and the like. Applications such as bluetooth, telephony, video, etc. may also be included.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
The application framework layer can comprise a continuous characteristic acquisition module, a continuous characteristic authentication module, a local authentication result management module, an authentication mode management module and a cross-device authentication information acquisition module. The continuous characteristic acquisition module acquires biological characteristic information; the continuous characteristic authentication module is used for matching the biological characteristic information acquired by the continuous characteristic acquisition module with prestored biological characteristic information to acquire a local authentication result of the time; the local authentication result management module is used for managing the local authentication result determined by the continuous characteristic authentication module and informing the authentication mode management module to switch the authentication mode when the local authentication result changes; the cross-device authentication information acquisition module may be used to acquire the identity authentication information of other connected devices (e.g., the electronic device 200).
The application framework layer may also include bluetooth services, UWB services, WLAN services, and the like. The electronic device 100 may detect the distance of other devices to which the electronic device 100 has been connected by invoking one or more short-range communication services among the bluetooth services, UWB services, WLAN services, and the like. It may also connect with nearby devices of the electronic device 100 and perform data transmission by invoking one or more of these short-range communication services. In some embodiments, when the electronic device 100 determines that the distance between the electronic device 100 and the electronic device 200 is less than a preset distance 1, the electronic device 100 determines that the identity authentication information of the electronic device 200 is secure and trusted.
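As an illustration of the proximity check just mentioned, the small sketch below trusts cross-device identity authentication information only when the measured distance is below preset distance 1; the distance value and the way the distance is measured (bluetooth signal strength, UWB ranging, and so on) are assumptions and are abstracted away here.

PRESET_DISTANCE_1_M = 1.5   # assumed value in metres; the patent text does not specify it

def trust_remote_authentication_info(measured_distance_m: float,
                                     remote_authentication_passed: bool) -> bool:
    # Cross-device information is considered secure and trusted only in close proximity.
    return measured_distance_m < PRESET_DISTANCE_1_M and remote_authentication_passed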
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is a function which needs to be called by java language, and the other part is a core library of android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing the functions of object life cycle management, stack management, thread management, safety and exception management, garbage collection and the like.
The kernel layer is a layer between hardware and software. The kernel layer can include a display driver, a camera driver, a touch chip driver, a sensor driver, an audio driver, and the like. The HAL layer and kernel layer (kernel) may perform corresponding operations in response to functions called by the application framework layer.
In this embodiment of the application, the electronic device 100 may perform local persistent authentication through one or more authentication manners such as face recognition, iris recognition, and touch screen behavior recognition.
In some embodiments, the electronic device 100 employs face recognition for local persistent authentication. After the screen of the electronic device 100 is unlocked, the electronic device 100 acquires an image by using a camera (e.g., a low-power camera); the electronic device 100 transmits the image to the continuous feature acquisition module of the application framework layer through the camera driver of the kernel layer; the continuous feature acquisition module acquires the face feature information in the image and sends the face feature information to the continuous feature authentication module; the continuous feature authentication module matches the face feature information with the biometric information of a preset user of the electronic device 100, and when the matching degree reaches a preset threshold 1, it is determined that the identity authentication is passed; otherwise, the identity authentication is not passed.
In some embodiments, the electronic device 100 employs touch screen behavior recognition for local persistent authentication. The electronic device 100 may acquire N touch inputs of the user using a touch sensor in the touch screen; the touch chip acquires touch screen parameters of the N touch inputs (the touch screen parameters may include coordinate points of the touch areas of the touch inputs and capacitance information of each coordinate point); the touch chip sends the touch screen parameters of the N touch inputs to the continuous feature acquisition module of the application framework layer through the touch chip driver; the continuous feature acquisition module obtains the touch screen feature information of the N touch inputs based on the touch screen parameters of the N touch inputs and sends the touch screen feature information to the continuous feature authentication module; the continuous feature authentication module matches the touch screen feature information with the touch screen feature information of a preset user of the electronic device 100, and when the matching degree reaches a preset threshold 1, it is determined that the identity authentication is passed; otherwise, the identity authentication is not passed. The touch screen feature information of a touch input includes at least one or more items of information such as the touch position, touch area, touch strength, touch direction and touch time of the touch input. N is a positive integer greater than zero. The preset user may be the user 1 in the foregoing embodiments, and may also be an authorized user 3 added in the foregoing embodiments.
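The matching step in both embodiments can be pictured with the short sketch below; the similarity function is a deliberately simplified placeholder (a real implementation would compare face embeddings or touch-screen behaviour features), and the threshold value is an assumption introduced for this example.

def similarity(captured, template):
    # Toy similarity in [0, 1]: 1 minus the mean absolute difference, assuming the
    # two feature vectors have the same length and are already normalized to [0, 1].
    diffs = [abs(a - b) for a, b in zip(captured, template)]
    return 1.0 - sum(diffs) / len(diffs)

def local_authentication_passed(captured, enrolled_template, preset_threshold_1=0.9):
    # Identity authentication passes when the matching degree reaches the preset threshold.
    return similarity(captured, enrolled_template) >= preset_threshold_1

print(local_authentication_passed([0.20, 0.81, 0.43], [0.22, 0.80, 0.45]))  # True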
In some embodiments, the continuous feature authentication module sends the obtained local authentication result to the local authentication result management module. When the local authentication result changes from pass to fail, the local authentication result management module may notify the authentication mode management module to switch the persistent authentication mode to the cross-device persistent authentication mode; when the local authentication result changes from fail to pass, the local authentication result management module may notify the authentication mode management module to switch the persistent authentication mode to the local persistent authentication mode. When the persistent authentication mode is the cross-device persistent authentication mode, the cross-device authentication information acquisition module may call a communication service to acquire the identity authentication information of another connected device (e.g., the electronic device 200). For example, the cross-device authentication information acquisition module acquires the identity authentication information of the electronic device 200 by calling the bluetooth service; the bluetooth service calls the bluetooth chip driver of the kernel layer, and the bluetooth chip driver may drive the bluetooth antenna to send an obtaining request to the electronic device 200, where the obtaining request is used to obtain a local authentication result of the electronic device 200; for example, the obtaining request may be the obtaining request 1 in the embodiment of fig. 35, or the obtaining request 2 and the obtaining request 3 in the embodiment of fig. 36. The electronic device 100 may obtain, through the bluetooth chip driver, the identity authentication information of the electronic device 200 received by the bluetooth antenna. The bluetooth chip driver may send the identity authentication information of the electronic device 200 to the cross-device authentication information acquisition module of the application framework layer. In this embodiment, the electronic device 100 may implement voice control and screen projection control on the electronic device 100 based on the identity authentication information of the electronic device 200 acquired by the cross-device authentication information acquisition module.
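The mode switching handled by the authentication mode management module can be sketched as a small state machine; the class, method and mode names below are illustrative only and are not taken from the patent text.

class AuthenticationModeManager:
    def __init__(self):
        self.mode = "local_persistent"

    def on_local_result_changed(self, old_result: bool, new_result: bool) -> str:
        if old_result and not new_result:
            # Local authentication just failed: fall back to cross-device authentication.
            self.mode = "cross_device_persistent"
        elif not old_result and new_result:
            # Local authentication passes again: return to local persistent authentication.
            self.mode = "local_persistent"
        return self.mode

manager = AuthenticationModeManager()
print(manager.on_local_result_changed(True, False))   # cross_device_persistent
print(manager.on_local_result_changed(False, True))   # local_persistent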
In some embodiments, the electronic device 100 is also provided with local persistent authentication capabilities. The electronic device 100 may also obtain, through the bluetooth chip driver, an obtaining request of the electronic device 200 received by the bluetooth antenna, where the obtaining request is used to obtain a local authentication result of the electronic device 100. The bluetooth chip driver can send the acquisition request to a local authentication result management module or a continuous feature acquisition module of the application framework layer. The local authentication result management module may send a local authentication result to the bluetooth chip driver, the persistent feature acquisition module may send acquired biometric information to the bluetooth chip driver, and the bluetooth chip driver may send the local authentication result or the biometric information of the electronic device 100 to the electronic device 200 through the bluetooth antenna.
In this embodiment, the electronic device 100 may implement voice control and screen projection control on the electronic device 100 based on the local authentication result of the electronic device 200 acquired by the cross-device authentication information acquisition module.
Referring to the foregoing voice control scenario 2 and screen projection control scenario 3, the electronic device 100 may start cross-device authentication for a locked application and may not start cross-device authentication for an unlocked application (or application function). Referring to the aforementioned voice control scenario 3 and screen projection control scenario 4, the electronic device 100 may initiate cross-device authentication for a locked low-risk application (or application function) and may not initiate cross-device authentication for a locked high-risk application (or application function). In some embodiments of the present application, the application framework may include an application security management module having stored therein an identification of the locked application (or application function) and/or the locked low-risk application (or application function). When the electronic device 100 receives an input operation of a user, the application security management module may be invoked to determine whether an application (or application function) triggered by the input operation is a locked application (or application function) or a low-risk application (or application function) that is locked.
In some embodiments, the microphone of the electronic device 100 receives the voice instruction 1, the electronic device 100 sends the voice instruction 1 to the application framework layer through the kernel layer, and the application framework layer invokes the voice recognition algorithm of the HAL layer to recognize that the voice instruction 1 is used to trigger the application 1. The application framework layer may then invoke the application security management module to determine that the application 1 is a locked application. Because the application 1 is a locked application, when the local authentication result management module determines that the local authentication does not pass, the local authentication result management module calls the cross-device authentication information acquisition module to acquire the identity authentication information of the electronic device 200.
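The lookup performed by the application security management module can be sketched as follows; the application identifiers and the sets of locked and high-risk applications are invented for illustration and combine the rules of the scenarios referenced above.

LOCKED_APPS = {"application_1", "file_management"}
HIGH_RISK_LOCKED_APPS = {"first_payment_application"}

def needs_cross_device_authentication(app_id: str, local_auth_passed: bool) -> bool:
    if local_auth_passed:
        return False        # local persistent authentication already identifies the user
    # Cross-device authentication is started only for locked, non-high-risk applications.
    return app_id in LOCKED_APPS and app_id not in HIGH_RISK_LOCKED_APPS

print(needs_cross_device_authentication("application_1", local_auth_passed=False))  # True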
Referring to the aforementioned screen projection control scenarios 2 to 4, the electronic device 100 may receive touch parameters for a touch operation of screen projection content acting on the electronic device 200.
In some embodiments of the present application, the application framework layer further comprises a screen projection service, and the screen projection service comprises a coordinate conversion module. After the electronic device 100 receives, through the communication service, the touch parameters (including touch coordinates, touch duration, and the like) of the touch operation sent by the electronic device 200, the electronic device 100 may call the coordinate conversion module to convert the touch coordinates of the electronic device 200 in the touch parameters into touch coordinates of the electronic device 100, so as to determine the event triggered by the touch operation. For example, after the touch coordinates of the electronic device 200 are converted, the resulting touch coordinates of the electronic device 100 are determined to correspond to the area where the album icon is located, and then the electronic device 100 may determine, according to parameters such as the touch duration in the touch parameters, that the touch operation is a single-click operation acting on the album icon.
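The conversion done by the coordinate conversion module can be approximated by a simple linear mapping between the two screen coordinate spaces, as in the sketch below; the resolutions and the assumption of full-screen mirroring are illustrative only.

def convert_touch_point(x_200: float, y_200: float,
                        size_200: tuple, size_100: tuple) -> tuple:
    # Map a touch point reported in the screen space of the electronic device 200
    # into the screen space of the electronic device 100 (full-screen mirroring assumed).
    w_200, h_200 = size_200
    w_100, h_100 = size_100
    return x_200 * w_100 / w_200, y_200 * h_100 / h_200

# A touch at (540, 960) on a 1080x1920 screen maps to (360, 640) on a 720x1280 screen.
print(convert_touch_point(540, 960, (1080, 1920), (720, 1280)))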
Based on the same concept, fig. 79 illustrates an apparatus 7900 provided by the present application. The device 7900 includes at least one processor 7910, memory 7920, and a transceiver 7930. The processor 7910 is coupled to the memory 7920 and the transceiver 7930, and in this embodiment, the coupling is an indirect coupling or communication connection between devices, elements or modules, and may be in an electrical, mechanical or other form for information exchange between the devices, elements or modules. The connection medium between the transceiver 7930, the processor 7910, and the memory 7920 is not limited in the embodiments of the present application. For example, in fig. 79, the memory 7920, the processor 7910 and the transceiver 7930 may be connected via a bus, which may be divided into an address bus, a data bus, a control bus, and the like.
In particular, the memory 7920 is used for storing program instructions.
The transceiver 7930 is used to receive or transmit data.
The processor 7910 is configured to invoke program instructions stored in the memory 7920 to cause the device 7900 to perform the steps performed by the electronic device described above.
In the embodiments of the present application, the processor 7910 may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof that may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
In this embodiment, the memory 7920 may be a non-volatile memory, such as a Hard Disk Drive (HDD) or a solid-state drive (SSD), and may also be a volatile memory, such as a random access memory (RAM). The memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
It should be understood that the apparatus 7900 may be used to implement the method shown in the embodiments of the present application, and the related features may refer to the above description, which is not described herein again.
An embodiment of the present application further provides a computer-readable storage medium, where a computer instruction is stored in the computer-readable storage medium, and when the computer instruction runs on an electronic device, the electronic device is caused to execute the relevant method steps to implement the authentication method in the foregoing embodiment.
The embodiment of the present application further provides a computer program product, which when running on a computer, causes the computer to execute the above related steps to implement the authentication method in the above embodiment.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer execution instructions, and when the device runs, the processor can execute the computer execution instructions stored in the memory, so that the chip can execute the authentication method in the above method embodiments.
In addition, the electronic device, the computer storage medium, the computer program product, or the chip provided in the embodiments of the present application are all used for executing the corresponding method provided above, and therefore, the beneficial effects achieved by the electronic device, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be discarded or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially or partially implemented in the form of a software product, which is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and all the changes or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. An authentication method, performed by a first electronic device, the method comprising:
receiving an authentication request, wherein the authentication request is used for requesting authentication of a first service;
determining a risk security level corresponding to the first service;
determining an authentication mode meeting the security risk level according to the risk security level;
and scheduling M pieces of electronic equipment to authenticate the first service according to the authentication mode, wherein M is a positive integer.
2. The method of claim 1, wherein determining, based on the risk security level, an authentication manner that satisfies the security risk level comprises:
determining an available authentication factor and an available acquisition capability associated with the available authentication factor;
and determining an authentication mode meeting the security risk level according to the risk security level, the available authentication factor and the available acquisition capability associated with the available authentication factor.
3. The method of claim 1 or 2, further comprising:
when the authentication request comprises the biological characteristics, identifying the biological characteristics and determining a user corresponding to the biological characteristics;
determining that the user has the right to execute the first service.
4. The method of claim 3, wherein determining, based on the risk security level, an authentication manner that satisfies the security risk level comprises:
determining available authentication factors associated with the user, and available authentication capabilities and available capture capabilities associated with the available authentication factors;
and determining an authentication mode meeting the security risk level according to the risk security level, the available authentication factor, the available authentication capability and the available acquisition capability.
5. An electronic device, comprising a processor and a memory;
The memory stores program instructions;
the processor is configured to execute the program instructions stored by the memory to cause the electronic device to perform:
receiving an authentication request, wherein the authentication request is used for requesting authentication of a first service;
determining a risk security level corresponding to the first service;
determining an authentication mode meeting the security risk level according to the risk security level;
and scheduling M pieces of electronic equipment to authenticate the first service according to the authentication mode, wherein M is a positive integer.
6. The electronic device of claim 5, wherein the processor is configured to execute the program instructions stored in the memory, so that when the electronic device determines, according to the risk security level, an authentication manner that meets the security risk level, the electronic device specifically performs:
determining an available authentication factor and an available acquisition capability associated with the available authentication factor;
and determining an authentication mode meeting the security risk level according to the risk security level, the available authentication factor and the available acquisition capability associated with the available authentication factor.
7. The electronic device of claim 5 or 6, wherein the processor is configured to execute the program instructions stored in the memory to cause the electronic device to further perform:
when the authentication request comprises the biological characteristics, identifying the biological characteristics and determining a user corresponding to the biological characteristics;
determining that the user has the right to execute the first service.
8. The electronic device of claim 7, wherein the processor is configured to execute the program instructions stored by the memory to cause the electronic device to further perform:
determining available authentication factors associated with the user, and available authentication capabilities and available capture capabilities associated with the available authentication factors;
and determining an authentication mode meeting the security risk level according to the risk security level, the available authentication factor, the available authentication capability and the available acquisition capability.
9. A data association method is applied to a first electronic device, and comprises the following steps:
receiving a first operation of a user, wherein the first operation is used for requesting to enter a first feature template;
in response to the first operation, authenticating the identity of the user using an existing second feature template, wherein the second feature template is associated with a user identification of the user;
after the authentication is passed, receiving the first feature template entered by the user;
and establishing an association relation between the first feature template and the user identification.
10. The method of claim 9, further comprising:
receiving a second operation of the user, wherein the second operation is used for triggering the association of the input third feature template and the user identification;
and responding to the second operation, and establishing an association relation between the third feature template and the user identification.
11. The method of claim 10, wherein before receiving the second operation of the user, the method further comprises:
receiving characteristic information input by a user;
and matching the feature information input by the user with at least one feature template in the first electronic equipment, and determining the third feature template matched with the features input by the user.
12. The method of claim 11, further comprising:
acquiring a use constraint condition corresponding to the third feature template;
and establishing an association relation between the third feature template and the use constraint condition.
13. An electronic device, comprising a processor and a memory;
the memory stores program instructions;
the processor is configured to execute the program instructions stored by the memory to cause the electronic device to perform:
receiving a first operation of a user, wherein the first operation is used for requesting to enter a first feature template;
in response to the first operation, authenticating the identity of the user using an existing second feature template, wherein the second feature template is associated with a user identification of the user;
after the authentication is passed, receiving the first feature template entered by the user;
and establishing an association relation between the first feature template and the user identification.
14. The electronic device of claim 13, wherein the processor is configured to execute the program instructions stored by the memory to cause the electronic device to further perform:
receiving a second operation of the user, wherein the second operation is used for triggering the association of the input third feature template and the user identification;
and responding to the second operation, and establishing an association relation between the third feature template and the user identification.
15. The electronic device of claim 14, wherein prior to receiving a second operation by a user, the processor is configured to execute the program instructions stored in the memory to cause the electronic device to further perform:
receiving characteristic information input by a user;
and matching the feature information input by the user with at least one feature template in the electronic equipment, and determining the third feature template matched with the features input by the user.
16. The electronic device of claim 15, wherein the processor is configured to execute the program instructions stored by the memory to cause the electronic device to further perform:
acquiring a use constraint condition corresponding to the third feature template;
and establishing an association relation between the third feature template and the use constraint condition.
17. A computer-readable storage medium, comprising program instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-4 or 9-12.
CN202110313313.2A 2020-05-11 2021-03-24 Authentication method and electronic equipment Pending CN113641981A (en)

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
CN2020103938955 2020-05-11
CN202010393895 2020-05-11
CN2020110634028 2020-09-30
CN202011070212 2020-09-30
CN2020110702129 2020-09-30
CN202011063402 2020-09-30
CN2021101557953 2021-02-04
CN202110155795 2021-02-04
CN202110162842 2021-02-05
CN2021101628427 2021-02-05
CN202110185361 2021-02-10
CN2021101853618 2021-02-10

Publications (1)

Publication Number Publication Date
CN113641981A true CN113641981A (en) 2021-11-12

Family

ID=78415712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110313313.2A Pending CN113641981A (en) 2020-05-11 2021-03-24 Authentication method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113641981A (en)


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220407692A1 (en) * 2021-06-16 2022-12-22 International Business Machines Corporation Multiple device collaboration authentication
CN113672903A (en) * 2021-10-22 2021-11-19 深圳市信润富联数字科技有限公司 Password management method, electronic device, device and readable storage medium
WO2023089406A1 (en) * 2021-11-16 2023-05-25 International Business Machines Corporation Auditing of multi-factor authentication
US11762973B2 (en) 2021-11-16 2023-09-19 International Business Machines Corporation Auditing of multi-factor authentication
CN113965789A (en) * 2021-12-15 2022-01-21 荣耀终端有限公司 Screen projection method, terminal and communication system
CN115065512A (en) * 2022-05-31 2022-09-16 北京奇艺世纪科技有限公司 Account login method, system, device, electronic equipment and storage medium
CN115065512B (en) * 2022-05-31 2024-03-15 北京奇艺世纪科技有限公司 Account login method, system, device, electronic equipment and storage medium
WO2024020828A1 (en) * 2022-07-27 2024-02-01 京东方科技集团股份有限公司 Display terminal, server and secure information publishing system
CN116437006A (en) * 2023-06-14 2023-07-14 深圳市英迈通信技术有限公司 Information security management system and method for mobile phone screen throwing
CN116437006B (en) * 2023-06-14 2023-09-08 深圳市英迈通信技术有限公司 Information security management system and method for mobile phone screen throwing
CN116861496A (en) * 2023-09-04 2023-10-10 合肥工业大学 Intelligent medical information safety display method and system

Similar Documents

Publication Publication Date Title
CN113641981A (en) Authentication method and electronic equipment
US11907388B2 (en) Enhanced processing and verification of digital access rights
CN109923885B (en) Multi-factor authentication for access to services
US10419435B2 (en) System and method for implementing a two-person access rule using mobile devices
US20190173745A1 (en) Proximity and Context Aware Mobile Workspaces in Enterprise Systems
US11811757B1 (en) Authentication as a service
WO2019158001A1 (en) Blockchain generating method, and related device and system
US20150350820A1 (en) Beacon additional service of electronic device and electronic device for same background arts
CN110300083B (en) Method, terminal and verification server for acquiring identity information
CN111466099A (en) Login method, token sending method and device
CN107735999A (en) The certification for passing through multiple approach based on functions of the equipments and user's request
CN112699354A (en) User authority management method and terminal equipment
CN108022349A (en) Information input method, equipment, smart lock and storage medium
CN101488859A (en) Network security authentication system based on handwriting recognition and implementing method thereof
JP2017531941A (en) Data sharing using body-coupled communication
CN112286632A (en) Cloud platform, cloud platform management method and device, electronic equipment and storage medium
WO2015059365A1 (en) Audiovisual -->associative --> authentication --> method and related system
CN201393226Y (en) Network safety authentication system based on handwriting identification
US10992796B1 (en) System for device customization based on beacon-determined device location
KR101979118B1 (en) Method for controlling smart device using fingerprint information and computer readable medium for performing the method
CN113645024A (en) Key distribution method, system, device and readable storage medium and chip
CN107241318A (en) The method and apparatus that a kind of account is reported the loss
US20230041559A1 (en) Apparatus and methods for multifactor authentication
KR20130082645A (en) Voice recognition of smart phone banking
KR102133726B1 (en) Server for managing door-lock device by inaudible sound wave, door-lock device, and method for controling door-lock device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination