WO2023274033A1 - Access control method and related devices - Google Patents

Access control method and related devices

Info

Publication number
WO2023274033A1
WO2023274033A1 · PCT/CN2022/100826 · CN2022100826W
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
operation instruction
authentication
resource
access
Prior art date
Application number
PCT/CN2022/100826
Other languages
English (en)
French (fr)
Inventor
陈晓东
李昌婷
张胜涛
赵国见
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP22831849.9A priority Critical patent/EP4350544A1/en
Publication of WO2023274033A1 publication Critical patent/WO2023274033A1/zh
Priority to US18/398,325 priority patent/US20240126897A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/36User authentication by graphic or iconic representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/57Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G06F21/577Assessing vulnerabilities and evaluating computer system security
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/604Tools and structures for managing or administering access control systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2141Access rights, e.g. capability lists, access control lists, access tables, access matrices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2149Restricted operating environment

Definitions

  • the present application relates to the technical field of terminals and identity authentication, and in particular to an access control method and related devices.
  • Electronic devices such as computers and mobile phones can be set to a locked state for safety and to prevent misuse.
  • When the electronic device is in the locked state, the user needs to input predetermined identity authentication information, such as a preset fingerprint, face, or password, to unlock it and enter the unlocked state.
  • Most functions of electronic devices can only be called in the unlocked state.
  • Currently, users need to input accurate identity authentication information, such as a face captured at close range or a fingerprint that exactly matches the preset fingerprint, to trigger unlocking of the electronic device.
  • Users cannot unlock the device with less accurate authentication methods, such as voiceprint authentication. As a result, the user has to go through cumbersome authentication operations, or even multiple authentication operations, to unlock the device, and using the electronic device loses convenience.
  • The present application provides an access control method and related devices, which allow users to control the electronic device freely and conveniently without going through cumbersome authentication to unlock it.
  • In the first aspect, the embodiment of the present application provides an access control method based on a weak authentication factor, including: when a first device is in a locked state, obtaining a first operation instruction and a first authentication factor, where the first operation instruction is used to request access to a first resource of the first device, and the first authentication factor includes identity authentication information that does not meet the unlocking requirement of the first device (identity authentication information that meets the unlocking requirement is used to switch the first device from the locked state to an unlocked state); the first device determines, according to the first operation instruction and the first authentication factor, the resources that the first device is allowed to access; and if the resources that the first device is allowed to access include the first resource, the first device accesses the first resource in response to the first operation instruction.
  • In this way, the electronic device no longer decides whether to respond to a corresponding operation based solely on whether it is unlocked.
  • Instead, based on the operation instruction and the weak authentication factor, finer-grained access control can be implemented for various resources, which enriches the usage scenarios and scope of use of the electronic device.
  • The electronic device can be triggered to perform some operations without going through cumbersome authentication to unlock it, so that the user can operate the electronic device more freely and conveniently.
  • In some embodiments, the first device may determine the resources that it is allowed to access according to the risk level of accessing the first resource; the higher the risk level of accessing the first resource, the fewer resources the first device is allowed to access.
  • The first device can also determine the resources that it is allowed to access according to the security level of the first authentication factor; the lower the security level of the first authentication factor, the fewer resources the first device is allowed to access.
  • The higher the authentication capability level (ACL) of the identity authentication method corresponding to the first authentication factor, the higher the degree of matching between the first authentication factor and the identity authentication information meeting the unlocking requirement of the first device, or the more reliable the manner in which the first authentication factor is acquired, the higher the security level of the first authentication factor. In this way, the reliability of the current authentication factor can be fully considered, and situations such as data leakage can be avoided.
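The level-based mapping above can be sketched in code. This is an illustrative sketch, not the patent's implementation: the resource names and the three-level security scale are hypothetical, chosen only to show that a lower-security factor yields a smaller set of accessible resources.

```python
# Hypothetical mapping from the security level of a weak authentication
# factor to the set of resources a locked device may expose.
RESOURCES_BY_SECURITY_LEVEL = {
    1: {"flashlight", "camera_app"},                              # very weak factor
    2: {"flashlight", "camera_app", "navigation"},                # moderately weak
    3: {"flashlight", "camera_app", "navigation", "music_playback"},
}

def allowed_resources(factor_security_level: int) -> set:
    """Resources the locked first device may access for this factor."""
    return RESOURCES_BY_SECURITY_LEVEL.get(factor_security_level, set())

def may_access(resource: str, factor_security_level: int) -> bool:
    """Whether the first operation instruction's target resource is permitted."""
    return resource in allowed_resources(factor_security_level)
```

A higher-level factor strictly widens the allowed set, which matches the monotonic "lower security level, fewer resources" rule stated above.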
  • The first resource includes: a predefined resource that cannot be accessed by the first device in the locked state.
  • The resources that can be accessed in the locked state are basic or commonly used resources, such as the camera application, the flashlight, Bluetooth, and so on.
  • The resources that cannot be accessed in the locked state may include resources related to user privacy data, such as photos, browsing records, and so on.
  • The resources that can be accessed in the locked state may be predefined by the first device.
  • In some embodiments, the first operation instruction includes any one of the following: semantics carried by voice, a gesture, a facial expression, or a body posture.
  • The first device may acquire the first operation instruction in any of the following ways:
  • the first device collects a voice or an image and recognizes the first operation instruction carried in the voice or image;
  • the first device receives a voice or an image sent by a second device and recognizes the first operation instruction carried in the voice or image; or,
  • the first device receives the first operation instruction sent by the second device.
  • The identity authentication information includes any one or more of the following: a password, a graphic, or a biometric feature.
  • Biometric features are divided into two categories: physiological features and behavioral features.
  • Physiological features include: face, voiceprint, fingerprint, palm shape, retina, iris, body odor, face shape, heart rate, and deoxyribonucleic acid (DNA).
  • Behavioral features include: signature, body posture (such as walking gait), and the like.
  • The identity authentication information that does not meet the unlocking requirements of the first device may include any one or more of the following:
  • identity authentication information lower than the standard required by a first authentication method, where the first authentication method is an identity authentication method for switching the first device from the locked state to the unlocked state.
  • The first authentication method is an identity authentication method whose authentication capability level (ACL) is higher than a third value, or the first authentication method is preset by the first device.
  • The first authentication method may include password authentication, graphic authentication, fingerprint authentication, face authentication, and the like.
  • The identity authentication information lower than the standard required by the first authentication method may include: a biometric feature whose degree of matching with a pre-stored first biometric feature is lower than a first value, where the first biometric feature is the identity authentication information corresponding to the first authentication method.
  • The first value can be preset.
  • Another possibility is identity authentication information that meets the standard required by a second authentication method, where the second authentication method is an identity authentication method other than the first authentication method.
  • The second authentication method may be an identity authentication method with a lower authentication capability level (ACL), or the second authentication method is preset by the first device.
  • The second authentication method may include voiceprint authentication, heart rate authentication, body posture authentication, and the like.
  • The identity authentication information that meets the standard required by the second authentication method includes: a biometric feature whose degree of matching with a pre-stored second biometric feature reaches a second value, where the second biometric feature is the identity authentication information corresponding to the second authentication method.
  • The second value can be preset.
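The two thresholds above can be illustrated with a small classifier. This is a sketch under stated assumptions: `FIRST_VALUE`, `SECOND_VALUE`, and the match scores are hypothetical numbers, not values from the patent.

```python
# Hypothetical thresholds: FIRST_VALUE is the unlock standard of the first
# (strong) authentication method; SECOND_VALUE is the acceptance standard of
# the second (weaker) method.
FIRST_VALUE = 0.95   # e.g. fingerprint/face matching degree required to unlock
SECOND_VALUE = 0.80  # e.g. voiceprint matching degree accepted as a weak factor

def classify_factor(method: str, match_score: float) -> str:
    """Classify captured authentication input as 'unlock', 'weak', or 'reject'."""
    if method == "first":                      # e.g. fingerprint or face
        if match_score >= FIRST_VALUE:
            return "unlock"                    # meets the unlocking requirement
        return "weak"                          # below the standard: weak factor
    if method == "second":                     # e.g. voiceprint or body posture
        return "weak" if match_score >= SECOND_VALUE else "reject"
    return "reject"
```

Input that unlocks the device ends authentication outright; input classified as `"weak"` becomes a first authentication factor in the sense of the first aspect.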
  • The first device may obtain the first authentication factor in any one or more of the following ways:
  • the first device collects a voice or an image and recognizes the first authentication factor carried in the voice or image;
  • the first device receives a voice or an image sent by the second device and recognizes the first authentication factor carried in the voice or image; or,
  • the first device receives the first authentication factor sent by the second device.
  • The first device may also obtain the first operation instruction and the first authentication factor at the same time.
  • For example, the first device may collect a voice, recognize the semantics of the voice, and determine the semantics as the first operation instruction; it may also recognize the voiceprint carried by the voice and determine the voiceprint as the first authentication factor.
  • Similarly, the first device can recognize gestures, facial expressions, and body postures in images it collects and determine them as the first operation instruction; it can also recognize a biometric feature in the images and determine that biometric feature as the first authentication factor.
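The "one input, two extractions" idea above can be sketched as follows. The recognizer functions are hypothetical stand-ins: real speech recognition and voiceprint extraction would replace the placeholder bodies.

```python
# Placeholder recognizers (hypothetical): in a real device these would run
# speech recognition and voiceprint-embedding models on the audio.
def recognize_semantics(audio: bytes) -> str:
    return "open_navigation"     # stand-in for recognized command semantics

def extract_voiceprint(audio: bytes) -> bytes:
    return audio[:16]            # stand-in for a computed voiceprint embedding

def parse_voice_input(audio: bytes):
    """Derive (first operation instruction, first authentication factor)
    from a single voice input, as described above."""
    return recognize_semantics(audio), extract_voiceprint(audio)
```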
  • In some embodiments, the first device may also receive a user operation for requesting access to a second resource of the first device. If the resources that the first device is allowed to access include the second resource, the first device accesses the second resource in response to the user operation; if they do not include the second resource, the first device refuses to respond to the user operation.
  • In this way, the operations that the first device can perform are limited to a certain range, so that privilege escalation can be avoided and the data security of the first device can be protected.
  • In some embodiments, after accessing the first resource in response to the first operation instruction, the first device may also obtain a second authentication factor, where the second authentication factor includes identity authentication information that meets the unlocking requirement of the first device, or a predetermined number of first authentication factors; the first device then switches from the locked state to the unlocked state according to the second authentication factor.
  • When the second authentication factor is a predetermined number of first authentication factors, the user can complete identity authentication by inputting the first authentication factor multiple times, thereby triggering unlocking of the electronic device.
  • After the first device determines the resources that it is allowed to access, and before it obtains the second authentication factor, it can display a first control, detect an operation acting on the first control, and, in response to that operation, start detecting identity authentication information. That is to say, the user can actively trigger the first device to start detecting identity authentication information, so as to acquire the second authentication factor and unlock the device. In this way, the user can decide whether to unlock according to his or her own needs, and the power consumption of the first device can also be reduced.
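The "predetermined number of weak factors" rule above can be sketched as a small state machine. The count `REQUIRED_WEAK_FACTORS` is a hypothetical parameter, not a value from the patent.

```python
REQUIRED_WEAK_FACTORS = 3  # hypothetical: weak inputs needed to unlock

class LockState:
    """Tracks whether a device unlocks via one strong factor or
    an accumulated, predetermined number of weak (first) factors."""

    def __init__(self):
        self.unlocked = False
        self._weak_factor_count = 0

    def submit_strong_factor(self):
        # Identity authentication information meeting the unlock requirement.
        self.unlocked = True

    def submit_weak_factor(self):
        # A first authentication factor; enough of them also unlock the device.
        self._weak_factor_count += 1
        if self._weak_factor_count >= REQUIRED_WEAK_FACTORS:
            self.unlocked = True
```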
  • In some embodiments, the first device may create a restricted execution environment.
  • In the restricted execution environment, the first device allows access only to the resources that were determined to be accessible.
  • The first device can access the first resource in the restricted execution environment in response to the first operation instruction.
  • Alternatively, the first device may record the operations determined to be allowed. That is to say, the first device records which specific access operations it is allowed to perform on which resources or types of resources.
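A restricted execution environment that records allowed (resource, operation) pairs can be sketched as below. The class and method names are hypothetical; this only illustrates the recorded allow-list idea described above.

```python
class RestrictedEnvironment:
    """Allow-list of (resource, operation) pairs a locked device may perform."""

    def __init__(self):
        self._allowed = set()  # records which operations are allowed on which resources

    def grant(self, resource: str, operation: str):
        self._allowed.add((resource, operation))

    def access(self, resource: str, operation: str) -> str:
        if (resource, operation) not in self._allowed:
            raise PermissionError(
                f"{operation} on {resource} is not allowed while locked")
        return f"performed {operation} on {resource}"
```

Any access outside the recorded grants is refused, which is how privilege escalation is avoided while the device stays locked.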
  • In the second aspect, the embodiment of the present application provides a cross-device access control method, including: when the first device is in a locked state, receiving a second operation instruction sent by a third device, where the second operation instruction is used to request access to a third resource of the first device; the first device determines, according to the second operation instruction, the resources that the first device is allowed to access; and if the resources that the first device is allowed to access include the third resource, the first device accesses the third resource in response to the second operation instruction.
  • In this way, the electronic device no longer decides whether to respond to a corresponding operation based solely on whether it is unlocked, but implements finer-grained access control for various resources according to the operation instruction, which enriches the usage scenarios and scope of use of the electronic device.
  • The electronic device can be triggered to perform some operations without going through cumbersome authentication to unlock it, so that the user can operate the electronic device more freely and conveniently.
  • In some embodiments, the first device can determine the resources that it is allowed to access according to the risk level of accessing the third resource; the higher the risk level of accessing the third resource, the fewer resources the first device is allowed to access. The higher the degree of privacy of the third resource, the higher the risk level of accessing it. In this way, the risk of resource access can be fully considered to avoid data leakage and similar situations.
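The cross-device flow above can be sketched as a simple request handler on the first device. The resource names and the fixed allow-list are illustrative assumptions, not from the patent.

```python
# Hypothetical set of low-risk resources a locked first device exposes to
# instructions arriving from another (third) device.
LOCKED_ALLOWED = {"screen_projection", "music_playback"}

def handle_remote_instruction(resource: str) -> str:
    """First device's handling of a second operation instruction while locked."""
    if resource in LOCKED_ALLOWED:
        return f"access granted to {resource}"
    return "refused"
```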
  • the third resource includes: a predefined resource that cannot be accessed by the first device in a locked state.
  • the third resource is the same as the first resource in the first aspect, and reference may be made to related descriptions in the first aspect.
  • In some embodiments, the second operation instruction includes any one of the following: semantics carried by voice, a gesture, a facial expression, or a body posture.
  • In some embodiments, the second operation instruction is a screen projection request.
  • In this scenario, the embodiments of the present application reduce the difficulty and complexity of screen projection and multi-screen interaction, and can bring a better user experience.
  • In some embodiments, the first device may receive a user operation for requesting access to a fourth resource of the first device. If the resources that the first device is allowed to access include the fourth resource, the first device accesses the fourth resource in response to the user operation; if they do not include the fourth resource, the first device refuses to respond to the user operation.
  • In this way, the operations that the first device can perform are limited to a certain range, so that privilege escalation can be avoided and the data security of the first device can be protected.
  • In some embodiments, after accessing the third resource in response to the second operation instruction, the first device may obtain a second authentication factor, where the second authentication factor includes identity authentication information that meets the unlocking requirement of the first device, or a predetermined number of first authentication factors; the first device then switches from the locked state to the unlocked state according to the second authentication factor.
  • When the second authentication factor is a predetermined number of first authentication factors, the user can complete identity authentication by inputting the first authentication factor multiple times, thereby triggering unlocking of the electronic device.
  • After the first device determines the resources that it is allowed to access, but before it obtains the second authentication factor, it can display a first control, detect an operation acting on the first control, and, in response to that operation, start detecting identity authentication information. That is to say, the user can actively trigger the first device to start detecting identity authentication information, so as to acquire the second authentication factor and unlock the device. In this way, the user can decide whether to unlock according to his or her own needs, and the power consumption of the first device can also be reduced.
  • In some embodiments, the first device may create a restricted execution environment.
  • In the restricted execution environment, the first device allows access only to the resources that were determined to be accessible.
  • The first device may access the third resource in the restricted execution environment in response to the second operation instruction.
  • Alternatively, the first device may record the operations determined to be allowed. That is to say, the first device records which specific access operations it is allowed to perform on which resources or types of resources.
  • The embodiment of the present application further provides an electronic device, including: a memory and one or more processors, where the memory is coupled to the one or more processors and is used to store computer program code including computer instructions; the one or more processors invoke the computer instructions to cause the electronic device to execute the method according to the first aspect or any implementation manner of the first aspect.
  • The embodiment of the present application further provides an electronic device, including: a memory and one or more processors, where the memory is coupled to the one or more processors and is used to store computer program code including computer instructions; the one or more processors invoke the computer instructions to cause the electronic device to execute the method according to the second aspect or any implementation manner of the second aspect.
  • the embodiment of the present application provides a communication system, including a first device and a second device, and the first device is configured to execute the method according to the first aspect or any implementation manner of the first aspect.
  • the embodiment of the present application provides a communication system, including a first device and a third device, and the first device is configured to execute the method according to the second aspect or any implementation manner of the second aspect.
  • The embodiment of the present application provides a computer-readable storage medium, including instructions, which, when run on an electronic device, cause the electronic device to execute the method according to the first aspect or any implementation manner of the first aspect.
  • The embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to execute the method of the second aspect or any implementation manner of the second aspect.
  • FIG. 1 is a schematic diagram of a hardware structure of an electronic device provided in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a software structure of the electronic device provided by an embodiment of the present application.
  • FIG. 3 is a structural diagram of a communication system provided by an embodiment of the present application.
  • FIG. 4 is a flowchart of an access control method based on a weak authentication factor provided in an embodiment of the present application.
  • FIG. 5A is a user interface shown when the electronic device 100 is in a locked state according to an embodiment of the present application.
  • FIG. 5B-FIG. 5D show usage scenarios of the electronic device 100 provided in an embodiment of the present application.
  • FIG. 5E-FIG. 5G are user interfaces displayed after the electronic device 100 provided in an embodiment of the present application creates a restricted execution environment.
  • FIG. 6 is a flowchart of a cross-device access control method provided by an embodiment of the present application.
  • FIG. 7A-FIG. 7C are a set of user interfaces involved in the cross-device access control method.
  • FIG. 8A and FIG. 8B are schematic diagrams of the software structure of the electronic device 100 provided by an embodiment of the present application.
  • The terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
  • User interface (UI)
  • The term "user interface (UI)" in the following embodiments of this application is a medium interface for interaction and information exchange between an application program or an operating system and a user; it realizes the conversion between the internal form of information and a form acceptable to the user.
  • The user interface is source code written in a specific computer language such as Java or extensible markup language (XML).
  • The source code of the interface is parsed and rendered on the electronic device, and is finally presented as content that can be recognized by the user.
  • The commonly used form of user interface is the graphical user interface (GUI), which refers to a user interface related to computer operation displayed in a graphical way. It may consist of text, icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, widgets, and other visible interface elements displayed on the display screen of the electronic device.
  • The electronic device has two states: a locked state and an unlocked state.
  • In the locked state, the electronic device can only perform predefined operations and cannot perform operations other than the predefined operations.
  • The locked state can be used to prevent a user's misoperation, or to prevent the electronic device from performing operations other than the predefined operations.
  • In the embodiments of this application, an electronic device performing an operation specifically refers to the electronic device performing an access operation on a resource.
  • The access operation may include, for example, operations such as reading, adding, deleting, writing, modifying, and executing.
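The access-operation vocabulary above can be written down as an enumeration. The split into read-like and mutating operations is an illustrative assumption, not a classification made by the patent.

```python
from enum import Enum, auto

class AccessOp(Enum):
    """Access operations a device may perform on a resource (per the list above)."""
    READ = auto()
    ADD = auto()
    DELETE = auto()
    WRITE = auto()
    MODIFY = auto()
    EXECUTE = auto()

# Hypothetical grouping: operations that leave the resource's data unchanged.
READ_LIKE = {AccessOp.READ, AccessOp.EXECUTE}

def is_mutating(op: AccessOp) -> bool:
    return op not in READ_LIKE
```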
  • The resources in the electronic device may include one or more of the following: software resources, hardware resources, and peripherals or peripheral resources of the electronic device. Among them:
  • Hardware resources are related to the hardware configured on the electronic device and may include, for example, the electronic device's camera, sensors, audio devices, display screen, motor, flash light, and the like.
  • Software resources are related to the software configured on the electronic device and may include, for example, the application programs (applications, APPs) or service components installed on the electronic device, available memory resources, computing capabilities (such as beautification algorithm capabilities and audio/video codec capabilities), network capabilities, device connection capabilities, device discovery capabilities, data transmission capabilities, and so on.
  • The software resources may include system resources or third-party resources, which are not limited here.
  • Peripherals refer to devices that are connected to the electronic device and are used to transmit, transfer, and store data and information. Peripherals may include, for example, accessory devices of the electronic device, such as a mouse, an external display screen, a Bluetooth headset, a keyboard, and smart watches, smart bands, and other devices managed by the electronic device. Peripheral resources may include hardware resources and software resources, for which reference may be made to the related description above.
  • The predefined operations may be predefined by the producer of the electronic device and cannot be modified.
  • A producer of an electronic device may include a manufacturer, a supplier, a vendor, and the like.
  • A manufacturer may refer to a company that processes and manufactures electronic equipment from self-made or purchased parts and raw materials.
  • A supplier may refer to a company that provides the complete machine, raw materials, or parts of the electronic device. For example, the manufacturer of Huawei's "Mate" series of phones is Huawei Technologies Co., Ltd.
  • The predefined operations do not involve the user's private data and only include some basic or commonly used operations.
  • The predefined operations may include, for example, starting or closing some basic applications, such as starting the camera application, turning on the flashlight, opening the calculator, scanning a QR code, turning Bluetooth off/on, turning the cellular signal off/on, turning the Wi-Fi (wireless fidelity) signal on/off, and so on. Note that after starting the camera application in the locked state, the electronic device cannot enter the gallery or photo album through the camera application.
  • Other operations other than the predefined operations may include: operations involving user privacy data, and some operations not involving user privacy data.
  • the user's private data may include: user data stored in various applications, such as user's photos, videos, audio, contact information, browsing records, shopping records, and so on.
  • Operations involving user privacy data may include, for example: enabling or disabling the gallery, photo album, address book, shopping applications, instant messaging applications, or memo, and sharing user data through the background, Wi-Fi, USB, Bluetooth, and so on.
  • Some operations that do not involve user privacy data may include, for example: starting a navigation application without reading user data, starting a browser without reading browsing records, starting a video application without reading browsing records, and so on. The navigation application may also be called by other names, such as a map application.
  • in the unlocked state, the electronic device can perform operations other than the predefined operations.
  • for example, in the unlocked state, the electronic device can perform operations involving user privacy data, such as launching a gallery or photo album, launching a shopping application and viewing shopping records, launching an instant messaging application, viewing memos, viewing navigation data, viewing browser browsing records, and so on.
  • the locked state may also be referred to by other terms, such as a screen-locked state.
  • the unlocked state may also be referred to by other terms, which are not limited here.
  • the terms locked state and unlocked state will be used uniformly in the following description.
  • the electronic device can preset a variety of identity authentication methods, receive identity authentication information corresponding to a preset identity authentication method in the locked state, and, after confirming that the input identity authentication information meets the identity authentication standard, unlock and enter the unlocked state.
  • Authentication is a technique used to confirm a user's identity.
  • identity authentication methods may include: password authentication, graphic authentication, and biometric authentication.
  • Different users can be distinguished through different identity authentication information.
  • the electronic device may pre-store passwords, graphics, or biometric features; when the user inputs the pre-stored password or graphic, or enters a biometric feature whose matching degree with the pre-stored biometric feature reaches a certain value, the electronic device may confirm that the user is the user whose information was pre-stored.
  • the value of the matching degree can be preset. The higher the value of the matching degree, the higher the accuracy of the biometric authentication method.
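The matching-degree check described above can be sketched as follows; the similarity metric, the byte-string representation, and the 0.9 threshold are illustrative assumptions, not part of this disclosure:

```python
def match_score(sample: bytes, template: bytes) -> float:
    """Toy similarity metric in [0, 1]; a real device would use a
    dedicated biometric matching algorithm."""
    if not template:
        return 0.0
    matches = sum(a == b for a, b in zip(sample, template))
    return matches / len(template)

def authenticate(sample: bytes, template: bytes, threshold: float = 0.9) -> bool:
    # The higher the preset threshold, the higher the accuracy of the
    # biometric authentication method (fewer false accepts).
    return match_score(sample, template) >= threshold
```

Raising `threshold` trades more false rejects for fewer false accepts, which is the accuracy trade-off the text refers to.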
  • the password can be a string of numbers, letters, and symbols.
  • Biological characteristics are divided into two categories: physical characteristics and behavioral characteristics.
  • Physical characteristics include: face, voiceprint, fingerprint, palm shape, retina, iris, body odor, face shape, blood pressure, blood oxygen, blood sugar, respiration rate, heart rate, one cycle of an ECG waveform, deoxyribonucleic acid (DNA), and so on.
  • Behavioral features include: signature, body posture (such as walking gait), etc.
  • each of the above-mentioned identity authentication methods has a corresponding authentication capability level (ACL).
  • the accuracy with which an electronic device extracts information depends on the current state of technology. For example, electronic devices extract passwords and fingerprints with very high accuracy, but extract voiceprints and signatures with relatively low accuracy. For the same information, when different electronic devices use different algorithms, the accuracy of the information they extract with the identity authentication method also differs.
  • the accuracy of an identity authentication method can be judged according to the false accept rate (FAR), false reject rate (FRR), and spoof accept rate (SAR) when the identity authentication method is used.
  • the higher the accuracy of an identity authentication method, the higher its ACL.
  • the ACLs of password authentication/graphic authentication, face authentication/fingerprint authentication, voiceprint authentication, and body posture authentication decrease in that order.
  • ACLs can be divided into multiple levels with different granularities, which are not limited here. For example, ACLs can be divided into four levels.
  • an electronic device is generally only unlocked using an identity authentication method with a higher ACL, rather than an identity authentication method with a lower ACL.
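The ACL ordering and the unlock rule above can be sketched as follows; the numeric level values and the minimum unlock level are illustrative assumptions based on the four-level division mentioned earlier:

```python
# Illustrative ACL values: a higher number means a higher
# authentication capability level.
ACL = {
    "password": 4, "graphic": 4,   # highest
    "face": 3, "fingerprint": 3,
    "voiceprint": 2,
    "body_posture": 1,             # lowest
}

def can_unlock(method: str, min_unlock_acl: int = 3) -> bool:
    # An electronic device is generally unlocked only with a
    # higher-ACL identity authentication method.
    return ACL.get(method, 0) >= min_unlock_acl
```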
  • the identity authentication method used for unlocking the electronic device is called the first authentication method; the identity authentication method other than the identity authentication method used for unlocking the electronic device is called the second authentication method.
  • the first authentication mode may be independently set by the electronic device or the manufacturer of the electronic device, which is not limited here.
  • the electronic device can be set to be unlocked using password authentication, graphic authentication, fingerprint authentication, and face authentication instead of voiceprint authentication, heart rate authentication, and body gesture authentication.
  • when the electronic device is in the locked state, it can receive the identity authentication information input by the user, and after determining that the input identity authentication information meets the standard of the first authentication method, it is unlocked and enters the unlocked state.
  • to input standard-compliant identity authentication information, users need to perform cumbersome operations. For example, the user needs to enter the preset password or graphic exactly, point the face at the front camera of the electronic device within a certain distance and keep still, press the fingerprint recognition sensor with a clean finger and keep it still, and so on. That is to say, the user can only unlock the device through cumbersome, or even repeated, authentication operations, which wastes a lot of time and power of the electronic device.
  • the following embodiments of the present application provide access control methods based on weak authentication factors.
  • in this method, when the electronic device is in the locked state, after the first operation instruction and the weak authentication factor are acquired, a restricted execution environment can be created according to the risk level of the operation corresponding to the first operation instruction and the security level of the weak authentication factor, and the corresponding operation is executed in the restricted execution environment in response to the first operation instruction.
  • the correspondence between the first operation instruction and its corresponding operation is preset by the electronic device.
  • the first operation instruction may be directly received by the electronic device, or may be obtained by other devices and sent to the electronic device.
  • For the specific content of the first operation instruction, reference may be made to the detailed description of subsequent method embodiments, which will not be repeated here.
  • the first operation instruction is used to request the electronic device to perform operations other than the above-mentioned predefined operations in the locked state.
  • predefined operations and operations other than the predefined operations, please refer to the relevant description above.
  • Weak authentication factors refer to identity authentication information that does not meet the requirements for unlocking electronic devices.
  • Weak authentication factors may include the following two categories: 1. Identity authentication information that is lower than the standard required by the first authentication method. 2. Identity authentication information that meets the standards required by the second authentication method.
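The two categories of weak authentication factors can be sketched as follows; the method names, the score representation, and the threshold values are illustrative assumptions, not part of this disclosure:

```python
from dataclasses import dataclass

@dataclass
class AuthSample:
    method: str   # e.g. "fingerprint" (first auth) or "voiceprint" (second auth)
    score: float  # matching degree with pre-stored information, in [0, 1]

# Assumed split: methods usable for unlocking vs. the rest.
FIRST_AUTH_METHODS = {"password", "graphic", "fingerprint", "face"}
FIRST_AUTH_STANDARD = 0.9   # illustrative unlock standard
SECOND_AUTH_STANDARD = 0.8  # illustrative second-method standard

def is_weak_factor(s: AuthSample) -> bool:
    if s.method in FIRST_AUTH_METHODS:
        # Category 1: below the standard required by the first auth method.
        return s.score < FIRST_AUTH_STANDARD
    # Category 2: meets the standard required by the second auth method.
    return s.score >= SECOND_AUTH_STANDARD
```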
  • the weak authentication factor may be directly collected by the electronic device, or may be sent to the electronic device after being collected by other devices. For the specific content of the weak authentication factor, refer to the detailed description of the subsequent method embodiments, which will not be repeated here.
  • the electronic device may respectively receive the first operation instruction and the weak authentication factor.
  • the electronic device may receive the first operation instruction and the weak authentication factor at the same time.
  • a restricted execution environment refers to an execution environment in which the operations that can be performed and the resources that can be accessed are restricted.
  • Execution environments may include hardware environments and software environments.
  • An execution environment can be a sandbox or a function domain containing multiple functions.
  • in a restricted execution environment, an electronic device can only perform a specified part of operations, and cannot perform other operations outside this part.
  • in a restricted execution environment, an electronic device can only access part of its own resources, and cannot access other resources outside this part.
  • the restricted execution environment in this embodiment of the present application may also be referred to by other terms, such as a limited execution environment or a restricted domain, which are not limited here.
  • the electronic device may create a restricted execution environment according to the risk level of the operation corresponding to the first operation instruction and the security level of the weak authentication factor.
  • for the risk level of the operation, the security level of the weak authentication factor, the manner of creating a restricted execution environment, and so on, please refer to the relevant description in the subsequent method embodiments.
  • the electronic device no longer decides whether to respond to the corresponding operation based on whether it is unlocked, but decides whether to perform the operation according to the risk level of the operation instruction and the security level of the weak authentication factor.
  • the electronic device can be triggered to perform operations other than the predefined operations in the locked state without going through cumbersome authentication to unlock the electronic device, so that the user can manipulate the electronic device more freely and conveniently.
  • electronic devices no longer simply divide resources into resources accessible by predefined operations and resources accessible by other operations, but instead implement more fine-grained access control for various resources.
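A minimal sketch of the access decision described above: instead of unlock-or-nothing, the device compares the operation's risk level with the weak factor's security level and, when permitted, responds inside a restricted execution environment. The numeric level scales and the resource names are illustrative assumptions:

```python
def create_restricted_environment(risk_level: int, security_level: int):
    """Return the set of resources the restricted execution environment may
    access, or None when the weak authentication factor is insufficient."""
    if security_level < risk_level:
        return None  # insufficient: the device would prompt the user to unlock
    # A larger security margin grants broader (but still limited) access.
    allowed = {"camera", "flashlight"}
    if security_level - risk_level >= 1:
        allowed.add("navigation")
    return allowed
```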
  • the embodiment of the present application also provides a cross-device access control method, which is applied to a communication system including two electronic devices.
  • one electronic device may send a second operation instruction to another electronic device, and the other electronic device may create a limited execution environment according to the risk level of the operation corresponding to the second operation instruction, and execute a corresponding operation in response to the second operation instruction in the limited execution environment.
  • the correspondence between the second operation instruction and its corresponding operation is preset by the electronic device.
  • the second operation instruction is an operation instruction sent by other electronic devices, for example, it may be a screen projection request or the like.
  • For the specific content of the second operation instruction, reference may be made to the detailed description of subsequent method embodiments, which will not be repeated here.
  • the second operation instruction is used to request the electronic device to perform other operations than the above-mentioned predefined operations in the locked state.
  • the electronic device no longer decides whether to respond to user operations based on whether it is unlocked, but decides based on the risk level of the operation instructions received across devices, which can achieve finer-grained access control and enriches the usage scenarios and scope of electronic equipment.
  • the electronic device can be triggered to perform operations other than the predefined operations in the locked state without going through cumbersome authentication to unlock the electronic device, so that the user can manipulate the electronic device more freely and conveniently.
  • electronic devices no longer simply divide resources into resources accessible by predefined operations and resources accessible by other operations, but also implement more fine-grained access control for various resources.
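The cross-device flow above can be sketched as follows; the instruction names, the risk-level threshold, and the resource sets are illustrative assumptions:

```python
def handle_second_operation(instruction: str, risk_level: int,
                            max_allowed_risk: int = 2):
    """Receiving device: size a limited execution environment by the risk
    level of the operation the second operation instruction requests
    (e.g. a screen-projection request)."""
    if risk_level > max_allowed_risk:
        return None  # too risky for the locked state; prompt the user to unlock
    # Grant only the resources this instruction needs (illustrative mapping).
    return {"screen_projection": {"display"}}.get(instruction, set())
```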
  • after the electronic device creates a restricted execution environment, if it receives a user operation requesting an operation beyond those permitted by the restricted execution environment, the electronic device can prompt the user to unlock. After being unlocked under the trigger of the user, it can respond to the previously received user operation and perform the corresponding operation.
  • the user can actively trigger the electronic device to be unlocked. After the electronic device is unlocked, various operations can be performed in response to user operations.
  • the electronic device 100 provided in the embodiment of the present application is first introduced.
  • the electronic device 100 may be of various types, and the embodiment of the present application does not limit the specific type of the electronic device 100 .
  • the electronic device 100 includes a mobile phone, and may also include a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, a large-screen TV, a smart screen, a wearable device, an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a car head unit, a smart headset, a game console, and may also include Internet of Things (IOT) devices or smart home devices such as smart water heaters, smart lamps, smart air conditioners, cameras, etc.
  • the electronic device 100 may also include non-portable terminal devices such as a laptop with a touch-sensitive surface or a touch panel, a desktop computer with a touch-sensitive surface or a touch panel, and the like.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can call it directly from the memory. This avoids repeated access and reduces the waiting time of the processor 110, thereby improving system efficiency.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, convert it into electromagnetic wave and radiate it through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide solutions for wireless communication applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , demodulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light emitting diodes (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • light is transmitted through the lens to the photosensitive element of the camera, where the optical signal is converted into an electrical signal; the photosensitive element transmits the electrical signal to the ISP for processing, and the ISP converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (non-volatile memory, NVM).
  • the random access memory can be directly read and written by the processor 110, and can be used to store executable programs (such as machine instructions) of an operating system or other running programs, and can also be used to store data of users and application programs.
  • the non-volatile memory can also store executable programs and data of users and application programs, etc., and can be loaded into the random access memory in advance for the processor 110 to directly read and write.
  • the external memory interface 120 can be used to connect an external non-volatile memory, so as to expand the storage capacity of the electronic device 100 .
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
  • Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • Electronic device 100 can listen to music through speaker 170A, or listen to hands-free calls.
  • Receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the receiver 170B can be placed close to the human ear to receive the voice.
  • the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • the user can input a sound signal into the microphone 170C by speaking close to it.
  • the electronic device 100 may be provided with at least one microphone 170C. In some other embodiments, the electronic device 100 may be provided with two microphones 170C, which may also implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
  • the earphone interface 170D is used for connecting wired earphones.
  • the earphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • OMTP open mobile terminal platform
  • CTIA cellular telecommunications industry association of the USA
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensor 180A may be disposed on display screen 194 .
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view short messages is executed. When a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon of the short message application, the instruction of creating a new short message is executed.
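The intensity-dependent dispatch described above can be sketched as follows; the threshold value and the returned action strings are illustrative, not part of this disclosure:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative; a real device calibrates this

def on_touch_message_icon(intensity: float) -> str:
    """Map a touch on the short message application icon to an operation
    instruction based on touch operation intensity."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view short messages"
    return "create new short message"
```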
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
  • the touch sensor 180K is also called “touch device”.
  • the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the internal memory 121 is used to store predefined operations that the electronic device can perform in a locked state. Specifically, the internal memory 121 can record resources that can be accessed by the electronic device in a locked state, and specific access operations (such as modifying, reading, etc.) that can be performed on the resources. In some embodiments of the present application:
  • the internal memory 121 can be used to store standard identity authentication information of one or more users.
  • the identity authentication information is used to identify the user, and may include identity authentication information corresponding to the first authentication method, or may include identity authentication information corresponding to the second authentication method.
• these identity authentication information can include: password, pattern, face, voiceprint, fingerprint, palm shape, retina, iris, body odor, face shape, blood pressure, blood oxygen, blood sugar, breathing rate, heart rate, a single-cycle ECG waveform, deoxyribonucleic acid (DNA), signature, body posture (such as walking gait), and the like.
  • the receiver 170B, the microphone 170C, the display screen 194, the camera 193, the button 190, the sensor module 180 (such as the pressure sensor 180A, the gyro sensor 180B), the earphone connected to the earphone interface 170D, etc. can be used to receive the first operation command input by the user.
  • the mobile communication module 150 and the wireless communication module 160 can be used to receive the first operation instruction sent by other devices, and can also be used to receive the weak authentication factor sent by other devices.
  • the display screen 194, camera 193, fingerprint sensor 180H, receiver 170B, microphone 170C, optical sensor, electrodes, etc. can be used to collect weak authentication factors input by the user.
  • the display screen 194 can be used to collect passwords, graphics, and signatures input by the user.
  • the camera 193 is used to collect the face, iris, retina, face shape, body posture, etc. input by the user.
  • the fingerprint sensor 180H may be used to collect a fingerprint input by a user.
  • the receiver 170B and the microphone 170C can be used to collect voice input by the user.
  • the optical sensor can be used to collect PPG signals (such as blood pressure, blood oxygen, blood sugar, respiration rate, heart rate, one-cycle ECG waveform, etc.) using photoplethysmography (Photoplethysmography, PPG) technology.
  • the electrodes configured on the electronic device 100 can be used to collect electrocardiogram waveforms within one cycle by using electrocardiogram (ECG) technology.
  • the processor 110 may analyze the weak authentication factors acquired by the above modules to determine the security level of the weak authentication factors.
  • the processor is further configured to determine the risk level of the operation corresponding to the first operation instruction.
• the processor 110 is further configured to create a restricted execution environment according to the risk level of the operation corresponding to the first operation instruction and the security level of the weak authentication factor, and to schedule each module of the electronic device 100 to perform corresponding operations in response to the first operation instruction in the restricted execution environment.
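The decision above can be sketched as a simple policy check. This is a minimal sketch under stated assumptions, not the patent's implementation: the numeric levels, the resource names, and the rule "grant access only when the security level covers the risk level" are all illustrative.

```python
# Illustrative sketch: a restricted execution environment is granted only when
# the weak authentication factor's security level covers the risk level of the
# requested operation. Levels and resource names are hypothetical.
def create_restricted_environment(risk_level: int, security_level: int):
    """Return the set of resources the instruction may access, or None."""
    if security_level >= risk_level:
        # Higher security levels unlock more resources within the
        # restricted environment.
        allowed = {"music_playback"}
        if security_level >= 2:
            allowed.add("navigation")
        return allowed
    return None  # authentication too weak for this operation
```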
  • the mobile communication module 150 and the wireless communication module 160 in the electronic device 100 can be used to receive the second operation instruction sent by other devices.
• the processor 110 may be configured to determine the risk level of the operation corresponding to the second operation instruction. Afterwards, the processor 110 is further configured to create a restricted execution environment according to the risk level of the operation corresponding to the second operation instruction, and to schedule each module of the electronic device 100 to execute corresponding operations in response to the second operation instruction in the restricted execution environment.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 .
  • FIG. 2 is a block diagram of the software structure of the electronic device 100 according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the Android system is divided into four layers, which are respectively the application program layer, the application program framework layer, the Android runtime (Android runtime) and the system library, and the kernel layer from top to bottom.
  • the application layer can consist of a series of application packages.
  • the application package may include applications such as voice assistant, camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include window managers, content providers, view systems, phone managers, resource managers, notification managers, and so on.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • Said data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide communication functions of the electronic device 100 . For example, the management of call status (including connected, hung up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
• the notification manager can also present a notification that appears on the top status bar of the system in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, it can prompt text information in the status bar, issue a prompt sound, vibrate the electronic device, flash the indicator light, and so on.
  • the Android Runtime includes core library and virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
• the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • a system library can include multiple function modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
  • the communication system 10 provided by the embodiment of the present application is introduced below.
  • the communication system 10 includes an electronic device 100 , and may further include an electronic device 200 , or an electronic device 300 .
  • the number of the electronic device 100, the electronic device 200, or the electronic device 300 may be one or more.
  • the embodiment of the present application does not limit the specific type of the electronic device 200 or the electronic device 300 .
  • the type of the electronic device 200 or the electronic device 300 reference may be made to the above description of the type of the electronic device 100 .
  • the electronic device 100 may be a smart phone, and the electronic device 200 may be a smart watch, a smart bracelet, an earphone, and the like.
  • the electronic device 100 may be a smart screen, a large-screen TV, a notebook computer, etc.
  • the electronic device 300 may be a smart phone.
  • the multiple electronic devices in the communication system 10 may be configured with different software operating systems (operating system, OS), or may all be configured with the same software operating system.
• Operating systems include but are not limited to HarmonyOS and the like, where HarmonyOS is Huawei's Hongmeng system.
  • a communication connection is established between the electronic device 100 and the electronic device 200 , or between the electronic device 100 and the electronic device 300 .
  • the communication connection may include but not limited to: wired connection, wireless connection such as Bluetooth (bluetooth, BT) connection, wireless local area network (wireless local area networks, WLAN) such as wireless fidelity point to point (wireless fidelity point to point, Wi-Fi P2P) connection, a near field communication (near field communication, NFC) connection, an infrared technology (infrared, IR) connection, and a remote connection (such as a connection established through a server), etc.
  • any two electronic devices in the communication system 10 can be connected by logging in with the same account.
  • two electronic devices can log in to the same Huawei account, and remotely connect and communicate through the server.
• Any two electronic devices can also log in to different accounts but connect through binding. After an electronic device logs in to an account, it can be bound in the device management application to other electronic devices that are logged in with different accounts or not logged in to any account, and then these electronic devices can communicate with each other through the device management application.
  • Any two electronic devices can also establish a connection by scanning a QR code, touching by near field communication (NFC), searching for Bluetooth devices, etc.
  • the electronic devices in the communication system 10 may also be connected and communicate in combination with any of the above methods, which is not limited in this embodiment of the present application.
  • the electronic device 200 may be configured to receive a user operation carrying a first operation instruction, and then send instruction information of the user operation to the electronic device 100 .
  • the electronic device 200 may receive a voice command input by the user, and then send the voice command to the electronic device 100 .
• the electronic device 200 may be configured to receive a user operation carrying a first operation instruction, then identify the first operation instruction carried by the user operation, and send the first operation instruction to the electronic device 100 .
• the electronic device 200 may receive the voice instruction "play music on the mobile phone" input by the user, recognize that the intention of the voice instruction is to trigger the mobile phone to play music, and then send the first operation instruction requesting the electronic device 100 to play music to the electronic device 100 .
  • the electronic device 200 may be configured to receive a user operation carrying a weak authentication factor, and then send instruction information of the user operation to the electronic device 100 .
• when the electronic device 200 is an earphone connected to the electronic device 100 , it may receive a voice command with a voiceprint input by the user, and then send the voice command with the voiceprint to the electronic device 100 .
• the electronic device 200 may be configured to receive a user operation carrying a weak authentication factor, then identify the weak authentication factor carried by the user operation, and send the weak authentication factor to the electronic device 100 .
• when the electronic device 200 is a smart watch connected to the electronic device 100, it can receive a voice command with a voiceprint input by the user, recognize the voiceprint carried by the voice command, and then send the voiceprint information to the electronic device 100 .
  • the electronic device 300 may be configured to receive a user operation, then identify the intention of the user operation, generate a second operation instruction according to the intention of the user operation, and then send the second operation instruction to the electronic device 100 .
  • the electronic device 300 may receive a user operation for casting the screen to the smart screen, and then the electronic device 300 may generate a screen projection request (ie, a second operation instruction) , and send the screencasting request to the smart screen.
  • the communication system 10 shown in FIG. 3 is only an example. In a specific implementation, the communication system 10 may further include more terminal devices, which is not limited here. The communication system 10 may also be called other terms such as a distributed system, which is not limited here.
  • FIG. 4 is a schematic flowchart of an access control method based on a weak authentication factor provided in an embodiment of the present application.
  • the method may include the following steps:
  • Step S101 when the electronic device 100 is in a locked state, acquire a first operation instruction and a weak authentication factor.
  • the electronic device 100 may have two states: a locked state and an unlocked state.
• For the specific definition of the locked state and the unlocked state, refer to the relevant description above.
• When the electronic device 100 is in the locked state, the display screen may be in the on-screen state or the off-screen state, which is not limited here.
  • the electronic device 100 may enter the locked state by default when no user operation is received for a long time, or may enter the locked state in response to a user operation (such as pressing a power button). For example, refer to FIG. 5A , which shows a user interface 50 displayed when the electronic device 100 is in a locked state.
  • the correspondence between the first operation instruction and the operation it requests the electronic device 100 to perform may be preset by the electronic device 100 , which is not limited here.
  • the resource in the electronic device 100 requested to be accessed by the first operation instruction may be referred to as a first resource.
  • the first resource may include one or more resources, which is not limited here.
  • the first operation instruction is used to request the electronic device 100 to perform operations other than the predefined operations in the locked state. That is to say, the first operation instruction is used to request access to a certain resource in the electronic device 100, and the access to this resource cannot be performed by the electronic device in a locked state.
  • predefined operations that can be executed in a locked state are pre-stored in the electronic device 100 . That is, the electronic device 100 records the resources that can be accessed in the locked state, and the specific access operations (such as reading, adding, deleting, writing, modifying, etc.) that can be performed on the resources.
  • predefined operations refer to the previous descriptions.
  • the form of the first operation instruction is not limited in this embodiment of the present application.
  • the first operation instruction may include, for example but not limited to: voice-borne semantics, gestures, facial expressions, signatures, body gestures, mouth shapes, operations of pressing buttons or shaking operations, and the like.
• gestures, facial expressions, signatures, body postures, and mouth shapes can be static information at a point in time, such as a gesture at a certain point in time, or dynamic information within a period of time, such as mouth-shape changes within a period of time.
  • the electronic device 100 may acquire the first operation instruction in the following ways:
  • the electronic device 100 directly receives a user operation carrying a first operation instruction, and extracts the first operation instruction from the user operation
  • the electronic device 100 may start to receive user operations input by the user periodically, or under certain trigger conditions, and extract the first operation instruction therefrom.
  • the trigger conditions may include various types, for example, it may include after the voice assistant is started, after the electronic device 100 detects a wrist-raising operation, after the electronic device 100 detects an operation of tapping the display screen, and so on.
  • the electronic device 100 can continuously run the wake-up word recognition program with low power consumption, and start the voice assistant after detecting the wake-up word. In this way, the power consumption of the electronic device 100 can be reduced by starting to receive the user's operation and extracting the first operation instruction from it when the trigger condition is detected.
  • the user operation carrying the first operation instruction may have various forms. For example, it may include speech carrying semantics, one or more images including gestures/facial expressions/body gestures/mouth shapes, sliding operations including signatures, operations of pressing buttons, operations of shaking the electronic device 100, and so on.
  • the electronic device 100 may use a corresponding module to receive the user operation carrying the first operation instruction.
• the receiver 170B and the microphone 170C can receive voices carrying semantics
  • the display screen 194 can receive sliding operations including signatures
• images including gestures can be captured through the camera 193.
  • the operation of pressing a key is received through the key 190
  • the shaking operation is received through the gyro sensor 180B, and so on.
  • the electronic device 100 may identify or extract the first operation instruction from the received user operation. For example, the electronic device 100 may extract semantics from speech, extract gestures/facial expressions/body gestures/mouth shapes from one or more images, extract signatures or gestures from sliding operations, and so on.
  • the electronic device 100 may recognize the first operation instruction included in the user operation locally or through a network.
  • the electronic device 100 can locally use the processor 110 to recognize the semantics in speech, recognize gestures/facial expressions/body gestures in images, etc., and can also upload the speech or images to the network, through a network server or other devices Recognize semantics in speech, gestures/facial expressions/body gestures/mouth shapes in images, etc.
  • Voice carries semantics, and different voices can carry different semantics.
  • the user can input different operation instructions by inputting different voices.
  • the voice “navigate to home” can be used to request the electronic device to start a navigation application and navigate to the location of home;
  • the voice “open photo album” can be used to request the electronic device to start a gallery application.
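The voice examples above can be sketched as a simple lookup from recognized speech content to an operation instruction. This is a minimal sketch with assumed table entries and instruction names; real recognition of semantics (or gestures and mouth shapes) would use speech or vision models running locally or over the network, as the surrounding text describes.

```python
# Hypothetical mapping from recognized speech content to a first operation
# instruction; the entries and instruction names are illustrative assumptions.
INSTRUCTION_TABLE = {
    "navigate to home": "start_navigation_to_home",
    "open photo album": "start_gallery",
    "play music": "start_music_playback",
}

def extract_first_operation_instruction(recognized_text: str):
    """Look up the operation instruction for recognized voice content.

    Returns None when the recognized content matches no known instruction.
    """
    return INSTRUCTION_TABLE.get(recognized_text.strip().lower())
```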
  • a voice assistant is an application program installed in an electronic device to support users to control the electronic device through voice commands.
  • the voice assistant is in a dormant state, and the user can wake up or start the voice assistant before using the voice assistant. Only after the voice assistant is woken up, the electronic device can receive and recognize the voice command input by the user.
  • the voice used to wake up the voice assistant may be called a wake-up word, for example, the wake-up word may be the voice "small E small E".
  • the voice assistant in the electronic device 100 may be in the wake-up state for a long time, and does not need to be woken up by a wake-up word.
  • Voice assistant is just a word used in this application, and it can also be called other words such as smart assistant, which is not limited here.
  • the gesture may be a gesture of touching an electronic device, such as a sliding gesture or a clicking gesture of touching a display screen.
  • the gesture may also be a hovering gesture that does not touch the electronic device, such as a gesture of opening a palm above a display screen or a gesture of making a fist, and the like.
• Air gestures can also be called hover gestures, remote gestures, and so on.
  • the user can input different operation instructions by inputting different gestures. For example, a gesture of opening a palm above the display screen can be used to request the electronic device to start a navigation application and navigate to a home location; a gesture of making a fist above the display screen can be used to request the electronic device to start a gallery application.
  • Facial expressions may include, for example, winking expressions, mouth opening expressions, and the like.
  • the user can input different operation instructions by inputting different facial expressions.
  • Physical gestures may include, for example, nodding, shaking the head, swinging the arms, squatting, and the like.
  • the user can input different operation instructions by inputting different body postures.
  • a physical gesture of nodding may be used to request the electronic device to play music
  • a physical gesture of shaking the head may be used to request the electronic device to pause playing music.
• There are many ways to press buttons and shake the electronic device, and the user can press a button or shake the electronic device in different ways to input different operation instructions. For example, double-clicking the power button can be used to request the electronic device to play music, and shaking the electronic device twice can be used to request the electronic device to pause playing music.
  • mouth shapes can be used to indicate different actions. For example, lip changes corresponding to the voice "play music" over a period of time can be used to request an electronic device to play music.
  • Using the mouth shape to input the first operation instruction can facilitate the user to control the electronic device through lip language, which enriches the usage scenarios and range of the electronic device.
  • the first operation instruction may also be implemented in other forms, for example, it may also be a sound of snapping fingers, etc., which is not limited here.
  • the electronic device 100 may establish a communication connection with other devices such as the electronic device 200, and the manner in which the electronic device 100 establishes a communication connection with other electronic devices may refer to related descriptions in FIG. 3 .
  • the user operations received by other devices carry the first operation instruction.
  • the timing and method for other devices to receive the user operation carrying the first operation instruction are the same as the timing and method for the electronic device 100 to receive the user operation carrying the first operation instruction in the above-mentioned first method, please refer to the relevant description.
  • the indication information of the user operation sent by other devices may be the user operation itself, or other indication information of the user operation.
• when the electronic device 200 is an earphone connected to the electronic device 100 , it may receive a voice input by the user including semantics, and then send the voice to the electronic device 100 .
• when the electronic device 200 is a camera connected to the electronic device 100 , it may collect images input by the user including gestures/facial expressions/body postures, and then send the images to the electronic device 100 .
• when the electronic device 200 is a smart bracelet connected to the electronic device 100 , it may receive a pressing operation on the power button, and then send instruction information of the pressing operation to the electronic device 100 .
  • the manner in which the electronic device 100 extracts the first operation instruction from the indication information of the user operation is the same as the manner in which the electronic device 100 extracts the first operation instruction from the received user operation in the first form, and reference may be made to related descriptions.
  • other devices such as the electronic device 200 may be regarded as peripheral devices or accessory devices of the electronic device 100 .
  • the electronic device 200 may select the electronic device 100 by default, or may send instruction information for the user's operation to the electronic device 100 according to the electronic device 100 selected by the user.
  • the manner in which the user selects the electronic device 100 is not limited, for example, it may be through voice or a selection operation on a user interface.
  • the electronic device 200 may detect the voice instruction "play music using the mobile phone", and then send the voice to the mobile phone mentioned in the voice instruction (that is, the electronic device 100).
  • the other device receives the user operation carrying the first operation instruction, extracts the first operation instruction from the user operation, and sends the first operation instruction to the electronic device 100
  • the electronic device 100 may establish a communication connection with other devices such as the electronic device 200, and the manner in which the electronic device 100 establishes a communication connection with other electronic devices may refer to related descriptions in FIG. 3 .
  • Other devices such as the electronic device 200 may first receive the user operation carrying the first operation instruction, identify the first operation instruction included in the user operation, and then send the first operation instruction to the electronic device 100 .
  • the other device receives the user operation carrying the first operation instruction, which is similar to the electronic device 100 receiving the user operation carrying the first operation instruction in the above first form, and reference may be made to related descriptions.
• the manner in which other devices identify the first operation instruction contained in the received user operation is the same as the manner in which the electronic device 100 recognizes the first operation instruction contained in the user operation in the first form above; reference may be made to the related description.
  • the electronic device 200 may receive the voice input by the user, then recognize the semantics of the voice, and then send the semantic information to the electronic device 100 .
• the electronic device 200 collects an image containing gestures/facial expressions/body postures input by the user, can recognize the gestures/facial expressions/body postures in the image, and then send the gesture/facial expression/body posture information to the electronic device 100 .
  • the electronic device 200 may select the electronic device 100 by default, or may send the first operation instruction to the electronic device 100 according to the electronic device 100 selected by the user.
  • Weak authentication factors refer to identity authentication information that does not meet the requirements for unlocking electronic devices.
  • identity authentication information may include passwords, graphics, and biometric features.
  • identity authentication information please refer to the related description above.
  • the identity authentication information that does not meet the requirements for unlocking the electronic device, that is, the weak authentication factor may include the following two types:
  • the first authentication mode is an identity authentication mode with a higher ACL.
  • the first authentication mode may be preset by the electronic device or the manufacturer of the electronic device, and reference may be made to the relevant description above.
  • the first authentication method may include password authentication, graphic authentication, fingerprint authentication, face authentication, and the like.
  • the electronic device can pre-store the user's identity authentication information, which is used for subsequent unlocking using the corresponding first authentication method. For example, when the first authentication method includes password authentication, the electronic device may prestore one or more passwords. When the first authentication method includes pattern authentication, the electronic device may prestore one or more patterns. When the first authentication method includes biometric authentication, the electronic device may prestore one or more biometrics, such as fingerprints, faces, and the like.
  • Identity authentication information that meets the standards required by the first authentication method may include, for example: a password or pattern pre-stored in the electronic device, or a biometric feature that matches a pre-stored biometric feature (eg, fingerprint, face, etc.) to a first value.
  • the electronic device can switch from a locked state to an unlocked state after receiving identity authentication information that meets the standards required by its first authentication method.
  • the first value can be preset.
• Identity authentication information that is lower than the standard required by the first authentication method may include, for example: a biometric feature whose matching degree with a pre-stored biometric feature is lower than the first value, or a password or pattern whose similarity to a password or pattern pre-stored in the electronic device reaches a certain value.
  • the user can input identity authentication information that is lower than the standards required by the first authentication method without cumbersome operations or multiple operations.
• the user can input a pattern similar to the preset pattern, point the face at the camera of the electronic device from a distance without keeping still, press the location of the fingerprint recognition sensor with a finger that has water stains, or point the finger at the camera, and so on.
  • this can reduce the requirement for the user to input identity authentication information, so that the user can use the electronic device more simply, conveniently and freely.
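As an illustrative sketch only (not part of this application), the matching-degree check described above can be expressed as follows; the threshold name `FIRST_VALUE`, the 0.0-1.0 score range, and the return labels are all assumptions for illustration:

```python
# Hypothetical sketch: classify identity authentication information by its
# matching degree against a pre-stored biometric feature. FIRST_VALUE is the
# preset "first value" described above; the 0.0-1.0 score range is assumed.
FIRST_VALUE = 0.9  # preset threshold of the first authentication method

def classify_authentication(match_degree: float) -> str:
    """Return how the input identity authentication information is treated."""
    if match_degree >= FIRST_VALUE:
        # Meets the standard of the first authentication method: may unlock.
        return "unlock"
    # Below the standard: may still serve as a weak authentication factor.
    return "weak_factor"
```

Information that meets the standard can switch the device to the unlocked state, while information below the standard is still usable as a weak authentication factor.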
  • the second authentication mode is an identity authentication mode with a lower ACL.
  • the second authentication mode may be preset by the electronic device or the manufacturer of the electronic device, and reference may be made to the relevant description above.
  • the second authentication method may include voiceprint authentication, heart rate authentication, body posture authentication, and the like.
• the identity authentication information that meets the standards required by the second authentication method may include, for example: a biometric feature whose matching degree with a biometric feature pre-stored in the electronic device (such as a voiceprint or body posture) reaches a second value.
  • the second value can be preset.
  • the user can use a more convenient way to control the electronic device.
• users can control electronic devices through voice commands, body postures, and the like, and can control them without touching in scenarios such as driving, cooking, and exercising, which brings great convenience.
• the number of weak authentication factors received by the electronic device 100 may be one or more, which is not limited here. That is, the electronic device 100 may receive multiple different weak authentication factors.
  • the electronic device in the embodiment of this application can obtain the weak authentication factor in the following ways:
• the electronic device 100 directly receives a user operation carrying a weak authentication factor, and extracts the weak authentication factor from the user operation.
  • the electronic device 100 may periodically, or under certain trigger conditions, start to receive user operations input by the user and extract weak authentication factors therefrom.
  • the trigger conditions may include various types, for example, it may include after the voice assistant is started, after the electronic device 100 detects a wrist-raising operation, after the electronic device 100 detects an operation of tapping the display screen, and so on. In this way, the power consumption of the electronic device 100 can be reduced by starting to collect weak authentication factors when a trigger condition is detected.
• the electronic device 100 can schedule corresponding modules to receive these user operations carrying weak authentication factors, for example, user operations indicating passwords (such as click operations), user operations indicating patterns (such as sliding operations), and images or sliding operations carrying biometric features.
• the electronic device 100 can receive a user operation indicating a password (such as a click operation) and a user operation indicating a pattern (such as a sliding operation) through the display screen 194, collect biometric features (such as a face, iris, retina, face shape, or body posture) through the camera 193, collect the fingerprint input by the user through the fingerprint sensor 180H, collect the voice input by the user through the receiver 170B and the microphone 170C, and collect the heart rate through the optical sensor.
• the electronic device 100 can identify the weak authentication factor contained in the received user operation, for example, extracting a voiceprint from voice, a password from a click operation, a pattern or signature from a sliding operation, or a face, iris, retina, face shape, body posture or fingerprint from an image.
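The correspondence just listed can be sketched as a hypothetical dispatch table; the operation-kind keys and factor names below are illustrative placeholders, not a real device API:

```python
# Hypothetical dispatch table mapping the kind of user operation to the weak
# authentication factor extracted from it, mirroring the examples above.
EXTRACTORS = {
    "voice": "voiceprint",
    "click": "password",
    "slide": "pattern_or_signature",
    "image": "biometric_feature",  # face, iris, retina, posture, fingerprint...
}

def extract_weak_factor(operation_kind: str) -> str:
    """Return which weak authentication factor is identified from the operation."""
    try:
        return EXTRACTORS[operation_kind]
    except KeyError:
        raise ValueError(f"no weak authentication factor in {operation_kind!r}")
```

In a real device each entry would invoke a recognition module (locally or via a network server) rather than return a label.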
  • the electronic device 100 may recognize the weak authentication factor involved in the user's operation locally or through a network.
  • the electronic device 100 can locally use the processor 110 to recognize the voiceprint in the voice, recognize the body posture or face shape in the image, etc., directly recognize the operation of pressing the button through the button, and recognize the fingerprint through the fingerprint sensor 180H.
  • the voice or image can be uploaded to the network, and the voiceprint in the voice, the body posture or face shape in the image can be recognized through a network server or other equipment.
  • the electronic device 100 may establish a communication connection with other devices such as the electronic device 200, and the manner in which the electronic device 100 establishes a communication connection with other electronic devices may refer to related descriptions in FIG. 3 .
• The timing and manner in which other devices receive user operations carrying weak authentication factors are the same as the timing and manner in which the electronic device 100 receives user operations carrying weak authentication factors in the first manner described above; please refer to the related descriptions.
  • the user operation indication information sent by other devices may be the user operation itself, or other indication information of the user operation.
• other devices can collect user operations indicating passwords (such as click operations), user operations indicating patterns (such as sliding operations), images carrying biometric features or sliding operations, and the like, and then send the indication information of these click or sliding operations, or the images, to the electronic device 100, and the electronic device 100 identifies the weak authentication factors therein.
  • the manner in which the electronic device 100 extracts the weak authentication factor from the indication information of the user operation is the same as the manner in which the electronic device 100 extracts the weak authentication factor from the received user operation in the first form, and reference may be made to related descriptions.
  • other devices such as the electronic device 200 may be regarded as peripheral devices or accessory devices of the electronic device 100 .
  • the electronic device 200 may select the electronic device 100 by default, or may send instruction information for the user's operation to the electronic device 100 according to the electronic device 100 selected by the user.
  • the electronic device 100 can establish a communication connection with other devices such as the electronic device 200, and the manner in which the electronic device 100 establishes a communication connection with other electronic devices can refer to the related description in FIG. 3 .
  • Other devices such as the electronic device 200 may first receive a user operation carrying a weak authentication factor, identify the weak authentication factor contained in the user operation, and then send the weak authentication factor to the electronic device 100 .
  • other devices receive user operations carrying weak authentication factors, which is similar to the electronic device 100 receiving user operations carrying weak authentication factors in the first form, and reference may be made to related descriptions.
• the manner in which other devices identify the weak authentication factors contained in the received user operations is the same as the manner in which the electronic device 100 identifies the weak authentication factors contained in user operations in the first form above; reference may be made to the related descriptions.
  • the electronic device 200 may receive the voice input by the user, then recognize the voiceprint of the voice, and then send the voiceprint information to the electronic device 100 .
• the electronic device 200 collects an image input by the user that includes a biometric feature (such as a face, fingerprint, palm shape, retina, iris, body posture, or face shape), can identify the biometric feature contained in the image, and then sends the biometric feature information to the electronic device 100.
  • the electronic device 200 may select the electronic device 100 by default, or may send the weak authentication factor to the electronic device 100 according to the electronic device 100 selected by the user.
  • the electronic device 100 may respectively receive the first operation instruction and the weak authentication factor. For example, the electronic device 100 may first collect a voice command "play music" through a microphone, and then collect a face image through a camera.
  • the electronic device 100 may receive the first operation instruction and the weak authentication factor at the same time. In this way, user operations can be simplified, making the user experience better.
• FIG. 5B to FIG. 5D respectively show scenarios where the electronic device 100 receives the first operation instruction and the weak authentication factor at the same time.
  • the electronic device 100 is in a locked state.
  • FIG. 5B exemplarily shows a scenario where the electronic device 100 (such as a mobile phone) receives the first instruction and the weak authentication factor at the same time.
• the electronic device 100 can collect the voice command "navigate to home" through the microphone; the voice command also carries a voiceprint, and the electronic device 100 can also recognize the corresponding semantics from the voice command.
• the first resource that the semantics request to access includes a navigation application and the address of "home".
  • FIG. 5C exemplarily shows another scenario where the electronic device 100 (such as a mobile phone) receives the first instruction and the weak authentication factor at the same time.
• the electronic device 100 can collect an image including an open palm gesture through the camera, and the electronic device 100 can recognize the open palm gesture in the gesture image, and can also recognize the features of the palm (such as fingerprints, knuckle size, etc.).
• the open palm gesture can be used to request the electronic device 100 to "navigate to home", and the first resource requested to be accessed includes a navigation application and the address of "home".
  • FIG. 5D exemplarily shows another scenario where the electronic device 100 (such as a smart bracelet) simultaneously receives the first instruction and the weak authentication factor.
  • the electronic device 200 can collect the voice command "play music with the mobile phone" through the microphone.
  • the voice command also carries a voiceprint.
• the electronic device 200 can recognize the corresponding semantics, and then send the semantic information and the voiceprint information to the electronic device 100 at the same time.
• the first resource that the semantics request to access includes a music application.
  • the electronic device 100 may also receive other forms of first operation instructions and weak authentication factors. Reference may be made to related descriptions above, which will not be listed here.
• Step S102: the electronic device 100 creates a restricted execution environment according to the first operation instruction and the weak authentication factor.
• A restricted execution environment refers to an execution environment that is subject to restrictions.
  • Execution environments may include hardware environments and software environments.
  • the execution environment can be a sandbox or a function domain containing multiple functions.
• in a restricted execution environment, an electronic device can only perform a specified part of operations, and cannot perform operations outside that part. That is to say, in the restricted execution environment, the electronic device can only access some resources of the electronic device, and cannot access resources other than this part of resources.
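A minimal sketch of such an environment, assuming an allow-list of (access operation, resource) pairs (the class and names below are illustrative, not this application's actual implementation):

```python
# Sketch: a restricted execution environment modeled as an allow-list of
# (access operation, resource) pairs; anything not listed is denied.
class RestrictedExecutionEnvironment:
    def __init__(self, allowed: set[tuple[str, str]]):
        self._allowed = set(allowed)

    def permits(self, access: str, resource: str) -> bool:
        """True only for the specified part of operations on some resources."""
        return (access, resource) in self._allowed

# Example environment: may launch navigation and read the "home" address,
# but every other resource access is denied.
env = RestrictedExecutionEnvironment({("execute", "navigation_app"),
                                      ("read", "home_address")})
```

Under this model, denying by default captures the "cannot access other resources" property described above.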
  • the embodiment of the present application does not limit the policy that the electronic device 100 creates a restricted execution environment according to the first operation instruction and the weak authentication factor.
  • the electronic device 100 may create a restricted execution environment according to the type of the first operation instruction, the environment when the weak authentication factor is collected, and the like.
• for example, when the first operation instruction is semantics carried in voice, a gesture, a facial expression, a signature, or a body posture, the number of operations that can be performed in the restricted execution environment respectively created by the electronic device 100 decreases in turn.
  • the electronic device 100 may create a restricted execution environment according to the risk level of the operation corresponding to the first operation instruction, and/or the security level of the weak authentication factor.
  • Step S102 may specifically include the following steps S1021-S1024.
  • the multiple authentication factors may be received successively.
  • the user may respectively input five sentences of speech, and the electronic device 100 may extract a voiceprint from each sentence of speech as a weak authentication factor.
• Step S1021: the electronic device 100 determines the risk level of the operation corresponding to the first operation instruction.
  • the electronic device 100 may first determine the operation corresponding to the first operation instruction.
  • the correspondence between the first operation instruction and the operation it requests the electronic device 100 to perform may be preset by the electronic device 100 , which is not limited here.
  • operations corresponding to different first operation instructions may be preset.
• for example: the semantics "navigate to home", or a gesture of opening a palm above the display screen, corresponds to launching a navigation application and navigating to home; a gesture of making a fist above the display screen corresponds to launching the gallery application; the body posture of nodding corresponds to playing music; and the body posture of shaking the head corresponds to pausing music.
  • the preset correspondence between different semantics, gestures, facial expressions, body gestures and operations may be stored in the electronic device 100 or in a network server, which is not limited here.
  • the electronic device 100 finds the operation corresponding to the first operation instruction locally or in the network according to preset information.
  • the operation corresponding to the first operation instruction includes an access operation for a resource
  • the resource is one or more resources in the electronic device
• the access operation may include, for example, one or more of reading, adding, deleting, writing, modifying, and executing.
  • the resources in the electronic device may include software resources, hardware resources, peripherals or resources of peripherals, etc. For details, refer to the relevant description above.
  • the electronic device 100 may first determine the risk level of the operation corresponding to the first operation instruction.
  • the electronic device 100 may pre-store risk levels corresponding to execution of different operations.
  • various operations executable by the electronic device 100 may be divided into different risk levels according to different granularities.
• This application does not limit the granularity.
  • the risk levels of operations can be roughly divided into three levels: high, medium, and low.
  • the risk level of the operation can be divided into 1-10 levels, and the higher the value, the higher the risk level of the operation.
• when the electronic device 100 performs an operation, the higher the risk of privacy leakage brought to the user, the higher the risk level of the operation.
• for example, the more private the resource accessed by an operation, the higher the risk of privacy leakage to the user when the operation is performed, and the higher the risk level of the operation: the risk levels of viewing photos, viewing shopping records, and viewing browsing records in a browser decrease in turn.
• similarly, for the same resource, the greater the privacy risk brought by the access operation, the higher the risk level of the corresponding operation: for example, the risk levels of reading photos, deleting photos, and adding photos decrease in turn.
  • the electronic device 100 may independently set risk levels corresponding to different operations. For example, the electronic device 100 may set risk levels for different operations in consideration of factors such as the category and location of the resource that the operation requires to access. For example, operations that require access to third-party resources have a higher risk level than operations that require access to system resources; operations performed at home have a lower risk level than operations performed elsewhere.
  • the electronic device 100 may also set risk levels corresponding to different operations according to user requirements. Specifically, the electronic device 100 may determine or set the risk level of each operation that the electronic device 100 can perform in response to the received user operation. For example, the electronic device 100 provides a user interface in the setting application for the user to set the risk level of each operation.
  • the risk level of the operation corresponding to the first operation instruction may also be determined according to the manner in which the first operation instruction is obtained. For example, when the electronic device 100 acquires the first operation instruction through the first to third manners above, the security level of the obtained first operation instruction decreases successively. That is, the security level of the first operation instruction obtained by the electronic device 100 through the first method is higher than that of the first operation instruction obtained through the second or third method. For another example, when the electronic device 100 receives the first operation instruction sent by the electronic device 200 , it may determine the risk level of the operation corresponding to the first operation instruction according to the electronic device 200 . For example, if the historical communication frequency between the electronic device 200 and the electronic device 100 is higher, the risk level of the operation corresponding to the first operation instruction is lower.
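The factors above (a per-operation base level adjusted by how the instruction was obtained) can be sketched as follows; the base table, the 1-10 scale direction, and the adjustment rule are assumptions for illustration only:

```python
# Sketch of risk-level determination (levels 1-10, higher = riskier).
# Base levels follow the viewing example above; the first to third manners of
# obtaining the instruction yield successively lower security, modeled here
# as a small risk increase. All numbers are illustrative assumptions.
BASE_RISK = {
    "view_photos": 8,
    "view_shopping_records": 6,
    "view_browser_history": 4,
}

def risk_level(operation: str, acquisition_manner: int) -> int:
    """acquisition_manner is 1, 2 or 3 for the first to third manners above."""
    level = BASE_RISK.get(operation, 5) + (acquisition_manner - 1)
    return min(level, 10)  # clamp to the top of the scale
```

A device vendor could instead expose these levels in a settings interface, as the preceding paragraphs describe.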
• Step S1022: the electronic device 100 determines the security level of the weak authentication factor.
  • weak authentication factors may be divided into different security levels according to different granularities.
• This application does not limit the granularity.
  • the security levels of weak authentication factors can be roughly divided into three levels: high, medium, and low.
  • the security level of the weak authentication factor can be divided into 1-10 levels, and the higher the value, the higher the security level of the weak authentication factor.
  • the security level of the weak authentication factor may be determined according to the ACL of the identity authentication mode to which the weak authentication factor belongs. The higher the ACL of the identity authentication mode to which the weak authentication factor belongs, the higher the security level of the weak authentication factor.
  • the security level of the weak authentication factor can also be determined according to one or more of the following: the matching degree between the weak authentication factor and the pre-stored identity authentication information, the environment information when receiving the weak authentication factor, The way to obtain the weak authentication factor, or the strength of the corresponding voice when the weak authentication factor is voiceprint.
• the higher the matching degree between the weak authentication factor and the pre-stored identity authentication information, or the quieter the environment when the weak authentication factor is received, or the stronger the corresponding voice when the weak authentication factor is a voiceprint, the higher the security level of the weak authentication factor.
• when the electronic device 100 obtains the weak authentication factor through the first to third manners above, the security level of the weak authentication factor decreases successively. That is, the security level of the weak authentication factor obtained by the electronic device 100 through the first manner is higher than that of the weak authentication factor obtained through the second or third manner.
  • the electronic device 100 may record the identity authentication method to which the weak authentication factor belongs, the security level of the weak authentication factor, and the authentication validity period of the weak authentication factor.
• the authentication validity period of the weak authentication factor can be preset by the electronic device; for example, it can be set to a fixed rule, such as becoming invalid after the restricted execution environment is created.
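Combining the signals listed above into one score can be sketched as follows; the weights, the noise measure, and the 1-10 scale are assumptions, since the application does not prescribe a formula:

```python
# Illustrative scoring of a weak authentication factor's security level from
# the signals described above: matching degree with pre-stored information,
# ambient noise when the factor was collected, and the acquisition manner.
def weak_factor_security_level(match_degree: float,
                               ambient_noise: float,
                               acquisition_manner: int) -> int:
    score = 10 * match_degree           # higher matching degree -> higher level
    score -= 2 * ambient_noise          # noisier environment -> lower level
    score -= (acquisition_manner - 1)   # second/third manners -> lower level
    return max(1, min(10, round(score)))  # clamp to the 1-10 scale
```

The device could record this level together with the authentication method and the validity period, as described above.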
  • the embodiment of the present application does not limit the sequence of S1021 and S1022.
• Step S1023: the electronic device 100 determines whether to allow the operation corresponding to the first operation instruction to be performed according to the risk level of the operation corresponding to the first operation instruction and the security level of the weak authentication factor.
• the electronic device 100 may be preset with the various operations that the electronic device 100 is allowed to perform under different operation risk levels and different authentication factor security levels.
  • This setting may be set in advance by the user or the manufacturer of the electronic device 100 .
  • the embodiment of the present application does not limit the correspondence between the risk level of the operation, the security level of the authentication factor, and the operations that the electronic device 100 is allowed to perform.
• for example, when the risk level of the operation corresponding to the first operation instruction is high and the security level of the weak authentication factor is low, the operation corresponding to the first operation instruction is not allowed to be executed. For another example, when the risk level of the operation corresponding to the first operation instruction is low and the security level of the weak authentication factor is high, the operation corresponding to the first operation instruction is allowed to be executed.
  • the electronic device 100 may match the risk level of the operation corresponding to the first operation instruction with the security level of the weak authentication factor, and determine whether to allow the operation corresponding to the first operation instruction to be performed. Specifically, the electronic device 100 may preset security levels of weak authentication factors for performing various operations. Wherein, when the risk level of the operation is higher, the security level of the weak authentication factor required to perform the operation is also higher.
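The matching rule of S1023 can be sketched in one line; the specific threshold rule (required security level equal to the risk level) is an assumed example, since the application only requires that riskier operations need higher security levels:

```python
# Sketch of S1023: each operation risk level requires at least a certain
# weak-factor security level. Both values are on an assumed 1-10 scale.
def allow_operation(risk: int, security: int) -> bool:
    """Riskier operations require stronger weak authentication factors."""
    return security >= risk
```

With this rule, a low-risk operation passes even with a modest weak factor, while a high-risk operation is refused, matching the two examples above.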
  • the electronic device 100 may also output prompt information, which may be used to remind the user that the operation corresponding to the first operation instruction is currently not allowed to be performed.
• the prompt information may further prompt the reason why the user is currently not allowed to perform the operation corresponding to the first operation instruction; for example, it may indicate that the operation corresponding to the first operation instruction has a higher risk level, or that the security level of the weak authentication factor is low.
  • the prompt information may further prompt the user for a solution. For example, prompting the user to input a weak authentication factor with a higher security level, or prompting the user to unlock, etc., are not limited here.
  • the implementation form of the prompt information is the same as the implementation form of the prompt information in the subsequent step S105 , for details, please refer to the relevant description in the subsequent steps.
• Step S1024: the electronic device 100 creates a restricted execution environment according to the risk level of the operation corresponding to the first operation instruction and the security level of the weak authentication factor.
• the electronic device 100 may execute S1024 after receiving a predetermined number of weak authentication factors. That is, the electronic device 100 may utilize multiple weak authentication factors to create a restricted execution environment.
• the operations that the electronic device 100 is allowed to perform are the operations that can be performed in the restricted execution environment created by the electronic device 100.
• the electronic device 100 may be preset with the various operations that the electronic device 100 is allowed to perform under different operation risk levels and different authentication factor security levels.
  • This setting may be set in advance by the user or the manufacturer of the electronic device 100 .
  • the embodiment of the present application does not limit the correspondence between the risk level of the operation, the security level of the authentication factor, and the operations that the electronic device 100 is allowed to perform.
• when the electronic device receives the same first operation instruction but different weak authentication factors, it can create different restricted execution environments.
• when the electronic device receives different first operation instructions and the same weak authentication factor, it may also create different restricted execution environments.
  • the predetermined operation in the locked state can be executed in the restricted execution environment created by the electronic device 100 .
  • Table 1 exemplarily shows various operations that are allowed to be performed by the electronic device 100 under different operation risk levels and different authentication factor security levels.
  • the risk level of the operation and the security level of different authentication factors are divided into 1-5 levels, the higher the value, the higher the risk level of the operation, and the higher the security level of the weak authentication factor.
  • the electronic device 100 may record various operations that are allowed to be performed by the electronic device 100 determined according to the risk level of the operation and the security level of different authentication factors. That is to say, the electronic device 100 records which specific access operations the electronic device 100 is allowed to perform on which resources or which type of resources.
• the electronic device 100 can change the current restricted execution environment to the restricted execution environment determined above according to the risk level of the operation and the security level of different authentication factors. Specifically, the electronic device 100 may change the recorded information, thereby changing the current restricted execution environment; for details, refer to the relevant description above.
  • the electronic device 100 may also consider the number of weak authentication factors acquired in S101 to create a restricted execution environment. For example, the more weak authentication factors obtained in S101, the more operations allowed to be executed in the created restricted execution environment.
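S1024 as described above (an allow-list built from the risk table, the weak-factor security level, and the number of collected factors) can be sketched as follows; the table contents and the "more factors raise the effective level" rule are illustrative assumptions:

```python
# Sketch of S1024: build the set of operations allowed in the restricted
# execution environment. More weak authentication factors allow more
# operations, as described above. All numbers are illustrative.
OPERATION_RISK = {
    "launch_navigation": 2,
    "read_home_address": 5,
    "delete_photos": 6,
    "read_photos": 8,
}

def create_restricted_environment(security: int, n_factors: int) -> set:
    """Return the operations permitted for this security level and factor count."""
    effective = min(10, security + (n_factors - 1))  # extra factors help
    return {op for op, risk in OPERATION_RISK.items() if risk <= effective}
```

Recording this set corresponds to the device recording "which specific access operations are allowed on which resources", as stated above.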
  • the restricted execution environment created in S1024 must allow execution of the operation corresponding to the first operation instruction. In this way, an effective restricted execution environment can be created to reduce waste of resources in the electronic device 100 .
  • the electronic device 100 does not need to execute S1023, but directly executes S1024. At this time, the restricted execution environment created in step S1024 does not necessarily allow execution of the operation corresponding to the first operation instruction.
• Step S103: the electronic device 100 executes the operation corresponding to the first operation instruction in the created restricted execution environment.
• if the electronic device does not execute S1023, then before S103 the electronic device 100 needs to judge whether the created restricted execution environment allows execution of the operation corresponding to the first operation instruction; if the judgment result is yes, S103 is executed. If the judgment result is no, the electronic device 100 may stop executing any steps, or the electronic device 100 may try to respond to the first operation instruction and perform other operations close to the operation corresponding to the first operation instruction in the restricted execution environment.
• FIG. 5E and FIG. 5F exemplarily show the user interfaces displayed when the electronic device 100 executes S103.
  • FIG. 5E shows the user interface 53 displayed after the electronic device 100 receives the voice command “navigate to home” in FIG. 5B and the weak authentication factor (ie, the voiceprint carried in the voice command).
  • the restricted execution environment created by the electronic device 100 according to the voice instruction and the weak authentication factor allows launching navigation applications and reading user data of navigation applications, for example, the detailed address of the user's "home” is read as "XX Building". Therefore, in the navigation interface provided in FIG. 5E , the electronic device 100 automatically fills in the detailed address of "home” at the destination.
• FIG. 5F may also be the user interface displayed after the electronic device 100 receives the image including the open palm gesture in FIG. 5C and the weak authentication factor (that is, the palm features carried in the image).
  • the gesture of opening the palm is the same as the voice command "navigate to home", both of which are used to request the electronic device 100 to navigate to the location of "home”.
  • because the security level of the weak authentication factor received by the electronic device 100 in FIG. 5C is lower than the security level of the weak authentication factor received by the electronic device 100 in FIG. 5B,
  • the electronic device 100 creates, according to the open palm gesture and the weak authentication factor, a restricted execution environment that allows launching the navigation application but does not allow reading user data of the navigation application.
  • because the electronic device 100 cannot read the detailed address of "home", the address is not filled in at the destination. The user may manually enter the address of "home" at the destination to navigate home.
  • Optional step S104 the electronic device 100 receives a user operation.
  • the embodiment of the present application does not limit the form of the user operation received by the electronic device 100 in S104; for example, it may be voice carrying semantics, images containing gestures/facial expressions/body gestures, sliding operations containing signatures, operations of pressing a button, operations of shaking the electronic device 100, and the like.
  • the manner in which the electronic device 100 receives the user operation in S104 is the same as the first manner in which the electronic device 100 receives the user operation carrying the first operation instruction in S101, and reference may be made to related descriptions.
  • Optional step S105: if the created restricted execution environment allows the operation that the user operation requests the electronic device 100 to perform, the electronic device 100 responds to the user operation; if it is not allowed, the electronic device 100 outputs prompt information, which is used to prompt the user that the operation corresponding to the user operation is currently not allowed to be performed.
  • the manner in which the electronic device 100 determines the operation that the user operation requests it to perform is the same as the manner in which the electronic device 100 determines the operation corresponding to the first operation instruction in S1021, and reference may be made to the related descriptions.
  • the resource requested to be accessed by the user operation in S104 may be referred to as a second resource.
  • the second resource may include one or more resources, which is not limited here.
  • the electronic device 100 will respond to the user operation and perform the operation that the user operation requests it to perform.
  • the electronic device 100 may start the camera application.
  • the electronic device 100 can start the microphone to collect the voice input by the user.
  • a user operation such as a click operation
  • the electronic device 100 will not respond to the user operation, and will output a prompt message.
  • the electronic device 100 may output prompt information.
  • the prompt information output by the electronic device 100 may further prompt the user with reasons why the operation corresponding to the user operation is currently not allowed; for example, the risk level of the user operation is high, or the security level of the weak authentication factor currently received by the electronic device 100 is low.
  • the prompt information output by the electronic device 100 may further prompt the user for a solution. For example, prompting the user to input a weak authentication factor with a higher security level, or prompting the user to unlock, etc., are not limited here.
  • the prompt information may be implemented in the form of a visual element, a vibration signal, a flash light signal, audio, etc., which is not limited here.
  • FIG. 5G exemplarily shows prompt information 502 output by the electronic device 100 .
  • the operations that can be performed by the electronic device 100 can be limited within the scope of the restricted execution environment, so as to avoid the expansion of authority and protect the data security of the electronic device 100 .
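The access check described in steps S104-S105 can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the class and function names (`RestrictedExecutionEnvironment`, `handle_user_operation`) and the resource/operation strings are assumptions.

```python
# Hypothetical sketch of S104-S105: a restricted execution environment records
# which access operations are allowed on which resources; a user operation
# outside that scope triggers prompt information instead of being performed.

class RestrictedExecutionEnvironment:
    def __init__(self, allowed):
        # allowed maps a resource name to the set of permitted access
        # operations, e.g. {"camera": {"execute"}, "nav_app_data": {"read"}}
        self.allowed = allowed

    def permits(self, resource, operation):
        return operation in self.allowed.get(resource, set())


def handle_user_operation(env, resource, operation):
    """Return the action taken: perform the operation or output a prompt."""
    if env.permits(resource, operation):
        return f"perform {operation} on {resource}"
    # The prompt may also explain the reason and suggest a remedy (S105).
    return ("prompt: operation not currently allowed; "
            "enter a higher-level authentication factor or unlock")


env = RestrictedExecutionEnvironment({"camera": {"execute"}})
print(handle_user_operation(env, "camera", "execute"))
print(handle_user_operation(env, "contacts", "read"))
```

Because every decision is made against the recorded scope of the environment, operations outside it are uniformly refused, which is the "avoid expansion of authority" property described above.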
  • step S106 the electronic device 100 obtains the strong authentication factor, and switches from the locked state to the unlocked state.
  • the strong authentication factor includes identity authentication information meeting the standards required by the first authentication method.
  • identity authentication information that meets the standards required by the first authentication method, refer to the detailed description in S101, which will not be repeated here.
  • a strong authentication factor may also include multiple weak authentication factors obtained over a period of time.
  • the specific number of the multiple weak authentication factors can be preset, and there is no limitation here.
  • the multiple weak authentication factors may be the same identity authentication information or different identity authentication information. That is to say, the user can complete identity authentication by inputting weak authentication factors multiple times. For example, the user can continuously input multiple sentences of voice, so that the electronic device 100 can complete unlocking after extracting multiple voiceprints (that is, weak authentication factors). For another example, the electronic device 100 may simultaneously extract a voiceprint and a long-distance facial image, and then unlock.
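The idea that several weak authentication factors collected over a period of time can together serve as a strong authentication factor can be sketched as follows. The threshold of three factors and the 60-second window are invented for the example: the text only says the specific number can be preset.

```python
# Illustrative sketch: accumulate weak factors within a time window; once
# enough have been collected, treat them together as a strong factor.
import time

REQUIRED_WEAK_FACTORS = 3   # preset number of weak factors (assumption)
WINDOW_SECONDS = 60         # period over which they must be collected

class AuthState:
    def __init__(self):
        self.weak = []  # list of (timestamp, factor_kind) tuples

    def add_weak_factor(self, kind, now=None):
        now = time.time() if now is None else now
        self.weak.append((now, kind))
        # discard factors that fall outside the time window
        self.weak = [(t, k) for t, k in self.weak if now - t <= WINDOW_SECONDS]
        return self.is_strong()

    def is_strong(self):
        # factors may be the same kind (several voiceprints) or mixed
        # (e.g. a voiceprint plus a long-distance facial image)
        return len(self.weak) >= REQUIRED_WEAK_FACTORS

state = AuthState()
state.add_weak_factor("voiceprint", now=0)
state.add_weak_factor("voiceprint", now=10)
print(state.add_weak_factor("long_distance_face", now=20))  # True: unlock
```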
  • the electronic device 100 may automatically start to detect the strong authentication factor input by the user after the prompt information is output in S105. After seeing the prompt information output by the electronic device 100, the user can input a strong authentication factor.
  • the electronic device 100 may start to detect the strong authentication factor input by the user in response to the received user operation at any time point after S103 is executed.
  • the user can enter a strong authentication factor after entering the user action.
  • the embodiment of the present application does not limit the form of the user operation.
  • the electronic device 100 may continuously display the unlock control 503 in the displayed access control interface. As shown in FIG. 5E and FIG. 5F , the electronic device 100 may start to detect the strong authentication factor input by the user in response to the operation acting on the unlock control 503 .
  • the unlock control 503 can also be used to prompt the user that the electronic device 100 is currently in a restricted execution environment and is still in a locked state, so as to prevent the user from performing user operations outside the restricted execution environment.
  • the implementation of the unlocking control 503 is not limited in the embodiment of the present application, for example, it may be an icon, text or other forms, which may be transparent or opaque.
  • the unlock control 503 can be displayed at any position in the display screen, can be displayed in a fixed area, or can be dragged by the user, which is not limited here.
  • the unlock control 503 may be called a first control.
  • step S107 the electronic device 100 closes the restricted execution environment.
  • the electronic device 100 may close the restricted execution environment after switching to the unlocked state after S106.
  • the electronic device 100 may close the restricted execution environment after receiving the operation for closing the startup application corresponding to the first operation instruction.
  • if the user triggers the electronic device 100 to close the application started in response to the first operation instruction, it means that the user no longer needs the restricted execution environment. Therefore, closing the restricted execution environment allows the electronic device 100 to save device resources.
  • closing the restricted execution environment by the electronic device 100 means that the electronic device 100 deletes various information recorded in S102, for example, various operations that the recorded restricted execution environment allows the electronic device 100 to perform, and the like.
  • the electronic device no longer decides whether to respond to an operation based on whether it is unlocked, but decides whether to perform the operation based on the risk level of the operation instruction and the security level of the weak authentication factor, so that finer-grained access control can be achieved, enriching the usage scenarios and range of electronic devices.
  • the electronic device can be triggered to perform operations other than the predefined operations in the locked state without going through cumbersome authentication to unlock the electronic device, so that the user can manipulate the electronic device more freely and conveniently.
  • electronic devices no longer simply divide resources into resources accessible by predefined operations and resources accessible by other operations, but also implement more fine-grained access control for various resources.
  • FIG. 6 is a schematic flowchart of a cross-device access control method provided by an embodiment of the present application.
  • the method may include the following steps:
  • step S201 the electronic device 300 receives a user operation, and the user operation is used to request the electronic device 100 to perform a certain operation.
  • the embodiment of the present application does not limit the form of the user operation in S201; for example, it may be a click operation or slide operation acting on the display screen, voice, a gesture/facial expression/body gesture, a slide operation including a signature, an operation of pressing a button, an operation of shaking the electronic device 100, and the like.
  • the user operation requests an operation performed by the electronic device 100, including a certain access operation for a resource
  • the resource is one or more resources in the electronic device 100
  • the access operation may include, for example, one or more of reading, adding, deleting, writing, modifying, and executing.
  • the resources in the electronic device may include software resources, hardware resources, peripherals or resources of peripherals, etc. For details, refer to the relevant description above.
  • the resource in the electronic device 100 that the user requests to access in S201 may be referred to as a third resource.
  • the third resource may include one or more resources, which is not limited here.
  • the user operation is used to request to share some data in the electronic device 300 to the electronic device 100 .
  • FIG. 7A-FIG. 7B show a screen projection scenario.
  • FIG. 7A schematically shows a user interface 71 displayed when the electronic device 300 plays the network video selected by the user.
  • the user interface 71 may be displayed by the electronic device 300 in response to the user switching the electronic device 300 from the portrait state to the landscape state, or in response to the user clicking the full-screen playback control displayed in the lower right corner of the electronic device 300 while the video is playing.
  • the user interface 71 may further include a switch control 701 for screen projection, and the control 701 is used to monitor user operations (such as click operations, touch operations, etc.) to enable/disable the screen projection function of the video application.
  • the electronic device 300 can detect user operations (such as click operations, touch operations, etc.) acting on the screen projection control 701, find nearby electronic devices that support screen projection, and display the identification of the found electronic devices.
  • FIG. 7B shows the identifications of nearby electronic devices that support screen projection displayed by the electronic device 300 .
  • the electronic device 300 may detect a user operation acting on the identification corresponding to the electronic device 100 .
  • the user operation received by the electronic device 300 includes first clicking the control 701 and then clicking the identifier of the electronic device 100; the user operation is used to request that the video currently being played by the electronic device 300 be sent to the electronic device 100 to continue playing.
  • the user operation requests to access the display screen, speaker, and screen-casting application of the electronic device 100 .
  • the electronic device 100 is selected by the user, and in some other embodiments, the electronic device 100 may also be selected by the electronic device 300 by default. For example, after receiving the user operation of clicking the control 701 , the electronic device 300 may request by default that the video currently being played is delivered to the device (ie, the electronic device 100 ) where the screen was projected last time to continue playing.
  • the electronic device 300 generates a second operation instruction according to the user operation, and the second operation instruction is used to request the electronic device 100 to perform a certain operation.
  • the second operation instruction is the same as the user operation in S201 , and is used to request access to the third resource in the electronic device 100 .
  • the embodiment of the present application does not limit the form of the second operation instruction.
  • the second operation instruction may be, for example, a message sent through a wired connection, a wireless connection such as a Bluetooth (bluetooth, BT) connection, a Wi-Fi P2P connection, an NFC connection, or a remote connection.
  • the second operation instruction generated by the electronic device 300 may be a screen projection request, which is used to request that the video currently being played by the electronic device 300 be sent to the electronic device 100 to continue playing.
  • the electronic device 300 sends the second operation instruction to the electronic device 100 .
  • the electronic device 100 is in a locked state, receives a second operation instruction, and creates a restricted execution environment according to the second operation instruction.
  • the definition and acquisition method of the second operation instruction are similar to the first operation instruction, and reference may be made to the related description in FIG. 4 .
  • the embodiment of the present application does not limit the strategy for creating a restricted execution environment by the electronic device 100 according to the second operation instruction.
  • the electronic device 100 may create a restricted execution environment according to the type of the second operation instruction.
  • when the second operation instruction is, in turn, voice-borne semantics, a gesture, a facial expression, a signature, or a body gesture, the number of operations that can be performed in the restricted execution environment respectively created by the electronic device 100 decreases in turn.
  • the electronic device 100 may create a restricted execution environment according to the risk level of the operation corresponding to the second operation instruction.
  • Step S204 may specifically include the following steps S2041-S2043.
  • the manner in which the electronic device 100 determines the risk level of the operation corresponding to the second operation instruction is the same as the manner in which the electronic device 100 determines the risk level of the operation corresponding to the first operation instruction in S102 of FIG. 4, and reference may be made to the related descriptions.
  • step S2042 the electronic device 100 determines whether to allow the operation corresponding to the second operation instruction to be performed according to the risk level of the operation corresponding to the second operation instruction.
  • various operations that are allowed to be performed by the electronic device 100 under different operation risk levels are preset in the electronic device 100 .
  • This setting may be set in advance by the user or the manufacturer of the electronic device 100 .
  • the electronic device 100 may also output prompt information, which may be used to prompt the user that the operation corresponding to the second operation instruction is currently not allowed to be performed.
  • the prompt information may further prompt the reason why the user is currently not allowed to perform the operation corresponding to the second operation instruction, for example, it may include that the operation corresponding to the second operation instruction has a higher risk level.
  • the prompt information may further prompt the user for a solution.
  • prompting the user to unlock, etc. is not limited here.
  • the implementation form of the prompt information is the same as the implementation form of the prompt information in the subsequent step S207, and for details, please refer to the relevant description in the subsequent steps.
  • step S2043 the electronic device 100 creates a restricted execution environment according to the risk level of the operation corresponding to the second operation instruction.
  • the manner in which the electronic device 100 creates a restricted execution environment according to the risk level of the operation corresponding to the second operation instruction is the same as the manner in which the electronic device 100 creates a restricted execution environment according to the risk level of the operation corresponding to the first operation instruction in S1024 of FIG. 4, and reference may be made to the related descriptions.
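The flow of S2041-S2043 — determining the risk level of the operation corresponding to the operation instruction, refusing high-risk operations with a prompt, and otherwise creating a restricted execution environment — can be sketched as follows. The risk tiers and permission sets below are illustrative assumptions; the patent leaves them to be preset by the user or the manufacturer.

```python
# Hypothetical mapping from risk level to a restricted execution environment:
# higher risk yields a narrower set of allowed operations, and the highest
# risk is refused outright (S2042) with prompt information.

PERMISSIONS_BY_RISK = {
    "low":    {"launch_app", "read_app_data", "use_display", "use_speaker"},
    "medium": {"launch_app", "use_display", "use_speaker"},
    "high":   None,  # operation not allowed at all; prompt instead
}

def create_restricted_environment(risk_level):
    allowed = PERMISSIONS_BY_RISK.get(risk_level)
    if allowed is None:
        return None  # S2042: refuse and output prompt information
    return allowed   # S2043: record the operations the environment permits

env = create_restricted_environment("medium")
print(sorted(env))
print(create_restricted_environment("high"))
```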
  • FIG. 7C shows the user interface 72 displayed after the electronic device 100 creates a restricted execution environment.
  • the electronic device 100 is playing the video delivered by the electronic device 300 , and an unlock control 702 is displayed.
  • the function of the unlocking control 702 is the same as that of the unlocking control 503 in FIG. 5E and FIG. 5F , and relevant descriptions may be referred to.
  • the unlock control 702 may also be referred to as a first control.
  • the electronic device 100 executes an operation corresponding to the second operation instruction in the created restricted execution environment.
  • S205 is similar to S103 in FIG. 4 , and reference may be made to related descriptions.
  • steps S206-S209 refer to optional steps S104-S107 in FIG. 4 .
  • the fourth resource may include one or more resources, which is not limited here.
  • the electronic device 100 may close the restricted execution environment after receiving the operation for closing the startup application corresponding to the second operation instruction.
  • the electronic device 300 may send instruction information to stop screen projection to the electronic device 100, and then the electronic device 100 closes the restricted execution environment.
  • the electronic device no longer decides whether to respond to user operations based on whether it is unlocked, but decides according to the risk level of the operation instructions received across devices, which can realize finer-grained access control and enrich the usage scenarios and range of electronic devices.
  • the electronic device can be triggered to perform operations other than the predefined operations in the locked state without going through cumbersome authentication to unlock the electronic device, so that the user can manipulate the electronic device more freely and conveniently.
  • electronic devices no longer simply divide resources into resources accessible by predefined operations and resources accessible by other operations, but also implement more fine-grained access control for various resources.
  • the embodiments of the present application reduce the difficulty and complexity of screen projection and multi-screen interaction, and can bring better user experience to users.
  • the electronic device 100 , the electronic device 200 , and the electronic device 300 may be referred to as a first device, a second device, and a third device.
  • a weak authentication factor may also be referred to as a first authentication factor, and a strong authentication factor may also be referred to as a second authentication factor.
  • FIG. 8A is a software architecture diagram of another electronic device 100 according to an embodiment of the present application.
  • the electronic device 100 may include the following modules: an operation instruction identification module 801, a weak authentication factor identification module 802, and an access control and execution environment management module 803. Wherein:
  • the operation instruction identification module 801 is configured to acquire a first operation instruction of the electronic device 100 .
  • the operation instruction identification module 801 may be configured to obtain the first operation instruction or the second operation instruction through the above-mentioned first manner. That is, the operation instruction identification module 801 may be configured to receive a user operation carrying a first/second operation instruction, and extract the first/second operation instruction from the user operation.
  • the operation instruction recognition module 801 may include various modules involved when the electronic device 100 obtains the first/second operation instruction through the above-mentioned first method, such as a voice assistant, a microphone, and the like.
  • the operation instruction identification module 801 may be configured to obtain the first/second operation instruction through the above-mentioned second manner. That is, the operation instruction identification module 801 may be configured to receive user operation instruction information sent by other devices to the electronic device 100, and extract the first/second operation instruction from the user operation instruction information. In this case, the operation instruction identification module 801 may include various modules involved when the electronic device 100 obtains the first/second operation instruction through the above-mentioned second manner, such as a wireless communication module, a wired communication module, a voice assistant, and the like.
  • the operation instruction identification module 801 is further configured to determine the operation corresponding to the first/second operation instruction.
  • the weak authentication factor identification module 802 is used to obtain the weak authentication factor of the electronic device 100 .
  • the weak authentication factor identifying module 802 can be used to obtain the weak authentication factor through the above first method. That is, the weak authentication factor identifying module 802 can be configured to receive user operations carrying weak authentication factors, and extract weak authentication factors from the user operations. In this case, the weak authentication factor identification module 802 may include various modules involved when the electronic device 100 obtains the weak authentication factor through the above-mentioned first method, such as a voice assistant, a microphone, a camera, a fingerprint sensor, and the like.
  • the weak authentication factor identification module 802 can be used to obtain the weak authentication factor through the above-mentioned second method. That is, the weak authentication factor identification module 802 can be configured to receive user operation instruction information sent by other devices to the electronic device 100, and extract the weak authentication factor from the user operation instruction information.
  • the weak authentication factor identification module 802 may include various modules involved when the electronic device 100 obtains the weak authentication factor through the above-mentioned second method, such as a wireless communication module, a mobile communication module, a voice assistant, and the like.
  • the weak authentication factor identifying module 802 is also used to determine the security level of the weak authentication factor. After the weak authentication factor identification module 802 acquires the weak authentication factor of the electronic device 100, it can also generate an authentication token (token).
  • the authentication token indicates the security level of the weak authentication factor, and may also indicate the authentication method, the validity time of the weak authentication factor, and so on.
  • the operation instruction identification module 801 and the weak authentication factor identification module 802 respectively send the operation corresponding to the first operation instruction and the authentication token to the access control and execution environment management module 803.
  • the authentication token can be used by the access control and execution environment management module 803 to check the validity.
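One possible shape of the authentication token described above — carrying the security level, the authentication method, and the validity time of the weak authentication factor — is sketched below. The field names and the validity check are assumptions made for illustration; the patent does not specify the token format.

```python
# Hypothetical token generated by the weak authentication factor
# identification module 802 and validity-checked by module 803.
import time

def make_token(security_level, method, valid_seconds, now=None):
    now = time.time() if now is None else now
    return {
        "security_level": security_level,   # e.g. 1 (low) .. 3 (high)
        "auth_method": method,              # e.g. "voiceprint"
        "expires_at": now + valid_seconds,  # validity time of the factor
    }

def token_is_valid(token, now=None):
    now = time.time() if now is None else now
    return now < token["expires_at"]

token = make_token(security_level=1, method="voiceprint",
                   valid_seconds=30, now=0)
print(token_is_valid(token, now=10))  # still within the validity time
print(token_is_valid(token, now=40))  # validity time has elapsed
```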
  • the access control and execution environment management module 803 is configured to determine whether the operation corresponding to the first operation instruction is allowed to be executed based on the risk level of the operation corresponding to the first operation instruction and the security level of the weak authentication factor. In some embodiments, the access control and execution environment management module 803 is configured to determine whether the operation corresponding to the second operation instruction is allowed to be executed according to the risk level of the operation corresponding to the second operation instruction. If the judgment result is yes, the access control and execution environment management module 803 is configured to create a restricted execution environment, and execute the operation corresponding to the first/second operation instruction in the restricted execution environment.
  • the electronic device 100 may further include a distributed scheduling module 804, and the distributed scheduling module 804 is used to obtain the first/second operation instruction through the above-mentioned third manner, or to obtain the weak authentication factor through the above-mentioned third manner.
  • the distributed scheduling module 804 may include a wireless communication module, a mobile communication module and the like.
  • FIG. 8B exemplarily shows the structure of the access control and execution environment management module 803 of the electronic device 100.
  • the access control and execution environment management module 803 may include: an access control module 8031 , an execution environment management module 8032 , a policy management module 8033 , an application life cycle management module 8034 , and a resource management module 8035 .
  • the access control module 8031 is configured to pass the operation corresponding to the first/second operation instruction, that is, the information of the accessed resource, to the execution environment management module 8032 .
  • the execution environment management module 8032 can be used to determine whether the operation corresponding to the first/second operation instruction is allowed to be executed, and if so, set the flag of the restricted execution environment, and configure the operation policy of the restricted execution environment in the policy management module 8033 .
  • the policy management module 8033 is used to configure the operating policy of the restricted execution environment, that is, to record the operations allowed in the restricted execution environment, that is, to record which specific access operations are allowed to be performed on which resources or which type of resources.
  • the resource management module 8035 may include: an application information management module, a data management module, and a rights management module.
  • the application information management module stores and manages the information of all applications, especially records the information of the applications that are allowed to be launched or accessed in the current restricted execution environment.
  • the data management module can be used to classify and hierarchically manage the data in the electronic device, and set the data level or category that is allowed to be accessed in the restricted execution environment.
  • electronic devices can classify various types of data according to their characteristics, such as data that can be classified into different security levels.
  • the rights management module is used to manage the rights of various operations in the electronic device, and set the rights allowed in the restricted execution environment.
  • the application life cycle management module 8034 is used to manage the life cycle of each application in the electronic device 100, such as starting or destroying and so on.
  • when the application lifecycle management module 8034 is about to start an application or access data in response to a user operation, it first confirms with the application information management module whether the current restricted execution environment allows the application to be started, or confirms with the data management module whether the current restricted execution environment allows access to the data; if so, the application can be launched or the data accessed. After the application lifecycle management module 8034 starts the application, if the application needs to perform certain operations, the module needs to confirm with the permission management module whether the current restricted execution environment has the corresponding permission, and if so, the operation is performed.
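The confirmation flow just described — the application lifecycle management module 8034 consulting the application information, data, and permission management submodules of the resource management module 8035 before starting an application or performing an operation — can be sketched as follows. The module names follow the text, but the Python classes and their methods are illustrative assumptions.

```python
# Illustrative sketch of modules 8034/8035: lifecycle operations are only
# carried out after the relevant resource-management submodule confirms that
# the current restricted execution environment permits them.

class ResourceManager:  # stands in for module 8035
    def __init__(self, allowed_apps, allowed_data_levels, granted_permissions):
        self.allowed_apps = allowed_apps                # app info management
        self.allowed_data_levels = allowed_data_levels  # data management
        self.granted_permissions = granted_permissions  # rights management

    def app_allowed(self, app):
        return app in self.allowed_apps

    def data_allowed(self, level):
        return level in self.allowed_data_levels

    def permission_granted(self, permission):
        return permission in self.granted_permissions


class AppLifecycleManager:  # stands in for module 8034
    def __init__(self, resources):
        self.resources = resources

    def start_app(self, app):
        # confirm with the application information management module first
        return "started" if self.resources.app_allowed(app) else "denied"

    def perform(self, permission):
        # confirm with the permission management module first
        return "performed" if self.resources.permission_granted(permission) \
               else "denied"


rm = ResourceManager({"navigation"}, {"public"}, {"use_gps"})
lm = AppLifecycleManager(rm)
print(lm.start_app("navigation"))   # allowed in this restricted environment
print(lm.perform("read_contacts"))  # permission not granted
```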
  • the modules shown in FIG. 8A and FIG. 8B can be located at any layer or layers of the software system shown in FIG. 2, which is not limited here.
  • each module shown in FIG. 8A and FIG. 8B is only an example.
  • the electronic device 100 may include more or fewer modules, which is not limited here.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the present application will be produced in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, DSL) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (such as a floppy disk, a hard disk, or a magnetic tape), an optical medium (such as a DVD), or a semiconductor medium (such as a solid state disk (solid state disk, SSD)), etc.
  • all or part of the processes in the foregoing method embodiments may be implemented by a computer program instructing related hardware.
  • the program can be stored in a computer-readable storage medium.
  • When the program is executed, the processes of the foregoing method embodiments may be performed.
  • the aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.
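The launch-time and run-time checks performed by the application lifecycle management module described above can be sketched as follows. This is a minimal illustrative sketch only; the class, module, app, and permission names are assumptions for illustration, not the actual module interfaces:

```python
# Hypothetical sketch of the checks made before starting an app or
# accessing data inside a restricted execution environment.
class RestrictedEnvironment:
    def __init__(self, allowed_apps, allowed_data, granted_permissions):
        self.allowed_apps = set(allowed_apps)                # application information management module's view
        self.allowed_data = set(allowed_data)                # data management module's view
        self.granted_permissions = set(granted_permissions)  # permission management module's view

    def may_start_app(self, app):
        return app in self.allowed_apps

    def may_access_data(self, item):
        return item in self.allowed_data

    def has_permission(self, perm):
        return perm in self.granted_permissions

def launch_app(env, app, needed_permissions=()):
    """Start `app` only if the restricted environment allows it and every
    permission the app later needs has been granted."""
    if not env.may_start_app(app):
        return "denied: app not allowed"
    for perm in needed_permissions:
        if not env.has_permission(perm):
            return f"denied: missing permission {perm}"
    return "started"

env = RestrictedEnvironment(
    allowed_apps={"navigation"},
    allowed_data={"home_address"},
    granted_permissions={"use_gps"})
```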


Abstract

本申请提供了访问控制方法及相关装置。在该方法中,电子设备处于锁定状态时,在获取到操作指令和未达到解锁要求的身份认证信息后,可以判断是否允许访问该操作指令请求访问的资源,若是,则响应该操作指令访问对应的资源。实施该方法,用户不必通过繁琐的认证来解锁电子设备,即可触发电子设备在锁定状态下访问对应的资源,使得用户能够更加自如、方便地操控电子设备。此外,电子设备不再根据是否解锁来决定是否执行某些操作,这样可以实现更细粒度的访问控制,丰富了电子设备的使用场景和使用范围。

Description

访问控制方法及相关装置
本申请要求于2021年06月29日提交中国专利局、申请号为202110742228.8、申请名称为“访问控制方法及相关装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及终端及身份认证技术领域,尤其涉及访问控制方法及相关装置。
背景技术
电脑、手机等电子设备为了安全以及防止误操作,可以设定锁定状态。电子设备处于锁定状态时,需要用户输入预定的身份认证信息,如预设指纹、人脸或密码等,才能解锁并进入解锁状态。电子设备的大部分功能都只能在解锁状态下调用。
目前,用户需要输入精确的身份认证信息,例如近距离的人脸、和预设指纹完全一致的指纹等等,从而触发电子设备解锁。此外,用户也不能使用一些精确度不那么高的认证方式来解锁设备,例如声纹认证等。这就导致了用户需要通过繁琐的认证操作,甚至多次的认证操作才能解锁设备,电子设备的使用过程丧失了便捷性。
发明内容
本申请提供了访问控制方法及相关装置,可以让用户不必通过繁琐的认证来解锁电子设备,即可自如、方便地操控电子设备。
第一方面,本申请实施例提供了一种基于弱认证因子的访问控制方法,包括:第一设备处于锁定状态时,获取第一操作指令和第一认证因子;第一操作指令用于请求访问第一设备的第一资源,第一认证因子包括未达到第一设备的解锁要求的身份认证信息,达到第一设备的解锁要求的身份认证信息用于将第一设备由锁定状态切换至解锁状态;第一设备根据第一操作指令,和,第一认证因子,确定第一设备允许访问的资源;如果第一设备允许访问的资源包括第一资源,则第一设备响应于第一操作指令,访问第一资源。
实施第一方面提供的方法，电子设备不再仅仅根据是否解锁来决定是否响应执行对应的操作，而是根据操作指令和弱认证因子针对各类资源实现更细粒度的访问控制，丰富了电子设备的使用场景和使用范围。对于用户来说，不必通过繁琐的认证来解锁电子设备，即可触发电子设备执行一些操作，使得用户能够更加自如、方便地操控电子设备。
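作为示意，上述第一方面的判断流程（锁定状态下获取操作指令与弱认证因子，确定允许访问的资源，再决定是否响应）可以用如下Python草图概括。其中的资源名称、风险等级与允许规则均为便于理解的假设性示例，并非本申请的限定实现：

```python
# 示意性草图：锁定状态下根据第一操作指令与第一认证因子
# 决定是否访问第一资源（资源划分与规则均为假设）。
RISK = {"导航类应用": "低", "图库应用": "高"}   # 资源 -> 访问该资源的风险等级
ALLOWED = {                                      # (风险等级, 认证因子安全等级) -> 是否允许
    ("低", "低"): True, ("低", "高"): True,
    ("高", "低"): False, ("高", "高"): True,
}

def handle_instruction(resource, factor_level, locked=True):
    """处理请求访问 resource 的操作指令；factor_level 为弱认证因子的安全等级。"""
    if not locked:                      # 解锁状态下可直接访问
        return "访问:" + resource
    risk = RISK.get(resource, "高")     # 未登记的资源按高风险处理
    if ALLOWED[(risk, factor_level)]:
        return "访问:" + resource
    return "拒绝"

print(handle_instruction("导航类应用", "低"))  # 访问:导航类应用
print(handle_instruction("图库应用", "低"))    # 拒绝
```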
结合第一方面,在一些实施方式中,第一设备可以根据访问第一资源的风险等级,确定第一设备允许访问的资源;访问第一资源的风险等级越高,则第一设备允许访问的资源越少。其中,第一资源的隐私度越高,访问第一资源的风险等级越高。这样可以充分考虑资源访问的风险,避免出现数据泄露等情况。
结合第一方面，在一些实施方式中，第一设备可以根据第一认证因子的安全等级，确定第一设备允许访问的资源；第一认证因子的安全等级越低，则第一设备允许访问的资源越少。其中，第一认证因子对应的身份认证方式的认证能力等级ACL越高，或者，第一认证因子和达到第一设备的解锁要求的身份认证信息的匹配度越高，或者，第一设备获取第一认证因子的方式越可靠，则第一认证因子的安全等级越高。这样可以充分考虑当前认证因子的可靠程度，避免出现数据泄露等情况。
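作为示意，第一认证因子的安全等级可以由其身份认证方式的ACL与匹配度共同估算。以下Python草图中的等级划分、取值范围与阈值均为假设，并非本申请的限定实现：

```python
# 示意性草图：由ACL与匹配度估算第一认证因子的安全等级。
def factor_security_level(acl, match_degree):
    """acl: 1-4，数值越高认证能力越强；match_degree: 0-1 的匹配度（均为假设的取值范围）。"""
    score = acl * match_degree   # ACL越高或匹配度越高，得分越高
    if score >= 3.0:
        return "高"
    if score >= 1.5:
        return "中"
    return "低"
```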
结合第一方面,在一些实施方式中,第一资源包括:预先定义的第一设备在锁定状态下不能访问的资源。这里,锁定状态下能访问的资源为基础资源或常用资源,例如相机应用、电筒、蓝牙等等。而锁定状态下不能访问的资源可以包括涉及用户隐私数据的资源,例如照片、浏览记录等等。锁定状态下能访问的资源可以由第一设备预先定义。
结合第一方面,在一些实施方式中,第一操作指令包括以下任意一项:语音携带的语义、手势、脸部表情、形体姿态。
结合第一方面,在一些实施方式中,第一设备可以通过以下任意一种方式来获取第一操作指令:
第一设备采集到语音或图像,识别出语音或图像中携带的第一操作指令;
第一设备接收到第二设备发送的语音或图像,识别出语音或图像中携带的第一操作指令;或者,
第一设备接收到第二设备发送的第一操作指令。
结合第一方面,在一些实施方式中,身份认证信息包括以下任意一项或多项:密码、图形或者生物特征。其中,生物特征分为身体特征和行为特征两类。身体特征包括:人脸、声纹、指纹、掌型、视网膜、虹膜、人体气味、脸型、心率、脱氧核糖核酸(deoxyribo nucleic acid,DNA)。行为特征包括:签名、形体姿态(如行走步态)等。
结合第一方面,在一些实施方式中,未达到第一设备的解锁要求的身份认证信息,可包括以下任意一项或多项:
1.低于第一认证方式所需标准的身份认证信息。
第一认证方式为用于将第一设备由锁定状态切换至解锁状态的身份认证方式。
在一些实施例中,第一认证方式为认证能力等级ACL高于第三值的身份认证方式,或者,第一认证方式由第一设备预先设置。例如,第一认证方式可包括密码认证、图形认证、指纹认证以及人脸认证等等。
低于第一认证方式所需标准的身份认证信息可包括:和预存的第一生物特征的匹配度低于第一值的生物特征,第一生物特征为第一认证方式对应的身份认证信息。第一值可以预先设定。
2.符合第二认证方式所需标准的身份认证信息。
第二认证方式为第一认证方式之外的身份认证方式。
在一些实施例中，第二认证方式可以为认证能力等级ACL较低的身份认证方式，或者，第二认证方式由第一设备预先设置。例如，第二认证方式可包括声纹认证、心率认证、形体姿态认证等等。
符合第二认证方式所需标准的身份认证信息包括：和预存的第二生物特征的匹配度达到第二值的生物特征，第二生物特征为第二认证方式对应的身份认证信息。第二值可以预先设定。
结合第一方面,在一些实施方式中,第一设备可以通过以下任意一项或多项来获取第一认证因子:
第一设备采集到语音或图像,识别出语音或图像中携带的第一认证因子;
第一设备接收到第二设备发送的语音或图像,识别出语音或图像中携带的第一认证因子;或者,
第一设备接收到第二设备发送的第一认证因子。
结合第一方面,在一些实施方式中,第一设备还可以同时获取到第一操作指令和第一认证因子。例如,第一设备可以采集到语音,识别出语音的语义,将语义确定为第一操作指令;识别出语音携带的声纹,将声纹确定为第一认证因子。或者,第一设备可以采集到的图像,识别出图像中的手势、脸部表情、形体姿态,将图像中的手势、脸部表情、形体姿态确定为第一操作指令;识别图像中携带的生物特征,将生物特征确定为第一认证因子。
结合第一方面,在一些实施方式中,第一设备响应于第一操作指令,访问第一资源之后,第一设备还可以接收到用户操作,用户操作用于请求访问第一设备的第二资源。如果第一设备允许访问的资源包括第二资源,则第一设备响应于用户操作,访问第二资源;如果第一设备允许访问的资源不包括第二资源,则第一设备拒绝响应于用户操作。
通过上一实施方式,可以将第一设备能够执行的操作限制在一定范围之内,这样可以避免权限扩大化,保护第一设备的数据安全。
结合第一方面,在一些实施方式中,第一设备响应于第一操作指令,访问第一资源之后,还可以获取到第二认证因子,第二认证因子包括达到第一设备的解锁要求的身份认证信息,或者,预定数量的第一认证因子;第一设备根据第二认证因子,由锁定状态切换为解锁状态。当第二认证因子为预定数量的第一认证因子时,用户可以通过多次输入第一认证因子,来完成身份认证,触发电子设备解锁。
结合上一实施方式,第一设备确定第一设备允许访问的资源之后,第一设备获取到第二认证因子之前,可以显示第一控件,检测到作用于第一控件的操作;并响应作用于第一控件的操作,开始检测身份认证信息。也就是说,用户可以主动触发第一设备开始检测身份认证信息,从而获取第二认证因子并解锁。这样用户可以根据自身需求来决定是否解锁,还可节约第一设备的功耗。
结合第一方面,在一些实施方式中,第一设备确定第一设备允许访问的资源之后,可以创建限制执行环境,在限制执行环境中,第一设备允许访问确定允许访问的资源。第一设备可以响应于第一操作指令,在限制执行环境中,访问第一资源。
在上一实施方式中,具体创建限制执行环境时,第一设备可以记录确定的允许执行的各项操作。也就是说,第一设备记录允许第一设备针对哪些资源或哪一类资源执行哪些具体的访问操作。
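前文提到，第二认证因子可以为“预定数量的第一认证因子”，即用户多次输入弱认证因子也可完成身份认证并触发解锁。该累计过程可用如下Python草图粗略表示，其中的次数阈值与返回值均为假设性示例：

```python
# 示意性草图：累计获取到的有效第一认证因子（弱认证因子）
# 达到预定数量时触发解锁（阈值为假设值）。
class UnlockByWeakFactors:
    def __init__(self, required=3):
        self.required = required   # 预定数量（此处假设为3次）
        self.count = 0
        self.unlocked = False

    def feed(self, factor_ok):
        """每获取到一个第一认证因子调用一次；factor_ok 表示该因子是否有效。"""
        if factor_ok:
            self.count += 1
        if self.count >= self.required:
            self.unlocked = True   # 由锁定状态切换为解锁状态
        return self.unlocked
```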
第二方面,本申请实施例提供了一种跨设备的访问控制方法,包括:第一设备处于锁定状态时,接收到第三设备发送的第二操作指令;第二操作指令用于请求访问第一设备的第三资源;第一设备根据第二操作指令,确定第一设备允许访问的资源;如果第一设备允许访问的资源包括第三资源,则第一设备响应于第二操作指令,访问第三资源。
实施第二方面的方法，电子设备不再仅仅根据是否解锁来决定是否响应执行对应的操作，而是根据操作指令针对各类资源实现更细粒度的访问控制，丰富了电子设备的使用场景和使用范围。对于用户来说，不必通过繁琐的认证来解锁电子设备，即可触发电子设备执行一些操作，使得用户能够更加自如、方便地操控电子设备。
结合第二方面，在一些实施方式中，第一设备可以根据访问第三资源的风险等级，确定第一设备允许访问的资源；访问第三资源的风险等级越高，则第一设备允许访问的资源越少。其中，第三资源的隐私度越高，访问第三资源的风险等级越高。这样可以充分考虑资源访问的风险，避免出现数据泄露等情况。
结合第二方面,在一些实施方式中,第三资源包括:预先定义的第一设备在锁定状态下不能访问的资源。这里,第三资源和第一方面的第一资源相同,可参考第一方面的相关描述。
结合第二方面，在一些实施方式中，第二操作指令包括以下任意一项：语音携带的语义、手势、脸部表情、形体姿态。
结合第二方面，在一些实施方式中，第二操作指令为投屏请求。这样，针对投屏、多屏互动等数据共享场景，一个设备将数据共享至另一设备时，该另一设备无需解锁。相对于每次共享数据时都需要先解锁另一设备的方案，本申请实施例降低了投屏、多屏互动的难度和复杂性，可以给用户带来更好的使用体验。
结合第二方面,在一些实施方式中,第一设备响应于第二操作指令,访问第三资源之后,可以接收到用户操作,用户操作用于请求访问第一设备的第四资源。如果第一设备允许访问的资源包括第四资源,则第一设备响应于用户操作,访问第四资源;如果第一设备允许访问的资源不包括第四资源,则第一设备拒绝响应于用户操作。
通过上一实施方式,可以将第一设备能够执行的操作限制在一定范围之内,这样可以避免权限扩大化,保护第一设备的数据安全。
结合第二方面,在一些实施方式中,第一设备响应于第二操作指令,访问第三资源之后,可以获取到第二认证因子,第二认证因子包括达到第一设备的解锁要求的身份认证信息,或者,预定数量的第一认证因子;第一设备根据第二认证因子,由锁定状态切换为解锁状态。当第二认证因子为预定数量的第一认证因子时,用户可以通过多次输入第一认证因子,来完成身份认证,触发电子设备解锁。
结合上一实施方式,第一设备确定第一设备允许访问的资源之后,获取到第二认证因子之前,可以显示第一控件;检测到作用于第一控件的操作;响应作用于第一控件的操作,开始检测身份认证信息。也就是说,用户可以主动触发第一设备开始检测身份认证信息,从而获取第二认证因子并解锁。这样用户可以根据自身需求来决定是否解锁,还可节约第一设备的功耗。
结合第二方面,在一些实施方式中,第一设备确定第一设备允许访问的资源之后,可以创建限制执行环境,在限制执行环境中,第一设备允许访问确定允许访问的资源。第一设备可以响应于第二操作指令,在限制执行环境中,访问第三资源。
在上一实施方式中,具体创建限制执行环境时,第一设备可以记录确定的允许执行的各项操作。也就是说,第一设备记录允许第一设备针对哪些资源或哪一类资源执行哪些具体的访问操作。
第三方面,本申请实施例提供了一种电子设备,包括:存储器、一个或多个处理器;存储器与一个或多个处理器耦合,存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,一个或多个处理器调用计算机指令以使得电子设备执行如第一方面或第一方面任意一种实施方式的方法。
第四方面,本申请实施例提供了一种电子设备,包括:存储器、一个或多个处理器;存储器与一个或多个处理器耦合,存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,一个或多个处理器调用计算机指令以使得电子设备执行如第二方面或第二方面任意一种实施方式的方法。
第五方面,本申请实施例提供了通信系统,包括第一设备、第二设备,第一设备用于执行如第一方面或第一方面任意一种实施方式的方法。
第六方面,本申请实施例提供了通信系统,包括第一设备、第三设备,第一设备用于执行如第二方面或第二方面任意一种实施方式的方法。
第七方面,本申请实施例提供了一种计算机可读存储介质,包括指令,当指令在电子设备上运行时,使得电子设备执行如第一方面或第一方面任意一种实施方式的方法。
第八方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行第二方面或第二方面任意一种实施方式的方法。
第九方面,本申请实施例提供了一种计算机可读存储介质,包括指令,当指令在电子设备上运行时,使得电子设备执行如第一方面或第一方面任意一种实施方式的方法。
第十方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行第二方面或第二方面任意一种实施方式的方法。
实施本申请提供的技术方案,电子设备处于锁定状态时,在获取到操作指令和未达到解锁要求的身份认证信息后,可以判断是否允许访问该操作指令请求访问的资源,若是,则响应该操作指令访问对应的资源。实施该方法,用户不必通过繁琐的认证来解锁电子设备,即可触发电子设备在锁定状态下访问对应的资源,使得用户能够更加自如、方便地操控电子设备。此外,电子设备不再根据是否解锁来决定是否执行某些操作,这样可以实现更细粒度的访问控制,丰富了电子设备的使用场景和使用范围。
附图说明
图1为本申请实施例提供的电子设备的硬件结构示意图;
图2为本申请实施例提供的电子设备的软件结构示意图;
图3本申请实施例提供的通信系统的结构图;
图4为本申请实施例提供的基于弱认证因子的访问控制方法的流程图;
图5A为本申请实施例提供的电子设备100处于锁定状态时的用户界面;
图5B-图5D为本申请实施例提供的电子设备100所处的场景;
图5E-图5G为本申请实施例提供的电子设备100创建限制执行环境后显示的用户界面;
图6为本申请实施例提供的跨设备的访问控制方法的流程图;
图7A-图7C为跨设备的访问控制方法涉及的一组用户界面;
图8A及图8B为本申请实施例提供的电子设备100的软件结构示意图。
具体实施方式
下面将结合附图对本申请实施例中的技术方案进行清楚、详尽地描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;文本中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况,另外,在本申请实施例的描述中,“多个”是指两个或多于两个。
以下，术语“第一”、“第二”仅用于描述目的，而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此，限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征，在本申请实施例的描述中，除非另有说明，“多个”的含义是两个或两个以上。
本申请以下实施例中的术语“用户界面(user interface,UI)”,是应用程序或操作系统与用户之间进行交互和信息交换的介质接口,它实现信息的内部形式与用户可以接受形式之间的转换。用户界面是通过java、可扩展标记语言(extensible markup language,XML)等特定计算机语言编写的源代码,界面源代码在电子设备上经过解析,渲染,最终呈现为用户可以识别的内容。用户界面常用的表现形式是图形用户界面(graphic user interface,GUI),是指采用图形方式显示的与计算机操作相关的用户界面。它可以是在电子设备的显示屏中显示的文本、图标、按钮、菜单、选项卡、文本框、对话框、状态栏、导航栏、Widget等可视的界面元素。
在本申请实施例中,电子设备具备两种状态:锁定状态,和,解锁状态。
在锁定状态下,电子设备仅能执行预定义的操作,而不能执行预定义操作之外的其他操作。锁定状态可用于避免用户的误操作,或者阻止电子设备执行预定义操作之外的其他操作。
在本申请实施例中,电子设备执行某项操作具体是指,电子设备针对某项资源进行访问操作,该访问操作例如可包括读取、添加、删除、写入、修改、执行等操作。
在本申请实施例中,电子设备中的资源可包括以下一项或多项:电子设备的软件资源、硬件资源、外设或外设的资源等等。其中:
硬件资源和电子设备配置的硬件相关,例如可包括电子设备具备的摄像头、传感器、音频设备、显示屏、马达、闪光灯等等。
软件资源和电子设备配置的软件相关,例如可包括电子设备安装的应用程序(application,APP)或服务组件、具备的内存资源、计算能力(例如美颜算法能力、音视频编解码能力)、网络能力、设备连接能力、设备发现能力、数据传输能力等等。软件资源可以包括系统资源,也可以包括第三方资源,这里不做限定。
外设是指和电子设备连接的,用于对数据和信息进行传输、转送和存储等作用的设备。外设例如可包括电子设备的配件设备,如鼠标、外接显示屏、蓝牙耳机、键盘,以及,该电子设备管理的智能手表、智能手环等等。外设的资源可包括硬件资源和软件资源,硬件资源和软件资源可参考前文相关描述。
在本申请实施例中,预定义的操作可以由电子设备的生产商预先定义,无法修改。电子设备的生产商可以包括该电子设备的制造商、供应商、提供商等。制造商可以是指以自制或采购的零件及原料来加工制造电子设备的生产厂商。供应商可以是指提供该电子设备的整机、原料或零件的厂商。例如,华为“Mate”系列手机的制造商为华为技术有限公司。
该预定义的操作不涉及用户的隐私数据,仅包括一些基础操作或常用操作。该预定义的操作例如可包括启动或关闭一些基础应用,如启动相机应用、打开电筒、打开计算器、扫描二维码、关闭/开启蓝牙、关闭/开启蜂窝信号、开启/关闭无线保真(wireless fidelity,Wi-Fi)信号等等,并且启动相机应用后电子设备不能通过相机应用进入图库或相册。
该预定义的操作之外的其他操作可以包括：涉及用户隐私数据的操作，以及，部分不涉及用户隐私数据的操作。用户的隐私数据可包括：各个应用中存储的用户数据，如用户的照片、视频、音频、联系人信息、浏览记录、购物记录，等等。涉及用户隐私数据的操作例如可包括启动或关闭图库、相册、通讯录、购物类应用、即时通讯类应用、备忘录、通过后台、Wi-Fi、USB、蓝牙等分享用户数据，等等。部分不涉及用户隐私的操作例如可包括：启动导航类应用但不读取用户数据、启动浏览器但不读取浏览记录、启动视频类应用但不读取浏览记录等等。其中，导航类应用又可称为地图应用等其他名词。
在解锁状态下,电子设备除了可以执行锁定状态下的预定义操作,还可以执行该预定义操作之外的其他操作。例如,电子设备在解锁状态下可以执行涉及用户隐私数据的操作,如启动图库或相册、启动购物类应用并查看购物记录、启动即时通讯类应用、查看备忘录、查看导航数据、查看浏览器的浏览记录等等。
在本申请实施例中,锁定状态还可以被称为其他名词,例如锁屏状态等。类似的,解锁状态也可以被称为其他名词,这里不做限定。为了描述简便,后续将统一使用锁定状态、解锁状态来描述。
电子设备可以预设多种身份认证方式,并可以在锁定状态下,接收预设的身份认证方式所对应的身份认证信息,在确定输入的身份认证信息满足身份认证标准后,解除锁定并进入解锁状态。
身份认证是用于确认用户身份的技术。目前,身份认证方式可以包括:密码认证、图形认证、生物特征认证。不同的用户可以通过不同的身份认证信息,来进行区分。具体的,电子设备可以预存密码、图形或者生物特征,在用户输入该预存的密码或图形,或者,输入和预存的生物特征匹配度达到一定值的生物特征时,电子设备可以确认该用户为之前预存信息的用户。该匹配度的值可以预先设定。该匹配度的值越高,生物特征认证方式的准确度也就越高。
密码可以是由数字、字母、符号组成的字符串。
生物特征分为身体特征和行为特征两类。身体特征包括：人脸、声纹、指纹、掌型、视网膜、虹膜、人体气味、脸型、血压、血氧、血糖、呼吸率、心率、一个周期的心电波形、脱氧核糖核酸（deoxyribo nucleic acid，DNA）等。行为特征包括：签名、形体姿态（如行走步态）等。
由于电子设备提取各类信息,如密码、图形以及各类生物特征的准确度不同,上述各个身份认证方式都具有对应的认证能力等级(authentication capability level,ACL)。ACL越高,使用该认证方式来进行身份认证的结果的可信度也就越高。电子设备提取信息的准确度,取决于当前的技术发展情况,例如电子设备提取密码、指纹的准确度非常高,但提取声纹、签名的准确度较低。针对同一种信息,不同电子设备使用的算法不同时,不同电子设备使用该身份认证方式提取信息的准确度也不同。
客观来说,可以根据使用身份认证方式时的错误接受率(false accept rate,FAR)、错误拒绝率(false reject rate,FRR)、欺骗接受率(spoof accept rate,SAR),来判断该身份认证方式的ACL。FAR越低,FRR越低,SAR越低,则ACL越高。例如,密码认证/图形认证、人脸认证/指纹认证、声纹认证、形体姿态认证的ACL依次降低。
ACL可以分为多个不同粒度的等级,这里不做限定。例如,ACL可以分为四个等级。
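作为示意，可以按FAR、FRR、SAR三项指标粗略评定一种身份认证方式的ACL。以下Python草图沿用上文“ACL可以分为四个等级”的例子，取三项指标中最差的一项作为判定依据；各阈值数值均为假设，并非本申请的限定取值：

```python
# 示意性草图：由FAR/FRR/SAR粗略评定身份认证方式的ACL（1-4级，阈值为假设）。
def estimate_acl(far, frr, sar):
    """far/frr/sar 越低，认证结果可信度越高，ACL越高。"""
    worst = max(far, frr, sar)   # 三项指标中最差的一项决定等级上限
    if worst < 0.0001:
        return 4
    if worst < 0.001:
        return 3
    if worst < 0.01:
        return 2
    return 1
```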
为了保证数据安全,电子设备通常仅会使用ACL较高的身份认证方式来解锁,而不会采用ACL较低的身份认证方式来解锁。
为了便于描述，后续将电子设备用于解锁的身份认证方式，称为第一认证方式；将电子设备用于解锁的身份认证方式之外的其他身份认证方式，称为第二认证方式。第一认证方式可以由该电子设备或该电子设备的生产商自主设置，这里不做限定。例如，电子设备可以设定使用密码认证、图形认证、指纹认证以及人脸认证来解锁，而不使用声纹认证、心率认证、形体姿态认证来解锁。
电子设备在锁定状态下,可以接收用户输入的身份认证信息,在确定输入的身份认证信息符合第一认证方式的标准后,解锁并进入解锁状态。为了输入符合标准的身份认证信息,用户需要执行较为繁琐的操作。例如,用户需要严格输入预设的密码或图形、在一定距离内将面部对准电子设备的前置摄像头并保持不动、使用干净的手指按压指纹识别传感器所在位置并保持不动,等等。也就是说,用户通过繁琐的认证操作,甚至多次的认证操作才能解锁设备,浪费了大量时间以及电子设备的功耗。
此外,越来越多的用户使用语音指令、形体姿势等来操控电子设备,在驾车、做饭、锻炼等场景时无需触摸便可操控电子设备,带来了极大地便捷性。但由于声纹认证、形体姿势认证等认证方式的ACL较低,电子设备不能通过语音、形体姿势或者远程手势直接解锁,而需要使用其他较高ACL的身份认证方式来解锁。这导致语音指令、形体姿势、远程手势的便捷性丧失,对用户自如、方便地操控电子设备带来了障碍。
可以看出,如果用户想要触发电子设备执行上述锁定状态下预定义操作之外的其他操作,则需要通过繁琐的方式来输入符合较高ACL的身份认证方式标准的身份认证信息,从而解锁设备,这降低了电子设备的便捷性,对于用户使用电子设备带来了障碍。
本申请以下实施例提供了基于弱认证因子的访问控制方法。在该方法中,电子设备处于锁定状态时,在获取到第一操作指令和弱认证因子后,可以根据该第一操作指令对应操作的风险等级,和,该弱认证因子的安全等级,创建限制执行环境(limited execution environment),并在该限制执行环境中响应第一操作指令,执行对应的操作。
第一操作指令,和,其对应的操作之间的对应关系,由电子设备预先设置。第一操作指令可以是电子设备直接接收到的,也可以是其他设备获取后发送给该电子设备的。第一操作指令的具体内容可参考后续方法实施例的详细描述,这里暂不赘述。
在一些实施例中,第一操作指令用于请求电子设备执行上述锁定状态下预定义操作之外的其他操作。关于锁定状态、预定义操作、预定义操作之外的其他操作,具体可以参考前文相关描述。
弱认证因子是指未达到电子设备解锁要求的身份认证信息。弱认证因子可包括以下两类:1.低于第一认证方式所需标准的身份认证信息。2.符合第二认证方式所需标准的身份认证信息。弱认证因子可以是电子设备直接采集到的,也可以是其他设备采集后发送给该电子设备的。弱认证因子的具体内容可参考后续方法实施例的详细描述,这里暂不赘述。
在一些实施例中,电子设备可以分别接收到第一操作指令,和,弱认证因子。
在一些实施例中,电子设备可以同时接收到第一操作指令,和,弱认证因子。
限制执行环境是指受限制的执行环境(limited execution environment)。执行环境可以包括硬件环境和软件环境。执行环境可以是沙箱，也可以是包含多个函数的函数域。电子设备在限制执行环境中，只能执行指定的部分操作，而不能执行该部分操作以外的其他操作。相当于，电子设备在限制执行环境中，只能访问电子设备的部分资源，而不能访问该部分资源以外的其他资源。本申请实施例中的限制执行环境还可以被称为受限执行环境、限制运行环境、限制域等等，这里不做限定。
电子设备可以根据第一操作指令对应的操作的风险等级,和,弱认证因子的安全等级来创建限制执行环境。第一操作指令对应的操作的风险等级越低,或者,弱认证因子的安全等级越高,电子设备创建的限制执行环境中能访问的资源越多。操作的风险等级、弱认证因子的安全等级、创建限制执行环境的方式等,具体可参考后续方法实施例中的相关描述。
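作为示意，限制执行环境中允许访问的资源集合可以按操作风险等级与弱认证因子安全等级共同确定：风险越低、安全等级越高，集合越大。以下Python草图中的资源划分与判定分支均为假设性示例，并非本申请的限定实现：

```python
# 示意性草图：按(操作风险等级, 弱认证因子安全等级)确定
# 限制执行环境中允许访问的资源集合（划分均为假设）。
BASE = {"相机应用", "电筒", "蓝牙"}  # 锁定状态下也可访问的基础资源

def allowed_resources(op_risk, factor_security):
    """op_risk / factor_security 取值为 "低"/"中"/"高"（划分为假设）。"""
    if op_risk == "高" and factor_security == "低":
        return set(BASE)                                       # 风险高且认证弱：仅基础资源
    if op_risk == "低" and factor_security == "高":
        return BASE | {"导航类应用", "音乐类应用", "图库应用"}  # 风险低且认证强：资源最多
    return BASE | {"导航类应用"}                               # 其余情况：适度放开
```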
通过上述基于弱认证因子的访问控制方法,电子设备不再仅仅根据是否解锁来决定是否响应执行对应的操作,而是根据操作指令的风险等级和弱认证因子的安全等级来决定是否执行该操作,这样可以实现更细粒度的访问控制,丰富了电子设备的使用场景和使用范围。对于用户来说,不必通过繁琐的认证来解锁电子设备,即可触发电子设备在锁定状态下执行预定义操作之外的其他操作,使得用户能够更加自如、方便地操控电子设备。此外,电子设备不再将资源简单地分为预定义操作可访问的资源和其他操作可访问的资源,还针对各类资源实现了更加细粒度的访问控制。
本申请实施例还提供了跨设备的访问控制方法,该方法应用于包含两个电子设备的通信系统。在该方法中,一个电子设备可以向另一个电子设备发送第二操作指令,另一个电子设备可以根据该第二操作指令对应操作的风险等级,创建限制执行环境(limited execution environment),并在该限制执行环境中响应该第二操作指令,执行对应的操作。
第二操作指令,和,其对应的操作之间的对应关系,由电子设备预先设置。该第二操作指令为其他电子设备发送的操作指令,例如可以为投屏请求等等。第二操作指令的具体内容可参考后续方法实施例的详细描述,这里暂不赘述。
在一些实施例中,第二操作指令用于请求电子设备执行上述锁定状态下预定义操作之外的其他操作。
第二操作指令对应的操作的风险等级越低,电子设备创建的限制执行环境中能访问的资源越多。
通过上述跨设备访问控制方法,电子设备不再仅仅根据是否解锁来决定是否响应用户操作,而是根据跨设备接收到的操作指令的风险等级来决定是否响应用户操作,这样可以实现更细粒度的访问控制,丰富了电子设备的使用场景和使用范围。对于用户来说,不必通过繁琐的认证来解锁电子设备,即可触发电子设备在锁定状态下执行预定义操作之外的其他操作,使得用户能够更加自如、方便地操控电子设备。此外,电子设备不再将资源简单地分为预定义操作可访问的资源和其他操作可访问的资源,还针对各类资源实现了更加细粒度的访问控制。
在上述两种访问控制方法中,电子设备创建限制执行环境之后,如果电子设备接收到用户操作,并且该用户操作请求执行该限制执行环境允许执行操作以外的其他操作,则该电子设备可以提示用户解锁。该电子设备在用户的触发下解锁后,可以响应先前接收到的用户操作,执行对应的操作。
在上述两种访问控制方法中,电子设备创建限制执行环境之后,用户还可以主动触发该电子设备解锁。该电子设备解锁后,可以响应于用户操作执行各类操作。
下面,首先介绍本申请实施例提供的电子设备100。
电子设备100可以为各种类型,本申请实施例对该电子设备100的具体类型不作限制。例如,该电子设备100包括手机,还可以包括平板电脑、桌面型计算机、膝上型计算机、手持计算机、笔记本电脑、大屏电视、智慧屏、可穿戴式设备、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、人工智能(artificial intelligence,AI)设备、车机、智能耳机,游戏机,还可以包括物联网(internet of things,IOT)设备或智能家居设备如智能热水器、智能灯具、智能空调、摄像头等等。不限于此,电子设备100还可以包括具有触敏表面或触控面板的膝上型计算机(laptop)、具有触敏表面或触控面板的台式计算机等非便携式终端设备等等。
图1示出了电子设备100的结构示意图。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本申请实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解 决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号解调以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像，视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏（liquid crystal display,LCD），有机发光二极管（organic light-emitting diode,OLED），有源矩阵有机发光二极体或主动矩阵有机发光二极体（active-matrix organic light emitting diode,AMOLED），柔性发光二极管（flex light-emitting diode,FLED），Miniled，MicroLed，Micro-oLed，量子点发光二极管（quantum dot light emitting diodes,QLED）等。在一些实施例中，电子设备100可以包括1个或N个显示屏194，N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
内部存储器121可以包括一个或多个随机存取存储器(random access memory,RAM)和一个或多个非易失性存储器(non-volatile memory,NVM)。
随机存取存储器可以由处理器110直接进行读写,可以用于存储操作系统或其他正在运行中的程序的可执行程序(例如机器指令),还可以用于存储用户及应用程序的数据等。
非易失性存储器也可以存储可执行程序和存储用户及应用程序的数据等,可以提前加载到随机存取存储器中,用于处理器110直接进行读写。
外部存储器接口120可以用于连接外部的非易失性存储器,实现扩展电子设备100的存储能力。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或 发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
触摸传感器180K,也称“触控器件”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
内部存储器121用于存储电子设备在锁定状态下能够执行的预定义操作。具体的,内部存储器121可记录电子设备在锁定状态下,能够访问的资源,以及,能够对该资源执行的具体的访问操作(例如修改、读取等等)。在本申请的一些实施例中:
内部存储器121可用于存储一个或多个用户的标准身份认证信息。这些身份认证信息用于标识用户，可以包括第一认证方式对应的身份认证信息，也可以包括第二认证方式对应的身份认证信息。例如，这些身份认证信息可包括：密码、图形、人脸、声纹、指纹、掌型、视网膜、虹膜、人体气味、脸型、血压、血氧、血糖、呼吸率、心率、一个周期的心电波形、脱氧核糖核酸（deoxyribo nucleic acid，DNA）、签名、形体姿态（如行走步态）等。
受话器170B、麦克风170C、显示屏194、摄像头193、按键190、传感器模块180(例如压力传感器180A,陀螺仪传感器180B)、耳机接口170D外接的耳机,等可用于接收用户输入的第一操作指令。第一操作指令的详细内容可参考后续方法实施例的描述。
移动通信模块150,无线通信模块160可用于接收其他设备发送的第一操作指令,还可 用于接收其他设备发送的弱认证因子。
显示屏194、摄像头193、指纹传感器180H、受话器170B,麦克风170C、光学传感器、电极等可用于采集用户输入的弱认证因子。具体的,显示屏194可用于采集用户输入的密码、图形、签名。摄像头193用于采集用户输入的人脸、虹膜、视网膜、脸型、形体姿态等。指纹传感器180H可用于采集用户输入的指纹。受话器170B,麦克风170C可用于采集用户输入的语音。光学传感器可用于使用光电容积图(Photoplethysmography,PPG)技术采集PPG信号(例如血压、血氧、血糖、呼吸率、心率、一个周期的心电波形等)等。电子设备100配置的电极可用于通过心电图(electrocardiogram,ECG)技术来采集一个周期内的心电波形。
处理器110可对上述各个模块获取到的弱认证因子进行分析,确定弱认证因子的安全等级。处理器还用于确定第一操作指令对应操作的风险等级。之后,处理器110还用于根据该第一操作指令对应操作的风险等级,和,弱认证因子的安全等级,创建限制执行环境,并在该限制执行环境中响应该第一操作指令,调度电子设备100的各个模块执行对应的操作。
在本申请的一些实施例中:
电子设备100中的移动通信模块150,无线通信模块160可用于接收其他设备发送的第二操作指令。
处理器110可用于确定第二操作指令对应操作的风险等级。之后,处理器110还用于根据该第二操作指令对应操作的风险等级,创建限制执行环境,并在该限制执行环境中响应该第二操作指令,调度电子设备100的各个模块执行对应的操作。
关于电子设备100的各个模块的作用，具体可参考后续方法实施例的详细描述，在此暂不赘述。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图2是本申请实施例的电子设备100的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图2所示,应用程序包可以包括语音助手、相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图2所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
下面介绍本申请实施例提供的通信系统10。
如图3所示,通信系统10包括电子设备100,还可以包括电子设备200,或,电子设备300。
电子设备100、电子设备200,或,电子设备300的数量,均可以为一个或多个。
电子设备100的实现，以及，电子设备100执行的各项操作，可参考上述图1或图2的相关描述，这里暂不赘述。
本申请实施例对电子设备200或电子设备300的具体类型不做限定。电子设备200或电子设备300的类型可以参考上述对电子设备100的类型的描述。例如,电子设备100可以为智能手机,电子设备200可以为智能手表、智能手环、耳机等等。又例如,电子设备100可以为智慧屏、大屏电视、笔记本电脑等等,电子设备300可以为智能手机。
通信系统10中的多个电子设备可以配置不同的软件操作系统（operating system，OS），也可以都配置相同的软件操作系统。操作系统包括但不限于HarmonyOS等等。其中，HarmonyOS为华为的鸿蒙系统。
电子设备100和电子设备200之间,或者,电子设备100和电子设备300之间,均建立有通信连接。该通信连接可包括但不限于:有线连接、无线连接例如蓝牙(bluetooth,BT)连接、无线局域网(wireless local area networks,WLAN)例如无线保真点对点(wireless fidelity point to point,Wi-Fi P2P)连接、近距离无线通信(near field communication,NFC)连接,红外技术(infrared,IR)连接,以及远程连接(例如通过服务器建立的连接)等等。
例如,通信系统10中的任意两个电子设备之间可以通过登录相同的账号进行连接。例如,两个电子设备可以登录同一华为账号,并通过服务器来远程连接并通信。任意两个电子设备也可以登录不同账号,但通过绑定的方式进行连接。一个电子设备登录账号后,可以在设备管理应用中,绑定登录不同账号或未登录的其他电子设备,之后这些电子设备之间可以通过该设备管理应用通信。任意两个电子设备还可以通过扫描二维码、近场通信(near field communication,NFC)碰一碰、搜索蓝牙设备等方式建立连接,这里不做限制。此外,通信系统10中的电子设备也可以结合上述任意几种方式来连接并通信,本申请实施例对此不做限制。
在本申请的一些实施例中,电子设备200可用于接收携带第一操作指令的用户操作,然后将该用户操作的指示信息,发送给电子设备100。例如,电子设备200为连接电子设备100的耳机时,可接收到用户输入的语音指令,然后将该语音指令发送给电子设备100。
在本申请的另一些实施例中，电子设备200可用于接收携带第一操作指令的用户操作，然后识别该用户操作携带的第一操作指令，并将该第一操作指令发送给电子设备100。例如，电子设备200为连接电子设备100的智能手表时，可接收到用户输入的语音指令“使用手机播放音乐”，然后识别该语音指令的意图为触发手机播放音乐，之后电子设备200可以将用于请求电子设备100播放音乐的第一操作指令发送给电子设备100。
在本申请的一些实施例中,电子设备200可用于接收携带弱认证因子的用户操作,然后将该用户操作的指示信息,发送给电子设备100。例如,电子设备200为连接电子设备100的耳机时,可接收到用户输入的携带声纹的语音指令,然后将该携带声纹的语音指令发送给电子设备100。
在本申请的另一些实施例中，电子设备200可用于接收携带弱认证因子的用户操作，然后识别该用户操作携带的弱认证因子，并将该弱认证因子发送给电子设备100。例如，电子设备200为连接电子设备100的智能手表时，可接收到用户输入的携带声纹的语音指令，然后识别该语音指令携带的声纹，之后电子设备200可以将该声纹信息发送给电子设备100。
在本申请实施例中,电子设备300可用于接收用户操作,然后识别该用户操作的意图,并根据该用户操作的意图生成第二操作指令,然后将该第二操作指令发送给电子设备100。例如,电子设备300为智能手机,电子设备100为智慧屏时,电子设备300可接收到用于投屏到智慧屏的用户操作,然后电子设备300可以生成投屏请求(即第二操作指令),并将该投屏请求发送给智慧屏。
图3所示的通信系统10仅为示例,具体实现中,通信系统10还可以包括更多的终端设备,这里不做限定。通信系统10也可以被称作分布式系统等其他名词,这里不做限定。
通信系统10中各个设备的作用,可参考后续方法实施例的详细描述。
参考图4,图4为本申请实施例提供的基于弱认证因子的访问控制方法的流程示意图。
如图4所示,该方法可包括如下步骤:
步骤S101,电子设备100处于锁定状态时,获取第一操作指令,和,弱认证因子。
在本申请实施例中,电子设备100可以具备两种状态:锁定状态,和,解锁状态。关于锁定状态和解锁状态的具体定义,可参考前文相关描述。
电子设备100处于锁定状态时,显示屏可以是亮屏状态,也可以是熄屏状态,这里不做限定。电子设备100可以在长时间未接收到用户操作时,默认进入锁定状态,也可以响应于用户操作(例如按压电源键的操作)进入锁定状态。示例性地,参考图5A,图5A示出了电子设备100处于锁定状态时显示的用户界面50。
第一操作指令和其请求电子设备100执行的操作之间的对应关系,可以由电子设备100预先设置,这里不做限定。在本申请实施例中,可以将第一操作指令请求访问的电子设备100中的资源称为第一资源。电子设备100中资源的分类及具体内容,可参考前文相关描述。第一资源可以包括一个或多个资源,这里不做限定。
在一些实施例中,第一操作指令用于请求电子设备100执行锁定状态下预定义操作之外的其他操作。也就是说,第一操作指令用于请求访问电子设备100中的某项资源,而针对该资源的访问是电子设备在锁定状态下不能执行的。具体的,电子设备100中预先存储有锁定状态下能够执行的预定义操作。即,电子设备100记录有在锁定状态下,能够访问的资源,以及,能够对该资源执行的具体的访问操作(例如读取、添加、删除、写入、修改等等)。预定义操作的详细定义,可参考前文相关描述。
本申请实施例对该第一操作指令的形式不做限定。该第一操作指令例如可包括但不限于：语音携带的语义、手势、脸部表情、签名、形体姿态、口型、按压按键的操作或摇晃操作，等等。其中，手势、脸部表情、签名、形体姿态、口型，可以为一个时间点的静态信息例如在某个时间点的手势，也可以为一段时间内的动态变化信息例如在一段时间内的口型变化等等。
电子设备100可以通过以下几种方式来获取第一操作指令:
1.电子设备100直接接收到携带第一操作指令的用户操作,从该用户操作中提取第一操作指令
在锁定状态下,电子设备100可以周期性,或者在一定的触发条件下,开始接收用户输入的用户操作并从中提取第一操作指令。该触发条件可以包括多种,例如可以包括启动语音助手后,电子设备100检测到抬腕操作后,电子设备100检测到敲击显示屏的操作后等等。这里,电子设备100可以持续低功耗地运行唤醒词识别程序,在检测到唤醒词后启动语音助手。这样通过在检测到触发条件下开始接收用户操作并从中提取第一操作指令,可以降低电子设备100的功耗。
携带第一操作指令的用户操作,可以有多种形式。例如,可包括携带语义的语音、包含手势/脸部表情/形体姿态/口型的一张或多张图像,包含签名的滑动操作,按压按键的操作、摇晃电子设备100的操作,等等。
电子设备100可以使用相应的模块，来接收该携带第一操作指令的用户操作。例如可通过受话器170B、麦克风170C来接收携带语义的语音，通过显示屏194来接收包含签名的滑动操作，包含手势的滑动操作，通过摄像头193来接收包含手势/脸部表情/形体姿态/口型的图像，通过按键190来接收按压按键的操作，通过陀螺仪传感器180B来接收晃动操作，等等。
之后,电子设备100可以从接收到的用户操作中识别或者提取到第一操作指令。例如,电子设备100可以从语音中提取语义,从一张或多张图像中提取手势/脸部表情/形体姿态/口型,从滑动操作中提取签名或手势,等等。
电子设备100可以在本地,或者,通过网络,识别出用户操作中包含的第一操作指令。例如,电子设备100可以在本地通过处理器110来识别语音中的语义、识别图像中的手势/脸部表情/形体姿态等,也可以将语音或图像上传至网络,通过网络服务器或者其他设备来识别语音中的语义、图像中的手势/脸部表情/形体姿态/口型等。
语音中携带有语义,不同的语音可以携带不同的语义。用户可以通过输入不同的语音,来输入不同的操作指令。例如,语音“导航到家”可用于请求电子设备启动导航类应用并导航到家的位置;语音“打开相册”可用于请求电子设备启动图库应用。
当电子设备100接收到的第一操作指令为语音时,该电子设备100需要先启动语音助手。语音助手是一款安装于电子设备中,用于支持用户通过语音指令来操控电子设备的应用程序。通常情况下,语音助手是处于休眠状态的,用户在使用语音助手前,可以先唤醒或启动语音助手。只有在语音助手被唤醒后,电子设备才可以接收并识别用户输入的语音指令。用于唤醒语音助手的语音可以称为唤醒词,例如唤醒词可以为语音“小E小E”。在其他一些实施例中,电子设备100中的语音助手可以长期处于唤醒状态,不必通过唤醒词唤醒。语音助手只是本申请使用的一个词语,其还可以被称为智能助手等其他词语,这里不做限定。
手势可以为触摸电子设备的手势,例如触摸显示屏的滑动手势、点击手势等等。手势还可以为不接触电子设备的悬空手势,例如在显示屏上方张开手掌的手势或握拳手势等等。悬空手势也可称为悬浮手势、隔空手势、远程手势等等。用户可以通过输入不同的手势,来输入不同的操作指令。例如,在显示屏上方张开手掌的手势可用于请求电子设备启动导航类应用并导航到家的位置;在显示屏上方握拳的手势可用于请求电子设备启动图库应用。
脸部表情例如可包括眨眼的表情、张嘴的表情等等。用户可以通过输入不同的脸部表情,来输入不同的操作指令。
形体姿态例如可包括点头、摇头、摆臂、下蹲等等。用户可以通过输入不同的形体姿态,来输入不同的操作指令。例如,点头的形体姿态可用于请求电子设备播放音乐;摇头的形体姿态可用于请求电子设备暂停播放音乐。
按压按键的方式、摇晃电子设备的方式可以有多种,用户可以通过不同的方式来按压按键或者摇晃电子设备,来输入不同的操作指令。例如,双击电源键的操作可用于请求电子设备播放音乐,两次摇晃电子设备可用于请求电子设备暂停播放音乐。
不同的口型可用于指示不同的操作。例如,在一段时间内对应于语音“播放音乐”的口型变化,可以用于请求电子设备播放音乐。利用口型来输入第一操作指令,可以便于用户通过唇语来操控电子设备,丰富了电子设备的使用场景和使用范围。
不限于上述几种用户操作,第一操作指令还可以实现为其他形式,例如还可以为打响指的声音等等,这里不做限定。
2.其他设备向电子设备100发送用户操作的指示信息,电子设备100从该用户操作的指示信息中提取第一操作指令
电子设备100可以和其他设备例如电子设备200建立通信连接,该电子设备100和其他电子设备建立通信连接的方式,可参考图3相关描述。
其他设备接收到的用户操作携带有第一操作指令。其他设备接收携带第一操作指令的用户操作的时机、方式等,和上述第1种方式中电子设备100接收携带第一操作指令的用户操作的时机、方式相同,可参考相关描述。
其他设备发送的用户操作的指示信息,可以为该用户操作本身,也可以为该用户操作的其他指示信息。例如,电子设备200为连接电子设备100的耳机时,可接收到用户输入的包含语义的语音,然后将该语音发送给电子设备100。又例如,电子设备200为连接电子设备100的摄像头时,可采集到用户输入的包含手势/脸部表情/形体姿态的图像,然后将该图像发送给电子设备100。又例如,电子设备200为连接电子设备100的智能手环时,可接收到作用于电源键的按压操作,然后将该按压操作的指示信息发送给电子设备100。
电子设备100从该用户操作的指示信息中提取第一操作指令的方式,和上述第1种形式中电子设备100从接收到的用户操作中提取第一操作指令的方式相同,可参考相关描述。
在上述第2种情况下,其他设备例如电子设备200,可以看作是电子设备100的外设或者配件设备。
在上述第2种方式中,电子设备200可以默认选中电子设备100,也可以根据用户选中的电子设备100,而向该电子设备100发送用户操作的指示信息。用户选中电子设备100的方式不做限定,例如可以通过语音或者在用户界面上的选择操作。例如,电子设备200为耳机时,可将接收到的语音默认发送给连接的电子设备100。又例如,电子设备200可以检测到语音指令“使用手机播放音乐”,则将该语音发送给语音指令提及的手机(即电子设备100)。
3.其他设备接收到携带第一操作指令的用户操作,从该用户操作中提取第一操作指令后,将该第一操作指令发送给电子设备100
电子设备100可以和其他设备例如电子设备200建立通信连接,该电子设备100和其他电子设备建立通信连接的方式,可参考图3相关描述。
其他设备例如电子设备200可先接收到携带第一操作指令的用户操作,从该用户操作中识别出其包含的第一操作指令,然后将第一操作指令发送给电子设备100。这里,其他设备接收携带第一操作指令的用户操作,和上述第1种形式中电子设备100接收携带第一操作指令的用户操作类似,可参考相关描述。其他设备从接收到的用户操作中识别出其包含的第一操作指令的方式,和上述第1种形式中电子设备100从用户操作中识别出其包含的第一操作指令的方式相同,可参考相关描述。
例如，电子设备200可接收到用户输入的语音，然后识别该语音的语义，再将该语义信息发送给电子设备100。又例如，电子设备200采集到用户输入的包含手势/脸部表情/形体姿态的图像，可以识别该图像中的手势/脸部表情/形体姿态，然后将该手势/脸部表情/形体姿态信息发送给电子设备100。
在上述第3种方式中,电子设备200可以默认选中电子设备100,也可以根据用户选中的电子设备100,而向该电子设备100发送第一操作指令。
弱认证因子是指未达到电子设备解锁要求的身份认证信息。其中,身份认证信息可包括密码、图形以及生物特征。身份认证信息的详细介绍,可参考前文相关描述。
本申请实施例中,未达到电子设备解锁要求的身份认证信息,即弱认证因子可以包括以下两种:
1.低于第一认证方式所需标准的身份认证信息。
第一认证方式为ACL较高的身份认证方式。ACL的判定方式可参考前文相关描述。第一认证方式可以由电子设备或电子设备的生产商预先设置,可参考前文相关描述。例如,第一认证方式可包括密码认证、图形认证、指纹认证以及人脸认证等等。
电子设备可以预存用户的身份认证信息,用于后续使用对应的第一认证方式来解锁。例如,第一认证方式包括密码认证时,电子设备可以预存一个或多个密码。第一认证方式包括图形认证时,电子设备可以预存一个或多个图形。第一认证方式包括生物特征认证时,电子设备可以预存一个或多个生物特征,如指纹、人脸等等。
符合第一认证方式所需标准的身份认证信息例如可包括:和电子设备预存的密码或图形,或者,和预存的生物特征(例如指纹、人脸等)匹配度达到第一值的生物特征。电子设备在接收到符合自身第一认证方式所需标准的身份认证信息后,可以由锁定状态切换为解锁状态。第一值可以预先设定。
低于该第一认证方式所需标准的身份认证信息例如可包括:和预存的生物特征匹配度低于第一值的生物特征,或者,和电子设备预存的密码或图形的相似度达到一定值的密码或图形。
相对于符合第一认证方式所需标准的身份认证信息,用户无需繁琐的操作,也无需经过多次操作,就可以输入低于该第一认证方式所需标准的身份认证信息。例如,用户可以输入和预设的图形相似的图形,远距离将面部对准电子设备的摄像头且不必保持不动,使用带水渍的手指按压指纹识别传感器所在位置或者用手指对准摄像头等等。显然,这样可以降低用户输入身份认证信息的要求,使得用户可以更加简单、方便、自如地使用电子设备。
2.符合第二认证方式所需标准的身份认证信息。
第二认证方式为ACL较低的身份认证方式。ACL的判定方式可参考前文相关描述。第二认证方式可以由电子设备或电子设备的生产商预先设置,可参考前文相关描述。例如,第二认证方式可包括声纹认证、心率认证、形体姿态认证等等。
符合第二认证方式所需标准的身份认证信息例如可包括:和电子设备预存的生物特征(例如声纹、形体姿态等)的匹配度达到第二值的生物特征。第二值可以预先设定。
通过符合第二认证方式所需标准的身份认证信息,用户可以使用更加便捷的方式来操控电子设备。例如用户可以通过语音指令、形体姿势等来操控电子设备,在驾车、做饭、锻炼等场景时无需触摸便可操控电子设备,带来了极大地便捷性。
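作为示意，上述两类弱认证因子（低于第一认证方式所需标准的身份认证信息，以及符合第二认证方式所需标准的身份认证信息）可以按匹配度阈值粗略判定。以下Python草图中的第一值、第二值及下限阈值均为假设的示例数值，并非本申请的限定取值：

```python
# 示意性草图：按匹配度区分两类弱认证因子（阈值均为假设值）。
FIRST_VALUE = 0.90   # 第一认证方式（如人脸、指纹）的解锁标准，即“第一值”
WEAK_FLOOR = 0.60    # 低于第一值但不低于此值时，视为第1类弱认证因子（假设）
SECOND_VALUE = 0.80  # 第二认证方式（如声纹）判定通过的“第二值”

def classify(method_kind, match_degree):
    """method_kind: "first" 表示第一认证方式，"second" 表示第二认证方式。"""
    if method_kind == "first":
        if match_degree >= FIRST_VALUE:
            return "达到解锁标准"
        if match_degree >= WEAK_FLOOR:
            return "弱认证因子"
        return "无效"
    if match_degree >= SECOND_VALUE:
        return "弱认证因子"
    return "无效"
```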
在本申请实施例中,电子设备100接收到的弱认证因子的数量可以为一个,也可以为多个,这里不做限定。也就是说,电子设备100可以接收到多个不同的弱认证因子。
和第一操作指令类似,本申请实施例中电子设备可以通过以下几种方式获取到的弱认证因子:
1.电子设备100直接接收到携带弱认证因子的用户操作,并从该用户操作中提取到弱认证因子
在锁定状态下，电子设备100可以周期性，或者在一定的触发条件下，开始接收用户输入的用户操作并从中提取弱认证因子。该触发条件可以包括多种，例如可以包括启动语音助手后，电子设备100检测到抬腕操作后，电子设备100检测到敲击显示屏的操作后等等。这样通过在检测到触发条件下开始采集弱认证因子，可以降低电子设备100的功耗。
这里,携带弱认证因子的用户操作可以有多种,例如可包括指示密码的用户操作(例如点击操作)、指示图形的用户操作(例如滑动操作)、携带生物特征的图像或滑动操作等等。
电子设备100可以调度相应的模块来接收这些携带弱认证因子的用户操作。例如,电子设备100可通过显示屏194接收指示密码的用户操作(例如点击操作)、指示图形的用户操作(例如滑动操作),通过摄像头193采集包含生物特征(例如人脸、虹膜、视网膜、脸型、形体姿态)的图像,通过指纹传感器180H采集用户输入的指纹,通过受话器170B,麦克风170C采集用户输入的携带声纹的语音;通过光学传感器采集心率等。
之后,电子设备100可以从接收到的用户操作中,识别出其包含的弱认证因子。例如,从语音中提取声纹,从点击操作中提取密码,从滑动操作中提取图形或者签名,从图像中提取人脸、虹膜、视网膜、脸型、形体姿态或指纹,等等。
电子设备100可以在本地,或者,通过网络,识别出用户操作中包含的弱认证因子。例如,电子设备100可以在本地通过处理器110来识别语音中的声纹、识别图像中的形体姿态或脸型等,直接通过按键识别到按压按键的操作,通过指纹传感器180H识别到指纹等,也可以将语音或图像上传至网络,通过网络服务器或者其他设备来识别语音中的声纹、图像中的形体姿态或脸型等。
2.其他设备向电子设备100发送用户操作的指示信息,电子设备100从该用户操作的指示信息中提取弱认证因子
电子设备100可以和其他设备例如电子设备200建立通信连接,该电子设备100和其他电子设备建立通信连接的方式,可参考图3相关描述。
其他设备接收到的用户操作携带有弱认证因子。其他设备接收携带弱认证因子的用户操作的时机、方式等,和上述第1种方式中电子设备100接收携带弱认证因子的用户操作的时机、方式相同,可参考相关描述。
其他设备发送的用户操作的指示信息,可以为该用户操作本身,也可以为该用户操作的其他指示信息。例如,其他设备可以采集到指示密码的用户操作(例如点击操作)、指示图形的用户操作(例如滑动操作)、携带生物特征的图像或滑动操作等等,然后将这些点击操作或滑动操作的指示信息,或者,图像发送给电子设备100,由电子设备100来识别其中的弱认证因子。
电子设备100从该用户操作的指示信息中提取弱认证因子的方式,和上述第1种形式中电子设备100从接收到的用户操作中提取弱认证因子的方式相同,可参考相关描述。
在上述第2种情况下,其他设备例如电子设备200,可以看作是电子设备100的外设或者配件设备。
在上述第2种方式中,电子设备200可以默认选中电子设备100,也可以根据用户选中的电子设备100,而向该电子设备100发送用户操作的指示信息。
3.其他设备接收到携带弱认证因子的用户操作,从该用户操作中提取弱认证因子后,将该弱认证因子发送给电子设备100
电子设备100可以和其他设备例如电子设备200建立通信连接，该电子设备100和其他电子设备建立通信连接的方式，可参考图3相关描述。
其他设备例如电子设备200可先接收到携带弱认证因子的用户操作,从该用户操作中识别出其包含的弱认证因子,然后将弱认证因子发送给电子设备100。这里,其他设备接收携带弱认证因子的用户操作,和上述第1种形式中电子设备100接收携带弱认证因子的用户操作类似,可参考相关描述。其他设备从接收到的用户操作中识别出其包含的弱认证因子的方式,和上述第1种形式中电子设备100从用户操作中识别出其包含的弱认证因子的方式相同,可参考相关描述。
例如，电子设备200可接收到用户输入的语音，然后识别该语音的声纹，再将该声纹信息发送给电子设备100。又例如，电子设备200采集到用户输入的包含生物特征（例如人脸、指纹、掌型、视网膜、虹膜、形体姿态、脸型）的图像，可以识别该图像中包含的生物特征，然后将该生物特征信息发送给电子设备100。
在上述第3种方式中,电子设备200可以默认选中电子设备100,也可以根据用户选中的电子设备100,而向该电子设备100发送弱认证因子。
在本申请一些实施例中,电子设备100可以分别接收到第一操作指令,和,弱认证因子。例如,电子设备100可以先通过麦克风采集到语音指令“播放音乐”,然后通过摄像头采集到人脸图像。
在本申请一些实施例中,电子设备100可以同时接收到第一操作指令,和,弱认证因子。这样可以简化用户操作,使得用户的使用体验更加优良。
图5B-图5D分别示出了电子设备100同时接收到第一操作指令,和,弱认证因子的场景。在图5B-图5D中,电子设备100均处于锁定状态。
示例性地,参考图5B,图5B示例性示出了电子设备100(例如手机)同时接收到第一指令和弱认证因子的场景。如图5B所示,电子设备100可以通过麦克风采集到语音指令“导航到家”,该语音指令同时携带有声纹,电子设备100还可通过该语音指令识别到对应的语义。该语义请求访问的第一资源包括导航类应用以及“家”的地址。
示例性地,参考图5C,图5C示例性示出了电子设备100(例如手机)同时接收到第一指令和弱认证因子的另一种场景。如图5C所示,电子设备100可以通过摄像头采集到包括张开手掌手势的图像,电子设备100可识别到该手势图像中张开手掌的手势,还可以识别到该手掌的特征(例如指纹、指节大小等等)。该张开手掌的手势可用于请求电子设备100“导航到家”,其请求访问的第一资源包括导航类应用以及“家”的地址。
示例性地，参考图5D，图5D示例性示出了电子设备100通过电子设备200（例如智能手环）同时接收到第一指令和弱认证因子的另一种场景。如图5D所示，电子设备200可以通过麦克风采集到语音指令“用手机播放音乐”，该语音指令同时携带有声纹，电子设备200可识别该语音指令对应的声纹，还可识别该语音指令对应的语义，然后将该语义信息和声纹信息同时发送给电子设备100。这里，该语义请求访问的第一资源包括音乐类应用。
不限于图5B-图5D示出的几种场景,具体实现中,电子设备100还可以接收到其他形式的第一操作指令和弱认证因子,可参考前文相关描述,这里不再一一列举。
步骤S102,电子设备100根据第一操作指令,和,弱认证因子,创建限制执行环境。
限制执行环境是指受限制的执行环境。执行环境可以包括硬件环境和软件环境。执行环境可以是沙箱,也可以是包含多个函数的函数域。电子设备在限制执行环境中,只能执行指定的部分操作,而不能执行该部分操作以外的其他操作。相当于,电子设备在限制执行环境中,只能访问电子设备的部分资源,而不能访问该部分资源以外的其他资源。
本申请实施例对电子设备100根据第一操作指令,和,弱认证因子,创建限制执行环境的策略不做限定。例如,电子设备100可以根据第一操作指令的种类、采集弱认证因子时的环境等等,来创建限制执行环境。例如,当第一操作指令分别为语音携带的语义、手势、脸部表情、签名、形体姿态时,电子设备100分别创建的限制执行环境中能够执行的操作数量依次降低。
在本申请的一些实施例中,电子设备100可以根据第一操作指令对应操作的风险等级,和/或,弱认证因子的安全等级,来创建限制执行环境。步骤S102具体可包括以下步骤S1021-S1024。
当电子设备100接收到多个认证因子时,该多个认证因子可以先后接收到。例如,用户可以分别输入5句语音,电子设备100可以从每一句语音中分别提取出一个声纹作为弱认证因子。
步骤S1021,电子设备100确定第一操作指令对应操作的风险等级。
首先,电子设备100可以先确定第一操作指令所对应的操作。
第一操作指令和其请求电子设备100执行的操作之间的对应关系,可以由电子设备100预先设置,这里不做限定。
具体的,可以预先设定不同的第一操作指令(包括语义、手势、脸部表情、形体姿态等)分别对应的操作。例如,语义“导航到家”,或者,在显示屏上方张开手掌的手势对应于启动导航类应用并导航到家的位置;语义“使用手机播放音乐”,对应于启动音乐类应用;语义“打开相册”,或者,在显示屏上方握拳的手势,对应于启动图库应用;点头的形体姿态对应于播放音乐;摇头的形体姿态对应于暂停播放音乐。该预先设定的不同的语义、手势、脸部表情、形体姿态和操作的对应关系,可以存储于电子设备100中,也可以存储于网络服务器中,这里不做限定。
电子设备100根据预先设定的信息,在本地或者网络中查找到第一操作指令对应的操作。
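上述“第一操作指令—操作”的对应关系,可以用一个简单的查找表来示意。以下为基于本文描述的示意性 Python 草图,其中的数据结构、函数名以及各条目内容均为假设,并非本申请的实际实现:

```python
# 示意:预先设定的第一操作指令与操作(及其请求访问的资源)的对应关系
# 条目内容为举例,具体对应关系由电子设备预先设置,本申请不做限定
INSTRUCTION_TO_OPERATION = {
    "导航到家": ("启动导航类应用并导航到家", ["导航类应用", "家的地址"]),
    "张开手掌": ("启动导航类应用并导航到家", ["导航类应用", "家的地址"]),
    "使用手机播放音乐": ("启动音乐类应用", ["音乐类应用"]),
    "打开相册": ("启动图库应用", ["图库应用"]),
    "点头": ("播放音乐", ["音乐类应用"]),
    "摇头": ("暂停播放音乐", ["音乐类应用"]),
}

def lookup_operation(instruction: str):
    """根据第一操作指令查找对应的操作及其请求访问的资源;
    未预设的指令返回 None,表示无法确定对应操作。"""
    return INSTRUCTION_TO_OPERATION.get(instruction)
```

该查找既可在本地完成,也可由网络服务器完成,二者的数据结构思路一致。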
第一操作指令所对应的操作,包含针对某项资源进行的某项访问操作,该资源为电子设备中的一个或多个资源,该访问操作例如可包括读取、添加、删除、写入、修改、执行中的一项或多项。该资源和该访问操作的具体内容的确定,参考前文相关描述。电子设备中的资源可包括软件资源、硬件资源、外设或外设的资源等等,具体参考前文相关描述。
然后,电子设备100可以先确定第一操作指令所对应操作的风险等级。
在本申请实施例中,电子设备100可以预先存储执行不同的操作分别对应的风险等级。
本申请实施例可以按照不同的粒度,将电子设备100可执行的各项操作划分为不同的风险等级。本申请对该粒度不做限定。例如,可以粗略地将操作的风险等级划分为高、中、低三个等级。又例如,可以将操作的风险等级划分为1-10个等级,数值越高,操作的风险等级也越高。
在本申请实施例中,当电子设备100执行操作时给用户带来的隐私泄露的风险程度越高,则该操作的风险等级也越高。当一个操作要求访问的资源的隐私度越高,则执行该操作时给用户带来的隐私泄露的风险严重程度越高,该操作的风险等级也就越高。例如,查看照片、查看购物记录、查看浏览器中的浏览记录的风险程度可以依次降低。当一个操作要求的访问操作的隐私度越高,对应操作的风险等级也就越高。例如,读取照片、删除照片、添加照片的风险程度可以依次降低。
在本申请一些实施例中,电子设备100可以自主设置不同操作分别对应的风险等级。例如,电子设备100可以考虑操作要求访问资源的类别、地点等因素来设定不同操作的风险等级。例如,要求访问第三方资源的操作的风险等级,高于,要求访问系统资源的操作的风险等级;在家中执行的操作的风险等级,低于,在其他地方执行的操作的风险等级。
在本申请另一些实施例中,电子设备100还可以根据用户需求来设置不同操作分别对应的风险等级。具体的,电子设备100可以响应于接收到的用户操作,确定或者设置电子设备100所能执行的各项操作的风险等级。例如,电子设备100在设置应用中提供用户界面,以供用户设置各项操作的风险等级。
在本申请其他一些实施例中,第一操作指令对应操作的风险等级,还可以根据该第一操作指令的获取方式来确定。例如,电子设备100通过上述第1-3种方式获取第一操作指令时,获得的第一操作指令的安全等级依次降低。即,电子设备100通过第1种方式获取的第一操作指令的安全等级,高于第2种或第3种方式获取的第一操作指令。又例如,电子设备100接收电子设备200发送的第一操作指令时,可以根据该电子设备200来决定该第一操作指令对应操作的风险等级。例如,如果电子设备200和电子设备100的历史通信频率越高,则第一操作指令对应操作的风险等级越低。
步骤S1022,电子设备100确定弱认证因子的安全等级。
本申请实施例可以按照不同的粒度,将弱认证因子划分为不同的安全等级。本申请对该粒度不做限定。例如,可以粗略地将弱认证因子的安全等级划分为高、中、低三个等级。又例如,可以将弱认证因子的安全等级划分为1-10个等级,数值越高,弱认证因子的安全等级也越高。
在本申请实施例中,弱认证因子的安全等级可以根据,该弱认证因子所属的身份认证方式的ACL确定。该弱认证因子所属的身份认证方式的ACL越高,该弱认证因子的安全等级也就越高。
在本申请其他一些实施例中,弱认证因子的安全等级还可以根据以下一项或多项来确定:该弱认证因子和预存的身份认证信息的匹配度,接收弱认证因子时的环境信息,弱认证因子的获取方式,或者,弱认证因子为声纹时对应语音的强度。
该弱认证因子和预存的身份认证信息的匹配度越高,或者,接收弱认证因子时的环境越安静,或者,弱认证因子为声纹时对应语音的强度越强,则该弱认证因子的安全等级也就越高。
电子设备100通过上述第1-3种方式获取弱认证因子时,其弱认证因子的安全等级依次降低。即,电子设备100通过第1种方式获取的弱认证因子的安全等级,高于第2种或第3种方式获取的弱认证因子。
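综合上述各项因素确定弱认证因子安全等级的过程,可示意如下。以下 Python 草图假设安全等级划分为1-10级,各加权项、阈值与函数签名均为举例性假设,并非本申请限定的计算方式:

```python
def weak_factor_security_level(acl: int, match_score: float,
                               env_noise_db: float, acquire_method: int) -> int:
    """示意性地综合以下因素估算弱认证因子的安全等级(1-10):
    acl: 该弱认证因子所属身份认证方式的认证能力等级ACL(假设取1-5)
    match_score: 与预存身份认证信息的匹配度(0-1),越高等级越高
    env_noise_db: 采集时的环境噪声(分贝),环境越安静等级越高
    acquire_method: 获取方式(1-3),第1-3种方式的安全等级依次降低
    """
    level = acl                          # ACL 越高,基础等级越高
    if match_score >= 0.9:               # 匹配度加权(阈值为假设)
        level += 2
    elif match_score >= 0.7:
        level += 1
    if env_noise_db < 40:                # 环境越安静,等级越高
        level += 1
    level -= (acquire_method - 1)        # 第2、3种方式依次降低
    return max(1, min(10, level))        # 裁剪到1-10级
```

例如,同一声纹若通过第3种方式(由其他设备识别后发送)获取,其安全等级低于通过第1种方式在本机直接采集时的安全等级。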
电子设备100在执行S1022之后,可以记录该弱认证因子所属的身份认证方式,该弱认证因子的安全等级,以及,该弱认证因子的认证有效期。其中,该弱认证因子的认证有效期可以由电子设备预先设定,例如可以设定为固定时长,也可以设定为在创建完限制执行环境后即失效。
本申请实施例对S1021和S1022的先后顺序不做限定。
可选步骤S1023,电子设备100根据第一操作指令对应操作的风险等级,和,弱认证因子的安全等级,判断是否允许执行该第一操作指令所对应的操作。
具体的,电子设备100中预先设置了不同操作的风险等级和不同认证因子的安全等级下,允许该电子设备100执行的各项操作。该设置可以是用户或者电子设备100的生产商提前设置的。本申请实施例对操作的风险等级和认证因子的安全等级,与,允许该电子设备100执行的操作,之间的对应关系不做限定。
例如,当第一操作指令对应操作的风险等级较高,弱认证因子的安全等级较低时,不允许执行该第一操作指令对应的操作。又例如,当第一操作指令对应操作的风险等级较低,弱认证因子的安全等级较高时,允许执行该第一操作指令对应的操作。
在一些实施例中,电子设备100可以将第一操作指令对应操作的风险等级,和,弱认证因子的安全等级进行匹配,判断是否允许执行该第一操作指令所对应的操作。具体的,电子设备100可以预先设置执行各项操作的弱认证因子的安全等级。其中,当操作的风险等级越高,则执行该操作所需的弱认证因子的安全等级也越高。
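S1023中的匹配判断可示意为:预先设置执行各风险等级操作所需的弱认证因子最低安全等级,再将二者进行比较。以下 Python 草图中的阈值表为假设,仅用于说明“风险等级越高,所需安全等级越高”这一匹配思路:

```python
# 假设:各风险等级(1-5)的操作所需的弱认证因子最低安全等级(1-10)
REQUIRED_SECURITY = {1: 1, 2: 3, 3: 5, 4: 7, 5: 9}

def is_operation_allowed(risk_level: int, security_level: int) -> bool:
    """操作的风险等级越高,执行该操作所需的弱认证因子安全等级也越高;
    弱认证因子的安全等级达到所需等级时,允许执行该操作。"""
    return security_level >= REQUIRED_SECURITY[risk_level]
```

例如,风险等级较高的操作配合安全等级较低的弱认证因子,判断结果即为否,电子设备100不再继续执行后续步骤。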
如果S1023的执行结果为是,则电子设备100继续执行后续步骤。
如果S1023的执行结果为否,则电子设备100不再继续执行后续步骤。
在一些实施例中,如果S1023的执行结果为否,电子设备100还可以输出提示信息,该提示信息可用于提示用户当前不允许执行该第一操作指令所对应的操作。
在一些实施例中,该提示信息还可以进一步提示用户当前不允许执行该第一操作指令所对应的操作的原因,例如可包括第一操作指令对应操作的风险等级较高,或者,弱认证因子的安全等级较低。
在一些实施例中,该提示信息还可以进一步提示用户解决方案。例如提示用户输入安全等级更高的弱认证因子,或者,提示用户解锁等等,这里不做限定。
关于该提示信息的实现形式,和后续步骤S105中提示信息的实现形式相同,具体可参考后续步骤中的相关描述。
步骤S1024,电子设备100根据第一操作指令对应操作的风险等级,和,弱认证因子的安全等级,创建限制执行环境。
在一些实施例中,电子设备100可以在接收到预定数量的弱认证因子后,执行S1024。也就是说,电子设备100可以利用多个弱认证因子来创建限制执行环境。
第一操作指令对应的操作的风险等级越低,或者,弱认证因子的安全等级越高,允许电子设备100执行的操作也就越多。这里,允许电子设备100执行的操作,也就是说电子设备100创建的限制执行环境中能执行的操作。
电子设备100中预先设置了不同操作的风险等级和不同认证因子的安全等级下,允许该电子设备100执行的各项操作。该设置可以是用户或者电子设备100的生产商提前设置的。本申请实施例对操作的风险等级和认证因子的安全等级,与,允许该电子设备100执行的操作,之间的对应关系不做限定。电子设备在接收到相同的第一操作指令,不同的弱认证因子时,可以创建不同的限制执行环境。电子设备在接收到不同的第一操作指令,相同的弱认证因子时,也可以创建不同的限制执行环境。
在一些实施例中,无论第一操作指令对应的操作的风险等级,弱认证因子的安全等级如何,电子设备100创建的限制执行环境中都能够执行锁定状态下的预定操作。
示例性地,参考表1,示例性示出了不同操作的风险等级和不同认证因子的安全等级下,允许该电子设备100执行的各项操作。其中,操作的风险等级和不同认证因子的安全等级均划分为1-5个等级,数值越高,操作的风险等级越高,弱认证因子的安全等级越高。
(表1的具体内容以附图图像形式给出,此处未能提取)
表1
具体创建限制执行环境时,电子设备100可以记录根据操作的风险等级和不同认证因子的安全等级确定的,允许电子设备100执行的各项操作。也就是说,电子设备100记录允许电子设备100针对哪些资源或哪一类资源执行哪些具体的访问操作。
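“记录允许针对哪些资源执行哪些访问操作”这一创建过程,可示意为维护一张(资源, 访问操作)白名单。以下 Python 草图为示意性实现,类名、资源名与条目均为假设:

```python
class RestrictedEnv:
    """示意性的限制执行环境:仅记录允许的(资源, 访问操作)组合。"""

    def __init__(self, allowed):
        # allowed: {(资源, 访问操作), ...},由操作的风险等级
        # 和弱认证因子的安全等级共同确定(对应表1的思路)
        self.allowed = set(allowed)

    def permits(self, resource: str, access: str) -> bool:
        """判断该限制执行环境是否允许对 resource 执行 access 操作。"""
        return (resource, access) in self.allowed

    def close(self):
        """关闭限制执行环境即删除记录的各项信息(对应后文S107)。"""
        self.allowed.clear()

# 举例:允许启动导航类应用并读取其用户数据的限制执行环境
env = RestrictedEnv({("导航类应用", "执行"), ("导航类应用用户数据", "读取")})
```

更改当前的限制执行环境,即相当于更改该白名单中记录的条目。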
在一些实施例中,如果电子设备100当前已经创建有限制执行环境,则电子设备100可以根据操作的风险等级和不同认证因子的安全等级,更改当前的限制执行环境,将其更改为上述描述的限制执行环境。具体的,电子设备100可以更改记录的信息,从而更改当前的限制执行环境,具体可参考前文相关描述。
在一些实施例中,电子设备100还可以考虑S101中获取到的弱认证因子的数量来创建限制执行环境。例如,S101中获取到的弱认证因子的数量越多,创建的限制执行环境中允许执行的操作也就越多。
在一些实施例中,如果电子设备100执行了S1023,则S1024中创建的限制执行环境必然允许执行第一操作指令对应的操作。这样可以创建有效的限制执行环境,减少电子设备100中的资源浪费。
在另一些实施例中,电子设备100可以不必执行S1023,而直接执行S1024。此时,步骤S1024中创建的限制执行环境不一定允许执行第一操作指令对应的操作。
S103,电子设备100响应于第一操作指令,在创建的限制执行环境中执行该第一操作指令所对应的操作。
在一些实施例中,如果电子设备没有执行S1023,则在S103之前,电子设备100还需判断创建的限制执行环境是否允许执行该第一操作所对应的操作,如果判断结果为是,则执行S103。如果判断结果为否,则电子设备100可以停止执行任何步骤,或者,电子设备100可以尽量响应第一操作指令,在该限制执行环境中执行接近第一操作指令所对应的操作的其他操作。
其中,该第一操作指令所对应的操作,可参考S101以及S1021的详细描述。
图5E-图5F示例性示出了电子设备100执行S103时显示的用户界面。
参考图5E,图5E为电子设备100接收到图5B中的语音指令“导航到家”,以及接收到弱认证因子(即语音指令中携带的声纹)后,所显示的用户界面53。如图5E所示,电子设备100根据该语音指令和弱认证因子创建的限制执行环境允许启动导航类应用并且允许读取导航类应用的用户数据,例如读取到用户“家”的详细地址为“XX大厦”。因此,图5E中提供的导航界面中,电子设备100自动在目的地处自动填充了“家”的详细地址。
参考图5F,图5F还可以为电子设备100接收到图5C中的包括张开手掌手势的图像,以及接收到弱认证因子(即该手掌的特征,如指纹、指节大小等)后,所显示的用户界面。张开手掌的手势和语音指令“导航到家”相同,都用于请求电子设备100导航到“家”的位置。但是,由于图5C中电子设备100接收到的弱认证因子的安全等级低于图5B中电子设备100接收到的弱认证因子的安全等级,因此电子设备100根据该张开手掌的手势,和,弱认证因子创建限制执行环境,允许启动导航类应用,但不允许读取导航类应用的用户数据。如图5F所示,由于电子设备100无法读取“家”的详细地址,因此在目的地处未填充地址。用户可以手动在目的地处输入“家”的地址,以导航至家。
可选步骤S104,电子设备100接收到用户操作。
本申请实施例对S104中电子设备100接收到的用户操作的形式不作任何限定,例如可以为携带语义的语音、包含手势/脸部表情/形体姿态的图像,包含签名的滑动操作,按压按键的操作、摇晃电子设备100的操作,等等。S104中电子设备100接收用户操作的方式,和S101中电子设备100第1种接收携带第一操作指令的用户操作的方式相同,可参考相关描述。
可选步骤S105,如果创建的限制执行环境允许执行该用户操作请求电子设备100执行的操作,则电子设备100响应该用户操作;如果不允许执行该用户操作请求电子设备100执行的操作,则输出提示信息,该提示信息用于提示用户当前不允许执行该用户操作所对应的操作。
这里,电子设备100确定该用户操作请求电子设备100执行的操作,和,S1021中电子设备100确定第一操作指令所对应的操作相同,可参考相关描述。
在本申请实施例中,可以将S104中的用户操作所请求访问的资源称为第二资源。第二资源可以包括一个或多个资源,这里不做限定。
具体的,如果限制执行环境允许执行S104中用户操作对应的操作,则电子设备100将响应该用户操作,执行其请求电子设备100执行的操作。
例如,如果S104的用户操作用于请求电子设备100启动相机应用,且限制执行环境允许电子设备100启动相机应用,则电子设备100可以启动相机应用。
又例如,如图5F所示,电子设备100在用户界面53中的控件501上检测到用户操作(例如点击操作)后,如果限制执行环境允许调用麦克风,则电子设备100可以启动麦克风来采集用户输入的语音。
如果限制执行环境不允许执行该用户操作对应的操作,则电子设备100将不会响应该用户操作,并且会输出提示信息。
例如,如果S104的用户操作(例如从显示屏的底部向上的滑动操作)用于请求电子设备100显示桌面,且限制执行环境不允许电子设备100显示桌面,则电子设备100可以输出提示信息。
在一些实施例中,电子设备100输出的提示信息还可以进一步提示用户当前不允许执行该用户操作所对应的操作的原因,例如可包括该用户操作的风险等级较高,或者,当前电子设备100接收到的弱认证因子的安全等级较低。
在一些实施例中,电子设备100输出的提示信息还可以进一步提示用户解决方案。例如提示用户输入安全等级更高的弱认证因子,或者,提示用户解锁等等,这里不做限定。
该提示信息的实现形式可以为可视化元素、振动信号、闪光灯信号、音频等等,这里不做限定。
示例性地,参考图5G,图5G示例性示出了电子设备100输出的提示信息502。
通过S105,可以将电子设备100能够执行的操作限制在限制执行环境的范围之内,这样可以避免权限扩大化,保护电子设备100的数据安全。
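S104-S105的门控逻辑可示意如下。以下 Python 草图中的提示文案与函数签名均为假设,仅用于说明“允许则执行、不允许则输出提示信息”的分支:

```python
def handle_user_operation(env_allowed: set, resource: str, access: str) -> str:
    """若限制执行环境允许该用户操作对应的操作则响应执行(S105),
    否则不响应该操作,并输出提示信息。env_allowed 为允许的
    (资源, 访问操作) 白名单,形式与前文的限制执行环境一致。"""
    if (resource, access) in env_allowed:
        return f"执行:对{resource}进行{access}"
    # 提示信息还可进一步提示原因与解决方案(如输入更高安全等级的认证因子)
    return "提示:当前不允许执行该操作,请解锁或输入安全等级更高的认证因子"
```

例如,限制执行环境允许调用麦克风时,点击控件501即可启动麦克风;而请求显示桌面的滑动操作若超出白名单,则只会得到提示信息。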
可选步骤S106,电子设备100获取到强认证因子,由锁定状态切换为解锁状态。
具体的,强认证因子包括符合第一认证方式所需标准的身份认证信息。符合第一认证方式所需标准的身份认证信息,可参考S101中的详细描述,这里不再赘述。
在一些实施例中,强认证因子也可包括在一段时间内获取到的多个弱认证因子。这里,该多个弱认证因子的具体数量可以预先设定,这里不做限制。该多个弱认证因子可以是相同的身份认证信息,也可以是不同的身份认证信息。也就是说,用户可以通过多次输入弱认证因子,来完成身份认证。例如,用户可以持续输入多句语音,这样电子设备100提取到多个声纹(即弱认证因子)后,可以完成解锁。又例如,电子设备100可以同时提取到声纹和远距离的人脸,然后解锁。
电子设备100获取强认证因子的方式,可参考S101中电子设备100获取弱认证因子的方式,这里不再赘述。
在一些实施例中,电子设备100可以在S105输出提示信息后自动开始检测用户输入的强认证因子。用户在看到电子设备100输出的提示信息后,可以输入强认证因子。
在另一些实施例中,电子设备100可以在执行S103之后的任意时间点,响应于接收到的用户操作,开始检测用户输入的强认证因子。用户可以在输入该用户操作后,输入强认证因子。本申请实施例对该用户操作的形式不做限定。
示例性地,参考图5E及图5F,电子设备100在创建限制执行环境后,可以在显示的访问控制界面中持续显示解锁控件503。如图5E及图5F所示,电子设备100可以响应作用于该解锁控件503的操作,开始检测用户输入的强认证因子。此外,该解锁控件503还可用于提示用户当前电子设备100处于限制执行环境中,仍然处于锁定状态,从而避免用户执行限制执行环境范围外的操作。
本申请实施例对解锁控件503的实现不做限定,例如可以为图标、文字或其他形式,可以是透明的,也可以是不透明的。解锁控件503可以显示在显示屏中的任意位置,可以显示在固定区域,也可以被用户拖动,这里不做限定。
在本申请实施例中,解锁控件503可以被称为第一控件。
可选步骤S107,电子设备100关闭限制执行环境。
在一些实施例中,电子设备100可以在S106之后,切换到解锁状态后即关闭限制执行环境。
在另一些实施例中,电子设备100可以在接收到用于关闭第一操作指令对应启动应用的操作之后,关闭该限制执行环境。用户触发电子设备100关闭第一操作指令对应启动的应用,即表明当前用户已不再需要该限制执行环境,因此电子设备100关闭该限制执行环境,可以节约设备资源。
具体实现中,电子设备100关闭该限制执行环境是指,电子设备100删除S102中记录的各项信息,例如记录的该限制执行环境允许电子设备100执行的各项操作等等。
通过上述图4所示的基于弱认证因子的访问控制方法,电子设备不再仅仅根据是否解锁来决定是否响应执行对应的操作,而是根据操作指令的风险等级和弱认证因子的安全等级来决定是否执行该操作,这样可以实现更细粒度的访问控制,丰富了电子设备的使用场景和使用范围。对于用户来说,不必通过繁琐的认证来解锁电子设备,即可触发电子设备在锁定状态下执行预定义操作之外的其他操作,使得用户能够更加自如、方便地操控电子设备。此外,电子设备不再将资源简单地分为预定义操作可访问的资源和其他操作可访问的资源,还针对各类资源实现了更加细粒度的访问控制。
参考图6,图6为本申请实施例提供的跨设备的访问控制方法的流程示意图。
如图6所示,该方法可包括如下步骤:
步骤S201,电子设备300接收到用户操作,该用户操作用于请求电子设备100执行某项操作。
本申请实施例对S201中用户操作的形式不作任何限定,例如可以为作用于显示屏的点击操作或滑动操作、语音、手势/脸部表情/形体姿态,包含签名的滑动操作,按压按键的操作、摇晃电子设备100的操作,等等。
该用户操作请求电子设备100执行的某项操作,包括针对某项资源进行的某项访问操作,该资源为电子设备100中的一个或多个资源,该访问操作例如可包括读取、添加、删除、写入、修改、执行中的一项或多项。该资源和该访问操作的具体内容的确定,参考前文相关描述。电子设备中的资源可包括软件资源、硬件资源、外设或外设的资源等等,具体参考前文相关描述。
在本申请实施例中,可以将S201中的用户请求访问的电子设备100中的资源称为第三资源。第三资源可以包括一个或多个资源,这里不做限定。
在一个具体的实施例中,该用户操作用于请求将电子设备300中的一些数据共享至电子设备100中。
示例性地,图7A-图7B示出了一种投屏场景。
参考图7A,图7A示例性示出了电子设备300播放用户选择的网络视频时所显示的用户界面71。该用户界面71可以是电子设备300响应于用户将电子设备300由竖屏状态切换为横屏状态的动作,或者,用户点击电子设备300播放视频时右下角所显示的全屏播放的控件而显示的。
如图7A所示,用户界面71中还可包括投屏的开关控件701,控件701用于监听开启/关闭视频应用的投屏功能的用户操作(例如点击操作、触摸操作等)。
参考图7A,电子设备300可以检测到作用于投屏控件701上的用户操作(例如点击操作、触摸操作等),发现附近支持投屏的电子设备,并显示发现的电子设备的标识。
图7B示出了电子设备300显示的附近支持投屏的电子设备的标识。示例性地,如图7B所示,电子设备300可以检测到作用于电子设备100对应的标识上的用户操作。
在图7A及图7B的示例中,电子设备300接收到的用户操作包括先点击控件701,然后点击电子设备100的标识的用户操作,该用户操作用于请求将电子设备300当前正在播放的视频投送到电子设备100中继续播放。该用户操作请求访问电子设备100的显示屏、扬声器以及投屏应用等。
在图7A及图7B的示例中,电子设备100为用户选择的,在其他一些实施例中,电子设备100也可以是电子设备300默认选择的。例如,电子设备300接收到点击控件701的用户操作后,可以默认请求将当前正在播放的视频投送到上一次投屏的设备(即电子设备100)中继续播放。
S202,电子设备300根据该用户操作,生成第二操作指令,该第二操作指令用于请求电子设备100执行某项操作。
第二操作指令和S201中的用户操作相同,用于请求访问电子设备100中的第三资源。
本申请实施例对该第二操作指令的形式不做限定。该第二操作指令例如可以为通过有线连接、无线连接例如蓝牙(bluetooth,BT)连接、Wi-Fi P2P连接、NFC连接,远程连接等发送的消息。
在上述图7A及图7B所示的投屏场景中,电子设备300生成的第二操作指令可以为投屏请求,该投屏请求用于请求将电子设备300当前正在播放的视频投送到电子设备100中继续播放。
S203,电子设备300将第二操作指令发送给电子设备100。
S204,电子设备100处于锁定状态,接收到第二操作指令,根据第二操作指令创建限制执行环境。
锁定状态的定义可参考图4中的相关描述。
第二操作指令的定义及获取方式,和第一操作指令类似,可参考图4中的相关描述。
本申请实施例对电子设备100根据第二操作指令,创建限制执行环境的策略不做限定。例如,电子设备100可以根据第二操作指令的种类来创建限制执行环境。例如,当第二操作指令分别为语音携带的语义、手势、脸部表情、签名、形体姿态时,电子设备100分别创建的限制执行环境中能够执行的操作数量依次降低。
在一些实施例中,电子设备100可以根据第二操作指令对应操作的风险等级来创建限制执行环境。步骤S204具体可包括以下步骤S2041-S2043。
S2041,确定第二操作指令对应操作的风险等级。
这里,电子设备100确定第二操作指令对应操作的风险等级,和图4的S102中电子设备100确定第一操作指令对应操作的风险等级相同,可参考相关描述。
步骤S2042,电子设备100根据第二操作指令对应操作的风险等级,判断是否允许执行该第二操作指令所对应的操作。
具体的,电子设备100中预先设置了不同操作的风险等级下,允许该电子设备100执行的各项操作。该设置可以是用户或者电子设备100的生产商提前设置的。本申请实施例对操作的风险等级,与,允许该电子设备100执行的操作,之间的对应关系不做限定。
如果S2042的执行结果为是,则电子设备100继续执行后续步骤。
如果S2042的执行结果为否,则电子设备100不再继续执行后续步骤。
在一些实施例中,如果S2042的执行结果为否,电子设备100还可以输出提示信息,该提示信息可用于提示用户当前不允许执行该第二操作指令所对应的操作。
在一些实施例中,该提示信息还可以进一步提示用户当前不允许执行该第二操作指令所对应的操作的原因,例如可包括第二操作指令对应操作的风险等级较高。
在一些实施例中,该提示信息还可以进一步提示用户解决方案。例如提示用户解锁等等,这里不做限定。
关于该提示信息的实现形式,和后续步骤S207中提示信息的实现形式相同,具体可参考后续步骤中的相关描述。
步骤S2043,电子设备100根据第二操作指令对应操作的风险等级,创建限制执行环境。
电子设备100根据第二操作指令对应操作的风险等级,创建限制执行环境的方式,和图4的S1024中电子设备100根据第一操作指令对应操作的风险等级,创建限制执行环境的方式相同,可参考相关描述。
示例性地,参考图7C,图7C示出了电子设备100创建限制执行环境后,所显示的用户界面72。如图7C所示,电子设备100正在播放电子设备300投送过来的视频,并且显示有解锁控件702。该解锁控件702和图5E及图5F中解锁控件503的作用相同,可参考相关描述。在本申请实施例中,解锁控件702也可以被称为第一控件。
S205,电子设备100响应于第二操作指令,在创建的限制执行环境中执行该第二操作指令所对应的操作。
S205和图4的S103类似,可参考相关描述。
可选步骤S206-S209,参考图4中的可选步骤S104-S107。
在S206中,电子设备100接收到的用户操作所请求访问的资源称为第四资源。第四资源可以包括一个或多个资源,这里不做限定。
在S209的一些实施例中,电子设备100可以在接收到用于关闭第二操作指令对应启动应用的操作之后,关闭该限制执行环境。
例如,如果电子设备300接收到停止投屏的用户操作,则可以向电子设备100发送停止投屏的指示信息,之后电子设备100关闭限制执行环境。
通过上述图6所示的跨设备访问控制方法,电子设备不再仅仅根据是否解锁来决定是否响应用户操作,而是根据跨设备接收到的操作指令的风险等级来决定是否响应用户操作,这样可以实现更细粒度的访问控制,丰富了电子设备的使用场景和使用范围。对于用户来说,不必通过繁琐的认证来解锁电子设备,即可触发电子设备在锁定状态下执行预定义操作之外的其他操作,使得用户能够更加自如、方便地操控电子设备。此外,电子设备不再将资源简单地分为预定义操作可访问的资源和其他操作可访问的资源,还针对各类资源实现了更加细粒度的访问控制。
特别地,针对投屏、多屏互动等数据共享场景,一个设备将数据共享至另一设备时,该另一设备无需解锁。相对于每次共享数据时都需要先解锁另一设备的方案,本申请实施例降低了投屏、多屏互动的难度和复杂性,可以给用户带来更好的使用体验。
在上述图4以及图6提供的访问控制方法中,电子设备100、电子设备200、电子设备300可以被称为第一设备、第二设备和第三设备。
弱认证因子也可被称为第一认证因子,强认证因子也可被称为第二认证因子。
参考图8A,图8A为本申请实施例提供的另一种电子设备100的软件架构图。
如图8A所示,电子设备100可包括如下模块:操作指令识别模块801、弱认证因子识别模块802、访问控制和执行环境管理模块803。其中:
操作指令识别模块801,用于获取电子设备100的第一操作指令。
在一些实施例中,该操作指令识别模块801可用于通过上述第1种方式来获取第一操作指令或第二操作指令。即,该操作指令识别模块801可用于接收携带第一/第二操作指令的用户操作,从该用户操作中提取第一/第二操作指令。在这种情况下,该操作指令识别模块801可包括电子设备100通过上述第1种方式获取第一/第二操作指令时涉及的各个模块,例如语音助手、麦克风等等。
在一些实施例中,该操作指令识别模块801可用于通过上述第2种方式来获取第一/第二操作指令。即,该操作指令识别模块801可用于接收其他设备向电子设备100发送的用户操作的指示信息,并从该用户操作的指示信息中提取第一/第二操作指令。在这种情况下,该操作指令识别模块801可包括电子设备100通过上述第2种方式获取第一/第二操作指令时涉及的各个模块,例如无线通信模块、有线通信模块、语音助手等等。
操作指令识别模块801还用于确定该第一/第二操作指令对应的操作。
弱认证因子识别模块802用于获取电子设备100的弱认证因子。
在一些实施例中,该弱认证因子识别模块802可用于通过上述第1种方式来获取弱认证因子。即,该弱认证因子识别模块802可用于接收携带弱认证因子的用户操作,从该用户操作中提取弱认证因子。在这种情况下,该弱认证因子识别模块802可包括电子设备100通过上述第1种方式获取弱认证因子时涉及的各个模块,例如语音助手、麦克风、摄像头、指纹传感器等等。
在一些实施例中,该弱认证因子识别模块802可用于通过上述第2种方式来获取弱认证因子。即,该弱认证因子识别模块802可用于接收其他设备向电子设备100发送的用户操作的指示信息,并从该用户操作的指示信息中提取弱认证因子。在这种情况下,该弱认证因子识别模块802可包括电子设备100通过上述第2种方式获取弱认证因子时涉及的各个模块,例如无线通信模块、移动通信模块、语音助手等等。
弱认证因子识别模块802还用于确定弱认证因子的安全等级。弱认证因子识别模块802获取到电子设备100的弱认证因子后,还可以生成认证令牌(token),该认证token指示弱认证因子的安全等级,还可以指示认证方式、该弱认证因子的有效时间等等。
之后,操作指令识别模块801将该第一操作指令对应的操作,弱认证因子识别模块802将认证token,各自发送给访问控制和执行环境管理模块803。该认证token可用于访问控制和执行环境管理模块803校验合法性。
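弱认证因子识别模块802生成的认证token,可示意为携带认证方式、安全等级与认证有效期的结构,供访问控制和执行环境管理模块803校验合法性。以下 Python 草图中的字段与校验规则均为假设:

```python
import time
from dataclasses import dataclass

@dataclass
class AuthToken:
    """示意性的认证token(字段为假设)。"""
    auth_method: str      # 弱认证因子所属的身份认证方式,例如"声纹"
    security_level: int   # 该弱认证因子的安全等级
    expires_at: float     # 认证有效期的截止时间戳

def make_token(auth_method: str, security_level: int, ttl_s: float) -> AuthToken:
    """生成认证token,有效期为从当前时刻起 ttl_s 秒。"""
    return AuthToken(auth_method, security_level, time.time() + ttl_s)

def is_token_valid(token: AuthToken) -> bool:
    """校验合法性:此处仅示意性地检查token是否仍在有效期内。"""
    return time.time() < token.expires_at
```

有效期届满(例如按前文所述在创建完限制执行环境后失效)的token即校验不通过。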
在一些实施例中,访问控制和执行环境管理模块803,用于通过第一操作指令对应的操作的安全等级,和,弱认证因子的安全等级,判断是否允许执行该第一操作指令对应的操作。在一些实施例中,访问控制和执行环境管理模块803,用于通过第二操作指令对应的操作的安全等级,判断是否允许执行该第二操作指令对应的操作。如果判断结果为是,访问控制和执行环境管理模块803用于创建限制执行环境,并在限制执行环境中执行第一/第二操作指令对应的操作。这里,创建限制执行环境的具体操作,可参考前文方法实施例的相关描述。
在一些实施例中,电子设备100还可以包括分布式调度模块804,分布式调度模块804用于通过上述第3种方式来获取第一/第二操作指令,或者,通过上述第3种方式来获取弱认证因子。在这种情况下,该分布式调度模块804可包括无线通信模块、移动通信模块等等。
参考图8B,图8B示例性示出了电子设备访问控制和执行环境管理模块803的结构。
如图8B所示,访问控制和执行环境管理模块803可包括:访问控制模块8031、执行环境管理模块8032、策略管理模块8033、应用生命周期管理模块8034、资源管理模块8035。
访问控制模块8031用于将第一/第二操作指令对应的操作(也即所访问资源的信息)传递给执行环境管理模块8032。
执行环境管理模块8032可用于判断是否允许执行该第一/第二操作指令对应的操作,若是,则设置限制执行环境的标识,并配置策略管理模块8033中该限制执行环境的运行策略。
策略管理模块8033用于配置限制执行环境的运行策略,即记录该限制执行环境中允许执行的各项操作,也即记录允许针对哪些资源或哪一类资源执行哪些具体的访问操作。
资源管理模块8035可包括:应用信息管理模块、数据管理模块、权限管理模块。
应用信息管理模块存储和管理着所有应用的信息,特别记录当前限制执行环境中允许启动或者访问的应用的信息。
数据管理模块,可用于对电子设备中的数据进行分类分级管理,并设置限制执行环境中允许访问的数据级别或类别。例如,电子设备可以根据各类数据的特性对其进行分类,例如可分为不同安全级别的数据。
权限管理模块,用于对电子设备中的各项操作进行权限管理,设置限制执行环境中允许的权限。
应用生命周期管理模块8034用于管理电子设备100中各个应用的生命周期,例如启动或销毁等等。当应用生命周期管理模块8034响应于用户操作将要启动应用或者访问数据时,首先向应用信息管理模块确认当前的限制执行环境是否允许启动该应用,或者,向数据管理模块确认当前的限制执行环境是否允许访问该数据,若是,则可以启动该应用或者访问该数据。应用生命周期管理模块8034启动应用后,如果要执行某些操作,则需要向权限管理模块确认当前限制执行环境是否具备对应的权限,如果有,则执行该操作。
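应用生命周期管理模块8034启动应用前的两级确认流程,可示意为先询问应用信息管理模块、再询问权限管理模块。以下 Python 草图中的模块接口以简单集合代替,函数名与参数均为假设:

```python
def try_start_app(app: str, allowed_apps: set,
                  needed_perms: set, granted_perms: set) -> bool:
    """示意:先向应用信息管理模块确认当前限制执行环境是否允许启动该应用,
    再向权限管理模块确认是否具备执行对应操作的权限,全部满足才启动。
    allowed_apps: 当前限制执行环境中允许启动或访问的应用集合
    needed_perms: 应用启动后要执行的操作所需的权限集合
    granted_perms: 当前限制执行环境中允许的权限集合
    """
    if app not in allowed_apps:            # 应用信息管理模块的确认
        return False
    return needed_perms <= granted_perms   # 权限管理模块的确认
```

数据访问的确认流程与此类似,只需将应用集合换成数据管理模块中允许访问的数据级别或类别。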
上述图8A及图8B中示出的模块,可以位于图2所示软件系统中的任意一层或多层,这里不做限定。
图8A及图8B示出的各个模块仅为示例,具体实现中,电子设备100可以包括更多或更少的模块,这里不做限定。
本申请的各实施方式可以任意进行组合,以实现不同的技术效果。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如软盘、硬盘、磁带)、光介质(例如DVD)、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:ROM或随机存储记忆体RAM、磁碟或者光盘等各种可存储程序代码的介质。
总之,以上所述仅为本申请技术方案的实施例而已,并非用于限定本申请的保护范围。凡根据本申请的揭露,所作的任何修改、等同替换、改进等,均应包含在本申请保护范围内。

Claims (29)

  1. 一种基于弱认证因子的访问控制方法,其特征在于,所述方法包括:
    第一设备处于锁定状态时,获取第一操作指令和第一认证因子;所述第一操作指令用于请求访问所述第一设备的第一资源,所述第一认证因子包括未达到所述第一设备的解锁要求的身份认证信息,达到所述第一设备的解锁要求的身份认证信息用于将所述第一设备由所述锁定状态切换至解锁状态;
    所述第一设备根据所述第一操作指令,和,所述第一认证因子,确定所述第一设备允许访问的资源;
    如果所述第一设备允许访问的资源包括所述第一资源,则所述第一设备响应于所述第一操作指令,访问所述第一资源。
  2. 根据权利要求1所述的方法,其特征在于,所述第一设备根据所述第一操作指令,确定所述第一设备允许访问的资源,具体包括:
    所述第一设备根据访问所述第一资源的风险等级,确定所述第一设备允许访问的资源;访问所述第一资源的风险等级越高,则所述第一设备允许访问的资源越少;
    其中,所述第一资源的隐私度越高,访问所述第一资源的风险等级越高。
  3. 根据权利要求1或2所述的方法,其特征在于,所述第一设备根据所述第一认证因子,确定所述第一设备允许访问的资源,具体包括:
    所述第一设备根据所述第一认证因子的安全等级,确定所述第一设备允许访问的资源;所述第一认证因子的安全等级越低,则所述第一设备允许访问的资源越少;
    其中,所述第一认证因子对应的身份认证方式的认证能力等级ACL越高,或者,所述第一认证因子和达到所述第一设备的解锁要求的身份认证信息的匹配度越高,或者,获取所述第一认证因子的方式的安全性越高,则所述第一认证因子的安全等级越高。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述第一资源包括:预先定义的所述第一设备在所述锁定状态下不能访问的资源。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述第一操作指令包括以下任意一项:语音携带的语义、手势、脸部表情、形体姿态。
  6. 根据权利要求5所述的方法,其特征在于,所述第一设备获取第一操作指令,具体包括以下任意一项:
    所述第一设备采集到语音或图像,识别出所述语音或所述图像中携带的第一操作指令;
    所述第一设备接收到第二设备发送的语音或图像,识别出所述语音或所述图像中携带的第一操作指令;或者,
    所述第一设备接收到第二设备发送的第一操作指令。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述身份认证信息包括以下任意一项或多项:密码、图形或者生物特征。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,未达到所述第一设备的解锁要求的身份认证信息,包括:低于第一认证方式所需标准的身份认证信息,或者,符合第二认证方式所需标准的身份认证信息;
    其中,所述第一认证方式为用于将所述第一设备由所述锁定状态切换至解锁状态的身份认证方式,所述第二认证方式为所述第一认证方式之外的身份认证方式。
  9. 根据权利要求8所述的方法,其特征在于,所述第一认证方式为认证能力等级ACL高于第三值的身份认证方式,或者,所述第一认证方式由所述第一设备预先设置。
  10. 根据权利要求8或9所述的方法,其特征在于,
    低于所述第一认证方式所需标准的身份认证信息包括:和预存的第一生物特征的匹配度低于第一值的生物特征,所述第一生物特征为所述第一认证方式对应的身份认证信息;
    和/或,
    符合第二认证方式所需标准的身份认证信息包括:和预存的第二生物特征的匹配度达到第二值的生物特征,所述第二生物特征为所述第二认证方式对应的身份认证信息。
  11. 根据权利要求1-10任一项所述的方法,其特征在于,所述第一设备获取第一认证因子,具体包括以下任意一项:
    所述第一设备采集到语音或图像,识别出所述语音或所述图像中携带的第一认证因子;
    所述第一设备接收到第二设备发送的语音或图像,识别出所述语音或所述图像中携带的第一认证因子;或者,
    所述第一设备接收到第二设备发送的第一认证因子。
  12. 根据权利要求1-11任一项所述的方法,其特征在于,所述第一设备获取第一操作指令和第一认证因子,具体包括以下任意一项:
    所述第一设备采集到语音,识别出所述语音的语义,将所述语义确定为所述第一操作指令;识别出所述语音携带的声纹,将所述声纹确定为所述第一认证因子;
    或者,
    所述第一设备采集到的图像,识别出所述图像中的手势、脸部表情、形体姿态,将所述图像中的手势、脸部表情、形体姿态确定为所述第一操作指令;识别所述图像中携带的生物特征,将所述生物特征确定为所述第一认证因子。
  13. 根据权利要求1-12任一项所述的方法,其特征在于,所述第一设备响应于所述第一操作指令,访问所述第一资源之后,所述方法还包括:
    所述第一设备接收到用户操作,所述用户操作用于请求访问所述第一设备的第二资源;
    如果所述第一设备允许访问的资源包括所述第二资源,则所述第一设备响应于所述用户操作,访问所述第二资源;
    如果所述第一设备允许访问的资源不包括所述第二资源,则所述第一设备拒绝响应于所述用户操作。
  14. 根据权利要求1-13任一项所述的方法,其特征在于,所述第一设备响应于所述第一操作指令,访问所述第一资源之后,所述方法还包括:
    所述第一设备获取到第二认证因子,所述第二认证因子包括达到所述第一设备的解锁要求的身份认证信息,或者,预定数量的所述第一认证因子;
    所述第一设备根据所述第二认证因子,由所述锁定状态切换为解锁状态。
  15. 根据权利要求14所述的方法,其特征在于,所述第一设备确定所述第一设备允许访问的资源之后,所述第一设备获取到第二认证因子之前,所述方法还包括:
    所述第一设备显示第一控件;
    所述第一设备检测到作用于所述第一控件的操作;
    所述第一设备响应作用于所述第一控件的操作,开始检测身份认证信息。
  16. 根据权利要求1-15任一项所述的方法,其特征在于,
    所述第一设备确定所述第一设备允许访问的资源之后,所述方法还包括:所述第一设备创建限制执行环境,在所述限制执行环境中,所述第一设备允许访问所述确定允许访问的资源;
    所述第一设备响应于所述第一操作指令,访问所述第一资源,具体包括:所述第一设备响应于所述第一操作指令,在所述限制执行环境中,访问所述第一资源。
  17. 一种跨设备的访问控制方法,其特征在于,所述方法包括:
    第一设备处于锁定状态时,接收到第三设备发送的第二操作指令;所述第二操作指令用于请求访问所述第一设备的第三资源;
    所述第一设备根据所述第二操作指令,确定所述第一设备允许访问的资源;
    如果所述第一设备允许访问的资源包括所述第三资源,则所述第一设备响应于所述第二操作指令,访问所述第三资源。
  18. 根据权利要求17所述的方法,其特征在于,所述第一设备根据所述第二操作指令,确定所述第一设备允许访问的资源,具体包括:
    所述第一设备根据访问所述第三资源的风险等级,确定所述第一设备允许访问的资源;访问所述第三资源的风险等级越高,则所述第一设备允许访问的资源越少;
    其中,所述第三资源的隐私度越高,访问所述第三资源的风险等级越高。
  19. 根据权利要求17或18所述的方法,其特征在于,所述第三资源包括:预先定义的所述第一设备在所述锁定状态下不能访问的资源。
  20. 根据权利要求17-19任一项所述的方法,其特征在于,所述第二操作指令包括以下任意一项:语音携带的语义、手势、脸部表情、形体姿态。
  21. 根据权利要求17-20任一项所述的方法,其特征在于,所述第二操作指令为投屏请求。
  22. 根据权利要求17-21任一项所述的方法,其特征在于,所述第一设备响应于所述第二操作指令,访问所述第三资源之后,所述方法还包括:
    所述第一设备接收到用户操作,所述用户操作用于请求访问所述第一设备的第四资源;
    如果所述第一设备允许访问的资源包括所述第四资源,则所述第一设备响应于所述用户操作,访问所述第四资源;
    如果所述第一设备允许访问的资源不包括所述第四资源,则所述第一设备拒绝响应于所述用户操作。
  23. 根据权利要求17-22任一项所述的方法,其特征在于,所述第一设备响应于所述第二操作指令,访问所述第三资源之后,所述方法还包括:
    所述第一设备获取到第二认证因子,所述第二认证因子包括达到所述第一设备的解锁要求的身份认证信息,或者,预定数量的所述第一认证因子;
    所述第一设备根据所述第二认证因子,由所述锁定状态切换为解锁状态。
  24. 根据权利要求23所述的方法,其特征在于,所述第一设备确定所述第一设备允许访问的资源之后,所述第一设备获取到第二认证因子之前,所述方法还包括:
    所述第一设备显示第一控件;
    所述第一设备检测到作用于所述第一控件的操作;
    所述第一设备响应作用于所述第一控件的操作,开始检测身份认证信息。
  25. 根据权利要求17-24任一项所述的方法,其特征在于,
    所述第一设备确定所述第一设备允许访问的资源之后,所述方法还包括:所述第一设备创建限制执行环境,在所述限制执行环境中,所述第一设备允许访问所述确定允许访问的资源;
    所述第一设备响应于所述第二操作指令,访问所述第三资源,具体包括:所述第一设备响应于所述第二操作指令,在所述限制执行环境中,访问所述第三资源。
  26. 一种电子设备,其特征在于,包括:存储器、一个或多个处理器;所述存储器与所述一个或多个处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行如权利要求1-16或17-25任一项所述的方法。
  27. 一种计算机可读存储介质,包括指令,其特征在于,当所述指令在电子设备上运行时,使得所述电子设备执行如权利要求1-16或17-25中任一项所述的方法。
  28. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得计算机执行如权利要求1-16或17-25中任一项所述的方法。
  29. 一种通信系统,其特征在于,所述通信系统包括:第一设备、第三设备,所述第三设备用于执行如权利要求17-25中任一项所述的方法。
PCT/CN2022/100826 2021-06-29 2022-06-23 访问控制方法及相关装置 WO2023274033A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22831849.9A EP4350544A1 (en) 2021-06-29 2022-06-23 Access control method and related apparatus
US18/398,325 US20240126897A1 (en) 2021-06-29 2023-12-28 Access control method and related apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110742228.8A CN115544469A (zh) 2021-06-29 2021-06-29 访问控制方法及相关装置
CN202110742228.8 2021-06-29

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/398,325 Continuation US20240126897A1 (en) 2021-06-29 2023-12-28 Access control method and related apparatus

Publications (1)

Publication Number Publication Date
WO2023274033A1 true WO2023274033A1 (zh) 2023-01-05

Family

ID=84690042

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/100826 WO2023274033A1 (zh) 2021-06-29 2022-06-23 访问控制方法及相关装置

Country Status (4)

Country Link
US (1) US20240126897A1 (zh)
EP (1) EP4350544A1 (zh)
CN (1) CN115544469A (zh)
WO (1) WO2023274033A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150347734A1 (en) * 2010-11-02 2015-12-03 Homayoon Beigi Access Control Through Multifactor Authentication with Multimodal Biometrics
CN106959841A (zh) * 2016-01-08 2017-07-18 阿里巴巴集团控股有限公司 一种应用中功能的调用方法及装置
CN107612880A (zh) * 2017-07-28 2018-01-19 深圳竹云科技有限公司 一种应用访问方法和装置
CN109388937A (zh) * 2018-11-05 2019-02-26 用友网络科技股份有限公司 一种多因子身份认证的单点登录方法及登录系统
CN110381195A (zh) * 2019-06-05 2019-10-25 华为技术有限公司 一种投屏显示方法及电子设备


Also Published As

Publication number Publication date
EP4350544A1 (en) 2024-04-10
CN115544469A (zh) 2022-12-30
US20240126897A1 (en) 2024-04-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22831849

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022831849

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022831849

Country of ref document: EP

Effective date: 20231219

NENP Non-entry into the national phase

Ref country code: DE