CN112384916A - Method and apparatus for performing user authentication - Google Patents

Method and apparatus for performing user authentication

Info

Publication number
CN112384916A
CN112384916A (application number CN201980045581.1A)
Authority
CN
China
Prior art keywords
electronic device
user
challenge
identifying
actor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201980045581.1A
Other languages
Chinese (zh)
Other versions
CN112384916B (en)
Inventor
A. Jain
A. Sharma
R. Yadav
K. Mishra
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority claimed from PCT/KR2019/008890 external-priority patent/WO2020017902A1/en
Publication of CN112384916A publication Critical patent/CN112384916A/en
Application granted granted Critical
Publication of CN112384916B publication Critical patent/CN112384916B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/36User authentication by graphic or iconic representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/45Structures or tools for the administration of authentication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2133Verifying human interaction, e.g., Captcha

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method of authenticating a user includes obtaining a user authentication request for access to at least one application running on an electronic device, identifying an actor and a task for authentication based on one or more contextual parameters associated with at least one of the electronic device or the user, providing a live challenge generated based on the identifying, and determining whether to grant access to the at least one application based on whether the provided live challenge has been successfully performed.

Description

Method and apparatus for performing user authentication
Technical Field
The present disclosure relates to user authentication technology. More particularly, the present disclosure relates to methods and apparatus for performing user authentication by providing a live challenge generated based on contextual parameters associated with a user of an electronic device.
Background
With the rapid development of digital communication technology in various types of electronic devices, there is increasing interest in maintaining data security. In electronic devices, data security is required to protect information from access, use, disclosure, modification, and destruction by unauthenticated individuals and entities.
In general, to access a restricted feature of an electronic device, such as a particular program, application, data item, or website, a message prompting for a password may be displayed, allowing the user to be authenticated with respect to the restricted feature. There are several ways to identify and/or authenticate users of electronic devices. Authentication may include, for example, authentication based on a Personal Identification Number (PIN), authentication based on a pattern lock, authentication based on a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA), authentication based on biometrics (fingerprint recognition, facial recognition, or iris recognition), and so forth. Fig. 1A is a diagram illustrating examples of authentication types according to the related art.
Existing user authentication methods are inconvenient and cumbersome. For example, with existing methods, when a user wants to access an application or website on an electronic device, the user is verified as not being a web robot (i.e., a BOT) by using a CAPTCHA or reCAPTCHA, and access rights are then granted to the user. As shown in fig. 1A, a user may access an application or website after solving a challenge (e.g., captcha 10, pattern 20, or question). The method shown in fig. 1A can prevent a BOT from using an application or website. However, because the challenge questions have already been generated and stored in the electronic device, the authentication method is non-interactive.
Accordingly, there is a need for more useful alternative techniques to overcome the above disadvantages or other disadvantages in authentication.
Disclosure of Invention
Technical problem
There is a need for more useful alternative techniques to overcome the above disadvantages or other disadvantages in authentication.
Technical scheme
A method of authenticating a user includes obtaining a user authentication request for access to at least one application running on an electronic device, identifying an actor and a task for authentication based on one or more contextual parameters associated with at least one of the electronic device or the user, providing a live challenge generated based on the identifying, and determining whether to grant access to the at least one application based on whether the provided live challenge has been successfully performed.
Drawings
The above and other aspects, features and advantages of certain embodiments of the present disclosure will become more apparent from the following description when taken in conjunction with the accompanying drawings, in which:
fig. 1A is a diagram illustrating an example of authentication types according to the related art;
fig. 1B is a diagram for describing a method of performing user authentication according to an embodiment of the present disclosure;
fig. 2 is a block diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 3 is a block diagram illustrating a live challenge engine of an electronic device in accordance with an embodiment of the present disclosure;
FIG. 4 is a diagram showing a process used by a live challenge engine to generate a live challenge according to an embodiment of the present disclosure;
FIG. 5 is a diagram for describing a method of authenticating a user of an electronic device according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of an authentication engine of an electronic device for authenticating a user according to an embodiment of the present disclosure;
fig. 7A is a diagram for describing a process in which an electronic device captures and displays an image of an object around a user according to an embodiment of the present disclosure;
fig. 7B is a diagram for describing a process in which an electronic device determines a story based on objects around a user according to an embodiment of the present disclosure;
fig. 7C is a diagram for describing a process in which an electronic device determines actors of a story according to an embodiment of the present disclosure;
FIG. 7D is a diagram used to describe a process by which an electronic device generates a live challenge based on stories, actors, and tasks according to an embodiment of the disclosure;
FIG. 8A shows a first part of a flow chart depicting a method of authenticating a user of an electronic device according to an embodiment of the disclosure;
FIG. 8B shows a second portion of a flowchart depicting a method of authenticating a user of an electronic device according to an embodiment of the present disclosure;
fig. 9 is a diagram for describing a method of an electronic device authenticating a user by using a live challenge generated based on weather information according to an embodiment of the present disclosure;
fig. 10 is a diagram for describing a method of authenticating a user by using a live challenge generated based on a recognized object according to an embodiment of the present disclosure;
fig. 11 is a diagram for describing a method of authenticating a user by using a live challenge generated based on a recognized object according to an embodiment of the present disclosure;
fig. 12 is a diagram for describing a method of authenticating a user by using a live challenge generated based on a recognized object according to an embodiment of the present disclosure;
fig. 13 is a diagram for describing a method of authenticating a user by using a live challenge generated based on a recognized object according to an embodiment of the present disclosure;
fig. 14 is a diagram for describing a method of authenticating a user by using a live challenge generated based on a recognized object according to an embodiment of the present disclosure;
FIG. 15 is a diagram for describing a method of authenticating a user by using a live challenge generated based on context parameters, according to an embodiment of the present disclosure;
FIG. 16 is a diagram for describing a method of authenticating a user by using a live challenge generated based on context parameters, according to an embodiment of the present disclosure;
FIG. 17 is a diagram for describing a method of authenticating a user by using a live challenge generated based on context parameters, according to an embodiment of the present disclosure;
FIG. 18 is a diagram for describing a method of authenticating a user by using a live challenge generated based on context parameters, according to an embodiment of the present disclosure;
FIG. 19 is a diagram for describing a method of authenticating a user by using a live challenge generated based on context parameters, according to an embodiment of the present disclosure;
fig. 20 is a diagram for describing a method of an electronic device performing user authentication according to an embodiment of the present disclosure; and
fig. 21 is a block diagram of an electronic device that performs user authentication according to an embodiment of the present disclosure.
Best mode for carrying out the invention
According to embodiments of the present disclosure, a live challenge may be generated based on contextual parameters associated with a user of an electronic device, and user authentication may be performed based on the live challenge. According to another embodiment of the present disclosure, user authentication may be performed by identifying an object around an electronic device and providing a live challenge generated based on the object in an Augmented Reality (AR) mode.
Additional aspects will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the embodiments presented in this disclosure.
According to an embodiment of the present disclosure, a method of authenticating a user may include: receiving a user authentication request for accessing at least one application running on an electronic device; identifying an actor and a task that constitute a live challenge for authentication based on contextual parameters associated with at least one of the electronic device or the user; providing the live challenge generated based on the identifying; and determining whether to grant access to the at least one application based on whether the provided live challenge has been performed. In another embodiment of the present disclosure, a method of authenticating a user may include: receiving a user authentication request for accessing at least one application running on an electronic device; identifying an actor and a task based on one or more contextual parameters associated with at least one of the electronic device or the user; generating a live challenge for authentication based on the identified actor and task; providing the generated live challenge to the user of the electronic device; and determining whether to grant access to the at least one application based on whether the provided live challenge has been performed. The actor and the task may constitute the live challenge.
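Claims-style prose like the above can be hard to parse, so the overall flow can be sketched as a small, non-normative Python example. All function names and the toy rule base below are illustrative assumptions, not taken from the disclosure; in particular, the mapping from contextual parameters to an actor and a task stands in for the "preset learning network model" mentioned later.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LiveChallenge:
    actor: str  # e.g. a virtual spider overlaid on a recognized window
    task: str   # e.g. "kill the spider"

def identify_actor_and_task(context: dict) -> LiveChallenge:
    """Toy rule base standing in for the learning network model:
    map a recognized object plus contextual parameters to a challenge."""
    obj = context.get("object")
    if obj == "window" and context.get("weather") == "rain":
        return LiveChallenge(actor="spider", task="kill the spider")
    if obj == "window" and context.get("time_of_day") == "morning":
        return LiveChallenge(actor="window", task="open the window")
    return LiveChallenge(actor="window", task="close the window")

def grant_access(challenge: LiveChallenge, performed_task: str) -> bool:
    """Access is allowed only if the user performed the prompted task."""
    return performed_task == challenge.task

# Usage: a rainy-day request while a window is in the camera's FoV.
challenge = identify_actor_and_task({"object": "window", "weather": "rain"})
print(challenge.task)  # kill the spider
```

The key design point the claims describe is that the challenge is derived at request time from live context, rather than retrieved from a stored list of pre-generated questions.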
The method may further include identifying an object displayed in a field of view (FoV) of a camera provided in the electronic device, wherein identifying the actor and the task may include identifying the actor and the task based on the identified object and the one or more contextual parameters.
Identifying an actor and a task may include identifying an actor corresponding to the identified object and identifying a task that can be performed by the identified actor, and providing the live challenge may include displaying a question that prompts the identified task.
Providing the live challenge may include outputting an Augmented Reality (AR) image of the live challenge, composed of the actor and the task, overlaid on the recognized object when an AR mode is set in the electronic device.
The method may further include identifying movement information about the electronic device or the user after the object recognition, wherein outputting the AR image may include adjusting a position of the output AR image based on the identified movement information.
The method may further include identifying a location of the electronic device, and identifying objects around the electronic device based on the identified location of the electronic device, wherein identifying the actor and the task may include identifying the actor and the task based on the identified objects and the one or more contextual parameters.
Determining whether to grant access to the at least one application may include: denying access to the at least one application based on a user action corresponding to the live challenge not being identified within a predetermined time; and allowing access to the at least one application based on a user action corresponding to the live challenge being identified within the predetermined time.
In other words, access to the at least one application may be denied when a user action corresponding to the live challenge is not recognized within the predetermined time, and allowed when such a user action is recognized within the predetermined time.
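The timeout-gated decision above can be sketched as a simple polling loop. This is an illustrative assumption about the implementation (the disclosure specifies only a "predetermined time"); the `action_observed` callback hypothetically reports whether the user's action, such as a touch on the AR actor, has been detected.

```python
import time

def decide_access(action_observed, timeout_s, poll_interval_s=0.05):
    """Allow access only if an action corresponding to the live challenge
    is identified before the predetermined timeout expires; otherwise deny."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if action_observed():      # e.g. the user touched the AR spider
            return True            # allow access
        time.sleep(poll_interval_s)
    return False                   # deny access
```

In a real device this would more likely be event-driven (a touch handler cancelling a timer) than a busy poll, but the grant/deny contract is the same.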
The one or more contextual parameters may include at least one of: setting information about the electronic device, time information, a location where a user authentication request is obtained, an activity performed by a user in the electronic device, a notification obtained by the electronic device, Social Network Service (SNS) information, ambient environment information about the electronic device, a network to which the electronic device is connected, or the number of other electronic devices connected to the electronic device.
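One possible grouping of the contextual parameters enumerated above is sketched below; the field names are illustrative assumptions, not terminology from the patent.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ContextParameters:
    """Illustrative container for the contextual parameters listed above."""
    device_settings: dict = field(default_factory=dict)   # setting information
    time_info: Optional[str] = None                       # e.g. "morning"
    request_location: Optional[str] = None                # where the request was obtained
    recent_activity: list = field(default_factory=list)   # activities in the device
    notifications: list = field(default_factory=list)     # obtained notifications
    sns_info: dict = field(default_factory=dict)          # Social Network Service info
    environment: dict = field(default_factory=dict)       # e.g. {"weather": "rain"}
    network: Optional[str] = None                         # connected network
    connected_device_count: int = 0                       # other connected devices

# Usage: context for a rainy-morning authentication request.
params = ContextParameters(environment={"weather": "rain"}, time_info="morning")
```

Any subset of these fields could feed the actor-and-task identification step; the claims require only "one or more" of them.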
Identifying the actor and the task may include identifying the actor and the task by using a preset learning network model based on the one or more context parameters.
According to another embodiment of the present disclosure, an electronic device for performing user authentication may include an inputter/outputter, a memory storing instructions, and at least one processor connected to the memory, wherein the at least one processor is configured to execute the instructions to: obtain, through the inputter/outputter, a user authentication request for access to at least one application running on the electronic device; identify an actor and a task that constitute a live challenge for authentication based on one or more contextual parameters associated with at least one of the electronic device or the user; provide, through the inputter/outputter, the live challenge generated based on the identifying; and determine whether to grant access to the at least one application based on whether the provided live challenge has been performed. For example, the inputter/outputter may be a touch screen display that can obtain input information (touch input) and display (output) information. According to another embodiment of the disclosure, the at least one processor may be configured to execute the instructions to: obtain, through the inputter/outputter, a user authentication request for access to at least one application running on the electronic device; identify an actor and a task based on one or more contextual parameters associated with at least one of the electronic device or the user; generate a live challenge for authentication based on the identified actor and task; provide the generated live challenge to a user of the electronic device; and determine whether to grant access to the at least one application based on whether the provided live challenge has been performed.
According to another embodiment of the disclosure, a computer program product may include a computer-readable recording medium, such as a non-transitory computer-readable storage medium, storing computer program code which, when executed by a processor, causes the processor to perform a process comprising: receiving a user authentication request for accessing at least one application running on an electronic device; determining an actor and a task based on one or more contextual parameters associated with at least one of the electronic device or the user; generating a live challenge for authentication based on the determined actor and task; providing the generated live challenge to a user of the electronic device; and determining whether to grant access to the at least one application based on whether the provided live challenge has been performed.
According to another embodiment of the disclosure, a computer program product includes a computer-readable storage medium having a computer-readable program stored therein, wherein the computer-readable program, when executed on a computing device, causes the computing device to: obtain a user authentication request for accessing at least one application running on an electronic device; identify an actor and a task that constitute a live challenge for authentication based on contextual parameters associated with at least one of the electronic device or the user; provide the live challenge generated based on the identifying; and determine whether to grant access to the at least one application based on whether the provided live challenge has been performed.
According to another embodiment of the present disclosure, a method of authenticating a user may include: receiving a user authentication request requesting access to at least one application running on an electronic device; based on the received user authentication request, automatically controlling a camera to capture an image or automatically controlling a sensor to acquire current user context information; determining a live Augmented Reality (AR) challenge for authentication based on an object identified in the captured image or based on the current user context information; generating an AR image based on the determined live challenge; displaying the generated AR image; determining whether the user performed an action corresponding to the live AR challenge; and granting access to the at least one application based on determining that the user performed the action corresponding to the live AR challenge. The AR image may include at least one of: information about a question asked of the user of the electronic device, and an image associated with an action to be performed by the user.
Detailed Description
Terms used herein will be described briefly, and user authentication techniques according to embodiments of the present disclosure will be described in detail.
The terms used herein are those general terms currently widely used in the art in consideration of functions regarding user authentication technology, but they may be changed according to the intention of a person having ordinary skill in the art, precedent, or new technology. In addition, the designated terms may be selected by the applicant, and in this case, the detailed meanings thereof will be described in the detailed description of the present disclosure. Accordingly, the terms used herein should not be construed as simple names but based on the meanings of the terms and the overall description of the present disclosure.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one component from another. For example, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component, without departing from the scope of user authentication techniques according to embodiments of the present disclosure. The term "and/or" includes any combination of multiple related items or any one of multiple related items.
Throughout the present disclosure, the expression "at least one of a, b or c" means: only a, only b, only c, both a and b, both a and c, both b and c, all of a, b and c, or variants thereof. Similarly, the expression "at least one of a, b and c" means: only a, only b, only c, both a and b, both a and c, both b and c, all of a, b and c, or variants thereof.
It will be understood that terms such as "comprising," "including," and "having," when used herein, specify the presence of stated elements but do not preclude the presence or addition of one or more other elements. Further, the term "unit" used in this specification refers to a software or hardware component, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), that performs certain functions. However, a "unit" is not limited to software or hardware. A "unit" may be configured to reside in an addressable storage medium or configured to operate one or more processors. Thus, for example, a "unit" includes components such as software components, object-oriented software components, class components, task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functions provided in the components and "units" may be combined into a smaller number of components and "units" or further separated into additional components and "units".
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily practice the present disclosure. However, user authentication techniques according to embodiments of the present disclosure may be embodied in many different forms and are not limited to the embodiments of the present disclosure described herein. In order to clearly describe the user authentication technology according to the embodiment of the present disclosure, portions irrelevant to the description are omitted, and the same reference numerals are assigned to the same elements throughout the specification.
According to existing user authentication techniques, user authentication may be performed by prompting a user to provide a password, a pattern, a response to a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA), biometric information, and the like on an electronic device. The electronic device may then determine whether to authenticate the user based on whether the password input by the user matches stored data. In addition to passwords, biometric information may be used for user authentication. For example, a user may perform authentication by providing a fingerprint to a smartphone. As described above, existing user authentication techniques are not interactive, and it may be difficult for a user to remember the credentials (passwords, patterns, etc.) required each time. Therefore, the user may experience inconvenience.
Unlike existing user authentication techniques, the user authentication technique according to an embodiment of the present disclosure may perform user authentication in an interactive manner. An electronic device performing a user authentication method according to an embodiment of the present disclosure may generate a live Augmented Reality (AR) challenge based on a plurality of context parameters. The generated live AR challenge may be displayed on a screen of an electronic device operating in AR mode. The electronic device may display the live AR challenge to guide the user in performing at least one task in real-time. The user may access the electronic device when the user successfully completes the task. Accordingly, the electronic device according to an embodiment of the present disclosure may perform authentication via interaction with a user by confirming in real time whether a task is performed according to a live AR challenge.
Further, user authentication techniques according to embodiments of the present disclosure may generate a live challenge based on user behavior. For example, the electronic device may identify whether the authentication requester is a BOT or a user of the electronic device (i.e., a real user) by providing a live challenge based on a question such as "Who calls you every day?".
Furthermore, user authentication techniques according to embodiments of the present disclosure may generate real-time live AR challenges without using external hardware components.
Hereinafter, an embodiment of the present disclosure will be described in more detail with reference to fig. 1B to 21.
Fig. 1B is a diagram for describing a method of performing user authentication according to an embodiment of the present disclosure.
Referring to fig. 1B, the electronic device 100 may receive a user authentication request for accessing at least one application from a user. For example, the electronic device 100 may determine that a user authentication request has been received when a user input touching an icon indicating one of the at least one application requiring access rights is recognized. However, this is merely an example, and the method in which the electronic apparatus 100 receives the user authentication request is not limited to the above example.
The electronic device 100 according to an embodiment of the present disclosure may automatically operate a camera function to capture an image of the area around the user in response to receiving a user authentication request. The camera captures an image of objects around the user, and the captured image within the camera's field of view is displayed. In this way, objects around the user can be identified. For example, the electronic device 100 may identify a window around the user (e.g., a window behind or in front of the user) from a captured image of the area around the user.
Further, the electronic device 100 according to embodiments of the present disclosure may determine a plurality of context parameters associated with at least one of the user or the electronic device 100. The context parameter may include at least one of setting information about the electronic device, a time and a location at which the user authentication request is received, an activity performed by the user in the electronic device, a notification received by the electronic device, Social Network Service (SNS) information, ambient environment information about the electronic device, a network to which the electronic device is connected, or the number of other electronic devices connected to the electronic device. For example, the electronic device 100 may determine weather information, such as information indicating that it is currently raining.
The electronic device 100 may determine a real-time story based on the recognized objects (such as the window) and the contextual parameters. Objects may be recognized by performing image recognition and/or pattern matching.
Further, the electronic device 100 may select at least one task to apply to the real-time story, and may determine at least one actor to which the task is to be applied. For example, the task may be to "kill the spider." The electronic device 100 may then generate a live challenge by combining the real-time story, the at least one actor, and the at least one task. For example, the live AR challenge shown in fig. 1B may be generated by adding a spider as a virtual AR image to the window recognized by the electronic device 100. The live AR challenge may be generated by using at least one of AR technology, Virtual Reality (VR) technology, and the like. The electronic device 100 may display the live challenge to the user in AR mode and may guide the user to perform an interactive task, such as task 110, "kill the spider." The user may then kill the spider by performing a touch operation on the spider, completing the requested task. When the user successfully performs the task, the electronic device 100 may allow the user access. When the user does not successfully perform the task, the electronic device 100 may deny access.
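The story-actor-task composition and the touch check described above can be sketched as follows. The helper names and the bounding-box hit test are illustrative assumptions; a production AR engine would instead hit-test against the rendered virtual object.

```python
def build_live_challenge(story, actor, task):
    """Combine the real-time story, actor, and task into one challenge,
    e.g. "kill the spider (a spider on the recognized window)"."""
    return {"prompt": f"{task} ({story})", "actor": actor, "task": task}

def actor_touched(touch_xy, actor_bbox):
    """A touch counts as performing the task if it lands inside the
    AR actor's on-screen bounding box (x0, y0, x1, y1)."""
    x, y = touch_xy
    x0, y0, x1, y1 = actor_bbox
    return x0 <= x <= x1 and y0 <= y <= y1

# Usage: the user taps inside the spider's rendered region.
challenge = build_live_challenge(
    "a spider on the recognized window", "spider", "kill the spider")
print(actor_touched((120, 85), (100, 60, 160, 110)))  # True
```

Because the story, actor, and task are all derived at request time, the same window can yield a different challenge on a different day, which is what makes the authentication interactive.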
According to another example, when the electronic device 100 recognizes a window in the morning, the live AR challenge may be "open the window" for ventilation. According to another example, when the electronic device 100 recognizes a window in the evening, the live AR challenge may be "close the window." The live AR challenge may be generated in real time based on contextual parameters indicating at least one of user behavior, the user's environment, and the like. Thus, interactive user authentication can be performed.
Fig. 2 is a block diagram of an electronic device 100 according to an embodiment of the disclosure.
Referring to fig. 2, the electronic device 100 may include a camera 110, a live challenge engine 120, an AR engine 130, an authentication engine 140, a communicator 150, a processor 160, a memory 170, and a display 180. However, this is merely an example, and the electronic device 100 may include fewer or more components than described above. For example, the electronic device 100 may further include one or more sensors, such as a gyroscope, a GPS sensor, and/or an acceleration sensor, capable of identifying a location or movement of the user or the electronic device 100. The above-described sensors are merely examples, and the sensors included in the electronic device 100 are not limited to the above-described examples.
The live challenge engine 120 according to embodiments of the present disclosure may receive a user authentication request for authenticating the user of the electronic device 100. Upon receiving the user authentication request, the live challenge engine 120 may generate a live challenge for the user of the electronic device 100. The live challenge may indicate a real-time story and may include at least one task to be performed by the user.
In embodiments of the present disclosure, the live challenge engine 120 may generate a live challenge for the user by automatically launching the camera 110 upon receiving the user authentication request. The camera 110 may be an imaging sensor or the like and may be used to capture images of the area surrounding the user. In addition, the live challenge engine 120 may recognize objects around the user that are displayed in the FoV of the camera 110.
Further, the live challenge engine 120 may determine a plurality of context parameters associated with at least one of the user or the electronic device 100. In an embodiment of the present disclosure, the context parameters may include at least one of setting information about the electronic device, a time (e.g., a current date and time) and a location at which the user authentication request is received, an activity performed by the user in the electronic device, a notification received by the electronic device, SNS information, ambient environment information (e.g., weather information or lighting information) about the electronic device, a network to which the electronic device is connected, or the number of other electronic devices connected to the electronic device.
Further, the live challenge engine 120 may determine a real-time story based on the recognized objects (such as a window) and the context parameters. The live challenge engine 120 may determine at least one actor based on the real-time story, and may determine at least one task to apply to the at least one actor. Further, the live challenge engine 120 may generate a live challenge by combining the real-time story, the at least one actor, and the at least one task. In this case, the live challenge may include a live AR challenge. The live AR challenge may be generated by augmenting a live view with an AR image (e.g., a spider) serving as the actor, which indicates a task to be performed in relation to the recognized object, and may be provided when the electronic device 100 is operating in AR mode.
According to embodiments of the present disclosure, the authentication engine 140 may be connected to the memory 170 and the processor 160. The authentication engine 140 may perform user authentication based on the live AR challenge. AR engine 130 may display the live AR challenge in AR mode on the screen of electronic device 100. Further, the AR engine 130 may derive at least one task to be performed by the user in AR mode. The authentication engine 140 may determine whether the user successfully performed at least one task in AR mode. Further, the authentication engine 140 may authorize the user to access at least one application of the electronic device 100 when the user successfully performs at least one task.
According to another embodiment of the present disclosure, the electronic device 100 may recognize objects around the user without using the camera 110. The electronic device 100 may determine whether the user's location is indoors by using a Global Positioning System (GPS) sensor, a gyroscope, or any other sensor, and may recognize the object based on the user's location. For example, when the electronic device 100 determines that the user is in his or her bedroom, the electronic device 100 may acquire data for a particular location (such as the bedroom) and recognize objects present in the particular location to generate a live challenge based on the acquired data. The acquired data may be, for example, a captured image of the particular location. However, this is merely an example, and the data is not limited thereto.
According to another embodiment of the present disclosure, the electronic device 100 may generate a live challenge without using a camera, AR, VR, or the like. The electronic device 100 may dynamically generate a live challenge based on context parameters, such as current user behavior. For example, the location of the user may be determined to be an office based on the coordinates of the electronic device 100 acquired by using GPS or the like. Accordingly, the electronic device 100 may determine objects present in the office, determine actors and tasks based on the determined objects, and generate a live challenge. For example, the electronic device 100 may request that the user select the color of the water bottle on the user's table.
The communicator 150 may be a communication interface configured to enable hardware components in the electronic device 100 to communicate internally with each other. The communicator 150 may be further configured such that the electronic device may communicate with other electronic devices and/or servers.
Processor 160 may be connected to memory 170 to process various instructions stored in memory 170 to authenticate a user of electronic device 100.
The memory 170 may store instructions to be executed by the processor 160. The memory 170 may include non-volatile storage elements. Examples of non-volatile storage elements may include magnetic hard disks, optical disks, floppy disks, flash memories, or electrically programmable memories (EPROM) or electrically erasable and programmable memories (EEPROM). Further, in some examples, the memory 170 may be considered a non-transitory storage medium. The term "non-transitory" means that the storage medium is not implemented by a carrier wave or a propagated signal. However, the term "non-transitory" should not be construed to mean that the memory 170 is not removable. In some examples, the memory 170 may be configured to store a larger amount of information than a memory. In some examples, a non-transitory storage medium may store data that may change over time (e.g., in Random Access Memory (RAM) or cache).
In embodiments of the present disclosure, the display 180 may be configured to display content on the electronic device 100. Examples of the display 180 may include a Liquid Crystal Display (LCD), an active matrix organic light emitting diode (AM-OLED) display, a Light Emitting Diode (LED) display, and the like.
Although fig. 2 illustrates various hardware components of the electronic device 100, the configuration of the electronic device 100 according to an embodiment of the present disclosure is not limited thereto. In another embodiment of the present disclosure, electronic device 100 may include fewer or more components. Further, the label or name of each component is used for illustrative purposes only, and is not intended to limit the scope of the present disclosure. One or more components may be connected together to perform the same or substantially similar functions of authenticating a user of electronic device 100.
The electronic device 100 may be one of a smartphone, a mobile phone, a laptop, a tablet, etc., but is not limited thereto.
Fig. 3 is a block diagram illustrating the live challenge engine 120 of the electronic device 100 according to an embodiment of the present disclosure.
Referring to fig. 3, live challenge engine 120 may include an object recognition engine 121, a context determination engine 122, a database recognition engine 123, a convolution engine 124, a real-time story engine 125, an actor determination engine 126, a task determination engine 127, and a response determination engine 128.
In an embodiment of the present disclosure, the live challenge engine 120 may automatically start the camera 110 of the electronic device 100 when a user authentication request is received. In addition, the object recognition engine 121 may recognize objects around the user that are displayed in the FoV of the camera 110. According to another example, object recognition engine 121 may determine objects present around the user based on sensors capable of determining location. Examples of the sensor may include a GPS sensor provided in the electronic device 100.
Further, the context determination engine 122 may determine a plurality of context parameters associated with at least one of the user or the electronic device. Further, real-time story engine 125 may determine a real-time story based on the recognized objects and contextual parameters.
Actor determination engine 126 may determine at least one actor based on the real-time story. Database recognition engine 123 may be configured to recognize and select user stories from the database. In addition, database recognition engine 123 may be configured to identify or select actor groups for a user story from the database. The task determination engine 127 may determine at least one task to be applied to at least one actor.
The convolution engine 124 may combine the real-time story, at least one actor, and at least one task to generate a live challenge. Live challenge engine 120 may receive a live challenge from convolution engine 124.
The task determination engine 127 may direct or prompt the user to perform the at least one task determined in real-time.
The response determination engine 128 may determine whether the user successfully performed at least one task.
The live challenge engine 120 according to another embodiment of the present disclosure may generate a live challenge for the user without using the camera 110. This may correspond to the embodiment described above with reference to fig. 2.
Fig. 4 is a diagram illustrating a process used by the live challenge engine 120 to generate a live challenge according to an embodiment of the present disclosure.
Referring to fig. 4, the following process may be performed by the live challenge engine 120 of the electronic device 100 to generate a live challenge for the user of the electronic device 100.
In operation 401a, the live challenge engine 120 may send the detected object to the database recognition engine 123. The object may be detected, for example, by an intelligent agent (e.g., Bixby Vision) that recognizes and classifies the object. The intelligent agent may recognize and classify an object by performing image recognition on the object included in an image captured by the camera.
In operation 401b, the live challenge engine 120 may send a plurality of context parameters to the convolution engine 124. The context parameters may include one or more of the following: the current date, the current time, the current location, the difficulty of the live challenge, weather information, lighting conditions of the user's current environment, speed information of the user or the electronic device 100, the landscape or portrait mode, the orientation of the electronic device 100 (e.g., reverse orientation or forward orientation), season information (e.g., an indication of the current season such as spring, summer, fall, or winter), the number or type of accessories connected to the electronic device 100, setting information (e.g., sound on/off, power saving on/off), and the like.
In operation 402, the live challenge engine 120 may select any one of a plurality of databases stored in the memory 170.
In operation 403, the live challenge engine 120 may send the selected database and its number of entries to the convolution engine 124.
In operations 404a and 404b, the convolution engine 124 may mix one or more (or all) of the context parameters with the received number of database entries and generate a unique value (e.g., a different value each time) by using, for example, hashing techniques and/or random number generation techniques.
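Operations 404a and 404b can be sketched as follows. The patent only says "hashing techniques and/or random number generation techniques", so the choice of SHA-256 and a nanosecond timestamp here is an assumption for illustration; the function names are likewise hypothetical.

```python
# Illustrative sketch: mix context parameters with the database entry count
# to derive a value that selects one story entry, different on each call.
import hashlib
import time

def unique_story_index(context_params: dict, num_entries: int) -> int:
    """Hash the context parameters (plus a timestamp so the value differs
    each time) and reduce the digest modulo the number of database entries."""
    payload = repr(sorted(context_params.items())) + str(time.time_ns())
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_entries

ctx = {"location": "home", "weather": "rain", "hour": 21, "difficulty": "easy"}
index = unique_story_index(ctx, num_entries=50)
assert 0 <= index < 50  # always a valid index into the selected database
```

Reducing the hash modulo the entry count guarantees the generated value always maps onto an existing database entry, which is why the entry count is sent to the convolution engine in operation 403.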
In operations 405a and 405b, the convolution engine 124 may send the generated values to a database stored in the memory 170. Meanwhile, the live challenge engine 120 may send the generated values to the actor determination engine 126.
In operation 406, upon receiving the generated value, the live challenge engine 120 may select a user story from a database stored in the memory 170. Additionally, the actor determination engine 126 may send user events to the task determination engine 127.
In operation 407, the task determination engine 127 may determine a question or task to display to the user. The task determination engine 127 may store a list of tasks that may be performed for each type of actor and recognized object. Further, the task determination engine 127 may be trained using a learning network model based on a training data set of inputs and outputs. For example, actors and objects may be used as inputs and questions may be used as outputs. Thus, the task determination engine 127 may identify a set of questions that may be posed for the current scene based on the actors and objects. The task determination engine 127 may also determine the question based on the current user context, such as location or time. For example, when the user is watching a movie, the task determination engine 127 may determine the question to be one that asks the user who the actor in the movie is.
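The stored task list described in operation 407 can be sketched as a lookup table keyed by (actor, object) pairs, filtered by the current user context; a trained learning-network model could replace the table. The table contents and function names are illustrative assumptions.

```python
# Minimal sketch of task determination: candidate tasks per (actor, object)
# pair, filtered by context before one is posed to the user.
QUESTION_TABLE = {
    ("spider", "window"): ["kill the spider", "flick the spider off the window"],
    ("curtain", "window"): ["close the curtain", "open the curtain"],
}

def candidate_tasks(actor: str, obj: str, context: dict) -> list[str]:
    """Return the context-appropriate tasks for the current scene."""
    tasks = QUESTION_TABLE.get((actor, obj), [])
    # context-dependent filtering, e.g. avoid "open" tasks at night
    if context.get("time_of_day") == "night":
        tasks = [t for t in tasks if not t.startswith("open")]
    return tasks

print(candidate_tasks("curtain", "window", {"time_of_day": "night"}))
# -> ['close the curtain']
```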
In operation 408, when a problem or activity is determined, the task determination engine 127 can send the determined problem or activity to the response determination engine 128.
In operation 409, the response determination engine 128 may determine accurate answers to the questions and send the determined accurate answers to the actor determination engine 126.
In operation 410, the actor determination engine 126 may select characteristics of the actor (e.g., size, shape, color, or user story) based on input such as user story, difficulty of live challenge, and contextual parameters.
For example, an example scenario in which "a window with a curtain is displayed" may be assumed as the user story. The context parameters of the current usage environment may be as follows:
a. the current location of the user (e.g., state or country)
b. the current weather conditions (e.g., sunny, rainy, etc.)
c. the current time (e.g., day, night, afternoon, etc.)
d. a difficulty value (e.g., difficult, easy, or medium).
For example, the current usage environment may include information indicating that the user is in India, that the weather is sunny, that it is daytime, and that the difficulty is easy. Based on this current usage environment information, the actor determination engine 126 may display a window with its window shade open (due to the easy difficulty), and may ask the user to close the window shade because the weather is sunny (and, for example, the sunlight is very bright).
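Operation 410 above can be sketched as a small rule function that picks the actor's initial state and task from the difficulty and weather of the example scenario. The concrete mapping is an assumption for illustration, not the patent's implementation.

```python
# Hedged sketch of operation 410: selecting actor characteristics (initial
# state) and the task from difficulty and weather context parameters.
def select_actor_state(difficulty: str, weather: str) -> dict:
    # easier challenges start in a state the user can resolve in one step
    shade_open = difficulty == "easy"
    # sunny weather motivates the "close the shade" task from the example
    task = "close the window shade" if weather == "sunny" else "open the window shade"
    return {"actor": "window shade", "open": shade_open, "task": task}

state = select_actor_state(difficulty="easy", weather="sunny")
print(state["task"])  # -> close the window shade
```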
For user authentication, a background service may be continuously run in the electronic device 100 to capture user behavior, and a database of live challenges and solutions to the live challenges may be generated and stored from the captured user behavior. To generate a live challenge, it may be necessary to activate one or more of the following types of functions:
a. messaging
b. e-mail
c. location
d. phone
e. general activities [e.g., calendar information, health records, etc.]
f. user trends [e.g., camera usage, call usage, frequent calls, and home-office practice]
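One possible shape of the background service described above is a function that turns each captured behavior item into a challenge/solution pair for the database. The question templates and names are entirely illustrative assumptions.

```python
# Illustrative sketch: captured user behavior (calls, calendar entries, etc.)
# is turned into challenge/solution pairs stored for later authentication.
challenge_db: list[dict] = []

def record_behavior(kind: str, detail: str) -> None:
    """Turn one captured behavior item into a challenge/solution pair."""
    if kind == "call":
        challenge_db.append({"question": "Who did you call most recently?",
                             "answer": detail})
    elif kind == "calendar":
        challenge_db.append({"question": "What is your next appointment?",
                             "answer": detail})

record_behavior("call", "Alice")
record_behavior("calendar", "dentist at 3 PM")
print(len(challenge_db))  # -> 2
```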
Fig. 5 is a diagram for describing a method of authenticating a user of the electronic apparatus 100 according to an embodiment of the present disclosure.
In the following process, an embodiment of the present disclosure in which the AR engine 130 of the electronic device 100 participates in user authentication will be described.
1) The input module 101 of the electronic device 100 may receive a user authentication request.
2) The user authentication request may be sent to the AR engine 130.
3) The AR engine 130 may also operate the camera 110 of the electronic device 100.
4) The camera 110 may send the image to an intelligent agent, such as the Bixby visual agent 110a of the electronic device 100. The Bixby visual agent 110a may be built into the camera 110 so that the user can tap a visual icon in the viewfinder to interpret the logo or construct the AR image.
5) The Bixby visual agent 110a may identify an object in the user's FoV and send the identified object to the input module 101.
6) The input module 101 may send the identified object to the live challenge engine 120.
7) Live challenge engine 120 may generate a live AR challenge by augmenting the user's FoV with AR images related to the user's story and actors. The live AR challenge may be generated based on the identified object and other contextual parameters. In addition, live challenge engine 120 may send a live AR challenge to input module 101. Further, live challenge engine 120 may send the results of the live AR challenge to authentication engine 140.
8) Input module 101 may send the live AR challenge and the context parameters to AR engine 130.
9) The AR engine 130 may display the live challenge in AR mode via the camera 110. The AR animator 131 may be configured to display the live challenge or an AR image associated with the live challenge at a particular location on the display 180 of the electronic device 100. AR engine 130 may also operate movement or motion detector 132 to obtain movement information about the user and electronic device 100.
10) The movement or motion detector 132 may continue to send movement information to the AR engine 130.
11) The AR engine 130 may send the movement information to the input module 101. The input module 101 may identify whether the live AR challenge has been successfully completed based on the movement information.
12) The input module 101 may also send the authentication result to the authentication engine 140. To determine whether the live AR challenge has been successfully completed, authentication engine 140 may determine an association between the results received from live challenge engine 120 and the results received from input module 101.
a. The live challenge may be successfully completed when the results received from the input module 101 and the results received from the live challenge engine 120 are the same. Thus, the authentication engine 140 may allow user access to the electronic device 100.
b. The live challenge may not be successfully completed when the results received from the input module 101 and the results received from the live challenge engine 120 are not the same. Thus, the authentication engine 140 may deny user access to the electronic device 100.
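The decision in step 12 reduces to comparing the expected result produced at challenge-generation time with the result observed from the user's actions. The function and result names below are illustrative assumptions.

```python
# Sketch of step 12: the authentication engine compares the expected result
# (from the challenge engine) with the observed result (from the input module).
def authenticate(expected_result: str, observed_result: str) -> bool:
    """Allow access only when the two results are the same."""
    return expected_result == observed_result

assert authenticate("spider_killed", "spider_killed") is True   # access allowed
assert authenticate("spider_killed", "timeout") is False        # access denied
```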
Fig. 6 is a block diagram of an authentication engine 140 of the electronic device 100 for authenticating a user according to an embodiment of the present disclosure.
Live challenge engine 120 may generate a live challenge and send relevant information to AR engine 130 to interactively provide the live challenge with camera 110. The AR engine 130 may include an AR animator 131 and a movement or motion detector 132, which movement or motion detector 132 may be a sensor. AR engine 130 may perform an analysis of when, where, and how the presence challenge is presented.
AR animator 131 may analyze the data of the live challenge and determine where in electronic device 100 to display the live challenge. AR animator 131 may calculate the exact location of the AR image displaying the live challenge based on the parameters provided by the live challenge. The AR animator 131 may display the live challenge in the AR mode of the display at a location determined based on the calculation. Further, AR engine 130 may interact with movement or motion detector 132 to receive user input and send the user input to authentication engine 140. The movement or motion detector 132 may use a sensor, such as a gyroscope or accelerometer, to detect movement of the electronic device 100 and identify whether the user is in the correct state or three-dimensional (3D) space.
AR engine 130 of electronic device 100 may perform the following process to authenticate the user of electronic device 100.
1) The AR animator 131 and the camera 110 are activated.
2) Base coordinates and destination coordinates of the 3D plane are acquired.
3) The movement of the electronic device 100 and the user is observed as long as the destination coordinates and the position of the electronic device 100 do not match.
4) When the electronic device 100 does not perform the necessary operations within a limited time, execution of the AR animator 131 and the camera 110 is disabled, and the authentication is set to fail.
5) When the electronic device 100 is at the destination coordinates,
a. The live challenge engine 120 may select the type of question [such as a one-click activity] or an activity [such as user movement tracking].
b. The live challenge engine 120 may receive event details of the object, such as the size, type, sub-type, color, or base coordinates of the object, and the number of objects. In addition, the live challenge engine 120 may send the received event details to the AR engine 130.
The AR engine 130 may receive information on the user action, compare the user action with the result data, and transmit the result data to the authentication engine 140, or may transmit information on the comparison result to the authentication engine 140.
The authentication engine 140 may determine whether the request was successful. Specifically, the authentication engine 140 may determine whether the request was successful based on a comparison between the original results sent by the live challenge engine 120 and the user behavior derived from the data sent by the AR engine 130.
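Steps 2 through 4 above (acquire destination coordinates, observe movement until they match, fail on timeout) can be sketched as a polling loop. The tolerance, timeout, and function names are illustrative assumptions; a real implementation would read fused gyroscope/accelerometer data.

```python
# Illustrative loop: poll device coordinates in the 3D plane until they match
# the destination coordinates, or fail when the time limit expires.
import math
import time

def reached_destination(read_coords, destination, tolerance=0.05, timeout_s=10.0):
    """Poll device coordinates until they are within `tolerance` of the
    destination, or fail after `timeout_s` seconds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        pos = read_coords()  # e.g. fused gyroscope/accelerometer estimate
        if math.dist(pos, destination) <= tolerance:
            return True
    return False  # the AR animator and camera would be disabled here

# Simulated sensor that reports the destination immediately:
print(reached_destination(lambda: (1.0, 2.0, 0.5), (1.0, 2.0, 0.5)))  # -> True
```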
Fig. 7A to 7D are diagrams for describing an example scenario for authenticating a user of the electronic device 100 according to an embodiment of the present disclosure. In the embodiment of the present disclosure, it is assumed that the user wants to access the electronic apparatus 100.
The electronic device 100 may receive an access request from a user. Upon receiving the access request, the electronic device 100 may automatically drive the camera 110 to capture images of objects around the user displayed in the FoV of the camera 110. Fig. 7A is a diagram for describing a process in which the electronic apparatus 100 captures and displays objects around the user according to an embodiment of the present disclosure. When the user is at home, the window 200 may be displayed on the electronic device, as shown in fig. 7A.
Further, the electronic device 100 may determine the user story based on the detected object and the condition of the user. Fig. 7B is a diagram for describing a process in which the electronic device 100 determines a story based on objects around the user according to an embodiment of the present disclosure. The story may include a spider web 210a selected from a database based on the detected object (such as the window 200), as shown in fig. 7B. Further, according to another embodiment of the present disclosure, the electronic device 100 may determine the story based on the detected conditions of the object and the user.
Further, the electronic device 100 may determine actors of the selected user story based on the context of the user. Fig. 7C is a diagram for describing a process in which the electronic apparatus 100 determines actors of a story according to an embodiment of the present disclosure. The actor is a spider 210b selected from the database based on the detected object (such as window 200) and story, as shown in fig. 7C.
Further, the electronic device 100 may generate a live challenge for the user based on the selected story and the actors. Specifically, the electronic device 100 may determine the task based on the story and the actors. Fig. 7D is a diagram for describing a process in which the electronic device 100 generates a live challenge based on a story, actors, and tasks according to an embodiment of the present disclosure. Referring to fig. 7D, the electronic device 100 may display an AR image in which the spider 210b and the spider web 210a are added on the window 200, and may present a task of killing the spider 210b. The user may be required to kill the spider 210b as a live challenge, as shown in fig. 7D. The live challenge may be accomplished when the user moves his or her hand 300 toward the spider 210b and performs a flick to kill the spider 210b. The electronic device 100 may continuously monitor the movement of the user to determine whether the live challenge has been completed. Accordingly, the electronic device 100 can mutually identify and authenticate the user.
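The flick-to-kill interaction in fig. 7D reduces to a hit test: the tap position is compared against the screen-space region where the spider AR image is rendered. The coordinates and names below are assumptions for illustration.

```python
# Minimal hit-test sketch for the "kill the spider" touch interaction.
def tap_hits_actor(tap_xy, actor_box):
    """actor_box = (left, top, right, bottom) in screen pixels."""
    x, y = tap_xy
    left, top, right, bottom = actor_box
    return left <= x <= right and top <= y <= bottom

spider_box = (420, 310, 480, 370)      # where the spider is rendered on the window
assert tap_hits_actor((450, 340), spider_box)       # tap on the spider: task done
assert not tap_hits_actor((100, 100), spider_box)   # miss: keep monitoring
```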
Fig. 8A shows a first part of a flowchart describing a method of authenticating a user of the electronic device 100 according to an embodiment of the present disclosure. Fig. 8B shows a second part of the flowchart describing the method of authenticating a user of the electronic device according to an embodiment of the present disclosure.
Hereinafter, a user authentication method according to an embodiment of the present disclosure will be described in detail with reference to fig. 8A and 8B.
In operation 801, the electronic device 100 may obtain a user authentication request. For example, the live challenge engine 120 included in the electronic device 100 may obtain a request to authenticate the user.
In operation 802, the electronic device 100 may identify whether authentication is to be performed by using AR. For example, the live challenge engine 120 may identify whether authentication is to be performed using AR.
The electronic device 100 according to an embodiment of the present disclosure may display a message inquiring whether to perform authentication by using AR, and may identify whether authentication is to be performed by using AR based on a user response to the message. According to another embodiment of the present disclosure, when the electronic device 100 operates in the AR mode, the electronic device 100 may recognize that authentication is to be performed by using AR without a further inquiry.
In operation 803, the electronic device 100 may automatically run the camera 110. When the electronic device 100 recognizes that authentication is to be performed by using AR, the electronic device 100 may operate the camera 110. Further, the live challenge engine 120 included in the electronic device 100 may perform corresponding operations.
In operation 804, the electronic device 100 may identify objects around the user that are displayed in the camera's FoV. For example, the object recognition engine 121 included in the electronic device 100 may recognize objects around the user.
In operation 805, the electronic device 100 may identify a plurality of contextual parameters associated with at least one of a user or the electronic device 100. For example, context determination engine 122 included in electronic device 100 may identify context parameters associated with at least one of a user or electronic device 100.
In operation 806, the electronic device 100 may identify a real-time story based on the recognized object and the context parameters. For example, real-time story engine 125 included in electronic device 100 may identify real-time stories based on recognized objects and contextual parameters.
Information about the story corresponding to the object and the context parameters may be stored in advance in a database of the electronic device 100. When the electronic device 100 recognizes the object and identifies the context parameter indicating the current situation, the electronic device 100 may identify the real-time story through comparison with information previously stored in the database.
In operation 807, the electronic device 100 may identify at least one actor based on the real-time story. For example, actor determination engine 126 included in electronic device 100 may identify at least one actor based on a real-time story.
The database of the electronic device 100 may previously store information on at least one actor that can be set for each story. When the story is determined, the electronic device 100 according to an embodiment of the present disclosure may determine the actor based on at least one of the determined story, the context parameter, or the recognized object.
In operation 808, the electronic device 100 may identify at least one task to be applied to at least one actor. For example, task determination engine 127 included in electronic device 100 may determine at least one task to be applied to at least one actor.
The database of the electronic device 100 may previously store information on at least one task that can be set for each story. The electronic device 100 according to an embodiment of the present disclosure may determine the task based on at least one of a story, an actor, a contextual parameter, or a recognized object.
In operation 809, the electronic device 100 may generate a live AR challenge for a user of the electronic device 100 based on the recognized object and the context parameters. The electronic device 100 may generate live AR challenges based on stories, actors, and tasks. Live AR challenges may allow a user to derive a task to perform.
For example, a live challenge engine 120 included in electronic device 100 may generate a live AR challenge for a user based on recognized objects and contextual parameters.
In operation 810, the electronic device 100 may display the live AR challenge on the display in the AR mode.
In operation 811, the electronic device 100 may derive at least one task to be performed by the user in the AR mode. For example, the task determination engine 127 may determine at least one task to be performed by the user in the AR mode.
In operation 812, the electronic device 100 may identify whether the user successfully performed at least one task in the AR mode. For example, the response determination engine 128 included in the electronic device 100 may determine whether the user successfully performed at least one task in the AR mode.
In operation 813, the electronic device 100 may identify whether the live AR challenge has been completed. For example, live challenge engine 120 may determine whether the live AR challenge has been completed.
In operation 814, the electronic device 100 may allow user access to the electronic device. When the user has completed the live AR challenge, the electronic device 100 may allow user access to the electronic device.
In operation 815, the electronic device 100 may deny user access to the electronic device. When the user fails to complete the live AR challenge, the electronic device 100 may deny the user access to the electronic device.
For example, the authentication engine 140 may deny user access to the electronic device 100.
In operation 816, the electronic device 100 may identify a contextual parameter associated with at least one of a user or the electronic device 100. When the electronic device 100 determines not to perform authentication by using the AR, the electronic device 100 may determine a context parameter associated with at least one of the user or the electronic device 100. For example, the context determination engine 122 may determine context parameters associated with at least one of the user or the electronic device 100.
In operation 817, the electronic device 100 may identify a real-time story based on the context parameters. For example, real-time story engine 125 may determine a real-time story based on contextual parameters.
In operation 818, the electronic device 100 may identify at least one actor based on the real-time story. For example, actor determination engine 126 may determine at least one actor based on a real-time story.
In operation 819, the electronic device 100 may identify at least one task to be applied to at least one actor. For example, the task determination engine 127 may determine at least one task to be applied to at least one actor.
In operation 820, the electronic device 100 may generate a live challenge based on the real-time story, the at least one actor, and the at least one task. For example, live challenge engine 120 may generate a live challenge for a user of electronic device 100 based on a real-time story, at least one actor, and at least one task.
In operation 821, the electronic device 100 may display the live challenge on the display 180.
In operation 822, the electronic device 100 may derive at least one task to be performed by the user. For example, the task determination engine 127 may derive at least one task to be performed by the user.
In operation 823, the electronic device 100 may identify whether the user successfully performed at least one task. For example, the response determination engine 128 may determine whether the user has successfully performed at least one task.
The various operations, blocks, steps, etc. in flowchart 800 described above may be performed in a different order or simultaneously. Moreover, in some embodiments of the present disclosure, some operations, blocks, steps, etc. may be omitted, added, or modified without departing from the scope of the present disclosure.
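The sequence of operations 816 to 823 can be sketched as a simple pipeline: context parameters drive the real-time story, the story yields an actor and a task, and access is granted only when the user performs the task. The sketch below is illustrative only; the `LiveChallenge` type, the function names, and the rule mapping rainy weather to a wiper task are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class LiveChallenge:
    story: str
    actor: str
    task: str

def generate_live_challenge(context_params: dict) -> LiveChallenge:
    # Operations 817-819: derive a real-time story from the context
    # parameters, then an actor and a task from the story.
    if context_params.get("weather") == "rainy":
        story = "clear the rain from a nearby vehicle"
        actor = "wiper"
        task = "touch the wiper to remove the raindrops"
    else:
        story = "interact with a recognized object"
        actor = "balloon"
        task = "tap the balloon"
    # Operation 820: combine story, actor, and task into one challenge.
    return LiveChallenge(story, actor, task)

def evaluate_response(challenge: LiveChallenge, user_action: str) -> bool:
    # Operations 814-815 / 822-823: grant access only when the user's
    # action completes the task of the generated challenge.
    return user_action == challenge.task
```

A hypothetical caller would generate the challenge on an authentication request, display it, and pass the recognized user action to `evaluate_response`.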
Fig. 9 is a diagram for describing a method in which the electronic device 100 authenticates a user by using a field challenge generated based on weather information according to an embodiment of the present disclosure.
The electronic device 100 may obtain a user authentication request from the user. In the embodiment of the present disclosure, it is assumed that the user authentication of the electronic device 100 is performed in the AR mode.
Referring to fig. 9, the electronic apparatus 100 may automatically operate the camera when the user authentication request is obtained. Thus, an image of objects around the user may be captured in the camera's FoV. For example, an image of the vehicle 910 may be captured in the FoV of the camera 110.
Meanwhile, the electronic device 100 may determine that the current weather is rainy based on the context parameters. Based on the captured object image and the context parameters, the electronic device 100 may determine the raindrops 920 as the actor and removing the raindrops by using the wiper 930 of the vehicle as the task. Thus, the electronic device 100 may superimpose AR images of the vehicle's wiper 930 and the raindrops 920 on the real-world image of the vehicle 910 captured in the camera's FoV. Further, the electronic device 100 may provide the live AR challenge by outputting, together with the image in which the AR images are superimposed on the real-world image, a question or statement prompting the user to remove the raindrops 920 by touching the wiper 930.
User access to the electronic device 100 may be allowed when the user completes the live challenge of touching the wiper 930 with his or her hand 300 to remove the raindrops 920.
Fig. 10 is a diagram for describing a method of authenticating a user by using a field challenge generated based on a recognized object according to an embodiment of the present disclosure.
The electronic device 100 may obtain a user authentication request from the user. In the embodiment of the present disclosure, it is assumed that the user authentication of the electronic device 100 is performed in the AR mode.
Referring to fig. 10, the electronic apparatus 100 may automatically operate the camera when the user authentication request is obtained. Thus, an image of objects around the user may be captured in the camera's FoV. For example, an image of the balloon 1010 may be captured in the camera's FoV.
The electronic device 100 may generate a live challenge based on the captured image of the object. For example, the electronic device 100 may select the compass 1020 as the actor constituting the live challenge. Further, the electronic device 100 may determine indicating the direction of the balloon 1010 by using the compass 1020 as the task constituting the live challenge. The user may then be required to complete the live AR challenge of rotating the compass 1020 so that its pointer points to the balloon 1010. The user may access the electronic device 100 by rotating the pointer of the compass 1020 toward the balloon 1010 with his or her hand 300.
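Checking the compass challenge of fig. 10 reduces to comparing the rotated pointer angle with the bearing from the compass to the detected balloon. The sketch below is a minimal geometric check under assumed screen coordinates; the function names, the angular tolerance, and the coordinate convention are illustrative assumptions.

```python
import math

def bearing_to(target_xy, origin_xy=(0.0, 0.0)) -> float:
    # Angle in degrees (0-360) from the compass origin to the target
    # object detected on screen.
    dx = target_xy[0] - origin_xy[0]
    dy = target_xy[1] - origin_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def pointer_points_at(pointer_deg: float, target_xy,
                      origin_xy=(0.0, 0.0),
                      tolerance_deg: float = 10.0) -> bool:
    # The challenge succeeds when the rotated pointer is within the
    # tolerance of the bearing toward the balloon, wrapping at 360.
    diff = abs(pointer_deg - bearing_to(target_xy, origin_xy)) % 360.0
    return min(diff, 360.0 - diff) <= tolerance_deg
```

The wrap-around via `min(diff, 360 - diff)` ensures that, for example, 355° and 5° are treated as 10° apart rather than 350°.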
Fig. 11 is a diagram for describing a method of authenticating a user by using a field challenge generated based on a recognized object according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. In the embodiment of the present disclosure, it is assumed that the user authentication of the electronic device 100 is performed in the AR mode.
Referring to fig. 11, the electronic apparatus 100 may automatically operate a camera when a user authentication request is received. Thus, an image of objects around the user may be captured in the camera's FoV. For example, an image of the hat 1120 may be captured in the camera's FoV.
The electronic device 100 may generate a live challenge based on the captured image of the object. For example, the electronic device 100 may select the cowboy 1110 as the actor constituting the live challenge. Further, the electronic device 100 may determine putting the hat on the cowboy 1110 as the task constituting the live challenge. Thus, the electronic device 100 may generate the live challenge by superimposing an AR image of the cowboy 1110 over the real-world image of the hat 1120 captured in the camera's FoV.
Further, the electronic device 100 may provide the live AR challenge by outputting, together with the image in which the AR image is superimposed on the real-world image, a question or statement prompting the user to move the hat 1120 to the head of the cowboy 1110. The user may then be required to drag the hat 1120 with his or her hand 300 and place the hat 1120 on the head of the cowboy 1110.
Fig. 12 is a diagram for describing a method of authenticating a user by using a field challenge generated based on a recognized object according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. In the embodiment of the present disclosure, it is assumed that the user authentication of the electronic device 100 is performed in the AR mode.
Referring to fig. 12, the electronic apparatus 100 may automatically operate the camera when a user authentication request is received. Thus, an image of objects around the user may be captured in the camera's FoV. For example, an image of the cowboy 1210 may be captured in the camera's FoV.
The electronic device 100 may generate a live challenge based on the captured image of the object. For example, the electronic device 100 may determine to select the cowboy 1210 as the actor for the live challenge and attaching the beard 1220 to the cowboy 1210 as the task constituting the live challenge. Thus, the electronic device 100 may generate the live challenge by superimposing an AR image of the beard 1220 over the real-world image of the cowboy 1210 captured in the camera's FoV. Further, the user may be required to drag the beard 1220 with his or her hand 300 and place the beard 1220 on the face of the cowboy 1210.
Fig. 13 is a diagram for describing a method of authenticating a user by using a field challenge generated based on a recognized object according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user.
Referring to fig. 13, the electronic apparatus 100 may automatically operate the camera when a user authentication request is received. Thus, an image of objects around the user may be captured in the camera's FoV.
The electronic device 100 may generate a live challenge based on the captured image of the object. For example, the electronic device 100 may determine to select the different colored balloons 1310 captured by the camera as the actors constituting the live challenge, and selecting an odd number of balloons of a particular color among them as the task constituting the live challenge. Accordingly, the electronic device 100 may output a question or statement prompting the user to select an odd number of balloons of the particular color among the different colored balloons 1310 captured in the camera's FoV.
Access to the electronic device 100 may be allowed when the user has completed a field challenge by selecting an odd number of balloons of a particular color with his or her hand 300.
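One reading of the fig. 13 task (select an odd number of balloons, all of the target color) can be verified as below. The dictionary schema with `"id"` and `"color"` keys and the function name are assumptions for illustration; the disclosure does not specify a data model.

```python
def check_balloon_selection(balloons, selected_ids, target_color):
    # Keep only the balloons the user actually touched.
    selected = [b for b in balloons if b["id"] in set(selected_ids)]
    if not selected:
        return False  # selecting nothing never completes the challenge
    if any(b["color"] != target_color for b in selected):
        return False  # a wrong-colored balloon fails the task
    # The task requires an odd number of correctly colored balloons.
    return len(selected) % 2 == 1
```

A stricter variant could additionally require that every balloon of the target color be selected; the disclosure's wording admits both interpretations.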
Fig. 14 is a diagram for describing a method of authenticating a user by using a field challenge generated based on a recognized object according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. Referring to fig. 14, the electronic device 100 may automatically operate the camera when a user authentication request is received. Thus, an image of objects around the user may be captured in the camera's FoV. For example, an image of the house 1410 may be captured in the camera's FoV.
When an image of the house 1410 is captured, the electronic device 100 may determine tapping the door of the house 1410 as the live challenge. The electronic device 100 may output a question or statement prompting the user to tap the door of the house 1410 captured in the camera's FoV.
Access to the electronic device 100 may be allowed when the user has completed the live challenge of tapping the door of the house 1410 with his or her hand 300.
Fig. 15 is a diagram for describing a method of authenticating a user by using a field challenge generated based on context parameters according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. Upon receiving the user authentication request, the electronic device 100 may provide letter input pads for entering two or three letters of a password associated with the user. The password associated with the user is determined based on the context parameters and may be a word indicating a condition of the user or the electronic device.
Meanwhile, for each letter, a letter input pad may be displayed on the display such that the first letter is input in bold, the second letter in italics, and the third letter in lower case. However, this is merely an example, and a letter input pad in which letter styles are combined may be provided to generate a more complex live challenge. Further, according to another example, the size, color, and the like of the letters may be set differently.
The user may perform a live challenge by touching a particular letter in each letter input pad with his or her hand 300. When the field challenge is successfully performed, the user may access the electronic device 100.
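The letter-pad challenge of fig. 15 can be sketched as follows: one shuffled pad per password letter, each pad rendered in its own style and always containing the correct letter among decoys. The pad size, the styles tuple, the fixed seed, and the function names are illustrative assumptions, not part of the disclosure.

```python
import random

PAD_STYLES = ("bold", "italic", "lowercase")  # one style per pad, per fig. 15

def build_letter_pads(password, pad_size=9, seed=0,
                      alphabet="abcdefghijklmnopqrstuvwxyz"):
    # Build one pad per password letter. Decoy letters exclude the
    # correct one, which is then mixed in, so each pad is solvable.
    rng = random.Random(seed)
    pads = []
    for i, letter in enumerate(password):
        decoys = rng.sample([c for c in alphabet if c != letter],
                            pad_size - 1)
        letters = decoys + [letter]
        rng.shuffle(letters)
        pads.append({"style": PAD_STYLES[i % len(PAD_STYLES)],
                     "letters": letters})
    return pads

def check_pad_input(password, touched_letters):
    # The challenge succeeds when the letter touched on each pad
    # matches the corresponding letter of the password.
    return list(password) == list(touched_letters)
```

In a real implementation the seed would come from a secure random source so pad layouts differ on every authentication attempt.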
Fig. 16 is a diagram for describing a method of authenticating a user by using a field challenge generated based on context parameters according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. Upon receiving the user authentication request, the electronic device 100 may determine the actor and the task constituting the live challenge based on the context parameters. Referring to fig. 16, the electronic device 100 may store context information indicating that the user has recently booked a ticket for a trip to New Delhi through the electronic device 100. Based on this, the electronic device 100 may generate a live challenge of moving an airplane so that the airplane is located on the travel date in a calendar.
Thus, the electronic device 100 may display an image of an airplane and a calendar image on the display, together with a question or statement prompting the user to place the airplane on the travel date in the calendar. The user may access the electronic device 100 by dragging the airplane with his or her hand 300 and dropping the airplane on the date corresponding to the travel date.
Fig. 17 is a diagram for describing a method of authenticating a user by using a field challenge generated based on context parameters according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. Upon receiving the user authentication request, the electronic device 100 may determine the task and the actor constituting the live challenge based on the context parameters. Referring to fig. 17, a call record of the user may be stored in the electronic device 100. Based on this, the electronic device 100 may generate a live challenge of selecting the person with whom the user talked the most times yesterday.
Thus, the electronic device 100 may display on the display a phone icon and information about the people with whom the user talked yesterday, together with a question or statement prompting the user to select the person with whom the user talked the most times yesterday. The user may access the electronic device 100 by dragging the phone icon with his or her hand 300 and dropping the phone icon over the image of that person.
Fig. 18 is a diagram for describing a method of authenticating a user by using a field challenge generated based on context parameters according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. Upon receiving the user authentication request, the electronic device 100 may determine tasks and actors constituting the live challenge based on the context parameters. Referring to fig. 18, a consumption history of a user may be stored in the electronic device 100. Based on this, the electronic device 100 may generate a field challenge that selects the highest amount the user has paid in the grocery store.
Thus, the electronic device 100 may display currency images and a wallet image on the display, together with a question or statement prompting the user to indicate the amount he or she paid for the grocery purchase. The user may repeatedly drag currency images into the wallet with his or her hand 300 until the amount the user paid is indicated. When the user has completed the live challenge, the user may access the electronic device 100.
Fig. 19 is a diagram for describing a method of authenticating a user by using a field challenge generated based on context parameters according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. Upon receiving the user authentication request, the electronic device 100 may determine the task and the actor constituting the live challenge based on the context parameters. Referring to fig. 19, the electronic apparatus 100 may store call records, communication records, schedule information, photographs, and the like of the user. Based on this, the electronic device 100 may generate a live challenge of selecting whom the user met on this day in the past month and where they met.
Accordingly, the electronic apparatus 100 may display a plurality of location images on the display, including an image of a location the user actually visited and images of other locations. Further, the electronic device 100 may display a question or statement prompting the user to select an image of the person the user met on this day in the past month and move it to the location where they met. The user may complete the live challenge by dragging the image of that person onto the image of the corresponding location with his or her hand 300. When the user has completed the live challenge, the user may access the electronic device 100.
Fig. 20 is a diagram for describing a method of an electronic device performing user authentication according to an embodiment of the present disclosure.
In operation S2010, the electronic device may obtain a user authentication request for accessing at least one application running on the electronic device. For example, with the electronic device in a locked state, when a touch input is obtained from a user, the electronic device may determine that a user authentication request for accessing a home screen of the electronic device has been obtained. However, this is merely an example, and the method of receiving the user authentication request is not limited to the above example.
In operation S2020, the electronic device may identify actors and tasks constituting a live challenge for authentication based on context parameters associated with at least one of the electronic device or the user.
An electronic device according to embodiments of the present disclosure may determine a condition of the electronic device or the user based on the context parameters. The electronic device may determine actors and tasks that constitute a live challenge based on the determined conditions for interactive user authentication. An actor may be a person, thing, animal, etc. that is the subject of performing a particular task.
Meanwhile, according to another embodiment of the present disclosure, the electronic device may recognize an object sensed in the FoV of the camera and determine the actors and tasks constituting the live challenge based on the recognized object. This may correspond to the methods of generating a live challenge described with reference to figs. 9 to 14.
In operation S2030, the electronic device may provide a field challenge generated based on the determination.
An electronic device according to an embodiment of the present disclosure may output images related to the actors and tasks constituting the live challenge, together with a question or statement prompting the user to perform the task.
In operation S2040, the electronic device may identify whether to allow access to the at least one application based on whether the provided live challenge has been executed.
An electronic device according to embodiments of the present disclosure may deny access to at least one application when a user action corresponding to a live challenge is not recognized within a predetermined time. The electronic device may allow access to the at least one application when a user action corresponding to the live challenge is identified within a predetermined time.
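The allow/deny rule of operation S2040 can be sketched as a pure decision function: access is granted only when a correct user action is recognized within the predetermined time. The 30-second window, the function name, and the timestamp-based interface are assumptions; the disclosure does not fix a concrete value or API.

```python
from typing import Optional

PREDETERMINED_TIME_S = 30.0  # assumed window; the disclosure leaves it unspecified

def decide_access(issued_at: float, action_at: Optional[float],
                  action_correct: bool) -> bool:
    # Deny when no action was recognized, or the action was wrong.
    if action_at is None or not action_correct:
        return False
    # Allow only if the correct action arrived inside the window.
    return (action_at - issued_at) <= PREDETERMINED_TIME_S
```

Timestamps here are seconds from any monotonic clock; a real device would use its own clock source and treat an expired window as a denied attempt.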
Fig. 21 is a block diagram of an electronic device 2100 that performs user authentication according to an embodiment of the disclosure.
Referring to fig. 21, an electronic device 2100 according to an embodiment of the present disclosure may include an inputter/outputter 2110, a processor 2120, and a memory 2130. However, not all illustrated components are required components. The electronic device 2100 may be implemented with more or fewer components than shown. For example, the electronic device 2100 may include multiple processors and may include a camera and at least one sensor.
Hereinafter, these components will be described in turn.
The inputter/outputter 2110 is configured to obtain user input or to output an audio signal or an image signal, and may include a display and an audio outputter. However, this is only an example, and the components of the inputter/outputter 2110 are not limited to the above example.
The inputter/outputter 2110 according to an embodiment of the present disclosure may obtain a user authentication request. Upon obtaining the user authentication request, the inputter/outputter 2110 may output the live challenge generated based on the context parameters. In addition, when the live challenge is provided, the inputter/outputter 2110 may obtain information input by the user to perform the live challenge.
The processor 2120 generally controls the overall operation of the electronic device 2100. For example, the processor 2120 may execute the operations of the user authentication method described above by executing a program stored in the memory 2130.
The processor 2120 may control the inputter/outputter 2110 to obtain a user authentication request for accessing at least one application running on the electronic device. Further, the processor 2120 may determine the actor and the task constituting a live challenge for authentication based on context parameters associated with at least one of the electronic device or the user. The processor 2120 may provide the live challenge generated based on the determination via the inputter/outputter 2110. Further, the processor 2120 may determine whether to allow access to the at least one application based on whether the provided live challenge has been executed.
The processor 2120 according to an embodiment of the present disclosure may identify an object displayed in a FoV of a camera (not shown). Processor 2120 may determine actors and tasks based on the identified objects and context parameters. Further, the processor 2120 may display a question prompting the determined task via the inputter/outputter.
When the AR mode is set in the electronic device 2100, the processor 2120 according to an embodiment of the present disclosure may output an AR image of a live challenge composed of actors and a task on the recognized object in a superimposed manner.
The processor 2120 according to an embodiment of the present disclosure may determine movement information about the electronic device or the user after object recognition based on the movement of the electronic device or the user recognized via a sensor (not shown). The processor 2120 may adjust a position of the output AR image based on the determined movement information.
The processor 2120 according to an embodiment of the present disclosure may identify a location of the electronic device via a sensor (not shown). The processor 2120 may determine an object around the electronic device based on the location of the electronic device identified via the sensor (not shown). The processor 2120 may determine the actor and the task based on the determined object and the context parameters.
The processor 2120 according to an embodiment of the present disclosure may deny access to the at least one application when a user action corresponding to the live challenge is not recognized within a predetermined time. Further, the processor 2120 may allow access to the at least one application when a user action corresponding to the live challenge is identified within a predetermined time.
The processor 2120 according to an embodiment of the present disclosure may determine actors and tasks by using a predetermined learning network model based on context parameters.
The memory 2130 may store programs for processing and control in the processor 2120, and may store input or output data (e.g., field challenge or context parameters).
Memory 2130 may include at least one storage medium selected from the group consisting of: flash memory, a hard disk, a multimedia card micro type memory, a card type memory (e.g., SD or XD memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Programmable Read Only Memory (PROM), magnetic memory, a magnetic disk, and an optical disk. Further, the electronic device 2100 may operate a network storage or a cloud server that performs a storage function of the memory 2130 on the internet.
Embodiments of the present disclosure may be implemented by at least one software program running on at least one hardware device. The components or embodiments of the present disclosure shown in fig. 1 to 21 may include hardware devices or blocks, which may be at least one of a combination of hardware devices and software modules.
It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, although the embodiments of the present disclosure have been described with reference to the exemplary embodiments, the embodiments of the present disclosure may be implemented with modifications within the scope of the technical idea of the present disclosure.
The method according to the embodiment of the present disclosure may be embodied as program commands that can be executed by various computing devices and recorded on a non-transitory computer-readable recording medium. Examples of the non-transitory computer-readable recording medium may include program commands, data files, and data structures, alone or in combination. The program commands recorded on the non-transitory computer-readable recording medium may be specially designed and configured for the present disclosure, or may be well known to and used by those having ordinary skill in the computer software art. Examples of the non-transitory computer-readable recording medium may include magnetic media (e.g., hard disks, floppy disks, magnetic tape, etc.), optical media (e.g., CD-ROMs, DVDs, etc.), magneto-optical media (e.g., floptical disks), ROM, RAM, and flash memory, which are configured to store program commands. Examples of the program commands may include not only machine language code produced by a compiler but also high-level code executable by a computer by using an interpreter.
Devices according to embodiments of the present disclosure may include a processor, memory to store and execute program data, persistent storage such as a disk drive, a communications port to communicate with external devices, a user interface device such as a touch panel or keys, and the like. The method implemented by the software module or algorithm may be stored in a non-transitory computer-readable recording medium as code or program commands executable on a computer. Examples of the non-transitory computer-readable recording medium may include magnetic storage media (e.g., ROM, RAM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs, Digital Versatile Disks (DVDs)). The non-transitory computer-readable recording medium may also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. The non-transitory computer-readable recording medium may be stored in the memory and may be executed by the processor.
In the present disclosure, the term "computer program product" or "non-transitory computer-readable recording medium" is generally used to refer to media such as a memory, a hard disk installed in a hard disk drive, and a signal. A "computer program product" or "non-transitory computer-readable recording medium" is an object for providing software configured with instructions to a computer system for performing a user authentication operation by providing a field challenge according to an embodiment of the present disclosure.
Although reference numerals are denoted in the embodiments of the present disclosure shown in the drawings and specific terms are used to describe the embodiments of the present disclosure, the present disclosure is not limited by any specific terms and the embodiments of the present disclosure may include all components that may be generally reached by those skilled in the art.
Embodiments of the present disclosure may be described in terms of functional block components and various processing operations. Functional blocks may be implemented by any number of hardware and/or software configurations that perform the specified functions. For example, embodiments of the present disclosure may employ integrated circuit components, such as memories, processing elements, logic elements, or look-up tables, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. Furthermore, embodiments of the present disclosure may employ different types of cores, different types of CPUs, and the like. The components of the present disclosure may be implemented using software programming or software elements. Similarly, the present disclosure may be implemented in any programming or scripting language, such as C, C++, Java, or assembler, with the various algorithms implemented in any combination of data structures, objects, processes, routines, or other programming elements. The functional blocks may be implemented by algorithms running on one or more processors. Furthermore, embodiments of the present disclosure may employ techniques according to the related art for electronic environment configuration, signal processing, and/or data processing. The terms "mechanism," "component," "device," and "configuration" may be used broadly and are not limited to mechanical and physical configurations. These terms may include the meaning of a series of software routines in conjunction with a processor or the like.
The particular manners of operation shown and described herein are illustrative examples and are not intended to limit the scope of the present disclosure in any way. For clarity, electronics, control systems, software, and other functional aspects of the systems according to the prior art may not be described. Furthermore, the connecting lines or connecting members shown in the various figures are intended to represent exemplary functional relationships and/or physical or logical connections between the various elements. It should be noted that many alternative or additional functional relationships, physical connections or logical connections may be present in a practical device. Moreover, no component is essential to the practice of the embodiments of the disclosure unless specifically described as "essential" or "critical".
The use of the term "the" or similar referents in the specification (especially in the claims) is to be construed to cover both the singular and the plural. Further, when a range is described in an embodiment of the present disclosure, an embodiment of the present disclosure to which various values belonging to the range are applied (unless otherwise indicated herein) may be included, and this is the same as each of the various values falling within the range described in the detailed description of the present disclosure. Finally, the steps of all methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. Embodiments of the present disclosure are not limited by the order of the steps described herein. The use of all illustrated or descriptive terms (e.g., "etc.") in embodiments of the disclosure is for purposes of describing embodiments of the disclosure in detail only, and the scope of the disclosure is not limited by the illustrated or descriptive terms unless they are limited by the claims. Further, it will be understood by those skilled in the art that various modifications, combinations and changes may be made in accordance with design conditions and factors within the scope of the appended claims or equivalents.

Claims (15)

1. A method of authenticating a user, the method comprising:
obtaining a user authentication request for accessing at least one application running on an electronic device;
identifying an actor and a task based on one or more contextual parameters associated with at least one of an electronic device or a user;
generating a live challenge for authentication based on the identified actors and tasks;
providing the generated live challenge to the user or the electronic device; and
identifying whether to grant access to the at least one application based on whether the provided live challenge has been successfully executed.
2. The method of claim 1, further comprising identifying objects displayed in a field of view (FoV) of a camera provided in the electronic device, wherein identifying the actor and the task comprises identifying the actor and the task based on the identified objects and the one or more contextual parameters.
3. The method of claim 2, wherein
identifying the actor and the task comprises:
identifying an actor corresponding to the identified object; and
identifying a task that can be performed by the identified actor, and
wherein providing the live challenge comprises displaying a question prompting the identified task.
4. The method of claim 2, wherein providing the live challenge comprises: when an Augmented Reality (AR) mode is set in the electronic device, outputting an AR image of the live challenge, composed of the actor and the task, superimposed on the identified object.
5. The method of claim 1, further comprising:
identifying a location of the electronic device; and
identifying objects around the electronic device based on the identified location of the electronic device,
wherein identifying the actor and the task comprises identifying the actor and the task based on the identified objects and the one or more contextual parameters.
6. The method of claim 1, wherein identifying whether to grant access to the at least one application comprises:
denying access to the at least one application based on not identifying a user action corresponding to the live challenge within a predetermined time; and
allowing access to the at least one application based on identifying a user action corresponding to the live challenge within the predetermined time.
7. The method of claim 1, wherein identifying the actor and the task comprises identifying the actor and the task based on the one or more contextual parameters using a preset learning network model.
8. An electronic device for performing user authentication, the electronic device comprising:
an input/output device;
a memory storing instructions; and
at least one processor coupled to the memory, wherein the at least one processor is configured to execute the instructions to:
obtain, via the input/output device, a user authentication request for accessing at least one application running on the electronic device;
identify an actor and a task based on one or more contextual parameters associated with at least one of the electronic device or a user;
generate a live challenge for authentication based on the identified actor and task;
provide the generated live challenge to the user of the electronic device; and
identify whether to grant access to the at least one application based on whether the provided live challenge has been successfully executed.
9. The electronic device of claim 8, further comprising a camera, wherein the at least one processor is further configured to execute instructions to:
identify an object displayed in a field of view (FoV) of the camera; and
identify the actor and the task based on the identified object and the one or more contextual parameters.
10. The electronic device of claim 9, further comprising a display, wherein the at least one processor is further configured to execute instructions to:
identify an actor corresponding to the identified object;
identify a task that can be performed by the identified actor; and
display a question prompting the identified task.
11. The electronic device of claim 9, wherein the at least one processor is further configured to execute the instructions to output an Augmented Reality (AR) image of the live challenge, composed of the actor and the task, superimposed on the identified object when an AR mode is set in the electronic device.
12. The electronic device of claim 8, further comprising a sensor configured to identify a location of the electronic device, wherein the at least one processor is further configured to execute instructions to:
identify an object around the electronic device based on the location of the electronic device identified via the sensor; and
identify the actor and the task based on the identified object and the one or more contextual parameters.
13. The electronic device of claim 8, wherein the at least one processor is further configured to execute instructions to:
deny access to the at least one application based on not identifying a user action corresponding to the live challenge within a predetermined time; and
allow access to the at least one application based on identifying a user action corresponding to the live challenge within the predetermined time.
14. The electronic device of claim 8, wherein the at least one processor is further configured to execute the instructions to identify the actor and the task based on the one or more contextual parameters by using a preset learning network model.
15. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program when executed on a computing device causes the computing device to:
obtaining a user authentication request for accessing at least one application running on an electronic device;
identifying an actor and a task based on one or more contextual parameters associated with at least one of the electronic device or a user;
generating a live challenge for authentication based on the identified actor and task;
providing the generated live challenge to a user or the electronic device; and
determining whether to grant access to the at least one application based on whether the provided live challenge has been performed.
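Abstracting away the claim language, the authentication flow recited in claims 1 and 6 (identify an actor and task from context, generate and present a live challenge, then grant or deny access depending on whether the challenge is performed within a predetermined time) can be sketched as follows. This is a hypothetical illustration only: the function names, the challenge catalog, and the action-recognition stand-in are not part of the patent.

```python
import time

# Hypothetical catalog mapping a detected context object to an
# (actor, task) pair; the entries are illustrative only.
CHALLENGE_CATALOG = {
    "dog": ("dog", "make the dog sit"),
    "lamp": ("lamp", "switch the lamp on"),
}

def identify_actor_and_task(detected_object):
    """Identify an actor and a task from the context (claim 1)."""
    # Fall back to a generic user gesture when no object-specific
    # challenge is available.
    return CHALLENGE_CATALOG.get(detected_object, ("user", "wave your right hand"))

def authenticate(detected_object, recognize_action, timeout_s=10.0):
    """Grant access only if the live challenge is performed
    within the predetermined time (claims 1 and 6)."""
    actor, task = identify_actor_and_task(detected_object)
    challenge = f"To unlock, please {task} ({actor})"  # generate the live challenge
    start = time.monotonic()
    # Stand-in for camera-based recognition of the user's action.
    performed = recognize_action(challenge)
    elapsed = time.monotonic() - start
    return bool(performed) and elapsed <= timeout_s

# An always-successful recognizer is granted access;
# a failing one is denied.
print(authenticate("dog", lambda c: True))   # True
print(authenticate("dog", lambda c: False))  # False
```

In a real device the `recognize_action` stand-in would be an asynchronous pipeline over the camera feed, and the catalog would be replaced by the learning network model of claims 7 and 14.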
CN201980045581.1A 2018-07-18 2019-07-18 Method and apparatus for performing user authentication Active CN112384916B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
IN201841026856 2018-07-18
IN201841026856 2018-07-18
KR10-2019-0079001 2019-07-01
KR1020190079001A KR20200010041A (en) 2018-07-18 2019-07-01 Method and apparatus for performing user authentication
PCT/KR2019/008890 WO2020017902A1 (en) 2018-07-18 2019-07-18 Method and apparatus for performing user authentication

Publications (2)

Publication Number Publication Date
CN112384916A true CN112384916A (en) 2021-02-19
CN112384916B CN112384916B (en) 2024-04-09

Family

ID=69322085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980045581.1A Active CN112384916B (en) 2018-07-18 2019-07-18 Method and apparatus for performing user authentication

Country Status (2)

Country Link
KR (1) KR20200010041A (en)
CN (1) CN112384916B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023106621A1 (en) * 2021-12-08 2023-06-15 Samsung Electronics Co., Ltd. Cloud server for authenticating user and operation method thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1889562A (en) * 2005-06-28 2007-01-03 华为技术有限公司 Method for identifying equipment for receiving initial session protocol request information
US20150026796A1 (en) * 2013-07-18 2015-01-22 At&T Intellectual Property I, L.P. Event-Based Security Challenges
US20170004656A1 (en) * 2005-10-26 2017-01-05 Cortica, Ltd. System and method for providing augmented reality challenges

Also Published As

Publication number Publication date
KR20200010041A (en) 2020-01-30
CN112384916B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
US11281760B2 (en) Method and apparatus for performing user authentication
US20200026920A1 (en) Information processing apparatus, information processing method, eyewear terminal, and authentication system
EP3491493B1 (en) Gesture based control of autonomous vehicles
US11908187B2 (en) Systems, methods, and apparatus for providing image shortcuts for an assistant application
CN104364753B (en) Method for highlighting active interface element
US20210342427A1 (en) Electronic device for performing user authentication and operation method therefor
US9251333B2 (en) Wearable user device authentication system
WO2016119696A1 (en) Action based identity identification system and method
US20230315828A1 (en) Systems and methods for authenticating users
US11169675B1 (en) Creator profile user interface
CN105740688B (en) Unlocking method and device
CN106462242A (en) User interface control using gaze tracking
US10846514B2 (en) Processing images from an electronic mirror
KR20190140519A (en) Electronic apparatus and controlling method thereof
CN112384916B (en) Method and apparatus for performing user authentication
CN111405175B (en) Camera control method, device, computer equipment and storage medium
US11514082B1 (en) Dynamic content selection
WO2024066977A1 (en) Palm-based human-computer interaction method, and apparatus, device, medium and program product
KR102565197B1 (en) Method and system for providing digital human based on the purpose of user's space visit
US20210042405A1 (en) Method for user verification, communication device and computer program
Von Dehsen Camera Lens with Display Mode
Stockdale et al. A fuzzy system for three-factor, non-textual authentication
Yadav et al. Implementation of SARSA-HMM Technique for Face Recognition
Dhillon et al. Health Analyzing Smart Mirror
CN116524611A (en) Living body detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant