CN112384916B - Method and apparatus for performing user authentication - Google Patents

Method and apparatus for performing user authentication

Info

Publication number
CN112384916B
CN112384916B (application CN201980045581.1A)
Authority
CN
China
Prior art keywords
electronic device
user
challenge
identifying
identified
Prior art date
Legal status
Active
Application number
CN201980045581.1A
Other languages
Chinese (zh)
Other versions
CN112384916A (en)
Inventor
A. Jain
A. Sharma
R. Yadav
K. Mishra
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Priority claimed from PCT/KR2019/008890 (WO2020017902A1)
Publication of CN112384916A
Application granted
Publication of CN112384916B
Legal status: Active

Classifications

    • G06F21/31 User authentication
    • G06F21/36 User authentication by graphic or iconic representation
    • G06F21/45 Structures or tools for the administration of authentication
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06T19/006 Mixed reality
    • G06F2221/2133 Verifying human interaction, e.g., Captcha

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method of authenticating a user includes obtaining a user authentication request for accessing at least one application running on an electronic device, identifying an actor and a task for authentication based on one or more contextual parameters associated with at least one of the electronic device or the user, providing a live challenge generated based on the identified actor and task, and determining whether to allow access to the at least one application based on whether the provided live challenge has been successfully performed.

Description

Method and apparatus for performing user authentication
Technical Field
The present disclosure relates to user authentication techniques. More particularly, the present disclosure relates to methods and apparatus for performing user authentication by providing a live challenge generated based on contextual parameters associated with a user of an electronic device.
Background
As digital communication technology has evolved rapidly across various types of electronic devices, there has been increasing concern about maintaining data security. In electronic devices, data security is required to protect information from unauthorized access, use, disclosure, modification, and destruction by individuals and entities.
In general, to access restricted features of an electronic device, such as a particular program, application, data, or website, a message prompting for a password may be displayed, allowing the user to be authenticated with respect to the restricted feature. Furthermore, there are several methods of identifying and/or authenticating a user of an electronic device. Authentication may include, for example, Personal Identification Number (PIN)-based authentication, pattern-lock-based authentication, authentication based on a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA), biometric (fingerprint, face, or iris) authentication, and the like. Fig. 1a is a diagram illustrating an example of an authentication type according to the related art.
Existing user authentication methods are non-interactive and cumbersome. For example, with existing methods, when a user wants to access an application or website on an electronic device, the user is identified as not being a web robot (i.e., a BOT) using a CAPTCHA or reCAPTCHA, and access rights are granted to the user. As shown in fig. 1a, a user may access an application or website after solving a challenge (e.g., a CAPTCHA 10, a pattern 20, or a question). According to the method shown in fig. 1a, a BOT may be prevented from using an application or website. However, because the challenge questions are generated in advance and stored in the electronic device, the authentication method is non-interactive.
Thus, there is a need for more useful alternative techniques to overcome the above-mentioned drawbacks or other drawbacks in authentication.
Disclosure of Invention
Technical problem
There is a need for more useful alternative techniques to overcome the above-mentioned drawbacks or other drawbacks in authentication.
Technical proposal
A method of authenticating a user includes obtaining a user authentication request for accessing at least one application running on an electronic device, identifying an actor and a task for authentication based on one or more contextual parameters associated with at least one of the electronic device or the user, providing a live challenge generated based on the identified actor and task, and determining whether to allow access to the at least one application based on whether the provided live challenge has been successfully performed.
Drawings
The above and other aspects, features, and advantages of certain embodiments of the present disclosure will become more apparent from the following description when taken in conjunction with the accompanying drawings in which:
FIG. 1a is a diagram illustrating an example of an authentication type according to the related art;
FIG. 1b is a diagram for describing a method of performing user authentication according to an embodiment of the present disclosure;
FIG. 2 is a block diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 3 is a block diagram illustrating a live challenge engine of an electronic device according to an embodiment of the present disclosure;
FIG. 4 is a diagram illustrating a process by which a live challenge engine generates a live challenge according to an embodiment of the present disclosure;
FIG. 5 is a diagram for describing a method of authenticating a user of an electronic device according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of an authentication engine of an electronic device for authenticating a user according to an embodiment of the present disclosure;
FIG. 7a is a diagram for describing a process in which an electronic device captures and displays images of objects surrounding a user according to an embodiment of the present disclosure;
FIG. 7b is a diagram for describing a process by which an electronic device determines a story based on objects surrounding a user according to an embodiment of the present disclosure;
FIG. 7c is a diagram for describing a process by which an electronic device determines an actor of a story according to an embodiment of the present disclosure;
FIG. 7d is a diagram for describing a process by which an electronic device generates a live challenge based on a story, an actor, and a task according to an embodiment of the present disclosure;
FIG. 8a shows a first portion of a flowchart describing a method of authenticating a user of an electronic device according to an embodiment of the present disclosure;
FIG. 8b shows a second portion of a flowchart describing a method of authenticating a user of an electronic device according to an embodiment of the present disclosure;
FIG. 9 is a diagram for describing a method by which an electronic device authenticates a user using a live challenge generated based on weather information according to an embodiment of the present disclosure;
FIG. 10 is a diagram for describing a method of authenticating a user using a live challenge generated based on a recognized object according to an embodiment of the present disclosure;
FIG. 11 is a diagram for describing a method of authenticating a user using a live challenge generated based on a recognized object according to an embodiment of the present disclosure;
FIG. 12 is a diagram for describing a method of authenticating a user using a live challenge generated based on a recognized object according to an embodiment of the present disclosure;
FIG. 13 is a diagram for describing a method of authenticating a user using a live challenge generated based on a recognized object according to an embodiment of the present disclosure;
FIG. 14 is a diagram for describing a method of authenticating a user using a live challenge generated based on a recognized object according to an embodiment of the present disclosure;
FIG. 15 is a diagram for describing a method of authenticating a user using a live challenge generated based on contextual parameters according to an embodiment of the present disclosure;
FIG. 16 is a diagram for describing a method of authenticating a user using a live challenge generated based on contextual parameters according to an embodiment of the present disclosure;
FIG. 17 is a diagram for describing a method of authenticating a user using a live challenge generated based on contextual parameters according to an embodiment of the present disclosure;
FIG. 18 is a diagram for describing a method of authenticating a user using a live challenge generated based on contextual parameters according to an embodiment of the present disclosure;
FIG. 19 is a diagram for describing a method of authenticating a user using a live challenge generated based on contextual parameters according to an embodiment of the present disclosure;
FIG. 20 is a diagram for describing a method by which an electronic device performs user authentication according to an embodiment of the present disclosure; and
FIG. 21 is a block diagram of an electronic device performing user authentication according to an embodiment of the present disclosure.
Best mode for carrying out the invention
According to embodiments of the present disclosure, a live challenge may be generated based on contextual parameters associated with a user of an electronic device, and user authentication may be performed based on the live challenge. According to another embodiment of the present disclosure, user authentication may be performed by identifying an object around the electronic device and providing a live challenge generated based on the object in an Augmented Reality (AR) mode.
Additional aspects will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the embodiments presented herein.
According to an embodiment of the present disclosure, a method of authenticating a user may include: receiving a user authentication request for accessing at least one application running on an electronic device; identifying an actor and a task that constitute a live challenge for authentication, based on contextual parameters associated with at least one of the electronic device or the user; providing the live challenge generated based on the identification; and determining whether to allow access to the at least one application based on whether the provided live challenge has been successfully performed. In another embodiment of the present disclosure, a method of authenticating a user may include: receiving a user authentication request for accessing at least one application running on an electronic device; identifying an actor and a task based on one or more contextual parameters associated with at least one of the electronic device or the user; generating a live challenge for authentication based on the identified actor and task; providing the generated live challenge to a user of the electronic device; and determining whether access to the at least one application is granted based on whether the provided live challenge has been performed. The actor and the task may constitute the live challenge.
The method may further include identifying objects displayed in a field of view (FoV) of a camera provided in the electronic device, wherein identifying the actors and tasks may include identifying the actors and tasks based on the identified objects and the one or more contextual parameters.
Identifying the actor and the task may include identifying an actor corresponding to the identified object and identifying a task that can be performed by the identified actor, and providing the live challenge may include displaying a question prompting the user to perform the identified task.
Providing the live challenge may include, when an AR mode is set in the electronic device, outputting an Augmented Reality (AR) image of the live challenge, which is made up of the actor and the task, so that it overlaps the identified object.
The method may further include identifying movement information about the electronic device or the user after the object is identified, wherein outputting the AR image may include adjusting a position of the output AR image based on the identified movement information.
The method may further include identifying a location of the electronic device, and identifying objects surrounding the electronic device based on the identified location of the electronic device, wherein identifying the actor and task may include identifying the actor and task based on the identified objects and the one or more contextual parameters.
Identifying whether to access the at least one application may include: denying access to the at least one application based on not identifying a user action corresponding to the live challenge for a predetermined time; and allowing access to the at least one application based on identifying a user action corresponding to the live challenge within the predetermined time.
For example, identifying whether to access the at least one application may include: refusing access to the at least one application when no user action corresponding to the live challenge is identified within a predetermined time; and allowing access to the at least one application when a user action corresponding to the live challenge is identified within the predetermined time.
The one or more contextual parameters may include at least one of: setting information about the electronic device, time information, a location to obtain a user authentication request, an activity performed by a user in the electronic device, a notification obtained by the electronic device, social Networking Service (SNS) information, ambient environment information about the electronic device, a network to which the electronic device is connected, or the number of other electronic devices connected to the electronic device.
Identifying actors and tasks may include identifying actors and tasks by using a preset learning network model based on the one or more contextual parameters.
According to another embodiment of the present disclosure, an electronic device for performing user authentication may include an inputter/outputter, a memory storing instructions, and at least one processor connected to the memory, wherein the at least one processor is configured to execute the instructions to: obtain, through the inputter/outputter, a user authentication request for accessing at least one application running on the electronic device; identify an actor and a task that constitute a live challenge for authentication based on one or more contextual parameters associated with at least one of the electronic device or the user; provide, through the inputter/outputter, the live challenge generated based on the identification; and determine whether to allow access to the at least one application based on whether the provided live challenge has been performed. For example, the inputter/outputter may be a touch screen display that can obtain input information (touch input) and display (output) information. According to another embodiment of the present disclosure, the at least one processor may be configured to execute the instructions to: obtain, through the inputter/outputter, a user authentication request for accessing at least one application running on the electronic device; identify an actor and a task based on one or more contextual parameters associated with at least one of the electronic device or the user; generate a live challenge for authentication based on the identified actor and task; provide the generated live challenge to a user of the electronic device; and determine whether access to the at least one application is granted based on whether the provided live challenge has been performed.
According to another embodiment of the present disclosure, a computer program product may include a computer-readable recording medium, such as a non-transitory computer-readable storage medium, storing computer program code which, when executed by a processor, causes the processor to perform a process including: receiving a user authentication request for accessing at least one application running on an electronic device; determining an actor and a task based on one or more contextual parameters associated with at least one of the electronic device or the user; generating a live challenge for authentication based on the determined actor and task; providing the generated live challenge to a user of the electronic device; and determining whether to grant access to the at least one application based on whether the provided live challenge has been performed.
According to another embodiment of the present disclosure, a computer program product includes a computer-readable storage medium having a computer-readable program stored therein, wherein the computer-readable program, when run on a computing device, causes the computing device to: obtain a user authentication request for accessing at least one application running on an electronic device; identify an actor and a task that constitute a live challenge for authentication based on contextual parameters associated with at least one of the electronic device or the user; provide the live challenge generated based on the identification; and determine whether to allow access to the at least one application based on whether the provided live challenge has been performed.
According to another embodiment of the present disclosure, a method of authenticating a user may include: receiving a user authentication request requesting access to at least one application running on an electronic device; based on the received user authentication request, automatically controlling a camera to capture an image or automatically controlling a sensor to obtain current user context information; determining a live Augmented Reality (AR) challenge for authentication based on an object identified in the captured image or based on the current user context information; generating an AR image based on the determined live challenge; displaying the generated AR image; determining whether the user performed an action corresponding to the live AR challenge; and granting access to the at least one application based on determining that the user performed the action corresponding to the live AR challenge. The AR image may include at least one of information about a question to be asked of the user of the electronic device or an image associated with an action to be performed by the user.
Detailed Description
Terms used herein will be briefly described, and user authentication techniques according to embodiments of the present disclosure will be described in detail.
The terms used herein are general terms currently widely used in the art in view of functions related to user authentication technology, but they may change according to the intention of one of ordinary skill in the art, precedents, or the emergence of new technology. In addition, some terms may be arbitrarily selected by the applicant, in which case their detailed meanings are described in the detailed description of the present disclosure. Accordingly, the terms used herein should be defined not as simple names but based on their meanings and the overall context of the present disclosure.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component, without departing from the scope of user authentication techniques according to embodiments of the present disclosure. The term "and/or" includes any combination of a plurality of related items or any one of a plurality of related items.
Throughout this disclosure, the expression "at least one of a, b or c" means: only a, only b, only c, both a and b, both a and c, both b and c, all of a, b and c, or variants thereof. Similarly, the expression "at least one of a, b and c" means: only a, only b, only c, both a and b, both a and c, both b and c, all of a, b and c, or variants thereof.
It will be understood that terms such as "comprising," "including," and "having," when used herein, specify the presence of stated elements but do not preclude the presence or addition of one or more other elements. Furthermore, the term "unit" as used in this specification refers to a software or hardware component, such as a Field-Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC), and a "unit" performs certain functions. However, a "unit" is not limited to software or hardware. A "unit" may be configured to reside in an addressable storage medium or configured to be executed by one or more processors. Thus, for example, a "unit" includes components such as software components, object-oriented software components, class components, task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided by the components and "units" may be combined into a smaller number of components and "units" or further separated into additional components and "units".
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement the present disclosure. However, the user authentication techniques according to embodiments of the present disclosure may be embodied in many different forms and are not limited to the embodiments of the present disclosure described herein. In order to clearly describe the user authentication technology according to the embodiments of the present disclosure, parts irrelevant to the description are omitted, and the same reference numerals are assigned to the same elements throughout the present specification.
According to existing user authentication techniques, user authentication may be performed by prompting a user to provide a password, a pattern, a CAPTCHA solution, biometric information, or the like on an electronic device. The electronic device may then determine whether to authenticate the user based on whether the password input by the user matches stored data. In addition to passwords, biometric information may be used for user authentication. For example, a user may perform authentication by providing a fingerprint to a smartphone. As described above, existing user authentication techniques are not interactive, and it may be difficult for a user to remember the credentials (passwords, patterns, etc.) required each time. As a result, the user may be inconvenienced.
Unlike existing user authentication techniques, user authentication techniques according to embodiments of the present disclosure may interactively perform user authentication. An electronic device performing a user authentication method according to embodiments of the present disclosure may generate a field Augmented Reality (AR) challenge based on a plurality of contextual parameters. The generated live AR challenge may be displayed on a screen of an electronic device operating in AR mode. The electronic device may display the live AR challenge to guide the user in performing at least one task in real-time. When the user successfully completes the task, the user may access the electronic device. Thus, an electronic device according to embodiments of the present disclosure may perform authentication via interaction with a user by confirming in real-time whether a task is performed according to a live AR challenge.
Furthermore, user authentication techniques according to embodiments of the present disclosure may generate a live challenge based on user behavior. For example, the electronic device may identify whether the authentication requester is a BOT or the user of the electronic device (i.e., a real user) by providing a live challenge based on a question such as "Who do you call every day?"
Furthermore, user authentication techniques according to embodiments of the present disclosure may generate real-time field AR challenges without using external hardware components.
Hereinafter, embodiments of the present disclosure will be described in more detail with reference to fig. 1b to 21.
Fig. 1b is a diagram for describing a method of performing user authentication according to an embodiment of the present disclosure.
Referring to fig. 1b, the electronic device 100 may receive a user authentication request from a user for accessing at least one application. For example, when user input touching an icon indicating one of the at least one application requiring access rights is identified, the electronic device 100 may determine that a user authentication request has been received. However, this is merely an example, and the method of the electronic device 100 receiving the user authentication request is not limited to the above-described example.
The electronic device 100 according to embodiments of the present disclosure may automatically operate the camera function to capture an image of the area surrounding the user in response to receiving the user authentication request. The camera captures an image of objects around the user, and the captured image is displayed within the field of view of the camera. Thus, objects around the user can be recognized. For example, the electronic device 100 may recognize a window around the user (e.g., a window behind or in front of the user) from the captured image of the area around the user.
Further, the electronic device 100 according to embodiments of the present disclosure may determine a plurality of contextual parameters associated with at least one of the user or the electronic device 100. The contextual parameters may include at least one of setting information about the electronic device, the time and location at which the user authentication request was received, an activity performed by the user in the electronic device, a notification received by the electronic device, Social Networking Service (SNS) information, ambient environment information about the electronic device, a network to which the electronic device is connected, or the number of other electronic devices connected to the electronic device. For example, the electronic device 100 may determine weather information, such as information indicating that it is currently raining.
The electronic device 100 may determine a real-time story based on the recognized object (such as the window) and the contextual parameters. The object may be recognized by performing image recognition and/or pattern matching.
Further, the electronic device 100 may select at least one task to apply to the real-time story and may determine at least one actor to which the task is to be applied. For example, the task may be to "kill the spider." The electronic device 100 may then generate a live challenge by combining the real-time story, the at least one actor, and the at least one task. For example, the live AR challenge shown in fig. 1b may be generated by adding a spider as a virtual AR image to the window recognized by the electronic device 100. The live AR challenge may be generated using at least one of AR technology, Virtual Reality (VR) technology, and the like. The electronic device 100 may display the live challenge to the user in AR mode and may guide the user to perform an interactive task, such as the task 110 "kill the spider." The user may then kill the spider by performing a touch operation on the spider to complete the requested task. When the user successfully performs the task, the electronic device 100 may allow the user access. When the user does not successfully perform the task, the electronic device 100 may deny access.
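The fig. 1b flow can be summarized in code. The following is a minimal sketch, not the patent's implementation: all types and names (RecognizedObject, ContextParams, LiveChallenge, isCompleted) are hypothetical, object recognition and AR rendering are reduced to plain values, and completing the challenge is modeled as a tap landing near the AR actor.

```kotlin
import kotlin.math.hypot

// Hypothetical types; the patent does not define a concrete API.
data class RecognizedObject(val label: String, val x: Double, val y: Double)
data class ContextParams(val weather: String, val hourOfDay: Int)
data class LiveChallenge(val actor: String, val task: String, val targetX: Double, val targetY: Double)

// Combine the recognized object and contextual parameters into a live challenge.
fun generateLiveChallenge(obj: RecognizedObject, ctx: ContextParams): LiveChallenge = when {
    obj.label == "window" && ctx.weather == "rain" ->
        LiveChallenge("spider", "kill the spider", obj.x, obj.y)  // AR spider overlaid on the window
    obj.label == "window" && ctx.hourOfDay < 12 ->
        LiveChallenge("window", "open the window", obj.x, obj.y)
    else ->
        LiveChallenge("window", "close the window", obj.x, obj.y)
}

// A tap completes the challenge if it lands close enough to the AR actor.
fun isCompleted(challenge: LiveChallenge, tapX: Double, tapY: Double, tolerance: Double = 48.0): Boolean =
    hypot(tapX - challenge.targetX, tapY - challenge.targetY) <= tolerance

fun main() {
    val challenge = generateLiveChallenge(RecognizedObject("window", 320.0, 240.0), ContextParams("rain", 21))
    println(challenge.task)  // e.g., "kill the spider"
    println(if (isCompleted(challenge, 322.0, 236.0)) "access allowed" else "access denied")
}
```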
According to another example, when the electronic device 100 recognizes a window and identifies that the current time is morning, the live AR challenge may be to "open the window" for ventilation. According to another example, when the electronic device 100 recognizes a window and identifies that the current time is evening, the live AR challenge may be to "close the window." The live AR challenge may be generated in real time based on contextual parameters indicating at least one of user behavior, the user's environment, and the like. Thus, interactive user authentication can be performed.
Fig. 2 is a block diagram of an electronic device 100 according to an embodiment of the present disclosure.
Referring to fig. 2, the electronic device 100 may include a camera 110, a field challenge engine 120, an AR engine 130, an authentication engine 140, a communicator 150, a processor 160, a memory 170, and a display 180. However, this is merely an example, and the electronic device 100 may include fewer or more components than those described above. For example, the electronic device 100 may further include one or more sensors, such as gyroscopes, GPS sensors, and/or acceleration sensors, capable of identifying the position or movement of the user or the electronic device 100. The above-described sensor is merely an example, and the sensor included in the electronic device 100 is not limited to the above-described example.
The live challenge engine 120 according to embodiments of the present disclosure may receive a user authentication request to verify that the user is a user of the electronic device 100. Upon receiving the user authentication request, the live challenge engine 120 may generate a live challenge for the user of the electronic device 100. The live challenge may represent a real-time story and may include at least one task to be performed by the user.
In embodiments of the present disclosure, the live challenge engine 120 may generate a live challenge for the user by automatically starting the camera 110 when a user authentication request is received. The camera 110 may be an imaging sensor or the like and may be used to capture images of the area surrounding the user. In addition, the live challenge engine 120 may recognize objects around the user that are displayed in the FoV of the camera 110.
Further, the live challenge engine 120 may determine a plurality of contextual parameters associated with at least one of the user or the electronic device 100. In embodiments of the present disclosure, the contextual parameters may include at least one of setting information about the electronic device, the time (e.g., current date and time), the location where the user authentication request was received, an activity performed by the user in the electronic device, a notification received by the electronic device, SNS information, ambient information about the electronic device (e.g., weather information or lighting information), a network to which the electronic device is connected, or the number of other electronic devices connected to the electronic device.
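Modeled as a value type, the parameter list above might look like the following sketch; the class and field names are illustrative assumptions, not definitions from the patent.

```kotlin
// Illustrative snapshot of the contextual parameters listed above.
data class ContextSnapshot(
    val deviceSettings: Map<String, String>,  // e.g., sound on/off, power saving on/off
    val timestampMillis: Long,                // current date and time
    val location: Pair<Double, Double>?,      // where the authentication request was received
    val recentActivity: List<String>,         // activities performed by the user on the device
    val pendingNotifications: Int,            // notifications received by the device
    val snsInfo: List<String>,                // Social Networking Service information
    val weather: String?,                     // ambient environment information
    val networkSsid: String?,                 // network the device is connected to
    val connectedDeviceCount: Int             // other electronic devices connected
)
```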
Further, the live challenge engine 120 may determine a real-time story based on the recognized object (such as a window) and the contextual parameters. The live challenge engine 120 may determine at least one actor based on the real-time story and at least one task to apply to the at least one actor. Further, the live challenge engine 120 may generate a live challenge by combining the real-time story, the at least one actor, and the at least one task. The live challenge may include a live AR challenge: the live view may be augmented with an AR image (e.g., a spider) that indicates the actor and the task to be performed in relation to the recognized object, and may be provided when the electronic device 100 operates in AR mode.
According to an embodiment of the present disclosure, the authentication engine 140 may be connected to the memory 170 and the processor 160. Authentication engine 140 may perform user authentication based on the live AR challenge. The AR engine 130 may display the live AR challenge in AR mode on the screen of the electronic device 100. Further, AR engine 130 may derive at least one task to be performed by the user in AR mode. The authentication engine 140 may determine whether the user successfully performed at least one task in AR mode. Further, the authentication engine 140 may authorize the user to access at least one application of the electronic device 100 when the user successfully performs at least one task.
According to another embodiment of the present disclosure, the electronic device 100 may identify objects around the user without using the camera 110. The electronic device 100 may determine whether the user is indoors by using a Global Positioning System (GPS) sensor, a gyroscope, or any other sensor, and may recognize an object based on the user's location. For example, when the electronic device 100 determines that the user is in his or her bedroom, the electronic device 100 may acquire data for that particular location (the bedroom) and recognize an object present in the location to generate a live challenge based on the acquired data. The acquired data may be, for example, a captured image of the particular location. However, this is merely an example, and the data is not limited thereto.
According to another embodiment of the present disclosure, the electronic device 100 may generate a live challenge without using a camera, AR, VR, or the like. The electronic device 100 may dynamically generate a live challenge based on contextual parameters, such as current user behavior. For example, the user's location may be determined to be an office based on the coordinates of the electronic device 100 acquired using GPS or the like. The electronic device 100 may then determine objects present in the office, determine an actor and a task based on the determined objects, and generate a live challenge. For example, the electronic device 100 may ask the user to select the color of the water bottle on the user's desk.
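A minimal sketch of this camera-less variant, assuming a hypothetical place database keyed by the location inferred from GPS; the names and data are illustrative, not from the patent.

```kotlin
// Hypothetical lookup of objects known to exist at a location, used when
// no camera image is available.
data class Place(val name: String, val knownObjects: List<String>)

val placeDatabase = mapOf(
    "office" to Place("office", listOf("water bottle", "desk", "monitor")),
    "bedroom" to Place("bedroom", listOf("bed", "lamp", "curtains"))
)

// Build a question-style live challenge about an object present at the place,
// e.g., "Select the color of the water bottle" for the office.
fun challengeWithoutCamera(placeName: String): String? {
    val place = placeDatabase[placeName] ?: return null
    val obj = place.knownObjects.first()
    return "Select the color of the $obj"
}
```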
The communicator 150 may be a communication interface configured to enable the hardware components in the electronic device 100 to communicate with one another. The communicator 150 may be further configured so that the electronic device can communicate with other electronic devices and/or servers.
The processor 160 may be connected to the memory 170 to process various instructions stored in the memory 170 to authenticate a user of the electronic device 100.
The memory 170 may store instructions to be executed by the processor 160. The memory 170 may include non-volatile storage elements. Examples of non-volatile storage elements include magnetic hard disks, optical disks, floppy disks, flash memory, electrically programmable memory (EPROM), and electrically erasable programmable memory (EEPROM). Further, in some examples, the memory 170 may be considered a non-transitory storage medium. The term "non-transitory" indicates that the storage medium is not implemented as a carrier wave or a propagated signal. However, the term "non-transitory" should not be construed to mean that the memory 170 is non-movable. In some examples, the memory 170 may be configured to store larger amounts of information. In some examples, a non-transitory storage medium may store data that can change over time (e.g., in Random Access Memory (RAM) or a cache).
In embodiments of the present disclosure, the display 180 may be configured to display content on the electronic device 100. Examples of the display 180 may include a Liquid Crystal Display (LCD), an active matrix organic light emitting diode (AM-OLED) display, a Light Emitting Diode (LED) display, and the like.
Although fig. 2 illustrates various hardware components of the electronic device 100, the configuration of the electronic device 100 according to the embodiment of the present disclosure is not limited thereto. In another embodiment of the present disclosure, electronic device 100 may include fewer or more components. Moreover, the labels or names of each component are used for illustration purposes only and are not intended to limit the scope of the present disclosure. One or more components may be connected together to perform the same or substantially similar functions of authenticating a user of electronic device 100.
The electronic device 100 may be, but is not limited to, a smartphone, a mobile phone, a laptop computer, a tablet, or the like.
Fig. 3 is a block diagram illustrating the live challenge engine 120 of the electronic device 100 according to an embodiment of the present disclosure.
Referring to fig. 3, live challenge engine 120 may include an object recognition engine 121, a context determination engine 122, a database recognition engine 123, a convolution engine 124, a real-time story engine 125, an actor determination engine 126, a task determination engine 127, and a response determination engine 128.
In embodiments of the present disclosure, the live challenge engine 120 may automatically activate the camera 110 of the electronic device 100 when a user authentication request is received. In addition, the object recognition engine 121 may recognize objects around the user displayed in the FoV of the camera 110. According to another example, the object recognition engine 121 may determine objects present around the user based on a sensor capable of determining location, such as a GPS sensor provided in the electronic device 100.
Further, the context determination engine 122 may determine a plurality of context parameters associated with at least one of the user or the electronic device. Further, the real-time story engine 125 may determine a real-time story based on recognized objects and contextual parameters.
Actor determination engine 126 may determine at least one actor based on the real-time story. Database recognition engine 123 may be configured to recognize and select user stories from the database. In addition, database recognition engine 123 may be configured to recognize or select actor groups for user stories from the database. Task determination engine 127 may determine at least one task to be applied to at least one actor.
The convolution engine 124 may combine the real-time story, the at least one actor, and the at least one task to generate a live challenge. The live challenge engine 120 may receive the live challenge from the convolution engine 124.
Task determination engine 127 may direct or prompt the user to perform at least one task determined in real-time.
The response determination engine 128 may determine whether the user successfully performed at least one task.
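The division of labor among the sub-engines of fig. 3 can be illustrated with a sketch. The interfaces and the string-based convolution below are assumptions for illustration; the patent does not define a concrete API.

```kotlin
// Assumed single-method interfaces for the sub-engines of the live challenge engine.
fun interface ObjectRecognition { fun recognize(): List<String> }
fun interface ContextDetermination { fun determine(): Map<String, String> }
fun interface StoryEngine { fun story(objects: List<String>, ctx: Map<String, String>): String }
fun interface ActorEngine { fun actor(story: String): String }
fun interface TaskEngine { fun task(actor: String): String }

// The convolution step combines story, actor, and task into one live challenge.
class LiveChallengeEnginePipeline(
    private val objects: ObjectRecognition,
    private val context: ContextDetermination,
    private val stories: StoryEngine,
    private val actors: ActorEngine,
    private val tasks: TaskEngine
) {
    fun generate(): String {
        val objs = objects.recognize()        // object recognition engine 121
        val ctx = context.determine()         // context determination engine 122
        val story = stories.story(objs, ctx)  // real-time story engine 125
        val actor = actors.actor(story)       // actor determination engine 126
        val task = tasks.task(actor)          // task determination engine 127
        return "$story: $actor must $task"    // convolution engine 124
    }
}
```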
The live challenge engine 120 according to another embodiment of the present disclosure may generate a live challenge for the user without using the camera 110, as in the embodiment described above with reference to fig. 2.
Fig. 4 is a diagram illustrating a process by which the live challenge engine 120 generates a live challenge according to an embodiment of the present disclosure.
Referring to fig. 4, the following process may be performed by the live challenge engine 120 of the electronic device 100 to generate a live challenge for a user of the electronic device 100.
In operation 401a, the live challenge engine 120 may send the detected object to the database recognition engine 123. The object may be detected by, for example, an intelligent agent (e.g., Bixby Vision) that recognizes and classifies objects by performing image recognition on objects included in images captured by the camera.
In operation 401b, the live challenge engine 120 may send a plurality of contextual parameters to the convolution engine 124. The contextual parameters may include one or more of the following: the current date, the current time, the current location, the difficulty of the live challenge, weather information, lighting conditions of the user's current environment, speed information about movement of the user or the electronic device 100, the orientation of the electronic device 100 (e.g., landscape or portrait, facing up or down), seasonal information (e.g., an indication of the current season, such as spring, summer, autumn, or winter), the number or type of accessories connected to the electronic device 100, setting information (e.g., sound on/off, power saving on/off), and so on.
In operation 402, the live challenge engine 120 may select any one of a plurality of databases stored in the memory 170.
In operation 403, the live challenge engine 120 may send the selected database and its number of entries to the convolution engine 124.
In operations 404a and 404b, the convolution engine 124 may mix one or more of the contextual parameters with the received number of entries in the database and generate a value that is unique each time, using, for example, a hashing technique and/or a random-number-generation technique.
In operations 405a and 405b, the convolution engine 124 may send the generated value to a database stored in the memory 170. At the same time, the convolution engine 124 may send the generated value to the actor determination engine 126.
In operation 406, upon receiving the generated value, the actor determination engine 126 may select a user story from a database stored in the memory 170 and send the user story to the task determination engine 127.
In operation 407, the task determination engine 127 may determine a question or task to be displayed to the user. The task determination engine 127 may store a list of tasks that can be performed for each type of actor and recognized object. Further, the task determination engine 127 may be trained using a learning network model based on training data sets of inputs and outputs; for example, actors and objects may be used as inputs, and questions may be used as outputs. Accordingly, the task determination engine 127 may identify a set of questions that can be posed for the current scene based on the actor and the object. The task determination engine 127 may also determine the question based on the current user environment, such as the location or the time. For example, when the user is watching a movie, the task determination engine 127 may determine the question to be one asking who the actors in the movie are.
In operation 408, when a question or activity is determined, task determination engine 127 may send the determined question or activity to response determination engine 128.
In operation 409, the response determination engine 128 may determine an accurate answer to the question and send the determined accurate answer to the actor determination engine 126.
In operation 410, the actor determination engine 126 may select features of the actor (e.g., size, shape, or color) based on inputs such as the user story, the difficulty of the live challenge, and the contextual parameters.
For example, assume that the user story is a scene that displays "a window with curtains." The contextual parameters of the current usage environment may be as follows:
a. the current location of the user (e.g., state or country)
b. Current weather conditions (e.g., sunny days, rainy days, etc.)
c. Current time (e.g., day, night, afternoon, etc.)
d. Difficulty values (e.g., difficult, easy, or medium).
For example, the current usage environment may include information indicating that the user is in India, that the weather is sunny, that it is daytime, and that the difficulty level is easy. Based on this current usage-environment information, the actor determination engine 126 may display a window with open curtains (an easy difficulty level) and may ask the user to close the curtains because the weather is clear (and, for example, the sunlight is strong).
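Operations 404a/404b and operation 410 can be illustrated as follows. SHA-256 with a per-request nonce is one possible realization of "a hashing technique and/or a random number generation technique," and the difficulty-to-feature mapping is an illustrative reading of the curtain example above; neither is prescribed by the patent.

```kotlin
import java.math.BigInteger
import java.security.MessageDigest

// One possible realization of operations 404a/404b: mix the contextual
// parameters with the number of database entries and derive a value that
// differs per request, then use it to index a story entry.
fun uniqueValue(contextParams: List<String>, entryCount: Int, nonce: Long): Int {
    val digest = MessageDigest.getInstance("SHA-256")
        .digest((contextParams.joinToString("|") + "|$nonce").toByteArray())
    return BigInteger(1, digest).mod(BigInteger.valueOf(entryCount.toLong())).toInt()
}

// Illustrative actor-feature selection for operation 410: the difficulty
// value and weather shape the displayed actor and task, as in the curtain example.
fun actorFeatures(difficulty: String, weather: String): String = when {
    difficulty == "easy" && weather == "sunny" -> "window with open curtains; ask the user to close them"
    difficulty == "hard" -> "small actor with fast movement"
    else -> "medium-sized actor with default colors"
}

fun main() {
    val idx = uniqueValue(listOf("India", "sunny", "day", "easy"), entryCount = 12, nonce = System.nanoTime())
    println("selected story index: $idx")
    println(actorFeatures("easy", "sunny"))
}
```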
For user authentication, a background service may run continuously in the electronic device 100 to capture user behavior and to generate and store a database of live challenges and their solutions from the captured behavior. To generate a live challenge, one or more of the following types of functions may need to be activated (see the sketch after this list):
a. Messaging
b. E-mail
c. Location
d. Phone
e. General activities [e.g., calendar information, health records, etc.]
f. User trends [e.g., camera usage, call usage, frequent contacts, and home-office routines]
Fig. 5 is a diagram for describing a method of authenticating a user of the electronic device 100 according to an embodiment of the present disclosure.
In the following procedure, an embodiment of the present disclosure will be described in which the AR engine 130 of the electronic device 100 participates in user authentication.
1) The input module 101 of the electronic device 100 may receive a user authentication request.
2) The user authentication request may be sent to AR engine 130.
3) The AR engine 130 may also operate the camera 110 of the electronic device 100.
4) The camera 110 may send the image to an intelligent agent, such as the Bixby Vision agent 110a of the electronic device 100. The Bixby Vision agent 110a may be built into the camera 110 so that the user can tap the Vision icon in the viewfinder to interpret a landmark or build the AR image.
5) The Bixby Vision agent 110a may identify objects in the user's FoV and send the identified objects to the input module 101.
6) The input module 101 may send the identified object to the live challenge engine 120.
7) The live challenge engine 120 may generate a live AR challenge by augmenting the user's FoV with AR images related to the user story and the actor. The live AR challenge may be generated based on the identified object and other contextual parameters. In addition, the live challenge engine 120 may send the live AR challenge to the input module 101 and the expected result of the live AR challenge to the authentication engine 140.
8) The input module 101 may send the live AR challenge and the contextual parameters to the AR engine 130.
9) The AR engine 130 may display the live challenge in AR mode via the camera 110. The AR animator 131 may be configured to display the live challenge, or an AR image associated with the live challenge, at a particular location on the display 180 of the electronic device 100. The AR engine 130 may also operate the movement or motion detector 132 to obtain movement information about the user and the electronic device 100.
10) The movement or motion detector 132 may continuously send movement information to the AR engine 130.
11) The AR engine 130 may send the movement information to the input module 101. The input module 101 may identify whether the live AR challenge has been successfully completed based on the movement information.
12) The input module 101 may also send the authentication result to the authentication engine 140. To determine whether the live AR challenge has been successfully completed, the authentication engine 140 may compare the result received from the live challenge engine 120 with the result received from the input module 101.
a. When the result received from the input module 101 and the result received from the live challenge engine 120 are the same, the live challenge has been successfully completed. Thus, the authentication engine 140 may allow the user of the electronic device 100 access.
b. When the result received from the input module 101 and the result received from the live challenge engine 120 are not the same, the live challenge has not been successfully completed. Thus, the authentication engine 140 may deny the user of the electronic device 100 access.
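The comparison in step 12 might be reduced to the following sketch, where the expected result comes from the live challenge engine and the observed result from the input module; the ChallengeResult type is a hypothetical illustration.

```kotlin
// Sketch of step 12: access is allowed only when the observed outcome
// matches the outcome expected by the live challenge engine.
data class ChallengeResult(val taskId: String, val completed: Boolean)

fun authorize(expected: ChallengeResult, observed: ChallengeResult): Boolean =
    expected.taskId == observed.taskId && expected.completed == observed.completed

// authorize(...) == true  -> the authentication engine allows access (case a)
// authorize(...) == false -> the authentication engine denies access (case b)
```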
Fig. 6 is a block diagram of an authentication engine 140 of the electronic device 100 for authenticating a user according to an embodiment of the present disclosure.
The live challenge engine 120 may generate a live challenge and send the relevant information to the AR engine 130 to provide the live challenge interactively through the camera 110. The AR engine 130 may include an AR animator 131 and a movement or motion detector 132, which may be a sensor. The AR engine 130 may analyze when, where, and how the live challenge is presented.
The AR animator 131 may analyze the data of the live challenge and determine where to display the live challenge in the electronic device 100. The AR animator 131 may calculate the exact location at which to display the AR image of the live challenge based on the parameters provided with the live challenge, and may display the live challenge in the AR mode of the display at the location determined by this calculation. Further, the AR engine 130 may interact with the movement or motion detector 132 to receive user input and send the user input to the authentication engine 140. The movement or motion detector 132 may use a sensor such as a gyroscope or an accelerometer to detect movement of the electronic device 100 and to identify whether the user's action is correct in three-dimensional (3D) space.
The AR engine 130 of the electronic device 100 may perform the following procedure to authenticate the user of the electronic device 100.
1) The AR animator 131 and the camera 110 are activated.
2) The base coordinates and the destination coordinates on the 3D plane are acquired.
3) The movements of the electronic device 100 and the user are observed until the electronic device 100 matches the destination coordinates.
4) When the electronic device 100 does not perform the necessary operations within the time limit, the AR animator 131 and the camera 110 are disabled and authentication is treated as failed.
5) When the electronic device 100 is at the destination coordinates:
a. The live challenge engine 120 may select the type of challenge, such as a touch activity or an activity such as tracking the user's movement.
b. The live challenge engine 120 may receive event details of the object, such as the size, type, subtype, color, or base coordinates of the object, and the number of objects. In addition, the live challenge engine 120 may send the received event details to the AR engine 130.
6) The AR engine 130 may receive information about the user action, compare the user action with the expected result data, and send the result data, or information about the comparison result, to the authentication engine 140.
7) The authentication engine 140 may determine whether the request was successful. In particular, the authentication engine 140 may determine whether the request was successful based on a comparison between the original result sent by the live challenge engine 120 and the user behavior in the data sent by the AR engine 130.
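Steps 2 to 4 describe a tracking loop with a time limit. The following sketch assumes a hypothetical pose-sampling callback (e.g., fused gyroscope/accelerometer data) and an arbitrary tolerance; it is one way such a loop could look, not the patent's code.

```kotlin
import kotlin.math.hypot

// Sketch of steps 2-4: observe device movement until it reaches the
// destination coordinates, failing authentication after the time limit.
data class Point3(val x: Double, val y: Double, val z: Double)

fun distance(a: Point3, b: Point3): Double =
    hypot(hypot(a.x - b.x, a.y - b.y), a.z - b.z)

fun trackUntilMatch(
    destination: Point3,
    sample: () -> Point3,          // hypothetical fused gyroscope/accelerometer pose
    timeLimitMillis: Long = 15_000,
    tolerance: Double = 0.05
): Boolean {
    val start = System.currentTimeMillis()
    while (System.currentTimeMillis() - start < timeLimitMillis) {
        if (distance(sample(), destination) <= tolerance) return true  // challenge completed
        Thread.sleep(50)
    }
    return false  // time limit exceeded: disable the AR animator and camera; authentication fails
}
```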
Figs. 7a to 7d are diagrams for describing example scenarios for authenticating a user of the electronic device 100 according to embodiments of the present disclosure. In the embodiments of the present disclosure below, it is assumed that a user wants to access the electronic device 100.
The electronic device 100 may receive an access request from the user. Upon receiving the access request, the electronic device 100 may automatically drive the camera 110 to capture an image of objects around the user displayed in the FoV of the camera 110. Fig. 7a is a diagram for describing a process in which the electronic device 100 captures and displays objects around a user according to an embodiment of the present disclosure. When the user is at home, a window 200 may be displayed on the electronic device, as shown in fig. 7a.
Further, the electronic device 100 may determine a user story based on the detected object and the condition of the user. Fig. 7b is a diagram for describing a process in which the electronic device 100 determines a story based on objects around a user according to an embodiment of the present disclosure. The story may include a spider web 210a selected from a database based on the detected object (such as the window 200), as shown in fig. 7b.
Further, the electronic device 100 may determine the actor of the selected user story based on the user's context. Fig. 7c is a diagram for describing a process in which the electronic device 100 determines an actor of a story according to an embodiment of the present disclosure. Here, the actor is a spider 210b selected from the database based on the detected object (such as the window 200) and the story, as shown in fig. 7c.
Further, the electronic device 100 may generate a live challenge for the user based on the selected story and actor. In particular, the electronic device 100 may determine the task based on the story and the actor. Fig. 7d is a diagram for describing a process in which the electronic device 100 generates a live challenge based on a story, an actor, and a task according to an embodiment of the present disclosure. Referring to fig. 7d, the electronic device 100 may display an AR image in which the spider 210b and the spider web 210a are added to the window 200, and may present the task of killing the spider 210b. The user may be required to kill the spider 210b as a live challenge, as shown in fig. 7d. The live challenge is accomplished when the user moves his or her hand 300 toward the spider 210b and performs a tap to kill the spider 210b. The electronic device 100 may continuously monitor the user's movements to determine whether the live challenge has been completed. Thus, the electronic device 100 can interactively identify and authenticate the user.
Fig. 8a shows a first part of a flowchart describing a method of authenticating a user of the electronic device 100 according to an embodiment of the present disclosure. Fig. 8b shows a second part of the flowchart describing the method of authenticating a user of the electronic device according to an embodiment of the present disclosure.
Hereinafter, a user authentication method according to an embodiment of the present disclosure will be described in detail with reference to Figs. 8a and 8b.
In operation 801, the electronic device 100 may obtain a user authentication request. For example, the live challenge engine 120 included in the electronic device 100 may obtain a request to authenticate the user.
In operation 802, the electronic device 100 may identify whether authentication is to be performed by using AR. For example, the live challenge engine 120 may identify whether authentication is to be performed by using AR.
The electronic device 100 according to an embodiment of the present disclosure may display a message asking whether authentication is to be performed by using AR, and identify whether authentication is to be performed by using AR based on the user's response. According to another embodiment of the present disclosure, when the electronic device 100 operates in the AR mode, the electronic device 100 may recognize that authentication is to be performed by using AR without further prompting.
In operation 803, the electronic device 100 may automatically run the camera 110. When the electronic device 100 recognizes that authentication is to be performed by using AR, the electronic device 100 may operate the camera 110. Further, the live challenge engine 120 included in the electronic device 100 may perform the corresponding operation.
In operation 804, the electronic device 100 may identify objects around the user displayed in the FoV of the camera. For example, the object recognition engine 121 included in the electronic device 100 may recognize objects around the user.
In operation 805, the electronic device 100 may identify a plurality of contextual parameters associated with at least one of the user or the electronic device 100. For example, the context determination engine 122 included in the electronic device 100 may identify the contextual parameters associated with at least one of the user or the electronic device 100.
In operation 806, the electronic device 100 can identify a real-time story based on the recognized object and the contextual parameters. For example, a real-time story engine 125 included in the electronic device 100 may identify a real-time story based on recognized objects and contextual parameters.
Information about a story corresponding to an object and context parameters may be pre-stored in a database of the electronic device 100. When the electronic device 100 recognizes the object and identifies a contextual parameter indicating the current condition, the electronic device 100 may identify a real-time story by comparison with information previously stored in a database.
In operation 807, the electronic device 100 can identify at least one actor based on the real-time story. For example, an actor determination engine 126 included in the electronic device 100 may identify at least one actor based on the real-time story.
The database of the electronic device 100 may pre-store information about at least one actor that may be set for each story. When a story is determined, the electronic device 100 according to embodiments of the present disclosure may determine an actor based on at least one of the determined story, contextual parameters, or recognized objects.
In operation 808, the electronic device 100 may identify at least one task to be applied to at least one actor. For example, task determination engine 127 included in electronic device 100 may determine at least one task to be applied to at least one actor.
The database of the electronic device 100 may pre-store information regarding at least one task that may be set for each story. The electronic device 100 according to embodiments of the present disclosure may determine the task based on at least one of a story, an actor, a contextual parameter, or an identified object.
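As an illustrative sketch of such a database-driven selection, the lookup might resemble the following; the table contents and keys are invented for illustration, since the disclosure only states that stories, actors, and tasks are pre-stored per object and context:

```python
# Hypothetical story/actor/task selection keyed by the recognized object
# and a contextual parameter; the table entries are invented examples
# echoing the scenarios of Figs. 7, 9, and 10.
from typing import Optional

CHALLENGE_DB = {
    ("window", "home"):     {"story": "spider_web", "actor": "spider",    "task": "tap_to_kill"},
    ("vehicle", "cloudy"):  {"story": "rainy_day",  "actor": "raindrops", "task": "touch_wiper"},
    ("balloon", "outdoor"): {"story": "sky",        "actor": "compass",   "task": "point_to_balloon"},
}

def select_challenge(recognized_object: str, context: str) -> Optional[dict]:
    """Look up a pre-stored story, actor, and task for the given inputs."""
    return CHALLENGE_DB.get((recognized_object, context))

print(select_challenge("window", "home"))
# {'story': 'spider_web', 'actor': 'spider', 'task': 'tap_to_kill'}
```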
In operation 809, the electronic device 100 may generate a live AR challenge for the user of the electronic device 100 based on the recognized object and the contextual parameters. The electronic device 100 may generate live AR challenges based on stories, actors, and tasks. The live AR challenge may allow the user to derive tasks to be performed.
For example, the live challenge engine 120 included in the electronic device 100 may generate a live AR challenge for the user based on the recognized object and the contextual parameters.
In operation 810, the electronic device 100 may display the live AR challenge on a display in AR mode.
In operation 811, the electronic device 100 may derive at least one task to be performed by the user in AR mode. For example, task determination engine 127 may determine at least one task to be performed by the user in AR mode.
In operation 812, the electronic device 100 may identify whether the user successfully performed at least one task in the AR mode. For example, the response determination engine 128 included in the electronic device 100 may determine whether the user successfully performed at least one task in AR mode.
In operation 813, the electronic device 100 may identify whether the live AR challenge has been completed. For example, the live challenge engine 120 may determine whether the live AR challenge has been completed.
In operation 814, the electronic device 100 may allow access to the user of the electronic device. When the user has completed the live AR challenge, the electronic device 100 may allow access to the user of the electronic device.
In operation 815, the electronic device 100 may deny access to the user of the electronic device. When the user fails to complete the live AR challenge, the electronic device 100 may deny the user access to the electronic device.
For example, authentication engine 140 may deny access to a user of electronic device 100.
In operation 816, the electronic device 100 may identify a contextual parameter associated with at least one of the user or the electronic device 100. When the electronic device 100 determines that authentication is not performed by using AR, the electronic device 100 may determine a context parameter associated with at least one of the user or the electronic device 100. For example, the context determination engine 122 may determine context parameters associated with at least one of the user or the electronic device 100.
In operation 817, the electronic device 100 may identify a real-time story based on the contextual parameters. For example, the real-time story engine 125 may determine a real-time story based on the contextual parameters.
In operation 818, the electronic device 100 may identify at least one actor based on the real-time story. For example, actor determination engine 126 may determine at least one actor based on the real-time story.
In operation 819, the electronic device 100 may identify at least one task to be applied to at least one actor. For example, task determination engine 127 may determine at least one task to be applied to at least one actor.
In operation 820, the electronic device 100 may generate a live challenge based on the real-time story, the at least one actor, and the at least one task. For example, the live challenge engine 120 may generate a live challenge for the user of the electronic device 100 based on the real-time story, the at least one actor, and the at least one task.
In operation 821, the electronic device 100 may display the live challenge on the display 180.
In operation 822, the electronic device 100 may derive at least one task to be performed by the user. For example, task determination engine 127 may derive at least one task to be performed by the user.
In operation 823, the electronic device 100 may identify whether the user successfully performed at least one task. For example, the response determination engine 128 may determine whether the user has successfully performed at least one task.
The various operations, blocks, steps, etc. in flowchart 800 described above may be performed in a different order or simultaneously. Further, in some embodiments of the present disclosure, some operations, blocks, steps, etc. may be omitted, added, or modified without departing from the scope of the present disclosure.
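Purely for illustration, the control flow of Figs. 8a and 8b can be condensed into the following sketch; every function name below is a hypothetical stand-in for an engine call, as the disclosure does not define such an interface:

```python
# Condensed sketch of the flow of Figs. 8a and 8b (operations 801-823).

def authenticate_user(device) -> bool:
    device.get_authentication_request()                        # 801
    if device.use_ar_authentication():                         # 802
        device.run_camera()                                    # 803
        objects = device.identify_objects_in_fov()             # 804
        context = device.identify_context_parameters()         # 805
    else:
        objects = None                                         # non-AR path
        context = device.identify_context_parameters()         # 816
    story = device.identify_story(objects, context)            # 806 / 817
    actor = device.identify_actor(story)                       # 807 / 818
    task = device.identify_task(story, actor)                  # 808 / 819
    challenge = device.generate_challenge(story, actor, task)  # 809 / 820
    device.display_challenge(challenge)                        # 810-811 / 821-822
    completed = device.task_performed(task)                    # 812-813 / 823
    return completed                                           # 814 allow / 815 deny
```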
Fig. 9 is a diagram for describing a method in which the electronic device 100 authenticates a user by using a live challenge generated based on weather information according to an embodiment of the present disclosure.
The electronic device 100 may obtain a user authentication request from a user. In the embodiments of the present disclosure, it is assumed that user authentication of the electronic device 100 is performed in the AR mode.
Referring to Fig. 9, the electronic device 100 may automatically operate the camera when a user authentication request is obtained. Thus, an image of objects surrounding the user can be captured in the FoV of the camera. For example, an image of the vehicle 910 may be captured in the FoV of the camera 110.
Meanwhile, the electronic device 100 may determine that the current weather is cloudy based on the contextual parameters. Based on the captured object image and the contextual parameters, the electronic device 100 may determine the raindrops as the actor and removing the raindrops by using the wiper 930 of the vehicle as the task. Thus, the electronic device 100 may superimpose the AR images of the wiper 930 of the vehicle and the raindrops 920 on the real-world image of the vehicle 910 captured in the FoV of the camera. Further, the electronic device 100 may provide a live AR challenge by outputting, with the image in which the real-world image and the AR images are superimposed, a question or statement prompting the user to remove the raindrops 920 by touching the wiper 930.
When the user completes the live challenge of touching the wiper 930 with the hand 300 to remove the raindrops 920, the user may be allowed to access the electronic device 100.
Fig. 10 is a diagram for describing a method of authenticating a user by using a live challenge generated based on a recognized object according to an embodiment of the present disclosure.
The electronic device 100 may obtain a user authentication request from a user. In the embodiments of the present disclosure, it is assumed that user authentication of the electronic device 100 is performed in the AR mode.
Referring to Fig. 10, the electronic device 100 may automatically operate the camera when a user authentication request is obtained. Thus, an image of objects surrounding the user can be captured in the FoV of the camera. For example, an image of the balloon 1010 may be captured in the FoV of the camera.
The electronic device 100 may generate a live challenge based on the captured object image. For example, the electronic device 100 may select the compass 1020 as the actor that constitutes the live challenge. Further, the electronic device 100 may determine indicating the direction of the balloon 1010 by using the compass 1020 as the task that constitutes the live challenge. The user may then be required to complete the live AR challenge of rotating the compass 1020 such that the pointer points to the balloon 1010. The user may access the electronic device 100 by rotating the pointer of the compass 1020 with his or her hand 300 toward the direction of the balloon 1010.
Fig. 11 is a diagram for describing a method of authenticating a user by using a live challenge generated based on a recognized object according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. In the embodiments of the present disclosure, it is assumed that user authentication of the electronic device 100 is performed in the AR mode.
Referring to Fig. 11, the electronic device 100 may automatically operate the camera when a user authentication request is received. Thus, an image of objects surrounding the user can be captured in the FoV of the camera. For example, an image of the cap 1120 may be captured in the FoV of the camera.
The electronic device 100 may generate a live challenge based on the captured object image. For example, the electronic device 100 may select the genie 1110 as the actor that constitutes the live challenge. Further, the electronic device 100 may determine putting a hat on the genie 1110 as the task that constitutes the live challenge. Thus, the electronic device 100 may generate a live challenge by superimposing the AR image of the genie 1110 on the real-world image of the cap 1120 captured in the FoV of the camera.
In addition, the electronic device 100 may provide a live AR challenge by outputting, with the image in which the real-world image and the AR image are superimposed, a question or statement prompting the user to move the cap 1120 to the head of the genie 1110. In addition, the user may be required to drag the cap 1120 with the hand 300 and place the cap 1120 on the head of the genie 1110.
Fig. 12 is a diagram for describing a method of authenticating a user by using a live challenge generated based on a recognized object according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. In the embodiments of the present disclosure, it is assumed that user authentication of the electronic device 100 is performed in the AR mode.
Referring to Fig. 12, the electronic device 100 may automatically operate the camera when a user authentication request is received. Thus, an image of objects surrounding the user can be captured in the FoV of the camera. For example, an image of the genie 1210 may be captured in the FoV of the camera.
The electronic device 100 may generate a live challenge based on the captured object image. For example, the electronic device 100 may determine to select the genie 1210 as the actor of the live challenge and attaching the beard 1220 to the genie 1210 as the task that constitutes the live challenge. Thus, the electronic device 100 may generate a live challenge by superimposing the AR image of the beard 1220 on the real-world image of the genie 1210 captured in the FoV of the camera. In addition, the user may be required to drag the beard 1220 with the hand 300 and place the beard 1220 on the face of the genie 1210.
Fig. 13 is a diagram for describing a method of authenticating a user by using a live challenge generated based on a recognized object according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user.
Referring to Fig. 13, the electronic device 100 may automatically operate the camera when a user authentication request is received. Thus, an image of objects surrounding the user can be captured in the FoV of the camera.
The electronic device 100 may generate a live challenge based on the captured object image. For example, the electronic device 100 may determine to select the differently colored balloons 1310 captured by the camera as the actors that constitute the live challenge, and selecting an odd number of balloons of a particular color among the differently colored balloons as the task that constitutes the live challenge. Thus, the electronic device 100 may output a question or statement prompting the user to select an odd number of balloons of a particular color among the differently colored balloons 1310 captured in the FoV of the camera.
Access to the electronic device 100 may be allowed when the user completes the live challenge by selecting an odd number of balloons of a particular color with his or her hand 300.
Fig. 14 is a diagram for describing a method of authenticating a user by using a live challenge generated based on a recognized object according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. Referring to Fig. 14, the electronic device 100 may automatically operate the camera when a user authentication request is received. Thus, an image of objects surrounding the user can be captured in the FoV of the camera. For example, an image of the cabin 1410 may be captured in the FoV of the camera.
When an image of the cabin 1410 is captured, the electronic device 100 may determine tapping the door of the cabin 1410 as the live challenge. The electronic device 100 may output a question or statement prompting the user to tap the door of the cabin 1410 captured in the FoV of the camera.
Access to the electronic device 100 may be allowed when the user has completed the live challenge of tapping the door of the cabin 1410 with his or her hand 300.
Fig. 15 is a diagram for describing a method of authenticating a user by using a live challenge generated based on contextual parameters according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. Upon receiving the user authentication request, the electronic device 100 may provide letter input pads for entering two or three letters of a password associated with the user. The password associated with the user is determined based on the contextual parameters and may be a word indicating a condition of the user or the electronic device.
Meanwhile, for each letter, a letter input pad may be displayed on the display such that the first letter is input in bold, the second letter is input in italics, and the third letter is input in lowercase. However, this is merely an example, and letter input pads in which combinations of styles are mixed may be provided to generate more complex live challenges. Further, according to another example, the size, color, etc. of the letters may be set differently.
The user may perform the live challenge by touching a particular letter in each of the letter input pads with his or her hand 300. When the live challenge is successfully performed, the user may access the electronic device 100.
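A minimal sketch of how such style-mixed letter input pads might be generated is shown below; the pad size, candidate set, and style cycle are assumptions beyond the bold/italic/lowercase example given above:

```python
# Hypothetical generation of letter input pads: each pad contains the
# required password letter among decoys, tagged with the style in which
# the user must touch it (1st bold, 2nd italic, 3rd lowercase).
import random
import string

STYLES = ["bold", "italic", "lowercase"]

def build_letter_pads(password: str, pad_size: int = 9) -> list:
    """For each password letter, build a pad of candidate letters and
    record which style the correct letter must be touched in."""
    pads = []
    for i, letter in enumerate(password):
        style = STYLES[i % len(STYLES)]
        decoys = random.sample(
            [c for c in string.ascii_uppercase if c != letter.upper()],
            pad_size - 1)
        letters = decoys + [letter.upper()]
        random.shuffle(letters)
        pads.append({"letters": letters, "target": letter.upper(), "style": style})
    return pads

for pad in build_letter_pads("SUN"):
    print(pad["style"], pad["target"], pad["letters"])
```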
Fig. 16 is a diagram for describing a method of authenticating a user by using a live challenge generated based on contextual parameters according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. Upon receiving the user authentication request, the electronic device 100 may determine the actor and task that make up the live challenge based on the contextual parameters. Referring to Fig. 16, the electronic device 100 may store context information indicating that the user has recently booked a flight ticket for a trip to New Delhi through the electronic device 100. Based on this, the electronic device 100 may generate a live challenge of moving an airplane image onto the travel date in a calendar.
Thus, the electronic device 100 may display an airplane image and a calendar image on the display, together with a question or statement prompting the user to place the airplane on the travel date in the calendar. The user may access the electronic device 100 by dragging the airplane with his or her hand 300 and dropping the airplane on the date corresponding to the travel date.
Fig. 17 is a diagram for describing a method of authenticating a user by using a live challenge generated based on contextual parameters according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. Upon receiving the user authentication request, the electronic device 100 may determine the task and actor that make up the live challenge based on the contextual parameters. Referring to Fig. 17, a call record of the user may be stored in the electronic device 100. Based on this, the electronic device 100 may generate a live challenge of selecting the person with whom the user talked most frequently yesterday.
Thus, the electronic device 100 may display a phone icon and information about the people with whom the user talked yesterday, together with a question or statement prompting the user to select the person with whom the user talked most frequently yesterday. The user may access the electronic device 100 by dragging the phone icon with his or her hand 300 and dropping the phone icon over the image of that person.
Fig. 18 is a diagram for describing a method of authenticating a user by using a live challenge generated based on contextual parameters according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. Upon receiving the user authentication request, the electronic device 100 may determine the task and actor that make up the live challenge based on the contextual parameters. Referring to Fig. 18, a consumption history of the user may be stored in the electronic device 100. Based on this, the electronic device 100 may generate a live challenge of indicating the amount the user paid at the grocery store.
Thus, the electronic device 100 may display a money image and a wallet image on the display, together with a question or statement prompting the user to indicate the amount he or she paid for the grocery purchase. The user may repeat the operation of dragging the money image to the wallet with his or her hand 300 until the amount of money paid by the user is indicated. When the user has completed the live challenge, the user may access the electronic device 100.
Fig. 19 is a diagram for describing a method of authenticating a user by using a live challenge generated based on contextual parameters according to an embodiment of the present disclosure.
The electronic device 100 may receive a user authentication request from a user. Upon receiving the user authentication request, the electronic device 100 may determine the task and actor that make up the live challenge based on the contextual parameters. Referring to Fig. 19, the electronic device 100 may store call records, communication records, schedule information, photos, etc. of the user. Based on this, the electronic device 100 may generate a live challenge of selecting whom the user met during the last month and where they met.
Thus, the electronic device 100 may display a plurality of location images on the display, including an image of a location that the user actually visited during the last month and images of other locations. In addition, the electronic device 100 may display an image of a person the user met during the last month, together with a question or statement prompting the user to move that image to the location where they met. The user may complete the live challenge by dragging the image of the person with his or her hand 300 onto the image of the corresponding location. When the user has completed the live challenge, the user may access the electronic device 100.
Fig. 20 is a diagram for describing a method of an electronic device performing user authentication according to an embodiment of the present disclosure.
In operation S2010, the electronic device may obtain a user authentication request for accessing at least one application running on the electronic device. For example, when the electronic device is in a locked state and a touch input is obtained from a user, the electronic device may determine that a user authentication request for accessing a home screen of the electronic device has been obtained. However, this is merely an example, and the method of receiving the user authentication request is not limited to the above-described example.
In operation S2020, the electronic device may identify actors and tasks that constitute a live challenge for authentication based on contextual parameters associated with at least one of the electronic device or the user.
An electronic device according to embodiments of the present disclosure may determine a condition of the electronic device or the user based on the contextual parameters. The electronic device may determine the actor and task that make up the live challenge for interactively authenticating the user based on the determined condition. An actor may be a person, thing, animal, etc. that performs a particular task.
Meanwhile, according to another embodiment of the present disclosure, the electronic device may identify an object sensed in the FoV of the camera and determine the actor and task constituting the live challenge based on the identified object. This may correspond to the methods of generating a live challenge described with reference to Figs. 9 to 14.
In operation S2030, the electronic device may provide the live challenge generated based on the determination.
An electronic device according to embodiments of the present disclosure may display an image of the actor that makes up the live challenge and output a question or statement prompting the task to be performed on the device.
In operation S2040, the electronic device may identify whether to allow access to the at least one application based on whether the provided live challenge has been executed.
The electronic device according to embodiments of the present disclosure may deny access to the at least one application when a user action corresponding to the live challenge is not identified within a predetermined time. The electronic device may allow access to the at least one application when a user action corresponding to the live challenge is identified within the predetermined time.
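A sketch of this timeout rule in operation S2040 is given below; the polling mechanism and the 30-second limit are assumptions, as the disclosure only speaks of a predetermined time:

```python
# Hypothetical timeout check: access is allowed only if an action
# matching the live challenge is identified before the limit expires.
import time

def wait_for_challenge_action(get_user_action, matches_challenge,
                              timeout_s: float = 30.0) -> bool:
    """Poll for a user action; return True (allow access) only when a
    matching action is identified before the predetermined time expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        action = get_user_action()       # e.g., latest touch/gesture, or None
        if action is not None and matches_challenge(action):
            return True                  # allow access
        time.sleep(0.1)                  # poll interval, assumed
    return False                         # deny access
```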
Fig. 21 is a block diagram of an electronic device 2100 that performs user authentication according to an embodiment of the present disclosure.
Referring to Fig. 21, an electronic device 2100 according to an embodiment of the present disclosure may include an inputter/outputter 2110, a processor 2120, and a memory 2130. However, not all of the illustrated components are required. The electronic device 2100 may be implemented with more or fewer components than those shown. For example, the electronic device 2100 may include multiple processors, and may include a camera and at least one sensor.
Hereinafter, these components will be described in order.
The inputter/outputter 2110 is configured to obtain user input or to output an audio signal or an image signal, and may further include a display and an audio outputter. However, this is merely an example, and the components of the inputter/outputter 2110 are not limited to the above-described examples.
The inputter/outputter 2110 according to an embodiment of the present disclosure may obtain a user authentication request. When a user authentication request is obtained, the inputter/outputter 2110 may output the live challenge generated based on the contextual parameters. In addition, when the live challenge is provided, the inputter/outputter 2110 may obtain information entered by the user to perform the live challenge.
The processor 2120 generally controls the overall operation of the electronic device 2100. For example, the processor 2120 may perform the operations of the user authentication method described above by running a program stored in the memory 2130.
The processor 2120 may control the inputter/outputter 2110 to obtain a user authentication request for accessing at least one application running on the electronic device. Further, the processor 2120 may determine the actor and task that constitute the live challenge for authentication based on contextual parameters associated with at least one of the electronic device or the user. The processor 2120 may provide the live challenge generated based on the determination via the inputter/outputter 2110. Further, the processor 2120 may determine whether to allow access to the at least one application based on whether the provided live challenge has been executed.
The processor 2120 according to an embodiment of the present disclosure may recognize an object displayed in the FoV of a camera (not shown). The processor 2120 may determine the actor and task based on the identified object and the contextual parameters. Further, the processor 2120 may display a question prompting the determined task via the inputter/outputter 2110.
When the AR mode is set in the electronic device 2100, the processor 2120 according to an embodiment of the present disclosure may output an AR image of a live challenge made up of actors and tasks on the identified object in a superimposed manner.
The processor 2120 according to an embodiment of the present disclosure may determine movement information about the electronic device or the user after object recognition, based on movement of the electronic device or the user detected via a sensor (not shown). The processor 2120 may adjust the position of the output AR image based on the determined movement information.
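As a simplified sketch, adjusting the overlay position could amount to shifting the on-screen anchor by the sensed displacement; a real AR implementation would track the full camera pose, so the 2D form below is an assumption for illustration:

```python
# Hypothetical 2D adjustment of an AR overlay after device movement.

def adjust_overlay_position(anchor_xy, device_delta_xy):
    """Shift the on-screen AR anchor opposite to the device's movement
    so the overlay stays visually attached to the recognized object."""
    ax, ay = anchor_xy
    dx, dy = device_delta_xy
    return (ax - dx, ay - dy)

print(adjust_overlay_position((120, 340), (15, -8)))  # -> (105, 348)
```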
The processor 2120 according to an embodiment of the present disclosure may identify the location of the electronic device via a sensor (not shown). The processor 2120 may determine objects surrounding the electronic device based on the identified location. The processor 2120 may determine the actor and task based on the determined objects and the contextual parameters.
The processor 2120 according to an embodiment of the present disclosure may deny access to the at least one application when a user action corresponding to the live challenge is not identified within a predetermined time. Further, the processor 2120 may allow access to the at least one application when a user action corresponding to the live challenge is identified within the predetermined time.
The processor 2120 according to an embodiment of the present disclosure may determine the actor and task by using a predetermined learning network model based on the contextual parameters.
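Purely as an illustration of what such a learned selection could look like, a toy scoring model is sketched below; the feature set, candidates, and weights are all invented, since the disclosure does not describe the model's structure:

```python
# Hypothetical learned selection of a (story, actor, task) triple from
# context features; the scoring weights are invented placeholders.

CANDIDATES = ["spider_web/spider/tap", "rain/wiper/touch", "calendar/plane/drag"]
WEIGHTS = {  # per-candidate weights over (is_home, is_cloudy, has_trip)
    "spider_web/spider/tap": (0.9, 0.0, 0.0),
    "rain/wiper/touch":      (0.0, 0.8, 0.1),
    "calendar/plane/drag":   (0.1, 0.0, 0.9),
}

def pick_challenge(features) -> str:
    """Score each candidate as a dot product with the context features."""
    def score(name):
        return sum(w * f for w, f in zip(WEIGHTS[name], features))
    return max(CANDIDATES, key=score)

print(pick_challenge((1, 0, 0)))  # at home, clear sky, no trip booked
```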
The memory 2130 may store programs for processing and control by the processor 2120, and may store input or output data (e.g., live challenges or contextual parameters).
The memory 2130 may include at least one storage medium selected from the group consisting of: flash memory, hard disk, multimedia card micro-type memory, card-type memory (e.g., SD or XD memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Programmable Read Only Memory (PROM), magnetic memory, magnetic disk, and optical disk. Further, the electronic device 2100 may operate a network storage or cloud server that performs the storage function of the memory 2130 on the Internet.
Embodiments of the present disclosure may be implemented by at least one software program running on at least one hardware device. The components of the embodiments of the present disclosure shown in Figs. 1 to 21 may be implemented as hardware devices or blocks, or as a combination of hardware devices and software modules.
It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, although the embodiments of the present disclosure have been described with reference to the exemplary embodiments, the embodiments of the present disclosure may be implemented with modifications within the scope of the technical ideas of the present disclosure.
Methods according to embodiments of the present disclosure may be embodied as program commands that may be executed by various computing devices and recorded on a non-transitory computer-readable recording medium. Examples of the non-transitory computer-readable recording medium may include program commands, data files, and data structures, alone or in combination. The program commands recorded on the non-transitory computer-readable recording medium may be designed and configured specifically for the present disclosure, or may be well known to and used by those of ordinary skill in the computer software art. Examples of the non-transitory computer-readable recording medium may include magnetic media (e.g., hard disks, floppy disks, magnetic tape, etc.), optical media (e.g., CD-ROM, DVD, etc.), magneto-optical media (e.g., floptical disks, etc.), and ROM, RAM, and flash memory configured to store program commands. Examples of program commands may include not only machine language code produced by a compiler but also high-level language code executable by a computer using an interpreter.
Devices according to embodiments of the present disclosure may include a processor, memory storing and running program data, persistent storage such as a disk drive, a communication port for communicating with external devices, a user interface device such as a touch panel or keys, and the like. The methods implemented by software modules or algorithms may be stored in a non-transitory computer-readable recording medium as code or program commands executable on a computer. Examples of the non-transitory computer-readable recording medium may include magnetic storage media (e.g., ROM, RAM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs, Digital Versatile Discs (DVDs)). The non-transitory computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. The non-transitory computer-readable recording medium may be stored in a memory and may be executed by a processor.
In this disclosure, the term "computer program product" or "non-transitory computer-readable recording medium" is generally used to refer to media such as memory, hard disk installed in a hard disk drive, and signals. A "computer program product" or "non-transitory computer readable recording medium" is an object for providing software configured with instructions for performing user authentication operations by providing field challenges according to embodiments of the present disclosure to a computer system.
Although reference numerals are indicated in the embodiments of the present disclosure shown in the drawings and specific terms are used to describe the embodiments of the present disclosure, the present disclosure is not limited by any specific terms and the embodiments of the present disclosure may include all components generally accessible to those skilled in the art.
Embodiments of the present disclosure may be described in terms of functional block components and various processing operations. The functional blocks may be implemented by any number of hardware and/or software configurations that perform the specified functions. For example, embodiments of the present disclosure may employ integrated circuit components, such as memory, processing, logic, or look-up table components, which may perform various functions under the control of one or more microprocessors or other control devices. Further, embodiments of the present disclosure may employ different types of cores, different types of CPUs, and the like. The components of the present disclosure may be implemented using software programming or software elements. Similarly, the present disclosure may be implemented in any programming or scripting language, such as C, C++, Java, or assembler, with the various algorithms implemented with any combination of data structures, objects, procedures, routines, or other programming elements. The functional blocks may be implemented by algorithms running on one or more processors. Further, embodiments of the present disclosure may employ techniques according to the related art for electronic environment configuration, signal processing, and/or data processing. The terms "mechanism," "assembly," "apparatus," and "configuration" may be used broadly and are not limited to mechanical and physical configurations. These terms may include the meaning of a series of software routines in combination with a processor or the like.
The particular manner of operation shown and described herein is an illustrative example and is not intended to limit the scope of the present disclosure in any way. For clarity, electronics, control systems, software, and other functional aspects of the system according to the prior art may not be described. Furthermore, the connecting lines or connecting members shown in the various figures are intended to represent exemplary functional relationships and/or physical or logical connections between the various elements. It should be noted that many alternative or additional functional relationships, physical connections or logical connections may be present in a practical device. Moreover, no component is essential to the practice of the embodiments of the present disclosure unless the component is specifically described as "essential" or "critical".
The use of the term "the" or similar definite terms in this specification (especially in the claims) is to be construed to cover both the singular and the plural. Further, when a range is described in the embodiments of the present disclosure, embodiments to which the respective values falling within the range are applied may be included (unless otherwise indicated herein), as if each individual value falling within the range were described in the detailed description of the present disclosure. Finally, the steps of all methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. Embodiments of the present disclosure are not limited by the order of the steps described herein. All illustrative or explanatory terms (e.g., "and the like") are used in the embodiments of the disclosure only to describe the embodiments of the disclosure in detail, and the scope of the disclosure is not limited by such terms unless limited by the claims. Furthermore, it will be understood by those of ordinary skill in the art that various modifications, combinations, and variations can be made in accordance with design conditions and factors within the scope of the appended claims or equivalents.

Claims (13)

1. A method of authenticating a user, the method comprising:
obtaining a user authentication request for accessing at least one application running on an electronic device;
identifying an object displayed in a field of view (FoV) of a camera provided in an electronic device;
identifying an actor and a task based on the identified object and one or more contextual parameters associated with at least one of the electronic device or the user, wherein the actor is for a selected story indicated by a live challenge and the task is applied to the identified actor and the selected story;
generating a live challenge for authentication based on the identified actors and tasks;
providing the generated live challenge to the user or electronic device by outputting an Augmented Reality (AR) image of the live challenge on the identified object in a superimposed manner, wherein the AR image of the live challenge is indicative of the actor and task; and
identifying whether access to the at least one application is granted based on whether the provided live challenge has been successfully executed.
2. The method of claim 1, wherein,
identifying actors and tasks includes:
identifying an actor corresponding to the identified object; and
identifying tasks that can be performed by the identified actor, and
wherein providing the live challenge includes: displaying a question prompting the identified task.
3. The method of claim 1, wherein when an Augmented Reality (AR) mode is set in the electronic device, an AR image of a live challenge made up of actors and tasks is output in a superimposed manner on the identified object.
4. The method of claim 1, further comprising:
identifying a location of the electronic device; and
identifying objects around the electronic device based on the identified location of the electronic device, wherein identifying actors and tasks includes: identifying the actors and tasks based on the identified objects surrounding the electronic device and the one or more contextual parameters.
5. The method of claim 1, wherein identifying whether to access the at least one application comprises:
denying access to the at least one application based on not identifying a user action corresponding to the live challenge for a predetermined time; and
allowing access to the at least one application based on identifying a user action corresponding to the live challenge within the predetermined time.
6. The method of claim 1, wherein identifying actors and tasks comprises identifying actors and tasks by using a preset learning network model based on the one or more contextual parameters.
7. An electronic device for performing user authentication, the electronic device comprising:
an inputter/outputter;
a memory storing instructions; and
at least one processor coupled to the memory, wherein the at least one processor is configured to execute instructions to:
obtaining, by the inputter/outputter, a user authentication request for accessing at least one application running on the electronic device;
identifying an object displayed in a field of view (FoV) of a camera provided in an electronic device;
identifying an actor and a task based on the identified object and one or more contextual parameters associated with at least one of the electronic device or the user, wherein the actor is for a selected story indicated by a live challenge and the task is applied to the identified actor and the selected story;
generating a live challenge for authentication based on the identified actors and tasks;
providing the generated live challenge to a user of the electronic device by outputting an Augmented Reality (AR) image of the live challenge on the identified object in a superimposed manner, wherein the AR image of the live challenge is indicative of the actor and task; and
identifying whether access to the at least one application is granted based on whether the provided live challenge has been successfully executed.
8. The electronic device of claim 7, further comprising a display, wherein the at least one processor is further configured to execute instructions to:
identifying an actor corresponding to the identified object;
identifying tasks that can be performed by the identified actors; and
displaying a question prompting the identified task.
9. The electronic device of claim 7, wherein the at least one processor is further configured to execute instructions to output an Augmented Reality (AR) image of a live challenge made up of an actor and a task in an overlaid manner on the identified object when an AR mode is set in the electronic device.
10. The electronic device of claim 7, further comprising a sensor configured to identify a location of the electronic device, wherein the at least one processor is further configured to execute instructions to:
identifying objects around the electronic device based on the location of the electronic device identified via the sensor; and
actors and tasks are identified based on the identified objects surrounding the electronic device and the one or more contextual parameters.
11. The electronic device of claim 7, wherein the at least one processor is further configured to execute instructions to:
denying access to the at least one application based on not identifying a user action corresponding to the live challenge for a predetermined time; and
allowing access to the at least one application based on identifying a user action corresponding to the live challenge within the predetermined time.
12. The electronic device of claim 7, wherein the at least one processor is further configured to execute instructions to identify actors and tasks based on the contextual parameters by using a preset learning network model.
13. A computer readable medium comprising instructions which, when executed by a processor, cause an electronic device to perform the method of any one of claims 1 to 6.
CN201980045581.1A 2018-07-18 2019-07-18 Method and apparatus for performing user authentication Active CN112384916B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
IN201841026856 2018-07-18
IN201841026856 2018-07-18
KR10-2019-0079001 2019-07-01
KR1020190079001A KR20200010041A (en) 2018-07-18 2019-07-01 Method and apparatus for performing user authentication
PCT/KR2019/008890 WO2020017902A1 (en) 2018-07-18 2019-07-18 Method and apparatus for performing user authentication

Publications (2)

Publication Number Publication Date
CN112384916A CN112384916A (en) 2021-02-19
CN112384916B true CN112384916B (en) 2024-04-09

Family

ID=69322085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980045581.1A Active CN112384916B (en) 2018-07-18 2019-07-18 Method and apparatus for performing user authentication

Country Status (2)

Country Link
KR (1) KR20200010041A (en)
CN (1) CN112384916B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023106621A1 (en) * 2021-12-08 2023-06-15 삼성전자주식회사 Cloud server for authenticating user and operation method thereof

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1889562A (en) * 2005-06-28 2007-01-03 华为技术有限公司 Method for identifying equipment for receiving initial session protocol request information

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10614626B2 (en) * 2005-10-26 2020-04-07 Cortica Ltd. System and method for providing augmented reality challenges
US9298898B2 (en) * 2013-07-18 2016-03-29 At&T Intellectual Property I, L.P. Event-based security challenges

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1889562A (en) * 2005-06-28 2007-01-03 华为技术有限公司 Method for identifying equipment for receiving initial session protocol request information

Also Published As

Publication number Publication date
KR20200010041A (en) 2020-01-30
CN112384916A (en) 2021-02-19

Similar Documents

Publication Publication Date Title
US11281760B2 (en) Method and apparatus for performing user authentication
US11714890B2 (en) Systems and methods for authenticating users
US11908187B2 (en) Systems, methods, and apparatus for providing image shortcuts for an assistant application
US10331945B2 (en) Fair, secured, and efficient completely automated public Turing test to tell computers and humans apart (CAPTCHA)
US10460164B2 (en) Information processing apparatus, information processing method, eyewear terminal, and authentication system
CN109688451B (en) Method and system for providing camera effect
Mulfari et al. Using Google Cloud Vision in assistive technology scenarios
CN104364753B (en) Method for highlighting active interface element
WO2016119696A1 (en) Action based identity identification system and method
CN106462242A (en) User interface control using gaze tracking
US10846514B2 (en) Processing images from an electronic mirror
KR101729959B1 (en) User authentication system and method based on eye responses
KR20170038378A (en) Electronic device for processing image and method for controlling thereof
US11151750B2 (en) Displaying a virtual eye on a wearable device
CN112384916B (en) Method and apparatus for performing user authentication
Yang et al. Fatigueview: A multi-camera video dataset for vision-based drowsiness detection
US11134079B2 (en) Cognitive behavioral and environmental access
WO2013024667A1 (en) Site of interest extraction device, site of interest extraction method, and computer-readable recording medium
US20200380099A1 (en) Variable access based on facial expression configuration
US20240108985A1 (en) Managing virtual collisions between moving virtual objects
KR102697346B1 (en) Electronic device and operating method for recognizing an object in a image
US11217032B1 (en) Augmented reality skin executions
CN111405175B (en) Camera control method, device, computer equipment and storage medium
US11514082B1 (en) Dynamic content selection
WO2024066977A1 (en) Palm-based human-computer interaction method, and apparatus, device, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant