WO2013014328A1 - Methods and apparatuses for facilitating locking and unlocking of secure functionality through object recognition


Info

Publication number
WO2013014328A1
Authority
WO
WIPO (PCT)
Prior art keywords
representation
images
processor
user
dimensional representation
Application number
PCT/FI2012/050452
Other languages
French (fr)
Inventor
Pranav Mishra
Krishna Annasagar Govindarao
Gururaj Gopal Putraya
Original Assignee
Nokia Corporation
Application filed by Nokia Corporation
Publication of WO2013014328A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Definitions

  • Example embodiments of the present invention relate generally to preventing unauthorized access to secure functionality through use of object recognition technology and, more particularly, relate to methods and apparatuses for 2-dimensional and 3-dimensional object and/or face recognition to lock and unlock a mobile computing device.
  • Face recognition technology can be useful for such security applications and may provide enhanced security while limiting the need to remember or input difficult passwords.
  • face recognition technology may also provide better security, as any user can enter the correct password, whereas only a user with the same face may gain authorized access to functionality of the mobile computing device.
  • a method may include receiving at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object. The method may further include developing, based at least in part on the at least two images, a multi-dimensional representation of the object.
  • the method further includes comparing the developed multidimensional representation with a stored multi-dimensional representation.
  • the method also includes permitting access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
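  • By way of a non-authoritative illustration, the following Python sketch mirrors the flow of this example method (receive at least two images, develop a representation, compare it with a stored representation, and permit access on a sufficient match). The develop_representation and similarity functions are stand-in assumptions (mean feature vector and cosine similarity), not the representation or comparison technique claimed here, and SIMILARITY_THRESHOLD is an arbitrary value.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.9  # assumed value for the "predefined similarity threshold"

def develop_representation(images):
    # Stand-in "multi-dimensional representation": mean of the flattened
    # pixel arrays (all images assumed to share one shape for this sketch).
    return np.mean([img.astype(np.float64).ravel() for img in images], axis=0)

def similarity(a, b):
    # Stand-in comparison: cosine similarity in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def permit_access(images, stored_representation):
    # Receive at least two images corresponding to different views of the object.
    if len(images) < 2:
        raise ValueError("at least two images of differing views are required")
    developed = develop_representation(images)
    # Permit access only when the developed representation is within the
    # predefined similarity threshold of the stored representation.
    return similarity(developed, stored_representation) >= SIMILARITY_THRESHOLD
```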
  • an apparatus comprising at least one processor and at least one memory storing computer program code, wherein the at least one memory and stored computer program code are configured, with the at least one processor, to cause the apparatus to receive at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object.
  • the at least one memory and stored computer program code are configured, with the at least one processor, to further cause the apparatus of this example embodiment to develop, based at least in part on the at least two images, a multi-dimensional representation of the object.
  • the at least one memory and stored computer program code are configured, with the at least one processor, to further cause the apparatus of this example embodiment to compare the developed multi-dimensional representation with a stored multi-dimensional representation.
  • the at least one memory and stored computer program code are configured, with the at least one processor, to also cause the apparatus of this example embodiment to permit access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
  • a computer program product includes at least one computer-readable storage medium having computer-readable program instructions stored therein.
  • the program instructions of this example embodiment comprise program instructions configured to cause an apparatus to perform a method comprising receiving at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object.
  • the computer program product of this example embodiment further comprises developing, based at least in part on the at least two images, a multi-dimensional representation of the object.
  • the computer program product of this example embodiment additionally comprises comparing the developed multi-dimensional representation with a stored multi-dimensional representation.
  • the computer program product of this example embodiment further comprises permitting access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
  • an apparatus that includes means for receiving at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object.
  • the apparatus may also comprise means for developing, based at least in part on the at least two images, a multi-dimensional representation of the object.
  • the apparatus may additionally comprise means for comparing the developed multidimensional representation with a stored multi-dimensional representation.
  • the apparatus may further comprise means for permitting access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
  • FIG. 1 illustrates a block diagram of an apparatus for facilitating locking and unlocking of secured functionality through object recognition technology, in accordance with an example embodiment
  • FIG. 2 is a schematic block diagram of a mobile terminal according to an example embodiment
  • FIG. 3 illustrates an example apparatus for facilitating locking and unlocking of a mobile device through face recognition, wherein the apparatus displays instructions prompting a user to create a representation of their face for security calibration purposes, in accordance with an example embodiment
  • FIG. 4 illustrates example interaction of a user with the apparatus shown in FIG. 3, in accordance with an example embodiment
  • FIG. 5 illustrates an example apparatus for facilitating locking and unlocking of a mobile device through face recognition, wherein the apparatus displays instructions prompting a user to create a representation of their face for unlocking of the apparatus, in accordance with an example embodiment
  • FIG. 6 illustrates a flowchart according to an example method for permitting access to secure functionality, in accordance with an example embodiment
  • FIG. 7 illustrates a flowchart according to another example method for permitting access to secure functionality, in accordance with an example embodiment
  • FIG. 8 illustrates a flowchart according to one example embodiment of a method for permitting access to secure functionality
  • FIG. 9 illustrates a flowchart according to another example embodiment of a method for permitting access to secure functionality.
  • FIG. 10 illustrates a flowchart according to another example embodiment of a method for permitting access to secure functionality.
  • computer-readable medium refers to any medium configured to participate in providing information to a processor, including instructions for execution. Such a medium may take many forms, including, but not limited to a non-transitory computer-readable storage medium (e.g., non-volatile media, volatile media), and
  • Transmission media include, for example, coaxial cables, copper wire, fiber optic cables, and carrier waves that travel through space without wires or cables, such as acoustic waves and electromagnetic waves, including radio, optical and infrared waves.
  • Non-transitory computer-readable media include a magnetic computer readable medium (e.g., a floppy disk, hard disk, magnetic tape, any other magnetic medium), an optical computer readable medium (e.g., a compact disc read only memory (CD-ROM), a digital versatile disc (DVD), a Blu-Ray disc, or the like), a random access memory (RAM), a programmable read only memory (PROM), an erasable programmable read only memory (EPROM), a FLASH-EPROM, or any other non-transitory medium from which a computer can read.
  • computer-readable storage medium is used herein to refer to any computer-readable medium except transmission media. However, it will be appreciated that where embodiments are described to use a computer-readable storage medium, other types of computer-readable mediums may be substituted for or used in addition to the computer-readable storage medium in alternative embodiments.
  • circuitry refers to (a) hardware-only circuit implementations (e.g., implementations in analog circuitry and/or digital circuitry); (b) combinations of circuits and computer program product(s) comprising software and/or firmware instructions stored on one or more computer readable memories that work together to cause an apparatus to perform one or more functions described herein; and (c) circuits, such as, for example, a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation even if the software or firmware is not physically present.
  • This definition of 'circuitry' applies to all uses of this term herein, including in any claims.
  • the term 'circuitry' also includes an implementation comprising one or more processors and/or portion(s) thereof and accompanying software and/or firmware.
  • the term 'circuitry' as used herein also includes, for example, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, other network device, and/or other computing device.
  • FIG. 1 illustrates a block diagram of an apparatus 102 for facilitating locking and unlocking of a mobile device through face recognition.
  • the apparatus 102 is provided as an example of one embodiment and should not be construed to narrow the scope or spirit of the invention in any way.
  • the scope of the disclosure encompasses many potential embodiments in addition to those illustrated and described herein.
  • FIG. 1 illustrates one example of a configuration of an apparatus for facilitating locking and unlocking of a mobile device through face recognition, other configurations may also be used to implement embodiments of the present invention.
  • the apparatus 102 may be embodied as a desktop computer, laptop computer, mobile terminal, mobile computer, mobile phone, mobile communication device, game device, digital camera/camcorder, audio/video player, television device, radio receiver, digital video recorder, positioning device, a chipset, a computing device comprising a chipset, any combination thereof, and/or the like.
  • the apparatus 102 is embodied as a mobile computing device, such as the mobile terminal illustrated in FIG. 2.
  • FIG. 2 illustrates a block diagram of a mobile terminal 10 representative of one example embodiment of an apparatus 102.
  • the mobile terminal 10 illustrated and hereinafter described is merely illustrative of one type of apparatus 102 that may implement and/or benefit from various example embodiments of the invention and, therefore, should not be taken to limit the scope of the disclosure. While several embodiments of the electronic device are illustrated and will be hereinafter described for purposes of example, other types of electronic devices, such as mobile telephones, mobile computers, personal digital assistants (PDAs), pagers, laptop computers, desktop computers, gaming devices, televisions, e-papers, and other types of electronic systems, may employ various embodiments of the invention.
  • the mobile terminal 10 may include an antenna 12 (or multiple antennas 12) in communication with a transmitter 14 and a receiver 16.
  • the mobile terminal 10 may also include a processor 20 configured to provide signals to and receive signals from the transmitter and receiver, respectively.
  • the processor 20 may, for example, be embodied as various means including circuitry, one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an ASIC (application specific integrated circuit) or FPGA (field programmable gate array), or some combination thereof.
  • the processor 20 comprises a plurality of processors.
  • These signals sent and received by the processor 20 may include signaling information in accordance with an air interface standard of an applicable cellular system, and/or any number of different wireline or wireless networking techniques, comprising but not limited to Wi-Fi, wireless local area network (WLAN) techniques such as Institute of Electrical and Electronics Engineers (IEEE) 802.11, 802.16, and/or the like.
  • these signals may include speech data, user generated data, user requested data, and/or the like.
  • the mobile terminal may be capable of operating with one or more air interface standards, communication protocols, modulation types, access types, and/or the like.
  • the mobile terminal may be capable of operating in accordance with various first generation (1G), second generation (2G), 2.5G, third-generation (3G) communication protocols, fourth-generation (4G) communication protocols, Internet Protocol Multimedia Subsystem (IMS) communication protocols (e.g., session initiation protocol (SIP)), and/or the like.
  • the mobile terminal may be capable of operating in accordance with 2G wireless communication protocols IS-136 (Time Division Multiple Access (TDMA)), Global System for Mobile communications (GSM), IS-95 (Code Division Multiple Access (CDMA)), and/or the like.
  • the mobile terminal may be capable of operating in accordance with 2.5G wireless communication protocols General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), and/or the like.
  • the mobile terminal may be capable of operating in accordance with 3G wireless communication protocols such as Universal Mobile Telecommunications System (UMTS), Code Division Multiple Access 2000 (CDMA2000), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), and/or the like.
  • the mobile terminal may be additionally capable of operating in accordance with 3.9G wireless communication protocols such as Long Term Evolution (LTE) or Evolved Universal Terrestrial Radio Access Network (E-UTRAN) and/or the like.
  • the mobile terminal may be capable of operating in accordance with fourth-generation (4G) wireless communication protocols and/or the like as well as similar wireless communication protocols that may be developed in the future.
  • Some Narrow-band Advanced Mobile Phone System (NAMPS), as well as Total Access Communication System (TACS), mobile terminals may also benefit from embodiments of this invention, as should dual or higher mode phones (e.g., digital/analog or TDMA/CDMA/analog phones).
  • the mobile terminal 10 may be capable of operating according to Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX) protocols.
  • the processor 20 may comprise circuitry for implementing audio/video and logic functions of the mobile terminal 10.
  • the processor 20 may comprise a digital signal processor device, a microprocessor device, an analog-to-digital converter, a digital-to-analog converter, and/or the like. Control and signal processing functions of the mobile terminal may be allocated between these devices according to their respective capabilities.
  • the processor may additionally comprise an internal voice coder (VC) 20a, an internal data modem (DM) 20b, and/or the like.
  • the processor may comprise functionality to operate one or more software programs, which may be stored in memory.
  • the processor 20 may be capable of operating a connectivity program, such as a web browser.
  • the connectivity program may allow the mobile terminal 10 to transmit and receive web content, such as location-based content, according to a protocol, such as Wireless Application Protocol (WAP), hypertext transfer protocol (HTTP), and/or the like.
  • the mobile terminal 10 may be capable of using a Transmission Control Protocol/Internet Protocol (TCP/IP) to transmit and receive web content across the internet or other networks.
  • the mobile terminal 10 may also comprise a user interface including, for example, an earphone or speaker 24, a ringer 22, a microphone 26, a display 28, a camera 36, a user input interface, and/or the like, which may be operationally coupled to the processor 20.
  • the processor 20 may comprise user interface circuitry configured to control at least some functions of one or more elements of the user interface, such as, for example, the speaker 24, the ringer 22, the microphone 26, the display 28, and/or the like.
  • the processor 20 and/or user interface circuitry comprising the processor 20 may be configured to control one or more functions of one or more elements of the user interface through computer program instructions (e.g., software and/or firmware) stored on a memory accessible to the processor 20 (e.g., volatile memory 40, non-volatile memory 42, and/or the like).
  • the mobile terminal may comprise a battery for powering various circuits related to the mobile terminal, for example, a circuit to provide mechanical vibration as a detectable output.
  • the display 28 of the mobile terminal may be of any type appropriate for the electronic device in question with some examples including a plasma display panel (PDP), a liquid crystal display (LCD), a light-emitting diode (LED), an organic light-emitting diode display (OLED), a projector, a holographic display or the like.
  • the user input interface may comprise devices allowing the mobile terminal to receive data, such as a keypad 30, a touch display (e.g., some example embodiments wherein the display 28 is configured as a touch display), a joystick (not shown), and/or other input device.
  • the keypad may comprise numeric (0-9) and related keys (#, *), and/or other keys for operating the mobile terminal.
  • the mobile terminal 10 may comprise memory, such as a subscriber identity module (SIM) 38, a removable user identity module (R-UIM), and/or the like, which may store information elements related to a mobile subscriber. In addition to the SIM, the mobile terminal may comprise other removable and/or fixed memory.
  • the mobile terminal 10 may include volatile memory 40 and/or non-volatile memory 42.
  • volatile memory 40 may include Random Access Memory (RAM) including dynamic and/or static RAM, on-chip or off-chip cache memory, and/or the like.
  • Non-volatile memory 42 which may be embedded and/or removable, may include, for example, read-only memory, flash memory, magnetic storage devices (e.g., hard disks, floppy disk drives, magnetic tape, etc.), optical disc drives and/or media, non-volatile random access memory (NVRAM), and/or the like.
  • the memories may store one or more software programs, instructions, pieces of information, data, and/or the like which may be used by the mobile terminal for performing functions of the mobile terminal.
  • the memories may comprise an identifier, such as an international mobile equipment identification (IMEI) code, capable of uniquely identifying the mobile terminal 10.
  • the mobile terminal 10 may include a media capturing element, such as a camera, video and/or audio module, in communication with the processor 20.
  • the media capturing element may be any means for capturing an image, video and/or audio for storage, display or transmission.
  • the camera 36 may include a digital camera capable of forming a digital image file from a captured image.
  • the digital camera of the camera 36 may be capable of capturing a video clip.
  • the camera 36 may include all hardware, such as a lens or other optical component(s), and software necessary for creating a digital image file from a captured image as well as a digital video file from a captured video clip.
  • the camera 36 may include only the hardware needed to view an image, while a memory device of the mobile terminal 10 stores instructions for execution by the processor 20 in the form of software necessary to create a digital image file from a captured image.
  • an object or objects within a field of view of the camera 36 may be displayed on the display 28 of the mobile terminal 10 to illustrate a view of an image currently displayed which may be captured if desired by the user.
  • an image may be either a captured image or an image comprising the object or objects currently displayed by the mobile terminal 10, but not necessarily captured in an image file.
  • the camera 36 may further include a processing element such as a co-processor which assists the processor 20 in processing image data and an encoder and/or decoder for compressing and/or decompressing image data.
  • the encoder and/or decoder may encode and/or decode according to, for example, a joint photographic experts group (JPEG) standard, a moving picture experts group (MPEG) standard, or other format.
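  • As a small illustration of the encoding/decoding step just described, the sketch below JPEG-encodes and decodes an image buffer; OpenCV is used purely for illustration and is an assumption, not the codec implementation of the encoder and/or decoder described here.

```python
import cv2
import numpy as np

# Stand-in image data; a real device would use a frame from the camera 36.
img = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)

# Encode the image data to JPEG at quality 90, then decode it back to pixels.
ok, jpeg_bytes = cv2.imencode(".jpg", img, [int(cv2.IMWRITE_JPEG_QUALITY), 90])
assert ok
decoded = cv2.imdecode(jpeg_bytes, cv2.IMREAD_COLOR)
```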
  • the apparatus 102 includes various means for performing the various functions herein described. These means may comprise one or more of a processor 110, memory 112, communication interface 114, user interface 116, sensor 118, camera 119, or user interface (UI) control circuitry 122.
  • the means of the apparatus 102 as described herein may be embodied as, for example, circuitry, hardware elements (e.g., a suitably programmed processor, combinational logic circuit, and/or the like), a computer program product comprising computer-readable program instructions (e.g., software or firmware) stored on a computer-readable medium (e.g. memory 112) that is executable by a suitably configured processing device (e.g., the processor 110), or some combination thereof.
  • one or more of the means illustrated in FIG. 1 may be embodied as a chip or chip set.
  • the apparatus 102 may comprise one or more physical packages (e.g., chips) including materials, components and/or wires on a structural assembly (e.g., a baseboard).
  • the structural assembly may provide physical strength, conservation of size, and/or limitation of electrical interaction for component circuitry included thereon.
  • the processor 110, memory 112, communication interface 114, user interface 116, sensor 118, camera 119, and/or UI control circuitry 122 may be embodied as a chip or chip set.
  • the apparatus 102 may therefore, in some cases, be configured to or may comprise component(s) configured to implement embodiments of the present invention on a single chip or as a single "system on a chip."
  • a chip or chipset may constitute means for performing one or more operations for providing the functionalities described herein.
  • the processor 110 may, for example, be embodied as various means including one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an ASIC (application specific integrated circuit) or FPGA (field programmable gate array), one or more other types of hardware processors, or some combination thereof. Accordingly, although illustrated in FIG. 1 as a single processor, in some embodiments the processor 110 comprises a plurality of processors.
  • the plurality of processors may be in operative communication with each other and may be collectively configured to perform one or more functionalities of the apparatus 102 as described herein.
  • the plurality of processors may be embodied on a single computing device or distributed across a plurality of computing devices collectively configured to function as the apparatus 102.
  • the processor 110 may be embodied as or comprise the processor 20 (shown in FIG. 2).
  • the processor 110 is configured to execute instructions stored in the memory 112 or otherwise accessible to the processor 110. These instructions, when executed by the processor 110, may cause the apparatus 102 to perform one or more of the functionalities of the apparatus 102 as described herein.
  • the processor 110 may comprise an entity capable of performing operations according to embodiments of the present invention while configured accordingly.
  • the processor 110 when the processor 110 is embodied as an ASIC, FPGA or the like, the processor 110 may comprise specifically configured hardware for conducting one or more operations described herein.
  • the processor 110 when the processor 110 is embodied as an executor of instructions, such as may be stored in the memory 112, the instructions may specifically configure the processor 110 to perform one or more algorithms and operations described herein.
  • the memory 112 may comprise, for example, volatile memory, non- volatile memory, or some combination thereof.
  • the memory 112 may comprise a non-transitory computer-readable storage medium.
  • the memory 112 may comprise a plurality of memories.
  • the plurality of memories may be embodied on a single computing device or may be distributed across a plurality of computing devices collectively configured to function as the apparatus 102.
  • the memory 112 may comprise a hard disk, random access memory, cache memory, flash memory, a compact disc read only memory (CD-ROM), digital versatile disc read only memory (DVD-ROM), an optical disc, circuitry configured to store information, or some combination thereof.
  • the memory 112 may comprise the volatile memory 40 and/or the non- volatile memory 42 (shown in FIG. 2).
  • the memory 112 may be configured to store information, data, applications, instructions, or the like for enabling the apparatus 102 to carry out various functions in accordance with various example embodiments.
  • the memory 112 is configured to buffer input data for processing by the processor 110.
  • the memory 112 may be configured to store program instructions for execution by the processor 110.
  • the memory 112 may store information in the form of static and/or dynamic information. The stored information may include, for example, images, content, media content, user data, application data, and/or the like.
  • the communication interface 114 may be embodied as any device or means embodied in circuitry, hardware, a computer program product comprising computer readable program instructions stored on a computer readable medium (e.g., the memory 112) and executed by a processing device (e.g., the processor 110), or a combination thereof that is configured to receive and/or transmit data from/to another computing device.
  • the communication interface 114 is at least partially embodied as or otherwise controlled by the processor 110.
  • the communication interface 114 may be in communication with the processor 110, such as via a bus.
  • the communication interface 114 may include, for example, an antenna, a transmitter, a receiver, a transceiver and/or supporting hardware or software for enabling communications with one or more remote computing devices.
  • in embodiments wherein the apparatus 102 is embodied as a mobile terminal 10, the communication interface 114 may be embodied as or comprise the transmitter 14 and receiver 16 (shown in FIG. 2).
  • the communication interface 114 may be configured to receive and/or transmit data using any protocol that may be used for communications between computing devices.
  • the communication interface 114 may be configured to receive and/or transmit data using any protocol that may be used for transmission of data over a wireless network, wireline network, some combination thereof, or the like by which the apparatus 102 and one or more computing devices may be in communication.
  • the communication interface 114 may be configured to receive and/or otherwise access content (e.g., web page content, streaming media content, and/or the like) over a network from a server or other content source.
  • the communication interface 114 may additionally be in communication with the memory 112, user interface 116, sensor 118, camera 119, and/or UI control circuitry 122, such as via a bus.
  • the user interface 116 may be in communication with the processor 110 and configured to receive an indication of a user input and/or to provide an audible, visual, mechanical, or other output to a user.
  • the user interface 116 may include, for example, a keyboard, a mouse, a joystick, a display, a touch screen display, a microphone, a speaker, and/or other input/output mechanisms.
  • the user interface 116 may be embodied as or comprise the user input interface, such as the display 28 and keypad 30 (shown in FIG. 2).
  • the user interface 116 may be in communication with the memory 112, communication interface 114, sensor 118, camera 119, and/or UI control circuitry 122, such as via a bus.
  • the UI control circuitry 122 may be embodied as various means, such as circuitry, hardware, a computer program product comprising computer readable program instructions stored on a computer readable medium (e.g., the memory 112) and executed by a processing device (e.g., the processor 110), or some combination thereof and, in some embodiments, is embodied as or otherwise controlled by the processor 110.
  • the UI control circuitry 122 may be in communication with the processor 110.
  • the UI control circuitry 122 may further be in communication with one or more of the memory 112, communication interface 114, or user interface 116, such as via a bus. In some embodiments, the UI control circuitry 122 may be configured to aid in identification of user input, such as determination of the position of user input for a touch display.
  • the apparatus 102 may include a sensor 118 that is in communication with the processor 110.
  • the sensor 118 may be configured to determine certain properties or circumstances of the apparatus 102.
  • the sensor 118 may be configured to detect movement of the apparatus 102, such as movement of the apparatus 102 in a sweeping motion.
  • the sensor 118 may be configured to detect when the apparatus is swept across a user's face.
  • the sensor 118 may be an accelerometer or similar device for detecting motion or the like.
  • the apparatus 102 may include an image capturing device, such as a camera 119, that is in communication with the processor 110.
  • the image capturing device may comprise the camera 36 (shown in FIG. 2).
  • the camera 119 may be configured to capture an image.
  • a user may input an instruction into the apparatus 102 that may cause the processor 110 to instruct the camera 119 to capture/take an image.
  • the processor 110 may instruct the camera 119 to capture an image without direct user input.
  • the processor 110 may instruct the camera 119 to capture an image of an object (e.g., a user's face), such as in response to detecting movement of the apparatus 102 from the sensor 118. Additionally or alternatively, the processor 110 may instruct the camera 119 to take more than one image as the apparatus 102 is being moved (e.g., as the apparatus 102 is being swept across a user's face).
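  • Under stated assumptions, the following sketch shows how such motion-triggered capture might be wired together: read_accelerometer and capture_frame are hypothetical callables standing in for the sensor 118 and the camera 119, and the threshold and interval values are illustrative rather than taken from this disclosure.

```python
import time

SWEEP_ACCEL_THRESHOLD = 2.0  # m/s^2 above baseline; illustrative value
FRAME_INTERVAL_S = 0.5       # pre-defined capture interval; illustrative value

def capture_during_sweep(read_accelerometer, capture_frame, max_frames=6):
    frames = []
    # Wait until the motion sensor reports acceleration consistent with a sweep.
    while abs(read_accelerometer()) < SWEEP_ACCEL_THRESHOLD:
        time.sleep(0.01)
    # Capture frames at the pre-defined interval while the sweep continues.
    while len(frames) < max_frames and abs(read_accelerometer()) >= SWEEP_ACCEL_THRESHOLD:
        frames.append(capture_frame())
        time.sleep(FRAME_INTERVAL_S)
    return frames
```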
  • Face recognition technology can be useful for such security applications and may provide enhanced security while limiting the need to remember or input difficult passwords.
  • face recognition technology may provide even better security, as any user can enter the correct password, whereas only a user with the same face may gain authorized access to functionality of the apparatus 102.
  • similar recognition technology may be utilized for object recognition.
  • security applications may require images of a certain or specific object to enable secure functionality. For example, a user's driver's license, personal ring, medallion, or other object may be used to lock or unlock secured functionality.
  • embodiments of the present invention may be used with face recognition and/or object recognition technology and reference herein to face recognition technology or object recognition technology is not meant to be limited to one or the other.
  • While use of object recognition technology for facilitating locking/unlocking of secured functionality may be a desirable alternative to text-based passwords in at least some circumstances, current capabilities of some mobile computing devices (e.g., apparatus 102) may limit the effectiveness of object recognition as a security application.
  • object recognition technology requires a large amount of computing power, which may be highly demanding for a mobile computing device with limited computing power.
  • current mobile computing devices are often limited to one camera (e.g., camera 119) that is further limited to capturing 2-dimensional ("2d") images.
  • 2d object/face recognition technology is less effective and less accurate than 3-dimensional ("3d") object/face recognition technology.
  • the processor 110 may be configured to permit/deny access to at least a portion of the functionality of the apparatus 102.
  • the processor 110 may require verification of proper authorization to permit/enable access to at least a portion of the functionality of the apparatus 102.
  • the processor 110 may require proper facial recognition as a biometric in order to permit access to certain functionality (e.g., secured functionality).
  • the processor 110 may assign certain functionality to certain faces such that multiple users may have varying access to functionality of the apparatus 102.
  • functionality may refer to access to or extension of any function of an apparatus 102 (e.g., phone functionality, computing functionality, applications, data, etc.).
  • embodiments of the present invention may be useful for maintaining security of functionality stored on a server (e.g., cloud network).
  • embodiments may be useful in securing and/or allowing access to functionality and/or information (e.g., personal bank account information) stored on a server.
  • the processor 110 may also be configured to set up or calibrate the security application so as to define the representation or images of the object (e.g., the face that represents the user). Such a process may be similar to prompting a user to enter a password.
  • the processor 110 may be configured to prompt a user to perform a sweeping motion relative to the apparatus 102/camera 119.
  • the processor 110 may be configured to cause the user to be prompted to sweep the apparatus 102 including the camera 119 across the user's face.
  • the processor 110 may be configured to cause a user to be prompted to move the apparatus 102 (e.g., sweep the camera 119) across the user's face to permit capturing/taking of images of the user's face for creating a secure face representation password.
  • sweeping the apparatus/camera across the user's face may include holding the apparatus/camera generally spaced apart from the user's face and moving the apparatus/camera from side to side (e.g., similar to moving a hand in front of a face).
  • the camera 119 may be physically separate from the apparatus 102 such that the user may sweep the camera 119 across the user's face.
  • the camera 119 may be in communication with the apparatus 102 (e.g., via wire, wirelessly, or later connected) such that a user may sweep the camera 119 without the apparatus 102 across their face.
  • some embodiments of the present invention described herein may utilize a camera that captures images of an object and transmits those images to a remote processor (e.g., processor 110).
  • the remote processor may receive those images and/or image features and perform verification and/or other functionality described herein to determine if a user should be permitted access to secure functionality at the remote processor (e.g., a server, cloud, etc.).
  • the processor 110 may be configured to prompt a user to perform a sweeping motion such as sweeping the object across the camera 119 of the apparatus 102.
  • a person may move their face from side to side in front of the camera 119, such that at least two images can be taken from differing views of the user's face.
  • the user may maneuver their face in a sweeping motion relative to the camera 119.
  • at least two images may be captured in response to at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
  • a user can be prompted to perform a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
  • the processor 110 may be configured to cause at least one arrow or other directional indicator to be displayed upon, for example, the user interface 116 to indicate how the user is to move the apparatus 102 and/or camera 119. Additionally or alternatively, the user interface 116 may be configured, by the processor, to display the image the camera 119 is currently focused on. In such an embodiment, the processor 110 may be configured to cause a box or other bounded shape to be displayed around the user's face (if it appears in the display) as the user sweeps the apparatus 102 across their face, thereby indicating proper alignment of the apparatus 102. For example, the processor 110 may be configured to use applications such as Face Tracker to aid the user in properly aligning the camera 119 during the sweeping motion.
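  • As a hedged sketch of such an alignment aid (not the Face Tracker application itself), the snippet below draws a bounded box around any face that OpenCV's stock Haar cascade detects in a preview frame; the detector choice is an assumption for illustration only.

```python
import cv2

# OpenCV's bundled frontal-face Haar cascade, used here as a generic detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def draw_face_box(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Draw a green rectangle around each detected face to indicate alignment.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        cv2.rectangle(frame_bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return frame_bgr
```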
  • the processor 110 may be configured to provide any type of indication which may aid/instruct a user to properly sweep the apparatus 102/camera 119 across their face.
  • the processor 110 may cause the display of an image of a person holding the apparatus 102 and sweeping the apparatus 102 across their face along with an arrow (e.g., see FIG. 4).
  • the display 210 on the mobile device 200 may read "PHONE SECURITY CALIBRATION.” Additionally, the display 210 may instruct the user to perform an action to assist in calibration, such as "SWEEP ACROSS FACE.”
  • the camera 220 is located on the opposite face of the display 210. As such, simple instructions may be difficult to effectively follow since the camera 220 would be facing the wrong direction if the user simply swept the mobile device 200 across their face with the display 210 facing them. Thus, as noted above, additional indication may be beneficial.
  • the camera 220 may be configured on the front of the device 200. In other embodiments, the camera 220 may be configured separate from the device 200. As such, indication customized to the specific device 200 may be beneficial for aiding a user in facilitating calibration of the security application.
  • the processor 110 may be configured to instruct the camera 119 to capture/take at least one image of the object (e.g., the user's face), such as in response to the apparatus 102 being swept across the object.
  • the processor 110 may instruct the camera 119 to capture at least two images of the object.
  • the processor 110 may be configured to instruct the camera 119 to take images from different views of the object.
  • the processor 110 may instruct/cause the camera 119 to take an image at the beginning of the sweep (e.g., the right side of the user's face), the middle of the sweep (e.g., the front of the user's face), and the end of the sweep (e.g., the left side of the user's face).
  • the processor 110 may be configured to start taking pictures when the sensor 118 detects movement of the apparatus 102. Additionally or alternatively, the processor 110 may be configured to start taking pictures when the sensor 118 detects movement of the apparatus 102 in a desired sweeping motion.
  • the processor 110 may instruct/cause the camera 119 to capture a plurality of images, such as at a pre-defined frame rate, until the camera 119 completes the sweep (e.g., when the sensor 118 no longer detects movement of the apparatus 102).
  • the determination as to when to take a picture may depend on a number of pre-determined variables (e.g., number of desired images, timing of the images, every 0.5 seconds, etc.).
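  • One of the pre-determined sampling policies mentioned above (keep the beginning, middle, and end of the sweep) can be sketched as plain selection logic; the frame list itself is assumed to come from the camera 119.

```python
def select_sweep_views(burst_frames):
    # Keep the first, middle, and last frames so the retained views span the
    # right side, front, and left side of the face during a sweep.
    if len(burst_frames) < 3:
        return list(burst_frames)  # fewer frames captured: use what exists
    return [burst_frames[0],
            burst_frames[len(burst_frames) // 2],
            burst_frames[-1]]
```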
  • FIG. 4 illustrates an example of a user sweeping the mobile device 200 across their face.
  • the mobile device 200 includes a camera 220 that may take/capture at least two images of a user's face 250 as the user sweeps the apparatus across their face (e.g., along line A).
  • the captured images 230 may be displayed on a user interface/display 210.
  • the processor 110 may be configured to receive an image and/or image feature, such as from the camera 119.
  • the processor 110 may be configured to receive the at least two images of the object (e.g., the user's face) taken by the camera 119.
  • the processor 110 may also be configured to store the received images in the memory 112.
  • the received images are 2d images of the user's face.
  • the processor 110 may be configured to store at least one of the received 2d images as a secure 2d image password.
  • the processor 110 may be configured to create/develop a 2d representation (e.g., 2d texture map) of the object from the at least two received 2d images.
  • the camera 119 is moved (e.g., swept across) relative to the object (e.g., the user's face) or vice versa, the camera 119 is configured to take at least two images that have different views of the object.
  • the processor 110 may be configured to receive at least two images that correspond to a different view of the object. These different views aid in creation of a more accurate 2d representation of the object.
  • the front view, right side view, and left side view of a user's face can be combined by the processor 110 to create a 2d representation of the user's face.
  • This developed 2d representation can be stored by the processor 110 as a secure 2d representation password in the memory 112.
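  • The disclosure does not specify how the differing views are combined; as one hedged possibility, OpenCV's generic image stitcher can compose them into a single 2d texture-map-like image.

```python
import cv2

def develop_2d_representation(images_bgr):
    # SCANS mode suits flat/affine composition of overlapping views.
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, composite = stitcher.stitch(images_bgr)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return composite
```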
  • the processor 110 may be configured to create/develop a 3d representation of the object (e.g., the user's face) from the at least two received 2d images.
  • the camera 119 is moved (e.g., swept across) relative to the object (e.g., the user's face) or vice versa, the camera 119 is configured to take at least two images that have different views of the object.
  • the processor 110 may be configured to receive at least two images that correspond to a different view of the object. These different views aid in creation of a more accurate 3d representation of the object.
  • the front view, right side view, and left side view of a user's face can be combined by the processor 110 to create a 3d representation of the user's face.
  • This developed 3d representation can be stored by the processor 110 as a secure 3d representation password in the memory 112.
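  • As a hedged sketch of one classical way to develop 3d structure from two 2d views (feature matching, essential-matrix estimation, and triangulation), and not the method claimed here, the following assumes the camera intrinsic matrix K is known, e.g., from calibration.

```python
import cv2
import numpy as np

def develop_3d_points(img1_gray, img2_gray, K):
    # Match ORB features between the two differing views of the object.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1_gray, None)
    k2, d2 = orb.detectAndCompute(img2_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Estimate the relative camera pose from the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # Triangulate matched points into a sparse 3d point cloud.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 points
```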
  • 3d may include 3 or more dimensions (e.g., hyper-dimensional representations).
  • the processor 110 may further be configured to determine specific functionality to which access is permitted or enabled for each secure 2d image, 2d representation, and/or 3d representation password.
  • the processor 110 may be configured to link the secured functionality to the 2d image, 2d representation, and/or 3d representation password in the memory 112, such as by associating each with the same user and, in turn, with the secured functionality accessible by the user. Additionally or alternatively, a user of the apparatus 102 may be prompted to determine which functionality is permitted for the corresponding image/representation password.
  • the selected functionality may be stored and linked with the image/representation password in the memory 112 such that when the stored image/representation password is determined to sufficiently match the currently inputted image/representation, the processor 110 may determine which functionality to permit/enable access to.
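  • A minimal sketch of such linking follows; the password identifiers and functionality names are illustrative assumptions rather than values from this disclosure.

```python
# Map each stored image/representation password to the functionality it unlocks.
SECURED_FUNCTIONALITY = {
    "user_a_3d_representation": {"phone", "applications", "data"},
    "user_b_3d_representation": {"phone"},  # a second user with narrower access
}

def permitted_functionality(matched_password_id):
    # An unknown or unmatched password permits access to nothing.
    return SECURED_FUNCTIONALITY.get(matched_password_id, set())
```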
  • the processor 110 may be configured to disable/deny access to certain secure functionality (e.g., the apparatus 102 locks). In some embodiments, once the apparatus 102 is locked, the apparatus 102 may be unlocked through authentication.
  • the processor 110 may be configured to cause a user to be prompted to unlock the secured functionality.
  • the processor 110 may be configured to cause a user to be prompted to perform a sweeping motion of at least one of sweeping the apparatus 102/camera 119 across the object or sweeping the object across the apparatus 102/camera 119.
  • a sweeping motion may be consistent with the description provided above with respect to calibration.
  • the processor 110 may be configured to cause the user to be prompted to sweep the apparatus 102 including the camera 119 across the user's face.
  • the processor 110 may be configured to prompt a user to perform a sweeping motion such as sweeping the object across the camera 119 of the apparatus 102.
  • a person may move their face from side to side in front of the camera 119, such that at least two images can be taken from differing views of the user's face.
  • the user may maneuver their face in a sweeping motion relative to the camera 119.
  • at least two images may be captured in response to at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
  • a user can be prompted to perform a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
  • the processor 110 may also be configured to cause instructions or other indications to a user to be displayed to aid a user to properly sweep the apparatus 102/camera 119 and/or object.
  • the processor 110 may be configured to cause at least one arrow to be displayed to indicate the direction in which the user is to sweep the apparatus 102 across their face.
  • the user interface 116 may be configured to display the image on which the camera 119 is currently focused.
  • the processor 110 may be configured to cause a box or other bounded shape to be displayed around the user's face (if it appears in the display) as the user sweeps the apparatus 102 across their face, thereby providing interaction indicating proper alignment of the apparatus 102.
  • the processor 110 may be configured to use applications such as Face Tracker to aid the user in properly aligning the camera during sweeping.
  • the processor 110 may be configured to provide any type of indication which may aid/instruct a user to properly sweep the apparatus 102/camera 119 and/or the object.
  • the processor 110 may display an image of a person holding the apparatus 102 and sweeping the apparatus 102 across their face along with an arrow (e.g., see FIG. 4).
  • FIG. 5 illustrates example instructions to prompt a user to sweep the apparatus/camera across their face to unlock the secured functionality of the mobile device 200.
  • the display 210 on the mobile device 200 reads "PHONE LOCKED," which indicates to the user that at least some secure functionality of the mobile device 200 is disabled. Additionally, the display 210 instructs the user to perform an action in an attempt to unlock the mobile device's secure functionality (e.g., "SWEEP ACROSS FACE TO UNLOCK").
  • the camera 220 is located on the opposite face of the display 210.
  • the camera 220 may be configured on the front of the device 200. In other embodiments, the camera 220 may be configured separate from the device 200. As such, indication customized to the specific device 200 may be beneficial for aiding a user in facilitating calibration of the security application.
  • the processor 110 may be configured to instruct the camera 119 to capture/take at least one image of the object (e.g., the user's face), such as when the apparatus 102 is swept across the object.
  • the processor 110 may instruct the camera 119 to capture at least two images of the object.
  • the processor 110 may be configured to instruct the camera 119 to take images from different views of the user's face.
  • the processor 110 may instruct/cause the camera 119 to take an image at the beginning of the sweep (e.g., the right side of the user's face), the middle of the sweep (e.g., the front of the user's face), and the end of the sweep (e.g., the left side of the user's face).
  • the processor 110 may be configured to start taking pictures when the sensor 118 detects movement of the apparatus 102. Additionally or alternatively, the processor 110 may be configured to start taking pictures when the sensor 118 detects movement of the apparatus 102 in a desired sweeping motion.
  • the decision when to take a picture may depend on a number of pre-determined variables (e.g., number of desired images, timing of the images, every 0.5 seconds, etc.).
  • FIG. 4 illustrates an example of a user sweeping the mobile device 200 across their face.
  • the mobile device 200 includes a camera 220 that may take/capture at least two images of a user's face 250 as the user sweeps the apparatus across their face (e.g., along line A).
  • the captured images 230 may be displayed on a user interface/display 210.
  • the processor 110 may be configured to receive an image and/or image feature, such as from the camera 119.
  • the processor 110 may be configured to receive the at least two images of the object (e.g., the user's face) taken by the camera 119.
  • the processor 110 may also be configured to store the received images in the memory 112.
  • the received images are 2d images of the user's face.
  • the processor 110 may be configured to store at least one of the received images as a received 2d image.
  • the processor 110 may be configured to create/develop a 2d representation (e.g., 2d texture map) of the object from the at least two received 2d images.
  • the camera 119 is moved (e.g., swept across) relative to the object (e.g., the user's face) or vice versa, the camera 119 is configured to take at least two images that have different views of the object.
  • the processor 110 may be configured to receive at least two images that correspond to a different view of the object. These different views aid in creation of a more accurate 2d representation of the object.
  • the front view, right side view, and left side view of a user's face can be combined by the processor 110 to create a 2d representation of the user's face.
  • This developed 2d representation can be stored by the processor 110 as the developed 2d representation in the memory 112.
  • the processor 110 may be configured to create/develop a 3d representation of the object (e.g., the user's face) from the at least two received 2d images.
  • the camera 119 is configured to take at least two images that have different views of the object.
  • the processor 110 may be configured to receive at least two images that correspond to a different view of the object. These different views aid in creation of a more accurate 3d representation of the object.
  • the front view, right side view, and left side view of a user's face can be combined to create a 3d representation of the user's face.
  • a greater number of images from differing views correlates with a more accurate 3d representation of the user's face.
  • This developed 3d representation can be stored by the processor 110 in the memory 112 as the received/developed 3d representation.
  • the processor 110 may be configured to compare at least one received 2d image to the at least one stored 2d image (e.g., the at least one secure 2d image password). Additionally, in some embodiments, the processor 110 may be configured to determine when at least one of the received 2d images is within a predefined similarity threshold of (e.g., sufficiently similar to, matches, etc.) the stored/secure 2d image, such as with 2d object/face recognition technology. The determination as to when the received 2d image is within a predefined similarity threshold of the stored 2d image may be customizable and may depend on a number of factors (e.g., the computing power of the processor 110, level of desired security, computation time, etc.). In an instance in which at least one of the received 2d images is within a predefined similarity threshold of at least one of the stored 2d images, the processor 110 may be configured to permit/enable access to secured functionality of the apparatus 102.
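  • As a stand-in for whatever 2d object/face recognition measure an implementation would use, the sketch below scores two same-size grayscale images with zero-mean normalized cross-correlation and applies a predefined threshold; the threshold value is an assumption.

```python
import numpy as np

def within_threshold_2d(received, stored, threshold=0.8):
    # Zero-mean normalized cross-correlation between two same-size images.
    a = received.astype(np.float64) - received.mean()
    b = stored.astype(np.float64) - stored.mean()
    ncc = float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return ncc >= threshold
```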
  • the processor 110 may be configured to compare the developed 2d representation to a stored 2d representation (e.g., the secure 2d representation password). Additionally, in some embodiments, the processor 110 may be configured to determine when the developed 2d representation is within a predefined similarity threshold of (e.g., sufficiently similar to, matches, etc.) the stored/secure 2d representation, such as with 2d object/face recognition technology. The determination as to when the developed 2d representation is within a predefined similarity threshold of the stored 2d representation may be customizable and may depend on a number of factors (e.g., the computing power of the processor 110, level of desired security, computation time, etc.).
  • the processor 110 may be configured to permit/enable access to secured functionality of the apparatus 102.
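Because the description leaves the similarity metric and threshold open (they may be tuned to the computing power of the processor 110, the desired security level, and computation time), any concrete check is an assumption. One simple sketch treats the representations as feature vectors and applies cosine similarity:

```python
import numpy as np

def within_similarity_threshold(developed, stored, threshold=0.85):
    """Return True if the developed representation is 'within a predefined
    similarity threshold' of the stored one. The metric (cosine similarity)
    and the 0.85 threshold are illustrative, not prescribed."""
    a = np.asarray(developed, dtype=np.float64).ravel()
    b = np.asarray(stored, dtype=np.float64).ravel()
    similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity >= threshold
```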
  • the processor 110 may be configured to compare the developed 3d representation to a stored 3d representation (e.g., the secure 3d representation password). Additionally, in some embodiments, the processor 110 may be configured to determine when the developed 3d representation is within a predefined similarity threshold of (e.g., sufficiently similar to, matches, etc.) the stored/secure 3d representation, such as with 3d object/face recognition technology.
  • the determination as to when the developed 3d representation is within a predefined similarity threshold of the stored 3d representation may be customizable and may depend on a number of factors (e.g., the computing power of the processor 110, level of desired security, computation time, etc.).
  • the processor 110 may be configured to permit/enable access to secured functionality of the apparatus 102.
  • the processor 110 may be configured to permit access to secure functionality in an instance in which both the developed 3d representation is within a predefined similarity threshold of the stored 3d representation and the 2d representation is within a second predefined similarity threshold of the stored 2d representation. Additionally or alternatively, the processor may be configured to permit access to secure functionality in an instance in which both the developed 3d representation is within a predefined similarity threshold of the stored 3d representation and at least one of the received 2d images is within a second predefined similarity threshold of at least one stored 2d image.
  • some embodiments may only employ a second comparison function if a first comparison function does not produce a comparison that is within the predefined similarity threshold (e.g., the match is not quite close enough, confidence level of similarity score is low, etc.).
  • the processor 110 may be configured to compare a developed multi-dimensional representation (e.g., 2d representation, 3d representation, hyper-dimensional representation, etc.) to a stored multi-dimensional representation and in an instance in which the developed multi-dimensional representation is not within a predefined similarity threshold of the stored multi-dimensional representation, the processor 110 may be configured to then compare a developed second multi-dimensional representation with a stored second multi-dimensional representation to determine if they are within a second predefined similarity threshold before permitting access to secure functionality.
  • the processor 110 may be configured to compare the developed 3d representation to the stored 3d representation and in an instance in which the developed 3d representation is not within a predefined similarity threshold of the stored 3d representation, the processor 110 may be configured to then compare the developed 2d representation with the stored 2d representation to determine if they are within a second predefined similarity threshold before permitting access to secure functionality. Additionally or alternatively, the processor 110 may be configured to compare the developed 3d representation to the stored 3d representation and in an instance in which the developed 3d representation is within a lesser predefined similarity threshold of the stored 3d representation (e.g., it is closely similar, but not quite enough to permit access to secure functionality), then the processor 110 may be further configured to compare the developed 2d representation with the stored 2d representation to determine if they are within a predefined similarity threshold before permitting access to secure functionality.
  • the processor 110 may be configured to compare the developed 2d representation to the stored 2d representation and in an instance in which the developed 2d representation is not within a predefined similarity threshold of the stored 2d representation, the processor 110 may be configured to then compare the developed 3d representation with the stored 3d representation to determine if they are within a second predefined similarity threshold before permitting access to secure functionality.
  • the processor 110 may be configured to compare the developed 2d representation to the stored 2d representation and in an instance in which the developed 2d representation is within a lesser predefined similarity threshold of the stored 2d representation (e.g., it is closely similar, but not quite enough to permit access to secure functionality), then the processor 110 may be further configured to compare the developed 3d representation with the stored 3d representation to determine if they are within a predefined similarity threshold before permitting access to secure functionality.
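The cascaded check described above (a strict primary comparison, with a fallback to the second representation when the first clears only a lesser threshold) can be sketched as follows; the thresholds, the metric, and the helper are all illustrative assumptions:

```python
import numpy as np

def _similarity(a, b):
    # Illustrative metric: cosine similarity over flattened representations.
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def permit_access(dev_3d, stored_3d, dev_2d, stored_2d,
                  strict=0.90, lesser=0.75):
    """Grant on a strict 3d match; if the 3d match is closely similar but
    below the strict threshold, require a confirming 2d match."""
    sim_3d = _similarity(dev_3d, stored_3d)
    if sim_3d >= strict:
        return True
    if sim_3d >= lesser:  # close, "but not quite enough" on its own
        return _similarity(dev_2d, stored_2d) >= strict
    return False
```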
  • FIG. 6 illustrates a flowchart of an example method 300 for permitting access to secure functionality through 2d and 3d object recognition.
  • the method comprises causing a user to be prompted to perform a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
  • the processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 302.
  • the method may further comprise detecting movement of the apparatus 102 at operation 304.
  • the processor 110, user interface 116, sensor 118, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 304. If movement is not sensed, the method may return to operation 302 to continue to prompt the user to perform the sweeping motion. However, if movement is detected, then the method may comprise causing at least two images of the object to be captured at operation 306.
  • the processor 110, user interface 116, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 306. Next, the method comprises receiving the at least two images at operation 308.
  • the processor 110 may, for example, provide means for performing operation 308.
  • the method comprises developing, based at least in part on the at least two images, a 2d representation of the object.
  • the processor 110 may, for example, provide means for performing operation 310.
  • the method further comprises comparing the developed 2d representation with a stored 2d representation (e.g., the secure 2d representation password) at operation 312.
  • the processor 110 may, for example, provide means for performing operation 312.
  • the method comprises determining if the developed 2d representation is within a predefined similarity threshold of the stored 2d representation at operation 314.
  • the processor 110 may, for example, provide means for performing operation 314. If the developed 2d representation and the stored 2d representation are within a predefined similarity threshold, then the method may comprise permitting access to the secured functionality at operation 316.
  • the processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 316. In some embodiments, permitting access may comprise displaying a message to indicate granting of access.
  • the method may further comprise developing, based at least in part on the at least two images, a 3d representation of the object at operation 317.
  • the processor 110 may, for example, provide means for performing operation 317.
  • the method may comprise comparing the developed 3d representation with a stored 3d representation (e.g., the secure 3d representation password) at operation 318, and determining if they are within a predefined similarity threshold at operation 320.
  • the processor 110 may, for example, provide means for performing operations 318 and 320.
  • the method may comprise denying access to the secured functionality at operation 322.
  • the processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 322.
  • the method may comprise permitting access to the secured functionality at operation 324.
  • the processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 324. In such a way, some embodiments of the present invention provide methods, apparatuses, and computer program products that may use both 2d and 3d object recognition technology to provide a robust and effective security system, such as for apparatus 102 (shown in FIG. 1).
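Taken together, operations 302-324 amount to the control flow sketched below (FIG. 7's method 400 mirrors it with the 3d comparison tried first). Every callable is a hypothetical stand-in for the components described above, not an API the document defines:

```python
def method_300(capture_sweep_images, develop_2d, develop_3d,
               stored_2d, stored_3d, is_similar):
    """Sketch of FIG. 6: 2d comparison first, 3d comparison as fallback."""
    images = capture_sweep_images()        # operations 302-308: prompt, sweep, capture
    dev_2d = develop_2d(images)            # operation 310
    if is_similar(dev_2d, stored_2d):      # operations 312-314
        return "access granted"            # operation 316
    dev_3d = develop_3d(images)            # operation 317
    if is_similar(dev_3d, stored_3d):      # operations 318-320
        return "access granted"            # operation 324
    return "access denied"                 # operation 322
```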
  • FIG. 7 illustrates a flowchart of an example method 400 for permitting access to secure functionality through 3d and 2d object recognition.
  • the method comprises causing a user to be prompted to perform a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
  • the processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 402.
  • the method may further comprise detecting movement of the apparatus 102 at operation 404.
  • the processor 110, user interface 116, sensor 118, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 404. If movement is not sensed, the method may return to operation 402 to continue to prompt the user to perform the sweeping motion. However, if movement is detected, then the method may comprise causing at least two images of the object to be captured at operation 406.
  • the processor 110, user interface 116, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 406. Next, the method comprises receiving the at least two images at operation 408.
  • the processor 110 may, for example, provide means for performing operation 408.
  • the method comprises developing, based at least in part on the at least two images, a 3d representation of the object.
  • the processor 110 may, for example, provide means for performing operation 410.
  • the method further comprises comparing the developed 3d representation with a stored 3d representation (e.g., the secure 3d representation password) at operation 412.
  • the processor 110 may, for example, provide means for performing operation 412.
  • the method comprises determining if the developed 3d representation is within a predefined similarity threshold of the stored 3d representation at operation 414.
  • the processor 110 may, for example, provide means for performing operation 414. If the developed 3d representation and the stored 3d representation are within a predefined similarity threshold, then the method may comprise permitting access to the secured functionality at operation 416.
  • the processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 416. In some embodiments, permitting access may comprise displaying a message to indicate granting of access.
  • the method may further comprise developing, based at least in part on the at least two images, a 2d representation of the object at operation 417.
  • the processor 110 may, for example, provide means for performing operation 417.
  • the method may comprise comparing the developed 2d representation with a stored 2d representation (e.g., the secure 2d representation password) at operation 418, and determining if they are within a predefined similarity threshold at operation 420.
  • the processor 110 may, for example, provide means for performing operations 418 and 420.
  • the method may comprise denying access to the secured functionality at operation 422.
  • the processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 422.
  • the method may comprise permitting access to the secured functionality at operation 424.
  • the processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 424.
  • FIG. 8 illustrates a flowchart of the operations for facilitating permission of access to secure functionality through object recognition according to another embodiment 500.
  • embodiments of the present invention may be utilized for developing and comparing representations of an object/face in multi-dimensions (e.g., 2-dimensions, 3-dimensions, hyper-dimensions, etc.).
  • at least two images of an object may be received at operation 502.
  • at least two of the at least two images correspond to a different view of the object.
  • the processor 110, user interface 116, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 502.
  • the method further comprises developing, based at least in part on the at least two images, a multi-dimensional representation of the object at operation 504.
  • the processor 110 may, for example, provide means for performing operation 504.
  • the developed multi-dimensional representation and a stored multi-dimensional representation are compared at operation 506.
  • the processor 110 may, for example, provide means for performing operation 506.
  • access to secured functionality is permitted in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
  • the processor 110, user interface 116 and/or UI control circuitry 122 may, for example, provide means for performing operation 508.
  • FIG. 9 illustrates a flowchart of the operations for facilitating permission of access to secure functionality through object recognition according to another embodiment 600.
  • at least two images of an object may be received at operation 602.
  • at least two of the at least two images correspond to a different view of the object.
  • the processor 110, user interface 116, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 602.
  • the method further comprises developing, based at least in part on the at least two images, a 3d representation of the object at operation 604.
  • the processor 110 may, for example, provide means for performing operation 604.
  • the developed 3d representation and a stored 3d representation are compared at operation 606.
  • the processor 110 may, for example, provide means for performing operation 606.
  • access to secured functionality is permitted in an instance in which the developed 3d representation is within a predefined similarity threshold of the stored 3d representation.
  • the processor 110, user interface 116 and/or UI control circuitry 122 may, for example, provide means for performing operation 608.
  • FIG. 10 illustrates a flowchart of the operations for facilitating permission of access to secure functionality through object recognition according to another embodiment 700.
  • at least two images of an object may be received at operation 702.
  • at least two of the at least two images correspond to a different view of the object.
  • the processor 110, user interface 116, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 702.
  • the method further comprises developing, based at least in part on the at least two images, a 2d representation of the object at operation 704.
  • the processor 110 may, for example, provide means for performing operation 704.
  • the developed 2d representation and a stored 2d representation are compared at operation 706.
  • the processor 110 may, for example, provide means for performing operation 706.
  • access to secured functionality is permitted in an instance in which the developed 2d representation is within a predefined similarity threshold of the stored 2d representation.
  • the processor 110, user interface 116 and/or UI control circuitry 122 may, for example, provide means for performing operation 708.
  • FIGs. 6-10 each illustrate a flowchart of a system, method, and computer program product according to an example embodiment. It will be understood that each block of the flowcharts, and combinations of blocks in the flowcharts, may be implemented by various means, such as hardware and/or a computer program product comprising one or more computer-readable mediums having computer readable program instructions stored thereon. For example, one or more of the procedures described herein may be embodied by computer program instructions of a computer program product.
  • the computer program product(s) which embody the procedures described herein may be stored by one or more memory devices of a mobile terminal, server, or other computing device (for example, in the memory 112) and executed by a processor in the computing device (for example, by the processor 110).
  • the computer program instructions comprising the computer program product(s) which embody the procedures described above may be stored by memory devices of a plurality of computing devices.
  • any such computer program product may be loaded onto a computer or other programmable apparatus (for example, an apparatus 102) to produce a machine, such that the computer program product including the instructions which execute on the computer or other programmable apparatus creates means for implementing the functions specified in the flowchart block(s).
  • the computer program product may comprise one or more computer-readable memories on which the computer program instructions may be stored such that the one or more computer-readable memories can direct a computer or other programmable apparatus to function in a particular manner, such that the computer program product comprises an article of manufacture which implements the function specified in the flowchart block(s).
  • the computer program instructions of one or more computer program products may also be loaded onto a computer or other programmable apparatus (for example, an apparatus 102) to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus implement the functions specified in the flowchart block(s).
  • blocks of the flowcharts support combinations of means for performing the specified functions. It will also be understood that one or more blocks of the flowcharts, and combinations of blocks in the flowcharts, may be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer program product(s).
  • a suitably configured processor (for example, the processor 110)
  • all or a portion of the elements may be configured by and operate under control of a computer program product.
  • the computer program product for performing the methods of an example embodiment of the invention includes a computer-readable storage medium (for example, the memory 112), such as the non-volatile storage medium, and computer-readable program code portions, such as a series of computer instructions, embodied in the computer-readable storage medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Telephone Function (AREA)

Abstract

Methods, apparatus, and computer program products are provided for facilitating unlocking and locking of secure functionality through object recognition. A method may include receiving at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object. The method may further include developing, based at least in part on the at least two images, a multi-dimensional representation of the object. The method further includes comparing the developed multi-dimensional representation with a stored multi-dimensional representation. The method also includes permitting access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation. Corresponding apparatus and computer program products may also be provided.

Description

METHODS AND APPARATUSES FOR FACILITATING LOCKING AND UNLOCKING OF SECURE FUNCTIONALITY THROUGH OBJECT
RECOGNITION
TECHNOLOGICAL FIELD
Example embodiments of the present invention relate generally to preventing unauthorized access to secure functionality through use of object recognition technology and, more particularly, relate to methods and apparatuses for 2-dimensional and 3-dimensional object and/or face recognition to lock and unlock a mobile computing device.
BACKGROUND
The modern communications era has brought about a tremendous expansion of wireline and wireless networks. Wireless and mobile networking technologies have addressed related consumer demands, while providing more flexibility and immediacy of information transfer. Concurrent with the expansion of networking technologies, an expansion in computing power has resulted in development of affordable computing devices capable of taking advantage of services made possible by modern networking technologies. This expansion in computing power has led to a reduction in the size of computing devices and given rise to a new generation of mobile devices that are capable of performing functionality that only a few years ago required processing power that could be provided only by the most advanced desktop computers. Consequently, mobile computing devices having a small form factor have become ubiquitous and are used to access network applications and services by consumers of all socioeconomic backgrounds.
Often mobile computing devices incorporate information and applications personal or private to a user. As such, there is an increased need for protection from unauthorized access to these mobile computing devices. Passwords are often utilized in order to permit a user to be authenticated prior to permitting access to functionality of a mobile computing device. Many passwords require specific keys or inputs to be provided to the device in a specific order. Moreover, these passwords can be rather long in order to provide sufficient security, which, however, can be troublesome for even an authorized user to accurately enter. Additionally, passwords may be lost or stolen and used by others to impermissibly gain access to a mobile computing device, thereby compromising the security otherwise provided by password protection of the mobile computing device.
BRIEF SUMMARY
Face recognition technology can be useful for such security applications and may provide enhanced security while limiting the need to remember or input difficult passwords. In fact, face recognition technology may also provide better security, as any user can enter the correct password, whereas only a user with the same face may gain authorized access to functionality of the mobile computing device.
As such, some embodiments of the present invention provide a method, apparatus, and computer program product for facilitating unlocking and locking of secure functionality of a mobile computing device through object recognition. In one example embodiment, a method may include receiving at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object. The method may further include developing, based at least in part on the at least two images, a multi-dimensional
representation of the object. The method further includes comparing the developed multidimensional representation with a stored multi-dimensional representation. The method also includes permitting access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
In another example embodiment, an apparatus comprising at least one processor and at least one memory storing computer program code, wherein the at least one memory and stored computer program code are configured, with the at least one processor, to cause the apparatus to receive at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object. The at least one memory and stored computer program code are configured, with the at least one processor, to further cause the apparatus of this example embodiment to develop, based at least in part on the at least two images, a multi-dimensional representation of the object. The at least one memory and stored computer program code are configured, with the at least one processor, to further cause the apparatus of this example embodiment to compare the developed multi-dimensional representation with a stored multi-dimensional representation. The at least one memory and stored computer program code are configured, with the at least one processor, to also cause the apparatus of this example embodiment to permit access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
In a further example embodiment, a computer program product is provided. The computer program product of this example embodiment includes at least one computer-readable storage medium having computer-readable program instructions stored therein. The program instructions of this example embodiment comprise program instructions configured to cause an apparatus to perform a method comprising receiving at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object. The computer program product of this example embodiment further comprises developing, based at least in part on the at least two images, a multi-dimensional representation of the object. The computer program product of this example embodiment additionally comprises comparing the developed multi-dimensional representation with a stored multi-dimensional representation. The computer program product of this example embodiment further comprises permitting access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
In yet another example embodiment, an apparatus that includes means for receiving at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object. The apparatus may also comprise means for developing, based at least in part on the at least two images, a multi-dimensional representation of the object. The apparatus may additionally comprise means for comparing the developed multidimensional representation with a stored multi-dimensional representation. The apparatus may further comprise means for permitting access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
BRIEF DESCRIPTION OF THE DRAWINGS
Having thus described embodiments of the invention in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
FIG. 1 illustrates a block diagram of an apparatus for facilitating locking and unlocking of secured functionality through object recognition technology, in accordance with an example embodiment;
FIG. 2 is a schematic block diagram of a mobile terminal according to an example embodiment;
FIG. 3 illustrates an example apparatus for facilitating locking and unlocking of a mobile device through face recognition, wherein the apparatus displays instructions prompting a user to create a representation of their face for security calibration purposes, in accordance with an example embodiment;
FIG. 4 illustrates example interaction of a user with the apparatus shown in FIG. 3, in accordance with an example embodiment;
FIG. 5 illustrates an example apparatus for facilitating locking and unlocking of a mobile device through face recognition, wherein the apparatus displays instructions prompting a user to create a representation of their face for unlocking of the apparatus, in accordance with an example embodiment;
FIG. 6 illustrates a flowchart according to an example method for permitting access to secure functionality, in accordance with an example embodiment;
FIG. 7 illustrates a flowchart according to another example method for permitting access to secure functionality, in accordance with an example embodiment;
FIG. 8 illustrates a flowchart according to one example embodiment of a method for permitting access to secure functionality;
FIG. 9 illustrates a flowchart according to another example embodiment of a method for permitting access to secure functionality; and
FIG. 10 illustrates a flowchart according to another example embodiment of a method for permitting access to secure functionality.
DETAILED DESCRIPTION
Some embodiments of the present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the invention are shown. Indeed, the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout.
As used herein, the terms "data," "content," "information" and similar terms may be used interchangeably to refer to singular or plural data capable of being transmitted, received, displayed and/or stored in accordance with various example embodiments. Thus, use of any such terms should not be taken to limit the spirit and scope of the disclosure.
The term "computer-readable medium" as used herein refers to any medium configured to participate in providing information to a processor, including instructions for execution. Such a medium may take many forms, including, but not limited to a non-transitory computer-readable storage medium (e.g., non-volatile media, volatile media), and
transmission media. Transmission media include, for example, coaxial cables, copper wire, fiber optic cables, and carrier waves that travel through space without wires or cables, such as acoustic waves and electromagnetic waves, including radio, optical and infrared waves.
Signals include man-made transient variations in amplitude, frequency, phase, polarization or other physical properties transmitted through the transmission media. Examples of non-transitory computer-readable media include a magnetic computer readable medium (e.g., a floppy disk, hard disk, magnetic tape, any other magnetic medium), an optical computer readable medium (e.g., a compact disc read only memory (CD-ROM), a digital versatile disc (DVD), a Blu-Ray disc, or the like), a random access memory (RAM), a programmable read only memory (PROM), an erasable programmable read only memory (EPROM), a FLASH-EPROM, or any other non-transitory medium from which a computer can read. The term computer-readable storage medium is used herein to refer to any computer-readable medium except transmission media. However, it will be appreciated that where embodiments are described to use a computer-readable storage medium, other types of computer-readable mediums may be substituted for or used in addition to the computer-readable storage medium in alternative embodiments.
Additionally, as used herein, the term 'circuitry' refers to (a) hardware-only circuit implementations (e.g., implementations in analog circuitry and/or digital circuitry); (b) combinations of circuits and computer program product(s) comprising software and/or firmware instructions stored on one or more computer readable memories that work together to cause an apparatus to perform one or more functions described herein; and (c) circuits, such as, for example, a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation even if the software or firmware is not physically present. This definition of 'circuitry' applies to all uses of this term herein, including in any claims. As a further example, as used herein, the term 'circuitry' also includes an implementation comprising one or more processors and/or portion(s) thereof and accompanying software and/or firmware. As another example, the term 'circuitry' as used herein also includes, for example, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, other network device, and/or other computing device.
FIG. 1 illustrates a block diagram of an apparatus 102 for facilitating locking and unlocking of a mobile device through face recognition. It will be appreciated that the apparatus 102 is provided as an example of one embodiment and should not be construed to narrow the scope or spirit of the invention in any way. In this regard, the scope of the disclosure encompasses many potential embodiments in addition to those illustrated and described herein. As such, while FIG. 1 illustrates one example of a configuration of an apparatus for facilitating locking and unlocking of a mobile device through face recognition, other configurations may also be used to implement embodiments of the present invention.
The apparatus 102 may be embodied as a desktop computer, laptop computer, mobile terminal, mobile computer, mobile phone, mobile communication device, game device, digital camera/camcorder, audio/video player, television device, radio receiver, digital video recorder, positioning device, a chipset, a computing device comprising a chipset, any combination thereof, and/or the like. In some example embodiments, the apparatus 102 is embodied as a mobile computing device, such as the mobile terminal illustrated in FIG. 2. In this regard, FIG. 2 illustrates a block diagram of a mobile terminal 10 representative of one example embodiment of an apparatus 102. It should be understood, however, that the mobile terminal 10 illustrated and hereinafter described is merely illustrative of one type of apparatus 102 that may implement and/or benefit from various example embodiments of the invention and, therefore, should not be taken to limit the scope of the disclosure. While several embodiments of the electronic device are illustrated and will be hereinafter described for purposes of example, other types of electronic devices, such as mobile telephones, mobile computers, personal digital assistants (PDAs), pagers, laptop computers, desktop computers, gaming devices, televisions, e-papers, and other types of electronic systems, may employ various embodiments of the invention. As shown, the mobile terminal 10 may include an antenna 12 (or multiple antennas 12) in communication with a transmitter 14 and a receiver 16. The mobile terminal 10 may also include a processor 20 configured to provide signals to and receive signals from the transmitter and receiver, respectively. The processor 20 may, for example, be embodied as various means including circuitry, one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an ASIC (application specific integrated circuit) or FPGA (field programmable gate array), or some combination thereof.
Accordingly, although illustrated in FIG. 2 as a single processor, in some embodiments the processor 20 comprises a plurality of processors. These signals sent and received by the processor 20 may include signaling information in accordance with an air interface standard of an applicable cellular system, and/or any number of different wireline or wireless networking techniques, comprising but not limited to Wi-Fi, wireless local access network (WLAN) techniques such as Institute of Electrical and Electronics Engineers (IEEE) 802.11, 802.16, and/or the like. In addition, these signals may include speech data, user generated data, user requested data, and/or the like. In this regard, the mobile terminal may be capable of operating with one or more air interface standards, communication protocols, modulation types, access types, and/or the like. More particularly, the mobile terminal may be capable of operating in accordance with various first generation (1G), second generation (2G), 2.5G, third-generation (3G) communication protocols, fourth-generation (4G) communication protocols, Internet Protocol Multimedia Subsystem (IMS) communication protocols (e.g., session initiation protocol (SIP)), and/or the like. For example, the mobile terminal may be capable of operating in accordance with 2G wireless communication protocols IS-136 (Time Division Multiple Access (TDMA)), Global System for Mobile communications (GSM), IS-95 (Code Division Multiple Access (CDMA)), and/or the like. Also, for example, the mobile terminal may be capable of operating in accordance with 2.5G wireless communication protocols General Packet Radio Service (GPRS), Enhanced Data GSM Environment
(EDGE), and/or the like. Further, for example, the mobile terminal may be capable of operating in accordance with 3G wireless communication protocols such as Universal Mobile Telecommunications System (UMTS), Code Division Multiple Access 2000 (CDMA2000), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), and/or the like. The mobile terminal may be additionally capable of operating in accordance with 3.9G wireless communication protocols such as Long Term Evolution (LTE) or Evolved Universal Terrestrial Radio Access Network (E-UTRAN) and/or the like. Additionally, for example, the mobile terminal may be capable of operating in accordance with fourth-generation (4G) wireless communication protocols and/or the like as well as similar wireless communication protocols that may be developed in the future. Some Narrow-band Advanced Mobile Phone System (NAMPS), as well as Total Access Communication System (TACS), mobile terminals may also benefit from embodiments of this invention, as should dual or higher mode phones (e.g., digital/analog or
TDMA/CDMA/analog phones). Additionally, the mobile terminal 10 may be capable of operating according to Wi-Fi or Worldwide Interoperability for Microwave Access
(WiMAX) protocols.
It is understood that the processor 20 may comprise circuitry for implementing audio/video and logic functions of the mobile terminal 10. For example, the processor 20 may comprise a digital signal processor device, a microprocessor device, an analog-to-digital converter, a digital-to-analog converter, and/or the like. Control and signal processing functions of the mobile terminal may be allocated between these devices according to their respective capabilities. The processor may additionally comprise an internal voice coder (VC) 20a, an internal data modem (DM) 20b, and/or the like. Further, the processor may comprise functionality to operate one or more software programs, which may be stored in memory. For example, the processor 20 may be capable of operating a connectivity program, such as a web browser. The connectivity program may allow the mobile terminal 10 to transmit and receive web content, such as location-based content, according to a protocol, such as Wireless Application Protocol (WAP), hypertext transfer protocol (HTTP), and/or the like. The mobile terminal 10 may be capable of using a Transmission Control Protocol/Internet Protocol (TCP/IP) to transmit and receive web content across the internet or other networks.
The mobile terminal 10 may also comprise a user interface including, for example, an earphone or speaker 24, a ringer 22, a microphone 26, a display 28, a camera 36, a user input interface, and/or the like, which may be operationally coupled to the processor 20. In this regard, the processor 20 may comprise user interface circuitry configured to control at least some functions of one or more elements of the user interface, such as, for example, the speaker 24, the ringer 22, the microphone 26, the display 28, and/or the like. The processor 20 and/or user interface circuitry comprising the processor 20 may be configured to control one or more functions of one or more elements of the user interface through computer program instructions (e.g., software and/or firmware) stored on a memory accessible to the processor 20 (e.g., volatile memory 40, non-volatile memory 42, and/or the like). Although not shown, the mobile terminal may comprise a battery for powering various circuits related to the mobile terminal, for example, a circuit to provide mechanical vibration as a detectable output. The display 28 of the mobile terminal may be of any type appropriate for the electronic device in question with some examples including a plasma display panel (PDP), a liquid crystal display (LCD), a light-emitting diode (LED), an organic light-emitting diode display (OLED), a projector, a holographic display or the like. The user input interface may comprise devices allowing the mobile terminal to receive data, such as a keypad 30, a touch display (e.g., some example embodiments wherein the display 28 is configured as a touch display), a joystick (not shown), and/or other input device. In embodiments including a keypad, the keypad may comprise numeric (0-9) and related keys (#, *), and/or other keys for operating the mobile terminal.
The mobile terminal 10 may comprise memory, such as a subscriber identity module (SIM) 38, a removable user identity module (R-UIM), and/or the like, which may store information elements related to a mobile subscriber. In addition to the SIM, the mobile terminal may comprise other removable and/or fixed memory. The mobile terminal 10 may include volatile memory 40 and/or non-volatile memory 42. For example, volatile memory 40 may include Random Access Memory (RAM) including dynamic and/or static RAM, on-chip or off-chip cache memory, and/or the like. Non-volatile memory 42, which may be embedded and/or removable, may include, for example, read-only memory, flash memory, magnetic storage devices (e.g., hard disks, floppy disk drives, magnetic tape, etc.), optical disc drives and/or media, non-volatile random access memory (NVRAM), and/or the like. Like volatile memory 40, non-volatile memory 42 may include a cache area for temporary storage of data. The memories may store one or more software programs, instructions, pieces of information, data, and/or the like which may be used by the mobile terminal for performing functions of the mobile terminal. For example, the memories may comprise an identifier, such as an international mobile equipment identification (IMEI) code, capable of uniquely identifying the mobile terminal 10. In an example embodiment, the mobile terminal 10 may include a media capturing element, such as a camera, video and/or audio module, in communication with the processor 20. The media capturing element may be any means for capturing an image, video and/or audio for storage, display or transmission. For example, in an example embodiment in which the media capturing element is a camera 36, the camera 36 may include a digital camera capable of forming a digital image file from a captured image. In addition, the digital camera of the camera 36 may be capable of capturing a video clip. As such, the camera 36 may include all hardware, such as a lens or other optical component(s), and software necessary for creating a digital image file from a captured image as well as a digital video file from a captured video clip. Alternatively, the camera 36 may include only the hardware needed to view an image, while a memory device of the mobile terminal 10 stores instructions for execution by the processor 20 in the form of software necessary to create a digital image file from a captured image. As yet another alternative, an object or objects within a field of view of the camera 36 may be displayed on the display 28 of the mobile terminal 10 to illustrate a view of an image currently displayed which may be captured if desired by the user. As such, as referred to hereinafter, an image may be either a captured image or an image comprising the object or objects currently displayed by the mobile terminal 10, but not necessarily captured in an image file. In an example embodiment, the camera 36 may further include a processing element such as a co-processor which assists the processor 20 in processing image data and an encoder and/or decoder for compressing and/or decompressing image data. The encoder and/or decoder may encode and/or decode according to, for example, a joint photographic experts group (JPEG) standard, a moving picture experts group (MPEG) standard, or other format.
Returning to FIG. 1, in an example embodiment, the apparatus 102 includes various means for performing the various functions herein described. These means may comprise one or more of a processor 110, memory 112, communication interface 114, user interface 116, sensor 118, camera 119, or user interface (UI) control circuitry 122. The means of the apparatus 102 as described herein may be embodied as, for example, circuitry, hardware elements (e.g., a suitably programmed processor, combinational logic circuit, and/or the like), a computer program product comprising computer-readable program instructions (e.g., software or firmware) stored on a computer-readable medium (e.g., memory 112) that is executable by a suitably configured processing device (e.g., the processor 110), or some combination thereof.
In some example embodiments, one or more of the means illustrated in FIG. 1 may be embodied as a chip or chip set. In other words, the apparatus 102 may comprise one or more physical packages (e.g., chips) including materials, components and/or wires on a structural assembly (e.g., a baseboard). The structural assembly may provide physical strength, conservation of size, and/or limitation of electrical interaction for component circuitry included thereon. In this regard, the processor 110, memory 112, communication interface 114, user interface 116, sensor 118, camera 119, and/or UI control circuitry 122 may be embodied as a chip or chip set. The apparatus 102 may therefore, in some cases, be configured to or may comprise component(s) configured to implement embodiments of the present invention on a single chip or as a single "system on a chip." As such, in some cases, a chip or chipset may constitute means for performing one or more operations for providing the functionalities described herein.
The processor 110 may, for example, be embodied as various means including one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an ASIC (application specific integrated circuit) or FPGA (field programmable gate array), one or more other types of hardware processors, or some combination thereof. Accordingly, although illustrated in FIG. 1 as a single processor, in some embodiments the processor 110 comprises a plurality of processors. The plurality of processors may be in operative communication with each other and may be collectively configured to perform one or more functionalities of the apparatus 102 as described herein. The plurality of processors may be embodied on a single computing device or distributed across a plurality of computing devices collectively configured to function as the apparatus 102. In embodiments wherein the apparatus 102 is embodied as a mobile terminal 10, the processor 110 may be embodied as or comprise the processor 20 (shown in FIG. 2). In some example embodiments, the processor 110 is configured to execute instructions stored in the memory 112 or otherwise accessible to the processor 110. These instructions, when executed by the processor 110, may cause the apparatus 102 to perform one or more of the functionalities of the apparatus 102 as described herein. As such, whether configured by hardware or software methods, or by a combination thereof, the processor 110 may comprise an entity capable of performing operations according to embodiments of the present invention while configured accordingly. Thus, for example, when the processor 110 is embodied as an ASIC, FPGA or the like, the processor 110 may comprise specifically configured hardware for conducting one or more operations described herein. Alternatively, as another example, when the processor 110 is embodied as an executor of instructions, such as may be stored in the memory 112, the instructions may specifically configure the processor 110 to perform one or more algorithms and operations described herein.
The memory 112 may comprise, for example, volatile memory, non- volatile memory, or some combination thereof. In this regard, the memory 112 may comprise a non-transitory computer-readable storage medium. Although illustrated in FIG. 1 as a single memory, the memory 112 may comprise a plurality of memories. The plurality of memories may be embodied on a single computing device or may be distributed across a plurality of computing devices collectively configured to function as the apparatus 102. In various example embodiments, the memory 112 may comprise a hard disk, random access memory, cache memory, flash memory, a compact disc read only memory (CD-ROM), digital versatile disc read only memory (DVD-ROM), an optical disc, circuitry configured to store information, or some combination thereof. In embodiments wherein the apparatus 102 is embodied as a mobile terminal 10, the memory 112 may comprise the volatile memory 40 and/or the non- volatile memory 42 (shown in FIG. 2). The memory 112 may be configured to store information, data, applications, instructions, or the like for enabling the apparatus 102 to carry out various functions in accordance with various example embodiments. For example, in some example embodiments, the memory 112 is configured to buffer input data for processing by the processor 110. Additionally or alternatively, the memory 112 may be configured to store program instructions for execution by the processor 110. The memory 112 may store information in the form of static and/or dynamic information. The stored information may include, for example, images, content, media content, user data, application data, and/or the like. This stored information may be stored and/or used by the UI control circuitry 122 during the course of performing its functionalities. The communication interface 114 may be embodied as any device or means embodied in circuitry, hardware, a computer program product comprising computer readable program instructions stored on a computer readable medium (e.g., the memory 112) and executed by a processing device (e.g., the processor 110), or a combination thereof that is configured to receive and/or transmit data from/to another computing device. In some example
embodiments, the communication interface 114 is at least partially embodied as or otherwise controlled by the processor 110. In this regard, the communication interface 114 may be in communication with the processor 110, such as via a bus. The communication interface 114 may include, for example, an antenna, a transmitter, a receiver, a transceiver and/or supporting hardware or software for enabling communications with one or more remote computing devices. In embodiments wherein the apparatus 102 is embodied as a mobile terminal 10, the communication interface 114 may be embodied as or comprise the transmitter 14 and receiver 16 (shown in FIG. 2). The communication interface 114 may be configured to receive and/or transmit data using any protocol that may be used for communications between computing devices. In this regard, the communication interface 114 may be configured to receive and/or transmit data using any protocol that may be used for transmission of data over a wireless network, wireline network, some combination thereof, or the like by which the apparatus 102 and one or more computing devices may be in communication. As an example, the communication interface 114 may be configured to receive and/or otherwise access content (e.g., web page content, streaming media content, and/or the like) over a network from a server or other content source. The communication interface 114 may additionally be in communication with the memory 112, user interface 116, sensor 118, camera 119, and/or UI control circuitry 122, such as via a bus. The user interface 116 may be in communication with the processor 110 and configured to receive an indication of a user input and/or to provide an audible, visual, mechanical, or other output to a user. As such, the user interface 116 may include, for example, a keyboard, a mouse, a joystick, a display, a touch screen display, a microphone, a speaker, and/or other input/output mechanisms. In embodiments wherein the apparatus 102 is embodied as a mobile terminal 10, the user interface 116 may be embodied as or comprise the user input interface, such as the display 28 and keypad 30 (shown in FIG. 2). The user interface 116 may be in communication with the memory 112, communication interface 114, sensor 118, camera 119, and/or UI control circuitry 122, such as via a bus. The UI control circuitry 122 may be embodied as various means, such as circuitry, hardware, a computer program product comprising computer readable program instructions stored on a computer readable medium (e.g., the memory 112) and executed by a processing device (e.g., the processor 110), or some combination thereof and, in some embodiments, is embodied as or otherwise controlled by the processor 110. In some example embodiments wherein the UI control circuitry 122 is embodied separately from the processor 110, the UI control circuitry 122 may be in communication with the processor 110. The UI control circuitry 122 may further be in communication with one or more of the memory 112, communication interface 114, or user interface 116, such as via a bus. In some embodiments, the UI control circuitry 122 may be configured to aid in identification of user input, such as determination of the position of user input for a touch display.
In some embodiments, the apparatus 102 may include a sensor 118 that is in communication with the processor 110. The sensor 118 may be configured to determine certain properties or circumstances of the apparatus 102. For example, the sensor 118 may be configured to detect movement of the apparatus 102, such as movement of the apparatus 102 in a sweeping motion. In particular, the sensor 118 may be configured to detect when the apparatus is swept across a user's face. For example, in some embodiments, the sensor 118 may be an accelerometer or similar device for detecting motion or the like.
In some embodiments, the apparatus 102 may include an image capturing device, such as a camera 119, that is in communication with the processor 110. In embodiments wherein the apparatus 102 is embodied as a mobile terminal 10, the image capturing device may comprise the camera 36 (shown in FIG. 2). The camera 119 may be configured to capture an image. For example, a user may input an instruction into the apparatus 102 that may cause the processor 110 to instruct the camera 119 to capture/take an image. In some embodiments, the processor 110 may instruct the camera 119 to capture an image without direct user input. In a non-limiting example, the processor 110 may instruct the camera 119 to capture an image of an object (e.g., a user's face), such as in response to detecting movement of the apparatus 102 from the sensor 118. Additionally or alternatively, the processor 110 may instruct the camera 119 to take more than one image as the apparatus 102 is being moved (e.g., as the apparatus 102 is being swept across a user's face).
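A minimal sketch of such sensor-triggered capture follows; `sensor.acceleration()` and `camera.capture()` are hypothetical device APIs standing in for the unspecified interfaces of the sensor 118 and camera 119:

```python
import time

def capture_during_sweep(sensor, camera, n_images=5, motion_threshold=1.5):
    """Wait for the sweep to begin, then grab several frames so that
    different views of the object are recorded."""
    while sensor.acceleration() < motion_threshold:  # hypothetical sensor API
        time.sleep(0.05)
    frames = []
    for _ in range(n_images):
        frames.append(camera.capture())              # hypothetical camera API
        time.sleep(0.1)                              # spread captures across the sweep
    return frames
```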
It is often desirable to protect devices, such as apparatus 102, by preventing unauthorized access to certain functionality. As such, some security applications require passwords for authentication purposes before permitting access to the device. As noted above, textual or other passwords may be difficult for an authorized user to remember or input (e.g., the password may be long or require odd character combinations).
Face recognition technology can be useful for such security applications and may provide enhanced security while limiting the need to remember or input difficult passwords. In fact, face recognition technology may provide even better security, as any user can enter the correct password, whereas only a user with the same face may gain authorized access to functionality of the apparatus 102. In some embodiments, similar recognition technology may be utilized for object recognition. As such, security applications may require images of a certain or specific object to enable secure functionality. For example, a user's driver's license, personal ring, medallion, or other object may be used to lock or unlock secured functionality. Thus, as described herein, embodiments of the present invention may be used with face recognition and/or object recognition technology, and reference herein to face recognition technology or object recognition technology is not meant to be limited to one or the other.

While use of object recognition technology for facilitating locking/unlocking of secured functionality may be a desirable alternative to text-based passwords in at least some circumstances, current capabilities of some mobile computing devices (e.g., apparatus 102) may limit the effectiveness of object recognition as a security application. For example, object recognition technology requires a large amount of computing power, which may be highly demanding for a mobile computing device with limited computing resources. Additionally, current mobile computing devices are often limited to one camera (e.g., camera 119), which is further limited to capturing 2-dimensional ("2d") images. Research has shown that 2d object/face recognition technology is less effective and less accurate than 3-dimensional ("3d") object/face recognition technology. Thus, the limitation of only being able to capture 2d images provides for a less effective object recognition application that is likely to have a greater risk of improper verification, thereby decreasing security for the desired functionality.
As such, embodiments of the present invention provide enhanced methods and apparatus for facilitating locking and unlocking of secured functionality through object recognition. While description of some embodiments contained herein includes examples for facial recognition, such example embodiments are not meant to be limited to facial recognition and may be applicable for recognition of any object. In some embodiments, the processor 110 may be configured to permit/deny access to at least a portion of the functionality of the apparatus 102. For example, the processor 110 may require verification of proper authorization to permit/enable access to at least a portion of the functionality of the apparatus 102. In some embodiments, the processor 110 may require proper facial recognition as a biometric in order to permit access to certain functionality (e.g., secured functionality). In some embodiments, the processor 110 may assign certain functionality to certain faces such that multiple users may have varying access to
functionality on the apparatus 102. Similarly, in this regard, some functionality may remain accessible without verification of proper authorization (e.g., the functionality needed to determine whether proper authorization is present to permit access to the secured functionality). As used herein, functionality may refer to access to or extension of any function of an apparatus 102 (e.g., phone functionality, computing functionality, applications, data, etc.). Additionally or alternatively, embodiments of the present invention may be useful for maintaining security of functionality stored on a server (e.g., cloud network). For example, some embodiments may be useful in securing and/or allowing access to functionality and/or information (e.g., personal bank account information) stored on a server. The processor 110 may also be configured to set up or calibrate the security application so as to define the representation or images of the object (e.g., the face that represents the user). Such a process may be similar to prompting a user to enter a password. In some
embodiments, the processor 110 may be configured to prompt a user to perform a sweeping motion relative to the apparatus 102/camera 119. For example, the processor 110 may be configured to cause the user to be prompted to sweep the apparatus 102 including the camera 119 across the user's face. In particular, the processor 110 may be configured to cause a user to be prompted to move the apparatus 102 (e.g., sweep the camera 119) across the user's face to permit capturing/taking of images of the user's face for creating a secure face
image/representation password. In some embodiments, sweeping the apparatus/camera across the user's face may include holding the apparatus/camera generally spaced apart from the user's face and moving the apparatus/camera from side to side (e.g., similar to moving a hand in front of a face). In some embodiments, the camera 119 may be physically separate from the apparatus 102 such that the user may sweep the camera 119 across the user's face. For example, the camera 119 may be in communication with the apparatus 102 (e.g., wired, wirelessly, or later connected) such that a user may sweep the camera 119 without the apparatus 102 across their face. As such, some embodiments of the present invention described herein may utilize a camera that captures images of an object and transmits those images to a remote processor (e.g., processor 110). The remote processor may receive those images and/or image features and perform verification and/or other functionality described herein to determine if a user should be permitted access to secure functionality at the remote processor (e.g., a server, cloud, etc.).
Additionally or alternatively, the processor 110 may be configured to prompt a user to perform a sweeping motion such as sweeping the object across the camera 119 of the apparatus 102. For example, a person may move their face from side to side in front of the camera 119, such that at least two images can be taken from differing views of the user's face. In another embodiment, the user may maneuver their face in a sweeping motion relative to the camera 119. As such, in some embodiments, at least two images may be captured in response to at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus. Likewise, in some embodiments, a user can be prompted to perform a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
In some embodiments, the processor 110 may be configured to cause at least one arrow or other directional indicator to be displayed upon, for example, the user interface 116 to indicate how the user is to move the apparatus 102 and/or camera 119. Additionally or alternatively, the user interface 116 may be configured, by the processor, to display the image the camera 119 is currently focused on. In such an embodiment, the processor 110 may be configured to cause a box or other bounded shape to be displayed around the user's face (if it appears in the display) as the user sweeps the apparatus 102 across their face, thereby indicating proper alignment of the apparatus 102. For example, the processor 110 may be configured to use applications such as Face Tracker to aid the user in properly aligning the camera 119 during the sweeping motion. The processor 110 may be configured to provide any type of indication which may aid/instruct a user to properly sweep the apparatus 102/camera 119 across their face. For example, the processor 110 may cause the display of an image of a person holding the apparatus 102 and sweeping the apparatus 102 across their face along with an arrow (e.g., see FIG. 4). In a non-limiting example, with reference to FIG. 3, a mobile device 200 (e.g., apparatus 102) may include a display 210. During security calibration to determine the secure face representation password, the display 210 on the mobile device 200 may read "PHONE SECURITY CALIBRATION." Additionally, the display 210 may instruct the user to perform an action to assist in calibration, such as "SWEEP ACROSS FACE." In the depicted mobile device 200, the camera 220 is located on the opposite face of the display 210. As such, simple instructions may be difficult to effectively follow, since the camera 220 would be facing the wrong direction if the user simply swept the mobile device 200 across their face with the display 210 facing them. Thus, as noted above, additional indication may be beneficial. In some embodiments, the camera 220 may be configured on the front of the device 200. In other embodiments, the camera 220 may be configured separate from the device 200. As such, indication customized to the specific device 200 may be beneficial for aiding a user in facilitating calibration of the security application.
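As a non-limiting sketch of the alignment aid described above, the preview loop below draws a box around any detected face; OpenCV's bundled Haar cascade detector is used here only as a stand-in for the "Face Tracker" application named in the text (an assumption, not the disclosed implementation):

```python
# Sketch: live preview with a bounding box around the user's face while
# the user performs the sweeping motion. camera index 0 stands in for
# camera 119; press 'q' to end the preview.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # box indicates proper alignment of the apparatus during the sweep
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("PHONE SECURITY CALIBRATION - SWEEP ACROSS FACE", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```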
The processor 110 may be configured to instruct the camera 119 to capture/take at least one image of the object (e.g., the user's face), such as in response to the apparatus 102 being swept across the object. In exemplary embodiments, the processor 110 may instruct the camera 119 to capture at least two images of the object. Additionally, as the apparatus 102/camera 119 is swept across the object, different views of the object appear for the camera 119. As such, in some embodiments, the processor 110 may be configured to instruct the camera 119 to take images from different views of the object. For an example in which the camera is moved right to left across a user's face, the processor 110 may instruct/cause the camera 119 to take an image at the beginning of the sweep (e.g., the right side of the user's face), the middle of the sweep (e.g., the front of the user's face), and the end of the sweep (e.g., the left side of the user's face). In some embodiments, the processor 110 may be configured to start taking pictures when the sensor 118 detects movement of the apparatus 102. Additionally or alternatively, the processor 110 may be configured to start taking pictures when the sensor 118 detects movement of the apparatus 102 in a desired sweeping motion. In some embodiments, the processor 110 may instruct/cause the camera 119 to capture a plurality of images, such as at a pre-defined frame rate, until the camera
119/apparatus 102 ceases movement. The determination as to when to take a picture (e.g., capture an image) may depend on a number of pre-determined variables (e.g., number of desired images, timing of the images, every 0.5 seconds, etc.).
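A minimal sketch of this capture loop, assuming hypothetical `grab_frame()` and `read_accel()` hooks standing in for camera 119 and sensor 118 (the 0.5-second interval and the stillness threshold are illustrative pre-determined variables):

```python
# Sketch: capture frames at a fixed interval while motion continues,
# stopping when the sweep ends. Both callables are hypothetical hooks.
import time

def capture_during_sweep(grab_frame, read_accel,
                         interval=0.5, still_threshold=0.3):
    """Collect images until the accelerometer reports the device is still."""
    images = []
    while True:
        images.append(grab_frame())               # one image per interval
        time.sleep(interval)
        if abs(read_accel()) < still_threshold:   # movement has ceased
            break
    return images
```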
FIG. 4 illustrates an example of a user sweeping the mobile device 200 across their face. In the depicted embodiment, the mobile device 200 includes a camera 220 that may take/capture at least two images of a user's face 250 as the user sweeps the apparatus across their face (e.g., along line A). In some embodiments, the captured images 230 may be displayed on a user interface/display 210. Referring again to FIG. 1, the processor 110 may be configured to receive an image and/or image feature, such as from the camera 119. In some embodiments, the processor 110 may be configured to receive the at least two images of the object (e.g., the user's face) taken by the camera 119. The processor 110 may also be configured to store the received images in the memory 112. As noted above, in some embodiments, the received images are 2d images of the user's face. The processor 110 may be configured to store at least one of the received 2d images as a secure 2d image password.
In some embodiments, the processor 110 may be configured to create/develop a 2d representation (e.g., 2d texture map) of the object from the at least two received 2d images. As noted above, in some embodiments, as the camera 119 is moved (e.g., swept across) relative to the object (e.g., the user's face) or vice versa, the camera 119 is configured to take at least two images that have different views of the object. As such, the processor 110 may be configured to receive at least two images that correspond to a different view of the object. These different views aid in creation of a more accurate 2d representation of the object. For example, the front view, right side view, and left side view of a user's face can be combined by the processor 110 to create a 2d representation of the user's face. Generally, the greater number of images from differing views correlates to a more accurate 2d representation of the object. This developed 2d representation can be stored by the processor 110 as a secure 2d representation password in the memory 112.
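The disclosure leaves the construction of the 2d texture map unspecified; as a crude, non-limiting stand-in, the differing views could simply be composited side by side (a real implementation would align, warp, and blend the views):

```python
# Illustrative stand-in for a 2d "texture map": resize each view to a
# common height and concatenate left/front/right views side by side.
import numpy as np

def develop_2d_representation(views):
    """views: list of HxWx3 uint8 arrays ordered left-to-right."""
    h = min(v.shape[0] for v in views)
    trimmed = [v[:h, :, :] for v in views]   # crude common height
    return np.concatenate(trimmed, axis=1)   # side-by-side composite

left = np.zeros((120, 90, 3), dtype=np.uint8)
front = np.zeros((128, 100, 3), dtype=np.uint8)
right = np.zeros((120, 90, 3), dtype=np.uint8)
texture_map = develop_2d_representation([left, front, right])
print(texture_map.shape)  # (120, 280, 3)
```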
In some embodiments, the processor 110 may be configured to create/develop a 3d representation of the object (e.g., the user's face) from the at least two received 2d images. As noted above, in some embodiments, as the camera 119 is moved (e.g., swept across) relative to the object (e.g., the user's face) or vice versa, the camera 119 is configured to take at least two images that have different views of the object. As such, the processor 110 may be configured to receive at least two images that correspond to a different view of the object. These different views aid in creation of a more accurate 3d representation of the object. For example, the front view, right side view, and left side view of a user's face can be combined by the processor 110 to create a 3d representation of the user's face. Generally, the greater number of images from differing views correlates to a more accurate 3d representation. This developed 3d representation can be stored by the processor 110 as a secure 3d representation password in the memory 112. In some embodiments, 3d may include 3 or more dimensions (e.g., include hyper-dimensions).
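The disclosure does not specify a 3d reconstruction method; one standard computer-vision technique that could serve is linear (DLT) triangulation of matched feature points across two views, assuming the relative camera poses during the sweep are known or estimated. The intrinsics, poses, and pixel coordinates below are purely illustrative:

```python
# Sketch: recover a 3d point from one correspondence across two views of
# the sweep, given each view's 3x4 projection matrix (assumed known).
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """DLT triangulation of a single correspondence; returns a 3d point."""
    A = np.vstack([pt1[0] * P1[2] - P1[0],
                   pt1[1] * P1[2] - P1[1],
                   pt2[0] * P2[2] - P2[0],
                   pt2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # null vector of A, homogeneous point
    return X[:3] / X[3]

K = np.diag([800.0, 800.0, 1.0]); K[0, 2] = 320; K[1, 2] = 240
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # front view
P2 = K @ np.hstack([np.eye(3), np.array([[0.1], [0.0], [0.0]])])  # shifted view
print(triangulate(P1, P2, (320.0, 240.0), (400.0, 240.0)))  # ~[0. 0. 1.]
```

Repeating this over many matched points across the swept views yields a point cloud that could serve as the developed 3d representation.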
The processor 110 may further be configured to determine specific functionality to which access is permitted or enabled for each secure 2d image, 2d representation, and/or 3d representation password. In some embodiments, the processor 110 may be configured to link the secured functionality to the 2d image, 2d representation, and/or 3d representation password in the memory 112, such as by associating each with the same user and, in turn, with the secured functionality accessible by the user. Additionally or alternatively, a user of the apparatus 102 may be prompted to determine which functionality is permitted for the corresponding image/representation password. The selected functionality may be stored and linked with the image/representation password in the memory 112 such that when the stored image/representation password is determined to sufficiently match the currently inputted image/representation, the processor 110 may determine which functionality to permit/enable access to.
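A non-limiting sketch of such a linkage, with the identifiers and functionality sets invented for illustration, might map each stored representation password to the functionality it unlocks:

```python
# Sketch: link stored representation passwords to permitted functionality.
# Keys and functionality names are illustrative only.
SECURED = {
    "alice_3d_rep": {"phone", "email", "banking_app"},
    "bob_3d_rep":   {"phone"},               # a more limited grant
}

def permitted_functionality(matched_password_id):
    """Return the functionality linked to a sufficiently matched password."""
    return SECURED.get(matched_password_id, set())

print(permitted_functionality("bob_3d_rep"))  # {'phone'}
```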
Once the security application of the apparatus 102 has been calibrated (e.g., the 2d image, 2d representation, and/or 3d representation password created and stored), the processor 110 may be configured to disable/deny access to certain secure functionality (e.g., the apparatus 102 locks). In some embodiments, once the apparatus 102 is locked, the apparatus 102 may be unlocked through authentication.
In order to commence the authentication process, the processor 110 may be configured to cause a user to be prompted to unlock the secured functionality. In some embodiments, the processor 110 may be configured to cause a user to be prompted to perform a sweeping motion of at least one of sweeping the apparatus 102/camera 119 across the object or sweeping the object across the apparatus 102/camera 119. Such a sweeping motion may be consistent with the description provided above with respect to calibration. For example, the processor 110 may be configured to cause the user to be prompted to sweep the apparatus 102 including the camera 119 across the user's face. Moreover, as noted above, the processor 110 may be configured to prompt a user to perform a sweeping motion such as sweeping the object across the camera 119 of the apparatus 102. For example, a person may move their face from side to side in front of the camera 119, such that at least two images can be taken from differing views of the user's face. In another embodiment, the user may maneuver their face in a sweeping motion relative to the camera 119. As such, in some embodiments, at least two images may be captured in response to at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus. Likewise, a user can be prompted to perform a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus. As noted above with respect to the calibration process, the processor 110 may also be configured to cause instructions or other indications to a user to be displayed to aid a user to properly sweep the apparatus 102/camera 119 and/or object. For example, the processor 110 may be configured to cause at least one arrow to be displayed to indicate the direction in which the user is to sweep the apparatus 102 across their face. Similarly, in some
embodiments, the user interface 116 may be configured to display the image on which the camera 119 is currently focused. In such an embodiment, the processor 110 may be configured to cause a box or other bounded shape to be displayed around the user's face (if it appears in the display) as the user sweeps the apparatus 102 across their face, thereby indicating proper alignment of the apparatus 102. For example, the processor 110 may be configured to use applications such as Face Tracker to aid the user in properly aligning the camera during sweeping. The processor 110 may be configured to provide any type of indication which may aid/instruct a user to sweep the apparatus
102/camera 119 across their face. For example, the processor 110 may display an image of a person holding the apparatus 102 and sweeping the apparatus 102 across their face along with an arrow (e.g., see FIG. 4).
FIG. 5 illustrates example instructions to prompt a user to sweep the apparatus/camera across their face to unlock the secured functionality of the mobile device 200. In the depicted embodiment, the display 210 on the mobile device 200 reads "PHONE LOCKED," which indicates to the user that at least some secure functionality of the mobile device 200 is disabled. Additionally, the display 210 instructs the user to perform an action in an attempt to unlock the mobile device's secure functionality (e.g., "SWEEP ACROSS FACE TO UNLOCK"). In the depicted mobile device 200, the camera 220 is located on the opposite face of the display 210. As such, simple instructions may be difficult to effectively follow, since the camera 220 would be facing the wrong direction if the user simply swept the mobile device 200 across their face with the display 210 facing them. Thus, as noted above, additional indication may be beneficial. In some embodiments, the camera 220 may be configured on the front of the device 200. In other embodiments, the camera 220 may be configured separate from the device 200. As such, indication customized to the specific device 200 may be beneficial for aiding a user in facilitating authentication with the security application.
As noted above with respect to calibration, the processor 110 may be configured to instruct the camera 119 to capture/take at least one image of the object (e.g., the user's face), such as when the apparatus 102 is swept across the object. In exemplary embodiments, the processor 110 may instruct the camera 119 to capture at least two images of the object. Additionally, as the apparatus 102/camera 119 is swept across the object, different views of the object appear for the camera 119. As such, in some embodiments, the processor 110 may be configured to instruct the camera 119 to take images from different views of the user's face. For an example in which the camera is moved right to left across a user's face, the processor 110 may instruct/cause the camera 119 to take an image at the beginning of the sweep (e.g., the right side of the user's face), the middle of the sweep (e.g., the front of the user's face), and the end of the sweep (e.g., the left side of the user's face). In some embodiments, the processor 110 may be configured to start taking pictures when the sensor 118 detects movement of the apparatus 102. Additionally or alternatively, the processor 110 may be configured to start taking pictures when the sensor 118 detects movement of the apparatus 102 in a desired sweeping motion. In some embodiments, the processor 110 may
instruct/cause the camera 119 to capture a plurality of images, such as at a pre-defined frame rate, until the camera 119/apparatus 102 ceases movement. The determination as to when to take a picture (e.g., capture an image) may depend on a number of pre-determined variables (e.g., number of desired images, timing of the images, every 0.5 seconds, etc.).
FIG. 4 illustrates an example of a user sweeping the mobile device 200 across their face. In the depicted embodiment, the mobile device 200 includes a camera 220 that may take/capture at least two images of a user's face 250 as the user sweeps the apparatus across their face (e.g., along line A). In some embodiments, the captured images 230 may be displayed on a user interface/display 210.
Referring again to FIG. 1, as noted above, the processor 110 may be configured to receive an image and/or image feature, such as from the camera 119. In some embodiments, the processor 110 may be configured to receive the at least two images of the object (e.g., the user's face) taken by the camera 119. In some embodiments, the processor 110 may also be configured to store the received images in the memory 112. As noted above, in some embodiments, the received images are 2d images of the user's face. Additionally, the processor 110 may be configured to store at least one of the received images as a received 2d image.
In some embodiments, the processor 110 may be configured to create/develop a 2d representation (e.g., 2d texture map) of the object from the at least two received 2d images. As noted above, in some embodiments, as the camera 119 is moved (e.g., swept across) relative to the object (e.g., the user's face) or vice versa, the camera 119 is configured to take at least two images that have different views of the object. As such, the processor 110 may be configured to receive at least two images that correspond to a different view of the object. These different views aid in creation of a more accurate 2d representation of the object. For example, the front view, right side view, and left side view of a user's face can be combined by the processor 110 to create a 2d representation of the user's face. Generally, the greater number of images from differing views correlates to a more accurate 2d representation of the object. This developed 2d representation can be stored by the processor 110 as the developed 2d representation in the memory 112.

In some embodiments, the processor 110 may be configured to create/develop a 3d representation of the object (e.g., the user's face) from the at least two received 2d images. As noted above, in some embodiments, as the camera 119 is moved (e.g., swept across) relative to the object (e.g., the user's face) or vice versa, the camera 119 is configured to take at least two images that have different views of the object. As such, the processor 110 may be configured to receive at least two images that correspond to a different view of the object. These different views aid in creation of a more accurate 3d representation of the object. For example, the front view, right side view, and left side view of a user's face can be combined to create a 3d representation of the user's face. Generally, the greater number of images from differing views correlates to a more accurate 3d representation of the user's face. This developed 3d representation can be stored by the processor 110 in the memory 112 as the received/developed 3d representation.
In some embodiments, the processor 110 may be configured to compare at least one received 2d image to the at least one stored 2d image (e.g., the at least one secure 2d image password). Additionally, in some embodiments, the processor 110 may be configured to determine when at least one of the received 2d image is within a predefined similarity threshold of (e.g., sufficiently similar, matches, etc.) the stored/secure 2d image, such as with 2d object/face recognition technology. The determination as to when the received 2d image is within a predefined similarity threshold of the stored 2d image may be customizable and may depend on a number of factors (e.g., the computing power of the processor 110, level of desired security, computation time, etc.). In an instance in which at least one of the received 2d image is within a predefined similarity threshold of at least one of the stored 2d image, the processor 110 may be configured to permit/enable access to secured functionality of the apparatus 102.
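The disclosure does not fix a particular 2d recognition technology or similarity measure; purely as an illustration, a normalized cross-correlation score compared against a predefined threshold captures the shape of the test:

```python
# Sketch: test whether a received 2d image is within a predefined
# similarity threshold of the stored secure 2d image password.
# Normalized cross-correlation is an illustrative measure only.
import numpy as np

def within_threshold(received, stored, threshold=0.9):
    a = (received - received.mean()) / (received.std() + 1e-9)
    b = (stored - stored.mean()) / (stored.std() + 1e-9)
    score = float((a * b).mean())   # ~1.0 for identical images, ~0 otherwise
    return score >= threshold

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(within_threshold(img, img))                   # True: same image
print(within_threshold(img, rng.random((64, 64))))  # False: unrelated image
```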
In some embodiments, the processor 110 may be configured to compare the developed 2d representation to a stored 2d representation (e.g., the secure 2d representation password). Additionally, in some embodiments, the processor 110 may be configured to determine when the developed 2d representation is within a predefined similarity threshold of (e.g., sufficiently similar to, matches, etc.) the stored/secure 2d representation, such as with 2d object/face recognition technology. The determination as to when the developed 2d representation is within a predefined similarity threshold of the stored 2d representation may be customizable and may depend on a number of factors (e.g., the computing power of the processor 110, level of desired security, computation time, etc.). In an instance in which the developed 2d representation is within a predefined similarity threshold of the stored 2d representation, the processor 110 may be configured to permit/enable access to secured functionality of the apparatus 102.

In some embodiments, the processor 110 may be configured to compare the developed 3d representation to a stored 3d representation (e.g., the secure 3d representation password). Additionally, in some embodiments, the processor 110 may be configured to determine when the developed 3d representation is within a predefined similarity threshold of (e.g., sufficiently similar to, matches, etc.) the stored/secure 3d representation, such as with 3d object/face recognition technology. The determination as to when the developed 3d representation is within a predefined similarity threshold of the stored 3d representation may be customizable and may depend on a number of factors (e.g., the computing power of the processor 110, level of desired security, computation time, etc.). In an instance in which the developed 3d representation is within a predefined similarity threshold of the stored 3d representation, the processor 110 may be configured to permit/enable access to secured functionality of the apparatus 102.
Some embodiments of the present invention may utilize different combinations of the comparison functionality as described above to provide a robust security application. In some embodiments, the processor 110 may be configured to permit access to secure functionality in an instance in which both the developed 3d representation is within a predefined similarity threshold of the stored 3d representation and the 2d representation is within a second predefined similarity threshold of the stored 2d representation. Additionally or alternatively, the processor may be configured to permit access to secure functionality in an instance in which both the developed 3d representation is within a predefined similarity threshold of the stored 3d representation and at least one of the received 2d images is within a second predefined similarity threshold of at least one stored 2d image.
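The conjunctive policy described above reduces to requiring both comparisons to pass their respective thresholds; a minimal sketch (the scores and threshold values are illustrative assumptions):

```python
# Sketch: access requires both the 3d comparison and the 2d comparison
# to pass their respective predefined similarity thresholds.
def permit_access(score_3d, score_2d,
                  threshold_3d=0.9, second_threshold_2d=0.85):
    return score_3d >= threshold_3d and score_2d >= second_threshold_2d

print(permit_access(0.95, 0.90))  # True: both comparisons pass
print(permit_access(0.95, 0.70))  # False: 2d comparison fails
```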
Similarly, some embodiments may only employ a second comparison function if a first comparison function does not produce a comparison that is within the predefined similarity threshold (e.g., the match is not quite close enough, confidence level of similarity score is low, etc.). In some embodiments, the processor 110 may be configured to compare a developed multi-dimensional representation (e.g., 2d representation, 3d representation, hyper-dimensional representation, etc.) to a stored multi-dimensional representation and in an instance in which the developed multi-dimensional representation is not within a predefined similarity threshold of the stored multi-dimensional representation, the processor 110 may be configured to then compare a developed second multi-dimensional representation with a stored second multi-dimensional representation to determine if they are within a second predefined similarity threshold before permitting access to secure functionality. For example, in some embodiments, the processor 110 may be configured to compare the developed 3d representation to the stored 3d representation and in an instance in which the developed 3d representation is not within a predefined similarity threshold of the stored 3d representation, the processor 110 may be configured to then compare the developed 2d representation with the stored 2d representation to determine if they are within a second predefined similarity threshold before permitting access to secure functionality. Additionally or alternatively, the processor 110 may be configured to compare the developed 3d representation to the stored 3d representation and in an instance in which the developed 3d representation is within a lesser predefined similarity threshold of the stored 3d representation (e.g., it is closely similar, but not quite enough to permit access to secure functionality), then the processor 110 may be further configured to compare the developed 2d representation with the stored 2d
representation to determine if they are within a predefined similarity threshold before permitting access to secure functionality. Similarly, in some embodiments, the processor 110 may be configured to compare the developed 2d representation to the stored 2d representation and in an instance in which the developed 2d representation is not within a predefined similarity threshold of the stored 2d representation, the processor 110 may be configured to then compare the developed 3d representation with the stored 3d representation to determine if they are within a second predefined similarity threshold before permitting access to secure functionality. Additionally or alternatively, the processor 110 may be configured to compare the developed 2d representation to the stored 2d representation and in an instance in which the developed 2d representation is within a lesser predefined similarity threshold of the stored 2d representation (e.g., it is closely similar, but not quite enough to permit access to secure functionality), then the processor 110 may be further configured to compare the developed 3d representation with the stored 3d representation to determine if they are within a predefined similarity threshold before permitting access to secure functionality.
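The cascaded fallback logic of this passage can be sketched as follows, with an illustrative "lesser" 3d threshold defining the borderline band in which the 2d comparison is consulted (all threshold values are assumptions):

```python
# Sketch: try the 3d comparison first; fall back to the 2d comparison
# only when the 3d match is close but not conclusive.
def permit_access_cascaded(score_3d, score_2d,
                           threshold_3d=0.90,  # conclusive 3d match
                           lesser_3d=0.75,     # "closely similar" band
                           threshold_2d=0.90):
    if score_3d >= threshold_3d:
        return True                       # 3d alone is sufficient
    if score_3d >= lesser_3d:
        return score_2d >= threshold_2d   # borderline 3d: consult 2d
    return False                          # 3d comparison clearly failed

print(permit_access_cascaded(0.95, 0.50))  # True via 3d alone
print(permit_access_cascaded(0.80, 0.95))  # True via 2d fallback
print(permit_access_cascaded(0.60, 0.99))  # False
```

The mirror-image cascade (2d first, 3d as the fallback) follows by swapping the roles of the two scores.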
Embodiments of the present invention provide methods, apparatus and computer program products for facilitating unlocking and locking secure functionality of a mobile device through 2d and/or 3d object recognition. Various examples of the operations performed in accordance with embodiments of the present invention will now be provided with reference to FIGS. 6-10. In this regard, FIG. 6 illustrates a flowchart of an example method 300 for permitting access to secure functionality through 2d and 3d object recognition. Initially, at operation 302, the method comprises causing a user to be prompted to perform a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus. The processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 302. In some embodiments, the method may further comprise detecting movement of the
apparatus/camera across the object at operation 304. The processor 110, user interface 116, sensor 118, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 304. If movement is not sensed, the method may return to operation 302 to continue to prompt the user to perform the sweeping motion. However, if movement is detected, then the method may comprise causing at least two images of the object to be captured at operation 306. The processor 110, user interface 116, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 306. Next, the method comprises receiving the at least two images at operation 308. The processor 110 may, for example, provide means for performing operation 308. At operation 310, the method comprises developing, based at least in part on the at least two images, a 2d representation of the object. The processor 110 may, for example, provide means for performing operation 310. Then, the method further comprises comparing the developed 2d representation with a stored 2d representation (e.g., the secure 2d representation password) at operation 312. The processor 110 may, for example, provide means for performing operation 312. The method comprises determining if the developed 2d representation is within a predefined similarity threshold of the stored 2d representation at operation 314. The processor 110 may, for example, provide means for performing operation 314. If the developed 2d representation and the stored 2d representation are within a predefined similarity threshold, then the method may comprise permitting access to the secured functionality at operation 316. The processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 316. In some embodiments, permitting access may comprise displaying a message to indicate granting of access.
If, however, the developed 2d representation is not within a predefined similarity threshold of the stored 2d representation, then the method may further comprise developing, based at least in part on the at least two images, a 3d representation of the object at operation 317. The processor 110 may, for example, provide means for performing operation 317. Then, the method may comprise comparing the developed 3d representation with a stored 3d representation (e.g., the secure 3d representation password) at operation 318, and determining if they are within a predefined similarity threshold at operation 320. The processor 110 may, for example, provide means for performing operations 318 and 320. If the developed 3d representation is not within a predefined similarity threshold of the stored 3d representation, then the method may comprise denying access to the secured functionality at operation 322. The processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 322. However, if the developed 3d representation is within a predefined similarity threshold of the stored 3d representation, then the method may comprise permitting access to the secured functionality at operation 324. The processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 324. In such a way, some embodiments of the present invention provide methods, apparatuses, and computer program products that may use both 2d and 3d object recognition technology to provide a robust and effective security system, such as for apparatus 102 (shown in FIG. 1).
In this regard, FIG. 7 illustrates a flowchart of an example method 400 for permitting access to secure functionality through 3d and 2d object recognition. Initially, at operation 402, the method comprises causing a user to be prompted to perform a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus. The processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 402. In some embodiments, the method may further comprise detecting movement of the
apparatus/camera across the object at operation 404. The processor 110, user interface 116, sensor 118, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 404. If movement is not sensed, the method may return to operation 402 to continue to prompt the user to perform the sweeping motion. However, if movement is detected, then the method may comprise causing at least two images of the object to be captured at operation 406. The processor 110, user interface 116, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 406. Next, the method comprises receiving the at least two images at operation 408. The processor 110 may, for example, provide means for performing operation 408.
At operation 410, the method comprises developing, based at least in part on the at least two images, a 3d representation of the object. The processor 110 may, for example, provide means for performing operation 410. Then, the method further comprises comparing the developed 3d representation with a stored 3d representation (e.g., the secure 3d representation password) at operation 412. The processor 110 may, for example, provide means for performing operation 412. The method comprises determining if the developed 3d representation is within a predefined similarity threshold of the stored 3d representation at operation 414. The processor 110 may, for example, provide means for performing operation 414. If the developed 3d representation and the stored 3d representation are within a predefined similarity threshold, then the method may comprise permitting access to the secured functionality at operation 416. The processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 416. In some embodiments, permitting access may comprise displaying a message to indicate granting of access.
If, however, the developed 3d representation is not within a predefined similarity threshold of the stored 3d representation, then the method may further comprise developing, based at least in part on the at least two images, a 2d representation of the object at operation 417. The processor 110 may, for example, provide means for performing operation 417. Then, the method may comprise comparing the developed 2d representation with a stored 2d representation (e.g., the secure 2d representation password) at operation 418, and determining if they are within a predefined similarity threshold at operation 420. The processor 110 may, for example, provide means for performing operations 418 and 420. If the developed 2d representation is not within a predefined similarity threshold of the stored 2d representation, then the method may comprise denying access to the secured functionality at operation 422. The processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 422. However, if the developed 2d representation is within a predefined similarity threshold of the stored 2d representation, then the method may comprise permitting access to the secured functionality at operation 424. The processor 110, user interface 116, and/or UI control circuitry 122 may, for example, provide means for performing operation 424.
FIG. 8 illustrates a flowchart of the operations for facilitating permission of access to secure functionality through object recognition according to another embodiment 500. As described herein, and consistent with the disclosure above, embodiments of the present invention may be utilized for developing and comparing representations of an object/face in multiple dimensions (e.g., 2-dimensions, 3-dimensions, hyper-dimensions, etc.). Initially, at least two images of an object may be received at operation 502. In some embodiments, at least two of the at least two images correspond to a different view of the object. The processor 110, user interface 116, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 502. The method further comprises developing, based at least in part on the at least two images, a multi-dimensional representation of the object at operation 504. The processor 110 may, for example, provide means for performing operation 504. The developed multi-dimensional representation and a stored multi-dimensional representation are compared at operation 506. The processor 110 may, for example, provide means for performing operation 506. Then, at operation 508, access to secured functionality is permitted in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation. The processor 110, user interface 116 and/or UI control circuitry 122 may, for example, provide means for performing operation 508.
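Operations 502-508 can be sketched as one generic flow in which the development and comparison steps are supplied as callables, so the same skeleton covers 2d, 3d, or hyper-dimensional representations (all names and values below are illustrative):

```python
# Sketch of the generic flow of FIG. 8 (operations 502-508).
def method_500(receive_images, develop, compare, stored_rep, threshold):
    images = receive_images()                # operation 502
    developed = develop(images)              # operation 504
    score = compare(developed, stored_rep)   # operation 506
    return score >= threshold                # operation 508: permit access

unlocked = method_500(
    receive_images=lambda: ["left", "front", "right"],
    develop=lambda imgs: "rep:" + "+".join(imgs),
    compare=lambda a, b: 1.0 if a == b else 0.0,
    stored_rep="rep:left+front+right",
    threshold=0.9)
print(unlocked)  # True
```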
FIG. 9 illustrates a flowchart of the operations for facilitating permission of access to secure functionality through object recognition according to another embodiment 600. Initially, at least two images of an object may be received at operation 602. In some embodiments, at least two of the at least two images correspond to a different view of the object. The processor 110, user interface 116, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 602. The method further comprises developing, based at least in part on the at least two images, a 3d representation of the object at operation 604. The processor 110 may, for example, provide means for performing operation 604. The developed 3d representation and a stored 3d representation are compared at operation 606. The processor 110 may, for example, provide means for performing operation 606. Then, at operation 608, access to secured functionality is permitted in an instance in which the developed 3d representation is within a predefined similarity threshold of the stored 3d representation. The processor 110, user interface 116 and/or UI control circuitry 122 may, for example, provide means for performing operation 608.
FIG. 10 illustrates a flowchart of the operations for facilitating permission of access to secure functionality through object recognition according to another embodiment 700. Initially, at least two images of an object may be received at operation 702. In some embodiments, at least two of the at least two images correspond to a different view of the object. The processor 110, user interface 116, camera 119, and/or UI control circuitry 122 may, for example, provide means for performing operation 702. The method further comprises developing, based at least in part on the at least two images, a 2d representation of the object at operation 704. The processor 110 may, for example, provide means for performing operation 704. The developed 2d representation and a stored 2d representation are compared at operation 706. The processor 110 may, for example, provide means for performing operation 706. Then, at operation 708, access to secured functionality is permitted in an instance in which the developed 2d representation is within a predefined similarity threshold of the stored 2d representation. The processor 110, user interface 116 and/or UI control circuitry 122 may, for example, provide means for performing operation 708.
FIGs. 6-10 each illustrate a flowchart of a system, method, and computer program product according to an example embodiment. It will be understood that each block of the flowcharts, and combinations of blocks in the flowcharts, may be implemented by various means, such as hardware and/or a computer program product comprising one or more computer-readable mediums having computer-readable program instructions stored thereon. For example, one or more of the procedures described herein may be embodied by computer program instructions of a computer program product. In this regard, the computer program product(s) which embody the procedures described herein may be stored by one or more memory devices of a mobile terminal, server, or other computing device (for example, in the memory 112) and executed by a processor in the computing device (for example, by the processor 110). In some embodiments, the computer program instructions comprising the computer program product(s) which embody the procedures described above may be stored by memory devices of a plurality of computing devices. As will be appreciated, any such computer program product may be loaded onto a computer or other programmable apparatus (for example, an apparatus 102) to produce a machine, such that the computer program product including the instructions which execute on the computer or other programmable apparatus creates means for implementing the functions specified in the flowchart block(s). Further, the computer program product may comprise one or more computer-readable memories on which the computer program instructions may be stored such that the one or more computer-readable memories can direct a computer or other programmable apparatus to function in a particular manner, such that the computer program product comprises an article of manufacture which implements the function specified in the flowchart block(s). The computer program instructions of one or more computer program products may also be loaded onto a computer or other programmable apparatus (for example, an apparatus 102) to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus implement the functions specified in the flowchart block(s). Accordingly, blocks of the flowcharts support combinations of means for performing the specified functions. It will also be understood that one or more blocks of the flowcharts, and combinations of blocks in the flowcharts, may be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer program product(s).
The above described functions may be carried out in many ways. For example, any suitable means for carrying out each of the functions described above may be employed to carry out embodiments of the invention. In one embodiment, a suitably configured processor (for example, the processor 110) may provide all or a portion of the elements. In another embodiment, all or a portion of the elements may be configured by and operate under control of a computer program product. The computer program product for performing the methods of an example embodiment of the invention includes a computer-readable storage medium (for example, the memory 112), such as the non-volatile storage medium, and computer-readable program code portions, such as a series of computer instructions, embodied in the computer-readable storage medium.
Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the embodiments of the invention are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the invention. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different
combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the invention. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated within the scope of the invention. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims

We Claim:
1. A method comprising:
receiving at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object;
developing, based at least in part on the at least two images, a multi-dimensional representation of the object;
comparing the developed multi-dimensional representation with a stored multi-dimensional representation; and
permitting access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
2. The method of Claim 1 further comprising:
developing, based at least in part on the at least two images, a second multi-dimensional representation of the object;
comparing the developed second multi-dimensional representation with a stored second multi-dimensional representation; and
permitting access to secured functionality in an instance in which the developed second multi-dimensional representation is within a second predefined similarity threshold of the stored second multi-dimensional representation.
3. The method of any one of Claims 1-2, wherein the images comprise 2d images.
4. The method of any one of Claims 1-3, wherein the multi-dimensional representation comprises a 3d representation.
5. The method of any one of Claims 2-4, wherein the second multi-dimensional representation comprises a 2d representation.
6. The method of any one of Claims 1-3, wherein the multi-dimensional representation comprises a 2d representation.
7. The method of any one of Claims 2, 3, and 6, wherein the second multi-dimensional representation comprises a 3d representation.
8. The method of any one of Claims 1-7 further comprising causing at least two images to be captured in response to a user performing a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
9. The method of any one of Claims 1-8 further comprising causing a user to be prompted to perform a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
10. The method of Claim 9, wherein causing a user to be prompted to perform the sweeping motion comprises causing at least one arrow to be displayed.
11. The method of any one of Claims 9-10, wherein causing a user to be prompted to perform the sweeping motion comprises causing a box to be displayed around an image of the object while the user is performing the sweeping motion.
12. The method of any one of Claims 8-11, wherein the apparatus comprises a mobile computing device.
13. A computer program which, when executed, causes the method of any one of Claims 1 through 12 to be performed.
14. An apparatus comprising at least one processor and at least one memory storing computer program code, wherein the at least one memory and stored computer program code are configured, with the at least one processor, to cause the apparatus to at least:
receive at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object;
develop, based at least in part on the at least two images, a multi-dimensional representation of the object;
compare the developed multi-dimensional representation with a stored multi-dimensional representation; and
permit access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
15. The apparatus of Claim 14, wherein the at least one memory and stored computer program code are configured, with the at least one processor, to cause the apparatus to:
develop, based at least in part on the at least two images, a second multi-dimensional representation of the object;
compare the developed second multi-dimensional representation to a stored second multi-dimensional representation; and permit access to secured functionality in an instance in which the developed second multi-dimensional representation is within a second predefined similarity threshold of the stored second multi-dimensional representation.
16. The apparatus of any one of Claims 14-15, wherein the images comprise 2d images.
17. The apparatus of any one of Claims 14-16, wherein the multi-dimensional representation comprises a 3d representation.
18. The apparatus of any one of Claims 15-17, wherein the second multi-dimensional representation comprises a 2d representation.
19. The apparatus of any one of Claims 14-16, wherein the multi-dimensional representation comprises a 2d representation.
20. The apparatus of any one of Claims 15, 16, and 19, wherein the second multi-dimensional representation comprises a 3d representation.
21. The apparatus of any one of Claims 14-20, wherein the at least one memory and stored computer program code are configured, with the at least one processor, to cause the apparatus to:
cause at least two images to be captured in response to a user performing a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
22. The apparatus of any one of Claims 14-21, wherein the at least one memory and stored computer program code are configured, with the at least one processor, to cause the apparatus to:
cause a user to be prompted to perform a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
23. The apparatus of Claim 22, wherein the at least one memory and stored computer program code are configured, with the at least one processor, to cause the apparatus to:
cause a user to be prompted to perform the sweeping motion by causing at least one arrow to be displayed.
24. The apparatus of any one of Claims 22-23, wherein the at least one memory and stored computer program code are configured, with the at least one processor, to cause the apparatus to:
cause a user to be prompted to perform the sweeping motion by causing a box to be displayed around an image of the object while the user is performing the sweeping motion.
25. The apparatus of any one of Claims 14-24, wherein the apparatus comprises a mobile computing device.
26. A computer program product comprising at least one computer-readable storage medium having computer-readable program instructions stored therein, the computer-readable program instructions comprising program instructions configured to cause an apparatus to perform a method comprising:
receiving at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object;
developing, based at least in part on the at least two images, a multi-dimensional representation of the object;
comparing the developed multi-dimensional representation with a stored multi-dimensional representation; and
permitting access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
27. The computer program product of Claim 26, wherein the method further comprises:
developing, based at least in part on the at least two images, a second multi-dimensional representation of the object;
comparing the second multi-dimensional representation with a stored second multi-dimensional representation; and
permitting access to secured functionality in an instance in which the developed second multi-dimensional representation is within a second predefined similarity threshold of the stored second multi-dimensional representation.
28. The computer program product of any one of Claims 26-27, wherein the images comprise 2d images.
29. The computer program product of any one of Claims 26-28, wherein the multi-dimensional representation comprises a 3d representation.
30. The computer program product of any one of Claims 27-29, wherein the second multi-dimensional representation comprises a 2d representation.
31. The computer program product of any one of Claims 26-28, wherein the multi-dimensional representation comprises a 2d representation.
32. The computer program product of any one of Claims 27, 28, and 31, wherein the second multi-dimensional representation comprises a 3d representation.
33. The computer program product of any one of Claims 26-32, wherein the method further comprises:
causing at least two images to be captured in response to a user performing a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
34. The computer program product of any one of Claims 26-33, wherein the method further comprises:
causing a user to be prompted to perform a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
35. The computer program product of Claim 34, wherein:
causing a user to be prompted to perform the sweeping motion comprises causing at least one arrow to be displayed.
36. The computer program product of any one of Claims 34-35, wherein:
causing a user to be prompted to perform the sweeping motion comprises causing a box to be displayed around an image of the object while the user is performing the sweeping motion.
37. The computer program product of any one of Claims 26-36, wherein the apparatus comprises a mobile computing device.
38. An apparatus comprising:
means for receiving at least two images of an object, wherein at least two of the at least two images correspond to a different view of the object;
means for developing, by a processor, based at least in part on the at least two images, a multi-dimensional representation of the object;
means for comparing the developed multi-dimensional representation with a stored multi-dimensional representation; and
means for permitting access to secured functionality in an instance in which the developed multi-dimensional representation is within a predefined similarity threshold of the stored multi-dimensional representation.
39. The apparatus of Claim 38 further comprising:
means for developing, based at least in part on the at least two images, a second multi-dimensional representation of the object;
means for comparing the second multi-dimensional representation with a stored second multi-dimensional representation; and
means for permitting access to secured functionality in an instance in which the developed second multi-dimensional representation is within a second predefined similarity threshold of the stored second multi-dimensional representation.
40. The apparatus of any one of Claims 38-39, wherein the images comprise 2d images.
41. The apparatus of any one of Claims 38-40, wherein the multi-dimensional representation comprises a 3d representation.
42. The apparatus of any one of Claims 39-41, wherein the second multi-dimensional representation comprises a 2d representation.
43. The apparatus of any one of Claims 38-40, wherein the multi-dimensional representation comprises a 2d representation.
44. The apparatus of any one of Claims 39, 40, and 43, wherein the second multi-dimensional representation comprises a 3d representation.
45. The apparatus of any one of Claims 38-44 further comprising means for causing at least two images to be captured in response to a user performing a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
46. The apparatus of any one of Claims 38-45 further comprising means for causing a user to be prompted to perform a sweeping motion of at least one of sweeping an apparatus configured to capture the at least two images across the object or sweeping the object across the apparatus.
47. The apparatus of Claim 46, wherein the means for causing a user to be prompted to perform the sweeping motion comprises a means for causing at least one arrow to be displayed.
48. The apparatus of any one of Claims 46-47, wherein causing a user to be prompted to perform the sweeping motion comprises causing a box to be displayed around an image of the object while the user is performing the sweeping motion.
49. The apparatus of any one of Claims 38-48, wherein the apparatus comprises a mobile computing device.
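
The claims above describe an executable procedure, so a few illustrative sketches follow. First, a minimal sketch of the unlock flow recited in independent Claims 26 and 38: at least two images corresponding to different views are received, a multi-dimensional representation is developed from them, and access to secured functionality is permitted when the developed representation falls within a predefined similarity threshold of a stored one. The feature construction (flattened unit vectors), the cosine-similarity measure, the threshold value, and every helper name below are assumptions made for illustration, not the reconstruction technique of this application.

```python
# Illustrative sketch only: toy stand-ins for the claimed unlock flow.
import numpy as np

SIMILARITY_THRESHOLD = 0.9  # assumed value for the "predefined similarity threshold"

def develop_representation(images):
    """Develop one representation from two or more views.

    A real implementation would reconstruct a 3d model (or other
    multi-dimensional representation) from the differing views; here
    the views are merely flattened and concatenated into a unit vector.
    """
    if len(images) < 2:
        raise ValueError("at least two images of different views are required")
    rep = np.concatenate([np.asarray(im, dtype=float).ravel() for im in images])
    norm = np.linalg.norm(rep)
    return rep / norm if norm else rep

def permit_access(developed, stored, threshold=SIMILARITY_THRESHOLD):
    """Permit access when the developed representation is within the
    predefined similarity threshold of the stored representation."""
    return float(np.dot(developed, stored)) >= threshold

# Enrollment stores a representation; an unlock attempt develops a fresh
# one from newly captured views and compares the two.
views = [np.random.rand(8, 8) for _ in range(3)]
stored = develop_representation(views)
attempt = develop_representation(views)
print("unlocked" if permit_access(attempt, stored) else "still locked")
```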
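Claims 27 and 39 (with the dimensionality variants of Claims 28-32 and 40-44) add a second representation developed from the same images, for example a 3d representation checked alongside a 2d one, each compared against its own stored counterpart under its own threshold. A sketch of that two-stage gate, again under assumed helper names, toy feature functions, and assumed threshold values:

```python
# Illustrative sketch of the two-stage comparison; nothing here is
# prescribed by the application itself.
import numpy as np

def feature(images, reducer):
    """Toy stand-in for developing a representation from the images."""
    rep = np.concatenate([reducer(im) for im in images])
    norm = np.linalg.norm(rep)
    return rep / norm if norm else rep

def within(developed, stored, threshold):
    return float(np.dot(developed, stored)) >= threshold

def permit_access(images, stored_first, stored_second,
                  first_threshold=0.9, second_threshold=0.8):
    # First representation: stand-in for, e.g., a 3d reconstruction.
    first = feature(images, lambda im: im.ravel().astype(float))
    # Second representation: stand-in for, e.g., per-view 2d statistics.
    second = feature(images, lambda im: im.mean(axis=0).astype(float))
    # Access is permitted only when both developed representations fall
    # within their respective predefined similarity thresholds.
    return (within(first, stored_first, first_threshold)
            and within(second, stored_second, second_threshold))

views = [np.random.rand(8, 8) for _ in range(3)]
stored_first = feature(views, lambda im: im.ravel().astype(float))
stored_second = feature(views, lambda im: im.mean(axis=0).astype(float))
print(permit_access(views, stored_first, stored_second))  # True
```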
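Finally, Claims 33-36 and 45-48 concern capture and prompting: the user is prompted, for instance with at least one displayed arrow and a box drawn around the object, to sweep the capturing apparatus across the object (or the object across the apparatus) while at least two images are captured. The console-level sketch below only simulates that interaction; the frame source, the prompt rendering, and the timing are stand-ins for real camera and display calls.

```python
# Illustrative simulation of sweep-and-capture prompting; a real
# apparatus would render an arrow and a tracking box on its display.
import time

def prompt_sweep(direction="left-to-right"):
    """Prompt the user to perform the sweeping motion; the displayed
    arrow of Claim 35 is rendered here as plain text."""
    arrow = "-->" if direction == "left-to-right" else "<--"
    print(f"Sweep the device across the object  {arrow}")

def capture_during_sweep(read_frame, n_frames=5, interval_s=0.05):
    """Capture images while the sweep is performed; read_frame stands in
    for a camera read, and the printed line stands in for the box drawn
    around the tracked object during the sweep."""
    frames = []
    for _ in range(n_frames):
        frames.append(read_frame())
        print("[box drawn around tracked object]")
        time.sleep(interval_s)
    if len(frames) < 2:
        raise RuntimeError("the sweep must yield at least two views")
    return frames

prompt_sweep()
counter = iter(range(10))
views = capture_during_sweep(lambda: f"frame-{next(counter)}")
print(f"captured {len(views)} views: {views}")
```
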
PCT/FI2012/050452 2011-07-25 2012-05-10 Methods and apparatuses for facilitating locking and unlocking of secure functionality through object recognition WO2013014328A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN2546CH2011 2011-07-25
IN2546/CHE/2011 2011-07-25

Publications (1)

Publication Number Publication Date
WO2013014328A1 true WO2013014328A1 (en) 2013-01-31

Family

ID=47600560

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2012/050452 WO2013014328A1 (en) 2011-07-25 2012-05-10 Methods and apparatuses for facilitating locking and unlocking of secure functionality through object recognition

Country Status (1)

Country Link
WO (1) WO2013014328A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105389570A (en) * 2015-11-19 2016-03-09 吴建忠 Face angle determination method and system
CN107277047A (en) * 2017-07-25 2017-10-20 湖南中迪科技有限公司 Log-on message generation method and device
CN107358084A (en) * 2017-07-25 2017-11-17 湖南云迪生物识别科技有限公司 The cloud storage method and apparatus of data
CN107368734A (en) * 2017-07-25 2017-11-21 湖南中迪科技有限公司 Cipher-code input method and device
CN108038363A (en) * 2017-12-05 2018-05-15 吕庆祥 Improve the method and device of Terminal security

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060039600A1 (en) * 2004-08-19 2006-02-23 Solem Jan E 3D object recognition
US20090309702A1 (en) * 2008-06-16 2009-12-17 Canon Kabushiki Kaisha Personal authentication apparatus and personal authentication method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BAE, M. ET AL.: "Automated 3D Face Authentication & Recognition", ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE, 2007, AVSS 2007, IEEE CONFERENCE ON, 5 September 2007 (2007-09-05) - 7 September 2007 (2007-09-07), TEMPE, CONFERENCE PUBLICATIONS, pages 45 - 50 *
HU, J-Y. ET AL.: "Android-Based Mobile Payment Service Protected by 3-Factor Authentication and Virtual Private Ad Hoc Networking", COMPUTING, COMMUNICATIONS AND APPLICATIONS CONFERENCE (COMCOMAP), 2012, 11 January 2012 (2012-01-11) - 13 January 2012 (2012-01-13), DOULIOU, TAIWAN, pages 111 - 116 *
LU, X. ET AL.: "Three-Dimensional Model Based Face Recognition", PATTERN RECOGNITION, 2004, ICPR 2004, PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON, vol. 1, 23 August 2004 (2004-08-23), MICHIGAN STATE UNIV. EAST LANSING, MI, USA, pages 362 - 366 *

Similar Documents

Publication Publication Date Title
US11228601B2 (en) Surveillance-based relay attack prevention
EP3849130B1 (en) Method and system for biometric verification
CN105323065B (en) Security verification method and device
US10091196B2 (en) Method and apparatus for authenticating user by using information processing device
US20180012094A1 (en) Spoofing attack detection during live image capture
US10339334B2 (en) Augmented reality captcha
EP3132368B1 (en) Method and apparatus of verifying usability of biological characteristic image
US9503474B2 (en) Identification of trusted websites
US20140181929A1 (en) Method and apparatus for user authentication
US20160050341A1 (en) Security feature for digital imaging
EP3183681A1 (en) Accessing a secured software application
US10735436B1 (en) Dynamic display capture to verify encoded visual codes and network address information
WO2013014328A1 (en) Methods and apparatuses for facilitating locking and unlocking of secure functionality through object recognition
CN107786487B (en) Information authentication processing method, system and related equipment
US20160162677A1 (en) Performing authentication based on user shape manipulation
Rassan et al. Securing mobile cloud using finger print authentication
CN105790948B (en) A kind of identity identifying method and device
US9710633B2 (en) Method and apparatus for authenticating user
US20180124034A1 (en) Image based method, system and computer program product to authenticate user identity
Malik et al. Multifactor authentication using a QR code and a one-time password
JP2020140735A (en) Apparatus and method for camera-based user authentication for content access
KR20230049607A (en) Apparatus and method for approving playing of augmented reality contents using identification of object and user
JP6167667B2 (en) Authentication system, authentication method, authentication program, and authentication apparatus
GB2522606A (en) User authentication system
US20150373003A1 (en) Simple image lock and key

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12817729

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12817729

Country of ref document: EP

Kind code of ref document: A1