WO2023156478A1 - Method for operating a display device, and display device having a secure authentication process - Google Patents

Method for operating a display device, and display device having a secure authentication process Download PDF

Info

Publication number
WO2023156478A1
Authority
WO
WIPO (PCT)
Prior art keywords
authentication
display device
scene
image
user
Application number
PCT/EP2023/053795
Other languages
French (fr)
Inventor
Friedrich SCHICK
Christian Lennartz
Original Assignee
Trinamix Gmbh
Application filed by Trinamix Gmbh
Publication of WO2023156478A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/10Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L63/105Multiple levels of security

Definitions

  • This disclosure relates to methods, devices and computer programs for access control for a software service or transaction based on security levels.
  • In particular, authentication processes as disclosed herein can be performed in electronic devices having data processing, imaging and display features.
  • Electronic computerized devices such as hand-held devices, as, for example, smartphones, laptops and the like, sometimes require users to authenticate themselves in order to use a specific function of the device.
  • For example, a face recognition algorithm is implemented, and when the user wants to activate his or her device, user authentication occurs using the face recognition.
  • a method for operating a display device having at least one processing unit configured to execute apps comprises the steps of: receiving an access request signal for executing a software service or transaction; assigning a security level to the request; if the assigned security level exceeds a predetermined security level, initiating an advanced security level authentication process including a first authentication process and a second authentication process.
  • the first authentication process comprises the steps of: receiving imaging data associated to a scene, said imaging data being obtained by the process of irradiating at least one illumination pattern comprising a plurality of illumination features onto the scene and receiving at least one first image comprising a spot pattern originating from the scene in response to the irradiated illumination pattern; determining, by processing the received imaging data, at least one reflection feature corresponding to a spot in the first image; comparing, by processing the at least one reflection feature, the spot pattern comprised in the first image with reference spot patterns for obtaining a comparison result; and determining a first authentication parameter as a function of the comparison results.
  • the second authentication process comprises the steps of: receiving authentication data associated to the scene; evaluating the received authentication data for obtaining an evaluation result; and determining a second authentication parameter as a function of the evaluation result.
  • the method may further include the step of generating an authentication signal as a function of the first and the second authentication parameter.
  • the method is a computer-implemented method.
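A minimal sketch of this multi-level flow in Python, for illustration only; the function and constant names (handle_access_request, PREDETERMINED_LEVEL, the placeholder authentication routines) are assumptions, not part of the disclosure:

```python
PREDETERMINED_LEVEL = 3  # e.g. level 3 of 3 covered by face recognition alone

def first_authentication(imaging_data: dict) -> bool:
    """Stand-in for the spot-pattern based first process (S4)."""
    return imaging_data.get("face_match", False)

def second_authentication(auth_data: dict) -> bool:
    """Stand-in for the additional second process (S5)."""
    return auth_data.get("second_factor_ok", False)

def handle_access_request(assigned_level: int, imaging_data: dict, auth_data: dict) -> bool:
    """Steps S1-S6: decide whether the requested service may be executed."""
    granted = first_authentication(imaging_data)               # S4
    if assigned_level > PREDETERMINED_LEVEL:                   # S3: advanced level needed
        granted = granted and second_authentication(auth_data) # S5/S6: both must pass
    return granted  # authentication signal: grant or deny access

# Example: a banking transaction assigned level 4 triggers both processes.
assert handle_access_request(4, {"face_match": True}, {"second_factor_ok": True})
```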
  • a display device comprising: a light source, in particular a monochromatic light source, configured to generate at least one illumination pattern comprising a plurality of illumination features; an optical sensor unit configured to capture at least one first image comprising a spot pattern originating from the scene and to generate imaging data associated with a scene; at least one processing unit configured:
  • the display device further comprises an output unit configured to output an authentication signal as a function of the first and the second authentication parameter.
  • the presented display device may include a processing unit, that is configured to cause the components in the device to cooperatively carry out any one of the method steps of the method for operating the display device disclosed herein with respect to further aspects or embodiments of the method.
  • The presented method for operating a display device and the display device itself allow for secure and reliable access control to specific functions of the display device.
  • specific software services or transactions may require an advanced security level that is fulfilled by the combined execution of two authentication processes.
  • the first authentication process can be based on face recognition.
  • The proposed method involves a multi-level security approach. Under normal circumstances, access to a software service, app or function is granted or denied using the first authentication process alone. However, if, according to the predetermined security levels, an access request is submitted for a high-security software service or transaction, a second authentication process is additionally required to complement the first authentication process.
  • a scene is, for example, a visual image captured by a camera device of the display device that allows for generating the imaging data for the first and/or second authentication process.
  • a scene may comprise various elements, as, for example, a background, a user face directed towards an optical sensor unit or camera of the device.
  • The scene may include a user and an environment, as well as his or her behavior, that can be captured by imaging and/or other sensor devices being components of the display device.
  • Authentication data associated to the scene may also be data representative of a user input.
  • A scene may include any item in an environment of the display device that may interact with, or can be sensed by, the display device.
  • Secure software services or transactions that may exceed the predetermined security level associated with the first authentication process may involve banking apps, GPS-requiring apps or the like.
  • assigning a security level to the request includes: by a manufacturer of the display device, assigning the security level to the request; by a smartphone provider, setting the security level to the request; by a user of the display device, setting the security level in a device settings menu; setting the security level in response to a command received from a network service provider; and/or assigning the security level in response to an identity from a user, the identity in particular being derived from the scene.
  • the security level of the request can be assigned by the smartphone maker, the software/app provider, the user by amending settings of the device, the software or app itself, and/or the network service provider, e.g. if the display device is a smartphone.
  • the method further includes setting the predetermined security level, in particular comprising at least one of the steps: by a manufacturer of the display device, setting the predetermined security level; by a smartphone provider, setting the predetermined security level; by a user of the display device, setting the predetermined security level in a device settings menu; setting the predetermined security level in response to a command received from a network service provider; changing the predetermined security level in response to a command from a user; setting the predetermined security level as a function of a digital certificate associated with the software service or the app; setting the predetermined security level as a function of the scene; changing the predetermined security level as a function of the first and/or second authentication parameter; setting the predetermined security level as a function of a prior use of the software service or app; setting the predetermined security level in response to an identity from a user, the identity, in particular, being derived from the scene; and/or when installing the software service or transaction, setting the predetermined security level, in particular, by the software provider.
  • The predetermined security level is set by the manufacturer of the device, the app or software provider, the user and/or the network provider.
  • The authentication data, in particular as used in the second authentication process, include at least one of the following:
  • Authentication data may refer to aspects of a person's or user's behaviour that can be sensed or captured. Biometric characteristics can also be used to generate authentication data.
  • Data being associated to an entity is to be interpreted to mean that the entity has a causal effect on the data, e.g. in terms of the data format, content or information contained in the data, the process of capturing the data and the like.
  • Data is considered associated to an entity if the data contains information representative for the entity. For example, fingerprint data is associated to a person, if the person's fingerprint is contained in the fingerprint data in a coded fashion such that at least parts of the person's fingerprint can be reproduced by decoding the fingerprint data.
  • Imaging data associated with a display content of a further display device can comprise a 3D pattern that is shown on a display as a moving object or video.
  • Additional devices in the second authentication process may further fulfill two-factor authentication requirements.
  • the second authentication process comprises at least two subprocesses that include applying the steps of:
  • the sets of authentication data are preferably independent from one another and/or obtained through different acquisition processes.
  • The second authentication process includes, first, the user's fingerprint data and, second, the use of video data associated with a head movement of the user or facial mimics of the user.
  • At least two second authentication parameters are generated corresponding to processing at least two different sets of authentication data.
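As a hedged illustration of such independent subprocesses, the following sketch assumes each data set yields a numeric score in [0, 1] that is thresholded into its own second authentication parameter; the names and threshold are invented for illustration:

```python
def evaluate_second_factors(fingerprint_score: float,
                            motion_score: float,
                            threshold: float = 0.9) -> list:
    """One second authentication parameter per independently acquired data set."""
    return [fingerprint_score >= threshold, motion_score >= threshold]

params = evaluate_second_factors(0.97, 0.92)
access_ok = all(params)  # both independent acquisitions must succeed
```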
  • the method includes at least one of the steps of:
  • Generating the authentication signal may have the advantage that third-party apps may use the authentication service provided by the operating system of the display device.
  • the method of operating the display device can be implemented as a feature of the operating system for the display device.
  • the method further comprises the step of generating the imaging data, wherein generating comprises: irradiating at least one illumination pattern comprising a plurality of illumination features onto the scene, in particular using coherent light from a monochromatic light source; and receiving at least one first image comprising a spot pattern originating from the scene in response to the irradiated illumination pattern at an optical sensor device.
  • Suitable illumination patterns and light sources for generating the imaging data are, for example, disclosed in WO 2020/187719 A1, which is herewith incorporated by reference.
  • page 44/line 17 through page 47/line 16 of WO 2020/187719 A1 discloses aspects for generating and analyzing reflection features of scenes or objects illuminated with structured illumination patterns.
  • the illumination patterns and determined reflection features therein can be used in the methods and devices of the present disclosure.
  • Each of the reflection features may comprise at least one beam profile.
  • the term “beam profile” of the reflection feature may generally refer to at least one intensity distribution of the reflection feature, such as of a light spot in the image.
  • the beam profile may be selected from the group consisting of a trapezoid beam profile; a triangle beam profile; a conical beam profile and a linear combination of Gaussian beam profiles.
  • The display device is preferably capable of executing apps and is implemented, in particular, as a display device having a translucent display unit.
  • a translucent display has the advantage of covering the illumination source and the optical sensor unit, thereby rendering the device easier to clean and protecting the light source and sensor unit.
  • The method then further comprises: irradiating the at least one illumination pattern through a translucent display unit; and/or passing the at least one first image comprising the spot pattern through said translucent display unit prior to the step of receiving the at least one first image at the optical sensor unit.
  • the process of obtaining the imaging data associated to the scene further comprises the steps of irradiating illumination light towards the scene, and receiving reflected light from the scene for obtaining a second image of the scene.
  • The illumination light may be flat light, generated by a flood-light projector device, essentially homogeneously illuminating the scene, thus allowing a second (two-dimensional) image to be captured in terms of the imaging data.
  • Capturing a first image and a second image comprising different features renders the authentication of a user contained in the scene even more reliable.
  • the first image can include spots having an increased brightness or luminosity
  • the second image may include a two-dimensional image of the scene including a face of the user.
  • The step of determining a reflection feature then includes: identifying or extracting at least one patch, area, region or footprint of the associated beam profile of the first image, including at least one spot having the highest brightness among the spots; and generating for said identified or extracted spot at least one feature vector or array.
  • the step of comparing may then include: comparing the generated at least one feature vector with a plurality of predetermined feature vectors being representative for images of authenticated users.
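A possible reading of this comparing step, sketched under the assumption that feature vectors are fixed-length numpy arrays and that similarity is measured by cosine similarity (the disclosure leaves the concrete measure open; function names and the threshold are illustrative):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def compare_with_references(feature: np.ndarray,
                            references: dict,
                            threshold: float = 0.95):
    """Return the best-matching registered user above threshold, else None."""
    best_user, best_sim = None, threshold
    for user, ref in references.items():
        sim = cosine_similarity(feature, ref)
        if sim >= best_sim:
            best_user, best_sim = user, sim
    return best_user
```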
  • The first image may stem from reflected laser light realizing the illumination pattern with illumination features. This may involve surface and volume or bulk backscattering at or from the scene. Investigations by the applicant have shown that considering the brightest spots in the first image can be regarded as sufficiently reliable for deriving the material properties of the scene.
  • the material property can be used to distinguish between human tissue of a user's face and counterfeited faces, e.g. masks. Detected and characterized material properties of components of the scene can be considered authentication data.
  • In WO 2021/105265 A1, which is hereby incorporated by reference, methods and aspects of evaluation devices for determining beam profiles of reflection features and deriving material properties from feature vectors are disclosed. The steps of identifying or extracting the patches where the spots having the highest brightness are located, and generating respective feature vectors, may involve a neural network that is trained accordingly. Training the neural network can involve aspects for identifying the brightest spots according to WO 2021/105265 A1.
  • the step of comparing the at least one feature vector with reference feature vectors may include deploying a machine-learned classifier, in particular an artificial neural network.
  • Reference feature vectors may be predetermined by carrying out the method steps for obtaining imaging data associated to reference objects.
  • the method may comprise:
  • the scenes used as reference scenes preferably have a known content.
  • scenes with the face of the user to be authenticated can be used as reference scenes.
  • Categorizing or classifying reference feature vectors leads to a collection of reference data that can be used when comparing the feature vectors from the user shown in the respective scene to be authenticated. For example, if a generated feature vector corresponding to the scene to be authenticated is the same as or similar to one of the reference feature vectors, the method or device determines that the user in the scene to be authenticated corresponds to the user associated to the reference vector.
  • the method may further include the process of training a machine-learning classifier based on the generated and classified plurality of reference vectors.
  • the display device comprises a secure enclave configured to carry out the processes of comparing the spot pattern comprised in the first and/or second image with reference spot patterns for obtaining the first authentication parameter.
  • The second authentication process may likewise be executed in a secure enclave.
  • processes involving pre-classified reference feature vectors should be protected from unauthorized access and may thus be performed within secure enclaves.
  • A secure enclave may be a secure enclave processor implemented as a system-on-chip that performs security services for other components in the device and that securely communicates with other subsystems in the device, e.g. the processing unit.
  • A secure enclave processor may include one or more processors, a secure boot ROM, one or more security peripherals, and/or other components.
  • The security peripherals may be hardware-configured to assist in the secure services performed by the secure enclave processor.
  • The security peripherals may include: authentication hardware implementing various authentication techniques, encryption hardware configured to perform encryption, secure-interface controllers configured to communicate over the secure interface with other components, and/or other components.
  • Instructions executable by the secure enclave processor are stored in a trust zone in the memory subsystem that is assigned to the secure enclave processor.
  • The secure enclave processor fetches the instructions from the trust zone for execution.
  • The secure enclave processor may be isolated from the rest of the processing subsystems except for a carefully controlled interface, thus forming a secure enclave for the secure enclave processor and its components.
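Purely as an illustration of the controlled-interface idea (real secure enclaves are hardware-isolated, which ordinary Python cannot reproduce), the following sketch keeps the reference vectors private and lets only a boolean verdict cross the interface; class and method names are invented:

```python
import numpy as np

class EnclaveVerifier:
    """Only a boolean verdict leaves the 'enclave'; reference vectors stay inside."""

    def __init__(self, reference_vectors):
        self.__references = list(reference_vectors)  # name-mangled, kept private

    def verify(self, feature: np.ndarray, threshold: float = 0.95) -> bool:
        sims = [
            float(np.dot(feature, r) / (np.linalg.norm(feature) * np.linalg.norm(r)))
            for r in self.__references
        ]
        return max(sims, default=0.0) >= threshold
```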
  • a computer-program or computer-program product comprises a program code for executing the above-described methods and functions by a computerized control device when run on at least one computerized device, in particular when run on the display device.
  • a computer program product such as a computer program means, may be embodied as a memory card, USB stick, CD-ROM, DVD or as a file which may be downloaded from a server in a network.
  • A file may be provided by transferring the file comprising the computer program product via a wireless communication network.
  • the display device is a smartphone or a tablet computer having a translucent screen as the display unit.
  • The imaging unit is, for example, a front camera.
  • The imaging unit can be located in an interior of the display device, behind the translucent screen.
  • the imaging unit can include the optical sensor unit and an illumination source for emitting light through the translucent screen to illuminate the object.
  • the optical sensor unit receives light from the object that passes through the translucent screen.
  • The optical sensor unit may generate a sensor signal in a manner dependent on an illumination of a sensor region or light-sensitive area of the optical sensor.
  • the sensor signal may be passed onto the processing unit to reconstruct an image of the object captured by the camera and/or to process the image, in particular, along the lines defined above and below with respect to embodiments of the method disclosed.
  • optical sensor unit generally refers to a device or a combination of a plurality of devices configured for sensing at least one optical parameter.
  • the optical sensor unit may be formed as a unitary, single device or as a combination of several devices.
  • the optical sensor unit comprises a matrix of optical sensors.
  • the optical sensor unit may comprise at least one CMOS sensor.
  • the matrix may be composed of independent pixels such as of independent optical sensors.
  • The matrix may be composed of inorganic photodiodes.
  • a commercially available matrix may be used, such as one or more of a CCD detector, such as a CCD detector chip, and/or a CMOS detector, such as a CMOS detector chip.
  • The optical sensor unit may be and/or may comprise at least one CCD and/or CMOS device, and/or the optical sensors may form a sensor array or may be part of a sensor array, such as the above-mentioned matrix.
  • the sensor element may be part of or constitute at least one CCD and/or CMOS device having a matrix of pixels, each pixel forming a light-sensitive area.
  • an “optical sensor” generally refers to a light-sensitive device for detecting a light beam, such as for detecting an illumination and/or a light spot generated by at least one light beam.
  • A “light-sensitive area” generally refers to an area of the optical sensor which may be illuminated externally, by the at least one light beam, in response to which illumination at least one sensor signal is generated.
  • the sensor signals are electronically processed and result in sensor data.
  • The plurality of sensor data relating to the capture of the light reflected by an object may be referred to as imaging data associated to the object.
  • the display device is a hand-held device, a smartphone, a laptop computer, a banking terminal, a smartwatch, a payment device, an ATM display device and/or a display comprising a translucent display.
  • aspects of this disclosure also relate to a use of the display device or a use of the presented method as disclosed above or below with respect to specific embodiments.
  • Purposes of use include those selected from the group consisting of: a position measurement in traffic technology, an entertainment application, a security application, a surveillance application, a safety application, a human-machine interface application, a tracking application, a photography application, an imaging application or camera application, a mapping application for generating maps of at least one space, a homing or tracking beacon detector for vehicles, an outdoor application, a mobile application, a communication application, a machine vision application, a robotics application, a quality control application, and a manufacturing application.
  • Fig. 1 shows a display device according to a first embodiment;
  • Fig. 2 shows components of the display device of Fig. 1;
  • Fig. 3 shows method steps involved in methods for operating a display device according to a first embodiment;
  • Fig. 4 shows method steps involved in embodiments of a first authentication process;
  • Fig. 5 shows a display device according to a second embodiment;
  • Fig. 6 shows method steps involved in embodiments of a second authentication process;
  • Fig. 7 shows method steps involved in a process for acquiring imaging data for embodiments of the first authentication process;
  • Fig. 8 shows method steps involved in embodiments of processes for generating pluralities of reference vectors and for generating an authentication signal;
  • Fig. 9 shows method steps involved in a method for operating a display device according to a second embodiment.
  • Fig. 1 shows a display device 1 according to a first embodiment.
  • the display device 1 is a smartphone and includes a translucent touchscreen 3 as a display unit.
  • the display unit 3 is configured for displaying information. Such information can include a text, image, diagram, video, or the like.
  • the display device 1 includes an imaging unit 4, a processing unit 5 and an output unit 6.
  • The imaging unit 4, the processing unit 5 and the output unit 6 are represented by dashed squares because they are located within a housing 2 of the display device 1, and behind the display unit 3 when viewed from an exterior of the display device 1.
  • Fig. 2 shows the components of the display device 1 located in the interior of the housing 2 in more detail.
  • Fig. 2 corresponds to a view onto the display unit 3 from an interior of the display device 1 , with the imaging unit 4, the processing unit 5 and the output unit 6 being located in front of the display unit 3.
  • the imaging unit 4 is a front camera.
  • the imaging unit 4 is configured to capture an image of surroundings of the display device 1 .
  • an image of a scene in front of the display unit 3 of the display device 1 can be captured using the imaging unit 4.
  • the surroundings are here defined as a half-sphere located in front of the imaging unit 4 and centered around a center of the display.
  • The radius of the half-sphere is, for example, 5 m.
  • the imaging unit 4 includes an illumination source 9 and an optical sensor unit 7 having a light sensitive area 8.
  • The illumination source 9 is an infrared (IR) laser point projector realized by a vertical-cavity surface-emitting laser (VCSEL).
  • the IR light emitted by the illumination source 9 shines through the translucent display unit 3 and generates multiple laser points on the scene surrounding the display device 1.
  • The light is reflected by an object, such as a person, in the scene. This reflected image also includes reflections of the laser points.
  • the illumination source 9 may be realized as any illumination source capable of generating at least one illumination light beam for fully or partially illuminating the object in the surroundings.
  • the illumination source may be configured for emitting modulated or non-modulated light. In case a plurality of illumination sources is used, the different illumination sources may have different modulation frequencies.
  • the illumination source may be adapted to generate and/or to project a cloud of points, for example the illumination source may comprise one or more of at least one digital light processing (DLP) projector, at least one Liquid crystal on silicon (LCoS) projector, at least one spatial light modulator, at least one diffractive optical element, at least one array of light emitting diodes, at least one array of laser light sources.
  • the optical sensor 7 is here realized as a complementary metal-oxide-semiconductor (CMOS) camera.
  • the optical sensor unit 7 looks through the display unit 3. In other words, it receives the reflection of the objects or the scene through the display unit 3.
  • the image reflected by the object, such as the person, is captured by the light sensitive area 8.
  • a sensor signal indicating an illumination of the light sensitive area 8 is generated.
  • the light sensitive area 8 is divided into a matrix of multiple sensors, which are each sensitive to light and each generate a signal in response to illumination of the sensor.
  • the optical sensor 7 can be any type of optical sensor designed to generate at least one sensor signal in a manner dependent on an illumination of the sensor region or light sensitive area 8.
  • the optical sensor 7 may be realized as a charge-coupled device (CCD) sensor.
  • the signals from the light sensitive area 8 are transmitted to the processing unit 5.
  • The processing unit 5 is configured to process the signals received from the optical sensor 7 (which form an image). By analyzing a shape of the laser spots reflected by an object or person in the scene and captured by the optical sensor 7, the processing unit 5 can determine a distance to the object, e.g. a user's face, and material information of the object.
  • the imaging unit 4, the processing unit 5 and the output unit 6 are communicatively coupled, e.g. through an internal bus system or connection lines 10.
  • The display device 1 shown in Figs. 1 and 2 is configured to perform processes for authenticating objects or users to be authenticated using imaging data associated to a scene including the objects, persons, or faces. This can be achieved through an application or "app" loaded into the display device in terms of a computer program with instructions to be executed by the processing unit 5 and other components of the smartphone or display device 1, respectively.
  • Fig. 3 shows method steps involved in a method for operating a display device according to a first embodiment. Triggered by a user input or an app request, the display device is requested to perform a specific function, for example, a security function in terms of executing a software service or transaction.
  • In step S1, an access request signal for executing the specific software service or transaction is received.
  • an app triggered by a user input requests a specific software service from the operating system implemented by the processing unit 5.
  • a banking app may, for example, require the transmission of an encrypted data set or the execution of a money order from the user. Other security relevant scenarios are feasible.
  • In step S2, a security level is assigned to the access request received via the access request signal. Assigning a security level in step S2 can involve a comparison of the content of the access request signal with predetermined security levels. For example, some smartphone makers provide a plurality of security levels, wherein the prerequisites for the respective authentication process increase in complexity and resource requirements with the security level.
  • the predetermined security level can be assigned by the smartphone maker, the user, the software or app itself, and/or the service provider, e.g. if the display device is a smartphone.
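One hypothetical way to realize the assignment of step S2 is a static mapping from request types to levels; every entry below is an invented example, not taken from the disclosure:

```python
SECURITY_LEVELS = {
    "unlock_screen": 3,             # face recognition alone suffices
    "read_messages": 3,
    "bank_transfer": 4,             # advanced level: second process required
    "change_security_settings": 4,
}

def assign_security_level(request_type: str, default: int = 3) -> int:
    """Step S2: map the access request to a security level."""
    return SECURITY_LEVELS.get(request_type, default)
```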
  • In step S3, it is checked whether the required and assigned security level exceeds a predetermined security level.
  • For example, one may implement that specific functions of the smartphone 1 can be executed at a security level 3 out of three.
  • The predetermined level 3 corresponds to an authentication process involving face recognition based on a two-dimensional image of the user captured by the camera unit 4.
  • an app may request a specific security level during its installation on the smartphone device.
  • An additional advanced security level can be implemented according to this disclosure, thereby increasing the security of an access control and an access right management function.
  • the authentication parameter can be an authentication score.
  • the first authentication parameter may indicate that a user image retrieved by the imaging unit 4 fulfills the requirements of a Face-ID, i.e. corresponds to an image of a registered user and thus is allowed to use the requested functionality of the smartphone.
  • If, in step S3, it is detected that the security level assigned to the access request exceeds the predetermined security level, e.g. level 3, the second authentication process is initiated in step S5.
  • the second authentication process S5 provides a second authentication parameter that is considered in step S6.
  • an additional security level 4 is implemented.
  • Step S6 involves generating an authentication signal based on the first and/or second authentication parameter. If the second authentication process S5 is carried out, both authentication parameters must be considered in step S6. If the authentication is successful, i.e. both authentication processes in steps S4 and S5 confirm that the access request can be granted, a respective authentication signal is generated.
  • In step S7, access is granted so that, in the following step S8, the software service or transaction requested through the access request signal in step S1 can be carried out or executed.
  • Fig. 4 shows method steps involved in embodiments for the first authentication process.
  • In a first step, imaging data associated to the scene comprising a user's face to be authenticated is received.
  • the imaging unit 4 provides imaging data obtained by capturing a first image comprising a spot pattern originating from the object.
  • the spot pattern occurs in response to an irradiated illumination pattern wherein the illumination pattern comprises a plurality of illumination features.
  • the processing unit 5 receives the imaging data.
  • The received imaging data containing a first image with a spot pattern is processed in step S120.
  • The processing unit 5 outputs the first authentication parameter, indicating an authentication score of the scene to be authenticated, to the output unit 6.
  • the output unit 6 may serve as an interface for security commands.
  • the output signal that is available at the output unit 6 is a signal indicative of grant or denial of access to the requested function.
  • the processing of the imaging data comprises steps S102, S103 and S104.
  • In step S102, at least one reflection feature corresponding to a spot in the first image is determined.
  • the reflection feature can have an associated beam profile. For example, a plurality of bright spots as a result of structured light impinging on the surface and/or bulk of objects in the scene to be authenticated is detected as a first image by the optical sensor unit 7.
  • the structured light may be coherent laser light produced by the infrared (IR) laser point projector 9.
  • a spot pattern is projected onto the scene, and a CMOS camera as imaging unit 4 captures the reflected spot pattern.
  • the intensity distribution of the spots can give rise to specific reflection features that can be representative for an identity of a user face contained in the scene.
  • Reflection features may, for example, include a ratio of a surface and a volume backscattering, a beam or edge profile, a contrast of laser speckle signals, a ratio of diffusive or direct reflection and the like.
  • In this way, a reflection feature is obtained.
  • Material-dependent reflection features are known from WO 2020/187719, WO 2021/105265 and WO 2018/091649.
  • In step S103, the reflection feature obtained from the imaging data associated to the scene to be authenticated is compared with reference reflection features.
  • the spot pattern comprised in the first image obtained from the scene is compared with reference spot patterns to obtain a comparison result.
  • the reference spot patterns or reference reflection features are based on reference data for reference scenes with objects or reference material properties of objects.
  • In step S104, the first authentication parameter is determined as a function of the result of the comparison between the reflection feature and the reference reflection features.
  • A library of reference reflection features with a mapping to scenes and/or images of preregistered user faces can be used; for example, a reference library or database can contain a specific reference reflection feature or reference spot pattern that is associated with a specific user. If the reflection feature determined in step S102, compared with a reference reflection feature corresponding to a registered user's face, does not match or is evaluated as dissimilar in the comparing step S103, it is determined in step S104 that the scene or user, respectively, cannot be authenticated as a registered user.
  • Fig. 6 shows method steps involved in embodiments for a second authentication process.
  • The second authentication process S5 involves three consecutive steps S51, S52 and S53.
  • In step S51, authentication data is acquired, for example through user interfaces or sensors of the display device 1.
  • the authentication data can involve imaging data or other multi-modal data that are associated with a specific user or registered user.
  • In step S52, the received authentication data is evaluated and a second authentication parameter or score is generated.
  • The authentication parameter indicates whether the analyzed authentication data is genuine and thus may indicate that the specific user requesting access to a function or process in step S1 is genuine.
  • In step S53, the determined second authentication parameter is output to the processing unit 5.
  • The second authentication parameter can be used in the further authentication process. For example, it is eventually evaluated whether the first and the second authentication parameters match with each other, thus indicating the same requesting entity or user.
  • The second embodiment of the display device, shown in Fig. 5, is suitable for carrying out additional aspects of the first authentication process that are illustrated in Figs. 7 and 8.
  • Fig. 5 shows a display device 1 according to the second embodiment.
  • the display device 1 of Fig. 5 further includes a flood light projector 11 for emitting flood light through the display unit 3 towards a scene.
  • The components of the display device are communicatively coupled to each other, as indicated by the arrows and dashed line 18.
  • the neural network used for general image processing by the processing unit 5 in step S2 is represented by the reference numeral 12.
  • the output unit 6 forms an interface to apps for communication between the processing unit 5 and the apps.
  • processes involving the reference vectors are executed within a secure enclave 13 including a trained neural network 14 implemented as a classifier.
  • Structured light or an illumination pattern is irradiated towards the scene to be authenticated.
  • a coherent light source generates an illumination pattern with a plurality of illumination features that impinge onto the objects within the scene including a face of the user.
  • The light originating from the scene in response to the irradiated illumination pattern comprises a first image with the spot pattern.
  • the imaging unit 4 processes the electronic signals from the optical sensor unit (sensor signals) and provides digital imaging data in step S112.
  • a second two-dimensional image of the scene to be authenticated is acquired.
  • The flood light projector 11 of the display device 1 generates and emits illumination light towards the scene to be authenticated in step S113.
  • illumination light is irradiated from the display device to the scene and thus onto the face of the user.
  • In step S114, the reflected light, in particular visible light, is received by the optical sensor unit 7 within the imaging unit 4. Again, the imaging unit 4 processes the optical sensor unit signals and provides two-dimensional imaging data in step S115.
  • The processing unit 5 thus obtains two images: first, an image comprising the spot pattern and, second, a two-dimensional image of the scene. Both images are merged in step S116 into imaging data to be analyzed in a further process.
  • the imaging data contains information about the reflection features relating to the spot pattern and two-dimensional image information on the scene.
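One plausible representation of the merged imaging data of step S116, assuming both captures share the same resolution, is a channel stack; this layout is an assumption for illustration, not the disclosed format:

```python
import numpy as np

# first_image: spot pattern from the IR projector; second_image: flood-lit 2D image
first_image = np.zeros((480, 640), dtype=np.float32)
second_image = np.zeros((480, 640), dtype=np.float32)

# Merge step S116: stack both captures as channels (H x W x 2)
imaging_data = np.stack([first_image, second_image], axis=-1)
```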
  • the imaging data is provided to a neural network 12 used within the processing unit 5.
  • the neural network 12 is implemented to generate feature vectors or feature arrays that relate to the brightest spots within the images of the scene.
  • Fig. 8 shows method steps involved in the first authentication process using reference feature vectors.
  • First, potential sub-steps of step S102, determining at least one reflection feature corresponding to a spot in the first image, are explained, wherein the reflection feature may have an associated beam profile.
  • Step S102 may be divided into steps S121, S122 and S123.
  • In step S121, spots with increased brightness or luminosity relating to the spot pattern of the first image are identified. Methods for identifying the spots are, for example, disclosed in WO 2021/105265 A1.
  • Once spots or regions with high brightness have been identified in step S121, they are extracted in step S122.
  • In particular, patches around the brightest spots are extracted.
  • the patches may have a square, rectangular or circular shape and should include at least the footprint of the associated beam profile of the spot under consideration.
  • A feature vector is generated in step S123 for each extracted patch with a brightest spot.
  • The steps of extracting the brightest spots and generating respective feature vectors in step S123 can be carried out using an appropriately configured neural network.
  • the respective feature vector can include data or information relating to the ratio of surface and volume backscattering in the respective spot, a beam profile, a contrast of a laser speckle signal and/or a ratio of diffusive or direct reflection from the scene.
  • the feature vector may include aspects as disclosed in WO 2020/187719, which is hereby incorporated by reference.
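The following sketch illustrates steps S121 through S123 under simplifying assumptions: a plain local-maximum search stands in for the trained neural network, and the feature vector is a toy descriptor rather than the backscattering ratios named above; all names are illustrative:

```python
import numpy as np

def extract_brightest_patches(image: np.ndarray, n_spots: int = 5,
                              half: int = 8) -> list:
    """S121/S122: locate the n brightest pixels and cut square patches around them."""
    work = image.astype(float)          # working copy so found spots can be suppressed
    patches = []
    for _ in range(n_spots):
        y, x = np.unravel_index(np.argmax(work), work.shape)
        y0, y1 = max(0, y - half), min(work.shape[0], y + half + 1)
        x0, x1 = max(0, x - half), min(work.shape[1], x + half + 1)
        patches.append(image[y0:y1, x0:x1])
        work[y0:y1, x0:x1] = -np.inf    # suppress this spot before the next search
    return patches

def feature_vector(patch: np.ndarray) -> np.ndarray:
    """S123: toy descriptor of the beam profile (mean, peak, spread)."""
    return np.array([patch.mean(), patch.max(), patch.std()])
```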
  • a plurality of feature vectors are generated or calculated based on the imaging data of the respective scene used for authentication.
  • the feature vectors are compared with a plurality of reference feature vectors that are pre-classified so that a match or high similarity with one of the reference feature vectors indicates a specific registered user.
  • This comparison step is done in step S124 and may involve a trained neural network 14 implemented in a secure enclave 13. Comparing the obtained feature vectors referring to the scene with the plurality of reference feature vectors in a reference library or database can be implemented by a similarity measure in the feature vector space.
  • The first authentication parameter is generated in step S125.
  • For example, the feature vectors stemming from the imaging data of the images associated to the scene used as authentication data may correspond to a reference feature vector associated to a specific registered user.
  • The authentication parameter may then indicate whether the user requesting access for a software service or transaction is approved according to an access right management database.
  • Method step S160 indicated in a dashed box refers to providing the plurality of reference vectors.
  • One reference vector, for example, is generated by first selecting a scene with a sample user's face and objects in step S161. A plurality of sample scenes involving the same user may be needed to generate reference vectors for one user.
  • The sample scenes are each processed according to steps S121 through S123, i.e. imaging data is acquired by irradiating an illumination pattern comprising a plurality of illumination features, e.g. by a light source equivalent to the laser light source 9. Further, the sample scenes are irradiated by flat illumination light, for example by a light source corresponding or equivalent to the flood light projector 11. Thus, a respective first and second image is obtained.
  • reference feature vectors are generated according to step S123 and classified according to the known identities of the user face in the scenes.
  • the respective reference vectors are classified or mapped to an access right for a secure software service or specific function of the display device.
  • The trained neural network 14 can then be used in step S124 to intrinsically compare the feature vectors obtained in step S123 and to generate the authentication parameter.
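As a hedged sketch of building such a reference library in process S160, one might average the per-user feature vectors from the sample scenes into a single template per user; a nearest-template comparison (as in the compare_with_references sketch earlier) would then stand in for the trained classifier network 14. The function name and the averaging rule are assumptions:

```python
import numpy as np

def build_reference_library(samples: dict) -> dict:
    """samples maps each registered user to feature vectors from sample scenes;
    averaging them yields one reference vector (template) per user."""
    return {user: np.mean(np.stack(vectors), axis=0)
            for user, vectors in samples.items()}
```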
  • Fig. 9 shows method steps involved in a method for operating a display device according to an alternative embodiment. The method steps can equally be executed by the display device according to the first and/or second embodiment according to Fig. 1 or Fig. 5 of this disclosure.
  • the flowchart of Fig. 9 shows, in step S200, a step of receiving sensor signals from the CMOS camera in the imaging unit 4.
  • the scene in front of the display device or smartphone is captured by a CMOS camera.
  • imaging data is obtained that can be used in subsequent authentication processes.
  • Next, pre-processing of the pixel data included in the imaging data is performed.
  • This pre-processing of the sensor signals from the optical sensor unit can involve the steps depicted in Fig. 7 above.
  • Pre-processing may include filtering the sensor signal or the corresponding imaging data according to a filter, e.g. a bandpass.
  • In step S202, a low-level representation of the imaging data is generated. Feature vectors referring to spot patterns of the reflected light from the scene and, in particular, the face of the user can be considered a low-level representation.
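A simple way to realize the bandpass pre-filtering mentioned above is a difference of Gaussians; the cutoff sigmas below are illustrative assumptions, not values from the disclosure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass(image: np.ndarray, low_sigma: float = 1.0,
             high_sigma: float = 5.0) -> np.ndarray:
    """Difference of Gaussians: keeps mid spatial frequencies such as laser spots."""
    return gaussian_filter(image, low_sigma) - gaussian_filter(image, high_sigma)
```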
  • The processing unit 5 then checks if an advanced security level, e.g. level 4, is required. Level 4 may be indicative of an advanced authentication process mandating more authentication data than a face recognition algorithm requires, corresponding to a level 3 security level. If the required security level for granting access to a function of the phone is lower than level 4, step S204 is carried out.
  • Step S204 involves the first authentication process as described above with respect to Fig. 4.
  • The authentication process is based on the low-level representation of the retrieved imaging data. If, in step S205, the generated feature vectors match the reference vectors, thus indicating that access to the required function can be granted, step S205 triggers the generation of operating parameters to unlock the device or to execute the desired function in step S207. If, in step S205, it is found that the feature vectors compared with the reference feature vectors do not match, step S206 requires an alternative unlock mechanism, such as, for example, the input of a specific user PIN.
  • Otherwise, an advanced authentication process is initiated in step S208, wherein the advanced authentication process requires the execution of the first authentication process and a second authentication process, as, for example, depicted with respect to Fig. 6 above.
  • a library of potential additional authentication procedures or processes can be used in step S210.
  • The operating system of the display device may provide a library implementing authentication processes, for example based on a full three-dimensional point-scanning match with a previously registered 3D head representation of the user, or on variations of the user's facial mimics, as, for example, laughing, eyebrow movements or eye movements.
  • the library may provide authentication processes based on biometric parameters of the user, as, for example, involving the user's voice recognition or vital signs, such as pulse, blood flow or temperature.
  • the library may further include authentication processes based on specific movements of the user's head or face characteristics.
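One might imagine such an OS-provided library of additional authentication processes (step S210) as a registry of named checks; the entries and the placeholder callables below are invented for illustration and are not real OS APIs:

```python
# Illustrative registry of second-factor processes; each callable inspects
# hypothetical pre-evaluated authentication data and returns a verdict.
SECOND_FACTOR_LIBRARY = {
    "3d_head_match": lambda data: data.get("head_match", False),
    "facial_motion": lambda data: data.get("motion_ok", False),
    "voice_recognition": lambda data: data.get("voice_ok", False),
    "vital_signs": lambda data: data.get("pulse_ok", False),
}

def run_second_factor(name: str, auth_data: dict) -> bool:
    """Dispatch one of the library's additional authentication processes."""
    return SECOND_FACTOR_LIBRARY[name](auth_data)
```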
  • If the advanced authentication process succeeds in step S209, step S212 is carried out, thus generating the operating parameters, such as unlocking the device or executing the required function. If the advanced authentication process fails in step S209, the device may request an alternative unlock mechanism in step S211.
  • The present disclosure provides improved methods and systems for access control to a software service or transaction in computerized devices. In particular, the use of imaging data for different security levels required for accessing a specific function of the device reduces resource consumption and improves the overall security of the device.
  • illumination devices and imaging devices do not need to be arranged in or on the same housing.
  • The sequence of method steps carried out does not need to include all steps mentioned in Figs. 3 and 6 to 9. It is understood that all the disclosed aspects of methods may relate to computer-implemented methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A method for operating a display device (1) having at least one processing unit (5) configured to execute apps comprises the steps of: receiving (S1) an access request signal for executing a software service or transaction; and assigning (S2) a security level to the access request. If the assigned security level exceeds a predetermined security level, an advanced security level authentication process including a first authentication process (S4) and a second authentication process (S5) is initiated. The first authentication process (S4) comprises the steps of: receiving imaging data associated to the scene (S101), said imaging data being obtained by the process of irradiating (S110) at least one illumination pattern comprising a plurality of illumination features onto the scene, and receiving (S111) at least one first image comprising a spot pattern originating from the scene in response to the irradiated illumination pattern; determining (S112), by processing the received imaging data, at least one reflection feature corresponding to a spot in the first image; comparing (S113), by processing the at least one reflection feature, the spot pattern comprised in the first image with reference spot patterns for obtaining a comparison result; and determining (S114) a first authentication parameter as a function of the comparison result. The second authentication process (S5) comprises the steps of: receiving (S51) authentication data associated to the scene; evaluating (S52) the received authentication data for obtaining an evaluation result; and determining (S53) a second authentication parameter as a function of the evaluation result. According to the method, an authentication signal is generated as a function of the first and the second authentication parameter. Furthermore, a display device with data processing capabilities implementing the method is disclosed.

Description

Method for operating a display device, and display device having a secure authentication process
This disclosure relates to methods, devices and computer programs for access control for a software service or transaction based on security levels. In particular, authentication processes as disclosed herein can be performed in electronic devices having data processing, imaging and display features.
Electronic computerized devices, such as hand-held devices, as, for example, smartphones, laptops and the like, sometimes require users to authenticate themselves in order to use a specific function of the device. For example, in some smartphones, a face recognition algorithm is implemented, and when the user wants to activate his or her device, a user authentication occurs using the face recognition.
Depending on the sensitivity of the function to be performed by the hand-held device, certain security levels are set that require different authentication processes with increased security.
It is desirable to improve the reliability and safety of an electronic device with respect to unauthorized access, in particular if sensitive transactions or software services are to be performed through a respective device, whether requested by a user or through an API from another software service or app.
It is therefore an object of the present disclosure to provide improved means to operate a display device where the execution of software apps or the like involves sensitive data. It is another object of the present disclosure to provide reliable and efficient means for granting or denying access to specific software services or transactions executed by a computerized device.
According to one aspect of this disclosure, a method for operating a display device having at least one processing unit configured to execute apps is presented. The method comprises the steps of: receiving an access request signal for executing a software service or transaction; assigning a security level to the request; if the assigned security level exceeds a predetermined security level, initiating an advanced security level authentication process including a first authentication process and a second authentication process.
The first authentication process comprises the steps of: receiving imaging data associated to a scene, said imaging data being obtained by the process of irradiating at least one illumination pattern comprising a plurality of illumination features onto the scene and receiving at least one first image comprising a spot pattern originating from the scene in response to the irradiated illumination pattern; determining, by processing the received imaging data, at least one reflection feature corresponding to a spot in the first image; comparing, by processing the at least one reflection feature, the spot pattern comprised in the first image with reference spot patterns for obtaining a comparison result; and determining a first authentication parameter as a function of the comparison results.
The second authentication process comprises the steps of: receiving authentication data associated to the scene; evaluating the received authentication data for obtaining an evaluation result; and determining a second authentication parameter as a function of the evaluation result.
The method may further include the step of generating an authentication signal as a function of the first and the second authentication parameter.
In embodiments, the method is a computer-implemented method.
Another aspect of this disclosure relates to a display device comprising: a light source, in particular a monochromatic light source, configured to generate at least one illumination pattern comprising a plurality of illumination features; an optical sensor unit configured to capture at least one first image comprising a spot pattern originating from a scene and to generate imaging data associated with the scene; and at least one processing unit configured:
- to receive the imaging data;
- to determine, by processing the received imaging data, at least one reflection feature corresponding to a spot in the first image, wherein said reflection feature may have an associated beam profile;
- to compare, by processing the at least one reflection feature, the spot pattern comprised in the first image with reference spot patterns for obtaining a comparison result;
- to determine a first authentication parameter as a function of the comparison result;
- in response to an access request signal for executing a software service or transaction, to initiate a further authentication process for determining a second authentication parameter; wherein the display device further comprises an output unit configured to output an authentication signal as a function of the first and the second authentication parameter.
It is understood that the presented display device may include a processing unit, that is configured to cause the components in the device to cooperatively carry out any one of the method steps of the method for operating the display device disclosed herein with respect to further aspects or embodiments of the method.
The presented method for operating a display device and the display device allow for a secure and reliable access control to specific functions of the display device. For example, specific software services or transactions may require an advanced security level that is fulfilled by the combined execution of two authentication processes. In particular, the first authentication process can be based on face recognition.
The proposed method involves a multi-level security approach. Hence, under normal circumstances, access to a software service, app or function is granted or denied using the first authentication process. However, if, according to predetermined security levels, an access request is submitted for a high-security software service or transaction, a second authentication process is additionally required to complement the first authentication process.
A scene is, for example, a visual image captured by a camera device of the display device that allows for generating the imaging data for the first and/or second authentication process. A scene may comprise various elements, as, for example, a background and a user face directed towards an optical sensor unit or camera of the device. The scene may include a user and an environment, as well as his or her behavior, that can be captured by imaging and/or other sensor devices being components of the display device. Authentication data associated to the scene is also data being representative of a user input. A scene may include any item in an environment of the display device that may interact with or can be sensed by the display device.
Secure software services or transactions that may exceed the predetermined security level associated with the first authentication process may involve banking apps, GPS-requiring apps or the like.
In embodiments, assigning a security level to the request includes: by a manufacturer of the display device, assigning the security level to the request; by a smartphone provider, setting the security level for the request; by a user of the display device, setting the security level in a device settings menu; setting the security level in response to a command received from a network service provider; and/or assigning the security level in response to an identity of a user, the identity in particular being derived from the scene.
Thus, in embodiments, the security level of the request can be assigned by the smartphone maker, the software/app provider, the user by amending settings of the device, the software or app itself, and/or the network service provider, e.g. if the display device is a smartphone.
In embodiments, the method further includes setting the predetermined security level, in particular comprising at least one of the steps: by a manufacturer of the display device, setting the predetermined security level; by a smartphone provider, setting the predetermined security level; by a user of the display device, setting the predetermined security level in a device settings menu; setting the predetermined security level in response to a command received from a network service provider; changing the predetermined security level in response to a command from a user; setting the predetermined security level as a function of a digital certificate associated with the software service or the app; setting the predetermined security level as a function of the scene; changing the predetermined security level as a function of the first and/or second authentication parameter; setting the predetermined security level as a function of a prior use of the software service or app; setting the predetermined security level in response to an identity from a user, the identity, in particular, being derived from the scene; and/or when installing the software service or transaction, setting the predetermined security level, in particular, by the software provider.
In embodiments, the predetermined security level is set by the manufacturer of the device, the app or software provider, the user and/or the network provider.
In embodiments of the method, the authentication data, in particular used in the second authentication process, include at least one of the following (a labelled overview is sketched after this list):
- imaging data associated with a user's face,
- imaging data associated with an iris scan of a user,
- fingerprint data associated with a fingerprint of a user,
- medical data associated with vital signs of a user,
- audio data associated with a voice of a user,
- video data associated with a head movement of a user,
- video data associated with face mimics of a user,
- three-dimensional scanning data associated with a head, body or face of a user,
- touchscreen input data associated with input of a user, and
- imaging data associated with a display content of a further display device.
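Purely for illustration, these modalities could be represented in software as a simple enumeration; the labels are hypothetical and merely mirror the list above.

```python
from enum import Enum, auto


class AuthenticationDataType(Enum):
    """Hypothetical labels for the authentication data modalities listed above."""
    FACE_IMAGE = auto()
    IRIS_SCAN = auto()
    FINGERPRINT = auto()
    VITAL_SIGNS = auto()
    VOICE_AUDIO = auto()
    HEAD_MOVEMENT_VIDEO = auto()
    FACE_MIMICS_VIDEO = auto()
    SCAN_3D = auto()
    TOUCHSCREEN_INPUT = auto()
    EXTERNAL_DISPLAY_IMAGE = auto()
```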
Authentication data may refer to aspects of a person's or user's behaviour that can be sensed or captured. Biometric characteristics can also be used to generate authentication data.
The expression "data being associated to an entity" is to be interpreted that the entity has a causal effect on the data, e.g. in terms of the data format, content or information contained in the data, the process of capturing the data and the like. Data is considered associated to an entity if the data contains information representative for the entity. For example, fingerprint data is associated to a person, if the person's fingerprint is contained in the fingerprint data in a coded fashion such that at least parts of the person's fingerprint can be reproduced by decoding the fingerprint data.
If an advanced security level authentication process is required, additional resource-consuming processes may be used that can involve a full three-dimensional scan, biometric mappings of voice or pin patterns or an in-depth face authentication, as for example disclosed in WO 2020/187719. One may contemplate other authentication processes that rely on specific authentication data associated with the user. Imaging data associated with a display content of a further display device can comprise a 3D pattern that is shown on a display as a moving object or video. Using additional devices in the second authentication process may further fulfill two-factor authentication requirements.
In embodiments of the method, the second authentication process comprises at least two subprocesses that include applying the steps of:
- receiving authentication data,
- evaluating authentication data, and
- determining an authentication parameter to at least two different sets of authentication data. The sets of authentication data are preferably independent from one another and/or obtained through different acquisition processes.
It is, for example, an advantage if the second authentication process includes, first, the user's fingerprint data, and second, the use of video data associated with a head movement or face mimics of the user.
In embodiments, at least two second authentication parameters are generated corresponding to processing at least two different sets of authentication data.
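A minimal sketch of such a two-sub-process arrangement, with hypothetical acquisition and evaluation helpers standing in for real sensor interfaces, could read:

```python
def acquire_fingerprint() -> bytes:
    return b"fingerprint-sample"   # placeholder for fingerprint sensor data


def acquire_head_movement() -> bytes:
    return b"head-movement-video"  # placeholder for video data of a head movement


def evaluate(data: bytes) -> bool:
    return len(data) > 0           # placeholder evaluation of one data set


def second_authentication() -> tuple[bool, bool]:
    """Two sub-processes on independent data sets yield two second authentication parameters."""
    return evaluate(acquire_fingerprint()), evaluate(acquire_head_movement())
```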
In embodiments, the method includes at least one of the steps of:
- generating the access request in response to a user input indicative of a secure software service;
- granting or denying access to the requested access for executing the software service or transaction as a function of the authentication signal; and
- executing the software service or transaction as a function of the authentication signal.

Generating the authentication signal may have the advantage that third-party apps may use the authentication service provided by the operating system of the display device. The method of operating the display device can be implemented as a feature of the operating system for the display device.
In embodiments, the method further comprises the step of generating the imaging data, wherein generating comprises: irradiating at least one illumination pattern comprising a plurality of illumination features onto the scene, in particular using coherent light from a monochromatic light source; and receiving at least one first image comprising a spot pattern originating from the scene in response to the irradiated illumination pattern at an optical sensor device.
Suitable illumination patterns and light sources for generating the imaging data are, for example, disclosed in WO 2020/187719 A1, which is herewith incorporated by reference. Specifically, page 44/line 17 through page 47/line 16 of WO 2020/187719 A1 discloses aspects for generating and analyzing reflection features of scenes or objects illuminated with structured illumination patterns. The illumination patterns and determined reflection features therein can be used in the methods and devices of the present disclosure.
Each of the reflection features may comprise at least one beam profile. As used herein, the term “beam profile” of the reflection feature may generally refer to at least one intensity distribution of the reflection feature, such as of a light spot in the image. The beam profile may be selected from the group consisting of a trapezoid beam profile; a triangle beam profile; a conical beam profile and a linear combination of Gaussian beam profiles.
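As an illustration of the last option, a measured spot cross-section can be approximated by a linear combination of Gaussian profiles. The following sketch fits such a combination with SciPy; all sample values and fit parameters are made up for illustration and are not taken from the disclosure.

```python
import numpy as np
from scipy.optimize import curve_fit


def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    """Linear combination of two Gaussian beam profiles."""
    return (a1 * np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)))


np.random.seed(0)
x = np.linspace(-5.0, 5.0, 200)
# Synthetic "measured" cross-section of a spot, with a little sensor noise.
measured = two_gaussians(x, 1.0, 0.0, 0.8, 0.3, 0.5, 2.0) + 0.01 * np.random.randn(x.size)

params, _ = curve_fit(two_gaussians, x, measured, p0=[1, 0, 1, 0.2, 0, 2])
print(params)  # fitted amplitudes, centres and widths of the two components
```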
The display device is preferably capable of executing apps and, in particular, has a translucent display unit. Using a translucent display has the advantage of covering the illumination source and the optical sensor unit, thereby rendering the device easier to clean and protecting the light source and sensor unit.
In embodiments, the method then further comprises: irradiating the at least one illumination pattern through a translucent display unit; and/or passing the at least one first image comprising the spot pattern through said translucent display unit prior to the step of receiving the at least one first image at the optical sensor unit. In embodiments, the process of obtaining the imaging data associated to the scene further comprises the steps of irradiating illumination light towards the scene, and receiving reflected light from the scene for obtaining a second image of the scene.
The illumination light may be flat light, generated by a flood-light projector device, essentially homogeneously illuminating the scene, thus allowing a second (two-dimensional) image to be captured in terms of the imaging data.
Capturing a first image and a second image comprising different features renders the authentication of a user contained in the scene even more reliable.
Consequently, the first image can include spots having an increased brightness or luminosity, and the second image may include a two-dimensional image of the scene including a face of the user.
In embodiments of the first authentication process, the step of determining a reflection feature then includes: identifying or extracting at least one patch, area, region or footprint of the associated beam profile of the first image, including at least one spot having the highest brightness among the spots; and generating for said identified or extracted spot at least one feature vector or array.
The step of comparing may then include: comparing the generated at least one feature vector with a plurality of predetermined feature vectors being representative for images of authenticated users.
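As a concrete illustration of patch extraction and feature vector generation, the following sketch locates the brightest pixel, cuts out a surrounding patch and derives a toy feature vector. The statistics used here are illustrative assumptions; the disclosure derives its features from beam profiles and material-dependent reflection properties.

```python
import numpy as np


def extract_brightest_patch(image: np.ndarray, size: int = 16) -> np.ndarray:
    """Return a size x size patch centred on the brightest pixel of the image."""
    y, x = np.unravel_index(np.argmax(image), image.shape)
    h = size // 2
    padded = np.pad(image, h, mode="edge")  # guard against spots at the border
    return padded[y:y + size, x:x + size]


def feature_vector(patch: np.ndarray) -> np.ndarray:
    """Toy feature vector: peak value, mean, variance and centre weight of the patch."""
    centre = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return np.array([patch.max(), patch.mean(), patch.var(),
                     centre / (patch.sum() + 1e-9)])
```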
The first image may stem from reflected laser light realizing the illumination pattern with illumination features. This may involve surface and volume or bulk backscattering at or from the scene. Investigations by the applicant have shown that evaluating the brightest spots in the first image is sufficiently reliable for deriving the material properties of the scene. The material property can be used to distinguish between human tissue of a user's face and counterfeited faces, e.g. masks. Detected and characterized material properties of components of the scene can be considered authentication data. In WO 2021/105265 A1, which is hereby incorporated by reference, methods and aspects of evaluation devices for determining beam profiles of reflection features and deriving material properties from feature vectors are disclosed. The steps of identifying or extracting the patches where spots having the highest brightness are located, and generating respective feature vectors, may involve a neural network that is trained accordingly. Training the neural network can involve aspects for identifying brightest spots according to WO 2021/105265 A1.
It is understood that the material features disclosed and explained in WO 2020/187719, WO 2021/105265 and WO 2018/091649, all of which are incorporated by reference, can be deployed as feature vectors used in the herewith disclosed methods and devices involving authentication processes.
In embodiments, the step of comparing the at least one feature vector with reference feature vectors may include deploying a machine-learned classifier, in particular an artificial neural network.
Reference feature vectors may be predetermined by carrying out the method steps for obtaining imaging data associated to reference objects.
In particular, the method may comprise:
- for a plurality of images including a scene with a face of a user, generating at least one reference feature vector for each scene; and
- classifying the generated at least one reference feature vector into an authentication class or category or assigning the reference feature vector to a user.
The scenes used as reference scenes preferably have a known content. For example, scenes with the face of the user to be authenticated can be used as reference scenes. Thus, categorizing or classifying reference feature vectors leads to a collection of reference data that can be used in comparing the feature vectors from the user shown in the respective scene to be authenticated. For example, if a generated feature vector corresponding to the scene to be authenticated is the same as or similar to one of the reference feature vectors, the method or device determines that the user in the scene to be authenticated corresponds to the user associated to the reference vector.
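Purely for illustration, such a classified reference library and similarity comparison could look as follows; the dictionary layout, the cosine similarity and the threshold of 0.95 are assumptions of the sketch, not values from the disclosure.

```python
import numpy as np

REFERENCES: dict[str, list[np.ndarray]] = {}  # user id -> classified reference feature vectors


def register_reference(user: str, vec: np.ndarray) -> None:
    """Add a unit-normalised reference feature vector for a known user."""
    REFERENCES.setdefault(user, []).append(vec / np.linalg.norm(vec))


def authenticate(vec: np.ndarray, threshold: float = 0.95) -> str | None:
    """Return the matching user if the vector is the same as or similar to a reference."""
    vec = vec / np.linalg.norm(vec)
    for user, refs in REFERENCES.items():
        if any(float(vec @ ref) >= threshold for ref in refs):
            return user
    return None  # scene cannot be authenticated
```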
The method, in embodiments, may further include the process of training a machine-learning classifier based on the generated and classified plurality of reference vectors.

In some embodiments of the display device, the display device comprises a secure enclave configured to carry out the processes of comparing the spot pattern comprised in the first and/or second image with reference spot patterns for obtaining the first authentication parameter.
In embodiments, the second authentication process is likewise performed in a secure enclave.
In particular, processes involving pre-classified reference feature vectors should be protected from unauthorized access and may thus be performed within secure enclaves.
A secure enclave may be a secure enclave processor implemented as a system-on-chip that performs security services for other components in the device and that securely communicates with other subsystems in the device, e.g. the processing unit. A secure enclave processor may include one or more processors, a secure boot ROM, one or more security peripherals, and/or other components. The security peripherals may be hardware-configured to assist in the secure services performed by the secure enclave processor. For example, the security peripherals may include: authentication hardware implementing various authentication techniques, encryption hardware configured to perform encryption, secure-interface controllers configured to communicate over the secure interface to other components, and/or other components. In some embodiments, instructions executable by the secure enclave processor are stored in a trust zone in the memory subsystem that is assigned to the secure enclave processor. The secure enclave processor fetches the instructions from the trust zone for execution. In general, the secure enclave processor may be isolated from the rest of the processing subsystems except for a carefully controlled interface, thus forming a secure enclave for the secure enclave processor and its components.
In embodiments, a computer-program or computer-program product comprises a program code for executing the above-described methods and functions by a computerized control device when run on at least one computerized device, in particular when run on the display device. A computer program product, such as a computer program means, may be embodied as a memory card, USB stick, CD-ROM, DVD or as a file which may be downloaded from a server in a network. For example, such a file may be provided by transferring the file comprising the computer program product from a wireless communication network.
In a further aspect, the display device is a smartphone or a tablet computer having a translucent screen as the display unit. In this aspect, the imaging unit is for example a front camera. The imaging unit can be located in an interior of the display device, behind the translucent screen. The imaging unit can include the optical sensor unit and an illumination source for emitting light through the translucent screen to illuminate the object. The optical sensor unit receives light from the object that passes through the translucent screen. The optical sensor unit may generate a sensor signal in a manner dependent on an illumination of a sensor region or light sensitive area of the optical sensor. The sensor signal may be passed on to the processing unit to reconstruct an image of the object captured by the camera and/or to process the image, in particular, along the lines defined above and below with respect to embodiments of the method disclosed.
As used herein, the term “optical sensor unit” generally refers to a device or a combination of a plurality of devices configured for sensing at least one optical parameter. The optical sensor unit may be formed as a unitary, single device or as a combination of several devices. In embodiments, the optical sensor unit comprises a matrix of optical sensors. The optical sensor unit may comprise at least one CMOS sensor. The matrix may be composed of independent pixels such as of independent optical sensors. Thus, a matrix of inorganic photodiodes may be composed. Alternatively, however, a commercially available matrix may be used, such as one or more of a CCD detector, such as a CCD detector chip, and/or a CMOS detector, such as a CMOS detector chip. Thus, generally, the optical sensor unit may be and/or may comprise at least one CCD and/or CMOS device, and/or the optical sensors may form a sensor array or may be part of a sensor array, such as the above-mentioned matrix. As an example, the sensor element may be part of or constitute at least one CCD and/or CMOS device having a matrix of pixels, each pixel forming a light-sensitive area.
As used herein, an “optical sensor” generally refers to a light-sensitive device for detecting a light beam, such as for detecting an illumination and/or a light spot generated by at least one light beam. As further used herein, a “light-sensitive area” generally refers to an area of the optical sensor which may be illuminated externally, by the at least one light beam, in response to which illumination at least one sensor signal is generated. The sensor signals are electronically processed and result in sensor data. The plurality of sensor data relating to the capture of the light reflected by an object may be referred to as imaging data associated to the object.
In embodiments, the display device is a hand-held device, a smartphone, a laptop computer, a banking terminal, a smartwatch, a payment device, an ATM display device and/or a display comprising a translucent display.
Aspects of this disclosure also relate to a use of the display device or a use of the presented method as disclosed above or below with respect to specific embodiments. Purposes of use are selected from the group consisting of: a position measurement in traffic technology, an entertainment application, a security application, a surveillance application, a safety application, a human-machine interface application, a tracking application, a photography application, an imaging application or camera application, a mapping application for generating maps of at least one space, a homing or tracking beacon detector for vehicles, an outdoor application, a mobile application, a communication application, a machine vision application, a robotics application, a quality control application, a manufacturing application.
Further possible implementations or alternative solutions of the invention also encompass combinations - that are not explicitly mentioned herein - of features described above or below in regard to the embodiments. The person skilled in the art may also add individual or isolated aspects and features to the most basic form of the invention.
Further embodiments, features and advantages of the present invention will become apparent from the subsequent description and dependent claims, taken in conjunction with the accompanying drawings, in which:
Fig. 1 shows a display device according to a first embodiment;
Fig. 2 shows components of the display device of Fig. 1;
Fig. 3 shows method steps involved in methods for operating a display device according to a first embodiment;
Fig. 4 shows method steps involved in embodiments for a first authentication process;
Fig. 5 shows a display device according to a second embodiment;
Fig. 6 shows method steps involved in embodiments for a second authentication process;
Fig. 7 shows method steps involved in a process for acquiring imaging data for embodiments of the first authentication process;
Fig. 8 shows method steps involved in embodiments for processes for generating pluralities of reference vectors and for generating an authentication signal; and
Fig. 9 shows method steps involved in method for operating a display device according to a second embodiment.
In the Figures, like reference numerals designate like or functionally equivalent elements, unless otherwise indicated.
Fig. 1 shows a display device 1 according to a first embodiment. The display device 1 is a smartphone and includes a translucent touchscreen 3 as a display unit. The display unit 3 is configured for displaying information. Such information can include a text, image, diagram, video, or the like. Besides the display unit 3, the display device 1 includes an imaging unit 4, a processing unit 5 and an output unit 6. In Fig. 1, the imaging unit 4, the processing unit 5 and the output unit 6 are represented by dashed squares because they are located within a housing 2 of the display device 1, and behind the display unit 3 when viewed from an exterior of the display device 1.
Fig. 2 shows the components of the display device 1 located in the interior of the housing 2 in more detail. Fig. 2 corresponds to a view onto the display unit 3 from an interior of the display device 1, with the imaging unit 4, the processing unit 5 and the output unit 6 being located in front of the display unit 3.
The imaging unit 4 is a front camera. The imaging unit 4 is configured to capture an image of the surroundings of the display device 1. In detail, an image of a scene in front of the display unit 3 of the display device 1 can be captured using the imaging unit 4. The surroundings are here defined as a half-sphere located in front of the imaging unit 4 and centered around a center of the display. The radius of the half-sphere is, for example, 5 m.
The imaging unit 4 includes an illumination source 9 and an optical sensor unit 7 having a light sensitive area 8. The illumination source 9 is an infrared (IR) laser point projector realized by a vertical-cavity surface-emitting laser (VCSEL). The IR light emitted by the illumination source 9 shines through the translucent display unit 3 and generates multiple laser points on the scene surrounding the display device 1. When an object, such as a person, is located in front of the display device 1 (in the surroundings of the display device 1, facing the display unit 3 and the imaging unit 4), an image of the object is reflected towards the imaging unit 4. This reflected image also includes reflections of the laser points.
Instead of the illumination source 9 being an IR laser point projector, it may be realized as any illumination source capable of generating at least one illumination light beam for fully or partially illuminating the object in the surroundings. For example, other spectral ranges are feasible. The illumination source may be configured for emitting modulated or non-modulated light. In case a plurality of illumination sources is used, the different illumination sources may have different modulation frequencies. The illumination source may be adapted to generate and/or to project a cloud of points; for example, the illumination source may comprise one or more of at least one digital light processing (DLP) projector, at least one liquid crystal on silicon (LCoS) projector, at least one spatial light modulator, at least one diffractive optical element, at least one array of light emitting diodes, or at least one array of laser light sources.
The optical sensor 7 is here realized as a complementary metal-oxide-semiconductor (CMOS) camera. The optical sensor unit 7 looks through the display unit 3. In other words, it receives the reflection of the objects or the scene through the display unit 3. The image reflected by the object, such as the person, is captured by the light sensitive area 8. When light from the reflected image reaches the light sensitive area 8, a sensor signal indicating an illumination of the light sensitive area 8 is generated. Preferably, the light sensitive area 8 is divided into a matrix of multiple sensors, which are each sensitive to light and each generate a signal in response to illumination of the sensor.
Instead of a CMOS camera, the optical sensor 7 can be any type of optical sensor designed to generate at least one sensor signal in a manner dependent on an illumination of the sensor region or light sensitive area 8. The optical sensor 7 may be realized as a charge-coupled device (CCD) sensor.
The signals from the light sensitive area 8 are transmitted to the processing unit 5. The processing unit 5 is configured to process the signals received from the optical sensor 7 (which form an image). By analyzing a shape of the laser spots reflected by an object or person in the scene and captured by the optical sensor 7, the processing unit 5 can determine a distance to the object, e.g. a user's face, and material information of the object. In the example of Fig. 1 and 2, the imaging unit 4, the processing unit 5 and the output unit 6 are communicatively coupled, e.g. through an internal bus system or connection lines 10.
The display device 1 shown in Fig. 1 and 2 is configured to perform processes for authenticating objects or users to be authenticated using imaging data associated to a scene including the objects, persons, or faces. This can be achieved through an application or "app" loaded into the display device in terms of a computer program with instructions to be executed by the processing unit 5 and other components of the smartphone or display device 1, respectively.
Fig. 3 shows method steps involved in a method for operating a display device according to a first embodiment. Triggered by a user input or an app request, the display device is requested to perform a specific function, for example, a security function in terms of executing a software service or transaction.
In step S1, an access request signal for executing the specific software service or transaction is received. For example, an app triggered by a user input requests a specific software service from the operating system implemented by the processing unit 5. A banking app may, for example, require the transmission of an encrypted data set or the execution of a money order from the user. Other security relevant scenarios are feasible.
In step S2, a security level is assigned to the access request received by the access request signal. Assigning a security level in step S2 can involve a comparison of the content of the access request signal with predetermined security levels. For example, some smartphone makers provide for a plurality of security levels, wherein the prerequisites for the respective authentication process increase in complexity and resource requirements with the security level.
The predetermined security level can be assigned by the smartphone maker, the user, the software or app itself, and/or the service provider, e.g. if the display device is a smartphone.
In step S3, it is checked if the required and assigned security level exceeds a predetermined security level. For example, one may implement that specific functions of the smartphone 1 can be executed at a security level 3 out of three. For example, the predetermined level 3 corresponds to an authentication process involving face recognition based on a two-dimensional image of the user captured by the camera unit 4. For example, an app may request a specific security level during its installation on the smartphone device. An additional advanced security level can be implemented according to this disclosure, thereby increasing the security of an access control and an access right management function.

If it is evaluated in step S3 that the assigned security level does not exceed the predetermined security level, for example level 3, a first authentication process is carried out in step S4. The first authentication process generates a first authentication parameter in step S4. The authentication parameter can be an authentication score. For example, the first authentication parameter may indicate that a user image retrieved by the imaging unit 4 fulfills the requirements of a Face-ID, i.e. corresponds to an image of a registered user, who is thus allowed to use the requested functionality of the smartphone.
However, if, in step S3, it is detected that the security level assigned to the access request exceeds the predetermined security level, e.g. level 3, the second authentication process is initiated in step S5. The second authentication process S5 provides a second authentication parameter that is considered in step S6. Thus, an additional security level 4 is implemented.
Step S6 involves generating an authentication signal based on the first and/or second authentication parameter. If the second authentication process S5 is carried out, both authentication parameters must be considered in step S6. If the authentication is successful, i.e. both authentication processes in steps S4 and S5 confirm that the access request can be granted, a respective authentication signal is generated.
In step S7, access is granted so that in the following step S8 the software service or transaction that is requested through the access request signal in step S1 can be carried out or executed.
Fig. 4 shows method steps involved in embodiments for the first authentication process.
In step S101, imaging data associated to the scene comprising a user's face to be authenticated is received. For example, the imaging unit 4 provides imaging data obtained by capturing a first image comprising a spot pattern originating from the object. The spot pattern occurs in response to an irradiated illumination pattern, wherein the illumination pattern comprises a plurality of illumination features. The processing unit 5 receives the imaging data. Next, the received imaging data containing a first image with a spot pattern are processed in step S120. As a result of the data processing in step S120, the processing unit 5 outputs the first authentication parameter, indicating an authentication score of the scene to be authenticated, to the output unit 6. The output unit 6 may serve as an interface for security commands. In embodiments, the output signal that is available at the output unit 6 is a signal indicative of grant or denial of access to the requested function.
The processing of the imaging data comprises steps S102, S103 and S104. In step S102, at least one reflection feature corresponding to a spot in the first image is determined. The reflection feature can have an associated beam profile. For example, a plurality of bright spots as a result of structured light impinging on the surface and/or bulk of objects in the scene to be authenticated is detected as a first image by the optical sensor unit 7. The structured light may be coherent laser light produced by the infrared (IR) laser point projector 9.
For example, a spot pattern is projected onto the scene, and a CMOS camera as imaging unit 4 captures the reflected spot pattern. The intensity distribution of the spots can give rise to specific reflection features that can be representative of an identity of a user face contained in the scene. Reflection features may, for example, include a ratio of a surface and a volume backscattering, a beam or edge profile, a contrast of laser speckle signals, a ratio of diffusive or direct reflection and the like. By carrying out step S102, a reflection feature is obtained. Material dependent reflection features are known from WO 2020/187719, WO 2021/105265 and WO 2018/091649.
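As an illustration of how such reflection features might be computed from a single spot patch, the following sketch derives two toy proxies; the formulas are simplified assumptions and do not reproduce the methods of the cited documents.

```python
import numpy as np


def reflection_features(patch: np.ndarray) -> dict[str, float]:
    """Toy proxies for two reflection features of a single spot patch."""
    peak = float(patch.max())          # proxy for direct/surface reflection
    diffuse = float(np.median(patch))  # proxy for diffuse/volume backscattering
    return {
        "surface_to_volume_ratio": peak / (diffuse + 1e-9),
        "speckle_contrast": float(patch.std() / (patch.mean() + 1e-9)),
    }
```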
In the next step S103, the reflection feature obtained from the imaging data associated to the scene to be authenticated is compared with reference reflection features. Hence, in a comparison step S103, the spot pattern comprised in the first image obtained from the scene is compared with reference spot patterns to obtain a comparison result. The reference spot patterns or reference reflection features are based on reference data for reference scenes with objects or reference material properties of objects. Next, in step S104, the first authentication parameter is determined as a function of the result of the comparison between the reflection feature and reference reflection features.
A library of reference detection features with a mapping to scenes and/or images of preregistered user faces can be used; for example, a reference library or database can contain a specific reference reflection feature or reference spot pattern that is associated with a specific user. If the reflection feature determined in step S102 does not match the reference reflection feature corresponding to a registered user's face, or is evaluated as dissimilar in the comparing step S103, it is determined in step S104 that the scene or user, respectively, cannot be authenticated as a registered user.
Fig. 6 shows method steps involved in embodiments for a second authentication process. The second authentication process S5 involves three consecutive steps S51, S52 and S53. In the first step S51, authentication data is acquired, for example through user interfaces or sensors of the display device 1. The authentication data can involve imaging data or other multi-modal data that are associated with a specific user or registered user.
In an evaluation step S52, the received authentication data is evaluated and a second authentication parameter or score is generated. The authentication parameter indicates whether the analyzed authentication data is genuine and thus may indicate that the specific user requesting access to a function or process in step S1 is genuine. Next, in step S53, the second authentication parameter is determined and output to the processing unit 5.
The second authentication parameter can be used in the further authentication process. For example, it is eventually evaluated whether the first and the second authentication parameters match with each other, thus indicating the same requesting entity or user.
In particular, the second embodiment of the display device shown in Fig. 5 is suitable for carrying out additional aspects of the first authentication process that are illustrated in Figs. 7 and 8.
Fig. 5 shows a display device 1 according to the second embodiment. In addition to the IR laser point projector 9 (patterned light projector) of Fig. 1 and 2, the display device 1 of Fig. 5 further includes a flood light projector 11 for emitting flood light through the display unit 3 towards a scene. The components of the display device are communicatively coupled to each other which is indicated by the arrows and dashed line 18.
In Fig. 5, the neural network used for general image processing by the processing unit 5 is represented by the reference numeral 12. In Fig. 7, the output unit 6 forms an interface to apps for communication between the processing unit 5 and the apps. As the information relating to authenticating an object is a security issue, processes involving the reference vectors are executed within a secure enclave 13 including a trained neural network 14 implemented as a classifier.

In step S110, a structured light or an illumination pattern is irradiated towards the scene to be authenticated. For example, a coherent light source generates an illumination pattern with a plurality of illumination features that impinge onto the objects within the scene including a face of the user.
At the objects in the scene, backscattering on the surface and in its bulk volume occurs. Consequently, reflected light is received in step S111 by the imaging unit 4. The light originating from the scene in response to the irradiated illumination pattern comprises a first image with the spot pattern.
The imaging unit 4 processes the electronic signals from the optical sensor unit (sensor signals) and provides digital imaging data in step S112.
In addition to the first image, in steps S113, S114 and S115, a second two-dimensional image of the scene to be authenticated is acquired. To this end, the flood light projector 11 of the display device 1 generates and emits illumination light towards the scene to be authenticated in step S113. Hence, illumination light is irradiated from the display device to the scene and thus onto the face of the user.
In step S114, the reflected light, in particular visible light, is received by the optical sensor unit 7 within the imaging unit 4. Again, the imaging unit 4 processes the optical sensor unit signals and provides two-dimensional imaging data in step S115.
Hence, the processing unit 5 obtains two images, first an image comprising the spot pattern and second a two-dimensional image of the scene. Both images are merged in step S116 into imaging data to be analyzed in a further process. The imaging data contains information about the reflection features relating to the spot pattern and two-dimensional image information on the scene. In step S117, the imaging data is provided to a neural network 12 used within the processing unit 5. The neural network 12 is implemented to generate feature vectors or feature arrays that relate to the brightest spots within the images of the scene.
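A minimal sketch of the merging in step S116, assuming both images share the same resolution, could stack them as channels of one array:

```python
import numpy as np


def merge_images(first_image: np.ndarray, second_image: np.ndarray) -> np.ndarray:
    """Stack the spot-pattern image and the flood-lit 2D image as channels (step S116)."""
    if first_image.shape != second_image.shape:
        raise ValueError("both images must share the same resolution")
    return np.stack([first_image, second_image], axis=-1)  # shape: H x W x 2
```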
Fig. 8 shows method steps involved in the first authentication process using reference feature vectors. First, potential sub-steps of step S102, i.e. determining at least one reflection feature corresponding to a spot in the first image, are explained; the reflection feature may have an associated beam profile. Step S102 may be divided into steps S121, S122 and S123. In step S121, spots with increased brightness or luminosity relating to the spot pattern of the first image are identified. Methods for identifying the spots are, for example, disclosed in WO 2021/105265 A1.
Once spots or regions with a high brightness are identified in step S121, those are extracted in step S122. For example, patches around the brightest spots are extracted. The patches may have a square, rectangular or circular shape and should include at least the footprint of the associated beam profile of the spot under consideration. There can be a plurality of spots having a sufficient brightness to be considered brightest spots. One may contemplate filtering the image according to predetermined criteria so that only a suitable number of spots are further processed. In embodiments only one brightest spot - a central main spot - is identified, and the respective patch is extracted.
Next, a feature vector for each extracted patch with a brightest spot is generated in step S123. The steps of extracting the brightest spots and generating respective feature vectors in step S123 can be carried out using the neural network that is appropriately configured. The respective feature vector can include data or information relating to the ratio of surface and volume backscattering in the respective spot, a beam profile, a contrast of a laser speckle signal and/or a ratio of diffusive or direct reflection from the scene. The feature vector may include aspects as disclosed in WO 2020/187719, which is hereby incorporated by reference.
In particular, a plurality of feature vectors are generated or calculated based on the imaging data of the respective scene used for authentication. In step S124, the feature vectors are compared with a plurality of reference feature vectors that are pre-classified so that a match or high similarity with one of the reference feature vectors indicates a specific registered user. This comparison step may involve a trained neural network 14 implemented in a secure enclave 13. Comparing the obtained feature vectors referring to the scene with the plurality of reference feature vectors in a reference library or database can be implemented by a similarity measure in the feature vector space.
Based on the comparison result, the first authentication parameter is generated in step S125. For example, the feature vectors stemming from the imaging data of the images associated to the scene used as authentication data correspond to a reference feature vector associated to a specific registered user. Thus the authentication parameter may indicate whether the user requesting access for a software service or transaction is approved according to an access right management database.

Method step S160, indicated in a dashed box, refers to providing the plurality of reference vectors. One reference vector, for example, is generated by first selecting a scene with a sample user's face and objects in step S161. A plurality of sample scenes involving the same user may be needed to generate reference vectors for one user. The sample scenes are each processed under steps S121 through S123, i.e. imaging data is acquired by irradiating an illumination pattern comprising a plurality of illumination features, e.g. by a light source equivalent to the laser light source 9. Further, the sample scenes are irradiated by flat illumination light, for example by a light source corresponding or equivalent to the flood light projector 11. Thus a respective first and second image is obtained.
Next, as explained above, reference feature vectors are generated according to step S123 and classified according to the known identities of the user face in the scenes. In step S163, the respective reference vectors are classified or mapped to an access right for a secure software service or specific function of the display device.
One may contemplate training a neural network with the plurality of reference vectors that are classified with respect to access rights. The trained neural network 14 can then be used in step S124 to intrinsically compare the feature vector obtained in step S123 with the reference vectors and to generate the authentication parameter.
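Purely as an illustration of such training, the following sketch fits a small classifier on randomly generated stand-ins for the classified reference vectors; scikit-learn, the network size and all data here are assumptions of the sketch, not part of the disclosure.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
reference_vectors = rng.normal(size=(40, 8))  # stand-ins for classified reference feature vectors
access_granted = rng.integers(0, 2, size=40)  # stand-ins for access-right labels

# A small MLP stands in for the trained neural network 14.
classifier = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
classifier.fit(reference_vectors, access_granted)

new_feature_vector = rng.normal(size=(1, 8))  # would come from step S123
first_authentication_parameter = int(classifier.predict(new_feature_vector)[0])
```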
Fig. 9 shows method steps involved in a method for operating a display device according to an alternative embodiment. The method steps can equally be executed by the display device according to the first and/or second embodiment according to Fig. 1 or Fig. 5 of this disclosure.
The flowchart of Fig. 9 shows, in step S200, a step of receiving sensor signals from the CMOS camera in the imaging unit 4. For example, the scene in front of the display device or smartphone is captured by a CMOS camera. Hence, imaging data is obtained that can be used in subsequent authentication processes. In step S201, a pre-processing of the pixel data included in the imaging data is performed. This pre-processing of the sensor signals from the optical sensor unit 7 can involve the steps depicted in Fig. 7 above. Pre-processing may include filtering the sensor signal or the corresponding imaging data according to a filter, e.g. a bandpass.
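A sketch of such a band-pass pre-processing, assuming a simple Butterworth filter from SciPy applied row-wise, could read as follows; the filter order and cut-off frequencies are illustrative only.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# 4th-order Butterworth band-pass; the normalised cut-off frequencies are illustrative.
b, a = butter(4, [0.05, 0.4], btype="bandpass")


def preprocess(sensor_rows: np.ndarray) -> np.ndarray:
    """Apply the band-pass filter along each row of the sensor signal."""
    return filtfilt(b, a, sensor_rows, axis=-1)


filtered = preprocess(np.random.rand(4, 256))  # e.g. four rows of 256 samples
```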
In step S202, a low-level representation of the imaging data is generated. Feature vectors referring to spot patterns of the reflected light from the scene and, in particular, the face of the user can be considered a low-level representation. In the next step S203, the processing device 5 checks if an advanced security level, e.g. level 4, is required. Level 4 may be indicative of an advanced authentication process mandating more authentication data than the face recognition algorithm of security level 3 requires. If the required security level for granting access to a function of the phone is lower than level 4, step S204 is carried out.
Step S204 involves the first authentication process as described above with respect to Fig. 4. The authentication process is based on the low-level representation of the retrieved imaging data. If, in step S205, the generated feature vectors match with the reference vectors, thus indicating that access to the required function can be granted, step S205 triggers the generation of operating parameters to unlock the device or to execute the desired function in step S207. If, in step S205, it is found that the feature vectors compared with the reference feature vectors do not match, step S206 requires an alternative unlock mechanism, such as, for example, the input of a specific user PIN.
If, in step S203, an advanced security level 4 is required, an advanced authentication process is initiated in step S208, wherein the advanced authentication process requires the execution of the first authentication process and a second authentication process, as, for example, depicted with respect to Fig. 6 above.
In step S208, a library of potential additional authentication procedures or processes provided in step S210 can be used. The operating system of the display device may provide for a library implementing authentication processes, for example being based on a full three-dimensional point scanning match with a previously registered 3D head representation of the user, or on the user's face mimics variations, as, for example, laughing, eyebrow movements or eye movements. The library may provide authentication processes based on biometric parameters of the user, as, for example, involving the user's voice recognition or vital signs, such as pulse, blood flow or temperature. The library may further include authentication processes based on specific movements of the user's head or face characteristics.
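For illustration, such a library could be organised as a registry of callables; the process names and placeholder scores below are hypothetical stand-ins for the procedures listed above.

```python
from typing import Callable, Dict

AUTH_LIBRARY: Dict[str, Callable[[], float]] = {}  # name -> process returning a score


def register(name: str):
    """Decorator that adds an authentication process to the library (step S210)."""
    def wrap(fn: Callable[[], float]) -> Callable[[], float]:
        AUTH_LIBRARY[name] = fn
        return fn
    return wrap


@register("3d_head_scan")
def three_d_head_scan() -> float:
    # Placeholder: would match a point scan against a registered 3D head representation.
    return 0.0


@register("voice_recognition")
def voice_recognition() -> float:
    # Placeholder: would evaluate biometric voice parameters of the user.
    return 0.0


selected = AUTH_LIBRARY["voice_recognition"]  # step S208: select a further process
score = selected()
```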
If, in step S209, a sufficiently high authentication score is achieved, i.e. the authentication processes confirm the authenticity of the user with sufficient probability, step S212 is carried out, thus generating the operating parameters, such as unlocking the device or executing the requested function. If the advanced authentication process fails in step S209, the device may request an alternative unlock mechanism in step S211.

The present disclosure provides for improved methods and systems for access control to a software service or transaction in computerized devices. In particular, the use of imaging data for different security levels required for accessing a specific function of the device reduces resource consumption and improves the overall security of the device.
Although the present invention has been described in accordance with preferred embodiments, it is obvious to the person skilled in the art that modifications are possible in all embodiments. For example, illumination devices and imaging devices do not need to be arranged in or on the same housing. The sequence of method steps carried out does not need to include all steps mentioned in Figs. 3 and 6 - 9. It is understood that all the disclosed aspects of methods may relate to computer-implemented methods.
Reference signs:
1 display device
2 housing
3 display unit
4 imaging unit
5 processing unit
6 output unit
7 optical sensor unit
8 light sensitive area
9 light source
10 line
11 flood light projector
12 neural network
13 secure enclave
14 neural network
S1 receiving access request signal
S2 assigning security level
S3 checking if advanced security level required
S4 first authentication process
S5 second authentication process
S6 generating authentication signal
S7 granting or denying access
S8 executing requested software service/transaction
S51 receiving authentication data
S52 evaluating the received authentication data
S53 determining a second authentication parameter
S101 receiving imaging data
S102 determining reflection features
S103 comparing reflection features with reference reflection features
S104 determining a first authentication parameter
S110 irradiating structured light/illumination pattern on object
S111 receiving spot pattern
S112 providing imaging data with first image
S113 irradiating illumination light on object
S114 receiving reflected light
S115 providing imaging data with second image
S116 merging first and second imaging data
S117 providing imaging data
S120 processing imaging data
S121 identifying spot with increased brightness
S122 extracting imaging data corresponding to brightest spot(s) in first image
S123 generating feature vector
S124 comparing feature vectors with reference vectors
S125 generating first authentication parameter
S160 providing plurality of reference vectors
S161 selecting material/product sample
S162 generating reference feature vector
S163 classifying reference vector
S200 receiving sensor signals from CMOS camera (imaging data)
S201 preprocessing sensor signals
S202 generating low-level representation (feature vectors)
S203 security level check
S204 first authentication process
S205 check match with reference vectors
S206 require further authentication process
S207 unlock device/service/app
S208 select further authentication process
S209 check match with reference vectors and further authentication score/parameter
S210 library of further authentication processes
S211 require further authentication process
S212 unlock device/service/app

Claims
1. A method for operating a display device (1) having at least one processing unit (5) configured to execute apps, the method comprising the steps of: receiving (S1) an access request signal for executing a software service or transaction; assigning (S2) a security level to the access request; if the assigned security level exceeds a predetermined security level, initiating an advanced security level authentication process including a first authentication process (S4) and a second authentication process (S5); wherein the first authentication process (S4) comprises the steps of: receiving imaging data associated to a scene (S101), said imaging data being obtained by the process of irradiating (S110) at least one illumination pattern comprising a plurality of illumination features onto the scene, and receiving (S111) at least one first image comprising a spot pattern originating from the scene in response to the irradiated illumination pattern; determining (S112), by processing the received imaging data, at least one reflection feature corresponding to a spot in the first image; comparing (S113), by processing the at least one reflection feature, the spot pattern comprised in the first image with reference spot patterns for obtaining a comparison result; and determining (S114) a first authentication parameter as a function of the comparison result; wherein the second authentication process (S5) comprises the steps of: receiving (S51) authentication data associated to the scene; evaluating (S52) the received authentication data for obtaining an evaluation result; and determining (S53) a second authentication parameter as a function of the evaluation result; generating (S6) an authentication signal as a function of the first and the second authentication parameter.
2. The method according to claim 1, wherein the authentication data includes at least one of the group of: imaging data associated with an iris scan of a user; fingerprint data associated with a fingerprint of a user; medical data associated with vital signs of a user; audio data associated with a voice of a user; video data associated with a head movement of a user; video data associated with face mimics of a user; three-dimensional scanning data associated with a head, body or face of a user; touchscreen input data associated with input of a user; and imaging data associated with a display content of a further display device.
3. The method according to claim 1 or 2, wherein the second authentication process (S5) comprises at least two sub-processes comprising applying the steps of receiving, evaluating and determining to at least two sets of authentication data, the sets being independent from one another and/or obtained through different acquisition processes, wherein at least two second authentication parameters are generated.
4. The method according to any one of claims 1 - 3, further comprising at least one of the steps of: generating the access request in response to a user input indicative of a secure software service; granting or denying (S7) access to the requested access for executing the software service or transaction as a function of the authentication signal; and executing (S8) the software service or transaction as a function of the authentication signal.
5. The method according to any one of claims 1 - 4, further comprising the step of generating the imaging data, generating comprising: irradiating (S110) at least one illumination pattern comprising a plurality of illumination features onto the scene, in particular using coherent light from a monochromatic light source (9); and receiving (S111) at least one first image comprising a spot pattern originating from the scene in response to the irradiated illumination pattern at an optical sensor unit (7).
6. The method according to claim 5, further comprising: irradiating (S110) the at least one illumination pattern through a translucent display unit (3); and/or passing the at least one first image comprising the spot pattern through said translucent display unit (3) prior to the step of receiving (S111).
7. The method according to claim 5 or 6, wherein the step of generating the imaging data further comprises irradiating (S113) illumination light onto the object, and receiving (S114) reflected light from the scene for obtaining a second image representing a two-dimensional image of the scene.
8. The method according to any one of claims 5 - 7, wherein the first image includes spots having an increased brightness, and the step of determining (S102) includes: extracting (S122) at least one patch of the associated beam profile of the first image including at least one spot having highest brightness among the spots; and generating (S123) for said extracted spot at least one feature vector; wherein the step of comparing (S3) includes: comparing (S124) the generated at least one feature vector with a plurality of predetermined reference feature vectors being representative for authenticated scenes or users.
9. The method according to claim 5, wherein the step of comparing (S124) the at least one feature vector with reference feature vectors includes deploying a machine learned classifier, in particular, an artificial neural network.
10. A display device (1) comprising: a monochromatic light source (9) configured to generate at least one illumination pattern comprising a plurality of illumination features; an optical sensor unit (4) configured to capture at least one first image comprising a spot pattern originating from a scene and to generate imaging data associated with the scene; at least one processing unit (5) configured: to receive (S1) the imaging data; to determine (S102), by processing the received imaging data, at least one reflection feature corresponding to a spot in the first image; to compare (S103), by processing the at least one reflection feature, the spot pattern comprised in the first image with reference spot patterns for obtaining a comparison result; and to determine (S104) a first authentication parameter as a function of the comparison result; and in response to an access request signal for executing a software service or transaction, to initiate a further authentication process (S5) for determining a second authentication parameter, and an output unit (3) configured to output an authentication signal as a function of the first and the second authentication parameter.
11. The display device according to claim 10, wherein the processing unit (5) is configured to cause the display device (1) to carry out the method steps according to any one of claims 1 - 9.
12. The display device according to claim 10 or 11, comprising a secure enclave (13) configured to carry out the process of comparing (S3) and of determining (S4).
13. The display device according to any one of claims 10 - 12, wherein the display device is a handheld device, a smartphone, a laptop computer, a banking terminal, a smartwatch, a payment device, and/or a device comprising a translucent display.
14. Use of a display device and a method according to any one of the preceding claims relating to a display device, for a purpose of use selected from the group consisting of: a position measurement in traffic technology; an entertainment application; a security application; a surveillance application; a safety application; a human-machine interface application; a tracking application; a photography application; an imaging application or camera application; a mapping application for generating maps of at least one space; a homing or tracking beacon detector for vehicles; an outdoor application; a mobile application; a communication application; a machine vision application; a robotics application; a quality control application; a manufacturing application.
15. A computer readable medium storing computer program instructions which, when executed by a processing unit (5) in a device (1) according to any one of claims 10 - 13, cause the device (1) to perform operations comprising the method according to any one of claims 1 - 9.
PCT/EP2023/053795 2022-02-15 2023-02-15 Method for operating a display device, and display device having a secure authentication process WO2023156478A1 (en)

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
EP22156832.2 | 2022-02-15 | |
EP22156832 | 2022-02-15 | |

Publications (1)

Publication Number: WO2023156478A1 (en)

Family ID: 80953408

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/EP2023/053795 (WO2023156478A1, en) | Method for operating a display device, and display device having a secure authentication process | 2022-02-15 | 2023-02-15

Country Status (1)

Country | Document
WO | WO2023156478A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
WO2018091649A1 | 2016-11-17 | 2018-05-24 | Trinamix Gmbh | Detector for optically detecting at least one object
US20180285544A1 * | 2017-03-28 | 2018-10-04 | Samsung Electronics Co., Ltd. | Method for adaptive authentication and electronic device supporting the same
US20190080153A1 * | 2017-09-09 | 2019-03-14 | Apple Inc. | Vein matching for difficult biometric authentication cases
WO2020187719A1 | 2019-03-15 | 2020-09-24 | Trinamix Gmbh | Detector for identifying at least one material property
WO2021105265A1 | 2019-11-27 | 2021-06-03 | Trinamix Gmbh | Depth measurement through display


Legal Events

Code 121 (Ep): the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 23704794
Country of ref document: EP
Kind code of ref document: A1