EP3295696A1 - In-device privacy control mechanism for wearable smart devices - Google Patents

In-device privacy control mechanism for wearable smart devices

Info

Publication number
EP3295696A1
Authority
EP
European Patent Office
Prior art keywords
face
video stream
person
smart device
buffer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP15730693.7A
Other languages
German (de)
English (en)
Inventor
Pan Hui
Ji Yang
Muhammad Haris
Christoph Peylo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deutsche Telekom AG
Original Assignee
Deutsche Telekom AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deutsche Telekom AG filed Critical Deutsche Telekom AG
Publication of EP3295696A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W12/00: Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/02: Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31: User authentication
    • G06F21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. a local or distributed file system or database
    • G06F21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G06F21/6254: Protecting personal data, e.g. for financial or medical purposes, by anonymising data, e.g. decorrelating personal data from the owner's identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/77: Retouching; Inpainting; Scratch removal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions

Definitions

  • the present invention generally relates to a control mechanism for privacy in wearable computing devices (wearable smart devices) equipped with a camera, in particular in smart glasses.
  • the present invention relates to a method of providing a framework to ensure the privacy of people who are detected by a camera, preferably a digital camera of a wearable computing device.
  • the framework of the present invention can be provided by a system and/or a method which guarantees that privacy of people on photos or videos taken by wearable smart devices is preserved, preferably by using in-device techniques.
  • Augmented reality is a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified by a computer. As a result, the technology functions by enhancing one's current perception of reality. Augmentation is conventionally in real-time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology, the information about the surrounding real world of the user becomes interactive. Artificial information about the environment and its objects can be overlaid on the real world.
  • Smart glasses are wearable computing devices in the form of computerized eyeglasses, which typically comprise an optical head-mounted display (OHMD). Due to the latest developments in wearable technology, modern smart glasses typically possess enhanced data-processing functionality similar to a smartphone or tablet and are able to run sophisticated applications. These devices can also include special features such as augmented reality overlay, GPS and mapping capability. Despite all the advantages, these devices also give rise to new challenges regarding privacy. A main feature in this respect is the camera used in these devices. Since the smart glass is controlled by the wearer, the wearer can control when photos or videos are taken, i.e., a wearer would typically not ask for permission from those around him or her (see Fig. 1).
  • OHMD optical head-mounted display
  • the present invention is preferably applicable, but not limited, to smart glasses, as long as the devices are wearable (e.g. a smart watch worn around the wrist). Taking a photo or a video with the camera of such a device may be easier for bystanders to notice than with smart glasses, but can still be done without people in the surroundings being aware of it.
  • the smart device to which the present invention is applicable may not necessarily be a single integrated device.
  • the present invention covers a device construction in which a camera module with a communication interface is separately provided as a wearable unit, and other elements (which are not necessarily wearable) are integrated as a separate unit for communication with the camera module.
  • face recognition capabilities can be easily integrated into AR applications.
  • photos and videos of people from the on-device camera of the device can be identified with facial recognition software.
  • the holder of the device could be presented with the person's social networking service profiles (e.g. Facebook profile, Twitter feed) or Internet search results linked to his/her profile.
  • Individuals typically do not expect such an automated link with their Internet data when they move in public; they have an expectation of anonymity.
  • the present invention is preferably applicable, but not limited to AR applications (requiring a display) such as smart glasses as long as the wearable computing devices are equipped with a camera.
  • Google Glass™ is known in the art.
  • This smart glass is capable of recording audio, video, and photos.
  • This device can also use GPS (Global Positioning System) for location-tracking and directions.
  • the device is further capable of handling computational tasks as it is also equipped with a processor chip and a GPU. For instance, there exists an application to take pictures surreptitiously by winking. To an outside individual it is difficult or impossible to recognize whether the user of the smart glass is recording audio or video with it.
  • all data recorded by the smart glass including photos, videos, audio, location data, and user data, can be stored in a cloud server, e.g., on Google's cloud servers.
  • Present smart glasses can connect to the Internet via Wi-Fi, or tether to the user's smartphone. Moreover, even when temporarily offline, the smart glass can record audio and/or video.
  • control mechanism which ensures privacy of people around wearable smart devices.
  • device owners can still take photos or videos of those who are not disturbed by it (e.g. their friends) without violating privacy.
  • the present invention proposes an in-device automated privacy framework for a wearable smart device, such as smart glasses or smart watches.
  • the preferable goal of this invention is to protect the privacy of individuals while preserving sufficient information.
  • the framework comprises human face detection in the images from the on-device camera. After the face detection, tracking of the person is performed in order to recognize a certain gesture of the person. A robust tracking algorithm is preferably used, as any error in tracking will typically decrease the chances of recognition.
  • the framework further comprises an intelligent de-identification.
  • the framework of the present invention will provide balance between privacy and utility.
  • the framework of the present invention is preferably intelligent to preserve privacy while keeping functionality of camera and sufficient information in the images/videos.
  • a method for preserving privacy of a person visible in a camera module of a wearable smart device is performed within the smart device, wherein the method comprises the steps of: taking a video stream by the camera module and temporarily storing it in a buffer; detecting a face of the person in the video stream stored in the buffer by face recognition; tracking, after the face-detecting step, the person in the video stream stored in the buffer, and determining whether the person has made a predefined gesture by detecting the predefined gesture in the video stream stored in the buffer; and de-identifying, in the video stream stored in the buffer, the face of the person who has made the predefined gesture by removing facial identification information from a video segment which will be taken by the camera module after the predefined gesture has been detected at the determining step. A minimal sketch of this sequence follows.
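
For illustration only, the claimed sequence of steps maps onto a small per-frame loop. The following Python/OpenCV sketch is not the patented implementation: the Haar cascade, the always-opt-out gesture stub and all function names are assumptions introduced here.

```python
import cv2

# Sketch only: a bundled Haar cascade stands in for the in-device
# face detection module; a production device would use its own detector.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_negative_gesture(frame, box):
    """Placeholder for the tracking/gesture step; a real module would
    track the person and match the gesture against a database."""
    return True   # for the demo, assume every detected person opted out

def deidentify(frame, box):
    """Placeholder de-identification: heavy blur over the face box."""
    x, y, w, h = box
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(
        frame[y:y + h, x:x + w], (51, 51), 0)

def process_buffered_frame(frame):
    """One pass of the claimed steps over a single buffered frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for box in FACE_CASCADE.detectMultiScale(gray, 1.1, 5):
        if is_negative_gesture(frame, box):
            deidentify(frame, box)     # remove facial ID information
    return frame
```
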
  • the smart device de-identifies, in a video segment stored in the buffer, the face of the person.
  • a predefined gesture e.g. hand gesture, facial expression
  • such a future video segment may be a single image (photo) or more (rather than a seamless video stream) taken by the camera module after the predefined gesture has been made.
  • the user of the smart device may stop taking a video stream for a second and then take a photo, or may take a video stream and at the same time take a photo. In this case, the person's face of the photo can be de-identified.
  • De-identification is an essential part of the present framework of the privacy control mechanism. De-identification preferably maintains a balance between utility and privacy. Utility is a function of the amount of features or information in the image. That is, it is preferable that no more information of each image (constituting the video stream) than is necessary to de-identify the face of the person is image-processed upon de-identification, in order to keep as much information as possible.
  • the video stream in which the face of the person has been de-identified is stored in a storage within the smart device.
  • This video stream stored in this storage is ready for access by the user of the smart device, for access by an application running in the smart device and/or for supply to an external device which is connected with the smart device. This guarantees that the video stream stored in the buffer before the de-identification has been performed is not accessible for any purpose other than the de-identification (by a de-identification module).
  • face features obtained by the face recognition at the face-detecting step are preserved in a cache for a predetermined period of time after it has been determined that the face of the person has disappeared from the video stream.
  • the de-identifying step restarts when it has been determined that the face of the person reappears in the video stream before the period of time has elapsed.
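
A minimal sketch of such a time-limited feature cache, assuming a monotonic clock and a person identifier supplied by the tracker (both assumptions; the patent does not prescribe a data structure):

```python
import time

class FaceFeatureCache:
    """Sketch of the claimed cache: face features persist for a fixed
    period after the face disappears, so de-identification can resume
    without a new gesture if the person reappears in time."""

    def __init__(self, ttl=30.0):          # ttl in seconds, illustrative
        self.ttl = ttl
        self._entries = {}                 # person_id -> (features, last_seen)

    def remember(self, person_id, features):
        self._entries[person_id] = (features, time.monotonic())

    def recall(self, person_id):
        entry = self._entries.get(person_id)
        if entry is None:
            return None
        features, last_seen = entry
        if time.monotonic() - last_seen > self.ttl:
            del self._entries[person_id]   # window elapsed: redo the gesture
            return None
        return features                    # still valid: resume de-identification
```
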
  • the person may temporarily not be face-detected, for example if he turns away, e.g. for a few seconds, or moves out of the view range of the camera.
  • This embodiment is also applicable to a case where the user of the smart device stops taking a video stream for a while and then restarts taking a video stream and/or a photo.
  • the de-identifying step is retroactively performed on a video segment stored in the buffer between the time when the face was detected and the time when the predefined gesture was detected. Otherwise, since the de-identification starts only from the time when the gesture has been detected, the face of the person would remain identifiable in frames recorded before the person made the gesture.
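
One way to picture the retroactive variant, assuming the buffer holds timestamped frames and the tracker can report face boxes per timestamp (both helper names are invented for illustration):

```python
def deidentify_retroactively(buffer, t_face, t_gesture, boxes_at, deidentify):
    """Retroactive pass (sketch): after a negative gesture is confirmed,
    scrub the already-buffered segment between the time the face was
    first detected (t_face) and the time the gesture was detected
    (t_gesture). `buffer` holds (timestamp, frame) pairs and `boxes_at`
    maps a timestamp to the tracked face boxes; both are assumptions."""
    for ts, frame in buffer:
        if t_face <= ts <= t_gesture:
            for box in boxes_at(ts):
                deidentify(frame, box)   # modify in place, inside the buffer
```
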
  • a video segment in the past can be subjected to the de-identification process, which further improves privacy of people in the surroundings.
  • such a past video segment may also be a single image (photo) or several, if any (rather than a seamless video stream), which were taken by the camera module.
  • the smart device may allow the user to take a video stream and a photo at the same time.
  • the method is implemented as a program which is located at a layer that is directly above an operating system kernel layer, wherein said layer adjacent to the kernel layer is not accessible by applications running in the smart device that are programmed by application developers.
  • the purpose of this preferred requirement is to protect the framework from hacking attacks by developers.
  • developers refer to those who can write code to access services (camera, speaker, call log) for their applications that are intended to run in the smart device.
  • a wearable smart device e.g. smart glass
  • a wearable smart device comprises:
  • a camera module for taking a video stream
  • a buffer for temporarily storing the video stream
  • a face detection module for detecting a face of a person in the video stream in the buffer by face recognition
  • a gesture detection module for tracking the person in the video stream stored in the buffer after the face of the person has been detected by the face detection module, and determining whether the person has made a predefined gesture by detecting the predefined gesture in the video stream stored in the buffer; and a de-identification module for de-identifying, in the video stream stored in the buffer, the face of the person who has made the predefined gesture.
  • the present invention relates to a computer program comprising computer-executable program code adapted to implement the method of the present invention when executed.
  • the term “mechanism” or “framework” can relate to a set of methods and a system.
  • "video stream" and "video" in the specification are interchangeable; they consist of multiple images (frames). Each image is subjected to the de-identification process.
  • FIG. 7 shows a flowchart illustrating preferred method steps according to the present invention.
  • Fig. 8 shows a further flowchart illustrating preferred method steps of a further preferred embodiment according to the present invention.
  • Figure 2 shows a smart glass (device) according to an embodiment of the present invention.
  • the device comprises: a memory (not shown), a memory controller 202, a processor (CPU) 203, a peripheral interface 204, RF circuitry 205, audio circuitry 207, a speaker 213, a microphone 210, an input/output subsystem 208, a projection display 211, a camera 212, software components 201, and other input devices or control devices (e.g. motion module 209).
  • These components can communicate with each other over one or more communication buses or signal lines.
  • the device can be any smart glasses; the one shown is only one example. It may therefore have more or fewer components than shown in Fig. 2.
  • the various components shown in Fig. 2 may be implemented in hardware and/or software.
  • the framework of the control mechanism according to the present invention preferably resides inside the device and will preferably be automated to ensure firewall against any kind of privacy breaching attempt.
  • Most advanced, currently available, smart glasses have slimmed down their operating systems, but these operating systems are still adequate to handle software stacks running on mature kernels.
  • the exact preferred location of the framework of the present invention will depend on the architecture of the operating system.
  • Google Glass has an Android operating system running in its core.
  • the framework exists in the libraries layer of the Android software stack.
  • One such example of an Android OS, which is used for example in Google Glass, is shown in Fig. 3a.
  • Fig. 3b shows the architecture of iOS™, i.e., the operating system which is currently used in Apple smart devices.
  • the abstraction layers of the operating system comprise (iv) an application layer, e.g., the applications 501 in Fig. 3a and Cocoa Touch™ in Fig. 3b.
  • Cocoa Touch is a UI (user interface) framework for building software programs to run on the iOS operating system (for the iPhone™, iPod Touch™, and iPad™) from Apple Inc. Cocoa Touch provides an abstraction layer of iOS.
  • below the application layer, the service layer (iii) is located, e.g., the application framework 502 in Fig. 3a and Media Services in Fig. 3b.
  • the layer (ii) of core services or core libraries is provided (see again Figs. 3a and 3b).
  • at the lowest level, a core is provided, e.g., the operating system kernel 505 in Fig. 3a and the Core OS in Fig. 3b.
  • operating systems of sophisticated smart glasses comprise an operating system kernel (e.g. core OS) and a hardware abstraction layer (see e.g. "Hardware" in Fig. 3).
  • This hardware abstraction layer manages hardware resources and provides interfaces for hardware components such as the camera, microphone and speaker. These are the lowest layers in the abstraction-layer model. Libraries and services exist in combined or separate layers directly above the hardware abstraction layer and use its interfaces to perform their dedicated tasks.
  • as shown in Fig. 5, usually the layers down to the services layer (see layers (iv) and (iii); 501 and 502) are accessible to developers.
  • layers below the services layer, e.g., layers (i) and (ii), are not prone to manipulation/hacking.
  • it is therefore preferable to locate the framework of the present invention below the services layer, e.g., at layer (ii) below layer (iii).
  • ideally, the framework should be inside the kernel layer (e.g. layer (i)); however, it can be located anywhere between the services layer and the kernel layer.
  • the framework according to the present invention is preferably not an "application” and therefore does not require SDK (software development kit) of the operating system.
  • SDK software development kit
  • the framework does not require SDK because it is not regarded as an application, but treated as a system level service.
  • the framework can preferably be implemented directly in the kernel using languages like C and C++.
  • the framework can also use functions of libraries like OpenGL.
  • the framework of the present invention need not be a separate (i.e. single) abstraction layer. Since it preferably relates to only one hardware feature, namely the camera, the framework can reside within the current abstraction layers of the operating system.
  • the drawing presents a detailed overview of software components according to an embodiment of the present invention.
  • the architecture includes the following components: an operating system kernel 505, core libraries 504, a virtual machine (run time libraries) 503, an application framework 502 and one or more applications 501.
  • the device is not restricted to the shown components. It is possible that more or fewer components are used in the invention.
  • the operating system kernel 505 includes components and drivers to control general system tasks as well as to manage communication between software and hardware components.
  • the operating system kernel 505 may have: a display driver, a Wi-Fi driver, a camera driver, a power management, a memory driver and/or other drivers.
  • core libraries 504 sit on top of the kernel 505. These libraries comprise instructions that enable the device to handle data.
  • the core libraries may comprise several modules, such as an open-source web browser engine and an SQLite database useful for storage and sharing of application data, libraries to play and record audio and/or video, SSL libraries responsible for Internet security, etc.
  • the core libraries include other support libraries to run the algorithms involved in the modules of the framework. Specific algorithms are implemented for face detection, gesture detection and de-identification, which will be described below in great detail.
  • on the next layer resides a virtual machine and/or runtime libraries 503, designed to ensure the independence of individual applications. This construction is particularly advantageous in case of application crashes: it can easily be ensured that the remaining applications are not affected by any other application running on the device. In other words, a crashed application preferably does not influence the other running applications.
  • the virtual machine may also provide a time window to enhance de-identification functionality.
  • the time window corresponds to a certain virtual memory, which serves as a cache for input video streams.
  • the virtual memory can temporarily store the input video for a short period of time (e.g. 30 seconds).
  • the de-identification process should not be interrupted if the person has already made a gesture for de-identification purposes. However, if the person disappears for longer than the duration of the time window, he/she needs to redo the gesture for de-identification upon reappearance; a sketch of such a window follows.
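
Under the assumption of a fixed frame rate, such a time window can be pictured as a fixed-length ring buffer; the 30-second figure is the example given above, everything else is illustrative:

```python
from collections import deque

FPS = 30                 # assumed camera frame rate
WINDOW_SECONDS = 30      # example duration given in the description

# A fixed-length deque behaves like the described virtual-memory cache:
# it holds roughly the most recent 30 s of frames and silently drops the
# oldest frame whenever a new one arrives.
frame_window = deque(maxlen=FPS * WINDOW_SECONDS)

def push_frame(frame):
    """Append the newest frame; expiry is implicit via maxlen."""
    frame_window.append(frame)
```
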
  • the application framework 502 is on the next layer. It contains the programs with which the device manages basic functions, for example resource allocation, process switching and physical location tracking. In most cases, application developers should have full control of the application framework 502 so that they can take advantage of processing capabilities and support features when building an application. In other words, the application framework can be seen as a set of basic tools with which a developer builds more complex tools or applications.
  • the application layer 501 is shown in Fig. 3a. This layer consists of applications like camera applications, calculators, image galleries, etc. The user of the device should only interact with applications on this layer.
  • Figure 4 illustrates the main components of the camera module 212 (Fig. 2) in the device according to an embodiment of the present invention.
  • the camera module includes an optical lens 401, an image sensor technology 402, an image signal processor 403 and a driver 404.
  • the lens 401 is used for taking high-resolution photos and/or recording high-definition videos.
  • An optical image can be converted into an electronic signal with the image and/or video sensor 402.
  • CCD or CMOS image sensors, as used in most digital devices, may be employed; they perform the task of capturing light and converting it into electrical signals.
  • the scenes, i.e. the optical images taken by the sensor, are processed by the image signal processor 403, a specialized digital signal processor for image processing, typically realized as an on-chip system with a multi-processor or multi-core architecture.
  • the driver 404 provides an interface between software libraries and hardware chips.
  • the I/O subsystem 208 provides an interface between inputs and outputs on the device.
  • the I/O subsystem includes a voice module 210 which comprises a microphone and/or a voice controller.
  • the voice module 210 provides an input interface between the user and the device, which receives acoustic signals from the user and converts them into electrical signals.
  • the device may be controlled with voice commands.
  • the user of the smart glass can say commands like "okay glass take picture" to take pictures with the device's camera.
  • the device may contain a motion module 209 for activating and deactivating different functions. It may comprise a motion detection sensor, for example a gyro sensor or an accelerometer sensor. The user's motion can be translated by this module into commands to control the device. Some embodiments may also use the camera module 212 as an input interface, which interprets the user's "virtual touching" and converts it into an input signal.
  • the present invention provides an automated in-device privacy framework for wearable smart devices.
  • a device is provided with a plurality of modules constituting the framework, which resides inside the device to ensure privacy of people detected by the on-device camera.
  • the term "in-device" should be interpreted as meaning already built into the device and not alterable by software installed on the device. In consequence, the privacy of people recognized by the camera of the device can be ensured because of this "in-device" implementation.
  • the effectiveness of framework of the present invention depends on its location inside the device.
  • the location here preferably refers to the logical location or arrangement in terms of software components or software layers as illustrated in Figs. 3, 5 and 7.
  • location means the level at which the framework is located as well as the corresponding interfaces.
  • the location of the framework also depends on the architecture of the operating system used inside the device. However, most of the operating systems in such smart devices are software stacks, which share some degree of similarity in the architecture.
  • Figure 5 shows one preferred location of the framework in the case of a smart device that runs the Android operating system. Developers should be able to access the application framework layer of the device. Furthermore, Fig. 5 shows that the location of the framework (except for the camera module) should preferably be adjacent to the operating system kernel. This location allows the framework to work in an automated way and to hide itself, preventing hacking attempts by application developers. In addition, the framework should run automatically at a layer which is not alterable by the user of the smart device.
  • the smart device comprises a buffer 601, a face-detection module 602, a gesture detection module 603, and a de-identification module 604.
  • the buffer 601 is adapted to store temporarily the video stream taken by the camera module 212.
  • the face detection module 602 is configured to detect a face of a person in the video stream in the buffer 601 by face recognition.
  • the gesture detection module 603 is configured to track the person in the video stream stored in the buffer 601 after the face of the person has been detected by the face detection module 602.
  • the gesture detection module 603 is further adapted to determine whether the person has made a predefined gesture by detecting the predefined gesture in the video stream stored in the buffer 601.
  • the de-identification module 604 is configured to de-identify, in the video stream stored in the buffer 601, the face of the person who has made the predefined gesture by removing facial identification information from a video segment which will be taken by the camera module 212 (i.e. after the predefined gesture has been detected).
  • the "video segment" covers a single photo in the context of the present invention.
  • the claimed "camera” by definition allows a photo to be taken.
  • the de-identification module 604 is preferably associated with a library 605.
  • This library 605 comprises specific functions for the de-identification purpose, for instance to insert a mosaic or blur on human faces in the output video stream.
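
Two plausible library functions of this kind, sketched with OpenCV (the patent does not fix the exact primitives; the mosaic block size and blur kernel are arbitrary choices here):

```python
import cv2

def mosaic_face(frame, box, block=12):
    """Pixelate a face region by downscaling it and then upscaling with
    nearest-neighbour interpolation (one plausible library function; the
    patent only requires that identification information is removed)."""
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]
    small = cv2.resize(roi, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    frame[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                         interpolation=cv2.INTER_NEAREST)

def blur_face(frame, box, ksize=51):
    """Alternative: a heavy Gaussian blur over the same region."""
    x, y, w, h = box
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(
        frame[y:y + h, x:x + w], (ksize, ksize), 0)
```
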
  • the smart device comprises a storage (not shown) to which the video stream that has been subjected to de-identification is outputted.
  • the storage is accessible by the user of the smart device and/or by an application running in the smart device.
  • the video stream stored in the storage may be supplied to an external device (not shown) which is connected with the smart device.
  • the buffer 601 is preferably provided within an area of the smart device which is accessible neither by the user of the smart device nor by an application running in the smart device.
  • the smart device may further comprise an object generation module 606 for generating human objects and non-human objects on the input images/videos based on image processing and related techniques.
  • the face detection module 602 also serves as a human detection module for checking all the human objects and detecting any human face appearing in the objects.
  • the gesture detection module 603 may classify the detected gesture as negative or positive.
  • the smart device may further comprise a control module 607 for determining whether or not the face of the person should be de-identified according to whether the gesture is negative or positive.
  • the user of the device uses any input method explained before to turn on the camera module 212 for taking images or a video.
  • the framework receives direct input from the camera module 212 at step 301, wherein this input can be images and/or a video.
  • the object generation module 606 generates objects on the input images/video stored in the buffer 601 based on image processing and related techniques. Human objects and non-human objects are generated in this module 606.
  • Objects generated are passed into the face (human) detection module 602 at step 303.
  • This module 602 checks, in the images/video stored in the buffer 601, all the human objects and detects any human face appearing in the objects. If no human face is detected (305 in Fig. 7), the process goes to step 312 and the images/video stored in the buffer 601 are directly sent to the output (i.e. to the storage accessible by the user or an application). In other words, no privacy control mechanism is applied here if no human is involved.
  • the method preferably works in an idle state if no face is detected in the image, such that a subsequent tracking step is not performed (i.e. only camera and human face detection methods are working while all subsequent methods remain idle).
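
The no-face shortcut can be sketched as a simple gate; the detector and the stand-in for the later stages are assumptions, not the patented code:

```python
import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def run_privacy_stages(frame, faces):
    """Stand-in for steps 306-311 (tracking, gesture detection and
    conditional de-identification); see the surrounding sketches."""
    return frame

def privacy_gate(frames):
    """Fig. 7 gating (sketch): frames without any detected face bypass
    all later privacy stages and go straight to output (step 312)."""
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = FACE_CASCADE.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            yield frame                    # no human: no control applied
        else:
            yield run_privacy_stages(frame, faces)
```
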
  • the gesture detection module 603 tracks the persons in the video stored in the buffer 601 and makes a determination as to whether the persons have made predefined gestures by detecting the predefined gestures in the video.
  • the gesture detection module 603 classifies them as positive gestures and negative gestures.
  • a positive gesture (308 in Fig. 7) is a gesture (e.g. nodding) indicating that it is not necessary to remove facial identifying information from the images/video.
  • a negative gesture (309 in Fig. 7) is a gesture (e.g. waving a hand, shaking the head) indicating that identifying information must be removed from the images/video.
  • the gesture may be facial expression or pose.
  • Figure 8 illustrates an exemplary embodiment of the gesture detection module 603 in detail.
  • the control module 607 determines at step 307 whether the images/video should be de-identified according to the type of gestures. If any negative gesture appears, de-identification process should be applied at step 311 for any faces concerned, otherwise no changes are needed (step 310). At step 312 the images/video is finally outputted to the storage accessible by the user or an application.
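
A minimal rendering of this decision logic (step numbers refer to Fig. 7; the enum and function names are illustrative only):

```python
from enum import Enum

class Gesture(Enum):
    POSITIVE = "positive"   # e.g. nodding: keep the face as-is
    NEGATIVE = "negative"   # e.g. hand wave, head shake: de-identify

def control_decision(frame, detections, deidentify):
    """Sketch of the control module (step 307): de-identification is
    applied only to faces whose owner made a negative gesture."""
    for face_box, gesture in detections:
        if gesture is Gesture.NEGATIVE:
            deidentify(frame, face_box)   # step 311
        # POSITIVE (or no gesture): leave unchanged (step 310)
    return frame                          # step 312: to output storage
```
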
  • De-identification means removal of facial identifying information from the images or videos prior to access/sharing of the data.
  • the preferable goal of the de-identification module 604 is to protect identity and meanwhile preserve utility, e.g. the ability to recognize the surroundings of the person in the de-identified images without recognizing his/her identity. It is to be noted that the present invention is applicable to situations where more than one person needs to be de-identified. Therefore, it is preferable that while one person is being de-identified, a gesture of any other person in the surroundings can be detected. For this purpose, it is preferable to minimize the removal area of facial identification information for each person.
  • face de-identification will factorize the face parts into identity and non-identity factors using a generative multi-factor model.
  • De-identification is applied on combined factorized data, and then de-identified images are reconstructed from this data.
  • De-identification is preferably performed on identity factors by taking an average of k inputs.
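
The k-averaging idea can be illustrated with a pixel-space, k-Same-style sketch. The patented approach averages identity factors from a generative multi-factor model, which is not reproduced here, and a strict k-Same scheme would use disjoint clusters rather than per-face nearest neighbours; the code below assumes aligned grayscale face crops of equal size:

```python
import numpy as np

def k_same_average(faces, k):
    """Replace each face with the average of its k nearest faces, so a
    de-identified face maps back to at least k candidates (sketch)."""
    faces = np.asarray(faces, dtype=np.float64)      # shape (n, H, W)
    flat = faces.reshape(len(faces), -1)             # (n, H*W)
    out = np.empty_like(faces)
    for i, face in enumerate(flat):
        dists = np.linalg.norm(flat - face, axis=1)
        nearest = np.argsort(dists)[:k]              # includes face i itself
        out[i] = flat[nearest].mean(axis=0).reshape(faces.shape[1:])
    return out.astype(np.uint8)
```
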
  • Andrew Senior, "Protecting Privacy in Video Surveillance", Springer Science & Business Media, 2009.
  • the present invention covers any de-identification method by removing facial identification information.
  • various techniques such as blurring, noise addition or blacking out may be used, although they are less sophisticated.
  • a facial mask which only covers pixels representing a face or identifiable part of face may be used. Such a facial mask will change its shape as the person's face is moving.
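
A simple shaped mask of this kind can be sketched as an ellipse inscribed in the detected face box (the ellipse is an assumption; the description only requires that the mask cover the identifiable facial pixels and follow the moving face, which here happens by recomputing the box every frame):

```python
import cv2
import numpy as np

def masked_blur(frame, box):
    """Blur only an ellipse inscribed in the face box, leaving the
    background pixels inside the box untouched (sketch)."""
    x, y, w, h = box
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.ellipse(mask, (x + w // 2, y + h // 2), (w // 2, h // 2),
                0, 0, 360, 255, -1)
    blurred = cv2.GaussianBlur(frame, (51, 51), 0)
    frame[mask == 255] = blurred[mask == 255]
    return frame
```
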
  • the wearable smart device may be equipped with a display for displaying a video stream taken by the camera module 212.
  • the user may want to view a video after taking it.
  • the library 605 may include some AR (augmented reality) functionality for de-identification, in the sense that a virtual image is superposed on a real image (which is stored in the buffer 601 but not yet de-identified) so that only the superposed image is allowed to appear on the display.
  • a mosaic pattern may be overlaid on a facial region, or a ripple effect may be added for blurring.
  • a gesture start detection module detects a (possible) starting point of a gesture (e.g. a hand is located close to a face within a predetermined distance) appearing in an image (i.e. single shot) from the face detection module 602 at step 801.
  • a track module keeps track of the gesture in subsequent shots (e.g. covering a face with a hand) and passes it to a match module at step 807.
  • the match module compares the tracked gesture with a gesture database and determines whether the gesture is positive or negative. Afterwards, it sends a control message indicating whether the gesture is positive or negative to the control module 607.
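
A toy version of such a match module, comparing a tracked 2-D trajectory against a small template database by resampling and Euclidean distance (the templates, threshold and distance measure are all invented for illustration; the modules of Fig. 8 would be considerably more robust):

```python
import numpy as np

GESTURE_DB = {
    # Illustrative templates: normalized 2-D trajectories of a tracked
    # hand or head, labelled with the positive/negative classes.
    "head_nod":  ("positive", np.array([[0.0, 0.0], [0.0, 0.3], [0.0, 0.0]])),
    "hand_wave": ("negative", np.array([[0.0, 0.0], [0.4, 0.0], [0.0, 0.0],
                                        [-0.4, 0.0], [0.0, 0.0]])),
}

def resample(traj, n=32):
    """Resample a trajectory to n points by linear interpolation."""
    t_old = np.linspace(0, 1, len(traj))
    t_new = np.linspace(0, 1, n)
    return np.column_stack([np.interp(t_new, t_old, traj[:, d])
                            for d in range(traj.shape[1])])

def match_gesture(tracked_traj, threshold=1.0):
    """Return 'positive', 'negative', or None if no template is close."""
    query = resample(np.asarray(tracked_traj, dtype=float))
    best_label, best_dist = None, threshold
    for label, template in GESTURE_DB.values():
        dist = np.linalg.norm(query - resample(template)) / len(query)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```
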
  • face features obtained by the face recognition at the face detection module 602 may be preserved in a cache for a predetermined period of time (e.g. in the range from a few seconds to one minute) after it has been determined that the face of the person has disappeared from the video stream.
  • the de-identifying step restarts when it has been determined that the face of the person reappears in the video stream before the period of time has elapsed.
  • Such a cache 608 (Fig. 6) is preferably provided at a layer adjacent to the kernel layer and is accessible neither by the user of the smart device nor by an application running in the smart device.
  • the de-identifying step may be retroactively performed on a video segment stored in the buffer 601 between a time when the face has been detected and a time when the predefined gesture has been detected. This is because the de-identification starts from a time when the gesture has been detected, and the face of the person will remain identifiable before the person has made a gesture.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioethics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to a wearable smart device, such as a smart glass. The device comprises a camera module for taking a video stream; a buffer for temporarily storing the video stream; a face detection module for detecting a face of a person in the video stream in the buffer by face recognition; a gesture detection module for tracking the person in the stored video stream after the face of the person has been detected by the face detection module, and for determining whether the person has made a predefined gesture by detecting the predefined gesture in the stored video stream; and a de-identification module for de-identifying, in the stored video stream, the face of the person who has made the predefined gesture by removing facial identification information from a video segment which will be taken by the camera module after the predefined gesture has been detected by the gesture detection module.
EP15730693.7A 2015-05-11 2015-05-11 In-device privacy control mechanism for wearable smart devices Ceased EP3295696A1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2015/060310 WO2016180460A1 (fr) 2015-05-11 2015-05-11 In-device privacy control mechanism for wearable smart devices

Publications (1)

Publication Number Publication Date
EP3295696A1 true EP3295696A1 (fr) 2018-03-21

Family

ID=53476819

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15730693.7A EP3295696A1 (fr) 2015-05-11 2015-05-11 In-device privacy control mechanism for wearable smart devices

Country Status (2)

Country Link
EP (1) EP3295696A1 (fr)
WO (1) WO2016180460A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711318B (zh) * 2018-12-24 2021-02-12 北京澎思科技有限公司 Multi-face detection and tracking method based on a video stream
US20210195120A1 (en) * 2019-12-19 2021-06-24 Lance M. King Systems and methods for implementing selective vision for a camera or optical sensor
CN111753755B (zh) * 2020-06-28 2024-06-07 刘晨 Smart glasses
US11593520B2 (en) 2021-04-19 2023-02-28 Western Digital Technologies, Inc. Privacy enforcing memory system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8326061B2 (en) * 2008-05-12 2012-12-04 Google Inc. Fast visual degrading of images
AT510352A1 (de) * 2010-09-13 2012-03-15 Smartspector Artificial Perception Engineering Gmbh Camera for rendering personal data unrecognizable
US10223710B2 (en) * 2013-01-04 2019-03-05 Visa International Service Association Wearable intelligent vision device apparatuses, methods and systems
KR101936802B1 (ko) * 2012-07-20 2019-01-09 한국전자통신연구원 Apparatus and method for protecting personal information based on face recognition
JP2014078910A (ja) * 2012-10-12 2014-05-01 Sony Corp Image processing device, image processing system, image processing method, and program
US20140108501A1 (en) * 2012-10-17 2014-04-17 Matthew Nicholas Papakipos Presence Granularity with Augmented Reality
US10037082B2 (en) * 2013-09-17 2018-07-31 Paypal, Inc. Physical interaction dependent transactions
CN105980965A (zh) * 2013-10-10 2016-09-28 视力移动科技公司 Systems, devices and methods for touch-free typing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SCHIFF J ET AL: "Respectful cameras: detecting visual markers in real-time to address privacy concerns", INTELLIGENT ROBOTS AND SYSTEMS, 2007. IROS 2007. IEEE/RSJ INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 29 October 2007 (2007-10-29), pages 971 - 978, XP031222121, ISBN: 978-1-4244-0911-2 *

Also Published As

Publication number Publication date
WO2016180460A1 (fr) 2016-11-17

Similar Documents

Publication Publication Date Title
EP3692461B1 Removing data that can identify an individual before transmission by a device
EP3224757B1 In-device privacy framework for smart glasses and smart watches
US10740617B2 (en) Protection and recovery of identities in surveillance camera environments
CN109313911B Automatic audio attenuation on immersive display devices
US10529071B2 (en) Facial skin mask generation for heart rate detection
CN107077598B Video capture with privacy protection
JP6348176B2 Adaptive event recognition
EP3047361B1 Method and device for displaying a graphical user interface
US10255690B2 (en) System and method to modify display of augmented reality content
US11461986B2 (en) Context-aware extended reality systems
WO2017093883A1 Method and apparatus for providing a viewing window in a virtual reality scene
US11087562B2 (en) Methods of data processing for an augmented reality system by obtaining augmented reality data and object recognition data
EP3295696A1 (fr) Mécanisme de commande de confidentialité inter au dispositif pour des dispositifs intelligents pouvant être portés
US11010980B2 (en) Augmented interface distraction reduction
US20230094658A1 (en) Protected access to rendering information for electronic devices
JP7194158B2 Information processing device and program
US20230315509A1 (en) Automatic Determination of Application State in a Multi-User Environment
US20240104967A1 (en) Synthetic Gaze Enrollment
EP4385199A1 Low-power machine learning using regions of interest captured in real time
JP2021093125A Eye tracking method and device based on eye restoration

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20171024

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20200318

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20200920