EP2864930A1 - Self learning face recognition using depth based tracking for database generation and update - Google Patents
Self learning face recognition using depth based tracking for database generation and update
- Publication number
- EP2864930A1 (application EP13731633.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- person
- video camera
- frame
- contemporaneously
- environment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- G06V40/173—Classification, e.g. identification face re-identification, e.g. recognising unknown faces across different face tracks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/50—Maintenance of biometric data or enrolment thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Definitions
- Face recognition systems and processes essentially operate by comparing some type of model of a person's face to an image or characterization of the person's face extracted from an input image. These face models are typically obtained by training a face recognition system using images of a person's face (or a characterization thereof). Thus, a database of training face images or characterizations is typically needed to train a face recognition system.
- Face recognition training database generation technique embodiments described herein generally involve collecting characterizations of a person's face that are captured over time and as the person moves through an environment, to create a training database of facial characterizations for that person.
- A computer-implemented process is employed to generate a face recognition training database for each person detected in an environment. The process begins with inputting a sequence of contemporaneously-captured frame pairs. Each frame pair includes a frame output from a color video camera and a frame output from a depth video camera. Next, a face detection method and the color video camera frames are used to detect potential persons in the environment.
- a motion detection method and the depth video camera frames are used to detect potential persons in the environment.
- Detection results generated via the foregoing face and motion detection methods are used to determine the location of one or more persons in the environment.
- the detection results generated via the face detection method also include a facial characterization of the portion of a color video camera frame depicting a person's face, for each potential person detected.
- the process also includes identifying the corresponding location of that person in the contemporaneously-captured frame of the color video camera, and generating the facial characterization from the portion of that frame depicting the person's face.
- each facial characterization generated for that person is assigned to an unknown person identifier established specifically for the person, and stored in a memory associated with the computer being used to implement the process. An attempt is then made to ascertain the identity of each person. If the attempt is successful for a person, each facial characterization assigned to the unknown person identifier established for that person is re-assigned to a face recognition training database established for the person.
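- To make this bookkeeping concrete, the following minimal Python sketch (all names hypothetical; the patent does not prescribe an implementation) models assigning characterizations to an unknown person identifier and re-assigning them to a training database once the person is identified.

```python
import uuid

class CharacterizationStore:
    """Hypothetical bookkeeping for unknown identifiers and training databases."""

    def __init__(self):
        self.unknown = {}       # unknown person identifier -> characterizations
        self.training_dbs = {}  # identified person -> training database

    def new_unknown_identifier(self):
        # Establish an unknown person identifier for a newly detected person.
        uid = "unknown-" + uuid.uuid4().hex[:8]
        self.unknown[uid] = []
        return uid

    def assign(self, uid, characterization):
        # Assign a facial characterization to the unknown person identifier.
        self.unknown[uid].append(characterization)

    def identify(self, uid, person_name):
        # On successful identification, re-assign every characterization held
        # under the unknown identifier to the person's training database.
        self.training_dbs.setdefault(person_name, []).extend(self.unknown.pop(uid))

store = CharacterizationStore()
uid = store.new_unknown_identifier()
store.assign(uid, [0.1, 0.7])   # characterizations stand in as feature vectors
store.assign(uid, [0.2, 0.6])
store.identify(uid, "person A")
assert len(store.training_dbs["person A"]) == 2
```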
- FIGS. 1 A-B are a flow diagram generally outlining one embodiment of a computer-implemented process for generating a face recognition training database for each person detected in an environment.
- FIGS. 2A-E are a flow diagram generally outlining one embodiment of a computer-implemented process for generating or supplementing a face recognition training database for each person detected in an environment based on a new sequence of contemporaneously-captured frame pairs.
- FIG. 3 is a flow diagram outlining one embodiment of a computer-implemented process for discarding facial characterizations assigned to the unknown person identifier whenever the person remains unidentified for more than a prescribed number of attempts to identify the person.
- FIG. 4 is a flow diagram outlining one embodiment of a computer-implemented process for capturing a zoomed-in image of a person located in the environment at a distance from the color video camera that exceeds a prescribed maximum distance.
- FIGS. 5A-C are a flow diagram generally outlining one embodiment of a computer-implemented process for generating or supplementing a face recognition training database for each person detected in an environment based on a sequence of contemporaneously-captured frame pairs output by an additional pair of color and depth video cameras capturing the scene from a different point of view.
- FIGS. 6A-F are a flow diagram generally outlining one embodiment of a computer-implemented process for generating or supplementing a face recognition training database for each person detected in an environment based on a sequence of contemporaneously-captured frame pairs output by an additional pair of color and depth video cameras capturing a different scene within the environment.
- FIGS. 7A-D are a flow diagram generally outlining one embodiment of a computer-implemented motion detection process for use in the face recognition training database generation technique embodiments described herein.
- FIG. 8 is a simplified component diagram of a suitable mobile robotic device in which the face recognition training database generation technique embodiments described herein can be implemented.
- FIG. 9 is a diagram depicting a general purpose computing device constituting an exemplary system for implementing face recognition training database generation technique embodiments described herein.
- Face recognition training database generation technique embodiments described herein generally involve collecting characterizations of a person's face that are captured over time and as the person moves through an environment, to create a training database of facial characterizations for that person. As the facial characterizations are captured over time, they will represent the person's face as viewed from various angles and distances, different resolutions, and under different environmental conditions (e.g., lighting and haze conditions). Still further, over a long period of time where facial characterizations of a person are collected periodically, these characterizations can represent an evolution in the appearance of the person. For example, the person could gain or lose weight; grow or remove facial hair; change hairstyles; wear different hats; and so on.
- the resulting training database can be established and populated before training even begins, and added to over time to capture the aforementioned changes in the person's facial pose and appearance.
- Because a person's face recognition training database can be established before it is needed by a face recognition system, the training will be quicker once the system is employed.
- the face recognition training database generation technique embodiments described herein can generate training databases for multiple people found in the environment. Also, existing databases can be updated with incremental changes in faces. This allows changes in a person's face to be captured gradually enough for the training database to keep pace with the person's evolving appearance.
- a computer-implemented process for generating a face recognition training database for each person detected as being located in an environment begins with inputting a sequence of contemporaneously-captured frame pairs (process action 100).
- Each frame pair includes a frame output from a color video camera and a frame output from a depth video camera.
- the cameras are synchronized in that each camera captures an image of the scene at the same time.
- a contemporaneous pair of color and depth frames is produced each time the scene is captured.
- a face detection method and the color video camera frames are used to detect potential persons in the environment (process action 102).
- A motion detection method and the depth video camera frames are used to detect potential persons in the environment (process action 104). Process actions 102 and 104 are accomplished at approximately the same time.
- Detection results generated via the foregoing face and motion detection methods are used to determine the location of one or more persons in the environment (process action 106).
- the detection results generated via the face detection method also include a facial characterization of the portion of a color video camera frame depicting a person's face, for each potential person detected.
- the type of facial characterization is specific to the particular face detection method employed and is compatible with the aforementioned face recognition system that will use the training database being generated.
- Each person detected solely via the motion detection method is identified next (process action 108), and the corresponding location of each identified person is found in the contemporaneously-captured frame of the color video camera (process action 110).
- A facial characterization of that portion of the color video camera frame is generated for each of the identified persons (process action 112).
- The process continues with the selection of a previously unselected one of the persons detected in the environment (process action 114). Each facial characterization generated for the selected person is assigned to an unknown person identifier established specifically for that person (process action 116), and stored in a memory associated with the computer being used to implement the process (process action 118).
- the aforementioned computer can be, for example, one of the computers described in the Exemplary Operating Environments section of this disclosure.
- An attempt is then made to ascertain the identity of the selected person (process action 120). It is next determined whether this attempt was successful (process action 124). If so, each facial characterization assigned to the unknown person identifier established for the selected person is re-assigned to a face recognition training database established for that person. Regardless of the outcome, it is then determined if all the detected persons have been selected, and if not, process actions 114 through 126 are repeated until all the detected persons have been selected and considered. At that point the process ends.
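- The overall FIG. 1 flow can be summarized in the Python sketch below. The detector, fusion, and identification callables are placeholders for whatever conventional methods are employed; the store follows the earlier sketch.

```python
def process_sequence(frame_pairs, store, detect_faces, detect_motion,
                     fuse_locations, characterize, ascertain_identity):
    """Sketch of the FIG. 1 loop; every callable argument is a stand-in."""
    face_detections, motion_detections = [], []
    for color_frame, depth_frame in frame_pairs:
        # Face detection on the color frames yields, for each potential person,
        # a location plus a facial characterization (process action 102).
        face_detections += detect_faces(color_frame)
        # Motion detection on the depth frames yields locations only
        # (process action 104).
        motion_detections += detect_motion(depth_frame)

    # Fuse both detection results into per-person locations (process action 106).
    for person in fuse_locations(face_detections, motion_detections):
        if person.get("characterization") is None:
            # Persons found solely by motion detection still need a facial
            # characterization, generated from the corresponding color frame
            # region (process actions 108-112).
            person["characterization"] = characterize(person["color_region"])

        uid = store.new_unknown_identifier()           # process action 116
        store.assign(uid, person["characterization"])  # process action 118

        name = ascertain_identity(person)              # process action 120
        if name is not None:
            store.identify(uid, name)                  # re-assignment on success
```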
- When a person detected in a new frame pair sequence corresponds to a previously detected person, the facial characterizations created for the person can be assigned to the appropriate existing collection, and a new unknown person identifier need not be established.
- any facial characterization created from the new sequence would be assigned either to that person's existing unknown person identifier if the person was not previously identified, or to that person's face recognition training database if the person had been previously identified.
- For a person not previously detected in the environment, a new unknown person identifier would be created and the facial characterizations produced would be assigned to it.
- Whenever facial characterizations are assigned to an unknown person identifier (whether an existing one or a new one), an attempt to identify the person would be made.
- The process then continues with the selection of one of the persons detected in the environment using the new frame pair sequence (process action 202). It is then determined if the selected person corresponds to a person whose location was previously determined using a sequence of contemporaneously-captured frame pairs preceding the new sequence (process action 204). As indicated previously, in one embodiment this is done by tracking the location of the previously detected person over time. If it is determined that the person corresponds to such a previously detected person, it is next determined if the identity of the person was previously ascertained (process action 206). If the identity of the person was previously ascertained, then a previously unselected one of the facial characterizations generated from the new sequence of contemporaneously-captured frame pairs for this person is selected (process action 208).
- the facial characterizations are generated as described previously. It is determined if the selected facial characterization differs to a prescribed degree from each facial characterization assigned to the face recognition training database established for the person (process action 210). If it does differ to the prescribed degree, the selected facial characterization is assigned to the face recognition training database established for the selected person (process action 212), and is stored in a memory associated with the computer (process action 214). Otherwise it is discarded (process action 216). In any event, it is then determined if all the facial characterizations created for the selected person from the new frame pair sequence have been selected (process action 218). If not, process actions 208 through 218 are repeated, until all the facial characterizations have been selected and considered.
- If the identity of the person was not previously ascertained, then a previously unselected one of the facial characterizations generated from the new sequence of contemporaneously-captured frame pairs for this person is selected (process action 220). It is then determined if the selected facial characterization differs to a prescribed degree from each facial characterization assigned to the unknown person identifier established for the person (process action 222). If it does differ to the prescribed degree, the selected facial characterization is assigned to the unknown person identifier established for the selected person (process action 224), and is stored in a memory associated with the computer (process action 226). Otherwise it is discarded (process action 228). In either case, it is then determined if all the facial characterizations created for the selected person from the new frame pair sequence have been selected (process action 230). If not, process actions 220 through 230 are repeated, until all the facial characterizations have been selected and considered.
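- A possible reading of the "differs to a prescribed degree" test in process actions 210 and 222 is sketched below, assuming characterizations are numeric feature vectors compared by Euclidean distance (the patent leaves the characterization type to the face detection method employed).

```python
import math

def differs_enough(candidate, collection, threshold=0.25):
    """Keep a characterization only if it differs from every characterization
    already in the collection by more than the prescribed threshold. The
    vector representation, distance measure, and threshold are assumptions."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return all(dist(candidate, existing) > threshold for existing in collection)

collection = [[0.10, 0.70], [0.90, 0.20]]
print(differs_enough([0.11, 0.69], collection))  # False: near-duplicate, discard
print(differs_enough([0.50, 0.50], collection))  # True: novel enough, keep
```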
- The process then continues with an attempt to ascertain the identity of the person (process action 232). As before, this identification action is accomplished using any appropriate conventional method, including inviting the unknown person to interact with the computer to provide the identifying information. It is next determined if the attempt was successful (process action 234). If so, each facial characterization assigned to the unknown person identifier established for the selected person is re-assigned to a face recognition training database established for that person (process action 236).
- If the selected person does not correspond to a previously detected person, each facial characterization generated for the selected person is assigned to an unknown person identifier established specifically for that person (process action 238), and stored in a memory associated with the computer being used to implement the process (process action 240).
- an attempt is made to ascertain the identity of the person (process action 242). It is then determined if the attempt was successful (process action 244). If so, each facial characterization assigned to the unknown person identifier established for the selected person is re-assigned to a face recognition training database established for that person (process action 246).
- It is next determined if all the persons detected in the environment using the new frame pair sequence have been selected (process action 248). If not, process actions 202 through 248 are repeated, until all the detected persons have been selected and considered. At that point the current iteration of the process ends. However, the process can be repeated the next time a new sequence of contemporaneously-captured frame pairs becomes available.
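- Taken together, the FIG. 2 branches amount to the routing logic sketched below for one detected person, reusing the store and differs_enough sketches above (names hypothetical).

```python
def route_characterizations(chars, store, tracked_uid=None, known_name=None):
    """Route characterizations from a new frame pair sequence. tracked_uid is
    the unknown identifier carried over by location tracking (None for a person
    not previously detected); known_name is set once identity was ascertained."""
    if known_name is not None:
        # Previously identified person: add only sufficiently novel
        # characterizations to the training database (process actions 208-218).
        db = store.training_dbs.setdefault(known_name, [])
        for c in chars:
            if differs_enough(c, db):
                db.append(c)
        return

    # Previously detected but unidentified, or brand new person: collect novel
    # characterizations under an unknown identifier (process actions 220-230
    # or 238-240), to be followed by a fresh identification attempt.
    uid = tracked_uid if tracked_uid is not None else store.new_unknown_identifier()
    for c in chars:
        if differs_enough(c, store.unknown[uid]):
            store.assign(uid, c)
```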
- Face recognition methods typically use facial characterizations such as those described previously in identifying a person from an image of their face. With regard to the foregoing process actions for attempting to ascertain the identity of the person, it is noted that the facial characterizations generated for that person and assigned to that person's unknown person identifier can be employed in the attempt.
- A previously unselected one of the persons detected in the environment is selected (process action 400). It is then determined if the selected person is located in the environment at a distance from the color video camera that exceeds a prescribed maximum distance, e.g., 3 meters (process action 402). If so, the location of the selected person is provided to a controller that controls a color camera having zoom capability (process action 404). The controller causes the color camera to zoom in on the face of the selected person to a degree proportional to the distance from the color video camera to the person, and then to capture a zoomed image of the person's face. It is noted that this color camera can be the same color video camera used to capture the frame pairs, if that camera has a zoom capability.
- The zoomed image is then input (process action 406), and a facial characterization of the portion of the zoomed image depicting that person's face is generated.
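- One plausible control law for the zoom step is a linear sketch like the following; the linear form and the 3 meter limit are assumptions drawn from the example above.

```python
def zoom_factor(person_distance_m, max_distance_m=3.0):
    """Zoom in proportionally to the person's distance from the color video
    camera once the prescribed maximum distance is exceeded (a linear law is
    assumed; the patent only requires proportionality)."""
    if person_distance_m <= max_distance_m:
        return 1.0                                # close enough; no zoom
    return person_distance_m / max_distance_m    # e.g., 2.5x at 7.5 m

print(zoom_factor(2.0))   # 1.0
print(zoom_factor(7.5))   # 2.5
```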
- the environment in which the face recognition training database generation technique embodiments described herein operate can be quite large.
- In some embodiments, more than one pair of color and depth video cameras is employed to cover the environment. Given that more than one pair of cameras is available in the environment, they can be configured to capture the same scene, but from different points of view. This scenario allows more facial characterizations to be captured for a person, from differing viewpoints, than a single camera pair could provide.
- It is advantageous for each pair of cameras to know the location of people in the scene so that it can be readily determined whether a person is the same person detected using another camera pair, or a different person. In one embodiment this is accomplished by configuring the camera pairs to capture frame pairs substantially contemporaneously. In this way the location of a person computed by one pair of cameras would match that computed by another pair if it is the same person, and not match if it is a different person.
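- A minimal sketch of that cross-camera check, assuming each camera pair reports person locations in a shared world coordinate frame:

```python
def same_person(location_a, location_b, tolerance_m=0.5):
    """Treat two substantially contemporaneous detections from different
    camera pairs as the same person when their computed locations agree within
    a tolerance. The shared coordinate frame and tolerance are assumptions."""
    return all(abs(a - b) <= tolerance_m for a, b in zip(location_a, location_b))

print(same_person((2.0, 1.1, 0.0), (2.2, 1.0, 0.0)))  # True: same person
print(same_person((2.0, 1.1, 0.0), (5.0, 3.0, 0.0)))  # False: different people
```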
- an additional sequence of contemporaneously-captured frame pairs is input (process action 500).
- a face detection method and the color video camera frames output by the color video camera of the additional pair of cameras are used to detect potential persons in the environment (process action 502).
- a motion detection method and the depth video camera frames output by the depth video camera of the additional pair of cameras are used to detect potential persons in the environment (process action 504).
- Detection results generated via the foregoing face and motion detection methods are used to determine the location of one or more persons in the environment (process action 506).
- the detection results generated via the face detection method also include a facial characterization of the portion of a color video camera frame depicting a person's face, for each potential person detected.
- Each person detected solely via the motion detection method is identified next (process action 508), and the corresponding location of each identified person is found in the contemporaneously-captured frame of the color video camera of the additional pair of cameras (process action 510).
- a facial characterization of that portion of the color video camera frame is generated for each of the identified persons (process action 512).
- The process continues with the selection of a previously unselected one of the persons detected in the environment based on frame pairs output from the additional color and depth video camera pair (process action 514). It is then determined, based on the identified location of the person, whether the person has also been detected using another color and depth video camera pair (process action 516). If so, each facial characterization generated for the selected person based on frame pairs output from the additional color and depth video camera pair is assigned to the unknown person identifier established for that person based on the person's detection using the other color and depth video camera pair (process action 518). Otherwise, each facial characterization generated for the selected person based on frame pairs output from the additional color and depth video camera pair is assigned to an unknown person identifier established for that person (process action 520).
- each of the facial characterizations generated for the selected person based on frame pairs output from the additional color and depth video camera pair is stored in the memory associated with the computer (process action 522).
- An attempt is also made to ascertain the identity of the person (process action 524). It is then determined if the attempt was successful (process action 526). If so, each facial characterization assigned to the unknown person identifier established for the selected person is re-assigned to a face recognition training database established for that person (process action 528). Regardless of whether the attempt of process action 524 was successful or not, it is next determined if all the detected persons have been selected (process action 530). If not, process actions 514 through 530 are repeated, until all the detected persons have been selected and considered. At that point the process ends, but can be repeated whenever a new sequence of contemporaneously-captured frame pairs is input from the additional pair of color and depth video cameras.
- the camera pairs can be configured to capture different scenes. This configuration is useful in situations where a single pair of cameras cannot cover the entire environment. Given this, a person detected in one scene covered by one camera pair can be tracked, and if that person moves into a part of the environment covered by another camera pair, the tracking can continue there.
- the knowledge of the person's location as they leave one scene to another can be used to ascertain that a person detected in the new scene is the same person detected in the prior scene.
- face recognition methods, or some other method of identifying the person, can be employed if feasible to ascertain that a person detected in the new scene is the same person detected in the prior scene. This facilitates assigning facial characterizations generated for the person in the new part of the environment to the collection already established for that person.
- an additional sequence of contemporaneously-captured frame pairs is input (process action 600).
- a face detection method and the color video camera frames output by the color video camera of the additional pair of cameras are used to detect potential persons in the environment (process action 602).
- a motion detection method and the depth video camera frames output by the depth video camera of the additional pair of cameras are used to detect potential persons in the environment (process action 604).
- Detection results generated via the foregoing face and motion detection methods are used to determine the location of one or more persons in the environment (process action 606).
- the detection results generated via the face detection method also include a facial characterization of the portion of a color video camera frame depicting a person's face, for each potential person detected.
- Each person detected solely via the motion detection method is identified next (process action 608), and the corresponding location of each identified person is found in the contemporaneously-captured frame of the color video camera of the additional pair of cameras (process action 610).
- a facial characterization of that portion of the color video camera frame is generated for each of the identified persons (process action 612).
- The process continues with the selection of a previously unselected one of the persons detected in the environment based on frame pairs output from the additional color and depth video camera pair (process action 614). It is then determined whether the selected person was previously detected in another scene in the environment using another color and depth video camera pair (process action 616). As indicated previously, this can be based on the tracking of the person's location as they leave one scene to another, face recognition methods, or some other method of identifying the person. If the selected person was previously detected in another scene, it is further determined if the identity of the selected person was ascertained previously (process action 618). If the selected person was not previously identified, then a previously unselected one of the facial characterizations generated from the additional sequence of contemporaneously-captured frame pairs is selected (process action 620), and it is determined if the selected facial characterization differs to a prescribed degree from each facial characterization assigned to the unknown person identifier established previously for the selected person (process action 622). If so, the selected facial characterization is assigned to the unknown person identifier established previously for the person (process action 624), and stored in a memory associated with the computer (process action 626). Otherwise it is discarded (process action 628). It is then determined if all the facial characterizations generated from the additional sequence of contemporaneously-captured frame pairs have been selected (process action 630). If not, process actions 620 through 630 are repeated, until all the facial characterizations have been selected and considered. Next, an attempt is made to ascertain the identity of the selected person (process action 632). It is then determined if the attempt was successful (process action 634). If so, each facial characterization assigned to the unknown person identifier established for the selected person is reassigned to a face recognition training database established for that person (process action 636).
- If it was determined in process action 618 that the identity of the selected person was ascertained previously, a previously unselected one of the facial characterizations generated from the additional sequence of contemporaneously-captured frame pairs is selected (process action 638), and it is determined if the selected facial characterization differs to a prescribed degree from each facial characterization assigned to the face recognition training database established previously for the selected person (process action 640). If so, the selected facial characterization is assigned to the face recognition training database established for the person (process action 642) and stored in a memory associated with the computer (process action 644). Otherwise it is discarded (process action 646). It is then determined if all the facial characterizations generated from the additional sequence of contemporaneously-captured frame pairs have been selected (process action 648). If not, process actions 638 through 648 are repeated, until all the facial characterizations have been selected and considered.
- If in process action 616 it was determined that the selected person was not previously detected in another scene in the environment, the process continues by assigning each facial characterization generated for the selected person based on frame pairs output from the additional color video camera and additional depth video camera to an unknown person identifier newly established for that person (process action 650).
- Each of these facial characterizations is also stored in a memory associated with the computer (process action 652).
- An attempt to ascertain the identity of the selected person is then made (process action 654). It is then determined if the attempt was successful (process action 656). If the identity of the selected person is ascertained, each facial characterization assigned to the unknown person identifier established for the person is re-assigned to a face recognition training database established for the person (process action 658).
- It is next determined if all the detected persons have been selected (process action 660). If not, process actions 614 through 660 are repeated, until all the detected persons have been selected and considered. At that point the process ends, but can be repeated whenever a new sequence of contemporaneously-captured frame pairs is input from the additional pair of color and depth video cameras.
- While any motion detection method can be adopted for use in the face recognition training database generation technique embodiments described herein, in one embodiment the following method is employed.
- this method exploits short term changes in the depth data extracted from the depth video camera frames to detect potential persons in the environment.
- the motion detection process first involves designating all the pixels in the first depth video camera frame as background pixels (process action 700). Then, it is determined if a new subsequently-captured depth frame has become available (process action 702). If not, process action 702 is repeated until a new frame is available.
- When a new depth frame is input, a previously unselected pixel of the depth frame is selected (process action 704), and it is determined if the depth value of the selected pixel has changed by more than a prescribed amount from the value of the pixel in the depth frame captured immediately before the frame currently under consideration that represents the same location within the environment (process action 706).
- If so, the selected pixel is designated to be a foreground pixel (process action 708). It is next determined if there are any previously unselected pixels of the depth frame remaining (process action 710). If there are remaining pixels, process actions 704 through 710 are repeated. If not, it is determined if the depth frame currently under consideration is the last frame in the sequence (process action 712). If not, process actions 702 through 712 are repeated.
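- The foreground designation of process actions 704 through 708 reduces to per-pixel frame differencing, as in this sketch (depth frames modeled as lists of rows; the threshold value and millimeter units are assumptions):

```python
def update_foreground(prev_depth, curr_depth, change_threshold=50):
    """Mark a pixel as foreground when its depth value changes by more than a
    prescribed amount between consecutive depth frames."""
    return [[abs(curr - prev) > change_threshold
             for prev, curr in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev_depth, curr_depth)]

prev = [[1000, 1000], [1000, 1000]]
curr = [[1000,  700], [1000, 1000]]   # one pixel moved ~300 mm closer
print(update_foreground(prev, curr))  # [[False, True], [False, False]]
```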
- Once the last frame in the sequence has been processed, a seed point is established amongst the foreground pixels in the last frame and the pixel associated with this point is assigned to be a part of a blob (process action 714).
- a previously unselected pixel neighboring a pixel assigned to the blob (which would initially be just the seed point pixel) and that is not already assigned to that blob, is selected (process action 716). It is first determined if the selected pixel is assigned to a different blob (process action 718). If so, the two blobs are combined into one blob (process action 720).
- It is next determined if there are any previously unselected pixels neighboring a pixel assigned to the combined blob that are not already assigned to the combined blob (process action 722). If so, then a previously unselected one of these pixels is selected (process action 724), and process actions 718 through 724 are repeated. However, whenever it was determined in process action 718 that the selected pixel was not assigned to a different blob, it is determined if the depth value of the selected pixel is the same, within a prescribed tolerance, as the current average of the pixels assigned to the blob (process action 726). If so, the selected pixel is assigned to the blob (process action 728). If not, no action is taken. In either case, it is next determined if there are any previously unselected pixels neighboring a pixel assigned to the blob (combined or not) that are not already assigned to that blob (process action 730). If there are such pixels, then process actions 716 through 730 are repeated. Thus, pixels are added to the blob in this region-growing manner until no qualifying neighboring pixels remain.
- It is next determined if there are foreground pixels that have not been assigned to a blob (process action 732). If such pixels remain, then a seed point is established amongst the unassigned foreground pixels in the last frame and the pixel associated with this point is assigned to be a part of a new blob (process action 734). Process actions 716 through 734 are then repeated, until no unassigned foreground pixels remain.
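- The blob-growing stage is, in effect, a region-growing flood fill over the foreground pixels. A sketch under assumed 4-connectivity and an assumed depth tolerance:

```python
from collections import deque

def grow_blob(depth, foreground, seed, tolerance=100):
    """Grow a blob from a seed foreground pixel, adding neighbors whose depth
    matches the blob's running average within the prescribed tolerance
    (process actions 714-730, simplified to a single blob)."""
    rows, cols = len(depth), len(depth[0])
    blob = {seed}
    depth_sum = depth[seed[0]][seed[1]]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in blob
                    and foreground[nr][nc]
                    and abs(depth[nr][nc] - depth_sum / len(blob)) <= tolerance):
                blob.add((nr, nc))
                depth_sum += depth[nr][nc]
                queue.append((nr, nc))
    return blob

depth = [[900, 910, 2000], [905, 915, 2000], [2000, 2000, 2000]]
fg = [[True, True, False], [True, True, False], [False, False, False]]
print(sorted(grow_blob(depth, fg, (0, 0))))  # the four ~900 mm pixels
```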
- a previously unselected one of the blobs is selected (process action 736). It is then determined if the blob meets a set of prescribed criteria that is indicative of the blob representing a human (process action 738). If not, the blob is eliminated (process action 740). If, however, the selected blob does meet the prescribed criteria, the blob is designated as representing a potential person located within the environment (process action 742).
- The criteria used to determine whether a blob represents a human can be any conventional set of criteria.
- the criteria can include whether the blob fits normal human body parameters in real-space dimensions. For example, the blob may be required to exhibit rectangular areas corresponding to the human chest and head.
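- As an illustration only, such a test might bound the blob's real-space extent; the specific limits below are assumptions, not values from the patent.

```python
def looks_human(blob_height_m, blob_width_m):
    """Crude check that a blob fits normal human body parameters in
    real-space dimensions (process action 738; bounds assumed)."""
    return 1.0 <= blob_height_m <= 2.3 and 0.3 <= blob_width_m <= 1.0

print(looks_human(1.75, 0.5))  # True: plausible standing person
print(looks_human(0.40, 0.4))  # False: too small to be a standing person
```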
- the color video camera outputs a continuous sequence of digital color images of the scene captured by the camera. These images are sometimes referred to herein as frames or image frames, as they were in the preceding descriptions.
- An example of a suitable color video camera is a conventional RGB video camera.
- the depth video camera outputs a continuous sequence of digital depth images of the scene captured by the camera. These images are sometimes referred to herein as frames or depth frames, as they were in the preceding descriptions.
- the pixel values in a depth frame are indicative of the distance between the depth camera and an object in the environment.
- one suitable depth video camera is a conventional infrared-based depth camera. This type of camera projects a known infrared pattern onto the environment and determines depth based on the pattern's deformation as captured by an infrared imager.
- embodiments of the face recognition training database generation technique described herein can use pixel correlations between a contemporaneously captured pair of color and depth frames. In other words, knowing which pixel in one of the frames of the pair depicts the same location in the scene as a given pixel in the other frame is sometimes useful. While conventional methods can be employed to ascertain this pixel correlation each time a pair of contemporaneous frames is captured, in one embodiment a pre-computed transform that defines the pixel correlation is employed. More particularly, if the color and depth video cameras are mounted such that they move together in the same manner, the relative transformation between them will not change. As such, the transformation can be pre-computed and used to determine the pixel correlation for each pair of contemporaneous frames captured.
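- The pre-computed transform can be sketched as a fixed depth-to-color mapping under simple pinhole camera models; the intrinsics, the pure-translation extrinsics, and all numeric values below are assumptions for illustration.

```python
def make_depth_to_color_map(fx_d, fy_d, cx_d, cy_d,    # depth intrinsics
                            fx_c, fy_c, cx_c, cy_c,    # color intrinsics
                            tx=0.025, ty=0.0, tz=0.0): # fixed baseline (m)
    """Pre-compute the fixed pixel correlation between the depth and color
    cameras. Because the cameras move together, this mapping holds for every
    contemporaneous frame pair (no rotation assumed; real calibration would
    include one)."""
    def to_color_pixel(u, v, depth_m):
        # Back-project the depth pixel to a 3D point in the depth camera frame.
        x = (u - cx_d) * depth_m / fx_d
        y = (v - cy_d) * depth_m / fy_d
        z = depth_m
        # Apply the fixed depth-to-color translation, then project into the
        # color image.
        x, y, z = x + tx, y + ty, z + tz
        return (fx_c * x / z + cx_c, fy_c * y / z + cy_c)
    return to_color_pixel

to_color = make_depth_to_color_map(580, 580, 320, 240, 580, 580, 320, 240)
print(to_color(320, 240, 2.0))  # approx (327.25, 240.0)
```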
- the face recognition training database generation technique embodiments described herein can also employ fixed-location color and depth video cameras.
- By fixed-location it is meant that the cameras are disposed at a particular location within the environment, and do not move from that location on their own. This, of course, does not preclude the cameras from being relocated within the environment from time to time.
- the face recognition training database generation technique embodiments described herein can employ moving color and depth video cameras.
- the cameras can be mounted in a mobile robotic device.
- a suitable mobile robotic device can in general be any conventional mobile robotic device that exhibits the following attributes.
- the robotic device 800 is able to move about the environment in which it is intended to travel.
- the mobile robotic device 800 includes a locomotive section 802 for moving the device through the environment.
- the mobile robotic device 800 also has sensors that are employed to track and follow people through the applicable environment. In particular, these sensors include the aforementioned color video camera 804 and depth video camera 806.
- the color and depth video cameras 804, 806 are repositionable so that different portions of the environment can be captured.
- the color and depth video cameras 804, 806 can be housed in a head section 808 of the mobile robotic device 800 that typically is disposed above the aforementioned locomotive section 802.
- the point of view of the cameras 804, 806 can be changed by redirecting the cameras themselves, or by moving the head section 808, or both.
- An example of the latter scenario is a configuration where the head section rotates about a vertical axis to provide a 360 degree panning motion, while the cameras pivot up and down to provide a tilting motion.
- the cameras also have a zoom feature.
- the mobile robotic device 800 also includes a control unit 810 that controls the locomotive section 802 to move the robotic device through the environment in a conventional manner; and controls the movement of the head section 808, or the cameras 804, 806, or both, to capture different scenes within the environment.
- the control unit 810 includes a computing device 812 (such as those described in the Exemplary Operating Environments section of this disclosure).
- This computing device 812 includes a control module that is responsible for initiating movement control signals to the locomotive and head sections, and for using the frames captured by the color and depth video cameras in the manner described previously to generate face recognition training databases.
- the control of the movement of the locomotive and head sections is done using conventional methods, whereas the latter function is handled by a face recognition training database generation sub-module.
- FIG. 9 illustrates a simplified example of a general-purpose computer system on which various embodiments and elements of the face recognition training database generation technique, as described herein, may be implemented. It should be noted that any boxes that are represented by broken or dashed lines in FIG. 9 represent alternate embodiments of the simplified computing device, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
- FIG. 9 shows a general system diagram showing a simplified computing device 10.
- Such computing devices can typically be found in devices having at least some minimum computational capability, including, but not limited to, personal computers, server computers, hand-held computing devices, laptop or mobile computers, communications devices such as cell phones and PDA's, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, audio or video media players, etc.
- the device should have sufficient computational capability and system memory to enable basic computing operations.
- processing unit(s) 12 may also include one or more GPUs 14, either or both in communication with system memory 16.
- processing unit(s) 12 of the general computing device may be specialized microprocessors, such as a DSP, a VLIW, or other micro-controller, or can be conventional CPUs having one or more processing cores, including specialized GPU-based cores in a multi-core CPU.
- the simplified computing device of FIG. 9 may also include other components, such as, for example, a communications interface 18.
- the simplified computing device of FIG. 9 may also include one or more conventional computer input devices 20 (e.g., pointing devices, keyboards, audio input devices, video input devices, haptic input devices, devices for receiving wired or wireless data transmissions, etc.).
- the simplified computing device of FIG. 9 may also include other optional components, such as, for example, one or more conventional display device(s) 24 and other computer output devices 22 (e.g., audio output devices, video output devices, devices for transmitting wired or wireless data transmissions, etc.).
- typical communications interfaces 18, input devices 20, output devices 22, and storage devices 26 for general- purpose computers are well known to those skilled in the art, and will not be described in detail herein.
- Computer readable media can be any available media that can be accessed by computer 10 via storage devices 26 and includes both volatile and nonvolatile media that is either removable 28 and/or non-removable 30, for storage of information such as computer-readable or computer-executable instructions, data structures, program modules, or other data.
- Computer readable media may comprise computer storage media and communication media.
- Computer storage media includes, but is not limited to, computer or machine readable media or storage devices such as DVD's, CD's, floppy disks, tape drives, hard drives, optical drives, solid state memory devices, RAM, ROM, EEPROM, flash memory or other memory technology, magnetic cassettes, magnetic tapes, magnetic disk storage, or other magnetic storage devices, or any other device which can be used to store the desired information and which can be accessed by one or more computing devices.
- Retention of information such as computer-readable or computer- executable instructions, data structures, program modules, etc., can also be accomplished by using any of a variety of the aforementioned communication media to encode one or more modulated data signals or carrier waves, or other transport mechanisms or communications protocols, and includes any wired or wireless information delivery mechanism.
- The terms "modulated data signal" and "carrier wave" generally refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wired media such as a wired network or direct-wired connection carrying one or more modulated data signals, and wireless media such as acoustic, RF, infrared, laser, and other wireless media for transmitting and/or receiving one or more modulated data signals or carrier waves. Combinations of any of the above should also be included within the scope of communication media.
- Software, programs, and/or computer program products embodying some or all of the various face recognition training database generation technique embodiments described herein, or portions thereof, may be stored, received, transmitted, or read from any desired combination of computer or machine readable media or storage devices and communication media in the form of computer executable instructions or other data structures.
- the face recognition training database generation technique embodiments described herein may be further described in the general context of computer-executable instructions, such as program modules, being executed by a computing device.
- program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
- the embodiments described herein may also be practiced in distributed computing environments where tasks are performed by one or more remote processing devices, or within a cloud of one or more devices, that are linked through one or more communications networks.
- program modules may be located in both local and remote computer storage media including media storage devices.
- the aforementioned instructions may be implemented, in part or in whole, as hardware logic circuits, which may or may not include a processor.
- a depth video camera and a motion detection method that uses depth frames from such a camera were employed.
- There are, however, conventional motion detection methods that can detect persons in an environment using just a color video camera.
- In such embodiments, the depth video camera is eliminated and just the color video camera is used to detect potential persons in the environment.
- the process described previously would be modified such that a sequence of frames output from a color video camera is input.
- image frames are then used in conjunction with a face detection method to detect potential persons in an environment, and in conjunction with an appropriate motion detection method to also detect potential persons in the environment.
- Where new sequences of frames are employed as described previously, these too would just be new sequences of frames output from the color video camera.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Collating Specific Patterns (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/530,925 US8855369B2 (en) | 2012-06-22 | 2012-06-22 | Self learning face recognition using depth based tracking for database generation and update |
PCT/US2013/046447 WO2013192253A1 (en) | 2012-06-22 | 2013-06-19 | Self learning face recognition using depth based tracking for database generation and update |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2864930A1 true EP2864930A1 (en) | 2015-04-29 |
EP2864930B1 EP2864930B1 (en) | 2018-11-28 |
Family
ID=48699349
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP13731633.7A Active EP2864930B1 (en) | 2012-06-22 | 2013-06-19 | Self learning face recognition using depth based tracking for database generation and update |
Country Status (7)
Country | Link |
---|---|
US (2) | US8855369B2 (en) |
EP (1) | EP2864930B1 (en) |
JP (1) | JP2015520470A (en) |
KR (1) | KR20150021526A (en) |
CN (1) | CN103400106B (en) |
ES (1) | ES2704277T3 (en) |
WO (1) | WO2013192253A1 (en) |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6112823B2 (en) * | 2012-10-30 | 2017-04-12 | キヤノン株式会社 | Information processing apparatus, information processing method, and computer-readable program |
US8761448B1 (en) | 2012-12-13 | 2014-06-24 | Intel Corporation | Gesture pre-processing of video stream using a markered region |
KR102161783B1 (en) * | 2014-01-16 | 2020-10-05 | 한국전자통신연구원 | Performance Evaluation System and Method for Face Recognition of Service Robot using UHD Moving Image Database |
CN103792943B (en) * | 2014-02-19 | 2017-01-11 | 北京工业大学 | Robot autonomic movement control method based on distance information and example learning |
WO2015130309A1 (en) * | 2014-02-28 | 2015-09-03 | Hewlett-Packard Development Company, L.P. | Customizable profile to modify an identified feature in video feed |
US9552519B2 (en) * | 2014-06-02 | 2017-01-24 | General Motors Llc | Providing vehicle owner's manual information using object recognition in a mobile device |
US9384386B2 (en) | 2014-08-29 | 2016-07-05 | Motorola Solutions, Inc. | Methods and systems for increasing facial recognition working rang through adaptive super-resolution |
US9544679B2 (en) * | 2014-12-08 | 2017-01-10 | Harman International Industries, Inc. | Adjusting speakers using facial recognition |
CN104463899B (en) * | 2014-12-31 | 2017-09-22 | 北京格灵深瞳信息技术有限公司 | A kind of destination object detection, monitoring method and its device |
US9888174B2 (en) | 2015-10-15 | 2018-02-06 | Microsoft Technology Licensing, Llc | Omnidirectional camera with movement detection |
US10277858B2 (en) * | 2015-10-29 | 2019-04-30 | Microsoft Technology Licensing, Llc | Tracking object of interest in an omnidirectional video |
CN106778546A (en) * | 2016-11-29 | 2017-05-31 | 聚鑫智能科技(武汉)股份有限公司 | A kind of visual identity method and system based on visible ray and non-visible light |
JP2018093412A (en) * | 2016-12-06 | 2018-06-14 | 株式会社日立製作所 | Processor, transmission program, transmission method |
CN108154375B (en) * | 2016-12-06 | 2019-10-15 | 阿里巴巴集团控股有限公司 | A kind of business data processing method and device |
CN106650656B (en) * | 2016-12-16 | 2023-10-27 | 中新智擎科技有限公司 | User identity recognition device and robot |
WO2018186398A1 (en) * | 2017-04-07 | 2018-10-11 | 日本電気株式会社 | Learning data generation device, learning data generation method, and recording medium |
EP3610410A1 (en) * | 2017-04-14 | 2020-02-19 | Koninklijke Philips N.V. | Person identification systems and methods |
US10924670B2 (en) | 2017-04-14 | 2021-02-16 | Yang Liu | System and apparatus for co-registration and correlation between multi-modal imagery and method for same |
US10242486B2 (en) | 2017-04-17 | 2019-03-26 | Intel Corporation | Augmented reality and virtual reality feedback enhancement system, apparatus and method |
US10671840B2 (en) | 2017-05-04 | 2020-06-02 | Intel Corporation | Method and apparatus for person recognition using continuous self-learning |
US10943088B2 (en) * | 2017-06-14 | 2021-03-09 | Target Brands, Inc. | Volumetric modeling to identify image areas for pattern recognition |
CN109948468A (en) * | 2019-02-28 | 2019-06-28 | 南京甬宁科学仪器有限公司 | A kind of laser microscope image analysis identifying system |
GB2586996B (en) * | 2019-09-11 | 2022-03-09 | Canon Kk | A method, apparatus and computer program for acquiring a training set of images |
CN111401205B (en) * | 2020-03-11 | 2022-09-23 | 深圳市商汤科技有限公司 | Action recognition method and device, electronic equipment and computer readable storage medium |
CN111709974B (en) * | 2020-06-22 | 2022-08-02 | 苏宁云计算有限公司 | Human body tracking method and device based on RGB-D image |
CN114827435B (en) * | 2021-01-28 | 2024-05-07 | 深圳绿米联创科技有限公司 | Video stream processing method and device free of IR-Cut, intelligent door lock and medium |
Family Cites Families (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5164992A (en) * | 1990-11-01 | 1992-11-17 | Massachusetts Institute Of Technology | Face recognition system |
US6819783B2 (en) * | 1996-09-04 | 2004-11-16 | Centerframe, Llc | Obtaining person-specific images in a public venue |
US5991429A (en) * | 1996-12-06 | 1999-11-23 | Coffin; Jeffrey S. | Facial recognition system for security access and identification |
US6108437A (en) * | 1997-11-14 | 2000-08-22 | Seiko Epson Corporation | Face recognition apparatus, method, system and computer readable medium thereof |
US6944319B1 (en) * | 1999-09-13 | 2005-09-13 | Microsoft Corporation | Pose-invariant face recognition system and process |
AU2001282483A1 (en) * | 2000-08-29 | 2002-03-13 | Imageid Ltd. | Indexing, storage and retrieval of digital images |
US6920236B2 (en) * | 2001-03-26 | 2005-07-19 | Mikos, Ltd. | Dual band biometric identification system |
CA2359269A1 (en) * | 2001-10-17 | 2003-04-17 | Biodentity Systems Corporation | Face imaging system for recordal and automated identity confirmation |
US7024033B2 (en) | 2001-12-08 | 2006-04-04 | Microsoft Corp. | Method for boosting the performance of machine-learning classifiers |
US7050607B2 (en) | 2001-12-08 | 2006-05-23 | Microsoft Corp. | System and method for multi-view face detection |
JP2003346149A (en) * | 2002-05-24 | 2003-12-05 | Omron Corp | Face collating device and bioinformation collating device |
US7843495B2 (en) * | 2002-07-10 | 2010-11-30 | Hewlett-Packard Development Company, L.P. | Face recognition in a digital imaging system accessing a database of people |
JP2004299025A (en) * | 2003-04-01 | 2004-10-28 | Honda Motor Co Ltd | Mobile robot control device, mobile robot control method and mobile robot control program |
JP2005044330A (en) | 2003-07-24 | 2005-02-17 | Univ Of California San Diego | Weak hypothesis generation device and method, learning device and method, detection device and method, expression learning device and method, expression recognition device and method, and robot device |
JP4328286B2 (en) * | 2004-12-14 | 2009-09-09 | 本田技研工業株式会社 | Face area estimation device, face area estimation method, and face area estimation program |
US7668346B2 (en) | 2006-03-21 | 2010-02-23 | Microsoft Corporation | Joint boosting feature selection for robust face recognition |
JP4836633B2 (en) * | 2006-03-31 | 2011-12-14 | 株式会社東芝 | Face authentication device, face authentication method, and entrance / exit management device |
JP2008017169A (en) * | 2006-07-06 | 2008-01-24 | Nikon Corp | Electronic camera |
US8121356B2 (en) * | 2006-09-15 | 2012-02-21 | Identix Incorporated | Long distance multimodal biometric system and method |
US8010471B2 (en) | 2007-07-13 | 2011-08-30 | Microsoft Corporation | Multiple-instance pruning for learning efficient cascade detectors |
EP2174310A4 (en) * | 2007-07-16 | 2013-08-21 | Cernium Corp | Apparatus and methods for video alarm verification |
WO2009067670A1 (en) * | 2007-11-21 | 2009-05-28 | Gesturetek, Inc. | Media preferences |
CN100543764C (en) * | 2007-12-25 | 2009-09-23 | 西南交通大学 | A kind of face feature extraction method with illumination robustness |
KR101618735B1 (en) | 2008-04-02 | 2016-05-09 | 구글 인코포레이티드 | Method and apparatus to incorporate automatic face recognition in digital image collections |
US8265425B2 (en) * | 2008-05-20 | 2012-09-11 | Honda Motor Co., Ltd. | Rectangular table detection using hybrid RGB and depth camera sensors |
US8442355B2 (en) * | 2008-05-23 | 2013-05-14 | Samsung Electronics Co., Ltd. | System and method for generating a multi-dimensional image |
US20110078097A1 (en) | 2009-09-25 | 2011-03-31 | Microsoft Corporation | Shared face training data |
JP4844670B2 (en) * | 2009-11-13 | 2011-12-28 | 日本ビクター株式会社 | Video processing apparatus and video processing method |
US8730309B2 (en) * | 2010-02-23 | 2014-05-20 | Microsoft Corporation | Projectors and depth cameras for deviceless augmented reality and interaction |
US9858475B2 (en) * | 2010-05-14 | 2018-01-02 | Intuitive Surgical Operations, Inc. | Method and system of hand segmentation and overlay using depth data |
US9400503B2 (en) * | 2010-05-20 | 2016-07-26 | Irobot Corporation | Mobile human interface robot |
WO2012057665A1 (en) | 2010-10-28 | 2012-05-03 | Telefonaktiebolaget L M Ericsson (Publ) | A face data acquirer, end user video conference device, server, method, computer program and computer program product for extracting face data |
WO2012071677A1 (en) | 2010-11-29 | 2012-06-07 | Technicolor (China) Technology Co., Ltd. | Method and system for face recognition |
US9235977B2 (en) * | 2011-02-22 | 2016-01-12 | Richard Deutsch | Systems and methods for monitoring caregiver and patient protocol compliance |
US9530221B2 (en) * | 2012-01-06 | 2016-12-27 | Pelco, Inc. | Context aware moving object detection |
JP6259808B2 (en) * | 2012-03-14 | 2018-01-10 | グーグル エルエルシー | Modifying the appearance of participants during a video conference |
US9321173B2 (en) * | 2012-06-22 | 2016-04-26 | Microsoft Technology Licensing, Llc | Tracking and following people with a mobile robotic device |
2012
- 2012-06-22 US US13/530,925 patent/US8855369B2/en active Active
2013
- 2013-06-19 KR KR20147035864A patent/KR20150021526A/en not_active Application Discontinuation
- 2013-06-19 JP JP2015518531A patent/JP2015520470A/en active Pending
- 2013-06-19 ES ES13731633T patent/ES2704277T3/en active Active
- 2013-06-19 WO PCT/US2013/046447 patent/WO2013192253A1/en active Application Filing
- 2013-06-19 EP EP13731633.7A patent/EP2864930B1/en active Active
- 2013-06-21 CN CN201310271853.4A patent/CN103400106B/en active Active
2014
- 2014-10-07 US US14/507,956 patent/US9317762B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US20150092986A1 (en) | 2015-04-02 |
KR20150021526A (en) | 2015-03-02 |
CN103400106A (en) | 2013-11-20 |
US20130343600A1 (en) | 2013-12-26 |
CN103400106B (en) | 2017-06-16 |
EP2864930B1 (en) | 2018-11-28 |
WO2013192253A1 (en) | 2013-12-27 |
JP2015520470A (en) | 2015-07-16 |
US9317762B2 (en) | 2016-04-19 |
ES2704277T3 (en) | 2019-03-15 |
US8855369B2 (en) | 2014-10-07 |
Similar Documents
Publication | Title |
---|---|
US9317762B2 (en) | Face recognition using depth based tracking |
Betancourt et al. | The evolution of first person vision methods: A survey |
WO2019218824A1 (en) | Method for acquiring motion track and device thereof, storage medium, and terminal |
US10055646B2 (en) | Local caching for object recognition |
US10943095B2 (en) | Methods and systems for matching extracted feature descriptors for enhanced face recognition |
CN105404860B (en) | Method and apparatus for managing missing-person information |
CN108629284A (en) | Method and device for real-time face tracking and face pose selection based on an embedded vision system |
CN111160202B (en) | Identity verification method, device, equipment and storage medium based on AR equipment |
DE112019001257T5 (en) | Video stabilization to reduce camera and face movement |
CN102959946A (en) | Augmenting image data based on related 3D point cloud data |
CN110428449A (en) | Target detection and tracking method, device, equipment and storage medium |
WO2019242672A1 (en) | Method, device and system for target tracking |
Phankokkruad et al. | An evaluation of technical study and performance for real-time face detection using web real-time communication |
Saeed et al. | Boosted human head pose estimation using kinect camera |
CN110858277A (en) | Method and device for obtaining an attitude classification model |
JPWO2020137536A1 (en) | Person authentication device, control method, and program |
KR101826669B1 (en) | System and method for video searching |
CN116824641B (en) | Gesture classification method, device, equipment and computer storage medium |
Delibasoglu et al. | Motion detection in moving camera videos using background modeling and FlowNet |
JP3401511B2 (en) | Image feature point extraction method, feature point extraction apparatus, and computer-readable recording medium storing a program for causing a computer to execute the method |
JP4449483B2 (en) | Image analysis apparatus, image analysis method, and computer program |
Golda | Image-based Anomaly Detection within Crowds |
US11403880B2 (en) | Method and apparatus for facilitating identification |
CN107749068A (en) | Real-time object tracking method combining a particle filter with a perceptual hash algorithm |
Chu et al. | YG-SLAM: Enhancing Visual SLAM in Dynamic Environments with YOLOv8 and Geometric Constraints |
Legal Events
Code | Title | Description |
---|---|---|
PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
17P | Request for examination filed | Effective date: 20141218 |
AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
AX | Request for extension of the European patent | Extension state: BA ME |
DAX | Request for extension of the European patent (deleted) | |
17Q | First examination report despatched | Effective date: 20160224 |
STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: ZHIGU HOLDINGS LIMITED |
GRAP | Despatch of communication of intention to grant a patent | Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: GRANT OF PATENT IS INTENDED |
INTG | Intention to grant announced | Effective date: 20180619 |
GRAS | Grant fee paid | Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
GRAA | (expected) grant | Free format text: ORIGINAL CODE: 0009210 |
STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
AK | Designated contracting states | Kind code of ref document: B1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
REG | Reference to a national code | Ref country code: CH; Ref legal event code: EP |
REG | Reference to a national code | Ref country code: AT; Ref legal event code: REF; Ref document number: 1071081; Kind code of ref document: T; Effective date: 20181215 |
REG | Reference to a national code | Ref country code: DE; Ref legal event code: R096; Ref document number: 602013047433 |
REG | Reference to a national code | Ref country code: IE; Ref legal event code: FG4D |
REG | Reference to a national code | Ref country code: ES; Ref legal event code: FG2A; Ref document number: 2704277; Kind code of ref document: T3; Effective date: 20190315 |
REG | Reference to a national code | Ref country code: NL; Ref legal event code: MP; Effective date: 20181128 |
REG | Reference to a national code | Ref country code: LT; Ref legal event code: MG4D |
REG | Reference to a national code | Ref country code: AT; Ref legal event code: MK05; Ref document number: 1071081; Kind code of ref document: T; Effective date: 20181128 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: LV, AT, HR, FI, LT (effective 20181128); IS (20190328); BG, NO (20190228) |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: AL, SE, RS (effective 20181128); PT (20190328); GR (20190301) |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: NL (effective 20181128) |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: DK, PL, CZ (effective 20181128) |
REG | Reference to a national code | Ref country code: DE; Ref legal event code: R097; Ref document number: 602013047433 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: RO, SK, SM, EE (effective 20181128) |
PLBE | No opposition filed within time limit | Free format text: ORIGINAL CODE: 0009261 |
STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: SI (effective 20181128) |
26N | No opposition filed | Effective date: 20190829 |
REG | Reference to a national code | Ref country code: DE; Ref legal event code: R082; Ref document number: 602013047433 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: MC (effective 20181128) |
REG | Reference to a national code | Ref country code: CH; Ref legal event code: PL |
REG | Reference to a national code | Ref country code: BE; Ref legal event code: MM; Effective date: 20190630 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: TR (effective 20181128) |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of non-payment of due fees: IE (effective 20190619) |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of non-payment of due fees: CH, BE, LI (effective 20190630); LU (20190619) |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: CY (effective 20181128) |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: MT (effective 20181128); HU (invalid ab initio, effective 20130619) |
REG | Reference to a national code | Ref country code: DE; Ref legal event code: R079; Ref document number: 602013047433; Free format text: PREVIOUS MAIN CLASS: G06K0009000000; Ipc: G06V0010000000 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: MK (effective 20181128) |
P01 | Opt-out of the competence of the unified patent court (UPC) registered | Effective date: 20230523 |
P02 | Opt-out of the competence of the unified patent court (UPC) changed | Effective date: 20230530 |
PGFP | Annual fee paid to national office [announced via postgrant information from national office to EPO] | FR: payment date 20230628, year of fee payment 11; DE: payment date 20220914, year of fee payment 11 |
PGFP | Annual fee paid to national office [announced via postgrant information from national office to EPO] | IT: payment date 20230623, year of fee payment 11; GB: payment date 20230622, year of fee payment 11; ES: payment date 20230829, year of fee payment 11 |