US20190332848A1 - Facial enrollment and recognition system - Google Patents
- Publication number
- US20190332848A1 (U.S. application Ser. No. 15/964,220)
- Authority
- US
- United States
- Prior art keywords
- facial
- person
- images
- facial recognition
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G06K9/00288—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5838—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
-
- G06F17/30256—
-
- G06K9/00228—
-
- G06K9/00744—
-
- G06K9/00771—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Definitions
- the present disclosure relates generally to facial recognition systems, and more particularly, to facial recognition systems that employ a facial recognition engine to compare a facial image with a representation that is based upon facial images stored in a facial image database created and maintained by the facial recognition system.
- the disclosure relates to facial recognition systems that employ one or more facial recognition engines to compare a facial image with a representation that is based upon facial images stored in a facial image database created and maintained by the facial recognition system.
- the facial recognition system may be configured to monitor still and/or video sources capturing facial images of individuals within a space, and then utilize one or more facial recognition engines to identify the individuals within the space.
- the facial recognition system may report back to a building automation system with the identity of the individuals seen within the space so that the building automation system may take appropriate action. For example, if the building automation system includes an HVAC system, the HVAC system may change a temperature set point in response to being informed that a particular person has arrived home. If the building automation system includes a security system, the security system may unlock a door of a building in response to being informed that a particular authorized person is present at the door. These are just examples.
- a particular example of the disclosure is a facial recognition system that includes an input, an output and a memory for storing a facial image database that includes a plurality of entries each corresponding to a different person, and wherein each entry includes a person identifier along with one or more facial images of the person.
- the system includes a facial recognition module that is operably coupled to the memory, the input and the output.
- the facial recognition module is configured to receive a new facial image via the input and to ascertain one or more facial image parameters from the new facial image, and then select a subset of facial recognition engines from a larger set of available facial recognition engines based at least in part on one or more of the ascertained facial image parameters.
- the ascertained facial image parameters may include, for example, the size of the facial image in pixels, a relative brightness of the facial image, a relative contrast of the facial image, a relative back lighting of the facial image, a relative blurriness of the facial image and/or any other suitable image parameter(s).
- the ascertained facial image parameters may include whether the captured image shows the individual looking directly at the camera, or up or down and/or to the left or to the right.
- the ascertained facial image parameters may include whether and/or how much of the face is obstructed by a hat, hair, glasses or other object.
- each of the selected facial recognition engines may compare the new facial image to facial representations that are based upon the facial images in the facial image database to try to identify the person identifier that likely corresponds to the new facial image.
- the facial recognition module may then send a person ID to a control module via the output, wherein the control module may control one or more building control devices based at least in part on the person ID (e.g. change a setpoint, unlock a door, etc.).
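The flow described in the preceding paragraphs can be sketched as follows. This is an illustrative sketch only: the function names, the engine record layout, and the image representation are assumptions for illustration, not names from the disclosure.

```python
def ascertain_parameters(image):
    """Pull the facial image parameters named above from an image record.

    Here 'image' is a plain dict stub standing in for a decoded image."""
    return {
        "face_size_px": image.get("face_size_px", 0),
        "blurriness": image.get("blurriness", 1.0),
    }

def select_engines(params, engines):
    """Select the subset of engines whose stated requirements the image meets."""
    return [e for e in engines
            if params["face_size_px"] >= e["min_face_px"]
            and params["blurriness"] <= e["max_blur"]]

def recognize(image, engines, control_module):
    """Ascertain parameters, query the selected engines, forward the person ID."""
    params = ascertain_parameters(image)
    subset = select_engines(params, engines)
    results = [engine["query"](image) for engine in subset]
    if not results:
        return None
    # Each engine returns (person_id, confidence); keep the most confident.
    person_id, _confidence = max(results, key=lambda r: r[1])
    control_module(person_id)   # e.g. change a setpoint, unlock a door
    return person_id
```

As a usage example, an engine that requires at least a 40-pixel face would be selected for an 80-pixel face, while an engine requiring 120 pixels would not; the person ID from the qualifying engine is then passed to the control callback.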
- Another example of the disclosure includes a method of recognizing individuals within a building space. Access is gained to a facial image database that includes a plurality of enrolled persons, where the facial image database includes a facial image for each of the plurality of enrolled persons under each of a plurality of different facial conditions.
- One or more video feeds are monitored that provide images of spaces within the building space, at least some of the images including images of persons within the building space.
- the one or more video feeds are processed to detect one or more facial images of a person within the building space and one or more facial recognition engines may be selected to compare the detected facial image with facial models that are based on the facial images in the facial image database. Selecting the one or more facial recognition engines is based at least in part on one or more image criteria of the detected facial image.
- An identified one of the plurality of enrolled persons included in the facial image database that is identified in the detected facial image may be received from the selected one or more facial recognition engines and the identified one of the plurality of enrolled persons may be reported to a building automation system.
- One or more building control devices of the building automation system may be controlled based at least in part on the identified one of the plurality of enrolled persons.
- Another example of the disclosure includes a method of identifying an individual.
- the method includes monitoring a video feed that provides a series of images of activity in or around a building and extracting one or more images from the series of images of the video feed.
- the extracted one or more images are analyzed to find facial images, and the facial images are quantified to find a query-able facial image.
- One or more facial recognition engines are selected based at least in part upon one or more image properties of the query-able facial image.
- the query-able facial image is sent to the selected one or more facial recognition engines.
- the selected one or more facial recognition engines are configured to compare the query-able facial image with facial models that are based upon facial images within the facial image database.
- Facial recognition engine results include an identity of a person shown within the query-able facial image (if the person is present in the facial image database) and in some cases an associated confidence value.
- One or more building control devices may then be controlled based at least in part on the identity of the person shown within the query-able facial image.
- FIG. 1 is a schematic block diagram of an illustrative facial recognition system
- FIG. 2 is a schematic block diagram of an illustrative enrollment module forming a portion of the facial recognition system of FIG. 1 ;
- FIG. 3 is a schematic block diagram of an illustrative capture module forming a portion of the facial recognition system of FIG. 1 ;
- FIG. 4 is a schematic block diagram of an illustrative facial recognition system
- FIG. 5 is a schematic block diagram of an illustrative facial image database usable in the illustrative facial recognition systems of FIG. 1 and FIG. 4 ;
- FIG. 6 is a flow diagram showing an illustrative method of recognizing individuals within a space.
- FIG. 7 is a flow diagram showing an illustrative method of identifying an individual.
- references in the specification to “an embodiment”, “some embodiments”, “other embodiments”, etc. indicate that the embodiment described may include one or more particular features, structures, and/or characteristics. However, such recitations do not necessarily mean that all embodiments include the particular features, structures, and/or characteristics. Additionally, when particular features, structures, and/or characteristics are described in connection with one embodiment, it should be understood that such features, structures, and/or characteristics may also be used in connection with other embodiments whether or not explicitly described unless clearly stated to the contrary.
- FIG. 1 is a schematic block diagram of an illustrative facial recognition system 10 that may, for example, be configured to create and update a database of facial images and to use that database of facial images to identify an individual.
- the facial recognition system 10 includes an enrollment module 12 and a capture module 14 . While illustrated as separate components, the enrollment module 12 and the capture module 14 may individually or in combination be manifested in a controller that may be part of a building automation system (e.g. an HVAC panel such as a thermostat, a security panel, etc.). In some cases, the enrollment module 12 and/or the capture module 14 , or at least some functionality of one or both modules, may be manifested in a server or a cloud-based application.
- the enrollment module 12 and/or the capture module 14 may be manifested in a mobile device such as a tablet computer, laptop computer or smartphone. In some cases, the enrollment module 12 and/or the capture module 14 , or at least some functionality of one or both modules, may be manifested in a desktop computer.
- the enrollment module 12 may, for example, be responsible for creating and maintaining a facial images database 16 .
- the enrollment module 12 may obtain facial images from a variety of different sources and may in some cases analyze the facial images for quality before storing the facial images in the facial images database 16 .
- the enrollment module 12 may be responsible for periodically updating the facial images stored in the facial images database 16 to account for changing styles (hair styles, facial hair, glasses, etc.), aging and the like of the individuals in the facial images database 16 .
- the capture module 14 may be responsible for obtaining facial images of individuals to be identified.
- the capture module 14 may receive live video of a space, and may analyze the live video to find facial images of persons to be identified. Once the capture module 14 finds one or more facial images of person(s) that are to be identified, the capture module 14 may provide the one or more facial images to a facial recognition module 18 .
- One of the tasks of the facial recognition module 18 is to determine which of a variety of different facial recognition engines 20 are to be used to query the facial images database 16 in order to identify the persons in the one or more facial images. In some cases, which facial recognition engine 20 to use may be determined at least in part by one or more image parameters of the one or more facial images.
- the facial recognition module 18 includes a facial recognition engine evaluation module 22 that determines which of the facial recognition engines 20 are to be used. In some cases, a single facial recognition engine 20 may be used. In other situations, two, three or more distinct facial recognition engines 20 may be used. In FIG. 1 , the facial recognition engines 20 include an ENGINE #1 labeled as 24 , an ENGINE #2 labeled as 26 through an ENGINE #N labeled as 28 .
- the facial recognition engine evaluation module 22 may analyze a facial image to determine one or more image parameters of the facial image, and may utilize the one or more image parameters to determine which of the facial recognition engines 20 should be used in order to identify the person shown in the facial image.
- the facial recognition engine evaluation module 22 (or the facial recognition module 18 itself) may evaluate multiple images of a particular individual (e.g. multiple images of a video sequence) to determine which of the multiple images are most likely to provide good results.
- the image parameters may include, for example, the size of the facial image in pixels, a relative brightness of the facial image, a relative contrast of the facial image, a relative back lighting of the facial image, a relative blurriness of the facial image, and/or any other suitable image parameter(s).
- the image parameters may include whether the captured image shows the individual looking directly at the camera, or up or down and/or to the left or to the right. In some cases, the image parameters may include whether and/or how much of the face is obstructed by a hat, hair, glasses or other object.
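The image parameters listed above can be derived from pixel data directly. The sketch below computes relative brightness, relative contrast, and a simple sharpness proxy from a 2-D grid of grayscale intensities; the exact measures used by the system are not specified in the disclosure, so these particular formulas (mean intensity, standard deviation, mean horizontal gradient) are illustrative assumptions.

```python
from statistics import mean, pstdev

def image_parameters(gray):
    """Derive simple image parameters from a 2-D list of intensities in [0, 255].

    Returns relative brightness and contrast in [0, 1], a gradient-based
    sharpness proxy (low values suggest a blurry image), and the face size
    in pixels taken as the smaller image dimension."""
    flat = [p for row in gray for p in row]
    brightness = mean(flat) / 255.0        # relative brightness
    contrast = pstdev(flat) / 255.0        # relative contrast
    grads = [abs(row[i + 1] - row[i])
             for row in gray for i in range(len(row) - 1)]
    sharpness = mean(grads) / 255.0 if grads else 0.0
    return {"brightness": brightness,
            "contrast": contrast,
            "sharpness": sharpness,
            "face_size_px": min(len(gray), len(gray[0]))}
```

A checkerboard of 0 and 255 intensities, for example, yields a mid-range brightness of 0.5 with maximal gradient-based sharpness of 1.0.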
- the engine evaluation model may store a table that maps certain image parameters to certain facial recognition engines 20 .
- a specific facial recognition engine 20 may be selected if the image parameters fall within a range suitable for that facial recognition engine 20 .
- the suitable range thresholds may be selected manually based on engine specifications, or may be based upon prior testing performed on a sample of facial images. In some cases, the suitable range thresholds may also be selected automatically, and may be adjusted over time, based on comparison of different facial recognition engine results on facial images processed by the system. Examples are shown in Table One, below:
- the input parameters may be described as a range, as specific values, or as a specific set of conditions. In some cases, a weighted sum combining various parameters may also be considered. For example, if the face size for a particular facial image is between 40 and 90 pixels, facial sharpness needs to be greater than or equal to 0.8; and if the face size is greater than 90 pixels, facial sharpness needs to be greater than or equal to 0.7.
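A rule table of the kind referenced above might be encoded as follows. The face-size and sharpness thresholds mirror the example in the preceding paragraph; the engine names (taken from the FIG. 1 labels) and the assignment of engines to rows are illustrative assumptions.

```python
# Hypothetical mapping of image-parameter ranges to suitable engines,
# in the spirit of "Table One". Size ranges are [lo, hi) in pixels.
RULES = [
    {"size": (40, 90),     "min_sharpness": 0.8, "engines": ["ENGINE #1"]},
    {"size": (90, 10_000), "min_sharpness": 0.7, "engines": ["ENGINE #1",
                                                             "ENGINE #2"]},
]

def engines_for(face_size_px, sharpness):
    """Return the engines whose rule matches the ascertained parameters."""
    for rule in RULES:
        lo, hi = rule["size"]
        if lo <= face_size_px < hi and sharpness >= rule["min_sharpness"]:
            return rule["engines"]
    return []   # no engine is expected to perform well on this image
```

With these rules, a 60-pixel face at sharpness 0.85 selects only ENGINE #1, a 120-pixel face at sharpness 0.75 selects both engines, and a 60-pixel face at sharpness 0.5 selects none.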
- A variety of commercially available facial recognition engines may be utilized. Examples of suitable cloud-based APIs (Application Programming Interfaces) for facial recognition include but are not limited to Microsoft Face API and Amazon Rekognition. Examples of suitable facial recognition engines that may be employed as on-site software or be integrated into other products include but are not limited to NEC Face Recognition, Morpho Argus and Cognitec.
- the facial recognition module 18 may provide the selected facial recognition engine(s) 20 with the facial image.
- one or more of the facial recognition engines 20 may be hosted on a remote server, but this is not required.
- the selected facial recognition engine(s) 20 may return an identity of the person shown in the facial image(s) that was sent to the selected facial recognition engine(s) 20 .
- the selected facial recognition engine(s) 20 may also return a confidence value that provides an indication of how confident (i.e. likely) that the identity of the person is correct.
- if the best available facial image sent to the selected facial recognition engine(s) 20 is small, blurry and/or poorly lit, the confidence value may be relatively low. Conversely, if the best available facial image sent to the selected facial recognition engine(s) 20 is well-lit and clear, and is an image of the individual looking directly or nearly directly at the camera, the confidence value may be relatively high.
- FIG. 2 is a schematic block diagram of an illustrative enrollment module 12 .
- a function of the enrollment module 12 is to obtain facial images that may be placed in the facial images database 16 ( FIG. 1 ) and subsequently used to build or update a facial model based upon facial images in the facial images database 16 in order to identify persons in captured facial images.
- the enrollment module 12 includes an image input module 30 .
- the image input module 30 may obtain facial images from a variety of different sources. Examples of suitable image sources include a selfies module 32 , a photos module 34 , a captured images module 36 and a social media module 38 .
- the selfies module 32 may instruct individuals who are expected to be in the building space and are enrolling in the facial recognition system to take a series of selfies. For example, the individuals may be instructed to take and upload selfies showing themselves, or at least their faces, looking directly at the camera, looking left, looking right, looking up and looking down. In some cases, the selfies module 32 may instruct the individual to take multiple selfies with their hair up and their hair down, with and without facial jewelry like earrings, nose piercings, lip piercings, and/or various glasses, for example. In some cases, the selfies module 32 may be implemented, at least in part, on a mobile device such as a smartphone or tablet computer, but this is not required.
- the photos module 34 may be configured to go through online and/or otherwise electronic photo libraries looking for suitable facial images. These can include photo libraries stored on a personal computer, on the cloud, on a mobile device, and/or any other device.
- the photos module 34 may assemble multiple facial images for a particular individual, may display the multiple facial images, and ask the individual to confirm that each of the images is in fact of that individual.
- the captured images module 36 may include or otherwise be operably coupled with a still camera, a video camera and the like, and may capture facial images as individuals move about the space. The captured images module 36 may compile these images, and in some cases may ask the individuals to confirm their identity.
- the social media module 38 may scan social media accounts, such as but not limited to Facebook, Snapchat and the like, looking for suitable facial images of an individual. In some cases, the social media module 38 may display the found images and ask for identity confirmation.
- the enrollment module 12 may include an image quality assessment module 40 that receives facial images from the image input module 30 and analyzes the received facial images to confirm that the images are of sufficient quality to be of use.
- the facial images that are believed to be of sufficient quality, and represent a suitable variety of poses and images (looking left, looking right, etc.) may be passed on to an image organization module 42 .
- a particular facial image pose may be determined to be of less-than sufficient quality, and the image quality assessment module 40 may ask the image input module 30 to obtain a higher quality facial image of that particular facial image pose if possible.
- Facial images that are deemed to be of sufficient quality, and of appropriate facial poses, are forwarded to the image organization module 42 .
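The quality-gating loop between the image quality assessment module 40 and the image input module 30 might look like the following sketch. The threshold values and field names are illustrative assumptions; the disclosure does not specify how "sufficient quality" is measured.

```python
def quality_ok(img, min_size=80, min_sharpness=0.6, min_contrast=0.2):
    """Return True if a candidate enrollment image meets minimum quality.

    'img' is a dict of previously ascertained image parameters."""
    return (img["face_size_px"] >= min_size
            and img["sharpness"] >= min_sharpness
            and img["contrast"] >= min_contrast)

def assess(candidates):
    """Split candidate images, keyed by pose, into accepted images and
    poses for which the image input module should obtain a better image."""
    accepted, retake = {}, []
    for pose, img in candidates.items():
        if quality_ok(img):
            accepted[pose] = img
        else:
            retake.append(pose)   # request a higher-quality image of this pose
    return accepted, retake
```

Only the accepted poses would then be forwarded to the image organization module 42, while the retake list goes back to the image input module 30.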
- the image organization module 42 may at least partially contribute to the organization of facial images within the facial images database 16 ( FIG. 1 ).
- Facial images may be organized in any suitable manner.
- the facial images for a particular individual may include images of the individual looking directly at the camera, images of the individual looking above the camera, images of the individual looking below the camera, images of the individual looking to the left of the camera, and images of the individual looking to the right of the camera.
- facial images for a particular individual may be organized by whether they are wearing their hair up or down, have facial hair or are clean-shaven, whether or not they are wearing jewelry, glasses and the like.
- facial images for a particular individual may also be organized by the size of the facial image in pixels, the relative brightness of the facial image, the relative contrast of the facial image, the relative back lighting of the facial image, the relative blurriness of the facial image and/or by any other suitable image parameter. These are just examples.
- FIG. 3 is a schematic block diagram of the capture module 14 . While the capture module 14 is configured to capture facial images of individuals within a space so that they can be identified, in some cases, the capture module 14 may also assist the enrollment module 12 in initially capturing facial images for populating the facial images database 16 ( FIG. 1 ). In some cases, the capture module 14 includes a video capture module 50 . In some instances, the video capture module 50 may be operably coupled to one or more still cameras and/or video cameras that are distributed within a space. In some cases, still images may be captured by the still cameras and/or from image frames captured by the video cameras. In some cases, still images may be captured at 30 frames per second using a video camera, although this is just an example.
- the still images may be forwarded to a face detection module 52 , which analyzes the still images looking for facial images. Once a possible facial image is detected, in some cases, subsequent still images are analyzed by a face tracking module 54 looking for confirmation the individual is still there and/or looking for better quality facial images of that individual for subsequent identification.
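Tracking a detected face across subsequent frames in search of a better-quality image, as described above, reduces to keeping the highest-scoring crop seen so far. The sketch below is a minimal illustration; the frame representation and scoring function are assumptions, since the disclosure leaves both open.

```python
def best_frame(frames, score):
    """Track a detected face across frames, keeping the best image so far.

    frames: iterable of per-frame face crops (any representation);
    score:  function rating a crop's suitability for recognition,
            e.g. based on size, sharpness and pose."""
    best, best_score = None, float("-inf")
    for frame in frames:
        s = score(frame)
        if s > best_score:
            best, best_score = frame, s
    return best, best_score
```

The winning crop and its score would then be handed to the face image evaluation module for the final quality check.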
- a face image evaluation module 56 may review the facial images to ascertain whether and which of the captured image(s) are of sufficient quality and/or pose to be of use in identifying the individual shown in the captured image(s).
- FIG. 4 is a schematic block diagram of an illustrative facial recognition system 60 .
- the facial recognition system 60 includes an input 62 and an output 64 .
- a memory 67 may be configured to store the facial images database 16 .
- the facial images database 16 includes a plurality of entries each corresponding to a different person, and each entry includes a person identifier along with one or more facial images of the person.
- the facial images database 16 may include multiple facial images for each person identifier, with some of the multiple facial images representing the person at one or more of different facial angles, different facial lighting, different facial size in terms of pixels, and different facial obstructions. Examples of different facial obstructions include but are not limited to differing hair style, wearing glasses, not wearing glasses, wearing a hat, not wearing a hat, and differing ages.
- the facial recognition module 18 is operably coupled to the input 62 , the output 64 and to the memory 67 and is configured to receive a new facial image via the input 62 and to ascertain one or more facial image parameters from the new facial image.
- the facial recognition module 18 is configured to select a subset of facial recognition engines 20 ( FIG. 1 ) from a larger set of available facial recognition engines 20 ( FIG. 1 ) based at least in part on one or more of the ascertained facial image parameters.
- the ascertained facial image parameters may include, for example, the size of the facial image in pixels, a relative brightness of the facial image, a relative contrast of the facial image, a relative back lighting of the facial image, a relative blurriness of the facial image and/or any other suitable image parameter(s).
- the ascertained facial image parameters may include whether the captured image shows the individual looking directly at the camera, or up or down and/or to the left or to the right. In some cases, the ascertained facial image parameters may include whether and/or how much of the face is obstructed by a hat, hair, glasses or other object.
- the facial recognition engines 20 may include cloud-based facial recognition engines, but this is not required.
- the selected subset of facial recognition engines 20 may include two or more distinct facial recognition engines 20 .
- the selected subset of facial recognition engines 20 may include only a single facial recognition engine 20 .
- Each of the facial recognition engines 20 is configured to compare the new facial image to facial models or other facial representations that are based upon the facial images in the facial image database 16 and to identify a person identifier that likely corresponds to the new facial image.
- the facial recognition module 18 may be configured to evaluate the person identifiers and confidence levels returned by the selected facial recognition engines 20 . If the returned person identifiers are the same, the facial recognition module 18 will assign a high confidence to the output person ID. However, if the returned person identifiers differ, the facial recognition module 18 may output the person ID with the highest combined confidence, or may select additional facial recognition engines to evaluate the new facial image. In case of disagreement, the output person ID will be assigned lower confidence.
- the facial recognition module 18 is configured to send a person ID to a control module 66 via the output 64 , wherein the control module 66 is configured to control one or more building control devices 68 , 70 based at least in part on the person ID.
- the facial recognition module 18 may be further configured to process the person identifiers identified by each of the subset of facial recognition engines 20 to determine the person ID that is sent to the control module 66 .
- the facial recognition module 18 may be configured to determine a confidence level in the person ID that is based at least in part on the confidence level of the person identifier provided by each of one or more of the subset of facial recognition engines 20 . In some cases, if the confidence level in the person ID is below a threshold confidence level, the facial recognition module 18 may select a different facial recognition engine 20 , or a different subset of facial recognition engines 20 , and may try again. This may be repeated until an acceptable confidence level is achieved. If an acceptable confidence level cannot be achieved, the facial recognition module 18 may report to the output module 64 that an unknown person was seen in the new facial image.
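The combine-and-retry behavior described in the last two paragraphs can be sketched as follows. The agreement bonus, the 0.5 disagreement discount, and the 0.75 threshold are illustrative assumptions; the disclosure only states that agreement raises confidence, disagreement lowers it, and that further engine subsets may be tried.

```python
def resolve_person_id(results):
    """Combine engine results; each result is a (person_id, confidence) pair."""
    if not results:
        return None, 0.0
    if len({pid for pid, _ in results}) == 1:        # all engines agree
        return results[0][0], max(c for _, c in results)
    # Disagreement: pick the ID with the highest combined confidence,
    # but report a discounted confidence to reflect the disagreement.
    totals = {}
    for pid, conf in results:
        totals[pid] = totals.get(pid, 0.0) + conf
    winner = max(totals, key=totals.get)
    return winner, 0.5 * max(c for pid, c in results if pid == winner)

def identify(image, engine_subsets, threshold=0.75):
    """Try successive engine subsets until the confidence threshold is met;
    otherwise report that an unknown person was seen."""
    for subset in engine_subsets:
        results = [engine(image) for engine in subset]
        pid, conf = resolve_person_id(results)
        if pid is not None and conf >= threshold:
            return pid, conf
    return "UNKNOWN", 0.0
```

For example, two engines returning ("alice", 0.9) and ("bob", 0.6) yield "alice" at a discounted 0.45, which falls below the threshold and triggers the next engine subset.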
- FIG. 5 is a schematic illustration of an illustrative facial images database 116 .
- the facial images database 116 may be considered as being an illustrative but non-limiting example of the facial images database 16 . It will be appreciated that the information within the facial images database 16 may be organized in any suitable fashion.
- the facial images database 116 includes a plurality of facial images that are organized by individual. To illustrate, the facial images database 116 may include an INDIVIDUAL #1 labeled as 118 , an INDIVIDUAL #2 labeled as 120 through an INDIVIDUAL #P labeled as 122 . A number of facial images are organized underneath each individual 118 , 120 , 122 .
- the individual 118 includes an IMAGE #1 labeled as 124 , an IMAGE #2 labeled as 126 through an IMAGE #M labeled as 128 .
- the individual 120 includes an IMAGE #1 labeled as 134 , an IMAGE #2 labeled as 136 through an IMAGE #M labeled as 138 and the individual 122 includes an IMAGE #1 labeled as 144 , an IMAGE #2 labeled as 146 through an IMAGE #M labeled as 148 .
- the images for each individual 118 , 120 , 122 may be organized in a similar fashion.
- the image 124 may represent a straight on view of the individual 118 , the image 134 may represent a straight on view of the individual 120 , and the image 144 may represent a straight on view of the individual 122 .
- the images 126 , 136 , 146 may represent left profiles of the individuals 118 , 120 , 122 , respectively. These are just examples. It will be appreciated that the various views of each individual 118 , 120 , 122 , and perhaps views with and without facial obstructions and/or other characteristics, may be organized in a similar manner.
- facial images for a particular individual may be organized by whether they are wearing their hair up or down, have facial hair or are clean-shaven, whether or not they are wearing jewelry, glasses and the like.
- facial images for a particular individual may be organized by the size of the facial image in pixels, the relative brightness of the facial image, the relative contrast of the facial image, the relative back lighting of the facial image, the relative blurriness of the facial image and/or by any other suitable image parameter. These are just examples.
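One plausible data layout for the per-individual, per-category organization of FIG. 5 is a nested mapping keyed by individual and then by (pose, condition) category, as sketched below. The key structure, category labels and file names are illustrative assumptions; the disclosure expressly allows any suitable organization.

```python
# Hypothetical layout for the facial images database of FIG. 5: images
# grouped per individual, keyed by (pose, condition) categories.
facial_images_db = {
    "INDIVIDUAL #1": {
        ("straight", "glasses"):       ["img_124.jpg"],
        ("left_profile", "no_glasses"): ["img_126.jpg"],
    },
    "INDIVIDUAL #2": {
        ("straight", "no_glasses"):    ["img_134.jpg"],
    },
}

def images_matching(db, pose=None, condition=None):
    """Collect, per individual, the stored images matching the given categories;
    a None category acts as a wildcard."""
    out = {}
    for person, entries in db.items():
        hits = [img for (p, c), imgs in entries.items() for img in imgs
                if (pose is None or p == pose)
                and (condition is None or c == condition)]
        if hits:
            out[person] = hits
    return out
```

Such a lookup would let the facial recognition module fetch, say, every straight-on image across all enrolled individuals when matching a straight-on query image.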
- FIG. 6 is a flow diagram showing an illustrative method 150 of recognizing individuals within or around a building space.
- a facial image database such as the facial images databases 16 , 116
- the facial image database includes a facial image for each of the plurality of enrolled persons under each of a plurality of different facial conditions.
- the plurality of different facial conditions include two or more of the person looking up, the person looking down, the person looking to the left, the person looking to the right and the person looking straight ahead.
- the facial image database organizes the facial images for each of the plurality of enrolled persons at each of a plurality of different facial conditions into predetermined separate categories.
- the separate categories include one or more of the person with glasses, the person without glasses, the person with their hair worn up, the person with their hair worn down, the person clean-shaven, the person not clean-shaven, the person wearing jewelry, the person not wearing jewelry, the person wearing a hat, the person not wearing a hat, and the person wearing a scarf.
- the separate categories include one or more of the person looking to the left, the person looking to the right, the person looking up, the person looking down, and the person looking straight ahead.
- the separate categories may include the size of the facial image in pixels, the relative brightness of the facial image, the relative contrast of the facial image, the relative back lighting of the facial image, the relative blurriness of the facial image and/or any other suitable image parameter. These are just examples.
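The category organization described above can be sketched as a small in-memory index keyed by person and category. The class name, category keys, and image references below are illustrative assumptions; the disclosure describes the categories themselves but not a concrete schema.

```python
from collections import defaultdict

class FacialImageDatabase:
    """Sketch of a facial images database organized by person and category."""

    def __init__(self):
        # person_id -> category tuple -> list of image references
        self._images = defaultdict(lambda: defaultdict(list))

    def add_image(self, person_id, image_ref, pose="straight", glasses=False, hat=False):
        # Pose and accessory flags stand in for the categories the
        # disclosure lists (looking left/right/up/down, glasses, hat, etc.).
        category = (pose,
                    "glasses" if glasses else "no_glasses",
                    "hat" if hat else "no_hat")
        self._images[person_id][category].append(image_ref)

    def images_for(self, person_id, category=None):
        cats = self._images[person_id]
        if category is None:
            return [img for imgs in cats.values() for img in imgs]
        return list(cats.get(category, []))

db = FacialImageDatabase()
db.add_image("person_1", "img_001.jpg", pose="left")
db.add_image("person_1", "img_002.jpg", pose="straight", glasses=True)
print(len(db.images_for("person_1")))  # 2
```

A real system would likely store these records in a persistent database; the in-memory structure only illustrates the per-person, per-category grouping.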
- One or more video feeds providing images of spaces within or around the building space may be monitored, at least some of the images including images of persons within the building space, as seen at block 154.
- the one or more video feeds may be processed to detect one or more facial images of a person within or around the building space.
- One or more facial recognition engines may be selected, as generally indicated at block 158 . In some cases, selecting the one or more facial recognition engines is based at least in part on one or more image criteria of the one or more detected facial images.
- the one or more image criteria may include, for example, the size of the facial image in pixels, a relative brightness of the facial image, a relative contrast of the facial image, a relative back lighting of the facial image, a relative blurriness of the facial image and/or any other suitable image parameter(s).
- the one or more image criteria may include whether the captured image shows the individual looking directly at the camera, or up or down and/or to the left or to the right.
- the one or more image criteria may include whether and/or how much of the face is obstructed by a hat, hair, glasses or other object.
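The image criteria listed above (face size in pixels, relative brightness, relative contrast, relative blurriness) might be quantified from a grayscale face crop along these lines. The exact metrics, such as using neighboring-pixel differences as a sharpness proxy, are illustrative choices not specified in the disclosure.

```python
def image_criteria(face):
    """Compute simple criteria for a grayscale face crop (rows of 0-255 values)."""
    h, w = len(face), len(face[0])
    pixels = [p for row in face for p in row]
    mean = sum(pixels) / len(pixels)
    variance = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    # Mean absolute difference of horizontal neighbors: a crude blur proxy.
    diffs = [abs(row[x + 1] - row[x]) for row in face for x in range(w - 1)]
    return {
        "size_px": min(h, w),                  # face size in pixels
        "brightness": mean / 255.0,            # relative brightness in [0, 1]
        "contrast": (variance ** 0.5) / 128.0, # relative contrast
        "sharpness": sum(diffs) / len(diffs),  # higher means sharper edges
    }

# An 8x8 crop: mid-gray background with a brighter central patch.
face = [[200 if 2 <= x < 6 and 2 <= y < 6 else 128 for x in range(8)]
        for y in range(8)]
crit = image_criteria(face)
print(crit["size_px"])  # 8
```

Criteria like these could then feed the engine-selection step, with each engine accepting only images whose measured values fall in its suitable range.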
- an identified one of the plurality of enrolled persons included in the facial image database that is identified in the one or more detected facial images may be received from the selected one or more facial recognition engines.
- the identified one of the plurality of enrolled persons may be reported to a building automation system, as generally seen at block 162 , and one or more building control devices of the building automation system may be controlled based at least in part on the identified one of the plurality of enrolled persons, as indicated at block 164 .
- the building automation system includes an HVAC system
- the building control device may include a building control user interface device that allows the identified one of the plurality of enrolled persons to change one or more building control parameters only when the identified one of the plurality of enrolled persons has been granted permission to change one or more building control parameters.
- the building automation system includes an access control system, and a building access device is controlled to allow entry of the identified one of the plurality of enrolled persons only when the identified one of the plurality of enrolled persons has been granted permission to enter.
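The permission checks described for the building control user interface and the access control system might look like the following sketch. The permission table, person identifiers, and function name are hypothetical; the disclosure only states that an action is taken when the identified person has been granted the relevant permission.

```python
# Hypothetical permission model: each enrolled person maps to the set of
# building control actions they have been granted.
PERMISSIONS = {
    "alice": {"change_setpoint", "unlock_door"},
    "bob": {"unlock_door"},
}

def handle_identified_person(person_id, action):
    """Allow a building control action only if the person has permission."""
    granted = PERMISSIONS.get(person_id, set())
    if action in granted:
        return f"{action}:allowed"
    return f"{action}:denied"

print(handle_identified_person("alice", "change_setpoint"))  # change_setpoint:allowed
print(handle_identified_person("bob", "change_setpoint"))    # change_setpoint:denied
```

An unenrolled or unrecognized person falls through to the denied branch, which matches the "only when ... granted permission" behavior described above.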
- FIG. 7 is a flow diagram illustrating a method 170 of identifying an individual.
- the method 170 includes creating a facial images database by soliciting facial images of each of a plurality of enrolled persons under each of a plurality of facial conditions that include one or more of the person looking to the left, the person looking to the right, the person looking up, the person looking down, and the person looking straight ahead.
- a video feed that provides a series of images of activity in or around a building is monitored, as indicated at block 174 .
- One or more images may be extracted from the series of images of the video feed, as seen at block 176 .
- the extracted one or more images may be analyzed to find facial images, as indicated at block 178, and the facial images are quantified to find a query-able facial image, as noted at block 180.
- one or more facial recognition engines may be selected based at least in part upon one or more image properties of the query-able facial image and the query-able facial image may be sent to the selected one or more facial recognition engines as indicated at block 184 , where the selected one or more facial recognition engines are configured to compare the query-able facial image with facial models that are based upon facial images stored within the facial image database.
- facial recognition engine results that include an identity of a person shown within the query-able facial image as well as an associated confidence value may be provided.
- one or more building control devices may be controlled based at least in part on the identity of the person shown within the query-able facial image.
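The steps of method 170 can be sketched as a compact orchestration loop. All stage functions, the stub detector, and the confidence threshold are illustrative stand-ins rather than the disclosed implementation; only the sequence of steps comes from the method above.

```python
def detect_face(frame):
    """Stand-in for blocks 176-180: extract and quantify a query-able face."""
    return frame.get("face")

def run_pipeline(frames, select_engine, min_confidence=0.8):
    """Monitor frames, select an engine per face, and return an identity."""
    for frame in frames:
        face = detect_face(frame)
        if face is None:
            continue
        engine = select_engine(face)          # selection by image properties
        person_id, confidence = engine(face)  # block 184: query the engine
        if confidence >= min_confidence:
            return person_id, confidence      # identity then drives control
    return None, 0.0

frames = [{"face": None}, {"face": "blurry"}, {"face": "clear"}]
engine = lambda face: ("alice", 0.95) if face == "clear" else ("alice", 0.4)
print(run_pipeline(frames, lambda face: engine))
# ('alice', 0.95)
```

The low-confidence "blurry" result is skipped here, loosely mirroring the idea that only sufficiently confident identifications should drive building control devices.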
Abstract
Description
- The present disclosure relates generally to facial recognition systems, and more particularly, to facial recognition systems that employ a facial recognition engine to compare a facial image with a representation that is based upon facial images stored in a facial image database created and maintained by the facial recognition system.
- The disclosure relates to facial recognition systems that employ one or more facial recognition engines to compare a facial image with a representation that is based upon facial images stored in a facial image database created and maintained by the facial recognition system. In some instances, the facial recognition system may be configured to monitor still and/or video sources capturing facial images of individuals within a space, and then utilize one or more facial recognition engines to identify the individuals within the space. In some cases, the facial recognition system may report back to a building automation system with the identity of the individuals seen within the space so that the building automation system may take appropriate action. For example, if the building automation system includes an HVAC system, the HVAC system may change a temperature set point in response to being informed that a particular person has arrived home. If the building automation system includes a security system, the security system may unlock a door of a building in response to being informed that a particular authorized person is present at the door. These are just examples.
- A particular example of the disclosure is a facial recognition system that includes an input, an output and a memory for storing a facial image database that includes a plurality of entries each corresponding to a different person, and wherein each entry includes a person identifier along with one or more facial images of the person. The system includes a facial recognition module that is operably coupled to the memory, the input and the output. The facial recognition module is configured to receive a new facial image via the input and to ascertain one or more facial image parameters from the new facial image, and then select a subset of facial recognition engines from a larger set of available facial recognition engines based at least in part on one or more of the ascertained facial image parameters. The ascertained facial image parameters may include, for example, the size of the facial image in pixels, a relative brightness of the facial image, a relative contrast of the facial image, a relative back lighting of the facial image, a relative blurriness of the facial image and/or any other suitable image parameter(s). In some cases, the ascertained facial image parameters may include whether the captured image shows the individual looking directly at the camera, or up or down and/or to the left or to the right. In some cases, the ascertained facial image parameters may include whether and/or how much of the face is obstructed by a hat, hair, glasses or other object. Some facial recognition engines may perform better on facial images under certain facial image parameters than other facial recognition engines. In any event, each of the selected facial recognition engines may compare the new facial image to facial representations that are based upon the facial images in the facial image database to try to identify the person identifier that likely corresponds to the new facial image. 
The facial recognition module may then send a person ID to a control module via the output, wherein the control module may control one or more building control devices based at least in part on the person ID (e.g. change a setpoint, unlock a door, etc.).
- Another example of the disclosure includes a method of recognizing individuals within a building space. Access is gained to a facial image database that includes a plurality of enrolled persons, where the facial image database includes a facial image for each of the plurality of enrolled persons under each of a plurality of different facial conditions. One or more video feeds are monitored that provide images of spaces within the building space, at least some of the images including images of persons within the building space. The one or more video feeds are processed to detect one or more facial images of a person within the building space and one or more facial recognition engines may be selected to compare the detected facial image with facial models that are based on the facial images in the facial image database. Selecting the one or more facial recognition engines is based at least in part on one or more image criteria of the detected facial image. An identified one of the plurality of enrolled persons included in the facial image database that is identified in the detected facial image may be received from the selected one or more facial recognition engines and the identified one of the plurality of enrolled persons may be reported to a building automation system. One or more building control devices of the building automation system may be controlled based at least in part on the identified one of the plurality of enrolled persons.
- Another example of the disclosure includes a method of identifying an individual. The method includes monitoring a video feed that provides a series of images of activity in or around a building and extracting one or more images from the series of images of the video feed. The extracted one or more images are analyzed to find facial images, and the facial images are quantified to find a query-able facial image. One or more facial recognition engines are selected based at least in part upon one or more image properties of the query-able facial image. The query-able facial image is sent to the selected one or more facial recognition engines. The selected one or more facial recognition engines are configured to compare the query-able facial image with facial models that are based upon facial images within the facial image database. Facial recognition engine results are provided that include an identity of a person shown within the query-able facial image (if the person is present in the facial image database) and in some cases an associated confidence value. One or more building control devices may then be controlled based at least in part on the identity of the person shown within the query-able facial image.
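The disclosure notes that when multiple selected engines return the same person identifier the output is assigned high confidence, while disagreement lowers the confidence (and may trigger selecting additional engines). A minimal fusion rule along those lines is sketched below; the specific adjustment values are illustrative assumptions.

```python
def fuse_results(results, agreement_boost=0.1, disagreement_penalty=0.2):
    """Combine (person_id, confidence) pairs from the selected engines.

    Agreement among engines raises the reported confidence; disagreement
    keeps the best-scoring identity but lowers its confidence.
    """
    ids = {pid for pid, _ in results}
    best_id, best_conf = max(results, key=lambda r: r[1])
    if len(ids) == 1:
        return best_id, min(1.0, best_conf + agreement_boost)
    return best_id, max(0.0, best_conf - disagreement_penalty)

pid, conf = fuse_results([("alice", 0.9), ("alice", 0.85)])
print(pid, round(conf, 2))  # alice 1.0
pid, conf = fuse_results([("alice", 0.9), ("bob", 0.7)])
print(pid, round(conf, 2))  # alice 0.7
```

If the fused confidence stays below a threshold, a system following the disclosure could select a different subset of engines and retry, or report an unknown person.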
- The above summary of some illustrative embodiments is not intended to describe each disclosed embodiment or every implementation of the present disclosure. The Figures, and Description, which follow, more particularly exemplify some of these embodiments.
- The disclosure may be more completely understood in consideration of the following description in connection with the accompanying drawings, in which:
- FIG. 1 is a schematic block diagram of an illustrative facial recognition system;
- FIG. 2 is a schematic block diagram of an illustrative enrollment module forming a portion of the facial recognition system of FIG. 1;
- FIG. 3 is a schematic block diagram of an illustrative capture module forming a portion of the facial recognition system of FIG. 1;
- FIG. 4 is a schematic block diagram of an illustrative facial recognition system;
- FIG. 5 is a schematic block diagram of an illustrative facial image database usable in the illustrative facial recognition systems of FIG. 1 and FIG. 4;
- FIG. 6 is a flow diagram showing an illustrative method of recognizing individuals within a space; and
- FIG. 7 is a flow diagram showing an illustrative method of identifying an individual.
- While the disclosure is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the disclosure to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure.
- For the following defined terms, these definitions shall be applied, unless a different definition is given in the claims or elsewhere in this specification.
- All numeric values are herein assumed to be modified by the term "about," whether or not explicitly indicated. The term "about" generally refers to a range of numbers that one of skill in the art would consider equivalent to the recited value (i.e., having the same function or result). In many instances, the term "about" may include numbers that are rounded to the nearest significant figure.
- The recitation of numerical ranges by endpoints includes all numbers within that range (e.g. 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.80, 4, and 5).
- As used in this specification and the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless the content clearly dictates otherwise. As used in this specification and the appended claims, the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise.
- It is noted that references in the specification to "an embodiment", "some embodiments", "other embodiments", etc., indicate that the embodiment described may include one or more particular features, structures, and/or characteristics. However, such recitations do not necessarily mean that all embodiments include the particular features, structures, and/or characteristics. Additionally, when particular features, structures, and/or characteristics are described in connection with one embodiment, it should be understood that such features, structures, and/or characteristics may also be used in connection with other embodiments whether or not explicitly described unless clearly stated to the contrary.
- The following description should be read with reference to the drawings in which similar structures in different drawings are numbered the same. The drawings, which are not necessarily to scale, depict illustrative embodiments and are not intended to limit the scope of the disclosure.
- FIG. 1 is a schematic block diagram of an illustrative facial recognition system 10 that may, for example, be configured to create and update a database of facial images and to use that database of facial images to identify an individual. In some cases, the facial recognition system 10 includes an enrollment module 12 and a capture module 14. While illustrated as separate components, the enrollment module 12 and the capture module 14 may individually or in combination be manifested in a controller that may be part of a building automation system (e.g. an HVAC panel such as a thermostat, a security panel, etc.). In some cases, the enrollment module 12 and/or the capture module 14, or at least some functionality of one or both modules, may be manifested in a server or a cloud-based application. In some cases, the enrollment module 12 and/or the capture module 14, or at least some functionality of one or both modules, may be manifested in a mobile device such as a tablet computer, laptop computer or smartphone. In some cases, the enrollment module 12 and/or the capture module 14, or at least some functionality of one or both modules, may be manifested in a desktop computer. - The
enrollment module 12 may, for example, be responsible for creating and maintaining a facial images database 16. As will be discussed, the enrollment module 12 may obtain facial images from a variety of different sources and may in some cases analyze the facial images for quality before storing the facial images in the facial images database 16. In some cases, the enrollment module 12 may be responsible for periodically updating the facial images stored in the facial images database 16 to account for changing styles (hair styles, facial hair, glasses, etc.), aging and the like of the individuals in the facial images database 16. - The
capture module 14 may be responsible for obtaining facial images of individuals to be identified. In some cases, the capture module 14 may receive live video of a space, and may analyze the live video to find facial images of persons to be identified. Once the capture module 14 finds one or more facial images of person(s) that are to be identified, the capture module 14 may provide the one or more facial images to a facial recognition module 18. One of the tasks of the facial recognition module 18 is to determine which of a variety of different facial recognition engines 20 are to be used to query the facial images database 16 in order to identify the persons in the one or more facial images. In some cases, which facial recognition engine 20 to use may be determined at least in part by one or more image parameters of the one or more facial images. In some cases, the facial recognition module 18 includes a facial recognition engine evaluation module 22 that determines which of the facial recognition engines 20 are to be used. In some cases, a single facial recognition engine 20 may be used. In other situations, two, three or more distinct facial recognition engines 20 may be used. In FIG. 1, the facial recognition engines 20 include an ENGINE #1 labeled as 24, an ENGINE #2 labeled as 26 through an ENGINE #N labeled as 28. - The facial recognition
engine evaluation module 22 may analyze a facial image to determine one or more image parameters of the facial image, and may utilize the one or more image parameters to determine which of the facial recognition engines 20 should be used in order to identify the person shown in the facial image. In some instances, the facial recognition engine evaluation module 22 (or the facial recognition module 18 itself) may evaluate multiple images of a particular individual (e.g. multiple images of a video sequence) to determine which of the multiple images are most likely to provide good results. In some cases, the image parameters may include, for example, the size of the facial image in pixels, a relative brightness of the facial image, a relative contrast of the facial image, a relative back lighting of the facial image, a relative blurriness of the facial image, and/or any other suitable image parameter(s). In some cases, the image parameters may include whether the captured image shows the individual looking directly at the camera, or up or down and/or to the left or to the right. In some cases, the image parameters may include whether and/or how much of the face is obstructed by a hat, hair, glasses or other object. The engine evaluation module may store a table that maps certain image parameters to certain facial recognition engines 20. - In some cases, a specific
facial recognition engine 20 may be selected if the image parameters fall within a range suitable for that facial recognition engine 20. The suitable range thresholds may be selected manually based on engine specifications, or may be based upon prior testing performed on a sample of facial images. In some cases, the suitable range thresholds may also be selected automatically, and may be adjusted over time, based on comparison of different facial recognition engine results on facial images processed by the system. Examples are shown in Table One, below: -
- TABLE ONE

  Image Parameter        Engine A Range    Engine B Range      Engine C Range
  Face size (pixels)     40 to 90          120 to unlimited    60 to unlimited
  Face Contrast          0.9 to 1.0        0.5 to 1.0          0.8 to 1.0
  Face Orientation tilt  -20° to +20°      -30° to +30°        -40° to +40°
  Facial Occlusion of    no occlusion      eye occlusion       mouth and forehead
  facial landmarks       allowed           allowed (glasses)   occlusion

- It will be appreciated that these are just examples, and a variety of other image parameters may be considered. For each facial recognition engine, the input parameters may be described as a range, as specific values, or a specific set of conditions. In some cases, a weighted sum combining various parameters may also be considered. For example, if the face size for a particular facial image is between 40 and 90 pixels, facial sharpness needs to be greater than or equal to 0.8; and if the face size is greater than 90 pixels, facial sharpness needs to be greater than or equal to 0.7.
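Table One can be encoded as data so that engine selection reduces to simple range checks. The dictionary below mirrors the face size, contrast, and orientation rows of the table (the occlusion row, being categorical rather than numeric, is omitted); the selection function itself is an illustrative sketch, not the disclosed implementation.

```python
# Numeric suitability ranges transcribed from Table One.
ENGINE_RANGES = {
    "Engine A": {"face_size": (40, 90),
                 "contrast": (0.9, 1.0), "tilt": (-20, 20)},
    "Engine B": {"face_size": (120, float("inf")),
                 "contrast": (0.5, 1.0), "tilt": (-30, 30)},
    "Engine C": {"face_size": (60, float("inf")),
                 "contrast": (0.8, 1.0), "tilt": (-40, 40)},
}

def select_engines(params):
    """Return the engines whose ranges all admit the measured parameters."""
    selected = []
    for engine, ranges in ENGINE_RANGES.items():
        if all(lo <= params[key] <= hi for key, (lo, hi) in ranges.items()):
            selected.append(engine)
    return selected

# A 70-pixel face with high contrast and modest tilt suits Engines A and C.
print(select_engines({"face_size": 70, "contrast": 0.95, "tilt": 10}))
# ['Engine A', 'Engine C']
```

Conditional rules like the weighted-sum or size-dependent sharpness example in the text could be added as extra predicates per engine without changing the overall shape of this lookup.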
- A variety of commercially available facial recognition engines may be utilized. Examples of suitable cloud-based APIs (Application Programming Interface) for facial recognition include but are not limited to Microsoft Face API and Amazon Rekognition. Examples of suitable facial recognition engines that may be employed as on-site software or be integrated into other products include but are not limited to NEC Face Recognition, Morpho Argus and Cognitec.
- Once the facial recognition
engine evaluation module 22 determines which of the facial recognition engines 20 to select (often a subset of a larger set of available facial recognition engines 20), the facial recognition module 18 may provide the selected facial recognition engine(s) 20 with the facial image. In some cases, one or more of the facial recognition engines 20 may be hosted on a remote server, but this is not required. Upon querying, the selected facial recognition engine(s) 20 may return an identity of the person shown in the facial image(s) that was sent to the selected facial recognition engine(s) 20. The selected facial recognition engine(s) 20 may also return a confidence value that provides an indication of how likely it is that the identity of the person is correct. For example, if the best available facial image sent to the selected facial recognition engine(s) is poorly lit and blurry, the confidence value may be relatively low. Conversely, if the best available facial image sent to the selected facial recognition engine(s) 20 is well-lit and clear, and is an image of the individual looking directly or nearly directly at the camera, the confidence value may be relatively high. -
FIG. 2 is a schematic block diagram of an illustrative enrollment module 12. As noted, a function of the enrollment module 12 is to obtain facial images that may be placed in the facial images database 16 (FIG. 1) and subsequently used to build or update a facial model based upon facial images in the facial images database 16 in order to identify persons in captured facial images. In some instances, the enrollment module 12 includes an image input module 30. The image input module 30 may obtain facial images from a variety of different sources. Examples of suitable image sources include a selfies module 32, a photos module 34, a captured images module 36 and a social media module 38. In some cases, the selfies module 32 may instruct individuals who are expected to be in the building space and are enrolling in the facial recognition system to take a series of selfies. For example, the individuals may be instructed to take and upload selfies showing themselves, or at least their faces, looking directly at the camera, looking left, looking right, looking up and looking down. In some cases, the selfies module 32 may instruct the individual to take multiple selfies with their hair up and their hair down, with and without facial jewelry like earrings, nose piercings, lip piercings, and/or various glasses, for example. In some cases, the selfies module 32 may be implemented, at least in part, on a mobile device such as a smartphone or tablet computer, but this is not required. - In some cases, the
photos module 34 may be configured to go through online and/or otherwise electronic photo libraries looking for suitable facial images. These can include photo libraries stored on a personal computer, on the cloud, on a mobile device, and/or any other device. The photos module 34 may assemble multiple facial images for a particular individual, may display the multiple facial images, and ask the individual to confirm that each of the images is in fact of that individual. In some instances, the captured images module 36 may include or otherwise be operably coupled with a still camera, a video camera and the like, and may capture facial images as individuals move about the space. The captured images module 36 may compile these images, and in some cases may ask the individuals to confirm their identity. - The
social media module 38 may scan social media accounts, such as but not limited to Facebook, Snapchat and the like, looking for suitable facial images of an individual. In some cases, the social media module 38 may display the found images and ask for identity confirmation. - In some cases, the
enrollment module 12 may include an image quality assessment module 40 that receives facial images from the image input module 30 and analyzes the received facial images to confirm that the images are of sufficient quality to be of use. The facial images that are believed to be of sufficient quality, and represent a suitable variety of poses and images (looking left, looking right, etc.) may be passed on to an image organization module 42. In some cases, a particular facial image pose may be determined to be of less-than-sufficient quality, and the image quality assessment module 40 may ask the image input module 30 to obtain a higher quality facial image of that particular facial image pose if possible. - Facial images that are deemed to be of sufficient quality, and of appropriate facial poses, are forwarded to the
image organization module 42. In some cases, the image organization module 42 may at least partially contribute to the organization of facial images within the facial images database 16 (FIG. 1). Facial images may be organized in any suitable manner. In some cases, for example, the facial images for a particular individual may include images of the individual looking directly at the camera, images of the individual looking above the camera, images of the individual looking below the camera, images of the individual looking to the left of the camera, and images of the individual looking to the right of the camera. In some cases, facial images for a particular individual may be organized by whether they are wearing their hair up or down, have facial hair or are clean-shaven, and whether or not they are wearing jewelry, glasses and the like. In some cases, facial images for a particular individual may also be organized by the size of the facial image in pixels, the relative brightness of the facial image, the relative contrast of the facial image, the relative back lighting of the facial image, the relative blurriness of the facial image and/or by any other suitable image parameter. These are just examples. -
FIG. 3 is a schematic block diagram of the capture module 14. While the capture module 14 is configured to capture facial images of individuals within a space so that they can be identified, in some cases, the capture module 14 may also assist the enrollment module 12 in initially capturing facial images for populating the facial images database 16 (FIG. 1). In some cases, the capture module 14 includes a video capture module 50. In some instances, the video capture module 50 may be operably coupled to one or more still cameras and/or video cameras that are distributed within a space. In some cases, still images may be captured by the still cameras and/or from image frames captured by the video cameras. In some cases, still images may be captured 30 times (frames) per second using a video camera, although this is just an example. - The still images may be forwarded to a
face detection module 52, which analyzes the still images looking for facial images. Once a possible facial image is detected, in some cases, subsequent still images are analyzed by a face tracking module 54 looking for confirmation that the individual is still there and/or looking for better quality facial images of that individual for subsequent identification. A face image evaluation module 56 may review the facial images to ascertain whether and which of the captured image(s) are of sufficient quality and/or pose to be of use in identifying the individual shown in the captured image(s). -
FIG. 4 is a schematic block diagram of an illustrative facial recognition system 60. The facial recognition system 60 includes an input 62 and an output 64. A memory 67 may be configured to store the facial images database 16. In some cases, the facial images database 16 includes a plurality of entries each corresponding to a different person, and each entry includes a person identifier along with one or more facial images of the person. In some instances, the facial images database 16 may include multiple facial images for each person identifier, with some of the multiple facial images representing the person at one or more of different facial angles, different facial lighting, different facial size in terms of pixels, and different facial obstructions. Examples of different facial obstructions include but are not limited to differing hair style, wearing glasses, not wearing glasses, wearing a hat, not wearing a hat, and differing ages. - The
facial recognition module 18 is operably coupled to the input 62, the output 64 and to the memory 67 and is configured to receive a new facial image via the input 62 and to ascertain one or more facial image parameters from the new facial image. The facial recognition module 18 is configured to select a subset of facial recognition engines 20 (FIG. 1) from a larger set of available facial recognition engines 20 (FIG. 1) based at least in part on one or more of the ascertained facial image parameters. The ascertained facial image parameters may include, for example, the size of the facial image in pixels, a relative brightness of the facial image, a relative contrast of the facial image, a relative back lighting of the facial image, a relative blurriness of the facial image and/or any other suitable image parameter(s). In some cases, the ascertained facial image parameters may include whether the captured image shows the individual looking directly at the camera, or up or down and/or to the left or to the right. In some cases, the ascertained facial image parameters may include whether and/or how much of the face is obstructed by a hat, hair, glasses or other object. - At least some of the
facial recognition engines 20 may include cloud-based facial recognition engines, but this is not required. In some cases, the selected subset of facial recognition engines 20 may include two or more distinct facial recognition engines 20. In some cases, the selected subset of facial recognition engines 20 may include only a single facial recognition engine 20. - Each of the
facial recognition engines 20 is configured to compare the new facial image to facial models or other facial representations that are based upon the facial images in the facial image database 16 and to identify a person identifier that likely corresponds to the new facial image. The facial recognition module 18 may be configured to evaluate the person identifiers and confidence levels returned by the selected facial recognition engines 20. If the returned person identifiers are the same, the facial recognition module 18 will assign a high confidence to the output person ID. However, if the returned person identifiers differ, the facial recognition module 18 may output the person ID with the highest combined confidence, or may select additional facial recognition engines to evaluate the new facial image. In case of disagreement, the output person ID will be assigned a lower confidence. The facial recognition module 18 is configured to send a person ID to a control module 66 via the output 64, wherein the control module 66 is configured to control one or more building control devices based at least in part on the person ID. In some cases, the facial recognition module 18 may be further configured to process the person identifiers identified by each of the subset of facial recognition engines 20 to determine the person ID that is sent to the control module 66. - In some cases, at least one of the
facial recognition engines 20 may provide a confidence level of the person identifier that likely corresponds to the new facial image. In some cases, particularly if the new facial image was sent to multiplefacial recognition engines 20 for identification, thefacial recognition module 18 may be configured to determine a confidence level in the person ID that is based at least in part on the confidence level of the person identifier provided by each of one or more of the subset offacial recognition engines 20. In some cases, if the confidence level in the person ID is below a threshold confidence level, thefacial recognition module 18 may select a differentfacial recognition engine 20, or a different subset offacial recognition engines 20, and may try again. This may be repeated until an acceptable confidence level is achieved. If an acceptable confidence level cannot be achieved, thefacial recognition module 18 may report to theoutput module 64 that an unknown person was seen in the new facial image. -
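By way of illustration only, the evaluation behavior described above — assigning a high confidence when the selected engines agree on a person identifier, falling back to the highest combined confidence when they disagree, and retrying with a different engine subset until a threshold is met or an unknown person is reported — may be sketched as follows. The function names, the engine interface (each engine returns a `(person_id, confidence)` pair), and the weighting scheme are illustrative assumptions, not part of the disclosure.

```python
def fuse_results(results):
    """Combine (person_id, confidence) pairs returned by several engines.

    If all engines agree, the output confidence is boosted; otherwise the
    person ID with the highest combined confidence is returned at a
    reduced confidence (illustrative weighting, not from the disclosure).
    """
    ids = {pid for pid, _ in results}
    if len(ids) == 1:
        pid = ids.pop()
        conf = max(c for _, c in results)
        return pid, min(1.0, conf + 0.1)  # agreement: high confidence
    # disagreement: sum confidences per candidate and scale the winner
    totals = {}
    for pid, conf in results:
        totals[pid] = totals.get(pid, 0.0) + conf
    best = max(totals, key=totals.get)
    share = totals[best] / sum(totals.values())
    return best, share * max(c for pid, c in results if pid == best)


def identify(image, engine_subsets, threshold=0.8):
    """Try successive engine subsets until the fused confidence is acceptable,
    otherwise report an unknown person."""
    for subset in engine_subsets:
        results = [engine(image) for engine in subset]
        pid, conf = fuse_results(results)
        if conf >= threshold:
            return pid, conf
    return "UNKNOWN", 0.0
```

For example, two engines both returning "alice" yield a high-confidence "alice"; one returning "alice" and one "bob" yields "alice" at a reduced confidence, which may trigger a retry with the next subset.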
FIG. 5 is a schematic illustration of an illustrative facial images database 116. In some cases, the facial images database 116 may be considered as being an illustrative but non-limiting example of the facial images database 16. It will be appreciated that the information within the facial images database 16 may be organized in any suitable fashion. The facial images database 116 includes a plurality of facial images that are organized by individual. To illustrate, the facial images database 116 may include an INDIVIDUAL #1 labeled as 118, an INDIVIDUAL #2 labeled as 120 through an INDIVIDUAL #P labeled as 122. A number of facial images are organized underneath each individual 118, 120, 122. As illustrated, the individual 118 includes an IMAGE #1 labeled as 124, an IMAGE #2 labeled as 126 through an IMAGE #M labeled as 128. Similarly, the individual 120 includes an IMAGE #1 labeled as 134, an IMAGE #2 labeled as 136 through an IMAGE #M labeled as 138, and the individual 122 includes an IMAGE #1 labeled as 144, an IMAGE #2 labeled as 146 through an IMAGE #M labeled as 148.
- In some cases, the images for each individual 118, 120, 122 may be organized in a similar fashion. For example, the image 124 may represent a straight-on view of the individual 118, the image 134 may represent a straight-on view of the individual 120 and the image 144 may represent a straight-on view of the individual 122. The remaining images may similarly represent views of the individuals 118, 120, 122 under other facial conditions.
FIG. 6 is a flow diagram showing an illustrative method 150 of recognizing individuals within or around a building space. As generally shown at block 152, access is gained to a facial image database (such as the facial images databases 16, 116) that includes a plurality of enrolled persons, where the facial image database includes a facial image for each of the plurality of enrolled persons under each of a plurality of different facial conditions. In some cases, the plurality of different facial conditions include two or more of the person looking up, the person looking down, the person looking to the left, the person looking to the right and the person looking straight ahead.
- In some cases, the facial image database organizes the facial images for each of the plurality of enrolled persons at each of a plurality of different facial conditions into predetermined separate categories. Examples of the separate categories include one or more of the person with glasses, the person without glasses, the person with their hair worn up, the person with their hair worn down, the person clean shaven, the person not clean shaven, the person wearing jewelry, the person not wearing jewelry, the person wearing a hat, the person not wearing a hat and the person wearing a scarf. In some instances, the separate categories include one or more of the person looking to the left, the person looking to the right, the person looking up, the person looking down, and the person looking straight ahead. In some cases, the separate categories may include the size of the facial image in pixels, the relative brightness of the facial image, the relative contrast of the facial image, the relative back lighting of the facial image, the relative blurriness of the facial image and/or any other suitable image parameter. These are just examples.
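A database organized per individual and per predetermined category, as described above and shown schematically in FIG. 5, may be sketched as a nested mapping. The class name, category labels and helper methods below are hypothetical conveniences for illustration only.

```python
from collections import defaultdict


class FacialImageDatabase:
    """Illustrative sketch: facial images filed per enrolled person and
    per predetermined category (facial condition, accessories, etc.)."""

    def __init__(self):
        # person_id -> category -> list of image references
        self._images = defaultdict(lambda: defaultdict(list))

    def enroll(self, person_id, category, image_ref):
        self._images[person_id][category].append(image_ref)

    def images_for(self, person_id, category=None):
        # all images for a person, or only those in one category
        person = self._images.get(person_id, {})
        if category is None:
            return [img for imgs in person.values() for img in imgs]
        return list(person.get(category, []))

    def enrolled_persons(self):
        return sorted(self._images)
```

A facial recognition engine could then be trained or queried against only the category matching the current image criteria (e.g. "looking_left" or "wearing_hat"), rather than against every enrolled image.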
- One or more video feeds providing images of spaces within or around the building space may be monitored, where at least some of the images include images of persons within the building space, as seen at block 154. As seen at block 156, the one or more video feeds may be processed to detect one or more facial images of a person within or around the building space. One or more facial recognition engines may be selected, as generally indicated at block 158. In some cases, selecting the one or more facial recognition engines is based at least in part on one or more image criteria of the one or more detected facial images. The one or more image criteria may include, for example, the size of the facial image in pixels, a relative brightness of the facial image, a relative contrast of the facial image, a relative back lighting of the facial image, a relative blurriness of the facial image and/or any other suitable image parameter(s). In some cases, the one or more image criteria may include whether the captured image shows the individual looking directly at the camera, or up or down and/or to the left or to the right. In some cases, the one or more image criteria may include whether and/or how much of the face is obstructed by a hat, hair, glasses or other object. Some facial recognition engines may perform better on facial images under certain image criteria than other facial recognition engines.
- As seen at block 160, an identification of one of the plurality of enrolled persons included in the facial image database, as identified in the one or more detected facial images, may be received from the selected one or more facial recognition engines. The identified one of the plurality of enrolled persons may be reported to a building automation system, as generally seen at block 162, and one or more building control devices of the building automation system may be controlled based at least in part on the identified one of the plurality of enrolled persons, as indicated at block 164.
- In some cases, the building automation system includes an HVAC system, and the building control device may include a building control user interface device that allows the identified one of the plurality of enrolled persons to change one or more building control parameters only when the identified one of the plurality of enrolled persons has been granted permission to change one or more building control parameters. In some instances, the building automation system includes an access control system, and a building access device is controlled to allow entry of the identified one of the plurality of enrolled persons only when the identified one of the plurality of enrolled persons has been granted permission to enter.
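The permission-gated control described above (blocks 162-164) may be sketched as follows. The class, the permission labels ("change_setpoint", "enter") and the callback-based action model are hypothetical assumptions used only to illustrate that a reported identity authorizes an action solely when the matching permission has been granted.

```python
class BuildingAutomationSystem:
    """Illustrative sketch: actions succeed only for identified persons
    holding the required permission (permission model is assumed)."""

    def __init__(self, permissions):
        # person_id -> set of granted permissions
        self._permissions = permissions
        self.log = []  # identities reported by the recognition system

    def report(self, person_id):
        self.log.append(person_id)

    def request_action(self, person_id, action, required_permission):
        if required_permission in self._permissions.get(person_id, set()):
            action()  # e.g. change a setpoint or unlock a door
            return True
        return False


bas = BuildingAutomationSystem({"alice": {"change_setpoint", "enter"},
                                "bob": {"enter"}})
setpoint = {"value": 72}

def raise_setpoint():
    setpoint["value"] += 2

bas.report("alice")
allowed = bas.request_action("alice", raise_setpoint, "change_setpoint")
denied = bas.request_action("bob", raise_setpoint, "change_setpoint")
```

Here "alice" may change the HVAC setpoint while "bob", who holds only an entry permission, is refused, mirroring the user-interface and access-control examples above.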
-
FIG. 7 is a flow diagram illustrating a method 170 of identifying an individual. In some cases, the method 170 includes creating a facial images database by soliciting facial images of each of a plurality of enrolled persons under each of a plurality of facial conditions that include one or more of the person looking to the left, the person looking to the right, the person looking up, the person looking down, and the person looking straight ahead. A video feed that provides a series of images of activity in or around a building is monitored, as indicated at block 174. One or more images may be extracted from the series of images of the video feed, as seen at block 176. The extracted one or more images may be analyzed to find facial images, as indicated at block 178, and may be quantified to find a query-able facial image, as noted at block 180.
- As indicated at block 182, one or more facial recognition engines may be selected based at least in part upon one or more image properties of the query-able facial image, and the query-able facial image may be sent to the selected one or more facial recognition engines, as indicated at block 184, where the selected one or more facial recognition engines are configured to compare the query-able facial image with facial models that are based upon facial images stored within the facial image database. As noted at block 186, facial recognition engine results that include an identity of a person shown within the query-able facial image as well as an associated confidence value may be provided. In some cases, and as indicated at block 188, one or more building control devices may be controlled based at least in part on the identity of the person shown within the query-able facial image.
- It should be understood that this disclosure is, in many respects, only illustrative. Changes may be made in details, particularly in matters of shape, size, and arrangement of steps without exceeding the scope of the disclosure. This may include, to the extent that it is appropriate, the use of any of the features of one example embodiment being used in other embodiments.
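The flow of FIG. 7 (blocks 174-188) may be sketched end to end under strong simplifying assumptions: images are modeled as grayscale pixel grids, "quantifying" a facial image computes mean brightness and a contrast range, and engine selection keys off those properties. Every function, threshold and engine name below is illustrative and not part of the disclosure.

```python
def quantify(image):
    """Quantify a candidate facial image into simple image properties
    (block 180): mean brightness and a crude contrast range."""
    pixels = [p for row in image for p in row]
    brightness = sum(pixels) / len(pixels)
    contrast = max(pixels) - min(pixels)
    return {"brightness": brightness, "contrast": contrast}


def select_engine(props, engines):
    """Select an engine based on image properties (block 182), e.g. a
    hypothetical low-light-tuned engine for dim images."""
    if props["brightness"] < 80:
        return engines["low_light"]
    return engines["default"]


def identify_person(image, engines):
    """Blocks 180-186: quantify, select an engine, and return the
    engine's (identity, confidence) result."""
    props = quantify(image)
    engine = select_engine(props, engines)
    person_id, confidence = engine(image)
    return person_id, confidence


# Stand-in engines: each maps an image to (identity, confidence).
engines = {
    "low_light": lambda img: ("alice", 0.75),
    "default": lambda img: ("alice", 0.92),
}
dim = [[40, 50], [60, 70]]         # mean brightness 55 -> low-light engine
bright = [[150, 160], [170, 180]]  # mean brightness 165 -> default engine
```

The dim image is routed to the low-light engine and the bright image to the default engine, after which the returned identity could drive the building control step of block 188.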
Claims (20)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/964,220 US20190332848A1 (en) | 2018-04-27 | 2018-04-27 | Facial enrollment and recognition system |
US17/371,343 US11688202B2 (en) | 2018-04-27 | 2021-07-09 | Facial enrollment and recognition system |
US18/197,689 US20230282027A1 (en) | 2018-04-27 | 2023-05-15 | Facial enrollment and recognition system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/964,220 US20190332848A1 (en) | 2018-04-27 | 2018-04-27 | Facial enrollment and recognition system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/371,343 Continuation US11688202B2 (en) | 2018-04-27 | 2021-07-09 | Facial enrollment and recognition system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190332848A1 true US20190332848A1 (en) | 2019-10-31 |
Family
ID=68292669
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/964,220 Abandoned US20190332848A1 (en) | 2018-04-27 | 2018-04-27 | Facial enrollment and recognition system |
US17/371,343 Active 2038-07-31 US11688202B2 (en) | 2018-04-27 | 2021-07-09 | Facial enrollment and recognition system |
US18/197,689 Pending US20230282027A1 (en) | 2018-04-27 | 2023-05-15 | Facial enrollment and recognition system |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/371,343 Active 2038-07-31 US11688202B2 (en) | 2018-04-27 | 2021-07-09 | Facial enrollment and recognition system |
US18/197,689 Pending US20230282027A1 (en) | 2018-04-27 | 2023-05-15 | Facial enrollment and recognition system |
Country Status (1)
Country | Link |
---|---|
US (3) | US20190332848A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220269388A1 (en) | 2021-02-19 | 2022-08-25 | Johnson Controls Tyco IP Holdings LLP | Security / automation system control panel graphical user interface |
Family Cites Families (93)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6052666A (en) | 1995-11-06 | 2000-04-18 | Thomson Multimedia S.A. | Vocal identification of devices in a home environment |
ES2173596T3 (en) | 1997-06-06 | 2002-10-16 | Bsh Bosch Siemens Hausgeraete | DOMESTIC APPARATUS, IN PARTICULAR ELECTRICAL DOMESTIC APPARATUS. |
EP0911808B1 (en) | 1997-10-23 | 2002-05-08 | Sony International (Europe) GmbH | Speech interface in a home network environment |
US6236749B1 (en) | 1998-03-23 | 2001-05-22 | Matsushita Electronics Corporation | Image recognition method |
US6330308B1 (en) | 1998-04-09 | 2001-12-11 | Bell Atlantic Network Services, Inc. | Voice mail system for obtaining forwarding number information from directory assistance systems having speech recognition |
US7634662B2 (en) | 2002-11-21 | 2009-12-15 | Monroe David A | Method for incorporating facial recognition technology in a multimedia surveillance system |
JP2002531901A (en) | 1998-12-02 | 2002-09-24 | ザ・ビクトリア・ユニバーシテイ・オブ・マンチエスター | Determination of face subspace |
US6408272B1 (en) | 1999-04-12 | 2002-06-18 | General Magic, Inc. | Distributed voice user interface |
US6944319B1 (en) | 1999-09-13 | 2005-09-13 | Microsoft Corporation | Pose-invariant face recognition system and process |
JP5118280B2 (en) | 1999-10-19 | 2013-01-16 | ソニー エレクトロニクス インク | Natural language interface control system |
US7392185B2 (en) | 1999-11-12 | 2008-06-24 | Phoenix Solutions, Inc. | Speech based learning/training system using semantic decoding |
EP1234303B1 (en) | 1999-12-02 | 2005-11-02 | Thomson Licensing | Method and device for speech recognition with disjoint language models |
US7127087B2 (en) | 2000-03-27 | 2006-10-24 | Microsoft Corporation | Pose-invariant face recognition system and process |
US7860706B2 (en) | 2001-03-16 | 2010-12-28 | Eli Abir | Knowledge system method and appparatus |
US6920236B2 (en) | 2001-03-26 | 2005-07-19 | Mikos, Ltd. | Dual band biometric identification system |
US7113074B2 (en) | 2001-03-30 | 2006-09-26 | Koninklijke Philips Electronics N.V. | Method and system for automatically controlling a personalized networked environment |
US7027620B2 (en) | 2001-06-07 | 2006-04-11 | Sony Corporation | Method of recognizing partially occluded and/or imprecisely localized faces |
KR100434545B1 (en) | 2002-03-15 | 2004-06-05 | 삼성전자주식회사 | Method and apparatus for controlling devices connected with home network |
US7398209B2 (en) | 2002-06-03 | 2008-07-08 | Voicebox Technologies, Inc. | Systems and methods for responding to natural language speech utterance |
US20040193603A1 (en) | 2003-03-28 | 2004-09-30 | Ljubicich Philip A. | Technique for effectively searching for information in response to requests in information assistance service |
US8553949B2 (en) | 2004-01-22 | 2013-10-08 | DigitalOptics Corporation Europe Limited | Classification and organization of consumer digital images using workflow, and face detection and recognition |
US7783082B2 (en) | 2003-06-30 | 2010-08-24 | Honda Motor Co., Ltd. | System and method for face recognition |
US7844597B2 (en) * | 2003-09-15 | 2010-11-30 | Nokia Corporation | Modifying a database comprising image fields |
US20050149496A1 (en) | 2003-12-22 | 2005-07-07 | Verity, Inc. | System and method for dynamic context-sensitive federated search of multiple information repositories |
US7788278B2 (en) | 2004-04-21 | 2010-08-31 | Kong Eng Cheng | Querying target databases using reference database records |
CA2615659A1 (en) | 2005-07-22 | 2007-05-10 | Yogesh Chunilal Rathod | Universal knowledge management and desktop search system |
US7806604B2 (en) | 2005-10-20 | 2010-10-05 | Honeywell International Inc. | Face detection and tracking in a wide field of view |
KR100738080B1 (en) | 2005-11-08 | 2007-07-12 | 삼성전자주식회사 | Method of and apparatus for face recognition using gender information |
US7912592B2 (en) | 2006-06-09 | 2011-03-22 | Garmin International, Inc. | Automatic speech recognition system and method for aircraft |
ATE403928T1 (en) | 2006-12-14 | 2008-08-15 | Harman Becker Automotive Sys | VOICE DIALOGUE CONTROL BASED ON SIGNAL PREPROCESSING |
RU2007102021A (en) | 2007-01-19 | 2008-07-27 | Корпораци "Самсунг Электроникс Ко., Лтд." (KR) | METHOD AND SYSTEM OF IDENTITY RECOGNITION |
US8321444B2 (en) | 2007-06-29 | 2012-11-27 | Microsoft Corporation | Federated search |
US8090160B2 (en) | 2007-10-12 | 2012-01-03 | The University Of Houston System | Automated method for human face modeling and relighting with application to face recognition |
US8520979B2 (en) | 2008-08-19 | 2013-08-27 | Digimarc Corporation | Methods and systems for content processing |
US8385971B2 (en) | 2008-08-19 | 2013-02-26 | Digimarc Corporation | Methods and systems for content processing |
US7933777B2 (en) | 2008-08-29 | 2011-04-26 | Multimodal Technologies, Inc. | Hybrid speech recognition |
US8266078B2 (en) | 2009-02-06 | 2012-09-11 | Microsoft Corporation | Platform for learning based recognition research |
US8379940B2 (en) | 2009-06-02 | 2013-02-19 | George Mason Intellectual Properties, Inc. | Robust human authentication using holistic anthropometric and appearance-based features and boosting |
US20110184740A1 (en) | 2010-01-26 | 2011-07-28 | Google Inc. | Integration of Embedded and Network Speech Recognizers |
JP5544006B2 (en) * | 2010-02-18 | 2014-07-09 | 株式会社日立製作所 | Information communication processing system |
US20110257985A1 (en) | 2010-04-14 | 2011-10-20 | Boris Goldstein | Method and System for Facial Recognition Applications including Avatar Support |
US8239366B2 (en) | 2010-09-08 | 2012-08-07 | Nuance Communications, Inc. | Method and apparatus for processing spoken search queries |
US20120059658A1 (en) | 2010-09-08 | 2012-03-08 | Nuance Communications, Inc. | Methods and apparatus for performing an internet search |
US8577915B2 (en) | 2010-09-10 | 2013-11-05 | Veveo, Inc. | Method of and system for conducting personalized federated search and presentation of results therefrom |
US9613258B2 (en) | 2011-02-18 | 2017-04-04 | Iomniscient Pty Ltd | Image quality assessment |
EP2498250B1 (en) | 2011-03-07 | 2021-05-05 | Accenture Global Services Limited | Client and server system for natural language-based control of a digital network of devices |
US8380711B2 (en) | 2011-03-10 | 2013-02-19 | International Business Machines Corporation | Hierarchical ranking of facial attributes |
US9251402B2 (en) | 2011-05-13 | 2016-02-02 | Microsoft Technology Licensing, Llc | Association and prediction in facial recognition |
US20130031476A1 (en) | 2011-07-25 | 2013-01-31 | Coin Emmett | Voice activated virtual assistant |
US8995729B2 (en) | 2011-08-30 | 2015-03-31 | The Mitre Corporation | Accelerated comparison using scores from coarse and fine matching processes |
US9495331B2 (en) | 2011-09-19 | 2016-11-15 | Personetics Technologies Ltd. | Advanced system and method for automated-context-aware-dialog with human users |
US8340975B1 (en) | 2011-10-04 | 2012-12-25 | Theodore Alfred Rosenberger | Interactive speech recognition device and system for hands-free building control |
US9542956B1 (en) | 2012-01-09 | 2017-01-10 | Interactive Voice, Inc. | Systems and methods for responding to human spoken audio |
US20130238326A1 (en) | 2012-03-08 | 2013-09-12 | Lg Electronics Inc. | Apparatus and method for multiple device voice control |
US9626552B2 (en) | 2012-03-12 | 2017-04-18 | Hewlett-Packard Development Company, L.P. | Calculating facial image similarity |
TWI479435B (en) | 2012-04-03 | 2015-04-01 | Univ Chung Hua | Method for face recognition |
US8861804B1 (en) * | 2012-06-15 | 2014-10-14 | Shutterfly, Inc. | Assisted photo-tagging with facial recognition models |
US8831957B2 (en) | 2012-08-01 | 2014-09-09 | Google Inc. | Speech recognition models based on location indicia |
GB201215944D0 (en) | 2012-09-06 | 2012-10-24 | Univ Manchester | Image processing apparatus and method for fitting a deformable shape model to an image using random forests |
US9547647B2 (en) | 2012-09-19 | 2017-01-17 | Apple Inc. | Voice-based media searching |
KR101434170B1 (en) | 2012-09-25 | 2014-08-26 | 한국전자통신연구원 | Method for study using extracted characteristic of data and apparatus thereof |
US9275269B1 (en) | 2012-11-09 | 2016-03-01 | Orbeus, Inc. | System, method and apparatus for facial recognition |
US9465392B2 (en) | 2012-11-14 | 2016-10-11 | International Business Machines Corporation | Dynamic temperature control for a room containing a group of people |
US9875741B2 (en) | 2013-03-15 | 2018-01-23 | Google Llc | Selective speech recognition for chat and digital personal assistant systems |
US9706252B2 (en) | 2013-02-04 | 2017-07-11 | Universal Electronics Inc. | System and method for user monitoring and intent determination |
EP4138075A1 (en) | 2013-02-07 | 2023-02-22 | Apple Inc. | Voice trigger for a digital assistant |
US9472205B2 (en) | 2013-05-06 | 2016-10-18 | Honeywell International Inc. | Device voice recognition systems and methods |
WO2014193990A1 (en) * | 2013-05-28 | 2014-12-04 | Eduardo-Jose Chichilnisky | Smart prosthesis for facilitating artificial vision using scene abstraction |
US20140379323A1 (en) | 2013-06-20 | 2014-12-25 | Microsoft Corporation | Active learning using different knowledge sources |
US9696055B1 (en) | 2013-07-30 | 2017-07-04 | Alarm.Com Incorporated | Thermostat control based on activity within property |
US20150317511A1 (en) | 2013-11-07 | 2015-11-05 | Orbeus, Inc. | System, method and apparatus for performing facial recognition |
JP6410450B2 (en) | 2014-03-31 | 2018-10-24 | キヤノン株式会社 | Object identification device, object identification method, and program |
US9430794B2 (en) | 2014-03-31 | 2016-08-30 | Monticello Enterprises LLC | System and method for providing a buy option in search results when user input is classified as having a purchase intent |
GB201406594D0 (en) * | 2014-04-11 | 2014-05-28 | Idscan Biometric Ltd | Method, system and computer program for validating a facial image-bearing identity document |
WO2015184186A1 (en) | 2014-05-30 | 2015-12-03 | Apple Inc. | Multi-command single utterance input method |
KR101956071B1 (en) | 2015-01-13 | 2019-03-08 | 삼성전자주식회사 | Method and apparatus for verifying a user |
DE102015206566A1 (en) | 2015-04-13 | 2016-10-13 | BSH Hausgeräte GmbH | Home appliance and method for operating a household appliance |
US10402410B2 (en) | 2015-05-15 | 2019-09-03 | Google Llc | Contextualizing knowledge panels |
US10847175B2 (en) | 2015-07-24 | 2020-11-24 | Nuance Communications, Inc. | System and method for natural language driven search and discovery in large data sources |
WO2017059210A1 (en) | 2015-09-30 | 2017-04-06 | Cooper Technologies Company | Electrical devices with camera sensors |
US9996773B2 (en) | 2016-08-04 | 2018-06-12 | International Business Machines Corporation | Face recognition in big data ecosystem using multiple recognition models |
CN108021847B (en) | 2016-11-02 | 2021-09-14 | 佳能株式会社 | Apparatus and method for recognizing facial expression, image processing apparatus and system |
CN108133222B (en) * | 2016-12-01 | 2021-11-02 | 富士通株式会社 | Apparatus and method for determining a Convolutional Neural Network (CNN) model for a database |
WO2018153469A1 (en) * | 2017-02-24 | 2018-08-30 | Telefonaktiebolaget Lm Ericsson (Publ) | Classifying an instance using machine learning |
KR102370063B1 (en) * | 2017-03-28 | 2022-03-04 | 삼성전자주식회사 | Method and apparatus for verifying face |
CN107239139B (en) | 2017-05-18 | 2018-03-16 | 刘国华 | Based on the man-machine interaction method and system faced |
US20190012442A1 (en) | 2017-07-06 | 2019-01-10 | Bylined Me, Inc. | Facilitating retrieval of permissions associated with a media item |
CN108230491A (en) | 2017-07-20 | 2018-06-29 | 深圳市商汤科技有限公司 | Access control method and device, system, electronic equipment, program and medium |
US10796514B2 (en) | 2017-10-13 | 2020-10-06 | Alcatraz AI, Inc. | System and method for optimizing a facial recognition-based system for controlling access to a building |
KR102466942B1 (en) | 2017-12-27 | 2022-11-14 | 한국전자통신연구원 | Apparatus and method for registering face posture for face recognition |
US10817710B2 (en) | 2018-01-12 | 2020-10-27 | Sensormatic Electronics, LLC | Predictive theft notification |
US11205236B1 (en) * | 2018-01-24 | 2021-12-21 | State Farm Mutual Automobile Insurance Company | System and method for facilitating real estate transactions by analyzing user-provided data |
EP3783534A4 (en) * | 2018-04-19 | 2022-01-05 | Positec Power Tools (Suzhou) Co., Ltd | Self-moving device, server, and automatic working system therefor |
- 2018-04-27: US US15/964,220 patent/US20190332848A1/en (Abandoned)
- 2021-07-09: US US17/371,343 patent/US11688202B2/en (Active)
- 2023-05-15: US US18/197,689 patent/US20230282027A1/en (Pending)
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200057885A1 (en) * | 2018-01-12 | 2020-02-20 | Tyco Fire & Security Gmbh | Predictive theft notification for the prevention of theft |
US10817710B2 (en) * | 2018-01-12 | 2020-10-27 | Sensormatic Electronics, LLC | Predictive theft notification |
US11113532B2 (en) * | 2019-04-16 | 2021-09-07 | Lg Electronics Inc. | Artificial intelligence apparatus for recognizing object and method therefor |
US20220207946A1 (en) * | 2020-12-30 | 2022-06-30 | Assa Abloy Ab | Using facial recognition system to activate an automated verification protocol |
Also Published As
Publication number | Publication date |
---|---|
US20230282027A1 (en) | 2023-09-07 |
US20210334521A1 (en) | 2021-10-28 |
US11688202B2 (en) | 2023-06-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11688202B2 (en) | Facial enrollment and recognition system | |
US11704936B2 (en) | Object tracking and best shot detection system | |
US11748983B2 (en) | Image-based personal protective equipment fit system using worker-specific fit test image data | |
US9077678B1 (en) | Facilitating photo sharing | |
US20170032182A1 (en) | System for adaptive real-time facial recognition using fixed video and still cameras | |
WO2017212813A1 (en) | Image search device, image search system, and image search method | |
US20150262068A1 (en) | Event detection apparatus and event detection method | |
CN102147856A (en) | Image recognition apparatus and its control method | |
US11694476B2 (en) | Apparatus, system, and method of providing a facial and biometric recognition system | |
JP2012252654A (en) | Face image retrieval system and face image retrieval method | |
CN112330833A (en) | Face recognition attendance data verification method and device and computer equipment | |
CN110827432B (en) | Class attendance checking method and system based on face recognition | |
JP2011227654A (en) | Collation device | |
US20210319226A1 (en) | Face clustering in video streams | |
KR101785427B1 (en) | Customer management system and method based on features extracted from facial image of customer by neural network | |
US9621505B1 (en) | Providing images with notifications | |
WO2020172870A1 (en) | Method and apparatus for determining motion trajectory of target object | |
WO2021233058A1 (en) | Method for monitoring articles on shop shelf, computer and system | |
US11620728B2 (en) | Information processing device, information processing system, information processing method, and program | |
US10007842B2 (en) | Same person determination device and method, and control program therefor | |
US11586682B2 (en) | Method and system for enhancing a VMS by intelligently employing access control information therein | |
CN106485221A (en) | A kind of method that benchmark photograph is replaced automatically according to similar concentration degree | |
US20210216755A1 (en) | Face authentication system and face authentication method | |
US10872317B2 (en) | Biometric-based punch-in/punch-out management | |
JP6658402B2 (en) | Frame rate determination device, frame rate determination method, and computer program for frame rate determination |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: HONEYWELL INTERNATIONAL INC., NEW JERSEY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PADMANABHAN, ARAVIND;BRODSKY, TOMAS;LIN, YUNTING;SIGNING DATES FROM 20180425 TO 20180426;REEL/FRAME:045651/0753 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |