AU2021103335A4 - System for productivity determination using machine learning based face recognition - Google Patents

System for productivity determination using machine learning based face recognition

Info

Publication number
AU2021103335A4
Authority
AU
Australia
Prior art keywords
face
unit
face recognition
machine learning
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2021103335A
Inventor
P. S. Aithal
Soofi Anwar
Anush Bekal
Ganga Bhavani
Ananth Prabhu G.
Taramol K. G.
Suhan Mendon
Harish Neermarga
Nethravathi P. S.
Bhawana Rudra
Sonia V. Soans
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anwar Soofi Dr
Bhavani Ganga Dr
Aithal P S Dr
Bekal Anush Dr
G Ananth Prabhu Dr
Mendon Suhan Dr
P S Nethravathi Dr
Rudra Bhawana Dr
Soans Sonia V Ms
Original Assignee
Anwar Soofi Dr
Bhavani Ganga Dr
Aithal P S Dr
Bekal Anush Dr
G Ananth Prabhu Dr
Mendon Suhan Dr
P S Nethravathi Dr
Rudra Bhawana Dr
Soans Sonia V Ms
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anwar Soofi Dr, Bhavani Ganga Dr, Aithal P S Dr, Bekal Anush Dr, G Ananth Prabhu Dr, Mendon Suhan Dr, P S Nethravathi Dr, Rudra Bhawana Dr, Soans Sonia V Ms filed Critical Anwar Soofi Dr
Priority to AU2021103335A priority Critical patent/AU2021103335A4/en
Application granted granted Critical
Publication of AU2021103335A4 publication Critical patent/AU2021103335A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

SYSTEM FOR PRODUCTIVITY DETERMINATION USING MACHINE LEARNING BASED FACE RECOGNITION

ABSTRACT

The present invention relates to a system for productivity determination using machine learning based face recognition. The objective of the present invention is to solve the anomalies present in prior art techniques related to employee productivity analysis by using advanced face recognition technologies.

(Figure 1: face recognition unit; employee data; emotion determination; employee location; productivity outcome.)

Description

DRAWINGS
Figure 1: face recognition unit; employee data; emotion determination; employee location; productivity outcome.
SYSTEM FOR PRODUCTIVITY DETERMINATION USING MACHINE LEARNING BASED FACE RECOGNITION
FIELD OF INVENTION
[001]. The present invention relates to the technical field of biometrics
and machine learning.
[002]. The present invention also relates to the technical field of emotion
recognition.
[003]. More particularly, the present invention is related to a system for
productivity determination using machine learning based face
recognition.
BACKGROUND & PRIOR ART
[004]. Face recognition technology is currently a hot spot in biometrics.
It has important applications in security, attendance and robotics, and
is recognized as the least intrusive and most convenient biometric
technology. In view of this, one purpose of the present invention is to provide a face recognition method and system that can maintain a high recognition rate even when the illumination changes greatly and the face pose is arbitrary.
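As a non-limiting illustration of the illumination concern raised above, the sketch below applies contrast-limited adaptive histogram equalization (CLAHE) to a detected face region before it would be passed to any recognizer. It is a minimal example using OpenCV; the detector choice, input file name and CLAHE parameters are assumptions made for illustration and are not taken from this specification.

    # Minimal illumination-normalisation sketch (assumed parameters, not from this patent).
    import cv2

    def normalized_face(image_path: str):
        """Detect the largest face and return an illumination-normalised grayscale crop."""
        image = cv2.imread(image_path)  # hypothetical input image
        if image is None:
            raise FileNotFoundError(image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        # Haar cascade face detector shipped with OpenCV.
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None

        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
        crop = gray[y:y + h, x:x + w]

        # CLAHE reduces the effect of strong, uneven illumination before recognition.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return cv2.resize(clahe.apply(crop), (160, 160))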
[005]. Facial emotion recognition (FER) is a major concern in the
fields of computer vision and artificial intelligence owing to its
significant academic and commercial potential.
[006]. Some of the prior work is listed herewith:
[007]. CN108764047A, Group emotion behavior analysis method and
apparatus, electronic device, medium and product: Embodiments of the
invention disclose a group emotion behavior analysis method and
apparatus, an electronic device, a medium and a product.
comprises the steps of carrying out image acquisition on a group
comprising at least one person to obtain at least one group image;
executing human face recognition on the at least one group image to
obtain at least one human face image, and/or executing human body
recognition on the at least one group image to obtain at least one human
body image.
[008]. CN109472212A, An emotion analysis check-in interaction
system based on human face recognition: The invention discloses an
emotion analysis check-in interaction system based on human face
recognition. Check-in is achieved by face recognition, or by the
combination of fingerprint and face recognition, avoiding the
inaccurate check-in information of common check-in modes. Because
face recognition check-in collects an image of the sign-in person, an
emotion analysis module can analyze the person's mood for the day
from that image, and a motivating interaction module then sends
motivational messages, songs or music to the check-in person's cell
phone to encourage the sign-in personnel. When used in a company,
the system helps managers analyze the character of employees so as
to arrange working tasks more reasonably, and also helps managers
grasp the working state of employees in real time.
[009]. CN107833018A, Face recognition technology-based
enterprise management system: The present invention discloses a
face recognition technology-based enterprise management system
which relates to the enterprise management technology field and
comprises a work attendance management sub-system, an employee
motivation sub-system and a warehouse management sub-system.
The attendance management sub-system checks the work attendance
of employees automatically, thereby improving attendance
management efficiency. The employee motivation sub-system
recognizes the facial expressions of employees to reflect their
emotional states and carries out psychological motivation, thereby
improving work enthusiasm. The warehouse management sub-system
effectively prevents people without assigned duties from entering the
warehouse to steal public goods and materials, thereby saving
enterprise costs. The system fully utilizes the face image data of the
employees and effectively improves enterprise management
efficiency.
[0010]. CN112382051A, Smart residence security system based on
block chain: The invention discloses a smart residence security
system based on a block chain. The system comprises a sound
positioning module used for judging whether there is an abnormal
sound and locating it; a face recognition module used for recognizing
a face and judging whether it belongs to the house owner; an alarm
situation sensing module used for monitoring whether a dangerous
situation occurs in the house; a networking alarm module used for
sending alarm information, together with the abnormal position, to
residents and security personnel when an alarm situation occurs; and
a block chain database used for storing the face information of
residents and workers in the community. The sound positioning
module comprises a microphone array module, a sound acquisition
module, a sound preprocessing module and an abnormal sound
positioning module. The residence security system operates more
stably and safely.
[0011]. AU2020103514A4, IFER - Student Behavior Identification:
INTELLIGENT STUDENT BEHAVIOUR IDENTIFICATION
USING FER: The invention concerns human facial expression
recognition, one of the most powerful and challenging tasks in social
communication. Facial expressions are a natural and direct means for
human beings to communicate their emotions and intentions, and are
key characteristics of non-verbal communication. The disclosure
surveys facial expression recognition (FER) techniques, which
include three major stages: pre-processing, feature extraction and
classification, and explains the various types of FER techniques with
their major contributions. The performance of the various FER
techniques is compared based on the number of expressions
recognized and the complexity of the algorithms. The IFER student
behaviour identification is a method for teaching social behaviour to
students comprising the steps of identifying rules of conduct,
establishing positive consequences for obeying the rules and negative
consequences for disobeying them, and teaching the students the
rules of conduct.
[0012]. US20190311188A1, Face emotion recognition method based
on dual-stream convolutional neural network: A face emotion
recognition method based on a dual-stream convolutional neural
network applies a multi-scale facial expression recognition network
to single-frame face images and face sequences to perform learning
and classification. The method includes constructing a multi-scale
facial expression recognition network comprising a channel network
with a resolution of 224x224 and a channel network with a resolution
of 336x336, extracting facial expression characteristics at different
resolutions through the recognition network, effectively combining
the static characteristics of images and the dynamic characteristics of
expression sequences for training and learning, fusing the two
channel models, and testing to obtain a facial expression
classification result. The invention fully utilizes the advantages of
deep learning, avoids the deviations and long processing time of
manual feature extraction, and is therefore more adaptable.
Moreover, it improves the accuracy and productivity of expression
recognition.
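To make the cited dual-stream, multi-scale idea concrete, the sketch below builds two small convolutional branches, one fed 224x224 face crops and one fed 336x336 face crops, and fuses their pooled features for expression classification. It is only an illustrative PyTorch outline; the layer widths, fusion by concatenation and the seven expression classes are assumptions, not details taken from the cited patents.

    # Illustrative two-branch, multi-scale expression classifier (assumed architecture).
    import torch
    import torch.nn as nn

    def branch(width: int) -> nn.Sequential:
        """A small convolutional stream; real systems would use a deeper backbone."""
        return nn.Sequential(
            nn.Conv2d(3, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(width, width * 2, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # output size independent of input resolution
            nn.Flatten(),
        )

    class DualStreamFER(nn.Module):
        def __init__(self, num_classes: int = 7):
            super().__init__()
            self.stream_224 = branch(16)   # fed 224x224 face crops
            self.stream_336 = branch(16)   # fed 336x336 face crops
            self.classifier = nn.Linear(2 * 32, num_classes)  # fuse by concatenation

        def forward(self, x224: torch.Tensor, x336: torch.Tensor) -> torch.Tensor:
            fused = torch.cat([self.stream_224(x224), self.stream_336(x336)], dim=1)
            return self.classifier(fused)

    # Usage with random tensors standing in for preprocessed face crops.
    model = DualStreamFER()
    logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 336, 336))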
[0013]. US11010600B2, Face emotion recognition method based on
dual-stream convolutional neural network: the abstract of this patent
is identical to that of the publication cited in paragraph [0012] above.
[0014]. CN109558535A, Personalized article pushing method and
system based on face recognition: The invention provides a
personalized article pushing method and system based on face
recognition, and the method comprises the steps of obtaining a face
image in a video image, recognizing the face image, and building a
user file according to a recognition result; analyzing the face image
based on the demographic information to obtain user attribute
information, and storing the user attribute information to a user file;
calculating similarity among the users, performing clustering
analysis on the user archives to extract behavior characteristics of the
users, and converting the behavior characteristics of the users to
generate a neighbor set related to the users; and generating a user
feature vector by combining the neighbor set and the user attribute
information, and constructing a feature matrix between the user
feature vector and the article from the article set to form an initial
pushing result of the article.
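The clustering-and-neighbour idea in the cited abstract can be pictured with a few lines of NumPy: given per-user attribute vectors (for example age, gender and an emotion score derived from face analysis), compute cosine similarity between users and keep each user's top-k most similar users as a neighbour set. The attribute encoding and the value of k below are assumptions for illustration only.

    # Illustrative neighbour-set construction from user attribute vectors (assumed data).
    import numpy as np

    def neighbor_sets(user_attrs: np.ndarray, k: int = 2) -> list:
        """Return, for each user, the indices of the k most similar other users."""
        norms = np.linalg.norm(user_attrs, axis=1, keepdims=True)
        unit = user_attrs / np.clip(norms, 1e-12, None)
        sim = unit @ unit.T                  # pairwise cosine similarity
        np.fill_diagonal(sim, -np.inf)       # a user is not its own neighbour
        return [list(np.argsort(row)[::-1][:k]) for row in sim]

    # Rows: users; columns: hypothetical attributes such as age, gender flag, smile score.
    attrs = np.array([[25, 1, 0.8],
                      [27, 1, 0.7],
                      [60, 0, 0.2],
                      [58, 0, 0.3]], dtype=float)
    print(neighbor_sets(attrs, k=1))  # e.g. [[1], [0], [3], [2]]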
[0015]. CN207960434U, Safety device of family based on facial
expression recognition and behavior analysis: The utility model
relates to a home safety device based on facial expression recognition
and behavior analysis. It includes a concealed anti-theft net installed
on the balcony, a sliding drive mechanism, and a processing
controller in signal connection with the sliding drive mechanism. It
further includes a facial expression recognition device for collecting
and analyzing a person's facial expression information and a behavior
analysis device for collecting and analyzing the person's action
information, both of which are in signal connection with the
processing controller. By combining the facial expression recognition
device and the behavior analysis device with the concealed anti-theft
net, the utility model retains the advantages of the concealed
anti-theft net while obtaining and analyzing a person's emotional
state from the monitored facial expressions and actions.
[0016]. CN112370037A, Safe driving method and system based on
emotion recognition: The invention relates to the technical field of
intelligent assisted driving and provides a safe driving method and
system based on emotion recognition. A master control module, an
image acquisition module, a vehicle state collection module, a heart
rate collection device, a voice module and a vehicle braking module
are taken as components to establish a safe driving mechanism,
wherein the image acquisition module, the vehicle state collection
module, the heart rate collection device, the voice module and the
vehicle braking module are connected with the master control
module; the personality information of a driver is collected in
advance to obtain a personality value, the heart rate and the face
image of the driver are collected in real time, a corresponding heart
rate change rate is obtained through calculation, and the emotion of
the driver is determined through face expression recognition;
through this multi-pronged approach, the accuracy of driver emotion
recognition can be improved, and before the driver exhibits
dangerous driving behavior, a voice prompt is given to pacify the
driver's emotion.
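A minimal way to picture the fusion described above is sketched below: a heart-rate change rate computed over a short window is combined with a facial-emotion score and a pre-collected personality value into a single risk figure that can trigger a calming voice prompt. Every name, weight and threshold here is a placeholder for illustration; none comes from the cited patent.

    # Illustrative driver-risk fusion (all weights and thresholds are assumed placeholders).
    def heart_rate_change_rate(samples_bpm: list) -> float:
        """Relative change between the newest and oldest reading in the window."""
        if len(samples_bpm) < 2 or samples_bpm[0] == 0:
            return 0.0
        return (samples_bpm[-1] - samples_bpm[0]) / samples_bpm[0]

    def risk_score(hr_change: float, anger_score: float, personality_value: float) -> float:
        """Weighted fusion of the three cues; higher means more likely to need a prompt."""
        return 0.4 * abs(hr_change) + 0.4 * anger_score + 0.2 * personality_value

    window = [72, 75, 80, 96]   # hypothetical heart-rate samples (bpm)
    score = risk_score(heart_rate_change_rate(window), anger_score=0.7, personality_value=0.5)
    if score > 0.5:             # assumed threshold
        print("Playing calming voice prompt")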
[0017]. CN109558535B, Method and system for individually
pushing articles based on face recognition: the abstract of this patent
is essentially identical to that of CN109558535A cited in paragraph
[0014] above; it additionally notes that data such as the gender, age
and emotion of the user are obtained by analyzing the user's face
image.
[0018]. US8873813B2, Application of Z-webs and Z-factors to analytics,
search engine, learning, recognition, natural language, and other
utilities: Here, we introduce Z-webs, including Z-factors and
Z-nodes, for the understanding of relationships between objects,
subjects, abstract ideas, concepts, or the like, including face, car,
images, people, emotions, mood, text, natural language, voice, music,
video, locations, formulas, facts, historical data, landmarks,
personalities, ownership, family, friends, love, happiness, social
behavior, voting behavior, and the like, to be used for many
applications in our life, including on the search engine, analytics, Big
Data processing, natural language processing, economy forecasting,
face recognition, dealing with reliability and certainty, medical
diagnosis, pattern recognition, object recognition, biometrics, security
analysis, risk analysis, fraud detection, satellite image analysis,
machine generated data analysis, machine learning, training samples,
extracting data or patterns (from the video, images, and the like),
editing video or images, and the like. Z-factors include reliability
factor, confidence factor, expertise factor, bias factor, and the like,
which are associated with each Z-node in the Z-web.
[0020]. Groupings of alternative elements or embodiments of the
invention disclosed herein are not to be construed as limitations.
Each group member can be referred to and claimed individually or in
any combination with other members of the group or other elements
found herein. One or more members of a group can be included in,
or deleted from, a group for reasons of convenience and/or
patentability. When any such inclusion or deletion occurs, the
specification is herein deemed to contain the group as modified thus
fulfilling the written description of all Markush groups used in the
appended claims.
[0021]. As used in the description herein and throughout the claims that
follow, the meaning of "a," "an," and "the" includes plural reference
unless the context clearly dictates otherwise. Also, as used in the
description herein, the meaning of "in" includes "in" and "on" unless
the context clearly dictates otherwise.
[0022]. The recitation of ranges of values herein is merely intended to
serve as a shorthand method of referring individually to each separate
value falling within the range. Unless otherwise indicated herein, each
individual value is incorporated into the specification as if it were
individually recited herein. All methods described herein can be
performed in any suitable order unless otherwise indicated herein or
otherwise clearly contradicted by context.
[0023]. The use of any and all examples, or exemplary language (e.g.
"such as") provided with respect to certain embodiments herein is
intended merely to better illuminate the invention and does not pose a
limitation on the scope of the invention otherwise claimed. No
language in the specification should be construed as indicating any
non-claimed element essential to the practice of the invention.
[0024]. The above information disclosed in this Background section is
only for enhancement of understanding of the background of the
invention and therefore it may contain information that does not form
the prior art that is already known in this country to a person of
ordinary skill in the art.
SUMMARY
[0025]. The present invention addresses and solves technical
problems existing in the prior art. In response to these problems, the
present invention provides a system for productivity determination
using machine learning based face recognition.
[0026]. The present invention discloses a system for productivity
determination using machine learning based face recognition,
wherein the system comprises at least one computing device and at
least one camera, wherein the computing device processes the
information received from the camera through a computer
implemented method, wherein the computer implemented method of
recording the emotional stability of a worker is characterized by:
[0027]. starting the method with worker information data; activating an
emotional stability unit on a mobile computing device with the
worker information data to open the application; determining the
navigation information by the navigation unit; activating the camera
unit; determining the face image;
[0028]. activating the face scanner unit; determining the face
recognition; matching the information through the machine learning
based matching unit and marking the emotional stability on a
successful match and ending the method; wherein, on failing to
detect availability of worker information data after starting:
[0029]. logging into the emotional stability unit on a mobile computing
device with wireless communication; determining the navigation
information by the navigation unit; activating the camera unit;
determining the face image; activating the face scanner unit;
[0030]. determining the face recognition and transferring the data
through the local wireless communication unit, matching the
information through the machine learning based computing unit,
marking the emotional stability on a successful match and ending the
method.
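The claimed flow can be read as a simple two-branch procedure, sketched below in Python. The classes and method names (EmotionalStabilityUnit, NavigationUnit and so on) are hypothetical stand-ins introduced only to show the ordering of steps; the specification does not define these interfaces.

    # Sketch of the claimed two-branch flow. Every class and method name below is a
    # hypothetical stand-in introduced for illustration; the specification does not
    # define these interfaces.

    class EmotionalStabilityUnit:
        def activate(self, worker_info): print("application opened for", worker_info["id"])
        def login_via_wireless(self): print("logged in over wireless communication")
        def mark_emotional_stability(self, identity): print("emotional stability marked for", identity)

    class NavigationUnit:
        def current_location(self): return (12.97, 74.80)   # placeholder coordinates

    class CameraUnit:
        def activate(self): print("camera activated")
        def capture_face(self): return b"face-image-bytes"  # placeholder image

    class FaceScannerUnit:
        def activate(self): print("face scanner activated")
        def recognize(self, face_image): return "worker-42"  # placeholder identity

    class MatchingUnit:
        """Stands in for the machine learning based matching/computing unit."""
        def match(self, identity, face_image, location): return identity == "worker-42"

    class WirelessUnit:
        def transfer(self, identity): print("transferred over local wireless:", identity)

    def record_emotional_stability(worker_info=None) -> bool:
        stability, scanner = EmotionalStabilityUnit(), FaceScannerUnit()
        if worker_info is not None:
            stability.activate(worker_info)        # branch 1: worker data available
        else:
            stability.login_via_wireless()         # branch 2: log in over wireless

        location = NavigationUnit().current_location()
        camera = CameraUnit()
        camera.activate()
        face_image = camera.capture_face()
        scanner.activate()
        identity = scanner.recognize(face_image)

        if worker_info is None:
            WirelessUnit().transfer(identity)      # extra transfer step in branch 2

        if MatchingUnit().match(identity, face_image, location):
            stability.mark_emotional_stability(identity)
            return True
        return False

    record_emotional_stability({"id": "worker-42"})   # branch 1
    record_emotional_stability()                      # branch 2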
OBJECTIVE OF THE INVENTION
[0031]. The principal objective of the present invention is to provide a
system for productivity determination using machine learning based
face recognition.
BRIEF DESCRIPTION OF DRAWINGS
[0032]. To further clarify various aspects of some example embodiments
of the present invention, a more particular description of the invention
will be rendered by reference to specific embodiments thereof, which
are illustrated in the appended drawings. It is appreciated that these
drawings depict only illustrative embodiments of the invention and are
therefore not to be considered limiting of its scope. The invention will
be described and explained with additional specificity and detail
through the use of the accompanying drawings.
[0033]. In order that the advantages of the present invention will be
easily understood, a detailed description of the invention is discussed
below in conjunction with the appended drawings, which, however,
should not be considered to limit the scope of the invention to the
accompanying drawings, in which:
[0034]. Figure 1 shows an exemplary representation of a system for
productivity determination using machine learning based face
recognition, according to the present invention.
DETAILED DESCRIPTION
[0035]. The present invention discloses a system for productivity
determination using machine learning based face recognition.
[0036]. Figure 1 shows the exemplary representation of a system for
productivity determination using machine learning based face
recognition, according to the present invention.
[0037]. Although the present disclosure has been described with
reference to a system for productivity determination using machine
learning based face recognition, it should be appreciated that this has
been done merely to illustrate the invention in an exemplary manner,
and any other purpose or function for which the explained structures
or configurations could be used is covered within the scope of the
present disclosure.
[001]. The present invention discloses a system for productivity
determination using machine learning based face recognition, wherein
the system comprises at least one computing device and at least one
camera, wherein the computing device processes the information
received from the camera through a computer implemented method,
wherein the computer implemented method of recording the
emotional stability of a worker is characterized by:
[002]. starting the method with worker information data; activating an
emotional stability unit on a mobile computing device with the
worker information data to open the application; determining the
navigation information by the navigation unit; activating the camera
unit; determining the face image;
[003]. activating the face scanner unit; determining the face
recognition; matching the information through the machine learning
based matching unit and marking the emotional stability on a
successful match and ending the method; wherein, on failing to
detect availability of worker information data after starting:
[004]. logging into the emotional stability unit on a mobile computing
device with wireless communication; determining the navigation
information by the navigation unit; activating the camera unit;
determining the face image; activating the face scanner unit;
[005]. determining the face recognition and transferring the data
through the local wireless communication unit, matching the
information through the machine learning based computing unit,
marking the emotional stability on a successful match and ending the
method.
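Figure 1 groups employee data, emotion determination and employee location into a productivity outcome. A minimal way to picture that final step is an aggregation such as the one below, which turns a day's emotion records into a single score per employee; the emotion weights, the on-site bonus and the scoring rule are assumptions made for illustration only and are not defined in this specification.

    # Illustrative aggregation of emotion records into a productivity outcome
    # (emotion weights and the on-site bonus are assumed, not from the specification).
    from collections import defaultdict

    EMOTION_WEIGHT = {"happy": 1.0, "neutral": 0.7, "sad": 0.3, "angry": 0.1}

    def productivity_outcome(records):
        """records: iterable of (employee_id, emotion, on_site) tuples for one day."""
        totals, counts = defaultdict(float), defaultdict(int)
        for employee_id, emotion, on_site in records:
            score = EMOTION_WEIGHT.get(emotion, 0.5)
            if on_site:                      # employee location contributes a small bonus
                score += 0.1
            totals[employee_id] += score
            counts[employee_id] += 1
        return {e: round(totals[e] / counts[e], 2) for e in totals}

    day = [("worker-42", "happy", True), ("worker-42", "neutral", True),
           ("worker-7", "sad", False)]
    print(productivity_outcome(day))   # e.g. {'worker-42': 0.95, 'worker-7': 0.3}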
[006]. The figures and the foregoing description give examples of
embodiments. Those skilled in the art will appreciate that one or
more of the described elements may well be combined into a single
functional element. Alternatively, certain elements may be split into
multiple functional elements. Elements from one embodiment may
be added to another embodiment. For example, order of processes
described herein may be changed and are not limited to the manner
described herein. Moreover, the actions of any block diagram need
not be implemented in the order shown; nor do all of the acts need to
be necessarily performed. Also, those acts that are not dependent on
other acts may be performed in parallel with the other acts. The
scope of embodiments is by no means limited by these specific
examples.
[007]. Although implementations of the invention have been described
in a language specific to structural features and/or methods, it is to
be understood that the appended claims are not necessarily limited to
the specific features or methods described. Rather, the specific
features and methods are disclosed as examples of implementations
of the invention.

Claims (1)

CLAIMS
I/We claim:
1. A system for productivity determination using machine learning
based face recognition, wherein the system comprises at least one
computing device and at least one camera, wherein the computing
device processes the information received from the camera through a
computer implemented method, wherein the computer implemented
method of recording the emotional stability of a worker is
characterized by:
starting the method with worker information data;
activating an emotional stability unit on a mobile computing device
with the worker information data to open the application;
determining the navigation information by the navigation unit;
activating the camera unit;
determining the face image;
activating the face scanner unit;
determining the face recognition;
matching the information through the machine learning based
matching unit and marking the emotional stability on a successful
match and ending the method;
wherein, on failing to detect availability of worker information data
after starting:
logging into the emotional stability unit on a mobile computing
device with wireless communication;
determining the navigation information by the navigation unit;
activating the camera unit;
determining the face image;
activating the face scanner unit;
determining the face recognition and transferring the data through
the local wireless communication unit, matching the information
through the machine learning based computing unit and marking the
emotional stability on a successful match and ending the method.
AU2021103335A 2021-06-14 2021-06-14 System for productivity determination using machine learning based face recognition Ceased AU2021103335A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2021103335A AU2021103335A4 (en) 2021-06-14 2021-06-14 System for productivity determination using machine learning based face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2021103335A AU2021103335A4 (en) 2021-06-14 2021-06-14 System for productivity determination using machine learning based face recognition

Publications (1)

Publication Number Publication Date
AU2021103335A4 true AU2021103335A4 (en) 2022-03-31

Family

ID=80855670

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021103335A Ceased AU2021103335A4 (en) 2021-06-14 2021-06-14 System for productivity determination using machine learning based face recognition

Country Status (1)

Country Link
AU (1) AU2021103335A4 (en)

Similar Documents

Publication Publication Date Title
Wayman et al. An introduction to biometric authentication systems
Choudhury et al. Towards Activity Databases: Using Sensors and Statistical Models to Summarize People's Lives.
CN108537910A (en) A kind of employee work attendance method, device and Work attendance management system based on recognition of face
Dudzik et al. Context in human emotion perception for automatic affect detection: A survey of audiovisual databases
CN111881822A (en) Access control method, device, equipment and storage medium based on face recognition
CN103443772A (en) System and method for demographic analytics based on multimodal information
Alsellami et al. The recent trends in biometric traits authentication based on internet of things (IoT)
AU2021103335A4 (en) System for productivity determination using machine learning based face recognition
Generosi et al. Smart retrofitting for human factors: a face recognition-based system proposal
CN112766506A (en) Knowledge base construction method based on architecture
Kadhim et al. A multimodal biometric database and case study for face recognition based deep learning
Pham et al. A proposal model using deep learning model integrated with knowledge graph for monitoring human behavior in forest protection
Li et al. iwalk: Let your smartphone remember you
US20220335725A1 (en) Monitoring presence or absence of an object using local region matching
CN113689613A (en) Access control system, access control method, and storage medium
Amayri et al. A statistical process control chart approach for occupancy estimation in smart buildings
Kaur et al. An advance 2D face recognition by feature extraction (ICA) and optimize multilayer architecture
CN109858469A (en) Method and apparatus for output information
Pandey et al. Object detection and movement prediction for autonomous vehicle: a review
Sun et al. Research of context awareness based accident prevention during mobile phone use
Kapoor et al. IoT based real-time face detection and recognition system
Sun et al. Context awareness-based accident prevention during mobile phone use
Pantic et al. Self-adaptive expert system for facial expression analysis
Qodseya et al. Visual-based eye contact detection in multi-person interactions
Bhowmick et al. A speed invariant human identification system using gait biometrics

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry