AU2020103067A4 - Computer Vision IoT-Facial Expression/Human Action Recognition - Google Patents


Info

Publication number
AU2020103067A4
AU2020103067A4
Authority
AU
Australia
Prior art keywords
image data
facial
features
facial image
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020103067A
Inventor
Sivaiah Bellamkonda
Ravisankar Malladi
Yerininti Venkata Narayana
Vedantham Ramachandran
Edara Sreenivasa Reddy
Gangavarapu Venkata Satya Kumar
Lavanya Settipalli
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bellamkonda Sivaiah Dr
Malladi Ravisankar Dr
Narayana Yerininti Venkata Mr
Ramachandran Vedantham Dr
Settipalli Lavanya Mrs
Original Assignee
Bellamkonda Sivaiah Dr
Malladi Ravisankar Dr
Narayana Yerininti Venkata Mr
Ramachandran Vedantham Dr
Settipalli Lavanya Mrs
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bellamkonda Sivaiah Dr, Malladi Ravisankar Dr, Narayana Yerininti Venkata Mr, Ramachandran Vedantham Dr, Settipalli Lavanya Mrs filed Critical Bellamkonda Sivaiah Dr
Priority to AU2020103067A priority Critical patent/AU2020103067A4/en
Application granted granted Critical
Publication of AU2020103067A4 publication Critical patent/AU2020103067A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06V – IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 – Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 – Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 – Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 – Classification, e.g. identification
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06V – IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 – Arrangements for image or video recognition or understanding
    • G06V10/70 – Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 – Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 – Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/76 – Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries based on eigen-space representations, e.g. from pose or different illumination conditions; Shape manifolds
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06V – IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 – Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 – Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 – Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 – Detection; Localisation; Normalisation
    • G06V40/162 – Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06V – IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 – Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 – Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 – Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 – Feature extraction; Face representation
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06N – COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 – Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

This invention provides a new technique and framework for improving an automated facial recognition software system. The technique comprises automatically detecting a face of a user via an IoT device. A facial image is retrieved, and image segments are extracted from the image and represented as a vector. A hybrid method for face detection in color images is used: the well-known Haar feature-based face detector developed by Viola and Jones (VJ), which was designed for gray-scale images, is combined with a skin-color filter that provides complementary information in color images. The image is first passed through the Haar-feature based face detector, which is adjusted so that it operates at a point on its ROC curve with a low number of missed faces but a high number of false detections. Then, using the proposed skin-color post-filtering method, many of these false detections can be eliminated easily. A color compensation algorithm is also used to reduce the effects of lighting. Experimental results on the Bao color face database show that the proposed method is superior in precision to the original VJ algorithm and to other skin-color based pre-filtering methods in the literature. The user is classified via the determined facial feature attributes with respect to a plurality of user-type weights stored in a cache, and an initial user type of the user is determined. The vector and data indicating the initial user type are transmitted to a server, and an inference process over the initial user type, the vector, and images in a designated database associated with the initial user type determines a final user type of the user. An identity of the user is determined based on the inference, and the identity is transmitted to the IoT device.

Description

Computer Vision IoT-Facial Expression/Human Action Recognition
Field of the Invention
The present invention relates generally to a method for automatically performing IoT facial recognition and, in particular, to a method and related system for improving facial recognition software technology associated with determining user identity and enabling access to related hardware/software systems. In this invention, IoT face detection refers to determining whether or not there are any faces in a given image and estimating the location and size of any detected faces through computer vision. Face detection is a trivial task for humans; however, it is not easy for computers due to geometric (scale, in-plane rotation, pose, facial expression, occlusion, etc.) and photometric variations. Face detection algorithms from the literature are briefly reviewed in the next subsection.
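As an illustration of this definition only, the following minimal sketch uses OpenCV's bundled frontal-face Haar cascade (an assumed, illustrative choice; the patent does not mandate a specific detector implementation) to report the location and size of detected faces:

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade (illustrative choice,
# not mandated by the invention).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("iot_camera_frame.jpg")  # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is reported as (x, y, w, h): the location and size of a face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    print(f"face at ({x}, {y}) with size {w}x{h}")
```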
Summary of the Invention
Earlier techniques for face detection can be grouped into knowledge-based, feature-based, template-based, and appearance-based methods. Face detection on an IoT sensor is an expensive search problem: in general, a sliding window is scanned across the image at various scales and each window is classified as face or non-face, so many background windows must be processed in addition to the actual face regions. The ratio of non-face windows to face windows can be as high as 100000:1, hence a well-trained classifier that produces a low number of false positives is necessary; a sketch of this scan follows.
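To make the search-problem framing concrete, here is a hedged sketch of the exhaustive multi-scale scan described above; `classify_window` stands in for any trained face/non-face classifier and is a hypothetical name:

```python
def sliding_windows(image, window=24, step=4, scale=1.25):
    """Yield (x, y, size) for analysis windows at multiple scales.

    Almost every yielded window is background, which is why the
    classifier must keep false positives extremely low: the ratio of
    non-face to face windows can approach 100000:1.
    """
    height, width = image.shape[:2]
    size = window
    while size <= min(height, width):
        for y in range(0, height - size + 1, step):
            for x in range(0, width - size + 1, step):
                yield x, y, size
        size = int(size * scale)

# Usage with a hypothetical trained classifier:
# detections = [(x, y, s) for x, y, s in sliding_windows(img)
#               if classify_window(img[y:y + s, x:x + s])]
```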
This technique describes how face images are detected through IoT. The sensor senses images through the IoT device with the help of the Haar-feature combination, and the skin color and size of the human face are analyzed through computer vision. IoT human segmentation in uncontrolled environments is a hard task because of the constant changes produced in natural scenes: illumination changes, moving objects, changes in the point of view, or occlusions, to mention just a few. Because of the nature of the problem, a common way to proceed is to discard most of the image so that the analysis can be performed on a reduced set of small candidate regions. The main problem with that method, however, is that moving pixels correspond to the boundaries between foreground and background regions, and thus there is no clear discrimination; one common way to obtain such candidate regions is sketched below.
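As a hedged sketch of the "reduced set of candidate regions" idea, the following uses OpenCV's MOG2 background subtractor (one common choice, not specified by the patent) to keep only moving regions; note that the resulting mask does cluster around foreground/background boundaries, which is exactly the discrimination problem noted above:

```python
import cv2

# Background model learned over recent frames (parameters are illustrative).
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def candidate_regions(frame, min_area=500):
    """Return bounding boxes of moving regions in a video frame."""
    mask = subtractor.apply(frame)
    # Clean up speckle noise before extracting connected components.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```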
Description of the Summary
This invention provides a computerized facial recognition improvement method comprising: automatically detecting, by a processor of an Internet of Things (IoT) hardware device of a user, a face of the user; automatically retrieving, by the processor, an image of the face of the user; extracting, by the processor from the image, images of significant facial feature attributes of the face of the user, wherein the significant facial feature attributes are represented as a vector; classifying, by the processor, the user via the facial feature attributes with respect to a plurality of user-type weights stored in a cache of the IoT hardware device; determining, by the processor based on results of the classification, an initial user type of the user; transmitting, by the processor to a server, the vector representing the facial feature attributes and data indicating the initial user type, wherein deep learning model software code is executed for inferring, via the server and with respect to the initial user type, the vector, and a plurality of images in a designated database associated with the initial user type, a final user type of the user, wherein an identity of the user is determined based on the inference, and wherein the identity of the user is transmitted to the IoT device; and receiving, by the processor, the identity of the user.
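For orientation only, the following minimal sketch maps these claimed steps onto code. The helper `extract_attribute_vector`, the cached `user_type_weights` dictionary, and the server endpoint are hypothetical names introduced here, not part of the specification:

```python
import json
import urllib.request
import numpy as np

def extract_attribute_vector(face_image):
    """Placeholder: a real system would extract significant facial feature
    attributes here; mean color is used only to keep the sketch
    self-contained."""
    return np.asarray(face_image, dtype=np.float64).mean(axis=(0, 1))

def classify_initial_user_type(vector, user_type_weights):
    """Score the attribute vector against cached per-user-type weights
    and return the best-matching initial user type."""
    scores = {utype: float(np.dot(vector, weights))
              for utype, weights in user_type_weights.items()}
    return max(scores, key=scores.get)

def recognize_user(face_image, user_type_weights, server_url):
    vector = extract_attribute_vector(face_image)
    initial_type = classify_initial_user_type(vector, user_type_weights)
    # Send the vector and the initial user type to the server; the server
    # runs deep-learning inference against the database associated with
    # that user type and returns the user's identity.
    payload = json.dumps({"vector": vector.tolist(),
                          "initial_user_type": initial_type}).encode()
    request = urllib.request.Request(
        server_url, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)["identity"]
```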
This IoT method utilizes skin color detection to decrease the high false-positive rate of the VJ face detector. The VJ algorithm uses only the brightness information in a search window, resulting in a high false acceptance rate due to face-like brightness patterns in the background. Skin color is therefore a complementary channel of information, and it is very fast to process.
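A hedged sketch of such a skin-color channel is given below; the YCrCb thresholds are common illustrative values from the skin-detection literature, not values taken from this patent:

```python
import cv2
import numpy as np

# Commonly cited Cr/Cb skin ranges (illustrative, not from the patent).
SKIN_LOW = np.array([0, 133, 77], dtype=np.uint8)    # Y, Cr, Cb lower bounds
SKIN_HIGH = np.array([255, 173, 127], dtype=np.uint8)

def skin_mask(bgr_image):
    """Return a binary mask (0/255) marking likely skin pixels."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, SKIN_LOW, SKIN_HIGH)
```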
In order to achieve a low false detection rate while keeping a high true detection rate, we propose a skin-color based post-filtering method for color images. Windows detected as faces by the VJ algorithm are verified by checking whether each window contains a sufficient number of skin pixels. To maximize the overall true detection rate, we adjust the parameters of the VJ algorithm so that the number of misses is low, even though the number of false detections is then high; most of these false detections are easily eliminated by the proposed skin-color post-filter, as sketched below.
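Putting the pieces together, here is a hedged sketch of the proposed post-filtering pipeline: the detector is made permissive (few misses, many false detections) and each detection is kept only if the window contains a sufficient fraction of skin pixels. The `skin_mask` helper is the one sketched above, and the 0.4 ratio is an illustrative threshold, not a value from the patent:

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces_hybrid(bgr_image, min_skin_ratio=0.4):
    """VJ detection tuned for few misses, then skin-color post-filtering."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # minNeighbors=1 shifts the operating point on the ROC curve toward
    # few missed faces at the cost of many false detections.
    candidates = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=1)
    mask = skin_mask(bgr_image)  # see the skin-color sketch above
    faces = []
    for (x, y, w, h) in candidates:
        window = mask[y:y + h, x:x + w]
        # Keep the window only if enough of it is skin-colored.
        if window.mean() / 255.0 >= min_skin_ratio:
            faces.append((x, y, w, h))
    return faces
```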
The underlying human face detection method is based on an over-complete set of Haar-like features which are calculated in scaled analysis windows. The rectangular Haar-like features are sensitive to edges, bars, and other similar structures in the image, and they are computed using an efficient method based on the integral image concept. After a huge number of features is calculated for each analysis window, the AdaBoost algorithm is used to combine a small number of these features into an effective classifier. For example, for an analysis window of size 24 x 24 there are approximately 160,000 features, far more than the number of pixels. A variant of AdaBoost is used both to select the best features and to train the final classifier.
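As a hedged sketch of the integral-image idea these features rely on: once the prefix-sum table is built, the sum of any rectangle (and hence any rectangular Haar-like feature) costs four lookups, regardless of its size. The code below is illustrative, not the patent's implementation:

```python
import numpy as np

def integral_image(gray):
    """Inclusive prefix-sum table: ii[y, x] = sum of gray[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(gray.astype(np.int64), axis=0), axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle at (x, y) using four table lookups."""
    a = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    return ii[y + h - 1, x + w - 1] - b - c + a

def two_rect_feature(ii, x, y, w, h):
    """A horizontal two-rectangle Haar-like feature: left half minus
    right half (responds to vertical edge/bar structures)."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```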
In order to reduce the effects of illumination, we also apply a color compensation method before the skin-color detection step; this improves the effectiveness of skin-color detection and was not present in previous pre-filtering based approaches.
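The patent does not name the compensation algorithm; a minimal sketch under the common gray-world assumption (scale each channel so the image's average becomes neutral gray) is shown here as one plausible choice:

```python
import cv2
import numpy as np

def gray_world_compensation(bgr_image):
    """Scale each color channel so the image mean is neutral gray.

    This reduces global color casts from the illuminant before
    skin-color detection (gray-world assumption; one possible
    compensation method, not necessarily the patent's).
    """
    img = bgr_image.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gray_mean = channel_means.mean()
    compensated = img * (gray_mean / channel_means)
    return np.clip(compensated, 0, 255).astype(np.uint8)
```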
The purpose of testing in this invention is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies, and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.
Embodiments of the present invention improve facial recognition software technology for determining user identity and enabling access to associated hardware/software systems. The invention creates a hierarchical structure for a model based on the extracted facial features. Rather than forming a single monolithic facial recognition process, a shallow recognition process is executed to group facial images into distinct databases. For instance, male/female, hair color, eye color, facial objects (e.g., glasses, jewelry, etc.), presence of facial hair, facial shape, and so on may be used as attributes for creating the distinct databases. The attributes may be extracted via a computer vision library, for example, inter alia, an open-source computer vision library. Deep neural network software (e.g., convolutional neural network (CNN) software) is then executed to generate an accurate database model for every database, thereby decreasing the size of each trained model and improving accuracy and inference efficiency with respect to database memory structure. Additionally, internal cache memory is used for storing deep facial models, such that when recognizing a face, the system executes shallow facial recognition software to extract key attributes (as presented, supra). The key attributes are used as an index for retrieving a corresponding model trained in the cloud. The deep facial models are stored within the cache for subsequent model retrieval processes, as sketched below.
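A hedged sketch of this cache behavior: shallow attribute extraction produces a key, the key indexes a per-database deep model, and cloud retrieval happens only on a cache miss. The names (`fetch_model_from_cloud`, the attribute dictionary) are hypothetical illustrations, not the patent's identifiers:

```python
class DeepModelCache:
    """Cache deep facial models indexed by shallow attribute keys."""

    def __init__(self, fetch_model_from_cloud):
        self._fetch = fetch_model_from_cloud   # callable: key -> model
        self._cache = {}

    def model_for(self, attributes):
        # e.g. attributes = {"gender": "male", "glasses": True, ...}
        key = tuple(sorted(attributes.items()))
        if key not in self._cache:
            # Cache miss: retrieve the model trained in the cloud for this
            # attribute combination, then keep it locally for subsequent
            # recognition requests.
            self._cache[key] = self._fetch(key)
        return self._cache[key]
```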

Claims (6)

  1. IoT face detection refers to determining whether or not there are any faces in a given image and estimating the location and size of any detected faces through computer vision.
  2. A facial recognition system comprising: a camera device which, in operation, produces video image data by imaging an imaging area; a) a facial recognition server, which is connected to the camera device, the facial recognition server comprising a feature extractor; b) in operation, the feature extractor extracts features of facial image data including a face of a person shown in the video image data; a preservation unit, which, in operation, preserves the features of the facial image data extracted by the feature extractor in a facial feature memory; and a statistical processing unit, which, in operation, performs statistical processing by using the features of the facial image data extracted by the feature extractor.
  3. The facial recognition server discards the features of the facial image data extracted by the feature extractor in a case where: the features of the facial image data extracted by the feature extractor have a similarity that is equal to or greater than a predetermined value to features of facial image data that are present in the facial feature memory, and a predetermined time has elapsed from an imaging time point, by the camera device, of the video image data that includes the facial image data.
  4. The facial recognition server also discards the features in a case where the predetermined time has not elapsed from the imaging time point, by the camera device, of the video image data that includes the facial image data, and neither of the following preservation conditions is satisfied: the facial image data is from the first captured facial image of the person, and an orientation of the face included in the facial image data is largely front-facing.
  5. The facial recognition system wherein the predetermined time is randomly set in accordance with an environment around the imaging area.
     a) A facial recognition server which is connected to a camera device, the server comprising a feature extractor, which, in operation, extracts features of facial image data including a face of a person shown in video image data, based on the video image data from the camera device; b) the features of facial image data extracted by the feature extractor have a similarity that is equal to or greater than a predetermined value to features of facial image data that are present in the facial feature memory, and a predetermined time has elapsed from an imaging time point, by the camera device, of the video image data that includes the facial image data; c) the facial recognition server discards the features of the facial image data extracted by the feature extractor in a case where the features have a similarity that is equal to or greater than a predetermined value to features of facial image data that are present in the facial feature memory.
  6. A human face detection method based on an over-complete set of Haar-like features which are calculated in scaled analysis windows. a) The rectangular Haar-like features are sensitive to edges, bars, and other similar structures in the image, and they are computed using an efficient method based on the integral image concept. b) A color compensation algorithm is also used to reduce the effects of lighting; experimental results on the Bao color face database show that the proposed method is superior in precision to the original VJ algorithm and to other skin-color based pre-filtering methods in the literature.
    Figure 1: From left to right: left, frontal, and right mesh fitting
AU2020103067A 2020-10-28 2020-10-28 Computer Vision IoT-Facial Expression/Human Action Recognition Ceased AU2020103067A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020103067A AU2020103067A4 (en) 2020-10-28 2020-10-28 Computer Vision IoT-Facial Expression/Human Action Recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2020103067A AU2020103067A4 (en) 2020-10-28 2020-10-28 Computer Vision IoT-Facial Expression/Human Action Recognition

Publications (1)

Publication Number Publication Date
AU2020103067A4 true AU2020103067A4 (en) 2020-12-24

Family

ID=73838853

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020103067A Ceased AU2020103067A4 (en) 2020-10-28 2020-10-28 Computer Vision IoT-Facial Expression/Human Action Recognition

Country Status (1)

Country Link
AU (1) AU2020103067A4 (en)


Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry