WO2023095168A1 - An intelligent security system and a method thereof - Google Patents

An intelligent security system and a method thereof

Info

Publication number
WO2023095168A1
Authority
WO
WIPO (PCT)
Prior art date
Application number
PCT/IN2022/051028
Other languages
French (fr)
Inventor
Nilesh Vidyadhar Puntambekar
Avani Nilesh PUNTAMBEKAR
Ved Nilesh PUNTAMBEKAR
Original Assignee
Nilesh Vidyadhar Puntambekar
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nilesh Vidyadhar Puntambekar filed Critical Nilesh Vidyadhar Puntambekar
Publication of WO2023095168A1 publication Critical patent/WO2023095168A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • the present invention relates to security systems. More particularly, the present disclosure relates to an intelligent security system and a method thereof.
  • ANNs are composed of node layers, containing an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, connects to another and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network.
  • Image capturing unit refers to any mechanical, digital, or electronic viewing device; still camera; camcorder; motion picture camera; or any other instrument, equipment, or format capable of recording, storing, or transmitting visual images.
  • Memory refers to a non-transitory computer-readable recording medium on which data or a model is recorded, such as a disk, hard drive, or the like.
  • Common forms of memory may include, for example, floppy disks, flexible disks, hard disks, magnetic tape, or any other magnetic storage medium, CD-ROM, DVD, or any other optical medium, a RAM, a PROM, an EPROM, a FLASH-EPROM, or other memory chip or cartridge, or any other tangible medium that a computer can read and use.
  • Devices like desktops, laptops, tablets, cell phones, and automatic teller machines (ATMs) need to be secure and preserve privacy, especially when a user is entering confidential information into any of these devices. In other cases, it is necessary to restrict viewership, especially by children, of content that is confidential or unsuitable for their age.
  • cellphones, desktops, laptops, or other computing devices usually do not have any inbuilt feature which stops someone other than an authorized user from peeking therein.
  • the authorized user has to use partitions or walls around display screens, turn display screens away from public view, ensure that unattended displays are turned off, keep the display screen clear of sensitive documents, or attach a privacy filter to the display screen of the desktop or laptop. These measures are not feasible under all circumstances, and their failure can lead to theft of the confidential or private information.
  • ATM machines remain vulnerable to tampering even though they are always under video surveillance, because the video recorded during the surveillance is only accessed after the theft, specifically when the authorized user registers a complaint.
  • In the meantime, the stolen data, i.e., confidential numbers or letters, will already have been used to steal money from the authorized user’s account.
  • the only measure recommended is that the authorized user should cover the ATM keypad while entering the numbers or letters, so that a person (other than the authorized user) cannot peep from behind and steal the information. This measure is ineffective as the thief could create distractions or peek without the authorized user’s knowledge.
  • An object of the present disclosure is to provide an intelligent security system and a method thereof.
  • Another object of the present disclosure is to provide an intelligent security system and method that is inbuilt into systems such as desktops, laptops and ATM machines in which confidential data is entered.
  • Yet another object of the present disclosure is to provide an intelligent security system and method that prevents unauthorized personnel from peeking at and accessing confidential data while it is being entered or displayed on the screen of a device.
  • the present disclosure envisages an intelligent security system.
  • the system comprises a memory, an image capturing unit, a processing unit, and an actuating unit.
  • the memory is configured to store a pre-trained neural network model.
  • the memory is further configured to store a plurality of facial features of a user.
  • the image capturing unit is configured to sequentially capture video frames of a surrounding environment in real-time.
  • the processing unit is configured to cooperate with the image capturing unit to receive the video frames and further configured to implement the pre-trained neural network model to generate a plurality of output frames containing one or more faces.
  • processing unit further comprises a segmentation module, and a face recognition module.
  • the segmentation module is configured to receive the video frames and further configured to segment each of the received video frames into a plurality of regions.
  • the segmentation module is further configured to apply a vision-based segmentation technique to each of the received video frames to segment the frames into the regions.
  • the face recognition module is configured to detect and mask the one or more faces in the regions and further configured to generate a plurality of output frames.
  • the face recognition module comprises an extractor and an identifier.
  • the extractor is configured to receive the regions, and is further configured to extract a plurality of features from the regions.
  • the identifier is configured to cooperate with the extractor to receive the extracted features, and is further configured to identify one or more faces in the regions by mapping the received features into the pre-trained neural network model and further configured to extract the plurality of faces to generate said plurality of output frames.
  • the processing unit employs deep learning based neural network techniques to perform scene segmentation, detection, and masking of the identified faces.
  • the actuating unit is configured to cooperate with the processing unit to receive the output frames and further configured to cooperate with the memory to implement the pre-trained neural network model to identify presence of an unauthorised face present in the output frames to actuate a preventive action.
  • the actuating unit is configured to compare the stored facial features of the authorized user with the output frame containing the faces to identify the authorized user and unauthorised users.
  • the preventive action can be selected from a group of: shutting down a device implementing the system, changing the display on a display screen of the device, locking the screen of the device, darkening the display screen of the device, providing an alert on the display screen, and aborting a transaction in process on the device.
  • the system includes non-optic sensors, including inbuilt accelerometers or a haptic sensor, configured to sense the age of the user from finger touch and frequency of movements and transmit sensed signals to the actuating unit to initiate preventive actions when the user is identified as an underage person.
  • the present disclosure further envisages an intelligent security method.
  • Figure 1 illustrates a block diagram of an intelligent security system, in accordance with an embodiment of the present disclosure;
  • Figure 2A illustrates a flow diagram of an intelligent security method, in accordance with an embodiment of the present disclosure;
  • Figure 2B illustrates a schematic image depicting an unauthorized person in the near vicinity of a user, peeking into a system in which the camera is installed;
  • Figure 3A illustrates a schematic image depicting the display of the system when the output frame for the video captured by the camera contains only the authorised user; and
  • Figure 3B illustrates a schematic image depicting the display of the system when the output frame for the video captured by the camera contains the authorized user and an unauthorized person.
  • Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth relating to specific components and methods to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
  • system 100 an intelligent security system
  • method 2000 a method thereof, of the present disclosure
  • the intelligent security system 100 includes a memory 125, an image capturing unit 105, a processing unit 110, and an actuating unit 140.
  • the memory 125 is configured to store a pre-trained neural network model. In an aspect, the memory 125 is further configured to store a plurality of facial features of an authorized user 205.
  • the image capturing unit 105 is configured to sequentially capture and generate video frames of the surrounding environment in real-time.
  • the processing unit 110 is configured to cooperate with the image capturing unit 105 to receive the video frames and further configured to implement the pre-trained neural network to generate a plurality of output frames 107 containing one or more faces.
  • processing unit 110 further comprises a segmentation module 115, and a face recognition module 120.
  • the segmentation module 115 is configured to receive the video frames and further configured to segment each of the received frames into a plurality of regions.
  • the segmentation module 115 is further configured to apply a vision-based segmentation technique to each of the received video frames to segment the frames into the regions.
  • the face recognition module 120 is configured to detect and mask the one or more faces in the regions and further configured to generate a plurality of output frames 107.
  • the face recognition module 120 comprises an extractor 130 and an identifier 135.
  • the extractor 130 is configured to receive the regions, and is further configured to extract a plurality of features from the regions.
  • the identifier 135 is configured to cooperate with the extractor 130 to receive the extracted features, and is further configured to identify one or more faces in the regions by mapping the received features into the pre-trained neural network model and further configured to extract the plurality of faces to generate a plurality of output frames 107.
  • the processing unit 110 employs deep learning based neural network techniques to perform scene segmentation, detection and extraction of the identified faces.
  • the actuating unit 140 is configured to cooperate with the processing unit 110 to receive the output frames 107 and further configured to cooperate with the memory 125 to use the pre-trained neural network model to identify the presence of an unauthorised face in the output frames 107 to actuate a preventive action.
  • the actuating unit 140 is configured to compare the stored facial features of the authorized user 205 with the output frame containing the faces to identify an authorized user and unauthorised users.
  • the preventive action can be selected from a group of: shutting down the device 200, changing the display on the display screen 150 of the device 200, locking the screen of the device 200, darkening the display screen 150 of the device 200, and aborting a transaction in process.
  • the system 100 includes non-optic sensors, such as inbuilt accelerometers or a haptic sensor, configured to sense the age of the user from finger touch and frequency of movements and transmit a sensed signal to the actuating unit 140.
  • the intelligent security method 2000 is shown in accordance with an embodiment.
  • the order in which the method 2000 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any appropriate order to carry out the method 2000 or an alternative method. Additionally, individual blocks may be deleted from the method 2000 without departing from the scope of the subject matter described herein.
  • the intelligent security method includes the steps of:
  • step 2002 storing, by a system 100, a pre-trained neural network model in a memory 125;
  • step 2004 capturing, by an image capturing unit 105 of the system 100, video frames of the surrounding environment in real-time;
  • step 2006 generating, by a processing unit 110 of the system 100, a plurality of output frames containing one or more faces by implementing said pre-trained neural network model; and
  • step 2008 identifying, by an actuating unit 140 of the system 100, the presence of an unauthorised face in said output frames by implementing said pre-trained neural network model so as to actuate a preventive action on positive identification of the unauthorised face.
  • Figure 2B shows a schematic image depicting an unauthorized person in the near vicinity of a user, peeking into a system in which the camera is installed.
  • Figure 3A shows a schematic image depicting the display of the system when the output frame for the video captured by the camera contains only the authorised user.
  • Figure 3B shows a schematic image depicting the display of the system when the output frame for the video captured by the camera contains the authorized user and an unauthorized person.
  • the system 100 is configured to prevent unauthorized personnel from peeking into and accessing confidential data entered into, or displayed on the screen of, a device 200 such as a desktop, a laptop, a cell phone, a digital locker, or an ATM machine.
  • the system 100 is implemented in the device 200.
  • the system 100 is partially implemented in the device 200 and partially in a remote central server.
  • the system 100 includes an image capturing unit 105, a processing unit 110, and an actuating unit 140.
  • the image capturing unit or the camera 105 is configured to be inbuilt in the device 200, or attached to the device 200 as an integral component of the device 200, or an external unit functionally connected to the device 200.
  • the image capturing unit 105 is a dynamically enabled image capturing unit which is mounted on the device 200 such that the user and the user’s surroundings can be easily captured by the image capturing unit 105.
  • the image capturing unit 105 is configured to sequentially capture and generate video frames of the surrounding environment in real-time.
  • the processing unit 110 is configured to cooperate with the image capturing unit 105 to receive the video frames therefrom.
  • the processing unit 110 comprises a segmentation module 115 and a face recognition module 120 that enable the processing unit 110 to generate output frames 107 having a target object (which is the face of the user 205) identified therein.
  • the segmentation module 115 is configured to apply a vision-based segmentation technique to each of the received video frames to segment the scene in each of the received frames into various regions.
  • the face recognition module 120 is configured to detect and mask at least one target object (more specifically a face) from the segmented frames.
  • the face recognition module 120 is further configured to generate output frames having the masked target object (faces of the authorized/unauthorized users).
  • the processing unit 110 employs deep learning based neural network techniques to perform scene segmentation, and detection, and masking of the target objects.
  • the face recognition module 120 comprises a memory 125, an extractor 130, and an identifier 135.
  • the memory 125 is configured to store a pre-trained neural network model, wherein said pre-trained model correlates a set of learned features with the targets to be identified.
  • the extractor 130 is configured to receive the segmented video frames, and is further configured to extract dominant features from the received frames.
  • the identifier 135 is configured to cooperate with said extractor 130 to receive said extracted features, and further configured to cooperate with said memory 125 to identify and mask the target objects in the segmented video frame by mapping said received features into said pre-trained neural network model.
  • the identifier 135 is further configured to generate the output frame containing the masked target objects.
  • the memory 125 is further configured to store therein video frames that identify each and every facial feature of the user.
  • the memory 125 is configured to store therein video frames that identify facial features of an unauthorized person/user.
  • the actuating unit 140 is configured to cooperate with the processing unit 110 to receive the output frames generated by the face recognition module 120.
  • the actuating unit 140 is configured to cooperate with the memory 125 to receive the stored video frames therefrom, and compare the stored video frames with the output frame containing the masked target object.
  • if the actuating unit 140 detects an unauthorized face 210 in the close vicinity of the user, the actuating unit 140 is configured to shut down, change, lock, provide an alert on, or darken the display screen 150 of the device 200 (in the case of a desktop or a laptop), or abort the transaction in process (in the case of an ATM machine or a cell phone).
  • the actuating unit 140 is connected to an actuator 145 which enables the shutting down or aborting of the process.
  • the system 100 may include non-optic sensors such as inbuilt accelerometers or a haptic sensor configured to sense the age of the user with finger touch and frequency of movements and transmit a sensed signal to the actuating unit 140.
  • the actuating unit 140 is configured to receive the sensed information and shut down, hide or darken the display screen 150 of the device 200 (especially in case of lockers provided with digital locks).
  • the system 100 is configured to identify any obstruction that may be placed in front of the image capturing unit 105, wherein the obstruction can hide faces, generate or alter facial features, and restrict access to the device 200.
  • the actuating unit 140 is further configured to cooperate with a display screen 150 to display a message to the user indicating that the security of the device 200 has been breached.
  • the intelligent security system 100 is configured to continuously monitor the vicinity of the user.
  • the actuating unit 140 is fed with a routine for reactivating the screen of the device 200, when no face other than the face of the user 205 is identified by the image capturing unit 105.
  • the user can bypass the routine and activate the screen.
  • the intelligent security system 100 can be used in cell phones, tablets, laptops, desktops, ATM machines, and lockers with digital locks.
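The four-step flow of method 2000 (steps 2002 to 2008) described above can be sketched in code. This is an illustrative sketch only: every function name and data shape below is a hypothetical stand-in, since the disclosure does not prescribe an API, and real faces would be found by the pre-trained neural network model rather than listed as feature tuples.

```python
# Hypothetical sketch of method 2000; names and data shapes are assumptions.

def load_pretrained_model():
    # Step 2002: storing/loading the model. Stored feature vectors of the
    # authorized user 205 stand in for the trained network here.
    return {"authorized_features": [(0.1, 0.9), (0.2, 0.8)]}

def capture_frames():
    # Step 2004: stand-in for real-time frames from the image capturing unit 105.
    return [
        {"faces": [(0.1, 0.9)]},               # only the authorized user
        {"faces": [(0.1, 0.9), (0.7, 0.3)]},   # authorized user plus a stranger
    ]

def generate_output_frames(frames):
    # Step 2006: the processing unit 110 would segment each frame and detect
    # faces; in this sketch the detected faces pass through unchanged.
    return frames

def identify_unauthorized(model, frame, tol=0.05):
    # Step 2008: a face is unauthorised if it matches no stored feature vector.
    def matches(face, ref):
        return all(abs(a - b) <= tol for a, b in zip(face, ref))
    return any(
        not any(matches(face, ref) for ref in model["authorized_features"])
        for face in frame["faces"]
    )

model = load_pretrained_model()
for frame in generate_output_frames(capture_frames()):
    # One preventive action from the disclosed group would be actuated here.
    action = "lock_screen" if identify_unauthorized(model, frame) else "none"
```

The point of the sketch is the division of labour: steps 2002 to 2006 only produce output frames; the decision and the preventive action are confined to step 2008, mirroring the separation between the processing unit 110 and the actuating unit 140.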


Abstract

The present disclosure relates to an intelligent security system (100). The system (100) includes a memory (125), an image capturing unit (105), a processing unit (110), and an actuating unit (140). The memory (125) stores a pre-trained neural network model. The image capturing unit (105) is configured to sequentially capture and generate video frames of the surrounding environment in real-time. The processing unit (110) cooperates with the image capturing unit (105) to receive the video frames and implements the pre-trained neural network model to generate a plurality of output frames (107) containing one or more faces. The actuating unit (140) receives the output frames and cooperates with the memory (125) to use the pre-trained neural network model to identify the presence of an unauthorised face in the output frames and actuate a preventive action.

Description

AN INTELLIGENT SECURITY SYSTEM AND A METHOD THEREOF
FIELD
The present invention relates to security systems. More particularly, the present disclosure relates to an intelligent security system and a method thereof.
DEFINITION
As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
“Neural networks”, also referred to as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning and are at the heart of deep learning algorithms. Artificial neural networks (ANNs) are composed of node layers, containing an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, connects to another and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network.
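As a minimal sketch (in Python, with arbitrary illustrative weights and inputs) of the thresholded neuron just described: the node "fires" and passes its value to the next layer only when the weighted sum of its inputs exceeds the threshold, and otherwise passes nothing along.

```python
# Minimal thresholded artificial neuron; all numeric values are illustrative.

def neuron_output(inputs, weights, threshold):
    # Weighted sum of the incoming connections.
    total = sum(x * w for x, w in zip(inputs, weights))
    # Above threshold: activated, data sent to the next layer; else nothing.
    return total if total > threshold else None

fired = neuron_output([1.0, 0.5], [0.6, 0.8], threshold=0.9)   # 0.6 + 0.4 exceeds 0.9
silent = neuron_output([1.0, 0.5], [0.2, 0.2], threshold=0.9)  # 0.3 does not
```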
“Image capturing unit” refers to any mechanical, digital, or electronic viewing device; still camera; camcorder; motion picture camera; or any other instrument, equipment, or format capable of recording, storing, or transmitting visual images.
“Memory” refers to a non-transitory computer-readable recording medium on which data or a model is recorded, such as a disk, hard drive, or the like. Common forms of memory may include, for example, floppy disks, flexible disks, hard disks, magnetic tape, or any other magnetic storage medium, CD-ROM, DVD, or any other optical medium, a RAM, a PROM, an EPROM, a FLASH-EPROM, or other memory chip or cartridge, or any other tangible medium that a computer can read and use.
BACKGROUND
The background information herein below relates to the present disclosure but is not necessarily prior art.
Devices like desktops, laptops, tablets, cell phones, and automatic teller machines (ATMs) need to be secure and preserve privacy, especially when a user is entering confidential information into any of these devices. In other cases, it is necessary to restrict viewership, especially by children, of content that is confidential or unsuitable for their age.
Typically, cellphones, desktops, laptops, and other computing devices do not have any inbuilt feature which stops someone other than an authorized user from peeking therein. In fact, to prevent such intrusion, the authorized user has to use partitions or walls around display screens, turn display screens away from public view, ensure that unattended displays are turned off, keep the display screen clear of sensitive documents, or attach a privacy filter to the display screen of the desktop or laptop. These measures are not feasible under all circumstances, and their failure can lead to theft of the confidential or private information.
On the other hand, ATM machines remain vulnerable to tampering even though they are always under video surveillance, because the video recorded during the surveillance is only accessed after the theft, specifically when the authorized user registers a complaint. In the meantime, the stolen data, i.e., confidential numbers or letters, will already have been used to steal money from the authorized user’s account. To prevent the theft of the confidential numbers or letters, the only measure recommended is that the authorized user should cover the ATM keypad while entering the numbers or letters, so that a person (other than the authorized user) cannot peep from behind and steal the information. This measure is ineffective as the thief could create distractions or peek without the authorized user’s knowledge.
There is, therefore, felt a need for an intelligent security system and a method thereof.
OBJECTS
Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
An object of the present disclosure is to provide an intelligent security system and a method thereof.
Another object of the present disclosure is to provide an intelligent security system and method that is inbuilt into systems such as desktops, laptops and ATM machines in which confidential data is entered.
Yet another object of the present disclosure is to provide an intelligent security system and method that prevents unauthorized personnel from peeking at and accessing confidential data while it is being entered or displayed on the screen of a device.
Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
SUMMARY
The present disclosure envisages an intelligent security system. The system comprises a memory, an image capturing unit, a processing unit, and an actuating unit.
The memory is configured to store a pre-trained neural network model. In an aspect, the memory is further configured to store a plurality of facial features of a user.
The image capturing unit is configured to sequentially capture video frames of a surrounding environment in real-time.
The processing unit is configured to cooperate with the image capturing unit to receive the video frames and further configured to implement the pre-trained neural network model to generate a plurality of output frames containing one or more faces.
In an aspect, the processing unit further comprises a segmentation module and a face recognition module. The segmentation module is configured to receive the video frames and further configured to segment each of the received video frames into a plurality of regions. In said aspect, the segmentation module is further configured to apply a vision-based segmentation technique to each of the received video frames to segment the frames into the regions. Thereafter, the face recognition module is configured to detect and mask the one or more faces in the regions and further configured to generate a plurality of output frames.
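The segment-then-detect-and-mask flow described above can be sketched as follows. This is a deliberately simplified stand-in: a frame is modelled as a 2D grid of values, segmentation is a fixed tiling rather than the vision-based technique the disclosure contemplates, and the face "detector" is a placeholder predicate supplied by the caller.

```python
# Simplified stand-ins for the segmentation and face-masking steps.

def segment(frame, size):
    """Split a 2D frame into square regions of side `size` (fixed tiling)."""
    regions = []
    for r in range(0, len(frame), size):
        for c in range(0, len(frame[0]), size):
            regions.append((r, c, [row[c:c + size] for row in frame[r:r + size]]))
    return regions

def mask_faces(frame, regions, contains_face):
    """Zero out every region in which the stand-in detector reports a face."""
    out = [row[:] for row in frame]  # output frame; input left untouched
    for r, c, region in regions:
        if contains_face(region):
            for i in range(len(region)):
                for j in range(len(region[0])):
                    out[r + i][c + j] = 0
    return out

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
regions = segment(frame, 2)  # four 2x2 regions
# Placeholder predicate standing in for a trained face detector.
masked = mask_faces(frame, regions, lambda reg: reg[0][0] > 8)
```

In the disclosed system the predicate's role is played by the face recognition module and its pre-trained neural network model; only the region-by-region data flow is illustrated here.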
In an aspect, the face recognition module comprises an extractor and an identifier. The extractor is configured to receive the regions, and is further configured to extract a plurality of features from the regions. The identifier is configured to cooperate with the extractor to receive the extracted features, and is further configured to identify one or more faces in the regions by mapping the received features into the pre-trained neural network model and further configured to extract the plurality of faces to generate said plurality of output frames.
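The extractor/identifier cooperation can be illustrated with a toy sketch in which a mean/variance summary stands in for the extracted features, and a nearest-neighbour comparison stands in for mapping features into the pre-trained neural network model. The function names, the feature choice, and the distance threshold are all assumptions made for illustration.

```python
# Toy extractor and identifier; the feature summary and threshold are
# illustrative assumptions, not the disclosed neural network model.

def extract_features(region):
    """Extractor: reduce a 2D region to a small feature vector."""
    flat = [p for row in region for p in row]
    mean = sum(flat) / len(flat)
    var = sum((p - mean) ** 2 for p in flat) / len(flat)
    return (mean, var)

def identify(features, known_faces, threshold=1.0):
    """Identifier: label of the closest known face, or None if none is close."""
    best_label, best_dist = None, float("inf")
    for label, ref in known_faces.items():
        dist = sum((a - b) ** 2 for a, b in zip(features, ref)) ** 0.5
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= threshold else None

known = {"user_205": (5.0, 0.5)}  # hypothetical stored features of user 205
label = identify(extract_features([[4, 5], [5, 6]]), known)  # matches user_205
```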
In an aspect, the processing unit employs deep learning based neural network techniques to perform scene segmentation, detection, and masking of the identified faces.
The actuating unit is configured to cooperate with the processing unit to receive the output frames and further configured to cooperate with the memory to implement the pre-trained neural network model to identify presence of an unauthorised face present in the output frames to actuate a preventive action.
In an aspect, the actuating unit is configured to compare the stored facial features of the authorized user with the output frame containing the faces to identify the authorized user and unauthorised users.
In an aspect, the preventive action can be selected from a group of actions including shutting down a device implementing the system, changing the display on a display screen of the device, locking the screen of the device, darkening the display screen of the device, providing an alert on the display screen, and aborting a transaction in process on the device.
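The selection among the group of preventive actions listed above can be sketched as a simple dispatch table. The action keys and the dict-based device state below are purely illustrative; the disclosure does not prescribe any particular software structure for actuation.

```python
# Hypothetical dispatch table mapping a chosen preventive action to the
# device-state change it produces. Device state is modelled as a plain
# dict purely for illustration.
PREVENTIVE_ACTIONS = {
    "shutdown":          lambda d: d.update(power="off"),
    "lock_screen":       lambda d: d.update(locked=True),
    "darken_screen":     lambda d: d.update(brightness=0),
    "alert":             lambda d: d.update(alert="security breach detected"),
    "abort_transaction": lambda d: d.update(transaction=None),
}

def actuate(action, device):
    PREVENTIVE_ACTIONS[action](device)
    return device

device = {"power": "on", "locked": False, "brightness": 80,
          "alert": None, "transaction": "withdrawal"}
actuate("lock_screen", device)
actuate("abort_transaction", device)
```

After the two calls, the screen is locked and the in-process transaction is aborted, while the device remains powered on.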
In an aspect, the system includes non-optic sensors, such as inbuilt accelerometers or a haptic sensor, configured to estimate the age of the user from finger-touch characteristics and frequency of movements, and to transmit a sensed signal to the actuating unit to initiate preventive actions when the user is identified as an underage person.
The present disclosure further envisages an intelligent security method.
Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
It is to be understood that the aspects and embodiments of the disclosure described above may be used in any combination with each other. Several of the aspects and embodiments may be combined to form a further embodiment of the disclosure.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawing and the following detailed description.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING
An intelligent security system and a method thereof, of the present disclosure will now be described with the help of the accompanying drawing, in which:
Figure 1 illustrates a block diagram of an intelligent security system , in accordance with an embodiment of the present disclosure;
Figure 2A illustrates a flow diagram of an intelligent security method, in accordance with an embodiment of the present disclosure;
Figure 2B illustrates a schematic image depicting an unauthorized person in the near vicinity of a user, peeking into a system in which the camera is installed;
Figure 3A illustrates a schematic image depicting the display of the system when the output frame for video captured by the camera detects only the authorized user; and Figure 3B illustrates a schematic image depicting the display of the system when the output frame for video captured by the camera detects the authorized user and an unauthorized person.
DETAILED DESCRIPTION
Embodiments, of the present disclosure, will now be described with reference to the accompanying drawing.
Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components and methods, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
The terminology used in the present disclosure is only for the purpose of explaining a particular embodiment, and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a," "an," and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms "comprises," "comprising," "including," and "having" are open-ended transitional phrases and therefore specify the presence of stated features, elements, modules, units, and/or components, but do not forbid the presence or addition of one or more other features, elements, components, and/or groups thereof. The particular order of steps disclosed in the method and process of the present disclosure is not to be construed as necessarily requiring their performance as described or illustrated. It is also to be understood that additional or alternative steps may be employed.
Systems like desktops, laptops, tablets, cell phones, and ATM machines need to be secure and retain privacy, especially when a user is entering confidential information into any of them. In other cases, it is necessary to restrict viewership, especially for children, of content that is confidential or unsuitable for their age.
Typically, cell phones, desktops, and laptops do not have any inbuilt feature that stops someone other than the user from peeking at them. To prevent such intrusion, the user has to use partitions or walls around screens, turn screens away from public view, ensure that unattended displays are turned off, keep the screen clear of sensitive documents, or attach a privacy filter to the screen of the desktop or laptop. These measures are not feasible under all circumstances, and their failure can result in theft of information.
On the other hand, ATM machines are less likely to be tampered with, since they are always under video surveillance. However, the video recorded during surveillance is only accessed after a theft, specifically when the user registers a complaint. In the meantime, the stolen data (i.e., confidential numbers or letters) will already have been used to steal money from the user's account. To prevent the theft of the confidential numbers or letters, the only measure recommended is that the user cover the ATM keypad while entering the numbers or letters, so that a person other than the user cannot peep from behind and steal the information. This measure is ineffective, as a thief could create distractions or peek without the user's knowledge.
To avoid this, an intelligent security system (hereinafter referred to as "system 100") and a method (hereinafter referred to as "method 2000") thereof, of the present disclosure, are now described with reference to Figure 1 through Figure 3B.
Referring to Figure 1, the intelligent security system 100 includes a memory 125, an image capturing unit 105, a processing unit 110, and an actuating unit 140.
The memory 125 is configured to store a pre-trained neural network model. In an aspect, the memory 125 is further configured to store a plurality of facial features of an authorized user 205. The image capturing unit 105 is configured to sequentially capture and generate video frames of the surrounding environment in real-time.
The processing unit 110 is configured to cooperate with the image capturing unit 105 to receive the video frames and further configured to implement the pre-trained neural network model to generate a plurality of output frames 107 containing one or more faces.
In an aspect, the processing unit 110 further comprises a segmentation module 115 and a face recognition module 120.
The segmentation module 115 is configured to receive the video frames and further configured to segment each of the received frames into a plurality of regions.
In an aspect, the segmentation module 115 is further configured to apply a vision-based segmentation technique to each of the received video frames to segment the frames into the regions.
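The segmentation step can be illustrated with a deliberately simple sketch that tiles each frame into rectangular regions. A real implementation would use a learned, vision-based scene-segmentation technique as the disclosure states; the grid split, the `segment_frame` name, and the toy 4x4 frame below are assumptions made only for illustration.

```python
def segment_frame(frame, rows=2, cols=2):
    # Trivial stand-in for vision-based segmentation: split a frame
    # (a 2-D list of pixel values) into a grid of rectangular regions.
    h, w = len(frame), len(frame[0])
    rh, rw = h // rows, w // cols
    regions = []
    for r in range(rows):
        for c in range(cols):
            region = [row[c * rw:(c + 1) * rw]
                      for row in frame[r * rh:(r + 1) * rh]]
            regions.append(region)
    return regions

frame = [[y * 10 + x for x in range(4)] for y in range(4)]  # toy 4x4 frame
regions = segment_frame(frame)
```

Each of the four resulting regions can then be passed independently to the face recognition module.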
The face recognition module 120 is configured to detect and mask the one or more faces in the regions and further configured to generate a plurality of output frames 107.
In an aspect, the face recognition module 120 comprises an extractor 130 and an identifier 135.
The extractor 130 is configured to receive the regions, and is further configured to extract a plurality of features from the regions.
The identifier 135 is configured to cooperate with the extractor 130 to receive the extracted features, and is further configured to identify one or more faces in the regions by mapping the received features into the pre-trained neural network model and further configured to extract the plurality of faces to generate a plurality of output frames 107.
In an aspect, the processing unit 110 employs deep learning based neural network techniques to perform scene segmentation, detection, and extraction of the identified faces.

The actuating unit 140 is configured to cooperate with the processing unit 110 to receive the output frames 107 and further configured to cooperate with the memory 125 to use the pre-trained neural network model to identify the presence of an unauthorised face in the output frames 107 to actuate a preventive action.
In an aspect, the actuating unit 140 is configured to compare the stored facial features of the authorized user 205 with the output frame containing the faces to identify an authorized user and unauthorised users.
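The comparison performed by the actuating unit 140 can be sketched as a partition of the faces found in an output frame into authorized and unauthorized sets. The `classify_faces` name, the tuple-based face "features", and the distance predicate are hypothetical; the disclosure only requires that stored facial features of the authorized user be compared against the faces in the output frame.

```python
def classify_faces(frame_faces, authorized_templates, match):
    # Partition the faces found in an output frame into authorized and
    # unauthorized, using a caller-supplied match() predicate against the
    # stored templates of the authorized user(s).
    authorized, unauthorized = [], []
    for face in frame_faces:
        if any(match(face, t) for t in authorized_templates):
            authorized.append(face)
        else:
            unauthorized.append(face)
    return authorized, unauthorized

templates = [(1.0, 2.0)]                       # stored facial features
faces = [(1.0, 2.0), (9.0, 9.0)]               # faces in one output frame
match = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) < 0.5
auth, unauth = classify_faces(faces, templates, match)
```

A non-empty `unauth` list is exactly the condition under which the actuating unit would trigger a preventive action.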
In an aspect, the preventive action can be selected from a group of actions including shutting down the device 200, changing the display on the display screen 150 of the device 200, locking the screen of the device 200, darkening the display screen 150 of the device 200, and aborting a transaction in process.
In an aspect, the system 100 includes non-optic sensors, such as inbuilt accelerometers or a haptic sensor, configured to estimate the age of the user from finger-touch characteristics and frequency of movements and transmit a sensed signal to the actuating unit 140.
Referring to Figure 2A, the intelligent security method 2000 is shown in accordance with an embodiment. The order in which the method 2000 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any appropriate order to carry out the method 2000 or an alternative method. Additionally, individual blocks may be deleted from the method 2000 without departing from the scope of the subject matter described herein. The intelligent security method includes the following steps:
At step 2002: storing, by a system 100, a pre-trained neural network model in a memory 125;
At step 2004: capturing, by an image capturing unit 105 of the system 100, video frames of the surrounding environment in real-time;
At step 2006: generating, by a processing unit 110 of the system 100, a plurality of output frames containing one or more faces by implementing said pre-trained neural network model; and

At step 2008: identifying, by an actuating unit 140 of the system 100, the presence of an unauthorised face in said output frames by implementing said pre-trained neural network model so as to actuate a preventive action on positive identification of the unauthorised face.
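The control flow of steps 2002 through 2008 can be sketched end to end. Here "model" is a plain callable standing in for the pre-trained neural network, mapping a frame to (face, is_authorized) pairs; the function and variable names are illustrative only and not part of the disclosure.

```python
def intelligent_security_method(frames, model):
    # Control-flow sketch of steps 2002-2008 of method 2000.
    memory = {"model": model}                   # step 2002: store the model
    for frame in frames:                        # step 2004: captured frames
        output = memory["model"](frame)         # step 2006: output frames with faces
        if any(not ok for _, ok in output):     # step 2008: unauthorised face found
            return "preventive_action"
    return "no_action"

# Toy model: a face string is authorized only if it equals "user".
toy_model = lambda frame: [(face, face == "user") for face in frame]
```

Feeding a frame containing only the authorized user yields no action, while a frame that also contains a stranger triggers the preventive action.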
Figure 2B shows a schematic image depicting an unauthorized person in the near vicinity of a user, peeking into a system in which the camera is installed.
Figure 3A shows a schematic image depicting the display of the system when the output frame for video captured by the camera detects only the authorized user.
Figure 3B shows a schematic image depicting the display of the system when the output frame for video captured by the camera detects the authorized user and an unauthorized person.
In an exemplary implementation, the system 100 is configured to prevent unauthorized personnel from peeking into and accessing confidential data entered into, or displayed on, the screen of a device 200 such as a desktop, a laptop, a cell phone, a digital locker, or an ATM machine. In an aspect, the system 100 is implemented in the device 200. In another aspect, the system 100 is partially implemented in the device 200 and partially in a remote central server.
The system 100 includes an image capturing unit 105, a processing unit 110, and an actuating unit 140. The image capturing unit or the camera 105 is configured to be inbuilt in the device 200, or attached to the device 200 as an integral component of the device 200, or an external unit functionally connected to the device 200.
The image capturing unit 105 is a dynamically enabled image capturing unit which is mounted on the device 200 such that the user and the user's surroundings can be easily captured by the image capturing unit 105. The image capturing unit 105 is configured to sequentially capture and generate video frames of the surrounding environment in real-time.
The processing unit 110 is configured to cooperate with the image capturing unit 105 to receive the video frames therefrom. The processing unit 110 comprises a segmentation module 115 and a face recognition module 120 that enable the processing unit 110 to generate output frames 107 having a target object (which is the face of the user 205) identified therein. More specifically, the segmentation module 115 is configured to apply vision based segmentation technique to each of the received video frames to segment the scene in each of the received frames into various regions. The face recognition module 120 is configured to detect and mask at least one target object (more specifically a face) from the segmented frames. The face recognition module 120 is further configured to generate output frames having the masked target object (faces of the authorized/unauthorized users).
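The masking of a detected face region described above can be sketched as follows. The `mask_face` name, the in-place blanking, and the (top, left, height, width) box convention are assumptions for this sketch; the disclosure does not fix a particular masking representation.

```python
def mask_face(frame, box, value=0):
    # Blank out a detected face region, given as (top, left, height, width),
    # in place -- a stand-in for the masking performed by the face
    # recognition module 120 when generating the output frames.
    top, left, h, w = box
    for y in range(top, top + h):
        for x in range(left, left + w):
            frame[y][x] = value
    return frame

frame = [[255] * 4 for _ in range(4)]   # toy 4x4 all-white frame
mask_face(frame, (1, 1, 2, 2))          # mask a 2x2 face region
```

Only the pixels inside the detected box are overwritten; the rest of the frame passes through unchanged to the actuating unit.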
In an embodiment, the processing unit 110 employs deep learning based neural network techniques to perform scene segmentation, detection, and masking of the target objects.
In another embodiment, the face recognition module 120 comprises a memory 125, an extractor 130, and an identifier 135. The memory 125 is configured to store a pre-trained neural network model, wherein said pre-trained model correlates a set of learned features with the targets to be identified. The extractor 130 is configured to receive the segmented video frames, and is further configured to extract dominant features from the received frames. The identifier 135 is configured to cooperate with said extractor 130 to receive said extracted features, and further configured to cooperate with said memory 125 to identify and mask the target objects in the segmented video frame by mapping said received features into said pre-trained neural network model. The identifier 135 is further configured to generate the output frame containing the masked target objects.
In an embodiment, the memory 125 is further configured to store therein video frames that identify each and every facial feature of the user.
In another embodiment, the memory 125 is configured to store therein video frames that identify facial features of an unauthorized person/user.
The actuating unit 140 is configured to cooperate with the processing unit 110 to receive the output frames generated by the face recognition module 120. The actuating unit 140 is configured to cooperate with the memory 125 to receive the stored video frames therefrom, and compare the stored video frames with the output frame containing the masked target object.
In one aspect, if the actuating unit 140 detects an unauthorized face 210 in the close vicinity of the user, the actuating unit 140 is configured to shut down, change, lock, provide an alert on, or darken the display screen 150 of the device 200 (in the case of a desktop or a laptop), or abort the transaction in process (in the case of an ATM machine or a cell phone). In an embodiment, the actuating unit 140 is connected to an actuator 145 which enables the shutting down or abortion of the process.
In one aspect, the system 100 may include non-optic sensors, such as inbuilt accelerometers or a haptic sensor, configured to estimate the age of the user from finger-touch characteristics and frequency of movements and transmit a sensed signal to the actuating unit 140. The actuating unit 140 is configured to receive the sensed information and shut down, hide, or darken the display screen 150 of the device 200 (especially in the case of lockers provided with digital locks).
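The non-optic age estimation can be illustrated with a simple heuristic: a small fingertip contact area combined with a high tap frequency is taken as a hint of an underage user. Both cutoff values and the `estimate_underage` name are made-up illustrations; the disclosure does not specify the sensing rule.

```python
def estimate_underage(touch_area_mm2, taps_per_second,
                      area_cutoff=80.0, freq_cutoff=6.0):
    # Purely illustrative heuristic over non-optic sensor readings: small
    # contact area plus high movement frequency suggests an underage user.
    # The cutoffs are hypothetical, not values from the disclosure.
    return touch_area_mm2 < area_cutoff and taps_per_second > freq_cutoff
```

A positive result would be the sensed signal transmitted to the actuating unit 140 to trigger a preventive action.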
In an aspect, the system 100 is configured to identify any obstruction that may be placed in front of the image capturing unit 105, wherein the obstruction can hide faces, generate or alter facial features, and restrict the access to the device 200.
The actuating unit 140 is further configured to cooperate with a display screen 150 to display a message to the user indicating that the security of the device 200 has been breached.
In an aspect, the intelligent security system 100 is configured to continuously monitor the vicinity of the user. In another embodiment, the actuating unit 140 is fed with a routine for reactivating the screen of the device 200, when no face other than the face of the user 205 is identified by the image capturing unit 105. In another embodiment, the user can bypass the routine and activate the screen.
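The continuous-monitoring and reactivation routine described above can be sketched as a loop over successive output frames: the screen is darkened while any face other than the authorized user's is visible, and reactivated once the intruder leaves. The `screen_state` name and the per-frame face lists are assumptions made for this sketch.

```python
def screen_state(events, authorized="user"):
    # Sketch of the monitoring routine: one list of visible faces per
    # output frame; darken on any non-authorized face, reactivate when
    # only the authorized user remains.
    state = "active"
    history = []
    for faces in events:
        intruder = any(f != authorized for f in faces)
        state = "darkened" if intruder else "active"
        history.append(state)
    return history
```

For example, a stranger appearing in the second frame darkens the screen, and the screen reactivates in the third frame once only the authorized user is visible.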
In an aspect, the intelligent security system 100 can be used in cell phones, tablets, laptops, desktops, ATM machines, and lockers with digital locks.
The foregoing description of the embodiments has been provided for purposes of illustration and not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but, are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and all such modifications are considered to be within the scope of the present disclosure.
TECHNICAL ADVANCEMENTS
The present disclosure described herein above has several technical advantages including, but not limited to, the realization of an intelligent security system and method:
• that is inbuilt into systems such as desktops, laptops and ATM machines in which confidential data is entered.
• that prevents unauthorized personnel from peeking into the system and accessing confidential data while it is being entered.
• that prevents unauthorized personnel from peeking into the system and accessing confidential data while it is being displayed to the authorized user.
LIST OF REFERENCE NUMERALS
100 system
105 image capturing unit / camera
107 output frame
110 processing unit
115 segmentation module
120 face recognition module
125 memory
130 extractor
135 identifier
140 actuating unit
145 actuator
150 display screen
200 device
205 face of authorized user
210 unauthorized face / face of unauthorized user
215 contents of display screen
Equivalents
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results. Any discussion of documents, acts, materials, devices, articles or the like that has been included in this specification is solely for the purpose of providing a context for the disclosure. It is not to be taken as an admission that any or all of these matters form a part of the prior art base or were common general knowledge in the field relevant to the disclosure as it existed anywhere before the priority date of this application.
The numerical values mentioned for the various physical parameters, dimensions or quantities are only approximations and it is envisaged that the values higher/lower than the numerical values assigned to the parameters, dimensions or quantities fall within the scope of the disclosure, unless there is a statement in the specification specific to the contrary.
While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.

Claims

WE CLAIM:
1. An intelligent security system (100), comprising: a memory (125) configured to store a pre-trained neural network model; an image capturing unit (105) configured to sequentially capture video frames of a surrounding environment in real-time; a processing unit (110) configured to cooperate with the image capturing unit (105) to receive said video frames and further configured to cooperate with the memory (125) to implement said pre-trained neural network model to generate a plurality of output frames (107) containing one or more faces; and an actuating unit (140) configured to cooperate with the processing unit (110) to receive said output frames (107) and further configured to cooperate with the memory (125) to implement said pre-trained neural network model to identify presence of an unauthorised face present in said output frames to actuate a preventive action.
2. The system (100) as claimed in claim 1, wherein said memory (125) is further configured to store a plurality of facial features of at least one authorized user (205) of the system (100).
3. The system (100) as claimed in claim 1, wherein the processing unit (110) comprises: a segmentation module (115) configured to receive said video frames and further configured to segment each of the received video frames into a plurality of regions; a face recognition module (120) configured to cooperate with the segmentation module (115) to detect and mask the one or more faces in said regions and further configured to generate a plurality of output frames (107), wherein said face recognition module (120) comprises: an extractor (130) configured to receive said regions, and further configured to extract a plurality of features from said regions; an identifier (135) configured to cooperate with said extractor (130) to receive said extracted features, and further configured to identify one or more faces in said regions by mapping said received features into said pre-trained neural network model and further configured to extract said one or more faces to generate said plurality of output frames (107).
4. The system (100) as claimed in claim 1, wherein said processing unit (110) employs deep learning based neural network techniques to perform scene segmentation, detection, and masking of said identified faces.
5. The system (100) as claimed in claim 1, wherein said actuating unit (140) is configured to compare said stored facial features of said authorized user (205) with said output frame containing said faces to identify the authorized user and unauthorised users.
6. The system (100) as claimed in claim 1, wherein said preventive action is selected from a group of actions including shutting down a device (200) implementing said system (100), changing the display on a display screen (150) of said device (200), activating a lock screen of said device (200), darkening the display screen (150) of the device (200), providing an alert on the display screen, and aborting a transaction in process on the device (200).
7. The system (100) as claimed in claim 1, wherein the system (100) includes non-optic sensors, including inbuilt accelerometers or a haptic sensor, configured to determine the age of the user from finger touch and frequency of movements and to transmit a sensed signal to said actuating unit to initiate said preventive actions when the user is identified as a person below a predetermined age value.
8. The system (100) as claimed in claim 1, wherein said segmentation module (115) is further configured to apply a vision-based segmentation technique to each of the received video frames to segment said frames into said regions.

9. An intelligent security method (2000), comprising: storing (2002), by a system (100), a pre-trained neural network model in a memory (125); capturing (2004), by an image capturing unit (105) of the system (100), video frames of the surrounding environment in real-time; generating (2006), by a processing unit (110) of the system (100), a plurality of output frames (107) containing one or more faces by implementing said pre-trained neural network model; identifying (2008), by an actuating unit (140) of the system (100), presence of an unauthorised face present in said output frames (107) by implementing said pre-trained neural network model; and actuating (2010) a preventive action on positive identification of the unauthorised face.
PCT/IN2022/051028 2021-11-25 2022-11-24 An intelligent security system and a method thereof WO2023095168A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202121054523 2021-11-25
IN202121054523 2021-11-25

Publications (1)

Publication Number Publication Date
WO2023095168A1 true WO2023095168A1 (en) 2023-06-01

Family

ID=86538994

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2022/051028 WO2023095168A1 (en) 2021-11-25 2022-11-24 An intelligent security system and a method thereof

Country Status (1)

Country Link
WO (1) WO2023095168A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190278976A1 (en) * 2018-03-11 2019-09-12 Krishna Khadloya Security system with face recognition
RU2721178C1 (en) * 2019-12-13 2020-05-18 Межрегиональное общественное учреждение "Институт инженерной физики" Intelligent automatic intruders detection system
CN111510675A (en) * 2020-04-13 2020-08-07 智粤云(广州)数字信息科技有限公司 Intelligent security system based on face recognition and big data analysis
CN112183394A (en) * 2020-09-30 2021-01-05 江苏智库智能科技有限公司 Face recognition method and device and intelligent security management system
CN112614260A (en) * 2020-11-19 2021-04-06 马鞍山黑火信息科技有限公司 Intelligent security system based on face recognition and positioning
CN113177469A (en) * 2021-04-27 2021-07-27 北京百度网讯科技有限公司 Training method and device for human body attribute detection model, electronic equipment and medium



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22898129

Country of ref document: EP

Kind code of ref document: A1