WO2022035371A2 - AI based data protection in WFH environment - Google Patents

AI based data protection in WFH environment

Info

Publication number
WO2022035371A2
Authority
WO
WIPO (PCT)
Prior art keywords
agent
call
personal data
module
environment
Prior art date
Application number
PCT/SG2020/050469
Other languages
French (fr)
Inventor
Vineeth NAYAK
Mohana Dhamayanthi Jeyatharan
Mukesh KINI
Binny MATHEW
Saqib KARIM
Saman WEERATHUNGAGE
Venkatesh JV
Original Assignee
Tetherfi Pte. Ltd.
Priority date
Filing date
Publication date
Application filed by Tetherfi Pte. Ltd.
Priority to PCT/SG2020/050469
Publication of WO2022035371A2

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database

Definitions

  • AI Artificial Intelligence
  • The present invention relates to the usage of AI features such as prediction of the correct events by means of supervised or unsupervised techniques, identification of the correct objects and scenarios in an environment, and dynamic countermeasure evaluation from a plethora of real-time logged events and system data, all of which aid personal data protection in a work-from-home environment.
  • IoT Internet of Things
  • The said industries include, but are not limited to, entertainment, real estate, retail and e-commerce, travel, banking and financial services, manufacturing, food tech, health care, and logistics and transportation.
  • An AI engine needs training data and real-time data to operate. Some of these data are obtained from sensors interconnected to the internet via a suitable internet protocol suite. Sensors interconnected using the internet protocol (IP) are collectively called the IoT.
  • IP internet protocol
  • Some of the other data for AI training and real-time evaluation are also obtained from Data Bases (DB) and interconnected third-party systems.
  • DB Data Bases
  • Security refers to data/information security, physical security (human security), security of objects placed in an environment (preventing object removal or theft) and environment security (security of a nation from natural disasters and human war).
  • data/information security We will herein call data/information security simply data security or personal data security.
  • data security There are many types of such data, such as confidential data tied to an industry (trade secrets, industry operational secrets) and personal data, which can include a person's demographics, assets and health-related data, all of which need to be protected at all times.
  • The industries will lose their value if these critical assets are not preserved.
  • Solution providers are providing solutions using many technologies to achieve data protection. This data protection urge and need has significantly increased in recent times globally, where many are working from home; the threat is high in such environments because they are unknown and not under constant supervision.
  • An AI engine for a security solution can make estimations that cannot be made by humans in general (it possibly has much higher cognitive capabilities than humans), can identify threat events at a much higher scale and more accurately than humans, can do so quickly, and can identify these without involving many human resources, in a completely autonomous manner.
  • AI is a broad term. It has many components such as machine learning, object recognition, deep learning, rule-based AI or expert systems, neural-network-based estimation algorithms, etc.
  • the AI algorithms are numerous and are well matured.
  • The term AI engine comprises the AI algorithm used for learning and building the machine-learnt model, the AI algorithm used for real-time estimation using the machine-learnt model, and the AI machine-learnt model itself that is used for estimation and prediction based on real-time inputs.
  • The AI machine-learnt model is a by-product of the AI algorithm used for learning and model derivation.
  • A good AI solution would have continuous learning, with the machine-learnt model updated and evolved during AI run time too.
  • Cloud computing service providers are coming up with AI platforms that can be leveraged by solution providers to build AI-assisted solutions.
  • The said solution providers need only train the AI engine to identify the correct machine learning model, and provide the big data and the appropriate data labels for such training.
  • The high-level logic of the training algorithm can be picked from the AI framework itself, and the basic skeleton of the machine-learnt model can also be picked from the said framework.
  • The AI solution providers could alternatively use many of the AI engine frameworks disclosed in the public domain to support the issue they plan to solve, as an alternative to the AI framework provided by the cloud services.
  • The algorithm used for training can be any one of the algorithms provided by the AI framework.
  • AI frameworks are widely available, and cloud services enable their usage.
  • The AI engine's algorithm, using the big data fed in by data scientists, can identify the suitable machine learning model to predict the needed components as part of the solution space.
  • For supervised learning, the training data labels need to indicate to the training algorithm which input data correlates positively or negatively with the desired output of the said algorithm.
  • This component of AI, machine learning, is mainly used for prediction of future events based on past events and current system behaviors.
  • Another component of AI is object recognition and image recognition. These are necessary to identify objects of concern within an image, similar to how humans do, which aids in providing the system solution. AI object recognition algorithms and deep learning are necessary to identify objects without human intervention. Additionally, many Application Programming Interfaces (APIs) are available to enable object recognition as part of the cloud AI platform. Solution providers can also come up with object recognition algorithms where neural networks are used as part of the algorithm. These object recognition algorithms are already publicly available and can be leveraged where needed when building a solution. Alternatively, new neural-network-based algorithms can be formed to tackle a given object detection task as part of the solution space. Here the neural network parameters, layers, nodes, weights, etc. can be fine-tuned to address a particular object detection within an image. A minimal detection sketch is given below.
  • APIs Application Programming Interface
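As an illustration only (not part of the disclosure), the following Python sketch shows how an off-the-shelf detector could flag a threat-indicative object (e.g., a phone camera) in a webcam frame. The torchvision Faster R-CNN model and the COCO label id for "cell phone" are assumptions; any detector with equivalent output would do.

```python
# Hedged sketch: off-the-shelf object detection for threat-indicative objects.
# Assumes torchvision >= 0.13 (weights="DEFAULT") and COCO id 77 = "cell phone".
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

THREAT_LABELS = {77: "cell phone"}  # COCO id -> name (assumed mapping)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_threat_objects(frame, score_threshold=0.7):
    """Return (name, score) pairs for threat-indicative objects in one frame."""
    with torch.no_grad():
        prediction = model([to_tensor(frame)])[0]
    return [
        (THREAT_LABELS[int(label)], float(score))
        for label, score in zip(prediction["labels"], prediction["scores"])
        if float(score) >= score_threshold and int(label) in THREAT_LABELS
    ]
```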
  • The other aspect of AI is to help in counter-action planning and support, i.e., what course of action to take if a certain event is identified.
  • The AI engine can be fed with real or human-induced/moderated data, from which it can plan the countermeasures that are most appropriate based on the actual or human-fed events of the past and the current state variables.
  • Counter-action planning by the AI engine can also be done based on a rule-based design: identify a condition, then, based on the rule for handling that condition, execute the counteraction. If the prior counteraction was not successful under the existing rule, change the rule. A minimal sketch of this loop is given below.
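A minimal illustrative sketch of that rule-based loop, assuming a simple ordered-fallback rule table; the condition name, counteraction names, and the `execute` callback are all hypothetical:

```python
# Hypothetical rule table: each condition maps to an ordered list of
# counteractions; if one fails, the next one in line replaces it.
RULES = {
    "unknown_person_in_room": ["warn_agent", "block_screen", "transfer_call"],
}

def handle_condition(condition, execute):
    """execute(counteraction) returns True when the threat is cleared."""
    for counteraction in RULES.get(condition, []):
        if execute(counteraction):
            return counteraction    # this rule worked; keep it
    return None                     # no rule succeeded; escalate elsewhere
```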
  • Another factor lacking in the existing solution space is how to engage AI for personal data protection when there is no prior data of personal data being stolen. For example, when personal data protection is planned for a new environment such as work from home, an organization will not have such prior real data as input for the AI machine learning algorithm. Yet another missing factor is variant counteractions that protect the personal data while ensuring no disturbance to the main operations; counteractions suitable for the various different data protection threat events are missing from the solution space. Most existing AI countermeasures are alerts and thus fall under a single type of counteraction. In some cases, due to the lack of past data-theft data, or due to a small amount of past/historical data, the alerts can be falsely generated.
  • Alerts should be avoided unless event detection accuracy is high, because a falsely identified event can be chaotic. Also, in a WFH environment, when alerts are the main countermeasure generated, before action can be taken by a distant supervisor the personal data would already have been taken and the damage done. These alerts are useful to identify the attacker but not to prevent the attack, and in such a case the impact of the data-stealing threat is difficult to reduce. In the case of no past data, with an excessive amount of data taken using various sensors in a new environment, a threat can be detected to a higher degree using existing solutions (i.e., putting the AI engine in a new system for many days of learning, using either supervised or unsupervised learning).
  • The AI engine should help to estimate and prevent personal data exposure in environments where it is not needed (i.e., where personal data need not be revealed), and also estimate and help with suitable countermeasures that prevent the personal data from being stolen if such a data threat event occurs while the personal data has to be revealed in the system for its normal operation.
  • The said speech or video call can be made using other dedicated applications and devices that are not related to the personal data projection application.
  • The agent will want to browse through personal data of the caller/customer on his working desktop. This personal data needs to be protected at all times, because this home agent (HA) environment is not the regular work environment, which is usually heavily secured.
  • The solution we propose in this document is targeted at protecting such personal data during call time, and at other times as well, in such a work-from-home HA environment or other environments of a similar nature.
  • Alert and delayed-action mechanisms, as in existing solutions, do not help.
  • This solution document/report reveals a system and method where AI-assisted protection of personal data in a threat environment is provided, whereby the personal data is prevented from: being shown on screen unnecessarily outside the related call, being shown in abundance, being copied, being stored, being revealed via phone conversation to an out-of-context person, being viewed by unauthorized parties, and being uttered unnecessarily during a call.
  • The said personal data protection system engages an AI engine to achieve prediction of data protection threats, identification of threat entities/objects in the environment, and data analytics to plan the countermeasures in an autonomous, accurate and seamless manner, without impacting normal operations, achieving customer satisfaction and yet preventing personal data theft.
  • Patent Document 1 (PTL1) highlights a solution where the AI does pre-learning of the system to identify the home members before operations occur, in order to distinguish the usual and unusual members of the home environment.
  • the AI pre-learning is based on real time data that is available during the learning phase.
  • the AI engine’s learning objective is to identify who can use a given confined space within the house during various time periods.
  • The learning is simply to identify the usual members within a given confined space within the home, i.e., who is allowed to use a room/hall/kitchen etc.
  • the AI engine results are sent to the system supervisor for validation via an application installed on the supervisor’s smart phone.
  • The AI engine gets a confirmed set of permanent tenants of a home, and who can and cannot access a given confined space (i.e., a small correction to the AI's findings).
  • The solution further comprises a method where the AI system, during operation time, is able to identify an intruder in a confined space within the home domain at a given time.
  • The abnormal behavior can be reported by means of alarms in a supervisor's special hand-phone application.
  • The alert/alarm is sent to the surveillance domain (i.e., the confined space under surveillance), where the alert is manifested as a sound from a loudspeaker attached to the system.
  • This solution only highlights countermeasures such as alerts/alarms, which are not useful in a personal data theft environment because the data can be taken while an alert-based counteraction is being planned; the alert only helps trace the possible attacker. Moreover, this solution does not identify a regular house member's abnormal behavior, such as trying to steal work data. Furthermore, this solution provides no prevention against an agent working from home intentionally or unintentionally exposing the data to attackers. The attacker could be the agent himself, others connected to the agent remotely, or some other home member, and this solution is not helpful in such a malicious home member scenario.
  • Patent Document 2 (PTL2) highlights an AI-engine-based solution where abnormal behavior is detected with the objective of preventing threats in an environment.
  • Trigger conditions for abnormal behavior detection include at least one of: detecting a person in the secured area attempting to hide his identity by wearing a mask or a helmet, detecting multiple people present in the secured area in excess of a predetermined number, detecting the presence of unexpected motion in the secured area, and detecting a face.
  • Some of these abnormal behavior detection rules only help in identifying abnormal people in an area; they do not identify abnormal behavior w.r.t. personal data stealing.
  • This solution, too, highlights alerts/alarms as a countermeasure. If an agent is engaged in a call that involves discussion based on personal data in a threat-induced environment (i.e., some attacker is eavesdropping on the conversation), a countermeasure such as an alert to the supervisor is a delayed countermeasure, because before any action is taken by the supervisor the personal data would already have been taken by the attacker.
  • Patent Document 3 (PTL3) highlights an AI-based surveillance solution where the system detects the threat level and, comparing it with a fixed threshold, sends an alert/notification when the threat level exceeds the said threshold. When the environment threat is lower than the fixed threshold, the system sends a local trigger to a mobile device to improve the safety level in the environment.
  • This solution still has drawbacks, because it primarily detects weapons in the environment, or more humans than the allowed number, to identify or predict a threat. This does not protect against personal data theft, because there are no protection mechanisms to stop an intruder/attacker stealing the data from the threat environment. Moreover, the alert does not impose the needed action at the threat environment immediately. The personal data attacker could be a person in the home itself, and this prior art solution does not have a suitable component to handle that type of threat.
  • Patent Document 4 (PTL4) highlights an AI-based solution where entrants into a secured environment are analyzed by means of many prediction and decision-making systems. The entry data is analyzed and various profiles are derived to determine the threat state by various AI systems. From the results of the various decision-making systems, a common decision is derived on whether to issue an alarm. This solution still fails to predict whether the exact motive of a given entrant profile is to steal personal data. Furthermore, this solution has no means to prevent the personal data from being stolen; the only countermeasure is that an alarm is issued. Alarms, as mentioned in this document, can take a long action time. Also, in a work-from-home environment the alarm may be given to a household member who may himself be an attacker w.r.t. the personal data. Thus this prior art solution, as evaluated and illustrated, is not useful in solving the issue highlighted.
  • Patent Document 5 (PTL5) highlights AI deep-neural-network-based learning and prediction triggered when motion is detected in the environment. This solution is unable to solve the data protection threat in an environment because it gives no indication of the measures necessary to protect the data.
  • Patent Document 6 (PTL6) highlights an AI-based solution where a particular subject, such as a school student, is monitored and abnormal behavior is notified to parents if some threat is detected towards the monitored subject.
  • This expert system is trained to predict a specific behavior, which is different from the personal data threat behavior addressed in this solution.
  • Patent Document 7 (PTL7) highlights an idea about data protection when devices are connected to the internet.
  • AI is used to prevent device data, user data and network data from being stored in the web server; in some cases cookies are disabled, data is modified and data is masked.
  • the device location information is used to trigger the AI algorithm to start the process of data protection.
  • The data protection is for an internal system where the hacker sits in the middle of the data communication framework. Also, the data in question is very much the data sent from one communication end point to another.
  • This solution cannot be used for data protection when data has to be shown to an end point, as in the problem scenario. Also, data masking etc. will not help the work-from-home HA continue his tasks.
  • Patent Document 8 (PTL8) highlights a solution where the sensor data is evaluated by the algorithm to identify the specific type of emergency. Then, based on the emergency type, alerts are sent to various related bodies, rather than a single entity, to expedite the counteraction. This solution depends heavily on sensors and additional system components. Moreover, countermeasures based on alarms/alerts are not suitable for a data protection related system, where the countermeasure has to be swift. If the supervisor to whom the alarm is sent is also an attacker, the solution will not work.
  • The present inventive solution attempts to provide an AI-assisted solution for personal data protection that solves the shortfalls of the prior art solutions.
  • The inventive solution supports normal work operations, such as attending a call where personal data content retrieval is necessary, engaging in the call without any impairment to the call quality of service (QoS), and yet providing personal data protection regardless of whether a threat is present in the call recipient/agent environment.
  • QoS call quality of service
  • The system predicts personal data protection threats using a minimum camera view (i.e., a small number of information devices to capture data from the monitored domain) and without a pre-learning phase for threat prediction.
  • The invention uses object-recognition-based AI algorithms to predict data threats, and also uses machine learning with big data to predict the arrival of a call to the agent that needs personal data retrieval in order to be handled. The details of the invention will be unfolded in the subsequent sections of this report.
  • agent/worker refers to a worker who does work by engaging with a customer using a call etc.
  • this invention can be placed in another environment where the worker could be doing another type of work involving personal data without deviating from the scope and ambit of the invention.
  • the said worker/agent while attending to a customer may need to view and process personal data of the customer or organization to better serve the customer with the highest possible standards.
  • The present invention relates to the usage of AI features such as: predicting events that indicate the need for personal data protection (e.g., a call from a customer that needs personal data viewing by the agent is about to arrive or has hit the system) using big historical data; identifying events/objects at vast scale without human intervention that can act as indicators of personal data theft (the agent work environment has data-threat-indicative objects/scenes, or the worker himself is behaving in a malicious manner); and the AI engine planning the counteraction to the threat in a dynamic manner, where the threat level is gauged dynamically and the counteraction is planned based on the existing data in the system (e.g., available agents of the same skill level, threat level in other agent environments, remaining call duration etc.).
  • This invention is a system and a plurality of methods where customer/organization personal data protection involving a home agent (HA), or any worker/agent in a home environment, is provided by AI technology/engine and other supporting applications that interface with the said AI engine using communication means.
  • This said AI engine, during its normal operations, operates with the help of limited video data from a small number of cameras (mainly only the worker desktop's inbuilt webcam).
  • The said AI engine and its encompassing methods help in authenticating the worker/agent using face detection, emotion detection based on a system-given emotion state, and liveliness detection, and also in authenticating the work environment of the worker/agent to ensure the worker/agent is at his home.
  • The AI engine, as part of approving the on-boarding of the worker/agent to enable use of his work application, ensures that at work start time there are no threats to personal data protection in the work environment.
  • Such an additional on-boarding check is done so that, for the rest of the work hours, data protection threats can be detected using fewer camera data, based on the initially passed check points (door closed, windows closed in the work environment that are not in the camera view, no other person in the work environment other than the worker, all checked during on-boarding time).
  • The disclosure further reveals an AI-assisted methodology where personal data threat activities engaged in the work environment are detected using past-data-based prediction methods, rules, and object/image recognition technology, so that the regular call/customer attending procedure carries on smoothly without trading off data security.
  • The countermeasure methods are: restricted viewing of personal data, whereby personal data is projected in a smaller segment at the middle of the screen when the threat is low; reduced font size of personal data on screen when the threat is low; blocking the personal data projected on the screen if the data protection threat is high; and transferring the call to another worker/agent with the appropriate skill who is in a safer environment w.r.t. personal data protection. A sketch of this threat-level-based selection is given below.
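A minimal sketch of that selection, assuming a threat level in [0, 1]; the threshold value and the action names are illustrative, not taken from the disclosure:

```python
LOW_THREAT_THRESHOLD = 0.3   # assumed boundary between low and high threat

def screen_countermeasure(threat_level):
    """Map an environment threat level in [0, 1] to a screen countermeasure."""
    if threat_level <= 0:
        return "show_normal"            # no threat: normal projection
    if threat_level <= LOW_THREAT_THRESHOLD:
        # low threat: restricted view - smaller centre segment, reduced font
        return "restricted_view"
    # high threat: block the personal data; call transfer is evaluated separately
    return "block_personal_data"
```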
  • The system allows a given customer's personal data to be viewed by the agent only when he is currently handling the call from the said customer.
  • The system allows personal data about the organisation to be viewed only when the agent is using this data for some work done using the agent desktop application, or when he is in a call with another agent within the same system/organization.
  • The system also has methods to prevent copy and paste to the clipboard whenever personal data is shown on the agent screen, regardless of the personal data threat in the environment.
  • The AI engine does not allow the personal data to be projected on the screen, in spite of the agent requesting it, if it detects that the threat in the environment is high.
  • The system also has methods to identify malicious agent acts, such as the agent trying to record personal data using some application running on the work desktop, uttering the personal data over a call to another unrelated person, uttering the personal data at a moment that is not related to the call he is handling, passing the personal data to a remote person via other video session applications running on the agent desktop, or recording the personal data from some other camera such as a CCTV camera.
  • The countermeasures for such malicious acts are that the agent desktop application screen is blocked, and the system evaluates call transfer opportunities to move the call to another agent's safer environment.
  • The AI engine is used in every aspect: detecting the personal data protection threat in the agent's surrounding environment, detecting malicious agent acts w.r.t. personal data protection, estimating whether a countermeasure is needed for a detected personal data threat, detecting the threat severity and planning the countermeasure based on severity, identifying the agent to whom the call can be transferred, agent skill value determination and restoration, incorporating a virtual assistant where needed, deciding on the need for a high-priority callback for the call and implementing it, and enabling the execution of the countermeasure for the threat.
  • the AI engine communicates the threat related information for every applicable WFH environment to the smart surveillance application.
  • The AI engine communicates the following to the smart surveillance application via a predefined interface: counteractions/countermeasures to be enforced on the agent screen; other personal data threat related countermeasures such as transfer; the validity period indication (countermeasure On state, countermeasure Off state) of the enforcement of the counteractions/countermeasures; the worker's modified skill; the threat value if present in an environment; the agent's modified integrity value; the estimated intent of the call; the virtual AI assistant ID that needs to be added into the call; and transfer agent details (Agent identifier (ID)).
  • The information will be sent by the AI engine via the said interface if at least one of these interface data parameters has a non-null value.
  • The smart surveillance application takes these interface parameters from the AI engine and uses them, or sends them to other relevant applications that it interfaces with. A sketch of such an interface message is given below.
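A sketch of that interface message as a data structure, with the send-only-if-non-null rule from the text above; the field names are paraphrased from the parameter list and are not the disclosure's own identifiers:

```python
from dataclasses import dataclass, asdict
from typing import List, Optional

@dataclass
class ThreatInterfaceMessage:
    """Parameters the AI engine sends to the smart surveillance application."""
    screen_countermeasures: Optional[List[str]] = None  # enforced on agent screen
    other_countermeasures: Optional[List[str]] = None   # e.g. ["transfer"]
    countermeasure_validity: Optional[str] = None       # "On" / "Off"
    modified_skill: Optional[float] = None
    threat_value: Optional[float] = None
    modified_integrity: Optional[float] = None
    estimated_intent: Optional[str] = None
    virtual_assistant_id: Optional[str] = None
    transfer_agent_id: Optional[str] = None

def should_send(message: ThreatInterfaceMessage) -> bool:
    # The message is sent only when at least one parameter has a non-null value.
    return any(value is not None for value in asdict(message).values())
```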
  • The smart surveillance application, with the help of the AI engine, performs authentication, on-boarding, re-on-boarding after a threat incident, and complete desktop blocking for the worker in a high-threat environment when personal data is projected and/or voiced.
  • the smart surveillance application further interfaces with the customized normal agent desktop application, customized supervisor desktop application and also call routing support application that handles skill/caller intent/agent ID based routing.
  • the said customized normal agent desktop application is the work application the agent uses to carry on his duties.
  • The said call-routing-based application routes to a particular agent when the agent is identified by means of an agent ID, when an agent cluster is identified by means of a skill, and/or when an agent is identified by means of an intent. This identified skill, agent ID, intent etc. can be given to the routing module indirectly by the AI engine via the smart surveillance application.
  • the skill is tied to a group of agents/workers to identify the type of customer pool these agents can serve based on their skill.
  • These contact centre call routing applications would usually be able to identify an agent with a given agent ID to attend the customer call, given the intent of the customer call, the associated skill related to the intent, the agent ID pool assigned to serve the skill, and the availability of agents from the pool to serve the customer of that particular intent.
  • The skill management in this solution is done by the AI engine whenever WFH is incorporated into the contact centre system. It is further considered that the picking of the agent ID for a call can be done by the AI engine, the call routing engine, or both. Before a call is assigned to an agent, the agent ID picked by the AI engine has the highest priority, because the AI engine is the entity that changes the skills w.r.t. personal data protection and also knows the current status of the personal data protection threat in every agent's WFH environment.
  • The mapping of an intent to a customer call can be identified using various methods. These kinds of predictions are used in the solution, where possible, to pick the agent to handle the call early and to avoid a data protection threat environment for the call.
  • the intent related to the call can be identified using explicit customer selection of the intent at the beginning of the IVR (Interactive Voice Response) call.
  • The AI engine also has the capability to identify the intent based on the route being taken by the call. The AI engine leverages past data relating call routes to various intents, the current call route path, and a machine-learnt model to estimate the intent of the call based on the current IVR selections and the current call route. A sketch of such an estimator is given below.
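As a simplified stand-in for the machine-learnt model, the sketch below estimates intent from historical route-to-intent frequencies; a real implementation would use the trained model described above, and the route encoding (a tuple of IVR selections) is an assumption:

```python
from collections import Counter, defaultdict

# route prefix (tuple of IVR selections so far) -> counts of final intents
route_intent_counts = defaultdict(Counter)

def record_call(route, intent):
    """Log a completed call's route and its final intent (training data)."""
    for i in range(1, len(route) + 1):
        route_intent_counts[tuple(route[:i])][intent] += 1

def estimate_intent(route_so_far):
    """Return (most likely intent, confidence) for the current call route."""
    counts = route_intent_counts[tuple(route_so_far)]
    if not counts:
        return None, 0.0
    intent, hits = counts.most_common(1)[0]
    return intent, hits / sum(counts.values())
```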
  • agent-customer interactions take place by means of calls (Analogue calls or Voice over Internet Protocol (VoIP) calls running using Session Initiation Protocol (SIP)).
  • VoIP Voice over Internet Protocol
  • SIP Session Initiation Protocol
  • IVR Interactive Voice Response
  • the agents may have to do their normal work routines from home via a VPN.
  • This environment poses a very high threat to personal data protection because it is an unsupervised environment, without many surveillance cameras and with no supervisors.
  • The agents may plan to copy or steal the data using desktop tools such as the clipboard, some recording application running on their work desktop and/or a recorder application running on a CCTV placed in the work environment, by taking a picture of the personal data using a phone camera, or by various other means. Additionally, another member of the house may plan to obtain some personal information and use it improperly. How to prevent ill behavior by another household member, and even by a malicious agent, in such an environment is not yet solved.
  • This invention provides an AI-based personal data protection solution for this new environment, the home environment, which has many restrictions, threats and unknowns.
  • The said restriction is that additional sensors and cameras cannot be fitted easily in this environment because it is one's home and a private environment.
  • The said unknown is that there is no historical data-theft data from the home environment for usual AI operation.
  • The traditional alert/alarm-based AI countermeasure is not useful here because the supervisor is in another, physically distant environment and cannot come to prevent the threat.
  • AI and IoT With AI and IoT gaining a strong hold in the technology arena, many problems are currently solved with AI and IoT. Specifically, AI and IoT have started making their footprint in the security domain too. Especially in the surveillance industry, AI and IoT with various types of sensors are used to detect abnormal behavior and trigger alerts.
  • The present invention provides an AI-based solution that combines the various aspects of AI technology in appropriate proportions, without overdesigning the data capturing points, and achieves data protection without trading off the quality of the service rendered to the calling customer.
  • The solution for the above-stated problem has all the components needed to handle the issues in the problem scenario and the solution gaps mentioned above.
  • The solution described has all the needed components to make the agent WFH environment safe in spite of malicious acts that may be planned in the WFH environment to capture the personal data stored in the organization environment.
  • The solution has a method whereby the system does not simply activate countermeasures for personal data protection threat events. It identifies the personal data protection threat and also checks whether, when the threat is present, personal data will be used. Only if the data protection threat and the personal data involvement in the environment both evaluate to 'yes', and this can be estimated accurately, is a countermeasure activated to protect the data while maintaining smooth continuity of normal work operations. If the data protection threat and the personal data involvement both evaluate to 'no', and this can be estimated accurately, a countermeasure is not activated. If personal data involvement in the threat environment cannot be estimated, then the countermeasure will be activated in any case. Even when countermeasures are not executed explicitly in this system, every threat has some impact: even without a countermeasure, the agent skill and integrity points are modified if the threat value in the environment is positive. This solution can act even when the AI engine has no estimation mechanism for personal data involvement during the call, as mentioned above. The decision is sketched below.
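The activation decision above, sketched as a small truth table; my reading of the "cannot be estimated" case is that it applies while a threat is present:

```python
def countermeasure_needed(threat_present, data_involved):
    """data_involved is True, False, or None (None = cannot be estimated)."""
    if not threat_present:
        return False          # no threat: nothing to counter
    if data_involved is None:
        return True           # involvement unknown: activate in any case
    return data_involved      # activate only when threat and data coincide
```

Even when this returns False, the text notes that the agent's skill and integrity points are still adjusted whenever the threat value is positive.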
  • The AI engine is designed such that initial access to the agent desktop work application at the beginning of any work day is only allowed if the home work environment is considered completely safe by the AI engine w.r.t. personal data protection (i.e., safe-proof). Safe-proof herein means that there is no threat in the environment now and the environment is also protected from threats that could appear in the future.
  • the said safety of the WFH environment is detected by predefined rules embedded in the AI engine.
  • The AI engine will use the rules to check the safety aspect. The rules ensure that data protection threat components are not present in the WFH environment and that degradation of the WFH environment's safety from the safe point is reduced.
  • The agent will be given access to the agent desktop work application to commence work only if the agent's face has been authenticated as an approved agent, the agent's work environment is detected as the agent's home work environment, and the video image of the agent during authentication (on-boarding for the given day) is detected to be live by the AI engine.
  • The said liveliness detection is based on evaluating whether the agent renders the correct emotion when the system requests an emotion to be shown, along with other liveliness detection methods used by the AI engine.
  • The on-boarding of the agent to access his agent desktop application is done every day, and this design component additionally tightens security.
  • When the agent logs out from the agent work application, it is considered the end of the current work day, and personal data threat detection will no longer be performed by the AI-assisted system. It is considered in this solution that only at the initial on-boarding of the agent, at the beginning of the work day, is the work environment fully captured by more cameras (such as a dedicated webcam to capture the full room, possibly a 360-degree webcam, or even a CCTV camera already present in the agent's home environment).
  • The solution also considers that subsequent WFH environment capture (i.e., after on-boarding of the agent to the agent desktop work application) is done using a minimal camera set, possibly only the webcam built into the agent desktop.
  • Initial threat detection herein refers to data protection threat detection for threats originating from the WFH environment at the start of work on an approved working day.
  • The data protection threat in the WFH environment refers to threats originating from the environment surrounding the agent and also from the agent himself behaving maliciously.
  • At this stage, the data protection threat detection mainly targets threats originating from the environment surrounding the agent, because at this time there is no call and no personal data is projected on the screen.
  • The personal data protection threat in the agent WFH environment is subdivided into the above-mentioned categories, because these categories generally have non-overlapping threat events.
  • The only data protection threat event common to both categories is an agent using a dedicated camera, such as a phone camera, to attempt to capture the personal data projected on the screen in a manner visible to the inbuilt camera of the agent desktop.
  • Rule 1: The work area should be an enclosed area; only a single door is allowed, and that door should be in the view of the AI engine, which uses a restricted camera view during operation.
  • Rule 4: Only the agent is allowed to stay in the bounded work area; no other person is allowed to stay when work starts.
  • Rule 7: The agent has to be at his work desk facing the desktop.
  • Rule 8: The CCTV belonging to the agent's home has to be kept in the video-recording Off state if the AI engine evaluates it as a threat during the on-boarding approval check.
  • Rule 9: The agent is allowed to access the agent desktop work application only on an approved work day (e.g., the given day is neither a public holiday nor a pre-planned leave day for the agent).
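A sketch of the on-boarding safe-proof gate covering only the rules quoted above (the disclosure implies further rules; all field names are illustrative):

```python
def onboarding_allowed(env, agent, today) -> bool:
    """Evaluate the quoted on-boarding rules against illustrative state objects."""
    checks = (
        env.is_enclosed and env.door_count == 1
        and env.door_in_camera_view,                      # Rule 1
        env.person_count == 1,                            # Rule 4
        agent.at_desk_facing_desktop,                     # Rule 7
        not (env.cctv_recording and env.cctv_is_threat),  # Rule 8
        today in agent.approved_work_days,                # Rule 9
    )
    return all(checks)
```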
  • Another set of environmental data threat detection rules is incorporated into the solution operation space, to detect deviations from the initial approval condition that could further cause environment-based data protection threats.
  • These subsequent threat rules identify certain objects/events in the work-from-home environment that can pose a threat to data protection, considering that during operation after on-boarding only a reduced camera view is available to the AI engine. More details of these additional rules will be revealed in the embodiment(s).
  • This solution uses warnings for data protection threats rather than immediate counteractions that change the route of the customer call.
  • This warning is a solution countermeasure that is only sent when the personal data is not yet projected on the agent screen.
  • When the AI engine identifies such an environment-originating data protection threat during agent work time, it does not immediately send alerts to the agent concerned. It additionally predicts whether a call that needs personal data is going to arrive at that particular agent situated in the data protection threat environment.
  • The system combines the data protection threat level in the agent environment and the prediction of a call that will use personal data arriving at that particular agent into a countermeasure score for warning purposes. If the said score is greater than 0 and none of the contributors to the score has a 0 value, a warning will be sent to that agent.
  • The warning frequency increases with the score value. More details will be described in the embodiment. A sketch of this scoring follows.
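An illustrative combination of the two contributors into the warning score; the product form and the inverse-frequency relation are assumptions, since the text only states the score is positive when no contributor is zero and that frequency grows with the score:

```python
def warning_score(threat_level, call_arrival_probability):
    """Combine the two contributors; zero if either contributor is zero."""
    contributors = (threat_level, call_arrival_probability)
    if any(c == 0 for c in contributors):
        return 0.0                     # no warning if any contributor is zero
    return threat_level * call_arrival_probability

def warning_interval_seconds(score, base_interval=60.0):
    """Higher score -> more frequent warnings (shorter interval, assumed form)."""
    return float("inf") if score <= 0 else base_interval / score
```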
  • The intent value can identify whether the call needs personal data or not, without any predictions.
  • In that case the usage of personal data for the call can be known with 100% accuracy.
  • The AI engine incorporated in the solution will ensure that no call is dropped unhandled, even in the case of such a data protection threat.
  • The system evaluates the probability of the call landing in a data protection threat environment and uses the following methodologies to prevent that. If the call is estimated to land in the threat environment, the AI engine slightly reduces the skill of the agent in the threat environment (i.e., the skill tied to an intent), so that the call has a lower chance of landing in that environment.
  • The AI engine issues a warning to the agent to clear the threat environment, so that when the call lands the environment is safe. After the warning, the skill is restored based on the agent's compliance with the warning; the skill is not restored completely to the original value even after such compliance is shown.
  • The AI engine operates such that, for a given caller intent, a skill range is defined. Basically, the system defines a skill range of non-discrete values related to an intent; within the range, agents with higher skill for that intent are assigned to attend an intent-based call, where the skill value is used to indicate priority. All skill modifications for an intent have to fall within this skill range. If an agent is assigned a skill value outside the defined range, the agent is considered to have no skill related to the particular intent and cannot be assigned to attend the related intent-based call. A minimal sketch follows.
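A minimal sketch of intent-scoped skill modification under the range rule above; values falling outside the range mean the agent loses eligibility for that intent (function and field names are illustrative):

```python
def modify_skill(current_skill, delta, skill_range):
    """Apply a skill change; None means the agent left the intent's range."""
    low, high = skill_range
    new_skill = current_skill + delta
    if not (low <= new_skill <= high):
        return None            # agent considered unskilled for this intent
    return new_skill

def pick_agent(available, skill_range):
    """Among in-range (agent_id, skill) pairs, pick the highest-skilled agent."""
    low, high = skill_range
    eligible = [(a, s) for a, s in available if low <= s <= high]
    return max(eligible, key=lambda pair: pair[1])[0] if eligible else None
```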
  • The solution incorporates prediction components such as: the prediction of call arrival at an agent serving a particular intent in a threat environment (applicable when an IVR call for a given intent has hit the system but is not yet routed to an agent, or a call for an agent of a particular intent has been put on hold in a queue from the beginning of the call until being served), using an estimation of the total number of agents available at the future time when the said call hits an agent of the intended intent. More details will be given in the embodiment.
  • The solution has a warning component to make the environment safe before the call lands at the agent (as mentioned previously). But the agent may be confused as to what actions to take to ensure safety when the warning is issued.
  • the AI engine will send such safety restoration details too. This design principle is used to quickly restore the safety of the environment.
  • The AI engine identifies threats on a continuous basis, such as periodically, and otherwise based on events such as a call arriving into the system, a call about to be transferred to another agent, etc.
  • This periodic basis is needed to track agent compliance with the security principles in the system.
  • Periodic checks sometimes expedite the decision making w.r.t. personal data protection. If an agent shows less compliance, then there is always a threat in that agent's WFH environment. But the countermeasure for a threat is only planned if personal data is already exposed, or planned to be exposed, while the threat is present in the work environment of the agent who is supposed to handle the call. If a warning is sufficient to restore environment safety, then there is no need for extreme countermeasures.
  • the solution has such design components.
  • In addition to detecting call arrival at a particular agent when the system already knows the call is for a particular intent (e.g., a callback call planned ahead, or the customer selecting the intent at the beginning of the call in the IVR system), the AI system also has the ability to detect a call arriving into the system for a particular intent and hitting a given agent (applicable in contact centers where the intent cannot be identified at the beginning of the call). Such predictions are incorporated into the solution space so that early measures can be put in place for a smooth call experience in a threat-induced environment. More details will be given in the embodiment.
  • The AI engine is also able to identify malicious events w.r.t. personal data threats that are related to a malicious agent.
  • The AI-based solution is able to identify malicious agent events, such as when personal data is exposed on the agent's screen and the agent himself acts maliciously to copy this data for later usage, e.g., using a camera to capture the data.
  • The AI engine tracks whether the agent is trying to record moments when personal data is projected on the screen. The AI engine also tracks the agent trying to utter personal data audibly at an unrelated moment as perceived by the AI engine, trying to pass the personal data via a call to another unrelated person, or passing the personal data in real time using any third-party application on the agent desktop. More details will be revealed in the embodiment. If such threats are present, the system will block personal data from being shown on the screen and will subsequently consider another agent or an AI assistant to handle this call within the call's session.
  • The solution also has preventive methods and countermeasures, such as preventing personal data projection at unnecessary times (i.e., outside call time, or when a call does not need a particular item of personal data).
  • Clipboard copy and paste is disabled on the agent desktop while personal data is projected in the agent desktop application. If the environment threat present when personal data is exposed on the agent screen is high, the system ensures the agent desktop application is immediately blocked.
  • When the AI engine predicts that a call will arrive at a given threat environment (identifying that the skill of this agent will enable routing towards the agent) and the environment threat is difficult to clear (or too late to clear), it will further reduce the skill of the agent to prevent the call from landing in this environment. This skill reduction is in addition to the initial skill reduction applied when the warning is issued as the call first hits the system.
  • The high-priority callback is such that when a callback is decided, the callback call is put in the queue to get an agent; thus, when an agent is available and ready, the callback call gets handled immediately. If it is not in the queue, the agent has to complete the calls in the queue and only then look at the callback call. A sketch of such a queue follows.
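A sketch of such a queue using a priority heap; the two-level priority scheme and FIFO tie-breaking are assumptions consistent with the description:

```python
import heapq
import itertools

_sequence = itertools.count()   # FIFO tie-breaker within a priority level
CALLBACK, NORMAL = 0, 1         # lower value = served first
call_queue = []

def enqueue(call_id, is_callback=False):
    priority = CALLBACK if is_callback else NORMAL
    heapq.heappush(call_queue, (priority, next(_sequence), call_id))

def next_call():
    """Pop a queued callback call first, else the oldest normal call."""
    return heapq.heappop(call_queue)[2] if call_queue else None
```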
  • the solution also supports re-onboarding of the agent after the agent desktop application screen is locked.
  • the re-onboarding of agent to agent desktop application has many check points before re-granting access to the application.
  • The re-onboarding check points and conditions applied by the AI-based system are: quick response (QR) code validation of the agent ID; dynamic code generation and notification to the agent's smart phone (this dynamic code is used as a dynamic password); agent face detection; agent face liveliness detection; evaluation of the data protection status of the agent's work environment; a satisfactory agent compliance/integrity score based on past violations; and agreement given by the supervisor.
  • the system presents the agent’s security incident details and the net compliance/integrity score to the supervisor.
  • the supervisor can make decision on re-onboarding or not based on the information projected to the supervisor.
  • The said supervisor can give or withhold agreement.
  • The AI engine also presents the contact center call performance statistics to the supervisor if the agent under re-onboarding evaluation is not on-boarded. If the call volume for the given intent is predicted to be low, then the re-onboarding of the agent can be delayed; otherwise, if the call volume for the intent is predicted to be high, the agent has to be re-onboarded. These details are given to the supervisor so that he can make an appropriate re-onboarding decision using many metrics. More details will be given in the embodiment(s). A sketch of the re-onboarding gate follows.
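A sketch of the re-onboarding gate implied by the check points listed above; the threshold value and all field names are illustrative, not the disclosure's own:

```python
def reonboarding_allowed(agent, env, supervisor_agrees,
                         integrity_threshold=0.5) -> bool:
    """All check points must pass and the supervisor must agree."""
    checkpoints = (
        agent.qr_code_valid,            # QR validation of the agent ID
        agent.dynamic_code_confirmed,   # one-time code sent to agent's phone
        agent.face_detected,
        agent.face_is_live,             # liveliness detection
        env.data_protection_safe,       # work environment evaluation
        agent.integrity_score >= integrity_threshold,
    )
    return all(checkpoints) and supervisor_agrees
```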
  • The solution identifies environment-related threats to data protection using AI components such as image recognition/object recognition and rules to estimate/predict the threat, rather than big data analytics and machine learning to build a machine learning model.
  • This is because past data is not available to adequately train the machine learning component of the AI.
  • Also, a machine learning algorithm cannot be put in training mode for a long time to get accurate results.
  • This solution uses the needed components of AI and yet enables identification of data-protection-related environmental threats in a shorter time, without relying on past data.
  • The object detection can use many off-the-shelf solutions where needed. However, the solution does not prevent the object recognition component from being developed as an independent entity as well.
  • This solution identifies personal data related threats using many check points that are currently not available in the existing solution space. It tracks not only the external environmental threats that are usually tracked in many existing solutions, but also, using AI technology, any malicious behavior of the agent, such as trying to copy the data, record the data, utter the data at an unrelated moment, or reveal the data in a phone conversation.
  • the countermeasures are not fixed.
  • The countermeasures are dynamic and vary based on the degree of severity.
  • The countermeasures are carefully picked in a dynamic manner based on the severity and the state of the system (the number of agents, their skills, the importance of the call/special treatment of the call, the QoS currently experienced during the call, the call volume, etc.).
  • The solution has another advantage in that excessive data is not obtained from the environment to detect the threat. Unnecessary and vast data can be costly to obtain and maintain; in some cases such threats are a rarity, and only the needed data suffices.
  • Data protection is achieved by strict countermeasures: when the threat level is high, the data is completely hidden in the threat environment (i.e., either the shown data is blocked or data is not allowed to be projected on the screen despite a request), and using the data again in such a threat environment needs many steps of authentication. This improves data protection security in the system, because the agent will comply more with a secure work environment.
  • The solution uses many 'personal data exposure to threat environment prevention' techniques. Some of these are: the solution enables the agent skill to be modified so that the call can be routed to another agent and does not land in a threat-related environment; the solution sends a warning to clear the threat environment; and the solution enables dynamic transfer to another agent if a threat is identified during a call.
  • Conventionally, agents' skills are static.
  • In this solution, skill is modified dynamically to ensure that a call that needs personal data to be revealed/examined lands at an agent who has better environment safety and adequate technical skills to attend the call.
  • In existing solutions, the agent to whom the intent-based call is to be routed and by whom it is handled is evaluated only once.
  • In this solution, the appropriate agent is continuously evaluated among the available agents, because the agent skill level changes dynamically to fit the data protection threat. This dynamism helps achieve correct call handling under such external dynamic threat events.
  • The solution engages an AI assistant.
  • The AI assistant helps to reveal the personal data to the agent in audio mode, so that the information cannot be obtained by malicious entities and the agent in a threat environment can still continue with the call.
  • The solution has many agent skill restoration methods, where restoration takes place when the restoration criteria are satisfied. This solution component is useful because technically skilled agents can still engage in call handling once they show compliance with safety.
  • The solution also has compliance or integrity points. These enable penalizing agents so that they improve security best practices in their work domain.
  • Fig.1 is an exemplary system diagram, which highlights the AI assisted smart personal data protection system where the AI engine is hosted in the cloud and leveraging the cloud AI platform.
  • Fig.2 is an exemplary system diagram, which highlights the AI assisted smart personal data protection system where the AI engine is hosted along with other applications in a local restricted environment.
  • Fig.3A highlights the solution concept related to AI assisted authentication and environment safe proofing before on-boarding an agent into a work day.
  • Fig.3B highlights a given group of AI assisted solution concepts related to preventing personal data from being viewed at unrelated moments and also detecting threat originating from malicious agents.
  • Fig.3C highlights another group of AI assisted solution concepts related to preventing personal data from being viewed at unneeded moments and also detecting threats originating from malicious agents.
  • Fig.3D highlights a given group of AI based solution concepts related to preventing a call being landed to an agent/worker environment when the agent environment is detected as not safe.
  • Fig.3E highlights a given group of AI based solution concepts related to ensuring a call which needs personal data is handled smoothly in spite of the fact there is threat present in the agent environment to which the call initially landed.
  • Fig.3F highlights a given group of AI based solution concepts related to ensuring the worker/agent is on-boarded again after his work environment is detected to be safe again and also highlights solution concept related to AI assisted restoration of skill points of agent/worker after threat incident.
  • Fig.4 is an exemplary AI engine software architecture diagram, when AI related modules are clustered together as a single application.
  • Fig.5A highlights the exemplary flowchart of the AI assisted main system operation when the system is in the initial rollout state of operation w.r.t. an agent and in the authentication phase.
  • Fig.5B highlights the exemplary flowchart of the AI assisted main system operation when the system is in a state where the call for a given agent has hit the system and the agent/work environment is not yet safe but the call has not arrived yet to the agent.
  • Fig.5C highlights the exemplary flowchart of the AI assisted main system operation when the system is in a state where a data protection threat is evaluated as present and the call which needs personal data has hit the agent/worker. It also highlights the operation where the threat condition is cleared in the agent environment when the call arrives.
  • Fig.5D highlights the exemplary flowchart of the AI assisted main system operation when system restoration related to skill and integrity points happens after a given data protection threat event.
  • Fig.5E highlights the exemplary flowchart of the AI assisted main system operation when the system predicts that in the future a threat state may happen when the call arrives based on the current condition/state.
  • Fig.6 highlights the sequence diagram of re-boarding the agent/worker to use the system after a data protection threat has blocked the screen and the agent work application has become locked.
  • Fig.7 highlights the exemplary flow chart of the smart surveillance application that is interfaced to the AI engine and also has the capability to control the restricted and blocked information to be shown on the worker/agent desktop.
  • In a system (e.g. a contact center system) with intents, whether the call needs personal data or not can be identified from the intent value of the call itself, without any estimation.
  • Call center systems can be designed in such a manner that, from the call’s intent value itself, the system knows whether the call needs personal data for its handling.
  • This intent can be explicitly selected by the customer during the call lifetime or route. It is considered that in such systems, the intent value provides a definite indication of personal data involvement or non-involvement for the call.
  • the solution operation is such that, when the call intent value implies personal data is needed to handle the call, and a personal data protection threat is detected for the call landing environment, appropriate countermeasure(s) are put in place to handle the threat.
  • the call is transferred where possible to another agent in safer environment before the call comes to the threat environment or if the threat happens after the call hits the agent, then the call will get transferred to another agent in a safer environment where possible.
  • conversely, when the call intent value implies that no personal data is needed to handle the call, the solution operation is such that when a personal data protection threat is detected for the call landing environment (i.e. the WFH environment of the agent picked to handle the call), the countermeasures will not be put in place (i.e. the threat in the environment is ignored).
  • in this case, a countermeasure such as call transfer, or any threat based countermeasure, will not be implemented.
  • the AI engine knowing the intent value of the call and the relationship of the intent to personal data involvement is able to do such call handling.
  • when the intent value of the call implies that no personal data is needed to handle the call from the customer, the system prohibits the agent handling the said call from requesting to view personal data tied to the said call’s customer during the call.
  • the personal data related to the called-in customer is not allowed to be viewed within the system, as the call’s intent is identified as an intent that does not need personal data to be viewed.
  • the core point of the invention is highlighted here: the countermeasure for a data protection threat only takes place if there is a personal data threat in the WFH environment, the call has a high chance of landing in (or has landed in) the said threat environment, and the said call needs personal data during its handling period. Whether the call needs personal data can be evaluated with reasonably high accuracy; one example is separate call intents assigned for calls that need personal data to be viewed. A minimal sketch of this decision logic is given below.
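  • As an illustrative, non-limiting sketch (the predicate and parameter names are assumptions, not part of the claimed system), the core-point decision can be expressed as a single conjunction:

```python
def countermeasure_required(threat_in_environment: bool,
                            call_lands_in_threat_env: bool,
                            call_needs_personal_data: bool) -> bool:
    # A countermeasure (e.g. call transfer to a safer agent environment) is
    # warranted only when all three core-point conditions hold together.
    return (threat_in_environment
            and call_lands_in_threat_env
            and call_needs_personal_data)
```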
  • in other systems, the call’s intent value does not obviously identify whether personal data is needed or not. Basically, such systems do not have intents as a means to identify personal data involvement.
  • in such systems, the relation of the call’s intent value to the involvement of personal data can only be estimated, for example from the routing path the call took within an IVR call; based on the route of the call, this can be estimated to some degree.
  • if personal data involvement is not needed and this can be predicted with accuracy > 90%, then such calls do not need any countermeasure, such as call diversion/transfer to another agent, unless the personal data is used while a data threat is present in the environment.
  • if personal data involvement is needed, this can be predicted with accuracy > 90%, and the call will land in a data protection threat related WFH environment, then such calls will generally have the countermeasure of call diversion/transfer to another agent.
  • the countermeasures are always in place when the personal data threat is detected in the environment and the said call will land in such threat induced WFH environment.
  • the countermeasures are also in place when a call is happening in an environment and a personal data threat suddenly arises in the call environment.
  • Fig. 1 specifically highlights the system diagram of a contact center (CC) environment where one or a plurality of agents (109) are working from home and a supervisor (105) who monitors their work and conduct is also working from home.
  • the said supervisor in the system could be one or a plurality of supervisors.
  • the WFH environment of agent is described by 100 and the WFH environment of supervisor is described by 101. It is also considered that in the WFH environments (100, 101), the agents have to engage in calls as part of their daily work operations.
  • the supervisor also can take the role of the agent. These calls that hit the contact center can traverse via the IVR system before reaching the agent.
  • the agent’s help will be required in the call handling if the IVR supported functionalities are not sufficient, or if the customer needs clarifications and an agent’s involvement in the call is necessary.
  • the solution operation in the contact center environment is not restricted to IVR calls only. One skilled in the art would know that the core concept of the solution presented in this embodiment is applicable even in contact centers that do not engage IVR for call handling. The solution operation is also applicable in environments where a call hits a contact center in which IVR is not in place, or where IVR is not engaged for the given call even though the CC has an IVR component.
  • the system in Fig. 1 highlights some threat components such as phone with camera, an unlocked door or an intruder with an intention to copy the personal data in some form in the WFH environment.
  • threat components are highlighted using components 102 and 103 respectively in agent and supervisor environment in Fig. 1. It is generally considered that these threat components are not always present in the WFH environment.
  • the AI engine (115) situated in the cloud environment (116) only retrieves minimal data as obtained from the agent desktop’s inbuilt webcam (the agent desktop’s inbuilt camera has a limited view). It is considered that the AI engine (115) communicates with its own DB (118) using the interface (117).
  • the work desktop application is a web application, which the agent uses to carry on his daily activities.
  • This said application is hosted in the customer environment (125) and runs on the agent desktop browser when activated.
  • the supervisor work desktop application can be the application (125) and additionally the supervisor will use the application (126) that has features specifically targeting the supervisor.
  • This supervisor specific application (126) can also be hosted in the organization’s customer premises (121). All the applications that are hosted in the customer premises (121) can use a DB (123).
  • the data for AI engine will traverse via internet (113) using the communication interface(s) (110, 114).
  • Since, as mentioned, only a restricted camera view is used by the AI engine (115) during its operation, it is essential that the only door to the confined agent work space is in the inbuilt camera view, as highlighted by components 107 and 104 in Fig. 1.
  • the said door is considered one of the highly probable threat entry points (e.g. a malicious person trying to come into the work environment) and thus should be in the restricted camera view during operation times, so that threat related events can be identified early, from the opening of the door itself.
  • the pre-installed CCTV cameras (108, 106) in WFH environments can be seen.
  • Some WFH environments may have a CCTV camera pre-installed (108, 106) in the specific work room of the agent and/or supervisor, as illustrated in Fig. 1.
  • This is an out of scope installation relative to the current solution. That is, CCTV is not a mandatory part/component of the system solution; the agent/supervisor has it as part of their daily operation to protect their own home environment. But if such a camera is present in an agent (or even supervisor) WFH environment, then it has to preferably be integrated into the system, since it may pose a threat to personal data protection (i.e. the CCTV may be able to record the personal data appearing in the agent desktop work application as a series of images or as a single video file). By the said integration, the system may be able to detect the threat caused by it and, if it is a threat, set the CCTV (108) video recording to ‘Off’, monitor the video recording ‘Off’ state, and hence reduce the threat induced by the CCTV (108) present in the WFH environment.
  • the mentioned ‘Off’ does not mean the CCTV (108) system power is off. It implies that CCTV (108) video capture and video recording are kept in an ‘Off’ state. Basically, the said ‘Off’ implies video recording is in an ‘Off’ state; likewise, ‘On’ herein implies video recording is in an ‘On’ state.
  • the CCTV integration to the system will happen in CCTV ‘On’ state only when CCTV operating in ‘On’ is not considered as a threat to personal data protection.
  • CCTV (108) has to be integrated into the system. If, during agent on-boarding time, CCTV (108) integration is not detected in spite of a CCTV being present in the WFH environment, then on-boarding will fail. This is because the AI engine (115) is able to detect the CCTV (108) presence using an object recognition method and will subsequently trigger the smart surveillance application (122) to put the needed ‘test text message’ on screen for deciding on the CCTV integration. After the ‘test text message’ is sent to the agent screen, the AI engine (115) will expect the CCTV (108) capture of this said message to be returned to it. If such a captured message is not delivered to the AI engine (115), it will consider that the CCTV (108) is not integrated into the system.
  • the CCTV camera (108), before being evaluated by the AI engine (115) as to whether it should be integrated with video recording On or video recording Off, will have its recording in the On state and will send the evaluation related captured video to the AI engine for evaluation.
  • the said evaluation related captured video is the video comprising the ‘test text message’ projected on the agent desktop screen.
  • the communication between CCTV system (108) and the system hosting the AI engine (115) will be realized by the interfacing unit present in the CCTV system (108).
  • the primary objective of the interfacing unit is to send the evaluation video recording to AI engine to decide on the mode of integration of CCTV (108) and also to receive trigger from smart surveillance application as to turn On or turn Off CCTV (108) video recording.
  • the said interfacing unit will receive triggers such as ‘send recorded data’ (i.e. send the live captures to the AI engine), and this said trigger will be sent by the smart surveillance application.
  • This additional video from the CCTV (108) (i.e. the said live captures) is sent simply to check whether the CCTV (108) position has changed after it has been determined that there is no threat from the CCTV (108).
  • the said interfacing unit also has capability to use these triggers and perform certain functionality on the CCTV system (108) such as stop CCTV video recording, start CCTV video recording, send CCTV recorded data to AI engine and stop sending CCTV recorded data to AI engine. This invention briefly highlights the function of such interfacing module.
  • In such an integrated state, before the CCTV integration mode is decided (i.e. CCTV integrated in ‘Video recording Off’ mode or ‘Video recording On’ mode to the system), the AI engine (115) first checks, during on-boarding time of the agent to his work desktop application, whether a text related image (i.e. a ‘test text message’) shown on the agent desktop and captured by the CCTV (108) is clear enough for a human to read and interpret. It also checks whether the AI (115) evaluated/identified text obtained from the image captured from the CCTV (108) is identical to the ‘test text message’ projected on the agent screen. The AI engine (115) does these two evaluations once it gets the CCTV’s (108) capture of the ‘test text message’ during the on-boarding time.
  • This ‘test text message’ shown during on-boarding cannot be composed using personal data and should be of human readable clarity if viewed by the agent sitting near the desktop.
  • This ‘test text message’ is randomly generated.
  • the clarity of the CCTV (108) captured image of this said ‘test text message’ is very much dependent on the position of the CCTV (108) camera w.r.t. the agent desktop screen and also on any ‘zoom in’ features used for capturing/recording by the CCTV (108). It is considered that during on-boarding time the CCTV (108) is in the maximum possible ‘zoom in’ state, so that the image of the ‘test text message’ is captured at the highest magnification the CCTV (108) camera supports without trading off the quality of the image.
  • the ‘zoom in’ feature of CCTV (108) camera can be set to maximum by the system using appropriate interface protocols running between the solution system and the CCTV (108) system.
  • the reason for capturing the said ‘test text message’ with the best possible zoom in condition is to ensure that the CCTV (108) image even in the best possible operation condition of CCTV’s image capturing state does not pose a threat w.r.t. personal data protection.
  • depending on the CCTV (108) position and zoom capability, the clarity of the CCTV (108) captured ‘test text message’ image may be low; the ‘test text message’ image captured by the CCTV (108) may not be in human readable form and will have degraded clarity.
  • the said ‘test text message’ shown on the agent work desktop during the on-boarding check is sent by the smart surveillance application (122). When this text is captured by the CCTV (108) camera as a video image and sent back to the AI engine (115) as part of the on-boarding process, the AI engine (115) will evaluate its human readability clarity (i.e. whether the text message captured by the CCTV (108) camera can be read and interpreted by humans) and also evaluate whether it matches the original test text message.
  • the characters retrieved from the CCTV (108) image must have identical character sequencing to that of the ‘test text message’ that was shown on the agent desktop, and the font, font size and color should also be the same.
  • This matching is needed to ensure that the CCTV image is really a live captured image of the ‘test text message’.
  • This text message will be called ‘evaluation text message’ or ‘test text message’ in an interchangeable manner in this document.
  • the CCTV system (108) will capture this ‘evaluation text message’ and send these test text message related captured images to the AI engine (115) during on-boarding time.
  • if both checks pass (the captured text is human readable and matches), the AI engine (115) detects a data protection threat from the CCTV camera (108) and thus only allows the CCTV (108) to be integrated in an ‘Off’ state (i.e. no image capturing/recording is allowed from the CCTV (108) after the agent has on-boarded to his work application).
  • the AI engine (115) performs this readability check on the ‘evaluation text message’ captured by the CCTV (108) so that it can evaluate/predict whether, when the CCTV (108) is in the ‘recording On’ state and real personal data is exposed on the agent screen, the CCTV could capture readable images of this personal data and cause a threat. If the AI engine (115) can read the said ‘evaluation text message’, or infer that it is of human readable clarity, and also confirm that the message captured by the CCTV was the one projected on the screen, then it can correctly be inferred that personal data text projected on the agent desktop screen, if captured using the said CCTV (108), could be read and interpreted correctly by a malicious attacker. A simplified sketch of this integration decision follows.
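  • A minimal sketch of the integration decision (simplified: the OCR output of the AI engine’s text-image recognition model is assumed to be supplied as `cctv_ocr_text`, and the full check would also compare font, font size and color; the threshold is illustrative):

```python
from difflib import SequenceMatcher
from typing import Optional

def decide_cctv_integration(projected_text: str,
                            cctv_ocr_text: Optional[str]) -> str:
    """Decide the CCTV integration mode from the OCR result of the CCTV's
    capture of the 'test text message' projected on the agent screen."""
    if cctv_ocr_text is None:
        # No capture of the test text message was returned to the AI engine:
        # the CCTV is treated as not integrated and on-boarding fails.
        return "onboarding_failed"
    match_ratio = SequenceMatcher(None, projected_text, cctv_ocr_text).ratio()
    if match_ratio >= 0.95:   # illustrative threshold
        # Capture is human readable and matches the projected message:
        # a data protection threat, so integrate with recording kept 'Off'.
        return "integrate_recording_off"
    # Degraded/unreadable capture: no threat, integrate with recording 'On'.
    return "integrate_recording_on"
```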
  • a CCTV (108) threat is identified even before personal data is exposed and proactively the CCTV (108) recording state is set to ‘Off’ if threat identified using ‘evaluation text message’ projected on screen.
  • if a threat is detected for the CCTV (108), it has to be integrated into the system but operated in the ‘Video Off’ state, so that it does not capture/record personal data related images and store them in the CCTV (108) system during the agent’s work time.
  • the CCTV integration in video Off mode is done.
  • the threat related to the CCTV is estimated and if threat the CCTV set to ‘video off’ mode.
  • the video off mode can be set by the agent or by the system.
  • In the ‘recording Off’ state of the CCTV (108) camera, it is considered that the smart surveillance application (122) will still check whether the CCTV system (108) is still integrated into the system. Such checks will be based on heartbeat messages sent from the CCTV system (108) to the smart surveillance application (122). This heartbeat message is received by the smart surveillance application (122) and carries the video recording Off state as one of its parameters. If the heartbeat is missing, or the Video Off state is not present in the heartbeat message, then an alert/warning will be sent to the agent by the smart surveillance application. If personal data is being projected on the screen and the heartbeat is missing or carries an incorrect parameter such as ‘video On’, a desktop screen block will be triggered, assuming a malicious behavior such as the CCTV (108) being disconnected from the system by a malicious act.
  • the AI engine (115) will identify this event of CCTV disconnection, or of the CCTV being turned to ‘On’ mode, as a malicious act, and will trigger the smart surveillance application (122) to apply the countermeasure, such as agent screen locking, when personal data is projected on the agent screen at this detection point.
  • the CCTV interfacing application that sends the said heartbeat via the API to smart surveillance application can have any appropriate design incorporated to transport the video recording state of the CCTV camera.
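  • As a minimal sketch (the heartbeat payload shape, field names and timeout are assumptions, since the patent only requires that the recording state be carried as a parameter), the heartbeat check on the smart surveillance application side could look like this:

```python
import time
from typing import Optional

HEARTBEAT_TIMEOUT_S = 10.0   # assumed heartbeat interval budget

def check_cctv_heartbeat(last_heartbeat: Optional[dict],
                         personal_data_on_screen: bool,
                         now: Optional[float] = None) -> str:
    """Evaluate one heartbeat cycle; `last_heartbeat` is a hypothetical
    payload like {"ts": <epoch seconds>, "recording_state": "off"}."""
    now = time.time() if now is None else now
    missing = (last_heartbeat is None
               or now - last_heartbeat["ts"] > HEARTBEAT_TIMEOUT_S)
    wrong_state = (not missing
                   and last_heartbeat["recording_state"] != "off")
    if missing or wrong_state:
        # A malicious act (e.g. CCTV disconnected or turned On) is assumed
        # if personal data is on screen; otherwise only warn the agent.
        return "block_screen" if personal_data_on_screen else "warn_agent"
    return "ok"
```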
  • if the captured ‘evaluation text message’ is not human readable, the solution allows the CCTV (108) to be integrated in an ‘On’ state during the on-boarding process.
  • here ‘On’ means the video recording On state. Because these images are not clear, it is considered that the CCTV (108) can perform its usual operation, as its image captures are not considered to pose a threat w.r.t. personal data projected on the screen. In this ‘On’ state, it is further considered that the smart surveillance application (122) will prevent the video image/data captured from the CCTV (108) from being sent to the AI engine (115) in a continuous manner.
  • the smart surveillance application (122) may set a state at the CCTV system (108) to prevent the images to be sent to AI engine (115) if recording state is set to ‘On’ or the interfacing application running in CCTV (108) need not send image by default to the AI engine (115).
  • Such evaluation of the CCTV (108) integration state (i.e. whether it should be integrated with recording in the ‘On’ state or the ‘Off’ state) into the system is decided before allowing the on-boarding of the agent to his work desktop application. Even in this ‘On’ state, which does not imply a data protection threat, the CCTV (108) needs to be integrated into the system.
  • the smart surveillance application (122) and AI engine (115) can use the CCTV (108) for its benefit if it needs more information/wider view during on-boarding or other times to safe proof the WFH environment.
  • the AI engine can at any time check the CCTV (108) camera view by checking its recorded images for personal data captures. If the CCTV (108) has been maliciously shifted, then from the change in recorded image clarity (e.g. from unreadable to readable, achieved due to the shifting of the CCTV (108) camera), the AI system (115) can predict a malicious behavior and again set the CCTV (108) recording state to ‘Off’.
  • legacy optical character recognition (OCR) has been identified to have some drawbacks when it comes to identifying moving images with text and identifying images with text that is slightly blurred.
  • Recently, AI based image recognition and object vision technologies have improved significantly, and these technologies use various neural network based deep learning algorithms to identify text in images with clarity similar to that of humans.
  • This solution uses or considers such a text based image recognition algorithm running in the AI engine (115) that can achieve accurate text recognition similar to a human, and the solution uses this capability to prevent the threat coming from CCTV image recording. It is considered that the neural network based algorithm, or any AI image recognition algorithm used for text image recognition, is trained and built to identify characters of any font, size and color that can be recognized by the human eye.
  • the said trained algorithm is even able to identify characters in text images that are blurred, as long as they can be identified by humans.
  • To train this said neural net algorithm and finalize the algorithm structure, many images of text comprising various text compositions and various levels of image clarity are used in the training mode.
  • the exact mathematical structure of the said neural network algorithm, the training methodology, etc. are outside the scope of the current invention.
  • the current invention utilizes the existing art and leverages it for such CCTV related threat detection.
  • the smart surveillance app (122) will project text images/messages of various fonts, sizes, colors and various compositions of characters onto the agent screen to assist the on-boarding check for CCTV integration (the images projected are usually of a font size that is generally used in any web application to show personal data). It is generally considered that a series of such different ‘evaluation text messages’ is shown during each on-boarding event, and for each on-boarding event a different set of such series of text messages is shown.
  • Such different sets of ‘evaluation text messages’ are shown to tighten security, so that a malicious user cannot pre-record a predicted ‘evaluation text message’ on the CCTV (108) and defeat the system.
  • These will be captured by CCTV camera (108) and sent to AI engine (115) via the interface (110).
  • the AI engine (115) also has capability to get the needed images from the environment (100, 101) in addition to its AI based features for detection and predictions.
  • the AI engine (115) can get these video recordings from the CCTV system (108) using any real time streaming protocol, which could well be an application running the WebRTC (Web Real-Time Communication) stack or an application that uses the WebRTC stack via an Application Programming Interface (API).
  • if the CCTV cannot be integrated into the system, the inventive solution can still operate with the rest of its rich features, but the agent has to use another room without the CCTV.
  • CCTV cameras are shown as component (108) in the agent environment and shown as component (106) in the supervisor environment in Fig. 1.
  • this solution has many rules that detect a personal data protection threat based on other objects/events in the environment. Some of the objects and events related to this data protection threat are highlighted in Fig. 1, shown as threat components (102) in the agent work from home environment and threat component (103) in the supervisor work from home environment.
  • the Fig. 1 specifically highlights the system solution in a cloud based solution environment (116).
  • the AI engine (115) is hosted and running in the cloud environment (116).
  • the said AI engine has many capabilities based on the trained machine learnt models running in the AI engine itself. This AI engine is able to: estimate the threat from the CCTV w.r.t. personal data protection; identify threat components/objects in the environment w.r.t. the personal data protection threat; detect that an intent based call will arrive into the system; perform text to speech using the personal data in the DB as the text to be converted to speech; produce the intent related call’s data analytics to help in call handling; use its speech recognition capability to detect personal data being uttered audibly at an unrelated moment as perceived by the AI engine; predict whether a given intent based call having hit the system will reach a given agent; and detect whether the given intent related call will need personal data to be used during the call.
  • the AI engine (115) has many trained machine learnt models composed of various different mathematical elements to estimate various such above mentioned events or objects to a very high degree of accuracy.
  • the occurrence of various events and detection of objects are evaluated using real time inputs fed into the various machine learnt models running in the AI engine (115).
  • the output related to the said input will identify the event or the object during the system operation time.
  • the training of the AI engine (115) is done using real past operation data or data randomly fed for training purpose as in the case of object recognition or image recognition.
  • the training of AI engine (115) to detect the text image and its clarity will be heavily dependent on big arbitrary text data that has been used for training purpose.
  • the training of the AI engine (115) to detect threat objects such as a camera, a person, etc. with high precision will be based heavily on the big training data used to identify these objects.
  • the AI engine (115) will need a large amount of data during its operation and also for training of the model. Here, the large amount of operation data implies that the needed output is decided based on various input fields.
  • the data used for training purposes to develop the image recognition model or machine learning model used in the AI engine (115) need not be stored in the DB (118), unless it is used to further improve the model through another learning process happening at system operation/runtime. It can generally be considered that the AI model has already reached maturity in its ability to predict an event/object accurately, and need not be further trained using historical data obtained during runtime or operation time.
  • operation related real time data which acts as an input for the AI models/algorithms, are stored in the (DB) (118).
  • the said data may preferably be stored in DB (118) for audit purpose.
  • if the AI engine (115) detects a threat in the environment, then the related video from the moment the threat is identified is recorded and stored in the DB.
  • the AI engine (115) will do such recording function as well.
  • the video related to threat event is stored in an appropriate video recording format in a file with suitable encoding and meta data for playback.
  • This said DB (118) is interconnected to the AI engine (115) using the interface (117).
  • the DB (118) also has data that the AI engine (115) will use to decide on: the suitable transfer agent ID to handle the call if there is a data protection threat in the call environment; agent identification for call handling in general; past data protection threat events related to a given agent’s WFH environment; the current skill of the agent; the integrity points of the agent; the agents related to a given intent; and the skill range related to a given intent.
  • the DB (118) also has various call related data such as time in call, time in queue to get an available agent, call hold time, personal data usage within call etc. These data are generally used to make various call related estimations that are needed in this solution and also to detect malicious acts by agent.
  • the system can store many types of data in the DB (118), where the data can be used for future data mining, to produce various audit reports, to enable the supervisor to see the threat history of the agent w.r.t. personal data protection, to enable various analytics, and to make many real time decisions based on system states captured into data fields and stored in the said DB.
  • video captures related to a threat event can be used to reveal the scene during threat event to a supervisor (105) via the customized supervisor desktop application (126).
  • the core point of the solution/invention is that it uses the AI engine (115) to achieve personal data protection in a WFH environment, whereby the chance of revealing personal data to anyone with a malicious intention of retrieving the personal data in any form is avoided or prevented. Additionally, the said AI engine (115) ensures that, in the case of a data threat in an environment, a call which needs the personal data is still carried out in the system at a safer location without trading off the QoS tied to the call. To achieve this, the AI engine (115) running as part of the solution incorporates many methods. The AI engine (115) is involved in authentication using agent face detection, external CCTV threat detection, agent WFH environment authentication and agent liveliness detection during on-boarding of the agent to his work desktop application.
  • the AI engine (115) also safe proofs the environment to check for any malicious act that can be a threat to personal data during on-boarding and during agent’s work operation time, and if such detected appropriate countermeasures are activated on the system.
  • the AI engine (115) ensures that daily on-boarding checks are performed on the agent on every working day tied to the agent. Also these checks are performed after violation and re-entry into the system.
  • if the AI engine (115) detects that personal data is being viewed at any unneeded time by an agent/supervisor (i.e. outside the related call or during an unrelated call), it blocks such viewing on the screen by coordinating with the smart surveillance application (122).
  • the AI engine is able to detect this malicious act or inappropriate act and prevent the smart surveillance application from projecting on the work agent desktop screen such personal data in the said agent’s WFH environment.
  • An unneeded time herein refers to a time when personal data is requested to be viewed by the agent when such personal data does not need to be viewed. These said unneeded times are illustrated by means of the following examples. For example, customer A is making a call, and the agent attending to customer A, who needs customer A’s personal data, instead requests the system to view customer B’s personal data.
  • Another case is when there is no call in a given agent’s WFH environment and the agent requests the system to view the personal data of any customer. Yet another case is when the agent is in a call with customer A but does not need personal data, and yet requests the personal data from the system. In this last case, it is considered that the AI engine (115) can identify, from the intent, the dialed number or the route the call took, that personal data involvement is not needed to handle the call.
  • the AI engine (115) evaluates, for a given agent, whether personal data viewing for a given customer is allowed or not allowed. This state is continuously updated based on system state changes that impact the data viewing state. One such said system state change is a call with a particular intent being finalized to reach a given agent.
  • This information is conveyed to the smart surveillance application (122) whenever the personal data screen projection allowance state is changed for a customer tied to the agent (e.g. an agent may only be allowed to serve a certain customer pool). If the personal data is projected before this information (i.e. the rules for viewing a given customer’s data) arrives at the smart surveillance application, then the AI engine (115) ensures this data is blocked on the agent screen. If the AI engine’s (115) said information on the rules of personal data projection is already present in the smart surveillance application (122), then the smart surveillance application (122) can readily advise the customized agent desktop application (125) as to whether personal data projection for a given customer call is allowed, even before the request for the personal data is triggered by the agent. A sketch of this allowance state follows.
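  • The following sketch (class and method names are assumptions for illustration only) models the per-agent viewing allowance state that the AI engine updates and conveys to the smart surveillance application:

```python
from dataclasses import dataclass, field

@dataclass
class ViewingPolicy:
    """Per-agent personal data viewing allowance, updated on system state
    changes (e.g. an intent call being finalized to reach the agent)."""
    allowed_customers: set = field(default_factory=set)

    def on_call_assigned(self, customer_id: str, intent_needs_data: bool) -> None:
        # Viewing is allowed only while a personal-data intent call for
        # this customer is assigned to the agent.
        if intent_needs_data:
            self.allowed_customers.add(customer_id)

    def on_call_ended(self, customer_id: str) -> None:
        self.allowed_customers.discard(customer_id)

    def may_view(self, customer_id: str) -> bool:
        return customer_id in self.allowed_customers
```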
  • if the AI engine (115) detects a high threat, such as a camera being used in a screen capturing manner while personal data is projected on screen, it considers this a very high threat and immediately blocks the agent screen by communicating with the smart surveillance application (122).
  • if the AI engine (115) detects that personal data is being shown on screen while a call is on and the agent is not at his desk, the screen is immediately blocked, as the said AI engine considers this a high threat environment.
  • if the AI engine (115) detects any malicious activity by the agent, such as copying/recording, it will impose suitable countermeasures (such as preventing clipboard copy, and blocking the screen in the case of malicious recording).
  • the clipboard copy/paste deactivation is implemented by the smart surveillance application (122).
  • the said AI engine (115) monitors whether the agent is trying to record the personal data on the agent work desktop and, if this is detected, it will block the agent desktop screen.
  • the said AI engine (115) also has the ability to check whether a call hold is unusually long while personal data is being projected on screen. If such is detected, the screen will be locked. If the AI engine detects one or more faces near the agent work desktop screen other than the agent’s when personal data is projected on screen, again the screen is blocked/locked. Every such screen block/lock state requires a proper re-onboarding procedure for the related agent. As mentioned, the screen blocking/locking happens only for such high threat events. A sketch of this event-to-countermeasure mapping follows.
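  • A minimal sketch of the high-threat handling described above (the event names are illustrative assumptions, not identifiers from the claimed system):

```python
HIGH_THREAT_EVENTS = {
    "camera_capturing_screen",       # camera used in a screen capturing manner
    "agent_absent_during_call",      # agent not at desk while data is shown
    "malicious_recording_detected",  # agent recording the personal data
    "unusually_long_hold",           # call held unusually long with data shown
    "extra_face_detected",           # one or more other faces near the screen
}

def on_threat_event(event: str, personal_data_on_screen: bool) -> str:
    """Map a detected high-threat event to the screen block/lock
    countermeasure; unlocking then requires proper re-onboarding."""
    if event in HIGH_THREAT_EVENTS and personal_data_on_screen:
        return "block_screen"
    return "no_action"
```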
  • Upon detecting a data protection threat, and when it estimates the call has not yet landed in the agent environment, the AI engine (115) sends a warning to the agent to clear the data protection threat from the environment, rather than overreacting and imposing severe countermeasures that impact the call (i.e. rather than unnecessary call diversion, a warning is sent in order to make the preferred agent environment safe). To whom the warning has to be sent is generally evaluated based on the highest skill within the available pool of agents, provided there is no call waiting queue for an available agent of the given intent when the evaluation is done. If there is a call waiting queue for the related intent, then the agent to which the call will land is estimated/predicted when evaluating the agent to whom the said warning will be sent.
  • the AI engine (115) transfers the call to another secured location if a high threat is detected in the current environment where the call is happening.
  • the AI engine (115) proactively reduces the skill of the agent if a data protection threat is detected, in order to safeguard the call and still route the call to a safe environment.
  • the AI engine (115) based on need, engages an AI assistant that can identify the related personal data to be used for the call and sends this information via voice channel to the agent using the AI’s text to speech feature.
  • the AI engine (115), based on need (such as a high threat in a given agent environment requiring another agent of the same related skill when no such agent is available), engages a less skilled agent supported by AI based analytics to handle a given intent based call if there is no other suitably skilled agent to handle the call.
  • the AI engine (115) has the ability to track the agent voice/speech content, whereby it detects whether personal data is being revealed by the agent using a speech/voice medium. Once the AI engine (115) detects this (the user has started voicing out personal data), it will also check whether the agent is allowed to voice out the personal data. The agent is allowed to voice out/utter the personal data in an audible manner at a given moment only when the agent is in a call with a customer who is requesting the personal data and this information was not made known to the calling customer while traversing the IVR call. If the agent has started voicing out personal data in a scenario that does not meet the above said approved condition, again the personal data on the screen will be blocked.
  • the AI engine uses its natural language processing capability.
  • the invention utilizes well developed AI natural language processing capability to engage it to detect personal data information that is being conveyed by the agent using speech media at unneeded moment.
  • the AI engine (115) also detects when the agent is handling a call that needs personal data without retrieving the personal data from the back end and projecting it on the screen.
  • the AI engine (115) will track this as a malicious event, because the agent might have copied the personal data onto paper or the like and used it during the call.
  • the AI engine (115) also has the ability to detect whether the customer is querying for personal data related information and also has the capability to check whether personal data was revealed using IVR text to Speech (TTS) modules.
  • the AI engine (115) interfaces mainly with the smart surveillance application (122). This component (122) interfaces with the AI engine (115) in order to get commands from the AI engine (115) via the interface (119).
  • the information sent via the interface (119) will indicate, for a given call/session ID: the appropriate agent ID (this could even mean a transfer of the call); which agent ID is additionally added to a call session; the skill related to the chosen agent ID; skill modified events and the modified skill value for an agent ID; and the AI assistant agent ID (i.e. the ID of the AI assistant engaged to support the call).
  • The AI engine (115) will also highlight, for the given agent ID, what countermeasure to take, such as blocking the agent screen or reducing the personal data view on the screen of the customized agent desktop application (125) or the customized supervisor desktop application (126), or sending a warning to the agent to improve the personal data threat state in the WFH environment.
  • the warning could be a short message service (SMS) to agent phone or if the warning is a pop up on the customized agent desktop, then the warning has to be sent to the customized agent desktop (125) by the smart surveillance application (122).
  • the screen lock or block event received from AI engine (115) is handled by the smart surveillance application (122).
  • the smart surveillance application (122) will preferably have communication interfaces with other applications such as customized agent desktop application (125) via interface (131) and the customized supervisor desktop application (126) via the interface (130).
  • the applications (125) and (126) will preferably communicate with agent skill based routing module (127) respectively via the interfaces (129) and (128).
  • the smart surveillance application (122) also interacts with the customized supervisor desktop application (126). This interaction is mainly to highlight the various threat events in the agent environment and the agent integrity points that will help the supervisor make decisions on re-onboarding the agent after the agent’s screen has been locked, and also to convey events such as the screen being blocked.
  • the screen lock event can be also highlighted to application (126) by the smart surveillance application (122) via the said interface (130).
  • When the customized supervisor desktop application (126) has received input from the supervisor on the re-onboarding agreement for the given agent ID that was previously prevented access, it will inform the smart surveillance application (122) about the supervisor agreement, and the application (122) can perform the unblocking of the screen for this agent ID once the agent inserts the correct unlock code.
  • the unlock code is given to the agent by the smart surveillance application (122) once it gets on-boarding agreement from AI engine (115) and the supervisor agent desktop application (126).
  • the application (122) can communicate with the customized agent desktop application (125) mainly to highlight the countermeasures that have to be placed on the agent desktop work application, such as reduced screen size and reduced font size, as well as the call transfer information (e.g. the agent ID to which the call has to be transferred, where the said call transfer is induced by the data protection threat in the related environment), and the involvement of an additional agent within the call session to convey the personal data to the agent who is handling the call.
  • the information given to application (125) from application (122) in order to include another agent into the call session is the additional agent ID to support in the call if the current agent environment has high data protection threat and hence cannot view the personal data.
  • the routing application (127) mainly performs functions like engaging a given additional agent ID/agent into a currently active call session based on the additional agent ID and said session ID information given by its interfacing module (125).
  • the said interfacing modules get the agent ID and call session ID information from the AI engine (115). That is, another agent is added into the call session so that the personal data information can be whispered to the agent who is communicating with the customer.
  • the routing application (127) can engage the agent ID and session ID given by application (125) to transfer the call from the current agent to another/given agent all within the same call session.
  • the interfacing module (125) can give such agent ID and related call session ID via interface (129) to the skill based routing application (127).
  • the application (127) can also obtain the updated skill related to a given agent ID using the interface (129).
  • the application (127) can use this given skill to handle its own agent identification mechanism for some other operating scenario without contradicting the AI engine’s (115) decision on call routing.
  • application (127) has to follow the agent ID and skill related agent ID selections that comply with the AI engine’s agent ID and skill based agent ID selection.
  • Whenever the agent is selected by the AI engine (115), that agent ID has to be used by application (127). If an agent ID is not selected, then the application (127) can use a mechanism similar to that used by the AI engine to determine the agent ID for the given intent.
  • the skill value can be stored in the DB (118) and also in the DB (123).
  • the DB (123) is mainly used to provide the data for the applications (125, 126 and 127). This DB (123) can have information on the skill, call session ID, intent, the agent integrity points, various counter actions for security enhancements, the video files that have captured the data protection threats that have occurred in the agent’s WFH environment, etc.
  • The various applications such as 125, 126 and 127 are shown to be implemented and activated in a specific manner using the defined interfaces shown in Fig. 1. However, it is important to understand that the solution can be implemented using other software functional modules, appropriate communication interface designs such as APIs, and DB designs without deviating from the scope and ambit of the invention. What is illustrated in Fig. 1 is one of the preferable implementations of the current invention.
  • the previous embodiment highlighted the main solution concept and its methods operating in one of the most suitable scenarios for this solution.
  • in the next embodiment, i.e. embodiment 3, we highlight the solution operation in another preferred scenario.
  • the current embodiment illustration refers to Fig. 2.
  • the AI application (113a) and its associated DB (115a) are installed in the customer premises and not in the cloud services environment. Such deployments may be preferred for safety reasons, or when the solution uses its own tools rather than cloud tools for the AI engine. All the other functionalities are similar to the previous embodiment (i.e. embodiment 2).
  • the AI engine (113a) receives environment data (video captures, images) using the interface(s) (111a, 118a).
  • It is additionally considered that some of the AI functionality and models used for object recognition can run in the browser of the agent desktop (107a). In such a case, additional video is sent to the AI engine (113a) only if a malicious object (camera, face, etc.) is detected by the object recognition code running in the browser of the agent desktop. When the code running in the browser detects a threat due to a malicious object, more video captures from that moment are sent to the AI engine (113a) for some time, which could be a configured duration. Until then, not every scene from the WFH environment area is sent to the AI engine (113a). As mentioned before, the sending of the video captures to the AI engine can be done by a real time video transport application that could use the WebRTC protocol stack. A sketch of this gating behavior follows.
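  • The following sketch (expressed in Python for consistency with the other examples, although in this embodiment the gating logic runs in the agent’s browser; the names and window duration are assumptions) illustrates the detection-gated upload:

```python
import time
from typing import Optional

UPLOAD_WINDOW_S = 30.0   # assumed configured duration after a detection

class EdgeUploadGate:
    """Forward video to the AI engine (113a) only for a configured window
    after the local object-recognition code flags a malicious object
    (camera, face, etc.); otherwise frames stay local."""
    def __init__(self) -> None:
        self.last_detection: Optional[float] = None

    def on_frame(self, malicious_object_detected: bool,
                 now: Optional[float] = None) -> str:
        now = time.time() if now is None else now
        if malicious_object_detected:
            self.last_detection = now
        in_window = (self.last_detection is not None
                     and now - self.last_detection <= UPLOAD_WINDOW_S)
        # Transport to the AI engine could use a WebRTC-based application.
        return "send_to_ai_engine" if in_window else "keep_local"
```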
  • the rules which are used by the AI engine (115) in Fig. 1 to decide on agent on-boarding are revealed here. These said rules are also used when the agent re-onboards the system after being locked out due to screen locking by the system. These rules are illustrated as one preferred embodiment of the present invention.
  • Rule 1: The work area should be an enclosed area. Only a single door is allowed in the work area, and that single door should be in the view of the AI engine by means of the desktop inbuilt webcam camera that usually operates during agent work hours (and is also the only camera operative during agent work hours, unless in an exceptional case).
  • the door has to be in the restricted view of the desktop inbuilt camera (this built-in camera view is usually not a full 360° view), so that the door opening can be continuously monitored by the AI engine to check for anyone entering the work space after the initial safe proofing.
  • the door being in the restricted camera view helps AI engine to detect anyone opening the door in disguise and entering through the door.
  • the AI engine will use the images from the built in desktop webcam camera to check the door’s visibility.
  • Rule 2: The said door of the agent’s work space should be locked. If it is locked, then if an intruder breaks in, there is more time before the intruder gets to the data, and more time for AI countermeasure actions to prevent the threat.
  • Rule 3: Windows in the work area should be locked at all times, and if a given window is to be kept open during the agent’s work time, then that window has to be in the agent desktop’s inbuilt web camera view. If the agent wants other windows to be open during work time, then those windows that are not in the inbuilt webcam view of the desktop have to be monitored using additional webcam cameras connected to the agent desktop and to the AI engine. Whether these additional cameras are connected to the AI system to view the window is also checked as part of this rule. The additional video view from the camera monitoring the window is obtained by the AI engine only when a call (which needs highly confidential data) is about to land in the agent home work environment.
  • Rule 4: Only the agent is allowed to stay in the bounded work area, and no other person is allowed to stay when work starts. This is checked so that, once operation starts, no other person can peep at personal data shown on screen during a call. Because the full room view is not available in the limited camera view after work starts, a malicious person peeping cannot be detected; thus this is checked at the very beginning.
  • the personal phone of the agent is allowed to be used near the agent desktop vicinity, but not in a malicious manner such as trying to take a picture.
  • the back camera of the agent’s regular hand-phone has to be blocked by a light blocking material (this could be a tape). This condition of the personal phone will be checked by the AI engine before work starts. If the agent’s frequently used hand-phone does not have the back camera blocked, then access to the agent desktop application is not given. These points will be checked as part of this rule.
  • Rule 7: The agent has to be at his work desk facing the desktop. This is needed because, if not for this rule, immediately after the user is allowed to access the work application, the camera view of the environment available to the AI engine is restricted, and thus any malicious activity could not be tracked.
  • the agent might let an intruder in immediately after access to the application is granted, and this cannot be tracked if the agent is not in the camera view during on-boarding time.
  • Rule 8: If an already available CCTV is kept On to help get the full room view during the initial data threat evaluation of the work room, and this CCTV’s captures can clearly show the information (e.g. numbers, letters, special characters) on the agent screen as detected by the AI engine, then this CCTV should be set to the Off state before the agent is allowed to gain access to the agent desktop application. This is the core point of the rule. Before allowing access, it is further checked by the AI engine that the CCTV is still connected to the AI system; this is needed to detect anyone turning the CCTV On during work time. Any private CCTV in the agent’s work room has to be integrated into the AI system. Furthermore, if the CCTV is considered not a threat during the work application approval process, the CCTV need not be turned Off, but its images need not be fed to the AI system. Even if there is no data threat from the CCTV, the AI engine will want to keep the CCTV system integrated so that it can be switched On to get more information at another operative point of the system.
  • This CCTV is assumed to be part of the agent’s private property. However, this can pose a threat during work time because it may be able to view the agent desktop personal data.
  • the full room is exposed to the AI system either using the existing CCTV, or additional webcam cameras placed in the room (preferably ones that have a 360° view), or a swift 360 degree video demo by the agent using his phone camera.
  • Such full room view checks are mainly done only during on-boarding for the day, re-onboarding after an incident, or when the AI system needs the full room view to do a complete check before call landing because it detects that the particular agent environment cannot be trusted. The on-boarding gate over all the above rules is sketched below.
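  • A minimal sketch of the on-boarding gate (the rule identifiers are illustrative labels for the rules above, not names from the claimed system): access is granted only when every environment rule passes.

```python
ONBOARDING_RULES = [
    "single_locked_door_in_camera_view",   # Rules 1 and 2
    "windows_locked_or_monitored",         # Rule 3
    "agent_alone_in_work_area",            # Rule 4
    "phone_back_camera_blocked",           # phone-related rules
    "agent_at_desk_facing_screen",         # Rule 7
    "cctv_integrated_and_safe",            # Rule 8
]

def onboarding_allowed(rule_results: dict) -> bool:
    """The agent gains access to the work desktop application only when
    every environment rule evaluates to True."""
    return all(rule_results.get(rule, False) for rule in ONBOARDING_RULES)
```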
  • the subsequent threat rules are incorporated to detect deviations from the initial approval condition that can further cause a data protection threat based on environment changes in the agent’s work environment after the agent has gained access to his agent desktop application.
  • These subsequent threat rules concern the identification of certain objects/events in the work environment that can pose a threat to data protection following the initial approval state of the system.
  • the following rules are used by the AI engine to detect threats using the minimal camera view, such as the inbuilt webcam camera, after safe on-boarding of the agent to his work application: whether any new person is entering through the main door or window (face detection in the door and window frame); whether the door is unlocked; whether a new human sound, other than the agent’s, is detected in the work environment; and whether any human shadow is seen on the wall.
  • the net threat level is evaluated by the AI system continuously by assigning a threat level to each individual threat event and adding them linearly to get the combined threat value, as sketched below.
  • the said combined threat level is not limited to the environment threat alone; it can be contributed to by any data protection threat related to this invention.
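  • A minimal sketch of this linear combination (the event names and weights are illustrative assumptions; the invention does not specify values):

```python
THREAT_WEIGHTS = {
    "door_unlocked": 0.2,
    "new_face_in_door_or_window": 0.5,
    "new_human_sound": 0.3,
    "human_shadow_on_wall": 0.2,
}

def net_threat_level(active_events: list) -> float:
    """Combined threat value: the linear sum of the levels assigned to
    each individual threat event currently active."""
    return sum(THREAT_WEIGHTS.get(event, 0.0) for event in active_events)

# e.g. net_threat_level(["door_unlocked", "new_human_sound"]) -> 0.5
```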
  • here, the prediction used by the AI engine mentioned in Fig. 1, that a call which has hit the system with a given intention/intent will land at an agent of the associated intent, is highlighted.
  • the prediction of call arrival at an agent serving a particular intent in a threat environment (given that an IVR call with a given intent has hit the system but has not yet been routed to an agent, or that a call for an agent of a particular intent has been put on hold and queued from the beginning of the call until being served by an agent) is made using an estimation of the total number of agents available at the future time when the said call reaches an agent of the intended intent.
  • the system evaluates the additional time for a call to hit the agent of a given intent after hitting the system (e.g. IVR call to reach the agent).
  • the AI engine evaluates how many agents in the given skill range (to serve the intent) will be freed from their current active calls to attend the incoming call, and how many agents related to the intent that are currently free will remain free to attend the incoming call.
  • the estimation of the future available agents for the given intent is based on two factors. One is the number of calls currently in the queue waiting for an agent of the given intent, or the average queue size for the given intent. The second is whether the average remaining time to finish the current call (for an agent of the given intent) is less than the average waiting time for the call to reach an agent. These averages are evaluated by the AI engine. Based on such logic, the number of free agents tied to a given intent can be evaluated at the future time at which the incoming call needs an agent of the particular intent.
  • Let K be the original number of available agents when the intent call hit the system, and let X be the number of calls already waiting in the queue for the same intent (each such queued call will consume one available agent; X follows from the first factor above).
  • Let Y be the number of currently engaged agents that will be freed in time, which is determined as true when the average remaining call time for the given intent is less than the average remaining time for the call to arrive at an agent of the given intent; in that case the total number of agents freed is considered as Y, and the total number of available agents for the given intent is Y + K - X. In cases where the freeing of currently engaged agents by the time the call arrives is evaluated as no, the total number of available agents is considered as K - X, because Y is taken as 0. There are various cases where Y + K - X, or K - X, will be small or 0. Based on this value of available agents for the intent, it is predicted whether the call will land on a particular agent, as sketched below.
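  • A minimal sketch of this availability arithmetic (the variable names follow the letters above; the clamping to zero for small or negative counts is an assumption):

```python
def predicted_available_agents(K: int, X: int, Y: int,
                               engaged_agents_freed_in_time: bool) -> int:
    """Available agents for the intent at the future call-landing time:
    Y + K - X when engaged agents free up in time, else K - X (Y taken
    as 0).  K = agents available when the intent call hit the system,
    X = calls already queued for the intent,
    Y = engaged agents predicted to be freed in time."""
    if not engaged_agents_freed_in_time:
        Y = 0
    return max(Y + K - X, 0)

def call_predicted_to_land(available_agents: int) -> bool:
    # If no agent is predicted to be free, the call is not predicted to
    # land on a particular agent environment yet.
    return available_agents > 0
```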
  • the ability of the AI engine to identify the uttering of the personal data by the malicious agent is described.
  • the said AI engine is the one illustrated in Fig. 1. Also in this current embodiment the methods used to identify and prevent malicious agent trying to copy/record personal data are described.
  • the AI engine is able to identify malicious events w.r.t personal data threat that are related to the malicious agent as well.
  • the AI engine also identifies the events such as agent uttering the personal data in audible manner in an unrelated moment and immediately ensures the agent screen is locked by communicating with the smart surveillance application.
  • the AI engine is also able to identify some other person in the WFH environment other than the agent uttering the personal data in an audible manner. Again the AI engine will lock the agent desktop screen as a counter measure for this said malicious event.
  • the AI engine’s detection capability is such that the AI engine is also able to identify personal data being voiced out, whereby it can detect the personal data and impose countermeasures when at least the first x% of it has been voiced out in an audible manner.
  • the said x% is configurable, and to avoid false AI detection of personal data being voiced out, the said x% should preferably be 60% (of the personal data) or higher. The same guidance applies to all mentions of x% below.
  • the AI engine has a means to identify the personal data is being voiced out even at a state when only a fraction of it is voiced out.
  • the AI engine is also able to identify when personal data is voiced out whereby each of the digits, alphabetic characters and special characters of the personal data is voiced out individually by speech in the correct order, and to impose the countermeasure such as blocking the screen. Additionally, the AI engine is able to identify personal data where the user voices out the complete personal data by individually voicing out the digits, characters and special characters in a scrambled manner, and to impose the countermeasure such as blocking the screen. Similarly, the AI engine is able to identify when personal data is voiced out whereby a minimum of x% of the digits, alphabetic characters and special characters of the personal data is voiced out by speech in the correct order, and subsequently impose the countermeasure such as blocking the screen.
  • the AI engine is also able to identify when personal data is voiced out whereby a minimum of x% of the digits, alphabetic characters and special characters of the personal data is voiced out by speech in scrambled order, and to impose the countermeasure such as blocking the screen.
  • Which AI engine personal data uttering detection mechanism to use can be configured in the system.
  • the speech to text recognition and matching to the personal data in the DB is done by the AI engine.
  • the AI engine is first able to identify the text related to the x% speech or uttering of the personal data (either uttered by voicing it out or uttered by means of individual characters). Then it will find a partial match of it to any personal data field in the DB. This said matching will be matching in the correct order or incorrect order as per the set AI detection mechanism. Then it will check whether the identified speech-to-text fits at least x% of any given personal data field in the DB. For every new character identified using the speech-to-text mechanism of the AI engine, the above said mechanism is run to check whether x% matching is achieved, as sketched below.
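As a rough illustration of this incremental x% matching, the sketch below checks the speech-to-text buffer against DB fields in both the ordered (subsequence) and scrambled (multiset) modes. The function names and the 60% default are assumptions for illustration, not the patented implementation:

```python
from collections import Counter

def _is_subsequence(sub: str, full: str) -> bool:
    it = iter(full)
    return all(ch in it for ch in sub)  # 'in' consumes the iterator, keeping order

def matches_personal_data(uttered: str, db_fields, x_percent=60.0, ordered=True):
    """True when the recognized characters cover >= x% of a DB field."""
    u = uttered.replace(" ", "").lower()
    for field in db_fields:
        t = field.replace(" ", "").lower()
        if len(u) < len(t) * x_percent / 100.0:
            continue  # not enough of the field uttered yet
        if ordered and _is_subsequence(u, t):
            return True  # correct-order utterance
        if not ordered and not (Counter(u) - Counter(t)):
            return True  # scrambled utterance: every character fits the field
    return False

# Re-run on every new character emitted by speech-to-text, e.g.:
# if matches_personal_data(buffer, fields): lock_screen()
print(matches_personal_data("4532981076", ["4532981076541234"]))  # True at 62.5%
```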
  • AI engine has the ability to identify personal data when it is spelt or voiced out and it is able to capture this within x% of the full content.
  • some other agent malicious behavior identification done by the AI engine is: detecting the event that the user/agent is planning to get personal data from the web application when there is no call, or during a call that does not need personal data, and preventing personal data from being projected on screen for such probable malicious events.
  • when the AI engine detects that personal data for a customer is requested by an agent while the said customer is not in a call with the said agent at his WFH environment, then such personal data is prohibited from being shown on the said agent's work application screen.
  • AI engine assisted methods prevent clipboard copy/paste during personal data projection on the agent screen.
  • when personal data is displayed on the agent screen:
  • all screen sharing events by the agent are detected by the AI engine. That is, the AI engine detects when any of the common screen sharing application sessions are activated on the agent desktop screen by analyzing screen samples; if detected, countermeasures are immediately put in place whereby the screen is locked.
  • when the agent views personal data for a time period longer than usual, the AI engine immediately blocks the data, suspecting malicious behavior. The AI engine also tracks all agent-induced video recording start events. That is, the agent using some other application to start recording is tracked by the AI engine by monitoring the agent's video recording action by means of a button press. This is realized because the system captures screen recordings and sends them to the AI engine.
  • an agent may use a timer based malicious application to start recording conversations and screen captures to capture personal data.
  • timer based malicious recording is also tracked by checking on the various video file types stored in the agent desktop work PC.
  • the solution also highlights usage of a running spy application that monitors all the video and audio files generated on the agent desktop work PC and sends the details to the AI engine/system (not necessarily the files). The details will have the file names, type and the time of generation or modification.
  • the AI system/engine then checks (by means of the said spy application) whether, during the personal data screen projection time period on the agent screen, any video or audio files were generated and stored on the agent desktop work PC. If such malicious behavior is detected, then the AI system/engine is able to immediately impose the needed countermeasures such as blocking the screen.
  • such detection and blocking of the personal data on the screen can take place in real time, while the personal data is being recorded. Additionally, in the worst case, the AI engine will also be able to retrieve the malicious audio and video files and engage in a video and audio based investigation using the AI engine's/framework's image recognition and natural language processing components to detect any personal data related content.
  • the countermeasures for environment based data protection threat described in this embodiment or any embodiment in this document are activated when the AI engine can accurately confirm that personal data is needed to handle the call, the said call is active in a given agent environment and also the said agent environment has data protection threat.
  • the countermeasures for the data protection threat are not activated when the AI engine can accurately confirm that personal data is needed to handle the call, the said call is active, but the agent handling the said call has no data protection threat in his environment.
  • the countermeasures for environment based data protection threat are not activated when the AI engine can accurately confirm that personal data is not needed to handle the call, the said call is active and also the said call’s handling agent’s environment has data protection threat.
  • if the AI engine has no concrete means of identifying that personal data will be used for the given intent call, the countermeasures will always be in place when a data protection threat is present in the agent environment and the call is active in the said agent's WFH environment, and will not be in place when no data protection threat is present in the agent environment even though the call is active.
  • countermeasures will be active only when data protection threat is present in the environment of the agent and the agent’s call needs personal data to be used for handling the call. If the AI engine running in the given system cannot detect that personal data involvement is needed for the call, then the countermeasures will be dispatched only based on the condition that the agent environment has data protection threat.
  • the AI engine running in the system will always check the agent environment w.r.t. data protection threat, regardless of whether the agent handles call intents that need personal data or not. This is because any agent can be picked to handle the call with the help of AI assistants in worst case scenarios where no other option is available to handle the call. Thus, regardless of whether the agent serves calls that need personal data or not, the agent environment has to be monitored at all times. Only if an agent is informed that he will not serve calls in which personal data is viewed or involved does he not have to keep his environment threat free. In all other cases, all agents of this system have to keep their environments threat free w.r.t. personal data protection. Any agent non-compliance with this will have their skills and integrity points reduced.
  • the AI engine mentioned in this current embodiment 7 is shown in Fig. 1.
  • the basic design principle in this embodiment is that the long call holding time during the call (i.e. waiting for an agent of related skill to be available to handle the call when the call is about to be assigned to an agent initially or when the call is about to be transferred to an agent) and excessive call handling delay is avoided (i.e. unnecessarily cutting the current call off and putting the call on call back state when the call queue is high) even when there is data protection threat in the environment.
  • 'delay is avoided' means that if such a long call holding delay is estimated, then based on the estimation suitable countermeasures can be planned, instead of sticking to a long call holding time despite a significant time already having been spent within the call.
  • the said call handling delay refers to the total time of the call which even includes the call back call duration as a continuation of the call.
  • callback calls generally increase the call handling time because when the agent calls back he may not be successful in getting the customer immediately. If the customer is informed of such an arrangement, then the agent can probably get hold of the customer. However, rather than a long call hold, a callback will be preferred by the customer, since a long call hold blocks the customer from doing their other activities.
  • this said intent based call queue can be made of new calls, transfer calls and/or call back calls.
  • since callback calls are initiated by the agent, there will be a queue position for the callback to get access to an agent, so that the agent can trigger this callback call at the due time when the system assigns the agent to serve the callback call.
  • the AI based solution component in this current embodiment also highlights how the waiting time to access the agent after joining the queue for a given intent can be identified by the AI engine running in the system.
  • queue times are identified by simulations and/or queueing models used to represent the queue dynamics. But for a real time queue, more accuracy can be obtained if the parameters that influence the queue time are used to build a machine learning model, and the real time queue parameters are then used as inputs to the machine learnt model to estimate the queue time when a call just joins the queue.
  • the output is the estimated waiting time or queue time for a given new call or transfer call or a call back call that has joined a given intent based call queue.
  • the waiting time can be estimated even when a machine learnt model is not in place.
  • This said machine learning model is able to identify the waiting time for a transfer, new or call back call that joins the intent based call queue at a given position based on the real time values for the variables that correlate to the waiting time. As mentioned before any call will join this queue at the end (i.e. just behind the last call in the queue).
  • the waiting time value can be derived using the machine learnt model, with the related values for these input variables obtained at real time. The same input variables were also used, from past data, to train and derive the machine learnt model.
  • this waiting time output value from the machine learnt model is used to make a decision as to whether the transfer/callback call waiting time is high or not. Which value of waiting time or hold time is too long for quality can be decided by the system administrator and is outside the scope of this solution. A sketch of such a model follows.
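A hedged sketch of such a waiting-time estimator, using scikit-learn as one possible library choice; the feature set (queue position, queue length, agents on the intent, average handle time, hour of day) and the toy training rows are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Historical rows: [queue_position, queue_length, agents_on_intent,
#                   avg_handle_time_s, hour_of_day]; label: observed wait (s).
X_train = np.array([
    [1, 3, 4, 300, 10],
    [5, 8, 3, 420, 14],
    [2, 4, 5, 280, 9],
    [7, 10, 2, 500, 17],
])
y_train = np.array([60, 540, 95, 1100])

model = GradientBoostingRegressor().fit(X_train, y_train)

# Estimate the wait for a call that has just joined the intent-based queue.
estimated_wait = model.predict([[4, 6, 3, 360, 11]])[0]
print(f"estimated queue time: {estimated_wait:.0f} s")
```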
  • in this solution, if a significant amount of time has elapsed for a call with a given agent, there then arises a need to find another agent because of a data protection threat (which could be an environment based data protection threat, a malicious agent trying to record personal data, etc.), no agent of the relevant skill is immediately available, and the waiting time for callback or transfer agent access is high, then neither call transfer nor callback is used; instead, an AI assistant/AI analytics is used to support the call.
  • the call will be transferred to a less skilled agent (i.e. an agent outside the intent) and AI analytics will assist him in the intent based call. If no other agent is available (not even a less skilled one) and none of the above said options exist, then the call will continue with the current agent, with the AI assistant conveying the personal data via headphone.
  • the AI engine will track whether the headphone is 'on' or not, and whether people are seen close to the agent, in the high threat environment where the AI assistant is used. If the headphone is not 'on' in this high threat scenario and other faces are seen close to the agent, the AI engine will immediately inform the customer and put the call in callback mode. If, during such an AI assisted call in a high threat environment, a skilled agent becomes available to serve the call, then the call will be served by the newly available skilled agent. It is considered that when an agent of the related skill is searched for the transfer, the call for transfer is put in the queue even though the queue was long; that placeholder in the queue then enables the newly available agent to be brought into the call which had already engaged the AI assistant.
  • the callback option is only considered for evaluation when the call is in its initial stage with the agent.
  • the agent transfer can be considered at slightly later moments of the call. It is generally considered that a callback call will have longer delays compared to transfer calls: the agent has to dial out and the customer has to answer, and all this adds to the net delay.
  • the AI engine will run several check points based on its priority of appropriate countermeasure. It is first checked whether the data protection threat is high (or very high) and a call that needs personal data is happening in the given agent environment. If the 1st check is yes, then the 2nd check is whether the call can be transferred to a currently available agent of the related skill without any waiting time. If the 2nd check is yes, the call will simply be transferred to a currently available agent of the related skill and can continue in the safe environment. Whenever a transfer is made, an additional check w.r.t. environment safety is also done, and the call is only transferred to safe environments or environments that have minimal data protection threat. If the 2nd check evaluates to no, then a 3rd check is activated.
  • in the 3rd check, it is checked whether the call has passed a considerable amount of time with its current handling agent or has spent significant time in the system. If the 3rd check evaluates to yes, then a 4th check is made as to whether a less skilled agent is currently available to handle this call and has no or minimal data protection threat in his environment. If the 4th check evaluates to yes, then the call will be transferred to this less skilled agent, who will be supported with AI analytics related to the intent to continue the call. If the 4th check evaluates to no and there is no less skilled agent, then the call will continue in the current agent environment, where the personal data information will be conveyed in speech mode to the agent with headsets on.
  • the system uses the AI engine to evaluate the waiting time for a call if a transfer or callback is needed. If a high waiting time is detected at check point 5, then the call will be transferred to a less skilled agent with AI analytics included; or, if a less skilled agent is not available, the call will continue in the current environment with the agent in AI assisted mode, where the AI assistant engaged in the call will convey the personal data. Also, a placeholder for the call will be put in the queue when a high waiting time is detected at check point 5; if the call is in a threat environment and another skilled agent becomes available before the call ends, the call will be transferred to that skilled agent. If check point 5 evaluates to no, then the call will be put on hold until a skilled agent is available. If hold is not preferred by the customer, and the customer has signaled this during the call or by other means, then the call is immediately put in high priority callback mode, the callback call first being put in the queue. The check point sequence is sketched below.
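The prioritized check points 1 to 5 can be summarized as below; the boolean inputs stand in for the AI engine's real-time detections and are illustrative assumptions:

```python
def plan_countermeasure(threat_high: bool, needs_personal_data: bool,
                        skilled_agent_free: bool, call_time_significant: bool,
                        less_skilled_agent_free: bool, wait_time_high: bool) -> str:
    """Return the planned action following the check point ordering above."""
    if not (threat_high and needs_personal_data):            # check 1
        return "no countermeasure needed"
    if skilled_agent_free:                                   # check 2
        return "transfer to available skilled agent in a safe environment"
    if call_time_significant:                                # check 3
        if less_skilled_agent_free:                          # check 4
            return "transfer to less skilled agent with AI analytics support"
        return "continue with current agent; AI assistant voices data via headset"
    if wait_time_high:                                       # check 5
        return ("less skilled agent with AI analytics, or AI-assisted continue; "
                "queue a placeholder for a skilled agent")
    return "hold until a skilled agent is free, else high priority callback"

print(plan_countermeasure(True, True, False, True, False, False))
```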
  • when the system assigns/transfers the call to a less skilled agent, it will assist him in conducting the call by providing various insights into the incoming call customer's system attributes related to the intent.
  • this scenario is not an agent transfer scenario per se, but rather choosing an agent outside the intent to handle the call.
  • the AI engine will analyze the incoming calling customer's attribute state and highlight to the unskilled agent various insights related to that intent: for example, whether the customer can improve his attribute value due to internal and external events occurring at the moment the call comes in, whether there is any penalty he has to pay, etc.
  • the AI engine will be able to work out the analytics based on rules related to an intent. What type of data to efficiently support a given intent based call is presented to the unskilled agent to complete the call.
  • the analytics done by the AI engine could be also based on the attribute state, inputs and outputs that happened in the past as well the current value of attribute and the inputs (rather than rules).
  • the said AI engine could also derive the suitable AI analytics based on inputs given by the skilled agents who have previously handled calls from a given customer related to a given intent. Such AI assisted summaries will help the unskilled agent communicate with the caller/customer during a very busy time, or when agents have been made redundant in the system due to a very unsafe work from home environment.
  • in addition to detecting call arrival when the system already knows the call is for a particular intent (e.g. a callback call planned ahead, or the customer selecting the intent at the beginning of the call in the IVR system), the AI system also has the ability to detect the arrival of a call for a particular intent (applicable in contact centers where the intent cannot be identified at the beginning of the call) based on the current business attribute values tied to a customer, external events that can impact changes to the customer's current business attributes, organization internal events that can contribute to business attribute changes for the customer, customer demographics, intent based call arrival time, day of the week, and special day identification. Based on such data, the AI engine is pre-trained in a dedicated training phase to identify a prediction model with high accuracy. The said prediction model will be able to identify, when a new call arrives, whether it is for a particular intent, based on the real time data that influences the output (the output herein being that the call is related to a given intent).
  • the callback state will be put in the queue and then the current call will be dropped. If an agent of the related skill is available to serve the queued callback state, the said agent will start to call the customer and activate the callback immediately. Additionally, the system can send a short message service (SMS) message to the customer informing him that the callback has to be started within a certain immediate time period. Then, when the call comes from this customer as part of the callback within this said time period, the agent tied to the agent ID who was assigned to serve the queued callback state will be engaged to handle the call.
  • the solution has a warning component to make the environment safe when the call lands at the agent (as mentioned previously). But the agent may be confused as to what actions to take to ensure safety when the warning is issued.
  • the AI engine will send such safety restoration details too. Exactly what to do to make the environment safe, as identified by the AI engine, is also sent. A few such safety restoration actions are: lock the door; the following members have to leave (the system will project the additionally detected faces); increase the light luminance level; etc. This will help the agent quickly ensure any possible threat is removed from the work environment.
  • the AI engine has methodologies where the skill evaluation is done in a continuous manner to decide on the agent because the agent skill is modified in a continuous manner based on environment safety aspect and predictions. Until the call hits an agent, the suitable agent is continuously evaluated to ensure the call lands at the correct environment amidst the dynamically changing skills.
  • the agent skill is evaluated and agent selection done by the AI engine when the call just hits the system and when the call is just about to hit the agent. If the agent's skill has been lowered during a call to such an extent (when new high or higher threats are identified in the environment) that this agent can no longer serve the intent, significant time has already been spent on this call by the agent, and no other agent is available to immediately attend to the transfer for this intent (i.e. absolutely no agent of any skill set is available), then the call will be put in an AI assisted mode where the AI assistance application will verbally, in speech mode, convey the personal data information to the agent (this happens in a high threat environment). Again, such a methodology is used when the call needs personal data for its handling.
  • the call will be automatically transferred to another agent of a higher skill within the same intent when another agent becomes available again and a transfer request was put in the queue. If a call can be handled smoothly by the AI assistant, then the call may not be transferred to another agent. This transfer, if done, will be done in a dynamic and autonomous manner by the AI engine. During such a transfer, to ensure quicker action, the agent who is transferring will record a transfer related audio summary.
  • the AI engine has a capability whereby the said audio summary is checked by the AI engine as to whether it contains personal data information before being played in the environment of the new agent who handles the transferred call. The audio is played only if no such personal data is present in the audio summary, or if the audio summary can be understood after removing the personal data information, whereby the AI engine reformulates the audio summary after removing the personal data.
  • This audio will be played to the newly transferred agent using some related application.
  • This summary enables the new agent to handle the call smoothly without many hiccups. This transfer related information being sent to the new agent will happen in this system for any transfer.
  • the skill is restored back slightly by the AI engine.
  • the AI engine does environment safety based skill restoration as well where appropriate.
  • the AI engine has the capability to evaluate the data threat level in an agent WFH environment into various classifications such as high, medium and low.
  • the net threat value/level is determined by giving each threat a threat score and evaluating the net/cumulative score for a given agent’s work from home environment.
  • high, medium and low are given fixed ranges based on the threat value, and the cumulative threat score in a given environment is mapped into the high, medium and low classifications. Based on these classifications, appropriate countermeasures are put in place in this system, as sketched below.
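A minimal sketch of this cumulative scoring and classification; the per-threat scores and range boundaries are illustrative assumptions (the document leaves them as fixed but configurable ranges):

```python
def classify_threat(threat_scores: dict, medium_min: float = 3.0,
                    high_min: float = 7.0):
    """Sum per-threat scores for an agent's WFH environment and map the
    cumulative value onto fixed low/medium/high ranges."""
    total = sum(threat_scores.values())
    if total >= high_min:
        level = "high"
    elif total >= medium_min:
        level = "medium"
    else:
        level = "low"
    return total, level

scores = {"extra_person_detected": 4.0, "door_open": 1.5, "unknown_device": 2.0}
print(classify_threat(scores))  # (7.5, 'high')
```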
  • the countermeasures are not rigid and depends on the state of the system. Additionally the countermeasures will generally be used only if the personal data is used or will be used in an environment.
  • whenever the agent work environment is considered not safe, or the agent is trying to maliciously copy the personal data, the agent's integrity points are reduced.
  • the points reduced are directly proportional to the degree of non-compliance. Basically, if the threat level is high, then the integrity points reduced are also high. These integrity points are not restored quickly: the agent's compliance is monitored over a longer time period and the said points are restored only if compliance is seen.
  • when the environment personal data protection threat is medium:
  • the call is allowed to continue in the environment using the speech assisted or agent assisted mode.
  • the personal data information is informed using the speech mode to the agent with head sets.
  • an agent who usually serves another intent can be used by the system to assist the agent to which the call landed. This other agent mainly helps in conveying the personal data value only and not really engaged in serving the call.
  • the personal data shown on the screen is moved to the middle of the screen, or the font size is slightly reduced.
  • the solution counteractions are not fixed.
  • the counteractions are planned based on available resources.
  • the AI engine generally plans the counter actions based on the threat level and the available resources (agents).
  • the environment data protection threat level detection and countermeasures are dynamic, but the countermeasures for malicious agent behavior are a combination of static and dynamic, the transfer being the dynamic part. If the agent is trying to record video etc. and this is detected, the skill is severely reduced, integrity points are reduced and the event is notified to the supervisor. Also, the recorded files will be deleted by the supporting application, the screen will be blocked and the call transferred to another agent.
  • Figs. 3A, 3B, 3C, 3D, 3E and 3F fully illustrate the solution design components: the initial stage of on-boarding the agent after safe proofing the WFH environment and authentication; managing the data protection threat when a call is on and the said threat is happening in the call handling agent's WFH environment; using warnings to clear a threat when a call is about to land in a threat related environment of the agent; managing the data protection threat even when a call is not on yet malicious agent behavior is identified; and subsequent restoration of the agent into the system, whereby re-onboarding of the agent after a countermeasure is also illustrated.
  • the above mentioned are some of the high-level design components of the system.
  • the solution has a design concept whereby a minimal camera setup is used; it is not necessary to have many sensors and/or cameras running as a pre-condition for this solution.
  • the solution, during its core operating time, only uses the in-built agent desktop camera to capture real-time video from its view and send it to the AI engine for further processing and detection. It is also considered that this solution does not have prior data to predict the data protection threat in the WFH environment.
  • this solution has unique features whereby AI object/image recognition is properly utilized to achieve personal data protection.
  • Fig. 3A highlights the safe on-boarding of the agent and initial configuration using several smaller design components: the agent WFH environment is safe proofed w.r.t. personal data protection using AI; the agent is authenticated as a valid agent and live agent using AI; the agent WFH environment is identified as his work environment; and the agent skill is set to an appropriate value at the initial starting/rollout state of the agent in the system based on input provided by the agent regarding the safety of his WFH environment.
  • these smaller features/design components, which help in the most suitable safe on-boarding of the agent to his agent desktop application, are highlighted in Fig. 3A using components (202, 204 and 205).
  • the rules usage to ensure the agent WFH environment is safe is highlighted.
  • the design component objective is to eliminate the data protection threat in a special way (carefully planned rules for initial on-boarding, considering the fact that subsequently the full view is not available to the AI engine) before on-boarding the agent to his agent desktop application, so that subsequent data protection threats can be correctly identified using the minimal camera view, on the condition that the initial on-boarding check has eliminated certain threat events.
  • with this minimal camera view, what is detected at operation time by means of the data protection threat evaluation rules is that no new threats are seen in the minimal camera view; the rules are also used to check that the previous clear state is not violated by events seen in the minimal camera view.
  • the agent going away from the camera may be trying to take a picture, or the agent disappearing from the camera view may be opening a window for another person to get in.
  • the design component (205) highlights that the starting state/value of the skill can be preset as part of initial configuration by the agent/ supervisor for the given WFH environment based on evaluated data protection threat in the given WFH environment.
  • this said initial rollout evaluation is not done by the AI engine; it is the agent's evaluation of his own WFH environment. Initially, based on the agent's work skill, he will have a skill value tied to the call intent he serves. During initial configuration, a change (reduction) to the agent's technical skill can be made purely based on the data protection safety of his work environment as given by the agent/supervisor. This said skill modification is always a reduction and happens only during the initial rollout.
  • this said skill reduction may or may not happen, purely based on the indication given by the agent/supervisor about the agent's WFH environment's safety w.r.t. personal data protection. If the environment is safe, or the agent has no idea about his WFH environment w.r.t. personal data protection, then the agent will not insert any threat indication value about his WFH environment into the system, and the said agent skill will not be reduced. If the agent evaluates his environment, or the supervisor evaluates or approves the environment safety level, and inserts a threat value into the system, then the skill will be reduced based on the severity of the inserted threat value: the higher the threat value, the proportionately higher the skill reduction, as sketched below.
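A minimal sketch of this proportional initial-rollout reduction; the linear scaling and the 0-to-10 threat scale are illustrative assumptions:

```python
from typing import Optional

def initial_skill(base_skill: float, reported_threat: Optional[float],
                  max_threat: float = 10.0) -> float:
    """Roll-out skill: no threat indication means no reduction; otherwise
    reduce proportionally to the agent/supervisor-reported severity."""
    if reported_threat is None:   # environment safe, or agent has no idea
        return base_skill
    reduction = base_skill * (reported_threat / max_threat)
    return max(base_skill - reduction, 0.0)

print(initial_skill(100.0, None))  # 100.0 - no reduction
print(initial_skill(100.0, 7.0))   # 30.0  - high reported threat, large cut
```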
  • the system also checks whether the current agent on-boarding day is an approved day for the agent to carry out his work duties. This is done in feature (202). That is, whether the agent has taken leave and is trying to log in, or is trying to access the system on a public holiday, etc., is checked as part of the daily on-boarding process.
  • the skill reduction during initial configuration gives the appropriate indication to the call routing engine of the system. Basically, for a given intent related call, in this solution, available agents of lower skills are not picked as a preferred agent to handle the call and always available agents of higher skills are picked.
  • in Fig. 3B, another sub component of the solution is highlighted.
  • the functionality and features of the solution to enforce personal data protection in WFH even when there is no environmental data protection threat are highlighted using features (208, 209, 210, 214, 215, 216 and 217) as in Fig. 3B and Fig. 3C.
  • the main solution has appropriate features to ensure data protection security in WFH environment even when no environment related data protection threat present in the environment.
  • the said environmental data protection threat refers to data protection threat originating from agent’s work environment surrounding the agent. Even when there is no environment related data protection threat, there has to be continuous checking and threat elimination mechanism especially to catch agent behaving in a malicious manner and also to detect whether safe environment is changing to a data protection threat state. This is the main objective of this solution sub component.
  • the feature (208) refers to continuous monitoring of the agent environment so that any data protection related threat components are continuously identified even if the current environment does not have any data protection threat.
  • such a design philosophy is used so that the agent can be warned ahead of time to clear the threat from the environment in case a call lands at that agent in the future.
  • the said warning frequency is very low if the system does not predict that a call needing personal data will arrive at this data protection threat related environment. Nevertheless, even if no call arrival to this said environment is predicted, at least one warning will be sent if the AI engine identifies a data protection threat in the environment.
  • the AI engine can check whether, if someone has entered the room, that person has since left the room.
  • the total person count per WFH environment is detected and kept by the AI engine. Even if an additional person's face cannot be seen at a given moment using the restricted view of the camera, the AI engine can detect the total number of people who entered through the single entry point in the restricted camera view (i.e. the door of the WFH area) and the number of people who have left through that same entry/exit point. Based on this method, the AI engine can determine the total number of people in the WFH environment even with the restricted camera view, as sketched below.
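A minimal sketch of this entry/exit head counting from door events seen in the restricted camera view; the event format is an assumption:

```python
def room_occupancy(door_events, initial_count: int = 1) -> int:
    """Track the head count in the WFH room from the single door in view.

    initial_count: the agent is assumed present at the start.
    """
    count = initial_count
    for event in door_events:
        if event == "enter":
            count += 1
        elif event == "exit":
            count = max(count - 1, 0)
    return count

# Two people entered, one left: the agent plus one other person remain.
print(room_occupancy(["enter", "enter", "exit"]))  # 2
```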
  • the threats are because of people and/or automated recording programs running.
  • this sub solution gives priority to detect these objects/additional people and any malicious recording by agent.
  • the solution implemented continuously checks whether any other camera in the WFH environment is integrated to the system and set to recording ‘Off’ mode.
  • This feature is about allowing the agent to access the personal data in his agent desktop work application only during a call handled by the said agent and when it is detected accurately the said call needs a given customer’s personal data to be viewed by the said agent to handle the call. In all other cases, the system prevents the customer personal data from being viewed by the agent from his agent desktop work application.
  • the feature (210) highlights the system's ability to detect, during a call or otherwise, and whether or not the personal data is being projected on the screen, that the agent is quoting the personal data values verbally and that the quoted personal data can be heard by another person.
  • the said any other person is a threat only when this person is in the audible range from the source of sound (i.e. agent).
  • the AI engine has the capability to estimate, from the recorded sound/speech received from the agent desktop containing the personal data, in which area around the agent the agent's voice can be clearly heard. If the said sound's finalized audible area cannot be fully seen by the AI engine from the restricted camera view, or the border of the area extends beyond the door in the AI engine's camera view, or there is another human in the said audible area who can be seen by the AI engine using the restricted camera view, or any combination thereof, the AI engine considers this act of the said agent (quoting the data via voice) a threat and imposes countermeasures such as significantly reducing the skill of the agent, reducing the agent's integrity points and blocking the agent work screen by requesting the smart surveillance application; this malicious behavior will also be considered during re-on-boarding of the agent.
  • the AI engine has voice energy degradation data (the rate at which voice amplitude diminishes in the given agent WFH area with distance from the agent/source) and the distance to the agent's door from the agent's work location within his WFH environment, and it uses such data together with general speech audibility level guidance to detect whether the agent's voice can be heard by any other person.
  • the AI engine will be able to draw an audibility area around the agent, check whether the seen door is within the area, and use this in its decisions regarding the said threat. In general, if the door lies within the estimated audibility area, the AI engine will consider that the threat is present.
  • the said audibility area is derived by the AI engine based on the sound level/amplitude captured from the agent and the voice degradation rate (as a function of distance from the voice source) that it has a priori for the given WFH agent environment, as sketched below.
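A minimal sketch of the audibility-area check, assuming a linear per-metre voice decay model and a generic audibility threshold; both are illustrative stand-ins for the a priori degradation data the document describes:

```python
def audible_radius_m(source_db: float, decay_db_per_m: float,
                     hearing_threshold_db: float = 40.0) -> float:
    """Distance at which the agent's speech falls below the audibility threshold."""
    if source_db <= hearing_threshold_db:
        return 0.0
    return (source_db - hearing_threshold_db) / decay_db_per_m

def door_in_audible_area(source_db: float, decay_db_per_m: float,
                         door_distance_m: float) -> bool:
    """The threat condition: the door lies within the estimated audibility area."""
    return door_distance_m <= audible_radius_m(source_db, decay_db_per_m)

# Radius = (65 - 40) / 6 = ~4.17 m, door at 3.5 m -> True: impose countermeasures.
print(door_in_audible_area(65.0, 6.0, 3.5))
```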
  • if the solution system has no capability of detecting the audible range of the personal data quoted or voiced out by the agent, then when quoting of personal data is detected, the system imposes counteractions (as mentioned) such as significantly reducing the agent's skill, reducing the agent's integrity points and locking the screen; this malicious behavior will also be considered during re-on-boarding of the agent.
  • agent on-boarding for the next day will be prevented because the agent screen is locked.
  • a state will remain in the system tied to the given agent ID for such malicious act and system will prevent the user from gaining access to his work desktop application, until the supervisor clears such state and re-on boarding is approved.
  • without viewing the personal data or getting this personal data projected on the screen, the agent is able to handle a call that needs personal data.
  • the personal data is not voiced out during the call but the agent is getting the pre-written data from some other place.
  • This is also considered as a malicious event and the system will impose the same measures of severe skill degrading, integrity point deduction, screen locking and prevention of re-onboarding.
  • This said agent action is a malicious event because the agent might have registered the data in some other device or paper and using it for reference during the call without retrieving the data onto the screen.
  • the AI engine detects such copying to another device and re-using.
  • one of the core features of the AI engine is that, using speech recognition methods, it is able to match voice phrases to the personal data conveyed and subsequently track the malicious behavior of the agent.
  • the solution always reduces more skill value when malicious act of the agent is detected.
  • the next component is (214, 216) as described in Fig. 3C.
  • the AI engine will track and detect when a given call that needs personal data to be accessed on the screen is being handled with a hold phase, or is taking a very long time to complete.
  • the AI engine will get information from the DB it is associated with about the hold time when the personal data was projected on the screen (i.e. the start and end times of the hold within the call). The AI engine is also able to get the call duration timing (i.e. the start and end times of the call) for a call which used personal data on the screen. The said DB will additionally have the time at which the personal data was projected on screen during the said call (i.e. the start and end times of the personal data projection). Using these multiple pieces of information, the AI engine can detect whether the hold time was long, whether the hold happened during the personal data projection time, and whether the call took an unusually long time to complete, as sketched below.
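A minimal sketch of the timing checks derived from those DB fields; the thresholds are illustrative (the document derives accepted values from past data or preconfigured information), and all times are seconds from a common reference:

```python
def suspicious_hold(hold_start: float, hold_end: float,
                    proj_start: float, proj_end: float,
                    call_start: float, call_end: float,
                    max_hold_s: float = 60.0, max_call_s: float = 900.0) -> bool:
    """Flag a call when a long hold overlaps the personal data projection
    interval, or the call itself ran far longer than usual."""
    hold_long = (hold_end - hold_start) > max_hold_s
    overlap = max(0.0, min(hold_end, proj_end) - max(hold_start, proj_start))
    call_long = (call_end - call_start) > max_call_s
    return (hold_long and overlap > 0) or call_long

# A 120 s hold entirely inside a 300 s personal data projection: flagged.
print(suspicious_hold(100, 220, 50, 350, 0, 600))  # True
```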
  • these check point events correlate to an act such as a malicious agent trying to copy personal data onto paper or some external device during a call.
  • the AI engine will track this as a malicious behavior by the agent and reduce the skills. By such reduction of the skills, for future calls the AI engine based system will avoid this agent (as the skill is reduced for a given intent).
  • the system blocks the personal data on the screen.
  • the said hold time is an explicit hold state inserted by the agent into the system. Once hold is completed, the agent will press un-hold.
  • the said accepted hold time could be derived by the AI engine from past data or it could be obtained from a preconfigured information.
  • the AI engine is able to detect hold/silence period by real time analysis of the speech recorded during a call, which uses personal data.
  • the solution leverages such capabilities of the AI engine.
  • the AI engine, as part of this solution, will be able to analyze speech samples of normal agent conversations that use personal data and compare them against a speech conversation which has an unusual silent period, as in a malicious event. It will then flag a long silent period as an unusual event if the silence duration value is unusual. To conclude that an event is unusual, it will check many past speech samples of conversations in which the agent used personal data.
  • the AI engine will specifically track the silent periods in the past data. These past speech samples used for analysis can be past real time data. A sketch of such an outlier check follows.
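One simple way to flag an unusual silent period against past samples is a mean-plus-k-standard-deviations cutoff; this outlier rule is an illustrative stand-in for the AI engine's learned notion of 'unusual':

```python
import statistics

def unusual_silence(current_silence_s: float, past_silences_s: list,
                    k: float = 3.0) -> bool:
    """True when the live call's longest silence is an outlier versus
    silences mined from past personal-data calls."""
    mean = statistics.mean(past_silences_s)
    stdev = statistics.pstdev(past_silences_s)
    return current_silence_s > mean + k * stdev

history = [4.0, 6.5, 5.2, 7.1, 4.8, 6.0]  # silences (s) from past calls
print(unusual_silence(25.0, history))  # True: possible copying, block screen
```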
  • the AI engine detects this as a malicious act by the agent.
  • the AI engine will use big data such as silence periods within a call, time period when personal data is projected on screen, the time duration of personal data projection etc to identify the threat event such as copying done by the agent.
  • the system updates the hold time duration into the DB at the end of the call.
  • the solution can be implemented in a way whereby the hold time is updated into the DB immediately when the hold time is completed by un-hold during the middle of the call.
  • the AI engine is able to track in real time that the hold time has been exceeded, by querying the DB after the time it usually takes to update the call hold time. If the call hold time is not updated within the usual time, then the AI engine, even during the call, will track the malicious act of the agent using this yet another method.
  • the said usual time is detected by the AI engine using some averages of hold time of past data. In such case, the system will block the agent screen if the hold is significantly high during the time when personal data is projected on the screen.
  • upon detecting silence (i.e. no conversation) while personal data is projected on the screen, the AI engine will immediately block the screen (by using the smart surveillance application) and reduce the agent skill.
  • the AI engine is able to track and immediately block the screen. Basically, such measure can be taken if the personal data is a very important one. In such case no time boundaries are used to decide on the counter action. If any unusual long silence time periods and/or hold/un-hold and/or long call conversation time then counteraction such as block screen and skill reduction will be in place.
  • the component (215) identifies the case where, when personal data is exposed on the agent screen during a call involving the agent, the malicious agent starts another session via Zoom or WebEx, shares the agent screen having the said personal data, and passes the personal data related video to another attacker. It is considered that such Zoom/WebEx session creation and screen share events can be detected by the AI engine.
  • the video application that sends data to the AI engine also sends the screen video images to the AI engine when personal data is projected on the agent desktop screen. If such a malicious act is detected, the AI engine will block the agent screen using the smart surveillance application, reduce the skill significantly, reduce the agent's integrity points and prevent the agent from re-onboarding to the agent desktop work application the next day. In general, these screen blocking related countermeasures are activated by the AI engine using other related applications, such as the smart surveillance application, as illustrated in the previous embodiments.
  • the next component is (217).
  • this component's function is such that the AI engine is able to detect, for the calling customer, which personal data has to be shown on the screen. The AI engine subsequently informs other applications not to project any other personal data that is not related to this call.
  • the components (222, 223) identify the event that a call that needs personal data for its handling will land at a particular agent, and activate prevention of the call landing if the said agent's WFH environment is not safe at the moment of the said prediction.
  • One such counter measure or prevention measure is skill reduction and another such prevention method is issuing a warning to the said agent to clear the un-safe environment.
  • how the system predicts that a call related to a given intent that needs personal data will land at a given agent can be broken into two parts: one, identification of the probability of the given intent related call landing in the system and being given to the agent who handles the intent; and two, the probability that this intent call will need personal data.
  • the AI engine can also detect this event by using big data analytics rather than identifying these probabilities individually. That is, the big data should contain all sorts of data that will give the AI engine insights to correlate the arrival of a call of a particular intent that needs personal data with the arrival of the said call at a given agent. Such details were revealed in some previous embodiments of the present invention.
  • the skill is restored back if the WFH environment is considered safe again after initial unsafe condition as long as compliance from unsafe to safe is shown by the agent.
  • Such skill management is continuously done by the system when a call has landed in the system for a given intent and needs personal data protection and has not landed at any agent yet. This dynamic mechanism helps to find the appropriate agent.
  • the warning is sent to all agents whose environments have a data protection threat. In a case where it cannot be estimated whether the call needs personal data protection, the warning is again sent to all relevant agents whose WFH environments have safety issues w.r.t. personal data protection when such a call hits the system.
  • the above mentioned skill restoration after environment safety is cleared is highlighted by the component (224).
  • the skill restoration to a value near the full amount should happen whenever the WFH environment is considered safe and the agent has complied with the first warning. If the agent has not complied with the first warning and performs the environment safety restoration only after the 2nd or 3rd warning, then the skill restoration value is not significant.
  • the system uses a method where skill restoration happens based on agent’s compliance to work environment safety rules.
  • the skill restoration value is decided by the system based on the compliance level of the agent to WFH environment safety w.r.t. personal data protection.
  • the system activates certain functions and features to ensure the call continues with the least disruption and also in a safe environment.
  • the component (228) uses a feature where the system even after the call lands to an agent continues to monitor the environment safety w.r.t. personal data protection and including any malicious behavior by the agent. This is to ensure that if data protection threat detected then transfer to another agent or some AI assistance or another agent assistance can be planned.
  • in component (229), the solution principle is highlighted whereby the system applies certain countermeasures when the personal data protection threat in the WFH environment is not so severe.
  • the AI engine uses the AI assistant to be involved in the call and convey the personal data in speech mode to an agent who is in headphone mode, the said agent also being in the data protection threat environment.
  • the component (229) also highlights the case where, when the threat is not severe in an agent WFH environment, another available less skilled agent is incorporated into the call to help voice out the personal data information to the agent. Basically, within the same session, either an AI assistant or another available less skilled agent is engaged to help the agent. The reason for this is that the threat is not severe and the system identifies that the call can continue in the current agent environment. For example, if there are suddenly many people in the agent WFH environment, the threat can be severe, and in such a case the system will try to transfer the call to another agent completely. Transfers generally cause delay and performance hiccups.
  • the component (230) highlights the case where the agent receives the personal data via headphone and, when a threat is present in the environment, tries to voice out or repeat what was received from the other virtual or real agent.
  • the agent may be using some hidden recording device and recording such data.
  • the call will be transferred to another agent of related skill tied to the intent tied to the call.
  • the system will try to completely cut off the agent's involvement in the call and transfer the call to another agent if possible.
  • the component (231) highlights a case where the call cannot be transferred to another agent because there is no other agent available to handle the said call tied to a given intent.
  • the call may be put on hold if another appropriate agent will be available soon (i.e. a given busy agent will be freed within a short time); otherwise, the call will be put in a high priority callback state.
  • the system judges whether the remaining hold time and subsequent handling by another agent would severely degrade the quality of the call. If quality degradation is detected (i.e. a call with excessive hold time will impact the QoS), the call will be put in the callback state, and the said callback call will join the queue to get an agent like other calls.
  • the component (232) highlights the sub feature that, when a data protection threat is detected in an agent's WFH environment (an environment based data protection threat and/or a malicious agent), the skill of the agent is reduced so that an incoming call is prevented from landing at the said agent. This lower skill will generally ensure such an agent is not picked to serve the call. If such a feature were not present, the call would unnecessarily have to be transferred, with additional delays due to session update and finding a suitable agent again.
  • the agent integrity points are reduced whenever data protection threat is identified (this can be environment related and/or the malicious agent based).
  • these integrity points are used to re-evaluate the agent's entry back into use of the agent desktop application.
  • the supervisor will be able to clearly distinguish the behavior and trustworthiness of an agent using the integrity points. Reduced skill value does not simply imply less integrity. It could also mean that the agent has reduced technical skill to handle a given intent based call. Thus this integrity point is a dedicated metric to highlight the trustworthiness of the agent and it also plays an important role in this solution.
  • Fig. 3F illustrates the component specifically used for re-onboarding the agent after his desktop work application has been blocked because of a high data protection threat event in his environment, or because the agent himself has behaved maliciously, or a combination thereof.
  • the sub component (236) highlights how very strict re-onboarding security checks are done before allowing the agent to re-onboard.
  • the solution principle here is to further tighten the security after a data protection threat incident.
  • a QR code is projected on the agent screen after a certain amount of time.
  • the QR code on the screen needs to be read by the agent phone’s smart surveillance application (smart surveillance application mentioned in Fig. 1) and sent to system.
  • the QR code reading is integrated with the smart surveillance application itself.
  • the phone that has the smart surveillance application installed can be used to read the QR code and send it to the solution system (or a phone that knows the URL of this application).
  • this URL can be sent using out of band means such as email and in an encrypted manner.
  • This re-login URL is only used for re-login/re-onboarding procedure and it is dynamic. There will not be a single URL system wide for this re-login/re-onboarding and it changes dynamically. Due to such measures, the security is further enhanced because a malicious person cannot access the smart surveillance application nor the dynamic URL so easily and send the QR code back to the solution system and thus impersonate the agent.
  • the smart surveillance application mentioned in Fig. 1 which is running in the agent’s phone has the capability to read the QR code and send the information to back end.
  • after the supervisor approves re-login, the system will send a login code or password as an SMS to the agent's smart phone. This password is sent to the said agent's mobile phone number to ensure no malicious person is able to get it. Before this said password is sent, the many checks discussed next will be done and have to be passed. The passed state is then shown to the supervisor, who has to approve the re-login. After that, based on the supervisor's agreement, the system generates the re-login password and sends it to the said agent whose screen got locked/blocked. The artifacts of this flow are sketched below.
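A sketch of the security artifacts generated in this flow; the delivery steps (encrypted email, QR display, phone scan, checks, supervisor approval, SMS) are noted as comments since they depend on deployment, and the token formats and domain are illustrative assumptions:

```python
import secrets
import uuid

def issue_reonboarding_artifacts():
    """Generate the per-session artifacts for strict re-onboarding."""
    # 1. Dynamic per-session re-login URL, sent out of band via encrypted email;
    #    there is deliberately no single system-wide re-login URL.
    dynamic_url = f"https://relogin.example.invalid/{uuid.uuid4()}"
    # 2. One-time QR payload projected on the locked agent screen; the phone's
    #    smart surveillance application scans it and posts it to the dynamic URL.
    qr_token = secrets.token_urlsafe(24)
    # 3. Issued only after the posted token matches, all security checks pass
    #    and the supervisor approves: a 6-digit re-login code sent by SMS to
    #    the agent's registered mobile number.
    sms_code = f"{secrets.randbelow(10**6):06d}"
    return dynamic_url, qr_token, sms_code

url, qr, code = issue_reonboarding_artifacts()
print(url, qr, code)
```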
  • the sub component (237) highlights that the system also has important design components such as restoration. If the skill is reduced, it is important to restore it to the original value; if the integrity points are reduced, they also have to be restored based on good behavior. Skill restoration happens when compliance is detected after a warning, or when no incident happens for a certain time period. Integrity points are restored after a longer time period. With this design sub component in the solution, the system will have skilled agents available to handle the call load, and the system will not be illogically/inappropriately depleted of agents due to the threat management protocols implemented.
  • the AI engine's software architecture is revealed according to one exemplary illustration.
  • this is not the only software architecture for the AI engine to realize the current inventive solution.
  • for the said AI engine's exemplary software architecture, Fig. 4 is referenced.
  • the complete software architecture of the AI engine is illustrated by (300). It basically follows the open systems interconnection (OSI) communication protocol architecture, where all the applications reside on the top layer of the software communication architecture framework.
  • the application components that are involved in the AI specific framework are: (301), (302), (303), (304), (305), (306), (307), (308), (310), (311) and (313).
  • to communicate with various other modules, such as the smart surveillance module (122) mentioned in Fig. 1, the AI engine needs to generate the data for communication using the said individual application modules and send this data using the communication framework, highlighted using communication protocol software components (315), (316) and (317).
  • the AI engine sends this said data to the smart surveillance application so that the listener in smart surveillance application can fetch these in real time and immediately execute the counteractions needed to manage the data protection threat.
  • the component (315) enables various data, such as the modified skill value, agent ID, session ID, data protection threat countermeasure action, updated integrity points for an agent and the related data protection threat capture video, to be suitably arranged in order for it to be sent from the core AI application modules using an application layer transport protocol such as hyper text transfer protocol (HTTP).
  • the said data arrangement for the said interface, in one exemplary manner, highlights the Javascript Object Notation (JSON) format of data within the API body.
  • the component (315) is an application to call the various sets of interface APIs between AI engine and the smart surveillance application.
  • the interface communication session establishment and data transport triggering is done by the interface application running in component (315).
  • the solution does not restrict to HTTP as the only suitable application transport protocol for the said interface communication. If HTTP is used, various Representational State Transfer (REST) communication models can be used to enable transfer.
  • the layer (315) specifically handles the appropriate data formation for the HTTP session and thus exposes various APIs to attain this objective. Using these APIs, various sets of data are sent from the AI engine applications to the smart surveillance application. By hosting and exposing such APIs, the component (315) can use various API names to send various types of data from the AI engine.
  • the data can be sent via an API within one HTTP session or multiple HTTP messages for various APIs can be sent via the single HTTP session.
  • the API name is generally transported using the header of the HTTP protocol.
  • the component (315) is an application that calls the HTTP service including HTTP session establishment service.
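  • as an illustration only, the following minimal sketch shows how such an interface message might look; the endpoint URL, API name and JSON field names are invented for illustration and are not part of the disclosed interface, and the widely used Python requests library stands in for whatever HTTP client an implementation would use.

```python
import requests  # third-party HTTP client, used here for brevity

# Hypothetical payload for the AI engine -> smart surveillance interface.
# The text lists the kinds of data carried; these field names are invented.
payload = {
    "agentId": "AGT-1042",
    "sessionId": "SES-88217",
    "modifiedSkillValue": 3,
    "countermeasureAction": "TRANSFER_CALL",
    "updatedIntegrityPoint": 72,
    "threatCaptureVideoUrl": "https://example.invalid/captures/ses-88217.mp4",
}

# The API name can be carried in an HTTP header, as the text notes; one HTTP
# session may carry a single API message or several such messages.
response = requests.post(
    "https://surveillance.example.invalid/api",
    json=payload,
    headers={"X-Api-Name": "reportThreatCountermeasure"},
    timeout=5,
)
response.raise_for_status()
```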
  • the actual implementation of the HTTP session is handled by the component (316).
  • the communication layer (316) provides signaling and data packet transportation services for various communication protocols such as HTTP, the Transmission Control Protocol (TCP), Internet Protocol version 4 (IPv4), Internet Protocol version 6 (IPv6) and IP Security (IPSEC).
  • the data packet will be processed by suitable physical layer protocols before being transmitted via the communication medium.
  • any AI engine needs its evaluations/predictions to be done quickly, and this requires substantial hardware such as large cache sizes and processors with high processing speed. It is assumed that such additional hardware tools are provided by component (309).
  • the AI engine framework has the capability to dynamically write code. When the AI engine needs code to be dynamically generated, an Integrated Development Environment (IDE) can be used.
  • when the AI engine of the solution picks a certain algorithm from its machine learning suite to activate for learning and no libraries are present for this algorithm, the algorithm can be coded using the said IDE within the machine learning framework.
  • the application of the AI framework (300) is segregated into many AI functions. These said functions are mapped one-to-one onto various AI application software modules.
  • such separate application modules in the software architecture design reduce maintenance effort (a new build deployed as an upgrade to a certain function does not disrupt other untouched modules), improve debugging (when an issue is identified, only the related code space has to be investigated), enable reusability by forming applications from the needed set of sub functions/modules, and allow smoother plug and play for different types of customization. For example, some customers that use this solution may not need every AI engine feature and may only need certain aspects. Thus compartmentalization of the AI engine features into these various sub modules is very useful for customizing this large solution for various customers with slightly different needs.
  • the component (301) refers to the AI engine's involvement in identifying/predicting the arrival of a call of a given intent to an agent.
  • This said prediction uses big data and machine learning algorithm(s) to identify the suitable machine learnt mathematical model that can enable such prediction accurately.
  • the said machine learning algorithm will identify, from the big past data, the positive or negative correlation between the labelled data and the needed output, and use such identified correlation to form the machine learning model that can accurately predict an outcome such as a given intent call arriving to an agent.
  • the machine learning algorithm also has the capability to refine the machine learnt model to improve its estimation capability; a minimal illustrative sketch of such supervised prediction follows.
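  • as a hedged illustration of this kind of supervised prediction, the sketch below fits a logistic regression (a stand-in for whichever algorithm the machine learning suite selects) on invented features; the feature set, data and library choice (scikit-learn) are assumptions, not part of the disclosure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training rows: [intent id, queue length, agent skill, hour of day];
# label is 1 if a call of the given intent landed at the agent, else 0.
X_train = np.array([
    [2, 5, 8, 10],
    [2, 1, 8, 14],
    [1, 7, 3, 9],
    [1, 0, 3, 16],
])
y_train = np.array([1, 1, 0, 0])

# The fitted model captures the positive/negative correlations between the
# labelled inputs and the outcome, as the text describes.
model = LogisticRegression().fit(X_train, y_train)

# Real-time estimation: probability that an incoming call of intent 2 will
# land at this agent given the current system state.
p_arrival = model.predict_proba(np.array([[2, 3, 8, 11]]))[0, 1]
print(f"predicted arrival probability: {p_arrival:.2f}")
```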
  • the application component (301) can leverage data and tools that reside in the software framework (300) and can be used by the AI engine.
  • the said data to train and identify the machine learnt model for application (301) can be accessed by getting the data from the DB. The retrieved data can then be used in cache memory or stored in a file during the machine training/learning process.
  • for such DB access the component (312) is used. This component (312) in general helps to insert real time data obtained from the WFH environment, or from any other AI interfacing application, into the DB, and also to retrieve data from the DB for the machine training/learning purpose and for object/image recognition during agent on-boarding, data protection threat detection during the on-boarded time for the given day, and re-onboarding.
  • the software component (314) highlights the AI framework that is useful in identifying the appropriate finalized machine learnt model for application component (301) or any other AI application component residing in this software architecture (300). Some of the best fitting mathematical models defined in (314) are evaluated to identify the suitable machine learnt model for a given prediction problem; a suitable machine learning algorithm can be used for this identification. The component (314) provides such a library of algorithms and machine learnt mathematical models that can be accessed to enable the derivation of a prediction model.
  • the application component (302) is another independent AI based application component. This component's objective is to identify, in real time, images and objects that can cause data protection threats. To achieve this in real time, component (302) could use a well developed deep learning model for image recognition or leverage existing APIs/libraries provided as part of the software tools in this AI framework. This function is activated as part of on-boarding the agent, to let the agent access his work desktop application only after his environment state is considered safe for work to start and continue.
  • the AI engine uses already given big data of images/objects to identify various objects and scenes that correlate to a threat. Some already given objects could be the agent's face, an image of the door in a locked state, an image of a phone where the back camera is sealed, and many more.
  • the application component (302), in addition to detecting the objects, will use rules to evaluate the threat during on-boarding.
  • the component (314) may expose an image recognition API where, using this API together with the past/training data for image recognition and the currently captured image, the input image can be accurately detected and the response returned. It is considered that the AI component (302) will have an application that uses the image recognition API hosted in component (314). Alternatively, the said image recognition can also be fully done in component (302), where suitable deep learning algorithms can be built and deployed to detect certain objects of a given type. How component (302) achieves object detection is outside the scope of this solution; AI is a very mature technology, and this invention uses its already available rich features exposed as implementations, AI APIs or AI algorithms documented in many widely available materials. The rule evaluation over detected objects is sketched below.
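  • since the object detection itself is outside the scope, the sketch below assumes a detector has already returned object/scene labels and only illustrates the rule evaluation over them during on-boarding; the labels and rules are invented for illustration.

```python
# Hypothetical labels returned by an image recognition API or deep learning
# model for the latest webcam frame of the agent's WFH environment.
detected = {"agent_face", "door_unlocked", "mobile_phone"}

# Invented on-boarding rules: each maps a detected object/scene to a violation.
ONBOARDING_RULES = {
    "door_unlocked": "door must be in the locked state",
    "mobile_phone": "phone back camera must be sealed and out of view",
    "unknown_person": "no person other than the agent may be present",
}

def evaluate_onboarding(labels: set[str]) -> list[str]:
    """Return the list of rule violations; an empty list means safe."""
    if "agent_face" not in labels:
        return ["agent face not detected"]
    return [msg for obj, msg in ONBOARDING_RULES.items() if obj in labels]

violations = evaluate_onboarding(detected)
print("safe to on-board" if not violations else violations)
```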
  • the application component (303) uses the AI engine to help identify malicious agent behavior that poses a threat to personal data protection.
  • the application (303) has the ability to track whether the agent voices out personal data during a call or outside a call. It employs the AI engine speech recognition framework available in component (314) to identify whether the agent is trying to voice out any sort of personal data at any time during his work time. It also monitors whether personal data is being projected on the agent screen while such a malicious act is being conducted by the agent; if so, the agent screen will be locked.
  • the application (303) achieves this locking by interfacing with the smart surveillance application mentioned before. Again, AI speech recognition is a mature technology; the application (303) simply integrates a suitable speech recognition algorithm to achieve the objective of detecting the said malicious agent behavior, as sketched below.
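  • the sketch below illustrates the detection step only, assuming an upstream speech recognition component has already produced a transcript; the transcript and the personal data patterns are invented, and a real deployment would use far richer detectors.

```python
import re

# Assume an upstream ASR component produced this transcript of the agent's
# microphone audio; speech recognition itself is treated as a black box here.
transcript = "okay so the card number is 4111 1111 1111 1111 got it"

# Invented patterns standing in for "personal data being voiced out".
PERSONAL_DATA_PATTERNS = [
    re.compile(r"\b(?:\d[ -]?){13,19}\b"),             # card-number-like digit runs
    re.compile(r"\bpassport\s+(?:no|number)\b", re.I),
]

def voiced_personal_data(text: str) -> bool:
    """Return True if the transcript appears to contain personal data."""
    return any(p.search(text) for p in PERSONAL_DATA_PATTERNS)

if voiced_personal_data(transcript):
    # In the described system this would trigger a screen lock via the
    # smart surveillance application interface.
    print("malicious voicing detected: lock screen")
```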
  • the application component (304) has the functionality to continuously track the environment safety throughout the agent's work time. This component (304) has the capability to track with highest priority an environment where a call is already engaged to an agent and personal data needs to be projected, or is currently projected, on the screen. If component (304) uses parallel thread processing to detect threats in all the WFH environments, such high priority environments will be handled in the initial batch of parallel processing of this threat evaluation.
  • the application components (308, 310) are where the AI engine does skill management. When a threat is present, the AI engine will ensure the skill is reduced appropriately so that the call does not land at the given threat environment.
  • the skill reduction does not take into account the skills of the other currently available agents for the given intent; the skill reduction amount is primarily based on the threat level in the said agent environment.
  • the skill reduction as in component (310) and its management are done by the AI engine because it has the environment threat identification means and can quickly use this information to modify the skill. The revised skill and the related agent ID are informed to the smart surveillance application. Also, if a call is in a given environment and a data protection threat happens, skill reduction is again done.
  • the application component (307) highlights the specific AI engine feature where a virtual AI assistant is enrolled or incorporated into a call session if there is no agent to whom a call from a data protection threat environment can be transferred, or if making the call wait for a free agent would significantly impair call quality.
  • the AI framework uses the text to speech function of component (307) to support, with personal data, an agent who is in a data protection threat environment. The said data is conveyed via speech when the agent is in headphone mode.
  • the said AI assistant of component (307) is incorporated into the session and the AI assistant will voice out the personal data information from the DB in the speech mode.
  • the component (307) is able to perform this text to speech service using deep learning mechanisms that are well known.
  • these said deep learning mechanisms are in general formed using neural networks.
  • the training for this model happens using various speech samples that denote the text.
  • this solution does not prescribe which mechanism in the AI solution space should be used for text to speech conversion; it simply highlights that the text to speech happens using some form of AI deep learning.
  • the component (307) is also responsible for ensuring the AI assistant is incorporated into the session, and for handling the calling customer's personal data from the DB so that it is correctly converted to speech and conveyed to the said agent.
  • the AI component (306) does AI based agent face detection, agent liveliness detection and agent environment threat detection related to personal data when the agent is planned to be re-onboarded after a threat incident. This component is activated when the agent requests re-onboarding by scanning the QR code using the smart surveillance application.
  • the said environment related personal data threat detection function of component (306) identifies various objects that pose a threat and uses rules to check environment safety by incorporating these detected objects into the rules.
  • skill restoration is handled by the component (311).
  • if a given agent's environment did not have any environment data protection threat, the given agent was not detected acting maliciously, and such a compliance state persists for a certain number of days, the environment is considered safe and the skill is restored back to its original value.
  • this compliance detection and skill restoration are done by the component (311). If within this time period some degree of improvement w.r.t. personal data protection is seen when compared to previous such time periods, partial restoration of the skill will happen; such detection is also done by the component (311).
  • the component (305) handles the agent integrity points restoration.
  • the component (305) will track agent compliance as component (311) does, but the restoration happens in steps; there is no full restoration after a single evaluation time period. Full restoration happens only after many such compliance periods, as sketched below.
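  • a minimal sketch of the two restoration behaviors is given below; the thresholds, step sizes and function names are invented, since the text only fixes the qualitative behavior (full/partial skill restoration, stepwise integrity restoration).

```python
def restore_skill(original: int, current: int, compliant_days: int,
                  improved: bool, full_after: int = 7) -> int:
    """Component (311) style sketch: full restoration after a sustained
    compliance period, partial restoration when only improvement is seen."""
    if compliant_days >= full_after:
        return original                                   # full restoration
    if improved:
        return min(original, current + (original - current) // 2)  # partial
    return current

def restore_integrity(original: int, current: int,
                      compliance_periods: int, step: int = 5) -> int:
    """Component (305) style sketch: integrity points return in steps and
    reach the original value only after many compliance periods."""
    return min(original, current + compliance_periods * step)
```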
  • the component (313) handles the identification of the net data protection threat value at a given moment and plans the appropriate countermeasures based on the current state of the system, such as the available agents, calls in queue for any agent of related skill, whether personal data is being projected on the screen, the time spent in the call when the threat happens, etc.
  • the appropriate countermeasures are not static; they depend on what the threat is. This solution generally plans countermeasures based on the severity of the threat.
  • the first scenario under consideration is about the needed configurations done by the relevant stakeholders before the system embedding the solution rolls out. This scenario also highlights how the AI system uses these initial configurations to subsequently plan a suitable initial state for its system variables.
  • the Fig. 5A can be referenced for this.
  • the component (500a) highlights that initially the agent's face images have to be given to the back end. It should be understood that it is not one agent's face images that need to be given to the back end but a plurality of agents' face images, since the system comprises many agents. Although reference is made to a given agent for illustrating the scenario, the illustration is applicable to any agent who is part of the system where the solution is running. The said agent will be part of this AI based surveillance system and thus needs to be authenticated daily using face recognition; not just one image but many images of the agent have to be given to the back end to attain such face authentication with accuracy.
  • as part of the initial roll out, a threat level value for the agent's WFH environment is also inserted into the back end. This threat level value can simply be high, medium or low; it is the agent's interpretation of his WFH environment and has to be approved by the supervisor before insertion into the back end (how this value is inserted is outside the scope).
  • the said threat level is given to the system either by the agent or by the supervisor.
  • the agent face images are needed by the face recognition AI algorithm to authenticate the live captured agent face by accurate matching or similarity identification against these pictures/images.
  • the AI engine has to be trained to be able to match the seen real face against the many images.
  • many images of the same agent are considered to be given during the AI training/learning process so that a relevant machine learnt mathematical model is built.
  • This mathematical model possibly has the capability to use biometrics or some other means to be able to match the real face with the images.
  • the agent face images used for training also remain in the system so that the AI engine can continuously improve its learning where possible, using the initially uploaded face images used for training and also the real images captured while real face recognition is happening.
  • the details of the training and the finally achieved machine learnt model is outside the scope of this invention.
  • the solution may also use an open source AI API to achieve the face recognition.
  • the system will reduce the agent skill (or not reduce it, if no threat is given for the WFH environment) appropriately to a given starting value at the rollout stage for a given agent. This is done by the subsequent method in the flow chart, which is (501a).
  • the said rollout time/stage implies the very beginning of the system w.r.t. a given agent.
  • the agent will do daily login to access the system. This said daily login of agent into the system is not the initial rollout.
  • This initial adjustment of the skill is done so that an agent whose WFH environment security is not good can generally be avoided for call handling that needs personal data involvement.
  • the AI engine can detect threats, but the initial skill value is an input from the agent, which helps the system in addition to the AI detection of such events.
  • this solution system has the capability to assign priority to skills and use the priority.
  • This priority metric is only used to pick a skill/agent during decision of call handling when there are multiple agents with the same skill within an intent. In such case, the priority metric will be used as an additional metric to choose agent for call.
  • picking a skill for call implies picking the agent.
  • the priority for skills (or the agent) can also be assigned to the newly enrolled agent, as highlighted in step (501a).
  • the priority value is the agent integrity point. The integrity point reflects how the agent is keeping his WFH environment threat free w.r.t. personal data from the moment of getting access to the system since rollout. At the rollout stage, all agents will have the same value for integrity point.
  • the call routing logic will always be based mainly on the skill value; this priority value is an additional metric to choose an agent in case the skill value is the same for a group of agents within the same intent.
  • an additional check by the system is whether the chosen agent WFH environment is data protection safe before call assignment. If an agent is chosen based on skill, priority etc and the current environment is not safe, then this agent will not be considered for handling the call.
  • This said priority assignment can be done at the initial stage/rollout stage and during operation time for a given agent.
  • the priority assignment herein refers to assigning the integrity value to the priority value. Whenever an agent integrity point is updated, that implies the priority value is updated.
  • the current value of any skill related to an agent depends on: the starting value of the skill at rollout time; the data protection threat events that occurred at the system's periodic threat evaluation points; the data protection threat score at every system related evaluation point; additional threat evaluation points due to a new intent related call arriving into the system; system threat evaluation points from a no-threat to threat state transition in the agent WFH environment; and the corresponding skill reduction amount related to the threat value/score. Every time the system checks for a data protection threat for skill evaluation (such checks are frequency based, sudden new threat event based, and also done when a call has arrived and the given agent environment is one possible environment for the call), the system identifies the threat level.
  • the skill reduction possibility is also checked when the call is about to be transferred to another agent and there is a data protection threat in that environment; the said other agent's skill is also reduced.
  • if the threat level is high then the skill reduction amount is also high; if the threat level is medium, the skill reduction amount is medium; if the threat level is low, the skill reduction amount is low; and if the threat level is none, there is no modification to the skill.
  • the system will assign a priority value within same skill cluster for the given intent. Higher priority is given for skills that have higher integrity points (priority equals integrity point).
  • the integrity points are also reduced using the same method as illustrated for skills during operation time. Whenever the algorithm checks for the possibility of a threat and the deduction of skill due to a data protection threat, it also modifies the integrity points when skills are reduced. Agents of the same skill value are then prioritized based on the priority value when the system evaluates the usage of an agent for handling a call. If in a very rare case the integrity points/priority values are also the same, then an agent is picked randomly for call handling among the cluster of agents with the same skills and same priority points. The reason the integrity point is used for priority is that at the rollout stage all agents have the same integrity point, so its current value definitely highlights which agent is more trustworthy as far as data protection compliance is concerned. The selection logic is sketched below.
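  • the sketch below illustrates this selection order (skill first, integrity point as the priority tie-breaker, random pick on a full tie, unsafe environments excluded); the data structure and field names are assumptions for illustration.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    agent_id: str
    skill: int        # current (possibly threat-reduced) skill for the intent
    integrity: int    # integrity point, used directly as the priority value
    env_safe: bool    # current data protection state of the WFH environment

def pick_agent(candidates: list[Agent]) -> Agent | None:
    """Pick the agent as described: highest skill, then highest integrity
    point, then random among the remaining ties; unsafe environments skipped."""
    safe = [a for a in candidates if a.env_safe]
    if not safe:
        return None
    top_skill = max(a.skill for a in safe)
    same_skill = [a for a in safe if a.skill == top_skill]
    top_priority = max(a.integrity for a in same_skill)
    tied = [a for a in same_skill if a.integrity == top_priority]
    return random.choice(tied)
```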
  • the component (502a) is looked into.
  • the rules are inserted into the system. These rules can be hard coded or can be configured, but in order to enable customization of rules, this solution highlights the rules being configured during initial rollout. These rules are used to detect the data protection threat based on the objects identified by the AI system. Based on the rules configured, the application can do only the needed threat evaluation rather than evaluating the complete list of rules the application is built to support.
  • the next scenario covers how the system behaves/operates in a given agent environment when the call is not predicted to land in the said agent environment, although the call has hit the system and has a low probability of landing at the said agent (e.g. the call is for another intent but needs personal data).
  • What happens in the said agent environment for such a scenario is explained by highlighting certain operations using Figs. 5B and 5E.
  • the system will ensure the agent work environment is safe w.r.t. personal data protection, check whether the agent is live, authenticate the agent using face detection, and check that the work environment shown via the webcam capture is really the agent's work environment. Once all said checks are cleared, the agent is allowed to access the work desktop application. This is done using step (504b) as shown in Fig. 5B.
  • the system continuously checks when a call lands whether the call has a high chance of reaching the said logged in agent. This is done by checkpoint step (505b). If step (505b) evaluates to 'no', the scenario navigates to the processes and checkpoints illustrated in Fig. 5E. When the call has hit another agent (as in this scenario), the system will ensure that the agent who does not have the call has no permission to access, through the desktop application, the personal data of the customer related to the call or any other personal data. This process is shown by step (521e).
  • the system will continue checking the environment safety of the said agent where the call did not land, and this is highlighted in step (522e). It is essential to understand that the environment has to be continuously monitored although the call did not land in the agent environment. This makes sure the environment is safe for future calls and also for handling call transfers from another environment if needed; such environment safety checks prepare a safe environment early, even before a call lands. Since there is no indication of a call landing at this given agent environment, the AI system could lessen the frequency of the environment check in such a period (i.e. a period where the call is not predicted to land in the given agent environment).
  • the AI engine is able to fully check that no person other than the agent is in the room.
  • the check for additional people, and for them leaving, is not based on the usual face recognition, which matches against the agent's face in the DB. Instead, the system detects human images arbitrarily appearing in the environment and stores these images dynamically; it then ensures, when these people leave via the door, that the leaving images match the captured images of these people. This identification of arbitrary people entering and leaving is also managed by the AI engine's face recognition technology.
  • if step (523e) evaluates to 'yes', then the system will check the state of clearance after step (522e). If after (522e) the environment has not yet been cleared, the system will generate a warning with slightly higher frequency to the said agent to clear the environment. This warning will state what has to be cleared, and the system will monitor the clearing process. For example, if 3 additional faces are seen then these people have to leave the environment, and this has to be proved to the system, e.g. by turning an additional camera; alternatively, the system can check the event of each person leaving through the main door.
  • the AI engine, as mentioned before, has the ability to detect an identified person leaving through the door. If the said step (523e) evaluates to 'no', the system continuously monitors call arrivals into the system and any chance of a call hitting the said agent. Basically this scenario is about how the system handles all agents that are part of the system but have no direct involvement with a given call.
  • after the agent is given access to the agent desktop as in Fig. 5B step (504b), whether a call is about to land in this given agent environment is predicted by step (505b).
  • the system will check for environment based data protection threats based on many rules, such as phones in the view of the webcam, the door being unlocked in the view of the webcam, the agent moving away from the webcam view, multiple persons in the view of the webcam, etc., as mentioned in step (506b) in Fig. 5B. These checks are done because the agent is predicted to get the call. If an agent plans to go away from the work environment for some reason for a short while, he has to put himself into 'unavailable' or 'Aux' mode and update such a mode value in the system.
  • while the agent is in such an 'unavailable'/'Aux' mode, the AI system will not do the above mentioned environment based checks in the agent environment at all to validate safety; the said environment based safety checks are only done when the agent is in the 'available' state. If the agent comes back after a break he must change his availability mode from 'unavailable' to 'available', and this must be updated in the system. The agent can use his work web application to make such a state change.
  • the system checks whether the environment of the agent to which the call is predicted to arrive is safe w.r.t. personal data protection. This is highlighted in step (507b). If step (507b) evaluates to 'yes', indicating a threat, then as shown in step (508b) the system sends a warning with higher frequency to remove the threat from the environment.
  • if step (511c) evaluates to 'yes', the process as in step (514c) is executed.
  • in step (514c), all clipboard copy/paste functions are deactivated in the agent desktop. This clipboard copy/paste deactivation can be done by triggering another application in the desktop or within the smart surveillance application itself.
  • the smart surveillance application manages the clipboard copy/paste deactivation function when personal data is projected on the screen.
  • after step (514c), the step (515c) is executed.
  • in step (515c), the system still checks whether the agent is acting maliciously, such as passing the data to some other party via another session or sharing the screen which has personal data. The system will also check whether the agent is trying to voice out personal data, whether it is being projected on screen or otherwise. Whether the said agent is trying to record personal data is also checked; details of these have been shared in other embodiments. If the system detects such malicious acts which impact personal data protection, the screen will be blocked and the call will be transferred to another agent where possible, in an autonomous manner. Additionally, the agent's skill and integrity points will be reduced when such malicious agent behavior is found. The subsequent treatment for this malicious behavior was revealed in another embodiment.
  • in such a case the system will not send the personal data, and will also stop the continuous checking of personal data handling for this agent work environment.
  • when the threat severity is considered very high/highest, such as the agent trying to copy personal data, the treatment is always blocking the screen and finding a way to transfer the call to another agent.
  • if the call has landed at the agent and the AI engine detects that the environment has a data protection threat, the operation goes to step (512c). If an environment data protection threat and a malicious agent threat are simultaneously present, the net threat is considered highest and the countermeasures are the same as for the highest evaluated threat. The operation goes to step (512c) when step (511c) evaluates to 'no', as shown in Fig. 5C; it is considered that step (511c) mainly checks only the environment based data protection threat. In step (512c) the threat level is detected, based on assigning a threat score to each threat event. One of the highest threat scores is assigned when a camera is used while personal data is projected.
  • after the threat score is detected, it is further detected whether the threat is high or highest. This checkpoint is shown by step (510c). If (510c) evaluates to 'yes', step (513c) is activated; if it evaluates to 'no', step (509c) is executed. Step (513c) highlights that when the environment threat is evaluated as high or highest, the system will want to transfer the call to another agent. If the environment threat is high and personal data is projected on the screen or voiced out, or the CCTV is maliciously turned to the 'on' mode when it is supposed to be integrated in the 'off' mode, or the CCTV integration is removed, then the screen will be locked/blocked from accessing the agent desktop application and the call transfer is planned.
  • the step (509c) highlights the general treatment of a medium or low environment related threat to personal data protection: either the personal information is shown in the restricted area, or the agent is informed of the personal data via the AI assistant and has to receive this data with the headset in 'on' mode. Again, in steps (513c) and (509c) the skill and integrity points are reduced according to the environment data protection threat level.
  • the system also has a method where the personal data is only projected when the environment is considered safe. During a call, if the agent requests personal data and an environment threat is present, the system will not lock the screen but will not send the data to the screen; this way, unnecessary locking of the screen is avoided. But in some cases, while the agent is looking at the personal data, an environment threat can suddenly be induced into the environment, and this then requires a transfer of the call and screen locking.
  • the agent desktop is generally blocked only when personal data is shown on the screen or voiced out and a very high level threat is induced by the environment, by the agent, or by both together. Even if a threat is present, if the agent shows compliance the agent screen is not unnecessarily locked. The agent screen is locked only when personal data is projected on screen and a high data protection threat is present, or when no personal data is projected on screen but the agent voices out personal data in an audible manner, or when the CCTV integration is not in a proper state; in this second blocking scenario the screen is locked purely to force re-evaluation of the malicious agent into the system.
  • when the agent screen is locked, no call can come to this locked agent. In such a case, after the agent screen is locked, the step (516d) is performed. As highlighted in Fig. 5D, step (516d), the agent and the supervisor are notified of this lock event using separate means. The screen un-locking time may vary: it depends on past events and the total integrity point reduction resulting from these past threat events. Based on this reduction in integrity points the lock period is decided; if the integrity point reduction is significant then the screen lock period will be proportionately higher. The locking criteria and lock period scaling are sketched below.
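  • a compact sketch of these locking criteria and of the proportional lock period is given below; the threshold names and the scale factor are invented, as the text only states the qualitative relationships.

```python
def should_lock_screen(personal_data_on_screen: bool, threat_level: str,
                       voiced_personal_data: bool,
                       cctv_integration_ok: bool) -> bool:
    """Lock only for the cases described: projected personal data under a
    high threat, audible voicing of personal data, or CCTV tampering."""
    if personal_data_on_screen and threat_level in ("high", "highest"):
        return True
    return voiced_personal_data or not cctv_integration_ok

def lock_period_minutes(integrity_reduction: int, per_point: int = 10) -> int:
    """Lock period grows in proportion to the total integrity point
    reduction caused by past threat events (scale factor invented)."""
    return integrity_reduction * per_point
```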
  • the step (520d) highlights the operation where the agent re-logs into the system after getting approval from the supervisor.
  • the supervisor may give the passcode, or the agreement for passcode generation, to the smart surveillance application, which enables re-login into the system. After such re-login, the system again tracks the compliance of the logged in agent; if he shows compliance then the skill and integrity points are restored after some time. This is shown by step (518d) in Fig. 5D.
  • the operation during re-onboarding of the agent after the agent’s work desktop application gets locked is briefly explained by means of the sequence diagram shown in Fig. 6.
  • the main entities involved during this re-onboarding process are the smart phone and the smart surveillance application running in the smart phone’s browser as shown as component (600), the smart surveillance application running in the agent desktop browser and shown by component (601), smart surveillance web application hosted in the webserver and this is shown by component (602) and the AI engine that informs the data protection violation incident details to the smart surveillance application and this is shown by component (603).
  • component (603a) is the customized supervisor desktop application. This said application is used to get the agreement from supervisor about the re-onboarding of the agent after his screen was locked.
  • the design principle during re-onboarding is that additional first level authentication is done by means of QR code.
  • using the decrypted QR code, if the system validates that the agent whose screen was locked is trying to re-login, the supervisor application will check whether to generate a passcode for re-login. The decision to generate the passcode for re-login is based on indices such as the incident related video and the agent integrity point.
  • the agent attempts re-login using a passcode that is randomly generated by the system once the QR code based first level authentication is satisfied and the supervisor approves the re-login. When re-entry based on the randomly generated passcode is tried, the system will finally approve the login once the additional 2nd level authentication is done by the AI engine.
  • the 2nd level checks verify that the agent face is an approved agent face, the agent is live, the agent emotion is correct as demanded by the system, the agent work environment is safe, and the work environment shown really belongs to that agent. Based on all these factors the 2nd level authentication is done and the agent screen will be un-locked. The details of this logic will be further explained by means of the message sequences shown in Fig. 6.
  • the AI application (603) will send information as to until what time the agent screen has to be locked. This lock period information is sent to the application (602) and this is shown by message (604).
  • once the smart surveillance application (602) gets the screen lock and lock time information, it informs the customized supervisor desktop application (603a), using the message (604a), that the screen has been locked and will need to be unlocked.
  • once the agent knows his agent desktop screen is locked by (601), he will use the smart surveillance application in his smart phone to scan the QR code appearing on the locked screen. This scanned QR code in its encrypted form will be sent to the smart surveillance application (602); this QR code passing message is shown by (605). The smart surveillance application will decrypt the QR code and verify that the decrypted value, which is an agent ID, is a valid agent ID with an entry available in the DB. Upon such valid agent ID identification an ack is sent by the smart surveillance application; this ack message is shown by (606). In parallel, the customized supervisor app (603a) will generate an SMS and also generate an alert for the supervisor in the customized supervisor desktop application. The supervisor can log in to application (603a) and subsequently review whether the agent can be logged in again.
  • the supervisor will inform the Ok state to the system via the application (603a). This 'Ok to re-board' information is then sent to the smart surveillance application (602) via message (608). The smart surveillance application will then generate a pass code and send it to the smart phone via SMS; this is shown by message (609). Using the pass code received in his smart phone, the agent tries re-login via the agent desktop application (601). Once the passcode is given (610), the smart surveillance application sends a message (611) to the AI engine (603) to perform the 2nd level authentication and checks. The AI engine will do the various checks mentioned before to decide whether re-onboarding with screen unlock is allowed.
  • the AI engine (603) sends message (612) to inform the smart surveillance application (602) that the agent can successfully log in after the lock period has expired.
  • this message of agreement for re-boarding is sent from the smart surveillance application (602), using the message (613), to the agent desktop screen running the smart surveillance application in the browser (601). A sketch of the passcode and 2nd level approval logic follows.
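  • the sketch below illustrates the passcode generation and the final approval gate; the passcode format and the names of the 2nd level checks are assumptions, and the surrounding messaging (QR scan, SMS, supervisor review) is omitted.

```python
import secrets
import string

def generate_passcode(length: int = 6) -> str:
    """Randomly generated passcode sent to the agent's smart phone via SMS
    (message (609)); length and alphabet are illustrative choices."""
    return "".join(secrets.choice(string.digits) for _ in range(length))

def approve_reentry(passcode_ok: bool, second_level: dict[str, bool]) -> bool:
    """Final re-login decision: valid passcode plus every 2nd level check
    done by the AI engine must pass before the screen is unlocked."""
    return passcode_ok and all(second_level.values())

checks = {
    "face_match": True,
    "liveliness": True,
    "emotion_ok": True,
    "environment_safe": True,
    "environment_owned": True,
}
print(generate_passcode(), approve_reentry(True, checks))  # e.g. '493017 True'
```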
  • the operation of the smart surveillance application starts with step (400), as shown in Fig. 4. Subsequently a checkpoint is evaluated, as highlighted in step (401): whether any counteraction/countermeasure has been given by the AI engine.
  • if step (401) evaluates to 'no', the smart surveillance application continues to update its DB and listens for the API calls from the AI engine. If step (401) evaluates to 'yes', there is an additional checkpoint (402), where it is checked what countermeasures have to be implemented in the agent desktop application when a WFH environment related threat is present. If this checkpoint evaluates to 'yes', the process (403) is executed. Basically the smart surveillance application can inform the customized agent desktop application, as mentioned in Fig. 1, how the personal data information should be presented in a modified manner on the screen: for example, show the personal data with smaller fonts, show it only in the middle of the screen, show it for a shorter time on the screen, or do not show the personal data at all because of a threat in the environment, although the user requests it.
  • the AI engine can also inform the end of the data protection threat state in a given WFH agent environment to the smart surveillance application. Based on end of threat state, the countermeasures for all the screen related countermeasures will come to an end.
  • the smart surveillance application then communicates with the customized agent desktop application to stop showing the personal data in any restricted manner. This is shown by step (404).
  • step (405) highlights the smart surveillance application's role in ensuring the AI assistant is brought into the current call session. The AI assistant involvement is indicated by the AI engine to the smart surveillance application, and this application simply passes the information to the relevant application to trigger the correct action.
  • the smart surveillance application by triggering the customized agent desktop application will trigger the call routing application to include the AI assistant into the call session by giving an agent ID for it.
  • step (406) shows the role of smart surveillance application in providing the reduced skill value as part of threat countermeasure to the customized agent desktop application and subsequently to the call routing application.
  • the reduced skill value is a countermeasure of the AI engine to threat. This value is given to call routing engine, so that it too can independently choose the agent if no information is sent by AI engine.
  • the call routing application should use the identical process as of the AI engine in picking the skill/agent within the given intent. It is considered the DB which is accessible to call routing has all needed information for such mechanism. Even the threat state in the agent environment is present in the DB.
  • the skill value is needed by the call routing application which clones the agent ID identification method used by AI engine to pick the needed skill for a given intent.
  • the step (407) highlights that when an extreme personal data protection violation event is received by the smart surveillance application from the AI engine, the smart surveillance application will completely block and lock the agent screen. This screen will show the QR code upon blocking/locking.
  • the step (408) highlights the smart surveillance application’s role in validating QR code and in supporting the re-onboarding of the agent after being locked.
  • the smart surveillance application will communicate with AI engine to retrieve the face detection check result, agent emotion check result, liveliness check result, environment safety check result and environment ownership check result etc before finally allowing the agent to log in.
  • in step (410), a method is highlighted where the information to be viewed by the supervisor to allow re-boarding of the agent is placed in the DB, and the supervisor is alerted by SMS or other means by the smart surveillance application or the customized supervisor desktop application.
  • the alert/SMS specifically highlights that the given agent environment threat was high and the agent screen is locked and needs review by the supervisor.
  • in step (411), after the agent is re-boarded, the smart surveillance application may receive the restored skill value if the agent has shown compliance over a period of time. In such a case, this restored skill value will be given to the customized desktop application to be passed on to the routing engine.
  • the skill restoration happens when no threat is detected in a given agent environment over a period of time.
  • the step (412) highlights that in case of a threat where the call has to be transferred to another agent, or another agent has to be involved to whisper the personal data information, such information will be retrieved by the smart surveillance application from the AI engine (the AI engine will generally give such information via the communication interface, and this event can be continuously listened for by listener methods running in the smart surveillance application) and given to the customized agent desktop application, and finally to the call routing application.
  • step (413) highlights that when the agent moves out of his seat and did not explicitly put his state to 'Aux', such a move will be detected by the AI engine and informed to the smart surveillance application. In that case, the smart surveillance application will pass this information to the call routing application via the customized agent desktop application.
  • the call routing application will immediately consider the agent as not available for call handling and will attempt to transfer the call to another agent if call is happening in that environment.
  • the said call transfer can also be initiated by the AI engine.
  • the smart surveillance application will check with the customized agent desktop application whether personal data is shown. If personal data is being projected, the smart surveillance application will block the screen and lock the agent.
  • the relevant agent/skill to handle a call will also be detected by the AI engine and given to the smart surveillance application and this is shown as step (414). This is to ensure that the appropriate agent of related skill who does not have a WFH environment protection threat is used by the call routing engine.
  • the AI engine is the best entity to pick this because it also has dynamic security knowledge of the environments.
  • the call routing engine can use this information (appropriate skill) to pick the agent for call handling.
  • the call routing engine may also use its own replica of call routing similar to the AI engine’s decision process for the agent identification to handle the call in a WFH environment.
  • the available agent will be picked based on the skill for the said intent and the agent's environment safety. If the agent is available and his environment is safe, that agent is picked for the call; if not, another agent of a similar skill range is picked, and the process continues within the skill range, picking an agent of suitable skill and environment safety. Otherwise, agents of lower skill and safe environment are considered, with AI analytics, to handle the call. This handling where the call is about to hit the agent also happens when all the predictions are in place.
  • waiting time estimation is generally done based on the total number of calls currently present in the given intent queue and the average waiting time for the given call type, as sketched below.
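  • for example, this coarse estimate might look as follows (units and numbers invented):

```python
def estimated_wait_seconds(calls_in_queue: int, avg_wait_per_call: float) -> float:
    """Queue length times the average waiting time observed for this call
    type; a coarse estimate, not a learned prediction."""
    return calls_in_queue * avg_wait_per_call

print(estimated_wait_seconds(4, 90.0))  # 4 queued calls -> 360.0 seconds
```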
  • the said estimation is not a proper prediction as in the main solution.
  • the detailed predictions as in the previous embodiments help the solution avoid unnecessary actions. It is also important to understand that the predictions are probability based: as with AI mechanisms in general, there can be a degree of error in the predictions. In all cases, however, the solution ensures that the security and the quality of the call are not affected regardless of whether predictions are in place, although with predictions much system wide signaling can be reduced.
  • a countermeasure in this solution is the system's reaction to a data protection threat happening in a WFH environment.
  • This countermeasure in the broad sense could incorporate the warning as well.
  • the criteria when such countermeasures are issued in the solution are illustrated. Countermeasure is related to a given data protection threat happening in a given WFH agent environment and how the system reacts to such threat.
  • the immediate countermeasure is a warning to the agent environment and a skill modification related to the said agent.
  • when no such threat is present, the countermeasure is 'null' and the call will not be given any countermeasure.
  • the system in some cases using AI prediction can estimate whether the call of the given intent needs personal data to handle the call. Alternatively by simply knowing the intent value, the system would know whether personal data is needed. Additionally in another case, the agent himself can highlight the personal data is needed in advance to the system. Basically, the system can use various means to know this.
  • if a call is on-going in a given agent environment, it is detected that personal data involvement is needed for this call from the current time until its completion, and a data protection threat has been induced, then the call will be treated with a suitable countermeasure based on the severity of the threat and other system variable states.
  • if a call is on-going in a given agent environment, personal data is shown on the screen, and a data protection threat has been induced, then the call will be treated with a suitable countermeasure.
  • the screen will be blocked and an appropriate counter measure will be chosen if the data protection threat is considered as high.
  • This solution has warning for data protection threats rather than immediate counteraction that changes the route of the customer call.
  • This warning is a solution countermeasure that is only sent when the personal data is not yet projected on the agent screen.
  • when the AI engine identifies such an environment originating data protection threat during agent work time, it does not immediately send alerts to the agent concerned. It additionally predicts whether a call which needs personal data is going to arrive at that particular agent situated in the data protection threat environment.
  • the system combines the data protection threat level in the agent environment and the prediction of a call arriving at the particular agent who will be using personal data into a countermeasure score for warning purposes. If the said score is greater than 0 and none of the contributors to the score has a 0 value, a warning will be sent to that agent; the warning frequency increases with the score value, as sketched below.
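  • one hedged way to realize such a combined score and warning frequency is sketched below; the numeric scales are invented, and multiplication is chosen so that the score is positive only when both contributors are non-zero, matching the stated condition.

```python
def warning_score(threat_level: int, arrival_probability: float) -> float:
    """Combine the environment threat level (e.g. 0-3) with the predicted
    probability that a personal data call arrives at this agent."""
    return threat_level * arrival_probability

def warning_interval_seconds(score: float, base: float = 300.0) -> float | None:
    """Warn only for a positive score; a higher score shortens the interval
    between warnings, i.e. increases the warning frequency (scale invented)."""
    if score <= 0:
        return None              # no warning is sent
    return base / score

print(warning_interval_seconds(warning_score(2, 0.8)))  # 187.5 seconds
```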
  • this AI assisted WFH related solution's security feature and the associated call routing feature are applicable to many industries.
  • the industries include banking, telecom, travel, or any industry that generally needs contact center type call management with agents physically located at home and working from home.
  • the agent work scope needs some form of personal data of the customers.
  • the solution is also useful when agents are working in their offices, to prevent any security threats to the personal data of customers.
  • the solution applicability scope is vast.
  • the solution is also applicable where call routing in general considers secure paths. This can be used by telecom industries for the back bone routing too.
  • the security feature of the solution, where less of the personal data is shown and personal data is shown only when needed, is applicable to many industries that handle personal data.

Description

AI based data protection in WFH environment
Industries lose their value if critical data assets such as personal data are not preserved. Currently many solution providers offer solutions using many technologies to achieve data protection. This data protection need has increased significantly in recent times globally, where many are working from home and the threat is high in such environments, because these environments are unknown and not under constant supervision.
In many industries/companies, vast amounts of data are used and stored in databases on DB servers behind firewalls, and in some cases these data are well protected by super-efficient encryption algorithms. Many companies also prevent their systems from being accessed via the threat prone internet. If these systems are to be accessed outside the company networks (i.e. a work from home environment), secured virtual private network (VPN) technology is used, and data traversal via the VPN is protected by means of secured tunneling technologies. Nowadays there are many advances in the cyber security domain too, which help data to be protected as it traverses cyber space. In this document we will use the term personal data/data uniformly to refer to a single human being's data, or a company/organization's data, that needs to be protected at all times.
So far, the majority of the attention has been on protecting personal data when housed in a closed protected environment and when traversing cyber spaces where many cyber attackers come and go in a dynamic manner. In addition to the above mentioned threat points (data traversal in cyber space, data storage in a system for a long time), when personal data is uttered/spoken out or viewed outside the cyber space there exists a whole new set of threats, and these too need attention and solutions. An attacker could utter this personal data to someone else, copy the personal data to another medium, record the data to a medium, and/or show the data to another attacker. Such new threat factors have been overlooked. In this document the term attacker refers to anyone who maliciously plans to steal personal data in particular.
Currently there is much discussion of, and many implementations of, AI being used for video surveillance and video data analytics to attain higher security in an operative domain. But these solutions primarily consider physical and object security; very few solutions are in place for personal data protection using an AI engine/platform/algorithms/solution framework.
It is worthwhile looking into why AI is engaged in providing security solutions for many industries. The reason is that an AI engine can make estimations that cannot in general be identified by humans (it possibly has much higher cognitive capabilities than humans in general), it can identify threat events at a much larger scale than humans, it can identify threat events more accurately and more quickly than humans, and it can do so in a completely autonomous manner without involving many human resources.
AI is a broad term. It has many components such as machine learning, object recognition, deep learning, rule based AI or expert systems, neural network based estimation algorithms, etc. The AI algorithms are numerous and well matured; nowadays there is no need to develop an AI engine from scratch. The term AI engine comprises the AI algorithm used for learning and machine learnt model building, the AI algorithm used for real time estimation using the machine learnt model, and the AI machine learnt model that will be used for estimation and prediction based on real time inputs. Here the AI machine learnt model is a by-product of the AI algorithm used for learning and model derivation. A good AI solution would have continuous learning, with machine learnt model updates and evolution during the AI run time too.
The cloud computing service providers are coming up with AI platforms that can be leveraged by solution providers to build AI assisted solutions. In such a case of utilizing an existing AI framework, the said solution providers need only train the AI engine to identify the correct machine learning model and provide the big data and the appropriate data labels for such training. The high level logic of the training algorithm can be picked from the AI framework itself, and the basic skeleton of the machine learnt model can also be picked from the said framework. The AI solution providers could alternatively use any of the many AI engine frameworks disclosed in the public domain to support the issue they plan to solve, as an alternative to the AI frameworks provided by the cloud services.
When a cloud AI framework is used, the algorithm used for training can be any one of the algorithms provided by the AI framework. As mentioned, such AI frameworks are widely available and cloud services enable their usage. The AI engine's algorithm, using the big data fed in by data scientists, can identify the suitable machine learning model to predict the needed components as part of the solution space. The training data needs to indicate to the training algorithm which input data correlates positively or negatively with the desired output of the said algorithm. This component of AI, namely machine learning, is mainly used for prediction of future events based on past events and current system behaviors.
There are other components of AI such as object recognition and image recognition. These are necessary to identify objects of concern within an image, similarly to humans, and will aid in providing the system solution. Again, AI object recognition algorithms and deep learning are necessary to identify objects without human intervention. Additionally, many Application Programming Interfaces (APIs) are available to enable object recognition as part of a cloud AI platform. Solution providers can also come up with object recognition algorithms where neural networks are used as part of the algorithm. Such object recognition algorithms are already publicly available and can be leveraged where needed when building a solution. Alternatively, new neural network based algorithms can be designed to tackle a given object detection task as part of the solution space. Here the neural network parameters, layers, nodes, weights etc. can be fine tuned to address detection of a particular object within an image.
The other aspect of AI is to help in counteraction planning and support, that is, deciding what course of action to take if a certain event is identified. To plan the counteraction, the AI engine can again be fed with real or human induced/moderated data, from which it can plan the countermeasures that are most appropriate based on the actual/human fed events of the past and the current state variables. The counteraction planning by the AI engine can also be done based on rule based design: i.e. identify a condition, then execute the counteraction prescribed by the rule for that condition. If the prior counteraction was not successful under the existing rule, then change the rule. These are design components of expert systems, which always use past data to plan future actions and continuously improve the accuracy of predictions.
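Purely as an illustration of this background concept, a rule based counteraction loop with rule updating might be sketched as follows; all names, conditions and actions here are hypothetical and do not describe any particular prior art system:

    # Minimal sketch: identify a condition, execute the counteraction the
    # rule prescribes, and demote a failed counteraction so a different
    # one is tried first next time (the "change the rule" step).
    rules = {
        "unknown_person_detected": ["warn_agent", "block_screen"],
        "door_opened": ["warn_agent", "transfer_call"],
    }

    def handle_condition(condition, execute, succeeded):
        for action in list(rules[condition]):
            execute(action)
            if succeeded(action):
                return action
            # rule update: move the failed counteraction to the back
            rules[condition].remove(action)
            rules[condition].append(action)
        return None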
A few years back, although AI numerical computing algorithms were available, implementing them in a system was very costly. Only very expensive super computers could perform such AI software tasks. With computing performance improving widely over the years and computing hardware becoming available at reasonable cost, interest in AI has resurfaced and gained big momentum in many industries, helping in many aspects such as automation, security, and data insights or data analytics to improve system performance.
Having discussed what an AI engine generally does, we can now look into its necessity in the personal data protection area or domain. In some data protection scenarios, extensive computing using very big data is necessary to predict data security threats accurately in the shortest possible time, which makes these AI engines extremely necessary. These AI engines provide insights into trends and predictions related to data security that cannot be evaluated by a human brain in the perceived time.
In summary, what is lacking in general is an AI based surveillance solution that encompasses all the needed components of AI in adequate quantity to enable protection of personal data in a work from home (WFH) kind of environment, where the data has a chance of being manually copied, uttered, saved, recorded, passed to someone else via a call, viewed by unauthorized people etc. These work from home environments also have many restrictions: they cannot be under constant supervision (because the supervisor tied to the work may be located in a spatially distant place), and event or activity data from such a surveillance domain cannot be fetched in high quantity because of privacy and cost (every small special work place may need a closed circuit television (CCTV) camera, which can be costly).

Another factor lacking in the existing solution space is how to engage AI for personal data protection when there is no prior data of personal data being stolen. For example, when personal data protection for a new environment such as work from home is to be designed, an organization will not have such prior real data as input for the AI machine learning algorithm. Yet another factor missing in the existing solution space is variant counteractions that protect the personal data and yet ensure no disturbance to the main operations. Counteractions suitable for the various different data protection threat events are missing in the solution space. Most existing AI countermeasures are alerts and thus fall under a single type of counteraction. In some cases, due to the lack of, or small number of, past/historical data on data theft, alerts can be generated falsely. Alerts should be avoided at all times unless the event detection accuracy is high, because a falsely identified event can be chaotic. Also, in a WFH environment where alerts are the main countermeasure, before action can be taken by a distant supervisor the personal data would already have been taken and the damage done. Alerts are useful to identify the attacker but not to prevent the theft, so the impact of the data stealing threat is difficult to reduce.

In the case of no past data, a threat could be detected to a higher degree using existing solutions by taking an excessive amount of data from various sensors in the new environment (i.e. putting the AI engine in the new system for many days of supervised or unsupervised learning). But this would overburden the system with redundant data and too much processing to predict an event, and an intelligent attacker might defeat the AI engine by making it believe an abnormal behavior is a normal behavior. It would be an over engineered system design. Moreover, it would take a long time to train the AI engine in the training mode, which would delay the solution roll out. This is a real problem, and this invention targets filling the gaps in such a solution space and providing a suitable solution.
What is ideal is a system that has just the right design components and data needed to facilitate the AI engine. The AI engine should help to estimate and prevent personal data exposure in environments where it is not needed (i.e. where the personal data need not be revealed), and should also estimate and help with suitable countermeasures that prevent the personal data from being stolen if such a data threat event occurs while the personal data still has to be revealed for the normal operation of the system.
A typical environment where personal data protection is necessary is described next. In this document/report we explain the solution using this environment, which needs personal data protection. But one skilled in the art will understand that the solution described in this document is not restricted to this one particular environment or scenario, and that the solution is applicable in many similar scenarios without deviating from the scope and intention of the invention. In emergency situations (global pandemic, war, natural disasters, economic crisis), it is very common that, to cut cost and out of safety concerns, many workers are encouraged to work from home. One such environment could be a contact center environment where an agent works from home (home agent (HA)) and attends to a customer by means of a speech or video call as part of regular work. The said speech or video call could be done using the agent's work desktop application. Alternatively, the said speech or video call can be done using other dedicated applications and devices that are not related to the personal data projection application. During the call the agent will want to browse through personal data of the caller/customer on his work desktop. This personal data needs to be protected at all times because this HA environment is not the regular work environment that is usually heavily secured. The solution we propose in this document targets protecting such personal data, during call time and other times as well, in such a work from home HA environment or other environments of a similar nature. During call time, when personal data is needed for the call conversation and the agent environment is under threat (a threat of personal data stealing is present), alert and delayed action mechanisms as in existing solutions do not help.
This solution document/report reveals a system and method where AI assisted protection of personal data in a threat environment is provided, whereby the personal data is prevented from: being shown on screen unnecessarily outside the related call, being shown in abundance, being copied, being stored, being revealed via phone conversation to an out of context person, being viewed by unauthorized people and being uttered unnecessarily during a call. The said personal data protection system engages an AI engine to achieve prediction of data protection threats, identification of threat entities/objects in the environment and data analytics to plan the countermeasures in an autonomous, accurate and seamless manner, without impacting normal operations, while achieving customer satisfaction and yet preventing personal data theft.
Patent Document 1 (PTL1) highlights a solution where the AI does pre-learning of the system to identify the home members before operations occur, to detect the usual and unusual members of the home environment. The AI pre-learning is based on real time data that is available during the learning phase. The AI engine's learning objective is to identify who can use a given confined space within the house during various time periods. The learning is simply to identify the usual members within a given confined space within the home, i.e. who is allowed to use a room/hall/kitchen etc. After the learning phase, the AI engine results are sent to the system supervisor for validation via an application installed on the supervisor's smart phone. Based on the inputs from the supervisor, the AI engine gets a confirmed set of permanent tenants of the home and who can and cannot access a given confined space (i.e. a small correction done to the AI's findings). The solution further comprises a method where the AI system during operation time is able to identify an intruder in a confined space within the home domain at a given time. The abnormal behavior can be reported by means of alarms in the supervisor's special hand-phone application. Also, the alert/alarm is sent to the surveillance domain (i.e. the confined space under surveillance), where the alert is manifested as a sound in a loud speaker attached to the system. This solution only highlights countermeasures such as alerts/alarms, and this is not useful in a personal data theft environment because the data can be taken while an alert based counteraction is planned. The alert only helps to trace who the possible attacker is. Moreover, this solution does not identify a regular house member's abnormal behavior such as trying to steal work data. Furthermore, this solution does not provide any prevention against an agent working from home intentionally or unintentionally exposing the data to attackers. The attacker could be the agent himself, others connected to the agent remotely or some other home member, and this solution is not helpful in such malicious home member scenarios.
Patent Document 2 (PTL2) highlights an AI engine based solution where abnormal behavior is detected with the objective of preventing a threat in an environment. Specifically, the trigger conditions for abnormal behavior detection include at least one of: detecting a person in the secured area attempting to hide their identity by wearing a mask or a helmet, detecting multiple people being present in the secured area in excess of a predetermined number, detecting a presence of unexpected motion in the secured area and detecting a face. Such abnormal behavior detection rules only help in identifying abnormal people in an area; they do not identify abnormal behavior w.r.t. personal data stealing. Moreover, this solution too highlights alerts/alarms as the countermeasure. If an agent is engaged in a call that involves discussion based on personal data in a threat induced environment (i.e. some attacker is eavesdropping on the conversation), a countermeasure such as an alert to the supervisor is a delayed countermeasure, because before any action is taken by the supervisor the personal data would already have been taken by the attacker.
Patent Document 3 (PTL3) highlights an AI based surveillance solution where the system detects the threat level and, based on comparing the threat level to a fixed threshold, sends an alert/notification when the threat level exceeds the said threshold. When the environment threat is lower than the fixed threshold, the system sends a local trigger to a mobile device to improve the safety level in the environment. This solution still has its drawbacks, because it primarily detects a weapon in the environment, or more humans than the allowed amount, to identify or predict a threat. This does not protect against personal data theft because there are no protection mechanisms to stop an intruder/attacker stealing the data from the threat environment. Moreover, the alert does not impose the needed action at the threat environment immediately. The personal data attacker could be a person in the home itself, and this prior art solution does not have a suitable component to handle such a type of threat.
Patent Document 4 (PTL4) highlights an AI based solution where entrants into a secured environment are analyzed by means of many prediction and decision making systems. The entry data is analyzed and various profiles are derived to determine the threat state by various AI systems. From the results of the various decision making systems, a common decision is derived on whether to issue an alarm. This solution still fails to predict whether the exact motive of a given entrant profile is to steal personal data. Furthermore, this solution does not have any means to prevent the personal data from being stolen; the only countermeasure is that an alarm is issued. Alarms, as mentioned in this document, can take a longer action time. Also, in a work from home environment the alarm may be given to a household member who may himself be an attacker w.r.t. the personal data. Thus this prior art solution, as evaluated and illustrated, is not useful in solving the issue highlighted.
Patent Document 5 (PTL5) highlights an AI deep neural network based learning and prediction being triggered when motion is detected in the environment. This solution is unable to solve the data protection threat in an environment because the solution gives no indication of the measures necessary to protect the data.
Patent Document 6 (PTL6) highlights an AI based solution where a particular subject, such as a school student, is monitored and abnormal behavior is notified to parents if some threat towards the monitored subject is detected. This expert system is trained to predict a specific behavior, which is different from the personal data threat behavior addressed by the present solution.
Patent Document 7 (PTL7) concerns data protection when devices are connected to the internet. Here AI is used to prevent device data, user data and network data from being stored in the web server; in some cases cookies are disabled, data is modified and data is masked. The device location information is used to trigger the AI algorithm to start the data protection process. Here the data protection is for an internal system where the hacker sits in the middle of the data communication framework, and the data is very much the data sent from one communication end point to another. This solution cannot be used for data protection when the data has to be shown to an end point, as in the problem scenario. Also, data masking etc. will not help the work from home HA continue his tasks.
Patent Document 8 (PTL8) highlights a solution where sensor data is evaluated by an algorithm to identify the specific type of emergency. Then, based on the emergency type, alerts are sent to various related bodies rather than a single entity to expedite the counteraction. This solution depends heavily on sensors and additional system components to aid the solution. Moreover, countermeasures based on alarms/alerts are not well suited to a data protection related system, where the countermeasure has to be swift. If the supervisor to whom the alarm is sent is also an attacker, the solution will not work.
To overcome the anomalies of the prior art, the present inventive solution attempts to provide an AI assisted solution to enable personal data protection that solves the shortfalls of the prior art solutions.
Specifically, the inventive solution supports normal work operations, such as attending a call where personal data content retrieval is necessary, engaging in the call without any impairment to the call quality of service (QoS), and yet providing personal data protection regardless of whether a threat is present in the call recipient/agent environment or not. Additionally, the system predicts personal data protection threats using a minimum camera view (i.e. a smaller number of information devices to capture data from the monitored domain) and without a pre-learning phase for threat prediction. Specifically, the invention uses object recognition based AI algorithms to predict data threats and uses machine learning with big data to predict the arrival of a call to the agent that needs personal data retrieval to be handled. The details of the invention will be unfolded in the subsequent sections of this report.
Summary of Invention
The core functionality of the invention is covered in the embodiments. This section covers a summary of the invention as a summary of the embodiments disclosed.
In this section and other sections the terms agent/worker refer to a worker who does work by engaging with a customer using a call etc. Those skilled in the art know that this invention can be placed in another environment, where the worker could be doing another type of work involving personal data, without deviating from the scope and ambit of the invention. The said worker/agent, while attending to a customer, may need to view and process personal data of the customer or organization to better serve the customer with the highest possible standards.
The high level summary of the invention is described next. The present invention relates to usage of AI features such as: predicting events that indicate the need for personal data protection (e.g. a call from a customer that needs personal data viewing by the agent is about to arrive or has hit the system) using big historical data; identifying events/objects at vast scale without human intervention that can act as indicators of personal data theft (the agent work environment has data threat indicative objects/scenes, or the worker himself is behaving in a malicious manner); and the AI engine planning the counteraction to the threat in a dynamic manner, where the threat level is gauged dynamically and the counteraction is planned based on the existing data in the system (e.g. available agents of the same skill level, threat levels in other agent environments, remaining call duration etc).
Next, the detailed summary of the invention is disclosed. This invention is a system and a plurality of methods where customer/organization personal data protection involving a home agent (HA), or any worker/agent in a home environment, is provided by an AI technology/engine and some other supporting applications that interface with the said AI engine using communication means. This said AI engine, during its normal operations, operates with the help of limited video data from a small number of cameras used in the system (mainly only the worker desktop's inbuilt webcam). The said AI engine and its encompassing methods help in authenticating the worker/agent using face detection, emotion detection based on a system given emotion state and liveliness detection, and also in authenticating the work environment of the worker/agent to ensure the worker/agent is at his home. The AI engine, as part of approving on-boarding of the worker/agent to use his work application, ensures that at work start time there are no threats to personal data protection in the work environment. Such an additional on-boarding check is done so that, for the rest of the work hours, data protection threats can be detected using fewer camera data based on the initially passed check points (door closed, windows closed in the work environment that are not in camera view, no other person in the work environment other than the worker, all checked during on-boarding time). The disclosure further reveals an AI assisted methodology where personal data threat activities engaged in the work environment are detected using past data based prediction methods, rules and object/image recognition technology, so that the regular call/customer attending procedure is carried on smoothly without trading off data security.
Some of the countermeasures after personal data threat detection, with the need to smoothly carry on the work duties of the worker/agent in an environment where personal data has to be viewed and processed in real time, are also available in the solution. The said countermeasure methods are: restricted viewing of personal data, whereby personal data is projected in a smaller segment of the screen and at the middle of the screen when the threat is low; reduced font size of personal data on screen when the threat is low; blocking the personal data projected on the screen if the data protection threat is high; transferring the call to another worker/agent with appropriate skill who is in a safer environment w.r.t. personal data protection when the data protection threat is high in the current agent environment; modifying the skill value of the agent/worker (appropriately, based on the threat value) so that the call routing engine avoids the data threat environment in its decision to pick the agent skill or agent ID; AI based text to speech virtual assistance involvement/engagement in the call during medium threat in the environment of the agent who is handling the call, with the text to speech AI assistance conveying the customer data related to the call to the agent alone during the call session; and putting the call in a high priority call back state if an available agent of appropriate skill is not identified/available at the high threat moment to transfer the call, waiting for an available agent is not preferred and the completed call duration within the given agent is not significant.
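A minimal sketch of how such a countermeasure selection could be expressed is given below; the threshold values (0.3 and 0.7) and the action names are illustrative assumptions only, not normative parts of the invention:

    # Illustrative mapping from a gauged threat level to the
    # countermeasures summarised above; thresholds are assumed.
    def plan_countermeasures(threat, transfer_agent_available, call_time_significant):
        if threat <= 0.0:
            return []
        if threat < 0.3:   # low threat: restricted viewing
            return ["shrink_and_center_personal_data", "reduce_font_size"]
        if threat < 0.7:   # medium threat: text-to-speech assistance
            return ["engage_text_to_speech_assistant"]
        # high threat: block, adjust routing, then transfer or call back
        actions = ["block_personal_data_on_screen", "reduce_agent_skill"]
        if transfer_agent_available:
            actions.append("transfer_call_to_safer_agent")
        elif not call_time_significant:
            actions.append("schedule_high_priority_callback")
        else:
            actions.append("continue_with_ai_assistant")
        return actions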
There are also methods in place whereby personal data exposure is reduced. For example, the system only allows a given customer's personal data to be viewed by the agent when he is currently handling the call from the said customer. The system allows personal data about the organisation to be viewed only when the agent is using this data for some work done using the agent desktop application or is in a call with another agent who is within the same system/organization. The system also has methods to prevent copy and paste to the clipboard whenever personal data is shown on the agent screen, regardless of the threat to personal data in the environment. The AI engine does not allow personal data to be projected on the screen, in spite of the agent requesting it, if it detects that the threat in the environment is high. The system also has methods to identify malicious agent acts, such as the agent trying to record personal data using some application running on the work desktop, uttering the personal data over a call to another unrelated person, uttering the personal data at a moment that is not related to the call he is handling, passing the personal data to a remote person via other video session applications running on the agent desktop, or recording the personal data using some other camera such as a CCTV camera. The countermeasures for such malicious acts are that the agent desktop application screen is blocked and call transfer opportunities are evaluated by the system to transfer the call to another agent's safer environment.
The AI engine is used in every aspect, such as: detecting the personal data protection threat in the agent's surrounding environment, detecting malicious agent acts w.r.t. personal data protection, detecting/estimating whether a countermeasure is needed for a detected personal data threat, detecting the personal data threat severity and planning the countermeasure for the threat based on severity, identifying the agent to whom the call can be transferred, agent skill value determination and restoration, incorporating the virtual assistant where needed, deciding on the need for a high priority call back for the call and implementing it, and enabling the execution of the countermeasure for the threat.
To achieve the above methods in the system, the AI engine communicates the threat related information for every applicable WFH environment to the smart surveillance application. The AI engine communicates the following via a predefined interface: the counteractions/countermeasures to be enforced on the agent screen; other personal data threat related countermeasures such as transfer etc.; the validity period indication (countermeasure On state, countermeasure Off state) of the enforcement of the counteraction/countermeasures; the worker's modified skill; the threat value if present in an environment; the agent's modified integrity value; the estimated intent of the call; the virtual AI assistant ID that needs to be added into the call; and the transfer agent details (agent identifier (ID)). The information will be sent by the AI engine via the said interface if at least one of these interface data parameters has a non-null value. The smart surveillance application takes these interface parameters from the AI engine and uses them or sends them to the other relevant applications that it interfaces with. The smart surveillance application, with the help of the AI engine, does authentication, on-boarding, re-onboarding after a threat incident and complete desktop blocking for the worker in a high threat environment when personal data is projected and/or when personal data is voiced out. The smart surveillance application further interfaces with the customized normal agent desktop application, the customized supervisor desktop application and the call routing support application that handles skill/caller intent/agent ID based routing. The said customized normal agent desktop application is the work application the agent uses to carry on his duties. The said call routing application routes to a particular agent when the agent is identified by means of an agent ID, when an agent cluster is identified by means of a skill and/or when an agent is identified by means of an intent. This identified skill, agent ID, intent etc. can be given to the routing module indirectly by the AI engine using the smart surveillance application.
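The interface just described could be sketched as a message of optional fields that is sent only when at least one field is non-null; the field names below are assumptions for illustration, not the patent's normative parameter names:

    # Illustrative sketch of the AI engine -> smart surveillance
    # application interface message (all field names assumed).
    from dataclasses import dataclass, asdict
    from typing import List, Optional

    @dataclass
    class ThreatInterfaceMessage:
        screen_counteractions: Optional[List[str]] = None  # enforce on agent screen
        other_counteractions: Optional[List[str]] = None   # e.g. call transfer
        countermeasure_state: Optional[str] = None         # "On" / "Off" validity
        modified_skill: Optional[float] = None
        threat_value: Optional[float] = None
        modified_integrity: Optional[float] = None
        estimated_call_intent: Optional[str] = None
        virtual_assistant_id: Optional[str] = None
        transfer_agent_id: Optional[str] = None

    def maybe_send(msg: ThreatInterfaceMessage, send) -> bool:
        # Sent only if at least one interface parameter is non-null.
        if any(v is not None for v in asdict(msg).values()):
            send(msg)
            return True
        return False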
In contact centre terminology, a skill is tied to a group of agents/workers to identify the type of customer pool these agents can serve based on their skill. Contact centre call routing applications are usually able to identify an agent with a given agent ID to attend a customer call when the intent of the customer call, the associated skill related to the intent, the agent ID pool assigned to serve the skill and the availability of agents from the pool to serve customers of that particular intent are known. The skill management in this solution is done by the AI engine whenever WFH is incorporated into the contact centre system. It is further considered that the picking of the agent ID for a call can be done by the AI engine, the call routing engine or both. Before a call is assigned to an agent, the agent ID picked by the AI engine has the highest priority. The reason is that the AI engine is the entity that changes the skills w.r.t. personal data protection and also knows the current status of the personal data protection threat in every agent's WFH environment.
The mapping of an intent to a customer call can be identified using various methods. Such predictions are used in the solution where possible, to pick the agent to handle the call early and to avoid a data protection threat environment for the call. The intent related to the call can be identified using explicit customer selection of the intent at the beginning of the IVR (Interactive Voice Response) call. Alternatively, the AI engine also has the capability to identify the intent based on the route being taken by the call. The AI engine leverages past data relating the routes of calls to various intents, the current call route path and a machine learnt model to estimate the intent of the call based on the current IVR selections and the current call route.
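A minimal sketch of route based intent estimation is shown below; the frequency table is a hypothetical stand-in for the machine learnt model mentioned above, and the route and intent labels are invented for illustration:

    # Illustrative: estimate call intent from the partial IVR route using
    # frequencies of past (route, intent) pairs; all labels hypothetical.
    from collections import Counter

    history = [
        (("main", "accounts", "balance"), "balance_enquiry"),
        (("main", "accounts", "card"), "card_services"),
        (("main", "accounts", "balance"), "balance_enquiry"),
    ]

    def estimate_intent(partial_route):
        """Most frequent past intent among calls whose recorded route
        starts with the IVR selections made so far, with its share."""
        matches = [intent for route, intent in history
                   if route[:len(partial_route)] == tuple(partial_route)]
        if not matches:
            return None, 0.0
        intent, count = Counter(matches).most_common(1)[0]
        return intent, count / len(matches)

    print(estimate_intent(["main", "accounts"]))  # ('balance_enquiry', ~0.67)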
Technical Problem
There are many industries that use personal data in their daily operations, for example the banking sector, hospitals, the airline industry and the telecom industry. These industries engage workers, herein called agents, to perform the business activities. In this document, the terms worker and agent are used interchangeably. In such industries there is usually a dedicated set of agents working full time, attending to customer queries and promoting sales of the industry specific commodities. These agents are usually monitored w.r.t. work performance and conduct by supervisors, who are higher authorities within the organization.
Traditionally, such agent-customer interactions take place by means of calls (analogue calls or Voice over Internet Protocol (VoIP) calls running using the Session Initiation Protocol (SIP)). Nowadays many such agent-customer interactions have moved to Interactive Voice Response (IVR) in order to handle the very large call volumes and achieve better operations in these industries. These agents generally have to view personal data to complete their daily tasks when a call gets transferred to the agent (this can happen even during an IVR call). Such a work environment is called a contact center.
In their normal work environment in contact centers, such personal data exposure on the agent screen is not an issue, because there are many company owned surveillance cameras, supervisors may be patrolling randomly and fellow agents will be sitting nearby. In such well protected environments, the potential threat of customer personal data being stolen is very low. Also, the work domain will be protected by physical security guards and many cameras placed all over the work domain, which deters personal data theft crime significantly.
But the usual working environments of the contact center agents serving various industries may be suddenly shifted if there is an emergency situation in the country such as a war, a global pandemic or a natural disaster. In such a case, the agents may have to do their normal work routines from home via a VPN. This environment poses a very high threat to personal data protection because it is an unsupervised environment without many surveillance cameras and with no supervisors. The agents may plan to copy or steal the data: by copying using desktop tools such as the clipboard, by some recording application running on the work desktop and/or a recorder application running on a CCTV placed in the work environment, by taking a picture of the personal data using a phone camera, or by various other means. Additionally, another member of the house may plan to obtain some personal information and use it in an improper way. In such an environment, how to protect against ill behavior by another household member and even a malicious agent is not yet solved.
This invention provides an AI based personal data protection solution for this new environment, such as the home environment, which has many restrictions, threats and unknowns. The said restriction is that additional sensors and cameras cannot easily be fitted in this environment because it is one's home and a private environment. The said unknown is that there is no historical data theft data from the home environment for the usual AI operation. Also, the traditional alert/alarm based AI countermeasure is not useful here, because the supervisor is in another, physically distant environment and cannot come to prevent the threat.
With AI and IoT gaining a strong hold in the technology arena, many problems are currently solved with AI and IoT. Specifically, AI and IoT have started making their mark in the security domain too. Especially in the surveillance industry, AI and IoT with various types of sensors are used to detect abnormal behavior and trigger alerts.
Again, how to use AI to provide data protection in a home environment has not been explored in past solutions. To train the AI machine learning algorithms to attain accuracy in their predictions of data protection violations, there needs to be historical data with real threat incidents. But such data will not be available when one suddenly changes the work environment to home and wants a quick solution in place. Even if an AI engine is put in training in such work from home environments, if data protection thefts do not happen there is no useful data available to train the AI.
The main problem with the existing art is that there is no solution using AI technology available to predict a personal data theft event, to prevent a personal data theft when one is possible, and to continue contact center calls with the highest quality when a threat is predicted or actually present in the agent's work from home environment. Also, most of the AI based prior art needs past historical data, applied to solving various other types of problems. But as mentioned, past work from home data with personal data threat outcomes is not possible to get. Additionally, in the prior art technology, a great deal of data from many camera views and sensors is used to identify threats in real time. Also, it is well known that AI provides the greatest accuracy when it is trained over a long time (using evaluation and correction) with big data as input. But for a quick solution nowadays, such traditional training over a long period of time is not possible. Thus it is clear that many of the current solutions cannot be assembled to solve the work from home data protection problem.
Thus, how to ensure data protection and smooth calls, with clear identification of data protection threats using adequate data (but not overwhelming data for predictions) for the work from home surveillance domain, and how to have quick countermeasures that really stop the data from being stolen, has not been revealed in the existing art for such work from home environments.
The present invention provides an AI based solution that combines the various aspects of AI technology in appropriate proportions, without overdesigning the data capturing points, and achieves data protection without trading off the quality of the service rendered to the calling customer.
Solution to Problem
The solution for the above stated problem has all the components needed to handle the issues in the problem occurring scenario and the solution gaps mentioned above. The solution described has all the needed components to make the agent WFH environment safe in spite of malicious acts that may be planned in the WFH environment to capture the personal data stored in the organization environment.
The solution has a method where the system does not simply activate countermeasures for every personal data protection threat event. It identifies the personal data protection threat and also checks whether, with the threat present, personal data will be used. Only if the data protection threat and the personal data involvement in the environment both evaluate to 'yes', and this can be estimated accurately, is a countermeasure activated to protect the data and keep a smooth continuity of normal work operations. If the data protection threat and the personal data involvement in the environment both evaluate to 'no', and this can be estimated accurately, a countermeasure is not activated. If the personal data involvement in the threat environment cannot be estimated, then the countermeasure will be activated in any case. Even if countermeasures are not executed explicitly in this system, every threat has some impact: even without countermeasures, the agent skill and integrity points are modified if a threat value is positive in the environment. This solution can therefore act even when the AI engine does not have an estimation mechanism for personal data involvement during the call, as mentioned above.
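The activation logic just described can be captured in a few lines; in this sketch, None stands for "cannot be estimated accurately", and the function name is an illustrative assumption:

    # Illustrative: countermeasure activation per the paragraph above.
    # (Skill and integrity points are adjusted whenever the threat value
    # is positive, even when no explicit countermeasure runs.)
    def countermeasure_needed(threat_present, data_involved):
        if data_involved is None:      # involvement cannot be estimated:
            return threat_present      # activate in any case on a threat
        return threat_present and data_involved

    assert countermeasure_needed(True, True) is True    # both 'yes' -> act
    assert countermeasure_needed(True, False) is False  # threat, no data
    assert countermeasure_needed(False, False) is False # both 'no'
    assert countermeasure_needed(True, None) is True    # unknown -> act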
The AI engine is designed to act in such a way that the initial access to the agent desktop work application at the beginning of any work day is only allowed if the home work environment is considered completely safe by the AI engine w.r.t. personal data protection (i.e. safe proof). Safe proof herein means that there is no threat now in the environment and the environment is also protected from any threats that can appear in the future. The said safety of the WFH environment is detected by predefined rules embedded in the AI engine. The AI engine will use the rules to check on the safety aspect. The rules are composed such that they ensure the data protection threat components are not present in the WFH environment and that degradation of the WFH environment from the safety point is reduced.
Additionally, on top of the mentioned rules, the agent will be given access to the agent desktop work application to commence work only if the agent's face has been authenticated as an approved agent, the agent's work environment is detected as the agent's home work environment, and the video image of the agent during authentication (on-boarding for the given day) is detected as live by the AI engine. The said liveliness detection is based on evaluating the correct rendering of an agent emotion when the emotion to be shown is requested by the system, and also on other liveliness detection methods used by the AI engine. The on-boarding of the agent to access his agent desktop application is done every day, and this design component additionally tightens the security. After the agent logs out from the agent work application, it is considered the end of the current work day and no personal data threat detection will be performed by the AI assisted system. It is considered in this solution that only at the initial on-boarding of the agent at the beginning of the work day is the work environment fully captured by more cameras (such as a dedicated webcam that captures the full room, probably a 360 degree webcam, or even a CCTV camera that is already present in the agent's home environment). The solution also considers that the subsequent WFH environment capturing by the system (i.e. after on-boarding of the agent to the agent desktop work application) is done using a minimal camera, possibly only the webcam that is inbuilt in the agent desktop.
The initial home environment check by the AI engine detects that there are no threats to personal data protection at the initial time, and that initial check further helps in subsequent data threat detection during operation, which has constraints. The said constraint is that only minimal cameras are used after on-boarding of the agent for the day, in order to reduce overburdening the system with excessive monitoring data transport, to reduce cost and to reduce redundant data being stored. To realize this, appropriate rules are used in the initial personal data protection threat detection phase. Initial threat detection herein refers to the data protection threat detection where the said threat originates from the WFH environment at the start of work on an approved working day of an agent. The data protection threat in the WFH environment refers to threats originating from the environment surrounding the agent and also to the agent himself behaving maliciously. At the on-boarding stage, the data protection threat detection mainly targets the threat originating from the environment surrounding the agent, because at this time there is no call and no personal data projected on the screen. In this invention document, where possible, the personal data protection threat in the agent WFH environment is subdivided into the above mentioned categories. The reason for such subdivision is that these categories generally have non-overlapping threat events. The only common data protection threat event belonging to both categories is an agent using a dedicated camera, such as a phone camera, to attempt to capture the personal data projected on the screen in a manner visible to the inbuilt camera of the agent desktop.
The initial work from home environment safety w.r.t. personal data protection threats is evaluated by means of the following rules. These rules all have to be satisfied and in place before the agent is given access to the agent desktop work application; a minimal sketch of such a rule check follows the list of rules below. Why these rules are composed this way is further explained in the embodiment sections of the document.
Rule 1: The work area should be an enclosed area, only a single door is allowed and that door should be in the view of the AI engine, which uses a restricted camera view during operation.
Rule 2: The said door of the agent's work space should be locked.
Rule 3: Windows in the work area should be locked at all times, and if a given window is to be kept open during the agent's work time, then that window has to be in the agent desktop's inbuilt web camera view.
Rule 4: Only the agent is allowed to stay in the bounded work area; no other person is allowed to be present when work starts.
Rule 5: Before agent desktop application usage permission is granted, another check point is that the room lights are 'On'. This is to prevent an undetected person being in the room.
Rule 6: All additional phone cameras and dedicated cameras have to be kept aside by the agent at another location away from the work location. A personal smart phone can only be used with the back camera view blocked.
Rule 7: The agent has to be at his work desk facing the desktop.
Rule 8: The CCTV belonging to the agent's home has to be kept in the video recording Off state if the AI engine evaluates it as a threat during the on-boarding approval check.
Rule 9: The agent is allowed to access the agent desktop work application only on an approved work day (e.g. the given day is not a public holiday nor the agent's pre-planned leave day).
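As referenced above, the nine rules can be viewed as a single on-boarding gate; the field names of the environment snapshot in this sketch are assumptions made for illustration only:

    # Illustrative: the nine on-boarding rules as one pass/fail gate.
    ONBOARDING_RULES = [
        ("R1", lambda e: e["door_count"] == 1 and e["door_in_camera_view"]),
        ("R2", lambda e: e["door_locked"]),
        ("R3", lambda e: all(w["locked"] or w["in_camera_view"] for w in e["windows"])),
        ("R4", lambda e: e["persons_in_room"] == 1),
        ("R5", lambda e: e["lights_on"]),
        ("R6", lambda e: not e["loose_cameras_present"]),
        ("R7", lambda e: e["agent_at_desk_facing_desktop"]),
        ("R8", lambda e: not e["cctv_recording"]),
        ("R9", lambda e: e["approved_work_day"]),
    ]

    def onboarding_allowed(env):
        """All rules must pass before the agent desktop work application
        is unlocked; failed rule IDs feed the warning to the agent."""
        failed = [rid for rid, check in ONBOARDING_RULES if not check(env)]
        return len(failed) == 0, failed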
After on-boarding of the agent, when the agent has permission to use the agent desktop work application, another set of environmental data threat detection rules is incorporated into the solution operation space. This is to detect deviations from the initial approval condition that can further cause environment based data protection threats. These subsequent threat rules concern identification of certain objects/events in the work from home environment that can pose a threat to data protection, considering that during operation after on-boarding only a reduced camera view is available to the AI engine. More details of these additional rules will be revealed in the embodiment(s).
This solution has warnings for data protection threats rather than immediate counteractions that change the route of the customer call. This warning is a solution countermeasure that is only sent when the personal data is not yet projected on the agent screen. In this solution, when the AI engine identifies such an environment originating data protection threat during agent work time, it does not immediately send alerts to the agent concerned. It additionally predicts whether a call which needs personal data is going to arrive at that particular agent situated in the data protection threat environment. The system combines the data protection threat level in the agent environment and the prediction of a call that will use personal data arriving at the particular agent into a countermeasure score for warning purposes. If the said score is a value greater than 0 and none of the contributors to the score has a 0 value, then a warning will be sent to that agent. The warning frequency increases with the score value. More details of this will be described in the embodiment. In some systems, when the intent of the call is known and the intent value can identify whether the call needs personal data or not without any prediction, the usage of personal data for the call can be known with 100% accuracy.
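One way the score could be composed, assuming a simple product of the contributing factors and an assumed base warning interval (the constants and names here are illustrative, not claimed values):

    # Illustrative: warning score is non-zero only when every contributor
    # is non-zero; warning frequency grows with the score.
    def warning_score(threat_level, p_call_arrival, p_data_needed):
        # p_data_needed is 1.0 when a known intent implies personal data use
        factors = (threat_level, p_call_arrival, p_data_needed)
        if any(f <= 0.0 for f in factors):
            return 0.0                 # no warning if any contributor is 0
        return threat_level * p_call_arrival * p_data_needed

    def warning_interval_seconds(score, base=120.0):
        # called only when warning_score > 0; higher score -> shorter
        # interval, i.e. more frequent warnings
        return base * (1.0 - min(score, 0.99))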
The AI engine incorporated in the solution will ensure that no call is dropped without handling, even in the case of such a data protection threat. Before the call is routed to a given agent (the call has hit the IVR but not reached an agent, or the call is predicted to reach the call system), the system evaluates the probability of the call landing in a data protection threat environment and uses the following methodologies to prevent the call landing in a threat environment. If the call is estimated to land in the threat environment, the AI engine slightly reduces the skill of the agent in the threat environment (i.e. the skill tied to an intent), so that the call has a lesser chance of landing in that environment. The AI engine issues a warning to the agent to clear the threat environment so that when the call lands the environment is safe. After the warning, the skill is restored based on the agent's compliance with the warning. The skill is not restored completely back to the original value even after such compliance is shown by the agent.
The AI engine operates in a manner where, for a given caller intent, a skill range is defined. Within this skill range, agents with a higher skill for that intent are assigned to a call, where the skill value is used to indicate priority. Basically, the system defines a skill range of non-discrete values related to an intent. Within the skill range, the agents with a higher skill are assigned to attend an intent based call. All skill modifications for an intent have to fall within this skill range. If an agent is assigned a skill value outside the defined range, then it is considered that the agent has no skill related to the particular intent and cannot be assigned to attend the related intent based call.
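A minimal sketch of this per-intent skill handling, with the range bounds, the reduction factor and the restoration penalty all being illustrative assumptions:

    # Illustrative: skills live in a per-intent range; reduction on threat,
    # partial restoration on compliance, out-of-range means ineligible.
    SKILL_RANGE = (1.0, 10.0)          # assumed per-intent skill range

    def eligible(skill):
        lo, hi = SKILL_RANGE
        return lo <= skill <= hi

    def reduce_on_threat(skill, threat_level, factor=2.0):
        # may fall below the range, removing the agent from the pool
        return skill - factor * threat_level

    def restore_on_compliance(original_skill, penalty=0.5):
        # restored close to, but not completely back to, the original
        return original_skill - penalty

    def pick_agent(agents):
        # agents: dict agent_id -> per-intent skill; highest eligible wins
        candidates = {a: s for a, s in agents.items() if eligible(s)}
        return max(candidates, key=candidates.get) if candidates else None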
The solution incorporates prediction components such as: the prediction of call arrival (based on the condition that an IVR call for a given intent has hit the system but not yet been routed to an agent, or a call for an agent of a particular intent has been put on hold and in a queue from the beginning of the call until being served by the agent) at an agent serving a particular intent in a threat environment, predicted using the estimation of the total number of agents available at the future time when the said call hits an agent of the intended intent. More details will be given in the embodiment.
To ensure data protection safety when the call arrives at the agent work environment, the solution has a warning component to make the environment safe before the call lands at the agent (as mentioned previously). But the agent may be confused as to what actions to take to ensure safety when the warning is issued. The AI engine will therefore send such safety restoration details too. This design principle is used to quickly restore the safety of the environment.
The AI engine identifies the threat on a continuous basis, such as periodically, and otherwise based on events such as a call having arrived into the system, a call going to be transferred to another agent etc. In addition to event based checks, this periodic basis is needed to track the agent's compliance with the security principles in the system. Also, such periodic checks sometimes expedite the decision making w.r.t. personal data protection. If an agent shows less compliance, then there is always a threat in that agent's WFH environment. But the countermeasure for a threat is only planned if personal data is already exposed or planned to be exposed, the threat is present in the agent work environment at the same time and this agent is supposed to handle the call. If a warning is sufficient to restore environment safety, then there is no need for extreme countermeasures. The solution has such design components.
In addition to detecting call arrival at a particular agent when the system already knows the call is for a particular intent (e.g. a callback call planned ahead, the customer selecting the intent at the beginning of the call in the IVR system etc), the AI system also has the ability to detect the call arrival into the system for a particular intent and its hitting the given agent (applicable in contact centers where the intent cannot be identified at the beginning of the call). Such predictions are incorporated into the solution space so that early measures can be put in place for a smooth call experience in a threat induced environment. More details will be given in the embodiment.
In addition to the environment based data protection threats, the AI engine is able to identify malicious events w.r.t. personal data threats that are related to a malicious agent as well. The AI based solution is able to identify malicious agent events such as, when personal data is exposed on the screen of the agent, the agent himself acting maliciously to copy this data for later usage, for example using a camera to capture the data. Additionally, the AI engine tracks whether the agent is trying to record personal data screen projection moments. The AI engine also tracks the agent trying to utter personal data in an audible manner at an unrelated moment as perceived by the AI engine, trying to pass the personal data via a call to another unrelated person, or passing the personal data in real time using any agent desktop third party applications. More details will be revealed in the embodiment. If such threats are present, then the system will block the personal data from being shown on the screen and will subsequently consider another agent or the AI assistant to handle this call within the call's session.
The solution also has preventive methods and countermeasures, such as preventing personal data projection at unnecessary times (i.e. outside call time or when a call does not need particular personal data). Clipboard copy and paste is disabled on the agent desktop while personal data is projected on the agent desktop application. If the environment threat present when personal data is exposed on the agent screen is high, the system ensures the agent desktop application is immediately blocked.
If the AI engine predicts that a call will arrive at a given threat environment (identifying that the skill of this agent will enable the routing towards the agent) and the environment threat is difficult to clear (or too late to clear), it will further reduce the skill of the agent to prevent the call from landing in this environment. This skill reduction is in addition to the initial skill reduction applied when the warning is issued as the call first hits the system.
If an environment related threat or an agent trying to record/copy data is present while a call is present in that WFH environment, the said call needs personal data to be viewed to handle the customer and the said environment threat is high, then the first choice of the system is to find another agent for transfer. If the call time within the agent has been significant and no agent is available, then the call will be put in AI assisted mode, where the AI assistance will convey the personal data. If the call has to be transferred to a lower skill agent, then AI with intent based analytics will be engaged on the call; this lower skilled agent option is considered when some agent is available but not of the relevant skill. The solution will consider a high priority call back if much time has not been spent in the given call in the threatful environment and no appropriate agent of the needed skill is currently available. The high priority call back is such that, when the callback is decided, the callback call is put in the queue to get an agent; thus when an agent is available and ready to serve, this callback call gets handled immediately. If not in the queue, the agent has to complete the calls in the queue and only then look at the callback call.
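The decision just described could be sketched as follows; the ordering among the fallback options and the function/parameter names are assumptions for illustration:

    # Illustrative decision for a high-threat moment during a live call
    # that needs personal data viewing.
    def handle_high_threat_call(call_time_significant, same_skill_agent,
                                lower_skill_agent):
        if same_skill_agent is not None:       # first choice: transfer
            return ("transfer", same_skill_agent)
        if lower_skill_agent is not None:
            # transfer plus intent based AI analytics engaged on the call
            return ("transfer_with_ai_analytics", lower_skill_agent)
        if call_time_significant:
            # keep the call; the AI assistant conveys the personal data
            return ("ai_assisted_mode", None)
        # little time spent, no suitable agent: queue a callback that is
        # served as soon as an agent becomes available
        return ("high_priority_callback", None)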
The solution also supports re-onboarding of the agent after the agent desktop application screen has been locked. The re-onboarding of the agent to the agent desktop application has many check points before access to the application is re-granted. The re-onboarding check points and conditions applied by the AI based system are: quick response (QR) code validation of the agent ID; dynamic code generation and notification to the agent's smart phone (this dynamic code is used as a dynamic password); agent face detection; agent face liveliness detection; evaluation of the agent work environment's data protection; a satisfactory agent compliance/integrity score based on past violations; and agreement given by the supervisor. The system presents the agent's security incident details and the net compliance/integrity score to the supervisor. The supervisor can make the re-onboarding decision based on the information projected to him; the said supervisor can give or withhold agreement. The AI engine also presents the contact center call performance statistics to the supervisor in case the agent under re-onboarding evaluation is not on-boarded. If the call volume for the given intent is predicted to be low, then the re-onboarding of the agent can be delayed. Otherwise, if the call volume for the intent is predicted to be high, then the agent has to be re-onboarded. These details are given to the supervisor so that he can make the appropriate re-onboarding decisions using many metrics. More details will be given in the embodiment(s).
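A minimal sketch of these re-onboarding gates, where every gate must pass before the supervisor's decision is sought; all field names are illustrative assumptions:

    # Illustrative: re-onboarding checkpoints followed by the supervisor
    # decision, which may consider the predicted call volume.
    def reonboarding_gates(ctx):
        return all([
            ctx["qr_agent_id_valid"],
            ctx["dynamic_code_matches"],   # one-time code sent to agent phone
            ctx["face_recognised"],
            ctx["face_is_live"],
            ctx["environment_safe"],
            ctx["integrity_score"] >= ctx["integrity_threshold"],
        ])

    def reonboard(ctx, supervisor_decides):
        if not reonboarding_gates(ctx):
            return False
        # supervisor sees incidents, the net integrity score and call
        # statistics; low predicted call volume allows deferring approval
        return supervisor_decides(
            incidents=ctx["incidents"],
            integrity=ctx["integrity_score"],
            predicted_call_volume=ctx["predicted_call_volume"],
        )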
Advantageous Effects of Invention
The advantages of the invention are as follows:
The solution identifies environment related threats to data protection using AI components such as image recognition/object recognition and rules to estimate/predict the threat, rather than big data analytics and machine learning to build a machine learning model. In many such threat scenarios, past data is not available to adequately train the machine learning component of the AI. Moreover, for a quick deployment of the solution, a machine learning algorithm cannot be put in training mode for a long time to get accurate results. Thus many of the common AI solutions are at a disadvantage, because they cannot fit the requirement where past data for the data threat is not available and the solution deployment has to be quick. This solution uses the needed components of AI and yet enables data protection related environmental threat identification in a shorter time without relying on past data. The object detection can use many off the shelf solutions where needed. However, the solution does not prevent the object recognition from being developed as an independent entity as well.
Many solutions are present that identify a threat and send alerts to notify that a threat has occurred, or alert that a threat will possibly take place. This solution assembles many methods to actually prevent the threat from taking place, by reducing the time during which a protected entity is revealed to an attacker. That is, it protects the personal data using the 'Show Less' principle: reveal the data only when needed.
Currently, the prior solution space does not handle how to manage a call in a threat environment. One can block the data from the agent's view when there is a high personal data protection threat, but the call has to continue with the highest possible QoS in spite of such a data protection threat environment. Blocking data only protects the data. Such solutions are not in place in the existing prior solutions. The current solution prevents the threat (i.e. only shows personal data when needed and avoids agents who are in a data protection threat environment), and when it cannot be prevented (i.e. during a call a safe environment suddenly becomes unsafe), it enables the call to continue with high QoS using the other available agent resources and artificial agent support devices (the AI assistant), combined intelligently.
In a work from home environment, sending alerts to the supervisor does not help prevent the data threat, because the supervisor can be in a faraway place with no way to prevent the threat happening in the agent's work domain. This solution is designed such that, when a high threat is detected in the environment, the countermeasures are not simply alerts but various other types: completely block the personal data if data is projected on the screen when the high threat is detected, and impose many re-onboarding check points before access to the system is regained; proactively enable such important data to be shown in safer environments within the same session related to the call; and enable the data to be whispered using the AI assistant if the data cannot be shown in other safer environments.
This solution identifies personal data related threats using many check points that are not available in the existing solution space. It tracks not only the external environmental threats which are usually tracked in many existing solutions, but also, using the AI technology, any malicious behavior of the agent such as trying to copy the data, record the data, utter the data at an unrelated moment or reveal the data in a phone conversation.
Another advantage of this solution is that the countermeasures are not fixed. They are dynamic and vary based on the degree of severity. The countermeasures are carefully picked in a dynamic manner based on the severity and the state of the system (number of agents, their skills, the importance of or special treatment due to the call, the QoS currently experienced during the call, the call volume etc.).
The solution has the further advantage that excessive data is not obtained from the environment to detect the threat. Unnecessary and vast data can be costly to obtain and maintain; since in some cases such threats are a rarity, only the needed data suffices.
Data protection is achieved by strict countermeasures: when the threat level is high, the data is completely hidden in the threat environment (i.e. shown data is blocked, or data is not allowed to be projected on the screen in spite of a request), and using the data again in such a threat environment requires many steps of authentication. This improves the data protection security of the system because the agent will comply more towards a secure work environment.
The solution uses many techniques to prevent personal data exposure to a threat environment. Some of these prevention techniques are: the agent skill can be modified so that the call is routed to another agent and does not land in a threat related environment; warnings are sent to clear the threat environment; and a dynamic transfer to another agent is enabled if a threat is identified during a call.
In many existing prior solutions the agents' skills are static. In this solution the skill is modified dynamically to ensure that a call which needs personal data to be revealed/examined lands at an agent who has a safer environment and adequate technical skills to attend the call.
In many existing prior solutions, the agent to which the intent based call has to be landed and handled is evaluated only once. Here the appropriate agent is continuously evaluated among the available agents, because the agent skill level is dynamically changing to fit the data protection threat. This dynamism helps to achieve correct call handling under such external dynamic threat events.
To manage the shrinking agent resource pool, where due to threats some agents are given lower skill values and the pool thus depletes in size, the solution engages an AI assistant. The AI assistant helps to reveal the personal data to the agent in audio mode, so that the information cannot be obtained by malicious entities and the agent in a threat environment can still continue with the call.
The solution has many agent skill restoration methods, where restoration takes place when the restoration criteria are satisfied. This solution component is useful because technically skilled agents can re-engage in call handling once they show compliance to safety. The solution also has compliance or integrity points. These make it possible to penalize agents so that they improve the security best practices in their work domain.
Brief Description of Drawings

Fig.1 is an exemplary system diagram, which highlights the AI assisted smart personal data protection system where the AI engine is hosted in the cloud, leveraging the cloud AI platform.
Fig.2 is an exemplary system diagram, which highlights the AI assisted smart personal data protection system where the AI engine is hosted along with other applications in a local restricted environment.
Fig.3A highlights the solution concept related to AI assisted authentication and environment safe proofing before on-boarding an agent into a work day.
Fig.3B highlights a given group of AI assisted solution concepts related to preventing personal data from being viewed at unrelated moments and also detecting threat originating from malicious agents.
Fig.3C highlights another group of AI assisted solution concepts related to preventing personal data from being viewed at unneeded moments and also detecting threats originating from malicious agents.
Fig.3D highlights a given group of AI based solution concepts related to preventing a call being landed to an agent/worker environment when the agent environment is detected as not safe.
Fig.3E highlights a given group of AI based solution concepts related to ensuring a call which needs personal data is handled smoothly in spite of the fact there is threat present in the agent environment to which the call initially landed.
Fig.3F highlights a given group of AI based solution concepts related to ensuring the worker/agent is on-boarded again after his work environment is detected to be safe again and also highlights solution concept related to AI assisted restoration of skill points of agent/worker after threat incident.
Fig.4 is an exemplary AI engine software architecture diagram, when AI related modules are clustered together as a single application.
Fig.5A highlights the exemplary flowchart of the AI assisted main system operation when the system is in the initial rollout state of operation w.r.t. an agent and in the authentication phase.
Fig.5B highlights the exemplary flowchart of the AI assisted main system operation when the system is in a state where the call for a given agent has hit the system, the agent/work environment is not yet safe, and the call has not yet arrived at the agent.
Fig.5C highlights the exemplary flowchart of the AI assisted main system operation when the system is in a state where a data protection threat is evaluated as present and the call which needs personal data has hit the agent/worker. It also highlights the operation where the threat condition is cleared in the agent environment when the call arrives.
Fig.5D highlights the exemplary flowchart of the AI assisted main system operation when the system restoration related to skill and integrity point happens after a given data protection threat event.
Fig.5E highlights the exemplary flowchart of the AI assisted main system operation when the system predicts that in the future a threat state may happen when the call arrives based on the current condition/state.
Fig.6 highlights the sequence diagram of re-onboarding the agent/worker to use the system after a data protection threat has blocked the screen and the agent work application has been locked.
Fig.7 highlights the exemplary flow chart of the smart surveillance application that is interfaced to the AI engine and also has the capability to control the restricted and blocked information to be shown on the worker/agent desktop.
Description of Embodiments

In this section the embodiments of the solution are described in the most suitable manner without deviating from the scope and ambit of the invention. One skilled in the art will know that the invention can also be implemented using systems and methods that are not restricted to the ones described in these embodiments. These embodiments explain the core point of the invention using some very suitable systems, and suitable methods to best realize the invention within the said systems.
In one preferred embodiment (1st embodiment) of the present invention, a system (e.g. a contact center system) is considered which implements and uses intents such that, from the intent value of the call itself, whether the call needs personal data or not can be identified without any estimation. In some cases call center systems can be designed in such a manner that, from the call's intent value itself, the system knows whether the call needs personal data to be handled. This intent can be explicitly selected by the customer during the call lifetime or route. It is considered that in such systems the intent value provides a definite hint of personal data involvement or non-involvement for the call. In such cases, when the call intent value implies that personal data is needed to handle the call and a personal data protection threat is detected for the call landing environment, appropriate countermeasure(s) are put in place to handle the threat: the call is transferred where possible to another agent in a safer environment before the call reaches the threat environment, or, if the threat happens after the call hits the agent, the call is transferred to another agent in a safer environment where possible. Alternatively, if the call intent value implies that personal data is not needed to handle the call's intent, then when a personal data protection threat is detected for the call landing environment (i.e. the WFH environment of the agent picked to handle the call), countermeasures are not put in place (i.e. the threat in the environment is ignored). Additionally, when the intent implies no personal data is involved and a personal data threat occurs while the call is in progress in a given agent environment, countermeasures such as call transfer or any other threat based countermeasure are not implemented. The AI engine, knowing the intent value of the call and the relationship of the intent to personal data involvement, is able to do such call handling.
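Purely as an illustration of this embodiment's decision rule, the following sketch assumes a hypothetical catalogue of intents that imply personal data involvement; the intent names are invented for the example.

```python
# Illustrative sketch of the 1st embodiment's rule; the intent names in the
# catalogue below are invented examples, not defined by the invention.
INTENTS_NEEDING_PERSONAL_DATA = {"account_update", "card_replacement"}

def needs_countermeasure(intent: str, threat_in_landing_env: bool) -> bool:
    """Countermeasures (e.g. transfer to an agent in a safer environment)
    apply only when the call's intent implies personal data involvement AND
    a data protection threat exists in the landing WFH environment."""
    return intent in INTENTS_NEEDING_PERSONAL_DATA and threat_in_landing_env
```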
In this embodiment it is also considered that, if the intent value of the call implies that no personal data is needed to handle the call from the customer, then the system prohibits a request by the agent handling the said call to view personal data tied to the said call's customer during the call. The personal data related to the calling customer is not allowed to be viewed within the system, as the call's intent is identified as one that does not need personal data to be viewed.
In this embodiment the core point of the invention is highlighted: the countermeasure for a data protection threat only takes place if there is a personal data threat in the WFH environment, the call has a high chance of landing in (or has landed in) the said threat environment, and the said call needs personal data to be used during its handling period (where the call's need for personal data can be evaluated with reasonably high accuracy; one example is separate call intents assigned for calls that need personal data to be viewed).
In some cases/systems, the call's intent value does not obviously identify whether personal data is needed; such systems simply do not have intents as a means to identify personal data involvement. In such systems, the relation of the call's intent value to personal data involvement can only be estimated, for example from the routing path of an IVR call; based on the route of the call this can be estimated to some degree. For such calls, if personal data involvement is not needed and this can be predicted with accuracy > 90%, then the calls do not need any countermeasure (for example call diversion/transfer to another agent) unless the personal data is used while a data threat is present in the environment. Alternatively, if personal data involvement is needed and this can be predicted with accuracy > 90%, and the call will land in a data protection threat related WFH environment, then such calls will generally have the countermeasure of call diversion/transfer to another agent.
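The > 90% rule above can be sketched as follows; the function names and probability inputs are assumptions for illustration only.

```python
# Illustrative sketch of the >90% prediction rule for systems that must
# estimate personal data involvement from the IVR call route.
def divert_call(p_needs_personal_data: float, threat_in_landing_env: bool) -> bool:
    """Divert/transfer only when involvement is predicted with confidence
    above 0.9 and the landing WFH environment has a detected threat."""
    return p_needs_personal_data > 0.9 and threat_in_landing_env

def ignore_threat(p_no_personal_data: float) -> bool:
    """Skip countermeasures when non-involvement is predicted with
    confidence above 0.9 (unless personal data is later actually used)."""
    return p_no_personal_data > 0.9
```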
In systems where there cannot be any definite mapping of the call's intent to personal data usage and this cannot be predicted at all (e.g. the IVR call route has no structure to identify intent, or non-IVR calls), the countermeasures are always in place when a personal data threat is detected in the environment and the said call will land in the threat induced WFH environment. The countermeasures are also in place when a call is happening in an environment and a personal data threat is suddenly induced in the call environment.
Whenever a call that uses personal data has completed the personal data usage, if a data threat in the WFH environment happens after the personal data usage and the agent has notified the system that no additional personal data is needed to complete the call, then the countermeasures will not be put in place. Additionally, once the agent notifies within a call session that personal data is not needed, the system prohibits the personal data from being shown on screen.
Next, one preferred embodiment (2nd embodiment) of the present invention is described, in which the solution's high-level system operations, the system level functional software components and the relevant interfaces between the said software functional components are highlighted to better understand the solution.
To better understand this embodiment, Fig. 1 is referenced. Fig. 1 specifically highlights the system diagram of a contact center (CC) environment where one or a plurality of agent(s) (109) are working from home and a supervisor (105), who monitors their work and conduct, is also working from home. The said supervisor in the system could be one or a plurality of supervisors. In Fig. 1, the WFH environment of the agent is denoted by 100 and the WFH environment of the supervisor by 101. It is also considered that in the WFH environments (100, 101) the agents have to engage in calls as part of their daily work operations. The supervisor can also take the role of an agent. Calls that hit the contact center can traverse the IVR system before reaching the agent. The agent's help is required in the call handling if the IVR supported functionalities are not sufficient or the customer has clarifications and an agent's involvement in the call is necessary. The solution operation in the contact center environment is not restricted to IVR calls only. One skilled in the art would know that the core concept of the solution presented in this embodiment is applicable even in contact centers that do not engage an IVR for call handling. The solution operation is also applicable in environments where a call hits a contact center in which IVR is not in place, or in which the CC has an IVR component but IVR is not supported for the given call.
The system in Fig. 1 highlights some threat components such as a phone with camera, an unlocked door or an intruder with an intention to copy the personal data in some form in the WFH environment. Such threat components are highlighted as components 102 and 103 respectively in the agent and supervisor environments in Fig. 1. It is generally considered that these threat components are not always present in the WFH environment. It is also considered that during the solution operation time, the AI engine (115) situated in the cloud environment (116) only retrieves minimal data as obtained from the agent desktop's inbuilt webcam (the agent's desktop inbuilt camera has a limited view). The AI engine (115) communicates with its own DB (118) using the interface (117). It is further considered that during on-boarding of the agent to his work desktop application at the beginning of every day, a full room view needs to be given to the AI engine (115) for on-boarding checks. Such on-boarding procedure/checks are done at the beginning of every work day and also after the system blocks the agent from using his work desktop application when a breach or violation is detected. The work desktop application is a web application which the agent uses to carry on his daily activities. This said application (125) is hosted in the customer premises and runs on the agent desktop browser when activated. Similarly the supervisor's work desktop application can be the application (125), and additionally the supervisor will use the application (126) that has features specifically targeting the supervisor. This supervisor specific application (126) can also be hosted in the organization's customer premises (121). All the applications that are hosted in the customer premises (121) can use a DB (123). Based on the system architecture revealed in Fig. 1, the data for the AI engine will traverse the internet (113) using the communication interface(s) (110, 114).
Since, as mentioned, only a restricted camera view is used by the AI engine (115) during its operation, it is essential that the only door to the confined agent work space is in the inbuilt camera view, as highlighted by components 107 and 104 in Fig. 1. The said door is considered one of the highly probable threat entry points (e.g. a malicious person trying to come into the work environment) and thus should be in the restricted camera view during operation times, so that threat related events can be identified well ahead, from the opening of the door itself.
In the system described in Fig. 1, pre-installed CCTV cameras (108, 106) in the WFH environments (100, 101) can be seen. Some WFH environments may have a CCTV camera pre-installed (108, 106) in the specific work room of the agent and/or supervisor, as illustrated in Fig. 1. This is an out of scope installation with respect to the current solution: CCTV is not a mandatory part/component of the system solution, but the agent/supervisor has it as part of their daily operation to protect their own home environment. However, if such a camera is present in an agent (or even supervisor) WFH environment, then it has to be preferably integrated into the system, since it may pose a threat to personal data protection (i.e. the CCTV may be able to record the personal data appearing in the agent desktop work application as a series of images or as a single video file). By the said integration the system may be able to detect the threat caused by the CCTV and, if there is a threat, set the CCTV (108) video recording to 'Off', monitor the video recording 'Off' state, and hence reduce the threat induced by the CCTV (108) present in the WFH environment. The mentioned 'Off' does not mean the CCTV (108) system power is off; it implies the CCTV (108) video capture and video recording are kept in an 'Off' state. Basically, the said 'Off' implies video recording is in an 'Off' state, and herein 'On' implies video recording is in an 'On' state. The CCTV is integrated to the system in the 'On' state only when CCTV operating with recording 'On' is not considered a threat to personal data protection.
If there is a CCTV (108) in the WFH environment, then as a first step the CCTV (108) has to be integrated into the system. If during agent on-boarding time the CCTV (108) integration is not detected in spite of a CCTV being present in the WFH environment, then on-boarding will fail. This is because the AI engine (115) is able to detect the CCTV (108) presence using its object recognition method and will subsequently trigger the smart surveillance application (122) to put the needed 'test text message' on screen for deciding on the CCTV integration. After sending the 'test text message' to the agent screen, the AI engine (115) expects the CCTV (108) capture of this said message to be returned to the AI engine (115). If such a captured message is not given to the AI engine (115), it will consider that the CCTV (108) is not integrated into the system.
Before the CCTV camera (108) is evaluated by the AI engine (115) as to whether it should be integrated with video recording On or video recording Off, it will be in the recording On state and will send the evaluation related captured video to the AI engine for evaluation. The said evaluation related captured video is the video comprising the 'test text message' projected on the agent desktop screen. The communication between the CCTV system (108) and the system hosting the AI engine (115) is realized by the interfacing unit present in the CCTV system (108). The primary objective of the interfacing unit is to send the evaluation video recording to the AI engine to decide on the mode of integration of the CCTV (108), and to receive triggers from the smart surveillance application to turn the CCTV (108) video recording On or Off. The said interfacing unit will also receive triggers such as 'send recorded data', sent by the smart surveillance application, to forward live captures to the AI engine. This additional video from the CCTV (108) (i.e. the said live captures) is sent simply to check whether the CCTV (108) position has changed after it was determined that there is no threat from the CCTV (108). The said interfacing unit also has the capability to use these triggers to perform certain functionality on the CCTV system (108), such as stop CCTV video recording, start CCTV video recording, send CCTV recorded data to the AI engine and stop sending CCTV recorded data to the AI engine. This invention briefly highlights the function of such an interfacing module.
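The trigger handling of such an interfacing unit could, for illustration only, look like the following sketch; the trigger strings and the cctv/AI-client methods are hypothetical interfaces, not defined by the invention.

```python
# Hypothetical sketch of the CCTV interfacing unit; trigger names mirror the
# description above, while the cctv/ai client methods are assumed interfaces.
class CctvInterfacingUnit:
    def __init__(self, cctv, ai_engine_client):
        self.cctv = cctv            # driver for the CCTV system (108)
        self.ai = ai_engine_client  # uplink to the AI engine (115)

    def handle_trigger(self, trigger: str) -> None:
        if trigger == "stop_video_recording":
            self.cctv.stop_video_recording()
        elif trigger == "start_video_recording":
            self.cctv.start_video_recording()
        elif trigger == "send_recorded_data":   # live captures for checks
            self.ai.upload_video(self.cctv.latest_recording())
        elif trigger == "stop_sending_recorded_data":
            self.ai.cancel_upload()
```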
In this integrated state, before the CCTV integration mode is decided (i.e. CCTV integrated in 'Video recording Off' mode or 'Video recording On' mode), the AI engine (115) first checks, during on-boarding of the agent to his work desktop application, whether the text related image (i.e. a 'test text message') shown on the agent desktop and captured by the CCTV (108) is clear enough for a human to read and interpret. It also checks whether the text the AI (115) evaluated/identified from the image captured by the CCTV (108) is identical to the 'test text message' projected on the agent screen. The AI engine (115) performs these two evaluations once it gets the CCTV's (108) capture of the 'test text message' during the on-boarding time.
This 'test text message' shown during on-boarding cannot be composed using personal data and should be of human readable clarity when viewed by the agent sitting near the desktop. The 'test text message' is randomly generated. The clarity of the CCTV (108) captured image of this said 'test text message' is very much dependent on the position of the CCTV (108) camera w.r.t. the agent desktop screen and also on any 'zoom in' features used for capturing/recording by the CCTV (108). It is considered that during on-boarding time the CCTV (108) is in the maximum possible 'zoom in' state, so that the image of the 'test text message' is captured using the lowest possible focal length of the CCTV (108) camera without trading off the quality of the image. It is also considered that in some cases the 'zoom in' feature of the CCTV (108) camera can be set to maximum by the system using appropriate interface protocols running between the solution system and the CCTV (108) system. The reason for capturing the said 'test text message' under the best possible zoom in condition is to ensure that the CCTV (108) image, even in the best possible operating condition of the CCTV's image capturing state, does not pose a threat w.r.t. personal data protection.
In general, if the CCTV (108) is far away from the said screen, then the clarity of the CCTV (108) captured 'test text message' image will be low and the image may not be in human readable form. The said 'test text message' shown on the agent work desktop during the on-boarding check is sent by the smart surveillance application (122), so that when this text is captured by the CCTV (108) camera as a video image and sent back to the AI engine (115) as part of the on-boarding process, the AI engine (115) evaluates its human readability clarity (i.e. whether the text message captured by the CCTV (108) camera can be read and interpreted by humans) and also evaluates whether it matches the original test text message. Here, matching the original test text message means that the characters retrieved from the CCTV (108) image have identical character sequencing to that of the 'test text message' shown on the agent desktop, and also that the font, font size and color are the same. This matching is needed to ensure that the CCTV image really is a live captured image of the 'test text message'. This text message will be called 'evaluation text message' or 'test text message' interchangeably in this document. The CCTV system (108) will capture this 'evaluation text message' and send the related captured images to the AI engine (115) during on-boarding time.
If the 'evaluation text message' images received by the AI engine (115) from the CCTV (108) are such that the evaluation text message can be identified and classified as being of human readable clarity and matching the 'test text message' sent by the smart surveillance app (122), the AI engine (115) detects a data protection threat from the CCTV camera (108) and thus only allows the CCTV (108) to be integrated in an 'Off' state (i.e. no image capturing/recording allowed from the CCTV (108) after the agent has on-boarded to his work application). The AI engine (115) does this readability check on the 'evaluation text message' captured by the CCTV (108) so that it can evaluate/predict/estimate whether, when the CCTV (108) is in the 'recording On' state and real personal data is exposed on the agent screen, the CCTV could capture readable images of this personal data and cause a threat. If the AI engine (115) can read the said 'evaluation text message', or infer that it is of human readable clarity, and also ensure that the message captured by the CCTV is the one projected on the screen, then it can be correctly inferred that personal data text projected on the agent desktop screen, if captured using the said CCTV (108), could be read and interpreted correctly by a malicious attacker. In this system, a CCTV (108) threat is thus identified even before personal data is exposed, and the CCTV (108) recording state is proactively set to 'Off' if a threat is identified using the 'evaluation text message' projected on the screen. When a threat is detected for the CCTV (108), it has to be integrated into the system but operated in the 'Video Off' state so that it does not capture/record personal data related images into the CCTV (108) system during the agent's work time. The threat related to the CCTV is estimated during on-boarding time, and if a threat is found the CCTV is integrated in 'video off' mode; the video off mode can be set by the agent or by the system.
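For illustration, the integration-mode decision described above could be sketched as follows; recognize_text stands in for the AI engine's text-image recognition model and is an assumed interface, passed in as a callable.

```python
# Illustrative sketch of the CCTV integration-mode decision. recognize_text
# is an assumed stand-in for the AI engine's text recognition model and is
# expected to return (recognized_text, is_human_readable).
def decide_cctv_mode(frame, original_test_text: str, recognize_text) -> str:
    recognized, readable = recognize_text(frame)
    if readable and recognized == original_test_text:
        # CCTV can read on-screen text: a personal data threat, so integrate
        # with video recording kept 'Off'.
        return "video_recording_off"
    # Captures are unreadable: CCTV may keep recording 'On'.
    return "video_recording_on"
```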
In this 'recording Off' state of the CCTV (108) camera, it is considered that the smart surveillance application (122) will still check whether the CCTV system (108) is still integrated into the system. Such checks are based on heart beat messages sent from the CCTV system (108) to the smart surveillance application (122). This heart beat message is received by the smart surveillance application (122) and carries the video recording Off state as one of its parameters. If the heartbeat is missing, or the Video Off state is not present in the heart beat message, then an alert/warning will be sent to the agent by the smart surveillance application. If personal data is being projected on the screen and a missing heartbeat or an incorrect parameter such as 'video On' is detected, a desktop screen block will be triggered, assuming a malicious behavior such as the CCTV (108) having been disconnected from the system by a malicious act. The AI engine (115) will identify this event of CCTV disconnection, or of turning the CCTV to On mode, as a malicious act and will trigger the smart surveillance application (122) to apply the countermeasure, such as agent screen locking, when personal data is projected on the agent screen at this detection point. The CCTV interfacing application that sends the said heartbeat via the API to the smart surveillance application can have any appropriate design incorporated to transport the video recording state of the CCTV camera.
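A minimal watchdog sketch of this heartbeat check follows; the message fields and the timeout value are assumptions introduced for illustration.

```python
# Minimal watchdog sketch for the heartbeat check described above; the
# message fields and the timeout value are assumptions for illustration.
import time
from typing import Optional

HEARTBEAT_TIMEOUT_S = 30  # assumed heartbeat interval bound

def heartbeat_action(last_heartbeat: Optional[dict],
                     personal_data_on_screen: bool) -> str:
    stale = (last_heartbeat is None or
             time.time() - last_heartbeat["received_at"] > HEARTBEAT_TIMEOUT_S)
    wrong_state = (not stale and
                   last_heartbeat["recording_state"] != "off")
    if stale or wrong_state:
        # Missing heartbeat or 'video On' while data is shown: lock screen;
        # otherwise only warn the agent.
        return "lock_screen" if personal_data_on_screen else "warn_agent"
    return "ok"
```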
During on-boarding time of the agent, if the 'evaluation text message' images received from the CCTV (108) are such that the evaluation text message cannot be classified into the human readable clarity category (i.e. the characters cannot be identified and matched to the original test text message), the solution allows the CCTV (108) to be integrated in an 'On' state during the on-boarding process, where 'On' means the video recording On state. Because these images are not clear, it is considered that the CCTV (108) can perform its usual operation, as its image captures are not considered to pose a threat w.r.t. personal data projected on the screen. In this 'On' state, it is further considered that the smart surveillance application (122) will prevent the video images/data captured from the CCTV (108) from being sent to the AI engine (115) in a continuous manner. The reason is that the CCTV image need not be analyzed by the AI engine after on-boarding: after on-boarding, the system operates with the limited camera view only and additional CCTV footage is not needed. To realize this design goal, the smart surveillance application (122) may set a state at the CCTV system (108) to prevent images from being sent to the AI engine (115) when the recording state is set to 'On', or the interfacing application running in the CCTV (108) need not send images by default to the AI engine (115). Such evaluation of the CCTV (108) integration state (i.e. whether it should be integrated with recording 'On' or recording 'Off') is decided before allowing the on-boarding of the agent to his work desktop application. Even in this 'On' state, which does not imply a data protection threat, the CCTV (108) needs to be integrated into the system. The reason is that the smart surveillance application (122) and the AI engine (115) can use the CCTV (108) for their benefit if more information or a wider view is needed during on-boarding or at other times to safe proof the WFH environment. Also, if the CCTV (108) is maliciously moved to another position after on-boarding, then since it is integrated into the system, the AI engine can at any time check the CCTV (108) camera view by checking its recorded images for personal data captures. If the CCTV (108) has been maliciously shifted, then from the change in recorded image clarity (e.g. from unreadable to readable due to the shifting of the CCTV (108) camera) the AI system (115) can predict a malicious behavior and again impose the 'Off' recording state on the CCTV (108).
Legacy optical character recognition (OCR) has been identified to have some drawbacks when it comes to identifying moving images with text and images with text that is slightly blurred. Recently, AI based image recognition and object vision technologies have improved significantly, and these technologies use various neural network based deep learning algorithms to identify text in images with clarity similar to humans. This solution uses or considers such a text based image recognition algorithm running in the AI engine (115) that can achieve accurate text recognition similar to a human, and the solution uses this capability to prevent the threat coming from CCTV image recording. It is considered that the neural network based algorithm, or any AI image recognition algorithm used for text image recognition, is trained and built to identify characters of any font, size and color that can be recognized by the human eye. The said trained algorithm is even able to identify characters in text images that are blurred, as long as they can be identified by humans. To train this said neural net algorithm and finalize the algorithm structure, many images of texts comprising various text compositions and various levels of image clarity are used in the training mode. The exact mathematical structure of the said neural network algorithm, the training methodology etc. are outside the scope of the current invention; the current invention utilizes the existing art and leverages it for such CCTV related threat detection.
During on-boarding time such checks mentioned above are done w.r.t. the CCTV. Next, we briefly illustrate the sequence of events that happens when validating the CCTV image capture of the 'evaluation text message' and deciding on the CCTV integration mode. If a CCTV (108) is detected from the full camera view during agent on-boarding, then the smart surveillance app (122) will project some text images/messages of various fonts, sizes, colors and various compositions of characters onto the agent screen to assist the on-boarding check for CCTV integration (the images projected are usually of a font size that is generally used in any web application to show personal data). It is generally considered that a series of such different 'evaluation text messages' is shown during each on-boarding event, and that for each on-boarding event different sets of such series of text messages are shown. Such different sets of 'evaluation text messages' are shown to tighten the security, so that a malicious user cannot defeat the system by pre-recording a predicted 'evaluation text message' on the CCTV (108). These will be captured by the CCTV camera (108) and sent to the AI engine (115) via the interface (110). The AI engine (115) also has the capability to get the needed images from the environment (100, 101) in addition to its AI based features for detection and predictions. The AI engine (115) can get these video recordings from the CCTV system (108) using any real time streaming protocol, which could well be an application running the WebRTC (Web Real-Time Communication) stack or an application that uses the WebRTC stack via an Application Programming Interface (API).
If an agent opposes such integration into the system, the inventive solution can still operate with the rest of its rich features, but the agent has to use another room without the CCTV. Such CCTV cameras are shown as component (108) in the agent environment and component (106) in the supervisor environment in Fig. 1.
In addition to the CCTV (108, 106), this solution has many rules that detect personal data protection threats based on other objects/events in the environment. Some of the objects and events related to this data protection threat are highlighted in Fig. 1 as threat components (102) in the agent work from home environment and threat component (103) in the supervisor work from home environment.
Fig. 1 specifically highlights the system solution in a cloud based solution environment (116). Here it is considered that the AI engine (115) is hosted and running in the cloud environment (116). The said AI engine has many capabilities based on the trained machine learnt models running in the AI engine itself: it is able to estimate the threat from the CCTV w.r.t. personal data theft by estimating the CCTV's recognition of 'evaluation text' images; it is able to identify threat components/objects in the environment w.r.t. personal data protection; it has the ability to detect that an intent based call will arrive into the system; it is able to do text to speech using the personal data in the DB as the text to be converted to speech; it is able to produce the intent related call's data analytics to help in call handling; it has the speech recognition capability whereby it can detect personal data being uttered audibly at a moment the AI engine perceives as unrelated; it has the capability to predict whether a given intent based call having hit the system will reach a given agent; and it is able to detect whether the given intent related call will need personal data to be used during the call. Basically, the AI engine (115) has many trained machine learnt models composed of various different mathematical elements to estimate the various above mentioned events or objects to a very high degree of accuracy.
The occurrence of various events and the detection of objects are evaluated using real time inputs fed into the various machine learnt models running in the AI engine (115). The output related to the said input identifies the event or the object during system operation time.
The training of the AI engine (115) is done using real past operation data, or data randomly fed for training purposes as in the case of object recognition or image recognition. The training of the AI engine (115) to detect a text image and its clarity depends heavily on the large arbitrary text data used for training. Similarly, training the AI engine (115) to detect threat objects such as a camera, a person etc. with high precision is based heavily on the large training data used to identify these objects. The AI engine (115) needs much data during its operation and also for training the model; here 'much data for operation' implies that the needed output is decided based on various input fields. The data used for training purposes to develop the image recognition model or machine learning model in the AI engine (115) need not be stored in the DB (118), unless it is used to further improve the model through another learning process happening at system operation/runtime. It can generally be considered that the AI model has already reached maturity in its ability to predict an event/object accurately and need not be further trained using historical data obtained during runtime or operation time.
It is assumed that operation related real time data, which act as inputs for the AI models/algorithms, are stored in the DB (118). The said data may preferably be stored in the DB (118) for audit purposes. Furthermore, if the AI engine (115) detects a threat in the environment, then the related video from the moment the threat was identified is recorded and stored in the DB; the AI engine (115) performs this recording function as well. Here the video related to the threat event is stored in an appropriate video recording format, in a file with suitable encoding and metadata for playback. This said DB (118) is interconnected to the AI engine (115) using the interface (117). The DB (118) also holds data that the AI engine (115) uses to decide on: the suitable transfer agent ID to handle the call if there is a data protection threat in the call environment; agent identification for call handling in general; past data protection threat events related to a given agent's WFH environment; the current skill of the agent; the integrity points of the agent; the agents related to a given intent; and the skill range related to a given intent. The DB (118) also has various call related data such as time in call, time in queue to get an available agent, call hold time, personal data usage within the call etc. These data are generally used to make the various call related estimations needed in this solution and also to detect malicious acts by an agent. One skilled in the art should be able to infer that the system can store many types of data in the DB (118), where the data can be used for future data mining, to produce various audit reports, to enable the supervisor to see the threat history of the agent w.r.t. personal data protection, to enable various analytics, and to make many real time decisions based on system states captured into data fields and stored in the said DB.
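As an illustration only, record shapes of the kind the DB (118) might hold could look as follows; all field names are assumptions chosen to mirror the items listed above, not a schema defined by the invention.

```python
# Illustrative record shapes for the kinds of data the DB (118) holds; all
# field names are assumptions chosen to mirror the items listed above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentRecord:
    agent_id: str
    current_skill: int
    integrity_points: int
    threat_event_videos: List[str] = field(default_factory=list)  # file refs

@dataclass
class CallRecord:
    session_id: str
    intent: str
    time_in_call_s: float
    time_in_queue_s: float
    hold_time_s: float
    personal_data_used: bool
```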
As highlighted before, some data in the DB (118) are video captures related to a threat event. These video recordings can be used to reveal the scene during the threat event to a supervisor (105) via the customized supervisor desktop application (126).
The core point of the solution/invention is that it uses the AI engine (115) to achieve personal data protection in a WFH environment, where the chance of revealing personal data to anyone with a malicious intention of retrieving the personal data in any form is avoided or prevented. Additionally, the said AI engine (115) ensures that, in case of a data threat in an environment, the call which needs the personal data is still carried out in the system at a safer location without trading off the QoS tied to the call. To achieve this, the AI engine (115) running as part of the solution incorporates many methods. The AI engine (115) is involved in authentication using agent face detection, external CCTV threat detection, agent WFH environment authentication and agent liveliness detection during on-boarding of the agent to his work desktop application. The AI engine (115) also safe proofs the environment to check for any malicious act that can be a threat to personal data, both during on-boarding and during the agent's work operation time; if such an act is detected, appropriate countermeasures are activated on the system. The AI engine (115) ensures that on-boarding checks are performed on the agent on every working day, and also after a violation and re-entry into the system. If the AI engine (115) detects that personal data is being viewed at any unneeded time by an agent/supervisor (i.e. outside the related call or during an unrelated call), it blocks such viewing on the screen by coordinating with the smart surveillance application (122). Basically, whenever customer personal data is requested to be viewed by an agent when there is a high data protection threat in the said agent's WFH environment, or when there is no call from the said customer occurring in the agent's WFH environment, or totally unrelated to a call happening in the agent environment (i.e. the ongoing call is from another customer, not the one whose personal data is requested), the AI engine is able to detect this malicious or inappropriate act and prevent the smart surveillance application from projecting such personal data on the agent work desktop screen in the said agent's WFH environment. Additionally, the said AI engine (115) proactively passes rules to the smart surveillance application (122) as to whether personal data can be shown or not w.r.t. a given customer, so that even if an agent requests the data, the requested personal data will not be shown on the screen when this customer's data is prohibited from being shown to the said agent. An unneeded time herein refers to a time when personal data is requested to be viewed by the agent when such personal data is not needed. These said unneeded times are illustrated by the following examples: customer A is making a call, and the agent attending to customer A, who needs customer A's personal data, is instead requesting the system to view customer B's personal data; there is no call in a given agent's WFH environment, yet the agent requests the system to view the personal data of some customer; or the agent is in a call with customer A and does not need personal data, yet requests the personal data from the system.
In this last case, it is considered that the AI engine (115) can identify, from the intent, the dialed number or the route the call took, that personal data involvement is not needed to handle the call. The AI engine (115) evaluates, for a given agent, whether personal data viewing for a given customer is allowed or not. This state is continuously updated based on system state changes that impact the data viewing state; one such system state change is that a call with a particular intent is finalized to reach a given agent. This information is conveyed to the smart surveillance application (122) whenever the personal data screen projection allowance state changes for a customer tied to the agent (e.g. an agent may only be allowed to serve a certain customer pool). If the personal data is projected before this information, such as the rules for viewing a given customer's data, arrives at the smart surveillance application, then the AI engine (115) ensures this data is blocked on the agent screen. Suppose the AI engine's (115) said information on the rules of personal data projection is already present in the smart surveillance application (122); then the smart surveillance application (122) can readily advise the customized agent desktop application (125) as to whether personal data projection for a given customer call is allowed, even before the request for the personal data is triggered by the agent.
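The 'unneeded time' rules described above can, purely for illustration, be sketched as a single permission check; the parameter names are assumptions.

```python
# Sketch of the 'unneeded time' rules evaluated before personal data may be
# projected; parameter names are illustrative assumptions.
from typing import Optional

def may_view_personal_data(agent_in_call: bool,
                           call_customer_id: Optional[str],
                           requested_customer_id: str,
                           intent_needs_personal_data: bool,
                           threat_in_agent_env: bool) -> bool:
    if threat_in_agent_env:
        return False                      # high data protection threat
    if not agent_in_call:
        return False                      # no active call at all
    if call_customer_id != requested_customer_id:
        return False                      # data unrelated to the caller
    return intent_needs_personal_data     # intent must actually need it
```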
If the AI engine (115) detects a very high threat, such as a camera being used to capture the screen while personal data is projected on it, it immediately blocks the agent screen by communicating with the smart surveillance application (122). If the AI engine (115) detects that personal data is shown on screen while a call is on and the agent is not at his desk, the screen is immediately blocked, as the said AI engine considers this a high threat environment. If the AI engine (115) detects any malicious activity by the agent, such as copying/recording, it imposes suitable countermeasures (clipboard copy is prevented, and on malicious recording the screen is blocked); the clipboard copy/paste deactivation is implemented by the smart surveillance application (122). The said AI engine (115) monitors whether the agent is trying to record the personal data on the agent work desktop and, if this is detected, blocks the agent desktop screen. The said AI engine (115) also has the ability to check whether the call hold is unusually long while personal data is projected on screen; if such is detected, the screen is locked. If the AI engine detects one or more faces near the agent work desktop screen other than the agent's while personal data is projected on screen, again the screen is blocked/locked. Every such screen block/lock state requires the proper re-onboarding procedure for the related agent. As mentioned, the screen blocking/locking happens only for such high threat events.
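A sketch of how the high threat events listed above could map to the screen lock countermeasure follows; the event labels are invented for illustration.

```python
# Sketch mapping the high threat events listed above to the screen lock
# countermeasure; the event labels are invented for illustration.
HIGH_THREAT_EVENTS = {
    "camera_capturing_screen",
    "agent_away_with_data_on_screen",
    "malicious_recording_detected",
    "unusually_long_hold_with_data_on_screen",
    "extra_face_near_screen",
}

def countermeasure_for(event: str, personal_data_on_screen: bool) -> str:
    if event == "clipboard_copy_attempt":
        return "disable_clipboard"
    if event in HIGH_THREAT_EVENTS and personal_data_on_screen:
        return "lock_screen"  # requires full re-onboarding afterwards
    return "no_action"
```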
Upon detecting a data protection threat when it estimates the call has not yet landed in the agent environment, the AI engine (115) sends a warning to the agent to clear the data protection threat environment, rather than overreacting and imposing severe countermeasures that impact the call (i.e. rather than an unnecessary call diversion, a warning is sent in order to make the preferred agent environment safe). To whom the warning has to be sent is generally evaluated based on the highest skill within the available pool of agents, provided there is no call waiting queue to get an available agent for the given intent when the evaluation is done. If there is a call waiting queue for the related intent, then the agent at which the call will land is estimated/predicted when evaluating the agent to whom the said warning will be sent. The AI engine (115) transfers the call to another secured location if a high threat is detected in the current environment where the call is happening. The AI engine (115) proactively reduces the skill of an agent if a data protection threat is detected, in order to safeguard the call and yet route the call to a safe environment.
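The choice of warning recipient could, for illustration, be sketched as follows; predict_landing_agent is an assumed predictor supplied by the AI engine, passed in as a callable.

```python
# Sketch of choosing the warning recipient; predict_landing_agent is an
# assumed predictor supplied by the AI engine.
def warning_target(available_agents: list,
                   queue_nonempty: bool,
                   intent: str,
                   predict_landing_agent):
    if queue_nonempty:
        # A queue exists: warn the agent predicted to receive the call.
        return predict_landing_agent(intent)
    # No queue: warn the highest skilled available agent.
    return max(available_agents, key=lambda a: a["skill"])
```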
Based on need, the AI engine (115) engages an AI assistant that can identify the related personal data to be used for the call and sends this information via the voice channel to the agent using the AI's text to speech feature. Also based on need (such as a high threat in a given agent environment requiring another agent of the same related skill when no such agent is available), the AI engine (115) engages a less skilled agent, supported with AI based analytics, to handle a given intent based call if there is no other suitably skilled agent to handle the call.
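One possible sketch of the AI assistant 'whisper' follows; the three helper callables are assumed interfaces, not defined by the invention.

```python
# Sketch of the AI assistant 'whisper': personal data is converted to speech
# and played on an agent-only audio leg instead of being shown on screen.
# The three helper callables are assumed interfaces, not defined herein.
def whisper_personal_data(session_id: str, customer_id: str,
                          fetch_personal_data, text_to_speech,
                          play_on_agent_leg) -> None:
    text = fetch_personal_data(customer_id)   # from DB (118)/(123)
    audio = text_to_speech(text)              # AI engine TTS capability
    play_on_agent_leg(session_id, audio)      # not audible to the customer
```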
The AI engine (115) has the ability to track the agent's voice/speech content, whereby it detects whether personal data is being revealed by the agent using the speech/voice media. Once the AI engine (115) detects this (the agent has started voicing out personal data), it also checks whether the agent is allowed to voice out the personal data. The agent is allowed to utter the personal data audibly at a given moment only when the agent is in a call with a customer who is requesting the personal data and this information was not made known to the calling customer while traversing the IVR call. If the agent has started voicing out personal data in a scenario that does not meet the above said approved condition, again the personal data on the screen will be blocked. To achieve this, the AI engine uses its natural language processing capability; the invention utilizes well developed AI natural language processing to detect personal data information being conveyed by the agent through the speech media at an unneeded moment. The AI engine (115) also detects when the agent is handling a call that needs personal data without retrieving the personal data from the back end and projecting it on the screen. The AI engine (115) tracks this as a malicious event, because the agent might have copied the personal data onto paper etc. and be using it during the call. The AI engine (115) also has the ability to detect whether the customer is querying for personal data related information, and the capability to check whether personal data was revealed using the IVR text to speech (TTS) modules.
All the above functionalities are done by the AI engine (115). To achieve the many counteractions upon detecting a threat and when on-boarding/re-onboarding, the AI engine (115) interfaces mainly with the smart surveillance application (122). This component (122) interfaces with the AI engine (115) in order to get the commands from the AI engine (115) via the interface (119). The information sent via the interface (119) indicates, for a given call/session ID: the appropriate agent ID (this could even mean a transfer of the call); which agent ID is additionally added to a call session; the skill related to the chosen agent ID; skill modified events and the modified skill value for an agent ID; the AI assistant agent ID (i.e. virtual agent's ID) that needs to be added to the session; the on-boarding approval/disapproval command; extreme countermeasures such as block screen; any other countermeasures that impact the display of personal data; and the state of whether personal data can be shown on screen or not. This information is sent in a continuous manner whenever a new evaluation is done by the AI engine (115) that has to be conveyed appropriately to other modules running in the system. Basically, information is sent via this interface (119) only when there is a change or action done by the AI engine (115) which needs to be conveyed to other modules to implement system wide action(s). Whether the said information is sent individually for one call or collectively for multiple calls is outside the scope of this invention; one skilled in the art will know that the said information can be sent in many ways without deviating from the object and scope of the invention. Also, as mentioned briefly, via the said interface (119) the AI engine (115) will indicate, for the given agent ID, what countermeasure to take, such as block the agent screen or reduce the personal data view on the screen of the customized agent desktop application (125) or the customized supervisor desktop application (126), or send a warning to the agent to improve the personal data threat state in the WFH environment. The warning could be a short message service (SMS) to the agent's phone, or, if the warning is a pop up on the customized agent desktop, it has to be sent to the customized agent desktop (125) by the smart surveillance application (122). The screen lock or block event received from the AI engine (115) is handled by the smart surveillance application (122).
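One possible shape for a trigger carried over interface (119) is sketched below; the JSON field names are illustrative and not defined by the invention.

```python
# One possible shape for a trigger carried over interface (119); the JSON
# field names are illustrative and not defined by the invention.
import json

trigger = {
    "session_id": "call-0042",
    "agent_id": "agent-17",
    "action": "block_screen",         # or: transfer_call, add_ai_assistant, ...
    "modified_skill": 10,             # present only on skill change events
    "personal_data_viewable": False,  # current data projection allowance
}
print(json.dumps(trigger))
```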
The smart surveillance application (122) will preferably have communication interfaces with other applications, such as the customized agent desktop application (125) via interface (131) and the customized supervisor desktop application (126) via interface (130). The applications (125) and (126) will preferably communicate with the agent skill based routing module (127) via the interfaces (129) and (128) respectively. As highlighted, the smart surveillance application (122) also interacts with the customized supervisor desktop application (126). This interaction is mainly to highlight the various threat events in the agent environment and the agent integrity points that will help the supervisor make decisions on re-onboarding the agent after the agent's screen has been locked, and also to convey events such as screen blocked. As mentioned, the screen lock event can also be highlighted to application (126) by the smart surveillance application (122) via the said interface (130). When the customized supervisor desktop application (126) has received input from the supervisor giving re-onboarding agreement for a given agent ID that was previously prevented access, it informs the smart surveillance application (122) about the supervisor agreement, and the application (122) can unblock the screen for this agent ID once the agent inserts the correct unlock code. The unlock code is given to the agent by the smart surveillance application (122) once it gets the on-boarding agreement from the AI engine (115) and the supervisor desktop application (126). The application (122) communicates with the customized agent desktop application (125) mainly to highlight the countermeasures that have to be placed on the agent desktop work application, such as reduced screen size or reduced font size, as well as the call transfer information (e.g. the agent ID to which the call has to be transferred) where the said call transfer is induced by the data protection threat in the related environment, and the involvement of an additional agent within the call session to convey the personal data to the agent who is handling the call. The information given to application (125) by application (122) in order to include another agent in the call session is the additional agent ID that will support the call when the current agent environment has a high data protection threat and hence cannot view the personal data. These said interfaces carry real time triggers; however, the applications (125, 126) can get more data related to the information in an interface trigger from the DB (123).
The routing application (127) mainly performs functions like engaging a given additional agent ID/agent into a currently active call session based on the additional agent ID and the said session ID information given by its interfacing module (125); the said interfacing module gets the agent ID and call session ID information from the AI engine (115). That is, another agent is added into the call session so that personal data information can be whispered to the agent who is communicating with the customer. Additionally, the routing application (127) can use the agent ID and session ID given by application (125) to transfer the call from the current agent to the given agent, all within the same call session. The interfacing module (125) can give such an agent ID and related call session ID via interface (129) to the skill based routing application (127). The application (127) can also obtain the updated skill related to a given agent ID via the interface (129). The application (127) can use this given skill to handle its own agent identification mechanism for other operating scenarios, without contradicting the AI engine's (115) decision on call routing. For the WFH scenario, application (127) has to follow agent ID and skill based agent ID selections that comply with the AI engine's selections: whenever an agent is selected by the AI engine (115), that agent ID has to be used by application (127); if an agent ID is not selected, then the application (127) can use a mechanism similar to the AI engine's to determine the agent ID for the given intent. The skill value can be stored in the DB (118) and also in the DB (123). The DB (123) is mainly used to provide the data for the applications (125, 126 and 127). This DB (123) can hold information on the skill, call session ID, intent, agent integrity points, various counteractions for security enhancements, the video files that captured the data protection threats that occurred in the agent's WFH environment etc.
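A sketch of the routing rule follows, honouring the AI engine's selection when one exists and otherwise falling back to skill matching; the per-intent skill floors are an assumption introduced for illustration.

```python
# Sketch of the routing module (127) honouring the AI engine's selection when
# one exists, else falling back to its own skill matching; skill floors per
# intent are an assumption for illustration.
from typing import Dict, Optional

def route_call(intent: str,
               ai_selected_agent_id: Optional[str],
               agent_skills: Dict[str, int],
               skill_floor_by_intent: Dict[str, int]) -> Optional[str]:
    if ai_selected_agent_id is not None:
        return ai_selected_agent_id  # the AI engine's decision prevails
    floor = skill_floor_by_intent.get(intent, 0)
    eligible = {a: s for a, s in agent_skills.items() if s >= floor}
    return max(eligible, key=eligible.get) if eligible else None
```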
The various applications such as 125, 126 and 127 are shown implemented and activated in a specific manner using the defined interfaces in Fig. 1. It is important to understand that the solution can also be implemented using other software functional modules, appropriate communication interface designs such as APIs, and DB designs without deviating from the scope and ambit of the invention. What is illustrated in Fig. 1 is one of the preferable implementations of the current invention.
The previous embodiment highlighted the main solution concept and its methods operating in one of the most suitable scenarios for this solution. In the next embodiment (i.e. embodiment 3) we highlight the solution operating in another preferred scenario, illustrated with reference to Fig. 2. Here the AI application (113a) and its associated DB (115a) are installed on the customer premises and not in the cloud services environment. Such deployments may be preferred for safety reasons, or when the solution uses its own tools rather than cloud tools for the AI engine. All other functionalities are similar to the previous embodiment (i.e. embodiment 2). The AI engine (113a) receives environment data (video captures, images) over the interface(s) (111a, 118a). It is additionally considered that some of the AI functionality and the model used for object recognition can run in the browser of the agent desktop (107a). In that case, additional video is sent to the AI engine (113a) only if a malicious object (camera, face etc.) is detected by the object recognition code running in the browser of the agent desktop. When the code running in the browser detects a threat due to a malicious object, further video captures from that moment onward are sent to the AI engine (113a) for a configurable period. Until then, not every scene from the WFH environment area is sent to the AI engine (113a). As mentioned before, the sending of video captures to the AI engine can be done by a real time video transport application that could use the WebRTC protocol stack. This kind of solution improves network bandwidth efficiency, since additional video information is sent to the AI engine only after a data protection threat has been identified at the browser level. It is considered that even when the AI engine (113a) is hosted in the cloud, this browser-level detection of data protection threat elements in the WFH environment, and the subsequent video transfer from the agent desktop only upon such a detection, can still be implemented/applicable.
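By way of illustration only, the gating logic described above might look like the following sketch. It is written in Python for readability (a real deployment would run the detector as object recognition code in the agent's browser, e.g. a JavaScript model, with frames shipped over a WebRTC based transport); the function names, label set and streaming window are assumptions, not part of the specification.

```python
# Illustrative sketch of browser-level gating: frames are streamed to the
# AI engine only for a configured window after a local detector flags a
# malicious object. All names here are assumed for illustration.
import time

MALICIOUS_LABELS = {"camera", "phone", "extra_face"}  # assumed label set
STREAM_WINDOW_SECS = 30.0                             # the configurable period

def gate_frames(frame_source, local_detector, send_to_ai_engine):
    """Forward frames to the AI engine only while a local threat is 'hot'."""
    stream_until = 0.0
    for frame in frame_source:          # yields frames from the webcam
        labels = local_detector(frame)  # lightweight on-device model, returns a set of labels
        if labels & MALICIOUS_LABELS:
            # threat seen locally: open (or extend) the streaming window
            stream_until = time.time() + STREAM_WINDOW_SECS
        if time.time() < stream_until:
            send_to_ai_engine(frame)    # full analysis happens remotely
        # otherwise the frame is dropped, saving uplink bandwidth
```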
In the next embodiment (i.e. the 4th embodiment), the rules used by the AI engine (115) in Fig. 1 to decide on agent on-boarding are revealed. These said rules are also used when the agent re-onboards the system after being locked out due to screen locking by the system. These rules are illustrated as one preferred embodiment of the present invention.
Rule 1: The work area should be an enclosed area. Only a single door is allowed in the work area, and that single door should be in the view of the AI engine by means of the desktop's inbuilt webcam, which usually operates during agent work hours (and is, barring exceptional cases, the only camera operative during agent work hours). The door has to be within the restricted view of the desktop's inbuilt camera (this built-in camera view is usually not a full 360-degree view), so that door openings can be continuously monitored by the AI engine to check for anyone entering the work space after the initial safe proofing. The door being in the restricted camera view helps the AI engine detect anyone opening the door in disguise and entering through it. When evaluating rule 1, the AI engine will use the images from the built-in desktop webcam to check the door's visibility.
Rule 2: The said door of the agent's work space should be locked. If it is locked, a break-in takes longer, giving more time before the intruder can get to the data and more time for the AI countermeasure actions to prevent the threat.
Rule 3: Windows in the work area should be locked at all times, and if a given window is to be kept open during the agent's work time, then that window has to be in the agent desktop's inbuilt web camera view. If the agent wants other windows opened during work time, then those windows that are not in the inbuilt webcam view of the desktop have to be monitored using additional webcams connected to the agent desktop and to the AI engine. Whether these additional cameras are connected to the AI system to view the window is also checked as part of this rule. The additional video view from the camera monitoring the window is obtained by the AI engine only when a call that needs highly confidential data is about to land in the agent's home work environment.
Rule 4: Only the agent is allowed to stay in the bounded work area; no other person is allowed to stay when work starts. This is checked so that, once operation starts, no other person can peep at the personal data shown on screen during a call. Because only the limited camera view is available after work starts, the full room view is not available and a malicious person peeping could not be detected. This is therefore checked at the very beginning.
Rule 5: Before agent desktop application usage permission is granted, another check point is that the room lights are 'On'. This is to ensure no malicious person is in the room when work starts. Due to variations of the luminance level across the room, it is possible that only the agent's face can be detected in a dark environment but not an intruder's; basically, an additional face may go undetected in a dark environment. Thus this rule is essential.
Rule 6: All additional phone cameras and dedicated cameras have to be kept aside by the agent at a location away from the work location. These are considered threats because the personal data on the agent screen can be recorded or photographed using such external tools. This minimizes the threat of the agent suddenly becoming an attacker and taking a picture of personal data, or of an intruder using these camera devices to do so. If these camera devices are far away, the agent has to get up from his seat to fetch them; during that time, if personal data is projected on the screen, the AI engine can immediately block the screen upon seeing that the agent is not at his seat. If these other camera devices are kept near the agent, he can remain seated and use them to maliciously copy the personal data without this being captured by the inbuilt webcam. The agent's personal phone is allowed near the agent desktop, but not in a malicious manner such as trying to take a picture. The back camera of the agent's regular hand-phone has to be blocked by a light blocking material (this could be a tape). This condition of the personal phone is checked by the AI engine before work starts. If the agent's frequently used hand-phone does not have the back camera blocked, then access to the agent desktop application is not given. These points are checked as part of this rule.
Rule 7: The agent has to be at his work desk facing the desktop as part of this rule. This is needed because, without this rule, the camera view of the environment is restricted for the AI engine immediately after the user is allowed to access the work application, and thus malicious activity could not be tracked. The agent might let an intruder in immediately after application access is granted, and this could not be tracked if the agent is not in the camera view during on-boarding time.
Rule 8: If an already available CCTV is kept On to help get the full room view during the initial data threat evaluation of the work room, and the AI engine detects that this CCTV's captures can clearly view the information on the agent screen (e.g. numbers, letters, special characters), then this CCTV should be set to the Off state before the agent is allowed to gain access to the agent desktop application. This is the core point of the rule. Before allowing access, it is further checked by the AI engine that the CCTV is still connected to the AI system; this is needed to detect anyone turning the CCTV On during work time. Any private CCTV in the agent's work room has to be integrated into the AI system. Furthermore, if the CCTV is considered not to be a threat during the approval process for the work application, the CCTV need not be turned Off, but its images need not be fed to the AI system. Even if there is no data threat from the CCTV, the AI engine will still want to keep the CCTV integrated so that it can be switched On to obtain more information at another operative point of the system.
This CCTV is assumed to be part of the agent's private property. However, it can pose a threat during work time because it may be able to view the personal data on the agent desktop. As mentioned previously, to approve work start, the full room is exposed to the AI system either using the existing CCTV, additional webcams placed in the room (preferably ones that have a 360-degree view), or a swift 360-degree video demo by the agent using his phone camera. Such full room view checks are mainly done only during on-boarding for the day, re-onboarding after an incident, or when the AI system needs the full room view to do a complete check before a call lands, if it detects that the particular agent environment cannot be trusted. A combined check over these rules is sketched below.
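As a non-limiting illustration, the on-boarding rules (Rules 1 to 8) could be combined into a single gate before the agent desktop application is unlocked, as in the following sketch. Each predicate stands in for an AI engine check described above; all names and the environment dictionary are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch: combine the on-boarding rule checks into one gate.
# Keys of `env` are assumed outputs of the AI engine's image/audio checks.
ONBOARDING_RULES = [
    ("single locked door in webcam view",
     lambda env: env["door_in_view"] and env["door_locked"]),          # Rules 1, 2
    ("open windows monitored by a camera",
     lambda env: env["unmonitored_open_windows"] == 0),                # Rule 3
    ("agent alone in the work area",
     lambda env: env["person_count"] == 1),                            # Rule 4
    ("room lights on",
     lambda env: env["lights_on"]),                                    # Rule 5
    ("spare cameras away, phone back camera taped",
     lambda env: env["loose_cameras"] == 0
                 and env["phone_back_camera_blocked"]),                # Rule 6
    ("agent seated facing the desktop",
     lambda env: env["agent_at_desk"]),                                # Rule 7
    ("CCTV integrated but not recording the screen",
     lambda env: env["cctv_integrated"]
                 and not env["cctv_threat_recording"]),                # Rule 8
]

def may_onboard(env: dict) -> tuple[bool, list[str]]:
    """Return (allowed, list of failed rule descriptions)."""
    failed = [name for name, check in ONBOARDING_RULES if not check(env)]
    return (not failed, failed)
```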
The subsequent threat rules are incorporated to detect deviations from the initial approval condition that can further create a data protection threat based on environment changes in the agent's work environment after the agent has gained access to his agent desktop application. These subsequent threat rules are the identification of certain objects/events in the work environment that can pose a threat to data protection following the initial approval state of the system. The following checks are used by the AI engine to detect threats using the minimal camera view, such as the inbuilt webcam, after safe on-boarding of the agent to his work application: whether any new person enters through the main door or a window (face detection in the door and window frame); whether the door is unlocked; whether a new human sound, other than the agent's, is detected in the work environment; whether any human shadow is seen on the wall; whether the agent switches the light off unnecessarily, i.e. the light is switched off while the luminance level is high, so that the luminance level in the room subsequently becomes low; and whether the agent moves from the work desk without turning the agent work mode to 'Aux', yet without leaving the room. The CCTV in the room being switched On by the user when the system wants it in video recording Off mode is also detected. Whether agent phone cameras are in the view of the AI engine without the back cameras sealed off is also detected, as is whether an additional camera is in the view of the AI system. If any of these threat points is present in the environment after the agent has started using the agent desktop application, the net threat level is evaluated continuously by the AI system by simply assigning a threat level to each individual threat event and adding them linearly to obtain the combined threat value. The said combined threat level is not limited to the environment threat alone; it can receive contributions from any data protection threat related to this invention.
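The linear aggregation described above can be illustrated with the following sketch; the event names and per-event scores are illustrative assumptions, not values from the specification.

```python
# Hedged sketch of the linear threat aggregation: each detected runtime
# event carries an assumed score, and the combined threat is their sum.
THREAT_SCORES = {
    "new_face_at_door_or_window": 40,
    "door_unlocked": 20,
    "unknown_human_voice": 30,
    "human_shadow_on_wall": 25,
    "light_switched_off_unnecessarily": 15,
    "agent_left_desk_without_aux": 20,
    "cctv_switched_on": 35,
    "unsealed_phone_camera_in_view": 30,
    "additional_camera_in_view": 30,
}

def combined_threat(detected_events: set[str]) -> int:
    """Linear sum of per-event threat scores for one WFH environment."""
    return sum(THREAT_SCORES[e] for e in detected_events if e in THREAT_SCORES)
```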
In the next embodiment of the present invention (the 5th embodiment), the prediction used by the AI engine of Fig. 1, as to whether a call that has hit the system with a given intention/intent will land at an agent serving the associated intent, is highlighted. The arrival of the call at an agent serving a particular intent in a threat environment (given that an IVR call for a given intent has hit the system but has not yet been routed to an agent, or that a call for an agent of a particular intent has been put on hold and in a queue from the beginning of the call until being served) is predicted using an estimate of the total number of agents available at the future time when the said call reaches an agent of the intended intent. The system evaluates the additional time for a call to reach the agent of a given intent after hitting the system (e.g. for an IVR call to reach the agent).
This is done by the AI engine based on past data. For this said additional time, the system evaluates how many agents within the given skill range (serving the intent) will be freed from their current active calls to attend the incoming call, and how many agents related to the intent that are currently free will remain free to attend it. The estimation of the future available agents for the given intent is based on two factors. One is the number of calls currently in the queue waiting for an agent of the given intent, or the average queue size for that intent. The second is whether the average remaining time for an agent of the given intent to finish the current call is less than the average waiting time for the call to reach an agent. These averages are evaluated by the AI engine. Based on such logic, the number of free agents tied to a given intent can be evaluated for the future time at which the incoming call needs an agent of the particular intent.
If many agents are available even after considering the above mentioned depleting factors, and the available agents' skills are higher than that of the agent in scope, then the probability of the call landing at the agent is low (because with many agents, random picking reduces the probability). If the number of free agents is low when the call is to hit an agent, then the probability of the call landing at the agent is considered high. Any missing details of this are further explained in this embodiment.
If the average number of calls in the queue for the given intent is X, then X agents are needed to handle these queued calls. Let K be the original number of available agents when the intent call hits the system. Let Y be the number of agents currently engaged; if the freeing of these agents is determined to be true, based on the average remaining call handling time for the given intent being less than the average remaining time for the call to arrive at an agent of that intent, then the total number of agents freed is considered to be Y. The total number of available agents for the given intent is then Y + K - X. In cases where the currently engaged agents are evaluated as not freeing up by the time the call arrives at an agent, the total number of available agents is considered to be K - X, because Y is taken as 0. There are various cases where Y + K - X will be small or 0, and also where K - X will be small or 0. Based on this value of available agents for the intent, it is predicted whether the call will land at a particular agent.
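A worked sketch of this availability estimate follows; the function and parameter names are illustrative, but the arithmetic is the Y + K - X computation described above.

```python
# K agents are free when the intent call hits the system, X queued calls
# will consume X of them, and the Y busy agents count as freed only if the
# average remaining handling time is below the average time for the call
# to reach an agent.
def predicted_available_agents(K: int, X: int, Y: int,
                               avg_remaining_handle_time: float,
                               avg_time_until_call_lands: float) -> int:
    freed = Y if avg_remaining_handle_time < avg_time_until_call_lands else 0
    return max(0, freed + K - X)   # Y + K - X, floored at zero

# E.g. K=5 free agents, X=3 queued calls, Y=4 busy agents that free up in
# time: 4 + 5 - 3 = 6 expected available agents. A low value means a high
# probability that the call lands on any one particular agent.
```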
In yet another embodiment (i.e. embodiment 6) of the present invention, the ability of the AI engine to identify the uttering of personal data by a malicious agent is described. The said AI engine is the one illustrated in Fig. 1. This embodiment also describes the methods used to identify and prevent a malicious agent trying to copy/record personal data.
In addition to the environmental threat factors for personal data, the AI engine is able to identify malicious events w.r.t. the personal data threat that relate to a malicious agent as well. The AI engine identifies events such as the agent uttering the personal data audibly at an unrelated moment, and immediately ensures the agent screen is locked by communicating with the smart surveillance application. Using a similar detection method, the AI engine is also able to identify some other person in the WFH environment, other than the agent, uttering the personal data audibly. Again the AI engine will lock the agent desktop screen as a countermeasure for this said malicious event.
The AI engine's detection capability is such that it can identify personal data being voiced out, detecting it and imposing countermeasures once at least the first x% of it has been voiced audibly. Here the said x% is configurable; to avoid faulty AI detection of the personal data being voiced out, the said x% should preferably be 60% (of the personal data) or higher. This same guidance applies to all the mentions of x% below. In order to block the agent screen quickly, the AI engine has a means to identify that personal data is being voiced out even when only a fraction of it has been voiced. Similarly, the AI engine is able to identify when personal data is voiced out character by character, i.e. each of the digits, alphabetic characters and special characters of the personal data is voiced out individually by speech in the correct order, and impose a countermeasure such as blocking the screen. Additionally, the AI engine is able to identify when the user voices out the complete personal data by individually voicing out the digits, characters and special characters in a scrambled order, and impose the countermeasure. Similarly, the AI engine is able to identify when at least x% of the characters of the personal data are voiced out individually in the correct order, or at least x% in a scrambled order, and in each case impose the countermeasure such as blocking the screen. Which of these personal data uttering detection mechanisms the AI engine uses can be configured in the system.
The speech to text recognition and matching to the personal data in the DB is done by the AI engine. When personal data detection is done at x%, the AI engine first identifies the text corresponding to the x% of the personal data uttered (either voiced out or uttered character by character). It then finds a partial match of it to any personal data field in the DB. This said matching is matching in the correct order or in scrambled order, as per the configured AI detection mechanism. It then checks whether the identified speech-to-text output matches at least x% of any given personal data field in the DB. For every new character identified by the speech-to-text mechanism of the AI engine, the above mechanism is run to check whether the x% match has been reached.
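The x% matching check could, for illustration, be sketched as follows; the helper functions are assumptions (a real implementation would match incrementally against the DB personal data fields as the speech to text output grows).

```python
# Hedged sketch of the x% matching check: the running speech-to-text output
# is compared against each personal data field, in order or scrambled, and a
# countermeasure fires once the matched fraction reaches the threshold.
from collections import Counter

def ordered_match_fraction(uttered: str, field: str) -> float:
    """Fraction of `field` matched as a prefix, character order preserved."""
    n = 0
    while n < min(len(uttered), len(field)) and uttered[n] == field[n]:
        n += 1
    return n / len(field)

def scrambled_match_fraction(uttered: str, field: str) -> float:
    """Fraction of `field`'s characters found in `uttered`, any order."""
    overlap = Counter(uttered) & Counter(field)   # multiset intersection
    return sum(overlap.values()) / len(field)

def should_block(uttered: str, db_fields: list[str],
                 x_percent: float = 60.0, scrambled: bool = False) -> bool:
    match = scrambled_match_fraction if scrambled else ordered_match_fraction
    return any(match(uttered, f) * 100 >= x_percent for f in db_fields)
```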
Basically, the AI engine has the ability to identify personal data when it is spelt or voiced out, and is able to capture this within x% of the full content. In addition to the malicious agent behaviors already mentioned, further agent malicious behavior identification done by the AI engine includes detecting the event that the user/agent is planning to get personal data from the web application when there is no call, or during a call that does not need personal data, and preventing the personal data from being projected on screen for such probably malicious events. Basically, when the AI engine detects that the personal data of a customer is requested by an agent while the said customer is not on a call with the said agent in his WFH environment, such personal data is prohibited from being shown on the said agent's work application screen.
AI engine assisted methods (i.e. the smart surveillance application) prevent clipboard copy/paste while personal data is projected on the agent screen. When personal data is displayed on the agent screen, any screen sharing event initiated by the agent is detected by the AI engine. That is, the AI engine detects that one of the common screen sharing application sessions has been activated on the agent desktop by analyzing screen samples, and if detected, countermeasures are immediately put in place whereby the screen is locked.
When the agent views personal data for longer than usual, the AI engine immediately blocks the data, suspecting malicious behavior. The AI engine also tracks all agent induced video recording start events; that is, the agent using some other application to start a recording is tracked by the AI engine by monitoring the agent's video recording action by means of a button press. This is realized because the system captures screen recordings and sends them to the AI engine.
In some cases, an agent may use a timer based malicious application to start recording conversations and screen captures in order to capture personal data. Such timer based malicious recording by the agent is also tracked by checking the various video file types stored on the agent's desktop work PC. The solution also highlights the use of a running spy application that monitors all the video and audio files generated on the agent's desktop work PC and sends the details to the AI engine/system (not necessarily the files themselves). The details include the file names, types, and generation or modification times. The AI system/engine then checks (by means of the said spy application) whether, during the period when personal data was projected on the agent screen, any video or audio files were generated and stored on the agent's desktop work PC. If such malicious behavior is detected, the AI system/engine is able to immediately impose the needed countermeasures, such as blocking the screen. Such detection and blocking of the personal data on the screen can take place in real time, while the personal data is being recorded. Additionally, in the worst case, the AI engine will also be able to retrieve the malicious audio and video files and conduct a video and audio based investigation, using the AI engine's/framework's image recognition and natural language processing components, to detect any personal data related content.
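A minimal sketch of such a spy application, assuming a simple polling process and illustrative paths and reporting hooks, is given below; only file metadata is reported, as described above.

```python
# Hedged sketch: new or changed media files appearing while personal data is
# on screen are reported (metadata only) to the AI engine. All names are
# illustrative assumptions.
import os
import time

MEDIA_EXTS = {".mp4", ".avi", ".mkv", ".mov", ".wav", ".mp3", ".m4a"}

def scan_media_files(root: str) -> dict[str, float]:
    """Map each media file under `root` to its last-modified timestamp."""
    found = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1].lower() in MEDIA_EXTS:
                path = os.path.join(dirpath, name)
                found[path] = os.path.getmtime(path)
    return found

def watch(root, personal_data_on_screen, report_to_ai_engine, interval=5.0):
    baseline = scan_media_files(root)
    while True:
        time.sleep(interval)
        current = scan_media_files(root)
        new_or_changed = {p: t for p, t in current.items()
                          if baseline.get(p) != t}
        if new_or_changed and personal_data_on_screen():
            # file metadata only; the files themselves need not be uploaded
            report_to_ai_engine(new_or_changed)
        baseline = current
```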
In the next embodiment (i.e. embodiment 7), more details of the AI engine w.r.t. countermeasures are highlighted, such as skill handling when the agent is in a data protection threat environment (i.e. the agent environment is not safe) and call transfer to another agent when the current agent is in such an environment. This embodiment additionally describes the AI assistant or AI analytics involved in handling the call when the current agent is in a data protection threat environment, and under which conditions the said AI assistant or AI analytics is engaged into a current call. Although this whole embodiment refers to environment related data protection threats and the call transfer and call back calls that handle them, the illustration is also applicable when the threat is induced by a malicious agent, such as an agent trying to voice out personal data or pass the personal data via a Zoom or WebEx session, etc. Even in these extreme cases, the screen will be blocked and transfers will be considered. One skilled in the art will understand that the core point illustrated in this embodiment is not limited to the scenario explained here.
It is essential to understand that the countermeasures for environment based data protection threats described in this embodiment, or in any embodiment in this document, are activated when the AI engine can accurately confirm that personal data is needed to handle the call, the said call is active in a given agent environment, and the said agent environment has a data protection threat. The countermeasures are not activated when the AI engine can accurately confirm that personal data is needed to handle the call and the said call is active, but the agent handling the said call has no data protection threat in his environment. The countermeasures for environment based data protection threats are likewise not activated when the AI engine can accurately confirm that personal data is not needed to handle the call, even if the said call is active and the handling agent's environment has a data protection threat.
If the AI engine has no concrete means/method of identifying whether personal data will be used for the given intent call, then the countermeasures will always be in place whenever a data protection threat is present in the agent environment and a call is active in the said agent's WFH environment. Conversely, absent such identification means, the countermeasures will not be in place if no data protection threat is present in the agent environment while the call is active.
In summary, countermeasures are active only when a data protection threat is present in the agent's environment and the agent's call needs personal data for its handling. If the AI engine running in the given system cannot detect whether personal data involvement is needed for the call, then the countermeasures are dispatched based solely on the condition that the agent environment has a data protection threat.
The AI engine running in the system will always check the agent environment w.r.t. data protection threats, regardless of whether the agent handles call intents that need personal data or not. This is because, in worst case scenarios where no other option is available to handle the call, any agent can be picked to handle it with the help of AI assistants. Thus, regardless of whether an agent serves calls that need personal data, the agent environment has to be monitored at all times. Only if an agent has been informed that he will not serve calls in which personal data is viewed or involved does he not have to keep his environment threat free. In all other cases, all agents of this system have to keep their environments threat free w.r.t. personal data protection. Any agent not complying with this will have his skills reduced and integrity points reduced.
The AI engine mentioned in this current embodiment 7 is shown in Fig. 1. The basic design principle in this embodiment is that long call holding times during the call (i.e. waiting for an agent of related skill to become available when the call is about to be assigned to an agent initially, or when the call is about to be transferred) and excessive call handling delay (i.e. unnecessarily cutting the current call off and putting the call into call back state when the call queue is long) are avoided even when there is a data protection threat in the environment. Herein, 'delay is avoided' means that if such a long call holding delay is estimated, then suitable countermeasures can be planned based on the estimate instead of persisting with a long call hold despite significant time already having been spent within the call. The said call handling delay refers to the total time of the call, which includes the duration of any call back call as a continuation of the call. Callback calls generally increase the call handling time, because when the agent calls back he may not succeed in reaching the customer immediately; if the customer has been informed of such an arrangement, the agent is more likely to reach him. Nevertheless, rather than a long call hold, a callback will be preferred by the customer, since a long call hold blocks the customer from attending to his other activities.
Next, within this current embodiment, suitable approaches to estimating the waiting time to obtain an agent for a given intent based call are illustrated, and it is highlighted how the AI engine leverages such waiting time estimates in its choice of the suitable countermeasure for the call. This solution considers one system call queue per system call intent; i.e., if the system supports 5 intents for calls requiring agent handling/involvement, then the number of call queues is also 5. Calls of distinct intents join the appropriate intent based queues to be served by agents. The agents serving the call queue of a given call intent can only be agents having the appropriate skills to serve that particular intent; that is, multiple relevant agents can serve a given intent queue. The system generally imposes a maximum length on each of these intent based call queues; this value is outside the scope of this solution, but one skilled in the art can understand that the current solution is applicable for any such queue length. Such a call queue for accessing an agent with the relevant skill exists because the rate at which agents serve calls is lower than the arrival rate of calls seeking an agent. Every call that cannot find an available agent of relevant skill immediately joins the end of the said queue, and the queue is handled in a first in first out (FIFO) manner. Two calls of the same intent may enter the system at the same time, but because their requests for an agent occur at different times, the one that first requested an agent is placed earlier in the queue and the one that requested an agent later is placed later. For example, some calls may visit other branches of the IVR before coming to an agent while others want an agent straight away; in such a case, the calls that wanted agent involvement first are placed earlier in the queue than the calls that need an agent at a later time. In this solution, this said intent based call queue can be made up of new calls, transfer calls and/or call back calls. Although callback calls are initiated by the agent, there will be a queue position for each to obtain access to an agent, so that the agent can trigger the callback call at the due time when the system assigns the agent to serve it.
If the queue length is high for a transfer call that has just joined the queue (because the transfer call could not get an available agent immediately), then waiting for an agent to become free might not be beneficial, because the waiting time will be high. If this said queue length is high, a call back is also not beneficial, because callback calls are given the same treatment as transfers and there will be a delay before the call back call obtains access to an agent. The solution components below consider these factors when planning the relevant countermeasures once an environment related data protection threat is detected and such long waiting times for a suitable available agent are detected.
Before considering how to choose the right countermeasure when the threat is high, the AI based solution component in this current embodiment also highlights how the waiting time to access an agent after joining the queue for a given intent can be identified by the AI engine running in the system. Usually queue times are identified by simulations and/or queueing models that represent the queue dynamics. For a real time queue, however, more accuracy can be obtained if the parameters that influence the queue time are used to build a machine learning model, and the real time queue parameters are used as inputs to the machine learnt model to estimate the queue time at the moment a call joins the queue. Here the output is the estimated waiting time, or queue time, for a given new call, transfer call or call back call that has joined a given intent based call queue.
To accurately obtain this waiting time value using machine learning and big data, a large amount of past data is needed, which is difficult to obtain in the initial stages of rollout for a new system embracing this solution. It is considered, however, that the data is accumulated after a certain time and the said machine learning model is derived from the learning process over that data. It is further considered that until this large body of past data is populated, suitable machine learning done, and the machine learnt model derived, the machine learning based waiting time prediction component is not used in the solution. Until it is ready, the queue length, i.e. the number of calls in the queue when a given call joins it, can simply be used to estimate the waiting time. Basically, knowing the average waiting times for new calls, transfer calls and call back calls, and the queue composition for these calls (i.e. how many new calls, how many transfer calls and how many callback calls are in the queue ahead of the current call), the waiting time can be estimated even when no machine learnt model is in place. The said machine learning model is able to identify the waiting time for a transfer, new or call back call that joins the intent based call queue at a given position, based on real time values of the variables that correlate with the waiting time. As mentioned before, any call joins this queue at the end (i.e. just behind the last call in the queue).
Let us consider that a transfer call/call back call/new call joins the intent call queue at a given position, with various types of calls ahead of it in the queue. The waiting time value can be derived from the machine learnt model using the following input variables, with their values obtained at real time; these same input variables were used, with past data, to train and derive the machine learnt model. The identified input variables that influence the waiting time of the said call when it joins the queue are: x calls ahead in the queue of the new call type, y calls ahead of the call back type and z calls ahead of the transfer type; the number of currently serving agents a for this intent; the number of currently available agents b for this intent; and the total number of calls m in the queue for this intent. Also considered are the average time t1 for an agent of the given intent to serve new calls, the average time t2 to serve call back calls, and the average time t3 to serve transfer calls, each measured when the call joins the queue. It is further considered that all these said variables and their values are correlated with the value of the waiting time. At runtime, the system feeds these variable values as inputs into the machine learnt model to obtain the waiting time as the output. It is considered that this output from a well trained model will be accurate.
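For illustration, the two estimation stages described above might be sketched as follows. The closed-form fallback and the choice of a gradient boosting regressor are assumptions; the specification only fixes the input variables (x, y, z, a, b, m, t1, t2, t3).

```python
# Hedged sketch: a heuristic estimate from queue composition is used until
# enough history exists, after which a regression model trained on the
# named input variables supplies the waiting time.
from sklearn.ensemble import GradientBoostingRegressor

def heuristic_wait(x, y, z, t1, t2, t3, serving_agents):
    """Work ahead of this call, divided across the agents serving the intent."""
    work_ahead = x * t1 + y * t2 + z * t3
    return work_ahead / max(1, serving_agents)

def train_wait_model(history_X, history_wait_times):
    """history_X rows: [x, y, z, a, b, m, t1, t2, t3] at queue-join time."""
    model = GradientBoostingRegressor()
    model.fit(history_X, history_wait_times)
    return model

def estimate_wait(model, x, y, z, a, b, m, t1, t2, t3):
    """Predict the waiting time for a call that just joined the queue."""
    return float(model.predict([[x, y, z, a, b, m, t1, t2, t3]])[0])
```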
This waiting time output from the machine learnt model is used to decide whether the transfer/callback call waiting time is high or not. Which value of waiting time or hold time is considered bad for quality can be decided by the system administrator and is outside the scope of this solution. In this solution, if a significant amount of time has elapsed for a call with a given agent, a need then arises to find another agent because of a data protection threat (whether an environment based data protection threat, a malicious agent trying to record personal data, etc.), no agent of the relevant skill is immediately available, and the waiting time for call back and transfer agent access is high, then an approach such as call transfer or call back is not used and the AI assistant/AI analytics instead supports the call. If another, less skilled agent is available and all the above options are not valid, then the call is transferred to the less skilled agent (i.e. an agent outside the intent) and AI analytics assist him in the intent based call. If no other agent is available (not even a less skilled one) and none of the above options exist, then the call continues with the current agent, with the AI assistant conveying the personal data via headphone.
For calls where the AI assistant is used in a high threat environment, the AI engine tracks whether the headphone is 'On' and whether people are seen close to the agent. If the headphone is not 'On' in this high threat scenario and other faces are seen close to the agent, the AI engine immediately informs the customer and puts the call into call back mode. If, during such an AI assisted call in a high threat environment, a skilled agent becomes available to serve the call, then the call is served by the newly available skilled agent. It is considered that when an agent of related skill is searched for the transfer, the call is put in the queue for transfer even though the queue was long; that place holder in the queue then enables the available agent to be brought into the call which had already engaged the AI assistant. The call back option is only considered when the call is in its initial stage with the agent; the agent transfer can be considered at slightly later moments of the call. It is generally considered that the call back call will have longer delays than transfer calls: the agent has to dial out and the customer has to answer, and all of this adds to the net delay.
Next, the full principle of the above mentioned methods is highlighted, whereby the AI engine performs a series of check points in order of countermeasure priority. It is first checked whether the data protection threat is high (or very high) and a call that needs personal data is in progress in the given agent environment. If the 1st check is yes, the 2nd check is whether the call can be transferred to a currently available agent of related skill without any waiting time. If the 2nd check is yes, the call is simply transferred to that currently available agent of related skill and can continue in the safe environment. Whenever a transfer is made, an additional check w.r.t. environment safety is also done, and the call is only transferred to safe environments or environments with minimal data protection threat. If the 2nd check evaluates to no, a 3rd check is activated: whether the call has already spent a considerable amount of time with its current handling agent, or significant time in the system. If the 3rd check evaluates to yes, a 4th check is made: whether a less skilled agent is currently available to handle this call, with no or minimal data protection threat in his environment. If the 4th check evaluates to yes, the call is transferred to this less skilled agent, who is supported with AI analytics related to the intent in order to continue the call. If the 4th check evaluates to no and there is no less skilled agent, the call continues in the current agent environment, with the personal data conveyed in speech mode to the agent wearing headsets. If the 3rd check evaluates to no, the system uses the AI engine to estimate the waiting time for the call if a transfer or call back is needed. If the waiting time is found to be high at check point 5, the call is transferred to a less skilled agent with AI analytics included; or, if no less skilled agent is available, the call continues in the current environment with the agent in AI assisted mode, where the AI assistant engaged into the call conveys the personal data. A place holder for the call is also put in the queue when a high waiting time is detected at check point 5; if, while the call is in the threat environment and before it ends, a skilled agent becomes available, the call is transferred to that skilled agent. If check point 5 evaluates to no, the call is put on hold until a skilled agent is available. If hold is not preferred by the customer, and the customer has signaled this during the call or by other means, the call is immediately put into high priority call back mode, with the call back call first being put in the queue. A sketch of this cascade follows.
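The cascade of check points 1 to 5 can be summarized in the following illustrative sketch; the boolean helpers on the `ctx` object are assumptions standing in for the AI engine evaluations described above.

```python
# Hedged sketch of the countermeasure decision cascade, mirroring the
# priority order in the text rather than any specific implementation.
def choose_countermeasure(ctx):
    if not (ctx.threat_is_high and ctx.call_needs_personal_data):   # check 1
        return "no_action"
    if ctx.skilled_agent_free_in_safe_env():                        # check 2
        return "transfer_to_available_skilled_agent"
    if ctx.call_time_elapsed_significant():                         # check 3
        if ctx.less_skilled_agent_free_in_safe_env():               # check 4
            return "transfer_to_less_skilled_agent_with_ai_analytics"
        return "continue_with_ai_assistant_whispering_personal_data"
    if ctx.estimated_wait_is_high():                                # check 5
        ctx.queue_transfer_placeholder()   # upgrade later if an agent frees up
        if ctx.less_skilled_agent_free_in_safe_env():
            return "transfer_to_less_skilled_agent_with_ai_analytics"
        return "continue_with_ai_assistant_whispering_personal_data"
    if ctx.customer_declines_hold():
        return "high_priority_callback"
    return "hold_until_skilled_agent_available"
```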
When the system assigns/transfers the call to a not-so-skilled agent, it assists him in conducting the call by providing various insights into the incoming call customer's system attributes related to the intent. This scenario is not an agent transfer scenario per se, but the choosing of an agent outside the intent to handle the call. The AI engine analyzes the state of the incoming calling customer's attributes and highlights to the unskilled agent various insights related to that intent: for example, whether the customer can improve his attribute value due to internal and external events occurring at the moment of the call, or whether he has any penalty to pay. The AI engine can work out the analytics based on rules related to an intent; the type of data that efficiently supports a given intent based call is presented to the unskilled agent to complete the call. The analytics done by the AI engine could also be based on the attribute state, the inputs and outputs that occurred in the past, and the current values of the attributes and inputs (rather than rules). The said AI engine could also derive the suitable AI analytics from inputs given by the skilled agents who have previously handled calls from a given customer related to a given intent. Such AI assisted summaries help the unskilled agent communicate with the caller/customer during a very busy time, or when agents have been made redundant in the system due to a very unsafe work from home environment.
In addition to detecting call arrival when the system already knows the call is for a particular intent (e.g. a callback call planned ahead, or the customer selecting the intent at the beginning of the call in the IVR system), the AI system also has the ability to detect the arrival of a call for a particular intent (applicable in contact centers where the intent cannot be identified at the beginning of the call) based on: the current business attribute values tied to a customer; external events that can impact changes to the customer's current business attributes; organization internal events that can contribute to business attribute changes for the customer; customer demographics; intent based call arrival times; the day of the week; and special day identification. Based on such data, the AI engine is pre-trained in a dedicated training phase to derive a prediction model with high accuracy. The said prediction model is able to identify, when a new call arrives, the particular intent it relates to, based on the real time data that influences the output (the output herein being that the call is related to a given intent).
In the previously mentioned call back mode, the call back state is put in the queue and then the current call is dropped. When an agent of related skill is available to serve the queued call back state, the said agent starts to call the customer and activates the call back immediately. Additionally, the system can send a short message service (SMS) message to the customer, informing him that the call back has to be started within a certain immediate time period. When the call comes from this customer as part of the call back within this said time period, the agent tied to the agent ID that was assigned to serve the queued call back state is engaged to handle the call.
To ensure data protection safety when a call arrives at the agent's WFH environment, the solution has a warning component to make the environment safe as the call lands at the agent (as mentioned previously). The agent, however, may be confused as to what actions to take to ensure safety when the warning is issued. The AI engine therefore also sends safety restoration details, i.e. exactly what to do to make the environment safe, as identified by the AI engine. A few such safety restoration instructions are: lock the door; the following people have to leave (the system projects the additional faces detected); increase the light luminance level; etc. This helps the agent to quickly ensure that any possible threat is removed from the work environment.
The AI engine has methodologies whereby the skill evaluation is done continuously in order to decide on the agent, because agent skill is modified continuously based on environment safety aspects and predictions. Until the call hits an agent, the suitable agent is continuously re-evaluated, ensuring the call lands in the correct environment amidst the dynamically changing skills.
Predominantly, agent skill is evaluated and agent selection done by the AI engine when the call first hits the system and when the call is just about to hit the agent. If the agent's skill has been lowered during a call to such an extent (when new high or higher threats are identified in the environment) that this agent can no longer serve the intent, significant time has already been spent on the call by the agent, and no other agent is available to immediately attend to the transfer for this intent (i.e. absolutely no agent of any skill set is available), then the call is put into an AI assisted mode where the AI assistance application verbally, in speech mode, conveys the personal data information to the agent (this happens in high threat environments). Again, such methodology is used only when the call needs personal data for its handling.
If the AI assisted call is taking a long time with many silent periods (i.e. the AI engine assisting by delivering personal data in audio form, or the AI assistant providing the analytics related to the call intent to support an unskilled agent), then the call is automatically transferred to another agent of higher skill within the same intent once such an agent becomes available, the transfer request having been put in the queue. If a call can be handled smoothly by the AI assistant, then it may not be transferred to another agent. This said transfer, if done, is done in a dynamic and autonomous manner by the AI engine. During such a transfer, to ensure quicker action, the transferring agent records a transfer related audio summary. Additionally, the AI engine has a capability whereby the said audio summary is checked for personal data information before being played in the environment of the new agent handling the transferred call: the audio is played only if no such personal data is present in the audio summary, or if the audio summary can still be understood after removing the personal data information, in which case the AI engine reformulates the audio summary after removing the personal data. This audio is played to the newly transferred agent using a related application. The summary enables the new agent to handle the call smoothly without many hiccups. This sending of transfer related information to the new agent happens in this system for any transfer.
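A minimal sketch of the transfer summary screening step, under assumed speech to text (stt) and text to speech (tts) helpers, might look as follows; coherence checking of the redacted summary is out of scope for the sketch.

```python
# Hedged sketch: transcribe the summary audio, check it against known
# personal data fields, and either pass it through or re-synthesize it
# with the personal data removed. All helpers are illustrative assumptions.
def screen_transfer_summary(audio, personal_fields, stt, tts):
    text = stt(audio)                                 # speech-to-text
    leaked = [f for f in personal_fields if f in text]
    if not leaked:
        return audio                                  # safe to play as-is
    redacted = text
    for f in leaked:
        redacted = redacted.replace(f, "[REDACTED]")
    # the reformulated summary should be played only if it still reads
    # coherently; that check is omitted from this sketch
    return tts(redacted)
```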
If, when the call hits the agent domain, environment safety has been cleared by the agent (i.e. cleared from an initial threat state to a better state), the skill is partially restored by the AI engine. The AI engine thus also performs environment safety based skill restoration where appropriate.
In yet another embodiment of the present invention (i.e. embodiment 8), the evaluation of the degree of the data protection threat by the AI engine in Fig. 1, and its subsequent treatment, is described.
The AI engine has the capability to evaluate the data threat level in an agent's WFH environment into classifications such as high, medium and low. The net threat value/level is determined by giving each threat a threat score and evaluating the net/cumulative score for the given agent's work from home environment. High, medium and low are given fixed ranges of the threat value, and the cumulative threat score in a given environment is classified into high, medium or low accordingly. Based on these classifications, appropriate countermeasures are put in place in this system. The countermeasures are not rigid and depend on the state of the system. Additionally, the countermeasures will generally be used only if personal data is used, or will be used, in the environment.
If the threat is high and the agent desktop has personal data exposed (i.e. a sudden threat state encountered while a call is on), the agent screen is completely blocked.
Whenever the agent work environment is considered not safe, or the agent is trying to maliciously copy the personal data, the agent's integrity points are reduced. The points reduced are directly proportional to the degree of non-compliance: if the threat level is high, then the integrity points reduced are also high. These integrity points are not restored quickly; the agent's compliance is monitored over a longer time period, and the said points are restored only if compliance is seen.
If the environment personal data protection threat is medium, the call is allowed to continue in the environment using the speech assisted or agent assisted mode. In both modes, the personal data is conveyed in speech mode to the agent wearing headsets. In such a medium threat environment, an agent who usually serves another intent can be used by the system to assist the agent at whom the call landed; this other agent mainly helps in conveying the personal data value only, and is not really engaged in serving the call.
If the environment data protection threat is low, then the personal data shown on the screen is confined to the middle of the screen, or the font size is slightly reduced.
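The mapping from cumulative threat score to classification and default countermeasure described in this embodiment might be sketched as follows; the numeric boundaries are illustrative assumptions, as the specification only states that fixed ranges are used.

```python
# Hedged sketch: map the cumulative threat score to high/medium/low and a
# default countermeasure. Boundaries and labels are assumptions.
def classify_threat(score: int) -> str:
    if score >= 60:
        return "high"
    if score >= 30:
        return "medium"
    return "low"

DEFAULT_COUNTERMEASURE = {
    "high":   "block_agent_screen",                      # personal data exposed
    "medium": "speech_or_agent_assisted_personal_data",  # whisper via headset
    "low":    "shrink_personal_data_to_screen_centre",   # or reduce font size
}
```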
The solution's counteractions are not fixed; they are planned based on the available resources. The AI engine generally plans the counteractions based on the threat level and the available resources (agents). The environment data protection threat level detection and countermeasures are dynamic, whereas the countermeasures for malicious agent behavior are a combination of static and dynamic, the transfer constituting the dynamic part. If the agent is found trying to record video etc., then the skill is severely reduced, integrity points are reduced, and the event is notified to the supervisor. The recorded files are also deleted by the supporting application, the screen is blocked, and the call is transferred to another agent.
In the next preferred embodiment of the present invention (embodiment 9), we highlight the complete design components of the solution, in order to show the complete picture of the solution and aid understanding of its operation in a given scenario. The big design component/concept is broken into multiple smaller design components/concepts, and the features tied to these said smaller design components are illustrated in this embodiment. We will use Figs. 3A, 3B, 3C, 3D, 3E and 3F to fully illustrate the solution design components: from the initial stage of on-boarding the agent after safe proofing the WFH environment and authentication; managing the data protection threat when a call is on and the said threat arises in the call handling agent's WFH environment; using warnings to clear the threat when a call is about to land in a threat related environment of the agent; managing the data protection threat even when no call is 'on' yet malicious agent behavior is identified; and the subsequent restoration of the agent into the system, i.e. how re-onboarding of the agent is done after a countermeasure. The above are some of the high level design components of the system.
As discussed in the previous sections, the solution has a design concept whereby a minimal camera setup is used: it is not necessary to have many sensors and/or cameras running as a pre-condition for this solution. During its core operating time, the solution uses only the inbuilt agent desktop camera to capture real time video from its view and sends it to the AI engine for further processing and detection. It is also considered that this solution does not have prior data with which to predict data protection threats in the WFH environment. Thus this solution has unique features whereby AI object/image recognition is properly utilized to achieve personal data protection.
To better understand this solution's design components, we can next look at Fig. 3A, where one of the solution design components is briefly illustrated. Fig. 3A highlights the safe on-boarding of the agent and the initial configuration using several smaller design components: the agent WFH environment is safe proofed w.r.t. personal data protection using AI; the agent is authenticated as a valid, live agent using AI; the agent WFH environment is identified as his work environment; and the agent skill is set to an appropriate value at the initial starting/rollout state of the agent in the system, based on input provided by the agent regarding the safety of his WFH environment. These smaller features/design components, which support the most suitable safe on-boarding of the agent to his agent desktop application, are highlighted in Fig. 3A using components (202, 204 and 205).
In component (204), the use of the rules to ensure the agent WFH environment is safe is highlighted. Here, the design objective is to eliminate the data protection threat in a special way (carefully planned rules for initial on-boarding, considering the fact that the full view is not subsequently available to the AI engine) before on-boarding the agent to his agent desktop application, so that subsequent data protection threats can be correctly identified using the minimal camera view, on the basis that the initial on-boarding check has already eliminated certain threat events. In this minimal camera view, the data protection threat evaluation rules at operation time detect whether any new threats appear, and also check whether the previously cleared state is violated by events seen in the minimal camera view. For example, if the agent is not seen in the minimal camera view, the agent may be moving away from the camera to take a picture, or may be opening a window to let another person in. Thus these appropriately designed rule components provide the system with all the components needed to counter the data protection threat.
The design component (205) highlights that the starting state/value of the skill can be preset, as part of the initial configuration, by the agent/supervisor for the given WFH environment based on the evaluated data protection threat in that environment. This said initial rollout evaluation is not done by the AI engine; it is the agent's own evaluation of his WFH environment. Initially, based on the agent's work skill, he has a skill value tied to the call intent he serves. During initial configuration, a change (reduction) to the agent's technical skill can be made purely based on the data protection safety of his work environment as given by the agent/supervisor. This said skill modification is always a reduction, and it happens only during the initial rollout. This skill reduction may or may not happen, purely depending on the agent's/supervisor's indication of the WFH environment's safety w.r.t. personal data protection. If the environment is safe, or the agent has no idea about his WFH environment w.r.t. personal data protection, then the agent will not insert any threat indication value about his WFH environment into the system, and the said agent skill will not be reduced. If the agent evaluates his environment, or the supervisor evaluates or approves the environment safety level, and a threat value is inserted into the system, then the skill is reduced based on the severity of the inserted threat value: the higher the threat value, the proportionately greater the skill reduction. Every work day a new on-boarding event of the agent happens, i.e. the smaller features (202, 204) are activated. The feature (205), however, is activated only for the initial agent rollout (i.e. the initial rollout of the agent into the system).
The system also checks whether the current agent on-boarding day is an approved day for the agent to carry out his work duties. This is done in feature (202). That is, whether the agent has taken leave and is trying to log in, or is trying to access the system on a public holiday, etc., is checked as part of the daily on-boarding process. The skill reduction during initial configuration gives the appropriate indication to the call routing engine of the system: basically, for a given intent related call in this solution, available agents of lower skill are not picked as preferred agents to handle the call, and available agents of higher skill are always picked.
Next, referring to Fig. 3B, another sub component of the solution is highlighted. In this sub component, the functionality and features of the solution for enforcing personal data protection in WFH even when there is no environmental data protection threat are highlighted using features such as (208, 209, 210, 214, 215, 216 and 217) as in Figs. 3B and 3C. The main solution has appropriate features to ensure data protection security in the WFH environment even when no environment related data protection threat is present. The said environmental data protection threat refers to a data protection threat originating from the agent's work environment surrounding the agent. Even when there is no environment related data protection threat, there has to be a continuous checking and threat elimination mechanism, especially to catch an agent behaving in a malicious manner and to detect whether a safe environment is changing into a data protection threat state. This is the main objective of this solution sub component.
The feature (208) refers to continuous monitoring of the agent environment so that any data protection related threat components are continuously identified even if the current environment does not have any data protection threat. Such a design philosophy is used so that the agent can be warned ahead of time to clear the threat from the environment in case a call lands at that agent in the future. The warning frequency is kept very low if the system does not predict that a call needing personal data will arrive at this threat-affected environment. Nevertheless, even if no call arrival to this environment is predicted, at least one warning will be sent if the AI engine identifies a data protection threat in the environment.
Also, by having this threat detection using the restricted camera view, the AI engine can check whether someone who has entered the room has also left the room. The total person count per WFH environment is detected and kept by the AI engine. Even if an additional person's face cannot be seen at a given moment using the restricted view of the camera, the AI engine can detect the total number of people who entered using the single entry point that is in the restricted camera view (i.e. the door of the WFH area) and the number of people who have left the room through the same single entry/exit point. Based on this method, the AI engine can determine the total number of people in the WFH environment even with the restricted camera view. The threats arise from people and/or automated recording programs running. Thus this sub solution gives priority to detecting these objects/additional people and any malicious recording by the agent. During system operation time, i.e. the agent access time for the work desktop application, the implemented solution continuously checks whether any other camera in the WFH environment is integrated into the system and set to recording 'Off' mode.
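A minimal sketch of this entry/exit counting logic follows, assuming the AI engine emits an "enter" or "exit" event whenever a person is seen crossing the single doorway in the restricted camera view. The event names and the class are illustrative, not from the specification.

```python
# Minimal sketch of doorway-based occupancy tracking with a restricted
# camera view: only crossings of the single entry/exit point are seen.

class OccupancyTracker:
    """Track how many people are in the WFH room besides the agent,
    using only doorway crossings visible in the restricted camera view."""

    def __init__(self) -> None:
        self.extra_people = 0  # people other than the agent

    def on_door_event(self, event: str) -> None:
        if event == "enter":
            self.extra_people += 1
        elif event == "exit":
            self.extra_people = max(0, self.extra_people - 1)

    def environment_clear(self) -> bool:
        # Safe only when everyone who entered has been seen leaving.
        return self.extra_people == 0

tracker = OccupancyTracker()
tracker.on_door_event("enter")      # someone walks in
print(tracker.environment_clear())  # False -> threat state
tracker.on_door_event("exit")
print(tracker.environment_clear())  # True
```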
Next, the feature (209) is looked into. This feature is about allowing the agent to access personal data in his agent desktop work application only during a call handled by the said agent, and only when it is accurately detected that the said call needs a given customer's personal data to be viewed by the said agent to handle the call. In all other cases, the system prevents the customer's personal data from being viewed by the agent from his agent desktop work application.
The feature (210) highlights the system's ability to detect, during a call or otherwise, whether the agent is quoting personal data values verbally while the personal data is projected on the screen or otherwise, and whether the quoted personal data can be heard by any other person. Such another person is a threat only when that person is within audible range of the source of the sound (i.e. the agent).
The AI engine has the capability to estimate, from the recorded sound/speech received from the agent desktop containing the personal data, in which area around the agent the agent's voice can be clearly heard. If the finalized audible area cannot be fully seen by the AI engine from the restricted camera view, if the border of the area extends beyond the door in the AI engine's camera view, if there is another human in the audible area who can be seen by the AI engine using the restricted camera view, or any combination thereof, the AI engine considers this act of the agent (quoting the data by voice) a threat and imposes countermeasures such as significantly reducing the skill of the agent, agent integrity point reduction, and blocking the agent work screen by requesting the smart surveillance application; this malicious behavior will also be considered during re-on-boarding of the agent.
It is considered that the AI engine has voice energy degradation data (the rate at which voice amplitude diminishes in the given agent WFH area with distance from the agent/source) and the distance from the agent's work location to the door within his WFH environment, and that it uses such data together with general speech audibility level guidance to detect whether the agent's voice can be heard by any other person. Basically, the AI engine is able to draw an audibility area around the agent, check whether the seen door lies within that area, and use this for its decisions regarding the said threat. In general, if the door lies within the estimated audibility area, the AI engine considers the threat to be present. Only when the estimated audibility area does not contain the door, or just touches it, and there is no other person in the AI engine's view within this area, does it consider the threat from the agent voicing out the personal data minimal. The audibility area is derived by the AI engine from the sound level/amplitude captured from the agent and the a priori distance-based voice degradation rate information it holds for the given WFH agent environment.
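A minimal sketch of this audibility-area check follows. The decay model (level falling 20*log10(d/d_ref) dB with distance, i.e. free-field spreading) and the 30 dB background floor are illustrative assumptions; the specification only states that a degradation rate and the door distance are known to the AI engine.

```python
# Minimal sketch of deriving an audibility radius around the agent and
# checking whether the door lies inside it. Decay model and thresholds
# are assumptions; the spec leaves the exact degradation data abstract.

def audible_radius(source_db: float, floor_db: float = 30.0,
                   ref_distance_m: float = 1.0) -> float:
    """Distance at which the agent's speech decays to the audibility
    floor, assuming the level drops 20*log10(d/d_ref) dB at distance d."""
    return ref_distance_m * 10 ** ((source_db - floor_db) / 20.0)

def voicing_threat(source_db: float, door_distance_m: float) -> bool:
    """Threat if the door lies inside the estimated audibility area."""
    return audible_radius(source_db) >= door_distance_m

# Example: speech captured at 65 dB with the door 4 m away gives a
# radius of roughly 56 m, so the door is inside the audible area.
print(voicing_threat(65.0, 4.0))  # True -> countermeasures imposed
```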
If the solution system has no capability of detecting the audible range of the personal data quoted or voiced out by the agent, then when quoting of personal data is detected, the system imposes counteractions (as mentioned) such as significantly reducing the agent's skill, agent integrity point reduction and screen locking, and this malicious behavior will be considered during re-on-boarding of the agent.
The agent's on-boarding for the next day will be prevented because the agent screen is locked. During re-on-boarding at the beginning of the work day, a state tied to the given agent ID for such a malicious act will remain in the system, and the system will prevent the user from gaining access to his work desktop application until the supervisor clears that state and re-on-boarding is approved.
In certain cases, the agent is able to handle a call that needs personal data without viewing the personal data or getting it projected on the screen. In this scenario, the personal data is not voiced out during the call, but the agent is reading pre-written data from some other place. This is also considered a malicious event, and the system imposes the same measures of severe skill degradation, integrity point deduction, screen locking and prevention of re-on-boarding. This agent action is a malicious event because the agent might have registered the data on some other device or on paper and be using it for reference during the call without retrieving the data onto the screen. The AI engine detects such copying to another device and its re-use.
One of the core features of the AI engine is that, using speech recognition methods, it is able to match voice phrases to the personal data conveyed and subsequently track the malicious behavior of the agent. The solution always reduces the skill value by a larger amount when a malicious act of the agent is detected.
The next component is (214, 216) as described in Fig. 3C. Here (i.e. in the function related to the said component) the AI engine tracks and detects whether a given call that needs personal data to be accessed on the screen is being kept in a hold phase or is taking a very long time to complete. Taking a long time for a call and/or putting a call on hold while personal data is displayed on the agent screen is considered a malicious act by the agent.
It is considered that the AI engine gets the information from the DB it is associated with about the hold time when the personal data was projected on the screen (i.e. the start and end times of the hold period within the call). The AI engine is also able to get the call duration timing (i.e. the start and end times of the call) for the call which used personal data on the screen. The said DB additionally holds the times at which the personal data was projected on screen during the said call (i.e. the start and end times of the personal data projection). Using these multiple pieces of information, the AI engine can detect whether the hold time was long, whether the hold happened during the personal data projection time, and whether the call took an unusually long time to complete. These checkpoint events correlate to an act such as a malicious agent trying to copy personal data onto paper or an external device during a call. In such a case, the AI engine tracks this as malicious behavior by the agent and reduces the skills. By such skill reduction, for future calls the AI engine based system avoids this agent (as the skill is reduced for the given intent).
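A minimal sketch of this timestamp correlation follows. The DB field names and the hold-time threshold are illustrative assumptions; the specification only says the engine has the hold, call and data-projection start/end times available.

```python
# Minimal sketch of flagging a long hold that overlaps the time when
# personal data was projected on the agent screen. Field names and the
# 2-minute accepted hold time are assumptions for illustration.

from datetime import datetime, timedelta

def overlaps(start_a, end_a, start_b, end_b) -> bool:
    """True if intervals [start_a, end_a] and [start_b, end_b] intersect."""
    return start_a < end_b and start_b < end_a

def suspicious_hold(call: dict, max_hold=timedelta(minutes=2)) -> bool:
    """Flag a hold that is both long and overlapping the personal data
    projection interval -- the pattern the spec treats as copying."""
    hold_long = (call["hold_end"] - call["hold_start"]) > max_hold
    hold_during_projection = overlaps(call["hold_start"], call["hold_end"],
                                      call["pd_start"], call["pd_end"])
    return hold_long and hold_during_projection

t = datetime(2020, 8, 14, 10, 0)
call = {"hold_start": t + timedelta(minutes=5),
        "hold_end":   t + timedelta(minutes=9),
        "pd_start":   t + timedelta(minutes=4),
        "pd_end":     t + timedelta(minutes=10)}
print(suspicious_hold(call))  # True -> treated as possible copying
```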
In addition to such skill reduction, since the hold happened during the personal data projection time, if the hold time is higher than a given amount of time (i.e. the AI engine's accepted hold time), the system blocks the personal data on the screen. Here the hold is an explicit hold state inserted by the agent into the system; once the hold is over, the agent presses un-hold. The accepted hold time can be derived by the AI engine from past data or obtained from preconfigured information.
In addition to this DB based detection mechanism, the AI engine is able to detect hold/silence periods by real time analysis of the speech recorded during a call which uses personal data. The solution leverages such capabilities of the AI engine. The AI engine, as part of this solution, is able to analyze speech samples of normal agent conversations that use personal data and compare them against conversations with unusual silent periods, as in a malicious event. It then flags a long silent period as an unusual event if the silence duration is unusual. To conclude that an event is unusual, it checks a large amount of past speech sample data from conversations where the agent used personal data, specifically tracking the silent periods in that past data. These past speech samples used for analysis can be past real time data. Furthermore, if the detected long, unusual silence falls within the time when personal data is projected on the screen while the call is on, the AI engine treats this as a malicious act by the agent. The AI engine uses big data such as silence periods within a call, the time period when personal data is projected on the screen, the duration of the personal data projection, etc. to identify threat events such as copying done by the agent.
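A minimal sketch of such silence detection on recorded call audio follows, assuming the call is available as a mono PCM sample array. The frame size, RMS floor and the example signal are illustrative; the specification derives the "unusual" duration baseline from past call data rather than fixing a threshold.

```python
# Minimal sketch of silence-span detection via frame RMS energy.
# Frame length and RMS floor are illustrative assumptions.

import numpy as np

def silent_spans(samples: np.ndarray, rate: int,
                 frame_ms: int = 50, rms_floor: float = 0.01):
    """Yield (start_s, end_s) spans whose frame RMS stays below the floor."""
    frame = int(rate * frame_ms / 1000)
    start = None
    for i in range(0, len(samples) - frame, frame):
        quiet = np.sqrt(np.mean(samples[i:i + frame] ** 2)) < rms_floor
        if quiet and start is None:
            start = i / rate
        elif not quiet and start is not None:
            yield (start, i / rate)
            start = None
    if start is not None:
        yield (start, len(samples) / rate)

# Example: 1 s of speech-level noise, a 1.5 s silent gap, 1 s of noise.
rate = 8000
audio = np.concatenate([np.random.uniform(-0.3, 0.3, rate),
                        np.zeros(int(1.5 * rate)),
                        np.random.uniform(-0.3, 0.3, rate)])
print([(round(a, 2), round(b, 2)) for a, b in silent_spans(audio, rate)])
# -> [(1.0, 2.5)]; compared against past-call baselines to judge "unusual"
```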
Usually the system updates the hold time duration into the DB at the end of the call. In some cases the solution can be implemented such that the hold time is updated into the DB immediately when the hold ends via un-hold in the middle of the call. In that case, the AI engine is able to track in real time that the hold time has been exceeded, by querying the DB after the time it usually takes to update the call hold time. If the call hold time is not updated within the usual time, then the AI engine, even during the call, tracks the malicious act of the agent using this additional method. The usual time is determined by the AI engine using averages of hold times from past data. In such a case, the system blocks the agent screen if the hold is significantly long while personal data is projected on the screen.
In another deployment scenario of the present invention, if call hold is not allowed during a call which uses personal data, then upon detecting silence (because there is no conversation) while personal data is projected on the screen, the AI engine immediately blocks the screen (by using the smart surveillance application) and reduces the agent skill. In such a scenario, even when hold and un-hold are explicitly performed, the AI engine is able to track this and immediately block the screen. Basically, such a measure can be taken if the personal data is very important. In that case no time boundaries are used to decide on the counteraction: on any unusually long silence period and/or hold/un-hold and/or long call conversation time, counteractions such as screen blocking and skill reduction are put in place.
The component (215) identifies the case where, while personal data is exposed on the agent screen during a call involving the agent, the malicious agent starts another session via Zoom or WebEx, shares the agent screen containing the said personal data, and passes the personal data related video to an attacker. It is considered that such Zoom/WebEx session creation and screen share events can be detected by the AI engine. The video application that sends data to the AI engine also sends the screen video images to the AI engine when personal data is projected on the agent desktop screen. If such a malicious act is detected, the AI engine blocks the agent screen by using the smart surveillance application, reduces the skill significantly, reduces the agent integrity points and prevents the agent's re-onboarding to the agent desktop work application the next day. In general, these screen blocking related countermeasures are activated by the AI engine using other related applications such as the smart surveillance application, as illustrated in the previous embodiments.
The next component is (217). The function of this component is that the AI engine is able to detect, for the calling customer, which personal data has to be shown on the screen. The AI engine subsequently informs the other applications not to project any other personal data that is not related to this call.
Next, another sub design/component or feature of the solution is described, where the solution identifies the threat in the agent work environment and enforces appropriate actions. These will be explained with reference to Fig. 3D and Fig. 3E. To explain this sub feature, the components (222, 223, 224, 229, 230, 231, 232) will be referenced. The action related to Fig. 3D is tied to skill juggling, so that a call landing in an agent's WFH environment whose safety w.r.t. personal data protection is compromised is avoided. Fig. 3E relates to counteractions where the call is transferred to another agent or, in the worst case, put into call back mode.
The components (222, 223) identify the event that a call which needs personal data for its handling will land at a particular agent, and activate prevention of the call landing if the said agent's WFH environment is not safe at the moment of the said prediction. One such countermeasure or prevention measure is skill reduction, and another is issuing a warning to the said agent to clear the unsafe environment. How the system predicts that a call related to a given intent that needs personal data will land at a given agent can be broken into two parts: one, identification of the probability of the given intent related call landing in the system and being given to the agent who handles the intent; and two, the probability that this intent call will need personal data. The AI engine can also detect this event by using big data analytics rather than identifying these probabilities individually. That is, the big data should have all sorts of data that give the AI engine the insights to correlate the arrival of a call of a particular intent needing personal data with the arrival of the said call at a given agent. In some previous embodiments of the present invention such details were revealed.
Until the said call lands at a given agent, the system continuously checks whether the skill has to be reduced for any available agent's WFH environment (i.e. an agent who has a high chance of getting this call due to its related intent), or whether the skill can be restored for any available WFH agent. The skill is restored if the WFH environment is considered safe again after an initial unsafe condition, as long as the agent demonstrates compliance in moving from unsafe to safe. Such skill management is done continuously once a call of a given intent that needs personal data protection has landed in the system but has not yet landed at any agent. This dynamic mechanism helps to find the appropriate agent. If such continuous and dynamic skill management were not in place, the call could unnecessarily go to call back mode, because the system might erroneously consider every agent environment unsafe and fail to find a suitable available agent. Only by having this continuous skill management design component can the appropriate agent be identified, since a threat environment has a high chance of being cleared after a warning is issued.
If the system cannot identify a suitable agent at which the call for a given intent needing personal data protection should land, and this can be estimated to a certain degree, then the warning is sent to all agents whose environments have a data protection threat. If it cannot be estimated whether the call needs personal data protection, then when such a call hits the system a warning is again sent to all the relevant agents whose WFH environments have safety issues w.r.t. personal data protection.
The above mentioned skill restoration after environment safety is cleared is highlighted by the component (224). Skill restoration to a value near the full amount should happen whenever the WFH environment is considered safe and the agent has complied with the first warning. If the agent has not complied with the first warning and only restores environment safety after the 2nd or 3rd warning, then the skill restoration value is not significant. Basically, the system uses a method where skill restoration happens based on the agent's compliance with work environment safety rules; the restoration value is decided by the system based on the agent's compliance level w.r.t. personal data protection in the WFH environment.
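A minimal sketch of such compliance-graded restoration follows. The restoration fractions per warning count are illustrative assumptions; the specification only fixes that restoration shrinks with the number of warnings the agent needed before clearing the environment.

```python
# Minimal sketch of compliance-based skill restoration (component 224).
# The per-warning restoration fractions are illustrative assumptions.

def restore_skill(original: float, current: float,
                  warnings_needed: int) -> float:
    """Restore skill toward its original value; compliance after the
    first warning earns near-full restoration, later compliance less."""
    fraction = {1: 0.95, 2: 0.6, 3: 0.3}.get(warnings_needed, 0.1)
    return current + (original - current) * fraction

print(restore_skill(100, 50, 1))  # 97.5 -> near-full restoration
print(restore_skill(100, 50, 3))  # 65.0 -> modest restoration
```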
After the call lands in an agent's WFH environment, the system activates certain functions and features to ensure the call continues with the least disruption and in a safe environment. To enable such a goal, the component (228) uses a feature where the system, even after the call lands at an agent, continues to monitor the environment safety w.r.t. personal data protection, including any malicious behavior by the agent. This is to ensure that, if a data protection threat is detected, a transfer to another agent, some AI assistance or another agent's assistance can be planned. The component (229) highlights the solution principle where the system takes certain countermeasures when the personal data protection threat in the WFH environment is not severe. In such a case, the AI engine involves the AI assistant in the call to speak the personal data to the agent, who is in headphone mode, with the said agent still in the data protection threat environment. The component (229) also highlights the case where, when the threat in an agent's WFH environment is not severe, another available, less skilled agent is incorporated into the call to help voice the personal data information to the agent. Basically, within the same session either an AI assistant or another available less skilled person is engaged to help the agent. The reason is that the threat is not severe and the system identifies that the call can continue in the current agent environment. For example, if many people suddenly appear in the agent's WFH environment, the threat can be severe, and in that case the system will try to transfer the call to another agent completely. Transfers generally cause delays and hiccups in performance.
The component (230) highlights the case where the agent receives the personal data via headphone and, when a threat is present in the environment, tries to voice out or repeat what was received from the other virtual or real agent. In such a scenario, the agent may be using some hidden recording device and recording the data. Thus it is essential to move the call away immediately to a safer environment and to an agent who has adequate skills to handle it. In that case, the call is transferred to another agent with the related skill tied to the call's intent. In general, when the threat in the agent's call handling environment is high (environment threat and/or malicious agent), the system tries to completely cut off the agent's involvement in the call and transfer the call to another agent if possible. The component (231) highlights the case where the call cannot be transferred because no other agent is available to handle the said call tied to the given intent. In that case the call may be put on hold if another appropriate agent will become available soon (i.e. a given busy agent will be freed within a short time), or the call will be put into a high priority call back state. The system judges whether the remaining hold time and the subsequent handling by another agent would severely degrade the quality of the call. If degradation is detected (i.e. a call with excessive hold time will impact the QoS), the call is put into the call back state, and the said call back call joins the queue for an agent like other calls.
The component (232) highlights the sub feature that when a data protection threat is detected in an agent's WFH environment (which can be an environment based data protection threat and/or a malicious agent), the skill of the agent is reduced so that an incoming call is prevented from landing at the said agent. This lower skill generally ensures that such an agent is not picked to serve the call. If such a feature were not present, the call would unnecessarily have to be transferred, incurring additional delays due to session updates and finding a suitable agent again. In addition to the skill, the agent integrity points are reduced whenever a data protection threat is identified (environment related and/or malicious agent based).
These integrity points are used to re-evaluate the agent's entry back into usage of the agent desktop application. The supervisor is able to clearly distinguish the behavior and trustworthiness of an agent using the integrity points. A reduced skill value does not simply imply less integrity; it could also mean that the agent has a reduced technical skill to handle a given intent based call. Thus the integrity point is a dedicated metric to highlight the trustworthiness of the agent, and it also plays an important role in this solution.
Next, another important sub design of the solution is highlighted in Fig. 3F. This component is specifically used for re-onboarding the agent after his desktop work application has been blocked because of a high data protection threat event in his environment, because the agent himself has behaved maliciously, or a combination thereof. The sub component (236) highlights how very strict re-onboarding security checks are done before allowing the agent to re-onboard. The solution principle here is to further tighten security after a data protection threat incident.
To achieve this, many solution components are in place. After the screen is blocked/locked, a QR code is projected on the agent screen after a certain amount of time. As part of the agent's re-entry after the security incident, the QR code on the screen first needs to be read by the agent phone's smart surveillance application (the smart surveillance application mentioned in Fig. 1) and sent to the system. Here, in contrast to general QR solutions, the QR code reading is integrated with the smart surveillance application itself. Thus only a phone that has the smart surveillance application installed (or a phone that knows the URL of this application) can be used to read the QR code and send it to the solution system. If, instead of installing the smart surveillance application as a smart phone application, the agent uses a URL to access it, this URL can be sent using out of band means such as email and in an encrypted manner. This re-login URL is used only for the re-login/re-onboarding procedure and it is dynamic: there is no single system wide URL for re-login/re-onboarding, and it changes dynamically. Due to such measures, security is further enhanced because a malicious person cannot easily access the smart surveillance application or the dynamic URL, send the QR code back to the solution system, and thus impersonate the agent. The smart surveillance application mentioned in Fig. 1, which runs on the agent's phone, has the capability to read the QR code and send the information to the back end. If the QR code read is equal to the agent ID whose desktop was locked, then the system, after the supervisor approves the re-login, sends a login code or password as an SMS to the agent's smart phone. This password is sent to the said agent's mobile phone number, to ensure no malicious person is able to obtain it. Before this password is sent, many checks as discussed next are performed and must be passed. This passed state is then shown to the supervisor, who has to approve the re-login. After that, based on the supervisor's agreement, the system generates the re-login password and sends it to the said agent whose screen was locked/blocked. After the QR code is received by the back end, the agent environment safe proofing, authentication of the agent and his work environment, identification of the agent's liveliness, etc. are used to re-validate the agent's entry into his agent desktop application. All of this validation information is then given to the supervisor, who can use additional means such as the incident related video, integrity points, etc. to decide on suitable re-entry.
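A minimal sketch of the back end side of this handshake follows. The function names, OTP format and dynamic-URL generation are hypothetical scaffolding over the steps the specification lists (QR shown on locked screen, read by the smart surveillance application, checks passed, supervisor approval, SMS password to the agent's registered phone).

```python
# Minimal sketch of QR-based re-onboarding validation. Names, the OTP
# length and the URL scheme are illustrative assumptions.

import secrets

def issue_relogin_url(agent_id: str) -> str:
    # Dynamic, single-use re-login URL (no system-wide constant URL).
    return (f"https://example.invalid/relogin/"
            f"{agent_id}/{secrets.token_urlsafe(16)}")

def handle_qr_scan(scanned_agent_id: str, locked_agent_id: str,
                   checks_passed: bool,
                   supervisor_approved: bool) -> str | None:
    """Return the SMS one-time password only if the scanned QR matches
    the locked desktop's agent and all re-validation steps are approved."""
    if scanned_agent_id != locked_agent_id:
        return None  # wrong phone / impersonation attempt
    if not (checks_passed and supervisor_approved):
        return None  # environment, liveliness or approval check failed
    return f"{secrets.randbelow(10**6):06d}"  # OTP sent to registered number

print(issue_relogin_url("agent-042"))
print(handle_qr_scan("agent-042", "agent-042", True, True))  # e.g. '394817'
```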
The sub component (237) highlights that the system also has important design components such as restoration. If a skill is reduced, it is important to restore it to its original value; if integrity points are reduced, they also have to be restored based on good behavior. Skill restoration happens when compliance is detected after a warning, or when no incident happens for a certain time period. Integrity points are restored after a longer time period. With this design sub component, the system keeps skilled agents available to handle the call load and is not illogically/inappropriately depleted of agents by the threat management protocols implemented.
Next, in another preferred embodiment of the solution (embodiment 10), the AI engine's software architecture is revealed according to one exemplary illustration. One skilled in the art will understand that this is not the only software architecture by which the AI engine can realize the current inventive solution. To better understand the said AI engine's exemplary software architecture, Fig. 4 is referenced.
The complete software architecture of the AI engine is illustrated by (300). It basically follows the open systems interconnection (OSI) communication protocol architecture, where all the applications reside on the top layer of the software communication architecture framework. The application components involved in the AI specific framework are: (301), (302), (303), (304), (305), (306), (307), (308), (310), (311) and (313). In order for these applications to communicate with various other modules, such as the smart surveillance module (122) mentioned in Fig. 1, the data for communication is generated by the individual application modules and sent using the communication framework highlighted by the communication protocol software components (315), (316) and (317). The AI engine sends this data to the smart surveillance application so that the listener in the smart surveillance application can fetch it in real time and immediately execute the counteractions needed to manage the data protection threat. Here the component (315) enables various data, such as the modified skill value, agent ID, session ID, data protection threat countermeasure action, updated integrity points for an agent and the related data protection threat capture video, to be suitably arranged in order for it to be sent from the core AI application modules using an application layer transport protocol such as the hypertext transfer protocol (HTTP). One exemplary data arrangement for this interface is the JavaScript Object Notation (JSON) format within the API body. The component (315) is an application that calls the various sets of interface APIs between the AI engine and the smart surveillance application; the interface communication session establishment and data transport triggering are done by the interface application running in component (315). The solution does not restrict HTTP to being the only suitable application transport protocol for the said interface communication. If HTTP is used, various Representational State Transfer (REST) communication models can be used to enable the transfer. Anyone skilled in the art would know that if another application layer transport protocol, such as WebSockets, is used, the same solution applies without deviating from its main point. The layer (315) specifically handles the appropriate data formation for the HTTP session and thus exposes various APIs to attain this objective. Using these APIs, various sets of data are sent from the AI engine applications to the smart surveillance application. By hosting and exposing such APIs, the component (315) can use various API names to send various types of data from the AI engine. The data can be sent via one API within one HTTP session, or multiple HTTP messages for various APIs can be sent via a single HTTP session. The API name is generally transported in the header of the HTTP protocol. Basically, the component (315) is an application that calls the HTTP service, including the HTTP session establishment service; the actual implementation of the HTTP session is handled by the component (316).
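As an illustration, one such interface call from the AI engine to the smart surveillance application might look like the sketch below. The endpoint path, JSON field names and use of the third-party `requests` library are assumptions; the specification only fixes the data items carried (modified skill, agent ID, session ID, countermeasure action, integrity points, threat video).

```python
# Minimal sketch of one REST call from component (315) carrying a
# countermeasure payload as JSON. URL and field names are assumptions.

import requests  # third-party HTTP client: pip install requests

payload = {
    "agentId": "agent-042",
    "sessionId": "sess-9137",
    "modifiedSkill": 55,
    "updatedIntegrityPoints": 80,
    "countermeasure": "BLOCK_SCREEN",
    "threatVideoUrl": "https://example.invalid/clips/9137.mp4",
}

# One API per HTTP request; per the description, the API name could
# equally travel in the HTTP header, and several API messages could
# share a single HTTP session.
resp = requests.post(
    "https://surveillance.example.invalid/api/threatAction",
    json=payload,
    timeout=5,
)
print(resp.status_code)
```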
The communication layer (316) provides signaling and data packet transportation services for various communication protocols such as HTTP, Transmission Control Protocol (TCP), Internet Protocol version 4 (IPv4), Internet Protocol version 6 (IPv6) and IP Security (IPSEC). When communication is needed between the AI engine's application modules hosted in (300) and any other module, such as the smart surveillance module, the application data, in addition to being wrapped with an HTTP header, is also wrapped with a TCP header and with the appropriate IPv4/IPv6 protocol. The data link layer and physical layer in the communication stack are highlighted by component (317). Before being sent to the physical layer (317) in the protocol stack, the application data is further appended with data link layer and Media Access Control (MAC) layer protocol headers; the data packet is also processed by suitable physical layer protocols before being transmitted over the communication medium. Any AI engine needs its evaluations/predictions to be done quickly, which calls for substantial hardware such as large cache sizes and processors with high processing speed. It is assumed that such additional hardware tools are provided by component (309).
Furthermore, in some cases, to activate a particular algorithm the AI engine framework has the capability to dynamically write code. Whenever the AI engine needs code to be dynamically generated, this Integrated Development Environment (IDE) can be used. This solution does not specify when the IDE will or will not be used; the architecture simply highlights that AI applications can use this feature where and when needed. One possible scenario where the IDE can be used by the AI engine of the solution is when it picks a certain algorithm from its machine learning suite to activate for learning. If libraries are not present for this algorithm, it can be coded using the said IDE within the machine learning framework.
The application layer of the AI framework (300) is segregated into many AI functions. These functions are mapped one to one onto the various AI application software modules. Such separate application modules in the software architecture design reduce maintenance effort (a new build deployed as an upgrade to a certain function does not need to disrupt other untouched modules), improve debugging when an issue is identified (only the related code space has to be investigated), enable reusability by composing applications from the needed set of sub functions/modules, and allow smoother plug and play for the different types of customization needed. For example, some customers that use this solution may not need all of the AI engine features and may only need certain aspects. Thus compartmentalization of the AI engine features using these various sub modules is very useful for customizing this large solution for customers with slightly different needs.
The component (301) refers to the AI engine's involvement in identifying/predicting a given intent call's arrival at an agent. This prediction uses big data and machine learning algorithm(s) to identify a suitable machine learnt mathematical model that can make such predictions accurately. From the big past data, the machine learning algorithm identifies the positive or negative correlation between the labelled data and the needed output, and uses this identified correlation to form the machine learning model that can accurately predict an outcome such as a given intent call arriving at an agent. The machine learning algorithm also has the capability to refine the machine learnt model to improve its estimation capability.
To identify such a machine learnt model, the application component (301) can leverage data and tools that are usable by the AI engine and reside in the software framework (300). The data to train and identify the machine learnt model for application (301) can be accessed by getting the data from the DB; the retrieved data can then be used in cache memory or stored in a file during the machine training/learning process. To get the data from the DB, the component (312) is used. This component (312) in general helps to insert real time data obtained from the WFH environment, or from any other AI interfacing application, into the DB, and also to retrieve data from the DB for the machine training/learning purpose and for object/image recognition during agent on-boarding, data protection threat detection during the on-boarded time for the given day, and re-onboarding.
The software component (314) highlights the AI framework that is useful in identifying the appropriate finalized machine learnt model for application component (301) or any other AI application component residing in this software architecture (300). Some of the best fitting mathematical models defined in (314) are evaluated to identify the suitable machine learnt model for a given prediction problem. To identify the suitable machine learnt model, a suitable machine learning algorithm can be used. The component (314) provides such a library of algorithms and machine learnt mathematical models that can be accessed to enable the derivation of a prediction model.
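A minimal sketch of the kind of supervised model component (301) could train follows. The features, toy labels and the choice of logistic regression are illustrative assumptions; the specification only requires a machine learnt model trained on labelled past data, with continued improvement over time.

```python
# Minimal sketch of training a call-arrival predictor on labelled past
# data. Features, labels and model choice are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy labelled history: [hour_of_day, intent_id, agent_queue_depth]
X = np.array([[9, 1, 2], [10, 1, 0], [14, 2, 5], [15, 2, 1],
              [9, 1, 1], [16, 3, 4], [10, 1, 0], [14, 2, 3]])
y = np.array([1, 1, 0, 1, 1, 0, 1, 0])  # 1 = call landed at this agent

model = LogisticRegression().fit(X, y)

# Probability that the next intent-1 call at 10:00 with an empty queue
# lands at this agent; this drives warnings and pre-emptive skill changes.
print(model.predict_proba([[10, 1, 0]])[0][1])
```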
The application component (302) is another independent AI based application component. This component's objective is to identify, in real time, images and objects that can cause data protection threats. To achieve this in real time, component (302) could use a well developed deep learning model for image recognition, or leverage existing APIs/libraries that are provided as part of the software tools in this AI framework. This function is activated as part of on-boarding the agent, to let the agent access his work desktop application once his environment state is considered safe for work to start and continue. The AI engine uses the already given big data of images/objects to identify various objects and scenes that correlate to a threat. Some already given objects could be the agent's face, an image of the door in the locked state, an image of a phone where the back camera is sealed, and many more. The application component (302), in addition to detecting the objects, uses rules to evaluate the threat during on-boarding. The component (314) may expose an image recognition API where, given past/training data for image recognition and the currently captured image that needs to be detected, the input image can be accurately detected and a response returned. It is considered that the AI component (302) will have an application that uses the image recognition API hosted in component (314). Alternatively, the image recognition can be done fully within component (302), where suitable deep learning algorithms can be built and deployed to detect certain objects of a given type. How component (302) achieves object detection is outside the scope of this solution. AI is a very mature technology, and this invention uses its already available rich features, which exist as implementations, exposed AI APIs or AI algorithms documented in many widely available materials.
The application component (303) uses the AI engine to help identify malicious agent behavior that poses a threat to personal data protection. The application (303) has the ability to track whether the agent voices out personal data during a call or outside a call. It employs the AI engine speech recognition framework available in component (314) to identify whether the agent is trying to voice out any sort of personal data at any time during his work time. It also monitors whether personal data is being projected on the agent screen while such a malicious act is being conducted; if so, the agent screen is locked. The application (303) achieves this locking by interfacing with the smart surveillance application mentioned before. Again, AI speech recognition is a mature technology; application (303) simply integrates a suitable speech recognition algorithm to achieve the objective of detecting the said malicious agent behavior.
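A minimal sketch of the matching step follows, assuming an upstream speech recognizer has already produced a text transcript of the agent's audio. The normalization and digit-run matching are illustrative; the specification only says speech recognition is used to match voice phrases to the personal data conveyed.

```python
# Minimal sketch of matching a speech transcript against stored
# personal data values. Normalization rules are assumptions.

import re

def quoted_personal_data(transcript: str,
                         personal_values: list[str]) -> bool:
    """True if any stored personal data value appears in the transcript."""
    digits = re.sub(r"\D", "", transcript)  # join spoken digit runs
    text = transcript.lower()
    for value in personal_values:
        v = value.lower()
        if v in text or (v.isdigit() and v in digits):
            return True
    return False

pd = ["S7654321A", "98765432"]  # e.g. an ID number and a phone number
# A real recognizer would emit the spoken digits as text; shown here
# pre-converted to numerals:
print(quoted_personal_data("the number is 9 8 7 6 5 4 3 2", pd))  # True
```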
The application component (304) has the functionality to continuously track environment safety throughout the agent's work time. This component (304) has the capability to track with highest priority an environment in which a call is already engaged to an agent and there is a need for personal data to be projected, or it is currently projected, on the screen. If component (304) uses parallel thread processing to detect threats in all the WFH environments, then such high priority environments are placed in the initial batch of this parallel threat evaluation.
The application components (308, 310) are where the AI engine does skill management. When a call is about to land at a given agent, if this is predicted and the environment threat is high, the AI engine ensures the skill is reduced appropriately so that the call does not land in the threatened environment. The skill reduction does not take into account the skills of the other currently available agents for the given intent; the reduction amount is primarily based on the threat level in the said agent's environment. The skill reduction as in component (310) and its management are done by the AI engine because it has the environment threat identification means and can quickly use this information to modify the skill. The revised skill and the related agent ID are communicated to the smart surveillance application. Also, if a call is in a given environment and a data protection threat occurs, skill reduction is again done, mainly so that this agent is not used for call transfers in the future; this is handled by the application component (308). Likewise, even when a call is not in a data protection threat environment and an environment protection threat occurs, skill reduction happens using the component (308), to ensure that another call does not get transferred to this data protection threat environment.
The application component (307) highlights the specific AI engine feature where a virtual AI assistant is enrolled or incorporated into a call session when there is no agent to which a call can be transferred from a data protection threat environment, and when waiting for a free agent would significantly impair the call quality. In such a case, the AI framework uses the text to speech function of component (307) to support, with personal data, the agent who is in a data protection threat environment. The data is conveyed via speech while the agent is in headphone mode. The AI assistant of component (307) is incorporated into the session and voices out the personal data information from the DB in speech mode. The component (307) is able to perform this text to speech service using well known deep learning mechanisms, generally formed using neural networks; the model is trained using various speech samples that correspond to the text. This solution does not prescribe which mechanism in the AI solution space should be used for text to speech conversion; it simply highlights that text to speech happens using some form of AI deep learning. The component (307) is also responsible for ensuring the AI assistant is incorporated into the session, and for the calling customer's personal data from the DB being correctly converted to speech and conveyed to the said agent.
The AI component (306) does AI based agent face detection, agent liveliness detection and agent environment threat detection related to personal data when the agent is to be re-onboarded after a threat incident. This component is activated when the agent requests re-onboarding by scanning the QR code using the smart surveillance application. The environment related personal data threat detection function of component (306) identifies various objects that pose a threat and uses rules to check environment safety by incorporating these detected objects into the rules.
After a skill is reduced, the AI engine has to restore the skill so that technically sound agents have their skill in place to continue serving calls with the best possible quality. The component (311) handles this. When a given agent's environment has no environment data protection threat, the agent is not detected acting maliciously, and such a compliance state persists for a certain number of days, the environment is considered safe and the skill is restored to its original value. This compliance detection and skill restoration are done by the component (311). If within this time period some degree of improvement w.r.t. personal data protection is seen when compared to previous such time periods, partial restoration of the skill happens; such detection is also done by the component (311).
The component (305) handles the agent integrity point restoration. Here the component tracks agent compliance as in component (311), but restoration happens in steps, not as a full restoration after evaluation of one time period. Full restoration happens after many such compliance periods.
The component (313) handles the identification of the net data protection threat value at a given moment and plans the appropriate countermeasures based on the current state of the system, such as the available agents, the calls in queue for any agent of related skill, whether personal data is being projected on the screen, the time spent in the call when the threat happens, etc. Basically, the appropriate countermeasures are not static; they depend on what the threat is. This solution generally plans countermeasures based on the severity of the threat.
In yet another embodiment of the present invention (embodiment 11), various scenarios of the present invention are presented to better understand the solution. To understand these scenarios, the figures Fig. 5A, Fig. 5B, Fig. 5C, Fig. 5D and Fig. 5E are used.
The first scenario under consideration is about the configurations to be done by the relevant stakeholders before the system embedding the solution rolls out. This scenario also highlights how the AI system uses these initial configurations to subsequently plan the suitable initial state of its system variables. Fig. 5A can be referenced for this.
In the flow chart shown in Fig. 5A, the component (500a) highlights that initially the agent's face images have to be given to the back end. It should be understood that it is not one agent's face images that need to be given to the back end but a plurality of agents' face images, since the system comprises many agents. Although reference is made to a given agent for illustrating the scenario, the illustration is applicable to any agent who is part of the system where the solution is running. The said agent will be part of this AI based surveillance system and thus needs to be authenticated daily using face recognition. It is not just one image of the agent: many images have to be given to the back end to attain such face authentication with accuracy. Also, the general data protection threat level judgement for the agent's WFH environment is given to the back end as part of initial rollout (this threat level value can simply be high, medium or low; it is the agent's interpretation of his WFH environment, and it has to be approved by the supervisor before insertion into the back end; how this value is inserted is outside the scope). The said threat level is given to the system by either the agent or the supervisor.
The agent face images are needed by the face recognition AI algorithm to authenticate the live captured agent face by accurate matching or similarity identification against these pictures/images. To achieve this objective, the AI engine has to be trained to be able to match the seen real face against the many images. To train the AI system for face recognition and subsequent authentication, many images of the same agent are considered to be given during the AI training/learning process, so that a relevant machine learnt mathematical model is built. This mathematical model possibly has the capability to use biometrics or some other means to match the real face with the images. It is considered that the agent face images used for training also remain in the system, so that the AI engine can continuously improve its learning where possible, using both the initially uploaded training images and the real images captured while live face recognition is happening. The details of the training and of the finally achieved machine learnt model are outside the scope of this invention. The solution may also use an open source AI API to achieve the face recognition.
Based on the initially given threat value (i.e. given by the agent/supervisor) for the data protection threat in the agent's WFH environment, the system reduces the agent skill (or does not reduce it, if no threat is given for the WFH environment) to an appropriate starting value at the rollout stage for the given agent. This is done by the subsequent method in the flow chart, which is (501a). The said rollout time/stage means the very beginning of the system w.r.t. a given agent. After the initial rollout for the agent, the agent does a daily login to access the system; this daily login is not the initial rollout. This initial adjustment of the skill is done so that an agent whose WFH environment security is not good can generally be avoided for handling calls that need personal data. The AI engine can detect threats on its own, but the initial skill value is an input from the agent, which helps the system in addition to the AI detection of such events.
It is further considered that this solution system has the capability to assign priority to skills and use that priority. This priority metric is only used to pick a skill/agent during call handling decisions when there are multiple agents with the same skill within an intent; in such a case, the priority metric is used as an additional metric to choose the agent for the call. Herein, picking a skill for a call implies picking the agent. The priority for skills (or agents) can also be assigned to a newly enrolled agent, as highlighted in step (501a). At the initial rollout stage, or at any stage, the priority value is the agent integrity point. The integrity point reflects how well the agent has kept his WFH environment threat free w.r.t. personal data from the moment of getting access to the system since rollout. At the rollout stage, all agents have the same integrity point value. The call routing logic is always mainly based on the skill value, and this priority value is an additional metric to choose an agent in case the skill value is the same for a group of agents within the same intent. In addition to skill and priority based agent identification, when deciding on call assignment to an agent, the system additionally checks whether the chosen agent's WFH environment is data protection safe before the assignment. If an agent is chosen based on skill, priority, etc. and the current environment is not safe, this agent is not considered for handling the call. This priority assignment can be done at the initial/rollout stage and during operation time for a given agent. Priority assignment herein refers to assigning the integrity value to the priority value: whenever an agent's integrity point is updated, the priority value is updated.
The current value of any skill related to an agent is based on the starting value of the skill at rollout time, the data protection threat events that occurred at the system's periodic data protection threat evaluation points, the data protection threat score at every system related evaluation point, additional threat evaluation points due to a new intent related call arriving in the system, system threat evaluation points from a no-threat to threat state transition in the agent WFH environment, and the corresponding skill reduction amount related to the threat value/score. Every time the system checks for a data protection threat for skill evaluation, whether periodically or event based (such checks are frequency based, triggered by sudden new threat events, and performed when a call has arrived and the given agent environment is one possible environment for the call), the system identifies the threat level. The skill reduction possibility is also checked when a call is about to be transferred to another agent and there is a data protection threat in that environment; in such a case, the other agent's skill is also reduced. If the threat level is high, the skill reduction amount is also high; if the threat level is medium, the reduction amount is medium; and if the threat level is low, the reduction amount is low. If the threat level is none, there is no modification to the skill. Thus there is a possibility that within an intent there are many skills of the same value, because skill modification happens in each agent environment completely independently of the other agents, due to the above mentioned factors. In such a case, the system assigns a priority value within the same skill cluster for the given intent. Higher priority is given to skills that have higher integrity points (priority equals integrity point). The integrity points are also reduced using the same method as illustrated for skills during operation time: whenever the algorithm checks for the possibility of a threat and the deduction of skill due to a data protection threat, it also modifies the integrity points when skills are reduced. Agents of the same skill value are then prioritized based on the priority value when the system evaluates the usage of an agent for handling a call. If, in a very rare case, the integrity points/priority values are also the same, then random picking of the agent is done for call handling when a given cluster of agents has the same skills and the same priority points. The reason the integrity point is used for priority is that at the rollout stage all agents have the same integrity point, and hence its current value definitely highlights which agent is more trustworthy as far as data protection compliance is concerned.
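A minimal sketch of this routing rule follows: pick the highest skill, break ties on priority (equal to integrity points), break remaining ties at random, and skip agents whose environment is currently unsafe. The Agent record and its field names are illustrative assumptions.

```python
# Minimal sketch of skill + priority based agent selection with the
# environment-safety pre-check described above.

import random
from dataclasses import dataclass

@dataclass
class Agent:
    agent_id: str
    skill: int        # skill for the call's intent
    integrity: int    # integrity points, doubling as priority
    env_safe: bool    # current WFH environment safety

def pick_agent(available: list[Agent]) -> Agent | None:
    safe = [a for a in available if a.env_safe]
    if not safe:
        return None  # no safe agent -> warnings / callback handling
    best_skill = max(a.skill for a in safe)
    top = [a for a in safe if a.skill == best_skill]
    best_priority = max(a.integrity for a in top)
    tied = [a for a in top if a.integrity == best_priority]
    return random.choice(tied)  # rare same-skill, same-priority case

agents = [Agent("a1", 80, 90, True), Agent("a2", 80, 95, True),
          Agent("a3", 95, 70, False)]
print(pick_agent(agents).agent_id)  # 'a2': a3 is unsafe, a2 wins on priority
```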
Next, the component (502a) is looked into. Here the rules are inserted into the system. These rules can be hard coded or can be configured, but in order to enable customization of the rules, this solution highlights configuring the rules during initial rollout. These rules are used to detect the data protection threat based on the objects identified by the AI system. Based on the rules configured, the application can do only the needed threat evaluation rather than running the complete list of rules the application is built to support.
Next, using the said figures, another solution operation scenario is highlighted to better understand the solution operating in one of its preferred implementations. One skilled in the art would know that there are many ways of implementing the solution without deviating from its high level principle, and the scenarios illustrated here are some of the preferred ways the solution would operate.
This next scenario covers how the system behaves/operates in a given agent environment when a call has hit the system but is not predicted to land in that agent environment, i.e. it has a low probability of landing at the said agent (e.g. the call is for another intent but needs personal data). What happens in the said agent environment for such a scenario is explained by highlighting certain operations using Figs. 5B and 5E. Initially, when the agent needs to access his work desktop application for the given day, the system ensures the agent work environment is safe w.r.t. personal data protection, checks whether the agent is live, authenticates the agent using face detection, and checks that the work environment shown via the webcam capture is really the agent's work environment. Once all said checks are cleared, the agent is allowed to access the work desktop application. This is done using step (504b) as shown in Fig. 5B. After the agent has logged in, the system continuously checks, when a call lands, whether the call has a high chance of reaching the said logged in agent. This is done by checkpoint step (505b). If step (505b) evaluates to 'no', the scenario navigates to the processes and checkpoints illustrated in Fig. 5E. When the call has landed at another agent (as in this scenario), the system ensures that the agent who does not have the call has no permission to access the personal data of the customer related to the call, or any other personal data, through the desktop application. This process is shown by step (521e). Subsequently, the system continues checking the environment safety of the said agent where the call did not land; this is highlighted in step (522e). It is essential to understand that the environment has to be continuously monitored although the call did not land in the agent environment, to make sure the environment is safe for future calls and for call transfers from another environment if needed. Such an environment safety check prepares a safe environment early, even before a call lands. Since there is no indication of a call landing at this agent environment, the AI system can lessen the frequency of the environment check during such a period (i.e. a period where a call is not predicted to land in the given agent environment).
As mentioned previously, one of the important checkpoints in such a case (i.e. after an agent has logged in) is simply to take note of additional people seen through the restricted webcam view in the given agent WFH environment, so that when a given call is about to land here, the system can check whether all the faces seen have left the environment. It is not only checking whether no additional people are seen at a given point, but also whether people seen before have left through the door. To realize this goal, the AI system counts the number of people during this phase. This additional check counts any new faces seen, other than the agent, through the restricted camera view. It is important for the system to keep the count: only then can it detect whether these people have left, by monitoring their leaving process. This is because the system has a limited camera view during operation time, and simply seeing no other people is not adequate proof that these people have left; they could still be in the room but not captured in the camera view. Having identified these faces, the system requests each of them to leave the room through the door with their faces clearly seen by the AI system. A warning is sent to the agent, and the system monitors the leaving of these faces. This warning frequency will be higher when a call is about to land. The warning can be sent to the agent desktop work application and/or by SMS. This happens even before a call is predicted to land in this environment. Additionally, if the AI system has counted that the people have left, and an additional camera is present, then by turning on this additional camera integrated into the system, the AI engine is able to fully verify that there is no other person in the room besides the agent. The additional people check, and the check that they have left, is not based on the usual face recognition: usual face recognition has the agent's face in the DB, while here the system detects human images arbitrarily appearing in the environment, stores these images dynamically, and then ensures that when they leave via the door, the leaving images match the captured images of these people. This identification of arbitrary people entering and leaving is also managed by the AI engine's face recognition technology.
Again the system checks whether any call will arrive for the said agent in the next 5 minutes (for example, a callback is predicted to happen in 5 minutes), and this is done in step (523e) in Fig. 5E. If step (523e) evaluates to 'yes', the system checks the state of clearance after step (522e). If the environment has not been cleared after (522e), the system generates a warning with a slightly higher frequency to the said agent to clear the environment. This warning states what has to be cleared, and the system monitors the clearing process. For example, if 3 additional faces were seen, these persons have to leave the environment, and this has to be proved to the system either by turning on an additional camera or, alternatively, by the system checking the event of each person leaving through the main door. The AI engine, as mentioned before, has the ability to detect an identified person leaving through the door. If the said step (523e) evaluates to 'no', the system continuously monitors call arrival into the system and any chance of a call hitting the said agent. Basically, this scenario is about how the system handles all agents that are part of the system but have no direct involvement with a given call.
Next, another solution operation scenario is illustrated. Here the call arrives at an agent and personal data needs to be viewed to handle the call, but there is no environment-related data protection threat in the agent environment. This scenario can be explained using the processes and checkpoints/conditions in Fig. 5B and Fig. 5C.
After the agent is given access to the agent desktop as in Fig. 5B step (504b), whether a call is about to land in this given agent environment is predicted by step (505b). When step (505b) predicts that the call is about to land in the said agent environment, the system checks for environment-based data protection threats based on many rules, such as phones in the view of the webcam, the door being unlocked in the view of the webcam, the agent moving away from the webcam view, multiple persons in the view of the webcam, etc., as mentioned in step (506b) in Fig. 5B. These checks are done because the agent is predicted to get the call. If an agent is planning to leave the work environment for a short while for some reason, he has to put himself into 'unavailable' or 'Aux' mode and update that mode value in the system. In such a case, the AI system will not perform the above-mentioned environment-based safety checks in the agent environment at all; the said checks are only done when the agent is in the 'available' state. If the agent comes back after a break, he must change his availability mode from 'unavailable' to 'available' and this must be updated in the system; the agent can use his work web application to make such a state change. When the agent is 'available', the system again checks whether the environment of the agent to whom the call is predicted to arrive is safe w.r.t. personal data protection. This is highlighted in step (507b). If step (507b) evaluates to 'yes', then as shown in step (508b) the system sends a warning with higher frequency to remove the threat from the environment.
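The gating of these checks on agent state could look like the following minimal sketch (the state strings and the returned warning levels are assumptions):

```python
# Sketch: environment safety checks run only in 'available' state; a
# predicted call raises the warning frequency, per steps (506b)-(508b).

def check_policy(agent_state: str, call_predicted: bool):
    """Return (run_checks, warning_level) for the current agent state."""
    if agent_state != "available":      # 'Aux' / 'unavailable': no checks
        return False, "none"
    return True, ("high" if call_predicted else "normal")

assert check_policy("Aux", True) == (False, "none")
assert check_policy("available", True) == (True, "high")
```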
When the call has landed and personal data is going to be viewed, the system again checks whether the environment is safe. Whether personal data is going to be viewed or not can be evaluated based on input given by the agent to the system via the desktop application while the call is on. This checkpoint is highlighted by step (511c) in Fig. 5C. If this checkpoint evaluates to 'yes', the process in step (514c) is executed: the moment personal data is projected, all copy/paste clipboard functions are deactivated on the agent desktop. This clipboard copy/paste deactivation can be done by triggering another application on the desktop or within the smart surveillance application itself; it is generally considered that the smart surveillance application manages the clipboard deactivation function when personal data is projected on the screen. After step (514c), step (515c) is executed. Here, although no environment threat to personal data protection is present, the system still checks whether the agent is acting maliciously, such as passing the data to someone else via another session or sharing the screen that shows personal data. The system also checks whether the agent tries to voice out personal data, whether or not it is being projected on screen, and whether the said agent tries to record personal data; details of these checks have been shared in other embodiments. If the system detects such malicious acts impacting personal data protection, the screen is blocked and the call is transferred to another agent where possible, in an autonomous manner. Additionally, the agent's skill value and integrity points are reduced when such malicious agent behavior is found. The subsequent treatment for this malicious behavior was revealed in another embodiment.
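One possible way to realize the clipboard deactivation of step (514c) is sketched below, using the real pyperclip package to continuously blank the clipboard while personal data is on screen; the surrounding application hook is an assumption:

```python
# Sketch: while personal data is projected, keep overwriting the clipboard
# so that copied personal data never survives a paste attempt.

import threading
import pyperclip

class ClipboardSuppressor:
    def __init__(self, poll_interval_s: float = 0.2):
        self._stop = threading.Event()
        self._interval = poll_interval_s

    def _scrub(self):
        while not self._stop.is_set():
            pyperclip.copy("")           # blank out anything copied
            self._stop.wait(self._interval)

    def __enter__(self):                 # activate while data is projected
        self._thread = threading.Thread(target=self._scrub, daemon=True)
        self._thread.start()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._stop.set()                 # restore normal clipboard use
        self._thread.join()

# with ClipboardSuppressor():
#     project_personal_data_on_screen()  # assumed application call
```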
If the agent informs via the agent desktop application that personal data is not needed for this call, the system will not send the personal data and will also stop the continuous personal data checking for this agent work environment. Any malicious agent threat, such as the agent trying to copy personal data, is considered of very high/highest severity; in such cases the treatment is always to block the screen and find a way to transfer the call to another agent.
Alternatively, if the call has landed at the agent and the AI engine detects that the environment has a data protection threat, the operation goes to step (512c). If an environment data protection threat and a malicious agent threat are simultaneously present, the net threat is considered highest and the countermeasures are the same as for the highest evaluated threat. The operation goes to step (512c) when step (511c) evaluates to 'no' as shown in Fig. 5C; it is considered that step (511c) mainly checks only the environment-based data protection threat. In step (512c) the threat level is detected by assigning a threat score to each threat event. One of the highest threat scores is assigned when a camera is used while personal data is projected. After the threat score is computed, it is further determined whether the threat is high or highest; this checkpoint is shown by step (510c). If (510c) evaluates to 'yes', step (513c) is activated; if it evaluates to 'no', step (509c) is executed. Step (513c) highlights that when the environment threat is evaluated as high or highest, the system will want to transfer the call to another agent. If the environment threat is high and personal data is projected on the screen or voiced out, or the CCTV has been maliciously turned to 'On' mode when it was supposed to be integrated in 'Off' mode, or the CCTV integration has been removed, then the screen is locked/blocked from accessing the agent desktop application and a call transfer is planned. As explained before, the transfer is always planned to another agent who can serve the intent, provided his work environment is safe. If the call has not been held long to reach the agent, the call is still at an early stage with the agent, and the user prefers a callback rather than being put on 'hold', then the call is put into high-priority callback mode.
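The threat scoring of step (512c) could be organized as in the following sketch; the individual event scores are illustrative assumptions, the source stating only that a camera used while personal data is projected carries one of the highest scores:

```python
# Sketch: assign a score per threat event and bucket the worst one
# into a severity level used by checkpoint (510c).

THREAT_SCORES = {
    "camera_while_data_projected": 100,  # stated as among the highest
    "cctv_integration_removed": 90,
    "personal_data_voiced_out": 80,
    "door_unlocked": 40,
    "agent_away_from_view": 30,
    "phone_in_view": 25,
}

def classify(events) -> str:
    score = max((THREAT_SCORES.get(e, 0) for e in events), default=0)
    if score >= 90:
        return "highest"
    if score >= 70:
        return "high"
    if score >= 30:
        return "medium"
    return "low" if score > 0 else "none"

assert classify(["phone_in_view"]) == "low"
assert classify(["door_unlocked", "camera_while_data_projected"]) == "highest"
```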
Step (509c) highlights the general treatment of medium- or low-level environment-related threats to personal data protection: either the personal information is shown in a restricted area of the screen, or the agent is informed of the personal data via the AI assistant and has to receive this data with the headset in 'on' mode. In both steps (513c) and (509c), the skill and integrity points are reduced according to the environment data protection threat level. The system also has a method whereby the personal data is only projected when the environment is considered safe: during a call, if the agent requests personal data while an environment threat is present, the system does not lock the screen but simply does not send the data to the screen. This way, unnecessary locking of the screen is avoided. However, in some cases, while the agent is looking at the personal data, an environment threat can be suddenly induced, which then requires a call transfer and screen locking.
The next scenario describes what happens when such a high-threat event occurs and the agent desktop application is blocked. The agent desktop is generally blocked only when personal data is shown on the screen or voiced out and a threat of a very high level is induced either by the environment or by the agent, or both threats together reach a very high level. Even if a threat is present, if the agent shows compliance the agent screen is not locked unnecessarily. The screen is locked only when personal data is projected on screen and a high data protection threat is present, or, even with no personal data projected on screen, when the agent voices out personal data in an audible manner or the CCTV integration is not in a proper state. In this second blocking scenario, the screen is locked purely as a re-evaluation of the agent as malicious by the system. When the agent screen is locked, no call can be routed to this locked agent. After the agent screen is locked, step (516d) is performed: as highlighted in Fig. 5D, step (516d), the agent and the supervisor are notified of this lock event using separate means. The screen unlocking time may vary; it depends on past events and the total integrity point reduction resulting from those past threat events. The lock period is decided based on this reduction in integrity points: if the integrity point reduction is significant, the screen lock period will be proportionately longer.
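A minimal sketch of deciding the lock period in proportion to the integrity-point reduction follows; the constants are assumptions, as the specification states only that a larger reduction yields a proportionately longer lock:

```python
# Sketch: lock period grows linearly with the agent's total
# integrity-point reduction, per the description around step (516d).

BASE_LOCK_MINUTES = 15          # assumed minimum lock period
MINUTES_PER_LOST_POINT = 2      # assumed proportionality constant
MAX_LOCK_MINUTES = 8 * 60       # assumed cap of one working day

def lock_period_minutes(integrity_points_lost: int) -> int:
    period = BASE_LOCK_MINUTES + MINUTES_PER_LOST_POINT * integrity_points_lost
    return min(period, MAX_LOCK_MINUTES)

assert lock_period_minutes(0) == 15
assert lock_period_minutes(30) == 75
```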
If the agent wants to work again, an appropriate unlocking passcode is needed. This can only be given by the supervisor, using the customized supervisor regular application or the smart surveillance application, after he verifies the incident that caused the screen lock. In addition to the current event, the supervisor can check the total integrity points and previous incidents and decide whether to unlock the screen. This system operation step is highlighted as (517d). During this re-login process, the system checks, using the skill values, whether the said locked agent can be re-assigned to other intents that do not need personal data. If the agent's integrity points are very low and such a possibility exists, this agent may be removed from the intent and re-assigned to another intent that does not require personal data to be viewed; this is done in step (519d). Step (520d) highlights the operation where the agent re-logs into the system after getting approval from the supervisor. The supervisor may give the passcode, or the agreement for passcode generation, to the smart surveillance application, which enables re-login into the system. After such re-login, the system again tracks the compliance of the logged-in agent; if he shows compliance, the skill and integrity points are restored after some time. This is shown by step (518d) in Fig. 5D.
In yet another preferred embodiment (embodiment 12), the operation during re-onboarding of the agent after the agent's work desktop application gets locked is briefly explained by means of the sequence diagram shown in Fig. 6. The main entities involved in this re-onboarding process are the smart phone with the smart surveillance application running in its browser, shown as component (600); the smart surveillance application running in the agent desktop browser, shown as component (601); the smart surveillance web application hosted in the webserver, shown as component (602); and the AI engine that informs the data protection violation incident details to the smart surveillance application, shown as component (603). Component (603a) is the customized supervisor desktop application, which is used to get the supervisor's agreement about re-onboarding the agent after his screen was locked.
The design principle during re-onboarding is that an additional first-level authentication is done by means of a QR code. Using the decrypted QR code, if the system validates that the agent whose screen was locked is the one attempting re-login, the supervisor application will decide whether to generate a passcode for re-login. The decision to generate the passcode is based on indices such as the incident-related video and the agent integrity points. The agent attempts re-login using a passcode that is randomly generated by the system once the QR-code-based first-level authentication is satisfied and the supervisor approves the re-login. When re-entry with the randomly generated passcode is attempted, the system finally approves the login only after an additional second-level authentication is done by the AI engine. In this second-level authentication, the system checks that the agent's face matches the approved agent face, that the agent is live, that the agent's emotion is as demanded by the system, that the agent work environment is safe, and that the work environment shown really belongs to that agent. Based on all these factors, the second-level authentication is done and the agent screen is unlocked. The details of this logic are further explained by means of the message sequences shown in Fig. 6.
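The QR-code-based first-level authentication could be sketched as below; the qrcode and cryptography packages are real, but using Fernet to encrypt the agent ID is an assumed implementation choice, not prescribed by the specification:

```python
# Sketch: encrypt the agent ID, render it as the QR code on the locked
# screen, and validate the scanned payload against the DB of agent IDs.

import qrcode
from cryptography.fernet import Fernet

SECRET = Fernet.generate_key()   # in practice a securely provisioned key
fernet = Fernet(SECRET)

def make_lock_screen_qr(agent_id: str, path: str = "lock_qr.png") -> str:
    token = fernet.encrypt(agent_id.encode())
    qrcode.make(token.decode()).save(path)  # projected on the locked screen
    return path

def validate_scanned_token(token: str, known_agent_ids: set) -> bool:
    """Decrypt the scanned payload and confirm it is a valid agent ID."""
    try:
        agent_id = fernet.decrypt(token.encode()).decode()
    except Exception:
        return False
    return agent_id in known_agent_ids
```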
As soon as the agent screen has been locked, the AI application (603) sends information as to until what time the agent screen has to remain locked. This lock period information is sent to the application (602), as shown by message (604). When the smart surveillance application (602) receives the screen lock and time information, it informs the customized supervisor desktop application (603a), using message (604a), that the screen is locked and has to be unlocked.
Once the agent knows his agent desktop screen has been locked by (601), he uses the smart surveillance application on his smart phone to scan the QR code appearing on the locked screen. This scanned QR code, in its encrypted form, is sent to the smart surveillance application (602); this QR code passing message is shown by (605). The smart surveillance application decrypts the QR code and verifies that the decrypted value, which is an agent ID, is a valid agent ID with an entry in the DB. Upon such valid agent ID identification, an ack is sent by the smart surveillance application; this ack message is shown by (606). In parallel, the customized supervisor app (603a) generates an SMS and also generates an alert for the supervisor in the customized supervisor desktop application. The supervisor can log into application (603a) and review whether the agent can be logged in again. If the agent is identified as eligible to log in again, the supervisor informs the 'Ok' state to the system via the application (603a). This 'Ok to re-board' information is then sent to the smart surveillance application (602) via message (608). The smart surveillance application then generates a passcode and sends it to the smart phone via SMS; this is shown by message (609). Using the passcode received on his smart phone, the agent attempts re-login via the agent desktop application (601). Once the passcode is submitted in message (610), the smart surveillance application sends a message (611) to the AI engine (603) to perform the second-level authentication and checks. The AI engine performs the various checks mentioned before to decide whether re-onboarding with screen unlock is allowed. Once the checks are successful, the AI engine (603) sends message (612) informing the smart surveillance application (602) that the agent can successfully log in after the lock period has expired. This agreement for re-boarding is sent to the agent desktop screen running the smart surveillance application in the browser (601) using message (613) from the smart surveillance application (602).
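The randomly generated passcode of message (609) might be produced as in this small sketch, using the standard library's CSPRNG; storing only a hash for the later comparison against message (610) is an added assumption:

```python
# Sketch: generate the re-login passcode sent by SMS and a hash for
# verifying the agent's later submission.

import hashlib
import secrets
import string

def generate_passcode(length: int = 8) -> str:
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def hash_passcode(code: str) -> str:
    return hashlib.sha256(code.encode()).hexdigest()

code = generate_passcode()        # sent to the smart phone via SMS
stored = hash_passcode(code)      # kept by the surveillance application
assert hash_passcode(code) == stored
```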
Next, in yet another preferred embodiment (embodiment 13) of the present invention, the operation of the smart surveillance application is explained. This is one of the better ways of implementing the smart surveillance application, but one skilled in the art would know that there are many other ways of implementing it to bring out its core functionalities. To further explain the operation of the smart surveillance application, reference is made to Fig. 7. As explained before, the smart surveillance application as shown in Fig. 1 has a communication interface with the AI engine.
Through this communication interface, all details such as the session ID, countermeasures, changed skill value, updated integrity point value, threat state, etc. are sent. These need to be stored in a DB accessible by the smart surveillance application and related applications. The history of information sent by the AI engine and the current information sent by the AI engine cannot be stored completely in the cache memory of the smart surveillance application; thus it is essential to store this information in a DB on the customer premises. This DB information is accessed by various other applications embedded in the solution space. This step related to the smart surveillance application is shown by step (400) in Fig. 4. Subsequently, a checkpoint is evaluated as highlighted in step (401): whether any counteraction/countermeasure has been given by the AI engine. If step (401) evaluates to 'no', the smart surveillance application continues to update its DB and listens for API calls from the AI engine. If step (401) evaluates to 'yes', there is an additional checkpoint (402), which checks what countermeasures have to be implemented in the agent desktop application when a WFH-environment-related threat is present. If this checkpoint evaluates to 'yes', process (403) is executed: the smart surveillance application informs the customized agent desktop application, as mentioned in Fig. 1, how the personal data can be presented in a modified manner on the screen, for example showing the personal data with smaller fonts, showing it only in the middle of the screen, showing it for a shorter time, not showing it at all because of a threat in the environment although the user requests it, and not showing personal data when there is no related call in the environment. These rules are informed to the customized agent desktop application by the smart surveillance application, as highlighted in step (403). If checkpoint (402) evaluates to 'no', the countermeasures due to the threat are a different set of actions, explained next. In addition to countermeasures, the AI engine can also inform the smart surveillance application of the end of the data protection threat state in a given WFH agent environment; based on this end-of-threat state, all the screen-related countermeasures come to an end. When the end of the threat is signaled for a case where (402) had evaluated to 'yes', the smart surveillance application communicates with the customized agent desktop application to stop showing the personal data in a restricted manner; this is shown by step (404). It is important to note that the end-of-threat reversion only happens for screen projections: for call transfers, once the threat is cleared, the call is not transferred back to the original/previous agent, since multiple agent handling within a call lifetime degrades the QoS of the call. Step (405) then highlights the smart surveillance application's role in ensuring the AI assistant is brought into the current call session; it is considered that the AI assistant involvement is indicated by the AI engine to the smart surveillance application, and this application simply passes the information to the relevant application to trigger the correct action.
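To make step (403) concrete, the following is a minimal sketch of how such screen-presentation rules could be expressed; the rule names and values are assumptions mirroring the examples in the text (smaller fonts, centered region, shorter display time, no display without a related call):

```python
# Sketch: translate the AI engine's countermeasure state into presentation
# rules handed to the customized agent desktop application.

def display_rules(threat_active: bool, call_related: bool) -> dict:
    if not call_related:
        return {"show": False}            # no related call: never project
    if threat_active:
        return {
            "show": True,
            "font": "small",
            "region": "screen_center",
            "max_display_seconds": 10,    # assumed shortened display time
        }
    return {"show": True}                 # threat ended: normal projection
```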
The smart surveillance application, by triggering the customized agent desktop application, will trigger the call routing application to include the AI assistant in the call session by giving it an agent ID. Subsequently, step (406) shows the role of the smart surveillance application in providing the reduced skill value, as part of a threat countermeasure, to the customized agent desktop application and subsequently to the call routing application. The reduced skill value is a countermeasure of the AI engine against the threat. This value is given to the call routing engine so that it, too, can independently choose the agent if no information is sent by the AI engine. However, the call routing application should use a process identical to that of the AI engine when picking the skill/agent within the given intent; it is considered that the DB accessible to call routing has all the information needed for such a mechanism, including the threat state in the agent environment. The skill value is needed by the call routing application, which clones the agent ID identification method used by the AI engine to pick the needed skill for a given intent. Step (407) highlights that when an extreme personal data protection violation event is received by the smart surveillance application from the AI engine, the smart surveillance application completely blocks and locks the agent screen; the locked screen shows the QR code. Step (408) highlights the smart surveillance application's role in validating the QR code and supporting the re-onboarding of the agent after being locked. In steps (408) and (409), the smart surveillance application communicates with the AI engine to retrieve the face detection check result, agent emotion check result, liveliness check result, environment safety check result, environment ownership check result, etc. before finally allowing the agent to log in.
Step (410) highlights a method whereby the information to be viewed by the supervisor to allow re-boarding of the agent is placed in the DB, and the supervisor is alerted by SMS or other means by the smart surveillance application or the customized supervisor desktop application. The alert/SMS specifically states that the given agent environment threat was high, that the agent screen is locked, and that a review by the supervisor is needed. In step (411), after the agent is re-boarded, the smart surveillance application may receive the restored skill value if the agent has shown compliance over a period of time; in such a case, this restored skill value is given to the customized desktop application to be passed on to the routing engine. The skill value is restored when no threat has been detected in the given agent environment over a period of time.
Step (412) highlights that, in case of threat, when the call has to be transferred to another agent or another agent has to be involved to whisper the personal data, such information is retrieved by the smart surveillance application from the AI engine (the AI engine will generally give such information using the communication interface, and this event can be continuously listened for by listener methods running in the smart surveillance application) and given to the customized agent desktop application and finally to the call routing application. Step (413) highlights that when the agent moves away from his seat without explicitly putting his state to 'Aux', such a move is detected by the AI engine and informed to the smart surveillance application. In that case, the smart surveillance application passes this information to the call routing application via the customized agent desktop application. The call routing application immediately considers the agent as unavailable for call handling and attempts to transfer any call happening in that environment to another agent. The said call transfer can also be initiated by the AI engine. When such an 'Aux' mode is informed by the AI engine, the smart surveillance application checks with the customized agent desktop application whether personal data is shown; if personal data is known to be projected, the smart surveillance application blocks the screen and locks the agent out.
In addition to informing the changed skill, the relevant agent/skill to handle a call is also detected by the AI engine and given to the smart surveillance application; this is shown as step (414). This ensures that an appropriate agent of the related skill, who does not have a WFH environment protection threat, is used by the call routing engine. The AI engine is the best entity to pick this because it also has dynamic security knowledge of the environments. The call routing engine can use this information (the appropriate skill) to pick the agent for call handling, or it may use its own replica of the AI engine's decision process to identify the agent to handle the call in a WFH environment.
In the next embodiment (embodiment 14), it is highlighted that the solution can be active and achieve its goals to a reasonable degree even when the prediction mechanisms illustrated in the previous embodiments are not in place. The said previous embodiments have many prediction mechanisms to decide on the most suitable action when a data protection threat is detected in an environment. However, the solution is applicable even in WFH systems where such predictions cannot be made or are not made, and this alternate solution is described in the present embodiment.
If no prediction is involved in the solution, the following handling is done when a data protection threat happens in a WFH environment and a call is in the said environment or is about to land in that said environment.
In this said prediction-less system, if a call hits the system while the call intent is not known and cannot be predicted, whether personal data is involved for this said call is unknown and cannot be predicted, and which free agent the call will be handed to is not known or predicted at this point, then a warning is sent to all the agents of the WFH system, and the warning frequency will be high for all agents.
If the call is about to land at an agent after traversing the IVR system, and at that moment the call intent is known (in almost all systems, at this point the call intent will be known), then the available agent is picked based on the skill for the said intent and agent environment safety. If an agent is available and his environment is safe, that agent is picked for the call; if not, another agent of a similar skill range is picked, and the process continues within the skill range, picking an agent of suitable skill and environment safety. Otherwise, agents of lower skill with a safe environment, assisted by AI analytics, are considered to handle the call. This handling, where the call is about to hit the agent, also happens when all the predictions are in place.
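The prediction-less agent pick described above might look like this sketch; the Agent fields and the preference for the highest safe skill are assumptions consistent with the text:

```python
# Sketch: within the intent's skill range prefer an available agent with a
# safe WFH environment, then fall back to lower-skilled but safe agents.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Agent:
    agent_id: str
    skill: int              # skill level for the call's intent
    available: bool
    environment_safe: bool

def pick_agent(agents, min_skill: int) -> Optional[Agent]:
    safe = [a for a in agents if a.available and a.environment_safe]
    in_range = [a for a in safe if a.skill >= min_skill]
    if in_range:
        return max(in_range, key=lambda a: a.skill)
    # Otherwise accept a lower-skilled agent with a safe environment.
    return max(safe, key=lambda a: a.skill, default=None)
```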
In such a prediction-less case, after the call lands in an agent WFH environment, a countermeasure happens only when there is a threat in the environment or the agent behaves maliciously. In this prediction-less system, whether personal data is applicable to this call is not evaluated: if there is a threat in the WFH environment and a call is happening in that said environment, an appropriate countermeasure is applied even without any prediction as to whether personal data will be used within the said call. If the personal data threat is high in the agent WFH environment and a customer call is happening in the said environment, a transfer to another agent is considered. Otherwise, as in the main solution, when the personal data protection threat is not high, the call is continued in the current environment, where the personal data is projected in a small section of the screen, or some AI assistance, or another lower-skilled agent to whisper the personal data, can be considered.
If, in the prediction-less system, the waiting time for an available agent to handle a call is not known for the initial call landing or a subsequent call landing such as a transfer, then the waiting time estimation is generally done based on the total number of calls currently present in the given intent queue and the average waiting time for the given call type. This said estimation is not a proper prediction as in the main solution.
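As a worked form of this fallback estimate, assuming the queue depth and an average per-call figure are the only inputs (dividing across the serving agents is an added assumption):

```python
# Sketch: prediction-less waiting-time estimate from queue depth and the
# average handling time for the call type.

def estimate_wait_minutes(queue_length: int,
                          avg_handle_minutes: float,
                          serving_agents: int) -> float:
    if serving_agents <= 0:
        return float("inf")
    return queue_length * avg_handle_minutes / serving_agents

# Example: 12 queued calls, 4 min average, 3 agents -> 16 minutes.
assert estimate_wait_minutes(12, 4.0, 3) == 16.0
```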
The detailed predictions in the previous embodiments help the solution avoid unnecessary actions. It is also important to understand that the predictions are probability-based; as with other AI mechanisms, there can be a degree of error in them. But in all cases, the solution ensures that the security and the quality of the call are not affected, regardless of whether predictions are in place in the solution space, although with predictions much system-wide signaling can be reduced.
In general, a countermeasure in this solution is the system's reaction to a data protection threat happening in a WFH environment; in the broad sense it can incorporate warnings as well. In the next embodiment (embodiment 15), the criteria under which such countermeasures are issued in the solution are illustrated. A countermeasure relates a given data protection threat happening in a given WFH agent environment to how the system reacts to that threat.
When a data protection threat is present in an agent environment and no call that has hit the system is planned to be routed to this said threat environment, the countermeasure for this threat is 'null'. This can happen when the AI engine has picked another agent to attend the said call, or when the call's intent is not served by the said agent because the said agent has another set of skills tied to a different intent.
If the AI engine has picked/selected an agent to serve a call that has just hit the system and this agent's WFH environment has a personal data protection threat, then the immediate countermeasure is a warning to the agent environment and a skill modification for the said agent.
If the call has traversed the IVR upon hitting the system and is about to hit the agent selected by the AI engine, and the said agent environment is not safe, then countermeasures similar to those used when a personal data threat is present in an environment with an ongoing call are used to handle this new call case.
If the call has traversed the IVR upon hitting the system and is about to hit the agent selected by the AI engine, and the said agent environment is safe, then the countermeasure is 'null'.
If the call is ongoing in a given agent environment, the system knows that no personal data is needed to complete the call, and a threat is suddenly induced in this agent environment, then no countermeasure is applied to the call. The system can, in some cases, use AI prediction to estimate whether a call of the given intent needs personal data; alternatively, by simply knowing the intent value, the system may know whether personal data is needed. In another case, the agent himself can indicate in advance to the system that personal data is needed. Basically, the system can use various means to know this.
If a call is ongoing in a given agent environment, it is detected that personal data involvement is needed for this call from the current time until its completion, and a data protection threat has been induced, then the call is treated with a suitable countermeasure based on the severity of the threat and other system variable states.
If a call is ongoing in a given agent environment, personal data is shown on the screen, and a data protection threat has been induced, then the call is treated with a suitable countermeasure: the screen is blocked and an appropriate countermeasure is chosen if the data protection threat is considered high.
If a call is ongoing in a given agent environment, a sudden data protection threat is induced, and whether personal data will be used during the call cannot be estimated by the system, then a suitable countermeasure is chosen based on the severity level of the data protection threat happening in the WFH environment.
If a call is ongoing in a given agent environment, a sudden data protection threat is induced, and the system knows with a high confidence level that no personal data will be used for the call, then the countermeasure is 'null'.
These countermeasures are based on the severity and nature of the threat. If personal data is projected and the data protection threat present is high, then the initial counteraction is always locking/blocking the screen, and additionally a transfer to another agent or a virtual agent is planned.
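The criteria above can be condensed into a small decision function, sketched below; the state strings and action names are assumptions chosen to mirror the preceding paragraphs:

```python
# Sketch: map (call state, personal-data need, threat severity) onto the
# countermeasure classes described in embodiment 15.

def countermeasure(call_state: str, data_needed: str, threat: str) -> str:
    """call_state: 'none' | 'predicted' | 'ongoing'
    data_needed: 'yes' | 'no' | 'unknown'
    threat: 'none' | 'low' | 'medium' | 'high'"""
    if threat == "none" or call_state == "none":
        return "null"                          # no call routed here
    if call_state == "predicted":
        return "warn_agent_and_adjust_skill"   # call picked but not landed
    if data_needed == "no":
        return "null"                          # ongoing call, no data needed
    if threat == "high":
        return "lock_screen_and_plan_transfer"
    return "restricted_projection_or_ai_assist"
```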
This solution also issues warnings for data protection threats rather than immediate counteractions that change the route of the customer call. This is further elaborated in yet another embodiment (embodiment 16) of the present invention. This warning is a solution countermeasure that is only sent when the personal data is not yet projected on the agent screen. In this solution, when the AI engine identifies such an environment-originating data protection threat during agent work time, it does not immediately send alerts to the agent concerned; it additionally predicts whether a call that needs personal data is going to arrive at that particular agent situated in the threat environment. The system combines the data protection threat level in the agent environment and the prediction of a call arriving at the particular agent who will be using personal data into a countermeasure score for warning purposes. If the said score is greater than 0 and none of its contributors has a 0 value, a warning is sent to that agent; the warning frequency increases with the score value.
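A minimal sketch of this warning score follows; multiplying the two contributors is an assumption that naturally yields a zero score whenever either contributor is zero, matching the rule stated above:

```python
# Sketch: combine the environment threat level with the predicted
# probability of a personal-data call arriving, and warn more often
# as the score grows.

def warning_score(threat_level: float, call_arrival_prob: float) -> float:
    return threat_level * call_arrival_prob

def warning_interval_seconds(score: float, base: float = 120.0) -> float:
    """Higher scores warn more frequently (shorter interval)."""
    return base / (1.0 + score)

score = warning_score(0.7, 0.9)        # both contributors non-zero
assert score > 0                       # so a warning is sent
assert warning_interval_seconds(score) < 120.0
```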
Examples
This AI-assisted WFH-related solution's security feature and associated call routing feature are applicable to many industries, such as banking, telecom, travel, or any industry that needs contact-center-type call management with agents physically located at home and working from home, where the agent work scope needs some form of personal data of the customers.
The solution is also useful even when agents are working in their offices, to prevent any security threats to the personal data of customers. The solution's applicability scope is vast.
The solution is also applicable where call routing in general considers secure paths; this can be used by telecom industries for backbone routing too. The security feature of the solution, whereby less of the personal data is shown and personal data is shown only when needed, is applicable to many industries that handle personal data.
Claims (93)

  1. A system where artificial intelligence herein referred to as AI module is used to identify a customer's personal data protection threat originating in one or a plurality of private WFH environments that are interconnected to the said system; Wherein every WFH environment has an agent uniquely associated with it, with the said agent being the only allowed user to access his agent desktop work application in his own WFH environment, where the said agent desktop work application can be retrieved from his agent work desktop or some other appropriate device placed in his WFH environment; Where the said agent desktop work application is a means that is used to get the personal data of a given customer to be projected on the screen related to the work done by the said agent at a given point of his work time for the given day; The said work done by the agent could be attending to an incoming call coming from a said customer or initiating an outbound call or callback call to the said customer using applications running in the said system or otherwise; The said call can be an IVR call or a non-IVR call;
    Specifically, the said AI module in the said system has a means to identify the customer's personal data protection threat when the personal data is captured using a single device or a plurality of devices in the WFH environment, and this threat originating from a single or multiple persons in the WFH environment where the said person could be the said agent himself or any combination thereof;
    The said AI module in the said system also has a means to capture the personal data protection threat where it solely originates from a single person or a plurality of persons other than the agent in the WFH environment, where the said other person in the WFH environment voices out the personal data of any customer that belongs to the system in an audible manner in real time or using a recorded voice;
    The said AI module in the said system also has a means to capture the personal data protection threat whereby the personal data is being voiced out in an audible manner by the agent in his own WFH environment in a totally unrelated manner to the call handled by the agent or the work done by the agent;
    In the said system, also the said AI module has the capability to detect the said data protection threat in the WFH environment when the data protection threat is induced by the person outside the said agent’s WFH environment but using a personal data capturing device placed in the said agent’s WFH environment;
    In the said system, a means related to the said AI module to identify the customer's personal data protection threat when the said data is projected on the agent's work desktop screen or any agent work device's screen of the said agent and the said data is being maliciously captured by some means; The said capturing means could be a camera in the said agent WFH environment, any device that has a camera feature embedded and placed in the said agent's WFH environment, a video recording device in the said agent's WFH environment, video-based 3rd-party live interactive communication desktop applications installed or running on the agent work desktop and sharing the agent desktop screen that shows the personal data with another recipient, or malicious applications that record personal data projected on the agent desktop screen where the said application is running on the said agent's work desktop or on any other agent work device that can project the personal data of the customer on the screen;
    The said call in the said system in the most probable case would only involve a customer and the agent; The said call can also happen with one or multiple agents engaged within the call session; The said agents incorporated within the said call session could be real agents or virtual agents or a combination thereof.
  2. The system of claim 1 characterized whereby the said AI module in the said system is hosted in the cloud services environment where the said AI module has a means of utilizing the AI based cloud services.
  3. The system of claim 1 characterized whereby the said AI module in the said system is hosted in a private restricted environment and also use some of the AI libraries that are publicly available which can include the cloud AI libraries as well.
  4. The AI module in the said system in claim 1, to achieve this personal data protection goal engages many other supporting application modules hosted in the said system and communicates with these said other application modules using appropriate communication interfaces to reach them;
    Within the said system, the said other supporting application modules have a communication interface with the said AI module either by means of a direct communication interface to it or using an indirect communication interface to it;
    The said AI module and all the said supporting application modules are hosted individually in different servers or could be all hosted in the same server within the said system;
    One of the said other supporting application module is the smart surveillance application module that is hosted in the said system; This said smart application module interfaces with the said AI module using a given direct interface;
    Additionally, this smart surveillance application module is interconnected to the customized regular agent desktop module and the customized regular supervisor desktop module using appropriate communication interfaces; The said customized regular agent desktop module and the said customized regular supervisor module communicate with the agent skill based routing enabling application using yet another appropriate communication interface.
  5. The AI module in claim 1 operating in the said system has a means whereby it mainly uses an inbuilt camera embedded in the said agent's work desktop to capture the video streams from the WFH environment as an input for its personal data threat detection evaluations whereby the said personal data evaluations occur in real time during system operation; Additionally the said AI module has a means whereby it will use these real time video captures from the inbuilt agent desktop camera as additional training data where appropriate to enhance its prediction/estimation module w.r.t. personal data threat detection.
  6. The system according to the preceding claims and its encompassing modules/applications, such as the said AI module and the said smart surveillance application, also has a means whereby it supports any existing legacy CCTV camera in the WFH environment, or any recording camera of similar type in the WFH environment, to be incorporated or integrated into the said system so that the said CCTV or other said recording camera present in the WFH environment does not pose a threat to the personal data projected on the agent desktop screen in the said WFH environment, whereby in the said integrated state the threat-posing CCTV or similar other recording device can be put in an appropriate operation state, such as preventing it from recording the personal data related information projected on the agent desktop screen; Additionally, in this said integrated state, if the CCTV or similar other recording device is considered not a threat w.r.t. personal data protection as evaluated by the said system, then the CCTV operation can be allowed in its normal operation mode.
  7. The AI module of claim 6 has a means whereby when it identifies the said data protection threat from the said CCTV or similar recording device related to personal data at the beginning stage, on-boarding stage or re-onboarding stage of the agent to his work desktop application, it appropriately connects the CCTV or similar device in a video recording 'Off' mode to the said system; Once the said threat is detected during such an evaluation stage, the AI module has a means to communicate with the smart surveillance application to set the video recording state to 'Off' in the CCTV system's interfacing unit or other similar recording device's interfacing unit, whereby in this 'Off' state the integrated CCTV device or other similar recording device cannot record any views or images seen through its camera lens.
  8. The interfacing unit mentioned in claim 7 has a communication application running whereby it is able to communicate with the said smart surveillance application running in the said system to retrieve such application level control messages such as video recording ‘Off’ and react accordingly such as able to control CCTV operation based on the said message received.
  9. The AI module of claim 6 also has a means whereby if it detects the threat from the integrated CCTV or a similar device is not present w.r.t. personal data protection during the on-boarding time or re-onboarding of the agent to his work desktop application or during other similar said evaluation time, it will ensure the CCTV system is integrated with the video ‘On’ mode whereby video recording is allowed during the system integration period.
  10. The AI module of claim 9, once the said threat induced by the said CCTV or other similar recording device is not detected, has a means to communicate with the said smart surveillance application to set the video recording state to ‘On’ in the CCTV or similar recording device and subsequently the smart surveillance application is able to inform the said interfacing unit about the said ‘On’ state.
  11. The method in said interfacing unit as in claim 10 is such whereby the said video ‘On’ is interpreted by the interface unit as an agreement that the CCTV system or other similar recording device can be operated in a video recording ‘On’ mode while being integrated to the said system and subsequently the said interfacing module allows the said CCTV or other said similar recording device to operate in the video recording ‘On’ mode.
  12. The said interfacing unit as mentioned in any one of the preceding claims has a means whereby, before the CCTV integration mode is decided by the AI module, the said interfacing unit will ensure the CCTV recording is in 'On' mode and will also send the evaluation-related recording done by the CCTV, which comprises the 'test text message' image/video, to the AI module, where the said evaluation-related image/video recording will be used by the AI module to decide on the system integrated operation mode for the CCTV sub-system or similar recording sub-system thereof.
  13. The AI module in claim 6 has a means whereby during on-boarding time of agent to his work desktop application or re-onboarding time, will receive the captured CCTV images or video recording of the agent desktop screen during the time the agent desktop screen shows ‘test text messages’ which are of similar type to personal data information that will be projected on the screen and the AI module will be able to detect whether the said ‘test text message’ that is captured by CCTV camera is of a quality similar to human readable using its AI image based character recognition methods.
  14. The smart surveillance application as mentioned in any one of the preceding claims has a means whereby the said test text messages of claim 13 are projected by the smart surveillance application to support the CCTV threat evaluation during the on-boarding time or re-onboarding time or during any suitable safety evaluation time that needs such evaluation.
  15. The AI module of claim 13 has a means to analyze the images/video sent by the said CCTV or similar device thereof comprising of the ‘test text message’ as to whether it is captured using human readable clarity and whether the said captured message as a video image is indeed the test text message projected on the agent desktop work screen.
  16. The smart surveillance application in claim 14 and in any one of the preceding claims has a means whereby it will never project personal data of customers belonging to the system in its ‘test text message’.
  17. The AI module of claim 15 has a means to detect whether the CCTV captured image during the evaluation stage of CCTV integration mode is matching ‘test text message’ projected by the smart surveillance application in terms of fonts, font size, color, text message composition.
  18. The smart surveillance application of claim 13 and claim 14 has a means where for every on-boarding or re-onboarding time it projects the said test text message that is of similar font and nature to the real personal data that is projected on agent desktop screen during the said agent’s normal work time; The said smart surveillance application also has a means whereby it also ensures at every on-boarding or re-onboarding event a new set of such test text messages are projected on the screen that helps to decide on the integration state of the CCTV camera or other similar device in the WFH environment.
  19. The smart surveillance application of claim 13 and claim 14 has a means whereby once it detects from the AI module that CCTV integration is allowed in video recording 'Off' mode, it will send a video recording 'Off' command to the interfacing unit and subsequently monitor this 'Off' state; The said 'Off' state monitoring could happen using a heartbeat message, or similar message thereof, periodically sent from the said CCTV communication interfacing unit or similar device thereof to the said smart surveillance application.
  20. The smart surveillance application of claim 19 has a means whereby, when the CCTV 'Off' state is not detected by means of the heartbeat message or the heartbeat message is missing, it will ensure a warning is sent to the related agent by appropriate means.
  21. The AI module of claim 7 has a means whereby when it detects that, during personal data projection on the screen, the CCTV system is disconnected from the said system, or when the CCTV has to be in video recording 'Off' mode but is actually in video recording 'On' mode, it has a means of sending a message to the smart surveillance application to ensure the agent work desktop screen is locked.
  22. The smart surveillance application of claims 10, 14 has a means whereby once it detects from the AI module that CCTV integration is allowed in video recording 'On' mode, it will initially inform the CCTV interface unit that the CCTV can do its recording as per its normal operation; The said smart surveillance application, upon allowing the video recording 'On' mode, will also randomly send a request to the CCTV system's interfacing unit from time to time to get some video images to check on the threat induced by the CCTV, in case the position of the CCTV in the WFH environment has been maliciously shifted while in the CCTV recording 'On' mode; The smart surveillance application will request this random video footage only when personal data is not projected on the agent desktop work screen.
  23. The AI module interfacing with the smart surveillance application in claim 22 has a means whereby the said CCTV threat is again evaluated by the AI module using the images or video sent from the CCTV system from time to time.
  24. A system which has an AI module as in claim 1 or any one of the AI module based preceding claims, where the countermeasure for the said data protection threat related to a given WFH environment only takes place if there is a personal data threat in the said WFH environment, the call has a high chance of landing in the said threat environment or has landed in the data protection threat WFH environment, and also the said call has one of the following characteristics w.r.t. personal data usage: the said call needs personal data to be used during the call handling period of the said call, or the personal data usage cannot be estimated within the said call time, or the personal data usage within the said call can be estimated with high accuracy.
  25. A system according to claim 1 or any of the preceding systems in the preceding claims has a means whereby it implements and uses call intents such that, from the intent value of the call itself, whether the call needs personal data or not can be identified without any estimations or predictions.
  26. A system according to claim 25 has an operation means whereby if the intent value of the call, or the intent value tied to the call, implies that no personal data is needed to handle the call from the said calling customer, then the system prohibits the agent's request to view personal data tied to the said call's customer during the said call, where the said agent is the one handling the said call during his work time.
  27. A system according to claim 1 has an additional means whereby, when an intent value cannot indicate whether personal data is involved or not and is not designed in that manner per se, the relation of the call's intent value to the involvement of personal data can be estimated using the call routing path within an IVR call.
  28. A system according to claim 1, claim 24 and claim 27 has a means whereby, for calls where the personal data involvement is needed and this can be predicted with accuracy > 90%, and also the call will land in a data protection threat related WFH environment or has landed in a data protection threat environment, such calls will generally have countermeasures for call handling in place.
  29. A system according to claim 1 and claim 24 has a means whereby, when a call's usage of personal data cannot be estimated at all, the countermeasures are always in place when the personal data threat is detected in the environment and the said call will land in, or has landed in, such a threat-induced WFH environment.
  30. A system as in claim 1 and claim 24 where a call that needs personal data has completed the personal data usage, and if a data threat in the WFH environment happens after the personal data usage and the agent has notified the system that no additional personal data involvement is needed to complete the call, then in such cases, even if a personal data threat happens in the WFH environment, the countermeasures will not be in place.
  31. A system as in claim 30 where once the agent notifies the said system within a call session that personal data is not needed during the remaining time of the said call, then the system prohibits the personal data from being shown on the said agent's desktop screen within the said call session.
  32. A system according to claim 1, claim 24 and claim 27 has a means whereby, for a given call, if the personal data involvement is not needed and this can be predicted with accuracy > 90%, then only when the condition that a personal data threat is present in the WFH environment of the said call handling agent while the personal data is simultaneously projected on the said agent work desktop screen is true will the said system generally have countermeasures for data protection threat handling in place.
  33. A means of the AI module as in claim 1 where the AI module, as part of onboarding the said agent to access his work desktop application and personal data for a given working day, performs the following checks:
    During on-boarding of the agent to his work desktop application on a given day, the AI module checks whether the face seen through the camera is the agent assigned to access the work desktop application and the agent is part of the system;
    During this on-boarding time, the AI module checks whether the agent seen through the camera is actually live by means of emotion detection where the said emotion to be shown by the said agent is requested by the AI module;
    The said AI module further checks whether the said agent seen through the camera is trying to access his work desktop application from his own WFH environment;
    The said AI module also checks whether the agent's WFH environment, shown by the agent using a full video view during on-boarding, is safe w.r.t. personal data threats at the given moment and that the WFH environment seen by the camera does not pose threats after on-boarding the said agent, where after on-boarding the AI module only has a limited camera view.
  34. The AI module of claim 1 and claim 33 has a means whereby checks similar to those at on-boarding are also done when the agent work desktop application screen is locked and re-onboarding is needed for the given agent to access his work desktop application.
  35. The system according to claims 1, 4, 34 is such that the said smart surveillance application, the said customized supervisor regular application and the AI module together support the re-onboarding function of the agent once the said agent screen gets locked due to a severe personal data protection violation; The said smart surveillance application has a means whereby it approves re-onboarding of the agent to his work desktop application by a screen unlocking process only when the following are all satisfied;
    The encrypted QR code projected on the locked agent work desktop application which is scanned and retrieved and submitted to the smart surveillance application is equal to the screen locked agent’s agent ID;
    The AI module has indicated that the checks related to on-boarding as in claim 33 have passed during the re-onboarding process of the agent after the screen lock;
    Customized supervisor regular application gives approval to the smart surveillance application that the said agent whose screen is locked may access his work desktop application as part of re-onboarding after the locking period expires.
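  A minimal Python sketch of the claim-35 unlocking gate, assuming hypothetical condition names; the claim only requires that all conditions hold before the screen is unlocked:

    # Hypothetical sketch of the claim-35 re-onboarding gate.
    def approve_reonboarding(scanned_qr_agent_id: str,
                             locked_agent_id: str,
                             ai_onboarding_checks_passed: bool,
                             supervisor_approved: bool,
                             lock_period_expired: bool) -> bool:
        """Unlock the agent work desktop screen only when every condition holds."""
        qr_matches = scanned_qr_agent_id == locked_agent_id
        return (qr_matches
                and ai_onboarding_checks_passed
                and supervisor_approved
                and lock_period_expired)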
  36. The AI module in claims 33, 34 and 35 has a means whereby, when evaluating the safety of the WFH environment w.r.t. personal data protection at on-boarding and re-onboarding times, it uses the following rules, and only when the said WFH environment has satisfied all the below rules does the AI module approve on-boarding or re-onboarding (a sketch of this all-rules gate follows the list):
    Rule 1: The work area should be an enclosed area; only a single door is allowed and that door should be in the view of the AI module, which uses a restricted camera view during operation;
    Rule 2: The said door of the agent’s work space should be locked;
    Rule 3: Windows in the work area should be locked at all times, and if a given window is to be kept open during the agent's work time, then that window has to be in the view of the agent desktop's inbuilt web camera;
    Rule 4: Only the agent is allowed to stay in the bounded WFH work area; no other person is allowed to stay;
    Rule 5: The room lights are 'On' and the luminance level in the room is high;
    Rule 6: All additional phone cameras and dedicated cameras have to be kept aside by the agent at another location away from the work location; the agent's personal smartphone back camera must be sealed and blocked from capturing scenes/events;
    Rule 7: The agent has to be at his work desk facing the desktop;
    Rule 8: The system-integrated CCTV belonging to the agent's home has to be kept in the video recording Off state if the AI module evaluates it as a threat during the on-boarding approval check;
    Rule 9: The agent is allowed to access the agent desktop work application only on the approved work day for the said agent.
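  A minimal Python sketch of claim 36's all-rules gate, where the nine rule observations are modelled as hypothetical boolean fields; on-boarding or re-onboarding is approved only when every rule holds:

    from dataclasses import dataclass

    @dataclass
    class EnvObservation:
        # Hypothetical fields, one per rule of claim 36.
        single_door_in_view: bool                    # Rule 1
        door_locked: bool                            # Rule 2
        open_windows_all_in_camera_view: bool        # Rule 3
        only_agent_present: bool                     # Rule 4
        lights_on_and_bright: bool                   # Rule 5
        extra_cameras_removed_and_phone_sealed: bool # Rule 6
        agent_at_desk_facing_screen: bool            # Rule 7
        cctv_recording_off_if_flagged: bool          # Rule 8
        approved_work_day: bool                      # Rule 9

    def approve_onboarding(obs: EnvObservation) -> bool:
        """Approve only when all nine rules are satisfied."""
        return all((obs.single_door_in_view, obs.door_locked,
                    obs.open_windows_all_in_camera_view, obs.only_agent_present,
                    obs.lights_on_and_bright,
                    obs.extra_cameras_removed_and_phone_sealed,
                    obs.agent_at_desk_facing_screen,
                    obs.cctv_recording_off_if_flagged, obs.approved_work_day))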
  37. The AI module of claim 1 has the following functions or means to achieve personal data protection in a WFH environment during on-boarding of the agent, during re-onboarding of the agent, during the agent's work time since on-boarding, and during winding up for the given work day:
    During agent on-boarding and re-onboarding, the AI module is involved in: authentication using agent face detection and agent liveness detection; threat detection for external CCTV in the WFH environment and setting the appropriate integration state for this CCTV in the system integration mode; agent WFH environment authentication; and WFH environment personal data threat detection during on-boarding of the agent to his work desktop application and also during re-onboarding time;
    AI module has a means whereby it ensures that personal data of the said customer is prevented from being projected on the said agent's work application screen at an unrelated time, such as when no call from the said customer is being attended by the said agent in the said agent's WFH environment;
    AI module also has a means whereby, if a call is about to land in a given agent's WFH environment (i.e., the AI module has evaluated from the current state of the system that the given agent will get the call), it will send a warning to the said agent to clear the environment and make it safe w.r.t. personal data protection;
    The AI module has a means whereby the said warning message also states what has to be done to make the environment safe again w.r.t. personal data protection;
    AI module has a capability or means whereby, when a call is on-going in a given agent environment and a personal data threat is happening in the said agent's environment, or the agent himself is behaving in a malicious manner, appropriate countermeasures are triggered that ensure smooth call completion in a safer environment in spite of the data protection threat detected in the said environment;
    AI module has a means whereby the customer's personal data is not allowed to be projected on the agent work desktop screen when the data protection threat is high in the said agent's WFH environment, even though the call is in progress in that environment and the said agent requests such personal data related to the customer who is in call with the said agent;
    AI module has a means whereby, when an agent during a call with a given customer requests personal data of another customer to be viewed by means of the agent desktop work application, that act is prohibited by the said AI module;
    AI module has a means whereby, when the agent voices out personal data in an audible manner and at unrelated moments during his work time, it detects this and acts accordingly to restore safety for the call happening in such a malicious agent WFH environment;
    AI module has a skill-based agent selection mechanism whereby agent skill is used to pick the appropriate agent to serve a given call that has a specific intent; among the available agents, the agent with the highest skill is picked to serve a call both when the call hits the system and before the call hits the agent in its routing path;
    The AI module has an agent skill manipulation mechanism whereby the said AI module has a means or capability to modify or reduce the skill whenever the agent's non-compliance with personal data safety is detected at certain evaluation moments during the agent's work time on a given day;
    The AI module has the capability to detect personal data protection threats originating from malicious agent behavior and the capability to prevent such threats by enabling appropriate countermeasures.
  38. The AI module in claim 37 has a means whereby it evaluates personal data protection safety with regard to skill manipulation or skill reduction at the following moments: at periodic time intervals; when an environment's safety suddenly changes from a lower threat state to a higher threat state after a warning was issued to clear the threat; or when a call is in progress in the given agent's WFH environment and the personal data protection threat state changes from a given state to a higher threat state.
  39. The AI module of claim 37 has a means whereby it also evaluates the personal data protection threat related skill manipulation or skill reduction during the call transfer stage to another agent's WFH environment, whereby the said other agent's WFH environment is checked for personal data protection threat and the said other agent's skill is reduced if personal data protection non-compliance is found.
  40. The AI module in claim 37 also has a means whereby, during a call handling stage where an additional agent has to be brought into the call session, that additional agent's WFH environment is checked for personal data protection threat/personal data safety, and the said additional agent's skill is modified or reduced accordingly if non-compliance with personal data protection is seen.
  41. AI module in claim 1 and claim 37 has a capability whereby it has a decision means to transfer the call to a less skilled agent when the personal data protection threat in the current agent environment is high, no other available/free agents of related technical skill whose WFH environment is safe are currently present, and the waiting time to get an available and suitable agent for the said call is high due to many calls in the waiting queue and/or the elapsed call duration so far has been high.
  42. The AI module of claim 41 has a means whereby the said AI module is able to engage AI analytics for use by the said less skilled agent to whom the call is transferred, where the said AI analytics provides insights and summaries related to the customer tied to the said call, whereby the said analytics helps the quicker progress of the said call with the said customer.
  43. The AI module of claim 42 is able to derive the AI analytics based on the inputs provided by the skilled agents who have served the related customer calls previously.
  44. AI module in claim 1 and claim 37 has a capability whereby, when a call cannot be transferred to any agent in spite of a high data protection threat occurring in the current WFH environment, because there are no available agents whatsoever in the system of any magnitude of skill, the AI module will bring an AI assistant or virtual agent into the same call session, where the AI assistant is able to whisper the personal data tied to the said customer related to the call using the AI module's text-to-speech capability.
  45. The AI module of claim 37 has the means to use the following individual rules or check points to detect personal data protection threats using the minimal camera view (the inbuilt webcam of the said agent work desktop) after safe on-boarding of the agent to his work desktop application:
    Whether any new person (face detected in the door or window frame) is entering through the main door or window;
    Whether the main door in the camera view is unlocked;
    Whether a new human sound other than the agent's is detected in the work environment;
    Whether the shadow of any new human apart from the agent is seen on the wall;
    Whether the agent is switching off the light unnecessarily, such that the luminance level in the room subsequently becomes low;
    Whether the agent is moving from the work desk, yet not going out of the room, without turning the agent work mode to 'Aux';
    Whether the CCTV in the room is switched On by the user when the system wants it to be in video recording Off mode;
    Whether the agent's phone cameras are in the view of the AI module without the back cameras sealed off;
    Whether an additional camera is in the view of the AI system;
    Whether the agent himself is behaving maliciously and trying to capture/record personal data;
    Whether the agent himself is trying to pass the personal data to some other party using speech in an audible manner;
    Whether the agent himself is trying to pass the personal data projected on the screen to other unrelated recipients using live screen sharing.
  46. The AI module of claim 45 has a means to assign a threat value to each of the said data protection threat events in claim 45 and, using the cumulative threat value for the WFH environment, is able to detect which is a high, which is a medium and which is a low data protection threat WFH environment; the cumulative threat value for a given WFH environment is derived by linear addition of the threat values tied to each individual independent data protection threat in the said WFH environment, including the agent's malicious behavior occurring therein; the AI module executes or triggers appropriate countermeasures based on the net or cumulative data protection threat value (data protection threat score) for a given agent WFH environment (a scoring sketch follows this claim).
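  A minimal Python sketch of claim 46's linear threat scoring, assuming hypothetical event names, weights and band thresholds; the claim fixes only the linear addition of per-event threat values and the low/medium/high banding of the cumulative score:

    # Hypothetical per-event threat values for the claim-45 check points.
    THREAT_WEIGHTS = {
        "new_face_at_door_or_window": 3.0,
        "main_door_unlocked": 2.0,
        "unknown_human_voice": 2.5,
        "extra_shadow_on_wall": 1.5,
        "lights_switched_off": 1.0,
        "agent_left_desk_without_aux": 2.0,
        "cctv_recording_on": 2.5,
        "phone_back_camera_unsealed": 3.0,
        "additional_camera_visible": 3.0,
        "agent_recording_personal_data": 4.0,
        "agent_voicing_personal_data": 4.0,
        "agent_screen_sharing": 4.0,
    }

    def classify_environment(detected_events: set,
                             low_max: float = 2.0,
                             medium_max: float = 5.0):
        """Cumulative score = linear sum of the weights of detected events."""
        score = sum(THREAT_WEIGHTS.get(e, 0.0) for e in detected_events)
        if score <= low_max:
            band = "low"
        elif score <= medium_max:
            band = "medium"
        else:
            band = "high"
        return score, band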
  47. The AI module in claim 37, claim 45 and claim 46 has a means whereby, if it detects a high threat such as a camera being used to capture the agent work desktop screen while personal data is projected on the said screen, it considers this a very high threat and immediately blocks the said agent's screen by communicating with the smart surveillance application.
  48. The AI module in claim 37 and claim 45 has a means whereby, if it detects that personal data is being shown on screen while a call is on with an agent and the said agent is not at his desk, the said AI module considers this a high threat environment and the agent desktop work screen is immediately blocked, whereby the AI module achieves this screen blocking by communicating with the smart surveillance application.
  49. The smart surveillance application of claim 4 has a means whereby it always deactivates the agent desktop clipboard copy-and-paste functionality whenever personal data is projected on the agent desktop work screen.
  50. The AI module in claim 37 and claim 45 has a means whereby, if it detects any malicious activity by the agent such as copying/recording of the personal data projected on the agent desktop screen, it will impose suitable countermeasures; the said AI module has a means to monitor whether the said agent is trying to record the personal data on the agent work desktop during the personal data projection time and, if such is detected, it will block the agent desktop screen by communicating with the smart surveillance application to activate the screen blocking.
  51. The AI module of claim 50 is able to identify the malicious recording function within the agent work desktop, while the personal data is projected, by using another application running on the agent desktop that informs the AI module about newly created video or image files for the given day; the said AI module is able to detect this malicious act in real time while the personal data is being maliciously recorded on the said agent desktop; specifically, the AI module has a means whereby it is able to check for any video or image files created while the personal data was projected on the screen and subsequently identify the malicious agent behavior.
  52. The AI module in claim 1, claim 37 and claim 45 has a means whereby it is able to identify an unusually long call hold, explicitly initiated by the said agent, within the time frame when the call-related customer personal data is projected on the said agent's screen while the said agent is in call with the said customer, and accordingly puts the countermeasures in place when such an unusually long call hold is detected.
  53. The AI module in claim 37 and claim 45 has a means whereby, if it detects one or more faces other than the agent's near the agent work desktop screen when personal data is projected on screen, the screen is again blocked/locked by the smart surveillance application based on the initiation from the said AI module.
  54. The AI engine of claim 37 and claim 45 generally uses a method whereby it considers each of the following as a high threat:
    Another person in the agent's WFH environment when a call has landed in the given agent environment;
    Another person in the vicinity of the camera when personal data is projected;
    The agent, or another person, trying to capture the personal data projected on the screen;
    The agent's malicious activities, such as trying to copy the data when personal data is projected on the screen;
    The agent trying to utter the personal data at unrelated moments in an audible manner;
    The agent trying to pass the personal data projected on screen via another third-party screen sharing video application running on the agent work desktop;
    The agent viewing the personal data for a long time;
    The agent putting the call on hold while personal data is projected on the said agent work desktop screen;
    The agent quoting the personal data;
    Handling a call that needs personal data without the personal data being projected on the screen;
    Any human other than the agent uttering the personal data in an audible manner;
    A recording device in the WFH environment playing out recorded personal data in an audible manner.
  55. The AI module of claim 37 and claim 54 has a means whereby, in a high data protection threat WFH environment, it uses the following method to decide on the suitable countermeasure, applying various checks or conditions in priority order (a sketch of this cascade follows the list):
    It is first checked by the said AI module whether the data protection threat is high (or very high) in a given agent WFH environment and a call that needs personal data is happening in that environment;
    If yes for the first check, then the second check is whether the call can be transferred, without any waiting time, to a currently available agent of related skill in a safe environment;
    If the second check is yes, then the call will be transferred to a currently available agent of related skill and the call can continue in the safe environment;
    If the second check evaluates to no, then a third check is activated; in the third check, it is checked whether the call has already spent a considerable amount of time with its current handling agent or significant time in the system;
    If the third check evaluates to yes, then a fourth check is made: whether a less skilled agent is currently available to handle this call and this less skilled agent has no data protection threat in his environment;
    If the fourth check evaluates to yes, then the call will be transferred to this less skilled agent, and the less skilled agent will be supported with AI analytics related to the intent to continue the call;
    If the fourth check evaluates to no and no less skilled agent is available in an environment safe w.r.t. personal data protection, then the call will continue in the current agent environment, where the personal data of the in-call customer will be conveyed in speech mode, via the AI assistant, to the current agent with headsets on;
    If the third check evaluates to no, then the system uses the AI module to evaluate the waiting time for the call if a transfer or call back is needed;
    If the waiting time is detected as high using the fifth check, then the call will be transferred to a less skilled agent supported by AI analytics; or, if a less skilled agent is not available, the call will continue in the current environment with the agent in AI-assisted mode, where the AI assistant engaged in the call will convey the personal data of the said customer; also, a call placeholder will be put in the queue when high waiting time is detected in the fifth check, and if the call is still in a threat environment and another skilled agent becomes available before the call ends, the call will be transferred to that skilled agent;
    If the fifth check is detected as no, then the call will be put on hold until a skilled agent is available; if hold is not preferred by the customer related to the said call and the customer has signaled so during the call or by other means, then the call is put in high-priority call back mode immediately, with the call back call first put in the queue to wait for an available agent.
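  A minimal Python sketch of claim 55's priority-ordered decision cascade; the state fields and returned action labels are hypothetical, while the check order follows the claim:

    from dataclasses import dataclass

    @dataclass
    class CallState:
        # Hypothetical inputs to the claim-55 cascade.
        threat_high: bool
        call_needs_personal_data: bool
        skilled_agent_free_in_safe_env: bool
        call_elapsed_time_significant: bool
        less_skilled_agent_free_in_safe_env: bool
        predicted_waiting_time_high: bool
        customer_accepts_hold: bool

    def choose_countermeasure(s: CallState) -> str:
        if not (s.threat_high and s.call_needs_personal_data):   # first check
            return "none"
        if s.skilled_agent_free_in_safe_env:                     # second check
            return "transfer_to_skilled_agent"
        if s.call_elapsed_time_significant:                      # third check
            if s.less_skilled_agent_free_in_safe_env:            # fourth check
                return "transfer_to_less_skilled_agent_with_ai_analytics"
            return "continue_with_ai_assistant_whispering_data"
        if s.predicted_waiting_time_high:                        # fifth check
            if s.less_skilled_agent_free_in_safe_env:
                return "transfer_to_less_skilled_agent_with_ai_analytics"
            return "continue_with_ai_assistant_and_queue_placeholder"
        if s.customer_accepts_hold:
            return "hold_until_skilled_agent_available"
        return "priority_callback_queued"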
  56. The AI module in claim 1 and claim 37 where the AI module has a means whereby, if the AI module has picked/selected an agent to serve a call which has just hit the system and not yet reached the agent (the agent selection being based either on prediction or on a skill value and integrity value based selection means), and this selected agent's WFH environment has a personal data protection threat, and the personal data usage is either unknown or known for this said call, then the immediate countermeasure is a warning to the said agent and a skill modification for the said selected agent.
  57. The AI module in claim 1 and claim 37 where the AI module has a means whereby, if the call has traversed the IVR upon hitting the system and is about to hit the agent selected by the AI module based on skill and agent integrity value, and the said agent's WFH environment has a high data protection threat, and the personal data usage is either unknown or known for this said call, then an appropriate countermeasure, which could be a transfer to another agent of similar skill in a safer environment, is considered with highest priority as the preferred call handling process.
  58. The AI module in claim 1 and claim 37 where the AI module has a means whereby, if the call has traversed the IVR upon hitting the system and is about to hit the agent selected by the AI module based on the agent skill along with the agent integrity value, and the said agent's WFH environment is safe, then the countermeasure imposed for data protection threat is 'null'.
  59. The AI module in claim 1 and claim 37 where the AI module has a means whereby, if a call is on-going in a given agent environment, the system knows no personal data is needed to complete the said call, and suddenly a personal data protection threat is induced in this agent environment, then the call will not have any personal data protection related countermeasure; the said countermeasure is 'null'.
  60. The AI module in claim 1 and claim 37 where the AI module has a means whereby, if a call is on-going in a given agent environment, it is detected that personal data involvement is needed for this call from the current time until its completion, and a data protection threat has been induced, then the call will be treated with a suitable countermeasure based on the severity of the said data protection threat and other system variable states that support the said call continuity.
  61. The AI module in claim 1 and claim 37 where the AI module has a means whereby, if a call is on-going in a given agent environment, it is detected that personal data is shown on the screen, and a high data protection threat has been induced, then the call will be treated with a suitable countermeasure whereby the agent screen will be blocked, and an appropriate countermeasure, chosen in priority order based on the availability of system resources, will handle the call in a safe manner without degrading the QoS of the call.
  62. The AI module in claim 1 and claim 37 where the AI module has a means whereby, if a call is on-going in a given agent environment, a sudden data protection threat is induced, and the system cannot estimate whether personal data will be used during the said call, then a suitable countermeasure to handle the personal data protection threat will be chosen based on the severity level of the data protection threat in the said agent's WFH environment.
  63. The AI module in claim 1 and claim 37 where the AI module has a means whereby, if a call is on-going in a given agent environment, a sudden data protection threat is induced, and the system knows with a high confidence level that no personal data will be used for the call, then the personal data protection related countermeasure will be 'null'.
  64. The AI module of claim 1 and claim 37 has a functionality whereby, only when the AI module running in the browser of the agent desktop detects one or many personal data threat related entities present in the WFH environment, is the video-capture-passing application to the AI module triggered, in order to start sending video images to the AI module for some configured time and thereby capture, at the AI end, the video related to the identified data protection threat event.
  65. The AI module of claim 1, claim 4 and claim 37 has a means of sending information to the smart surveillance application via its designated interfaces, where the information sent will indicate, for a given call/session ID: the appropriate agent ID (this could even mean a transfer of the call); which agent ID is additionally added to a call session; the skill related to the chosen agent ID; a skill-modified event and the modified skill value for an agent ID; the AI assistant agent ID (i.e., the virtual agent's ID) that needs to be added to the session; the on-boarding approval/disapproval command; the extreme countermeasure such as block screen; the call transfer condition; any other countermeasures that impact the display of personal data; and the state of whether personal data can be shown on screen or not for the given call.
  66. The AI module mentioned in claim 1 and claim 37 has a means whereby the agent is only allowed to voice out/utter the personal data in an audible manner at certain approved moments, such as when the agent is in a call with a customer who is requesting the personal data during the call and this information was not made known to the calling customer while traversing the IVR; in all other cases, the said AI module will impose a countermeasure such as blocking the agent work desktop screen.
  67. The AI module of claim 1 has a detection capability whereby it is able to identify personal data being voiced out and impose countermeasures when at least the first x% of the given personal data field is voiced out in an audible manner, where x% is a configurable variable.
  68. AI module in claim 1 is also able to identify the event that personal data is voiced out from the event whereby each of the digits, alphabets and special characters of the personal data field is voiced out individually by speech in the correct order, and subsequently imposes a countermeasure such as blocking the agent desktop work screen.
  69. AI module in claim 1 is also able to identify personal data being voiced out from the event where the user voices out the complete personal data field by individually voicing out the digits, characters and special characters in a scrambled manner, and imposes a countermeasure such as blocking the screen of the agent desktop work application.
  70. AI module in claim 1 is also able to identify that personal data is voiced out, simply by detecting the event whereby each of the digits, alphabets and special characters of the personal data is voiced out by speech, when at least x% of the characters of the personal data are voiced out in the correct order, and subsequently, upon the x% target being met, imposes a countermeasure such as blocking the screen of the agent desktop work application, where x% is a configurable variable.
  71. AI module in claim 1 is also able to identify when personal data is voiced out, by simply identifying the event whereby each of the digits, alphabets and special characters of the personal data is voiced out by speech when at least x% of the characters in scrambled order have been voiced out, and subsequently imposes a countermeasure such as blocking the screen of the agent desktop work application, where x% is a configurable variable (a matching sketch follows this claim).
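  A minimal Python sketch of the x% detection of claims 67-71, assuming a transcript of one spoken character per token; ordered matching (claims 68 and 70) counts characters voiced as a subsequence of the field, while scrambled matching (claims 69 and 71) counts character multiplicities only:

    from collections import Counter

    def ordered_fraction(field: str, voiced: list) -> float:
        """Fraction of the field voiced so far in the correct order."""
        i = 0
        for ch in voiced:
            if i < len(field) and ch == field[i]:
                i += 1
        return i / len(field) if field else 0.0

    def scrambled_fraction(field: str, voiced: list) -> float:
        """Fraction of the field's characters voiced in any order."""
        need, have = Counter(field), Counter(voiced)
        matched = sum(min(n, have[c]) for c, n in need.items())
        return matched / len(field) if field else 0.0

    def threat_detected(field: str, voiced: list, x_percent: float) -> bool:
        # Countermeasure fires when either matching mode reaches x%.
        frac = max(ordered_fraction(field, voiced),
                   scrambled_fraction(field, voiced))
        return frac * 100.0 >= x_percent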
  72. The AI module of claim 1, claim 37 and claim 55 has a means whereby, if the AI-assisted call is taking a longer time and has many silent periods (i.e., the AI engine is assisting by delivering personal data in audio form, or the AI assistant is providing analytics related to the call intent to support an unskilled agent), then the call will be automatically transferred to another agent of higher skill within the same intent when such a preferred agent becomes available again and a transfer request was earlier put in the call queue; if a call can be handled smoothly by the AI assistant without many silent periods, then the call will not be transferred to another suitable agent.
  73. The AI module of claim 55 and claim 72 has a means whereby the said call transfer in the midst of the call is realized such that the agent who is transferring the call records a transfer-related audio summary, wherein the said audio will be played to the newly transferred agent using some related application; the said audio summary helps the new agent to handle the call without much hiccup and delay, whereby the said audio summary primarily contains what has been handled in the call and what is yet to be handled.
  74. The AI module of claim 73 has a means whereby the said audio summary is checked by the AI module for personal data information before being played in the environment of the new suitable agent who handles the transferred call; the said summary audio is only played if no such personal data is present in it, or if the audio summary can still be understood after removing the personal data information from it; the AI module is also involved in this removal of personal data from the audio summary clip.
  75. The AI module of claims 1, 4 and 37 has a means whereby, whenever the agent's WFH environment is considered not safe or the agent is trying to maliciously copy the personal data, the agent's integrity points are reduced, whereby the points reduced are directly proportional to the degree of non-compliance; the said AI module specifically has a means whereby, if the threat level is high, then the integrity points reduced are also high, and these said integrity points are not restored quickly by the said AI module.
  76. The AI module of claim 75 has a means whereby the compliance of the agent is monitored over a longer time period and the said integrity points are then restored if compliance is seen.
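  A minimal Python sketch of the integrity accounting of claims 75-76; the penalty and restoration constants are assumptions, while the claims fix only proportional deduction and slow restoration after sustained compliance:

    def deduct_integrity(points: float, noncompliance_degree: float,
                         penalty_per_degree: float = 10.0) -> float:
        """Deduction proportional to the degree of non-compliance (claim 75)."""
        return max(0.0, points - noncompliance_degree * penalty_per_degree)

    def maybe_restore_integrity(points: float, compliant_days: int,
                                restore_after_days: int = 30,
                                restore_step: float = 5.0,
                                max_points: float = 100.0) -> float:
        """Slow restoration only after a sustained compliant period (claim 76)."""
        if compliant_days >= restore_after_days:
            return min(max_points, points + restore_step)
        return points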
  77. The AI module of claim 51 has a means whereby it is able to inspect the said stored files that were created during the personal data projection time with an intention of capturing the personal data that was projected on screen and/or verbally voiced out, as to whether they contain personal data captures in video and/or audio mode.
  78. The AI module of claim 77 also has the capability to delete the said recorded files on the agent desktop, either by performing the deletion itself or by interacting with another application, for the said recorded files that have the personal data captured maliciously in video and/or audio mode.
  79. The AI module of claim 1 and claim 37 has a means whereby, if the environment's personal data protection threat is medium, the call is allowed to continue in the current environment using the speech-assisted or agent-assisted mode; in both these modes the personal data information is conveyed to the call handling agent in speech mode with headsets on, whereby the said agent-assisted mode implies another agent is incorporated into the call simply to voice out the related personal data.
  80. The AI module of claim 1 and claim 37 has a means whereby, if the environment's data protection threat is low in a given WFH environment, then the personal data shown on the screen is shrunk toward the middle of the screen or its font size is slightly reduced.
  81. The AI module of claim 37 has a means to estimate, from the recorded sound/speech received from the agent desktop containing the personal data, in which area around the agent the agent's personal-data-related speech can be clearly heard.
  82. The AI module of claim 81 has a means whereby, if the finalized audible area of the said sound cannot be fully seen by the AI module from the restricted camera view, if the border of the area extends beyond the door in the AI module's camera view, if there is another human in the said audible area visible to the AI module via the restricted camera view, or any combination thereof, the AI module considers this act of the said agent (quoting the data via voice) as a threat and imposes countermeasures such as significantly reducing the agent's skill and blocking the agent work screen; this malicious behavior will also be considered during re-onboarding of the agent.
  83. AI module of claim 82 further has a means whereby it is assumed the AI module has voice energy degradation data (the rate at which voice amplitude diminishes in the given agent WFH area with distance from the agent/source) and the distance from the agent's work location to the door within his WFH environment, and it uses such data, together with general speech audibility level guidance, to detect whether the agent's sound can be heard by any other person.
  84. The AI module in claim 83 will be able to draw an audibility area around the agent and check whether the seen door is within the area, using this for its decisions regarding the said threat; in general, if the door lies within the estimated audibility area, the AI module will consider that the threat is present; only when the estimated audibility area does not contain the door (the door is outside the area, or the area just touches the door) and there is no other person within this area in the view of the AI module does it consider the threat minimal; the said audibility area is derived by the AI module based on the sound level/amplitude captured from the agent and the voice degradation rate information it has a priori for the given agent's WFH environment.
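  A minimal Python sketch of the audibility-area check of claims 83-84, assuming a linear voice-energy degradation model (dB lost per metre) and an assumed audibility floor; the claims require only that an audible radius be derived from the captured amplitude and the known degradation rate and then tested against the door position and any other person in the area:

    def audible_radius_m(captured_level_db: float,
                         degradation_db_per_m: float,
                         audibility_floor_db: float = 30.0) -> float:
        """Distance at which the agent's speech falls below the audibility floor."""
        if captured_level_db <= audibility_floor_db:
            return 0.0
        return (captured_level_db - audibility_floor_db) / degradation_db_per_m

    def quoting_is_threat(captured_level_db: float,
                          degradation_db_per_m: float,
                          door_distance_m: float,
                          other_person_in_area: bool) -> bool:
        r = audible_radius_m(captured_level_db, degradation_db_per_m)
        # Threat unless the door lies outside (or just touches) the audibility
        # area and no other person is inside it, per claim 84.
        return other_person_in_area or door_distance_m < r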
  85. AI module of claim 1 and claim 37 has a means whereby it is able to detect the case where the agent handles the call without viewing the personal data, i.e., without getting the related personal data projected on the agent desktop work screen during the said call time; subsequently, upon identifying the threat, the AI module imposes the needed countermeasures such as blocking the agent screen.
  86. The AI module of claim 1 and claim 37 has a means whereby, to conclude on an unusual event w.r.t. personal data theft when the agent is in call with a customer and the customer's personal data is projected on the said agent's work screen, the AI module will check many past speech samples from conversations where the said agent used personal data on the screen and identify the general trend characterizing silent periods within the personal data projection time; the AI module will then compare the past occurrence of silent periods during personal data projection time to the currently occurring silent periods, to check for any unusually high silent period while the personal data is projected on the screen, and will subsequently flag unusual behavior by the said agent.
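  A minimal Python sketch of the silent-period comparison of claim 86, flagging the current longest silence against the agent's own historical distribution; the z-score rule and threshold are assumptions standing in for the claim's "general trend" comparison:

    from statistics import mean, stdev

    def unusual_silence(past_silences_s: list,
                        current_silence_s: float,
                        z_threshold: float = 3.0) -> bool:
        """Flag a silence far above the agent's historical silent-period trend."""
        if len(past_silences_s) < 2:
            return False                    # not enough history to judge
        mu, sigma = mean(past_silences_s), stdev(past_silences_s)
        if sigma == 0.0:
            return current_silence_s > mu
        return (current_silence_s - mu) / sigma > z_threshold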
  87. AI module of claim 1 and claim 37 has a means whereby, having identified additional faces within the WFH environment, it will request each of these persons to leave the room through the door with their faces clearly seen by the AI engine; to this effect the AI engine has a means to issue a warning to be sent to the agent belonging to the said WFH environment, and the AI engine will subsequently also monitor the leaving of these persons through the door in its view.
  88. The AI module of claim 1 and claim 37 has the capability to identify malicious events such as sharing the agent desktop screen, while personal data is projected on it, via some other screen share application running on the agent desktop, and subsequently has a means to activate suitable countermeasures such as blocking the agent desktop screen to eliminate the said threat.
  89. AI module of claim 1 and claim 37 has a means whereby, when the agent is in Aux mode or 'unavailable' mode, it will not do any personal data threat evaluation for the given agent's WFH environment.
  90. The AI module of claim 1 and claim 37 has a means whereby, when the agent is missing from the AI view, it will put the agent into 'Aux' mode and subsequently the AI module or the skill-based routing module will generally consider this agent unavailable to attend a call; the said AI module or the skill-based routing module will subsequently consider this agent 'available' only when the agent comes back to his seat in the WFH environment and this re-entry is detected by the AI module.
  91. The AI module of claim 90 can perform similar functions as in claim 90 when the agent explicitly informs that he is available or unavailable to attend a call; when the AI module knows he is available, the AI module or the skill-based call routing module will consider the said agent as one of the candidates to handle the call, provided he is not occupied with another call; if the agent informs that he is not available, then the AI module or the skill-based routing module will not consider the said agent a suitable candidate to handle a given call.
  92. The AI module of claim 55 has a means whereby, when a call is about to hit an agent who has been picked by the AI module based on skill alone but the data protection threat is high in the said selected agent's WFH environment, the same methods as in claim 55 will be used to decide on the suitable countermeasure to start handling the call.
  93. The AI module of claim 1 and claim 37 has a means whereby, when identifying such an environment-originating data protection threat during agent work time, it does not immediately send alerts to the agent concerned; it additionally predicts that a call which needs personal data is going to arrive at that particular agent situated in the data protection threat environment; the said AI module combines the data protection threat level in the agent environment and the prediction of a call arriving at the particular agent who will be using personal data into a countermeasure score for warning purposes; if the said score is greater than 0 and none of the contributors to the score has a 0 value, then a warning will be sent to that agent; the warning frequency will increase with the score value.
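  A minimal Python sketch of the claim-93 warning score; the multiplicative combination and the frequency mapping are assumptions, while the claim fixes that both contributors must be non-zero and that the warning frequency increases with the score:

    from typing import Optional

    def warning_interval_s(env_threat_level: float,
                           call_arrival_probability: float,
                           base_interval_s: float = 300.0) -> Optional[float]:
        """Seconds between warnings, or None when no warning should be sent."""
        score = env_threat_level * call_arrival_probability
        if env_threat_level == 0 or call_arrival_probability == 0 or score <= 0:
            return None                  # some contributor is 0: no warning
        # Higher score -> shorter interval -> more frequent warnings.
        return base_interval_s / score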
PCT/SG2020/050469 2020-08-13 2020-08-13 Ai based data protection in wfh environment WO2022035371A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/SG2020/050469 WO2022035371A2 (en) 2020-08-13 2020-08-13 Ai based data protection in wfh environment


Publications (1)

Publication Number Publication Date
WO2022035371A2 true WO2022035371A2 (en) 2022-02-17

Family

ID=80248217

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2020/050469 WO2022035371A2 (en) 2020-08-13 2020-08-13 Ai based data protection in wfh environment

Country Status (1)

Country Link
WO (1) WO2022035371A2 (en)


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20949640

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20949640

Country of ref document: EP

Kind code of ref document: A2