GB2590542A - System and method for training of a software agent - Google Patents

System and method for training of a software agent


Publication number
GB2590542A
GB2590542A
Authority
GB
United Kingdom
Prior art keywords
software agent
executable instructions
tasks
data packets
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB2017505.5A
Other versions
GB202017505D0 (en)
GB2590542B (en)
Inventor
Raman Balachander Sundar
Sharma Rohil
Balkrishna Kakade Prashant
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of GB202017505D0 publication Critical patent/GB202017505D0/en
Publication of GB2590542A publication Critical patent/GB2590542A/en
Application granted granted Critical
Publication of GB2590542B publication Critical patent/GB2590542B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G06N20/20 - Ensemble learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling

Abstract

Systems and methods are provided for training of a software agent for execution of multiple executable instructions. A first set of data packets related to the executable instructions is received by the software agent, and a second set of data packets is selected from the first set. Iterative execution of the set of tasks associated with the one or more executable instructions is used as training data. The software agent is trained using the training data, and the trained software agent performs the executable instructions. The data packets may be determined from a real-time video, where the video is related to simulation of the one or more executable instructions. The trained software agent may automatically initiate execution of one or more executable instructions without receiving human input.

Description

Intellectual Property Office Application No. GB2017505.5 RTM Date: 8 March 2021. The following terms are registered trade marks and should be read as such wherever they occur in this document: Page 16: "Firewire", "Seagate", and "Hitachi". Intellectual Property Office is an operating name of the Patent Office. www.gov.uk/ipo
SYSTEM AND METHOD FOR TRAINING OF A SOFTWARE AGENT
FIELD OF THE INVENTION
[0001] The present disclosure generally relates to automation. In particular, the present disclosure provides methods and systems to facilitate automating workflow of applications.
BACKGROUND OF THE INVENTION
[0002] The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] Software agents, commonly known as internet bots, web robots, WWW robots, or simply bots, are software applications created to autonomously perform tasks, e.g., run scripts, over the internet. Software agents are often created and used to perform tasks that are simple and structurally repetitive, so as to alleviate the burden on humans performing these repetitive tasks. In some implementations, automation techniques have been applied to software agents, where a software agent is relied upon to operate an application and follow a series of steps in order to perform a function on the application.
[0004] However, previously implemented automation techniques have failed to replicate the repetitive activities performed by a user while operating the application. Additionally, automation techniques that use machine learning are not suited to address and solve complex problems. For example, some forms of complex problems cannot be solved by these automation techniques due to a lack of training data, a problem type that is unfavorable to machine prediction, or other complexity factors.
[0005] Thus, it would be beneficial for there to be systems and methods that allow software agents to be trained, using machine learning, to perform the repetitive tasks a user carries out while operating the application.
OBJECTS OF THE INVENTION
[0001] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0006] An object of the present disclosure is to provide a system and method to train a software agent to perform repetitive tasks without additional human efforts.
[0007] An object of the present disclosure is to provide a system and method to facilitate the software agent to perform rule-based, regular tasks that require manual inputs.
[0008] An object of the present disclosure is to provide a system and method to facilitate the software agent to perform iterative tasks on an application with minimal error and with improved quality, while maintaining a detailed audit trail.
[0009] It is another object of the present disclosure to provide a system and method to provide self-certification training for the software agent while working on a business application workflow.
[0010] Yet another object of the present disclosure is to provide a system and method to determine and enable the software agent to stall the training upon reaching a training threshold.
SUMMARY
[0011] The present disclosure generally relates to automation. In particular, the present disclosure provides methods and systems to facilitate automating workflow of applications.
[0012] An aspect of the present disclosure pertains to a method comprising: receiving, at one or more processors and by a software agent, a first set of data packets pertaining to one or more executable instructions; selecting, at the one or more processors, a second set of data packets from the first set of data packets, where the second set of data packets pertains to at least one of the one or more executable instructions; training, at the one or more processors, the software agent using training data, where the training data pertains to iterative execution of a set of tasks associated with the selected at least one of the one or more executable instructions; and in response to receiving a third set of data packets pertaining to the set of tasks, performing, at the one or more processors and by the trained software agent, the set of tasks.
[0013] According to an embodiment, the first set of data packets pertaining to the one or more executable instructions and the second set of data packets pertaining to at least one of the one or more executable instructions are executed on a computing device.
[0014] According to an embodiment, the training is triggered by an indication being selected at the computing device.
[0015] According to an embodiment, the first set of data packets pertaining to the one or more executable instructions are determined from a real-time video, where the video is related to simulation of the one or more executable instructions.
[0016] According to an embodiment, the one or more executable instructions are executed serially or in parallel.
[0017] According to an embodiment, the software agent is trained based on any of a detected dynamic change in the execution of the one or more executable instructions.
[0018] According to an embodiment, the trained software agent performs the set of tasks upon reaching a training threshold value.
[0019] According to an embodiment, the trained software agent automatically initiates execution of the one or more executable instructions without receiving human input.
[0020] Another aspect of the present disclosure relates to a system comprising: a processing unit comprising a processor communicatively coupled to a memory, the memory storing a set of instructions executable by the processor, wherein, when the system is in operation, the processor is configured to execute the set of instructions to enable the processing unit to: receive, at a software agent, a first set of data packets pertaining to one or more executable instructions; select a second set of data packets from the first set of data packets, where the second set of data packets pertains to at least one of the one or more executable instructions; train the software agent using training data, where the training data pertains to iterative execution of a set of tasks associated with the selected at least one of the one or more executable instructions; and in response to receiving a third set of data packets pertaining to the set of tasks, perform, by the trained software agent, the set of tasks.
[0021] According to an embodiment, the trained software agent performs the set of tasks upon reaching a training threshold value.

[0022] Various objects, features, aspects and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[0024] FIG. 1 illustrates a network implementation of a software agent training system, in accordance with an embodiment of the present disclosure.
[0025] FIG. 2 illustrates exemplary functional components of the software agent training system in accordance with an embodiment of the present disclosure.
[0026] FIG. 3 illustrates an exemplary representation for training of the software agent for executable instructions and a set of tasks, in accordance with an embodiment of the present disclosure.
[0027] FIG. 4 illustrates a flow representation for training of the software agent for learning execution of executable instructions and a set of tasks, in accordance with an embodiment of the present disclosure.

[0028] FIG. 5 illustrates an exemplary method for training of the software agent in accordance with an embodiment of the present disclosure.
[0029] FIG. 6 is an exemplary computer system in which or with which embodiments of the present invention may be utilized.
DETAILED DESCRIPTION
[0030] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.
[0031] Embodiments of the present invention may be provided as a computer program product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, PROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other type of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware).
[0032] Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present invention with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present invention may involve one or more computers (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the invention could be accomplished by modules, routines, subroutines, or subparts of a computer program product.
[0033] If the specification states a component or feature "may", "can", "could", or "might" be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0034] As used in the description herein and throughout the claims that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[0035] Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure).
[0036] The present disclosure generally relates to automation. In particular, the present disclosure provides methods and systems to facilitate automating workflow of applications.
[0037] An aspect of the present disclosure pertains to a method comprising: receiving, at one or more processors and by a software agent, a first set of data packets pertaining to one or more executable instructions; selecting, at the one or more processors, a second set of data packets from the first set of data packets, where the second set of data packets pertains to at least one of the one or more executable instructions; training, at the one or more processors, the software agent using training data, where the training data pertains to iterative execution of a set of tasks associated with the selected at least one of the one or more executable instructions; and in response to receiving a third set of data packets pertaining to the set of tasks, performing, at the one or more processors and by the trained software agent, the set of tasks.
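The four steps above can be read as a receive, select, train, perform pipeline. The sketch below is purely illustrative: the function names (receive_packets, select_packets, train, perform) and the dictionary-based "agent" are assumptions introduced for this example, not anything specified in the disclosure.

```python
# Hypothetical sketch of the claimed four-step method; names and data
# shapes are illustrative assumptions, not the patented implementation.

def receive_packets(instructions):
    """Step 1: receive a first set of data packets for the instructions."""
    return [{"instruction": i, "payload": p} for i, p in instructions]

def select_packets(first_set, target_instruction):
    """Step 2: select the second set pertaining to the chosen instruction."""
    return [p for p in first_set if p["instruction"] == target_instruction]

def train(agent, second_set, iterations=3):
    """Step 3: train on iterative execution of the associated tasks."""
    for _ in range(iterations):
        for packet in second_set:
            agent.setdefault(packet["instruction"], []).append(packet["payload"])
    return agent

def perform(agent, third_set):
    """Step 4: on receiving the third set, perform the learned tasks."""
    return [agent.get(p["instruction"]) for p in third_set]
```

A possible flow: receive all packets, select those for one instruction, train the agent on repeated passes over them, then hand it a third set to act on.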
[0038] According to an embodiment, the first set of data packets pertaining to the one or more executable instructions and the second set of data packets pertaining to at least one of the one or more executable instructions are executed on a computing device.
[0039] According to an embodiment, the training is triggered by an indication being selected at the computing device.
[0040] According to an embodiment, the first set of data packets pertaining to the one or more executable instructions are determined from a real-time video, where the video is related to simulation of the one or more executable instructions.
[0041] According to an embodiment, the one or more executable instructions are executed serially or in parallel.
[0042] According to an embodiment, the software agent is trained based on any of a detected dynamic change in the execution of the one or more executable instructions.
[0043] According to an embodiment, the trained software agent performs the set of tasks upon reaching a training threshold value.
[0044] According to an embodiment, the trained software agent automatically initiates execution of the one or more executable instructions without receiving human input.
[0045] Another aspect of the present disclosure relates to a system comprising: a processing unit operatively coupled to the audio capturing unit, the processing unit comprising a processor communicatively coupled to a memory, the memory storing a set of instructions executable by the processor, wherein, when the system is in operation, the processor is configured to execute the set of instructions to enable the processing unit to: receive at a software agent a first set of data packets pertaining to one or more executable instructions; select a second set of data packets from the first set of data packets, where the second set of data packets pertains to at least one of the one or more executable instructions; train the software agent using training data, where the training data pertains to iterative execution of a set of tasks associated with the selected at least one of the one or more executable instructions; and in response to receiving a third set of data packets pertaining to the set of tasks, performing, at the one or more processors, by the trained software agent the set of tasks.
[0046] According to an embodiment, the trained software agent performs the set of tasks upon reaching a training threshold value.
[0047] FIG. 1 illustrates a network implementation 100 of a software agent training system, in accordance with an embodiment of the present disclosure.
[0048] According to an embodiment of the present disclosure, the software agent training system (also referred to as the system 102, hereinafter) can facilitate determining one or more executable instructions being executed on a computing device. Based on the determined executable instructions, the software agent is trained to perform the executable instructions, along with a set of tasks performed within the executable instructions, iteratively in a serial manner.
[0049] The system 102, implemented in any computing device, can be configured/operatively connected with a server 110. As illustrated, the system 102 can be communicatively coupled with one or more entity devices 106-1, 106-2, ..., 106-N (individually referred to as the entity device 106 and collectively referred to as the entity devices 106, hereinafter) through a network 104. The one or more entity devices 106 are connected to living subjects/users/entities/participants 108-1, 108-2, ..., 108-N (individually referred to as the entity 108 and collectively referred to as the entities 108, hereinafter).
[0050] The entity devices 106 can include a variety of computing systems, including but not limited to, a laptop computer, a desktop computer, a notebook, a workstation, a portable computer, a personal digital assistant, a handheld device and a mobile device.
In an embodiment, the system 102 can be implemented using any or a combination of hardware components and software components such as a cloud, a server, a computing system, a computing device, a network device and the like. Further, the system 102 can interact with any of the entity devices 106 through a website or an application that can reside in the entity devices 106. In an implementation, the system 102 can be accessed by a website or an application that can be configured with any operating system, including but not limited to, Android™, iOS™, and the like. Examples of the entity devices 106 can include, but are not limited to, a computing device associated with industrial equipment or an industrial equipment based asset, a smart camera, a smart phone, a portable computer, a personal digital assistant, a handheld device and the like.
[0051] Further, the network 104 can be a wireless network, a wired network or a combination thereof that can be implemented as one of the different types of networks, such as Intranet, Local Area Network (LAN), Wide Area Network (WAN), Internet, and the like. Further, the network 104 can either be a dedicated network or a shared network. The shared network can represent an association of the different types of networks that can use variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like.
[0052] In an embodiment, the system 102 facilitates providing training to a software agent such as a bot or a robot. The software agent can be an automated program that runs over the Internet. Some software agents run automatically, while others only execute commands when they receive specific input. The software agent can be a software application that can interface with a computing device to determine the set of executable instructions. The software agent can be a process executing on the computing device, executing on a communication system, or a separate stand-alone service or process executing on a different computer or a remote process server.
[0053] In one embodiment, the software agent may reside in the same computing device where the set of executable instructions are being performed or can be placed in another computing device. Examples of the computing device may include, but are not limited to, a mobile communications device such as a cellular handset (e.g., a cellular phone) or smart phone, a mobile computing device such as a tablet computer, a notebook computer, a laptop computer, a desktop computer, a server computer, and so on.
[0054] In an embodiment, the system 102 can communicate with the entity devices via a low-power point-to-point communication protocol such as Bluetooth®. In other embodiments, the system may also communicate via various other protocols and technologies such as WiFi®, WiMax®, iBeacon®, and near field communication (NFC). In other embodiments, the system 102 may connect in a wired manner to entity devices.
Examples of the entity devices may include but are not limited to, computer monitors, television sets, light-emitting diodes (LEDs), and liquid crystal displays (LCDs).
[0055] Although in various embodiments, the implementation of system 102 is explained with regard to the server 110, those skilled in the art would appreciate that, the system 102 can fully or partially be implemented in other computing devices operatively coupled with network 104 such as entity devices 106 with minor modifications, without departing from the scope of the present disclosure.
[0056] FIG. 2 illustrates exemplary functional components 200 of the software agent training system in accordance with an embodiment of the present disclosure.
[0057] In an aspect, the system 102 may comprise one or more processor(s) 202. The one or more processor(s) 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 202 are configured to fetch and execute computer-readable instructions stored in a memory 204 of the system 102. The memory 204 may store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory 204 may comprise any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0058] The system 102 may also comprise an interface(s) 206. The interface(s) 206 may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 206 may facilitate communication of the system 102 with various devices coupled to the system 102 such as an input unit and an output unit. The interface(s) 206 may also provide a communication pathway for one or more components of the computing device 102. Examples of such components include, but are not limited to, processing engine(s) 208 and database 210.
[0059] The processing engine(s) 208 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 208. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 208 may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) 208 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 208. In such examples, the computing device 102 may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 102 and the processing resource. In other examples, the processing engine(s) 208 may be implemented by electronic circuitry. The database 210 may comprise data that is either stored or generated as a result of functionalities implemented by any of the components of the processing engine(s) 208.
[0060] In an exemplary embodiment, the processing engine(s) 208 may comprise an executable instructions and tasks tracking unit 212, a software agent training unit 214, and other unit(s) 216.
[0061] In an embodiment, the system 102 may include the executable instructions and tasks tracking unit 212. The executable instructions may be a program, a software module, a software component, and/or other software element that may be executed by the processing engine(s) 208. The executable instructions may include a plurality of tasks. The executable instructions may include program code to cause the processing engine(s) 208 to perform activities such as, but not limited to, reading data, writing data, processing data, formulating data, converting data, transforming data, etc. For example, the executable instructions may be a binary file and/or an executable file that includes instructions to cause the processing engine(s) 208 to execute a media player to play media items (such as digital videos or digital music) or to cause the processing engine(s) 208 to execute a web browser. The executable instructions in an application may be divided into blocks of instructions, e.g., a series or group of instructions.
[0062] In an embodiment, an image and video capturing device may obtain images of a display of the computing device while the software agent is executing on and interacting with the computing device. For example, the image and video capturing device may obtain a first image of a user interface of the computing device where the set of executable instructions are being performed.
[0063] In some implementations, the image and video capturing device may be a camera that captures images and videos of a screen shown on the display. In other implementations, the images and videos may be captured using a software process. In an example, the image and video capturing device may continuously obtain images and video, and the system 102 may use machine learning to determine when the set of executable instructions begins and when it ends. For example, the system 102 may use machine learning to determine specific activities that are associated with a start or end of the set of executable instructions. Such activities may include a specific image or video being shown on a display, e.g., a specific software window popping into the foreground or a specific logo being detected. The system 102 may identify these specific activities and split a list of sequential activities into different sub-lists of tasks that contain all the tasks being performed within the set of executable instructions. In some implementations, the system 102 may try to identify recurring sequences of activities, e.g., click on an application icon, the application window opens, and click on a menu button; when a recurring sequence of activities is identified, the first activity in the recurring sequence is identified as an activity associated with a start of the set of executable instructions and the last activity as an activity associated with an end of the set of executable instructions.
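The splitting of a flat activity stream into per-execution sub-lists can be sketched as below. This assumes activities have already been reduced to string labels and that learned start/end signatures are available as markers; the function name and equality-based matching are illustrative assumptions, not the disclosed technique.

```python
def split_on_recurring_sequence(activities, start_marker, end_marker):
    """Split a flat activity list into sub-lists of tasks, where each
    sub-list runs from a detected start marker to the next end marker.
    start_marker / end_marker stand in for learned activity signatures
    (e.g. a window opening or a logo being detected)."""
    sublists, current, recording = [], [], False
    for activity in activities:
        if activity == start_marker:
            # A start signature begins (or restarts) a recording window.
            current, recording = [activity], True
        elif activity == end_marker and recording:
            # An end signature closes the current sub-list of tasks.
            current.append(activity)
            sublists.append(current)
            current, recording = [], False
        elif recording:
            current.append(activity)
    return sublists
```

Activities outside a start/end window (e.g. idle time between runs) are discarded, mirroring the idea that only tasks performed within the set of executable instructions are kept.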
[0064] In an embodiment, the system 102 may include the software agent training unit 214. Based on the set of executable instructions and the tasks tracked and determined, the software agent may receive data streams and information (in real time) from the computing device. The software agent can also have access to historically collected data of the computing device. This data is modeled in a database with date and time stamps and other metadata. The software agent is trained for activities associated with the computing device. Further, the training is triggered by an indication being selected at the computing device. For example, the indication may be related to initiation of a business application with respect to the set of executable instructions and the tasks, or to reaching a particular stage in the application, and so forth.
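A minimal sketch of the timestamped data model described above, assuming a simple append-only store; the record_event name and the event schema are assumptions for illustration, not taken from the disclosure.

```python
from datetime import datetime, timezone

def record_event(store, source, payload, metadata=None):
    """Append an observed data-stream event to a store with a date/time
    stamp and optional metadata, as the description suggests. The exact
    schema here is a hypothetical illustration."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source,       # e.g. which computing device produced it
        "payload": payload,     # the observed data itself
        "metadata": metadata or {},
    }
    store.append(event)
    return event
```

In practice the store would be a real database table rather than an in-memory list, but the shape of each record (timestamp, source, payload, metadata) carries over.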
[0065] In an embodiment, the system 102 may include the software agent training unit 214 for training the software agent. In an embodiment, the system 102 can facilitate training the software agent to generate meaningful patterns from the tasks that can mimic business logic and execute an auto-generated process flow without human supervision or inputs. For example, consider a task generated to "change an input field". In this example, classification of the task can be performed depending on a multitude of parameters for the generated task. This classification allows learning for the software agent to be performed in an automated manner. For example, the software agent can automate the tasks to be performed during execution of the set of executable instructions when a new task is available, based on the classification of the tasks into one or more categories. Additionally, the decision-making ability of the software agent does not end at categorizing the tasks: classification of sub-tasks, or of any point where a decision needs to be taken for a possible classification of the tasks, can be included in the software agent's machine learning functionality.
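The classification step can be sketched as a first-match rule table; in a trained system the predicates would be learned rather than hand-written. The classify_task name, the rule format, and the category labels are all hypothetical.

```python
def classify_task(task, rules):
    """Assign a task to the first category whose predicate matches.
    rules is a list of (category, predicate) pairs; both the categories
    and the predicates are illustrative placeholders for the
    multi-parameter classification the description mentions."""
    for category, predicate in rules:
        if predicate(task):
            return category
    return "unclassified"
```

A usage example with two made-up categories:

```python
rules = [
    ("input_change", lambda t: t.get("action") == "change_field"),
    ("navigation",   lambda t: t.get("action") == "click_menu"),
]
classify_task({"action": "change_field"}, rules)  # "input_change"
```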
[0066] In an embodiment, the software agent may continue to undergo additional training through feedback (actual observed tasks versus the software agent's predicted tasks). This may allow the software agent to become more accurate with respect to predicting tasks from the set of tasks observed. The software agent may also keep track of a variety of the executable instructions and the tasks and their competing interests based on priorities assigned to the tasks (which the software agent can also learn through initial configuration and through continual training). The tasks may refer to a combination of sub-steps or actions taken during the execution of the executable instructions. In various implementations, the tasks can be a series of click streams (e.g., mouse click streams, tap streams from a mobile device, and the like) with data being injected at the executable instructions at various points.
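The feedback mechanism above (observed tasks versus predicted tasks) can be sketched as a running accuracy signal. This is a hedged illustration under assumed names; `prediction_accuracy` and the sample task labels are hypothetical, not part of the disclosed system.

```python
# Sketch of feedback-based accuracy tracking: observed tasks are compared
# against the agent's predictions, and the resulting fraction serves as a
# simple training signal. All names and task labels are illustrative.

def prediction_accuracy(observed: list[str], predicted: list[str]) -> float:
    """Return the fraction of observed tasks the agent predicted correctly."""
    if not observed:
        return 0.0
    matches = sum(1 for o, p in zip(observed, predicted) if o == p)
    return matches / len(observed)

observed = ["open_form", "fill_name", "submit"]
predicted = ["open_form", "fill_name", "save"]
print(prediction_accuracy(observed, predicted))  # 2 of 3 correct
```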
[0067] In an embodiment, the software agent can be used to simulate the executable instructions and the tasks and to test them using a video, without affecting the performance or real-time functioning of the executable instructions and the related tasks. In an embodiment, the software agent can be trained to create a complete workflow automatically.
[0068] In an embodiment, the image and video capturing device operatively coupled to the computing device can continuously monitor the executable instructions and the tasks to determine if any changes are observed in a pattern of performance of the executable instructions and the tasks. Upon observance of any of the changes, the software agent is trained automatically without any human intervention.
[0069] In an embodiment, the software agent's performance of the learned tasks can be continuously tracked, and upon reaching a training threshold, a confidence level of the software agent is determined; the software agent can then be given tasks that need to be performed automatically without any additional human or machine interference.
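The gating described above can be sketched as a simple threshold check: unattended execution is permitted only once the measured confidence reaches the configured training threshold. The threshold value and function name are illustrative assumptions.

```python
# Sketch of gating autonomous execution on a training threshold: the agent
# is only given tasks to perform unattended once its measured confidence
# reaches the configured threshold. The 0.95 value is an assumption.

TRAINING_THRESHOLD = 0.95

def may_run_unattended(confidence: float, threshold: float = TRAINING_THRESHOLD) -> bool:
    """True when the agent's confidence has reached the training threshold."""
    return confidence >= threshold

print(may_run_unattended(0.97))  # True
print(may_run_unattended(0.80))  # False
```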
[0070] In an embodiment, the software agent can be trained in multiple ways, such as, but not limited to: (a) running a trained workflow of the executable instructions and the tasks in a simulation mode and certifying the software agent to perform as per the trained workflow, and (b) promoting the software agent to complete a self-certification training by working in the simulation mode of a business application simulator.
[0071] In an embodiment, a simulation engine may be provided that is operatively coupled to the computing device. The simulation engine may be used to record screen, keyboard and mouse object movements on the computing device as a single proprietary data file. In an embodiment, the software agent can be trained until it reaches the training threshold.
[0072] In an embodiment, the software agent can work on the workflow when the training threshold for the software agent is achieved. Upon the software agent reaching the training threshold, the software agent can implement a workflow manager, where the software agent can define multiple business rules on the fly and test them using the simulation engine.
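The workflow-manager idea above can be sketched as rules defined on the fly and checked against recorded data in a simulation mode before any live execution. The `SimulationEngine` class, the rule format, and the record fields are all illustrative assumptions, not the patent's implementation.

```python
# Hedged sketch of a workflow manager: business rules are defined on the
# fly as predicates and tested against recorded data in simulation mode,
# without touching a production system. All names are illustrative.

class SimulationEngine:
    """Runs a workflow's rules against a recorded data record."""
    def run(self, workflow, record) -> bool:
        return all(rule(record) for rule in workflow)

# Business rules defined on the fly (illustrative):
workflow = [
    lambda rec: rec.get("amount", 0) > 0,         # amount must be positive
    lambda rec: rec.get("status") == "approved",  # only approved records
]

engine = SimulationEngine()
print(engine.run(workflow, {"amount": 100, "status": "approved"}))  # True
print(engine.run(workflow, {"amount": -5, "status": "approved"}))   # False
```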
[0073] FIG. 3 illustrates exemplary representation 300 for training of the software agent for executable instructions and a set of tasks in accordance with an embodiment of the present disclosure.
[0074] In an exemplary embodiment, FIG. 3 illustrates a software agent 302 that needs to be trained on the executable instructions and the tasks being performed on user interface 304 of the computing device. Also shown are the executable set of instructions 306 and the corresponding one or more tasks (308-1, 308-2, ..., 308-N) that are performed during execution of the instructions.
[0075] The software agent 302 is trained based on the executable instructions, which may be performed either in a serial or a parallel form, and on the tasks performed during execution of the one or more instructions. The training steps at 310 can be received by the software agent 302 from the user interface 304. Once trained, the software agent can automatically perform the learned set of executable instructions and the tasks. Further, the instructions and the tasks can be performed at 312 in the same manner as learned, whether in a serial, sequential or parallel form.
[0076] In an embodiment, the software agent may include a machine-learning component such that the software agent is trained on the tasks and the instructions. Using machine learning, the real-time observed instructions and tasks are used to predict a desired activity. Further, machine learning of a function derived from the performed instructions and tasks can occur through feedback. Machine learning can enable the software agent to continuously learn when modifications are observed during execution of the executable instructions and the related tasks.
[0077] FIG. 4 illustrates a flow representation 400 for training of the software agent for learning execution of executable instructions and a set of tasks in accordance with an
embodiment of the present disclosure.
[0078] In an exemplary embodiment, FIG. 4 illustrates an image and video capturing unit that captures the instructions being executed on the computing device. At block 402, the image and video capturing unit may obtain images of a display of the computing device while the software agent is executing on and interacting with the computing device. For example, the image and video capturing device may obtain a first image of a user interface of the computing device where the set of executable instructions and the tasks within are being performed.

[0079] At block 404, the executable instructions and tasks identification unit identifies the unique instructions and the tasks performed therewith. The identified unique instructions and the tasks are used to train the software agent at block 406.
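The three-stage pipeline of FIG. 4 (capture at block 402, identify at block 404, train at block 406) can be sketched as follows. The function bodies are illustrative stand-ins for the capture, identification, and training units; the event names and data shapes are assumptions.

```python
# Sketch of the capture -> identify -> train pipeline from FIG. 4.
# All function bodies and event names are illustrative assumptions.

def capture_frames(screen_events):
    """Block 402: obtain frames of the display during execution."""
    return [{"frame": i, "event": e} for i, e in enumerate(screen_events)]

def identify_tasks(frames):
    """Block 404: identify the unique instructions/tasks in the frames."""
    seen, tasks = set(), []
    for f in frames:
        if f["event"] not in seen:
            seen.add(f["event"])
            tasks.append(f["event"])
    return tasks

def train_agent(tasks):
    """Block 406: train the agent on the identified unique tasks."""
    return {"learned_tasks": tasks}

frames = capture_frames(["open_form", "fill_name", "open_form", "submit"])
agent = train_agent(identify_tasks(frames))
print(agent)  # {'learned_tasks': ['open_form', 'fill_name', 'submit']}
```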
[0080] In an exemplary embodiment, each of multiple instances of learning modules deployed for learning the instructions and the tasks employs machine-learning techniques to learn the workflows associated with an organizational environment, including business-specific workflows.
[0081] FIG. 5 illustrates an exemplary method 500 for training of the software agent in accordance with an embodiment of the present disclosure.

[0082] In an embodiment, at block 502, a first set of data packets pertaining to one or more executable instructions is received by a software agent. At block 504, a second set of data packets from the first set of data packets is selected. The second set of data packets pertains to at least one of the one or more executable instructions. At block 506, the software agent is trained using training data. The training data may pertain to iterative execution of a set of tasks associated with the selected at least one of the one or more executable instructions. At block 508, in response to receiving a third set of data packets pertaining to the set of tasks, the trained software agent performs the set of tasks.
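Method 500 can be sketched end to end: select the relevant packets from the first set (block 504), train on the task sequences they carry (block 506), then perform the learned tasks when the third set of packets arrives (block 508). The packet structure, instruction names, and helper functions are illustrative assumptions only.

```python
# Sketch of method 500's data-packet flow. Packet fields ("instruction",
# "task") and all instruction names are illustrative assumptions.

def select_packets(first_set, instruction_id):
    """Block 504: select packets pertaining to a particular instruction."""
    return [p for p in first_set if p["instruction"] == instruction_id]

def train(second_set):
    """Block 506: map each instruction to its observed task sequence."""
    model = {}
    for p in second_set:
        model.setdefault(p["instruction"], []).append(p["task"])
    return model

def perform(model, third_set):
    """Block 508: replay the learned tasks for incoming packets."""
    return [model[p["instruction"]] for p in third_set if p["instruction"] in model]

first = [
    {"instruction": "invoice", "task": "open_form"},
    {"instruction": "invoice", "task": "submit"},
    {"instruction": "report", "task": "export"},
]
model = train(select_packets(first, "invoice"))
print(perform(model, [{"instruction": "invoice"}]))  # [['open_form', 'submit']]
```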
[0083] As an advantage, the video-based user simulation can help processes and businesses achieve effective and efficient automation of the software agent in a short span of time and without much human effort.
[0084] The advantages herein are numerous. One significant advantage is the video-based user simulation of the software agent, which can help processes and businesses achieve effective and efficient automation of the software agent. Another advantage is the provision of a method and system where the instructions and the tasks can be built into the software agent by machine learning. When the software agent learns that the instructions and the tasks have changed, the instructions and the tasks are re-learned by the software agent.
[0085] FIG. 6 illustrates an exemplary computer system 600 to implement the proposed system in accordance with embodiments of the present disclosure.
[0086] As shown in FIG. 6, the computer system can include an external storage device 610, a bus 620, a main memory 630, a read-only memory 640, a mass storage device 650, a communication port 660, and a processor 670. A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. Examples of processor 670 include, but are not limited to, an Intel® Itanium® or Itanium 2 processor(s), AMD® Opteron® or Athlon MP® processor(s), Motorola® lines of processors, FortiSOC™ system-on-a-chip processors or other future processors. Processor 670 may include various modules associated with embodiments of the present invention. Communication port 660 can be any of an RS-232 port for use with a modem-based dial-up connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. Communication port 660 may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system connects.
[0087] Memory 630 can be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. Read-only memory 640 can be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or BIOS instructions for processor 670. Mass storage 650 may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), e.g. those available from Seagate (e.g., the Seagate Barracuda 7102 family) or Hitachi (e.g., the Hitachi Deskstar 7K1000), one or more optical discs, or Redundant Array of Independent Disks (RAID) storage, e.g. an array of disks (e.g., SATA arrays), available from various vendors including Dot Hill Systems Corp., LaCie, Nexsan Technologies, Inc. and Enhance Technology, Inc.

[0088] Bus 620 communicatively couples processor(s) 670 with the other memory, storage and communication blocks. Bus 620 can be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB or the like, for connecting expansion cards, drives and other subsystems, as well as other buses, such as a front-side bus (FSB), which connects processor 670 to the software system.
[0089] Optionally, operator and administrative interfaces, e.g. a display, keyboard, and a cursor control device, may also be coupled to bus 620 to support direct operator interaction with the computer system. Other operator and administrative interfaces can be provided through network connections connected through communication port 660. External storage device 610 can be any kind of external hard drive, floppy drive, IOMEGA® Zip Drive, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), or Digital Video Disk - Read Only Memory (DVD-ROM). The components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
[0090] Thus, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named.
[0091] While embodiments of the present invention have been illustrated and described, it will be clear that the invention is not limited to these embodiments only.
Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the invention, as described in the claims.
[0092] In the foregoing description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that the present disclosure can be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, to avoid obscuring the present invention.
[0093] While the foregoing describes various embodiments of the disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof. The disclosure is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the disclosure when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE INVENTION
[0094] The present disclosure provides a mechanism to use video-based simulation for training the software agent to perform repetitive tasks being executed on an application.
[0095] The present disclosure provides a mechanism to train the software agent to perform repetitive tasks without additional human effort.
[0096] The present disclosure provides a mechanism to use the software agent to perform rule-based, regular tasks that require manual inputs.
[0097] The present disclosure provides a mechanism to use the software agent to perform iterative tasks on an application with minimal error and with improved quality while maintaining a detailed audit trail.
[0098] The present disclosure provides self-certification training for the software agent while working on a business application workflow.
[0099] The present disclosure provides a mechanism to determine when the software agent has reached a training threshold and to enable the software agent to stall the training at that point.

Claims (5)

  1. A method comprising: receiving, at one or more processors and by a software agent, a first set of data packets pertaining to one or more executable instructions; selecting, at the one or more processors, a second set of data packets from the first set of data packets, where the second set of data packets pertains to at least one of the one or more executable instructions; training, at the one or more processors, the software agent using training data, where the training data pertains to iterative execution of a set of tasks associated with the selected at least one of the one or more executable instructions; and in response to receiving a third set of data packets pertaining to the set of tasks, performing, at the one or more processors, by the trained software agent, the set of tasks.
  2. The method as claimed in claim 1, wherein the first set of data packets pertaining to the one or more executable instructions and the second set of data packets pertaining to at least one of the one or more executable instructions are executed on a computing device.
  3. The method as claimed in claim 1, wherein the training is triggered by an indication being selected at the computing device.
  4. The method as claimed in claim 1, wherein the first set of data packets pertaining to the one or more executable instructions are determined from a real-time video, where the video is related to simulation of the one or more executable instructions.
  5. The method as claimed in claim 4, wherein the one or more executable instructions are executed serially or in parallel.
  6. The method as claimed in claim 5, wherein the software agent is trained based on a detected dynamic change in the execution of the one or more executable instructions.
  7. The method as claimed in claim 1, wherein the trained software agent performs the set of tasks upon reaching a training threshold value.
  8. The method as claimed in claim 1, wherein the trained software agent automatically initiates execution of the one or more executable instructions without receiving human input.
  9. A system comprising: a processing unit operatively coupled to the audio capturing unit, the processing unit comprising a processor communicatively coupled to a memory, the memory storing a set of instructions executable by the processor, wherein, when the system is in operation, the processor is configured to execute the set of instructions to enable the processing unit to: receive, at a software agent, a first set of data packets pertaining to one or more executable instructions; select a second set of data packets from the first set of data packets, where the second set of data packets pertains to at least one of the one or more executable instructions; train the software agent using training data, where the training data pertains to iterative execution of a set of tasks associated with the selected at least one of the one or more executable instructions; and in response to receiving a third set of data packets pertaining to the set of tasks, perform, by the trained software agent, the set of tasks.
  10. The system as claimed in claim 9, wherein the trained software agent performs the set of tasks upon reaching a training threshold value.
GB2017505.5A 2019-12-18 2020-11-05 System and method for training of a software agent Active GB2590542B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
IN201941052480 2019-12-18

Publications (3)

Publication Number Publication Date
GB202017505D0 GB202017505D0 (en) 2020-12-23
GB2590542A true GB2590542A (en) 2021-06-30
GB2590542B GB2590542B (en) 2022-09-07

Family

ID=74046384

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2017505.5A Active GB2590542B (en) 2019-12-18 2020-11-05 System and method for training of a software agent

Country Status (1)

Country Link
GB (1) GB2590542B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11042784B2 (en) * 2017-09-15 2021-06-22 M37 Inc. Machine learning system and method for determining or inferring user action and intent based on screen image analysis

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
GB202017505D0 (en) 2020-12-23
GB2590542B (en) 2022-09-07

Similar Documents

Publication Publication Date Title
US10592808B2 (en) Predictive model scoring to optimize test case order in real time
US10146558B2 (en) Application documentation effectiveness monitoring and feedback
CN105556482A (en) Monitoring mobile application performance
US11144289B1 (en) Dynamic automation of selection of pipeline artifacts
US10089661B1 (en) Identifying software products to test
US20240061688A1 (en) Automated generation of early warning predictive insights about users
EP4018399A1 (en) Modeling human behavior in work environments using neural networks
CN115474440A (en) Task and process mining across computing environments through robotic process automation
US11055204B2 (en) Automated software testing using simulated user personas
US20140278338A1 (en) Stream input reduction through capture and simulation
US11431557B1 (en) System for enterprise event analysis
US20200410387A1 (en) Minimizing Risk Using Machine Learning Techniques
Safwat et al. Addressing challenges of ultra large scale system on requirements engineering
JP2013030036A (en) Process control system, process control method, program, and process control device
US20220214948A1 (en) Unsupervised log data anomaly detection
Jha et al. From theory to practice: Understanding DevOps culture and mindset
Meyer et al. Detecting developers’ task switches and types
US11775419B2 (en) Performing software testing with best possible user experience
US11237890B2 (en) Analytics initiated predictive failure and smart log
CN117234844A (en) Cloud server abnormality management method and device, computer equipment and storage medium
GB2590542A (en) System and method for training of a software agent
Bodik Automating datacenter operations using machine learning
WO2022022572A1 (en) Calculating developer time during development process
US20230350392A1 (en) Method and system for seamless transition of runtime system from controller device to digitalization platform
US11729068B2 (en) Recommend target systems for operator to attention in monitor tool