NZ785402A - Method and system for evaluating performance of developers using artificial intelligence (ai) - Google Patents
- Publication number
- NZ785402A
- Authority
- NZ
- New Zealand
- Prior art keywords
- developers
- performance
- developed
- product
- category
- Prior art date
Abstract
A method and system for evaluating performance of developers using Artificial Intelligence (AI) is disclosed. In some embodiments, the method includes receiving each of a plurality of performance parameters associated with a set of developers. The method further includes creating one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters. The method further includes assessing the one or more feature vectors, based on a first pre-trained machine learning model. The method further includes classifying the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors. The method further includes evaluating the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying.
Description
PATENT APPLICATION
METHOD AND SYSTEM FOR EVALUATING PERFORMANCE OF
DEVELOPERS USING ARTIFICIAL INTELLIGENCE (AI)
NAVIN SABHARWAL
AMIT AGRAWAL
DESCRIPTION
Technical Field
Generally, the disclosure relates to Artificial Intelligence (AI). More
specifically, the disclosure relates to a method and system for evaluating performance of
employees (such as, developers) using AI based technologies.
Background
Typically, software developers may be evaluated for performance on various factors, such as, modules, applications or features of products developed by them, work experience in a current organization, and overall experience as a Subject Matter Expert (SME) in a particular technology. Traditionally, in a software development process, multiple software developers may work on development of features of the product(s) assigned to them as per their skillsets. This makes it difficult for reviewers to manually provide feedback for performance evaluation to the software developers for issues faced in the developed modules and applications while keeping track of the individual performance of each software developer at the same time. However, such performance evaluation may be crucial in any organization for factors, such as, but not limited to, providing an appraisal or a rating that clearly distinguishes an individual software developer amongst others and also providing training to the software developer to enhance the skillset, based on the performance evaluation.
Accordingly, there is a need for a method and system for evaluating the performance of software developers.
SUMMARY OF INVENTION
In accordance with one embodiment, a method of evaluating performance of developers using AI is disclosed. The method may include receiving each of a plurality of performance parameters associated with a set of developers. The method may include creating one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters. It should be noted that the one or more feature vectors are created based on a first pre-trained machine learning model. The method may include assessing the one or more feature vectors, based on the first pre-trained machine learning model. The method may include classifying the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors. The method may include evaluating the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying.
In another embodiment, a system for evaluating performance of developers using Artificial Intelligence (AI) is disclosed. The system includes a processor and a memory communicatively coupled to the processor. The memory may store processor-executable instructions, which, on execution, may cause the processor to receive each of a plurality of performance parameters associated with a set of developers. The processor-executable instructions, on execution, may further cause the processor to create one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters. It should be noted that the one or more feature vectors are created based on a first pre-trained machine learning model. The processor-executable instructions, on execution, may further cause the processor to assess the one or more feature vectors, based on the first pre-trained machine learning model. The processor-executable instructions, on execution, may further cause the processor to classify the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors. The processor-executable instructions, on execution, may further cause the processor to evaluate the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying.
In yet another embodiment, a non-transitory computer-readable medium storing computer-executable instructions for evaluating performance of developers using Artificial Intelligence (AI) is disclosed. The stored instructions, when executed by a processor, may cause the processor to perform operations including receiving each of a plurality of performance parameters associated with a set of developers. The operations may further include creating one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters. It should be noted that the one or more feature vectors are created based on a first pre-trained machine learning model. The operations may further include assessing the one or more feature vectors, based on the first pre-trained machine learning model. The operations may further include classifying the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors. The operations may further include evaluating the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure can be best understood by reference to the following description taken in conjunction with the accompanying drawing figures, in which like parts may be referred to by like numerals.
FIG. 1 illustrates a functional block diagram of an AI based evaluation system for evaluating performance of developers, in accordance with an embodiment.
FIG. 2 illustrates a functional block diagram of various modules of an AI based evaluation system for evaluating performance of developers, in accordance with an embodiment.
FIG. 3 illustrates a flowchart of a method for evaluating performance of developers using AI, in accordance with an embodiment.
FIG. 4 illustrates a flowchart of a method for ranking developers for performance evaluation of the developers, in accordance with an embodiment.
FIG. 5 illustrates a flowchart of a method for generating a feedback for evaluating the performance of developers, in accordance with an embodiment.
FIGs. 6A – 6B illustrate tabular representations for input data corresponding to performance parameters of developers, in accordance with an exemplary embodiment.
FIG. 7 illustrates a tabular representation for identifying a plurality of bugs associated with a module of a product developed to evaluate performance of developers, in accordance with an embodiment.
FIG. 8 illustrates an AI based evaluation system trained on a reinforcement learning approach, in accordance with an exemplary embodiment.
FIG. 9 illustrates an evaluation system that uses inverse reinforcement learning to predict a set of algorithms to perform hyperparameter tuning, in accordance with an exemplary embodiment.
FIG. 10 illustrates creating a new environment for an evaluation system with a transfer learning approach, in accordance with an exemplary embodiment.
DETAILED DESCRIPTION OF THE DRAWINGS
The following description is presented to enable a person of ordinary skill
in the art to make and use the disclosure and is provided in the context of particular
applications and their requirements. Various modifications to the embodiments will be
readily apparent to those skilled in the art, and the generic principles defined herein may
be applied to other embodiments and applications without departing from the spirit and scope of the disclosure. Moreover, in the following description, numerous details are set forth for the purpose of explanation. However, one of ordinary skill in the art will realize that the disclosure might be practiced without the use of these specific details. In other instances, well-known structures and devices are shown in block diagram form in order not to obscure the description of the disclosure with unnecessary detail. Thus, the disclosure is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
While the disclosure is described in terms of particular examples and illustrative figures, those of ordinary skill in the art will recognize that the disclosure is not limited to the examples or figures described. Those skilled in the art will recognize that the operations of the various embodiments may be implemented using hardware, software, firmware, or combinations thereof, as appropriate. For example, some processes can be carried out using processors or other digital circuitry under the control of software, firmware, or hard-wired logic. (The term "logic" herein refers to fixed hardware, programmable logic and/or an appropriate combination thereof, as would be recognized by one skilled in the art to carry out the recited functions.) Software and firmware can be stored on computer-readable storage media. Some other processes can be implemented using analog circuitry, as is well known to one of ordinary skill in the art. Additionally, memory or other storage, as well as communication components, may be employed in embodiments of the invention.
The present disclosure tackles limitations of existing systems to facilitate performance evaluation of software developers (hereinafter referred to as developers) using an AI based evaluation system. The present disclosure evaluates performance of a set of developers based on a plurality of parameters associated with each of the set of developers. The plurality of performance parameters may include, but is not limited to, at least one of efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers. Further, the present disclosure may facilitate computation of ranks of each of the set of developers.
In accordance with an embodiment, the present disclosure may train the AI based evaluation system by exposing it to a new environment during an initial training process. In addition, the AI based system may identify a plurality of bugs associated with a module of a product developed by each of the set of developers, based on which a feedback is generated. Further, based on the generated feedback, performance of each of the set of developers may be evaluated to re-rank each of the set of developers. This has been explained in detail in conjunction with the accompanying figures.
Referring now to FIG. 1, a functional block diagram for a network environment 100 of an AI based evaluation system for evaluating performance of developers is illustrated, in accordance with an embodiment. With reference to FIG. 1, there is shown an AI based evaluation system 102 that includes a memory 104, a processor 106, I/O devices 108 and a machine learning (ML) model 112. The I/O devices 108 of the AI based evaluation system 102 may further include an I/O interface 110. Further, in the network environment 100, there is shown a server 114, a database 116, external devices 118 and a communication network 120 (hereinafter referred to as network 120).
The AI based evaluation system 102 may be communicatively coupled to the server 114, and the external devices 118, via the network 120. Further, the AI based evaluation system 102 may be communicatively coupled to the database 116 of the server 114, via the network 120. A user or an administrator (not shown in FIG. 1) may interact with the AI based evaluation system 102 via the user interface 110 of the I/O device 108.
The AI based evaluation system 102 may include suitable logic, circuitry, interfaces, and/or code that may be configured to evaluate performance of developers of an organization, based on a plurality of performance parameters associated with the developers. Such developers may be from a same team or a different team in the organization and working at different levels of a hierarchy in the organization. The plurality of performance parameters may include, but is not limited to, at least one of efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers.
Examples of the AI based evaluation system 102 may include, but are not limited to, a server, a desktop, a laptop, a notebook, a tablet, a smartphone, a mobile phone, an application server, or the like. By way of an example, the AI based evaluation system 102 may be implemented as a plurality of distributed cloud-based resources by use of several technologies that are well known to those skilled in the art. Other examples of implementation of the AI based evaluation system 102 may include, but are not limited to, a web/cloud server and a media server.
The I/O device 108 may be configured to provide inputs to the AI based evaluation system 102 and render output on user equipment. By way of an example, the user may provide inputs, i.e., the plurality of performance parameters, via the I/O device 108. In addition, the I/O device 108 may be configured to provide ranks to the developers based on performance evaluation of the developers by the AI based evaluation system 102. Further, the I/O device 108 may be configured to display results (i.e., the set of performance categories associated with each of the set of developers) based on the evaluation performed by the AI based evaluation system 102, to the user. By way of another example, the user interface 110 may be configured to provide inputs from users to the AI based evaluation system 102. Thus, for example, in some embodiments, the AI based evaluation system 102 may ingest the one or more performance parameters via the user interface 110. Further, for example, in some embodiments, the AI based evaluation device 102 may render intermediate results (e.g., one or more feature vectors created for each of the set of developers, the set of performance categories, and one or more features determined for each of the plurality of performance parameters) or final results (e.g., classification of each of the set of developers in one of the set of performance categories, and results of evaluation of each of the set of developers) to the user via the user interface 110.
The memory 104 may store instructions that, when executed by the processor 106, may cause the processor 106 to evaluate performance of each of the set of developers. The processor 106 may evaluate the performance of each of the set of developers based on a plurality of performance parameters associated with the set of developers, in accordance with some embodiments. As will be described in further detail below, in order to evaluate the performance of each of the set of developers, the processor 106 in conjunction with the memory 104 may perform various functions including creation of one or more feature vectors, assessment of the one or more feature vectors, classification of the set of developers, and evaluation of the set of developers.
The memory 104 may also store various data (e.g., the plurality of performance categories, the one or more feature vectors, ranks for each developer from the set of developers, a predefined evaluation criterion, etc.) that may be captured, processed, and/or required by the AI based evaluation system 102. The memory 104 may be a non-volatile memory (e.g., flash memory, Read Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically EPROM (EEPROM), etc.) or a volatile memory (e.g., Dynamic Random-Access Memory (DRAM), Static Random-Access Memory (SRAM), etc.).
In accordance with an embodiment, the AI based evaluation system 102 may be configured to deploy the ML model 112 to use output of the ML model to generate real or near-real time inferences, take decisions, or output prediction results. The ML model 112 may be deployed on the AI based evaluation system 102 once the ML model 112 is trained on the AI based evaluation system 102 for the classification task associated with evaluation of performance of developers.
In accordance with one embodiment, the machine learning model 112 may correspond to a first pre-trained machine learning model. In accordance with an embodiment, the first pre-trained machine learning model may correspond to an attention based deep neural network model that classifies a developer into a particular category of evaluation. Examples of the attention based deep neural network model include, but are not limited to, Long Short-Term Memory (LSTM) and LSTM – GRU (Long Short-Term Memory – Gated Recurrent Units) neural networks. The machine learning model 112 may be configured to determine one or more features for each of the plurality of performance parameters. The machine learning model 112 may be configured to determine the one or more features in order to assist the AI based evaluation system 102 to create the one or more feature vectors. In accordance with another embodiment, the machine learning model 112 may correspond to a second machine learning model. The machine learning model 112 may be trained by assigning weights to the one or more features associated with each of the plurality of performance parameters based on a predefined evaluation criterion. The predefined evaluation criterion may include one or more of a technical skill in demand and an efficiency of a developed product with respect to bugs identified in the developed product.
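By way of a purely illustrative, non-limiting sketch, such a criterion-driven weighting may be approximated as follows. The feature names and weight values below are hypothetical assumptions for illustration; in the disclosure, the weights are learned by the second machine learning model rather than fixed by hand.

```python
# Hypothetical weights illustrating the predefined evaluation criterion:
# features tied to an in-demand technical skill and to bug-free, efficient
# products are weighted higher. Names and values are illustrative only.
WEIGHTS = {
    "skill_in_demand": 0.40,
    "efficiency_bug_free": 0.35,
    "module_quality": 0.15,
    "peer_support": 0.10,
}

def weighted_score(features):
    """Combine normalized feature values (each in 0..1) into a single score."""
    return sum(WEIGHTS[name] * value for name, value in features.items())
```

In this sketch, a developer whose features all equal 1.0 attains the maximum score of 1.0, and a developer strong only in an in-demand skill is dominated by the corresponding weight.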
Further, the AI based evaluation system 102 may interact with the server
114 or the external device 118 over the network 120 for sending and receiving various
types of data. The external device 118 may include, but is not limited to, a desktop, a laptop, a notebook, a netbook, a tablet, a smartphone, a remote server, a mobile phone, or another computing system/device.
The network 120, for example, may be any wired or wireless communication network and the examples may include, but are not limited to, the Internet, Wireless Local Area Network (WLAN), Wi-Fi, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), and General Packet Radio Service (GPRS). In some embodiments, the AI based evaluation system 102 may fetch information associated with the developers from the server 114, via the network 120. The database 116 may store information associated with existing technologies or the new technology in demand.
In operation, the AI based evaluation system 102 may be configured to receive the plurality of performance parameters associated with a set of developers. The AI based evaluation system 102 may be further configured to create one or more feature vectors corresponding to each of the plurality of performance parameters. The AI based evaluation system 102 may create the one or more feature vectors based on one or more features determined for each of the plurality of performance parameters. Further, the AI based evaluation system 102 may assess the one or more feature vectors. The AI based evaluation system 102 may then classify each of the set of developers into one of a set of performance categories. In accordance with an embodiment, the set of performance categories may include an excellent performer category, a good performer category, an average performer category, and a bad performer category. Thereafter, the AI based evaluation system 102 may evaluate the performance of at least one of the set of developers, based on an associated category in the set of performance categories. In order to evaluate the performance, the AI based evaluation system 102 may compute ranks for each developer from the set of developers categorized within an associated performance category. Based on the computed ranks, the AI based evaluation system 102 may rank each developer from the set of developers associated with each of the set of performance categories. In addition, the AI based evaluation system 102 may generate a feedback for each of the set of developers. The AI based evaluation system 102 may generate the feedback based on a plurality of bugs identified for a module of a product developed by each of the set of developers. The AI based evaluation system 102 may then evaluate performance of each of the set of developers based on the generated feedback. This is further explained in detail in conjunction with the figures below.
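The operational flow above (classify each developer into one of the four categories, then rank developers within each category) may be sketched in simplified form as follows. The score thresholds are hypothetical assumptions for illustration; the disclosure leaves the actual classification boundaries to the trained models.

```python
# Illustrative end-to-end flow: classify each developer into one of the four
# disclosed performance categories, then rank within each category.
# The thresholds below are hypothetical, not specified by the disclosure.
CATEGORIES = ["excellent performer", "good performer",
              "average performer", "bad performer"]

def classify(score):
    """Map an assessment score in [0, 1] onto a performance category."""
    if score >= 0.75:
        return CATEGORIES[0]
    if score >= 0.50:
        return CATEGORIES[1]
    if score >= 0.25:
        return CATEGORIES[2]
    return CATEGORIES[3]

def evaluate(developers):
    """developers: {name: assessment score}. Returns per-category rankings."""
    by_category = {c: [] for c in CATEGORIES}
    for name, score in developers.items():
        by_category[classify(score)].append((score, name))
    # Rank developers inside each category by descending score.
    return {c: [name for _, name in sorted(members, reverse=True)]
            for c, members in by_category.items()}
```

For instance, two developers scored 0.9 and 0.8 would both fall into the excellent performer category and be ranked first and second within it.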
Referring now to FIG. 2, a functional block diagram of various modules within a memory of an AI based evaluation system for evaluating performance of developers is illustrated, in accordance with an embodiment of the present disclosure. FIG. 2 is explained in conjunction with FIG. 1.
With reference to FIG. 2, there is shown input data 202, a database 204, and the memory 104 that includes a reception module 206, a feature vector creation module 208, an assessment module 210, a classification module 212, an evaluation module 214, a training module 220, an identification module 222, and a feedback generation module 224. The feedback generation module 224 may receive a feedback from a user 226. The user 226 may correspond to a manager, a reviewer, a supervisor, or a developer. In addition, the user 226 may be working in the same team as the set of developers or may be working in any other team of the organization. The evaluation module 214 may further include a computing module 216 and a ranking module 218. In accordance with an embodiment, the memory 104 may include the database 204. The modules 206-224 may include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. The modules 206-224 described herein may be implemented as software programs that may be executed in a cloud-based computing environment of the AI based evaluation system 102.
In accordance with an embodiment, the memory 104 may be configured to receive the input data 202. The input data 202 may correspond to data associated with a plurality of performance parameters of a set of developers. In an embodiment, the plurality of performance parameters may include, but is not limited to, at least one of efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers. The memory 104 may be configured to receive the input data 202 in the database 204 from the external device 118. Additionally, the input data may include information associated with the set of developers.
The database 204 may serve as a repository for storing data processed, received, and generated by the modules 206-224. The data generated as a result of the execution of the modules 206-224 may be stored in the database 204.
During operation, the reception module 206 may be configured to receive the plurality of performance parameters associated with each of the set of developers as the input data 202. The plurality of performance parameters may include, but is not limited to, at least one of efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers. In accordance with an embodiment, complexity of the developed product may correspond to complexity in code snippets of the developed product. The efficiency of the developed product associated with the module developed for the product may be based on faulty codes or bug free codes associated with the developer. For example, the AI based evaluation system 102 captures performance parameters for a developer who works on multiple modules of a single product, multiple products or an application in various programming languages or technologies and may evaluate the performance of the developer, which is an arduous task when done manually. In accordance with an embodiment, the AI based evaluation system 102 may be configured to use the performance parameter corresponding to the technical skills of developers to identify developers of a similar skill set.
The plurality of performance parameters may correspond to tabular data shown in FIGs. 6A – 6B. Such tabular data corresponding to the plurality of performance parameters may reflect ordinal values of the plurality of performance parameters. The reception module 206 may be configured to pre-process the input data 202 of the plurality of performance parameters with ordinal values into numerical values. For example, the performance parameter corresponding to complexity of the developed product may be a high, medium, or low value, which is converted to 1, 2, and 3 respectively by the reception module 206. Further, in another example, feedback or rating received from managers may take any value between 1 to 5. '1' may represent the lowest rating received from managers, while '5' may represent the highest rating received from managers. The rating may be provided by managers based on quality of delivery of the module of the product. Moreover, in some embodiments, the definition of lowest and highest may be different. In accordance with an embodiment, the input data 202 of the plurality of performance parameters with ordinal values may be converted into a passable format for a first pre-trained machine learning model by using one hot representation for the input data 202. The one hot representation (also known as one hot embedding) may map input data 202, which may be categorical value data, into a Neural Network passable format. Such a format may allow training of an embedding layer of the first pre-trained machine learning model for each of the plurality of performance parameters. The one hot representation may represent a low dimensional embedding which, on being fed to hidden layers of the first pre-trained machine learning model, can handle a much smaller size of preprocessed input data as compared to the input data with ordinal values.
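A minimal, illustrative sketch of this ordinal-to-numerical conversion followed by one hot representation is given below. The ordinal scales follow the examples in the preceding paragraph; the function names are hypothetical, not part of the disclosure.

```python
# Ordinal -> numerical mapping for the complexity parameter, as described:
# high -> 1, medium -> 2, low -> 3.
COMPLEXITY = {"high": 1, "medium": 2, "low": 3}

def one_hot(index, size):
    """Return a one hot list of length `size` with a 1.0 at `index`."""
    vec = [0.0] * size
    vec[index] = 1.0
    return vec

def encode_parameters(complexity, manager_rating):
    """Convert ordinal inputs into a Neural Network passable representation."""
    c = one_hot(COMPLEXITY[complexity] - 1, len(COMPLEXITY))
    r = one_hot(manager_rating - 1, 5)  # manager ratings take values 1 to 5
    return c + r
```

The concatenated vector (three complexity slots plus five rating slots) is the kind of low dimensional, network-passable format that an embedding layer can then consume.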
Further, the feature vector creation module 208 may be configured to determine one or more features for each of the plurality of performance parameters corresponding to the preprocessed input data. In accordance with an embodiment, the feature vector creation module 208 may be configured to determine the one or more features based on the first pre-trained machine learning model. As will be appreciated, the first pre-trained machine learning model may correspond to any deep neural network model (for example, an attention based deep neural network model or a Convolution Neural Network (CNN) model).
In accordance with an embodiment, the feature vector creation module 208 may be further configured to create one or more feature vectors corresponding to the determined one or more features. In accordance with another embodiment, the feature vector creation module 208 may be configured to create the one or more feature vectors based on the first pre-trained machine learning model. The feature vectors created for each of the plurality of performance parameters may be stored in the database 204 for further computation. It may be noted that the process of storing the feature vectors in the database 204 may continue until the feature vector corresponding to each of the plurality of performance parameters is created and stored. The feature vectors stored in the database 204 may further be utilized by the assessment module 210.
In addition, the feature vector creation module 208 may be configured to send the one or more feature vectors to the training module 220. The assessment module 210 may be configured to receive each of the one or more feature vectors from the feature vector creation module 208. Upon receiving the one or more feature vectors, the assessment module 210 may be configured to perform assessment of each of the one or more feature vectors. In an embodiment, the assessment module 210 may perform the assessment based on the first pre-trained machine learning model. Further, the assessment module 210 may be configured to send a result of the assessment of each of the one or more feature vectors to the classification module 212.
The classification module 212 may be configured to receive the result of the assessment of each of the one or more feature vectors from the assessment module 210. The classification module 212 may be configured to classify each of the set of developers into one of a set of performance categories, based on the assessment of each of the one or more feature vectors. The performance categories may include an excellent performer, a good performer, an average performer, and a bad performer. In accordance with an embodiment, the classification module 212 may classify the set of developers into one of the excellent performer, the good performer, the average performer, and the bad performer categories, based on the assessing of the one or more feature vectors. Further, the classification module 212 may be configured to send a result of the classification of the set of developers to the evaluation module 214. In addition, the classification module 212 may be configured to send the result of the classification to the identification module 222.
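As an illustrative sketch of the final classification step, a four-class output head over the model's raw scores might look as follows. The attention based model itself is not reproduced here; the logits and the softmax head are assumptions about a typical classification layer, not details taken from the disclosure.

```python
import math

# The four performance categories named in the disclosure.
CATEGORIES = ("excellent performer", "good performer",
              "average performer", "bad performer")

def softmax(logits):
    """Normalize raw model outputs (logits) into class probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Pick the performance category with the highest probability."""
    probs = softmax(logits)
    return CATEGORIES[probs.index(max(probs))]
```

A developer whose first logit dominates would thus be assigned to the excellent performer category.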
The evaluation module 214 may be configured to receive the result of
classification from the classification module 212. In addition, the evaluation module 214
may be configured to receive input from the training module 220. In an embodiment, the
training module 220 may correspond to a machine learning model (such as, the second
machine learning model). As will be appreciated, the second pre-trained machine
learning model may correspond to any deep neural network model (for example, an
attention based deep neural network model or a Convolutional Neural Network (CNN)
model). The second machine learning model may be trained by assigning weights to the
one or more features associated with each of the plurality of performance parameters
based on a predefined evaluation criterion. The predefined evaluation criteria may
include one or more of a technical skill in demand and an efficiency of a developed
product with respect to bugs identified in the developed product.
Further, the evaluation module 214 may be configured to evaluate the
performance of at least one of the set of developers. The evaluation module 214 may
evaluate the performance of at least one of the set of developers based on an associated
category in the set of performance categories, in response to the classification. In
accordance with an embodiment, output data corresponding to the evaluation of the
performance of at least one of the set of developers may be rendered on a user device.
Such output data may facilitate identification of employees in need of training on a
particular technology. Further, the output data may provide insights for collaboration
amongst employees of an organization.
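The identification of employees in need of training described above amounts to filtering the classification output. A minimal Python sketch, with hypothetical developer IDs and category labels taken from the four categories named in this disclosure:

```python
# Hypothetical classification output; the developer IDs and the mapping of
# developers to categories are illustrative, not from the disclosure.
classified = {
    "D1": "excellent performer",
    "D2": "average performer",
    "D3": "bad performer",
}

def training_candidates(classification,
                        flagged=("average performer", "bad performer")):
    """Return developer IDs whose category suggests a need for training."""
    return sorted(dev for dev, cat in classification.items() if cat in flagged)

print(training_candidates(classified))  # ['D2', 'D3']
```

A manager could apply the same filter per technology column to target training on a specific skill.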
In order to evaluate performance of at least one of the set of developers,
the computing module 216 within the evaluation module 214 may be configured to
compute ranks for each developer from the set of developers categorized within an
associated performance category, for each of the set of performance categories. In an
embodiment, the computing module 216 may compute ranks based on the second
machine learning model. In accordance with an embodiment, the second machine
learning model may correspond to any rank based neural network model (for example,
RankNet). By way of an example, the computing module 216 may compute ranks for
each developer from the set of developers based on the predefined evaluation criteria.
In accordance with an embodiment, high weights are assigned to one or
more features associated with at least one of a high demand technical skill, as
compared to a low demand technical skill, and a bug-free developed product, as compared
to a developed product with a plurality of bugs, by using the second machine learning
model. In accordance with an exemplary embodiment, the computing module 216 may
be configured to compute a re-rank of developers under a same category (say, an
"Excellent Performer" category). For example, there may exist two
developers under the "Excellent Performer" category; however, the two developers may
be compared for evaluation of performance on the basis of bugs or no bugs reported in their
respective developed modules. In accordance with another exemplary embodiment,
some of the developers may be skilled in the latest technologies, such as Machine
Learning, Artificial Intelligence, and Natural Language Processing. The developers
delivering solutions in the latest technologies may be given more attention as compared
to their counterpart developers. This is explained in detail in conjunction with FIG. 6C.
Once the computing module 216 computes ranks for each developer from the set of
developers, the computing module 216 may send the computed ranks to the ranking
module 218.
The ranking module 218 within the evaluation module 214 may be
configured to receive the computed rank for each developer from the set of developers
from the computing module 216. Further, the ranking module 218 may rank each
developer from the set of developers for each of the set of performance categories, based
on the computed ranks. In an embodiment, the ranking module 218 may provide ranks
for each developer in order to evaluate performance of each developer from the set of
developers.
The training module 220 may be configured to receive the one or more
feature vectors from the feature vector creation module 208. Based on the one or more
feature vectors received, the training module 220 may train the second machine learning
model. In an embodiment, the training module 220 may train the second machine
learning model by assigning weights to the one or more features associated with each
of the plurality of performance parameters based on the predefined evaluation criterion.
In accordance with an embodiment, the training module 220 may be configured to train
the first pre-trained machine learning model.
The identification module 222 may be configured to receive the result of
classification from the classification module 212. Further, the identification module 222
may be configured to identify a plurality of bugs associated with a module of a product
developed by each of the set of developers. In addition, the identification module 222
may be configured to send the plurality of bugs identified to the feedback generation
module 224.
The feedback generation module 224 may be configured to generate a
feedback for each of the set of developers, based on the identified plurality of bugs. In
an embodiment, the feedback generation module 224 may generate the feedback in
response to identifying the plurality of bugs associated with the product developed by
each of the set of developers. Moreover, the feedback generation module 224 may
receive a feedback from the user 226. The user 226 may correspond to a manager, a
reviewer, a supervisor, or a developer. In addition, the user 226 may be working in the
same team as the set of developers or may be working in any other team of the
organization. Thereafter, the feedback generation module 224 may be configured to send
the generated feedback to the evaluation module 214. In an embodiment, the feedback
generation module 224 may send the generated feedback to the evaluation module 214
in order to evaluate the performance of at least one of the set of developers.
In particular, as will be appreciated by those of ordinary skill in the art,
various modules 206-224 for performing the techniques and steps described herein may
be implemented in the AI based evaluation system 102, either by hardware, software, or
combinations of hardware and software. For example, suitable code may be accessed
and executed by the one or more processors on the AI based evaluation system 102 to
perform some or all of the techniques described herein. Similarly, application specific
integrated circuits (ASICs) configured to perform some or all of the processes described
herein may be included in the one or more processors on the host computing system.
Even though FIGs. 1-2 describe the AI based evaluation systems 102 and 200, the
functionality of the components of the AI based evaluation systems 102 and 200 may be
implemented in any computing device.
Referring now to FIG. 3, a flowchart of a method for evaluating performance of
developers using AI is illustrated, in accordance with an embodiment. FIG. 3 is explained
in conjunction with FIG. 1 and FIG. 2.
With reference to FIG. 3, the performance of developers may be evaluated
based on various entities of the network environment 100 (for example, the AI based
evaluation system 102 and the server 114). Moreover, various modules depicted within
the memory 104 of the AI based evaluation system 102 in FIG. 2 may be configured to
perform each of the steps mentioned in the present flowchart.
At step 302, a plurality of performance parameters may be received. Each
of the plurality of performance parameters received may be associated with the set of
developers. In an embodiment, each of the performance parameters may include, but is
not limited to, at least one of efficiency of a developed product associated with a module
developed for a product, complexity of the developed product, types of support received
from peers, feedback or rating received from managers, quality of the module developed
for the product, and technical skills of each of the set of developers.
At step 304, one or more feature vectors may be created, corresponding to
each of the plurality of performance parameters. Moreover, the one or more feature
vectors may be created based on one or more features determined for each of the
plurality of performance parameters. In an embodiment, the one or more feature vectors
may be created based on a first pre-trained machine learning model.
With reference to FIG. 3, the proposed AI based evaluation system 102
may be order agnostic and need not depend on a specific order of values being
implemented. At step 306, each of the one or more feature vectors may be assessed
based on the first pre-trained machine learning model.
At step 308, each of the set of developers may be classified into one of a
set of performance categories. In an embodiment, the set of performance categories may
include an excellent performer category, a good performer category, an average
performer category, and a bad performer category. By way of an example, the excellent
performer category refers to a group of developers that may have received the top rating
during evaluation of performance. Further, the bad performer category may refer to a group of
developers that may have received the lowest rating during evaluation of performance. In an
embodiment, in order to classify each of the set of developers into at least one of the set
of performance categories, the classification may be based on a deep learning recurrent
neural network. Examples of the deep learning model may include a Long Short-Term
Memory (LSTM) model and a Long Short-Term Memory – Gated Recurrent Units (LSTM-GRU) model.
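In practice, the LSTM-based classifier described above produces a score or probability per developer; as a simplified stand-in for that model (not the disclosed network), a scalar output score can be thresholded into the four categories. The 0.25/0.5/0.75 cut points below are illustrative assumptions:

```python
def classify(score):
    """Map a model output score in [0, 1] to one of the four performance
    categories; the threshold values are illustrative, not from the disclosure."""
    if score >= 0.75:
        return "excellent performer"
    if score >= 0.5:
        return "good performer"
    if score >= 0.25:
        return "average performer"
    return "bad performer"

print(classify(0.9))  # excellent performer
print(classify(0.1))  # bad performer
```

A real deployment would replace `classify` with the trained recurrent network's argmax over category logits.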
At step 310, the performance of at least one of the set of developers may
be evaluated. In an embodiment, the performance of at least one developer may be
evaluated based on an associated category in the set of performance categories, in
response to the classification. The process of evaluating the performance of at least one
developer from the set of developers has been explained in greater detail in conjunction
with FIG. 4 and FIG. 5.
Referring now to FIG. 4, a flowchart of a method for ranking developers for
performance evaluation of the developers is illustrated, in accordance with an
embodiment. FIG. 4 is explained in conjunction with FIG. 1 to FIG. 3.
With reference to FIG. 4, in order to evaluate the performance of at least
one of the set of developers as mentioned in step 310, a second machine learning model
may be trained. The second machine learning model may correspond to the machine
learning model 112. In an embodiment, in order to train the second machine learning
model, weights may be assigned to the one or more features associated with each
of the plurality of performance parameters. Moreover, weights for the one or more features
may be assigned based on a predefined evaluation criterion. The predefined evaluation
criteria may include, but are not limited to, one or more of a technical skill in demand and
an efficiency of a developed product with respect to bugs identified in the developed
product. Examples of the technical skill may include, but are not limited to, Java, Python,
AI, dynamic programming, Natural Language Processing (NLP), Machine Learning (ML),
and Structured Query Language (SQL) database. In an embodiment, high weights may
be assigned to one or more features associated with a high demand
technical skill as compared to a low demand technical skill. In addition, higher weights
may be assigned to a bug-free developed product as compared to a developed product
with a plurality of bugs.
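The weight-assignment step can be sketched as follows. The two criteria (skill demand and bug-free delivery) follow the predefined evaluation criteria described above, but the specific weight values and skill names are illustrative assumptions:

```python
# Illustrative demand weights per technical skill (not from the disclosure).
SKILL_WEIGHT = {"machine learning": 1.0, "python": 0.8, "legacy tooling": 0.3}

def weighted_score(skills, bug_free):
    """Sum skill-demand weights; unknown skills get a neutral 0.5, and a
    bug-free developed product earns an extra bonus weight."""
    score = sum(SKILL_WEIGHT.get(s, 0.5) for s in skills)
    if bug_free:
        score += 1.0  # bug-free product outweighs a product with bugs
    return score

print(weighted_score(["machine learning", "python"], bug_free=True))  # 2.8
```

In the actual system these weights would be learned during training of the second machine learning model rather than fixed by hand.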
At step 404, ranks for each developer may be computed, based on the
trained second machine learning model. Moreover, ranks may be computed for each
developer from the set of developers categorized within an associated performance
category. In addition, ranks for each developer may be computed for each of the set of
performance categories.
At step 406, each developer from the set of developers may be ranked for each of the set
of performance categories, based on the computed ranks for each
developer. Moreover, each developer may be ranked based on the computed ranks in
order to evaluate the performance of each developer from the set of developers.
Referring now to FIG. 5, a flowchart of a method of generating a feedback
for evaluating the performance of at least one developer is illustrated, in accordance with
an embodiment. FIG. 5 is explained in conjunction with FIG. 1 to FIG. 4.
At step 502, a plurality of bugs may be identified. In accordance with an
embodiment, the plurality of bugs identified may be associated with a module of a product
developed by each of the set of developers. Moreover, the plurality of bugs identified may
be reported to each of the set of developers and their respective managers to take appropriate
actions in order to solve a bug identified.
At step 504, once the plurality of bugs is identified, a feedback may be
generated for each of the set of developers. In an embodiment, the feedback may be
generated in response to identification of the plurality of bugs associated with the product
developed by each of the set of developers. By way of an example, based on the
feedback, when the plurality of bugs has been reported for the same developer very
frequently, a negative response may be imposed for the developer, thereby affecting the
rating of the developer while evaluating the performance.
At step 506, the performance of at least one of the set of developers may
be evaluated. By way of an example, in order to evaluate at least one developer, the at
least one developer may be re-ranked based on the feedback generated corresponding
to the plurality of bugs identified. In an embodiment, a neural network-based ranking
method may re-rank each of the set of developers under at least one of the set of
performance categories. By way of an example, the neural network used for ranking may
correspond to RankNet. For example, multiple developers from the set of developers may
be ranked under the "excellent performer" category. However, based on a number of the
plurality of bugs identified in the module of the product by each developer from the set of developers
ranked under the excellent performer category, each developer from the set of
developers may be re-ranked. This has been explained in greater detail in conjunction
with FIG. 6A to FIG. 6B.
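RankNet's core idea can be sketched in a few lines: the probability that one item should rank above another is a logistic function of the difference of their scores. This is a generic sketch of that pairwise formulation, not the network disclosed here:

```python
import math

def pairwise_preference(score_i, score_j):
    """RankNet-style pairwise probability that item i ranks above item j:
    the sigmoid of the score difference s_i - s_j."""
    return 1.0 / (1.0 + math.exp(-(score_i - score_j)))

# Equal scores give no preference; a higher score gives probability > 0.5.
print(pairwise_preference(1.0, 1.0))  # 0.5
```

Training minimizes the cross-entropy between these pairwise probabilities and the observed pairwise preferences (here, which developer had fewer reported bugs).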
Referring now to FIGs. 6A – 6B, a tabular representation of input data
corresponding to the plurality of performance parameters is illustrated, in accordance
with some exemplary embodiments of the present disclosure. FIGs. 6A – 6B are explained
in conjunction with FIG. 1 to FIG. 5.
With reference to FIG. 6A, the tabular representation 600A of a dataset (the
input data) corresponding to the plurality of performance parameters for a set of
developers is shown. The dataset may depict the plurality of performance parameters
captured as the input data by the AI based evaluation system 102 for each of the set of
developers, in order to evaluate performance of each of the set of developers. A column
602a represents a serial number. A column 604a represents a developer ID for each of
the set of developers. In the present table 600A, the set of developers may correspond to a
set of three developers. A column 606a represents a module ID of the product developed
by each of the set of developers. A column 608a represents a type of technology or
language used to develop the module of the product. A column 610a represents a
complexity of the module developed. A column 612a represents a quality of the module
developed. A column 614a represents a feedback rating provided by managers to each
of the set of developers based on the quality of the module developed. The data
populated in the table 600A may not be in a suitable format for a neural
network, such as the first pre-trained machine learning model. Hence, the data populated
in the table 600A may be pre-processed by the AI based evaluation system 102, as shown
in FIG. 6B.
The tabular representation 600B may represent numerical values of the
plurality of performance parameters captured for each of the set of developers. The AI
based evaluation system 102 may be configured to convert input data with ordinal values,
as shown in FIG. 6A, into numerical values. As an example, the column with the name "delivery
quality" may have values such as 1, 2, and 3, where "1" may replace "Low" and "3" may replace
"High". In some other embodiments, a one-hot representation (also referred to as one-hot
embeddings) of ordinal values may be generated by the AI based evaluation system 102,
where new features / columns may be introduced equal to the number of unique values in the
original column of the tabular representation 600A. For example, columns of performance
parameters with multiple values (such as, the column "Technology used", where a single
developer may be skilled in multiple technologies) may be converted to unique numeric values.
In order to represent values numerically for the performance parameters, the AI based
evaluation system 102 may be configured to convert such values into a one-hot
representation.
Further, in the one-hot representation, the embedding layer of the first trained
machine learning model may have a vector representation with a number of dimensions
equal to the number of unique values (T1 to T5 of 600B) in a certain column (such as, the
column 608a of 600A). Columns 'T1', 'T2', 'T3', 'T4', and 'T5' may represent unique
numerical values based on the type of technology or language used in order to develop
the module of the product. By way of an example, the performance parameter "types
of technology/language used" in column 608a may be represented numerically in T1 to
T5 of 600B, such as, Python: [1 0 0 0 0 0], Java: [0 1 0 0 0 0], Machine Learning: [0 0 1
0 0 0], Natural Language Processing: [0 0 0 1 0 0], MS SQL database: [0 0 0 0 1 0], and
Dynamic Programming: [0 0 0 0 0 1].
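The pre-processing described above can be sketched directly. The technology ordering mirrors the example vectors in this section (those vectors are six-dimensional, so the sketch uses six technologies), and the ordinal mapping follows the "delivery quality" example:

```python
# Ordinal values map to integers; multi-valued technology columns map to
# one-hot vectors. Orderings mirror the examples in the text.
ORDINAL = {"Low": 1, "Medium": 2, "High": 3}
TECHNOLOGIES = ["Python", "Java", "Machine Learning",
                "Natural Language Processing", "MS SQL database",
                "Dynamic Programming"]

def one_hot(technology):
    """Return the one-hot vector for a single technology column value."""
    return [1 if technology == t else 0 for t in TECHNOLOGIES]

print(one_hot("Python"))  # [1, 0, 0, 0, 0, 0]
print(ORDINAL["High"])    # 3
```

A developer skilled in multiple technologies would be represented by the element-wise maximum of the individual one-hot vectors (a multi-hot vector).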
FIG. 6C illustrates a tabular representation for identifying a plurality of bugs
associated with a module of a product developed, to evaluate performance of developers,
in accordance with an embodiment.
A tabular representation 600C represents the plurality of bugs identified
corresponding to each of the set of developers. A column 602c may represent a serial
number. A column 604c may represent a developer rating. The developer rating may be
based on classification of each of the set of developers into one of the set of performance
categories. A column 606c may represent a developer ID. A column 608c may represent a type of
technology or language used to develop the module of the product. Lastly, a column 610c
may represent a number of bugs identified in the module of the product developed by
each of the set of developers. Moreover, each developer represented in the tabular
representation 600C may be initially classified in the excellent performer category.
However, the number of bugs identified in the module developed by each developer is
different. Therefore, each developer classified under the excellent performer category
may be re-ranked.
By way of an example, in 600C, the maximum number of bugs, i.e., '3', has
been identified for a developer with developer ID 'D2'. Therefore, the developer with
developer ID 'D2' may be ranked lowest. In addition, the number of bugs identified for a
developer with developer ID 'D1' and a developer with developer ID 'D3' is '2'. However,
the developer with developer ID 'D1' may be ranked higher than the developer with
developer ID 'D3' because the developer with developer ID 'D1' has worked on more
technologies as compared to the developer with developer ID 'D3'.
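The worked example above can be reproduced with a simple two-key sort: fewer bugs rank higher, and ties are broken by the number of technologies worked on. The bug counts come from the example; the technology counts for D1 and D3 are illustrative assumptions consistent with the text:

```python
# Developers initially in the "excellent performer" category; bug counts are
# from the example above, technology counts are assumed for illustration.
developers = [
    {"id": "D1", "bugs": 2, "technologies": 3},
    {"id": "D2", "bugs": 3, "technologies": 2},
    {"id": "D3", "bugs": 2, "technologies": 1},
]

def re_rank(devs):
    """Sort ascending by bug count, breaking ties by more technologies."""
    return [d["id"] for d in
            sorted(devs, key=lambda d: (d["bugs"], -d["technologies"]))]

print(re_rank(developers))  # ['D1', 'D3', 'D2']
```

In the disclosed system this ordering would come from the trained RankNet-style model rather than a hand-written sort key, but the resulting order for this example is the same.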
The AI based evaluation system 102 may be configured to rank each
developer from the set of developers for each of the set of performance categories, to
evaluate the performance of each developer from the set of developers using the second
pre-trained machine learning model. The AI based evaluation system 102 may be
configured to render output data on a user device (not shown) based on the evaluation of the
performance of at least one of the set of developers. Such output data may be used by
some other developer or manager looking for assistance from another developer, in order
to assist developers or managers to resolve bugs in the future. In a certain scenario, one
developer may be from a different team in the same organization and can use the output data
to find a developer of a specific technology/technical skillset.
In a certain other scenario, a developer or a manager who needs help or
assistance from another developer may leverage the AI based evaluation system 102.
By way of an example, the developer or the manager may ask a query like "Can you help
me to find out a developer who is an expert in machine learning?" via the user interface
110 of the AI based evaluation system 102. As a response, the AI based evaluation
system 102 may connect the developer or the manager via a REST API (Representational
State Transfer Application Programming Interface) to get details of developers and
render/display the response using the I/O devices 108.
Referring now to FIG. 7, a trained AI based evaluation system based on
reinforcement learning is illustrated, in accordance with an exemplary embodiment. FIG. 7 is explained in conjunction with FIG. 1 to FIG. 6C.
There is shown a model 702, training data 704 with a set of developer data
706 and a Q-learning algorithm 708, an apply model 710, a test set of developer data 712,
and developer ratings 714. In accordance with an embodiment, the model 702 may
correspond to a trained evaluation system, such as the AI based evaluation system 102.
In accordance with an embodiment, the model 702 may be exposed to new training data
704 when the model 702 has never been through a prior training process. The model
702 may leverage an AI based code reusability system to generate code snippets in various
languages and technologies. The code snippets may be generated for modules of
dummy products similar to the ones developed by developers. Thereafter, a set of bugs
may be introduced in some of the modules of the dummy products to generate
information associated with each of the set of developers, and to capture feedback given
by a manager.
As a result, the model 702 may learn to identify an optimal reward function that
will maximize the reward for the end goal of performance evaluation. In accordance with an
embodiment, the set of developer data 706 may correspond to information associated
with each of the set of developers. The information may include the performance category
associated with each of the set of developers, computed ranks, the plurality of
performance parameters, etc. Further, the Q-learning algorithm 708 may be used to
calculate a Q-value corresponding to each of the set of developers. The Q-value may be
calculated based on the reinforcement learning approach. In addition, the feedback
associated with each of the set of developers may be predicted based on the reinforcement
learning approach.
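As a generic illustration of the Q-value calculation (the disclosure does not give the update rule, so this is the standard tabular Q-learning update, with hypothetical states and rating actions):

```python
def q_update(q, state, action, reward, next_state,
             actions=("raise rating", "lower rating"),
             alpha=0.5, gamma=0.9):
    """One Bellman update:
    Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]

q = {}  # the Q-table starts empty; unseen (state, action) pairs default to 0
print(q_update(q, "s0", "raise rating", 1.0, "s1"))  # 0.5
```

Repeated updates over the training data 704 would let the Q-values converge toward the expected future reward of each rating action.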
In an embodiment, the Q-value represents a preference of a particular
developer over other developers from the set of developers across all values of the
feedback or rating. In other words, the Q-value may represent a probability of one
developer being preferred over the other developer across different values of the
feedback or rating. Based on the calculated Q-value, the model 702 may penalize the
manager for giving incorrect feedback or rating to one developer over the other
developers from the set of developers. Moreover, the Q-values of each of the set of
developers, along with the associated feedback or rating, may be used to maximize the
reward.
Based on the training data received, the apply model 710 may be generated for
a test set of developers' data 712. The test set of developers' data may correspond to
information associated with a new set of developers. The rating provided to each of the
test set of developers' data may be depicted as developer ratings 714. With reference to
FIG. 7, the first pre-trained machine learning model corresponds to a Q network. The
Q network may be configured to receive as input an input observation corresponding to the
set of developers' data and an input action, and to generate an estimated future reward
(or penalty) from the input in accordance with each of the plurality of performance
parameters associated with the set of developers.
Referring now to FIG. 8, a trained evaluation system that uses inverse
reinforcement learning is illustrated, in accordance with an exemplary embodiment. FIG. 8 is explained in conjunction with FIG. 1 to FIG. 7. There is shown an environment
model 802, an inverse reinforcement learning model 804, historical data 806, a policy 808,
relevant algorithm combinations 810, and an algorithm set satisfying historical data 812.
The reinforcement learning based trained evaluation system may
correspond to the environment model 802. The environment model 802 may correspond
to the apply model 710. The environment model 802 may employ the inverse
reinforcement learning model 804. The inverse reinforcement learning model 804 may
be configured to utilize the historical records 806 to penalize and boost the rating and ranking
of each of a set of developers in an organization. The historical records 806 may use
various policies, such as the policy 808, to penalize and boost the rating and ranking of each of
the set of developers.
In an embodiment, the historical records 806 may contain detailed
information about each of the set of developers from various teams in an organization,
along with the plurality of bugs reported and actions taken by each of the set of developers
and the manager with any other member in the same team or a different team. Thereafter, the
inverse reinforcement learning model 804 may identify a combination or set of algorithms
and functions that will define the architecture of deep learning based recurrent neural network
variations and define hyperparameters for different layers of a neural network. The
combination or set of algorithms and functions may be represented as relevant algorithm
combinations 810. In an embodiment, the inverse reinforcement learning model 804 may
recommend more than one combination of the set of algorithms and functions.
Further, the recommended combinations of the set of algorithms and functions
may be evaluated based on the reinforcement learning approach in order to accept one
combination of the set of algorithms and functions. Moreover, one combination of the set of
algorithms and functions may be accepted when it satisfies evaluation of historical
records, represented as the algorithm set satisfying historical records 812. Once the one
combination of the set of algorithms and functions is accepted, a new environment may be
created for the environment model 802. In addition, the inverse reinforcement learning
model 804 may recommend optimal values of hyperparameters corresponding to each
combination of the set of algorithms and functions. Further, the optimal values of
hyperparameters may be validated against historical data received from an existing
environment of the environment model 802. This process is known as model
hyperparameter tuning.
Referring now to FIG. 9, a transfer learning approach to create a new
environment for an evaluation system is depicted, in accordance with an exemplary
embodiment. FIG. 9 is explained in conjunction with FIG. 1 to FIG. 8. There is shown a
pre-trained model 902, a developer performance category 904 associated with the
pre-trained model 902, a new model 906, and a developer performance category 908
associated with the new model 906.
In an embodiment, the transfer learning approach may be used to leverage
training of an AI based evaluation system (such as, the AI based evaluation system 102)
from a previous implementation to a new implementation. The new model 906 may
correspond to the new environment generated for the environment model 802 based on
acceptance of one combination of the set of algorithms and functions. The new model
906 may receive the optimal values of hyperparameters, represented as extracted
pre-trained hyperparameters, from the pre-trained model 902.
Thereafter, the new model 906 may classify a new set of developers into one
of the set of performance categories based on the optimal values of hyperparameters
received from the pre-trained model 902. In an embodiment, the transfer learning
approach may enable gathering of knowledge from an existing environment or
implementation of the AI based evaluation system 102. The knowledge corresponds to
optimal values (i.e., the one or more feature vectors) of the plurality of performance
parameters and hyperparameters obtained for the implementation of the AI based
evaluation system 102. Further, the optimal values of the plurality of performance
parameters and hyperparameters may be utilized to develop the new environment for the
AI based evaluation system 102. This may require less training time as compared to
starting from scratch or from a vanilla model. The vanilla model may correspond to a
standard, usual, and unfeatured version of the AI based evaluation system 102.
In accordance with an embodiment, the AI based evaluation system 102
may be configured to modify the first pre-trained machine learning model (such as, the
pre-trained model 902) with transferable knowledge for a target system to be evaluated.
The transferable knowledge may correspond to optimal values associated with the one
or more feature vectors corresponding to each of the plurality of performance
parameters.
In accordance with an embodiment, the AI based evaluation system 102
may be configured to tune the first pre-trained machine learning model (such as, the
pre-trained model 902) using specific characteristics of the target system to create a target
model (such as, the new model 906). In accordance with an embodiment, the AI based
evaluation system 102 may be configured to evaluate the target system performance
using the target model (such as, the new model 906) to predict system performance of
the target system for evaluating performance of a set of developers from an organization.
Various embodiments provide a method and system for evaluating
performance of developers using Artificial Intelligence (AI). The disclosed method and
system may receive each of a plurality of performance parameters associated with a set
of developers. The system and method may then create one or more feature vectors
corresponding to each of the plurality of performance parameters, based on one or more
features determined for each of the plurality of performance parameters. The one or more
feature vectors may be created based on a first pre-trained machine learning model.
Further, the system and the method may assess the one or more feature vectors, based
on the first pre-trained machine learning model. The system and the method may classify
the set of developers into one of a set of performance categories based on the assessing
of the one or more feature vectors. Thereafter, the system and the method may evaluate
the performance of at least one of the set of developers, based on an associated category
in the set of performance categories, in response to the classification.
The system and method provide some advantages. For example, the disclosed
system and method may enable collaboration amongst developers of an organization
based on performance evaluation. Further, the disclosed system and method may help
managers or reviewers to pro-actively identify developers that may require training on a
particular technology. In addition, the system and method may evaluate performance of
a developer comprehensively, based on several performance parameters, such as faulty
code and complexity of code snippets, as well as assistance provided by a developer to other
developers in the organization. Such comprehensive evaluation of performance by the
AI based evaluation system 102 may facilitate identification of distinguished developers
in the organization and similarly aid in providing a necessary appraisal or rating to
developers of the organization. Moreover, the system and method may help managers
to find developers of a similar type of technical skills. Further, the system and the method
may allow managers to fetch details of a developer based on performance parameters.
It will be appreciated that, for clarity purposes, the above description has
described embodiments of the disclosure with reference to different functional units and
processors. However, it will be apparent that any suitable distribution of functionality
between different functional units, processors, or domains may be used without detracting
from the disclosure. For example, functionality illustrated to be performed by separate
processors or controllers may be performed by the same processor or controller. Hence,
references to specific functional units are only to be seen as references to suitable means
for providing the described functionality, rather than indicative of a strict logical or
physical structure or organization.
Although the present disclosure has been described in connection with
some embodiments, it is not intended to be limited to the specific form set forth herein.
Rather, the scope of the present disclosure is limited only by the claims. Additionally,
although a feature may appear to be described in connection with particular
embodiments, one skilled in the art would recognize that various features of the
described embodiments may be combined in accordance with the disclosure.
Furthermore, although individually listed, a plurality of means, elements, or
process steps may be implemented by, for example, a single unit or processor.
Additionally, although individual features may be included in different claims, these may
possibly be advantageously combined, and the inclusion in different claims does not
imply that a combination of features is not feasible and/or advantageous. Also, the
inclusion of a feature in one category of claims does not imply a limitation to this category,
but rather the feature may be equally applicable to other claim categories, as appropriate.
Claims (18)
1. A method for evaluating performance of developers using Artificial Intelligence (AI), the method comprising: receiving, by an AI based evaluation system, each of a plurality of performance parameters associated with a set of developers; creating, by the AI based evaluation system, one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters, wherein the one or more feature vectors are created based on a first pre-trained machine learning model; assessing, by the AI based evaluation system, the one or more feature vectors, based on the first pre-trained machine learning model; classifying, by the AI based evaluation system, the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors; and evaluating, by the AI based evaluation system, the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying.
2. The method of claim 1, wherein evaluating the performance comprises: computing, for each of the set of performance categories, ranks for each developer from the set of developers categorized within an associated performance category, based on a second machine learning model; and ranking each developer from the set of developers for each of the set of performance categories, based on the computed ranks, to evaluate the performance of each developer from the set of developers.
3. The method of claim 2, further comprising training the second machine learning model, wherein training comprises assigning weights to the one or more features associated with each of the plurality of performance parameters based on a predefined evaluation criterion.
4. The method of claim 3, wherein the predefined evaluation criterion comprises one or more of a technical skill in demand and an efficiency of a developed product with respect to bugs identified in the developed product, and wherein high weights are assigned to one or more features associated with at least one of the high demand technical skill as compared to a low demand technical skill and bug-free developed product as compared to the developed product with a plurality of bugs.
5. The method of claim 1, wherein the one or more performance parameters comprise at least one of efficiency of a developed product associated with a module developed for a t, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers.
6. The method of claim 1, wherein the set of performance categories includes an excellent performer category, a good performer category, an average performer category, and a bad performer category.
7. The method of claim 1, further comprising: identifying a plurality of bugs associated with a module of a product developed by each of the set of developers; generating a feedback for each of the set of developers, wherein the feedback is generated in response to identifying the plurality of bugs associated with the product developed by each of the set of developers; and evaluating the performance of at least one of the set of developers, based on the feedback.
8. The method of claim 7, wherein evaluating the performance of at least one of the set of developers is based on an inverse reinforcement learning technique.
9. The method of claim 1, further comprising: modifying the first pre-trained machine learning model with transferable knowledge for a target system to be evaluated, wherein the transferable knowledge corresponds to optimal values associated with the one or more feature vectors corresponding to each of the plurality of performance parameters; tuning the first pre-trained machine learning model using specific characteristics of the target system to create a target model; and evaluating the target system performance using the target model to predict system performance of the target system.
10. The method of claim 1, wherein the first pre-trained machine learning model corresponds to a Q network, and wherein the Q network is configured to receive as input an input observation and an input action and to generate an estimated future reward from the input in accordance with each of the plurality of performance parameters associated with the set of developers.
11. A system for evaluating performance of developers using Artificial Intelligence (AI), the system comprising: a processor; and a memory communicatively coupled to the processor, wherein the memory stores processor executable instructions, which, on execution, cause the processor to: receive each of a plurality of performance parameters associated with a set of developers; create one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters, wherein the one or more feature vectors are created based on a first pre-trained machine learning model; assess the one or more feature vectors, based on the first pre-trained machine learning model; classify the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors; and evaluate the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying.
12. The system of claim 11, wherein the processor executable instructions cause the processor to evaluate the performance by: computing, for each of the set of performance categories, ranks for each developer from the set of developers categorized within an associated performance category, based on a second machine learning model; and ranking each developer from the set of developers for each of the set of performance categories, based on the computed ranks, to evaluate the performance of each developer from the set of developers.
13. The system of claim 12, wherein the processor executable instructions cause the processor to train the second machine learning model, wherein training comprises assigning weights to the one or more features associated with each of the plurality of performance parameters based on a predefined evaluation criterion.
14. The system of claim 13, wherein the predefined evaluation criterion comprises one or more of a technical skill in demand and an efficiency of a developed product with respect to bugs identified in the developed product, and wherein high weights are assigned to one or more features associated with at least one of the high demand technical skill as compared to a low demand technical skill and bug-free developed product as compared to the developed product with a plurality of bugs.
15. The system of claim 11, wherein the one or more performance parameters comprise at least one of efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers.
16. The system of claim 11, wherein the set of performance categories includes an excellent performer category, a good performer category, an average performer category, and a bad performer category.
17. The system of claim 11, wherein the processor executable instructions cause the processor to: identify a plurality of bugs associated with a module of a product developed by each of the set of developers; generate a feedback for each of the set of developers, wherein the feedback is generated in response to identifying the plurality of bugs associated with the product developed by each of the set of developers; and evaluate the performance of at least one of the set of developers, based on the feedback.
18. A non-transitory computer-readable medium storing computer-executable instructions for evaluating performance of developers using Artificial Intelligence (AI), the stored instructions, when executed by a processor, cause the processor to perform operations comprising: receiving each of a plurality of performance parameters associated with a set of developers; creating one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters, wherein the one or more feature vectors are created based on a first pre-trained machine learning model; assessing the one or more feature vectors, based on the first pre-trained machine learning model; classifying the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors; and evaluating the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying.
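As a non-limiting illustration (not part of the claims), the pipeline recited in claims 1, 2 and 6 — one feature vector per set of performance parameters, classification into the four performer categories, then within-category ranking by a second model — can be sketched as follows. The feature names, weights and thresholds are hypothetical stand-ins for the pre-trained models the claims recite:

```python
# Hypothetical sketch of the claims 1/2/6 pipeline. All parameter names,
# weights and thresholds are illustrative assumptions, not the patent's.

CATEGORIES = ["excellent", "good", "average", "bad"]  # claim 6 categories

def make_feature_vector(params):
    """Create one feature vector from a developer's performance parameters."""
    return [
        params["efficiency"],      # efficiency of the developed product
        1.0 - params["bug_rate"],  # fewer bugs -> higher feature value
        params["peer_support"],    # assistance given to other developers
        params["manager_rating"],  # feedback/rating received from managers
    ]

def classify(vector, weights=(0.4, 0.3, 0.1, 0.2)):
    """Stand-in for the first pre-trained model: weighted score -> category."""
    score = sum(w * f for w, f in zip(weights, vector))
    if score >= 0.8:
        return "excellent"
    if score >= 0.6:
        return "good"
    if score >= 0.4:
        return "average"
    return "bad"

def rank_within_categories(developers):
    """Stand-in for the second model: rank developers inside each category."""
    by_category = {c: [] for c in CATEGORIES}
    for name, params in developers.items():
        vec = make_feature_vector(params)
        by_category[classify(vec)].append((name, sum(vec)))
    return {
        c: [name for name, _ in sorted(members, key=lambda m: -m[1])]
        for c, members in by_category.items()
    }
```

In this sketch the claim 3/4 weighting shows up as the `weights` tuple: a feature tied to an in-demand technical skill or a bug-free product would simply carry a larger weight.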
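Claim 10 recites a Q network that takes an (observation, action) input and emits an estimated future reward. A minimal tabular stand-in for that mapping (not the claimed network; the states, actions and hyperparameters below are assumptions for illustration) is:

```python
# Tabular stand-in for the claim 10 Q network: maps an
# (observation, action) pair to an estimated future reward and
# updates it with the standard Q-learning rule.
from collections import defaultdict

class QTable:
    def __init__(self, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)  # (observation, action) -> value
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # discount factor

    def estimate(self, observation, action):
        """Estimated future reward for taking `action` in `observation`."""
        return self.q[(observation, action)]

    def update(self, obs, action, reward, next_obs, actions):
        """One Q-learning step toward reward + gamma * max_a Q(next_obs, a)."""
        best_next = max(self.q[(next_obs, a)] for a in actions)
        target = reward + self.gamma * best_next
        self.q[(obs, action)] += self.alpha * (target - self.q[(obs, action)])
```

A deep Q network would replace the lookup table with a function approximator, but the input/output contract — observation and action in, estimated future reward out — matches what the claim describes.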
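Claim 9's transfer step — carrying "transferable knowledge" from the first pre-trained model into a target system and then tuning on that system's characteristics — can be gestured at with a toy example. Here the transferable knowledge is a weight vector and the tuning rule is an illustrative gradient-style nudge, not the patent's method:

```python
# Toy sketch of claim 9: start from source-model weights (the
# "transferable knowledge"), then tune on target-system samples.
# The update rule is an illustrative assumption.

def transfer_and_tune(source_weights, target_samples, lr=0.05, epochs=50):
    """Tune copied weights on (features, score) pairs from the target system."""
    w = list(source_weights)  # copy the transferable knowledge
    for _ in range(epochs):
        for features, score in target_samples:
            pred = sum(wi * fi for wi, fi in zip(w, features))
            err = score - pred
            # nudge each weight toward reducing the prediction error
            w = [wi + lr * err * fi for wi, fi in zip(w, features)]
    return w
```

The returned weights form the "target model" of the claim, which can then predict the target system's performance on new feature vectors.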
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/202,926 | 2021-03-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
NZ785402A true NZ785402A (en) | 2022-02-25 |