GB2541649A - User feedback for machine learning - Google Patents

User feedback for machine learning

Info

Publication number
GB2541649A
GB2541649A GB1514927.1A GB201514927A
Authority
GB
United Kingdom
Prior art keywords
feedback
user
preference
model
users
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1514927.1A
Other versions
GB201514927D0 (en)
Inventor
Kampa Simon
Russell Robert
Hill Alexander
Reid Daniel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Senseye Ltd
Original Assignee
Senseye Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Senseye Ltd filed Critical Senseye Ltd
Priority to GB1514927.1A priority Critical patent/GB2541649A/en
Publication of GB201514927D0 publication Critical patent/GB201514927D0/en
Publication of GB2541649A publication Critical patent/GB2541649A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An anomaly detection model is optimised by presenting a dataset in which determined anomalies are identified and receiving informal user feedback indicative of a user's like or dislike preference. For example, data points 35a and 35b determined to be anomalies are highlighted on a display screen 2 comprising a graphic user interface (GUI). A user can select a level of preference via like button 32 or dislike button 31. The feedback is used to update unsupervised anomaly detection algorithms, such that the model is brought closer to the user's preference. A fitness rating indicates the success of the model at identifying anomalies in an iterative process. Multiple users, whose feedback may be given the same or different weightings, can provide feedback, e.g. two users with the same model of accelerometer on the same model of vehicle. Data can be derived from internet connected sensors or Internet-of-Things (IoT) sensors. Feedback may be non-binary absolute or comparative. Advantages include obtaining feedback from non-technical users.

Description

USER FEEDBACK FOR MACHINE LEARNING
Technical Field
The present invention relates generally to user feedback for machine learning.
Background
Accurate determination of anomalies in data detected from sensors (for example) can be improved by human feedback, although detection can also be performed in an unsupervised manner, whereby human interaction is not required. In the example of sensor systems, these may include one or more sensors sending sensed data via the internet, for example, to a data processing resource, such as a server. The sensors may include internet connected (or Internet of Things) sensors. However, feedback for anomaly detection techniques typically requires a binary absolute label, stating whether a data point is abnormal or normal. Whilst the concept of abnormalities is common and easily understandable for individual users who are familiar with machine learning, that may not be the case for non-technically trained or experienced users. Accordingly, the interaction of technically knowledgeable individuals who are able to make sense of the data is usually required.
We have realised that it would be advantageous to facilitate obtaining feedback from non-technical users, and to provide a simpler, more intuitive interface for such users, whilst at the same time producing accurate determination of anomalies. This allows these users to provide meaningful feedback which can be used to enhance the accuracy of determination of anomalies. For example, companies without specialised data analysts or other specialist engineers would be able to interact with an interface which is easy to use and understand. Thus, anomaly detection feedback could take account of non-technical business owners/executives who have little time to provide feedback on the (strict or exact) correctness of identified anomalies.
Summary
According to a first aspect of the invention there is provided a method of optimising an anomaly detection model for at least one sensor, the method comprising presenting a dataset to a user on a visual display device, identifying at least one determined anomaly of the displayed dataset to the user, receiving a user feedback input in response to the displayed dataset which is indicative of the user’s like or dislike preference of at least one determined anomaly, using the feedback input to update a model used to determine the anomalies so as to bring anomaly determination of the model into closer alignment with the user's preference.
One embodiment to allow users to provide feedback may be integrated within a web application, providing a quick and easy way to contribute feedback whilst examining the anomalies associated with the user's raw sensor data or derived data. Along with providing feedback, the application may allow users to examine their previous feedback and investigate the implications to the anomaly detection process. This may also allow modification of previously provided feedback, allowing users to correct erroneous feedback. It is likely that erroneous feedback would not significantly change the behaviour of the anomaly detection process if it contradicts other feedback provided by the user, as it will receive less weighting than the larger quantity of 'correct' feedback.
One aspect of the invention may be viewed as improving the fitness of machine learning through the use of informal user feedback.
The method may comprise applying two different versions of an anomaly detection model to a dataset, and presenting the results to a user to allow a user to provide feedback as to which version of the model he prefers.
The method may comprise determining what may be termed a fitness rating which is indicative of the degree of success of the model at generating the anomalies desired by the user based on his preference feedback.
The method may comprise searching a parameter space to determine how parameters can be optimised in order to generate a refined version of the model which better suits the user's feedback preference input. This may include modifying one or more parameters of a model over multiple iterations, the result of each iteration being used to further tune the parameter(s) of the model towards the user's preference.
The method may comprise displaying graphic user interface iconography or indicia indicative of a like or dislike user preference.
The method may comprise receiving user preference feedback on each of multiple individual determined anomalies, or a dataset.
The method may comprise providing more than two graduations of user preference level, relative to a scale of the strength of a user’s preference.
The method may comprise repeated iterations of obtaining user feedback preference and updating the model accordingly. The iterations may be performed periodically.
The method may comprise allowing a plurality of users to provide preference feedback to the same model. The model evaluation may utilise feedback from multiple users. Advantageously, a user's feedback may be used by multiple models and/or a model may use feedback from multiple users.
The method may comprise allowing a plurality of users to provide preference feedback to the same dataset. Where feedback is obtained on a model from multiple users, a determined, general, predominant/collective preference may be used as the basis to tune the model.
The method may comprise allowing different users the same or different weightings of their preference feedback.
The method may relate to data originating or being derived from internet connected (or Internet of Things) sensors. However, it will be appreciated that the data which is acted upon could be derived from other, alternative sources, and not necessarily IoT sensors. The method is preferably a method to optimize unsupervised anomaly detection algorithms.
The method may be viewed as including obtaining and using of non-binary absolute feedback. The method may alternatively or in addition be viewed as obtaining and using comparative feedback for anomaly detection.
According to a second aspect of the invention there is provided a computer feedback system for optimising an anomaly detection model which is arranged to implement the method of the first aspect of the invention.
According to a third aspect of the invention there is provided a computer software product for optimising an anomaly detection model, comprising instructions which, when executed by a data processor, are arranged to implement the method of the first aspect of the invention.
The method, computer software product or computer system may include one or more features described in the description and/or shown in the drawings.
Brief Description of the drawings
Various embodiments of the invention will now be described, by way of example only, with reference to the following drawings in which:
Figure 1 is a schematic representation of an Internet of Things system.
Figure 2 is a flow diagram showing the principal stages involved in obtaining and using user preference feedback, and
Figure 3 is a schematic representation of a computer device displaying a screen allowing a user to provide user preference feedback.
Detailed Description
There is now described a method and computer system 1 for the input and processing of user preferences in relation to anomaly detection models. Both non-binary absolute feedback and comparative feedback mechanisms are used, either individually or in combination.
By way of introduction, anomaly detection algorithms may be considered as falling into three categories, differentiated by the way in which normal and abnormal data points are presented during training. Supervised techniques require a set of labelled data, with each data point being labelled as normal or abnormal. Semi-supervised techniques are given a training dataset which contains only normal data points. Unsupervised techniques are given an unlabelled training set, featuring both normal and abnormal data points, with the assumption that the vast majority of the training data points are normal.
Typically, machine learning algorithms, anomaly detection included, require a number of parameters to be tuned to obtain optimal performance. In the case of unsupervised techniques, the tuning can be performed by running a labelled test data set through the trained model to assess performance (also known as fitness). The described optimization adopts this approach; however, the quantity of test data may be limited and the feedback from users will not provide binary labels. This alters the typical approach to optimizing unsupervised techniques, in that the fitness of each model must be calculated differently, as is further described below.
In overview, this is an iterative process, applicable to both non-binary absolute feedback and comparative feedback, comprising three main stages, as shown in Figure 2, namely anomaly detection training, evaluation of the trained model by way of obtaining user feedback, and optimization of the model based on the user feedback obtained. It will be appreciated that the three stages are repeated over time so as to further update and tune the model and thereby lead to increasingly accurate anomaly detection based on the user's inputted preferences. It will also be appreciated that the step of evaluation by obtaining user feedback may be performed periodically (or on demand by the user). The user feedback data is used to perform a series of tuning iterations during the optimisation stage, which occur with a higher frequency than that of obtaining user feedback. For example, user feedback may be obtained on a weekly basis, whereas parameter tuning of the model may occur of the order of hundreds of times per minute. Each tuning iteration within the optimisation stage includes a feedback loop, in which the relevant parameters are adjusted, the output is compared to a measure of the user preference, and then further adjusted, and so on, thus gradually tailoring the model closer towards the user preference.
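By way of a purely illustrative sketch (not part of the patent disclosure), the three-stage cycle might be organised along the following lines in Python. The callables train, evaluate_with_user and tune are hypothetical placeholders for the stages described above; only the shape of the loop reflects the description.

# Structural sketch of the iterative process: train, gather user feedback,
# then run many fast tuning iterations against that stored feedback.
# The stage implementations are deliberately left as injected callables.
def run_cycle(train, evaluate_with_user, tune, params, data, tuning_iters=500):
    model = train(data, params)                  # stage 1: anomaly detection training
    feedback = evaluate_with_user(model, data)   # stage 2: periodic user feedback
    for _ in range(tuning_iters):                # stage 3: optimisation feedback loop
        params = tune(params, feedback, data)    # adjust parameters toward preference
        model = train(data, params)
    return model, params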
Anomaly detection models may be trained offline using historic data. The output of the model evaluation will be a fitness rating, which indicates how successful the model was at generating the anomalies desired by the user based on their feedback. The fitness of the model is used to search the parameter space of the model for more optimal parameters (i.e. parameter tuning) which will produce an updated version of the model which better suits the user's preferences. The parameter space search is a form of optimization and is typically approached using metaheuristic algorithms like genetic algorithms, simulated annealing and particle swarms. In the case of comparative feedback, the models are stored to allow user feedback in future.
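As an illustration only, the kind of metaheuristic parameter search referred to above could be sketched in Python as a toy simulated-annealing run over a single detector parameter. The function toy_fitness is an invented stand-in (peaked at a parameter value of 2.5) used purely so the example runs; in the described system it would be replaced by the preference-based fitness rating.

import math
import random

def toy_fitness(threshold):
    # stand-in for the user-preference-based fitness described in the text
    return -(threshold - 2.5) ** 2

def anneal(fitness, start=4.0, steps=2000, temp0=1.0):
    current, current_f = start, fitness(start)
    best, best_f = current, current_f
    for i in range(steps):
        temp = temp0 * (1 - i / steps) + 1e-6
        candidate = current + random.gauss(0, 0.3)
        cand_f = fitness(candidate)
        # always accept improvements; accept worse moves with a temperature-dependent probability
        if cand_f > current_f or random.random() < math.exp((cand_f - current_f) / temp):
            current, current_f = candidate, cand_f
            if cand_f > best_f:
                best, best_f = candidate, cand_f
    return best

print("tuned parameter:", round(anneal(toy_fitness), 2))   # expected to approach 2.5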
Referring to Figure 1, there is shown an Internet of Things (IoT) array 10 of sensors 9, which is capable of communicating sensed data via a communications network 4 (in particular the internet) to a data processor unit 3. Sensed raw data and/or data derived from one or more of the sensors is available to a user's computer 2 by way of the communications network 4. The user is also able to send feedback to the processor unit 3 in the reverse direction.
A typical approach to model evaluation would be to have a test data set which has normal/abnormal labels associated with each data point. Test sets of data are common in machine learning and are used to evaluate a trained model on previously unseen data. The data is processed by the trained model and the resulting labels compared to the actual labels. The accuracy is calculated using a suitable metric; for anomaly detection, common metrics include, but are not limited to, precision, recall and the F1 score. The accuracy of a trained model can be assessed in this way as there is a known truth to compare against. However, in the case of non-binary absolute feedback or comparative feedback, there is not a known truth; instead there is a level of preference. In this case, the test data set comprises the raw data, the anomalies that the user judged and the user's preference.
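For reference, the conventional labelled-test-set metrics mentioned above (precision, recall and the F1 score) can be computed as in the following minimal Python sketch; the example labels are invented.

def precision_recall_f1(predicted, actual):
    # predicted/actual are per-point booleans, True meaning 'anomaly'
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(not p and a for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# two true anomalies, one of which the model found, plus one false alarm
print(precision_recall_f1([True, False, True, False], [True, True, False, False]))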
The trained model is initially given a fitness of zero. The test data set's data is processed by the trained model. The resulting labels are compared to the labels the user evaluated, using common accuracy metrics, giving a similarity between the two result sets (note, in this case this is not accuracy as the labels are not known to be true). The resulting similarity is then scaled based on the user's preference. This preference can be positive or negative. The resulting scaled similarity is added to the model's fitness. This can be repeated over multiple test data sets. This means that if a model is similar (large similarity metric) to a preferred test dataset (with large positive preference) the model's fitness increases. In comparison, if the model is not similar to a preferred test dataset, its fitness is not increased. The opposite is true for disliked test datasets, reducing the model's fitness if similar, and not reducing if not similar.
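A minimal Python sketch of this fitness calculation, assuming simple label agreement as the similarity metric; the judged_indices argument anticipates the point-level variant described in the next paragraph. The metric, field layout and example values are assumptions for illustration, not the patent's prescribed implementation.

def label_similarity(model_labels, judged_labels, judged_indices=None):
    # compare only the judged points when an index list is supplied
    idx = list(judged_indices) if judged_indices is not None else list(range(len(judged_labels)))
    matches = sum(model_labels[i] == judged_labels[i] for i in idx)
    return matches / max(1, len(idx))

def model_fitness(model_labels_per_set, test_sets):
    # test_sets: list of (judged_labels, preference, judged_indices or None)
    fitness = 0.0
    for model_labels, (judged, preference, judged_idx) in zip(model_labels_per_set, test_sets):
        similarity = label_similarity(model_labels, judged, judged_idx)
        fitness += similarity * preference   # positive preference raises fitness, negative lowers it
    return fitness

# the model exactly matches a liked (+1) labelling and partly matches a disliked (-1) one
liked    = ([True, False, False, True], +1.0, None)
disliked = ([False, True, True, False], -1.0, None)
model_out = [[True, False, False, True], [False, False, True, False]]
print(round(model_fitness(model_out, [liked, disliked]), 2))   # 1.0 - 0.75 = 0.25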
In the case of absolute feedback on a specific anomaly rather than a dataset, or group of data points, the similarity metric is changed slightly, just to target the individual points which were judged by the user.
The resulting fitness is a value which comparatively ranks a model's performance in satisfying the user's preferences. The absolute value is irrelevant; it is the relative value which will allow an optimization algorithm to discover more performant parameters.
Each of absolute non-binary feedback and comparative feedback will now be described.
Comparative Evaluation
In the case of comparative evaluation, the user inputs whether a determined anomaly set is better than another anomaly set (of a respective dataset). Comparative information cannot be used in its native format; it merely describes whether one set of results is better than another. It first must be anchored to allow a measurement, whereby the preference of a test dataset can be measured in relation to the rest of the test datasets; this is termed a relative measurement. For example, if a set of points A was judged as preferred in comparison to points B, A would have a larger relative measurement than B. If B was then judged as being preferred over points C, C would have a relative measurement less than B and A. Essentially, relative measurements allow the preference of datasets to be ranked, allowing some datasets to be more preferred than others even though they have not explicitly been compared by the user. A suitable ranking algorithm can produce a set of relative measurements from user comparative feedback; as an example, the (known) Elo rating system can be used. This generates a relative measurement for each testing dataset, specifying how preferred that dataset is to the user, compared to others. The resulting relative measurements should preferably be centered around zero, such that zero is equal to no (or neutral) preference from the user, positive values indicate a positive preference by the user, and negative values indicate a negative preference by the user. The magnitude of the positive and negative values indicates the magnitude of the user's relative like or dislike. This value is used as the preference value for the model, and is used to scale similarities with new models to represent the user's like or dislike.
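By way of example, an Elo-style ranking over pairwise 'preferred over' judgements, followed by zero-centring, might be sketched in Python as below. The K factor and starting rating are arbitrary values chosen for illustration.

def elo_rank(comparisons, k=32, start=1000.0):
    # comparisons: list of (preferred_id, other_id) pairs from user feedback
    ratings = {}
    for winner, loser in comparisons:
        rw = ratings.setdefault(winner, start)
        rl = ratings.setdefault(loser, start)
        expected_w = 1.0 / (1.0 + 10 ** ((rl - rw) / 400.0))
        ratings[winner] = rw + k * (1.0 - expected_w)
        ratings[loser] = rl - k * (1.0 - expected_w)
    mean = sum(ratings.values()) / len(ratings)
    # centre on zero: positive means liked relative to the rest, negative means disliked
    return {name: rating - mean for name, rating in ratings.items()}

# A preferred over B, then B preferred over C, gives A > B > C after centring
print(elo_rank([("A", "B"), ("B", "C")]))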
In the following example the social and comparative feedback method is described for a user who has just added new sensor data to the described system.
Initially, when the sensor data (this may be raw sensor data or derived or aggregated data) is added, there would be no user feedback available to tune the model. In this case the periodic anomaly detection/determination would not be able to optimize to the user's preferences. Instead, default parameters would be used. These default parameters may be set based on knowledge of other sensors in the system with similar characteristics. In addition to use of the default parameters, several other parameters would be trialed and their resulting anomalies stored.
In use, the user would be presented with the anomalies when he next visits the application. The presentation may comprise a line graph with highlighted points showing the anomalies. Typically, the best performing model's anomalies would be shown to the user; in this case, at the initial stage, the default-parameter model's results would be shown, as there is no knowledge of the models' performance at that initial stage.
On cursor rollover of the presented anomaly graph, for example, a feedback button would be available. On pressing the button, the user would be presented, for example, with two line graphs showing the same raw data. The anomalies presented on top of the raw data would be obtained from different models, one being the default model and the other being picked at random from the other trained models. The user would be asked to pick which they preferred. On selecting, both sets of results would be marked as test datasets, with the preferred test dataset being recorded as preferred over the other. The user would be asked for additional comparisons with determined anomalies produced by other randomly selected models. This would continue whilst there were comparisons available (and the user did not close the feedback application window).
At the time of the next periodic anomaly detection tuning process for the sensor, the test datasets would be considered. Initially, the test datasets would have their user preference calculated using the Elo rating system. This would provide relative measurements for each test dataset based on the user's feedback. Any test dataset with a preference of zero (i.e. no positive or negative feedback provided, or positive feedback canceled out by negative feedback) would be ignored. A metaheuristic optimization would be conducted, for example using genetic algorithms (or other metaheuristic algorithms). Each new trained model would process each test dataset in turn. The results produced by the trained model would be compared to the test dataset's results and the resulting similarity scaled based on the result's user preference. The model would receive a fitness rating based on its success in achieving the user's preferences. The optimization would search the parameter space based on the relative fitnesses of the models produced. The most successful model would be shown to the user on the next (periodic) visit, with the option of performing more comparisons to further tune the results.
Absolute Evaluation
In relation to absolute feedback, as compared to the comparative feedback described above, this provides an absolute (non-comparative) evaluation of a single anomaly (of a data set) or of a data set (comprising many data points). Compared to typical anomaly labelling, the user feedback conveys a level of preference rather than a normal/abnormal label.
Reference is made to Figure 3, which shows a user's computer device 2', such as a portable tablet computer device, displaying a data set 35 shown graphically as a plot of data points. The computer device 2' comprises a display screen which, in addition to showing the data set, visually highlights data points 35a and 35b which have been determined by the anomaly detection model as being anomalies. The screen also displays input buttons 31 and 32, which allow a user to select a level of preference for each of the indicated anomalies 35a and 35b. The button 31 indicates a 'DISLIKE' preference, and the button 32 indicates a 'LIKE' preference. Using the buttons, the user is able to provide, in turn, feedback on each of the individual indicated anomalies. (In other embodiments, the user may be able to provide preference feedback on a group or subset of data points.) The graphic user interface displayed by the computer device 2' may be driven by a software product loaded onto the memory of the device, such as an App, or may be provided by way of an Internet browser connecting with the data processor unit 3, which may comprise a server, making the GUI and data available to the device 2'.
In addition to the 'like' and 'dislike' preference levels described above, additional distinctions of like and dislike could be provided. For example, four feedback buttons may be available which allow the selection of like, slightly like, slightly dislike or dislike (which may be represented purely graphically, without accompanying text, as per the example described below). The benefit of this is that it allows users to express weaker forms of feedback, which is likely to further encourage the user to provide feedback where they may not have done with binary feedback. For such a scheme, the user may have the following options and resulting preferences: like = +1, slightly like = +0.5, slightly dislike = -0.5 and dislike = -1. These preference options could be associated with iconography representing the different (additional) degrees of user preference.
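Such a four-level scheme could be captured as a simple mapping from the selected icon or button to the numeric preference used to scale similarities, as in the following sketch; the key names are assumptions.

PREFERENCE_LEVELS = {
    "like": +1.0,
    "slightly_like": +0.5,
    "slightly_dislike": -0.5,
    "dislike": -1.0,
}

def preference_from_selection(selection):
    # map the user's selected feedback icon to its numeric preference value
    return PREFERENCE_LEVELS[selection]

print(preference_from_selection("slightly_dislike"))   # -0.5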
Initially, when the sensor data, derived data or aggregates are added to the system, there would be no anomaly feedback from the user. Metadata associated with the sensor would be utilized to identify sensors which are similar in terms of data characteristics, type of sensor and its application.
When the periodic anomaly detection tuning next activates, the produced models would be evaluated using feedback obtained for the identified similar sensors (this feedback could potentially be from other users or feedback the user has provided for other sensors they own). This feedback would guide the metaheuristic optimization, aiming to satisfy the feedback associated with the similar sensors, which is assumed to be comparable to the user's preferences.
The best performing anomaly detection results would be presented to the user when they next visit the application. The presentation could be via a line graph with highlighted points to visually indicate anomalies. In one region of the GUI would be displayed four icons, representing an unhappy face, a slightly less unhappy face, a slightly happy face and a happy face, respectively. Selecting one of the face icons provides feedback describing the user's perception of the quality of the produced results. This feedback would be stored along with the data being evaluated.
When the anomaly model is next trained, the models produced would be evaluated using the feedback received from users with similar sensors as well as the user's own feedback. The user's feedback could be weighted to prioritize their feedback over other users' preference feedback.
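A possible weighting scheme is sketched below in Python; the particular weights (1.0 for the user's own feedback, 0.3 for feedback borrowed from users with similar sensors) are illustrative assumptions only.

def weighted_fitness(feedback_items, own_weight=1.0, other_weight=0.3):
    # feedback_items: list of (similarity, preference, is_own_feedback)
    total = 0.0
    for similarity, preference, is_own in feedback_items:
        weight = own_weight if is_own else other_weight
        total += weight * similarity * preference
    return total

# the user's own 'like' outweighs a similar user's 'dislike'
print(round(weighted_fitness([(0.9, +1.0, True), (0.8, -1.0, False)]), 2))   # 0.66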
Having provided feedback, the user could be awarded a new 'achievement' and their user account would be labelled as being a level 1 user, for example. This in turn provides publicly or group-viewable iconography against the user's username, presenting the user's kudos to other users and friends.
The next time the user visits the feedback software application, he could be encouraged to provide additional feedback using an informative pop-up over the anomaly detection results. This pop-up may encourage the user to provide feedback by comparing their level of kudos to other similar users or friends.
As an extension to the described approach to evaluation and optimization, and as alluded to above, feedback can be crowdsourced from multiple users. This is advantageous since individual users do not need to provide as much feedback; instead, feedback from other users can be used in order to ensure accurate anomaly determination. The process of feedback and optimization remains the same; however, the evaluation of the model would differ slightly. Instead of evaluating a model based solely on a single user's feedback, the evaluation would be based on feedback from many users. This can be considered an additional benefit of the social media-like interaction style described above, which suits the scaling required when many users participate.
It will be appreciated that exploiting feedback from other users has the potential of decreasing the accuracy of the anomalies produced. This is especially true if feedback was exploited which originated from a very different dataset; in this case the feedback would bear little or no resemblance to a particular individual user's preferences and to the source data being analysed. To solve this problem, other users' feedback would only be utilized if the source sensor and its data were similar. For example, if two users both had the same model of accelerometer on the same model of vehicle, the feedback from both users could be combined to improve the evaluation of models.
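One way such a guard might be expressed is sketched below; the metadata fields sensor_model and asset_model are assumed purely for the example and are not taken from the patent.

def is_comparable(own_sensor, other_sensor):
    # only treat another user's feedback as relevant when the source sensor
    # and the monitored asset are of the same model
    return (own_sensor["sensor_model"] == other_sensor["sensor_model"]
            and own_sensor["asset_model"] == other_sensor["asset_model"])

def usable_feedback(own_sensor, all_feedback):
    # all_feedback: list of (sensor_metadata, feedback_item) gathered from other users
    return [fb for sensor, fb in all_feedback if is_comparable(own_sensor, sensor)]

mine = {"sensor_model": "ACC-100", "asset_model": "Truck-T4"}
others = [({"sensor_model": "ACC-100", "asset_model": "Truck-T4"}, "feedback-1"),
          ({"sensor_model": "TEMP-7", "asset_model": "Pump-P2"}, "feedback-2")]
print(usable_feedback(mine, others))   # only 'feedback-1' is kept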
It will be appreciated that some embodiments may include both non-binary absolute feedback and comparative feedback, and other embodiments only use one or the other type of feedback.
Various significant advantages result from the system described above. It is expected that the described forms of feedback will be integrated within a web application, providing a quick and easy way to contribute feedback whilst examining the anomalies associated with the user's raw sensor data or derived data. Along with providing feedback, the application will allow users to examine their previous feedback and investigate the implications to the anomaly detection process. This will also allow modification of previously provided feedback, allowing users to correct erroneous feedback. It is likely that erroneous feedback would not significantly change the behavior of the anomaly detection process if it contradicts other feedback provided by the user, as it will receive less weighting than the larger quantity of 'correct' feedback.
This allows companies without specialised data analysts or other specialist engineers to benefit from anomaly detection by providing an interface which is easy to use and understand. Thus, anomaly detection feedback can now include a demographic of non-technical business owners/executives who have little time to provide feedback on the (strict or exact) correctness of identified anomalies. As such, the user feedback is advantageously not time-consuming, and is opportunistic, simple and understandable.
Using user interface styles similar to those used for social media interfaces, such as 'like'/'dislike' or thumbs-up/thumbs-down iconography, for anomaly detection feedback provides a familiar environment for non-technical users. As discussed above, the user feedback could be associated with a set of anomaly results or a single anomalous point. A further benefit of such an approach is the intentionally ambiguous format of input. 'Like' and 'dislike', for example, do not equate to labelling the point as normal or abnormal; instead they express fuzzy, indefinite feedback. For instance, a user may click 'dislike' for a detected anomaly where the intention behind the feedback is to state that the point should not be displayed because the user already knows about this information; requesting binary labelling of the point as normal or abnormal may discourage the user, as the available labels do not match their intention.
In addition to providing a familiar social media-like interface with what may be termed 'fuzzy' feedback, elements of playfulness, gamification, competition and other informal engagement styles may be employed to increase the amount and fidelity of feedback obtained, with the ultimate aim of producing accurate anomalies by encouraging users to provide feedback. This may include interactive and informal user interface approaches requesting feedback, aspects of user kudos for providing feedback, and competitive comparisons of the quantity of feedback provided relative to other similar users or friends. A further benefit is the ability to create a highly scalable system of obtaining and managing feedback. This allows the anticipated substantial IoT community to be leveraged to crowdsource feedback, reducing the amount of feedback required from each individual.
Finally, exploiting natural language processing along with domain-specific terminology allows the automatic generation of more user-friendly feedback interfaces, increasing familiarity for users and increasing the probability of obtaining feedback.

Claims (17)

1. A method of optimising an anomaly detection model, the method comprising presenting a dataset to a user on a visual display device, identifying at least one determined anomaly of the displayed dataset to the user, receiving a user feedback input in response to the displayed dataset which is indicative of the user’s like or dislike preference of at least one determined anomaly, using the feedback input to update a model used to determine the anomalies so as to bring anomaly determination of the model into closer alignment with the user’s preference.
2. The method of claim 1 which comprises applying two different versions of an anomaly detection model to a dataset, and presenting the results to a user to allow a user to provide feedback as to which version of the model he prefers.
3. The method as claimed in claim 1 or claim 2 which comprises determining a fitness rating which is indicative of the success of the model at identifying anomalies desired by the user based on his preference feedback.
4. The method of any preceding claim which comprises searching a parameter space to determine how parameters can be optimised in order to generate a refined version of the model which better suits the user's feedback preference input.
5. The method of any preceding claim which comprises displaying graphic user interface iconography or indicia indicative of a like or dislike user preference.
6. The method of any preceding claim which comprises receiving user preference feedback on each of multiple individual determined anomalies, or a dataset.
7. The method of any preceding claim comprising providing more than two graduations of user preference level relative to a scale of the strength of a user's preference.
8. The method of any preceding claim comprising repeated iterations of obtaining user feedback preference and updating the model accordingly.
9. The method of claim 8 in which the iterations of obtaining user feedback may be performed periodically.
10. The method of any preceding claim which comprises allowing a plurality of users to provide preference feedback to the same model.
11. The method of any preceding claim which comprises allowing a plurality of users to provide preference feedback to the same dataset.
12. The method of any preceding claim comprising allowing different users the same or different weightings of their preference feedback.
13. The method of any preceding claim which relates to data for at least one sensor, for example originating or being derived from internet connected (or Internet of Things) sensors.
14. The method of any preceding claim which is a method to optimize unsupervised anomaly detection algorithms.
15. The method as claimed in any preceding claim which includes obtaining and using non-binary absolute feedback and/or obtaining and using comparative feedback, for anomaly detection.
16. Computer feedback system for optimising an anomaly detection model which is arranged to implement the method of any of claims 1 to 15.
17. A computer software product for optimising an anomaly detection model comprising machine-readable instructions which, when executed by a data processor, are arranged to implement the method of the first aspect of the invention.
GB1514927.1A 2015-08-21 2015-08-21 User feedback for machine learning Withdrawn GB2541649A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1514927.1A GB2541649A (en) 2015-08-21 2015-08-21 User feedback for machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1514927.1A GB2541649A (en) 2015-08-21 2015-08-21 User feedback for machine learning

Publications (2)

Publication Number Publication Date
GB201514927D0 GB201514927D0 (en) 2015-10-07
GB2541649A true GB2541649A (en) 2017-03-01

Family

ID=54292042

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1514927.1A Withdrawn GB2541649A (en) 2015-08-21 2015-08-21 User feedback for machine learning

Country Status (1)

Country Link
GB (1) GB2541649A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019047519A1 (en) * 2017-09-05 2019-03-14 西安中兴新软件有限责任公司 Method and apparatus for upgrading narrowband internet of things and system
WO2019231659A1 (en) * 2018-05-29 2019-12-05 Microsoft Technology Licensing, Llc Data anomaly detection
US10631129B1 (en) 2018-10-01 2020-04-21 International Business Machines Corporation First responder feedback-based emergency response floor identification
US10839618B2 (en) 2018-07-12 2020-11-17 Honda Motor Co., Ltd. Applied artificial intelligence for natural language processing automotive reporting system
US11361247B2 (en) 2018-10-01 2022-06-14 International Business Machines Corporation Spatial device clustering-based emergency response floor identification
US11455639B2 (en) 2020-05-29 2022-09-27 Sap Se Unsupervised universal anomaly detection for situation handling
WO2023006215A1 (en) * 2021-07-30 2023-02-02 Lytt Limited Workflow and contextual drive knowledge encoding
EP4058861A4 (en) * 2019-11-12 2023-11-22 AVEVA Software, LLC Operational anomaly feedback loop system and method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112904437B (en) * 2021-01-14 2023-03-24 支付宝(杭州)信息技术有限公司 Detection method and detection device of hidden component based on privacy protection
CN114939276B (en) * 2022-04-26 2023-08-01 深圳爱玩网络科技股份有限公司 Game operation data analysis method, system and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120137367A1 (en) * 2009-11-06 2012-05-31 Cataphora, Inc. Continuous anomaly detection based on behavior modeling and heterogeneous information analysis

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120137367A1 (en) * 2009-11-06 2012-05-31 Cataphora, Inc. Continuous anomaly detection based on behavior modeling and heterogeneous information analysis

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"11th International Conference on Information Fusion", IEEE, 2008, Riveiro et al "Improving maritime anomaly detection and situation awareness though interactive visualization", ISBN 978-3-8007-3092-6; 3-8007-3092-8 *
"28th International Conference on Software Maintenance (ICSM)", IEEE, 2012, Gong L et al, "Interactive Fault Localization Leveraging Simple User Feedback", ISBN 978-1-4673-2313-0; 1-4673-2313-6 *
"Conference on Visual Analytics Science and Technology (VAST)", IEEE, 2012, E Kandogan, "Just-in-Time Annotation of Clusters, Outliers, and Trends in Point-based Data Visualizations", ISBN 978-1-4673-4752-5; 1-4673-4752-3 *
"International Congress on Big Data", IEEE, 2013, Kandogan E et al, "Data For All: A System Approach to Accelerate the Path from Data to Insight" *
"Symposium on Visual Analytics Science and Technology (VAST)", IEEE, 2010, Liao et al, "Anomaly Detection in GPS Data Based on Visual Analytics", ISBN 978-1-4244-9488-0; 1-4244-9488-5 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019047519A1 (en) * 2017-09-05 2019-03-14 西安中兴新软件有限责任公司 Method and apparatus for upgrading narrowband internet of things and system
WO2019231659A1 (en) * 2018-05-29 2019-12-05 Microsoft Technology Licensing, Llc Data anomaly detection
US11341374B2 (en) 2018-05-29 2022-05-24 Microsoft Technology Licensing, Llc Data anomaly detection
US10839618B2 (en) 2018-07-12 2020-11-17 Honda Motor Co., Ltd. Applied artificial intelligence for natural language processing automotive reporting system
US10631129B1 (en) 2018-10-01 2020-04-21 International Business Machines Corporation First responder feedback-based emergency response floor identification
US10771920B2 (en) 2018-10-01 2020-09-08 International Business Machines Corporation First responder feedback-based emergency response floor identification
US11361247B2 (en) 2018-10-01 2022-06-14 International Business Machines Corporation Spatial device clustering-based emergency response floor identification
EP4058861A4 (en) * 2019-11-12 2023-11-22 AVEVA Software, LLC Operational anomaly feedback loop system and method
US11455639B2 (en) 2020-05-29 2022-09-27 Sap Se Unsupervised universal anomaly detection for situation handling
WO2023006215A1 (en) * 2021-07-30 2023-02-02 Lytt Limited Workflow and contextual drive knowledge encoding

Also Published As

Publication number Publication date
GB201514927D0 (en) 2015-10-07

Similar Documents

Publication Publication Date Title
GB2541649A (en) User feedback for machine learning
Azodi et al. Opening the black box: interpretable machine learning for geneticists
JP5340204B2 (en) Inference apparatus, control method thereof, and program
CN111242310B (en) Feature validity evaluation method and device, electronic equipment and storage medium
Das et al. Beames: Interactive multimodel steering, selection, and inspection for regression tasks
Loepp et al. Interactive recommending with tag-enhanced matrix factorization (TagMF)
US20200110783A1 (en) Method and system for estimating user-item interaction data
US20210158420A1 (en) Clustered user browsing missions for products with user-selectable options associated with the products
WO2016151620A1 (en) Simulation system, simulation method, and simulation program
WO2022231963A1 (en) Industry specific machine learning applications
US11488223B1 (en) Modification of user interface based on dynamically-ranked product attributes
EP4205043A1 (en) Hybrid machine learning
US9804741B2 (en) Methods and systems for managing N-streams of recommendations
US20240078473A1 (en) Systems and methods for end-to-end machine learning with automated machine learning explainable artificial intelligence
WO2023174099A1 (en) Recommendation model training method, item recommendation method and system, and related device
US20240037145A1 (en) Product identification in media items
US9251263B2 (en) Systems and methods for graphical search interface
Prendez et al. Measuring parameter uncertainty by identifying fungible estimates in SEM
Lai et al. A Survey on Data-Centric Recommender Systems
JP5446788B2 (en) Information processing apparatus and program
Letard et al. Bandit algorithms: A comprehensive review and their dynamic selection from a portfolio for multicriteria top-k recommendation
JP7396478B2 (en) Model training program, model training method, and information processing device
US11782576B2 (en) Configuration of user interface for intuitive selection of insight visualizations
EP4116891A1 (en) Model generation program and method, and information processing device
US8862592B2 (en) Systems and methods for graphical search interface

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20210916 AND 20210922

R108 Alteration of time limits (patents rules 1995)

Free format text: EXTENSION ALLOWED

Effective date: 20221116

Free format text: EXTENSION APPLICATION

Effective date: 20221024

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)