US20170308810A1 - Event prediction and object recognition system - Google Patents

Event prediction and object recognition system

Info

Publication number
US20170308810A1
Authority
US
United States
Prior art keywords
matrix
value
label
observation
observation vectors
Prior art date
Legal status
Granted
Application number
US15/335,530
Other versions
US9792562B1
Inventor
Xu Chen
Tao Wang
Current Assignee
SAS Institute Inc
Original Assignee
SAS Institute Inc
Priority date
Application filed by SAS Institute Inc
Priority to US15/335,530 (US9792562B1)
Assigned to SAS INSTITUTE INC. (Assignors: CHEN, XU; WANG, TAO)
Priority to US15/686,863 (US10127477B2)
Application granted
Publication of US9792562B1
Publication of US20170308810A1
Priority to US16/108,293 (US10275690B2)
Priority to US16/162,794 (US10354204B2)
Status: Active

Classifications

    • G06N Computing arrangements based on specific computational models (G: Physics; G06: Computing; Calculating or Counting)
    • G06N20/00 Machine learning (formerly G06N99/005)
    • G06N5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound (formerly G06N5/003)
    • G06N5/02 Knowledge representation; Symbolic representation
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks (formerly G06N7/005)

Definitions

  • Partially labeled dataset 124 includes supervised data and unsupervised data.
  • the supervised data includes a y i -variable (target) value that indicates a truth related to the observation vector x i such as what the observation vector x i in the form of text means, what the observation vector x i in the form of image data does or does not represent (i.e., text, a medical condition, an equipment failure, an intrusion, a terrain feature, etc.), what the observation vector x i in the form of sensor signal data does or does not represent (i.e., voice, speech, an equipment failure, an intrusion, a terrain feature, etc.), etc.
  • a sensor may measure a physical quantity in an environment to which the sensor is associated and generate a corresponding measurement datum that may be associated with a time that the measurement datum is generated.
  • Illustrative sensors include a microphone, an infrared sensor, a radar, a pressure sensor, a temperature sensor, a position or location sensor, a voltage sensor, a current sensor, a frequency sensor, a humidity sensor, a dewpoint sensor, a camera, a computed tomography machine, a magnetic resonance imaging machine, an x-ray machine, an ultrasound machine, etc. that may be mounted to various components used as part of a system.
  • partially labeled dataset 124 may include image data captured by medical imaging equipment (i.e., computed tomography image, magnetic resonance image, x-ray image, ultrasound image, etc.) of a body part of a living thing.
  • a subset of the image data is labeled, for example, as either indicating existence of a medical condition or non-existence of the medical condition.
  • Partially labeled dataset 124 may include a reference to image data that may be stored, for example, in an image file, and the existence/non-existence label associated with each image file.
  • Partially labeled dataset 124 includes a plurality of such references.
  • the existence/non-existence labels may be defined by a clinician or expert in the field to which data stored in partially labeled dataset 124 relates.
  • the data stored in partially labeled dataset 124 may be generated by and/or captured from a variety of sources including one or more sensors of the same or different type, one or more computing devices, etc.
  • the data stored in partially labeled dataset 124 may be received directly or indirectly from the source and may or may not be pre-processed in some manner.
  • the data may include any type of content represented in any computer-readable format such as binary, alphanumeric, numeric, string, markup language, etc.
  • the data may be organized using delimited fields, such as comma or space separated fields, fixed width fields, using a SAS® dataset, etc.
  • the SAS dataset may be a SAS® file stored in a SAS® library that a SAS® software tool creates and processes.
  • the SAS dataset contains data values that are organized as a table of observations (rows) and variables (columns) that can be processed by one or more SAS software tools.
  • Partially labeled dataset 124 may be stored on computer-readable medium 108 or on one or more computer-readable media of distributed computing system 128 and accessed by data labeling device 100 using communication interface 106 , input interface 102 , and/or output interface 104 .
  • Data stored in partially labeled dataset 124 may be sensor measurements or signal values captured by a sensor, may be generated or captured in response to occurrence of an event or a transaction, generated by a device such as in response to an interaction by a user with the device, etc.
  • the data stored in partially labeled dataset 124 may be captured at different date/time points periodically, intermittently, when an event occurs, etc.
  • Each record of partially labeled dataset 124 may include one or more date values and/or time values.
  • Partially labeled dataset 124 may include data captured at a high data rate such as 200 or more observations per second for one or more physical objects.
  • data stored in partially labeled dataset 124 may be generated as part of the Internet of Things (IoT), where things (e.g., machines, devices, phones, sensors) can be connected to networks and the data from these things collected and processed within the things and/or external to the things before being stored in partially labeled dataset 124 .
  • the IoT can include sensors in many different devices and types of devices. Some of these devices may be referred to as edge devices and may involve edge computing circuitry. These devices may provide a variety of stored or generated data, such as network data or data specific to the network devices themselves. Some data may be processed with an event stream processing engine, which may reside in the cloud or in an edge device before being stored in partially labeled dataset 124 .
  • Partially labeled dataset 124 may be stored using one or more of various structures as known to those skilled in the art including one or more files of a file system, a relational database, one or more tables of a system of tables, a structured query language database, etc. on data labeling device 100 or on distributed computing system 128 .
  • Data labeling device 100 may coordinate access to partially labeled dataset 124 that is distributed across distributed computing system 128 that may include one or more computing devices that can communicate using a network.
  • partially labeled dataset 124 may be stored in a cube distributed across a grid of computers as understood by a person of skill in the art.
  • partially labeled dataset 124 may be stored in a multi-node Hadoop® cluster.
  • Apache™ Hadoop® is an open-source software framework for distributed computing supported by the Apache Software Foundation.
  • partially labeled dataset 124 may be stored in a cloud of computers and accessed using cloud computing technologies, as understood by a person of skill in the art.
  • the SAS® LASR™ Analytic Server may be used as an analytic platform to enable multiple users to concurrently access data stored in partially labeled dataset 124.
  • the SAS® Viya™ open, cloud-ready, in-memory architecture also may be used as an analytic platform to enable multiple users to concurrently access data stored in partially labeled dataset 124.
  • Some systems may use SAS In-Memory Statistics for Hadoop® to read big data once and analyze it several times by persisting it in-memory for the entire session. Some systems may be of other types and configurations.
  • Labeled dataset 126 may be identical to partially labeled dataset 124 except that labeled dataset 126 includes only supervised data such that the y i -variable (target) value of each observation vector x i is labeled.
  • the existence or non-existence label is associated with each image file.
  • data labeling application 122 may be used to create labeled dataset 126 from partially labeled dataset 124 . On each iteration, additional data points of partially labeled dataset 124 are identified for truth labeling. Data labeling application 122 has been shown to improve the accuracy of labels defined in labeled dataset 126 at much lower cost due to a reduced reliance on human labor.
  • Additional, fewer, or different operations may be performed depending on the embodiment of data labeling application 122 .
  • the order of presentation of the operations of FIGS. 2A and 2B is not intended to be limiting. Although some of the operational flows are presented in sequence, the various operations may be performed in various repetitions, concurrently (in parallel, for example, using threads and/or a distributed computing system), and/or in other orders than those that are illustrated.
  • a user may execute data labeling application 122 , which causes presentation of a first user interface window, which may include a plurality of menus and selectors such as drop down menus, buttons, text boxes, hyperlinks, etc. associated with data labeling application 122 as understood by a person of skill in the art.
  • the plurality of menus and selectors may be accessed in various orders.
  • An indicator may indicate one or more user selections from a user interface, one or more data entries into a data field of the user interface, one or more data items read from computer-readable medium 108 or otherwise defined with one or more default values, etc. that are received as an input by data labeling application 122 .
  • a first indicator may be received that indicates partially labeled dataset 124 .
  • the first indicator indicates a location and a name of partially labeled dataset 124 .
  • the first indicator may be received by data labeling application 122 after selection from a user interface window or after entry by a user into a user interface window.
  • partially labeled dataset 124 may not be selectable. For example, a most recently created dataset may be used automatically. A subset of the observation vectors x i included in partially labeled dataset 124 are labeled.
  • a second indicator may be received that indicates a label set Q associated with partially labeled dataset 124 .
  • the label set Q includes a list of permissible values that the y i -variable (target) value of each observation vector x i may have.
  • the absence of a y i -variable (target) value indicates that the associated observation vector x i is not labeled in partially labeled dataset 124.
  • alternatively, a y i -variable (target) value of zero, for example, may indicate that the associated observation vector x i is not labeled, where the value of zero is not included in the label set Q.
  • n indicates the number of data points or observation vectors x i included in partially labeled dataset 124, where the observation vectors x i (i ≤ l) are labeled with y i ∈ Q, and the remaining observation vectors x i (l < i ≤ n) are unlabeled (not labeled with y i ∈ Q).
  • l indicates a number of labeled data points or observation vectors x i included in partially labeled dataset 124 .
  • l may be a small percentage, such as less than 1% of the observation vectors x i included in partially labeled dataset 124 .
  • Data labeling application 122 determines a label from label set Q for each observation vector x i included in partially labeled dataset 124 that is not labeled. The resulting fully labeled (supervised) data is stored in labeled dataset 126 .
  • a third indicator may be received that indicates a relative weighting value α, where α is selected between zero and one, non-inclusive.
  • during label propagation, each data point receives label information from its neighboring data points while also retaining its initial label information.
  • the relative weighting value α specifies the relative amount of the information received from a point's neighbors versus its initial label information.
  • a fourth indicator of a kernel function to apply may be received.
  • the fourth indicator indicates a name of a kernel function.
  • the fourth indicator may be received by data labeling application 122 after selection from a user interface window or after entry by a user into a user interface window.
  • a default value for the kernel function may further be stored, for example, in computer-readable medium 108 .
  • a kernel function may be selected from “Gaussian”, “Exponential”, “Linear”, “Polynomial”, “Sigmoid”, etc.
  • a default kernel function may be the Gaussian kernel function though any positive definite kernel function could be used.
  • the kernel function may be labeled or selected in a variety of different manners by the user as understood by a person of skill in the art.
  • the kernel function may not be selectable, and a single kernel function is implemented in data labeling application 122 .
  • the Gaussian kernel function may be used by default or without allowing a selection.
  • the Gaussian kernel function may be defined as K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2s²)), where s is a kernel parameter that is termed a Gaussian bandwidth parameter.
  • a fifth indicator of a kernel parameter value to use with the kernel function may be received, for example, a value for s, the Gaussian bandwidth parameter.
  • the fifth indicator may not be received.
  • a default value for the kernel parameter value may be stored, for example, in computer-readable medium 108 and used automatically or the kernel parameter value may not be used.
  • the value of the kernel parameter may not be selectable. Instead, a fixed, predefined value may be used.
  • a sixth indicator of a labeling convergence test may be received.
  • the sixth indicator indicates a name of a labeling convergence test.
  • the sixth indicator may be received by data labeling application 122 after selection from a user interface window or after entry by a user into a user interface window.
  • a default value for the labeling convergence test may further be stored, for example, in computer-readable medium 108 .
  • a labeling convergence test may be selected from “Num Iterations”, “Within Tolerance”, etc.
  • a default convergence test may be “Num Iterations”.
  • the labeling convergence test may be labeled or selected in a variety of different manners by the user as understood by a person of skill in the art.
  • the labeling convergence test may not be selectable, and a single labeling convergence test is implemented by data labeling application 122 .
  • the labeling convergence test “Num Iterations” may be used by default or without allowing a selection.
  • a seventh indicator of a labeling convergence test value may be received.
  • the seventh indicator may not be received.
  • a default value may be stored, for example, in computer-readable medium 108 and used automatically when the seventh indicator is not received.
  • the labeling convergence test value may not be selectable. Instead, a fixed, predefined value may be used. As an example, when the labeling convergence test “Num Iterations” is indicated from operation 210, the labeling convergence test value is a number of iterations M_L.
  • the number of iterations M_L may be set between 10 and 1000 though the user may determine that other values are more suitable for their application as understood by a person of skill in the art, for example, based on the labeling accuracy desired, computing resources available, size of partially labeled dataset 124, etc.
  • the labeling convergence test value may be a tolerance value ⁇ .
  • an eighth indicator of a distance function may be received.
  • the eighth indicator indicates a name of a distance function.
  • the eighth indicator may be received by data labeling application 122 after selection from a user interface window or after entry by a user into a user interface window.
  • a default value for the distance function may further be stored, for example, in computer-readable medium 108 .
  • a distance function may be selected from “Kullback-Leibler”, “Euclidean”, “Manhattan”, “Minkowski”, “Cosine”, “Chebyshev”, “Hamming”, “Mahalanobis”, etc.
  • a default distance function may be “Kullback-Leibler”.
  • the distance function may be labeled or selected in a variety of different manners by the user as understood by a person of skill in the art.
  • the distance function may not be selectable, and a single distance function is implemented by data labeling application 122 .
  • a ninth indicator of a number of supplemental labeled points N_SL may be received.
  • the ninth indicator may not be received.
  • a default value may be stored, for example, in computer-readable medium 108 and used automatically.
  • the value of the number of supplemental labeled points N_SL may not be selectable. Instead, a fixed, predefined value may be used.
  • the number of supplemental labeled points N_SL defines a number of additional data points of partially labeled dataset 124 that are identified for truth labeling on each iteration as described further below. Merely for illustration, the number of supplemental labeled points N_SL may be between 2 and 10 though the user may determine that other values are more suitable for their application.
  • a tenth indicator of a number of times M_SL to perform supplemental labeling may be received.
  • the tenth indicator may not be received.
  • a default value may be stored, for example, in computer-readable medium 108 and used automatically when the tenth indicator is not received.
  • the number of times may not be selectable. Instead, a fixed, predefined value may be used.
  • the number of times M_SL may be set between 3 and 1000 though the user may determine that other values are more suitable for their application as understood by a person of skill in the art, for example, based on computing resources available, size of partially labeled dataset 124, etc.
  • an affinity matrix W is computed based on the kernel function indicated by operation 206 and the kernel parameter value indicated by operation 208 .
  • the affinity matrix W is defined as W_ij = K(x_i, x_j) for i ≠ j, with W_ii = 0, where K is the indicated kernel function; for example, W_ij = exp(−‖x_i − x_j‖² / (2s²)) when the Gaussian kernel function is used.
  • a diagonal matrix D is computed based on the affinity matrix W, for example, with diagonal entries D_ii = Σ_j W_ij and zeroes elsewhere.
  • a normalized distance matrix S is computed based on the affinity matrix W and the diagonal matrix D, for example, as S = D^(−1/2) W D^(−1/2), as sketched below.
  • a label matrix Y is defined based on partially labeled dataset 124 .
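  • For illustration, the graph construction just described (affinity matrix W, diagonal matrix D, normalized matrix S, and label matrix Y) might be sketched as follows in Python. This is a minimal sketch, assuming a Gaussian kernel, target values coded 1, …, c with 0 meaning unlabeled, and an illustrative helper name build_graph that does not come from the patent:

```python
import numpy as np

def build_graph(X, y, n_classes, s=1.0):
    """Sketch: affinity matrix W, normalized matrix S, label matrix Y."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # squared pairwise distances
    W = np.exp(-sq / (2.0 * s ** 2))         # Gaussian kernel affinities
    np.fill_diagonal(W, 0.0)                 # W_ii = 0
    d = W.sum(axis=1)                        # D_ii = sum_j W_ij
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt          # S = D^(-1/2) W D^(-1/2)
    Y = np.zeros((X.shape[0], n_classes))    # label matrix from the partially labeled data
    for i, label in enumerate(y):
        if label > 0:                        # 0 marks an unlabeled observation vector
            Y[i, label - 1] = 1.0
    return W, S, Y
```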
  • a classification matrix F and one or more labeling convergence parameter values are initialized.
  • Classification matrix F is an n ⁇ c matrix.
  • One or more labeling convergence parameter values may be initialized based on the labeling convergence test indicated from operation 210 .
  • a first labeling convergence parameter value t may be initialized to zero and associated with the number of iterations M_L so that first labeling convergence parameter value t can be compared to the number of iterations M_L to determine convergence by the labeling convergence test.
  • Classification matrix F defines a label probability distribution matrix for each observation vector x i .
  • a first labeling convergence parameter value ⁇ F may be initialized to a large number and associated with the tolerance value ⁇ .
  • the updated classification matrix F defines a label probability for each permissible value defined in label set Q for each observation vector x i .
  • the one or more labeling convergence parameter values are updated.
  • for example, when the labeling convergence test is “Within Tolerance”, ΔF = ‖F(t+1) − F(t)‖ may be computed.
  • when labeling has converged, processing continues in an operation 234.
  • when labeling has not converged, processing continues in operation 228 to compute a next update of classification matrix F(t+1), for example, as F(t+1) = αSF(t) + (1 − α)Y.
  • when the labeling convergence test “Num Iterations” is indicated from operation 210, the first labeling convergence parameter value t is compared to the labeling convergence test value that is the number of iterations M_L.
  • when t ≥ M_L, labeling has converged.
  • when the labeling convergence test “Within Tolerance” is indicated, the first labeling convergence parameter value ΔF is compared to the labeling convergence test value that is the tolerance value Δ.
  • when ΔF ≤ Δ, labeling has converged.
  • the y i -variable (target) value of each observation vector x i is labeled using F(t), for example, by selecting the permissible value having the maximum label probability, y_i = argmax_j F_ij(t).
  • processing continues in an operation 238 .
  • supplemental labeling is not done, processing continues in an operation 240 .
  • supplemental labeling is done when a number of times operations 240 - 250 have been performed is greater than or equal to M SL .
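  • A minimal sketch of this propagation loop, assuming the “Num Iterations” convergence test and the output of the illustrative build_graph above (propagate_labels is an illustrative name, not from the patent):

```python
def propagate_labels(S, Y, alpha=0.2, max_iters=100):
    """Iterate F(t+1) = alpha*S*F(t) + (1-alpha)*Y a fixed number of times,
    then label each observation with its maximum-probability value."""
    F = Y.copy()                           # initialize classification matrix F
    for _ in range(max_iters):             # "Num Iterations" test with M_L = max_iters
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    labels = F.argmax(axis=1) + 1          # permissible values coded 1..c
    return F, labels
```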
  • each observation vector x i with its selected y i -variable (target) value is stored in labeled dataset 126 .
  • Labeled dataset 126 may be stored on data labeling device 100 and/or on one or more computing devices of distributed computing system 128 in a variety of formats as understood by a person of skill in the art. All or a subset of labeled dataset 126 further may be output to display 116 , to printer 120 , etc.
  • medical images labeled as including a tumor may be recognized by data labeling application 122 and presented on display 116 or indicators of the medical images may be printed on printer 120 .
  • a notification message may be sent to a clinician indicating that a tumor has been identified based on a “tumor” label determined for the image data.
  • an alert message may be sent to another device using communication interface 106 , printed on printer 120 or another printer, presented visually on display 116 or another display, presented audibly using speaker 118 or another speaker, etc. based on how urgent a response is needed to a certain label. For example, if a sound signal or image data indicate an intrusion into a surveilled area, a notification message may be sent to a responder.
  • a distance matrix Dis is computed between each pair of label distributions defined by F(t).
  • the distance function indicated from operation 214 is used to compute distance matrix Dis between each pair of label probability distributions defined by F(t).
  • Distance matrix Dis is a symmetric n×n matrix. For illustration, when the distance function indicated from operation 214 is “Kullback-Leibler”, a symmetrized divergence such as Dis_ij = Σ_k F_ik(t) log(F_ik(t)/F_jk(t)) + Σ_k F_jk(t) log(F_jk(t)/F_ik(t)) may be used so that Dis remains symmetric.
  • the number of supplemental labeled points N_SL are selected from distance matrix Dis by identifying the N_SL data points having the smallest distances in distance matrix Dis, for example, the smallest row sums SD_i = Σ_j Dis_ij in a summed distance matrix SD, as sketched below.
  • the index i to the observation vector x i associated with each data point may be identified as part of the selection.
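  • A sketch of this selection step, assuming the symmetrized Kullback-Leibler distance described above (select_informative_points is an illustrative name, not from the patent):

```python
import numpy as np

def select_informative_points(F, n_supplemental=5, eps=1e-12):
    """Build a symmetric pairwise KL distance matrix Dis from the label
    probability distributions (rows of F), sum each row into SD, and return
    the indices of the n_supplemental points with the smallest sums."""
    P = F / (F.sum(axis=1, keepdims=True) + eps)  # rows as probability distributions
    logP = np.log(P + eps)
    kl = (P[:, None, :] * (logP[:, None, :] - logP[None, :, :])).sum(axis=2)
    Dis = kl + kl.T                               # symmetrized divergence
    SD = Dis.sum(axis=1)                          # summed distance matrix SD
    return np.argsort(SD)[:n_supplemental]        # most central, most informative points
```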
  • a truth label is requested for each of the selected N_SL data points by presenting the observation vector x i associated with each data point. For example, if the observation vector x i includes an image, the image is presented on display 116 with a request that a user determine the truth label, the true y i -variable (target) value, for that observation vector x i. The truth label may represent different values depending on what the image represents or indicates. As another example, if the observation vector x i includes a sound signal, the sound signal is played on speaker 118 with a request that a user determine the truth label, the true y i -variable (target) value, for that observation vector x i. The truth label may represent different values depending on what the sound signal represents or indicates.
  • a truth response label, the true y i -variable (target) value, is received for each observation vector x i of the selected N_SL data points.
  • the truth response label includes one of the permissible values included in label set Q.
  • the truth response label, the true y i -variable (target) value for each observation vector x i of the selected N_SL data points, is updated in partially labeled dataset 124.
  • as a result, l has been increased by N_SL, so that observation vectors x i (i ≤ l) are labeled with y i ∈ Q, and the remaining observation vectors x i (l < i ≤ n) remain unlabeled (not labeled with y i ∈ Q).
  • label matrix Y is updated based on partially labeled dataset 124 as updated in operation 248, and processing continues in operation 226 to reinitialize classification matrix F and update labels in partially labeled dataset 124.
  • Operations 240 - 250 are performed at least once, and operations 226 - 234 are performed at least twice before the y i -variable (target) value of each observation vector x i selected in operation 234 is output in operation 238 .
  • Data labeling application 122 optimizes the process of selecting better labeled data to improve classification/prediction performance. By selecting the labeled data based on a distance measure, data labeling application 122 selects the most informative data points since they have the smallest distance to the rest of the data in a probability space. Geometrically, these data points are frequently located in the centers of clusters in the probability space. Adding them to partially labeled dataset 124 as labeled points can significantly facilitate the learning process in comparison to random selection.
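  • Putting the pieces together, a condensed sketch of the overall flow, reusing the illustrative helpers sketched above and an assumed request_truth_label callback standing in for the truth label request and response described above:

```python
def label_dataset(X, y, n_classes, n_supplemental=5, n_rounds=5, alpha=0.2):
    """Active semi-supervised labeling: propagate labels, request truth labels
    for the most informative points, and repeat for a fixed number of rounds."""
    W, S, Y = build_graph(X, y, n_classes)
    for _ in range(n_rounds):                        # M_SL supplemental rounds
        F, labels = propagate_labels(S, Y, alpha=alpha)
        for i in select_informative_points(F, n_supplemental):
            y[i] = request_truth_label(X[i])         # assumed human-in-the-loop call
            Y[i, :] = 0.0                            # update label matrix Y
            Y[i, y[i] - 1] = 1.0
    F, labels = propagate_labels(S, Y, alpha=alpha)  # final relabeling pass
    return labels                                    # fully labeled target values
```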
  • Data labeling application 122 was used with a dataset of handwritten digits as partially labeled dataset 124 .
  • the relative weighting value ⁇ was set to 0.2, where the larger the weight is, the faster labels propagate.
  • N_SL was set to five and the Kullback-Leibler divergence was used for the distance function.
  • M_L = 5 was used.
  • the effectiveness of data labeling application 122 can be measured using both quantitative results and qualitative results. For quantitative results, a precision, a recall, and an F1-score were computed for each of the 10 labels.
  • precision is the number of correct results divided by the number of all returned results and can be defined as precision = TP/(TP + FP), where TP is the number of true positives and FP is the number of false positives.
  • recall is the number of correct results divided by the number of results that should have been returned and can be defined as recall = TP/(TP + FN), where FN is the number of false negatives.
  • F1-score is a measure that combines precision and recall as their harmonic mean and can be defined as F1 = 2 · precision · recall/(precision + recall).
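  • For illustration, these per-label metrics might be computed as follows (a sketch mirroring the definitions above, not code from the patent):

```python
import numpy as np

def per_label_metrics(y_true, y_pred, labels):
    """Per-label precision, recall, and F1-score."""
    out = {}
    for q in labels:
        tp = np.sum((y_pred == q) & (y_true == q))  # correct results returned
        fp = np.sum((y_pred == q) & (y_true != q))  # incorrect results returned
        fn = np.sum((y_pred != q) & (y_true == q))  # results that should have been returned
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        out[q] = (precision, recall, f1)
    return out
```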
  • data labeling application 122 achieved 94% precision and 93% recall with 50 total labeled samples (five samples having minimum distance in summed distance matrix SD were added to partially labeled dataset 124 at each iteration) and 1450 unlabeled samples.
  • For qualitative results, the five samples having minimum distance in summed distance matrix SD are shown in FIGS. 3A-3E for the first through fifth iterations of operations 240-250, respectively.
  • “Predict” above each image indicates the label determined in operation 234 for the sample, and “truth” above each image indicates the label received in operation 246 for the sample. Note that the number of correct predictions increases with each iteration.
  • the performance gains resulting from use of data labeling application 122 can be measured by comparing the precision, recall, and F1-score generated by operations 228 - 234 versus operations 226 - 250 using the same number of labeled samples. For example, operations 228 - 234 were performed with 15 labeled samples and the labeled points output to labeled dataset 126 after operation 234 in operation 238 without performing operations 240 - 250 . In comparison, operations 228 - 234 were performed with 10 initially labeled samples and operations 226 - 234 were performed with five supplemental samples selected in operation 242 for one or more additional iterations. Table I below shows the precision results:
  • the precision, recall, and F1-score values demonstrate that data labeling application 122 achieves better classification results in terms of the ability to correctly label an item with fewer incorrect labels over prior algorithms that label unlabeled data using a fixed number of randomly selected observation vectors x i .
  • the improvement may be attributable to the selection of supplemental labels that have minimum average distances and, as a result, are more informative.
  • Data labeling application 122 can be implemented as part of a machine learning application. Data labeling application 122 lowers the cost associated with training the object labeling process because fewer samples are needed to be labeled due to the identification of the samples that are most informative.
  • Data labeling application 122 can be used for image recognition on the Internet.
  • the target is to identify whether an image is or is not an image of a cat based on a limited time and resource budget.
  • the labeling task is usually accomplished by volunteers.
  • the best set for the training data (images with a cat or images without a cat) is identified.
  • Data labeling application 122 can be used for image recognition in sports analysis to recognize human actions such as diving, walking, running, swinging, kicking, lifting, etc.
  • Image recognition in this area is a challenging task due to significant intra-class variations, occlusion, and background clutter in big data.
  • Most of the existing work uses action models based on statistical learning algorithms for classification. To obtain ideal recognition results, a massive number of labeled samples is required to train the complicated human action models. However, collecting labeled samples is very costly.
  • Data labeling application 122 addresses this challenge by selecting the most informative labeled human action samples using a smaller budget while providing better classification results.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A computing device predicts occurrence of an event or classifies an object using semi-supervised data. A label set defines permissible values for a target variable. A value of the permissible values is defined for a subset of observation vectors. A predefined number of times, a distance matrix is computed that defines a distance value between pairs of observation vectors using a distance function and a converged classification matrix; a number of observation vectors is selected that have minimum values for the distance value; a label is requested and a response is received for each of the selected observation vectors; the value of the target variable is updated for each of the selected observation vectors with the received response; and the value of the target variable is determined again by recomputing the converged classification matrix. The value of the target variable for each observation vector is output to a second dataset.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Patent Application No. 62/325,668 filed on Apr. 21, 2016, the entire contents of which are hereby incorporated by reference.
  • BACKGROUND
  • Machine learning defines models that can be used to predict occurrence of an event, for example, from sensor data or signal data, or recognize/classify an object, for example, in an image, in text, in a web page, in voice data, in sensor data, etc. Machine learning algorithms can be classified into three categories: unsupervised learning, supervised learning, and semi-supervised learning. Unsupervised learning does not require that a target (dependent) variable y be labeled in training data to indicate occurrence or non-occurrence of the event or to recognize/classify the object. An unsupervised learning system predicts the label, target variable y, in training data by defining a model that describes the hidden structure in the training data. Supervised learning requires that the target (dependent) variable y be labeled in training data so that a model can be built to predict the label of new unlabeled data. A supervised learning system discards observations in the training data that are not labeled. While supervised learning algorithms are typically better predictors/classifiers, labeling training data often requires a physical experiment or a statistical trial, and human labor is usually required. As a result, it may be very complex and expensive to fully label an entire training dataset. A semi-supervised learning system only requires that the target (dependent) variable y be labeled in a small portion of the training data and uses the unlabeled training data in the training dataset to define the prediction/classification (data labeling) model.
  • SUMMARY
  • In an example embodiment, a non-transitory computer-readable medium is provided having stored thereon computer-readable instructions that, when executed by a computing device, cause the computing device to predict occurrence of an event or to classify an object using semi-supervised data to label unlabeled data in a dataset. A dataset is read that includes a plurality of observation vectors. A label set is read that defines permissible values for a target variable. A value of the permissible values of the target variable is defined for a subset of the plurality of observation vectors. (a) A classification matrix is initialized based on the value of the target variable of each observation vector of the plurality of observation vectors; (b) a converged classification matrix is computed, wherein the converged classification matrix defines a label probability for each permissible value defined in the label set for each observation vector of the plurality of observation vectors; and (c) for each observation vector, the value of the target variable is updated based on a maximum label probability value identified from the converged classification matrix. A predefined number of times, a distance matrix is computed that defines a distance value between each pair of the plurality of observation vectors using a distance function and the converged classification matrix; a number of observation vectors is selected from the dataset that have minimum values for the distance value; a label is requested for each of the selected observation vectors; a response to the request is received for each of the selected observation vectors; the value of the target variable is updated for each of the selected observation vectors with the received response; and operations (a) to (c) are repeated. After the predefined number of times, the value of the target variable for each observation vector of the plurality of observation vectors is output to a second dataset.
  • In yet another example embodiment, a computing device is provided. The computing device includes, but is not limited to, a processor and a non-transitory computer-readable medium operably coupled to the processor. The computer-readable medium has instructions stored thereon that, when executed by the computing device, cause the computing device to predict occurrence of an event or classify an object using semi-supervised data to label unlabeled data in a dataset.
  • In an example embodiment, a method of predicting occurrence of an event or classifying an object using semi-supervised data to label unlabeled data in a dataset is provided.
  • Other principal features of the disclosed subject matter will become apparent to those skilled in the art upon review of the following drawings, the detailed description, and the appended claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Illustrative embodiments of the disclosed subject matter will hereafter be described referring to the accompanying drawings, wherein like numerals denote like elements.
  • FIG. 1 depicts a block diagram of a data labeling device in accordance with an illustrative embodiment.
  • FIGS. 2A and 2B depict a flow diagram illustrating examples of operations performed by the data labeling device of FIG. 1 in accordance with an illustrative embodiment.
  • FIGS. 3A-3E depict supplemental points successively selected for labeling by the data labeling device of FIG. 1 in accordance with an illustrative embodiment.
  • DETAILED DESCRIPTION
  • Referring to FIG. 1, a block diagram of a data labeling device 100 is shown in accordance with an illustrative embodiment. Data labeling device 100 may include an input interface 102, an output interface 104, a communication interface 106, a non-transitory computer-readable medium 108, a processor 110, a data labeling application 122, a partially labeled dataset 124, and a labeled dataset 126. Fewer, different, and/or additional components may be incorporated into data labeling device 100.
  • Input interface 102 provides an interface for receiving information from the user or another device for entry into data labeling device 100 as understood by those skilled in the art. Input interface 102 may interface with various input technologies including, but not limited to, a keyboard 112, a microphone 113, a mouse 114, a display 116, a track ball, a keypad, one or more buttons, etc. to allow the user to enter information into data labeling device 100 or to make selections presented in a user interface displayed on display 116. The same interface may support both input interface 102 and output interface 104. For example, display 116 comprising a touch screen provides a mechanism for user input and for presentation of output to the user. Data labeling device 100 may have one or more input interfaces that use the same or a different input interface technology. The input interface technology further may be accessible by data labeling device 100 through communication interface 106.
  • Output interface 104 provides an interface for outputting information for review by a user of data labeling device 100 and/or for use by another application or device. For example, output interface 104 may interface with various output technologies including, but not limited to, display 116, a speaker 118, a printer 120, etc. Data labeling device 100 may have one or more output interfaces that use the same or a different output interface technology. The output interface technology further may be accessible by data labeling device 100 through communication interface 106.
  • Communication interface 106 provides an interface for receiving and transmitting data between devices using various protocols, transmission technologies, and media as understood by those skilled in the art. Communication interface 106 may support communication using various transmission media that may be wired and/or wireless. Data labeling device 100 may have one or more communication interfaces that use the same or a different communication interface technology. For example, data labeling device 100 may support communication using an Ethernet port, a Bluetooth antenna, a telephone jack, a USB port, etc. Data and messages may be transferred between data labeling device 100 and distributed computing system 128 using communication interface 106.
  • Computer-readable medium 108 is an electronic holding place or storage for information so the information can be accessed by processor 110 as understood by those skilled in the art. Computer-readable medium 108 can include, but is not limited to, any type of random access memory (RAM), any type of read only memory (ROM), any type of flash memory, etc. such as magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, . . . ), optical disks (e.g., compact disc (CD), digital versatile disc (DVD), . . . ), smart cards, flash memory devices, etc. Data labeling device 100 may have one or more computer-readable media that use the same or a different memory media technology. For example, computer-readable medium 108 may include different types of computer-readable media that may be organized hierarchically to provide efficient access to the data stored therein as understood by a person of skill in the art. As an example, a cache may be implemented in a smaller, faster memory that stores copies of data from the most frequently/recently accessed main memory locations to reduce an access latency. Data labeling device 100 also may have one or more drives that support the loading of a memory media such as a CD, DVD, an external hard drive, etc. One or more external hard drives further may be connected to data labeling device 100 using communication interface 106.
  • Processor 110 executes instructions as understood by those skilled in the art. The instructions may be carried out by a special purpose computer, logic circuits, or hardware circuits. Processor 110 may be implemented in hardware and/or firmware. Processor 110 executes an instruction, meaning it performs/controls the operations called for by that instruction. The term “execution” is the process of running an application or the carrying out of the operation called for by an instruction. The instructions may be written using one or more programming language, scripting language, assembly language, etc. Processor 110 operably couples with input interface 102, with output interface 104, with communication interface 106, and with computer-readable medium 108 to receive, to send, and to process information. Processor 110 may retrieve a set of instructions from a permanent memory device and copy the instructions in an executable form to a temporary memory device that is generally some form of RAM. Data labeling device 100 may include a plurality of processors that use the same or a different processing technology.
  • Data labeling application 122 performs operations associated with defining labeled dataset 126 from data stored in partially labeled dataset 124. Some or all of the operations described herein may be embodied in data labeling application 122.
  • Referring to the example embodiment of FIG. 1, data labeling application 122 is implemented in software (comprised of computer-readable and/or computer-executable instructions) stored in computer-readable medium 108 and accessible by processor 110 for execution of the instructions that embody the operations of data labeling application 122. Data labeling application 122 may be written using one or more programming languages, assembly languages, scripting languages, etc. Data labeling application 122 may be integrated with other analytic tools. For example, data labeling application 122 may be part of SAS® Enterprise Miner™ developed and provided by SAS Institute Inc. of Cary, N.C. that may be used to create highly accurate predictive and descriptive models based on analysis of vast amounts of data from across an enterprise. Data labeling application 122 further may be part of SAS® Enterprise Guide, SAS® Visual Analytics, SAS® LASR™ Analytic Server, and/or SAS® Access Engine(s) also developed and provided by SAS Institute Inc. of Cary, N.C., USA.
  • Data labeling application 122 is applicable in a variety of industries. For example, data labeling application 122 may be used to recognize text, recognize text meaning, recognize a voice, recognize speech, recognize characteristics of images such as medical images, equipment diagnostic images, terrain images, etc., recognize types of web pages, predict whether or not an event has occurred, such as an equipment failure, etc. Data labeling application 122 may be integrated with other data processing tools to automatically process data generated as part of operation of an enterprise, facility, system, device, etc., to label the data, and to provide a warning or alert associated with the labeling using input interface 102, output interface 104, and/or communication interface 106 so that appropriate action can be initiated in response to the labeling. For example, medical images that include a tumor may be recognized by data labeling application 122 which triggers a notification message sent to a clinician that a tumor has been identified based on a “tumor” label determined for the image data.
  • Data labeling application 122 may be implemented as a Web application. For example, data labeling application 122 may be configured to receive hypertext transport protocol (HTTP) responses and to send HTTP requests. The HTTP responses may include web pages such as hypertext markup language (HTML) documents and linked objects generated in response to the HTTP requests. Each web page may be identified by a uniform resource locator (URL) that includes the location or address of the computing device that contains the resource to be accessed in addition to the location of the resource on that computing device. The type of file or resource depends on the Internet application protocol such as the file transfer protocol, HTTP, H.323, etc. The file accessed may be a simple text file, an image file, an audio file, a video file, an executable, a common gateway interface application, a Java applet, an extensible markup language (XML) file, or any other type of file supported by HTTP.
  • Partially labeled dataset 124 may include, for example, a plurality of rows and a plurality of columns. The plurality of rows may be referred to as observation vectors or records, and the columns may be referred to as variables. Partially labeled dataset 124 may be transposed. An observation vector xi may include a value for each of the plurality of variables associated with the observation i. Each variable of the plurality of variables describes a characteristic of a physical object, such as a living thing, a vehicle, terrain, a computing device, a physical environment, etc. For example, if partially labeled dataset 124 includes data related to operation of a vehicle, the variables may include an oil pressure, a speed, a gear indicator, a gas tank level, a tire pressure for each tire, an engine temperature, a radiator level, etc. Partially labeled dataset 124 may include data captured as a function of time for one or more physical objects.
  • Partially labeled dataset 124 includes supervised data and unsupervised data. The supervised data includes a yi-variable (target) value that indicates a truth related to the observation vector xi such as what the observation vector xi in the form of text means, what the observation vector xi in the form of image data does or does not represent (i.e., text, a medical condition, an equipment failure, an intrusion, a terrain feature, etc.), what the observation vector xi in the form of sensor signal data does or does not represent (i.e., voice, speech, an equipment failure, an intrusion, a terrain feature, etc.), etc. A sensor may measure a physical quantity in an environment to which the sensor is associated and generate a corresponding measurement datum that may be associated with a time that the measurement datum is generated. Illustrative sensors include a microphone, an infrared sensor, a radar, a pressure sensor, a temperature sensor, a position or location sensor, a voltage sensor, a current sensor, a frequency sensor, a humidity sensor, a dewpoint sensor, a camera, a computed tomography machine, a magnetic resonance imaging machine, an x-ray machine, an ultrasound machine, etc. that may be mounted to various components used as part of a system.
  • For example, partially labeled dataset 124 may include image data captured by medical imaging equipment (i.e., computed tomography image, magnetic resonance image, x-ray image, ultrasound image, etc.) of a body part of a living thing. A subset of the image data is labeled, for example, as either indicating existence of a medical condition or non-existence of the medical condition. Partially labeled dataset 124 may include a reference to image data that may be stored, for example, in an image file, and the existence/non-existence label associated with each image file. Partially labeled dataset 124 includes a plurality of such references. The existence/non-existence labels may be defined by a clinician or expert in the field to which data stored in partially labeled dataset 124 relates.
  • The data stored in partially labeled dataset 124 may be generated by and/or captured from a variety of sources including one or more sensors of the same or different type, one or more computing devices, etc. The data stored in partially labeled dataset 124 may be received directly or indirectly from the source and may or may not be pre-processed in some manner. As used herein, the data may include any type of content represented in any computer-readable format such as binary, alphanumeric, numeric, string, markup language, etc. The data may be organized using delimited fields, such as comma or space separated fields, fixed width fields, using a SAS® dataset, etc. The SAS dataset may be a SAS® file stored in a SAS® library that a SAS® software tool creates and processes. The SAS dataset contains data values that are organized as a table of observations (rows) and variables (columns) that can be processed by one or more SAS software tools.
  • Partially labeled dataset 124 may be stored on computer-readable medium 108 or on one or more computer-readable media of distributed computing system 128 and accessed by data labeling device 100 using communication interface 106, input interface 102, and/or output interface 104. Data stored in partially labeled dataset 124 may be sensor measurements or signal values captured by a sensor, may be generated or captured in response to occurrence of an event or a transaction, generated by a device such as in response to an interaction by a user with the device, etc. The data stored in partially labeled dataset 124 may be captured at different date/time points periodically, intermittently, when an event occurs, etc. Each record of partially labeled dataset 124 may include one or more date values and/or time values.
  • Partially labeled dataset 124 may include data captured at a high data rate such as 200 or more observations per second for one or more physical objects. For example, data stored in partially labeled dataset 124 may be generated as part of the Internet of Things (IoT), where things (e.g., machines, devices, phones, sensors) can be connected to networks and the data from these things collected and processed within the things and/or external to the things before being stored in partially labeled dataset 124. For example, the IoT can include sensors in many different devices and types of devices. Some of these devices may be referred to as edge devices and may involve edge computing circuitry. These devices may provide a variety of stored or generated data, such as network data or data specific to the network devices themselves. Some data may be processed with an event stream processing engine, which may reside in the cloud or in an edge device before being stored in partially labeled dataset 124.
  • Partially labeled dataset 124 may be stored using one or more of various structures as known to those skilled in the art including one or more files of a file system, a relational database, one or more tables of a system of tables, a structured query language database, etc. on data labeling device 100 or on distributed computing system 128. Data labeling device 100 may coordinate access to partially labeled dataset 124 that is distributed across distributed computing system 128 that may include one or more computing devices that can communicate using a network. For example, partially labeled dataset 124 may be stored in a cube distributed across a grid of computers as understood by a person of skill in the art. As another example, partially labeled dataset 124 may be stored in a multi-node Hadoop® cluster. For instance, Apache™ Hadoop® is an open-source software framework for distributed computing supported by the Apache Software Foundation. As another example, partially labeled dataset 124 may be stored in a cloud of computers and accessed using cloud computing technologies, as understood by a person of skill in the art. The SAS® LASR™ Analytic Server may be used as an analytic platform to enable multiple users to concurrently access data stored in partially labeled dataset 124. The SAS® Viya™ open, cloud-ready, in-memory architecture also may be used as an analytic platform to enable multiple users to concurrently access data stored in partially labeled dataset 124. Some systems may use SAS In-Memory Statistics for Hadoop® to read big data once and analyze it several times by persisting it in-memory for the entire session. Some systems may be of other types and configurations.
  • Labeled dataset 126 may be identical to partially labeled dataset 124 except that labeled dataset 126 includes only supervised data such that the yi-variable (target) value of each observation vector xi is labeled. For example, in the medical imaging example, the existence or non-existence label is associated with each image file.
  • Referring to FIGS. 2A and 2B, example operations associated with data labeling application 122 are described. For example, data labeling application 122 may be used to create labeled dataset 126 from partially labeled dataset 124. On each iteration, additional data points of partially labeled dataset 124 are identified for truth labeling. Data labeling application 122 has been shown to improve the accuracy of labels defined in labeled dataset 126 at much lower cost due to a reduced reliance on human labor.
  • Additional, fewer, or different operations may be performed depending on the embodiment of data labeling application 122. The order of presentation of the operations of FIGS. 2A and 2B is not intended to be limiting. Although some of the operational flows are presented in sequence, the various operations may be performed in various repetitions, concurrently (in parallel, for example, using threads and/or a distributed computing system), and/or in other orders than those that are illustrated. For example, a user may execute data labeling application 122, which causes presentation of a first user interface window, which may include a plurality of menus and selectors such as drop down menus, buttons, text boxes, hyperlinks, etc. associated with data labeling application 122 as understood by a person of skill in the art. The plurality of menus and selectors may be accessed in various orders. An indicator may indicate one or more user selections from a user interface, one or more data entries into a data field of the user interface, one or more data items read from computer-readable medium 108 or otherwise defined with one or more default values, etc. that are received as an input by data labeling application 122.
  • Referring to FIG. 2A, in an operation 200, a first indicator may be received that indicates partially labeled dataset 124. For example, the first indicator indicates a location and a name of partially labeled dataset 124. As an example, the first indicator may be received by data labeling application 122 after selection from a user interface window or after entry by a user into a user interface window. In an alternative embodiment, partially labeled dataset 124 may not be selectable. For example, a most recently created dataset may be used automatically. A subset of the observation vectors xi included in partially labeled dataset 124 are labeled.
• In an operation 202, a second indicator may be received that indicates a label set Q associated with partially labeled dataset 124. For example, the label set Q includes a list of permissible values that the yi-variable (target) value of each observation vector xi may have. For illustration, if partially labeled dataset 124 includes text images of numeric digits, the label set Q includes c=10 permissible values that may be indicated as Q={1, . . . , c}, where Q=1 may be associated with the digit “0”, Q=2 may be associated with the digit “1”, Q=3 may be associated with the digit “2”, . . . , Q=10 may be associated with the digit “9”. No yi-variable (target) value indicates that the associated observation vector xi is not labeled in partially labeled dataset 124. In an alternative embodiment, a yi-variable (target) value, for example, of zero may indicate that the associated observation vector xi is not labeled in partially labeled dataset 124 where the value of zero is not included in the label set Q. Thus, partially labeled dataset 124 defines a point set χ={x1, . . . , xl, xl+1, . . . , xn}, where n indicates a number of data points or observation vectors xi included in partially labeled dataset 124, where the observation vectors xi (i≦l) are labeled as yiεQ, and the remaining observation vectors xi (l<i≦n) are unlabeled (not labeled as yiεQ). Thus, l indicates a number of labeled data points or observation vectors xi included in partially labeled dataset 124. For illustration, l may be a small percentage, such as less than 1% of the observation vectors xi included in partially labeled dataset 124. Partially labeled dataset 124 includes an observation vector xi where i=1, . . . , n.
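• For illustration, the point set χ and its partial labeling might be represented as follows; a minimal NumPy sketch in which the array names and the use of 0 as the “unlabeled” marker are illustrative assumptions, not part of the patent:

```python
import numpy as np

# n observation vectors with 64 variables each; the first l carry a label
# from Q = {1, ..., c} and the value 0 marks an unlabeled observation.
n, c, l = 1500, 10, 10
rng = np.random.default_rng(seed=0)

X = rng.random((n, 64))                  # point set chi = {x_1, ..., x_n}
y = np.zeros(n, dtype=int)               # 0 means y_i is not labeled
y[:l] = rng.integers(1, c + 1, size=l)   # labeled subset: y_i in Q
```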
  • Data labeling application 122 determines a label from label set Q for each observation vector xi included in partially labeled dataset 124 that is not labeled. The resulting fully labeled (supervised) data is stored in labeled dataset 126.
• In an operation 204, a third indicator may be received that indicates a relative weighting value α, where α is selected between zero and one, non-inclusive. As described further below, each data point receives information from its neighboring data points while also retaining its initial label information. The relative weighting value α specifies a relative amount of the information from a data point's neighbors versus its initial label information. The relative weighting value α=0.5 indicates equal weight between the information from a data point's neighbors and its initial label information.
  • In an operation 206, a fourth indicator of a kernel function to apply may be received. For example, the fourth indicator indicates a name of a kernel function. The fourth indicator may be received by data labeling application 122 after selection from a user interface window or after entry by a user into a user interface window. A default value for the kernel function may further be stored, for example, in computer-readable medium 108. As an example, a kernel function may be selected from “Gaussian”, “Exponential”, “Linear”, “Polynomial”, “Sigmoid”, etc. For example, a default kernel function may be the Gaussian kernel function though any positive definite kernel function could be used. Of course, the kernel function may be labeled or selected in a variety of different manners by the user as understood by a person of skill in the art. In an alternative embodiment, the kernel function may not be selectable, and a single kernel function is implemented in data labeling application 122. For example, the Gaussian kernel function may be used by default or without allowing a selection. The Gaussian kernel function may be defined as:
• exp(−‖xi−xj‖² / (2s²))   (1)
  • where s is a kernel parameter that is termed a Gaussian bandwidth parameter.
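• For illustration, equation (1) might be implemented as shown below; a minimal sketch assuming NumPy, with the function name chosen here for illustration:

```python
import numpy as np

def gaussian_kernel(x_i, x_j, s):
    """Equation (1): exp(-||x_i - x_j||^2 / (2 * s**2))."""
    diff = np.asarray(x_i, dtype=float) - np.asarray(x_j, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * s ** 2))
```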
  • In an operation 208, a fifth indicator of a kernel parameter value to use with the kernel function may be received. For example, a value for s, the Gaussian bandwidth parameter, may be received for the Gaussian kernel function. In an alternative embodiment, the fifth indicator may not be received. For example, a default value for the kernel parameter value may be stored, for example, in computer-readable medium 108 and used automatically or the kernel parameter value may not be used. In another alternative embodiment, the value of the kernel parameter may not be selectable. Instead, a fixed, predefined value may be used.
  • In an operation 210, a sixth indicator of a labeling convergence test may be received. For example, the sixth indicator indicates a name of a labeling convergence test. The sixth indicator may be received by data labeling application 122 after selection from a user interface window or after entry by a user into a user interface window. A default value for the labeling convergence test may further be stored, for example, in computer-readable medium 108. As an example, a labeling convergence test may be selected from “Num Iterations”, “Within Tolerance”, etc. For example, a default convergence test may be “Num Iterations”. Of course, the labeling convergence test may be labeled or selected in a variety of different manners by the user as understood by a person of skill in the art. In an alternative embodiment, the labeling convergence test may not be selectable, and a single labeling convergence test is implemented by data labeling application 122. For example, the labeling convergence test “Num Iterations” may be used by default or without allowing a selection.
  • In an operation 212, a seventh indicator of a labeling convergence test value may be received. In an alternative embodiment, the seventh indicator may not be received. For example, a default value may be stored, for example, in computer-readable medium 108 and used automatically when the seventh indicator is not received. In an alternative embodiment, the labeling convergence test value may not be selectable. Instead, a fixed, predefined value may be used. As an example, when the labeling convergence test “Num Iterations” is indicated from operation 210, the labeling convergence test value is a number of iterations ML. Merely for illustration, the number of iterations ML may be set between 10 and 1000 though the user may determine that other values are more suitable for their application as understood by a person of skill in the art, for example, based on the labeling accuracy desired, computing resources available, size of partially labeled dataset 124, etc. As another example, when the labeling convergence test “Within Tolerance” is indicated from operation 210, the labeling convergence test value may be a tolerance value τ.
• In an operation 214, an eighth indicator of a distance function may be received. For example, the eighth indicator indicates a name of a distance function. The eighth indicator may be received by data labeling application 122 after selection from a user interface window or after entry by a user into a user interface window. A default value for the distance function may further be stored, for example, in computer-readable medium 108. As an example, a distance function may be selected from “Kullback-Leibler”, “Euclidean”, “Manhattan”, “Minkowski”, “Cosine”, “Chebyshev”, “Hamming”, “Mahalanobis”, etc. As an example, a default distance function may be “Kullback-Leibler”. Of course, the distance function may be labeled or selected in a variety of different manners by the user as understood by a person of skill in the art. In an alternative embodiment, the distance function may not be selectable, and a single distance function is implemented by data labeling application 122.
  • In an operation 216, a ninth indicator of a number of supplemental labeled points NSL may be received. In an alternative embodiment, the ninth indicator may not be received. For example, a default value may be stored, for example, in computer-readable medium 108 and used automatically. In another alternative embodiment, the value of the number of supplemental labeled points NSL may not be selectable. Instead, a fixed, predefined value may be used. The number of supplemental labeled points NSL defines a number of additional data points of partially labeled dataset 124 that are identified for truth labeling on each iteration as described further below. Merely for illustration, the number of supplemental labeled points NSL may be between 2 and 10 though the user may determine that other values are more suitable for their application.
  • In an operation 217, a tenth indicator of a number of times MSL to perform supplemental labeling may be received. In an alternative embodiment, the tenth indicator may not be received. For example, a default value may be stored, for example, in computer-readable medium 108 and used automatically when the tenth indicator is not received. In an alternative embodiment, the number of times may not be selectable. Instead, a fixed, predefined value may be used. Merely for illustration, the number of times MSL may be set between 3 and 1000 though the user may determine that other values are more suitable for their application as understood by a person of skill in the art, for example, based on computing resources available, size of partially labeled dataset 124, etc.
  • In an operation 218, an affinity matrix W is computed based on the kernel function indicated by operation 206 and the kernel parameter value indicated by operation 208. For example, using the Gaussian kernel function, the affinity matrix W is defined as
• Wij = exp(−‖xi−xj‖² / (2s²))
if i≠j and Wii=0, where s is the kernel parameter value and the affinity matrix W is an n×n matrix such that i=1, . . . , n and j=1, . . . , n.
• In an operation 220, a diagonal matrix D is computed based on the affinity matrix W. For example, using the Gaussian kernel function, the diagonal matrix D is an n×n matrix and is defined as Dii = Σ_{j=1}^{n} Wij and Dij = 0 if i≠j.
• In an operation 222, a normalized distance matrix S is computed based on the affinity matrix W and the diagonal matrix D. For example, the normalized distance matrix S is an n×n matrix and is defined as S = D^(−1/2) W D^(−1/2).
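• For illustration, operations 218-222 might be implemented together as follows; a minimal sketch assuming NumPy and SciPy, where the function name and the assumption that every row of W has a nonzero sum are illustrative choices:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def normalized_matrix(X, s):
    # Operation 218: affinity matrix W with Wij = exp(-||xi - xj||^2/(2 s^2))
    # for i != j; squareform leaves the diagonal zero, so Wii = 0.
    W = squareform(np.exp(-pdist(X, "sqeuclidean") / (2.0 * s ** 2)))
    # Operation 220: diagonal matrix D with Dii = sum over j of Wij.
    d = W.sum(axis=1)
    # Operation 222: S = D^(-1/2) W D^(-1/2), applied via broadcasting.
    inv_sqrt = 1.0 / np.sqrt(d)
    return W * np.outer(inv_sqrt, inv_sqrt)
```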
  • In an operation 224, a label matrix Y is defined based on partially labeled dataset 124. Label matrix Y is an n×c matrix with Yik=1 if xi is labeled as yi=k. Otherwise, Yik=0, where k=1, . . . , c.
  • In an operation 226, a classification matrix F and one or more labeling convergence parameter values are initialized. Classification matrix F is an n×c matrix. For example, classification matrix F is initialized as F(0)=Y. One or more labeling convergence parameter values may be initialized based on the labeling convergence test indicated from operation 210. As an example, when the labeling convergence test “Num Iterations” is indicated from operation 210, a first labeling convergence parameter value t may be initialized to zero and associated with the number of iterations ML so that first labeling convergence parameter value t can be compared to the number of iterations ML to determine convergence by the labeling convergence test. Classification matrix F defines a label probability distribution matrix for each observation vector xi. As another example, when the labeling convergence test “Within Tolerance” is indicated from operation 210, a first labeling convergence parameter value ΔF may be initialized to a large number and associated with the tolerance value τ.
  • In an operation 228, an updated classification matrix F(t+1) is computed using F(t+1)=αSF(t)+(1−α)Y, where for a first iteration of operation 228, F(t)=F(0). The updated classification matrix F defines a label probability for each permissible value defined in label set Q for each observation vector xi.
• In an operation 230, the one or more labeling convergence parameter values are updated. As an example, when the labeling convergence test “Num Iterations” is indicated from operation 210, t=t+1. As another example, when the labeling convergence test “Within Tolerance” is indicated from operation 210, ΔF=F(t+1)−F(t).
  • In an operation 232, a determination is made concerning whether or not labeling has converged by evaluating the labeling convergence test. When labeling has converged, processing continues in an operation 234. When labeling has not converged, processing continues in operation 228 to compute a next update of classification matrix F(t+1). As an example, when the labeling convergence test “Num Iterations” is indicated from operation 210, the first labeling convergence parameter value t is compared to the labeling convergence test value that is the number of iterations ML. When t≧ML, labeling has converged. As another example, when the labeling convergence test “Within Tolerance” is indicated from operation 210, the first labeling convergence parameter value ΔF is compared to the labeling convergence test value that is the tolerance value τ. When ΔF≦τ, labeling has converged.
• Referring to FIG. 2B, in an operation 234, the yi-variable (target) value of each observation vector xi is labeled using F(t). yi is selected for each observation vector xi based on yi = argmax_{j≤c} Fij(t).
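• For illustration, operations 224 through 234 might be sketched together as follows, covering both convergence tests; the function name and the convention that y uses 0 for unlabeled observations continue the assumptions of the earlier sketches:

```python
import numpy as np

def propagate_labels(S, y, c, alpha, max_iters, tol=None):
    n = y.shape[0]
    # Operation 224: label matrix Y with Yik = 1 if xi is labeled yi = k.
    Y = np.zeros((n, c))
    labeled = y > 0
    Y[labeled, y[labeled] - 1] = 1.0
    F = Y.copy()                                       # operation 226: F(0) = Y
    for _ in range(max_iters):                         # "Num Iterations" test
        F_next = alpha * (S @ F) + (1.0 - alpha) * Y   # operation 228
        if tol is not None and np.abs(F_next - F).max() <= tol:
            F = F_next                                 # "Within Tolerance" test
            break
        F = F_next
    # Operation 234: yi = argmax over j of Fij(t), mapped back to labels 1..c.
    return F, F.argmax(axis=1) + 1
```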
  • In an operation 236, a determination is made concerning whether or not supplemental labeling is done. When supplemental labeling is done, processing continues in an operation 238. When supplemental labeling is not done, processing continues in an operation 240. For example, supplemental labeling is done when a number of times operations 240-250 have been performed is greater than or equal to MSL.
  • In operation 238, the yi-variable (target) value of each observation vector xi selected in operation 234 is output. For example, each observation vector xi with its selected yi-variable (target) value is stored in labeled dataset 126. Labeled dataset 126 may be stored on data labeling device 100 and/or on one or more computing devices of distributed computing system 128 in a variety of formats as understood by a person of skill in the art. All or a subset of labeled dataset 126 further may be output to display 116, to printer 120, etc. For example, medical images labeled as including a tumor may be recognized by data labeling application 122 and presented on display 116 or indicators of the medical images may be printed on printer 120. As another option, a notification message may be sent to a clinician indicating that a tumor has been identified based on a “tumor” label determined for the image data. In an illustrative embodiment, an alert message may be sent to another device using communication interface 106, printed on printer 120 or another printer, presented visually on display 116 or another display, presented audibly using speaker 118 or another speaker, etc. based on how urgent a response is needed to a certain label. For example, if a sound signal or image data indicate an intrusion into a surveilled area, a notification message may be sent to a responder.
  • In an operation 240, a distance matrix Dis is computed between each pair of label distributions defined by F(t). As an example, the distance function indicated from operation 214 is used to compute distance matrix Dis between each pair of label probability distributions defined by F(t). Distance matrix Dis is a symmetric n×n matrix. For illustration, when the distance function indicated from operation 214 is “Kullback-Leibler”,
• Disik = Σ_{j=1}^{c} Fkj(t) log( Fkj(t) / Fij(t) ).
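• For illustration, this pairwise Kullback-Leibler computation might be vectorized as follows. The Kullback-Leibler divergence itself is not symmetric, so where a symmetric Dis is required, averaging the matrix with its transpose is one option; the row normalization and the eps guard below are numerical-safety assumptions, not part of the patent:

```python
import numpy as np

def kl_distance_matrix(F, eps=1e-12):
    # Normalize each row of F to a probability distribution over the c labels.
    P = F / np.maximum(F.sum(axis=1, keepdims=True), eps)
    logP = np.log(np.maximum(P, eps))
    # Dis[k, i] = sum over j of Pkj * log(Pkj / Pij): the Kullback-Leibler
    # divergence between the label distributions of observations k and i.
    H = np.sum(P * logP, axis=1)       # sum over j of Pkj log Pkj, per row k
    return H[:, None] - P @ logP.T     # subtract sum over j of Pkj log Pij
```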
  • In an operation 242, the number of supplemental labeled points NSL are selected from distance matrix Dis by identifying the NSL data points having the smallest distances in distance matrix Dis. The index i to the observation vector xi associated with each data point may be identified as part of the selection.
  • In an operation 244, a truth label is requested for each of the selected NSL data points by presenting the observation vector xi associated with each data point. For example, if the observation vector xi includes an image, the image is presented on display 116 with a request that a user determine the truth label, the true yi-variable (target) value, for that observation vector xi. The truth label may represent different values dependent on what the image represents or indicates. As another example, if the observation vector xi includes a sound signal, the sound signal is played on speaker 118 with a request that a user determine the truth label, the true yi-variable (target) value, for that observation vector xi. The truth label may represent different values dependent on what the sound signal represents or indicates.
  • In an operation 246, a truth response label, the true yi-variable (target) value for each observation vector xi of the selected NSL data points, is received. The truth response label includes one of the permissible values included in label set Q.
  • In an operation 248, the truth response label, the true yi-variable (target) value for each observation vector xi of the selected NSL data points, is updated in partially labeled dataset 124. As a result, l has been increased by NSL. Partially labeled dataset 124 may be sorted so that the newly labeled data points are included in point set χ={x1, . . . , xl, xl+1, . . . , xn}, where the observation vectors xi (i≦l) are labeled as yiεQ, and the remaining observation vectors xi (l<i≦n) are unlabeled (not labeled as yiεQ).
• In operation 250, label matrix Y is updated based on partially labeled dataset 124 updated in operation 248, and processing continues in operation 226 to reinitialize classification matrix F and update labels in partially labeled dataset 124. Operations 240-250 are performed at least once, and operations 226-234 are performed at least twice before the yi-variable (target) value of each observation vector xi selected in operation 234 is output in operation 238.
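• Tying the preceding sketches together, the outer loop of operations 236-250 might look as follows; ask_user is a hypothetical callback standing in for the truth-label request of operation 244, and both the restriction of candidates to unlabeled points and the ranking by summed distances are assumptions based on the summed-distance description in the results below:

```python
import numpy as np

def supplemental_labeling(X, y, c, s, alpha, n_sl, m_sl, max_iters, ask_user):
    """Outer loop of FIGS. 2A and 2B, reusing the sketches above."""
    S = normalized_matrix(X, s)                           # operations 218-222
    for _ in range(m_sl):                                 # operation 236
        F, _ = propagate_labels(S, y, c, alpha, max_iters)   # (a)-(c)
        Dis = kl_distance_matrix(F)                       # operation 240
        unlabeled = np.flatnonzero(y == 0)
        scores = Dis[unlabeled].sum(axis=1)               # operation 242
        chosen = unlabeled[np.argsort(scores)[:n_sl]]
        for i in chosen:                                  # operations 244-248
            y[i] = ask_user(i)                            # truth label from user
    # Repeat (a)-(c) once more; the result feeds operation 238.
    return propagate_labels(S, y, c, alpha, max_iters)
```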
• Data labeling application 122 optimizes the selection of data points for truth labeling to improve classification/prediction performance. By selecting the labeled data based on a distance measure, data labeling application 122 selects the most informative data points since they have the smallest distance to the rest of the data in a probability space. Geometrically, these data points are frequently located in the center of clusters in the probability space. By adding them to the labeled portion of partially labeled dataset 124, they can significantly facilitate the learning process in comparison to random selection.
• Data labeling application 122 was used with a dataset of handwritten digits as partially labeled dataset 124. Partially labeled dataset 124 included 1500 samples (observation vectors xi) (n=1500), where each sample had 64 dimensions because each handwritten digit included a gray level 8 by 8 pixel image. There were 10 labels (c=10), namely, the handwritten digits from “0” to “9”. Partially labeled dataset 124 included 10 labeled samples (l=10). The Gaussian kernel function was used for affinity matrix W with s=0.25. Intuitively, s defines how far the influence of a single training example reaches: because s appears in the denominator of the kernel's exponent, low values mean the influence reaches ‘close’ and high values mean ‘far’. The relative weighting value α was set to 0.2, where the larger the weight is, the faster labels propagate. NSL was set to five and the Kullback-Leibler divergence was used for the distance function. ML=5 was used.
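• A hypothetical reconstruction of this setup, reusing the sketches above and substituting scikit-learn's bundled digits data for the original dataset; the [0, 1] feature scaling, the choice of M_SL = 5 rounds, and the ground-truth oracle simulating the human labeler are assumptions for illustration:

```python
import numpy as np
from sklearn.datasets import load_digits

digits = load_digits()
X = digits.data[:1500] / 16.0            # scale gray levels to [0, 1]
truth = digits.target[:1500] + 1         # labels 1..10 so that 0 = unlabeled
y = np.zeros(1500, dtype=int)
seed_idx = np.random.default_rng(0).choice(1500, size=10, replace=False)
y[seed_idx] = truth[seed_idx]            # l = 10 initial labels

# Parameters from this paragraph: s = 0.25, alpha = 0.2, N_SL = 5, M_L = 5.
F, y_hat = supplemental_labeling(X, y, c=10, s=0.25, alpha=0.2, n_sl=5,
                                 m_sl=5, max_iters=5,
                                 ask_user=lambda i: truth[i])  # simulated user
```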
  • The effectiveness of data labeling application 122 can be measured using both quantitative results and qualitative results. For quantitative results, a precision, a recall, and an F1-score were computed for each of the 10 labels. Precision can be defined as
• precision = tp / (tp + fp)
  • and recall can be defined as
• recall = tp / (tp + fn),
  • where tp is the number of true positives, fp is the number of false positives, and fn is the number of false negatives. F1 can be defined as
• F1 = (2 × precision × recall) / (precision + recall).
• For example, for a text search on a set of documents, precision is the number of correct results divided by the number of all returned results. Recall is the number of correct results divided by the number of results that should have been returned. The F1-score combines precision and recall and is the harmonic mean of the two.
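• For illustration, these three definitions translate directly into a small helper; the zero-count guards are an added assumption to avoid division by zero:

```python
def label_metrics(tp, fp, fn):
    # Per-label precision, recall, and F1 from the counts defined above.
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) > 0 else 0.0)
    return precision, recall, f1
```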
• For MSL=9, data labeling application 122 achieved 94% precision and 93% recall with 50 total labeled samples (five samples having the minimum summed distance computed from distance matrix Dis were added to partially labeled dataset 124 at each iteration) and 1450 unlabeled samples.
• For qualitative results, the five samples having the minimum summed distance computed from distance matrix Dis are shown in FIGS. 3A-3E for a first iteration of operations 240-250, for a second iteration of operations 240-250, for a third iteration of operations 240-250, for a fourth iteration of operations 240-250, and for a fifth iteration of operations 240-250, respectively. “Predict” above each image indicates the label determined in operation 234 for the sample, and “truth” above each image indicates the label received in operation 246 for the sample. Note that the number of correct predictions increases with each iteration.
  • The performance gains resulting from use of data labeling application 122 can be measured by comparing the precision, recall, and F1-score generated by operations 228-234 versus operations 226-250 using the same number of labeled samples. For example, operations 228-234 were performed with 15 labeled samples and the labeled points output to labeled dataset 126 after operation 234 in operation 238 without performing operations 240-250. In comparison, operations 228-234 were performed with 10 initially labeled samples and operations 226-234 were performed with five supplemental samples selected in operation 242 for one or more additional iterations. Table I below shows the precision results:
• TABLE I
  Number of labeled samples                                            Operations 228-234   Operations 226-250
  15 labels (10 initial, MSL = 1, 1 iteration of operations 240-250)         0.47                 0.73
  20 labels (10 initial, MSL = 2, 2 iterations of operations 240-250)        0.61                 0.90
  25 labels (10 initial, MSL = 3, 3 iterations of operations 240-250)        0.76                 0.92
  30 labels (10 initial, MSL = 4, 4 iterations of operations 240-250)        0.76                 0.93
  • Table II below shows the recall results:
• TABLE II
  Number of labeled samples                                            Operations 228-234   Operations 226-250
  15 labels (10 initial, MSL = 1, 1 iteration of operations 240-250)         0.59                 0.79
  20 labels (10 initial, MSL = 2, 2 iterations of operations 240-250)        0.73                 0.88
  25 labels (10 initial, MSL = 3, 3 iterations of operations 240-250)        0.81                 0.89
  30 labels (10 initial, MSL = 4, 4 iterations of operations 240-250)        0.83                 0.91
  • Table III below shows the F1-score results:
• TABLE III
  Number of labeled samples                                            Operations 228-234   Operations 226-250
  15 labels (10 initial, MSL = 1, 1 iteration of operations 240-250)         0.49                 0.76
  20 labels (10 initial, MSL = 2, 2 iterations of operations 240-250)        0.66                 0.89
  25 labels (10 initial, MSL = 3, 3 iterations of operations 240-250)        0.77                 0.90
  30 labels (10 initial, MSL = 4, 4 iterations of operations 240-250)        0.79                 0.91
  • The precision, recall, and F1-score values demonstrate that data labeling application 122 achieves better classification results in terms of the ability to correctly label an item with fewer incorrect labels over prior algorithms that label unlabeled data using a fixed number of randomly selected observation vectors xi. For example, the improvement may be attributable to the selection of supplemental labels that have minimum average distances and, as a result, are more informative.
• Data labeling application 122 can be implemented as part of a machine learning application. Data labeling application 122 lowers the cost associated with training the object labeling process because fewer samples need to be labeled due to the identification of the samples that are most informative.
• Data labeling application 122 can be used for image recognition on the Internet. For example, the target is to identify whether or not an image is an image of a cat based on a limited time and resource budget. The labeling task is usually accomplished by volunteers. Using data labeling application 122, the best set for the training data (images with a cat or images without a cat) is identified.
• Data labeling application 122 can be used for image recognition in sports analysis to recognize human actions such as diving, walking, running, swinging, kicking, lifting, etc. Image recognition in this area is a challenging task due to significant intra-class variations, occlusion, and background clutter in big data. Most of the existing work uses action models based on statistical learning algorithms for classification. To obtain ideal recognition results, a massive number of labeled samples is required to train the complicated human action models. However, collecting labeled samples is very costly. Data labeling application 122 addresses this challenge by selecting the most informative labeled human action samples using a smaller budget while providing better classification results.
  • The word “illustrative” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “illustrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Further, for the purposes of this disclosure and unless otherwise specified, “a” or “an” means “one or more”. Still further, using “and” or “or” in the detailed description is intended to include “and/or” unless specifically indicated otherwise.
  • The foregoing description of illustrative embodiments of the disclosed subject matter has been presented for purposes of illustration and of description. It is not intended to be exhaustive or to limit the disclosed subject matter to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosed subject matter. The embodiments were chosen and described in order to explain the principles of the disclosed subject matter and as practical applications of the disclosed subject matter to enable one skilled in the art to utilize the disclosed subject matter in various embodiments and with various modifications as suited to the particular use contemplated.

Claims (30)

1. A non-transitory computer-readable medium having stored thereon computer-readable instructions that when executed by a computing device cause the computing device to:
read a dataset that includes a plurality of observation vectors;
read a label set, wherein the label set defines permissible values for a target variable, wherein a value of the permissible values of the target variable is defined for a subset of the plurality of observation vectors;
compute an affinity matrix using a kernel function and the plurality of observation vectors;
compute a diagonal matrix by summing each row of the computed affinity matrix, wherein the sum of each row is stored in a diagonal of the row with zeroes in remaining positions of the row;
compute a normalized distance matrix using the computed affinity matrix and the computed diagonal matrix;
define a label matrix using the value of the target variable of each observation vector of the plurality of observation vectors;
(a) initialize a classification matrix as the defined label matrix;
(b) compute a converged classification matrix, wherein the converged classification matrix defines a label probability for each permissible value defined in the label set for each observation vector of the plurality of observation vectors, wherein the classification matrix is converged using F(t+1)=αSF(t)+(1−α)Y, where F(t+1) is a next classification matrix, α is a relative weighting value, S is the normalized distance matrix defined as S=D^(−1/2)WD^(−1/2), where W is the computed affinity matrix and D is the computed diagonal matrix, F(t) is the classification matrix, Y is the label matrix defined as Yik=1 if xi is labeled as yi=k; otherwise, Yik=0, where xi is an observation vector of the plurality of observation vectors, i=1, . . . , n, n is a number of vectors of the plurality of observation vectors, k=1, . . . , c, and c is a number of permissible values of the label set, and t is an iteration number, wherein the classification matrix is converged when a second predefined number of iterations of computations of F(t+1)=αSF(t)+(1−α)Y is complete;
(c) for each observation vector, update the value of the target variable based on a maximum label probability value identified from the converged classification matrix;
a predefined number of times,
compute a distance vector that defines a distance value between each pair of the plurality of observation vectors using a distance function applied to only the converged classification matrix;
select a number of observation vectors from the dataset that have minimum values for the distance value;
request that a user provide a label for each of the selected observation vectors;
receive a response to the request from the user for each of the selected observation vectors;
update the value of the target variable for each of the selected observation vectors with the received response; and
repeat operations (a) to (c); and
after the predefined number of times, output the value of the target variable for each observation vector of the plurality of observation vectors to a second dataset.
2. The non-transitory computer-readable medium of claim 1, wherein
each observation vector defines an image, and the value of the target variable defines an image label determined using the converged classification matrix or the received response.
3. The non-transitory computer-readable medium of claim 1, wherein the subset of the plurality of observation vectors is less than one percent of the plurality of observation vectors.
4. The non-transitory computer-readable medium of claim 1, wherein the distance function is based on a Kullback-Leibler divergence computation.
5. (canceled)
6. The non-transitory computer-readable medium of claim 4, wherein the distance vector is computed using
Disi = Σ_{j=1}^{n} Σ_{k=1}^{c} Fkj(t) log( Fkj(t) / Fki(t) ),
7. (canceled)
8. (canceled)
9. The non-transitory computer-readable medium of claim 1, wherein the kernel function is a Gaussian kernel function.
10. The non-transitory computer-readable medium of claim 9, wherein the affinity matrix is defined as
Wij = exp(−‖xi−xj‖² / (2s²))
if i≠j and Wii=0, where s is a Gaussian bandwidth parameter and j=1, . . . , n.
11. The non-transitory computer-readable medium of claim 1, wherein the diagonal matrix is defined as Dii = Σ_{j=1}^{n} Wij and Dij = 0 if i≠j.
12. (canceled)
13. (canceled)
14. A computing device comprising:
a processor; and
a non-transitory computer-readable medium operably coupled to the processor, the computer-readable medium having computer-readable instructions stored thereon that, when executed by the processor, cause the computing device to
read a dataset that includes a plurality of observation vectors;
read a label set, wherein the label set defines permissible values for a target variable, wherein a value of the permissible values of the target variable is defined for a subset of the plurality of observation vectors;
compute an affinity matrix using a kernel function and the plurality of observation vectors;
compute a diagonal matrix by summing each row of the computed affinity matrix, wherein the sum of each row is stored in a diagonal of the row with zeroes in remaining positions of the row;
compute a normalized distance matrix using the computed affinity matrix and the computed diagonal matrix;
define a label matrix using the value of the target variable of each observation vector of the plurality of observation vectors;
(a) initialize a classification matrix as the defined label matrix;
(b) compute a converged classification matrix, wherein the converged classification matrix defines a label probability for each permissible value defined in the label set for each observation vector of the plurality of observation vectors, wherein the classification matrix is converged using F(t+1)=αSF(t)+(1−α)Y, where F(t+1) is a next classification matrix, α is a relative weighting value, S is the normalized distance matrix defined as S=D^(−1/2)WD^(−1/2), where W is the computed affinity matrix and D is the computed diagonal matrix, F(t) is the classification matrix, Y is the label matrix defined as Yik=1 if xi is labeled as yi=k; otherwise, Yik=0, where xi is an observation vector of the plurality of observation vectors, i=1, . . . , n, n is a number of vectors of the plurality of observation vectors, k=1, . . . , c, and c is a number of permissible values of the label set, and t is an iteration number, wherein the classification matrix is converged when a second predefined number of iterations of computations of F(t+1)=αSF(t)+(1−α)Y is complete;
(c) for each observation vector, update the value of the target variable based on a maximum label probability value identified from the converged classification matrix;
a predefined number of times,
compute a distance vector that defines a distance value between each pair of the plurality of observation vectors using a distance function applied to only the converged classification matrix;
select a number of observation vectors from the dataset that have minimum values for the distance value;
request that a user provide a label for each of the selected observation vectors;
receive a response to the request from the user for each of the selected observation vectors;
update the value of the target variable for each of the selected observation vectors with the received response; and
repeat operations (a) to (c); and
after the predefined number of times, output the value of the target variable for each observation vector of the plurality of observation vectors to a second dataset.
15. (canceled)
16. The computing device of claim 14, wherein the distance vector is computed using
Disi = Σ_{j=1}^{n} Σ_{k=1}^{c} Fkj(t) log( Fkj(t) / Fki(t) ),
17. (canceled)
18. (canceled)
19. The computing device of claim 15, wherein the diagonal matrix is defined as Dii = Σ_{j=1}^{n} Wij and Dij = 0 if i≠j.
20. (canceled)
21. (canceled)
22. A method of predicting occurrence of an event or classifying an object using semi-supervised data to label unlabeled data in a dataset, the method comprising:
reading, by a computing device, a dataset that includes a plurality of observation vectors;
reading, by the computing device, a label set, wherein the label set defines permissible values for a target variable, wherein a value of the permissible values of the target variable is defined for a subset of the plurality of observation vectors;
computing, by the computing device, an affinity matrix using a kernel function and the plurality of observation vectors;
computing, by the computing device, a diagonal matrix by summing each row of the computed affinity matrix, wherein the sum of each row is stored in a diagonal of the row with zeroes in remaining positions of the row;
computing, by the computing device, a normalized distance matrix using the computed affinity matrix and the computed diagonal matrix;
defining, by the computing device, a label matrix using the value of the target variable of each observation vector of the plurality of observation vectors;
(a) initializing, by the computing device, a classification matrix as the defined label matrix;
(b) computing, by the computing device, a converged classification matrix, wherein the converged classification matrix defines a label probability for each permissible value defined in the label set for each observation vector of the plurality of observation vectors, wherein the classification matrix is converged using F(t+1)=αSF(t)+(1−α)Y, where F(t+1) is a next classification matrix, α is a relative weighting value, S is the normalized distance matrix defined as S=D^(−1/2)WD^(−1/2), where W is the computed affinity matrix and D is the computed diagonal matrix, F(t) is the classification matrix, Y is the label matrix defined as Yik=1 if xi is labeled as yi=k; otherwise, Yik=0, where xi is an observation vector of the plurality of observation vectors, i=1, . . . , n, n is a number of vectors of the plurality of observation vectors, k=1, . . . , c, and c is a number of permissible values of the label set, and t is an iteration number, wherein the classification matrix is converged when a second predefined number of iterations of computations of F(t+1)=αSF(t)+(1−α)Y is complete;
(c) for each observation vector, updating, by the computing device, the value of the target variable based on a maximum label probability value identified from the converged classification matrix;
a predefined number of times,
computing, by the computing device, a distance vector that defines a distance value between each pair of the plurality of observation vectors using a distance function applied to only the converged classification matrix;
selecting, by the computing device, a number of observation vectors from the dataset that have minimum values for the distance value;
requesting, by the computing device, that a user provide a label for each of the selected observation vectors;
receiving, by the computing device, a response to the request from the user for each of the selected observation vectors;
updating, by the computing device, the value of the target variable for each of the selected observation vectors with the received response; and
repeating, by the computing device, operations (a) to (c); and
after the predefined number of times, outputting, by the computing device, the value of the target variable for each observation vector of the plurality of observation vectors to a second dataset.
23. (canceled)
24. The method of claim 22, wherein the distance vector is computed using
Disi = Σ_{j=1}^{n} Σ_{k=1}^{c} Fkj(t) log( Fkj(t) / Fki(t) ),
25. (canceled)
26. (canceled)
27. The method of claim 22, wherein the affinity matrix is defined as
Wij = exp(−‖xi−xj‖² / (2s²))
if i≠j and Wii=0, where s is a Gaussian bandwidth parameter.
28. The method of claim 22, wherein the diagonal matrix is defined as Dii = Σ_{j=1}^{n} Wij and Dij = 0 if i≠j.
29. (canceled)
30. (canceled)
US15/335,530 2016-04-21 2016-10-27 Event prediction and object recognition system Active US9792562B1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US15/335,530 US9792562B1 (en) 2016-04-21 2016-10-27 Event prediction and object recognition system
US15/686,863 US10127477B2 (en) 2016-04-21 2017-08-25 Distributed event prediction and machine learning object recognition system
US16/108,293 US10275690B2 (en) 2016-04-21 2018-08-22 Machine learning predictive labeling system
US16/162,794 US10354204B2 (en) 2016-04-21 2018-10-17 Machine learning predictive labeling system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662325668P 2016-04-21 2016-04-21
US15/335,530 US9792562B1 (en) 2016-04-21 2016-10-27 Event prediction and object recognition system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/686,863 Continuation-In-Part US10127477B2 (en) 2016-04-21 2017-08-25 Distributed event prediction and machine learning object recognition system

Publications (2)

Publication Number Publication Date
US9792562B1 US9792562B1 (en) 2017-10-17
US20170308810A1 true US20170308810A1 (en) 2017-10-26

Family

ID=60021766

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/335,530 Active US9792562B1 (en) 2016-04-21 2016-10-27 Event prediction and object recognition system

Country Status (1)

Country Link
US (1) US9792562B1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190325344A1 (en) * 2018-04-20 2019-10-24 Sas Institute Inc. Machine learning predictive labeling system
JP2023525236A * 2020-04-30 2023-06-15 Huawei Technologies Co., Ltd. Data labeling system and method, and data labeling manager

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10650045B2 (en) 2016-02-05 2020-05-12 Sas Institute Inc. Staged training of neural networks for improved time series prediction performance
US10650046B2 (en) 2016-02-05 2020-05-12 Sas Institute Inc. Many task computing with distributed file system
US10642896B2 (en) 2016-02-05 2020-05-05 Sas Institute Inc. Handling of data sets during execution of task routines of multiple languages
US10346476B2 (en) 2016-02-05 2019-07-09 Sas Institute Inc. Sketch entry and interpretation of graphical user interface design
US10795935B2 (en) 2016-02-05 2020-10-06 Sas Institute Inc. Automated generation of job flow definitions
US10560487B2 (en) * 2017-07-26 2020-02-11 International Business Machines Corporation Intrusion detection and mitigation in data processing
US20190244138A1 (en) * 2018-02-08 2019-08-08 Apple Inc. Privatized machine learning using generative adversarial networks
US10430690B1 (en) 2018-04-20 2019-10-01 Sas Institute Inc. Machine learning predictive labeling system
US11294949B2 (en) 2018-09-04 2022-04-05 Toyota Connected North America, Inc. Systems and methods for querying a distributed inventory of visual data
US11100428B2 (en) 2018-09-30 2021-08-24 Sas Institute Inc. Distributable event prediction and machine learning recognition system
US10635947B2 (en) 2018-09-30 2020-04-28 Sas Institute Inc. Distributable classification system
US10510022B1 (en) * 2018-12-03 2019-12-17 Sas Institute Inc. Machine learning model feature contribution analytic system
US10832174B1 (en) 2019-10-14 2020-11-10 Sas Institute Inc. Distributed hyperparameter tuning system for active machine learning
US10929762B1 (en) 2019-10-14 2021-02-23 Sas Institute Inc. Distributable event prediction and machine learning recognition system
US10956825B1 (en) 2020-03-16 2021-03-23 Sas Institute Inc. Distributable event prediction and machine learning recognition system
US11720748B2 (en) 2020-04-27 2023-08-08 Robert Bosch Gmbh Automatically labeling data using conceptual descriptions
CN112540749B * 2020-11-16 2023-10-24 China Southern Power Grid Digital Platform Technology (Guangdong) Co., Ltd. Micro-service dividing method, apparatus, computer device and readable storage medium
CN113516162A * 2021-04-26 2021-10-19 Hunan University OCSVM and K-means algorithm based industrial control system traffic anomaly detection method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhu et al., "Combining Active Learning and Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions," 2003. *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190325344A1 (en) * 2018-04-20 2019-10-24 Sas Institute Inc. Machine learning predictive labeling system
US10521734B2 (en) * 2018-04-20 2019-12-31 Sas Institute Inc. Machine learning predictive labeling system
JP2023525236A * 2020-04-30 2023-06-15 Huawei Technologies Co., Ltd. Data labeling system and method, and data labeling manager
JP7529800B2 Huawei Technologies Co., Ltd. Data labeling system and method, and data labeling manager

Also Published As

Publication number Publication date
US9792562B1 (en) 2017-10-17

Similar Documents

Publication Publication Date Title
US9792562B1 (en) Event prediction and object recognition system
US10127477B2 (en) Distributed event prediction and machine learning object recognition system
US10430690B1 (en) Machine learning predictive labeling system
US10275690B2 (en) Machine learning predictive labeling system
US10521734B2 (en) Machine learning predictive labeling system
US10354204B2 (en) Machine learning predictive labeling system
US10600005B2 (en) System for automatic, simultaneous feature selection and hyperparameter tuning for a machine learning model
US10311368B2 (en) Analytic system for graphical interpretability of and improvement of machine learning models
US10474959B2 (en) Analytic system based on multiple task learning with incomplete data
US10635947B2 (en) Distributable classification system
US10929762B1 (en) Distributable event prediction and machine learning recognition system
US9830558B1 (en) Fast training of support vector data description using sampling
US11087215B1 (en) Machine learning classification system
US10628755B1 (en) Distributable clustering model training system
US11200514B1 (en) Semi-supervised classification system
US11379685B2 (en) Machine learning classification system
US10956825B1 (en) Distributable event prediction and machine learning recognition system
US11151463B2 (en) Distributable event prediction and machine learning recognition system
US9990592B2 (en) Kernel parameter selection in support vector data description for outlier identification
US11100428B2 (en) Distributable event prediction and machine learning recognition system
US11416712B1 (en) Tabular data generation with attention for machine learning model training system
US11403527B2 (en) Neural network training system
US11195084B1 (en) Neural network training system
US10872277B1 (en) Distributed classification system
Dinov et al. Model Performance Assessment

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAS INSTITUTE INC., NORTH CAROLINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHEN, XU;WANG, TAO;REEL/FRAME:040147/0766

Effective date: 20161026

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4