WO2021221649A1 - Automatic sensor data collection and model generation - Google Patents

Automatic sensor data collection and model generation

Info

Publication number: WO2021221649A1
Application number: PCT/US2020/030656
Authority: WO (WIPO, PCT)
Prior art keywords: measurements, sensor, sequence, touch, subset
Priority date: 2020-04-30
Filing date: 2020-04-30
Publication date: 2021-11-04
Other languages: French (fr)
Inventor: Robert Gregory CAMPBELL
Original assignee: Hewlett-Packard Development Company, L.P.
Application filed by: Hewlett-Packard Development Company, L.P.
Priority to: PCT/US2020/030656
Publication of: WO2021221649A1

Classifications

    • G06F 1/3262: Power saving in digitizer or tablet (under G06F 1/32, means for saving power; G06F 1/3203, power management)
    • G06F 1/3231: Monitoring the presence, absence or movement of users (under G06F 1/3206, monitoring of events, devices or parameters that trigger a change in power modality)
    • G06N 3/045: Combinations of networks (under G06N 3/02, neural networks)
    • G06N 3/08: Learning methods (under G06N 3/02, neural networks)
    • G16Y 40/35: Management of things, i.e. controlling in accordance with a policy or in order to achieve specified objectives (under G16Y, ICT specially adapted for the Internet of Things)
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

An example computing device includes: a touch sensor; an auxiliary sensor; and a controller to: control the auxiliary sensor to collect a sequence of measurements; continuously monitor the touch sensor to detect a plurality of touch interactions; automatically label a first subset of the sequence of measurements as associated with a first touch interaction; and based on the first subset, train a model to predict subsequent touch interactions from the sequence of measurements.

Description

AUTOMATIC SENSOR DATA COLLECTION AND MODEL GENERATION
BACKGROUND
[0001] Computing devices such as laptop computers, tablet computers and the like may include touch sensors and associated control hardware. The touch sensors and associated control hardware may be disabled to reduce power consumption when not in use. Disabling the touch sensors may reduce the responsiveness of the devices to subsequent operator interactions.
BRIEF DESCRIPTIONS OF THE DRAWINGS
[0002] FIG. 1 is a block diagram of an example system to collect sensor data and generate a model used at client computing devices to predict operator interactions with primary sensors.
[0003] FIG. 2 is a diagram illustrating an example developer device from the system of FIG. 1.
[0004] FIG. 3 is a flowchart of a method of collecting sensor data and generating a model for use to predict operator interactions with primary sensors.
[0005] FIG. 4 is a diagram illustrating a sequence of images and operator interactions captured during an example performance of the method of FIG. 3.
DETAILED DESCRIPTION
[0006] Various computing devices may include touch sensors to detect operator interactions such as contact by fingers, pen accessories, and the like. For example, laptop computers may include touch-sensitive displays and/or touch pads. The above-mentioned touch sensors may be connected to sensor controllers that capture and process the raw output of the touch sensors and provide a central controller of the computing device with data representative of touch interactions.
[0007] The computing devices noted above may be battery-powered, and may therefore disable the touch sensors and/or sensor controllers under certain conditions to reduce power consumption. The computing devices may implement predictive mechanisms to determine when operator interactions with the touch sensors are imminent, in order to pre-emptively re-enable the touch sensors to capture such interactions. That is, the computing devices can predict future operator interactions with touch and other sensors. The predictive mechanisms may rely on data from auxiliary sensors of the computing devices, such as cameras and the like.
[0008] The predictive mechanisms may include the use of machine learning processes, such as a convolutional neural network (CNN), to process data from the auxiliary sensor(s) and determine when touch interactions are likely. The generation of CNN models, as well as other machine learning processes, is dependent on training datasets, which may be time-consuming and costly to obtain.
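For concreteness, the following is a minimal sketch, not taken from the patent (which does not specify an architecture), of the kind of small CNN that could map a short stack of auxiliary-sensor frames to a touch-likelihood score; the layer sizes, frame count and input resolution are illustrative assumptions.

```python
# Illustrative sketch only -- the patent does not specify a network architecture.
# Assumes PyTorch and a fixed-length stack of grayscale auxiliary-sensor frames.
import torch
import torch.nn as nn

class TouchPredictorCNN(nn.Module):
    def __init__(self, num_frames: int = 4):
        super().__init__()
        # Treat the short frame sequence as input channels.
        self.features = nn.Sequential(
            nn.Conv2d(num_frames, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # single logit: touch imminent or not

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, height, width)
        x = self.features(frames).flatten(1)
        return self.classifier(x)

# Example: score one 4-frame clip at an assumed 96x96 resolution.
model = TouchPredictorCNN()
logit = model(torch.randn(1, 4, 96, 96))
probability = torch.sigmoid(logit)
```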
[0009] A computing device may automatically collect training data and generate a model to determine when touch interactions are likely. The computing device may continuously monitor touch interactions via the touch sensors, and may also collect a sequence of measurements via auxiliary sensor(s). Subsets of the measurements from the auxiliary sensors may be automatically labelled as positive training samples (associated with operator interactions) or negative training samples (not associated with operator interactions).
[0010] FIG. 1 shows a system 100 for automatic sensor data collection and model generation. The system 100 includes at least one computing device. Two example computing devices are illustrated in FIG. 1, in the form of a first example computing device 104-1 and a second example computing device 104-2. The system 100 may include fewer computing devices 104, or a greater number of computing devices 104, in other examples.
[0011] The first computing device 104-1 includes a first primary sensor 108-1 and a first auxiliary sensor 112-1. The second computing device 104-2 includes a second primary sensor 108-2 and a second auxiliary sensor 112-2. The first primary sensor 108-1 and the second primary sensor 108-2 include sensors that detect operator interactions. Examples of primary sensors include touch screens, touch pads, microphones, buttons, and the like. The first auxiliary sensor 112-1 and the second auxiliary sensor 112-2 include sensors that capture data indicative of operator activity, such as a position of an operator, a position of a hand of an operator, and the like. Examples of auxiliary sensors include cameras (e.g. infrared cameras), ultrasonic sensors, time-of-flight sensors, and the like. The auxiliary sensors 112 can include a combination of the above-mentioned sensors in some examples.
[0012] The first computing device 104-1 and the second computing device 104-2 capture measurements via the auxiliary sensors 112-1 and 112-2, respectively. The captured measurements, such as images, are processed to predict future operator interactions with the primary sensors 108. That is, the first computing device 104-1 can capture measurements such as images via the auxiliary sensor 112-1, and predict operator interactions with the primary sensor 108-1 from the measurements. The first computing device 104-1 can generate the predictions using a convolutional neural network (CNN) in some examples. The CNN is defined by a set of model parameters, also referred to simply as a model. The computing device 104-1 therefore stores a model 116, e.g. in a storage device. The computing device 104-2 also stores the model 116.
[0013] The computing devices 104-1 and 104-2, which may also be referred to as client computing devices 104, may obtain the model 116 from a server 120 connected with the computing devices 104 via a network 124. The server 120, in turn, may obtain the model 116 via the network 124 from a computing device 128. The computing device 128 may also be referred to as a developer device 128. The developer device 128 automatically generates the model 116, for subsequent transmission to the server 120 (or directly to the client computing devices 104 in some examples). For example, the developer device 128 can generate the model 116, and the server 120 can store the model 116 for transmission to the client computing devices 104 at a manufacturer facility, prior to provision of the client computing devices 104 to operators.
[0014] The developer device 128 includes a controller 132, which may also be referred to as a processor. The developer device 128 also includes a non-transitory computer-readable medium such as a storage device 136, which may also be referred to as a memory, coupled to the controller 132. The developer device 128 also includes a primary sensor 140 and an auxiliary sensor 144 coupled to the controller 132. The primary sensor 140 is the same type of sensor as the primary sensors 108-1 and 108-2 of the client computing devices 104 (e.g. a touch sensor). The auxiliary sensor 144 is the same type of sensor as the auxiliary sensors 112-1 and 112-2 of the client computing devices 104 (e.g. a camera).
[0015] The developer device 128 also includes a communications interface 148 coupled to the controller 132. The communications interface 148 includes transceivers, network controllers, and the like to enable the developer device 128 to communicate with other computing devices (e.g. the server 120 and/or the client computing devices 104) over the network 124.
[0016] The storage device 136 stores a model generator application 152. The controller 132 may execute the model generator application 152 to capture measurements via the auxiliary sensor 144 and monitor the primary sensor 140 for operator interactions. The controller 132 may also, via execution of the model generator 152, automatically label subsets of the measurements from the auxiliary sensor 144 as training samples, and use the training samples to generate the model 116.
[0017] The developer device 128 may include other components not illustrated for clarity, such as a power supply, an output device (e.g. a display), other input devices (e.g. a keyboard), and the like.
[0018] FIG. 2 shows an example developer device 128, including a housing 200 that supports the components of the developer device 128. In particular, the housing 200 supports a primary sensor 140 (e.g. a touch pad), as well as an auxiliary sensor 144 (e.g. an infrared camera). The housing 200 may also support other components of the developer device 128, such as a display 204.
[0019] FIG. 3 illustrates a method 300 of sensor data collection and model generation. The method 300 may be embodied by a set of instructions (e.g. the model generator application 152 shown in FIG. 1) that may be stored in a non-transitory computer-readable medium and executed by a controller. The method 300 is described below in conjunction with an example performance of the method 300 by the developer device 128.

[0020] At block 305, the controller 132 may collect a sequence of measurements via the auxiliary sensor 144. For example, the controller 132 may control the auxiliary sensor 144 to capture a sequence of images. As seen in FIG. 2, the auxiliary sensor 144 faces the operator of the developer device 128, and the sequence of images therefore depicts at least a portion of the operator during operation of the developer device 128. The sequence collected at block 305 can include a stream of images captured at a suitable frequency (e.g. ten frames per second, although lower and greater frame rates may be used in other examples). The sequence of measurements collected at block 305 can be stored in the storage device 136 for further processing. A timestamp may also be stored in connection with each image or other measurement, indicating when the measurement was captured.
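Block 305 could be realized, for example, as a timestamped capture loop such as the following sketch; `capture_frame` is a hypothetical stand-in for whatever camera or auxiliary-sensor driver the device exposes, and the `Measurement` record is introduced here purely for illustration.

```python
# Sketch of block 305: collect a timestamped sequence of auxiliary measurements.
# `capture_frame` is a placeholder for the device's actual camera/sensor driver.
import time
from dataclasses import dataclass
from typing import Any, Callable, List, Optional

@dataclass
class Measurement:
    timestamp: float              # seconds, recorded when the measurement was captured
    data: Any                     # e.g. an image frame from the auxiliary sensor
    label: Optional[int] = None   # 1 = positive sample, 0 = negative, None = unlabelled

def collect_measurements(capture_frame: Callable[[], Any],
                         duration_s: float,
                         frame_rate_hz: float = 10.0) -> List[Measurement]:
    """Capture auxiliary-sensor frames at roughly frame_rate_hz for duration_s seconds."""
    sequence: List[Measurement] = []
    period = 1.0 / frame_rate_hz
    end = time.time() + duration_s
    while time.time() < end:
        sequence.append(Measurement(timestamp=time.time(), data=capture_frame()))
        time.sleep(period)
    return sequence
```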
[0021] At block 310, the controller 132 may monitor the primary sensor 140 to detect operator interactions. The performance of block 310 may be initiated simultaneously with the initiation of measurement capture at block 305. That is, the primary sensor 140 remains active continuously. In the present example, in which the primary sensor 140 is a touch sensor such as the touch pad shown in FIG. 2, at block 310 the controller 132 can record timestamps, e.g. in the storage device 136, corresponding to each time an operator interaction with the touch pad begins. Such an operator interaction may also be referred to as a touch interaction. An operator interaction, for a touch-based primary sensor 140, includes physical contact between the operator and the primary sensor 140.
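Block 310 might be implemented as a lightweight monitor that records when each touch interaction begins; the `on_touch_down` callback name is an assumption, since the patent does not name a touch-driver API.

```python
# Sketch of block 310: record the start time of each touch interaction.
# The touch driver is assumed to call `on_touch_down` when contact begins;
# that callback name is illustrative, not taken from the patent.
import time
from typing import List

class TouchMonitor:
    def __init__(self) -> None:
        self.interaction_times: List[float] = []

    def on_touch_down(self) -> None:
        # Called by the (hypothetical) touch-sensor driver when contact begins.
        self.interaction_times.append(time.time())

    def interaction_since(self, t: float) -> bool:
        """True if any touch interaction has started after time t."""
        return any(ts > t for ts in self.interaction_times)
```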
[0022] At block 315, the controller 132 may determine whether an operator interaction has been detected via the monitoring initiated at block 310. When the determination at block 315 is affirmative, indicating that an operator interaction with the primary sensor 140 has been detected, the controller 132 proceeds to block 320.
[0023] At block 320, the controller 132 labels a subset of the measurements captured at block 305 (and that continue to be captured throughout the performance of the method 300) as being associated with an operator interaction. In other words, at block 320 the controller labels a subset of measurements from the auxiliary sensor 144 as a positive training sample for subsequent use in generating the model 116.

[0024] To label a subset of measurements from the auxiliary sensor 144 at block 320, the controller 132 determines a time corresponding to the operator interaction. The controller 132 then selects, from the sequence of measurements (e.g. the sequence of images captured via the auxiliary sensor 144), a subset of measurements that corresponds to a time period preceding the time of the operator interaction. For example, the controller 132 can select any measurements from the auxiliary sensor 144 that were captured within three seconds prior to the operator interaction. A wide variety of other time periods longer than or shorter than three seconds may also be employed in other examples.
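A sketch of the labelling step at block 320, reusing the hypothetical `Measurement` records from the block-305 sketch above; the three-second window follows the example given in the text.

```python
# Sketch of block 320: label measurements captured shortly before a touch
# interaction as positive training samples.
from typing import List

POSITIVE, NEGATIVE = 1, 0

def label_positive(sequence: List[Measurement],
                   interaction_time: float,
                   window_s: float = 3.0) -> None:
    """Label every measurement captured within window_s seconds before the touch."""
    for m in sequence:
        if interaction_time - window_s <= m.timestamp <= interaction_time:
            m.label = POSITIVE
```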
[0025] Each image (or other measurement, dependent on the nature of the auxiliary sensor 144) is labelled as a positive training sample. For example, each measurement may be modified to include metadata indicating that the measurement is associated with an operator interaction. The label can include a binary indicator (e.g. "1" for positive samples), or other suitable labels. In some examples, successive measurements may also include sequence information indicating the order of the measurements.
[0026] When the determination at block 315 is negative, the controller 132 proceeds to block 325 instead of block 320. At block 325, the controller 132 labels a subset of the measurements from block 305 as a negative training sample. That is, the measurements of the subset are labelled as not being associated with an operator interaction with the primary sensor 140. The label applied to measurements at block 325 contrasts with the label applied to measurements at block 320. For example, if the binary label “1” is used at block 320, a binary label “0” may be used at block 325.
[0027] The selection of a subset of measurements to label at block 325 can be performed in various ways. For example, the controller 132 can perform block 325 when a predetermined time period passes without operator interactions with the primary sensor 140.
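Block 325 could then be handled symmetrically, labelling the measurements captured during an idle interval as negative samples; this is again only a sketch built on the hypothetical `Measurement` records introduced above.

```python
# Sketch of block 325: label measurements captured during an idle period
# (no touch interactions) as negative training samples.
def label_negative(sequence: List[Measurement],
                   idle_start: float,
                   idle_end: float) -> None:
    """Label measurements captured in the idle interval as negative samples,
    leaving any measurement already labelled positive untouched."""
    for m in sequence:
        if idle_start <= m.timestamp <= idle_end and m.label != POSITIVE:
            m.label = NEGATIVE
```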
[0028] Following the performance of block 320 or 325, the controller 132 proceeds to block 330. At block 330, the controller may determine whether to process the labelled subsets generated via blocks 320 and 325 in order to generate the model 116. The determination at block 330 can include whether a threshold number of training samples have been obtained through repeated performances of blocks 320 and 325. The determination at block 330 can also include whether an operator command has been received to generate the model 116 based on any training samples gathered thus far via blocks 320 and 325.
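The determination at block 330 might look like the following sketch; the sample threshold of 500 is an arbitrary illustrative value, not taken from the patent.

```python
# Sketch of block 330: decide whether to proceed to model generation.
def ready_to_train(sequence: List[Measurement],
                   min_samples: int = 500,
                   operator_requested: bool = False) -> bool:
    """Affirmative when enough labelled samples exist or the operator asks for training."""
    labelled = sum(1 for m in sequence if m.label is not None)
    return operator_requested or labelled >= min_samples
```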
[0029] When the determination at block 330 is negative, the controller 132 may return to block 305 to continue collecting auxiliary measurements, monitoring the primary sensor 140 and labelling subsets of the measurements. When the determination at block 330 is affirmative, the controller 132 proceeds to block 335. At block 335, the controller 132 may train the model 116 based on the labelled samples accumulated via repeated performances of blocks 320 and 325. The process of generating the model 116, which is also referred to as training the model 116, includes the selection of parameters defining nodes of the CNN mentioned earlier in some examples. The model 116, once generated or updated, can be stored in the storage device 136.
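A sketch of the training step at block 335, assuming the labelled measurements have already been assembled into fixed-length frame stacks; the optimizer, learning rate and epoch count are illustrative choices only.

```python
# Sketch of block 335: train the CNN on accumulated labelled samples.
# Assumes `clips` is a tensor of shape (N, num_frames, H, W) built from labelled
# measurements and `labels` is a float tensor of 0/1 values.
import torch
import torch.nn as nn

def train_model(model: nn.Module,
                clips: torch.Tensor,
                labels: torch.Tensor,
                epochs: int = 10,
                lr: float = 1e-3) -> nn.Module:
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        logits = model(clips).squeeze(1)   # one touch-likelihood logit per clip
        loss = loss_fn(logits, labels)
        loss.backward()
        optimizer.step()
    return model
```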
[0030] At block 340, the controller 132 can transmit the model 116 via the communications interface 148. The model 116 can be transmitted directly from the developer device 128 to the client computing devices 104, or from the developer device 128 to the server 120 for subsequent transmission to the client computing devices 104.
[0031] Following generation of the model 116 and transmission of the model 116 to the client computing devices 104, the developer device 128 itself, as well as the client computing devices 104, can employ the model 116 to predict future touch or other sensor interactions based on auxiliary sensor data. For example, the developer device 128 can disable the primary sensor 140 following the performance of block 335. The controller 132 can then obtain another sequence of measurements via the auxiliary sensor 144 as described above in connection with block 305. From the other sequence of auxiliary measurements, the controller 132 can predict future operator interactions with the primary sensor 140 using the model 116. When a future operator interaction is predicted, the controller 132 can re-enable the primary sensor 140 and/or associated controller(s).
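The inference-time behaviour described above might look like the following sketch, where `capture_clip`, `enable_touch` and `disable_touch` are hypothetical hooks standing in for the device's sensor and power-management drivers.

```python
# Sketch of inference-time power gating: keep the touch sensor disabled until
# the model predicts an imminent interaction, then re-enable it.
import torch

def power_gate_loop(model, capture_clip, enable_touch, disable_touch,
                    threshold: float = 0.8) -> None:
    disable_touch()
    model.eval()
    while True:
        clip = capture_clip()                      # (1, num_frames, H, W) tensor
        with torch.no_grad():
            p = torch.sigmoid(model(clip)).item()  # predicted touch likelihood
        if p >= threshold:
            enable_touch()                         # re-enable before contact occurs
            break
```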
[0032] Turning now to FIG. 4, an example sequence of measurements obtained via block 305 is illustrated, in the form of a sequence of images captured by the auxiliary sensor 144. The sequence includes images 400-1, 400-2, 400-3, 400-4, 400-5, 400-6, 400-7, 400-8 and 400-9, collectively referred to as the images 400. The images 400 are captured at a configurable frequency, and are illustrated relative to a time axis 404. That is, the image 400-1 is taken first, followed by the image 400-2, and so on. An example performance of the method 300 by the controller 132 will be described below with reference to the images 400 and the time axis 404.
[0033] At an example performance of block 315, the controller 132 detects an operator interaction with the primary sensor 140 at a time 408-1. As a result of the detection, through an example performance of block 320, the controller 132 labels the two images 400-1 and 400-2 preceding the time 408-1 as positive training samples (that is, as being associated with an operator interaction). Labelling as a positive training sample is illustrated in FIG. 4 with cross-hatching of the relevant images 400.
[0034] Assuming that the determination at block 330 is negative, at another performance of block 315, the controller 132 determines that no operator interaction with the primary sensor 140 has been detected between the time 408-1 and a time 412-1. Therefore, at block 325 the images 400-3 and 400-4 are labelled as a negative training sample (i.e. as not being associated with an operator interaction). Labelling as a negative training sample is illustrated in FIG. 4 with diagonal lines filling the relevant images 400.
[0035] Following another negative determination at block 330, the controller 132 determines at block 315 that no operator interaction has been detected between the time 412-1 and a time 412-2. Therefore, the images 400-5 and 400-6 are also labelled as negative training samples, at block 325. However, at a time 408-2, through another performance of block 315, the controller 132 detects an operator interaction with the primary sensor 140. Therefore, at block 320 the two preceding frames are labelled as positive training samples. As a result, the images 400-6 and 400-7 are labelled as positive training samples, while the image 400-5 remains labelled as a negative training sample. In other examples, the previous negative labelling of the images 400-5 and 400-6 can be removed, and the images 400-6 and 400-7 can be labelled as a positive training sample, leaving the image 400-5 unlabelled.
[0036] In a subsequent performance of block 315, the controller 132 detects that no operator interactions with the primary sensor 140 have occurred between the time 408-2 and a time 412-3. The images 400-8 and 400-9, corresponding to the time period between the times 408-2 and 412-3, are therefore labelled as negative training samples.
[0037] When the controller 132 performs block 335, the model 116 is generated using two subsets of images 400 as negative training samples, and two subsets of images 400 as positive training samples. The two negative subsets are the images 400-3 to 400-5, and the images 400-8 to 400-9. The two positive subsets are the images 400-1 to 400-2, and the images 400-6 to 400-7.
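The labelling of FIG. 4 can be reproduced with the hypothetical helpers sketched earlier; the timestamps, window length and idle intervals below are invented solely to mirror the figure's ordering of touches (408-1, 408-2) and idle checks (412-1 to 412-3).

```python
# Worked example reproducing the FIG. 4 labelling with the sketches above.
# One image per second, a 2-second positive window, touches at t=2.5 and t=7.5,
# idle checks at t=4.5, 6.5 and 9.5 (all values are illustrative).
images = [Measurement(timestamp=float(t), data=f"image 400-{t}") for t in range(1, 10)]

label_positive(images, interaction_time=2.5, window_s=2.0)  # 400-1, 400-2
label_negative(images, idle_start=2.5, idle_end=4.5)        # 400-3, 400-4
label_negative(images, idle_start=4.5, idle_end=6.5)        # 400-5, 400-6
label_positive(images, interaction_time=7.5, window_s=2.0)  # 400-6 relabelled, 400-7
label_negative(images, idle_start=7.5, idle_end=9.5)        # 400-8, 400-9

print([m.label for m in images])  # [1, 1, 0, 0, 0, 1, 1, 0, 0]
```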
[0038] It should be recognized that features and aspects of the various examples provided above can be combined into further examples that also fall within the scope of the present disclosure. In addition, the figures are not to scale and may have size and shape exaggerated for illustrative purposes.

Claims

1. A computing device, comprising: a touch sensor; an auxiliary sensor; and a controller to: control the auxiliary sensor to collect a sequence of measurements; continuously monitor the touch sensor to detect a plurality of touch interactions; automatically label a first subset of the sequence of measurements as associated with a first touch interaction; and based on the first subset, train a model to predict subsequent touch interactions from the sequence of measurements.
2. The computing device of claim 1, wherein the auxiliary sensor includes a camera, a time-of-flight sensor, an ultrasonic sensor, or a combination thereof.
3. The computing device of claim 1, wherein the controller is to transmit the model to a second computing device.
4. The computing device of claim 1, wherein the controller, in order to automatically label the first subset of the sequence of measurements, is to: determine a time corresponding to the first touch interaction; select, from the sequence of measurements, the first subset corresponding to a time period preceding the time; and label the first subset as associated with the first touch interaction.
5. The computing device of claim 1, wherein the controller is to: automatically label a second subset of the sequence of measurements as not associated with the first touch interaction; and train the model based on the first subset and the second subset.
6. The computing device of claim 5, wherein the controller is to: identify a time period during which no touch interactions were detected; and select, from the sequence of measurements, the second subset corresponding to the time period.
7. A computing device, comprising: a touch sensor; and a controller to: obtain a sequence of auxiliary measurements; label a first subset of the sequence of auxiliary measurements as associated with a first touch interaction; label a second subset of the sequence of auxiliary measurements as not associated with the first touch interaction; and train a model to predict subsequent touch interactions from the sequence of auxiliary measurements, based on the first subset and the second subset.
8. The computing device of claim 7, further comprising: an auxiliary sensor connected with the controller, wherein the controller is to: continuously obtain the sequence of auxiliary measurements from the auxiliary sensor; and continuously monitor the touch sensor to detect touch interactions.
9. The computing device of claim 8, wherein the sequence of auxiliary measurements are indicative of an operator hand position relative to the computing device.
10. The computing device of claim 7, wherein the controller is to: disable the touch sensor; obtain another sequence of auxiliary measurements; predict a future touch interaction from the other sequence of auxiliary measurements, based on the model; and responsive to predicting the future touch interaction, enable the touch sensor.
11. The computing device of claim 7, wherein the controller is to: transmit the model to a server.
12. A non-transitory computer-readable medium having a set of instructions executable by a controller to: control an auxiliary sensor to collect a sequence of measurements; continuously monitor a primary sensor to detect a plurality of operator interactions with the primary sensor; automatically label a first subset of the sequence of measurements as associated with a first operator interaction with the primary sensor; and based on the first subset, train a model to predict subsequent operator interactions with the primary sensor from the sequence of measurements.
13. The non-transitory computer-readable medium of claim 12, wherein the primary sensor is a touch sensor.
14. The non-transitory computer-readable medium of claim 13, wherein the primary sensor includes a touch screen, a touch pad, or a combination thereof.
15. The non-transitory computer-readable medium of claim 12, wherein the set of instructions is executable by the controller to: automatically label a second subset of the sequence of measurements as not associated with the first operator interaction with the primary sensor; and train the model based on the first subset and the second subset.
Application PCT/US2020/030656, filed 2020-04-30 (priority date 2020-04-30): Automatic sensor data collection and model generation, published as WO2021221649A1 (en).

Priority Applications (1)

Application number: PCT/US2020/030656; priority date: 2020-04-30; filing date: 2020-04-30; title: Automatic sensor data collection and model generation (WO2021221649A1, en)

Applications Claiming Priority (1)

Application number: PCT/US2020/030656; priority date: 2020-04-30; filing date: 2020-04-30; title: Automatic sensor data collection and model generation (WO2021221649A1, en)

Publications (1)

WO2021221649A1 (en), published 2021-11-04

Family

ID: 78373831

Family Applications (1)

PCT/US2020/030656 (WO2021221649A1, en): priority date 2020-04-30, filing date 2020-04-30, title: Automatic sensor data collection and model generation

Country Status (1)

WO: WO2021221649A1 (en)

Citations (4)

• US20100302212A1 (Microsoft Corporation), priority 2009-06-02, published 2010-12-02: Touch personalization for a display device (cited by examiner)
• WO2012109635A2 (Microsoft Corporation), priority 2011-02-12, published 2012-08-16: Prediction-based touch contact tracking (cited by examiner)
• US9910540B2 (International Business Machines Corporation), priority 2015-04-13, published 2018-03-06: Management of a touchscreen interface of a device (cited by examiner)
• WO2018125347A1 (Google Llc), priority 2016-12-29, published 2018-07-05: Multi-task machine learning for predicted touch interpretations (cited by examiner)

Legal Events

• 121 (EP): the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 20933182; country of ref document: EP; kind code of ref document: A1.
• NENP: non-entry into the national phase. Ref country code: DE.
• 122 (EP): PCT application non-entry in European phase. Ref document number: 20933182; country of ref document: EP; kind code of ref document: A1.