US20200132717A1 - System and method for determining whether an electronic device is located on a stationary or stable surface
- Publication number
- US20200132717A1 (U.S. application Ser. No. 16/175,328)
- Authority
- US
- United States
- Prior art keywords
- electronic device
- acquisition time
- time window
- sensor data
- orientation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01P—MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
- G01P15/00—Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration
- G01P15/18—Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration in two or more dimensions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3206—Monitoring of events, devices or parameters that trigger a change in power modality
- G06F1/3231—Monitoring the presence, absence or movement of users
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01P—MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
- G01P15/00—Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration
- G01P15/14—Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration by making use of gyroscopes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G06N99/005—
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present disclosure relates generally to electronic devices, and, in particular embodiments, to a system and method for determining whether an electronic device is located on a stationary or stable surface.
- Power supply components of the electronic device may be located on a bottom surface of the electronic device (e.g., the surface below the keyboard portion of a laptop computer).
- the base of the electronic device can overheat, burn, or cause discomfort to the user if the electronic device is in physical contact with the user (e.g., the user's lap or wrist).
- elevated temperatures in the electronic device can detrimentally affect batteries that power the electronic device. While batteries can operate over a wide range of temperatures, charging or discharging the batteries while the electronic device is at an elevated temperature can reduce charge acceptance and reduce battery life. For example, charging or discharging lithium polymer (LiPo) batteries at elevated temperatures can lead to gas generation that might cause a cylindrical cell to vent and a pouch cell to swell. Even further, elevated temperatures can detrimentally affect the lifetime of integrated circuits (e.g. provided on a printed circuit board (PCB) or implemented as a system-on-chip (SoC)) in the electronic device, especially when such integrated circuits are subjected to prolonged durations of high operating temperatures.
- heat sinks, fans, or holes could be used to funnel heat out of a body of the electronic device.
- heat is becoming a much more important consideration at the silicon level.
- Efficient ways of detecting whether or not the electronic device is located on a stationary or stable surface may be needed to optimize power consumption and/or heat dissipation of components within the electronic device.
- a system includes: a first motion sensor configured to generate first sensor data indicative of a first type of movement of an electronic device; a first feature detection circuit configured to determine at least one orientation-independent feature based on the first sensor data; and a classifying circuit configured to determine whether or not the electronic device is located on a stationary surface based on the at least one orientation-independent feature.
- a method includes: generating, by an accelerometer of an electronic device, first sensor data over an acquisition time window; generating, by a gyroscope of the electronic device, second sensor data over the acquisition time window; determining, by a first feature detection circuit, at least one first orientation-independent feature for the acquisition time window based on the first sensor data; determining, by a second feature detection circuit, at least one second orientation-independent feature for the acquisition time window based on the second sensor data; and executing, by a classification circuit, a machine learning classification to determine whether or not the electronic device is located on a stationary surface based on the at least one first orientation-independent feature and the at least one second orientation-independent feature.
- an electronic device includes a detection system.
- the detection system includes: an accelerometer configured to generate accelerometer data indicative of a first type of movement of an electronic device; a first feature detection circuit coupled to an output of the accelerometer and configured to determine at least one orientation-independent feature based on the accelerometer data; and a classifying circuit configured to determine whether or not the electronic device is located on a stationary surface based on the at least one orientation-independent feature.
- FIG. 1 shows a block diagram of an electronic device including a detection system, in accordance with an embodiment
- FIG. 2 shows a method of extracting orientation-independent features from sensor data generated by a motion sensor of the electronic device of FIG. 1 , in accordance with an embodiment
- FIG. 3A shows sensor data generated by a motion sensor of the electronic device of FIG. 1 , in accordance with an embodiment
- FIG. 3B shows a zoomed-in view of sampling times of first and second acquisition time windows of the sensor data of FIG. 3A , in accordance with an embodiment
- FIG. 3C shows the norm of the sensor data of FIG. 3A , in accordance with an embodiment
- FIG. 3D shows the norm of the sensor data of FIG. 3A within the first acquisition time window of the sensor data of FIG. 3A , in accordance with an embodiment
- FIGS. 4A and 4B show mean-cross values generated by different motion sensors of the electronic device of FIG. 1 for different states, in accordance with an embodiment
- FIGS. 5A and 5B show relative differences between mean-cross values and variances for different states, in accordance with an embodiment
- FIGS. 6A to 6C show block diagrams illustrating various ways of implementing the detection system of FIG. 1 and the method of FIG. 2 , in accordance with various embodiments.
- Various embodiments described herein are directed to efficient systems and methods for determining whether or not an electronic device is located on a stationary or stable surface (e.g. on a stationary or stable inanimate surface such as on a table or in a drawer). Such a determination may be used, for example, to optimize device performance, vary power consumption of the electronic device, and/or manage heat dissipation of components within the electronic device.
- fan speeds and clock frequencies of electronic components e.g. of a central processing unit (CPU), a graphics processing unit (GPU), or a power supply unit
- clock frequencies of components in the electronic device may be decreased to reduce power consumption and to avoid overheating of the components in the electronic device.
- the embodiments described below are directed to systems and methods of determining whether or not the electronic device is located on a stationary or stable surface.
- uses of the result of such a determination in the electronic device are given merely by way of illustration, examples being implementing thermal policies, power savings, and performance benchmarks.
- the use of the result of such a determination in controlling or varying an operation of the electronic device may, in general, be left to the discretion of the manufacturer(s) of the electronic device and/or the manufacturer(s) of the electronic components of the electronic device.
- the proposed methods use data from one or more motion sensors included in the electronic device. While conventional systems and methods of determining whether or not the electronic device is located on a stationary or stable surface may use data from one or more motion sensors, such conventional systems and methods may suffer from several disadvantages.
- the motion sensors of the electronic device generate motion sensor data
- conventional systems and methods extract features from the motion sensor data that depend on an orientation of the motion sensor in the electronic device relative to a plurality of reference axes in order to determine whether or not the electronic device is located on a stationary or stable surface. In other words, conventional systems and methods rely on orientation-dependent features for the determination.
- conventional systems and methods may extract, from the motion sensor data, pitch, yaw, roll and/or various acceleration components relative to a calibrated coordinate system or the plurality of reference axes (e.g. three-dimensional coordinate system or a 6-axes system), with such orientation-dependent features being subsequently used to determine whether or not the electronic device is located on a stationary or stable surface.
- orientation-dependent features require calibration of the motion sensors of the electronic device to reduce sensor offset and bias (e.g. accelerometer offset and/or gyroscope bias). Calibration is also needed to generate the calibrated coordinate system or the plurality of reference axes, with such calibration ensuring that the orientation-dependent features (e.g., pitch, yaw, roll, x-axis acceleration component, y-axis acceleration component, and/or z-axis acceleration component) accurately track the motion and/or orientation of the electronic device.
- conventional systems and methods are not easily reconfigurable or re-tunable, and can suffer from high latency and long convergence times.
- Embodiment systems and methods aim to circumvent at least these disadvantages associated with conventional methods of determining whether or not the electronic device is located on a stationary or stable surface.
- embodiment systems and methods described herein extract a few (e.g. one or two) significant features from motion sensor data, and such extracted features are orientation-independent. Stated differently, the features extracted from motion sensor data are not dependent on a calibrated coordinate system or a plurality of reference axes for accuracy.
- embodiment systems and methods rely on a mean-cross value (explained in greater detail below) and a variance of the norm of the motion sensor data within each acquisition time window, which features are orientation-independent.
- embodiment systems and methods analyze the mean-cross value and the variance of the norm using a machine learning approach to determine whether or not the electronic device is located on a stationary or stable surface.
- embodiment systems and methods use physical sensor data without the need of complex processing methods (examples of such methods being sensor fusion for attitude estimation, calibration, FFT, and complex filtering chains). Due to the use of orientation-independent features, a machine learning approach, and physical sensor data, the embodiment systems and methods have at least the following advantages: (1) are easily tuned or reconfigured; (2) have low latency and short convergence times (e.g. less than 10 seconds); (3) do not require calibration of the motion sensors (thereby exhibiting immunity against device-to-device variations, accelerometer offsets, and/or gyroscope bias); and (4) have greater reliability compared to conventional systems and methods since orientation-independent features are used instead of orientation-dependent features.
- FIG. 1 shows a block diagram of an electronic device 101 including a detection system 100 , in accordance with an embodiment.
- the detection system 100 may be within, attached to, or coupled to the electronic device 101 .
- the detection system 100 of the electronic device 101 may be used to determine whether or not the electronic device 101 is on a stationary or stable surface (e.g. on a table or in a drawer).
- the electronic device 101 may be a laptop computer, a tablet device, or a wearable electronic device (e.g. a smart watch, mobile phone, wireless headphones, or the like).
- the detection system 100 includes a first motion sensor 102 and a first feature detection circuit 104 that is coupled to an output of the first motion sensor 102 .
- the first feature detection circuit 104 is configured to determine one or more orientation-independent features from the output signal of the first motion sensor 102 .
- a classifying circuit 106 is coupled to an output of the first feature detection circuit 104 .
- the classifying circuit 106 is configured to determine a state of the electronic device 101 (e.g. assign a label indicating whether or not the electronic device 101 is located on a stationary or stable surface). Such a determination by the classifying circuit 106 is based on the orientation-independent features determined by the first feature detection circuit 104 .
- the detection system 100 may further include a second motion sensor 108 that measures a different motion characteristic compared to the first motion sensor 102 .
- a second feature detection circuit 110 may be coupled to an output of the second motion sensor 108 . Similar to the first feature detection circuit 104 , the second feature detection circuit 110 is configured to determine one or more orientation-independent features from the output signal of the second motion sensor 108 .
- the classifying circuit 106 is configured to determine a state of the electronic device 101 (e.g. assign a label indicating whether or not the electronic device 101 is located on a stationary or stable surface), with such determination being based on the orientation-independent features determined by the first feature detection circuit 104 and the orientation-independent features determined by the second feature detection circuit 110 .
- the detection system 100 may further include a meta-classifying circuit 112 coupled to an output of the classifying circuit 106 .
- the meta-classifying circuit 112 may implement a time-based voting method that acts as a low-pass filter on the output of the classifying circuit 106 in order to improve an overall accuracy of the detection system 100 .
- the detection system 100 includes the first motion sensor 102 , which may be an accelerometer of the electronic device 101 . It is noted that although only one first motion sensor 102 is shown in FIG. 1 , a plurality of first motion sensors 102 may be included in the electronic device 101 (e.g. two or more accelerometers placed at different locations of the electronic device 101 ).
- the electronic device 101 having the first motion sensor 102 may be a laptop computer having an accelerometer coupled or attached to a base of the laptop computer.
- the electronic device 101 having the first motion sensor 102 may be a tablet having an accelerometer included within the tablet.
- the first motion sensor 102 may be configured to sense vibration or acceleration of the electronic device 101 in each axis of motion.
- the first motion sensor 102 may generate first sensor data 102 x, 102 y, 102 z that is indicative of vibration or acceleration of the electronic device 101 in the lateral axis (e.g. referred to as the “x axis”), longitudinal axis (e.g. referred to as the “y axis”), and vertical or normal axis (e.g. referred to as the “z axis”), respectively.
- first sensor data 102 x, 102 y, 102 z from the first motion sensor 102 enables the embodiment systems and methods to determine whether or not the electronic device 101 is located on a stationary or stable surface.
- detection can be improved with the use of the second motion sensor 108 in conjunction with the first motion sensor 102 .
- the second motion sensor 108 may be a gyroscope of the electronic device 101 . It is reiterated that use of the second motion sensor 108 (and consequently, the data generated by the second motion sensor 108 ) is optional.
- the second motion sensor 108 may be configured to measure a rate at which the electronic device 101 rotates around each axis of motion. For example, the second motion sensor 108 may generate second sensor data 108 x, 108 y, 108 z that is indicative of the rotation rate of the electronic device 101 around the x-axis, the y-axis, and the z-axis, respectively.
- first sensor data 102 x, 102 y, 102 z and the second sensor data 108 x, 108 y, 108 z respectively generated by the first motion sensor 102 and the second motion sensor 108 may depend, at least in part, on a placement or orientation of the electronic device 101 .
- the electronic device 101 may be placed in an inclined plane, a flat plane, on a part of the human body (e.g. a lap), or on an inanimate object (e.g. a desk).
- the first sensor data 102 x, 102 y, 102 z and the second sensor data 108 x, 108 y, 108 z may be indicative of such a placement or orientation of the electronic device 101 .
- although the first feature detection circuit 104 and the second feature detection circuit 110 are shown as separate circuits in FIG. 1 , it is noted that in some embodiments, a single detection circuit may implement both the first feature detection circuit 104 and the second feature detection circuit 110 .
- FIG. 2 shows an embodiment method 200 that may be executed by the first feature detection circuit 104 to extract or determine orientation-independent features from the first sensor data 102 x, 102 y, 102 z.
- the method 200 may also be executed by the second feature detection circuit 110 to extract or determine orientation-independent features from the second sensor data 108 x, 108 y, 108 z, in other embodiments that optionally utilize the second motion sensor 108 (e.g. gyroscope) in addition to the first motion sensor 102 (e.g. accelerometer).
- the description that follows is directed to examples where the first feature detection circuit 104 executes the method 200 ; however, such description applies equally to the second feature detection circuit 110 in other embodiments that optionally utilize the second motion sensor 108 in addition to the first motion sensor 102 .
- FIG. 3A shows an example of the first sensor data 102 x, 102 y, 102 z that is generated by the first motion sensor 102 over a plurality of acquisition time windows.
- FIG. 3B shows a zoomed-in view of sampling times of the first two acquisition time windows W 1 , W 2 of the example of FIG. 3A .
- the plurality of acquisition time windows are consecutive and non-overlapping windows of time in some embodiments. However, in other embodiments, overlapping windows of time are also possible.
- each acquisition time window W i starts at time t 0 and ends at time t 49 .
- each acquisition time window has a duration of 1 second and includes 50 samples (e.g. corresponding to a 50 Hz sampling frequency). Consequently, in the example of FIG. 3A , there are about 72 acquisition time windows and a total of about 3600 samples (i.e., 50 samples for each of the 72 acquisition time windows). It is noted that each sample includes a complete dataset (e.g. x-axis data, y-axis data, and z-axis data).
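The windowing described above can be sketched as follows; this is an illustrative reading assuming the stated parameters (a 50 Hz sampling frequency and consecutive, non-overlapping 1-second windows), and the function name and data layout are assumptions, not from the patent:

```python
# Sketch: split a 50 Hz stream of complete (x, y, z) samples into
# consecutive, non-overlapping 1-second acquisition time windows W_i
# of 50 samples each.
SAMPLING_HZ = 50
SAMPLES_PER_WINDOW = SAMPLING_HZ * 1  # 1-second windows

def split_into_windows(samples):
    """Group (x, y, z) samples into complete acquisition time windows."""
    n_windows = len(samples) // SAMPLES_PER_WINDOW
    return [samples[i * SAMPLES_PER_WINDOW:(i + 1) * SAMPLES_PER_WINDOW]
            for i in range(n_windows)]

# 3600 samples at 50 Hz yield the roughly 72 windows of the FIG. 3A example.
stream = [(0.0, 0.0, 1.0)] * 3600
print(len(split_into_windows(stream)))  # 72
```

Any partial trailing window is simply discarded here, since the method's statistics are only triggered once an entire window of samples has been acquired.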
- FIG. 3C shows the norm 302 of the first sensor data 102 x, 102 y, 102 z in FIG. 3A , and the norm 302 at a given sample time may be indicative of the magnitude of the first sensor data 102 x, 102 y, 102 z at the given sample time.
- the method 200 is executed for each acquisition time window W i .
- method 200 is triggered at the start of acquisition time window W i (e.g. time t 0 in FIG. 3B ) and includes step 202 , where the first feature detection circuit 104 receives the first sensor data 102 x, 102 y, 102 z and determines the norm of each sample within the acquisition time window W i .
- the norm of each sample within the acquisition time window W i is stored in a buffer included in the first detection circuit 104 , although in other embodiments, the computation technique used to determine the norm may obviate the need for such a buffer.
- in step 204 , the acquisition time window W i ends and the method 200 proceeds to step 206 , where the mean of the norms within the acquisition time window W i is determined.
- in steps 208 and 210 , statistical data is extracted from the norms within the acquisition time window W i . Consequently, steps 206 , 208 and 210 are triggered each time an entire window of samples is acquired (e.g. each time 50 samples are acquired in a 1 second time window).
- the statistical data includes the mean-cross value within the acquisition time window W i (in step 208 ) and the variance of the norms within the acquisition time window W i (in step 210 ), both of which require the mean of the norms determined in step 206 .
- the mean-cross value denotes the number of times the norms within the acquisition time window W i cross the mean of the norms within the acquisition time window W i .
- FIG. 3D shows the norms 304 within the acquisition time window W i (e.g. determined in step 202 ) and the mean 306 of the norms within the acquisition time window W i (e.g. determined in step 206 ).
- the instances where the norms 304 cross the mean 306 are depicted as points of intersection of the curve 304 and the line 306 . In the example of FIG. 3D , there are 26 such intersections, and consequently the mean-cross value is 26 .
- the variance of the norm within the acquisition time window W i is determined as follows: variance = (1/n) Σ i=1..n (x i − x mean )², where:
- n is the number of samples within the acquisition time window W i (e.g. 50 in the case of a 50 Hz sampling frequency)
- x i is the i th norm 304 within the acquisition time window W i
- x mean is the mean of the norms 306 within the acquisition time window W i .
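The per-window computation of steps 202 to 210 can be sketched as below. This is an illustrative reading of the described method (the function name and the (x, y, z) tuple layout are assumptions), not the patented firmware:

```python
import math

def window_features(window):
    """Orientation-independent features for one acquisition time window W_i.

    `window` is a list of (x, y, z) samples. The norm does not depend on
    any calibrated coordinate system or reference axes.
    """
    # Step 202: norm of each sample within the window.
    norms = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in window]
    # Step 206: mean of the norms.
    mean = sum(norms) / len(norms)
    # Step 208: mean-cross value, i.e. how many times the norm signal
    # crosses its own mean within the window.
    mean_cross = sum(1 for a, b in zip(norms, norms[1:])
                     if (a - mean) * (b - mean) < 0)
    # Step 210: variance of the norms, (1/n) * sum((x_i - x_mean)^2).
    variance = sum((v - mean) ** 2 for v in norms) / len(norms)
    return mean_cross, variance

# Norms alternating tightly around their mean (sensor white noise on a
# stable surface) give a high mean-cross value and a small variance.
wobble = [(0.0, 0.0, 1.1 if i % 2 == 0 else 0.9) for i in range(50)]
mc, var = window_features(wobble)
print(mc, round(var, 4))  # 49 0.01
```

The example matches the intuition in the text: when the device rests on a stable surface, white noise dominates and the norm crosses its mean frequently, while the variance stays small.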
- the mean-cross value and the variance of the norms within the acquisition time window W i are provided to the classifying circuit 106 .
- the classifying circuit 106 is run after the acquisition time window W i ends and after the mean-cross value and the variance of the norms within the acquisition time window W i are determined by the appropriate detection circuit.
- the mean-cross value and the variance of the norms within the acquisition time window W i are the orientation-independent features that are used to determine whether or not the electronic device 101 is located on a stationary or stable surface.
- FIG. 4A shows mean-cross values 402 generated by the first feature detection circuit 104 and mean-cross values 404 generated by the second feature detection circuit 110 over 96 acquisition time windows W i in a scenario where the electronic device 101 is located on a stationary or stable surface (e.g. a table). Consequently, each acquisition time window W i in FIG. 4A has a respective mean-cross value MC A,i associated with the first motion sensor 102 (e.g. accelerometer) and a respective mean-cross value MC G,i associated with the second motion sensor 108 (e.g. gyroscope).
- FIG. 4B shows mean-cross values 406 generated by the first feature detection circuit 104 and mean-cross values 408 generated by the second feature detection circuit 110 over a plurality of acquisition time windows W i in a scenario where the electronic device 101 is not located on a stationary or stable surface (e.g. on a human lap).
- each acquisition time window W i in the example of FIG. 4B has a respective mean-cross value MC A,i associated with the first motion sensor 102 (e.g. accelerometer) and a respective mean-cross value MC G,i associated with the second motion sensor 108 (e.g. gyroscope).
- the first sensor data 102 x, 102 y, 102 z from the first motion sensor 102 can be approximated as the white noise of the first motion sensor 102 added to motion-dependent signals.
- the white noise of the first motion sensor 102 can be approximated as a signal that causes the first sensor data 102 x, 102 y, 102 z to fluctuate frequently and randomly around its mean value when the motion-dependent signals are stable and slowly varying (e.g. when on a stationary or stable surface).
- white noise of the first motion sensor 102 has less of a contribution on the first sensor data 102 x, 102 y, 102 z when the motion-dependent signals are dominant (e.g. when not on a stationary or stable surface).
- the mean-cross values 402 when the electronic device 101 is located on a stationary or stable surface are expected to be greater than the mean-cross values 406 when the electronic device 101 is not located on a stationary or stable surface.
- the mean-cross values 404 obtained by method 200 when the electronic device 101 is located on a stationary or stable surface are greater than the mean-cross values 408 obtained by method 200 when the electronic device 101 is not located on a stationary or stable surface (e.g. when on a human lap).
- this difference in the mean-cross values for the two different states can also be explained in terms of the contribution of white noise of the second motion sensor 108 to the second sensor data 108 x, 108 y, 108 z in the two states, as described above.
- the classifying circuit 106 is run after the acquisition time window W i ends and after it has received the mean-cross value and the variance of the norms for the acquisition time window W i .
- the classifying circuit 106 may be configured to determine whether or not the electronic device 101 is located on a stationary or stable surface during the acquisition time window W i based on at least the mean-cross value and the variance of the norms for each acquisition time window W i .
- the classifying circuit 106 may be a supervised machine learning classifier implemented using machine learning techniques, examples being logistic regression, naive Bayes classifier, support vector machines, decision trees, boosted trees, random forest, neural networks, nearest neighbor, among others.
- the classifying circuit 106 is configured to assign a label (or decision) L i to each acquisition time window W i , with such label L i indicating whether or not the electronic device 101 is located on a stationary or stable surface during the acquisition time window W i .
- the usage of the variance of the norm can increase the accuracy of the classifying circuit 106 , with the variance of the norm decreasing if the electronic device 101 is located on a stationary or stable surface, and the variance of the norm increasing if the electronic device 101 is not located on a stationary or stable surface.
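As one illustration of the classifying circuit 106, the sketch below uses a nearest-centroid rule over the two features. This is a deliberately simple stand-in for the supervised techniques listed above (logistic regression, decision trees, neural networks, and so on); the function names, labels, and feature values are invented for the example, not taken from the patent:

```python
# Minimal stand-in for the classifying circuit: nearest-centroid
# classification over (mean-cross, variance) feature pairs.

def train_centroids(examples):
    """examples: list of ((mean_cross, variance), label) pairs."""
    sums, counts = {}, {}
    for (mc, var), label in examples:
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += mc
        s[1] += var
        counts[label] = counts.get(label, 0) + 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def classify(centroids, mc, var):
    """Assign the label L_i of the nearest centroid for a window W_i."""
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - mc) ** 2 +
                               (centroids[lbl][1] - var) ** 2)

# Invented training pairs: stationary windows show high mean-cross and low
# variance; lap-borne windows show the opposite.
training = [((26, 0.0004), "on_surface"), ((24, 0.0005), "on_surface"),
            ((8, 0.02), "not_on_surface"), ((10, 0.03), "not_on_surface")]
centroids = train_centroids(training)
print(classify(centroids, 25, 0.0006))  # on_surface
```

A production classifier would likely normalize the two features to comparable scales before computing distances; the raw units are kept here only to keep the sketch short.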
- supervised learning is a machine learning task of learning a function that maps an input to an output based on example input-output pairs.
- supervised learning infers a function from labeled training data including a set of training examples.
- labeled training data may be obtained by placing the electronic device 101 (including the first motion sensor 102 and, optionally, the second motion sensor 108 ) on a stationary or stable surface (e.g. a table) and logging the first sensor data 102 x, 102 y, 102 z and the second sensor data 108 x, 108 y, 108 z for various typing intensity levels and different orientations and positions of the electronic device 101 on the stationary or stable surface.
- the first sensor data 102 x, 102 y, 102 z and the second sensor data 108 x, 108 y, 108 z for these various typing intensity levels and different orientations and positions are known to have been obtained when the electronic device 101 is located on a stationary or stable surface. Consequently, such first sensor data 102 x, 102 y, 102 z and second sensor data 108 x, 108 y, 108 z are then subjected to the method 200 of FIG. 2 to obtain mean-cross values and variance of norms values for various acquisition time windows W i , and such mean-cross values and variance of norms values are subsequently assigned the label indicating that the electronic device 101 is located on a stationary or stable surface.
- labeled training data may also be obtained by placing the electronic device 101 on a moving or unstable surface (e.g. a human lap) and logging the first sensor data 102 x, 102 y, 102 z and the second sensor data 108 x, 108 y, 108 z for various typing intensity levels and different orientations and positions of the electronic device 101 on the moving or unstable surface.
- the various first sensor data 102 x, 102 y, 102 z and the various second sensor data 108 x, 108 y, 108 z obtained in such a manner are then subjected to the method 200 of FIG. 2 to obtain mean-cross values and variance of norms values for various acquisition time windows W i , and such mean-cross values and variance of norms values are subsequently assigned the label indicating that the electronic device 101 is not located on a stationary or stable surface.
- latency of the detection system 100 shown in FIG. 1 may depend on at least the latency of the classifying circuit 106 , which may be equal to the duration of each of the acquisition time windows W i .
- the classifying circuit 106 has a latency of 1 second since a label L i is output from the classifying circuit 106 every second.
- the latency of the detection system 100 is also affected by the meta-classifier output latency.
- the detection system 100 may include the meta-classifying circuit 112 .
- the meta-classifying circuit 112 is configured to determine the number of consecutive occurrences of the output L i of the classifying circuit 106 . If the number of consecutive occurrences reaches a threshold, the output of the meta-classifying circuit 112 (labelled L final in FIG. 1 ) is changed. Otherwise, the previous state is kept. As such, the meta-classifying circuit 112 can be used to low-pass filter the output of the classifying circuit 106 (e.g. to avoid glitches and spurious false positives).
- the meta-classifying circuit 112 introduces additional latency to the detection system 100 , and the latency of the meta-classifying circuit 112 can be configured to be a minimum of N times the duration of an acquisition time window W i .
- For example, N may be equal to N not_on_table , in which case the output state L final is changed if the number of consecutive occurrences reaches N not_on_table .
- N not_on_table can be different from N on_table .
- the output of the meta-classifying circuit 112 is updated according to the meta-classifier logic configuration and the configured meta-classifier output latency.
- N on_table may be configured to be between 2 and 10.
- N not_on_table may be configured to be between 2 and 10.
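The consecutive-occurrence voting described above can be sketched as follows. This is an illustrative sketch, not part of the disclosure: the class name is an assumption, and the default thresholds are arbitrary values within the 2-to-10 range mentioned above.

```python
class MetaClassifier:
    """Low-pass filters the per-window label stream: the final output
    only flips after a configurable number of consecutive windows
    disagree with the current state."""

    def __init__(self, n_on_table=4, n_not_on_table=4, initial_state=True):
        self.n_on_table = n_on_table          # windows needed to flip to "on table"
        self.n_not_on_table = n_not_on_table  # windows needed to flip to "not on table"
        self.state = initial_state            # True = on stationary/stable surface
        self.count = 0

    def update(self, label):
        """`label` is the classifier output L_i for one acquisition window."""
        if label == self.state:
            self.count = 0        # agreement: reset the disagreement counter
            return self.state
        self.count += 1
        threshold = self.n_on_table if label else self.n_not_on_table
        if self.count >= threshold:
            self.state = label    # enough consecutive disagreements: flip
            self.count = 0
        return self.state
```

With 1-second acquisition windows and a threshold of 4, a change of the underlying condition is reflected in L final roughly 4 seconds later, which is the accuracy-versus-latency trade-off of this filtering.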
- While the meta-classifying circuit 112 may increase an accuracy of the determination of whether or not the electronic device 101 is located on a stationary or stable surface, this increase in accuracy comes at the cost of increased system latency. However, even though latency increases as accuracy increases, the embodiment systems and methods achieve latencies that are less than 10 seconds (e.g. between 4 seconds and 9 seconds), even with the use of the meta-classifying circuit 112 .
- In low-power or low-cost implementations, the second motion sensor 108 (e.g. gyroscope) may be omitted, and the data therefrom may not be used by the classifying circuit 106 to determine whether or not the electronic device 101 is located on a stationary or stable surface (e.g. on a table or in a drawer).
- approximately 90% accuracy can be achieved if the classifying circuit 106 only uses the mean-cross values MC i,102 and the variance of the norms Var i,102 obtained from the first sensor data 102 x, 102 y, 102 z.
- labels L i are correctly given to approximately 90% of the acquisition time windows when only the mean-cross values MC i,102 and the variance of the norms Var i,102, obtained from the first sensor data 102 x, 102 y, 102 z, are used.
- a high accuracy can be achieved, even without the use of a meta-classifying circuit 112 .
- the choice of which data to extract from the acquisition time window W 1 is based on a trade-off between accuracy and power consumption.
- the number of features determined by the first feature detection circuit 104 can be varied.
- the mean for each axis can be computed, and this may be used to determine the mean-cross value for each axis for each acquisition time window W i .
- the energy of the signal received from the motion sensors can be used.
- determination of a greater number of features is accompanied by an increase in resources (e.g. memory, execution time, and power).
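As an illustration of such additional features, the per-axis mean-cross values and the signal energy within a window might be computed as sketched below; this is an illustrative sketch under the assumptions above, not part of the disclosure.

```python
import numpy as np

def window_energy(norms):
    """Illustrative extra feature: energy of the norm signal within one
    acquisition time window (sum of squared samples)."""
    norms = np.asarray(norms, dtype=float)
    return float(np.sum(norms ** 2))

def per_axis_mean_cross(samples):
    """Illustrative extra feature: mean-cross value computed separately
    for each axis of an (n, 3) window of x/y/z samples."""
    samples = np.asarray(samples, dtype=float)
    means = samples.mean(axis=0)
    result = []
    for k in range(samples.shape[1]):
        above = samples[:, k] > means[k]
        # Count transitions of the per-axis signal across its own mean.
        result.append(int(np.count_nonzero(above[1:] != above[:-1])))
    return result
```

As the text notes, each extra feature improves accuracy at the cost of additional memory, execution time, and power.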
- the output of the meta-classifying circuit 112 may be provided to a state monitor 114 , which may adapt the behavior or operation of the electronic device 101 .
- the state monitor 114 may be implemented using a controller and a memory register.
- the output of the classifying circuit 106 and/or the output of the meta-classifying circuit 112 may be stored in the memory register of the state monitor 114 , and the controller of the state monitor 114 may be configured to read the content of the memory register.
- the state monitor 114 may generate an interrupt signal 116 that may adapt the behavior or operation of the electronic device 101 , for example, fan speeds and clock frequencies of electronic components (e.g. of a central processing unit (CPU), a graphics processing unit (GPU), or a power supply unit).
- the interrupt signal 116 may cause the clock frequencies of components in the electronic device 101 to be decreased to reduce power consumption and to avoid overheating of the components in the electronic device 101 .
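A minimal sketch of the policy response described above is given below. The handler name and the device control hooks are illustrative assumptions, not an API of the disclosure; actual fan and clock control is left to the device manufacturer.

```python
def on_state_change(on_stable_surface, device):
    """Hypothetical interrupt handler: adapt device behavior to the
    detected state (the `device` control hooks are assumptions)."""
    if on_stable_surface:
        # On a table or in a drawer: allow higher performance.
        device.set_fan_speed("high")
        device.raise_clock_frequencies()
    else:
        # In motion or on a lap: reduce power consumption and heat.
        device.set_fan_speed("low")
        device.lower_clock_frequencies()
```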
- FIG. 6A shows a first example, where the method 200 , as well as the classifying circuit 106 and the meta-classifying circuit 112 , is implemented by a controller 502 (e.g. a microcontroller) that is coupled to a micro-electro-mechanical systems (MEMS) system-in-package 504 .
- the MEMS system-in-package 504 may implement the first motion sensor 102 and/or the second motion sensor 108 .
- the controller 502 may be included in a system-on-chip (SoC) 506 , which is communicatively coupled to the operating system layer 508 of the electronic device 101 .
- SoC system-on-chip
- FIG. 6B shows another example, where the method 200 , as well as the classifying circuit 106 and the meta-classifying circuit 112 , is implemented by directly connecting the controller 502 to the operating system layer 508 (e.g. without the SoC 506 of FIG. 6A as an intervening connection).
- FIG. 6C shows another example, where the method 200 , as well as the classifying circuit 106 and the meta-classifying circuit 112 , is implemented directly in hardware (e.g. directly on the MEMS system-in-package 504 , aided by software embedded in the MEMS system-in-package 504 ) that is connected to the operating system layer 508 . It is noted that current consumption of the implementation shown in FIG. 6A is greater than that of the implementation shown in FIG. 6B , which is, in turn, greater than that of the implementation shown in FIG. 6C .
- the embodiment systems and methods have at least the following advantages: (1) are easily tuned or reconfigured (e.g. due to the use of machine learning approach for classifying circuit 106 ); (2) have low latency and short convergence times (e.g. less than 10 seconds, due to the time interval TI being split into a plurality of short time windows each of which is about 1 second and also configurable/adjustable); (3) do not require calibration of the motion sensors (e.g. due to the use of orientation-independent features of mean-cross values and the variance of the norms, thereby exhibiting immunity against device-to-device variations, accelerometer offsets, and/or gyroscope bias); and (4) have greater reliability compared to conventional systems and methods since orientation-independent features are used in embodiment systems and methods. Furthermore, as mentioned in reference to FIG. 6C , the embodiment systems and methods may be executed directly in hardware, thus enabling ultra-low power implementations of the embodiment systems and methods.
- The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an Application Specific Integrated Circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
- a processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- A software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable medium known in the art.
- An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
- the storage medium may be integral to the processor.
- the processor and the storage medium may reside in an ASIC.
Abstract
Description
- The present disclosure relates generally to electronic devices, and, in particular embodiments, to a system and method for determining whether an electronic device is located on a stationary or stable surface.
- As electronic devices become more ubiquitous and as individuals become more mobile, there is an increasing need to provide computing capabilities and information on the go. Such a need can be met, at least in part, by laptop computers, tablet devices, and wearable electronics (hereinafter individually and collectively referred to as an “electronic device”).
- One aspect that users often encounter with the use of an electronic device is high power consumption and/or poor heat dissipation, which often manifests as heating of the electronic device. Power supply components of the electronic device may be located on a bottom surface of the electronic device (e.g., the surface below the keyboard portion of a laptop computer). During long periods of use or during intense use (e.g. during gaming), the base of the electronic device can overheat, burn, or cause discomfort to the user if the electronic device is in physical contact with the user (e.g., the user's lap or wrist).
- In addition to the potential of causing harm to human skin, elevated temperatures in the electronic device can detrimentally affect batteries that power the electronic device. While batteries can operate over a wide range of temperatures, charging or discharging the batteries while the electronic device is at an elevated temperature can reduce charge acceptance and reduce battery-life. For example, charging or discharging lithium polymer (LiPo) batteries at elevated temperatures can lead to gas generation that might cause a cylindrical cell to vent and a pouch cell to swell. Even further, elevated temperatures can detrimentally affect the lifetime of integrated circuits (e.g. provided on a printed circuit board (PCB) or implemented as silicon-on-chip (SoC)) in the electronic device, especially when such integrated circuits are subjected to prolonged durations of high operating temperatures.
- In the past, heat sinks, fans, or holes could be used to funnel heat out of a body of the electronic device. However, as more functionality is added onto a PCB or into a SoC, heat is becoming a much more important consideration at the silicon level. Efficient ways of detecting whether or not the electronic device is located on a stationary or stable surface (e.g. a table or in a drawer) may be needed to optimize power consumption and/or heat dissipation of components within the electronic device.
- In an embodiment, a system includes: a first motion sensor configured to generate first sensor data indicative of a first type of movement of an electronic device; a first feature detection circuit configured to determine at least one orientation-independent feature based on the first sensor data; and a classifying circuit configured to determine whether or not the electronic device is located on a stationary surface based on the at least one orientation-independent feature.
- In an embodiment, a method includes: generating, by an accelerometer of an electronic device, first sensor data over an acquisition time window; generating, by a gyroscope of the electronic device, second sensor data over the acquisition time window; determining, by a first feature detection circuit, at least one first orientation-independent feature for the acquisition time window based on the first sensor data; determining, by a second feature detection circuit, at least one second orientation-independent feature for the acquisition time window based on the second sensor data; and executing, by a classification circuit, a machine learning classification to determine whether or not the electronic device is located on a stationary surface based on the at least one first orientation-independent feature and the at least one second orientation-independent feature.
- In an embodiment, an electronic device includes a detection system. The detection system includes: an accelerometer configured to generate accelerometer data indicative of a first type of movement of an electronic device; a first feature detection circuit coupled to an output of the accelerometer and configured to determine at least one orientation-independent feature based on the accelerometer data; and a classifying circuit configured to determine whether or not the electronic device is located on a stationary surface based on the at least one orientation-independent feature.
- For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
- FIG. 1 shows a block diagram of an electronic device including a detection system, in accordance with an embodiment;
- FIG. 2 shows a method of extracting orientation-independent features from sensor data generated by a motion sensor of the electronic device of FIG. 1 , in accordance with an embodiment;
- FIG. 3A shows sensor data generated by a motion sensor of the electronic device of FIG. 1 , in accordance with an embodiment;
- FIG. 3B shows a zoomed-in view of sampling times of first and second acquisition time windows of the sensor data of FIG. 3A , in accordance with an embodiment;
- FIG. 3C shows the norm of the sensor data of FIG. 3A , in accordance with an embodiment;
- FIG. 3D shows the norm of the sensor data of FIG. 3A within the first acquisition time window of the sensor data of FIG. 3A , in accordance with an embodiment;
- FIGS. 4A and 4B show mean-cross values generated by different motion sensors of the electronic device of FIG. 1 for different states, in accordance with an embodiment;
- FIGS. 5A and 5B show relative differences between mean-cross values and variances for different states, in accordance with an embodiment;
- FIGS. 6A to 6C show block diagrams illustrating various ways of implementing the detection system of FIG. 1 and the method of FIG. 2 , in accordance with various embodiments.
- Corresponding numerals and symbols in the different figures generally refer to corresponding parts unless otherwise indicated. The figures are drawn to clearly illustrate the relevant aspects of the embodiments and are not necessarily drawn to scale.
- The making and using of various embodiments are discussed in detail below. It should be appreciated, however, that the various embodiments described herein are applicable in a wide variety of specific contexts. The specific embodiments discussed are merely illustrative of specific ways to make and use various embodiments, and should not be construed in a limited scope.
- Various embodiments described herein are directed to efficient systems and methods for determining whether or not an electronic device is located on a stationary or stable surface (e.g. on a stationary or stable inanimate surface such as on a table or in a drawer). Such a determination may be used, for example, to optimize device performance, vary power consumption of the electronic device, and/or manage heat dissipation of components within the electronic device. As an illustration, in various embodiments, in response to a determination that the electronic device is on a stationary or stable surface (e.g. a table), fan speeds and clock frequencies of electronic components (e.g. of a central processing unit (CPU), a graphics processing unit (GPU), or a power supply unit) in the electronic device may be increased to achieve better performance (e.g. faster computation times); however, in response to a determination that the electronic device is not on a stationary or stable surface (e.g. when the electronic device is in motion or on a user's lap), clock frequencies of components in the electronic device may be decreased to reduce power consumption and to avoid overheating of the components in the electronic device.
- At the outset, it is noted that the embodiments described below are directed to systems and methods of determining whether or not the electronic device is located on a stationary or stable surface. Use of the result of such a determination in the electronic device is given merely as illustrations, examples being to implement thermal policies, power savings, and performance benchmarks. The use of the result of such a determination in controlling or varying an operation of the electronic device may, in general, be left to the discretion of the manufacturer(s) of the electronic device and/or the manufacturer(s) of the electronic components of the electronic device.
- As described below, the proposed methods use data from one or more motion sensors included in the electronic device. While conventional systems and methods of determining whether or not the electronic device is located on a stationary or stable surface may use data from one or more motion sensors, such conventional systems and methods may suffer from several disadvantages. For example, the motion sensors of the electronic device generate motion sensor data, and conventional systems and methods extract features from the motion sensor data that depend on an orientation of the motion sensor in the electronic device relative to a plurality of reference axes in order to determine whether or not the electronic device is located on a stationary or stable surface. In other words, conventional systems and methods rely on orientation-dependent features for the determination. Illustratively, conventional systems and methods may extract, from the motion sensor data, pitch, yaw, roll and/or various acceleration components relative to a calibrated coordinate system or the plurality of reference axes (e.g. three-dimensional coordinate system or a 6-axes system), with such orientation-dependent features being subsequently used to determine whether or not the electronic device is located on a stationary or stable surface.
- Use of such orientation-dependent features requires calibration of the motion sensors of the electronic device to reduce sensor offset and bias (e.g. accelerometer offset and/or gyroscope bias). Calibration is also needed to generate the calibrated coordinate system or the plurality of reference axes, with such calibration ensuring that the orientation-dependent features (e.g., pitch, yaw, roll, x-axis acceleration component, y-axis acceleration component, and/or z-axis acceleration component) accurately track the motion and/or orientation of the electronic device. As a result of the use of orientation-dependent features, conventional systems and methods are not easily reconfigurable or re-tunable, can suffer from high latency and long convergence times (e.g. 10 seconds or more), and have limited accuracy since such conventional systems and methods are susceptible to device-to-device variations and orientation-based variations. Embodiment systems and methods aim to circumvent at least these disadvantages associated with conventional methods of determining whether or not the electronic device is located on a stationary or stable surface.
- In general, embodiment systems and methods described herein extract a few (e.g. one or two) significant features from motion sensor data, and such extracted features are orientation-independent. Stated differently, the features extracted from motion sensor data are not dependent on a calibrated coordinate system or a plurality of reference axes for accuracy. In particular, embodiment systems and methods rely on a mean-cross value (explained in greater detail below) and a variance of the norm of the motion sensor data within each acquisition time window, which features are orientation-independent. Furthermore, embodiment systems and methods analyze the mean-cross value and the variance of the norm using a machine learning approach to determine whether or not the electronic device is located on a stationary or stable surface. Additionally, embodiment systems and methods use physical sensor data without the need of complex processing methods (examples of such methods being sensor fusion for attitude estimation, calibration, FFT, and complex filtering chains). Due to the use of orientation-independent features, a machine learning approach, and physical sensor data, the embodiment systems and methods have at least the following advantages: (1) are easily tuned or reconfigured; (2) have low latency and short convergence times (e.g. less than 10 seconds); (3) do not require calibration of the motion sensors (thereby exhibiting immunity against device-to-device variations, accelerometer offsets, and/or gyroscope bias); and (4) have greater reliability compared to conventional systems and methods since orientation-independent features are used instead of orientation-dependent features.
-
FIG. 1 shows a block diagram of an electronic device 101 including a detection system 100 , in accordance with an embodiment. The detection system 100 may be within, attached, or coupled to the electronic device 101 . The detection system 100 of the electronic device 101 may be used to determine whether or not the electronic device 101 is on a stationary or stable surface (e.g. on a table or in a drawer). As mentioned above, the electronic device 101 may be a laptop computer, a tablet device, or a wearable electronic device (e.g. a smart watch, mobile phone, wireless headphones, or the like). The detection system 100 includes a first motion sensor 102 and a first feature detection circuit 104 that is coupled to an output of the first motion sensor 102 . The first feature detection circuit 104 is configured to determine one or more orientation-independent features from the output signal of the first motion sensor 102 . - As shown in
FIG. 1 , a classifying circuit 106 is coupled to an output of the first feature detection circuit 104 . The classifying circuit 106 is configured to determine a state of the electronic device 101 (e.g. assign a label indicating whether or not the electronic device 101 is located on a stationary or stable surface). Such a determination by the classifying circuit 106 is based on the orientation-independent features determined by the first feature detection circuit 104 . - In some embodiments, the detection system 100 may further include a
second motion sensor 108 that measures a different motion characteristic compared to the first motion sensor 102 . In such embodiments, a second feature detection circuit 110 may be coupled to an output of the second motion sensor 108 . Similar to the first feature detection circuit 104 , the second feature detection circuit 110 is configured to determine one or more orientation-independent features from the output signal of the second motion sensor 108 . - In embodiments including the
second motion sensor 108, the classifyingcircuit 106 is configured to determine a state of the electronic device 101 (e.g. assign a label indicating whether or not theelectronic device 101 is located on a stationary or stable surface), with such determination being based on the orientation-independent features determined by the firstfeature detection circuit 104 and the orientation-independent features determined by the second feature detection circuit no. - In some embodiments, the detection system wo may further include a meta-classifying
circuit 112 coupled to an output of the classifying circuit 106 . The meta-classifying circuit 112 may implement a time-based voting method that acts as a low-pass filter on the output of the classifying circuit 106 in order to improve an overall accuracy of the detection system 100 . Each of the components of the detection system 100 is described in further detail below. - The detection system 100 includes the
first motion sensor 102, which may be an accelerometer of theelectronic device 101. It is noted that although only onefirst motion sensor 102 is shown inFIG. 1 , a plurality offirst motion sensors 102 may be included in the electronic device 101 (e.g. two or more accelerometers placed at different locations of the electronic device 101). Theelectronic device 101 having thefirst motion sensor 102 may be a laptop computer having an accelerometer coupled or attached to a base of the laptop computer. As another example, theelectronic device 101 having thefirst motion sensor 102 may be a tablet having an accelerometer included within the tablet. Thefirst motion sensor 102 may be configured to sense vibration or acceleration of theelectronic device 101 in each axis of motion. For example, thefirst motion sensor 102 may generatefirst sensor data electronic device 101 in the lateral axis (e.g. referred to as the “x axis”), longitudinal axis (e.g. referred to as the “y axis”), and vertical or normal axis (e.g. referred to as the “z axis”), respectively. - As will be clear in the description below, use of the
first sensor data 102 x, 102 y, 102 z generated by the first motion sensor 102 enables the embodiment systems and methods to determine whether or not the electronic device 101 is located on a stationary or stable surface. However, in other embodiments, detection can be improved with the use of the second motion sensor 108 in conjunction with the first motion sensor 102 . The second motion sensor 108 may be a gyroscope of the electronic device 101 . It is reiterated that use of the second motion sensor 108 (and consequently, the data generated by the second motion sensor 108 ) is optional. For example, in low-power or low-cost implementations of the embodiment systems and methods, the second motion sensor 108 (e.g. gyroscope) and the data therefrom may not be present or used by the classifying circuit 106 to determine whether or not the electronic device 101 is located on a stationary or stable surface (e.g. on a table or in a drawer). The second motion sensor 108 may be configured to measure a rate at which the electronic device 101 rotates around each axis of motion. For example, the second motion sensor 108 may generate second sensor data 108 x, 108 y, 108 z indicative of a rate of rotation of the electronic device 101 around the x-axis, the y-axis, and the z-axis, respectively. - It is noted that the
first sensor data 102 x, 102 y, 102 z and the second sensor data 108 x, 108 y, 108 z generated by the first motion sensor 102 and the second motion sensor 108 may depend, at least in part, on a placement or orientation of the electronic device 101 . As an illustration, the electronic device 101 may be placed in an inclined plane, a flat plane, on a part of the human body (e.g. a lap), or on an inanimate object (e.g. a desk). The first sensor data 102 x, 102 y, 102 z and the second sensor data 108 x, 108 y, 108 z may differ for each of these placements or orientations. Although the first feature detection circuit 104 and the second feature detection circuit 110 are shown as separate circuits in FIG. 1 , it is noted that in some embodiments, a single detection circuit may implement both the first feature detection circuit 104 and the second feature detection circuit 110 . -
FIG. 2 shows an embodiment method 200 that may be executed by the first feature detection circuit 104 to extract or determine orientation-independent features from the first sensor data 102 x, 102 y, 102 z. The method 200 may also be executed by the second feature detection circuit 110 to extract or determine orientation-independent features from the second sensor data 108 x, 108 y, 108 z. For brevity, the following description assumes that the first feature detection circuit 104 executes the method 200 ; however, such description applies equally to the second feature detection circuit 110 in other embodiments that optionally utilize the second motion sensor 108 in addition to the first motion sensor 102 . - Prior to discussing the details of
method 200 in FIG. 2 , a brief discussion of acquisition time windows is provided with reference to FIGS. 3A and 3B . FIG. 3A shows an example of the first sensor data 102 x, 102 y, 102 z generated by the first motion sensor 102 over a plurality of acquisition time windows. FIG. 3B shows a zoomed-in view of sampling times of the first two acquisition time windows W1, W2 of the example of FIG. 3A . As illustrated in FIG. 3B , the plurality of acquisition time windows are consecutive and non-overlapping windows of time in some embodiments. However, in other embodiments, overlapping windows of time are also possible. In the example of FIG. 3B , the first acquisition time window W1 starts at time t0 and ends at time t49. In an embodiment, such as in the examples of FIGS. 3A and 3B , each acquisition time window has a duration of 1 second and includes 50 samples (e.g. corresponding to a 50 Hz sampling frequency). Consequently, in the example of FIG. 3A , there are about 72 acquisition time windows and a total of about 3600 samples (i.e., 50 samples for each of the 72 acquisition time windows). It is noted that each sample includes a complete dataset (e.g. x-axis data, y-axis data, and z-axis data). It is also noted that the 50 Hz sampling frequency and the 1 second duration for each acquisition time window are merely examples, and other embodiments are envisioned where different sampling frequencies and different time durations are used. FIG. 3C shows the norm 302 of the first sensor data 102 x, 102 y, 102 z of FIG. 3A , and the norm 302 at a given sample time may be indicative of the magnitude of the first sensor data 102 x, 102 y, 102 z at that sample time. - The
method 200 is executed for each acquisition time window Wi . As shown in FIG. 2 , method 200 is triggered at the start of acquisition time window Wi (e.g. time t0 in FIG. 3B ) and includes step 202 , where the first feature detection circuit 104 receives the first sensor data 102 x, 102 y, 102 z and determines the norms thereof. The norms may be stored in a buffer of the first feature detection circuit 104 , although in other embodiments, the computation technique used to determine the norm may obviate the need for such a buffer. - In
step 204, the acquisition time window Wi ends and themethod 200 proceeds to step 206 where the mean of the norms within the acquisition time window Wi are determined. Insteps time 50 samples are acquired in a 1 second time window). The statistical data includes the mean-cross value within the acquisition time window Wi (in step 208) and the variance of the norms within the acquisition time window Wi (in step 210), both of which require the mean of the norms determined instep 206. - With reference to step 208, the mean-cross value denotes the number of times the norms within the acquisition time window Wi crosses the mean of the norms within the acquisition time window Wi. An illustration is given in
FIG. 3D , which shows the norms 304 within the acquisition time window Wi (e.g. determined in step 202 ) and the mean 306 of the norms within the acquisition time window Wi (e.g. determined in step 206 ). In the example of FIG. 3D , there are 26 times when the norms 304 within the acquisition time window Wi cross the mean 306 of the norms within the acquisition time window Wi. These instances are depicted as points of intersection of the curve 304 and the line 306 . Consequently, the mean-cross value for the example of FIG. 3D is 26. - With reference to step 210 , the variance of the norms within the acquisition time window Wi is determined as follows:
- $\text{Var} = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - x_{\text{mean}}\right)^2$
- where n is the number of samples within the acquisition time window Wi (e.g. 50 in the case of a 50 Hz sampling frequency and a 1 second window), x_i is the ith norm 304 within the acquisition time window Wi, and x_mean is the mean 306 of the norms within the acquisition time window Wi. - At
step 212 of method 200, the mean-cross value and the variance of the norms within the acquisition time window Wi are provided to the classifying circuit 106. As such, the classifying circuit 106 is run after the acquisition time window Wi ends and after the mean-cross value and the variance of the norms within the acquisition time window Wi are determined by the appropriate detection circuit. It is once again noted that the mean-cross value and the variance of the norms within the acquisition time window Wi are the orientation-independent features that are used to determine whether or not the electronic device 101 is located on a stationary or stable surface. -
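The per-window feature extraction described above (norms of the samples, their mean, the mean-cross count, and the variance of the norms) can be sketched as follows. This is an illustrative sketch, not the patented implementation: the function name is invented, crossings are approximated by sign changes between adjacent samples, and the 50-sample / 1 second window is taken from the example embodiment.

```python
import math

def extract_features(samples):
    """samples: list of (x, y, z) tuples from one acquisition time window Wi.

    Returns the two orientation-independent features: the mean-cross value
    and the variance of the norms (illustrative sketch only).
    """
    # Norm of each complete sample (x-axis, y-axis, z-axis data).
    norms = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
    mean = sum(norms) / len(norms)
    # Mean-cross value: count sign changes of (norm - mean) between
    # adjacent samples, approximating crossings of the mean line.
    mean_cross = sum(
        1 for a, b in zip(norms, norms[1:]) if (a - mean) * (b - mean) < 0
    )
    # Variance of the norms about their mean (population variance, 1/n).
    variance = sum((n - mean) ** 2 for n in norms) / len(norms)
    return mean_cross, variance
```

For a window whose norms alternate between 1 and 3, the mean is 2, every adjacent pair crosses it, and the variance is 1.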
FIG. 4A shows mean-cross values 402 generated by the first feature detection circuit 104 and mean-cross values 404 generated by the second feature detection circuit 110 over 96 acquisition time windows Wi in a scenario where the electronic device 101 is located on a stationary or stable surface (e.g. a table). Consequently, each acquisition time window Wi in FIG. 4A has a respective mean-cross value MCA,i associated with the first motion sensor 102 (e.g. accelerometer) and a respective mean-cross value MCG,i associated with the second motion sensor 108 (e.g. gyroscope). FIG. 4B shows mean-cross values 406 generated by the first feature detection circuit 104 and mean-cross values 408 generated by the second feature detection circuit 110 over 145 acquisition time windows Wi in a scenario where the electronic device 101 is not located on a stationary or stable surface (e.g. when on a human lap). Consequently, each acquisition time window Wi in the example of FIG. 4B has a respective mean-cross value MCA,i associated with the first motion sensor 102 (e.g. accelerometer) and a respective mean-cross value MCG,i associated with the second motion sensor 108 (e.g. gyroscope). - As can be observed by comparing the mean-
cross values 402 and 406 of FIGS. 4A and 4B, respectively, it has been observed through experiments that the mean-cross values 402 obtained by method 200 when the electronic device 101 is located on a stationary or stable surface (e.g. when on a table) are expected to be greater than the mean-cross values 406 obtained by method 200 when the electronic device 101 is not located on a stationary or stable surface (e.g. when on a human lap). This relative difference in the mean-cross values in the two different states is depicted in FIG. 5A and can be explained in terms of the contribution of white noise of the first motion sensor 102 to the first sensor data in the two states: (1) when the electronic device 101 is located on a stationary or stable surface, and (2) when the electronic device 101 is not located on a stationary or stable surface. - For example, the
first sensor data generated by the first motion sensor 102 can be approximated as white noise of the first motion sensor 102 added with motion-dependent signals. The white noise of the first motion sensor 102 can be approximated as a signal that causes the first sensor data to frequently cross its mean. When the electronic device 101 is not located on a stationary or stable surface, the motion-dependent signals dominate, and the white noise of the first motion sensor 102 has less of a contribution to the first sensor data. Consequently, the mean-cross values 402 when the electronic device 101 is located on a stationary or stable surface are expected to be greater than the mean-cross values 406 when the electronic device 101 is not located on a stationary or stable surface. - In a similar manner, it can be observed from
FIGS. 4A and 4B that the mean-cross values 404 obtained by method 200 when the electronic device 101 is located on a stationary or stable surface (e.g. when on a table) are greater than the mean-cross values 408 obtained by method 200 when the electronic device 101 is not located on a stationary or stable surface (e.g. when on a human lap). This difference in the mean-cross values for the two different states can also be explained in terms of the contribution of white noise of the second motion sensor 108 to the second sensor data. - With regards to the variance of the norms, it has been observed through experiments that the variance of the norms when the
electronic device 101 is located on a stationary or stable surface is expected to be smaller than the variance of the norms when the electronic device 101 is not located on a stationary or stable surface. This relative difference in the variance of the norms in the two different states is depicted in FIG. 5B. - Moving on to the
classifying circuit 106, as noted above, the classifying circuit 106 is run after the acquisition time window Wi ends and after it has received the mean-cross value and the variance of the norms for the acquisition time window Wi. The classifying circuit 106 may be configured to determine whether or not the electronic device 101 is located on a stationary or stable surface during the acquisition time window Wi based on at least the mean-cross value and the variance of the norms for each acquisition time window Wi. The classifying circuit 106 may be a supervised machine learning classifier implemented using machine learning techniques, examples being logistic regression, the naive Bayes classifier, support vector machines, decision trees, boosted trees, random forests, neural networks, and nearest neighbor methods, among others. The classifying circuit 106 is configured to assign a label (or decision) Li to each acquisition time window Wi, with such label Li indicating whether or not the electronic device 101 is located on a stationary or stable surface during the acquisition time window Wi. The use of the variance of the norms can increase the accuracy of the classifying circuit 106, with the variance of the norms decreasing if the electronic device 101 is located on a stationary or stable surface, and increasing if the electronic device 101 is not located on a stationary or stable surface. - It is noted that supervised learning is a machine learning task of learning a function that maps an input to an output based on example input-output pairs. In particular, supervised learning infers a function from labeled training data including a set of training examples. In the supervised machine learning classifier of classifying
circuit 106, labeled training data may be obtained by placing the electronic device 101 (including the first motion sensor 102 and, optionally, the second motion sensor 108) on a stationary or stable surface (e.g. a table) and logging the first sensor data (and, optionally, the second sensor data) generated during placement of the electronic device 101 on the stationary or stable surface. The first sensor data and second sensor data so obtained are known to correspond to a scenario where the electronic device 101 is located on a stationary or stable surface. Consequently, such first sensor data and second sensor data may be processed according to method 200 of FIG. 2 to obtain mean-cross values and variance-of-norms values for various acquisition time windows Wi, and such mean-cross values and variance-of-norms values are subsequently assigned the label indicating that the electronic device 101 is located on a stationary or stable surface. - Similarly, labeled training data may also be obtained by placing the
electronic device 101 on a moving or unstable surface (e.g. a human lap) and logging the first sensor data (and, optionally, the second sensor data) generated during placement of the electronic device 101 on the moving or unstable surface. The various first sensor data and second sensor data so obtained may be processed according to method 200 of FIG. 2 to obtain mean-cross values and variance-of-norms values for various acquisition time windows Wi, and such mean-cross values and variance-of-norms values are subsequently assigned the label indicating that the electronic device 101 is not located on a stationary or stable surface. - Latency of the
detection system 100 shown in FIG. 1 may depend on at least the latency of the classifying circuit 106, which may be equal to the duration of each of the acquisition time windows Wi. In an embodiment where the duration of each acquisition time window Wi is 1 second, the classifying circuit 106 has a latency of 1 second, since a label Li is output from the classifying circuit 106 every second. As will be described below, in embodiments that also include the meta-classifying circuit 112, the latency of the detection system 100 is also affected by the meta-classifier output latency. - To further enhance the accuracy of the determination of whether or not the
electronic device 101 is located on a stationary or stable surface, the detection system 100 may include the meta-classifying circuit 112. In an embodiment, the meta-classifying circuit 112 is configured to determine the number of consecutive occurrences of the output Li of the classifying circuit 106. If the number of consecutive occurrences exceeds a threshold, the output of the meta-classifying circuit 112 (labelled Lfinal in FIG. 1) is changed; otherwise, the previous state is kept. As such, the meta-classifying circuit 112 can be used to low-pass filter the output of the classifying circuit 106 (e.g. to avoid glitches and spurious false positives). - Use of the meta-classifying
circuit 112 introduces latency to the detection system 100, and the latency of the meta-classifying circuit 112 can be configured to be a minimum of N times the duration of an acquisition time window Wi. In some embodiments, different minimum latencies may apply depending on whether the output of the classifying circuit 106 indicates that the electronic device 101 is located on a stationary or stable surface (e.g. where N=Non_table and the output state Lfinal is changed if the number of consecutive occurrences reaches Non_table) or whether the output of the classifying circuit 106 indicates that the electronic device 101 is not located on a stationary or stable surface (e.g. where N=Nnot_on_table and the output state Lfinal is changed if the number of consecutive occurrences reaches Nnot_on_table). In some embodiments, Nnot_on_table can be different from Non_table. The output of the meta-classifying circuit 112 is updated according to the meta-classifier logic configuration and the configured meta-classifier output latency. In some embodiments, Non_table may be configured to be between 2 and 10, while Nnot_on_table may also be configured to be between 2 and 10. - While use of the meta-classifying
circuit 112 may increase the accuracy of the determination of whether or not the electronic device 101 is located on a stationary or stable surface, this increase in accuracy comes at the cost of increased system latency. However, even though latency increases as accuracy increases, the embodiment systems and methods achieve latencies that are less than 10 seconds (e.g. between 4 seconds and 9 seconds), even with the use of the meta-classifying circuit 112. - As discussed above, in low-power or low-cost implementations of the embodiment systems and methods, the second motion sensor 108 (e.g. gyroscope) and the data therefrom may not be used by the classifying
circuit 106 to determine whether or not the electronic device 101 is located on a stationary or stable surface (e.g. on a table or in a drawer). In experiments that have been run, it has been noted that approximately 90% accuracy can be achieved if the classifying circuit 106 only uses the mean-cross values MCi,102 and the variance of the norms Vari,102 obtained from the first sensor data, particularly when such features are used in combination with the meta-classifying circuit 112. It has also been noted that accuracy can be further improved when both the mean-cross values MCi,102 and the variance of the norms Vari,102 (obtained from the first sensor data) and the corresponding features obtained from the second sensor data are used, particularly when used in combination with the meta-classifying circuit 112. - In low-power applications, the choice of which data to extract from the acquisition time window Wi is based on a trade-off between accuracy and power consumption. Generally, the number of features determined by the first feature detection circuit 104 (and the second feature detection circuit 110 in embodiments that use it in conjunction with circuit 104) can be varied. For example, the mean for each axis can be computed, and this may be used to determine the mean-cross value for each axis for each acquisition time window Wi. As another example, the energy of the signal received from the motion sensors can be used. However, it is noted that determination of a greater number of features is accompanied by an increase in resources (e.g. memory, execution time, and power).
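The consecutive-occurrence logic of the meta-classifying circuit 112 described earlier can be sketched as follows. The class name, parameter names, and default thresholds are illustrative assumptions (the text only states that the thresholds may lie between 2 and 10 and may differ per direction), not the patented implementation.

```python
class MetaClassifier:
    """Low-pass filters the per-window labels Li: the final label Lfinal
    changes only after N consecutive identical opposing classifier outputs
    (N may differ per direction, per the Non_table / Nnot_on_table idea)."""

    def __init__(self, n_on_table=4, n_not_on_table=4, initial=False):
        self.n_on_table = n_on_table          # consecutive True labels to flip to True
        self.n_not_on_table = n_not_on_table  # consecutive False labels to flip to False
        self.state = initial                  # True: "on stationary/stable surface"
        self._run = 0                         # length of current run of opposing labels

    def update(self, label):
        """Consume one per-window label Li; return the filtered label Lfinal."""
        if label == self.state:
            self._run = 0                     # agreement resets the counter
        else:
            self._run += 1
            needed = self.n_on_table if label else self.n_not_on_table
            if self._run >= needed:
                self.state = label            # enough consecutive occurrences: flip
                self._run = 0
        return self.state
```

With, say, n_on_table=3 and n_not_on_table=2, a single spurious "on table" window never changes Lfinal, which is the glitch-suppression behavior described above.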
- The output of the meta-classifying
circuit 112 may be provided to a state monitor 114, which may adapt the behavior or operation of the electronic device 101. The state monitor 114 may be implemented using a controller and a memory register. The output of the classifying circuit 106 and/or the output of the meta-classifying circuit 112 may be stored in the memory register of the state monitor 114, and the controller of the state monitor 114 may be configured to read the content of the memory register. In response to a determination that the electronic device is on a stationary or stable surface (e.g. a table), the state monitor 114 may generate an interrupt signal 116 that adapts the behavior or operation of the electronic device 101; for example, fan speeds and clock frequencies of electronic components (e.g. of a central processing unit (CPU), a graphics processing unit (GPU), or a power supply unit) in the electronic device 101 may be increased to achieve better performance (e.g. faster computation times). Conversely, in response to a determination that the electronic device is not on a stationary or stable surface (e.g. when the electronic device is in motion or on a user's lap), the interrupt signal 116 may cause the clock frequencies of components in the electronic device 101 to be decreased to reduce power consumption and to avoid overheating of the components in the electronic device 101. - The embodiment systems and methods discussed above can be implemented in various ways.
FIG. 6A shows a first example, where the method 200, as well as the classifying circuit 106 and the meta-classifying circuit 112, is implemented by a controller 502 (e.g. a microcontroller) that is coupled to a micro-electro-mechanical systems (MEMS) system-in-package 504. The MEMS system-in-package 504 may implement the first motion sensor 102 and/or the second motion sensor 108. Furthermore, the controller 502 may be included in a system-on-chip (SoC) 506, which is communicatively coupled to the operating system layer 508 of the electronic device 101. -
FIG. 6B shows another example, where the method 200, as well as the classifying circuit 106 and the meta-classifying circuit 112, is implemented by directly connecting the controller 502 to the operating system layer 508 (e.g. without the SoC 506 of FIG. 6A being an intervening connection). -
FIG. 6C shows another example, where the method 200, as well as the classifying circuit 106 and the meta-classifying circuit 112, is implemented directly in hardware (e.g. directly on the MEMS system-in-package 504, aided by software embedded in the MEMS system-in-package 504) that is connected to the operating system layer 508. It is noted that the current consumption of the implementation shown in FIG. 6A is greater than that of the implementation shown in FIG. 6B, which is, in turn, greater than that of the implementation shown in FIG. 6C. - The embodiment systems and methods have at least the following advantages: (1) they are easily tuned or reconfigured (e.g. due to the use of a machine learning approach for the classifying circuit 106); (2) they have low latency and short convergence times (e.g. less than 10 seconds, due to the time interval TI being split into a plurality of short acquisition time windows, each of which is about 1 second in duration and configurable/adjustable); (3) they do not require calibration of the motion sensors (e.g. due to the use of the orientation-independent features of mean-cross values and the variance of the norms, thereby exhibiting immunity against device-to-device variations, accelerometer offsets, and/or gyroscope bias); and (4) they have greater reliability compared to conventional systems and methods, since orientation-independent features are used. Furthermore, as mentioned in reference to
FIG. 6C, the embodiment systems and methods may be executed directly in hardware, thus enabling ultra-low power implementations of the embodiment systems and methods. - In an embodiment, a system includes: a first motion sensor configured to generate first sensor data indicative of a first type of movement of an electronic device; a first feature detection circuit configured to determine at least one orientation-independent feature based on the first sensor data; and a classifying circuit configured to determine whether or not the electronic device is located on a stationary surface based on the at least one orientation-independent feature.
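Among the supervised classifiers named above, one of the simplest (in the nearest-neighbor family) can be hand-rolled as a nearest-centroid rule over the two orientation-independent features. This is an illustrative sketch under stated assumptions: the function names are invented, and the feature values in the usage note are synthetic, not data from the patent.

```python
def fit_centroids(features, labels):
    """features: list of (mean_cross, variance) pairs per window Wi;
    labels: True = on stationary/stable surface. Returns per-class centroids."""
    by_class = {True: [], False: []}
    for f, l in zip(features, labels):
        by_class[l].append(f)
    return {
        l: tuple(sum(axis) / len(axis) for axis in zip(*pts))
        for l, pts in by_class.items()
    }

def classify(centroids, feature):
    """Assign the label Li whose centroid is nearest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda l: dist2(centroids[l], feature))
```

Training on synthetic pairs such as (26, 0.1) and (24, 0.2) for the on-table class versus (8, 1.5) and (10, 1.2) for the not-on-table class reproduces the expected behavior: higher mean-cross with lower variance classifies as on a stationary or stable surface.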
- In an embodiment, a method includes: generating, by an accelerometer of an electronic device, first sensor data over an acquisition time window; generating, by a gyroscope of the electronic device, second sensor data over the acquisition time window; determining, by a first feature detection circuit, at least one first orientation-independent feature for the acquisition time window based on the first sensor data; determining, by a second feature detection circuit, at least one second orientation-independent feature for the acquisition time window based on the second sensor data; and executing, by a classification circuit, a machine learning classification to determine whether or not the electronic device is located on a stationary surface based on the at least one first orientation-independent feature and the at least one second orientation-independent feature.
- In an embodiment, an electronic device includes a detection system. The detection system includes: an accelerometer configured to generate accelerometer data indicative of a first type of movement of an electronic device; a first feature detection circuit coupled to an output of the accelerometer and configured to determine at least one orientation-independent feature based on the accelerometer data; and a classifying circuit configured to determine whether or not the electronic device is located on a stationary surface based on the at least one orientation-independent feature.
- Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the embodiments disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or combinations of both. The devices and processing systems described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
- The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a processor, a digital signal processor (DSP), an Application Specific Integrated Circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- The embodiments disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.
- While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims encompass any such modifications or embodiments.
Claims (24)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/175,328 US11099208B2 (en) | 2018-10-30 | 2018-10-30 | System and method for determining whether an electronic device is located on a stationary or stable surface |
CN201921835259.2U CN210721520U (en) | 2018-10-30 | 2019-10-29 | Detection system and electronic device |
CN201911040360.3A CN111198281B (en) | 2018-10-30 | 2019-10-29 | System and method for determining whether an electronic device is located on a stationary or stable surface |
EP19206346.9A EP3647905B1 (en) | 2018-10-30 | 2019-10-30 | System and method for determining whether an electronic device is located on a stationary or stable surface |
US17/375,297 US11719718B2 (en) | 2018-10-30 | 2021-07-14 | System and method for determining whether an electronic device is located on a stationary or stable surface |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/175,328 US11099208B2 (en) | 2018-10-30 | 2018-10-30 | System and method for determining whether an electronic device is located on a stationary or stable surface |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/375,297 Continuation US11719718B2 (en) | 2018-10-30 | 2021-07-14 | System and method for determining whether an electronic device is located on a stationary or stable surface |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200132717A1 true US20200132717A1 (en) | 2020-04-30 |
US11099208B2 US11099208B2 (en) | 2021-08-24 |
Family
ID=68424699
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/175,328 Active 2039-09-27 US11099208B2 (en) | 2018-10-30 | 2018-10-30 | System and method for determining whether an electronic device is located on a stationary or stable surface |
US17/375,297 Active 2038-11-13 US11719718B2 (en) | 2018-10-30 | 2021-07-14 | System and method for determining whether an electronic device is located on a stationary or stable surface |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/375,297 Active 2038-11-13 US11719718B2 (en) | 2018-10-30 | 2021-07-14 | System and method for determining whether an electronic device is located on a stationary or stable surface |
Country Status (3)
Country | Link |
---|---|
US (2) | US11099208B2 (en) |
EP (1) | EP3647905B1 (en) |
CN (2) | CN111198281B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11099208B2 (en) | 2018-10-30 | 2021-08-24 | Stmicroelectronics S.R.L. | System and method for determining whether an electronic device is located on a stationary or stable surface |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2582665B (en) * | 2019-03-29 | 2021-12-29 | Advanced Risc Mach Ltd | Feature dataset classification |
CN111772639B (en) * | 2020-07-09 | 2023-04-07 | 深圳市爱都科技有限公司 | Motion pattern recognition method and device for wearable equipment |
US20230112510A1 (en) * | 2021-10-12 | 2023-04-13 | Target Brands, Inc. | Beacon system |
EP4357889A1 (en) * | 2022-10-20 | 2024-04-24 | STMicroelectronics S.r.l. | System and method for determining whether an electronic device is located on a stationary or stable surface |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012503194A (en) | 2008-09-23 | 2012-02-02 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | How to process measurements from accelerometers |
WO2014089119A1 (en) | 2012-12-03 | 2014-06-12 | Navisens, Inc. | Systems and methods for estimating the motion of an object |
US9990479B2 (en) * | 2014-12-27 | 2018-06-05 | Intel Corporation | Technologies for authenticating a user of a computing device based on authentication context state |
WO2017100641A1 (en) | 2015-12-11 | 2017-06-15 | SomniQ, Inc. | Apparatus, system, and methods for interfacing with a user and/or external apparatus by stationary state detection |
US11020058B2 (en) | 2016-02-12 | 2021-06-01 | Qualcomm Incorporated | Methods and devices for calculating blood pressure based on measurements of arterial blood flow and arterial lumen |
CN106681478A (en) | 2017-01-04 | 2017-05-17 | 北京邮电大学 | Simple scheme for distinguishing state of portable wearable device |
US11099208B2 (en) | 2018-10-30 | 2021-08-24 | Stmicroelectronics S.R.L. | System and method for determining whether an electronic device is located on a stationary or stable surface |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11099208B2 (en) | 2018-10-30 | 2021-08-24 | Stmicroelectronics S.R.L. | System and method for determining whether an electronic device is located on a stationary or stable surface |
US11719718B2 (en) | 2018-10-30 | 2023-08-08 | Stmicroelectronics S.R.L. | System and method for determining whether an electronic device is located on a stationary or stable surface |
Also Published As
Publication number | Publication date |
---|---|
CN210721520U (en) | 2020-06-09 |
EP3647905C0 (en) | 2024-01-03 |
CN111198281A (en) | 2020-05-26 |
US11099208B2 (en) | 2021-08-24 |
US11719718B2 (en) | 2023-08-08 |
US20210341512A1 (en) | 2021-11-04 |
EP3647905B1 (en) | 2024-01-03 |
CN111198281B (en) | 2022-09-30 |
EP3647905A1 (en) | 2020-05-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11719718B2 (en) | System and method for determining whether an electronic device is located on a stationary or stable surface | |
US10802572B2 (en) | System and method of determining whether an electronic device is in contact with a human body | |
US20160174913A1 (en) | Device for health monitoring and response | |
EP3232312B1 (en) | Method, apparatus and terminal device for setting interrupt threshold for fingerprint identification device | |
CN107357411B (en) | Electronic device | |
US20220335127A1 (en) | Side-channel exploit detection | |
US10083287B2 (en) | Fingerprint sensing device and electronic device including the same | |
US10914773B2 (en) | Resolution adjustment for capacitive touch sensor | |
US20220136909A1 (en) | Method and device for temperature detection and thermal management based on power measurement | |
CN111052035A (en) | Electronic device and operation control method thereof | |
CN108351715B (en) | Resident sensor equipment applied to human body touch | |
US9323498B2 (en) | Multiplier circuit with dynamic energy consumption adjustment | |
JP2018536215A (en) | Method with missing finger detection and fingerprint detection device | |
WO2014164750A1 (en) | Management of exterior temperatures encountered by user of a portable electronic device using multiple heat-rejection elements | |
CN109991896B (en) | Robot falling prediction method and device and storage device | |
TWI596512B (en) | Electronic devices and methods for utilizing acceleration event signatures and non-transitory computer-readable medium | |
US11429178B2 (en) | Electronic device and method for determining operating frequency of processor | |
US20240134468A1 (en) | System and method for determining whether an electronic device is located on a stationary or stable surface | |
EP4357889A1 (en) | System and method for determining whether an electronic device is located on a stationary or stable surface | |
WO2023136873A1 (en) | Diffusion-based handedness classification for touch-based input | |
CN109066870B (en) | Charging management method, device, medium and electronic equipment applying method | |
CN108700938B (en) | Method, device and equipment for detecting electronic equipment approaching human body | |
Skoglund et al. | Activity tracking using ear-level accelerometers | |
WO2014159398A1 (en) | Management of exterior temperatures encountered by user of a portable electronic device by reducing heat generation by a component | |
US11460928B2 (en) | Electronic device for recognizing gesture of user from sensor signal of user and method for recognizing gesture using the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: STMICROELECTRONICS S.R.L., ITALY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RIVOLTA, STEFANO PAOLO;RIZZARDINI, FEDERICO;REEL/FRAME:047359/0854 Effective date: 20181030 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: AWAITING TC RESP, ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |