CN113126758A - Hand-in-hand state identification method and device - Google Patents

Hand-in-hand state identification method and device

Info

Publication number
CN113126758A
Authority
CN
China
Prior art keywords
hand
intelligent wearable
wearable device
data
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110339772.8A
Other languages
Chinese (zh)
Inventor
闻启栋
汤奕
谭敏刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liyang Research Institute of Southeast University
Original Assignee
Liyang Research Institute of Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liyang Research Institute of Southeast University filed Critical Liyang Research Institute of Southeast University
Priority to CN202110339772.8A priority Critical patent/CN113126758A/en
Publication of CN113126758A publication Critical patent/CN113126758A/en
Pending legal-status Critical Current

Classifications

    • G06F3/014: Hand-worn input/output arrangements, e.g. data gloves
    • G06F3/015: Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G06F3/0346: Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF pointers using gyroscopes, accelerometers or tilt-sensors
    • G06N20/00: Machine learning
    • G06F2203/011: Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Neurosurgery (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Dermatology (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a hand-in-hand state identification method and device, applied to intelligent wearable devices worn on users' wrists. The method comprises the following steps: acquiring attitude data, orientation data and target signal strength data when a first intelligent wearable device and a second intelligent wearable device in a connected state are close to each other; processing the attitude data, orientation data and target signal strength data to obtain target characteristic parameter values; identifying the hand-in-hand state based on the intelligent wearable devices according to the target characteristic parameter values to obtain a first recognition result; identifying the muscle electrical signal of the user's wrist according to the first recognition result to obtain a second recognition result; and counting, displaying and uploading the first and second recognition results. Applied between couples, the invention can detect the hand-in-hand state, record the number of hand-holds, and encourage couples to hold hands more often, promoting a closer and happier relationship.

Description

Hand-in-hand state identification method and device
Technical Field
The invention relates to the technical field of gesture recognition, and in particular to a method and a device for recognizing a hand-in-hand state.
Background
As users increasingly demand intelligent wearable devices with different functions, the types of intelligent wearable devices on the market have diversified, and such devices have gradually become essential companions to mobile terminals such as mobile phones. As their functionality continues to expand and grow richer, intelligent wearable devices have also become popular gifts. Existing devices on the market realize multiple functions, such as tracking the wearer's sleep quality, recording the wearer's motion state and monitoring the wearer's health, but no intelligent wearable device can recognize a hand-in-hand state. Existing devices can only acquire the wearer's behavioral characteristic data and physiological parameter data through sensors to judge the user's motion state and health condition; they neither identify nor judge the hand-in-hand state, so their functionality is limited. Moreover, some devices identify the motion state only from motion data collected by a single sensor, so the data source is narrow and the state cannot be accurately identified.
To sum up, current intelligent wearable devices mainly suffer from: few functions, inability to encourage couples to hold hands more often, a single data source, and inaccurate identification of the hand-in-hand state.
Disclosure of Invention
In order to solve the above problems, the present invention provides a method and a device for identifying a hand-in-hand state.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a hand-in-hand state identification method is applied to an intelligent wearable device worn on a wrist of a user, the intelligent wearable device comprises a first intelligent wearable device and a second intelligent wearable device, and the method comprises the following steps:
acquiring attitude data, orientation data and target signal intensity data when the first intelligent wearable device and the second intelligent wearable device which are in a connection state are close to each other;
processing the attitude data, the azimuth data and the target signal intensity data to obtain target characteristic parameter values;
according to the target characteristic parameter value, identifying the hand-in-hand state based on the intelligent wearable equipment to obtain a first identification result;
according to the first identification result, identifying the muscle electric signal of the wrist of the user to obtain a second identification result;
and counting, displaying and uploading the first recognition result and the second recognition result.
Further, the method for implementing the connection state comprises the following steps:
judging whether the first intelligent wearable device and the second intelligent wearable device are in a wearing mode;
if yes, starting a primary Bluetooth device in the first intelligent wearable device and a secondary Bluetooth device in the second intelligent wearable device;
the first intelligent wearable device sends key information to the secondary Bluetooth device of the second intelligent wearable device through its primary Bluetooth device;
the second intelligent wearable device decrypts the key information, acquires decryption information, feeds the decryption information back to the first intelligent wearable device, and if the feedback result is positive, the first intelligent wearable device and the second intelligent wearable device establish a connection state.
Further, the step of judging whether the first intelligent wearable device and the second intelligent wearable device are in the wearing mode includes:
acquiring heart rate data of a user through the first intelligent wearable device and the second intelligent wearable device;
and judging whether the heart rate data value is between a preset heart rate data upper limit threshold value and a preset heart rate data lower limit threshold value, if so, judging that the first intelligent wearable device and the second intelligent wearable device are in a wearing mode.
Further, the posture data is angle value data of the user's wrist relative to coordinate axes, and the coordinate axes comprise an X axis, a Y axis and a Z axis which are perpendicular to each other; wherein the X axis corresponds to geomagnetic true north, the Y axis corresponds to geomagnetic true east, and the Z axis is perpendicular to the bracelet screen; the orientation data is angle data between the first intelligent wearable device and the second intelligent wearable device; the target signal strength data is signal strength data between the first intelligent wearable device and the second intelligent wearable device.
Further, the step of obtaining the target characteristic parameter value includes:
processing the attitude data, the azimuth data and the target signal intensity data to obtain an initial characteristic parameter value;
and processing the initial characteristic parameter value by adopting a gradient descent algorithm to obtain a target characteristic parameter value for identifying the hand-in-hand state.
Further, the initial characteristic parameter value is obtained by the following formula:
Y = aX1 + bX2 + cX3 + dX4 + eX5
wherein Y is the initial characteristic parameter value and takes a fixed constant value; X1 is the angle of the user's wrist relative to the X-axis direction; X2 is the angle of the user's wrist relative to the Y-axis direction; X3 is the angle of the user's wrist relative to the Z-axis direction; X4 is the target signal strength data value; X5 is the orientation data value; a is the first parameter value, b is the second parameter value, c is the third parameter value, d is the fourth parameter value, and e is the fifth parameter value.
Further, the step of obtaining the first recognition result includes:
substituting the initial characteristic parameter value into W = a1X1+b1X2+c1X3+d1X4+e1X5In the method, a hand-in-hand recognition model is established, wherein W is a target characteristic parameter value, the value of W is a fixed constant value, and X is a constant value1Is the angle value of the user's wrist with respect to the X-axis direction, X2Is the angle value, X, of the user's wrist with respect to the Y-axis direction3Is the angle value, X, of the user's wrist with respect to the Z-axis direction4For target signal strength data values, X5As a result of the orientation value,a1is a first initial characteristic parameter value, b1Is a second initial characteristic parameter value, c1Is the third initial characteristic parameter value, d1Is a fourth initial characteristic parameter value, e1Is a fifth initial characteristic parameter value;
inputting the acquired attitude data, orientation data and target signal strength data into the hand-in-hand recognition model, wherein if the obtained target characteristic parameter value W is equal to the preset fixed constant value, the first recognition result is that the hand-hold is successful, and if the obtained target characteristic parameter value W is not equal to the preset fixed constant value, the first recognition result is that the hand-hold failed.
Further, the specific steps of identifying the muscle electrical signal of the wrist of the user according to the first identification result and acquiring a second identification result include:
judging whether the first identification result is a successful hand-hold;
if the first identification result is a successful hand-hold, detecting whether the muscle electrical signal of the user's wrist reaches a preset lower threshold of muscle electrical signal data;
and taking the detection result as a second identification result.
Further, the step of counting, displaying and uploading the first recognition result and the second recognition result comprises:
counting a second identification result in a preset time period to obtain hand-in-hand attribute information, wherein the hand-in-hand attribute information comprises hand-in-hand time and hand-in-hand times;
displaying the hand-in-hand attribute information on electronic screens of the first intelligent wearable device and the second intelligent wearable device respectively;
and uploading the hand-in-hand attribute information to a cloud server for analysis according to the user requirements.
A hand-in-hand state recognition device, comprising:
the acquisition unit is used for acquiring attitude data and target signal intensity data when the first intelligent wearable device and the second intelligent wearable device in the connection state are close to each other;
the communication unit is used for establishing a connection relation between the first intelligent wearable device and the second intelligent wearable device;
the data processing unit is used for identifying the hand-in-hand state based on the intelligent wearable devices and acquiring a first identification result and a second identification result;
the storage unit is used for storing the acquired first identification result and the acquired second identification result;
and the display unit is used for displaying the acquired first recognition result and the second recognition result on a screen.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a hand-in-hand state identification method and device, intelligent wearing equipment using the method can be applied to couples, can detect the hand-in-hand state among the couples and record the hand-in-hand times, stimulates the hand-in-hand frequency among the couples, and promotes the couples to be more intimate and pleasant, and can also be called as couple wearing equipment.
Drawings
FIG. 1 is a flowchart illustrating a method for identifying a hand-in-hand state according to an embodiment of the invention;
fig. 2 is a structural diagram of a hand-in-hand state recognition device according to an embodiment of the invention.
In the figure: 11. acquisition unit; 12. communication unit; 13. data processing unit; 14. storage unit; 15. display unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In one aspect, an embodiment of the present invention provides a hand-in-hand state identification method, as shown in fig. 1, the method is applied to an intelligent wearable device worn on a wrist of a user, the intelligent wearable device is a bracelet or an intelligent watch, the intelligent wearable device includes a first intelligent wearable device and a second intelligent wearable device, and the method includes the following steps:
S1: acquiring attitude data, orientation data and target signal intensity data when the first intelligent wearable device and the second intelligent wearable device which are in a connection state are close to each other;
in this step, the connection state refers to a state in which the first intelligent wearable device and the second intelligent wearable device are connected through respective built-in bluetooth devices, the attitude data can be obtained through auxiliary sensors such as a three-axis gyroscope, a three-axis accelerometer (IMU), etc., the embedded low-power ARM processor outputs calibrated angular velocity, acceleration, magnetic data, etc., the motion attitude measurement is performed through a sensor data algorithm based on quaternion, the zero-drift three-dimensional attitude data expressed by quaternion, euler angle, etc. is output in real time, the orientation data is obtained through a three-axis electronic compass, the orientation data represents angle data between the first intelligent wearable device and the second intelligent wearable device, the target signal strength data refers to signal strength data (RSSI) measured after the first intelligent wearable device and the second intelligent wearable device are connected through respective built-in bluetooth devices, the unit of the signal strength data is dbm, in the bluetooth device, the signal strength data can be directly understood as the strength of a received bluetooth signal, RSSI = 10 × log P, P represents the received signal power, bluetooth can transmit a signal, the received signal power can be affected by different distances, and assuming that the maximum value of the transmission power is 1mw, the RSSI value is 0, that is, when the distance between the first smart wearable device and the second smart wearable device is infinitely close, that is, the RSSI value obtained in an ideal state is 0, but in practice, the ideal state basically does not exist, so the RSSI values are basically negative numbers, and therefore, when the distance between the first smart wearable device and the second smart wearable device is very close, the bluetooth signal strength value between the bracelets is about-50 dbm.
S2: processing the attitude data, the azimuth data and the target signal intensity data to obtain target characteristic parameter values;
S3: according to the target characteristic parameter value, identifying the hand-in-hand state based on the intelligent wearable equipment to obtain a first identification result;
S4: according to the first identification result, identifying the muscle electric signal of the wrist of the user to obtain a second identification result;
in the step, the human muscle electrical signals of the wrist of the user are collected through the electrodes, the time domain characteristics and the frequency domain characteristics of the human muscle electrical signals are obtained, and a regression model is constructed; wherein the content of the first and second substances,
time domain characteristics: mean + root mean + mean/root mean
Frequency domain characteristics: frequency + phase + amplitude
A regression model: z = k1x1+k2x2 + k3x3 + k4x4 + k5x5 + k6x6
k1Is the mean weight, k2Is the root mean square weight, k3Is the mean/root mean weight, k4As a frequency weight, k5Is a phase weight, k6Is the amplitude weight; and analyzing data information contained in the human muscle electric signals, judging whether the wrist of the user has the muscle electric signals which can be sent out when the user holds the hand in hand, if so, judging that the hand holding is successful, and if not, judging that the hand holding is failed.
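A minimal Python sketch of this feature extraction and linear score, under two assumptions: that the frequency-domain triple is taken from the dominant spectral component, and that the weights k1..k6 are fitted elsewhere on labeled data. All names here are illustrative:

```python
import numpy as np

def emg_features(signal: np.ndarray, fs: float) -> np.ndarray:
    # Time-domain features: mean, RMS, mean/RMS ratio.
    mean = float(np.mean(signal))
    rms = float(np.sqrt(np.mean(signal ** 2)))
    # Frequency-domain features: frequency, phase and amplitude of the
    # dominant (non-DC) spectral component.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    peak = int(np.argmax(np.abs(spectrum[1:]))) + 1   # skip the DC bin
    return np.array([mean, rms, mean / rms if rms else 0.0,
                     freqs[peak], float(np.angle(spectrum[peak])),
                     float(np.abs(spectrum[peak]))])

def emg_score(x: np.ndarray, k: np.ndarray) -> float:
    # Z = k1*x1 + k2*x2 + k3*x3 + k4*x4 + k5*x5 + k6*x6
    return float(np.dot(k, x))

# toy usage: a 1-second window sampled at 1 kHz
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
window = 0.2 * np.sin(2 * np.pi * 60 * t) + 0.05   # 60 Hz burst + offset
print(emg_score(emg_features(window, fs=1000.0), k=np.ones(6)))
```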
S5: and counting, displaying and uploading the first recognition result and the second recognition result.
Specifically, the method for implementing the connection state comprises the following steps:
S101: judging whether the first intelligent wearable device and the second intelligent wearable device are in a wearing mode;
S102: if yes, starting the primary Bluetooth device in the first intelligent wearable device and the secondary Bluetooth device in the second intelligent wearable device;
In this step, the primary Bluetooth device in the first intelligent wearable device and the secondary Bluetooth device in the second intelligent wearable device are opened manually, the Bluetooth search-and-pair mode is started, and the Bluetooth search list within a certain distance is acquired.
S103: the first intelligent wearable device sends key information to a secondary Bluetooth device of the second intelligent wearable device through the primary Bluetooth device;
S104: the second intelligent wearable device decrypts the key information, acquires decryption information, feeds the decryption information back to the first intelligent wearable device, and if the feedback result is positive, the first intelligent wearable device and the second intelligent wearable device establish a connection state.
In this step, within the Bluetooth search list, the second intelligent wearable device may be connected actively by the first intelligent wearable device, or the first intelligent wearable device may be connected actively by the second. For example, when the first intelligent wearable device initiates the connection, it sends key information through its primary Bluetooth device to the secondary Bluetooth device of the second intelligent wearable device; the second intelligent wearable device receives the key information, the preset password is input to decrypt it, and the decryption information is fed back to the first intelligent wearable device. If the input password is correct, successful pairing is displayed, and the two devices establish a connection relationship through their Bluetooth devices.
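A toy illustration of this password-gated pairing handshake; the patent does not specify the cipher used, so the hash-based scheme and all names below are pure assumptions:

```python
from hashlib import sha256
from secrets import token_hex

def make_key_info(password: str, nonce: str) -> str:
    # "Key information" the first device sends: a digest that only a
    # holder of the same preset password can reproduce.
    return sha256((password + nonce).encode()).hexdigest()

# first device issues the key information with a fresh nonce
nonce = token_hex(8)
key_info = make_key_info("preset-password", nonce)

# second device answers with the password its wearer entered; a matching
# digest is the positive feedback that establishes the connection state
entered_password = "preset-password"
paired = make_key_info(entered_password, nonce) == key_info
print("pairing successful" if paired else "pairing failed")
```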
In an embodiment, the step of determining whether the first smart wearable device and the second smart wearable device are in the wearing mode includes:
S201: acquiring heart rate data of a user through the first intelligent wearable device and the second intelligent wearable device;
in this step, the heart rate, which refers to the frequency of the heart beats, is usually characterized by the number of beats Per minute, i.e., BPM (beats Per Minute). The resting heart rate of normal adult generally is 60 ~ 100BPM, and the heart rate interval of adult's ideal is 55 ~ 70BPM, can pass through the built-in photoelectric sensor of intelligence wearing equipment acquires heart rate data, can also acquire the person's of wearing heart rate data through methods such as electrocardiosignal method, arterial blood pressure method in addition to judge the wearing mode of bracelet according to the heart rate data who acquires.
S202: and judging whether the heart rate data value is between a preset heart rate data upper limit threshold value and a preset heart rate data lower limit threshold value, and if so, judging that the first intelligent wearable device and the second intelligent wearable device are in a wearing mode.
In this step, the acquired heart rate data is processed against a preset heart rate upper threshold and lower threshold. For example, with the upper threshold preset to 150 and the lower threshold to 20, an acquired heart rate value of 50 lies between the two thresholds, so the intelligent wearable device is determined to be in the wearing mode, and the processing of the attitude data, orientation data and target signal strength data is executed.
S203: if the heart rate data value is not between the upper and lower heart rate thresholds, the bracelet is judged to be in a non-wearing mode, and the machine learning model does not process the attitude data, orientation data and target signal strength data.
In this step, for example, an acquired heart rate value of 10 is not between the upper and lower thresholds, so the intelligent wearable device is judged to be in the non-wearing mode and the machine learning model does not process the attitude data, orientation data and target signal strength data; that is, the hand-in-hand state is not identified in the non-wearing mode, which reduces program execution, wear and power consumption.
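As a sketch, this wearing-mode gate reduces to a range test on the heart rate value, using the example thresholds of 20 and 150 BPM from above; the function name is illustrative:

```python
def is_worn(heart_rate_bpm: float,
            lower_bpm: float = 20.0, upper_bpm: float = 150.0) -> bool:
    # Wearing mode is assumed only when the measured heart rate lies
    # strictly between the preset lower and upper thresholds.
    return lower_bpm < heart_rate_bpm < upper_bpm

print(is_worn(50.0))   # True  -> wearing mode, run recognition
print(is_worn(10.0))   # False -> non-wearing mode, skip processing
```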
In the embodiment, the posture data is angle value data of the user's wrist relative to coordinate axes, and the coordinate axes comprise an X axis, a Y axis and a Z axis which are perpendicular to each other; wherein the X axis corresponds to geomagnetic true north, the Y axis corresponds to geomagnetic true east, and the Z axis is perpendicular to the bracelet screen; the orientation data is angle data between the first intelligent wearable device and the second intelligent wearable device; the target signal strength data is signal strength data between the first intelligent wearable device and the second intelligent wearable device.
Specifically, the first intelligent wearable device and the second intelligent wearable device are both provided with built-in attitude sensors and used for collecting angle value data of the wrist relative to an X axis, a Y axis and a Z axis, and the collected angle value data is used as one of indexes for judging the hand-in-hand state.
When the intelligent wearable device is worn and in a hand-in-hand state, the wrist can form a certain angle, the angle value is azimuth data, the azimuth data is obtained through an electronic compass sensor arranged in the intelligent wearable device, and the obtained azimuth data is also used as one of indexes for judging the hand-in-hand state.
Both the first and second intelligent wearable devices have a built-in signal strength acquisition module, through which the signal strength data between the two devices is acquired. The signal strength data is a specific numerical value representing the relative distance between the first and second intelligent wearable devices: the larger the signal strength value, the closer the two devices are, and conversely the farther apart.
In an embodiment, the step of obtaining the target feature parameter value includes:
S301: processing the attitude data, the azimuth data and the target signal intensity data to obtain an initial characteristic parameter value;
S302: and processing the initial characteristic parameter value by adopting a gradient descent algorithm to obtain a target characteristic parameter value for identifying the hand-in-hand state.
In an embodiment, the initial characteristic parameter value is obtained by:
Y = aX1 + bX2 + cX3 + dX4 + eX5
wherein Y is the initial characteristic parameter value and takes a fixed constant value; X1 is the angle of the user's wrist relative to the X-axis direction; X2 is the angle of the user's wrist relative to the Y-axis direction; X3 is the angle of the user's wrist relative to the Z-axis direction; X4 is the target signal strength data value; X5 is the orientation data value; a is the first parameter value, b is the second parameter value, c is the third parameter value, d is the fourth parameter value, and e is the fifth parameter value.
In this step, for example, a first group of data is obtained, comprising five data matrices whose values are X1, X2, X3, X4 and X5 respectively. This group of data is substituted into Y = aX1 + bX2 + cX3 + dX4 + eX5, where Y is a fixed constant (for example, Y equals 1), and solving yields a first set of initial characteristic parameter values. Similarly, the same operation on a second group of data yields a second set of initial characteristic parameter values, and so on, giving multiple sets of initial characteristic parameter values that together form the initial characteristic parameter value data set.
In this step, the gradient descent algorithm is a commonly used optimization algorithm in machine learning model training. The model Y = aX1 + bX2 + cX3 + dX4 + eX5 can be written as an objective function Y(X1, X2, X3, X4, X5) = aX1 + bX2 + cX3 + dX4 + eX5. The gradient of the objective function with respect to the parameters a, b, c, d and e is the direction in which the function rises fastest, so each group of parameters is advanced by a fixed step along the direction opposite to the gradient, decreasing the objective until it converges to the fixed constant value; the characteristic parameter values at convergence are taken as the target characteristic parameter values.
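The following sketch fits the five parameters by gradient descent on a squared-error objective, pushing aX1 + ... + eX5 toward the fixed constant Y for every recorded hand-in-hand sample. The loss choice, learning rate and step count are assumptions; the patent only states that gradient descent is used:

```python
import numpy as np

def fit_parameters(X: np.ndarray, y_const: float = 1.0,
                   lr: float = 1e-3, steps: int = 5000) -> np.ndarray:
    # X: one row per hand-in-hand sample, columns (X1, X2, X3, X4, X5).
    # Minimizes mean((X @ w - y_const)^2) over w = (a, b, c, d, e),
    # stepping each parameter against its gradient by a fixed rate.
    w = np.zeros(X.shape[1])
    n = X.shape[0]
    for _ in range(steps):
        residual = X @ w - y_const
        grad = (2.0 / n) * (X.T @ residual)
        w -= lr * grad
    return w

# toy usage with random sample data standing in for recorded readings
rng = np.random.default_rng(0)
samples = rng.normal(size=(200, 5))
print(fit_parameters(samples))
```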
In an embodiment, the step of obtaining the first recognition result includes:
S401: substituting the initial characteristic parameter values into W = a1X1 + b1X2 + c1X3 + d1X4 + e1X5 to establish a hand-in-hand recognition model, wherein W is the target characteristic parameter value and takes a fixed constant value; X1 is the angle of the user's wrist relative to the X-axis direction; X2 is the angle of the user's wrist relative to the Y-axis direction; X3 is the angle of the user's wrist relative to the Z-axis direction; X4 is the target signal strength data value; X5 is the orientation data value; a1 is the first initial characteristic parameter value, b1 is the second initial characteristic parameter value, c1 is the third initial characteristic parameter value, d1 is the fourth initial characteristic parameter value, and e1 is the fifth initial characteristic parameter value;
S402: inputting the acquired attitude data, orientation data and target signal strength data into the hand-in-hand recognition model, wherein if the obtained target characteristic parameter value W is equal to the preset fixed constant value, the first recognition result is that the hand-hold is successful, and if it is not equal to the preset fixed constant value, the first recognition result is that the hand-hold failed.
In this step, W in the hand-in-hand recognition model is a fixed constant value. Assuming W equals 1, the acquired attitude data, orientation data and target signal strength data are input into the model; if the calculated W value is 1, the hand-hold is recognized as successful, and if it is not equal to 1, the hand-hold is recognized as failed.
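A sketch of this decision step; since measured data will rarely hit the preset constant exactly, a small tolerance is added here as an extra assumption beyond the strict equality stated above:

```python
import numpy as np

def first_recognition(sample: np.ndarray, w: np.ndarray,
                      target: float = 1.0, tol: float = 0.05) -> bool:
    # W = a1*X1 + b1*X2 + c1*X3 + d1*X4 + e1*X5; the hand-hold is
    # recognized when W matches the preset constant (within a tolerance).
    return abs(float(np.dot(w, sample)) - target) <= tol

# toy usage: fitted parameters and one sample of (X1..X5) readings
params = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
reading = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
print("hand-in-hand" if first_recognition(reading, params) else "not holding")
```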
In an embodiment, identifying the muscle electrical signal of the user's wrist according to the first identification result and obtaining the second identification result includes:
S501: judging whether the first recognition result is a successful hand-hold;
S502: if the first recognition result is a successful hand-hold, detecting whether the muscle electrical signal of the user's wrist reaches the preset lower threshold of muscle electrical signal data;
S503: taking the detection result as the second recognition result.
In an embodiment, the step of counting, displaying and uploading the second recognition result includes:
counting the second identification results in a preset time period to obtain hand-in-hand attribute information, wherein the hand-in-hand attribute information comprises the hand-in-hand time and the hand-in-hand times;
in this step, can download the function APP that matches with this hand in hand recognition function through the cell-phone that is connected with intelligent wearing equipment, can set up some function items through this APP, for example hand in hand attribute information shows the item, functions such as statistics cycle of hand in hand number of times.
Displaying the hand-in-hand attribute information on electronic screens of the first intelligent wearable device and the second intelligent wearable device respectively;
in this step, for example, the set hand-holding attributes include hand-holding time and hand-holding times, and in a preset time period, for example, 24 hours, information of the hand-holding times in the time period may be displayed on an electronic screen of the intelligent wearable device, so that the information is convenient for a wearer to view.
And uploading the hand-in-hand attribute information to a cloud server for analysis according to the user requirements.
In this step, the hand-in-hand attribute information can be uploaded to a hand-in-hand community on the cloud server, where users can be ranked by their number of hand-holds; a dedicated hand-in-hand attribute information base can also be formed, making it convenient for users to check their current and historical hand-holding records at any time.
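A minimal sketch of the statistics step: counting second-recognition successes inside the preset period (24 hours in the example above). The event representation and function name are assumptions for illustration:

```python
from datetime import datetime, timedelta

def count_hand_holds(success_times: list, window_hours: float = 24.0,
                     now=None) -> int:
    # Count successful second-recognition events within the window,
    # e.g. for display on the bracelet screen or upload to the cloud.
    now = now or datetime.now()
    cutoff = now - timedelta(hours=window_hours)
    return sum(1 for t in success_times if cutoff <= t <= now)

# toy usage
now = datetime(2021, 3, 30, 20, 0)
events = [now - timedelta(hours=h) for h in (1, 5, 30)]
print(count_hand_holds(events, 24.0, now))   # 2 events fall in the window
```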
According to the invention, attitude data, orientation data and target signal strength data are acquired when the first and second intelligent wearable devices in a connected state are close to each other; the attitude data, orientation data and target signal strength data are processed by the machine learning model to obtain target characteristic parameter values for identifying the hand-in-hand state; according to the target characteristic parameter values, a hand-in-hand recognition model based on the intelligent wearable devices is established, the hand-in-hand state based on the paired devices is recognized, and the target recognition result is obtained; and the target recognition result is counted, displayed and selectively uploaded. Because the data source is not single, the accuracy of hand-in-hand state recognition based on intelligent wearable devices is improved, intelligent statistics of the number of hand-holds is realized, the functionality of the devices is expanded, and intimacy between couples is enhanced.
On the other hand, the embodiment of the present invention further provides a hand-in-hand state recognition apparatus, as shown in fig. 2, including an obtaining unit 11, a communication unit 12, a data processing unit 13, a storage unit 14, and a display unit 15, where:
the acquiring unit 11 is configured to acquire gesture data and target signal intensity data when the first intelligent wearable device and the second intelligent wearable device in the connection state are close to each other;
the communication unit 12 is configured to establish a connection relationship between the first intelligent wearable device and the second intelligent wearable device;
the data processing unit 13 is used for identifying the hand-in-hand state based on the intelligent wearable device and acquiring a first identification result;
a storage unit 14 for storing the acquired first recognition result and second recognition result;
and a display unit 15 for performing screen display on the acquired first recognition result and second recognition result.
In this embodiment, the acquiring unit 11 acquires the posture data and the target signal strength data when the first smart wearable device and the second smart wearable device in the connected state are close to each other; establishing a connection relation between the first intelligent wearable device and the second intelligent wearable device through a communication unit 12; identifying the hand-in-hand state through the data processing unit 13, and acquiring the hand-in-hand identification result; storing the acquired target recognition result through a storage unit 14; the acquired target recognition result is screen-displayed through the display unit 15. The invention adopts non-single data, improves the accuracy of hand-in-hand identification based on the intelligent wearable equipment, realizes the intelligent statistics of hand-in-hand times, expands the functionality of the intelligent wearable equipment and enhances the intimacy between couples.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A hand-in-hand state identification method is applied to intelligent wearable devices worn on wrists of users, the intelligent wearable devices comprise a first intelligent wearable device and a second intelligent wearable device, and the method comprises the following steps:
acquiring attitude data, orientation data and target signal intensity data when the first intelligent wearable device and the second intelligent wearable device which are in a connection state are close to each other;
processing the attitude data, the azimuth data and the target signal intensity data to obtain target characteristic parameter values;
according to the target characteristic parameter value, identifying the hand-in-hand state based on the intelligent wearable equipment to obtain a first identification result;
according to the first identification result, identifying the muscle electric signal of the wrist of the user to obtain a second identification result;
and counting, displaying and uploading the first recognition result and the second recognition result.
2. The hand-in-hand state identification method according to claim 1, wherein the implementation method of the connection state comprises the following steps:
judging whether the first intelligent wearable device and the second intelligent wearable device are in a wearing mode;
if yes, starting a primary Bluetooth device in the first intelligent wearable device and a secondary Bluetooth device in the second intelligent wearable device;
the first intelligent wearable device sends key information to a secondary Bluetooth device of the second intelligent wearable device through a primary Bluetooth device;
the second intelligent wearable device decrypts the key information, acquires decryption information, feeds the decryption information back to the first intelligent wearable device, and if the feedback result is positive, the first intelligent wearable device and the second intelligent wearable device establish a connection state.
3. The hand-in-hand state identification method according to claim 2, wherein the step of judging whether the first intelligent wearable device and the second intelligent wearable device are in the wearing mode comprises:
acquiring heart rate data of a user through the first intelligent wearable device and the second intelligent wearable device;
and judging whether the heart rate data value is between a preset heart rate data upper limit threshold value and a preset heart rate data lower limit threshold value, if so, judging that the first intelligent wearable device and the second intelligent wearable device are in a wearing mode.
4. The hand-in-hand state recognition method according to claim 1, characterized in that: the gesture data is angle value data of the user's wrist relative to coordinate axes, and the coordinate axes comprise an X axis, a Y axis and a Z axis which are perpendicular to each other; wherein the X axis corresponds to geomagnetic true north, the Y axis corresponds to geomagnetic true east, and the Z axis is perpendicular to the bracelet screen; the orientation data is angle data between the first intelligent wearable device and the second intelligent wearable device; the target signal strength data is signal strength data between the first intelligent wearable device and the second intelligent wearable device.
5. The hand-in-hand state identification method according to claim 1, wherein the step of obtaining the target characteristic parameter value comprises:
processing the attitude data, the azimuth data and the target signal intensity data to obtain an initial characteristic parameter value;
and processing the initial characteristic parameter value by adopting a gradient descent algorithm to obtain a target characteristic parameter value for identifying the hand-in-hand state.
6. The hand-in-hand state recognition method according to claim 5, characterized in that:
the initial characteristic parameter value is obtained by the following formula:
Y = aX1 + bX2 + cX3 + dX4 + eX5
wherein Y is the initial characteristic parameter value and takes a fixed constant value; X1 is the angle of the user's wrist relative to the X-axis direction; X2 is the angle of the user's wrist relative to the Y-axis direction; X3 is the angle of the user's wrist relative to the Z-axis direction; X4 is the target signal strength data value; X5 is the orientation data value; a is the first parameter value, b is the second parameter value, c is the third parameter value, d is the fourth parameter value, and e is the fifth parameter value.
7. The hand-in-hand state recognition method according to claim 1, wherein the step of obtaining the first recognition result comprises:
substituting the initial characteristic parameter values into W = a1X1 + b1X2 + c1X3 + d1X4 + e1X5 to establish a hand-in-hand recognition model, wherein W is the target characteristic parameter value and takes a fixed constant value; X1 is the angle of the user's wrist relative to the X-axis direction; X2 is the angle of the user's wrist relative to the Y-axis direction; X3 is the angle of the user's wrist relative to the Z-axis direction; X4 is the target signal strength data value; X5 is the orientation data value; a1 is the first initial characteristic parameter value, b1 is the second initial characteristic parameter value, c1 is the third initial characteristic parameter value, d1 is the fourth initial characteristic parameter value, and e1 is the fifth initial characteristic parameter value;
inputting the acquired attitude data, orientation data and target signal strength data into the hand-in-hand recognition model, wherein if the obtained target characteristic parameter value W is equal to the preset fixed constant value, the first recognition result is that the hand-hold is successful, and if the obtained target characteristic parameter value W is not equal to the preset fixed constant value, the first recognition result is that the hand-hold failed.
8. The hand-in-hand state identification method of claim 7, wherein the specific steps of identifying the muscle electrical signal of the wrist of the user according to the first identification result and obtaining the second identification result comprise:
judging whether the first identification result is a successful hand-hold;
if the first identification result is a successful hand-hold, detecting whether the muscle electrical signal of the user's wrist reaches a preset lower threshold of muscle electrical signal data; and taking the detection result as a second identification result.
9. The hand-in-hand state identification method according to claim 1, wherein the step of counting, displaying and uploading the first identification result and the second identification result comprises:
counting a second identification result in a preset time period to obtain hand-in-hand attribute information, wherein the hand-in-hand attribute information comprises hand-in-hand time and hand-in-hand times;
displaying the hand-in-hand attribute information on electronic screens of the first intelligent wearable device and the second intelligent wearable device respectively;
and uploading the hand-in-hand attribute information to a cloud server for analysis according to the user requirements.
10. A hand-in-hand state recognition device is characterized by comprising:
the acquisition unit is used for acquiring attitude data and target signal intensity data when the first intelligent wearable device and the second intelligent wearable device in the connection state are close to each other;
the communication unit is used for establishing a connection relation between the first intelligent wearable device and the second intelligent wearable device;
the data processing unit is used for identifying the hand-in-hand state based on the intelligent wearable equipment and acquiring a first identification result;
the storage unit is used for storing the acquired first identification result and the acquired second identification result;
and the display unit is used for displaying the acquired first recognition result and the second recognition result on a screen.
CN202110339772.8A 2021-03-30 2021-03-30 Hand-in-hand state identification method and device Pending CN113126758A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110339772.8A CN113126758A (en) 2021-03-30 2021-03-30 Hand-in-hand state identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110339772.8A CN113126758A (en) 2021-03-30 2021-03-30 Hand-in-hand state identification method and device

Publications (1)

Publication Number Publication Date
CN113126758A true CN113126758A (en) 2021-07-16

Family

ID=76774606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110339772.8A Pending CN113126758A (en) 2021-03-30 2021-03-30 Hand-in-hand state identification method and device

Country Status (1)

Country Link
CN (1) CN113126758A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104243279A (en) * 2013-06-24 2014-12-24 联想(北京)有限公司 Information processing method and equipment and wearable electronic equipment
CN104767807A (en) * 2015-03-31 2015-07-08 华为技术有限公司 Information transmission method based on wearable devices and related devices
CN105451205A (en) * 2015-11-10 2016-03-30 北京奇虎科技有限公司 Intelligent wearable device matching method and device
CN206062357U (en) * 2016-06-11 2017-04-05 汪胜 A kind of intelligent ring for recording the time of leading along by hand

Similar Documents

Publication Publication Date Title
US11045117B2 (en) Systems and methods for determining axial orientation and location of a user's wrist
US10001386B2 (en) Automatic track selection for calibration of pedometer devices
CN107223247A (en) Method, system and wearable device for obtaining multiple health parameters
Bertolotti et al. A wearable and modular inertial unit for measuring limb movements and balance control abilities
CN110151137B (en) Sleep state monitoring method, device, equipment and medium based on data fusion
KR102564269B1 (en) Electronic apparatus for providing exercise information using biometric information and operating method thereof
CN104808783A (en) Mobile terminal and method of controlling the same
EP2919434B1 (en) Method for determining data source
JP6742380B2 (en) Electronic device
US20170227375A1 (en) Calibration of a primary pedometer device using a secondary pedometer device
CN106705989B (en) step recording method, device and terminal
CN106774861B (en) Intelligent device and behavior data correction method and device
US20230191198A1 (en) Electronic apparatus and operation method for providing of workout guide thereof
US20200268263A1 (en) Electronic device for measuring biometric information and method for operating the same
CN113679339A (en) Sleep monitoring method, device, system and storage medium
US10251598B2 (en) Waistband monitoring analysis for a user
JP2009106374A (en) Display system for gait information
KR20190080598A (en) System for recognizing emotion using biometric data and method thereof
CN113126758A (en) Hand-in-hand state identification method and device
US20230334630A1 (en) Systems and methods for motion measurement drift correction
KR20180073795A (en) Electronic device interworking with smart clothes, operating method thereof and system
CN116019443A (en) Cardiopulmonary resuscitation chest compression compliance detection system and method
EP3308705A1 (en) Body composition measuring device
US11992325B2 (en) Systems, devices and methods for dextrous hand function assessment and therapy
CN107411721A (en) A kind of flexible paste chip wireless monitor meter and its judge flow

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210716