CN115230634B - Method for reminding wearing safety belt and wearable device - Google Patents
Method for reminding wearing safety belt and wearable device
- Publication number
- CN115230634B (application CN202110447235.5A)
- Authority
- CN
- China
- Prior art keywords
- sound
- wearable device
- information
- seat belt
- safety belt
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R22/00—Safety belts or body harnesses in vehicles
- B60R22/48—Control systems, alarms, or interlock systems, for the correct application of the belt or harness
Abstract
Embodiments of the invention disclose a method for reminding a user to wear a seat belt, applied to a wearable device that includes at least one sensor. The method comprises the following steps: when the wearable device recognizes that the user has entered a car, it identifies the wearing condition of the seat belt through the at least one sensor; further, when it recognizes that the seat belt is not worn, it prompts the user to wear the seat belt.
Description
Technical Field
The invention relates to the field of terminal technologies, and in particular to a method for reminding a user to wear a seat belt and a wearable device.
Background
The seat belt is the most effective passive safety device: when a vehicle brakes suddenly in an emergency, the seat belt restrains the driver and passengers in their seats, preventing them from lurching forward and from being thrown out of the vehicle.
However, the proportion of drivers and passengers who actually wear seat belts is not high. Although usage has improved since seat-belt legislation in recent years, many people still take their chances, and the resulting tragic cases are not rare.
Therefore, a method that accurately reminds drivers and passengers to wear the seat belt is needed, so as to protect their safety.
Disclosure of Invention
The embodiment of the invention provides a method for reminding a user to wear a safety belt and a wearable device.
In a first aspect, embodiments of the present application provide a method for reminding a user to wear a seat belt, applied to a wearable device including at least one sensor, the method comprising:
the wearable device recognizes the wearing condition of the safety belt through the at least one sensor when recognizing that the user enters the car;
and prompting the user to wear the safety belt under the condition that the safety belt is not worn by the wearable device.
According to this method, a wearable device worn by the driver or a passenger (also referred to as the user) identifies whether the user has entered a car and whether the seat belt is worn; when the user has entered the car without wearing the seat belt, a prompt is issued, so that the safety of drivers and passengers is protected.
Moreover, compared with a smart seat belt, the method requires no modification of existing seat belts and is applicable to all automobiles. It does not need to capture images of the user, so there is no risk of leaking the user's privacy. It is also highly general: it applies to seat-belt wearing detection and prompting for drivers as well as passengers, and in particular enables detection and prompting for rear-seat passengers.
In one possible implementation, the at least one sensor includes an inertial sensor and a microphone, and the identifying, by the at least one sensor, a wearing condition of the safety belt includes:
the wearable device acquires first motion information in real time through the inertial sensor;
the wearable device collects first sound information in real time through the microphone;
the wearable device determines a wearing condition of the safety belt based on at least one of the first motion information and the first sound information.
In this method, the wearable device detects seat-belt wearing through the first motion information collected by the inertial sensor and/or the first sound information collected by the microphone.
In one possible implementation, the wearable device determines, based on at least one of the first motion information and the first sound information, a wearing condition of a safety belt, specifically including:
the wearable device determining that the seat belt is worn if an action to pull the seat belt is detected based on the first motion information;
the wearable device determines that the seat belt is not worn if an action to pull the seat belt is not detected based on the first motion information.
In one possible implementation, the method further comprises:
the wearable device determines, based on the first motion information, a first displacement of the wearable device in the horizontal-plane direction and a second displacement in the vertical direction;
and when the first displacement is greater than a first length and the second displacement is greater than a second length, determining that an action of pulling the seat belt is detected.
According to the method, the action of pulling the seat belt is detected through the displacement of the wearable device, and once the pulling action is recognized, the seat belt is recognized as worn.
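As a minimal sketch of this displacement check (assuming the first displacement is measured in the horizontal plane and the second in the vertical direction), the code below double-integrates raw accelerometer samples and applies the two length thresholds. The function name, the simple integration, and the numeric thresholds are illustrative assumptions, not values from the patent.

```python
import numpy as np

def detect_pull_action(accel, dt, first_length=0.3, second_length=0.2):
    """Double-integrate acceleration samples (shape (n, 3): x, y horizontal,
    z vertical) to displacement, then apply the two length thresholds."""
    accel = np.asarray(accel, dtype=float)
    velocity = np.cumsum(accel, axis=0) * dt      # acceleration -> velocity
    position = np.cumsum(velocity, axis=0) * dt   # velocity -> displacement
    horizontal = np.linalg.norm(position[-1, :2]) # first displacement
    vertical = abs(position[-1, 2])               # second displacement
    return bool(horizontal > first_length and vertical > second_length)
```

In practice the integration window would be short (a belt pull takes roughly a second) and drift-compensated; the fixed defaults stand in for the patent's first and second lengths.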
In one possible implementation, the wearable device determines, based on at least one of the first motion information and the first sound information, a wearing condition of a safety belt, specifically including:
The wearable device determines that the seat belt is worn in a case where a sound of pulling the seat belt and a sound of inserting the seat belt into a seat belt socket are recognized based on the first sound information;
the wearable device determines that the seat belt is not worn in a case where a sound of pulling the seat belt or a sound of inserting the seat belt into a seat belt socket is not recognized based on the first sound information.
According to the method, the seat belt is determined to be worn only after both the sound of pulling the seat belt and the sound of inserting it into the seat belt socket are recognized, which makes the determination more accurate.
In one possible implementation, the method further comprises:
the wearable device inputs the first sound information into a sound recognition model to obtain the sound type of the first sound information;
when the sound type of the first sound information is the sound of pulling the seat belt, the wearable device inputs the first sound information collected by the microphone after that sound was recognized into the sound recognition model, obtaining the sound type of the later-collected first sound information; and when the sound type of the later-collected first sound information is the sound of inserting the seat belt into the seat belt socket, the sound of inserting the seat belt into the socket is recognized.
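The ordered two-stage check above can be sketched as a small state machine. Here `sound_model` is a stand-in for the patent's sound recognition model — any callable that maps an audio frame to a label; the label strings are illustrative assumptions.

```python
def detect_belt_sounds(frames, sound_model):
    """Return True once a pull-belt sound is followed, in later frames,
    by a buckle-insertion sound, mirroring the ordered check in the text."""
    pull_heard = False
    for frame in frames:
        label = sound_model(frame)
        if not pull_heard:
            if label == "pull_belt":
                pull_heard = True      # stage 1: pulling sound recognized
        elif label == "buckle_insert":
            return True                # stage 2: insertion sound after the pull
    return False
```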
In one possible implementation, the wearable device determines, based on at least one of the first motion information and the first sound information, a wearing condition of a safety belt, specifically including:
the wearable device determines the wearing condition of the safety belt based on a first confidence and a second confidence, wherein the first confidence is used for indicating the probability of detecting that the safety belt is worn based on the first motion information; the second confidence is used to indicate a probability of detecting belt wear based on the first sound information.
In one possible implementation, the wearable device determines a wearing condition of the safety belt based on the first confidence and the second confidence, and specifically includes:
the wearable device determining a weighted sum of the first confidence and the second confidence;
the wearable device determines that the safety belt is worn when the weighted sum is greater than a target threshold;
the wearable device determines that the seat belt is not worn when the weighted sum is not greater than the target threshold.
According to the method, seat-belt wearing recognition combines the first motion information and the first sound information, which makes the recognition more accurate and improves the recognition success rate.
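A minimal sketch of the weighted-sum fusion described above; the weights and target threshold are illustrative assumptions, not values from the patent.

```python
def belt_worn(conf_motion, conf_sound, w_motion=0.5, w_sound=0.5, threshold=0.6):
    """Fuse the first (motion-based) and second (sound-based) confidences
    by a weighted sum and compare against a target threshold."""
    return w_motion * conf_motion + w_sound * conf_sound > threshold
```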
In one possible implementation, before the wearable device identifies the wearing condition of the safety belt by the at least one sensor when the user is identified to enter the car, the method further comprises:
the wearable device acquires second motion information in real time through the inertial sensor;
the wearable device collects second sound information in real time through the microphone;
the wearable device determines that the user enters the vehicle when detecting that the human body state of the user changes from a standing posture to a sitting posture based on the second motion information after detecting the sound of opening the vehicle door based on the second sound information and detecting the sound of closing the vehicle door.
In this vehicle-entry recognition mode, the user is judged to have entered the vehicle only when the operations of opening the door, changing from a standing to a sitting posture, and closing the door are detected in sequence. Combining door opening/closing with the posture change makes the recognition more accurate and the user experience better.
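The ordered sequence (door open, stand-to-sit, door close) can be sketched as a staged matcher over detector outputs; the event labels are illustrative assumptions.

```python
def detect_entered_car(events):
    """`events` is a time-ordered list of labels emitted by the sound and
    motion detectors. Return True only when the three expected events
    occur in the order given in the text."""
    expected = ["door_open", "stand_to_sit", "door_close"]
    stage = 0
    for event in events:
        if event == expected[stage]:
            stage += 1                 # advance only on the next expected event
            if stage == len(expected):
                return True
    return False
```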
In one possible implementation, the method further comprises:
the wearable device extracts characteristics of the second sound information;
the wearable device determines that the sound for opening the vehicle door is detected based on the second sound information when the characteristic of the second sound information matches the characteristic of the sound for opening the vehicle door;
the wearable device extracts features of the second sound information collected by the microphone after the sound of opening the vehicle door is detected based on the second sound information;
and when the features of the later-collected second sound information match the features of the sound of closing the vehicle door, the wearable device determines that the sound of closing the vehicle door is detected.
In one possible implementation, the method further comprises:
the wearable device acquires third motion information in real time through the inertial sensor;
the wearable device collects third sound information in real time through the microphone;
the wearable equipment acquires position information in real time through a positioning system;
the wearable device determines a start-up condition of the vehicle based on at least one of the third motion information, the third sound information, and the position information.
In one possible implementation, the wearable device determines a start-up condition of the vehicle based on at least one of the third motion information, the third sound information, and the location information, including:
when N of the following preset conditions are met, the wearable device determines that the vehicle has started, where N is a positive integer not greater than 3; the preset conditions include:
Detecting an abrupt acceleration change of the wearable device in a horizontal direction based on the third motion information;
detecting a sound of engine start based on the third sound information;
and determining that the displacement of the wearable device is greater than a preset length based on the position information.
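The N-of-3 rule above can be sketched as follows; each argument is the boolean outcome of one preset condition, and the default N = 2 is an illustrative choice (the text only requires N to be a positive integer not greater than 3).

```python
def vehicle_started(accel_jump, engine_sound, moved_far, n=2):
    """Declare vehicle start when at least n of the three preset
    conditions (acceleration jump, engine-start sound, displacement
    beyond the preset length) hold."""
    return sum([accel_jump, engine_sound, moved_far]) >= n
```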
In one possible implementation, the wearable device determines a start-up condition of the vehicle based on at least one of the third motion information, the third sound information, and the location information, including:
the wearable device determines the start-up condition of the vehicle based on a third confidence, a fourth confidence, and a fifth confidence, wherein the third confidence indicates the probability that the vehicle is detected to have started based on the third sound information; the fourth confidence indicates the probability that the vehicle is detected to have started based on the position information; and the fifth confidence indicates the probability that the vehicle is detected to have started based on the third motion information.
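A hedged sketch of this three-confidence variant for vehicle start-up; the equal weights and 0.5 threshold are illustrative assumptions.

```python
def vehicle_start_detected(conf_sound, conf_position, conf_motion,
                           weights=(1/3, 1/3, 1/3), threshold=0.5):
    """Fuse the third (sound), fourth (position), and fifth (motion)
    confidences into one start-up score and threshold it."""
    score = (weights[0] * conf_sound
             + weights[1] * conf_position
             + weights[2] * conf_motion)
    return score > threshold
```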
In a second aspect, embodiments of the present application further provide a wearable device, the wearable device including a processor, a memory, and at least one sensor, the processor being coupled to the memory and the at least one sensor, respectively, the memory being configured to store computer instructions, and the processor being configured to execute the computer instructions stored in the memory, to perform:
Identifying the wearing condition of the safety belt through the at least one sensor when the user is identified to enter the car;
and prompting the user to wear the safety belt when the safety belt is not worn.
In a possible implementation, the at least one sensor includes an inertial sensor and a microphone, and the processor executes the identification of the wearing condition of the seat belt by the at least one sensor, specifically including executing:
acquiring first motion information in real time through the inertial sensor;
collecting first sound information in real time through the microphone;
and determining the wearing condition of the safety belt based on at least one of the first motion information and the first sound information.
In one possible implementation, the processor executes the determining, based on at least one of the first motion information and the first sound information, a wearing condition of the safety belt, specifically including executing:
determining that the seat belt is worn in a case where an action to pull the seat belt is detected based on the first motion information;
if an action of pulling the seat belt is not detected based on the first motion information, it is determined that the seat belt is not worn.
In one possible implementation, the processor further performs:
determining, based on the first motion information, a first displacement of the wearable device in the horizontal-plane direction and a second displacement in the vertical direction;
and when the first displacement is greater than a first length and the second displacement is greater than a second length, determining that an action of pulling the seat belt is detected.
In one possible implementation, the processor executes the determining, based on at least one of the first motion information and the first sound information, a wearing condition of the safety belt, specifically including executing:
determining that the seat belt is worn, in a case where a sound of pulling the seat belt and a sound of inserting the seat belt into a seat belt socket are recognized based on the first sound information;
if the sound of pulling the seat belt or the sound of inserting the seat belt into the seat belt socket is not recognized based on the first sound information, it is determined that the seat belt is not worn.
In one possible implementation, the processor further performs:
inputting the first sound information into a sound recognition model to obtain the sound type of the first sound information;
when the sound type of the first sound information is the sound of pulling the seat belt, inputting the first sound information collected by the microphone after that sound was recognized into the sound recognition model to obtain the sound type of the later-collected first sound information;
and when the sound type of the later-collected first sound information is the sound of inserting the seat belt into the seat belt socket, recognizing the sound of inserting the seat belt into the socket.
In one possible implementation, the processor executes the determining, based on at least one of the first motion information and the first sound information, a wearing condition of the safety belt, specifically including executing:
determining a wearing condition of the safety belt based on a first confidence and a second confidence, wherein the first confidence is used for indicating a probability of detecting that the safety belt is worn based on the first motion information; the second confidence is used to indicate a probability of detecting belt wear based on the first sound information.
In one possible implementation, the processor executes the determining the wearing condition of the safety belt based on the first confidence and the second confidence, specifically includes executing:
determining a weighted sum of the first confidence and the second confidence;
determining that the seat belt is worn when the weighted sum is greater than a target threshold;
and determining that the safety belt is not worn when the weighted sum is not greater than the target threshold.
In one possible implementation, before the processor executes the identifying of the wearing condition of the seat belt by the at least one sensor when the user is entering the vehicle, the processor further executes:
Acquiring second motion information in real time through the inertial sensor;
collecting second sound information in real time through the microphone;
the user is determined to have entered the vehicle when the sound of opening the vehicle door is detected based on the second sound information, a change of the user's posture from standing to sitting is then detected based on the second motion information, and the sound of closing the vehicle door is subsequently detected.
In one possible implementation, the processor further performs:
extracting characteristics of the second sound information;
determining that a door opening sound is detected based on the second sound information when the characteristic of the second sound information matches the characteristic of the door opening sound;
extracting features of the second sound information collected by the microphone after the sound of opening the vehicle door is detected based on the second sound information;
and determining that the sound for closing the vehicle door is detected when the characteristic of the second sound information collected later is matched with the characteristic of the sound for closing the vehicle door.
In one possible implementation, the processor further performs:
acquiring third motion information in real time through the inertial sensor;
collecting third sound information in real time through the microphone;
Collecting position information in real time through a positioning system;
a start condition of the vehicle is determined based on at least one of the third motion information, the third sound information, and the position information.
In one possible implementation, the processor executing the determining a start-up condition of the vehicle based on at least one of the third motion information, the third sound information, and the position information includes executing:
when N of the following preset conditions are met, determining that the vehicle has started, where N is a positive integer not greater than 3; the preset conditions include:
detecting an abrupt acceleration change of the wearable device in a horizontal direction based on the third motion information;
detecting a sound of engine start based on the third sound information;
and determining that the displacement of the wearable device is greater than a preset length based on the position information.
In one possible implementation, the processor performs determining a start-up condition of the vehicle based on at least one of the third motion information, the third sound information, and the position information, including performing:
determining the start-up condition of the vehicle based on a third confidence, a fourth confidence, and a fifth confidence, wherein the third confidence indicates the probability that the vehicle is detected to have started based on the third sound information; the fourth confidence indicates the probability that the vehicle is detected to have started based on the position information; and the fifth confidence indicates the probability that the vehicle is detected to have started based on the third motion information.
For beneficial effects, reference may be made to the description in the first aspect; details are not repeated here.
In a third aspect, embodiments of the present application also provide a computer program product, which when run on a wearable device, causes the wearable device to perform the method according to the first aspect or any one of the implementations of the first aspect.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium comprising computer instructions which, when run on a wearable device, cause the wearable device to perform the method described in the first aspect or any implementation of the first aspect.
For beneficial effects, reference may be made to the description in the first aspect; details are not repeated here.
Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic view of a scenario of a method for reminding a user to wear a safety belt according to an embodiment of the present application;
fig. 2 is a flowchart of a method for reminding a user to wear a safety belt according to an embodiment of the present application;
fig. 3 is a flow chart of a method for identifying a user in a vehicle entering manner according to an embodiment of the present application;
fig. 4A is a schematic flowchart of wearing recognition of a safety belt according to an embodiment of the present application;
fig. 4B is a schematic diagram of an action recognition principle of pulling a safety belt according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of another vehicle start identification provided in an embodiment of the present application;
FIGS. 6A-6B are schematic illustrations of some user interfaces provided by embodiments of the present application;
fig. 7 is a schematic structural diagram of a wearable device according to an embodiment of the present application.
Detailed Description
An application scenario of the present application is described below with reference to fig. 1.
The following embodiments of the present application provide a method for reminding a driver to wear a seat belt: a wearable device worn by the driver (also referred to as the user) identifies whether the driver has entered a car and whether the seat belt is worn, and issues a prompt when it recognizes that the driver has entered the car without wearing the seat belt, so as to protect the driver's safety.
In scenarios where the user gets into the car but does not intend to travel, the user does not need to wear the seat belt. To improve prompt accuracy and avoid false prompts in such scenarios, whether the vehicle has started can be identified before prompting, so that the prompt is issued only after it is recognized that the user has entered the car, has not worn the seat belt, and the vehicle has started.
The wearable device includes at least one sensor, which may include an inertial sensor, a sound collection device (such as a microphone), a heart rate sensor, and the like, so as to realize vehicle-entry recognition of the user, wearing recognition of the seat belt, start-up recognition of the vehicle, and so on. It should be understood that the inertial sensor may be an acceleration sensor, or may be composed of an acceleration sensor, a gyroscope, a magnetic sensor, and the like.
The wearable device may be an electronic device worn on the user's hand, wrist, or arm in use, such as a smart wristband or a smart watch, or an electronic device that can be fixed on the user's hand, wrist, or arm by a wearing accessory, such as a mobile phone; the examples are not listed exhaustively here.
The vehicles in the various embodiments of the application may be automobiles, such as cars, off-road vehicles, electric vehicles, unmanned vehicles, semi-autonomous vehicles, and the like.
The following description is made by way of 4 examples, respectively.
Example 1:
Embodiment 1 of the application provides a method for reminding a driver to wear a seat belt: a wearable device worn by the driver identifies whether the driver has entered a car and whether the seat belt is worn, and issues a prompt when it recognizes that the driver has entered the car without wearing the seat belt.
The method may include the following steps:
user's identification of entry
The wearable device can detect the pose change of the user through at least one sensor and/or detect the door opening/closing sound of the vehicle through the sound signal obtained by the microphone, and then judge whether the user has entered the vehicle by combining one or more of the detection results. If it is identified that the user has entered the vehicle, the seat belt wearing identification process can be started, or the seat belt wearing identification process and the vehicle start-up identification process can be started separately; if the user is not identified as having entered the vehicle, the user vehicle-entering identification process can be re-executed, executed at a preset frequency, or executed when a trigger condition is detected. The trigger condition may be that the motion sensor of the wearable device detects that the user moves, is walking, or stops walking, or may be another trigger condition, which is not limited here. The at least one sensor may be an inertial sensor, such as an accelerometer, a gyroscope or a gravity sensor, a heart rate sensor, or a combination of an inertial sensor and a heart rate sensor.
For a specific implementation of the user vehicle-entering identification, refer to the related description of the method shown in embodiment 2 below; it is not repeated here.
(II) Wearing identification of the seat belt
After recognizing that the user has entered the car, the wearable device can detect the user's action of pulling the seat belt through the motion information acquired by the inertial sensor, and/or detect the sound of pulling the seat belt and the sound of the seat belt being inserted into the seat belt socket (namely, the seat belt being buckled) through the sound signal acquired by the microphone, and judge whether the seat belt is worn by combining one or more of the detection results.
In some embodiments, if the seat belt is worn, the wearable device may terminate the vehicle start-up identification process and end the flow, and may also prompt the user that the seat belt is worn; if the seat belt is not worn, the wearable device can execute the flow of prompting the user to wear the seat belt.
In other embodiments, the wearable device may also execute the process of prompting the user to wear the seat belt only if it is determined that the seat belt is not worn and the vehicle has started.
For a specific implementation of the seat belt wearing identification, reference may be made to the related description of the method shown in embodiment 3 below, and details are not repeated here.
(III) Vehicle start-up identification
After the wearable device recognizes that the user has entered the vehicle, it can detect movement of the vehicle through the motion information acquired by the inertial sensor and/or detect the starting sound of the vehicle through the sound signal acquired by the microphone, and judge whether the vehicle has started by combining one or more of the detection results. In the case where the vehicle has started and the seat belt is not worn, the process of prompting the user to wear the seat belt may be executed. When the vehicle has not started, the wearable device can repeatedly execute the vehicle start-up identification process; if vehicle start-up is not detected within a preset time period, such as 5 minutes or 10 minutes, the process can be ended and the flow returns to, or triggers, the user vehicle-entering identification process.
For a specific implementation of the vehicle start-up identification, reference may be made to the related description of the method shown in embodiment 4 below, which is not repeated here.
(IV) Prompting the user to wear the seat belt
In the case where the seat belt is not worn and the vehicle has started, the wearable device may prompt the user to wear the seat belt, after which the flow ends. The manner of prompting includes, but is not limited to, one or more of a voice prompt, a vibration prompt, a display prompt, an indicator light prompt, and the like.
First, compared with an intelligent seat belt, the method does not require any modification of existing seat belts and is applicable to all automobiles; second, no user image needs to be acquired, so there is no risk of user privacy disclosure; third, the method has high universality: it can be used for seat belt wearing detection and prompting for the driver, is also applicable to passengers, and in particular realizes seat belt wearing detection and prompting for rear-seat passengers.
Example 2:
As shown in fig. 3, which is a flowchart of a vehicle entering identification method provided in embodiment 2 of the present application, the method may be implemented by a wearable device and includes, but is not limited to, some or all of the following steps:
S11: the human body state is detected in real time through the sensor. The sensor may be an inertial sensor, such as an acceleration sensor, or a heart rate sensor, etc., and the human body state to be identified may include a standing posture (standing, for short) and a sitting posture (sitting, for short).
S12: it is determined whether a sound to open the door is detected. If yes, executing S13; otherwise, S12 is performed.
The wearable device can collect sound information in real time through the microphone, extract characteristics of the collected sound information, and further identify whether the collected sound information is sound for opening/closing the vehicle door based on the characteristics of the collected sound information.
S13: it is detected whether the human body state of the user is changed from a standing posture to a sitting posture. If yes, then S14 is performed; otherwise, S13 is repeatedly performed.
S14: it is determined whether a sound to close the door is detected. If yes, judging that the user enters the vehicle, otherwise, judging that the user does not enter the vehicle.
According to this vehicle entering identification method, the user is judged to have entered when the sequence of opening the vehicle door, changing from a standing posture to a sitting posture, and closing the vehicle door is detected in order. Performing the identification through the combination of door opening/closing and pose change makes it more accurate and gives a better user experience.
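The S11-S14 flow above reduces to checking that three events arrive in a fixed order: door-opening sound, stand-to-sit pose change, door-closing sound. The sketch below is a hypothetical rendering of that logic, not the patent's implementation; the event names and the `EntryRecognizer` class are assumptions for illustration.

```python
# Hypothetical sketch of the S11-S14 flow: the user is judged to have entered
# the vehicle only when the events arrive in the order
# door_open -> stand_to_sit -> door_close. Event names are illustrative.
class EntryRecognizer:
    ORDER = ("door_open", "stand_to_sit", "door_close")

    def __init__(self):
        self._stage = 0  # index of the next expected event

    def observe(self, event):
        """Feed one detected event; return True once entry is recognized."""
        if event == self.ORDER[self._stage]:
            self._stage += 1
        if self._stage == len(self.ORDER):
            self._stage = 0  # reset for the next recognition round
            return True
        return False
```

Feeding the three events in order makes `observe` return True on the final door-closing event; in this simplified sketch an out-of-order event merely leaves the state unchanged.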
In some embodiments, to increase the speed of the identification, the wearable device may further determine that the user has entered when both the sound of opening the door and the sound of closing the door are detected within the target duration, and determine that the user has not entered when either sound is not detected.
In some embodiments, to increase the speed of the identification, the wearable device may further determine that the user has entered when it detects, within the target duration, that the human body state of the user changes from standing to sitting; otherwise, it judges that the user has not entered.
The target duration may be a duration measured from the time at which the user vehicle-entering identification is triggered, such as 2 minutes or 3 minutes.
In some embodiments, the wearable device may also execute S11 and S13 only after detecting the sound of opening the door, so as to avoid running the sensor for identifying the human body state continuously and reduce the power consumption of the wearable device.
The principles by which the two kinds of sensors involved in the above vehicle entering identification method detect the human body state are described as follows.
(I) Detecting the human body state through an inertial sensor.
Specifically, motion information of the wearable device can be acquired through the inertial sensor, a motion track of the wearable device is obtained according to the motion information, and then the change of the pose is identified based on the detected motion track.
In some embodiments, the wearable device may collect in advance the user's motion trajectory from standing to sitting, and generate a preset trajectory based on the collected trajectory. In a specific application, the wearable device can acquire the motion information collected by the inertial sensor in the period from when the sound of opening the vehicle door is detected to when the sound of closing the vehicle door is detected, and calculate a motion trajectory; then, the wearable device can judge whether the similarity between the calculated motion trajectory and the preset trajectory is greater than a preset threshold. If so, the pose change of the user from standing to sitting is detected, and it is determined that the user has entered the vehicle; otherwise, the user has not entered. Optionally, there may be one preset trajectory or multiple preset trajectories; when there are multiple, the wearable device may calculate the similarity between the detected motion trajectory and each preset trajectory, and determine that the user has entered the vehicle if the similarity with any one of them is greater than the preset threshold; otherwise, the user has not entered.
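A minimal sketch of the trajectory comparison described above, assuming equal-length trajectories of (x, y, z) points. The similarity measure (inverse of the mean point-to-point distance) and the 0.8 threshold are illustrative choices; the text only requires that the similarity exceed a preset threshold.

```python
import math

def trajectory_similarity(track_a, track_b):
    """Similarity in (0, 1]: 1.0 means the two trajectories coincide.
    Both tracks are equal-length sequences of (x, y, z) points."""
    dists = [math.dist(p, q) for p, q in zip(track_a, track_b)]
    return 1.0 / (1.0 + sum(dists) / len(dists))

def entered_vehicle(track, preset_tracks, threshold=0.8):
    """True if the measured track matches any preset stand-to-sit trajectory."""
    return any(trajectory_similarity(track, t) > threshold
               for t in preset_tracks)
```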
In other embodiments, the wearable device may determine whether the calculated motion trajectory is a top-to-bottom trajectory; if so, it detects that the user's pose changes from standing to sitting and determines that the user has entered the vehicle; otherwise, the user has not entered.
(II) Detecting the human body state through a heart rate sensor.
The heart rate of a human body differs noticeably between a standing posture and a sitting posture; typically, the heart rate in a sitting posture is lower than in a standing posture. The wearable device may detect the user's heart rate through the heart rate sensor and, in turn, identify the user's state or a change of state based on the detected heart rate. For example, when the detected heart rate is in the standing heart rate range, the human body state of the user is confirmed as standing; when it is in the sitting heart rate range, the state is confirmed as sitting. For another example, when the user's heart rate decreases from a first steady state to a second steady state and the difference between the two is not smaller than a first threshold, such as 10 or 15 beats per minute, the human body state of the user is identified as changing from standing to sitting, where both the first steady state and the second steady state mean that the heart rate varies by no more than a second threshold, such as 3 or 5 beats per minute, within a certain period of time. The first threshold and the second threshold can be obtained from heart rate data collected for the user; in that case, the thresholds can be determined per user based on the actual heart rate, making the identification of the human body state more accurate.
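The stand-to-sit heart-rate rule above can be sketched as follows. The steadiness test (spread within a window no larger than the second threshold) and the default threshold values are illustrative readings of the text; a real heart-rate stream would need smoothing and windowing logic beyond this sketch.

```python
def is_steady(window, second_threshold=5):
    """A window of heart-rate samples (bpm) counts as a steady state when
    its variation (max - min) does not exceed the second threshold."""
    return max(window) - min(window) <= second_threshold

def stand_to_sit(before, after, first_threshold=10, second_threshold=5):
    """True when two steady windows differ by at least the first threshold,
    with the later window lower (sitting heart rate < standing heart rate)."""
    if not (is_steady(before, second_threshold)
            and is_steady(after, second_threshold)):
        return False
    drop = sum(before) / len(before) - sum(after) / len(after)
    return drop >= first_threshold
```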
The recognition is not limited to the principles shown in modes (I) and (II) alone; the two modes can also be combined. For example, when either mode (I) or mode (II) identifies that the human body state of the user has changed from standing to sitting, the change is determined, improving the detection success rate. For another example, the change from standing to sitting is recognized only when both modes identify it, improving the accuracy of the identification.
The principle of recognizing the sound of opening/closing the vehicle door, involved in the above vehicle entering identification, is described as follows. The recognition principles may include feature matching and recognition by an artificial intelligence (AI) model.
Feature matching:
The wearable device may collect the sounds of opening/closing the vehicle door in advance, and then extract the features of the door-opening sound and the features of the door-closing sound, respectively, as pre-stored feature information. In the application process, the wearable device can extract features of the sound information collected in real time and match them against the features of the door-opening sound; when they match, the collected sound information is considered to be the sound of opening the door; otherwise, it is not. Similarly, after the door is identified as opened, the wearable device can extract features of subsequently collected sound information and match them against the features of the door-closing sound; when they match, the later-collected sound information is considered to be the sound of closing the door; otherwise, it is not.
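A toy sketch of the feature-matching idea, in which the pre-stored "feature" is just per-chunk signal energy and matching is cosine similarity against the template. A real implementation would use proper acoustic features (e.g. spectral coefficients); the chunk count and the 0.9 threshold are assumptions.

```python
import math

def chunk_energy_feature(samples, chunks=8):
    """Split the signal into chunks and return each chunk's mean energy."""
    n = len(samples) // chunks
    return [sum(s * s for s in samples[i * n:(i + 1) * n]) / n
            for i in range(chunks)]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def matches_template(samples, template_feature, threshold=0.9):
    """True if the captured sound's feature matches the stored template."""
    feature = chunk_energy_feature(samples)
    return cosine_similarity(feature, template_feature) > threshold
```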
AI model identification:
The training device may train a sound recognition model through a sample set. The model is a classification model that outputs the probability that the input sound belongs to the sound of opening the door, the probability that it belongs to the sound of closing the door, and the probability that it belongs to other sounds; the type can then be identified based on the output probability values. It should be understood that other sounds here refer to sounds that are neither opening nor closing the door. The training device may be a server, a cloud server, a notebook, or another electronic device with computing capability, and may send the trained sound recognition model to the wearable device.
The sound recognition model may be a convolutional neural network (CNN), a recurrent neural network (RNN), a deep neural network (DNN), or the like, for recognizing the sound type to which input sound information belongs, where the sound types include the sound of opening a door, the sound of closing a door, and other sounds. The samples used to train the model each include sound information and the sound type (also referred to as the true sound type) to which that sound information belongs. The training method is as follows: the sound information in a sample is input into the model to obtain a predicted sound type; the parameters of the model are updated according to the loss between the predicted sound type and the true sound type, so that the loss becomes smaller and smaller; when the loss converges or the number of training iterations meets the requirement, a sound recognition model with recognition capability is obtained.
The wearable device may pre-store or download the sound recognition model. During application of the model, the wearable device can collect sound information in real time and input it into the model, which processes the collected sound information to output the probability P1 of the sound of opening the door, the probability P2 of the sound of closing the door, and the probability P3 of other sounds; then, when P1 is greater than P2 and greater than P3, the sound is judged to be the sound of opening the door; when P2 is greater than P1 and greater than P3, it is judged to be the sound of closing the door; otherwise, it belongs to other sounds.
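The decision rule on P1, P2 and P3 is effectively an argmax over the three class probabilities. A hypothetical sketch (the model producing the probabilities is assumed and not shown):

```python
def classify_door_sound(p_open, p_close, p_other):
    """Pick the sound class with the largest model probability."""
    best = max((p_open, "open_door"),
               (p_close, "close_door"),
               (p_other, "other"))
    return best[1]
```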
Alternatively, the sounds of opening and closing the door may be identified by two different models: the model for identifying the door-opening sound is a first sound recognition model, a classification model that identifies whether a sound is the sound of opening the door; similarly, the model for identifying the door-closing sound is a second sound recognition model, a classification model that identifies whether a sound is the sound of closing the door. The training principle and application method are the same as for the sound recognition model above and are not detailed again.
Example 3:
As shown in fig. 4A, which is a flowchart of a seat belt wearing identification method provided in embodiment 3, the method may be implemented by a wearable device and includes, but is not limited to, some or all of the following steps:
S21: the motion information of the wearable device is acquired in real time through an inertial sensor.
S22: whether the action of pulling the seat belt occurs is detected based on the acquired motion information. If yes, it is determined that the seat belt is worn, or S23, S25 or S26 is executed; if not, it may be determined that the seat belt is not worn, or S25 or S26 is executed.
S23: sound information is collected in real time through a microphone.
S24: it is determined whether the sound of pulling the seat belt is detected. If yes, it is determined that the seat belt is worn, or S25 or S26 is executed; if not, it may be determined that the seat belt is not worn, or S26 is executed.
S25: it is determined whether the sound of the seat belt being inserted into the seat belt socket is detected. If yes, it is determined that the seat belt is worn, or S26 is executed; if not, it is determined that the seat belt is not worn, or S26 is executed.
S26: whether the seat belt is worn or not is determined based on the detection results of the motion and the sound.
It should be understood that steps S21-S22 and steps S23-S25 may be performed simultaneously; they need not be executed one after the other.
Judging whether the seat belt is worn based on the detection results of the action and the sound may include, but is not limited to, the following implementations:
Implementation 1: the determination is based on the action alone. Specifically, the wearable device may identify whether the seat belt is worn through S21-S22 above: when the action of pulling the seat belt occurs, it is determined that the seat belt is worn; otherwise, it is not worn.
Implementation 2: the determination is based on sound alone. Specifically, the wearable device may identify whether the seat belt is worn through S23-S25 above: when the sound of the seat belt being inserted into the seat belt socket is detected after the sound of pulling the seat belt is detected, it is determined that the seat belt is worn; otherwise, it is not worn.
Implementation 3: the determination is based only on the sound of the seat belt being inserted into the seat belt socket. Specifically, the wearable device may identify whether the seat belt is worn through S25 above: when the sound of the seat belt being inserted into the seat belt socket is detected, it is determined that the seat belt is worn; otherwise, it is not worn.
Implementation 4: the determination is based on the action and the sound of the seat belt being inserted into the seat belt socket. Specifically, the wearable device can identify whether the seat belt is worn through S21-S22, S23 and S25 above: if the sound of the seat belt being inserted into the seat belt socket is detected after the action of pulling the seat belt occurs, it is determined that the seat belt is worn; otherwise, it is not worn.
Implementation 5: the determination is based on the action, the sound of pulling the seat belt, and the sound of the seat belt being inserted into the seat belt socket. Specifically, the wearable device can identify whether the seat belt is worn through S21-S22 and S23-S25 above: it is determined that the seat belt is worn only when the sound of pulling the seat belt is detected while the action of pulling the seat belt occurs, and the sound of the seat belt being inserted into the seat belt socket is detected afterwards. Alternatively, it may be determined that the seat belt is worn when one or two of the three conditions (the action of pulling the seat belt occurring, the sound of pulling the seat belt being detected, and the sound of the seat belt being inserted into the socket being detected) are satisfied; otherwise, the seat belt is not worn.
In implementation 4 and implementation 5, when the action and the sound are combined for the determination, a first confidence α and a second confidence β may also be determined, where the first confidence α indicates the reliability of detecting seat belt wearing based on the action, and the second confidence β indicates the reliability of detecting seat belt wearing based on the sound. Whether the seat belt is worn is then judged based on a weighted sum of the first confidence α and the second confidence β: the seat belt is determined to be worn when the weighted sum is greater than a target threshold θ, expressed by the formula:
c1*α+c2*β>θ
Here, c1 and c2 are the weights of α and β, respectively. In one implementation, considering that the driver does not necessarily pull the seat belt with the hand wearing the wearable device, the wearable device can identify which hand it is worn on, and set the weight c1 of the first confidence α smaller than the weight c2 of the second confidence β when it identifies that it is worn on the left hand. Considering that the microphone may currently be occupied, the wearable device may determine the occupancy status of the microphone before executing S23, and when the microphone is occupied by another program, set the weight c1 of the first confidence α greater than the weight c2 of the second confidence β. The weights c1 and c2 may also be set in other ways, which are not listed in the embodiments of the present application.
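The weighted decision c1*α + c2*β > θ, including the two weight adjustments just described, can be sketched as below. The concrete weight values (0.5/0.5, 0.3/0.7, 0.7/0.3) and the 0.6 threshold are illustrative assumptions, not values from the text.

```python
def belt_worn(alpha, beta, left_hand=False, mic_busy=False, theta=0.6):
    """alpha: motion-based confidence; beta: sound-based confidence."""
    c1, c2 = 0.5, 0.5
    if left_hand:
        c1, c2 = 0.3, 0.7  # device on left hand: trust the sound cue more
    if mic_busy:
        c1, c2 = 0.7, 0.3  # microphone occupied: trust the motion cue more
    return c1 * alpha + c2 * beta > theta
```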
The principle of detecting the action of pulling the seat belt is described as follows.
It should be appreciated that the size of seats in automobiles is generally fixed, as shown in fig. 4B, so the actions of the driver and passengers when wearing the seat belt are fairly uniform. The inertial sensor can be used to acquire the motion information of the wearable device, and based on the motion trajectory of the wearable device, the displacement Sz in the direction perpendicular to the horizontal plane (namely the z-axis direction) and the displacement Sx in the horizontal plane (namely the x-axis direction) are determined: if, within a threshold time such as two seconds, Sz is greater than a first length (e.g., 0.6 m) and |Sx| is greater than a second length (e.g., 0.4 m), the user is considered to have performed the action of wearing the seat belt, and a first confidence α may further be output based on the action similarity. The action similarity may be determined based on the deviation of Sz from the first length and the deviation of |Sx| from the second length: the greater the deviation, the lower the action similarity.
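The displacement test and the deviation-based similarity can be sketched as follows. The clamped linear formula for the confidence is an illustrative choice, since the text only requires that a larger deviation yield a lower similarity.

```python
def pull_action_detected(sz, sx, first_length=0.6, second_length=0.4):
    """True if, within the threshold time, the vertical displacement Sz and
    the horizontal displacement |Sx| both exceed their length thresholds."""
    return sz > first_length and abs(sx) > second_length

def action_confidence(sz, sx, first_length=0.6, second_length=0.4):
    """First confidence alpha: 1.0 when the displacements match the expected
    lengths exactly, shrinking (clamped at 0) as the deviation grows."""
    deviation = abs(sz - first_length) + abs(abs(sx) - second_length)
    return max(0.0, 1.0 - deviation)
```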
It should be understood that the action of pulling the seat belt may also be detected in other ways. For example, the wearable device may determine whether the similarity between its motion trajectory and a pre-stored motion trajectory of the seat belt pulling operation is greater than a preset threshold; if so, the user is considered to have performed the action of wearing the seat belt; otherwise, not. Further, the first confidence α may also be determined from the similarity, e.g., set equal to it.
The principle of detecting the sound of pulling the seat belt is described as follows. In the process of fastening the seat belt, the sound of pulling the seat belt and the sound of the seat belt being inserted into the seat belt socket are both distinct; in particular, the sound of the seat belt being inserted into the socket is highly recognizable because of its distinctive character. As with the sound of opening/closing the door above, the detection principles may include, but are not limited to, feature matching and AI model recognition.
Feature matching:
The wearable device can collect the sound of pulling the seat belt in advance, and then extract its features as pre-stored feature information. In the application process, the wearable device can extract features of the sound information collected in real time and match them against the features of the seat belt pulling sound; when they match, the collected sound information is considered to be the sound of pulling the seat belt; otherwise, it is not. Optionally, the second confidence β is equal to, or derived from, the similarity between the features of the collected sound information and the features of the seat belt pulling sound: the greater the similarity, the greater the second confidence β; conversely, the smaller the similarity, the smaller β.
AI model identification:
The training device may train a sound recognition model through a sample set. The model is a classification model that outputs the probability that the input sound belongs to the sound of pulling the seat belt and the probability that it does not; the type can then be identified based on the output probability values. The training device may be a server, a cloud server, a notebook, or another electronic device with computing capability, and may send the trained sound recognition model to the wearable device.
The sound recognition model may be a convolutional neural network (CNN), a recurrent neural network (RNN), a deep neural network (DNN), or the like, for recognizing the sound type to which input sound information belongs, where the sound types include the sound of pulling the seat belt and sounds other than pulling the seat belt. The samples used to train the model each include sound information and the sound type (also referred to as the true sound type) to which it belongs. The training method is as follows: the sound information in a sample is input into the model to obtain a predicted sound type; the parameters of the model are updated according to the loss between the predicted and true sound types so that the loss becomes smaller and smaller; when the loss converges or the number of training iterations meets the requirement, a sound recognition model with recognition capability is obtained.
The wearable device may pre-store or download the sound recognition model. During application of the model, the wearable device can collect sound information in real time and input it into the model, which processes the collected sound information to output the probability Q1 that it belongs to the sound of pulling the seat belt and the probability Q2 that it does not; then, when Q1 is greater than Q2 and greater than a target threshold (such as 0.6 or 0.7), the sound is judged to be the sound of pulling the seat belt; when Q1 is not greater than Q2 or not greater than the target threshold, it is judged not to be the sound of pulling the seat belt.
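The two-class decision on Q1 and Q2, with β taken equal to Q1, can be sketched as follows (the sound recognition model producing the probabilities is assumed and not shown; the 0.6 default threshold is one of the example values above):

```python
def detect_belt_pull(q1, q2, target=0.6):
    """q1: probability of a belt-pull sound; q2: probability it is not.
    Returns (is_belt_pull, beta), with beta taken equal to q1."""
    return (q1 > q2 and q1 > target), q1
```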
In one implementation of the second confidence β, β is equal to the probability Q1 that the collected sound information belongs to the sound of pulling the seat belt, or is derived from Q1. The larger Q1, the greater β; conversely, the smaller Q1, the smaller β.
The principle of detecting the sound of the seat belt being inserted into the seat belt socket (referred to as the seat belt buckling sound for short) is described as follows. As with the sound of opening/closing the door above, the detection principles may include, but are not limited to, feature matching and AI model recognition.
Feature matching:
The wearable device can collect the seat belt buckling sound in advance, and then extract its features as pre-stored feature information. In the application process, the wearable device can extract features of the sound information collected in real time and match them against the features of the buckling sound; when they match, the collected sound information is considered to be the seat belt buckling sound; otherwise, it is not.
AI model identification:
The training device may train a sound recognition model through a sample set. The model is a classification model that outputs the probability that the input sound belongs to the seat belt buckling sound and the probability that it does not; the type can then be identified based on the output probability values. The training device may be a server, a cloud server, a notebook, or another electronic device with computing capability, and may send the trained sound recognition model to the wearable device.
The sound recognition model may be a convolutional neural network (CNN), a recurrent neural network (RNN), a deep neural network (DNN), or the like, for recognizing the sound type to which input sound information belongs, where the sound types include the seat belt buckling sound and sounds other than the buckling sound. The samples used to train the model each include sound information and the sound type (also referred to as the true sound type) to which it belongs. The training method is as follows: the sound information in a sample is input into the model to obtain a predicted sound type; the parameters of the model are updated according to the loss between the predicted and true sound types so that the loss becomes smaller and smaller; when the loss converges or the number of training iterations meets the requirement, a sound recognition model with recognition capability is obtained.
The wearable device may pre-store or download the voice recognition model. During application of the model, the wearable device can collect sound information in real time and input it into the voice recognition model, which processes the collected sound information to obtain the probability O1 that it is the sound on the seat belt buckle and the probability O2 that it is not. When O1 is greater than O2, or greater than a target threshold (such as 0.6 or 0.7), the sound is judged to be the sound on the seat belt buckle; when O1 is not greater than O2, or not greater than the target threshold, it is judged to be a non-seat-belt-buckle sound.
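The decision rule above may be sketched, for example, as follows; the function name and the default threshold are illustrative assumptions:

```python
def is_buckle_sound(o1: float, o2: float, target_threshold: float = 0.6) -> bool:
    """Judge the collected sound as the sound on the seat belt buckle when
    O1 is greater than O2 or greater than the target threshold."""
    return o1 > o2 or o1 > target_threshold
```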
In another implementation of the second confidence β, the second confidence β is equal to, or derived from, the probability O1 that the collected sound information belongs to the sound of pulling the seat belt webbing. The larger O1 is, the larger the second confidence β; conversely, the smaller O1 is, the smaller the second confidence β.
In yet another implementation of the second confidence β, the second confidence β is determined from both Q1 and O1: for example, β is the average of Q1 and O1; for another example, β is the product of Q1 and O1.
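Both combination rules for β can be sketched in one helper; the function name and parameter names are illustrative:

```python
def second_confidence(q1: float, o1: float, mode: str = "average") -> float:
    """Second confidence beta derived from Q1 and O1, using either the
    average or the product variant described above."""
    if mode == "average":
        return (q1 + o1) / 2.0
    return q1 * o1  # product variant
```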
In another implementation, the training device may train a voice recognition model whose recognizable sound types include the sound of pulling the seat belt, the sound on the seat belt buckle, and other sounds. The sample set used for training includes sound information labeled with each of these sound types, and the trained model can recognize the sound type to which an input sound belongs. The specific training method is the same as that of the voice recognition model in embodiment 2 above, and is not repeated here.
In yet another implementation, the training device may train a voice recognition model whose recognizable sound types include the sound of opening a door, the sound of closing a door, the sound of pulling the seat belt, the sound of buckling the seat belt, and other sounds. The sample set used for training includes sound information labeled with each of these sound types, and the trained model can recognize the sound type to which an input sound belongs. The specific training method is the same as that of the voice recognition model in embodiment 2 above, and is not repeated here.
According to the above method for recognizing seat belt wearing, the seat belt is judged to be worn only when the seat-belt-pulling action, the seat-belt-pulling sound, and the seat-belt-buckle sound are all detected, so the determination is more accurate and the user experience is better.
Example 4:
as shown in fig. 5, a flowchart of a method for recognizing vehicle start according to embodiment 4 is provided. The method may be performed by a wearable device and includes, but is not limited to, some or all of the following steps:
s31: sound information is collected in real time through a microphone.
S32: it is determined whether a sound of engine start is detected.
S33: and acquiring the position information in real time through a positioning system. The positioning system may be a base station positioning system, a global positioning system (global positioning system, GPS), or a beidou satellite navigation system (beidou navigation satellite system, BDS), among others.
S34: the displacement of the vehicle is determined from the position information.
S35: and detecting whether the displacement is larger than a preset length.
S36: acceleration of the wearable device is acquired in real time through an inertial sensor.
S37: and detecting whether sudden acceleration change in the horizontal direction occurs or not based on the acquired acceleration.
S38: and judging whether the vehicle is started or not based on detection results of the sound, the displacement and the acceleration.
Wherein determining whether the vehicle is started based on sound, displacement, and acceleration includes, but is not limited to, the following implementations:
implementation 1: the judgment is based on sound only. Specifically, the wearable device may identify whether the vehicle is started or not through S31 to S32 described above, wherein when the sound of engine start is detected, it is determined that the vehicle is started, and if not, the vehicle is not started.
Implementation 2: the determination is based only on the location information acquired by the positioning system. Specifically, the wearable device may identify whether the vehicle is started or not through S33-S35, where when the displacement of the wearable device is detected to be greater than the preset length, which indicates that the vehicle is moving, it is determined that the vehicle is started, and if not, the vehicle is not started.
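The displacement check of S33-S35 might be sketched, for instance, by computing the haversine great-circle distance between two GPS fixes; the function names and the 10 m preset length are illustrative assumptions:

```python
import math

def displacement_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes (haversine),
    one common way to estimate the displacement in S34."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def vehicle_moved(fix_a, fix_b, preset_length_m=10.0):
    """S35: detect whether the displacement exceeds a preset length."""
    return displacement_m(*fix_a, *fix_b) > preset_length_m
```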
Implementation 3: the determination is based only on the acceleration acquired by the inertial sensor. Specifically, the wearable device may identify whether the vehicle is started through S36-S37 above: when an abrupt change in acceleration in the horizontal direction is detected based on the collected acceleration, indicating that the vehicle is moving, it is determined that the vehicle is started; otherwise, the vehicle is not started.
Implementation 4: based on the sound and the location information acquired by the positioning system. Specifically, the wearable device may identify whether the vehicle is started or not through S31 to S35, where if the displacement is detected to be greater than the preset length after the sound of engine start is detected, it is determined that the vehicle is started, and if not, the vehicle is not started.
Implementation 5: based on the acceleration acquired by the sound and inertial sensors. Specifically, the wearable device may identify whether the vehicle is started or not through the above S31-S32 and S36-S37, wherein if an abrupt change in acceleration in the horizontal direction is detected based on the collected acceleration after the sound of the engine start is detected, it is determined that the vehicle is started, and if not, the vehicle is not started.
Implementation 6: and judging based on the position information acquired by the positioning system and the acceleration acquired by the inertial sensor. Specifically, the wearable device may identify whether the vehicle is started or not through the above S33-S35 and S36-S37, wherein if the displacement is detected to be greater than the preset length and the sudden acceleration change in the horizontal direction is detected to occur based on the collected acceleration, it is determined that the vehicle is started, and if not, the vehicle is not started.
Implementation 7: the determination is based on the sound, the position information acquired by the positioning system, and the acceleration acquired by the inertial sensor. Specifically, the wearable device may identify whether the vehicle is started through S31-S37 above: when, after the sound of engine start is detected, the displacement is detected to be greater than the preset length and an abrupt change in acceleration in the horizontal direction is detected based on the collected acceleration, it is determined that the vehicle is started; otherwise, the vehicle is not started. Alternatively, when any one or two of the three conditions (the sound of engine start is detected; the displacement is detected to be greater than the preset length; an abrupt change in horizontal acceleration is detected based on the collected acceleration) are met, it is determined that the vehicle is started; otherwise, the vehicle is not started.
When the sound, the position information acquired by the positioning system, and the motion information acquired by the inertial sensor are combined for the determination, a third confidence x, a fourth confidence y, and a fifth confidence z may further be determined, where the third confidence x indicates the reliability of detecting vehicle start based on the sound, the fourth confidence y indicates the reliability of detecting vehicle start based on the position information acquired by the positioning system, and the fifth confidence z indicates the reliability of detecting vehicle start based on the acceleration acquired by the inertial sensor. The methods for determining x, y, and z are described below under the principles of recognizing vehicle start by sound, by the position information acquired by the positioning system, and by the acceleration acquired by the inertial sensor, respectively, and are not detailed here. Further, the wearable device may determine whether the vehicle is started based on a weighted sum of the third confidence x, the fourth confidence y, and the fifth confidence z: when the weighted sum is greater than the target threshold L, it is determined that the vehicle is started; otherwise, the vehicle is not started. The determination formula is:
w1*x+w2*y+w3*z>L
Wherein w1, w2, w3 are weights of x, y, z, respectively. In one implementation, considering that the current microphone may be occupied, at this time, the wearable device may determine the occupancy of the microphone before executing S31, and when the microphone is occupied by another program, the wearable device may turn down w1 or set the weight w1 smaller than w2 and w3. In consideration of the situation that the vehicle may be in the underground parking garage and other places where the positioning signal is weak, the wearable device may determine whether the signal of the positioning system is good before executing S33, if yes, S33-S35 may be executed, w2 may be increased, and if not, the wearable device may decrease w2 or set the weight w2 smaller than w1 and w3. In consideration of the positioning accuracy difference of different positioning systems, the reliability difference of the detection results is caused, so that different weights w2 can be set for different positioning systems, and a positioning system with high positioning accuracy can be set with larger weight w2. In consideration that the acceleration of the wearable device may not be due to the motion of the vehicle, but rather the user's own activity, at this point the wearable device may turn down w3 or set the weight w3 smaller than w1 and w2 when non-horizontal acceleration is detected based on the motion information. The weights w1, w2, w3 may also have other arrangements, which are not listed in the embodiments of the present application.
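By way of illustration, the weighted-sum determination and the weight adjustments above can be sketched as follows; the default weights, the halving factor, and the threshold value are assumptions chosen only for the sketch:

```python
def vehicle_started(x, y, z, w1=1/3, w2=1/3, w3=1/3, L=0.5,
                    mic_busy=False, weak_gps=False, nonhorizontal_accel=False):
    """Decide vehicle start by w1*x + w2*y + w3*z > L, lowering a weight
    when the corresponding detection is less reliable, as described above."""
    if mic_busy:
        w1 *= 0.5  # microphone occupied by another program: trust sound less
    if weak_gps:
        w2 *= 0.5  # weak positioning signal (e.g. underground garage)
    if nonhorizontal_accel:
        w3 *= 0.5  # acceleration likely from the user's own activity
    return w1 * x + w2 * y + w3 * z > L
```

Because only the relative weighting changes, a single weak signal (e.g. poor GPS) lowers but does not veto the overall decision, matching the behavior described for w1, w2, w3.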
The following is a description of the principle of recognizing the start of the vehicle, which is involved in the above-described vehicle start recognition method, respectively.
The principle of recognizing vehicle start by sound is described below.
During vehicle start-up, there is a noticeable engine-start sound, which is highly distinctive. The detection principle is the same as that for the door opening/closing sound described above, and may include, but is not limited to, feature matching and AI model recognition.
Feature matching:
the wearable device can collect the sound of vehicle start in advance, and extract the features of that sound as pre-stored feature information. During application, the wearable device extracts features from the sound information collected in real time and matches them against the pre-stored features of the vehicle-start sound; when the features match, the collected sound information is considered to be the vehicle-start sound; otherwise, it is not. The third confidence x may be equal to, or determined based on, the degree of matching between the features of the collected sound information and the features of the vehicle-start sound: the higher the matching degree, the higher x; conversely, the lower the matching degree, the lower x.
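One hypothetical way to realize the feature matching, assuming the features are fixed-length vectors: use cosine similarity as the matching degree and compare it against a threshold. The vector representation, the similarity measure, and the 0.8 threshold are all assumptions:

```python
import math

def cosine_similarity(a, b):
    """Matching degree between a live sound feature vector and the
    pre-stored vehicle-start feature vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def third_confidence(live, stored, match_threshold=0.8):
    """x equals the matching degree; the sound is judged to be the
    vehicle-start sound when the degree exceeds the threshold."""
    x = cosine_similarity(live, stored)
    return x, x > match_threshold
```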
AI model identification:
the training device may train a voice recognition model through a sample set. The model is a classification model that outputs the probability that an input sound belongs to the vehicle-start sound and the probability that it does not; the type to which the sound belongs can then be identified based on the output probability values. The training device may be a server, a cloud server, a notebook computer, or another electronic device with computing capability, and may send the trained voice recognition model to the wearable device.
The voice recognition model may be a convolutional neural network (CNN), a recurrent neural network (RNN), a deep neural network (DNN), or the like, and is used to recognize the sound type to which the input sound information belongs, where the sound types include the vehicle-start sound and non-vehicle-start sounds. The samples used to train the voice recognition model each include sound information and the sound type (also referred to as the true sound type) to which that sound information belongs. The training method is as follows: input the sound information in a sample into the voice recognition model to obtain a predicted sound type; update the parameters of the model through the loss between the predicted sound type and the true sound type, so that the loss becomes smaller and smaller; and when the loss converges or the number of training iterations meets the requirement, a voice recognition model with sound recognition capability is obtained.
The wearable device may pre-store or download the voice recognition model. During application of the model, the wearable device can collect sound information in real time and input it into the voice recognition model, which processes the collected sound information to obtain the probability that it belongs to the vehicle-start sound and the probability that it belongs to a non-vehicle-start sound, and then identifies whether the collected sound information is the vehicle-start sound based on the obtained probability values. For example, when the probability that the collected sound information belongs to the vehicle-start sound is greater than the probability that it belongs to a non-vehicle-start sound, the collected sound information is judged to be the vehicle-start sound; otherwise, it is a non-vehicle-start sound.
Wherein the third confidence level x may be equal to or determined based on a probability that the collected sound information belongs to a vehicle start. The higher the probability, the higher the third confidence x; conversely, the lower the probability, the lower the third confidence x.
The principle of identifying the start of the vehicle by the position information acquired by the positioning system is described below.
After the user enters the vehicle, the vehicle is usually started and then moves. When the vehicle is not started, the user sits on the seat and does not move, or moves only within a limited range that cannot exceed the vehicle interior space; when the movement exceeds the vehicle interior space, it can be determined that the vehicle has started and is moving. Therefore, after recognizing that the user has entered the vehicle, the wearable device can detect the current position information in real time through the positioning system, and when the displacement of the wearable device detected based on the position information is greater than a target distance, it is determined that the vehicle is started; otherwise, the vehicle is not started. The target distance may be equal to the size of the vehicle interior space, or another value.
The fourth confidence y may be determined from the deviation between the displacement and the target distance: the larger the deviation, the more likely the vehicle is moving and the higher y; conversely, the smaller the deviation, the smaller y.
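A minimal sketch of this mapping, assuming a simple linear ramp from the deviation to a confidence in [0, 1]; the target distance and the 20 m scale are illustrative values:

```python
def fourth_confidence(displacement_m, target_distance_m=5.0, scale_m=20.0):
    """Map the deviation between displacement and target distance to [0, 1]:
    larger deviation -> higher confidence that the vehicle is moving."""
    deviation = displacement_m - target_distance_m
    if deviation <= 0:
        return 0.0
    return min(deviation / scale_m, 1.0)
```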
The principle of recognizing the start of the vehicle by the acceleration acquired by the inertial sensor is described below.
When a user enters a car, on one hand, the wearable equipment can move along with the movement of the wearing part of the user; on the other hand, when the vehicle is started, the wearable device suddenly has an acceleration in the horizontal direction due to the momentary acceleration. When the acceleration in the horizontal direction is suddenly changed, the wearable device can determine that the vehicle is started, otherwise, when the acceleration in the horizontal direction is not suddenly changed or the change amplitude is smaller, the vehicle is determined to be not started.
The fifth confidence z may be determined based on the magnitude of the abrupt change in acceleration in the horizontal direction: the greater the abrupt change, the more likely it was caused by a sudden vehicle start, and the greater z; conversely, the smaller the abrupt change, the smaller z.
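This relation can likewise be sketched with a linear ramp; the 3 m/s² full-scale value is an assumption chosen only for illustration:

```python
def fifth_confidence(horizontal_accel_mps2, full_scale=3.0):
    """The larger the abrupt horizontal acceleration, the larger z,
    saturating at 1.0 once the change reaches the full-scale value."""
    return min(abs(horizontal_accel_mps2) / full_scale, 1.0)
```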
Although the same term "voice recognition model" is used in the three processes of (one) recognizing the user entering the vehicle, (two) recognizing seat belt wearing, and (three) recognizing vehicle start, the models in the three processes may be different: the recognizable sound types differ, and the sample sets used for training differ. In some embodiments, the three processes may share a single voice recognition model that can recognize any sound type involved in the three processes; the training samples then include samples of each type involved in the three processes, and the training method is the same as that of the model in each process, which is not repeated here.
It should be noted that, although the same terms are used for the motion information and the sound information involved in the three processes of (one) recognizing the user entering the vehicle, (two) recognizing seat belt wearing, and (three) recognizing vehicle start, the acquired motion information and sound information differ because they are collected in different periods. They may be referred to as the first motion information and the first sound information in the (one) user-entering recognition process, the second motion information and the second sound information in the (two) seat belt wearing recognition process, and the third motion information and the third sound information in the (three) vehicle start recognition process, respectively.
It should be understood that the terms "first," "second," "third," "fourth," "fifth," etc. are used herein for distinguishing purposes only and have no substantive meaning.
User interfaces to which various embodiments of the present application relate are described below.
The wearable device may include a seat belt wearing reminder function. For example, in the user interface 601 shown in fig. 6A, the wearable device can turn this function on and off; when the function is on, the electronic device invokes the computer instructions stored in the memory to implement the seat belt wearing reminder method described in embodiment 1 above. Upon detecting that the user has entered the car, a reminder may be issued when the seat belt is not worn; for example, the wearable device displays the user interface 602 shown in fig. 6B and vibrates to prompt the user to wear the seat belt.
An exemplary wearable device 100 provided by embodiments of the present application is described below, where the wearable device 100 may perform all the steps and processes of the methods described in embodiments 1-4 of the present application, where the wearable device 100 may correspond to a smart watch, a smart bracelet, etc. described in embodiments of the present application.
By way of example, fig. 7 shows a schematic structural diagram of the wearable device 100. The wearable device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, at least one key 190, a motor 191, an indicator 192, a camera 193, a display 194, a user identification module (subscriber identification module, SIM) card interface 195, and the like. Wherein the sensor module 180 may include, but is not limited to: a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, a heart rate sensor 180N, a blood oxygen sensor 180O, and the like.
It is to be understood that the illustrated structure of the embodiments of the present invention does not constitute a specific limitation on the wearable device 100. In other embodiments of the present application, wearable device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. For example, it may not be necessary for some smartwatches to include a mobile communication module 150, a SIM card interface 195, or the like.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
In this embodiment, the processor 110 may be configured to perform the methods described in embodiments 1-4 above.
The controller may be, among other things, a neural hub and a command center of the wearable device 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided, reducing the latency of the processor 110, and thus improving the efficiency of the wearable device 100.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera module 193, etc., respectively, via different I2C bus interfaces. For example: the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the wearable device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as the display screen 194, camera module 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera module 193 communicate through a CSI interface to implement camera functionality of wearable device 100. The processor 110 and the display screen 194 communicate through a DSI interface to implement the display functionality of the wearable device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera module 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the wearable device 100, and may also be used to transfer data between the wearable device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other wearable devices, such as AR devices, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present invention is only illustrative, and does not limit the structure of the wearable device 100. In other embodiments, the wearable device 100 may also use different interfacing manners, or a combination of multiple interfacing manners, in the above embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of the wired charger through a USB interface. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the wearable device 100. The charging management module 140 may also supply power to the wearable device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the wearable device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the wearable device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G or the like for use on the wearable device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the wearable device 100, including wireless local area network (WLAN) (e.g., a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves for radiation via the antenna 2. Illustratively, the wireless communication module 160 may include a Bluetooth module, a Wi-Fi module, and the like.
In some embodiments, the antenna 1 of the wearable device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the wearable device 100 can communicate with a network and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The wearable device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the wearable device 100 may include 1 or N display screens 194, N being a positive integer greater than 1. In the present embodiment, the display screen 194 is used to display a user interface as shown in fig. 6A and fig. 6B.
The wearable device 100 may implement camera functions through the camera 193, the image signal processor (ISP), the video codec, the GPU, the display screen 194, the application processor (AP), the neural-network processing unit (NPU), and the like.
The digital signal processor is used to process digital signals, and can process other digital signals in addition to digital image signals. For example, when the wearable device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, and the like.
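For a single frequency bin, the Fourier-transform step a DSP performs can be approximated cheaply with the Goertzel algorithm. The plain-Python sketch below (with hypothetical sample values) only illustrates the idea of measuring energy in one bin; it is not the patent's DSP implementation:

```python
import math

def goertzel_energy(samples, sample_rate, target_freq):
    """Squared magnitude of one DFT bin via the Goertzel algorithm,
    a cheap single-bin alternative to a full FFT."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)  # nearest DFT bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2

# a 1 kHz tone sampled at 8 kHz: the 1 kHz bin should hold far more
# energy than an unrelated bin such as 3 kHz
tone = [math.sin(2.0 * math.pi * 1000.0 * i / 8000.0) for i in range(800)]
e_signal = goertzel_energy(tone, 8000, 1000)
e_noise = goertzel_energy(tone, 8000, 3000)
```

Running several Goertzel filters in parallel, one per frequency of interest, is a common low-power substitute for a full FFT on resource-constrained devices.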
Video codecs are used to compress or decompress digital video. The wearable device 100 may support one or more video codecs. In this way, the wearable device 100 may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the wearable device 100 can be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the wearable device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, data such as music, photos, videos, etc. are stored in an external memory card.
The internal memory 121 may be used to store one or more computer programs, which include instructions. By executing the above instructions stored in the internal memory 121, the processor 110 may cause the wearable device 100 to perform the methods described in embodiments 1 to 4 above, as well as various functional applications, data processing, and the like. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an operating system, and may also store one or more applications (e.g., Gallery, Contacts), and so on. The storage data area may store data (e.g., photos, contacts) created during use of the wearable device 100. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
The wearable device 100 may implement audio functions, such as music playing and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, the application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The wearable device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
The receiver 170B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the wearable device 100 is used to answer a call or a voice message, the voice can be heard by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic" or "sound transducer", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C. The wearable device 100 may be provided with at least one microphone 170C. In other embodiments, the wearable device 100 may be provided with two microphones 170C, which, in addition to collecting sound signals, can implement a noise reduction function. In other embodiments, the wearable device 100 may also be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording, and the like. In this application, the microphone 170C is also used to collect sound information for the processor 110 to identify the sound type of the sound information.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the wearable device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the wearable device 100 detects the intensity of the touch operation through the pressure sensor 180A. The wearable device 100 may also calculate the touch location from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
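The pressure-threshold dispatch described above can be sketched as follows. The linear capacitance-to-strength model, the scale factor, and the threshold value are all assumptions for illustration, not values from this patent:

```python
def strength_from_capacitance(c_rest, c_pressed, scale=100.0):
    """Map the relative capacitance change of the parallel plates to a
    unitless touch strength (hypothetical linear model)."""
    return scale * (c_pressed - c_rest) / c_rest

def instruction_for_touch(strength, first_pressure_threshold=50.0):
    """Dispatch on touch strength as in the short-message example:
    a light press views the message, a firm press creates a new one."""
    if strength < first_pressure_threshold:
        return "view_sms"
    return "new_sms"

light_strength = strength_from_capacitance(1.0, 1.2)  # ~20, below threshold
firm_strength = strength_from_capacitance(1.0, 1.8)   # ~80, above threshold
```

A real driver would debounce the signal and calibrate per-device, but the dispatch-on-threshold structure is the same.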
The gyro sensor 180B may be used to determine the motion posture of the wearable device 100. In some embodiments, the angular velocities of the wearable device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for image stabilization during photographing. For example, when the shutter is pressed, the gyro sensor 180B detects the angle at which the wearable device 100 shakes, calculates the distance the lens module needs to compensate according to the angle, and lets the lens counteract the shake of the wearable device 100 through reverse motion, thereby implementing anti-shake. The gyro sensor 180B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, wearable device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor.
The acceleration sensor 180E may detect the magnitude of acceleration of the wearable device 100 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when the wearable device 100 is stationary. It may also be used to identify the posture of the wearable device, and is applied in applications such as landscape/portrait switching and pedometers. It should be understood that the inertial sensor in the present application may be the acceleration sensor 180E, or may be a combination of the acceleration sensor 180E and the gyro sensor 180B.
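The pedometer use of the acceleration sensor can be illustrated with a deliberately simplified step counter that counts upward crossings of an acceleration-magnitude threshold. The threshold and the synthetic samples are assumptions; real pedometer firmware adds filtering and adaptive thresholds:

```python
import math

def count_steps(samples, threshold=11.0):
    """Count steps as upward crossings of an acceleration-magnitude
    threshold (samples are (ax, ay, az) tuples in m/s^2)."""
    steps = 0
    above = False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and not above:
            steps += 1       # one new crossing = one step
            above = True
        elif mag <= threshold:
            above = False    # re-arm once the impact has passed
    return steps

# synthetic walk: each impact sample (|a| = 13 m/s^2) between rest
# samples (|a| = 9.8 m/s^2, gravity only) registers as one step
walk = [(0.0, 0.0, 9.8), (0.0, 0.0, 13.0)] * 3
steps = count_steps(walk)
```

The same magnitude signal, compared against gravity alone, is also how a stationary device can recover the gravity direction mentioned above.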
The distance sensor 180F is used to measure distance. The wearable device 100 may measure distance by infrared or laser light. In some embodiments, in a shooting scenario, the wearable device 100 may use the distance sensor 180F to measure distance to achieve quick focusing.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode. The light emitting diode may be an infrared light emitting diode. The wearable device 100 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the wearable device 100; when insufficient reflected light is detected, the wearable device 100 may determine that there is no object nearby. The wearable device 100 can use the proximity light sensor 180G to detect that the user is holding the wearable device 100 close to the ear to talk, and can accordingly turn off the screen automatically to save power.
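The reflected-light decision described above amounts to a simple threshold test. The baseline and margin values below are hypothetical:

```python
def object_nearby(reflected, baseline, margin=3.0):
    """Hypothetical proximity decision: enough infrared reflection above
    the ambient baseline means an object (e.g. an ear) is close."""
    return reflected > baseline + margin

def screen_should_turn_off(reflected, baseline, in_call=True):
    """During a call, turn the screen off when the device is held to the ear."""
    return in_call and object_nearby(reflected, baseline)

held_to_ear = screen_should_turn_off(10.0, 2.0)   # strong reflection
away_from_ear = screen_should_turn_off(3.0, 2.0)  # weak reflection
```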
The ambient light sensor 180L is used to sense ambient light level. The wearable device 100 may adaptively adjust the display screen 194 brightness according to the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether wearable device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The wearable device 100 can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access an application lock, fingerprint photographing, fingerprint incoming call answering and the like.
The temperature sensor 180J is for detecting temperature. In some embodiments, wearable device 100 performs a temperature processing strategy using the temperature detected by temperature sensor 180J.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194; together, the touch sensor 180K and the display screen 194 form a touchscreen. The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the wearable device 100 at a location different from that of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also be placed in contact with the human pulse to receive a blood-pressure beating signal.
The heart rate sensor 180N is used to measure heart rate. In some embodiments, the heart rate sensor 180N may be a photoelectric sensor, which may include a transmitter, a receiver, and the like. The transmitter may be a light emitting diode, an infrared emitting diode, or the like, and the receiver may comprise a phototransistor or the like. The light emitted by the transmitter is attenuated to some degree as it passes through skin tissue and is reflected back to the receiver, and the change in the reflected light intensity reflects the change in pulsatile blood volume. By using the photoelectric sensor to detect the difference in reflected light intensity after absorption by human blood and tissue, the change in blood flow over the heartbeat cycle can be obtained, and the heart rate can be calculated from the resulting pulse waveform. In other embodiments, the heart rate sensor 180N may also be a capacitive sensor, a piezoresistive sensor, a piezoelectric sensor, or the like, which is not limited by the embodiments of the present application.
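The heart-rate computation from the pulse waveform can be sketched as a peak-interval measurement. The naive peak detector and the synthetic 1.2 Hz waveform below are simplifications; real devices band-pass filter the PPG signal and reject motion artifacts first:

```python
import math

def heart_rate_bpm(ppg, sample_rate):
    """Estimate heart rate from the mean interval between local maxima
    of the PPG waveform (simplified peak detection)."""
    peaks = [i for i in range(1, len(ppg) - 1)
             if ppg[i] > ppg[i - 1] and ppg[i] >= ppg[i + 1]]
    if len(peaks) < 2:
        return 0.0
    # mean distance between successive peaks, in samples
    mean_interval = (peaks[-1] - peaks[0]) / (len(peaks) - 1)
    return 60.0 * sample_rate / mean_interval

# synthetic 1.2 Hz pulse (72 beats per minute) sampled at 25 Hz for 10 s
sig = [math.sin(2.0 * math.pi * 1.2 * n / 25.0) for n in range(250)]
bpm = heart_rate_bpm(sig, 25)
```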
The blood oxygen sensor 180O may include at least one light emitting source and at least one photodetector for calculating blood oxygen saturation. The at least one light emitting source may emit red light and infrared light; the emitted red light and infrared light are reflected by human tissue, and the at least one photodetector may receive the reflected light and convert it into photoplethysmography (PPG) signals, where the received red light is converted into a red PPG signal and the received infrared light is converted into an infrared PPG signal. The red PPG signal and the infrared PPG signal are used to calculate the blood oxygen saturation. For example, the blood oxygen sensor includes 2 LEDs, one of which may emit red light and the other near-infrared light, and 2 PDs, one of which is used to detect red light and the other to detect near-infrared light.
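The red/infrared computation can be illustrated with the classic "ratio of ratios" estimate. The linear calibration SpO2 ≈ 110 − 25·R is a commonly quoted textbook approximation and is an assumption here, not this patent's calibration curve:

```python
def spo2_estimate(red_ac, red_dc, ir_ac, ir_dc):
    """'Ratio of ratios' blood-oxygen estimate from the pulsatile (AC)
    and steady (DC) components of the red and infrared PPG signals."""
    r = (red_ac / red_dc) / (ir_ac / ir_dc)
    # textbook linear calibration, clamped to a physical percentage
    return max(0.0, min(100.0, 110.0 - 25.0 * r))

# equal DC levels, red AC half the infrared AC -> R = 0.5 -> 97.5 %
sat = spo2_estimate(0.02, 1.0, 0.04, 1.0)
```

Production oximeters replace the linear formula with a per-device calibration table obtained against reference measurements.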
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The wearable device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the wearable device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light and may be used to indicate a charging state or a change in battery level, or to indicate a message, a missed call, a notification, and the like.
The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195, to make contact with or be separated from the wearable device 100. The wearable device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time; the types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, as well as with external memory cards. The wearable device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the wearable device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card may be embedded in the wearable device 100 and cannot be separated from the wearable device 100.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using a software program, they may exist in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that the above division of functional modules is illustrated only for convenience and brevity of description. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (25)
1. A method of reminding a user to wear a safety belt, applied to a wearable device, the wearable device including at least one sensor, the method comprising:
the wearable device recognizes the wearing condition of the safety belt through the at least one sensor when recognizing that the user enters the car;
the wearable device prompts the user to wear the safety belt under the condition that the safety belt is not worn;
the at least one sensor comprises an inertial sensor and a microphone, and the method for identifying the wearing condition of the safety belt through the at least one sensor specifically comprises the following steps: the wearable device acquires first motion information in real time through the inertial sensor; the wearable device collects first sound information in real time through the microphone; the wearable device determines the wearing condition of the safety belt based on the first motion information and the first sound information;
The wearable device determining, based on the first motion information and the first sound information, a wearing condition of a safety belt includes: the wearable device recognizes a sound of pulling a seat belt based on the first sound information while detecting an action of pulling the seat belt based on the first motion information, and determines that the seat belt has been worn if a sound of pulling the seat belt is recognized and a sound of inserting the seat belt into a seat belt socket is recognized after the sound of pulling the seat belt is recognized.
2. The method according to claim 1, wherein the method further comprises:
the wearable device determines that the seat belt is not worn if an action to pull the seat belt is not detected based on the first motion information.
3. The method according to claim 1, wherein the method further comprises:
the wearable device determining a first displacement of the wearable device in a direction perpendicular to a horizontal plane and a second displacement on the horizontal plane based on the first motion information;
and determining that an action of pulling the seat belt is detected when the first displacement is greater than a first length and the second displacement is greater than a second length.
4. The method according to claim 1, wherein the method further comprises:
the wearable device determines that the seat belt is not worn in a case where a sound of pulling the seat belt or a sound of inserting the seat belt into a seat belt socket is not recognized based on the first sound information.
5. The method according to claim 4, wherein the method further comprises:
the wearable device inputs the first sound information into a sound recognition model to obtain the sound type of the first sound information;
when the sound type of the first sound information is the sound of pulling the safety belt, the wearable device inputs the first sound information collected by the microphone after the sound of pulling the safety belt is identified into the sound recognition model, and obtains the sound type of the subsequently collected first sound information; and when the sound type of the subsequently collected first sound information is the sound of the safety belt being inserted into the safety belt socket, the sound of inserting the safety belt into the safety belt socket is identified.
6. The method of claim 1, wherein the wearable device determines a wearing condition of a seat belt based on the first motion information and the first sound information, further comprising:
The wearable device determines the wearing condition of the safety belt based on a first confidence and a second confidence, wherein the first confidence is used for indicating the probability of detecting that the safety belt is worn based on the first motion information; the second confidence is used to indicate a probability of detecting belt wear based on the first sound information.
7. The method of claim 6, wherein the wearable device determines the wearing of the seat belt based on the first confidence and the second confidence, specifically comprising:
the wearable device determining a weighted sum of the first confidence and the second confidence;
the wearable device determines that the safety belt is worn when the weighted sum is greater than a target threshold;
the wearable device determines that the seat belt is not worn when the weighted sum is not greater than the target threshold.
8. The method of any of claims 1-7, wherein before the wearable device identifies the wearing condition of the seat belt through the at least one sensor upon identifying that the user enters the vehicle, the method further comprises:
the wearable device acquires second motion information in real time through the inertial sensor;
The wearable device collects second sound information in real time through the microphone;
the wearable device determines that the user enters the vehicle when detecting that the human body state of the user changes from a standing posture to a sitting posture based on the second motion information after detecting the sound of opening the vehicle door based on the second sound information and detecting the sound of closing the vehicle door.
9. The method of claim 8, wherein the method further comprises:
the wearable device extracts characteristics of the second sound information;
the wearable device determines that the sound for opening the vehicle door is detected based on the second sound information when the characteristic of the second sound information matches the characteristic of the sound for opening the vehicle door;
the wearable device extracts characteristics of second sound information input acquired by the microphone after detecting the sound for opening the vehicle door based on the second sound information;
and when the characteristics of the second sound information collected later are matched with the characteristics of the sound for closing the car door, the wearable device determines that the sound for closing the car door is detected.
10. The method according to any one of claims 1-7 and 9, further comprising:
The wearable device acquires third motion information in real time through the inertial sensor;
the wearable device collects third sound information in real time through the microphone;
the wearable equipment acquires position information in real time through a positioning system;
the wearable device determines a start-up condition of the vehicle based on at least one of the third motion information, the third sound information, and the position information.
11. The method of claim 10, wherein the wearable device determining a start-up condition of a vehicle based on at least one of the third motion information, the third sound information, and the location information comprises:
determining, by the wearable device, that the vehicle has started when N of the preset conditions are met, where N is a positive integer not greater than 3, and the preset conditions include:
detecting an abrupt acceleration change of the wearable device in a horizontal direction based on the third motion information;
detecting a sound of engine start based on the third sound information;
and determining that the displacement of the wearable device is greater than a preset length based on the position information.
12. The method of claim 11, wherein the wearable device determining a start-up condition of a vehicle based on at least one of the third motion information, the third sound information, and the location information comprises:
The wearable device determines the wearing condition of the safety belt based on a third confidence coefficient, a fourth confidence coefficient and a fifth confidence coefficient, wherein the third confidence coefficient is used for indicating the probability of detecting that the vehicle is started based on the third sound information; the fourth confidence is used for indicating the probability that the vehicle is detected to be started based on the position information; the fifth confidence level is used to indicate a probability that a vehicle has been started based on the third motion information.
13. A wearable device, comprising a processor, a memory, and at least one sensor, the processor being coupled to the memory and the at least one sensor, respectively, the memory being configured to store computer instructions, and the processor being configured to execute the computer instructions stored in the memory, to perform:
identifying the wearing condition of the safety belt through the at least one sensor when it is recognized that the user has entered the vehicle;
prompting the user to wear the safety belt when the safety belt is not worn;
the at least one sensor comprises an inertial sensor and a microphone, and the processor performs the identification of the wearing condition of the safety belt by the at least one sensor, specifically comprising the steps of:
Acquiring first motion information in real time through the inertial sensor;
collecting first sound information in real time through the microphone;
determining a wearing condition of the safety belt based on the first motion information and the first sound information;
the processor executing the determining the wearing condition of the safety belt based on the first motion information and the first sound information specifically includes executing: recognizing a sound of pulling the seat belt based on the first sound information while an action of pulling the seat belt is detected based on the first motion information, and determining that the seat belt is worn if the sound of pulling the seat belt is recognized and, after it, a sound of the seat belt being inserted into the seat belt socket is recognized.
14. The wearable device of claim 13, wherein the processor is further configured to perform:
if an action of pulling the seat belt is not detected based on the first motion information, it is determined that the seat belt is not worn.
15. The wearable device of claim 13, wherein the processor further performs:
determining a first displacement of the wearable device in a direction perpendicular to a horizontal plane and a second displacement in the horizontal plane based on the first motion information;
And determining that an action of pulling the seat belt is detected when the first displacement is greater than a first length and the second displacement is greater than a second length.
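The two displacement thresholds of claim 15 amount to a simple conjunction; the threshold values below are invented for illustration:

```python
# Sketch of the claim-15 test: an action of pulling the seat belt is
# detected when the device's displacement perpendicular to the horizontal
# plane exceeds a first length AND its displacement in the horizontal
# plane exceeds a second length. Both default lengths are assumed values.

def pull_action_detected(first_displacement_m: float,
                         second_displacement_m: float,
                         first_length_m: float = 0.3,
                         second_length_m: float = 0.2) -> bool:
    return (first_displacement_m > first_length_m
            and second_displacement_m > second_length_m)
```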
16. The wearable device of claim 13, wherein the processor is further configured to perform:
if the sound of pulling the seat belt or the sound of inserting the seat belt into the seat belt socket is not recognized based on the first sound information, it is determined that the seat belt is not worn.
17. The wearable device of claim 16, wherein the processor further performs:
inputting the first sound information into a sound recognition model to obtain the sound type of the first sound information;
when the sound type of the first sound information is the sound of pulling the seat belt, inputting the first sound information collected by the microphone after the sound of pulling the seat belt is recognized into the sound recognition model to obtain the sound type of the subsequently collected first sound information;
and when the sound type of the subsequently collected first sound information is the sound of the seat belt being inserted into the seat belt socket, determining that the sound of the seat belt being inserted into the seat belt socket is recognized.
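The ordering constraint in claims 16 and 17 (the buckle-insertion sound must follow the pull sound) is naturally a two-state machine. The label strings below stand in for the sound recognition model's output classes and are assumptions of this sketch:

```python
from enum import Enum, auto

class Stage(Enum):
    WAIT_PULL = auto()    # waiting for the sound of pulling the seat belt
    WAIT_INSERT = auto()  # pull sound heard; waiting for the insertion sound

def belt_worn_from_sounds(sound_labels) -> bool:
    """sound_labels: classifier outputs per audio frame, e.g. 'pull_belt',
    'insert_buckle', 'other'. The label names are illustrative."""
    stage = Stage.WAIT_PULL
    for label in sound_labels:
        if stage is Stage.WAIT_PULL and label == "pull_belt":
            stage = Stage.WAIT_INSERT
        elif stage is Stage.WAIT_INSERT and label == "insert_buckle":
            return True  # insertion recognized only after the pull sound
    return False
```

Note that an insertion sound heard before any pull sound does not count, matching the "after the sound of pulling the seat belt" condition.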
18. The wearable device of claim 13, wherein the processor executing the determining a wearing condition of a seat belt based on the first motion information and the first sound information further comprises executing:
determining the wearing condition of the safety belt based on a first confidence and a second confidence, wherein the first confidence is used to indicate a probability of detecting that the safety belt is worn based on the first motion information; and the second confidence is used to indicate a probability of detecting that the safety belt is worn based on the first sound information.
19. The wearable device of claim 18, wherein the processor executing the determining the wearing condition of the seat belt based on the first confidence and the second confidence comprises executing:
determining a weighted sum of the first confidence and the second confidence;
determining that the seat belt is worn when the weighted sum is greater than a target threshold;
and determining that the safety belt is not worn when the weighted sum is not greater than the target threshold.
20. The wearable device of any of claims 13-19, wherein the processor, prior to executing the identifying the wearing condition of the safety belt through the at least one sensor when the user is recognized as entering the vehicle, further executes:
acquiring second motion information in real time through the inertial sensor;
collecting second sound information in real time through the microphone;
determining that the user has entered the vehicle when a sound of opening the vehicle door is detected based on the second sound information, a change of the user's body state from a standing posture to a sitting posture is detected based on the second motion information, and a sound of closing the vehicle door is detected.
21. The wearable device of claim 20, wherein the processor further performs:
extracting characteristics of the second sound information;
determining that a door opening sound is detected based on the second sound information when the characteristic of the second sound information matches the characteristic of the door opening sound;
extracting a feature of the second sound information collected by the microphone after the sound of opening the vehicle door is detected based on the second sound information;
and determining that the sound for closing the vehicle door is detected when the characteristic of the second sound information collected later is matched with the characteristic of the sound for closing the vehicle door.
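A plausible realization of the feature-matching step in claim 21 compares a clip's feature vector against a stored door-sound template; the use of cosine similarity, the threshold, and the vectors below are choices assumed for this sketch:

```python
import math

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two feature vectors; 0.0 for zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def matches_template(features, template, threshold: float = 0.9) -> bool:
    """True when the extracted features match the stored sound template."""
    return cosine_similarity(features, template) >= threshold

# Template vector for the door-opening sound; the numbers are made up.
door_open_template = [0.8, 0.1, 0.4]
```

The same matcher would be applied a second time, with a door-closing template, to the sound collected after the opening sound was detected.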
22. The wearable device of any of claims 13-19 and 21, wherein the processor further performs:
acquiring third motion information in real time through the inertial sensor;
collecting third sound information in real time through the microphone;
Collecting position information in real time through a positioning system;
a start condition of the vehicle is determined based on at least one of the third motion information, the third sound information, and the position information.
23. The wearable device of claim 22, wherein the processor executing the determining a start-up condition of a vehicle based on at least one of the third motion information, the third sound information, and the location information comprises executing:
when N of the preset conditions are met, determining that the vehicle has been started; N is a positive integer not greater than 3, and the preset conditions include:
detecting an abrupt acceleration change of the wearable device in a horizontal direction based on the third motion information;
detecting a sound of engine start based on the third sound information;
and determining that the displacement of the wearable device is greater than a preset length based on the position information.
24. The wearable device of claim 22, wherein the processor executing the determination of the start-up condition of the vehicle based on at least one of the third motion information, the third sound information, and the location information comprises executing:
determining the start-up condition of the vehicle based on a third confidence, a fourth confidence, and a fifth confidence, wherein the third confidence is used to indicate a probability that the vehicle is detected to have started based on the third sound information; the fourth confidence is used to indicate a probability that the vehicle is detected to have started based on the position information; and the fifth confidence is used to indicate a probability that the vehicle is detected to have started based on the third motion information.
25. A computer readable storage medium comprising computer instructions which, when run on a wearable device, cause the wearable device to perform the method of any of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110447235.5A CN115230634B (en) | 2021-04-25 | 2021-04-25 | Method for reminding wearing safety belt and wearable device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115230634A (en) | 2022-10-25 |
CN115230634B (en) | 2024-04-12 |
Family
ID=83665719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110447235.5A Active CN115230634B (en) | 2021-04-25 | 2021-04-25 | Method for reminding wearing safety belt and wearable device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115230634B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104637335A (en) * | 2014-12-15 | 2015-05-20 | 广东梅雁吉祥水电股份有限公司 | Reminding method |
KR20160016560A (en) * | 2014-07-31 | 2016-02-15 | 삼성전자주식회사 | Wearable device and method for controlling the same |
CN106200952A (en) * | 2016-07-04 | 2016-12-07 | 歌尔股份有限公司 | A kind of method monitoring user behavior data and wearable device |
CN108382352A (en) * | 2018-02-06 | 2018-08-10 | 深圳市沃特沃德股份有限公司 | Safety prompt function method and apparatus |
CN109996701A (en) * | 2016-11-25 | 2019-07-09 | 奥迪股份公司 | The equipment of the normal wearing state of safety belt for identification |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10040372B2 (en) * | 2016-02-23 | 2018-08-07 | Samsung Electronics Co., Ltd. | Identifying and localizing a vehicle occupant by correlating hand gesture and seatbelt motion |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021000876A1 (en) | Voice control method, electronic equipment and system | |
WO2020177619A1 (en) | Method, device and apparatus for providing reminder to charge terminal, and storage medium | |
CN111742361B (en) | Method for updating wake-up voice of voice assistant by terminal and terminal | |
CN109710080A (en) | A kind of screen control and sound control method and electronic equipment | |
CN110750772A (en) | Electronic equipment and sensor control method | |
CN112334977B (en) | Voice recognition method, wearable device and system | |
CN111368765A (en) | Vehicle position determining method and device, electronic equipment and vehicle-mounted equipment | |
CN112334860B (en) | Touch control method of wearable device, wearable device and system | |
CN110742580A (en) | Sleep state identification method and device | |
CN114915747B (en) | Video call method, electronic device and readable storage medium | |
CN114915721A (en) | Method for establishing connection and electronic equipment | |
CN114822525A (en) | Voice control method and electronic equipment | |
WO2022105830A1 (en) | Sleep evaluation method, electronic device, and storage medium | |
CN113838478B (en) | Abnormal event detection method and device and electronic equipment | |
CN113509145B (en) | Sleep risk monitoring method, electronic device and storage medium | |
CN115230634B (en) | Method for reminding wearing safety belt and wearable device | |
CN113823288A (en) | Voice wake-up method, electronic equipment, wearable equipment and system | |
CN114431891B (en) | Method for monitoring sleep and related electronic equipment | |
CN117093068A (en) | Vibration feedback method and system based on wearable device, wearable device and electronic device | |
CN115119336A (en) | Earphone connection system, earphone connection method, earphone, electronic device and readable storage medium | |
CN116094082A (en) | Charging control method and related device | |
CN115480250A (en) | Voice recognition method and device, electronic equipment and storage medium | |
CN115022807A (en) | Express delivery information reminding method and electronic equipment | |
CN115393676A (en) | Gesture control optimization method and device, terminal and storage medium | |
WO2024093748A1 (en) | Signal collection method, and electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||