CN114639113A - Data processing method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN114639113A (application number CN202011379910.7A)
- Authority: CN (China)
- Prior art keywords: current, state, frame image, target, determining
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/253: Pattern recognition > Analysing > Fusion techniques of extracted features
- G06N3/045: Computing arrangements based on biological models > Neural networks > Architecture > Combinations of networks
- G06F2218/04: Pattern recognition for signal processing > Preprocessing > Denoising
- G06F2218/08: Pattern recognition for signal processing > Feature extraction
Abstract
The application discloses a data processing method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a detection data set obtained by a smart wearable device detecting a current wearing object, the detection data set comprising a plurality of detection data; acquiring at least one candidate state corresponding to the detection data, and taking the candidate state whose occurrence probability is greater than a preset probability as the current state, the current state comprising at least one of: a motion state, a driving state, and a sleep state; determining a safety mode corresponding to the current state, the safety mode comprising at least one of: a sports safety mode, a driving safety mode, and a sleep safety mode; and controlling the smart wearable device to perform a protection operation on the current wearing object according to the safety mode. The embodiments of the application can thus provide different protection operations for the current wearing object in different states, and perform corresponding protection measures for the user in real time according to the user's state.
Description
Technical Field
The present application relates to the field of intelligent wearable devices, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of the Internet, various smart wearable devices have appeared in daily life, for example, smart watches, smart bracelets, and smart Bluetooth earphones. As artificial intelligence gradually spreads into everyday life, the functions of smart wearable devices are also increasing. However, the functions of most smart wearable devices in the prior art are limited to the user's daily life and work. In terms of safety, for example, smart wearable devices in the prior art have no safety mode and cannot take corresponding protection measures for the user in real time according to the user's state.
Disclosure of Invention
In order to solve the technical problems described above or at least partially solve the technical problems, the present application provides a data processing method, an apparatus, an electronic device, and a storage medium.
According to an aspect of the embodiments of the present application, a data processing method is provided, which is applied to an intelligent wearable device, and includes:
acquiring a detection data set obtained by the smart wearable device detecting a current wearing object, wherein the detection data set comprises a plurality of detection data, the detection data comprising: a motion speed, an electroencephalogram signal, a heart rate, a blood pressure, a body surface temperature, and angular speeds of key body parts;
acquiring at least one candidate state corresponding to the detection data, and taking the candidate state whose occurrence probability is greater than a preset probability as the current state, wherein the current state comprises: a motion state, a driving state, and a sleep state;
determining a security mode corresponding to the current state, wherein the security mode comprises: a sports safety mode, a driving safety mode, and a sleep safety mode;
and controlling the intelligent wearable equipment to execute protection operation on the current wearable object according to the safety mode.
Further, when the security mode is a motion security mode, the controlling the smart wearable device to perform a protection operation on the current wearable object according to the security mode includes:
acquiring a current frame image shot by the intelligent wearable device, wherein the current frame image comprises at least one object;
carrying out difference processing on the current frame image by adopting the previous frame image of the current frame image to obtain a target foreground image;
determining an interested region in the target foreground image, wherein the interested region is obtained according to the gradient direction histogram characteristics of the target foreground image and carries at least one key part;
tracking the object according to the region of interest to obtain the number of the object;
and generating an early warning instruction according to the number of the objects, wherein the early warning instruction is used for controlling the intelligent wearable equipment to vibrate according to the target vibration frequency corresponding to the number of the objects.
Further, the tracking the object according to the region of interest to obtain the number of objects of the object includes:
extracting color features of the region of interest;
determining a first peak-to-side lobe ratio and a first correlation map of the gradient direction histogram feature, and a second peak-to-side lobe ratio and a second correlation map of the color feature;
determining a fusion weight proportion of the color feature in a target frame image according to the first peak-to-sidelobe ratio, the second peak-to-sidelobe ratio, the first correlation map and the second correlation map, wherein the target frame image is the next frame image after the current frame image; in the calculation formula of the fusion weight proportion, k_CN is the fusion weight proportion of the color feature in the current frame image, PSLR_HOG,s is the first peak-to-sidelobe ratio in the current frame image, PSLR_CN,s is the second peak-to-sidelobe ratio in the current frame image, f_HOG,s is the first correlation map corresponding to the current frame image, and f_CN,s is the second correlation map corresponding to the current frame image;
determining a fusion weight of the color feature in the target frame image according to the fusion weight proportion;
determining a target correlation diagram of the gradient direction histogram feature and the color feature in the target frame image according to the fusion weight, the first correlation diagram and the second correlation diagram;
and determining the number of the objects in the target frame image according to the target correlation diagram.
Further, the determining the number of the objects in the target frame image according to the target correlation map includes:
determining a target peak value of the target correlation map;
determining a target region of interest in the target frame image according to the target peak value;
and determining the number of the objects according to the number of the key parts in the target region of interest.
Further, when the safety mode is a driving safety mode, the controlling the intelligent wearable device to execute a protection operation on the current wearable object according to the safety mode includes:
detecting electroencephalogram data of the current wearing object in real time;
extracting frequency domain characteristic information in the electroencephalogram data;
inputting the frequency domain characteristic information into an analysis model, and obtaining the attention state of the current wearing object by the analysis model according to the characteristic type corresponding to the frequency domain characteristic information;
executing corresponding protection operation on the current wearing object according to the attention state, wherein the attention state comprises: concentration, slight distraction, and severe distraction.
Further, the executing a corresponding protection operation on the current wearing object according to the attention state includes:
when the attention state is slight distraction, controlling the smart wearable device to perform intermittent vibration at a first vibration frequency, and sending first prompt information to a vehicle-mounted screen in communication connection with the smart wearable device;
the executing corresponding protection operation on the current wearing object according to the attention state further comprises:
and when the attention state is seriously dispersed, controlling the intelligent wearable device to execute continuous vibration according to a second vibration frequency, and sending second prompt information to a vehicle-mounted screen in communication connection with the intelligent wearable device.
Further, when the security mode is a sleep security mode, the controlling the smart wearable device to perform a protection operation on the current wearable object according to the security mode includes:
acquiring current displacement data of the intelligent wearable device;
and when the sleep condition of the current wearing object is determined to belong to the abnormal sleep condition according to the current displacement data, generating a wake-up instruction, wherein the wake-up instruction is used for waking up the current wearing object according to a preset vibration frequency.
Further, when it is determined that the sleep condition of the current wearing object belongs to an abnormal sleep condition according to the current displacement data, generating a wake-up instruction includes:
acquiring historical displacement data;
determining displacement change information according to the current displacement data and the historical displacement data;
and when the displacement change information is larger than preset change information, determining that the sleep condition of the current wearing object belongs to an abnormal sleep condition, and generating the awakening instruction.
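The comparison of current and historical displacement described above can be sketched in Python. How the historical displacement data are aggregated into a baseline is not specified in the text, so the mean-based baseline and the threshold value here are illustrative assumptions.

```python
def should_wake(current_displacement, historical_displacements, preset_change=1.0):
    """Return True when the sleep condition is abnormal, i.e. when the
    displacement change between the current reading and the historical
    baseline exceeds the preset change threshold.

    Assumption: the historical baseline is the mean of past readings;
    the source text does not specify the aggregation.
    """
    baseline = sum(historical_displacements) / len(historical_displacements)
    displacement_change = abs(current_displacement - baseline)
    return displacement_change > preset_change

# A large sudden displacement during sleep triggers the wake-up instruction.
abnormal = should_wake(5.0, [0.1, 0.2, 0.3])
```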
According to still another aspect of the embodiments of the present application, there is provided a data processing apparatus applied to an intelligent wearable device, including:
the acquisition module is used for acquiring a detection data set obtained by detecting a current wearing object by the intelligent wearing equipment, wherein the detection data set comprises a plurality of detection data, and the detection data comprises: the motion speed, the electroencephalogram signal, the heart rate, the blood pressure, the body surface temperature and the angular speed of key parts;
an analysis module, configured to obtain at least one candidate state corresponding to the detection data, and use the candidate state with an occurrence probability greater than a preset probability as a current state, where the current state at least includes one of: a motion state, a driving state, and a sleep state;
a determining module, configured to determine a security mode corresponding to the current state, where the security mode at least includes one of: a sports safety mode, a driving safety mode, and a sleep safety mode;
and the control module is used for controlling the intelligent wearable equipment to execute protection operation on the current wearable object according to the safety mode.
According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program that executes the above steps when the program is executed.
According to another aspect of the embodiments of the present application, there is also provided an electronic apparatus, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus; wherein: a memory for storing a computer program; a processor for executing the steps of the method by running the program stored in the memory.
Embodiments of the present application also provide a computer program product containing instructions, which when run on a computer, cause the computer to perform the steps of the above method.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages: according to the embodiment of the application, the current state of the current wearing object is determined through the detection data set obtained by detecting the current wearing object, and the intelligent wearing equipment enters the corresponding safety mode according to the current state, so that different protection operations can be provided for the current wearing object according to different states, and corresponding protection measures can be executed on the user in real time according to the state of the user.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly described below; it is obvious that those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present application;
fig. 2 is a flowchart of a data processing method according to another embodiment of the present application;
fig. 3 is a flowchart of a data processing method according to another embodiment of the present application;
fig. 4 is a flowchart of a data processing method according to another embodiment of the present application;
fig. 5 is a block diagram of a data processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments will be described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are some, but not all, embodiments of the present application; the illustrative embodiments and their descriptions are used to explain the present application and do not constitute a limitation on it. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments of the present application provide a data processing method, apparatus, and system for smart wearable devices. The method provided by the embodiments can be applied to any required electronic device, for example, a server or a terminal; this is not specifically limited here, and for convenience of description the term electronic device is used hereinafter.
According to an aspect of the embodiments of the present application, there is provided an embodiment of a method for processing data, where the method is applied to an intelligent wearable device. Fig. 1 is a flowchart of a data processing method according to an embodiment of the present application, and as shown in fig. 1, the method includes:
step S11, acquiring a detection data set obtained by detecting the current wearing object by the intelligent wearing equipment;
In this embodiment of the application, the smart wearable device may be a smart bracelet, a smart watch, a smart ankle band, a smart Bluetooth earphone, and the like. The current wearing object is the user currently wearing the smart wearable device. The detection data set comprises a plurality of detection data, the detection data including: motion speed, electroencephalogram signal, heart rate, blood pressure, and angular speeds of key body parts, where the angular speeds of key body parts include the angular speed of the wrist, the angular speed of the ankle, and the like.
Step S12, obtaining at least one candidate state corresponding to the detection data, and taking the candidate state whose occurrence probability is greater than the preset probability as a current state, where the current state includes: a motion state, a driving state, and a sleep state;
in the embodiment of the present application, step S12 includes the following steps A1-A3:
step A1, acquiring at least one candidate state corresponding to the detection data;
In the embodiment of the present application, at least one candidate state corresponding to each detection datum of the detection data set is acquired, where the candidate states include: a motion state, a driving state, and a sleep state.
For example, the candidate states corresponding to the motion speed include the motion state, the driving state, and so on; the candidate state corresponding to the heart rate is the motion state; the candidate state corresponding to the body surface temperature is the motion state; and the candidate state corresponding to the angular speed of the wrist is the motion state.
Step A2, determining the occurrence probability of the candidate state;
and step A3, taking the candidate state with the occurrence probability larger than the preset probability as the current state.
Based on the above embodiment, the probability of occurrence of the motion state is the largest, and the motion state is taken as the current state.
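As a rough illustration of steps A1 to A3, the state selection can be sketched in Python. Treating each (detection datum, candidate state) pair as one vote and computing occurrence probability as a vote share is an assumption, since the text does not define the probability calculation.

```python
from collections import Counter

def determine_current_state(candidate_lists, preset_probability=0.5):
    """Steps A1-A3: pool the candidate states proposed by every detection
    datum, compute each state's occurrence probability as its share of all
    votes, and return the state whose probability exceeds the preset one.

    Assumption: occurrence probability = vote share; the source text does
    not define the probability calculation.
    """
    votes = Counter(state for candidates in candidate_lists for state in candidates)
    total = sum(votes.values())
    state, count = votes.most_common(1)[0]
    return state if count / total > preset_probability else None

# Speed suggests motion or driving; heart rate, body surface temperature and
# wrist angular speed all suggest motion, so motion wins with 4 of 5 votes.
current = determine_current_state(
    [["motion", "driving"], ["motion"], ["motion"], ["motion"]])
```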
Step S13, determining a security mode corresponding to the current state, where the security mode includes: a sports safety mode, a driving safety mode, and a sleep safety mode;
in this embodiment of the present application, the current state and the security mode are in a one-to-one correspondence relationship, and the security mode includes: a sports safety mode, a driving safety mode, and a sleep safety mode. For example: when the current state is a motion state, the safety mode is a motion safety mode; when the current state is a driving state, the safety mode is a driving safety mode; and when the current state is the sleep state, the safety mode is the sleep safety mode.
And step S14, controlling the intelligent wearable device to execute protection operation on the current wearable object according to the safety mode.
According to the embodiment of the application, the current state of the current wearing object is determined through the detection data set obtained by detecting the current wearing object, and the intelligent wearing equipment enters the corresponding safety mode according to the current state, so that different protection operations can be provided for the current wearing object according to different states, and corresponding protection measures can be executed on a user in real time according to the state of the user.
In this embodiment of the application, when the security mode is the sports security mode, as shown in fig. 2, step S14 is to control the smart wearable device to perform a protection operation on the current wearable object according to the security mode, and includes the following steps:
step S21, acquiring a current frame image shot by the intelligent wearable device, wherein the current frame image comprises at least one object;
In this embodiment of the application, when the safety mode is the motion safety mode, the smart wearable device can photograph the current environment to obtain the current frame image, and the objects included in the current frame image may be pedestrians, animals, and the like.
Step S22, carrying out difference processing on the current frame image by adopting the previous frame image of the current frame image to obtain a target foreground image;
In the embodiment of the present application, commonly used difference algorithms include: background subtraction, inter-frame difference, and three-frame difference. This application mainly adopts the inter-frame difference method: the current frame image is taken as the background frame, the previous frame image of the current frame image is used to perform difference processing on the background frame, and when a moving object is determined to exist, image shadows are removed to obtain the target foreground image. The inter-frame difference method suppresses noise well, adapts to changes in the external scene, and keeps the processing stable.
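A minimal NumPy sketch of the inter-frame difference step. The threshold value is an illustrative assumption, and the shadow removal mentioned in the text is omitted.

```python
import numpy as np

def frame_difference(current_frame, previous_frame, threshold=25):
    """Inter-frame difference: mark as foreground every pixel whose
    absolute change between consecutive frames exceeds the threshold.
    The threshold value is an illustrative assumption."""
    diff = np.abs(current_frame.astype(np.int16) - previous_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# A moving object appears as a patch of changed pixels in the mask.
previous = np.zeros((4, 4), dtype=np.uint8)
current = previous.copy()
current[1:3, 1:3] = 200
mask = frame_difference(current, previous)
```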
Step S23, determining an interested area in the target foreground image, wherein the interested area is obtained according to the gradient direction histogram characteristics of the target foreground image and carries at least one key part;
In the embodiment of the application, taking a pedestrian as an example of the object, the gradient direction histogram features and a classifier are used to extract and detect the pedestrian's head-and-shoulder features, the region of the pedestrian's head and shoulders in the target foreground image is determined, and the region where the head and shoulders are located is taken as the region of interest.
The specific process is as follows. First, the target foreground image is converted into a gray image and corrected. Then the pixel gradients of the image are calculated:

G_a(a, b) = H(a+1, b) - H(a-1, b)

G_b(a, b) = H(a, b+1) - H(a, b-1)

where G_a(a, b) is the horizontal gradient of pixel (a, b), G_b(a, b) is the vertical gradient of pixel (a, b), and H(a, b) is the pixel value at (a, b). The gradient magnitude and gradient direction of pixel (a, b) are then:

G(a, b) = sqrt(G_a(a, b)^2 + G_b(a, b)^2)

alpha(a, b) = arctan(G_b(a, b) / G_a(a, b))

where G(a, b) is the gradient magnitude and alpha(a, b) is the gradient direction. It should be noted that, in general, a gradient operator is convolved with the image to obtain the image gradient. This embodiment convolves the image with the [-1, 0, 1] operator to obtain the horizontal gradient component and with the [-1, 0, 1]^T operator to obtain the vertical gradient component, from which the gradient magnitude and direction are finally obtained.
After the gradient magnitude and gradient direction are obtained, the gradient direction histogram is computed from them. For example, for an image of 64 x 64 pixels, the image is first divided into connected cells of 8 x 8 pixels, with no overlap between adjacent cells; each normalized block consists of 2 x 2 cells, so there are (8 - 1) x (8 - 1) = 49 blocks in total. The gradient directions are quantized into 9 histogram channels; each pixel in a cell contributes to the histogram channel of its gradient direction with a weight determined by its gradient magnitude, and the gradient direction histogram features are then accumulated in each cell.
Finally, the obtained gradient direction histogram features are input into a classifier, which analyzes them to determine the region of interest, where the region of interest contains the head and shoulders of the pedestrian (object).
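The gradient and cell-histogram computations above can be sketched as follows. The 9-bin quantization over unsigned orientation follows the text; the gray-image correction and block normalization steps are omitted, and the helper names are illustrative.

```python
import numpy as np

def pixel_gradients(gray):
    """Central differences with the [-1, 0, 1] operator:
    G_a(a, b) = H(a+1, b) - H(a-1, b), G_b(a, b) = H(a, b+1) - H(a, b-1)."""
    H = gray.astype(np.float64)
    ga = np.zeros_like(H)
    gb = np.zeros_like(H)
    ga[1:-1, :] = H[2:, :] - H[:-2, :]      # gradient along the first axis
    gb[:, 1:-1] = H[:, 2:] - H[:, :-2]      # gradient along the second axis
    magnitude = np.sqrt(ga ** 2 + gb ** 2)  # G(a, b)
    orientation = np.degrees(np.arctan2(gb, ga)) % 180  # alpha(a, b), unsigned
    return magnitude, orientation

def cell_histogram(magnitude, orientation, n_bins=9):
    """Magnitude-weighted orientation histogram of one cell, quantized
    into 9 channels over 0-180 degrees."""
    bins = (orientation / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())  # weight = gradient magnitude
    return hist
```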
Step S24, tracking the object according to the region of interest to obtain the object number of the object;
in the embodiment of the present application, tracking the object according to the region of interest to obtain the number of objects of the object includes the following steps B1-B6:
step B1, extracting color features of the region of interest;
In the embodiment of the present application, the color feature is the CN (Color Names) feature.
Step B2, determining a first peak-to-side lobe ratio and a first correlation diagram of the gradient direction histogram characteristic, and a second peak-to-side lobe ratio and a second correlation diagram of the color characteristic;
In the embodiment of the present application, the first peak-to-sidelobe ratio and the second peak-to-sidelobe ratio are both defined as the ratio of the peak intensity of the main lobe to the peak intensity of the strongest sidelobe, and a correlation map is a correlation filtering response map.
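Under the definition just given (main-lobe peak over strongest-sidelobe peak), a peak-to-sidelobe ratio can be computed as below; the size of the window excluded around the main lobe is an illustrative assumption.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=2):
    """Ratio of the main-lobe peak to the strongest sidelobe peak.
    Pixels within `exclude` of the main peak are treated as the main lobe;
    the window size is an illustrative assumption."""
    r, c = np.unravel_index(np.argmax(response), response.shape)
    peak = response[r, c]
    sidelobes = response.copy()
    # Mask out the main lobe so the remaining maximum is the strongest sidelobe.
    sidelobes[max(0, r - exclude):r + exclude + 1,
              max(0, c - exclude):c + exclude + 1] = -np.inf
    return peak / sidelobes.max()
```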
Step B3, determining the fusion weight ratio of the color feature in the target frame image according to the first peak sidelobe ratio, the second peak sidelobe ratio, the first correlation diagram and the second correlation diagram, wherein the target frame image is the next frame image of the current frame image;
the calculation formula is as follows:
in the formula, kCNIs the fusion weight ratio of color features in the current frame image, PSLRHOG,sIs the first peak sidelobe ratio, PSLR, in the current frame imageCN,sIs the second peak sidelobe ratio, f, in the current frame imageHOG,sIs a first correlation map corresponding to the current frame image, fCN,sAnd the second correlation diagram corresponds to the current frame image. It is understood that s in the formula represents the s-th frame image, i.e., the current frame image.
In the embodiment of the application, a weight-adaptive mode fuses the gradient direction histogram feature and the color feature, which have complementary characteristics. Because the peak-to-side lobe ratio measures the sharpness of the correlation peak, a larger PSLR value means higher tracking confidence and a better match between the target in the current frame and the previous frame. In addition, the larger the peak of the correlation-filter response map, the more accurate the target position. Therefore, the feature fusion weight is calculated from the peak-to-side lobe ratios of the two features and the peak values of the correlation-filter response maps.
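The adaptive fusion described above can be sketched in Python. The exact formula is not reproduced in the text, so the combination below, each feature's PSLR multiplied by its response peak and normalized so the two shares sum to one, is an assumed form consistent with the description; the main-lobe exclusion window in `pslr` is likewise an assumed detail.

```python
import numpy as np

def pslr(response):
    """Peak-to-side-lobe ratio: main-lobe peak over the strongest side lobe.
    The side lobe is the largest value outside a small window around the
    peak (the window size is an assumed detail)."""
    r = np.asarray(response, dtype=float)
    iy, ix = np.unravel_index(np.argmax(r), r.shape)
    peak = r[iy, ix]
    mask = np.ones_like(r, dtype=bool)
    mask[max(0, iy - 2):iy + 3, max(0, ix - 2):ix + 3] = False  # exclude main lobe
    return peak / r[mask].max()

def fusion_weight_ratio(f_hog, f_cn):
    """Assumed form of k_CN: each feature's PSLR times its response peak,
    normalized so the HOG and CN shares sum to one."""
    s_hog = pslr(f_hog) * np.max(f_hog)
    s_cn = pslr(f_cn) * np.max(f_cn)
    return s_cn / (s_hog + s_cn)
```

A sharper CN response (higher PSLR) therefore pulls the weight toward the color feature, matching the confidence argument above.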
Step B4, determining the fusion weight of the color features in the target frame image according to the fusion weight proportion;
in the embodiment of the present application, a calculation formula of the fusion weight is as follows:
k_CN,s = (1 - β) × k_CN,s-1 + β × k_CN.
in the formula, β is the learning coefficient of the fusion weight, and k_CN,s is the fusion weight.
Step B5, determining a target correlation diagram of the gradient direction histogram characteristic and the color characteristic in the target frame image according to the fusion weight, the first correlation diagram and the second correlation diagram;
in the embodiment of the present application, the calculation formula of the target correlation graph is as follows:
f_s(z) = (1 - k_CN,s) × f_HOG,s(z) + k_CN,s × f_CN,s(z).
in the formula, f_s(z) is the target correlation map.
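Steps B4 and B5 follow directly from the two formulas above; a minimal sketch (β = 0.01 is an assumed default for the learning coefficient):

```python
import numpy as np

def fuse_responses(f_hog, f_cn, k_cn_prev, k_cn, beta=0.01):
    """Step B4: smooth the weight with learning coefficient beta,
    then step B5: linearly fuse the two response maps."""
    k_cn_s = (1 - beta) * k_cn_prev + beta * k_cn          # k_CN,s
    f_s = (1 - k_cn_s) * np.asarray(f_hog) + k_cn_s * np.asarray(f_cn)
    return k_cn_s, f_s
```

The exponential smoothing keeps the weight from jumping frame to frame, so a single noisy response map cannot flip the fusion abruptly.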
And step B6, determining the number of the objects in the target frame image according to the target correlation map.
In the embodiment of the present application, determining the number of objects in the target frame image according to the target correlation map includes steps B61 to B63:
step B61, determining a target peak value of the target correlation diagram;
step B62, determining a target interesting region in the target frame image according to the target peak value;
and step B63, determining the number of the objects according to the number of the key parts in the target region of interest.
In the embodiment of the application, a region of interest in the target frame image is determined on the basis of the target correlation map f_s(z), and the number of objects is determined according to the number of pedestrian head-and-shoulder regions within that region of interest.
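A short sketch of step B6: the target peak of the fused response map gives the centre of the region of interest in the target frame (the fixed 32 × 32 window size is an assumption).

```python
import numpy as np

def locate_roi(f_s, roi_size=(32, 32)):
    """Find the target peak of the fused correlation map and return the
    region of interest around it as (top, left, height, width)."""
    iy, ix = np.unravel_index(np.argmax(f_s), np.asarray(f_s).shape)
    h, w = roi_size
    return (iy - h // 2, ix - w // 2, h, w)
```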
By adding the color feature, the method and the device improve detection precision and can predict the position of the region of interest in the next frame image even when the object is completely occluded, effectively solving the problem of losing the track under full occlusion. On this basis, the gradient direction histogram feature and the color feature are fused as a tracking detection mechanism, locking the position of the region of interest more accurately.
And S25, generating an early warning instruction according to the number of the objects, wherein the early warning instruction is used for controlling the intelligent wearable device to vibrate according to the target vibration frequency corresponding to the number of the objects.
In the embodiment of the application, the corresponding relation between the preset object number and the vibration frequency is obtained, the target vibration frequency corresponding to the object number is determined according to the corresponding relation, and the early warning instruction is generated according to the target vibration frequency and is used for controlling the intelligent wearable device to perform vibration operation according to the target vibration frequency corresponding to the object number.
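The correspondence between object count and target vibration frequency is not specified in the text, so the lookup below uses a hypothetical table; only the mechanism, thresholds mapped to frequencies, follows the description.

```python
def warning_frequency(num_objects, table=None):
    """Hypothetical correspondence between object count and target
    vibration frequency (Hz); thresholds and values are assumptions."""
    if table is None:
        table = [(0, 0.0), (3, 1.0), (6, 2.0), (10, 4.0)]  # (min count, freq)
    freq = 0.0
    for min_count, f in table:          # table sorted by ascending count
        if num_objects >= min_count:
            freq = f                    # keep the last threshold reached
    return freq
```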
The embodiment of the application tracks the objects in real time to determine their number; when the number of objects is large, the intelligent wearable device executes a vibration operation to remind the current wearing object to reduce its movement speed, thereby avoiding accidents such as collisions.
Further, when the safety mode is the driving safety mode, as shown in fig. 3, step S14 is to control the smart wearable device to perform a protection operation on the current wearable object according to the safety mode, where the protection operation includes:
step S31, detecting the electroencephalogram data of the current wearing object in real time;
in the embodiment of the present application, because the original electroencephalogram signal suffers serious noise interference, this embodiment first preprocesses the original electroencephalogram signal as follows: a band-pass filter removes part of the power-frequency interference on one hand, and part of the electromyographic and electrooculographic noise on the other.
In addition, the band-pass filter removes high-frequency components of the original electroencephalogram signal well, but a traditional digital filter cannot effectively suppress low-frequency noise or noise whose frequency overlaps that of the electroencephalogram signal, so wavelet threshold denoising is also required to obtain the electroencephalogram data.
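Wavelet threshold denoising can be illustrated with a single-level Haar transform and soft thresholding; this numpy-only sketch is an assumed minimal variant (the text does not name the wavelet family, threshold rule, or decomposition depth).

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform -> (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def wavelet_denoise(x, thr):
    """Soft-threshold the detail coefficients, then reconstruct."""
    a, d = haar_dwt(x)
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
    return haar_idwt(a, d)
```

Small, noise-like detail coefficients are zeroed while the slow component of the signal survives, which is the property the embodiment relies on for low-frequency-preserving denoising.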
Step S32, extracting frequency domain characteristic information in the electroencephalogram data;
in the embodiment of the application, the frequency domain characteristic information is a power spectrum. The electroencephalogram data of length l are divided with a sliding window into Q data segments of length p each; the power spectrum of each data segment is then calculated, and finally the per-segment power spectra are combined into a feature matrix;
wherein the formula for calculating the power spectrum of each data segment is as follows:
in the formula, W_Q(ω) is the power spectrum of the Q-th data segment, w(z) is the window function, U is a normalization coefficient, x_Q(z) is the observed data value, and e^{-jωz} is the complex exponential factor with z = 0, 1, 2, …; that is, W_Q(ω) = (1/U) |Σ_z x_Q(z) w(z) e^{-jωz}|².
As an example, the signal length of the electroencephalogram data is 2 s, and the intelligent wearable device uses a 500 Hz sampling frequency, so the signal comprises 1000 sampling points. The frequency domain characteristic information is then extracted from these 1000 sampling points as follows:
a rectangular window is slid over the electroencephalogram data, the window covering 128 sampling points and advancing with a step of 32 sampling points, yielding 28 segments of electroencephalogram data O of 128 sampling points each, which are combined into a feature matrix R.
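The segmentation in this example can be sketched as follows; `np.fft.rfft` stands in for the windowed power-spectrum formula, and the normalization U = 128 for the rectangular window is an assumed choice (the further reduction of the spectra to a 28 × 28 matrix is not reproduced here).

```python
import numpy as np

def feature_matrix(eeg, win=128, step=32):
    """Slide a rectangular window over the signal and stack the per-segment
    power spectra; with 1000 samples, win=128 and step=32 this yields
    Q = 28 segments, as in the example above."""
    segments = [eeg[i:i + win] for i in range(0, len(eeg) - win + 1, step)]
    U = float(win)  # normalization coefficient for the rectangular window
    spectra = [np.abs(np.fft.rfft(s)) ** 2 / U for s in segments]
    return np.vstack(spectra)

R = feature_matrix(np.random.randn(1000))
```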
step S33, inputting the frequency domain characteristic information into an analysis model, and obtaining the attention state of the current wearing object by the analysis model according to the characteristic type corresponding to the frequency domain characteristic information;
in the embodiment of the application, the feature matrix R is input into an analysis model, and the analysis model determines the attention state of the current wearing object according to the feature matrix R. The analysis model in this embodiment includes:
and (one) an input layer, wherein a 28 x 28 feature matrix is input, a row vector represents a time sampling point, and a column vector represents a frequency distribution.
And (two) a first convolution layer, wherein the first convolution layer is used for carrying out convolution transformation based on local perception fields and extracting features. The convolution layer and the input layer are locally connected, the input feature matrix is convoluted with 6 convolution kernels with the size of 5 × 5, and feature mapping, namely 6 feature maps with the size of 24 × 24, is finally obtained through an excitation function.
And (III) a first pooling layer, wherein the purpose of the pooling layer is to reduce data dimensionality and reduce computational complexity. In this example, the first convolution layer is reduced by one half by the maximum pooling layer of size 2 × 2, and 6 12 × 12 feature maps are obtained.
And (IV) a second convolution layer, which is used for convolving 6 feature maps of 12 × 12 with 12 convolution kernels of 3 × 3 and finally obtaining 72 feature maps with the size of 10 × 10 through an excitation function.
And (v) the second pooling layer, wherein the 72 feature maps with the size of 10 × 10 are reduced by half by the maximum pooling layer with the size of 2 × 2, so as to obtain 72 feature maps with the size of 5 × 5.
And (VI) a full connection layer, wherein the full connection layer is used for classifying the feature map and has the function of being equivalent to a classifier.
And (seventhly) an output layer, wherein 3 neurons are designed in the output layer and used for representing analysis results (the analysis results comprise attention focusing, slight attention scattering and serious attention scattering).
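The layer sizes listed above can be checked with simple shape arithmetic ("valid" convolutions and non-overlapping 2 × 2 max pooling):

```python
def conv_out(size, kernel):
    """Output side length of a 'valid' convolution."""
    return size - kernel + 1

def pool_out(size, window=2):
    """Output side length of non-overlapping max pooling."""
    return size // window

s = 28
s = conv_out(s, 5)   # first convolution layer: 28 -> 24
s = pool_out(s)      # first pooling layer:     24 -> 12
s = conv_out(s, 3)   # second convolution:      12 -> 10
s = pool_out(s)      # second pooling:          10 -> 5
n_maps = 6 * 12      # 6 maps, each convolved with 12 kernels -> 72 maps
```

The fully connected layer thus receives 72 × 5 × 5 = 1800 values before classifying them into the 3 attention states.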
And step S34, executing corresponding protection operation on the current wearing object according to the attention state.
In the embodiment of the application, executing the corresponding protection operation on the current wearing object according to the attention state includes: when the attention state is slight distraction, controlling the intelligent wearable device to execute intermittent vibration at a first vibration frequency, and sending first prompt information to a vehicle-mounted screen in communication connection with the intelligent wearable device;
executing the corresponding protection operation on the current wearing object according to the attention state further includes: when the attention state is severe distraction, controlling the intelligent wearable device to execute continuous vibration at a second vibration frequency, and sending second prompt information to a vehicle-mounted screen in communication connection with the intelligent wearable device.
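A hypothetical dispatch of the two protection branches; the frequency values, state names, and message texts are assumptions, and only the state-to-action mapping follows the description.

```python
def protect(attention_state):
    """Map an attention state to (vibration mode, frequency in Hz, prompt);
    all concrete values here are illustrative assumptions."""
    actions = {
        "focused":            (None,           0.0, None),
        "slight_distraction": ("intermittent", 1.0, "first prompt"),
        "severe_distraction": ("continuous",   2.0, "second prompt"),
    }
    return actions[attention_state]
```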
According to the scheme provided by the embodiment of the application, when the current wearing object is in the driving state, the attention state of the current wearing object is determined, different prompt operations are automatically executed on the current wearing object according to the attention state, meanwhile, the current wearing object is given correct early warning information by matching with the vehicle-mounted screen connected with the intelligent device, so that the driving condition of a user is effectively monitored, and the safety in the driving process is improved.
In this embodiment of the application, when the security mode is the sleep security mode, as shown in fig. 4, step S14 is to control the smart wearable device to perform a protection operation on the current wearable object according to the security mode, where the protection operation includes:
step S41, acquiring current displacement data of the intelligent wearable device;
in an embodiment of the present application, the current displacement data include displacement change data and center-of-gravity change data.
And step S42, when the sleep condition of the current wearing object is determined to belong to the abnormal sleep condition according to the current displacement data, generating a wake-up instruction, wherein the wake-up instruction is used for waking up the current wearing object according to the preset vibration frequency.
In the embodiment of the application, when the sleep condition of the current wearing object is determined to belong to the abnormal sleep condition according to the current displacement data, the wake-up instruction is generated, and the method comprises the following steps of C1-C3:
step C1, obtaining historical displacement data;
step C2, determining displacement change information according to the current displacement data and the historical displacement data;
and step C3, when the displacement change information is larger than the preset change information, determining that the sleep condition of the current wearing object belongs to the abnormal sleep condition, and generating a wake-up instruction.
In the embodiment of the application, after the current wearing object of the intelligent wearable device enters the sleep state, the intelligent wearable device continues to analyze the sensed displacement data to judge the sleep condition of the current wearing object.
If, within a period of preset length before the current time, neither the displacement nor the change of the center of gravity of the intelligent wearable device exceeds the correspondingly set limit, the sleep condition of the current wearing object is judged to be just falling asleep.
If the displacement and center-of-gravity data remain stable and hardly change over a period of time up to the current time, the sleep condition of the current wearing object can be judged to be deep sleep.
For example, according to actual use requirements, the detection period may be set to 10 minutes. If the gender of the current wearing object of the intelligent wearable device is female, the historical horizontal displacement of the intelligent wearable device during the set detection period is not more than 3 centimeters, and the change of the center of gravity is not more than 1.5 centimeters, the sleep condition of the current wearing object is considered to be deep sleep.
If the gender of the current wearing object of the intelligent wearable device is male, the historical horizontal displacement during the set detection period is not more than 5 centimeters, and the change of the center of gravity is not more than 2.5 centimeters, the sleep condition of the current wearing object is likewise considered to be deep sleep.
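A sketch of the threshold test in steps C1-C3; the 3 cm / 1.5 cm limits come from the first example above, the looser 5 cm / 2.5 cm limits from the second, and the return labels are assumed names.

```python
def sleep_state(horizontal_disp_cm, gravity_shift_cm, strict=True):
    """Judge deep sleep from movement within the detection period.
    strict=True applies the 3 cm / 1.5 cm limits; strict=False applies
    the looser 5 cm / 2.5 cm limits from the second example."""
    d_max, g_max = (3.0, 1.5) if strict else (5.0, 2.5)
    if horizontal_disp_cm <= d_max and gravity_shift_cm <= g_max:
        return "deep_sleep"
    return "not_deep_sleep"  # larger movement: candidate for the wake-up path
```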
Fig. 5 is a block diagram of a data processing apparatus provided in an embodiment of the present application, which may be implemented as part of or all of an electronic device through software, hardware, or a combination of the two. As shown in fig. 5, the apparatus includes:
the obtaining module 51 is configured to obtain a detection data set obtained by detecting a current wearing object by an intelligent wearing device, where the detection data set includes a plurality of detection data, and the detection data includes: the motion speed, the electroencephalogram signal, the heart rate, the blood pressure, the body surface temperature and the angular speed of key parts;
an analysis module 52, configured to obtain at least one candidate state corresponding to the detection data, and use the candidate state with an occurrence probability greater than a preset probability as a current state, where the current state at least includes one of the following items: a motion state, a driving state, and a sleep state;
a determining module 53, configured to determine a security mode corresponding to the current state, where the security mode at least includes one of: a sport safety mode, a driving safety mode, and a sleep safety mode;
and the control module 54 is configured to control the intelligent wearable device to perform a protection operation on the current wearable object according to the security mode.
Further, when the safety mode is the sport safety mode, the control module 54 includes:
the acquisition sub-module is used for acquiring a current frame image shot by the intelligent wearable device, wherein the current frame image comprises at least one object;
the processing submodule is used for carrying out difference processing on the current frame image by adopting the previous frame image of the current frame image to obtain a target foreground image;
the determining submodule is used for determining an interested area in the target foreground image, the interested area is obtained according to the gradient direction histogram feature of the target foreground image, and the interested area carries at least one key part;
the tracking submodule is used for tracking the object according to the region of interest to obtain the number of objects;
and the generation submodule is used for generating an early warning instruction according to the number of the objects, and the early warning instruction is used for controlling the intelligent wearable equipment to carry out vibration operation according to the target vibration frequency corresponding to the number of the objects.
Further, the tracking sub-module is used for extracting color features of the region of interest; determining a first peak-to-side lobe ratio and a first correlation diagram of the gradient direction histogram characteristic, and a second peak-to-side lobe ratio and a second correlation diagram of the color characteristic; determining a fusion weight ratio of the color features in a target frame image according to the first peak sidelobe ratio, the second peak sidelobe ratio, the first correlation diagram and the second correlation diagram, wherein the target frame image is a next frame image of the current frame image, and a calculation formula of the fusion weight ratio is as follows:
in the formula, k_CN is the fusion weight ratio of the color feature in the current frame image, PSLR_HOG,s is the first peak-to-side lobe ratio in the current frame image, PSLR_CN,s is the second peak-to-side lobe ratio in the current frame image, f_HOG,s is the first correlation map corresponding to the current frame image, and f_CN,s is the second correlation map corresponding to the current frame image;
determining a fusion weight of the color features in the target frame image according to the fusion weight proportion; determining a target correlation diagram of the gradient direction histogram feature and the color feature in the target frame image according to the fusion weight, the first correlation diagram and the second correlation diagram; and determining the number of the objects in the target frame image according to the target correlation map.
Further, the tracking sub-module is used for determining a target peak value of the target correlation diagram; determining a target interesting region in the target frame image according to the target peak value; and determining the number of the objects according to the number of the key parts in the target region of interest.
Further, when the safety mode is the driving safety mode, the control module 54 includes:
the detection sub-module is used for detecting the electroencephalogram data of the current wearing object in real time;
the extraction submodule is used for extracting frequency domain characteristic information in the electroencephalogram data;
the processing submodule is used for inputting the frequency domain characteristic information into the analysis model, and the analysis model obtains the attention state of the current wearing object according to the characteristic type corresponding to the frequency domain characteristic information;
the execution submodule is used for executing corresponding protection operation on the current wearing object according to the attention state, and the attention state comprises: concentration, slight distraction, and severe distraction.
Further, the execution submodule is used for controlling the intelligent wearable device to execute intermittent vibration at the first vibration frequency when the attention state is slight distraction, and sending first prompt information to a vehicle-mounted screen in communication connection with the intelligent wearable device;
and the execution submodule is used for controlling the intelligent wearable device to execute continuous vibration according to the second vibration frequency when the attention state is seriously dispersed, and sending second prompt information to a vehicle-mounted screen in communication connection with the intelligent wearable device.
Further, when the safety mode is the sleep safety mode, the control module 54 is configured to obtain current displacement data of the intelligent wearable device; and when the sleep condition of the current wearing object is determined to belong to the abnormal sleep condition according to the current displacement data, generating a wake-up instruction, wherein the wake-up instruction is used for waking up the current wearing object according to a preset vibration frequency.
An embodiment of the present application further provides an electronic device, as shown in fig. 6, the electronic device may include: the system comprises a processor 1501, a communication interface 1502, a memory 1503 and a communication bus 1504, wherein the processor 1501, the communication interface 1502 and the memory 1503 complete communication with each other through the communication bus 1504.
A memory 1503 for storing a computer program;
the processor 1501 is configured to implement the steps of the above embodiments when executing the computer program stored in the memory 1503.
The communication bus mentioned in the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment provided by the present application, a computer-readable storage medium is further provided, which has instructions stored therein, and when the instructions are executed on a computer, the instructions cause the computer to execute the data processing method described in any of the above embodiments.
In yet another embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the data processing method of any of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that includes one or more available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk), among others.
The above description is only for the preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A data processing method is applied to intelligent wearable equipment and is characterized by comprising the following steps:
the method comprises the steps of obtaining a detection data set obtained by detecting a current wearing object by intelligent wearing equipment, wherein the detection data set comprises a plurality of detection data, and the detection data comprises: the motion speed, the electroencephalogram signal, the heart rate, the blood pressure, the body surface temperature and the angular speed of key parts;
acquiring at least one candidate state corresponding to the detection data, and taking the candidate state with the occurrence probability greater than the preset probability as a current state, wherein the current state comprises: a motion state, a driving state, and a sleep state;
determining a security mode corresponding to the current state, wherein the security mode comprises: a sports safety mode, a driving safety mode, and a sleep safety mode;
and controlling the intelligent wearable equipment to execute protection operation on the current wearable object according to the safety mode.
2. The method according to claim 1, wherein when the security mode is a sport security mode, the controlling the smart wearable device to perform a protection operation on the current wearable object according to the security mode includes:
acquiring a current frame image shot by the intelligent wearable device, wherein the current frame image comprises at least one object;
carrying out difference processing on the current frame image by adopting the previous frame image of the current frame image to obtain a target foreground image;
determining an interested region in the target foreground image, wherein the interested region is obtained according to the gradient direction histogram characteristics of the target foreground image and carries at least one key part;
tracking the object according to the region of interest to obtain the number of the object;
and generating an early warning instruction according to the number of the objects, wherein the early warning instruction is used for controlling the intelligent wearable equipment to vibrate according to the target vibration frequency corresponding to the number of the objects.
3. The method of claim 2, wherein tracking the object according to the region of interest to obtain the number of objects of the object comprises:
extracting color features of the region of interest;
determining a first peak-to-side lobe ratio and a first correlation map of the gradient direction histogram feature, and a second peak-to-side lobe ratio and a second correlation map of the color feature;
determining a fusion weight proportion of the color feature in a target frame image according to the first peak-to-side lobe ratio, the second peak-to-side lobe ratio, the first correlation diagram and the second correlation diagram, wherein the target frame image is a next frame image of the current frame image, and a calculation formula of the fusion weight proportion is as follows:
in the formula, k_CN is the fusion weight ratio of the color feature in the current frame image, PSLR_HOG,s is the first peak-to-side lobe ratio in the current frame image, PSLR_CN,s is the second peak-to-side lobe ratio in the current frame image, f_HOG,s is the first correlation map corresponding to the current frame image, and f_CN,s is the second correlation map corresponding to the current frame image;
determining a fusion weight of the color feature in the target frame image according to the fusion weight proportion;
determining a target correlation diagram of the gradient direction histogram feature and the color feature in the target frame image according to the fusion weight, the first correlation diagram and the second correlation diagram;
and determining the number of the objects in the target frame image according to the target correlation diagram.
4. The method of claim 3, wherein determining the number of objects in the target frame image from the target correlation map comprises:
determining a target peak value of the target correlation map;
determining a target region of interest in the target frame image according to the target peak value;
and determining the number of the objects according to the number of the key parts in the target region of interest.
5. The method according to claim 1, wherein when the safety mode is a driving safety mode, the controlling the smart wearable device to perform a protection operation on the current wearable object according to the safety mode includes:
detecting electroencephalogram data of the current wearing object in real time;
extracting frequency domain characteristic information in the electroencephalogram data;
inputting the frequency domain characteristic information into an analysis model, and obtaining the attention state of the current wearing object by the analysis model according to the characteristic type corresponding to the frequency domain characteristic information;
executing corresponding protection operation on the current wearing object according to the attention state, wherein the attention state comprises: concentration, slight distraction, and severe distraction.
6. The method of claim 5, wherein performing the corresponding protection operation on the current worn object according to the attention state comprises:
when the attention state is slight distraction, controlling the intelligent wearable device to execute intermittent vibration according to a first vibration frequency, and sending first prompt information to a vehicle-mounted screen in communication connection with the intelligent wearable device;
the executing corresponding protection operation on the current wearing object according to the attention state further comprises:
and when the attention state is seriously dispersed, controlling the intelligent wearable device to execute continuous vibration according to a second vibration frequency, and sending second prompt information to a vehicle-mounted screen in communication connection with the intelligent wearable device.
7. The method according to claim 1, wherein when the security mode is a sleep security mode, the controlling the smart wearable device to perform a protection operation on the current wearable object according to the security mode includes:
acquiring current displacement data of the intelligent wearable device;
and when the sleep condition of the current wearing object is determined to belong to an abnormal sleep condition according to the current displacement data, generating an awakening instruction, wherein the awakening instruction is used for awakening the current wearing object according to a preset vibration frequency.
8. A data processing apparatus, applied to a smart wearable device, characterized by comprising:
an acquisition module, configured to acquire a detection data set obtained by the smart wearable device detecting a current wearing object, the detection data set comprising a plurality of pieces of detection data, the detection data comprising: movement speed, electroencephalogram signal, heart rate, blood pressure, body surface temperature, and angular velocity of key body parts;
an analysis module, configured to obtain at least one candidate state corresponding to the detection data, and take the candidate state whose occurrence probability is greater than a preset probability as the current state, wherein the current state comprises: a motion state, a driving state, and a sleep state;
a determining module, configured to determine a safety mode corresponding to the current state, wherein the safety mode comprises: a sports safety mode, a driving safety mode, and a sleep safety mode;
and a control module, configured to control the smart wearable device to perform a protection operation on the current wearing object according to the safety mode.
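The analysis and determining modules of claim 8 — keep only candidate states whose probability exceeds a preset threshold, pick the current state, then look up its safety mode — can be sketched as below. How candidate probabilities are produced from the sensor data is not specified by the claim, so this sketch takes them as given and assumes a 0.5 preset probability; "highest eligible probability wins" is also an assumption for the case where several candidates pass the threshold.

```python
from typing import Dict, Optional

SAFETY_MODE = {
    "motion": "sports safety mode",
    "driving": "driving safety mode",
    "sleep": "sleep safety mode",
}

def current_state(candidates: Dict[str, float],
                  preset_probability: float = 0.5) -> Optional[str]:
    """Candidate state with probability > preset threshold (highest wins)."""
    eligible = {s: p for s, p in candidates.items() if p > preset_probability}
    if not eligible:
        return None
    return max(eligible, key=eligible.get)

def safety_mode_for(candidates: Dict[str, float],
                    preset_probability: float = 0.5) -> Optional[str]:
    """Determining module: map the current state to its safety mode."""
    state = current_state(candidates, preset_probability)
    return SAFETY_MODE.get(state) if state else None
```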
9. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program, when run, performs the method steps of any one of claims 1 to 7.
10. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus; wherein:
the memory is configured to store a computer program;
the processor is configured to perform the method steps of any one of claims 1 to 7 by executing the program stored in the memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011379910.7A CN114639113A (en) | 2020-11-30 | 2020-11-30 | Data processing method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114639113A true CN114639113A (en) | 2022-06-17 |
Family
ID=81944459
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011379910.7A Pending CN114639113A (en) | 2020-11-30 | 2020-11-30 | Data processing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114639113A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103605983A (en) * | 2013-10-30 | 2014-02-26 | 天津大学 | Remnant detection and tracking method |
CN104112335A (en) * | 2014-07-25 | 2014-10-22 | 北京机械设备研究所 | Multi-information fusion based fatigue driving detecting method |
CN105286890A (en) * | 2015-09-22 | 2016-02-03 | 江西科技学院 | Driver sleepy state monitoring method based on electroencephalogram signal |
CN108852771A (en) * | 2018-05-18 | 2018-11-23 | 湖北淇思智控科技有限公司 | The cloud control platform of intelligent massaging pillow |
CN110187765A (en) * | 2019-05-30 | 2019-08-30 | 努比亚技术有限公司 | Wearable device control method, wearable device and computer readable storage medium |
- 2020-11-30: Application CN202011379910.7A filed in China; published as CN114639113A (status: Pending)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107292242B (en) | Iris identification method and terminal | |
Huang | An advanced motion detection algorithm with video quality analysis for video surveillance systems | |
CN113128368B (en) | Method, device and system for detecting character interaction relationship | |
Sengar et al. | Detection of moving objects based on enhancement of optical flow | |
CN111936990A (en) | Method and device for waking up screen | |
CN105049790A (en) | Video monitoring system image acquisition method and apparatus | |
Xin et al. | A self-adaptive optical flow method for the moving object detection in the video sequences | |
CN107944381B (en) | Face tracking method, face tracking device, terminal and storage medium | |
CN111505632A (en) | Ultra-wideband radar action attitude identification method based on power spectrum and Doppler characteristics | |
US11842542B2 (en) | System and method for abnormal scene detection | |
CN103955682A (en) | Behavior recognition method and device based on SURF interest points | |
CN109816694A (en) | Method for tracking target, device and electronic equipment | |
CN110910416A (en) | Moving obstacle tracking method and device and terminal equipment | |
Zhong et al. | A general moving detection method using dual-target nonparametric background model | |
Hou et al. | A lightweight framework for abnormal driving behavior detection | |
Liu et al. | SETR-YOLOv5n: A lightweight low-light lane curvature detection method based on fractional-order fusion model | |
Qin et al. | Multiscale random projection based background suppression of infrared small target image | |
CN117056786A (en) | Non-contact stress state identification method and system | |
Song et al. | Behavior Recognition of the Elderly in Indoor Environment Based on Feature Fusion of Wi-Fi Perception and Videos | |
CN102073878A (en) | Non-wearable finger pointing gesture visual identification method | |
CN102890822B (en) | Device with function of detecting object position, and detecting method of device | |
Liu | RETRACTED: Beach sports image detection based on heterogeneous multi-processor and convolutional neural network | |
CN114639113A (en) | Data processing method and device, electronic equipment and storage medium | |
CN117132482A (en) | Intelligent image algorithm method based on infrared thermal imaging | |
Wang et al. | Image haze removal using a hybrid of fuzzy inference system and weighted estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||