WO2020098119A1 - Acceleration recognition method and apparatus, computer device and storage medium

Acceleration recognition method and apparatus, computer device and storage medium

Info

Publication number
WO2020098119A1
WO2020098119A1 (PCT/CN2018/125572, CN2018125572W)
Authority
WO
WIPO (PCT)
Prior art keywords
acceleration
information
target user
preset
user
Prior art date
Application number
PCT/CN2018/125572
Other languages
English (en)
Chinese (zh)
Inventor
黄章成
王健宗
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020098119A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • Embodiments of the present application relate to the field of model algorithms, and in particular, to an acceleration recognition method, device, computer equipment, and storage medium.
  • Image recognition refers to the technology of using computers to process, analyze, and understand images in order to recognize targets and objects of various patterns.
  • In existing solutions, an industrial camera is used to take pictures, and software then performs further recognition processing based on the gray-scale differences in the pictures.
  • Embodiments of the present application provide a convenient acceleration recognition method, device, computer device, and storage medium for judging user behavior by detecting user acceleration.
  • A technical solution adopted by the embodiments of the present application is to provide an acceleration recognition method, including the following steps: acquiring a plurality of acceleration information of a target user; generating an acceleration array matrix from the plurality of acceleration information according to a preset array generation rule; inputting the acceleration array matrix into a preset acceleration judgment model, wherein the acceleration judgment model is a neural network model pre-trained to convergence for judging the user behavior characterized by the acceleration; and obtaining a classification result output by the acceleration judgment model, wherein the classification result is the user behavior of the target user.
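  • As a rough, minimal sketch of these four steps in Python (assuming numpy, a fill-and-pad array generation rule, and a pre-trained classifier exposing a predict method; all names here are illustrative, not taken from the application):

```python
import numpy as np

def recognize_behavior(accelerations, model, rows, cols):
    """Map raw acceleration readings from a wearable device to a behavior label.

    accelerations: iterable of acceleration values acquired from the sensor
    model: pre-trained classifier exposing predict(matrix) -> behavior label
    rows, cols: shape of the preset acceleration array matrix
    """
    # Step 1: the acceleration information has already been acquired from the sensor.
    values = list(accelerations)

    # Step 2: generate the acceleration array matrix with a preset rule
    # (here: fill in acquisition order and pad the tail with zeros).
    padded = values + [0.0] * (rows * cols - len(values))
    matrix = np.asarray(padded, dtype=np.float32).reshape(rows, cols)

    # Steps 3 and 4: feed the matrix to the preset acceleration judgment model
    # and return its classification result, i.e. the target user's behavior.
    return model.predict(matrix)
```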
  • The embodiments of the present application further provide an acceleration recognition device, including: an acquisition module for acquiring a plurality of acceleration information of a target user; a generation module for generating an acceleration array matrix from the plurality of acceleration information according to a preset array generation rule; a processing module for inputting the acceleration array matrix into a preset acceleration judgment model, wherein the acceleration judgment model is a neural network model pre-trained to convergence for judging the user behavior characterized by the acceleration; and an execution module for obtaining a classification result output by the acceleration judgment model, wherein the classification result is the user behavior of the target user.
  • The embodiments of the present application further provide a computer device, including a memory and a processor, wherein the memory stores computer-readable instructions that, when executed by the processor, cause the processor to execute the steps of the acceleration recognition method described above.
  • The embodiments of the present application further provide a storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to execute the steps of the acceleration recognition method described above.
  • In the embodiments of the present application, the acceleration information of the target user is collected through a wearable device. Because the wearable device can collect the user's acceleration information in real time, does not take up the user's space, and is inexpensive, it is well suited to detecting and judging the user's behavior.
  • The collected acceleration information is converted into an acceleration array matrix, and the acceleration array matrix is then input into a neural network model trained to convergence; the user's behavior is recognized through the classification result of the neural network model. Because the acceleration information of the target user is collected quickly and conveniently, and the neural network model can quickly and accurately classify the user behavior represented by the acceleration array matrix, the cost of user behavior recognition is reduced and the user behavior can be judged in real time.
  • FIG. 1 is a schematic flowchart of an acceleration recognition method according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of obtaining multiple acceleration information according to an embodiment of the present application
  • FIG. 3 is a schematic flowchart of generating an acceleration array matrix according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of sending warning information according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of voice rescue according to an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of training an acceleration judgment model according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a basic structure of an acceleration recognition device according to an embodiment of the present application.
  • FIG. 8 is a block diagram of a basic structure of a computer device according to an embodiment of the present application.
  • The terms "terminal" and "terminal device" used herein include both devices that have only a wireless signal receiver without transmitting capability and devices that have hardware for both receiving and transmitting.
  • Such devices may include: cellular or other communication devices with a single-line or multi-line display, or without a display; PCS (Personal Communications Service) devices, which may combine voice, data processing, fax, and/or data communication capabilities; PDAs (Personal Digital Assistants), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar, and/or a GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices that have and/or include a radio-frequency receiver.
  • terminal and “terminal equipment” may be portable, transportable, installed in a vehicle (aeronautical, maritime, and / or terrestrial), or adapted and / or configured to operate locally, and / or In a distributed form, it operates at any other location on the earth and / or space.
  • the "terminal” and “terminal device” used herein may also be a communication terminal, an Internet terminal, a music / video playback terminal, for example, may be a PDA, MID (Mobile Internet Device), and / or have music / video playback
  • Functional mobile phones can also be smart TVs, set-top boxes and other devices.
  • FIG. 1 is a schematic diagram of the basic process of the acceleration recognition method of this embodiment.
  • an acceleration recognition method includes the following steps:
  • the acceleration information of the target user is obtained through an acceleration sensor in the wearable device worn by the user.
  • Wearable devices may include (but are not limited to): wrist-worn devices (such as watches and wristbands), foot-worn devices (such as shoes, socks, or other products worn on the legs), and head-worn devices (such as glasses, helmets, and headbands), as well as other product forms such as smart clothing, school bags, crutches, and accessories.
  • the wearable devices are all integrated with an acceleration sensor.
  • the wearable device can obtain the acceleration of the target user through an external acceleration sensor.
  • the external acceleration sensor and the wearable device can communicate through a wired connection or a wireless connection.
  • the target user is the person wearing the wearable device.
  • The acceleration information can also be obtained through a mobile terminal carried by the user, for example, a device such as a smartphone or a tablet.
  • the number of acceleration information can be: 2, 3, 4, 5, or more.
  • an acceleration array matrix is generated based on the acceleration information.
  • The number of rows and columns of the acceleration array matrix is related to the number of pieces of acceleration information. For example, when there are four pieces of acceleration information, an array matrix of 2 rows * 2 columns is used; when there are nine pieces of acceleration information, an array matrix of 3 rows * 3 columns is used. In some optional embodiments, the product of the number of rows and columns of the acceleration array matrix is greater than the number of pieces of acceleration information. For example, when there are seven pieces of acceleration information, an array matrix of 2 rows * 4 columns is used; in this case, the empty matrix element needs to be filled by generating a random number or using the number 0.
  • The array generation rule can be (but is not limited to): filling the acceleration information into the acceleration array matrix in sequence according to acquisition time, filling the acceleration information into the acceleration array matrix by random extraction, or starting from the middle value and filling the acceleration information outward toward both sides, etc.
  • the acceleration judgment model is a neural network model that is pre-trained to converge to judge the user behavior represented by the acceleration.
  • In this embodiment, the acceleration judgment model is a convolutional neural network model (CNN), but the acceleration judgment model can also be a deep neural network model (DNN), a recurrent neural network model (RNN), or a variant of the above three network models.
  • the neural network model has great advantages for large-scale image recognition.
  • When processing an image, a neural network model extracts the scalar pixel values of the image, converts the image into an array matrix, extracts the image features digitally characterized by that array matrix, and finally generates an image classification result. Therefore, as long as the multiple acceleration values generated by the user's wearable device over a time period are written into an array matrix and that acceleration array matrix is input into the neural network model, the model can convolve the acceleration features and finally judge the user behavior by classification.
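  • The application does not disclose a concrete network architecture, so the following is only a toy illustration of how a convolutional model could map a 3x3 acceleration array matrix to a behavior class; the layer sizes and class names are assumptions:

```python
import torch
import torch.nn as nn

BEHAVIORS = ["static", "running", "riding", "fall", "swimming"]  # example classes

class AccelerationJudgmentModel(nn.Module):
    """Toy CNN that classifies a 3x3 acceleration array matrix."""

    def __init__(self, num_classes=len(BEHAVIORS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=2),   # 1x3x3 -> 8x2x2 feature maps
            nn.ReLU(),
        )
        self.classifier = nn.Linear(8 * 2 * 2, num_classes)

    def forward(self, x):                     # x: (batch, 1, 3, 3)
        x = self.features(x)
        x = x.flatten(start_dim=1)
        return self.classifier(x)             # raw class scores

# Example: classify one acceleration array matrix (random stand-in data).
model = AccelerationJudgmentModel()
matrix = torch.randn(1, 1, 3, 3)
behavior = BEHAVIORS[model(matrix).argmax(dim=1).item()]
```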
  • the deep learning of user acceleration enables the neural network model to recognize different user behaviors.
  • The user behavior can be (but is not limited to) various forms of motion behavior, such as being relatively still, running, riding a vehicle, falling, or swimming.
  • the classification result of the neural network model is set to different behaviors of the user.
  • After the acceleration array matrix is input into the acceleration judgment model, the model convolves and classifies the acceleration array matrix and then outputs the classification result.
  • the classification result is the user behavior represented by the acceleration data matrix determined by the neural network model.
  • The above embodiment collects the acceleration information of the target user through the wearable device. Because the wearable device can collect the user's acceleration information in real time, does not take up the user's space, and is inexpensive, it is well suited to detecting and judging the user's behavior.
  • The collected acceleration information is converted into an acceleration array matrix, and the acceleration array matrix is then input into a neural network model trained to convergence; the user's behavior is recognized through the classification result of the neural network model. Because the acceleration information of the target user is collected quickly and conveniently, and the neural network model can quickly and accurately classify the user behavior represented by the acceleration array matrix, the cost of user behavior recognition is reduced and the user behavior can be judged in real time.
  • People tend to become ill as they age, and many elderly people lose consciousness when illness strikes (for example, due to hypoglycemia, heart disease, or cerebral hemorrhage).
  • In the prior art, most algorithms for detecting falls of elderly people are camera-based.
  • A foreground extraction method is mainly used to obtain the human body contour, and whether the elderly person has fallen is then judged by an image classification method.
  • the use of fixed cameras means that monitoring equipment needs to be installed in every independent space in the home to ensure comprehensive monitoring of the elderly.
  • the use of mobile cameras, such as robots, to track and shoot the elderly in real time also has the problems of battery life and high cost.
  • In contrast, in the above embodiment, whether the user has fallen is judged by means of the acceleration array matrix and the acceleration judgment model.
  • the wearable device used for determining whether the user has fallen is a wrist watch. However, it is not limited to this. Depending on the specific application scenario, the wearable device used can be of other types.
  • When the wearable device detects that the user's acceleration has changed, four pieces of acceleration information are obtained in succession; the number of pieces of acceleration information obtained is not limited to this, and a different number can be obtained depending on the specific application scenario.
  • By collecting the user's acceleration and then inputting the acceleration information into the acceleration judgment model, whether the user has fallen is judged. Since the acceleration sensor is small and its detection is not limited by orientation, the acceleration sensor integrated into the wristwatch can obtain the user's acceleration information in real time and, through the stored acceleration judgment model, quickly judge whether the user has fallen. The deployment cost is very low and the judgment accuracy is high, genuinely meeting the technical requirements of continuous monitoring and real-time judgment.
  • In some embodiments, the acquisition of the user's acceleration is not performed continuously in real time.
  • FIG. 2 is a schematic flowchart of obtaining multiple acceleration information in this embodiment.
  • step 1100 also includes the following steps:
  • The target user wears the wearable device, which detects the user's motion acceleration in real time.
  • When the acceleration of the target user is 0, the processor of the wearable device does not process the acceleration.
  • the wearable device detects that the acceleration of the target user has changed.
  • There is a set time interval between successive motion acceleration acquisitions. For example, when a change in acceleration is detected, four motion acceleration values are acquired within 2 s, one every 0.5 s (this interval is not limiting). It should be pointed out that the time interval between acquisitions and the number of motion accelerations can be set according to the specific application scenario: the time interval can be any value greater than 0 and less than 10 s, and the number of motion accelerations acquired can be any value greater than 2.
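  • A sketch of this acquisition step, assuming a hypothetical read_acceleration() helper that returns the sensor's current scalar reading (the helper, the change threshold, and the polling behavior are assumptions; the interval and count are the example values above):

```python
import time

SAMPLE_INTERVAL_S = 0.5   # any value greater than 0 and less than 10 s
SAMPLE_COUNT = 4          # any value greater than 2

def acquire_on_change(read_acceleration, threshold=0.0):
    """Wait until the acceleration changes, then sample SAMPLE_COUNT values."""
    baseline = read_acceleration()

    # Idle until the wearable detects a change in the target user's acceleration.
    while abs(read_acceleration() - baseline) <= threshold:
        time.sleep(SAMPLE_INTERVAL_S)

    # Acquire the motion accelerations one by one at the preset interval.
    samples = []
    for _ in range(SAMPLE_COUNT):
        samples.append(read_acceleration())
        time.sleep(SAMPLE_INTERVAL_S)
    return samples
```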
  • FIG. 3 is a schematic diagram of a process of generating an acceleration array matrix in this embodiment.
  • step S1200 also includes:
  • the matrix template is an array matrix whose elements are all 0.
  • the matrix template is composed of rows and columns, and the number of rows and columns constituting the matrix template is related to the amount of acceleration information. For example, when there are four pieces of acceleration information, an array matrix of 2 rows * 2 columns is used; when the number of pieces of acceleration information is 9, an array matrix of 3 rows * 3 columns is used.
  • In some optional embodiments, the product of the number of rows and columns of the acceleration array matrix is greater than the number of pieces of acceleration information. For example, when there are seven pieces of acceleration information, an array matrix of 2 rows * 4 columns is used; in this case, the empty matrix element needs to be filled by generating a random number or using the number 0.
  • S1202 Write the plurality of motion accelerations into the matrix template in order of acquisition time to generate the acceleration array matrix.
  • a plurality of motion accelerations are sequentially written into the matrix template in the order of acquisition time.
  • The order of writing can be row-by-row or column-by-column.
  • An acceleration array matrix is thus generated. Since the changes in acceleration are obtained along the time axis, there is an inherent connection between the values in the sequence. Through deep learning, the neural network converts this connection into weight parameters, amplifying the weights of important data in the array matrix and reducing the weights of unnecessary data, thereby achieving the purpose of feature extraction. In other words, through deep learning the neural network learns the logical relationships between the numbers in the array matrix, and these logical relationships are ultimately expressed through the weights.
  • the regularly arranged data can enable the neural network model to learn to a convergence state faster, and the accuracy of judgment for this type of data will also be greatly improved.
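  • Steps S1201 and S1202 can be sketched with numpy as follows; the square-ish shape heuristic reproduces the 2 rows * 2 columns, 3 rows * 3 columns, and 2 rows * 4 columns examples above, zero padding is one of the two filling options mentioned, and the function name is illustrative:

```python
import math
import numpy as np

def build_acceleration_matrix(accelerations, by_rows=True):
    """Write accelerations into an all-zero matrix template in acquisition order."""
    n = len(accelerations)
    rows = int(math.floor(math.sqrt(n)))          # 4 -> 2x2, 9 -> 3x3, 7 -> 2x4
    cols = int(math.ceil(n / rows))
    template = np.zeros((rows, cols), dtype=np.float32)   # matrix template of zeros

    order = "C" if by_rows else "F"               # row-by-row or column-by-column
    flat = template.flatten(order=order)
    flat[:n] = accelerations                      # any empty tail elements stay 0
    return flat.reshape((rows, cols), order=order)

# Example: seven readings fill a 2x4 template, leaving one zero element.
matrix = build_acceleration_matrix([0.1, 0.4, 1.2, 3.5, 2.0, 0.8, 0.2])
```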
  • FIG. 4 is a schematic flowchart of sending warning information according to this embodiment.
  • After step S1400, the following steps are also included:
  • The wearable device carried by the user has an external communication function and can communicate with external terminals through a communication method such as a cellular network or a WiFi network.
  • The communication information of the associated terminal needs to be written into the wearable device when the device is used for the first time or during the configuration process.
  • the communication information can be (not limited to): phone number, e-mail address or account information of an instant messaging account, etc.
  • S1412 Send preset warning information to the associated terminal according to the communication information to remind the user of the associated terminal that the target user has fallen.
  • After obtaining the communication information of the associated terminal, the wearable device sends warning information to the associated terminal using that communication information.
  • For example, the content of the warning message is: "User XX has fallen, please provide emergency assistance."
  • The content of the warning information is not limited to this, and it can be adapted to different application environments. For example, when a wearable device is used to detect whether a child is jumping from a dangerous height, the warning content can be adjusted to: "User XX is engaging in dangerous behavior, please intervene to stop it."
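  • The application does not specify a transport for this warning, so the sketch below leaves the dispatch step as a hypothetical callback (for example, an SMS, e-mail, or instant-messaging send function supplied by the platform):

```python
WARNING_TEMPLATE = "User {user} has fallen, please provide emergency assistance."

def notify_associated_terminal(target_user, contact, send_message):
    """Send the preset warning information to the associated terminal.

    contact: preset communication information of the associated terminal
             (phone number, e-mail address, or instant-messaging account)
             written during first use or configuration.
    send_message: hypothetical transport callback, e.g. dispatched over a
             cellular or WiFi connection; injected to stay transport-agnostic.
    """
    warning = WARNING_TEMPLATE.format(user=target_user)
    send_message(contact, warning)
```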
  • In some embodiments, emergency help information is also sent to the nearest hospital or medical center to request treatment.
  • In some embodiments, the wearable device obtains the user's case information and broadcasts the user's disease name and emergency treatment method in real time to attract the attention of people nearby and, at the same time, teach them the rescue method, ensuring that the target user can receive effective treatment before rescuers arrive. Please refer to FIG. 5, which is a schematic flowchart of the voice rescue process of this embodiment.
  • After step S1400, the following steps are also included:
  • When it is detected that the user has fallen, the wearable device obtains the user's case information stored locally.
  • the method of obtaining user cases is not limited to obtaining locally.
  • the case information can be obtained by accessing the corresponding server. For example, access to the hospital's case information database through user identity information.
  • the case information includes the name of the disease that the user has.
  • For example, the emergency treatment method obtained for symptoms of hypoglycemia is to feed the target user sugar to quickly raise their blood glucose.
  • Each disease has its corresponding emergency treatment method, and the corresponding emergency treatment method can be obtained in the corresponding database by the disease name.
  • S1423 Play the emergency treatment method through the voice, so that other people in the surrounding environment of the target user can learn the rescue knowledge.
  • The emergency treatment method is played by voice to attract the attention of people nearby and, at the same time, teach them the rescue method, ensuring that the target user can receive effective treatment before rescuers arrive.
  • the acceleration judgment model can only make accurate judgments on user behavior when it is trained to a converged state. Please refer to FIG. 6.
  • FIG. 6 is a schematic flowchart of training an acceleration judgment model in this embodiment.
  • the training method of the acceleration judgment model includes the following steps:
  • the training sample data is composed of an acceleration array matrix and classification reference information that marks the acceleration array matrix.
  • The classification reference information is the manual judgment that people make on the training sample data, according to the training objective of the neural network model and based on universal judgment standards and the factual state; in other words, it is the expected target of the neural network model's output. For example, if the acceleration array matrix in a training sample was generated from acceleration collected when a user fell, it is manually labeled so that the classification reference information represented by that acceleration array matrix is "fall". By the same principle, each piece of training sample data is labeled with classification reference information according to the user's behavior at the time of collection.
  • the training sample set is input into the neural network model in sequence.
  • the model first extracts the features in the acceleration array matrix, and then calculates the classification result of the acceleration array matrix according to the weights, that is, outputs the classification judgment information of the acceleration array matrix.
  • As training proceeds, the features extracted by the model come closer to the acceleration trends that distinguish a fall. That is, as training continues, the convolutional-layer weights associated with accelerations that cannot characterize a fall, or that characterize other behaviors, are adjusted so that, when convolutional feature extraction is performed, the extracted cluster centers are concentrated around the acceleration features described above, which improves the degree of discrimination and the accuracy of classification.
  • The classification judgment information is the excitation data output by the neural network model according to the input acceleration array matrix.
  • the classification judgment information is a numerical value with a large discreteness.
  • the loss function is used to calculate whether the expected output is consistent with the excitation output.
  • The loss function is a detection function used in the neural network model to detect whether the model's classification judgment information is consistent with the expected classification reference information.
  • When the output result is inconsistent with the expectation, the weights in the neural network model need to be corrected so that the output result of the neural network model matches the expected classification reference information.
  • Specifically, the weights in the neural network model are corrected according to the back-propagation algorithm, so that the output result of the neural network model matches the expected classification reference information.
  • multiple training samples are used for training (for example, 100,000 acceleration array matrices).
  • When the rate at which the classification data output by the neural network model matches the classification reference information of each training sample reaches (for example, but not limited to) 99%, training ends.
  • the neural network model trained to the convergence state is the acceleration judgment model.
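  • A compact sketch of this training procedure (shown in PyTorch; the loss comparison, back-propagation weight correction, and the 99% stopping criterion follow the description above, while the optimizer, learning rate, and epoch cap are assumptions):

```python
import torch
import torch.nn as nn

def train_acceleration_model(model, samples, labels, max_epochs=1000, target_acc=0.99):
    """Train until the classification accuracy reaches the target (e.g. 99%).

    samples: tensor of acceleration array matrices, shape (N, 1, rows, cols)
    labels:  tensor of classification reference information, shape (N,)
    """
    loss_fn = nn.CrossEntropyLoss()          # compares output with reference info
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(max_epochs):
        optimizer.zero_grad()
        judgments = model(samples)           # classification judgment information
        loss = loss_fn(judgments, labels)
        loss.backward()                      # back-propagation
        optimizer.step()                     # correct the weights

        accuracy = (judgments.argmax(dim=1) == labels).float().mean().item()
        if accuracy >= target_acc:           # training ends at e.g. 99% correct
            break
    return model
```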
  • embodiments of the present application also provide an acceleration recognition device.
  • FIG. 7 is a schematic diagram of the basic structure of the acceleration recognition device of this embodiment.
  • an acceleration recognition device includes: an acquisition module 2100, a generation module 2200, a processing module 2300, and an execution module 2400.
  • the acquisition module 2100 is used to acquire multiple acceleration information of the target user
  • the generation module 2200 is used to generate an acceleration array matrix according to a preset array generation rule and multiple acceleration information
  • The processing module 2300 is used to input the acceleration array matrix into a preset acceleration judgment model, where the acceleration judgment model is a neural network model pre-trained to convergence for judging the user behavior represented by acceleration
  • The execution module 2400 is used to obtain the classification result output by the acceleration judgment model, where the classification result is the user behavior of the target user.
  • The acceleration recognition device collects the acceleration information of the target user through the wearable device. Because the wearable device can collect the user's acceleration information in real time, does not take up the user's space, and is inexpensive, it is well suited to detecting and judging the user's behavior.
  • The collected acceleration information is converted into an acceleration array matrix, and the acceleration array matrix is then input into a neural network model trained to convergence; the user's behavior is recognized through the classification result of the neural network model. Because the acceleration information of the target user is collected quickly and conveniently, and the neural network model can quickly and accurately classify the user behavior represented by the acceleration array matrix, the cost of user behavior recognition is reduced and the user behavior can be judged in real time.
  • the classification result is whether the target user has fallen
  • the acceleration judgment model is a neural network model that is pre-trained to convergence and used to determine whether the target user has fallen based on multiple acceleration information.
  • the acceleration recognition device further includes: a first detection submodule and a first execution submodule.
  • the first detection sub-module is used to detect whether the movement acceleration of the target user changes; the first execution sub-module is used to sequentially acquire multiple movement accelerations according to a preset time interval when the movement acceleration changes.
  • the array generation rule is that multiple motion accelerations are written into a preset matrix template in sequence according to the order of acquisition time
  • the acceleration recognition device further includes: a first acquisition submodule and a second execution submodule .
  • the first acquisition submodule is used to acquire a preset matrix template; the second execution submodule is used to sequentially write a plurality of motion accelerations into the matrix template in the order of acquisition time to generate an acceleration array matrix.
  • When the classification result is that the target user has fallen, the acceleration recognition device further includes: a second acquisition submodule and a third execution submodule.
  • the second obtaining submodule is used to obtain preset communication information of the associated terminal;
  • the third executing submodule is used to send preset warning information to the associated terminal according to the communication information to remind the user of the associated terminal that the target user has fallen.
  • When the classification result is that the target user has fallen, the acceleration recognition device further includes: a third acquisition submodule, a first processing submodule, and a fourth execution submodule.
  • the third obtaining sub-module is used to obtain the target user's case information
  • the first processing sub-module is used to obtain the corresponding emergency treatment method according to the case information
  • The fourth execution submodule is used to play the emergency treatment method by voice so that other people in the target user's surroundings learn the rescue knowledge.
  • the acceleration recognition device further includes: a fourth acquisition submodule, a second processing submodule, a first comparison submodule, and a fifth execution submodule.
  • the fourth acquisition submodule is used to acquire training sample data marked with classification reference information, wherein the training sample data includes multiple sets of acceleration array matrices
  • The second processing submodule is used to input the training sample data into the neural network model to obtain the classification judgment information of the training sample data
  • the first comparison submodule is used to compare whether the classification reference information of the training sample data is consistent with the classification judgment information
  • The fifth execution submodule is used, when the classification reference information and the classification judgment information are inconsistent, to repeatedly and iteratively update the weights in the neural network model until the comparison results are consistent.
  • FIG. 8 is a block diagram of the basic structure of the computer device of this embodiment.
  • the computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected through a system bus.
  • the non-volatile storage medium of the computer device stores an operating system, a database, and computer-readable instructions.
  • the database may store a sequence of control information.
  • When the computer-readable instructions are executed by the processor, the processor can implement an acceleration recognition method.
  • the processor of the computer device is used to provide calculation and control capabilities, and support the operation of the entire computer device.
  • The memory of the computer device may store computer-readable instructions which, when executed by the processor, cause the processor to execute an acceleration recognition method.
  • the network interface of the computer device is used to connect and communicate with the terminal.
  • FIG. 8 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • the processor is used to execute the specific functions of the acquisition module 2100, the generation module 2200, the processing module 2300, and the execution module 2400 in FIG. 7, and the memory stores the program codes and various types of data required to execute the above modules.
  • the network interface is used for data transmission between user terminals or servers.
  • The memory in this embodiment stores the program code and data required to execute all submodules of the acceleration recognition device, and the server can call the program code and data to execute the functions of all submodules.
  • The computer device collects the acceleration information of the target user through the wearable device. Because the wearable device can collect the user's acceleration information in real time, does not take up the user's space, and is inexpensive, it is well suited to detecting and judging the user's behavior.
  • The collected acceleration information is converted into an acceleration array matrix, and the acceleration array matrix is then input into a neural network model trained to convergence; the user's behavior is recognized through the classification result of the neural network model. Because the acceleration information of the target user is collected quickly and conveniently, and the neural network model can quickly and accurately classify the user behavior represented by the acceleration array matrix, the cost of user behavior recognition is reduced and the user behavior can be judged in real time.
  • the present application also provides a storage medium storing computer-readable instructions.
  • When the computer-readable instructions are executed by one or more processors, the one or more processors execute the steps of the acceleration recognition method in any of the foregoing embodiments.
  • the computer program may be stored in a computer-readable storage medium. When executed, it may include the processes of the foregoing method embodiments.
  • the aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Abstract

Embodiments of the present application relate to an acceleration recognition method and apparatus, a computer device, and a storage medium, the method comprising the following steps: acquiring a plurality of acceleration information of a target user; generating an acceleration array matrix according to a preset array generation rule and the plurality of acceleration information; inputting the acceleration array matrix into a preset acceleration judgment model; and acquiring a classification result output by the acceleration judgment model. The present application collects the acceleration information of a target user by means of a wearable device. Since a wearable device can acquire a user's acceleration information in real time without taking up the user's space and is inexpensive, it is well suited to detecting and determining user behavior. The present application converts the collected acceleration information into an acceleration array matrix, then inputs the acceleration array matrix into a neural network model that has been trained to convergence, and recognizes user behavior by means of the classification result of the neural network model.
PCT/CN2018/125572 2018-11-13 2018-12-29 Acceleration recognition method and apparatus, computer device and storage medium WO2020098119A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811347740.7A CN109670527A (zh) 2018-11-13 2018-11-13 加速度识别方法、装置、计算机设备及存储介质
CN201811347740.7 2018-11-13

Publications (1)

Publication Number Publication Date
WO2020098119A1 (fr)

Family

ID=66142438

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/125572 WO2020098119A1 (fr) 2018-11-13 2018-12-29 Acceleration recognition method and apparatus, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN109670527A (fr)
WO (1) WO2020098119A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221621A (zh) * 2021-02-04 2021-08-06 宁波卫生职业技术学院 一种基于深度学习的重心监测与识别方法

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241896A (zh) * 2019-07-18 2021-01-19 百度在线网络技术(北京)有限公司 信息推送方法、装置、设备及计算机可读介质
CN113065780B (zh) * 2021-04-09 2023-06-30 平安国际智慧城市科技股份有限公司 任务分配方法、装置、存储介质和计算机设备
CN115530774B (zh) * 2021-06-30 2024-03-26 荣耀终端有限公司 癫痫检测方法和装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500342A (zh) * 2013-09-18 2014-01-08 华南理工大学 一种基于加速度计的人体行为识别方法
CN106846729A (zh) * 2017-01-12 2017-06-13 山东大学 一种基于卷积神经网络的跌倒检测方法和系统
CN107153871A (zh) * 2017-05-09 2017-09-12 浙江农林大学 基于卷积神经网络和手机传感器数据的跌倒检测方法
US20180157973A1 (en) * 2016-12-04 2018-06-07 Technion Research & Development Foundation Limited Method and device for a computerized mechanical device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500342A (zh) * 2013-09-18 2014-01-08 华南理工大学 一种基于加速度计的人体行为识别方法
US20180157973A1 (en) * 2016-12-04 2018-06-07 Technion Research & Development Foundation Limited Method and device for a computerized mechanical device
CN106846729A (zh) * 2017-01-12 2017-06-13 山东大学 一种基于卷积神经网络的跌倒检测方法和系统
CN107153871A (zh) * 2017-05-09 2017-09-12 浙江农林大学 基于卷积神经网络和手机传感器数据的跌倒检测方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221621A (zh) * 2021-02-04 2021-08-06 宁波卫生职业技术学院 一种基于深度学习的重心监测与识别方法
CN113221621B (zh) * 2021-02-04 2023-10-31 宁波卫生职业技术学院 一种基于深度学习的重心监测与识别方法

Also Published As

Publication number Publication date
CN109670527A (zh) 2019-04-23

Similar Documents

Publication Publication Date Title
WO2020098119A1 (fr) Procédé et appareil d'identification d'accélération, dispositif informatique et support de stockage
US11797084B2 (en) Method and apparatus for training gaze tracking model, and method and apparatus for gaze tracking
US11030917B2 (en) Wearable apparatus and method for monitoring posture
US10341544B2 (en) Determining a matching score between users of wearable camera systems
US20230140011A1 (en) Learning mode for context identification
US9554355B2 (en) Methods and systems for providing notifications based on user activity data
US10416740B2 (en) Upsampling sensors to auto-detect a fitness activity
US10028037B2 (en) Apparatus, method and computer program for enabling information to be provided to a user
US11504068B2 (en) Methods, systems, and media for predicting sensor measurement quality
KR20170071159A (ko) 이미지 관련 서비스를 제공하기 위한 방법, 저장 매체 및 전자 장치
JP2015057691A (ja) 行動認識のための方法、装置、およびコンピュータ・プログラム
CN112307855A (zh) 一种用户状态检测方法、装置、电子设备及存储介质
WO2023040731A1 (fr) Système et procédé de surveillance de posture d'utilisateur, et dispositif intelligent pouvant être porté
WO2021121226A1 (fr) Procédé et dispositif de prédiction d'un signal d'électrocardiographie, terminaux, et support de stockage
CN113689660B (zh) 可穿戴设备的安全预警方法、可穿戴设备
WO2019219414A1 (fr) Adaptation de périodes de silence pour messagerie numérique
Chen et al. Vision-Based Elderly Fall Detection Algorithm for Mobile Robot
US11869535B1 (en) Character-level emotion detection
CN116453005A (zh) 一种视频封面的提取方法以及相关装置
US11632456B1 (en) Call based emotion detection
Gutiérrez et al. Fall Detection System Based on Far Infrared Images
JP7036272B1 (ja) データ収集システム、データ収集方法及びデータ収集装置
CN107862013B (zh) 一种日程查找方法和移动终端
Weerasinghe et al. Predicting and Analyzing Human Daily Routine Using Machine Learning
Bhattacharjee et al. Smart fall detection systems for elderly care

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18940226

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 20.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18940226

Country of ref document: EP

Kind code of ref document: A1