CN115797868A - Behavior early warning method, system, device and medium for monitoring object - Google Patents


Info

Publication number
CN115797868A
Authority
CN
China
Prior art keywords
monitoring
behavior
early warning
coefficient
monitoring data
Prior art date
Legal status
Pending
Application number
CN202211598375.3A
Other languages
Chinese (zh)
Inventor
高井全
吴�琳
Current Assignee
Lishui University
Original Assignee
Lishui University
Priority date
Filing date
Publication date
Application filed by Lishui University filed Critical Lishui University
Publication of CN115797868A publication Critical patent/CN115797868A/en
Pending legal-status Critical Current

Landscapes

  • Alarm Systems (AREA)

Abstract

The present specification provides a behavior early warning method, system, device and medium for a monitored object. The method comprises: acquiring monitoring data collected by an acquisition device, wherein the monitoring data comprises at least one of image data and sound data; determining whether a monitoring object exists based on the monitoring data; in response to determining that the monitoring object exists, acquiring behavior characteristics of the monitored object and environmental characteristics of the monitored object based on the monitoring data; determining an early warning coefficient based on the behavior characteristics and the environmental characteristics; and sending an early warning notification in response to the early warning coefficient satisfying a preset condition.

Description

Behavior early warning method, system, device and medium for monitoring object
Cross-reference
This application claims priority to Chinese application No. 202211035476.X, filed on August 26, 2022, which is hereby incorporated by reference in its entirety.
Technical Field
The present disclosure relates to the field of behavior monitoring technologies, and in particular, to a method, a system, an apparatus, and a medium for behavior early warning of a monitored object.
Background
With the development of the social economy and the improvement of living standards, population aging has become increasingly prominent. China has entered an aging society and the trend is accelerating, so caring for elderly people who live alone has become a focus of public attention. Besides the elderly, the disabled and patients with limited mobility also require considerable medical resources and labor costs to receive adequate nursing and care. Therefore, how to nurse the elderly, the disabled, patients, or other monitored objects with limited mobility in a timely and effective manner is a problem that urgently needs to be solved.
Therefore, a widely applicable behavior early warning method is needed to intelligently care for groups in need, meet usage requirements in complex and changing environments, and improve the accuracy of behavior early warning.
Disclosure of Invention
One or more embodiments of the present specification provide a behavior early warning method for a monitored object, the method including: acquiring monitoring data collected by an acquisition device, wherein the monitoring data includes at least one of image data and sound data; determining whether a monitoring object exists based on the monitoring data; in response to determining that the monitoring object exists, acquiring behavior characteristics of the monitoring object and environmental characteristics of the monitoring object based on the monitoring data; determining an early warning coefficient based on the behavior characteristics and the environmental characteristics; and sending an early warning notification in response to the early warning coefficient satisfying a preset condition.
One or more embodiments of the present specification provide a behavior early warning system for a monitored object, the system including: a first acquisition module configured to acquire monitoring data collected by an acquisition device, the monitoring data including at least one of image data and sound data; a first determination module configured to determine whether a monitoring object exists based on the monitoring data; a second acquisition module configured to acquire, in response to determining that the monitoring object exists, behavior characteristics of the monitoring object and environmental characteristics of the monitoring object based on the monitoring data; a second determination module configured to determine an early warning coefficient based on the behavior characteristics and the environmental characteristics; and an early warning module configured to send an early warning notification in response to the early warning coefficient satisfying a preset condition.
One or more embodiments of the present specification provide a behavior early warning apparatus for monitoring a subject, including a processor configured to execute a behavior early warning method for monitoring a subject.
One or more embodiments of the present specification provide a computer-readable storage medium storing computer instructions, and when the computer reads the computer instructions in the storage medium, the computer executes a behavior early warning method for a monitored object.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is an exemplary block diagram of a behavioral early warning system for monitoring a subject, according to some embodiments of the present description;
FIG. 2 is an exemplary flow diagram of a method for behavioral forewarning of a monitored subject according to some embodiments of the present description;
FIG. 3 is an exemplary diagram of an object recognition model according to some embodiments of the present description;
FIG. 4 is an exemplary diagram of a behavior recognition model and an environment recognition model, shown in accordance with some embodiments of the present description;
fig. 5 is an exemplary flow chart of a method of determining an early warning coefficient according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the operations are not necessarily performed in the exact order shown. Rather, the steps may be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or more steps may be removed from them.
Fig. 1 is an exemplary block diagram of a behavior early warning system for monitoring a subject according to some embodiments of the present description. As shown in fig. 1, the behavior early warning system 100 for monitoring a subject may include a first acquisition module 110, a first determination module 120, a second acquisition module 130, a second determination module 140, and an early warning module 150.
The first obtaining module 110 may be configured to obtain monitoring data collected by a collecting device, where the monitoring data includes at least one of image data and sound data. For more description of the related terms and the acquisition method, refer to fig. 2 and its related description.
The first determination module 120 may be configured to determine whether a monitoring object is present based on the monitoring data. For more description of the related terms and the acquisition method, refer to fig. 2 and its related description.
The second obtaining module 130 may be configured to obtain, based on the monitoring data, a behavior characteristic of the monitoring object and an environmental characteristic of the monitoring object in response to the presence of the monitoring object. For more description of the related terms and the acquisition method, refer to fig. 2 and its related description.
The second determination module 140 may be configured to determine the pre-warning coefficients based on the behavioral and environmental characteristics. For more description of the related terms and the acquisition method, refer to fig. 2 and its related description.
The early warning module 150 may be configured to send an early warning notification in response to the early warning coefficient satisfying a preset condition. For more description of the related terms and the acquisition method, refer to fig. 2 and its related description.
In some embodiments, the first acquisition module, the first determination module, the second acquisition module, the second determination module and the early warning module disclosed in fig. 1 may be different modules in one system, or a single module may implement the functions of two or more of the modules described above. For example, the modules may share one storage module, or each module may have its own storage module. Such variations are within the scope of the present disclosure.
Fig. 2 is an exemplary flow diagram of a method for behavior early warning of a monitored object, according to some embodiments of the present description. In some embodiments, the process may be performed by the behavior early warning system 100 for the monitored object. As shown in fig. 2, the process 200 may include the following steps:
Step 210, acquiring monitoring data collected by an acquisition device, wherein the monitoring data includes at least one of image data and sound data.
The acquisition device may include a device for collecting monitoring data, and may include various devices and/or apparatuses for collecting image data and/or sound data. For example, the acquisition device may include, but is not limited to, a camera, a webcam, and the like. As another example, the acquisition device may include, but is not limited to, a sound sensor, a microphone, and the like.
The camera device may be any of a variety of devices for capturing images, including but not limited to a monitor, a camera, and/or a video camera. In some embodiments, the camera device may acquire an image containing motion information of the monitoring object, for example an image of an elderly person slipping. In some embodiments, the camera device may acquire images in various possible ways, including but not limited to continuous acquisition, timed acquisition, and the like. In some embodiments, there may be multiple camera devices placed at different positions to simultaneously acquire image information from different angles.
The monitoring data is data acquired by the acquisition device and may include, but is not limited to, image data and sound data. For example, image data may include, but is not limited to, pictures or video. As another example, sound data may include, but is not limited to, speech, object sounds, or device sounds.
In some embodiments, the monitoring data may be acquired by a variety of acquisition devices. For example, image data at different angles and directions may be acquired by cameras placed at multiple corners in a room; for another example, sound data of different areas may be acquired by sound sensors disposed in different rooms. For more description of the acquisition device, reference may be made to fig. 1 and its associated description.
Step 220, determining whether a monitoring object exists based on the monitoring data.
The monitoring object may refer to a target object whose behavior needs to be monitored and for which early warning may be needed. For example, the monitoring object may include, but is not limited to, an elderly person, a disabled person, or another person with limited mobility or who cannot take care of themselves in daily life.
In some embodiments, the presence or absence of the monitoring object may be determined by analyzing image data and sound data in the monitoring data. For example, one or more target objects requiring monitoring behavior may be determined in advance as the monitoring object, and when the target object is included in the image data acquired based on the acquisition means or when the sound of the target object is included in the sound data, it is determined that the monitoring object exists. For more details on determining whether a monitoring object exists based on image data, see the contents of other parts of this specification (e.g., fig. 3 and its related description).
In some embodiments, the sound of the target object may include a voice of the target object or a sound of a specific object. For example, the sound of the target object may be a sound of the target object speaking or coughing. Also for example, the sound of the target object may include a ring tone of a cellular phone carried by the target object, or the like.
The monitoring object may include people older than a certain age, patients, and/or people with impaired mobility. For example, the monitoring object may include, but is not limited to, a person older than 60 years of age; as another example, the monitoring object may be a disabled person with limited use of the legs and feet, or the like.
In some embodiments, the monitored object may be determined using a monitoring model, where the monitoring model may include a feature extraction layer and an output layer. For example, the monitoring model may include any one or combination of a convolutional neural network model, a deep neural network model, a recurrent neural network model, or other customized model structures, etc.
In some embodiments, the input to the feature extraction layer may include image data acquired by the camera device, and the output may include identification features. For example, identification features may include, but are not limited to, skin wrinkles, walking speed, degree of hunchback, hair color, external items, and the like, where the external items may include crutches, wheelchairs, and/or plaster casts, and the like.
In some embodiments, the input to the output layer may include an identification feature, and the output thereof may include the number of monitoring objects in the image data, for example, when the output is 0, it may indicate that there are no monitoring objects in the image data.
In some embodiments, the parameters of the feature extraction layer and the output layer of the monitoring model can be obtained by training with multiple sets of labeled training samples. A training sample may include sample image data, and the label of the training sample may include the monitoring objects and their number in the sample image data. The training samples can be obtained from historical data, and the labels can be obtained by manually annotating the sample image data. The multiple sets of labeled training samples are input into an initial monitoring model, a loss function is constructed based on the output of the initial monitoring model and the labels, and the parameters of the initial monitoring model are iteratively updated based on the loss function until the loss function satisfies a preset condition, for example, the loss function converges or its value falls below a preset value. When the loss function satisfies the preset condition, training ends and a trained monitoring model is obtained. The monitoring model has the same structure as the initial monitoring model. Trained in this way, the monitoring model learns from the labels how to identify the monitoring objects in the sample image data and determine their number, which helps ensure the accuracy of model training.
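As a rough illustration of this training procedure, the sketch below (PyTorch) trains a model with a feature extraction layer and an output layer on labeled image samples until the loss satisfies a preset condition. The network structure, layer sizes, data pipeline, and loss choice are assumptions for illustration, not the implementation of this specification.

```python
# Minimal sketch of the described training procedure; all sizes are assumptions.
import torch
import torch.nn as nn

class MonitoringModel(nn.Module):
    """Feature extraction layer followed by an output layer that predicts
    the number of monitoring objects in an image."""
    def __init__(self):
        super().__init__()
        self.feature_extraction = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(16 * 8 * 8, 64), nn.ReLU(),
        )
        self.output_layer = nn.Linear(64, 1)  # predicted count of monitoring objects

    def forward(self, images):
        return self.output_layer(self.feature_extraction(images))

def train(model, loader, epochs=10, lr=1e-3, tol=1e-3):
    """Iteratively update parameters until the loss meets a preset condition."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, labels in loader:  # labels: number of monitoring objects
            loss = loss_fn(model(images).squeeze(-1), labels.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        if loss.item() < tol:          # preset condition: loss below a preset value
            break
    return model
```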
In some embodiments, the output of the feature extraction layer may be the input of the output layer, and the feature extraction layer and the output layer may be obtained by joint training. For example, training sample data, namely sample image data, is input into the feature extraction layer to obtain the identification features output by the feature extraction layer; the identification features are then input into the output layer as training sample data to obtain the number of monitoring objects output by the output layer, and the output of the output layer is verified against the sample number of monitoring objects; verification data for the identification features output by the feature extraction layer is obtained by using the back-propagation property of the neural network model, and the feature extraction layer is trained with the verification data of the identification features as labels, until a trained feature extraction layer and a trained output layer are obtained.
As another example, the sample image data is input into the initial feature extraction layer and the sample identification features are input into the initial output layer, a loss function is constructed based on the labels and the results predicted by the initial output layer, and the parameters of the initial feature extraction layer and the initial output layer are updated simultaneously until a preset condition is satisfied, thereby obtaining the trained feature extraction layer and the trained output layer, and further the trained monitoring model. In some embodiments, the method of iteratively updating the model parameters may include a conventional model training method such as stochastic gradient descent.
In some embodiments of the present description, whether the image data contains the monitoring objects and the number of the monitoring objects can be determined accurately and quickly based on the monitoring model, so that analysis and processing can be performed in time, user requirements can be met, and early warning efficiency can be improved.
In some embodiments, the sound data may be further processed using a sound recognition model to determine whether the monitoring object is present, wherein the sound recognition model may include a sound feature extraction layer and a judgment layer. The sound identification model, the sound feature extraction layer and the judgment layer can be models obtained by a convolutional neural network or a deep neural network or a combination of the convolutional neural network and the deep neural network.
In some embodiments, the input to the sound feature extraction layer may comprise sound data, the output of which may comprise sound features; the input of the decision layer may include a sound feature, and the output thereof may include the presence or absence of the monitoring object. For example, when the judgment layer judges that the probability of the existence of the monitoring object is greater than a probability threshold, the existence of the monitoring object is output, wherein the probability threshold may be a manually preset value, such as 90%.
The sound feature may refer to a signal feature extracted based on sound data. For example, the sound features may include, but are not limited to, speech signal vectors or object sound wave signal vectors, etc.; for another example, the sound features may include energy features or spectral features derived based on sound data conversion, and the like; for another example, the voice characteristics may also include, but are not limited to, the gender and/or age of the person to whom the voice corresponds.
In some embodiments, the sound feature extraction layer may be obtained by building a judgment model and training it. The judgment model may include two embedding layers and one judgment layer. The two embedding layers, embedding layer 1 and embedding layer 2, may have the same initial parameters and share parameters, so that when the parameters are iteratively updated during training, the parameters of the two embedding layers are updated synchronously. The training sample data of the judgment model may be multiple samples belonging to several classes, where the classes may be sounds of monitoring objects and sounds of non-monitoring objects; the training label may indicate whether two training samples belong to the same class.
Two training samples of the same class are input into embedding layer 1 and embedding layer 2 respectively, the embedded feature 1 and embedded feature 2 output by embedding layer 1 and embedding layer 2 are input into the judgment layer, and a loss function is constructed based on the prediction output by the judgment layer and the label to update the parameters of the two embedding layers and the judgment layer, yielding a trained judgment model and thus a trained sound feature extraction layer.
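A minimal sketch of such a judgment model, assuming a Siamese-style structure in PyTorch in which one shared embedding module plays the role of both embedding layer 1 and embedding layer 2; all dimensions and the loss choice are assumptions for illustration.

```python
# Sketch of the judgment model: shared embedding layers plus a judgment layer
# that predicts whether a pair of sound samples belongs to the same class.
import torch
import torch.nn as nn

class JudgmentModel(nn.Module):
    def __init__(self, input_dim=128, embed_dim=32):
        super().__init__()
        # One module used for both inputs implements parameter sharing:
        # embedding layer 1 and embedding layer 2 update synchronously.
        self.embedding = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(), nn.Linear(64, embed_dim))
        self.judge = nn.Sequential(
            nn.Linear(2 * embed_dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, sound_a, sound_b):
        emb_1 = self.embedding(sound_a)   # embedded feature 1
        emb_2 = self.embedding(sound_b)   # embedded feature 2
        return self.judge(torch.cat([emb_1, emb_2], dim=-1))  # same-class logit

# Training uses pairs labeled 1 (same class) or 0 (different class), e.g.:
# loss = nn.BCEWithLogitsLoss()(model(a, b).squeeze(-1), same_class_label)
# After training, model.embedding serves as the sound feature extraction layer.
```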
In some embodiments, the output of the acoustic feature extraction layer may be used as the input of the judgment layer, and the acoustic feature extraction layer and the judgment layer may be obtained through joint training.
In some embodiments, the sample data of the joint training may include sample sound data, and the label may be whether the monitoring object exists, for example, the label is 1 if existing, and 0 if not existing. Sample data may be obtained based on historical data and the label may be determined by manual labeling. Inputting sample voice data into a voice feature extraction layer to obtain voice features output by the voice feature extraction layer; and inputting the voice features output by the voice feature extraction layer into the judgment layer as training sample data to obtain the probability of the existence of the monitoring object output by the judgment layer. Constructing a loss function based on the output results of the label and the judgment layer, and synchronously updating the judgment layer and the sound feature extraction layer; and obtaining the trained sound feature extraction layer and judgment layer through parameter updating.
In some embodiments of the present specification, by analyzing the sound data using the sound recognition model, it can be determined relatively quickly and accurately whether a monitoring object exists, so as to meet the use requirement.
In some embodiments, the first time the behavioral early warning system 100 of the monitored subject is used, the sound of the monitored subject may be entered into the system for subsequent identification and detection.
In some embodiments, the sound recognition model may further determine an age of a subject to which the sound belongs based on the sound feature output by the sound feature extraction layer, and determine the subject to which the sound belongs as the monitoring subject in response to the age being greater than a preset age threshold.
In some embodiments, the voice recognition model may further include a level determination layer, and the input of the level determination layer may include voice features output by the voice feature extraction layer, and the output of the level determination layer is a monitoring level. The sound characteristics may include, but are not limited to, energy characteristics, spectral characteristics, age and/or gender, etc. The level determination layer may be a model obtained by a convolutional neural network or a deep neural network, or a combination thereof, or the like.
The monitoring level may refer to a level at which a monitoring object is monitored. For example, the monitoring level may be represented by numerals 1 to 9, wherein a larger numeral indicates a higher monitoring level. It can be understood that the higher the corresponding monitoring grade of the monitored object is, the more frequently the monitored object needs to be monitored, and the higher the monitoring frequency is.
In some embodiments, when the judgment layer determines that no monitoring object exists, the output of the level determination layer is 0.
In some embodiments, the rank determination layer may be obtained by training a plurality of labeled training samples. Wherein, the training sample can comprise sample sound characteristics, and the label can be a monitoring grade. Training samples and labels may be obtained based on historical data. For example, a plurality of labeled training samples may be input into the initial rank determination layer, a loss function may be constructed based on the labels and the output results of the initial rank determination layer, and parameters of the initial rank determination layer may be iteratively updated by gradient descent or other methods based on the loss function. And when the preset conditions are met, the model training is finished, and a trained grade determining layer is obtained. The preset condition may be that the loss function converges, the number of iterations reaches a threshold, and the like.
In some embodiments, the output of the sound feature extraction layer may be used as the input of the level determination layer, and the sound feature extraction layer and the level determination layer may be obtained by joint training.
In some embodiments, the sample data of the joint training may include sample sound data, and the label may be a sample monitoring level. Sample data may be obtained based on historical data and the label may be determined by manual labeling. For more description of the joint training, the contents of the joint training of the acoustic feature extraction layer and the judgment layer can be referred to.
In some embodiments of the present description, different monitoring schemes can be applied to different monitored objects in a targeted manner by determining a monitoring level through a level determination layer added to a voice recognition model, so as to meet user requirements and improve monitoring efficiency.
Step 230, in response to determining that the monitoring object exists, obtaining behavior characteristics of the monitoring object and environmental characteristics of the monitoring object based on the monitoring data.
Behavioral characteristics may refer to characteristics related to behavioral information made by the monitoring subject. For example, behavioral features may include, but are not limited to, motion, expression, and other features extracted based on behavioral information.
The environmental characteristics may refer to characteristics related to environmental information of an environment in which the monitoring object is located. For example, the environmental features may include, but are not limited to, features extracted based on environmental information, such as environmental type and environmental space size.
The behavior information may refer to behavior data of the monitoring object determined based on the image data. For example, the behavioral information may include motion information and facial information of the monitored subject, wherein the motion information may include, but is not limited to, ambulation, twitching, trembling, covering the chest, etc., and the facial information may include, but is not limited to, painful expressions, mouth opening and/or facial distortion, etc.
The environmental information may refer to environmental data around the monitoring object determined based on the image data. For example, the environmental information may include an indoor environment, an outdoor environment, a kitchen environment, and/or a bedroom environment, among others.
In some embodiments, the behavioral information and environmental information may be determined based on the feature model. For example, the feature model may include any one or combination of a convolutional neural network model, a deep neural network model, a recurrent neural network model, or other custom model structure.
The input of the feature model may include image data acquired by the camera, and the output thereof may include behavior information and environment information.
In some embodiments, the feature model may be obtained based on training a feature extraction layer in the monitoring model. For more details on how the feature model is trained, see the relevant contents of the monitoring model with respect to model training.
In some embodiments, the behavioral and environmental characteristics may be obtained by analyzing image data and/or sound data in the monitoring data. For more details on how to obtain the behavior characteristics and the environmental characteristics, refer to the contents in other parts of the present specification (e.g., fig. 4 and its related contents).
Step 240, determining an early warning coefficient based on the behavior characteristics and the environmental characteristics.
The early warning coefficient may refer to a coefficient related to a behavior early warning. For example, the early warning coefficient may be a preset risk coefficient that changes with the environment and with the behavior of the monitored object. It can be understood that the early warning coefficient may be a quantitative indicator of whether the monitored object is possibly in danger: the larger the early warning coefficient, the more likely the monitored object is in a dangerous condition.
In some embodiments, the early warning coefficient may be determined by analyzing the behavior characteristics and the environmental characteristics. For example, different early warning coefficients may be set in advance for different behavior characteristics and environmental characteristics, and the early warning coefficient corresponding to the monitored object may then be determined by table lookup or by matching against preset rules. For example, a vector to be matched may be constructed from the current behavior characteristics and environmental characteristics, and reference vectors may be constructed from different predetermined behavior characteristics and environmental characteristics to form a reference database. The reference database includes multiple reference vectors and the early warning coefficient corresponding to each reference vector. The early warning coefficient corresponding to the vector to be matched can be determined by calculating the distance between the vector to be matched and each reference vector: a reference vector whose distance to the vector to be matched satisfies a preset condition is taken as the target vector, and the early warning coefficient corresponding to the target vector is taken as the early warning coefficient of the vector to be matched. The preset condition may be set according to circumstances, for example, that the vector distance is minimal or that the vector distance is below a distance threshold.
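A small sketch of the reference-vector matching described above, with made-up feature encodings and coefficient values purely for illustration.

```python
# Illustrative reference-vector matching; encodings and values are assumptions.
import numpy as np

# Reference database: each entry pairs a reference vector (behavior feature
# concatenated with environment feature) with a preset early warning coefficient.
reference_vectors = np.array([
    [1.0, 0.0, 0.0, 1.0],   # e.g. "fall" in "bedroom"
    [1.0, 0.0, 1.0, 0.0],   # e.g. "fall" in "kitchen"
    [0.0, 1.0, 1.0, 0.0],   # e.g. "walk" in "kitchen"
])
reference_coefficients = np.array([0.6, 0.9, 0.3])

def match_warning_coefficient(vector_to_match, max_distance=1.5):
    """Return the coefficient of the closest reference vector, provided the
    distance satisfies the preset condition (here: below a distance threshold)."""
    distances = np.linalg.norm(reference_vectors - vector_to_match, axis=1)
    best = int(np.argmin(distances))
    if distances[best] <= max_distance:
        return reference_coefficients[best]
    return None  # no reference vector is close enough

print(match_warning_coefficient(np.array([0.9, 0.1, 0.9, 0.1])))  # -> 0.9
```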
The early warning coefficient may refer to an early warning value related to the nursing risk of the monitored subject, and it is understood that the greater the early warning coefficient, the greater the possibility of danger occurring to the monitored subject.
In some embodiments, different behavior information and environment information may be given different early warning weights respectively, where the early warning weights may be used to reflect potential risk degrees of the different behavior information and environment information, and understandably, the larger the early warning weight is, the higher the risk degree is.
In some embodiments, the early warning weight may be determined using a weight model, where the weight model may include a first extraction layer, a second extraction layer, a behavioral weight output layer, and an environmental weight output layer. For example, the first extraction layer, the second extraction layer, the behavior weight output layer, and the environment weight output layer of the weight model may include any one or combination of a convolutional neural network model, a deep neural network model, a cyclic neural network model, or other customized model structures.
In some embodiments, the input to the first extraction layer may include behavioral information, the output of which may include behavioral characteristics; the input of the second extraction layer may include environmental information, the output of which may include environmental features; the input of the behavior weight output layer can comprise behavior characteristics and environment characteristics, and the output of the behavior weight output layer can comprise behavior weights; the input of the environment weight output layer may include environment characteristics and behavior characteristics, and the output thereof may include environment weights.
In some embodiments, the parameters of the first extraction layer, the second extraction layer, the behavior weight output layer and the environment weight output layer of the weight model can be obtained through joint training respectively. The sample data of the weight model joint training may include sample behavior information and sample environment information, and the labels may be a sample behavior weight and a sample environment weight, respectively. For more details on joint training, see the relevant contents of the monitoring model on model training.
In some embodiments, the early warning coefficient may be determined based on the behavior information, the environmental information, the behavior weight and the environment weight using a variety of possible methods. For example, the early warning coefficient may be calculated with a mathematical function according to the actual situation: different behavior risk coefficients and environment risk coefficients are assigned to different behavior information and environment information respectively, and the early warning coefficient may be calculated as: early warning coefficient = behavior risk coefficient × behavior weight + environment risk coefficient × environment weight.
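A worked example of this formula with hypothetical numbers; the actual risk coefficients and weights would come from the preset assignments or the weight model.

```python
# Hypothetical values purely for illustration of the weighted-sum formula.
behavior_risk, behavior_weight = 0.8, 0.7        # e.g. "fall"
environment_risk, environment_weight = 0.6, 0.5  # e.g. "kitchen"

early_warning_coefficient = (behavior_risk * behavior_weight
                             + environment_risk * environment_weight)
print(round(early_warning_coefficient, 2))  # 0.86
```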
In some embodiments of the present description, different early warning weights are respectively given to different behavior information and environment information, so as to determine different risk degrees of the monitored object in different environments, and the determination result fully considers the influence of environmental factors, so that the method is more practical and can meet the user requirements.
For more explanation on how to determine the warning coefficient, refer to fig. 5 and its related description.
It can be understood that the warning coefficients of the same behavior feature may be different if the behavior feature corresponds to different environmental features. For more details on how to determine the warning coefficients, reference is made to the contents of the rest of the description.
Step 250, sending an early warning notification in response to the early warning coefficient satisfying the preset condition.
The preset condition may be that the pre-warning coefficient is greater than or equal to a pre-warning threshold, and the pre-warning threshold may be related to the environmental characteristic. In some embodiments, different pre-alarm thresholds may be set for different environmental characteristics. For example, the pre-alarm threshold for the kitchen may be set to be less than the pre-alarm threshold for the bedroom. It is understood that the warning threshold may be set to a larger value when the environmental risk level is smaller. For more description of the early warning threshold, refer to fig. 4 and its related description.
By setting an early warning threshold that matches reality and sending an early warning notification when the early warning coefficient is greater than or equal to the threshold, user needs can be met and false alarms or missed alarms can be reduced.
In some embodiments, the preset condition may be that the warning coefficient is greater than or equal to a threshold value.
For example, the preset threshold is 0.55, and when the warning coefficient is greater than or equal to 0.55, it is determined that the preset condition is satisfied.
In some embodiments, the corresponding thresholds may differ for different environment information. The threshold may be set according to the danger level of the environment and how long and how frequently the monitored object stays there. For example, a threshold of 0.88 may be associated with a bedroom, while a threshold of 0.34 may be associated with a stairwell.
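A minimal sketch of environment-dependent preset conditions; the bedroom and stairwell thresholds are the values given above, the remaining values and names are assumptions.

```python
# Environment-dependent early warning thresholds (illustrative values).
WARNING_THRESHOLDS = {
    "bedroom": 0.88,
    "stairwell": 0.34,
    "kitchen": 0.40,   # assumed: higher-risk environment, lower threshold
}
DEFAULT_THRESHOLD = 0.55

def should_warn(early_warning_coefficient, environment):
    threshold = WARNING_THRESHOLDS.get(environment, DEFAULT_THRESHOLD)
    return early_warning_coefficient >= threshold

print(should_warn(0.5, "stairwell"))  # True
print(should_warn(0.5, "bedroom"))    # False
```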
In some embodiments of the present description, different thresholds are set for different environments, so that the risk levels of the different environments are fully considered, and then the nursing risk of the monitored object can be more accurately determined, which more meets the needs of actual situations.
The pre-alarm notification may be a notification that the monitored subject may be in a dangerous condition. In some embodiments, the processor may issue the warning notification when the warning coefficient satisfies a preset condition. For example, the processor may issue voice, video, and/or text alert notifications. The early warning notification may be a live voice prompt, or may be sent to a terminal device of the emergency contact of the monitored object.
In some embodiments of the present description, the monitoring data collected by the acquisition device is analyzed to determine the monitored object and its early warning coefficient and to decide whether to send an early warning notification. In this way, a risk level of the monitored object that is more reasonable and better matches the actual situation can be determined while comprehensively considering the image data, the sound data, and different environmental characteristics, allowing timely handling with higher accuracy and reducing false early warnings to a certain extent.
It should be noted that the above description related to the flow 200 is only for illustration and description, and does not limit the applicable scope of the present specification. Various modifications and alterations to flow 200 will be apparent to those skilled in the art in light of this description. However, such modifications and variations are intended to be within the scope of the present description.
FIG. 3 is an exemplary diagram of an object recognition model according to some embodiments of the present description.
In some embodiments, the monitoring data may be processed using an object recognition model to determine the number of monitoring objects. It is understood that when the determined number of monitoring objects is 0, it indicates that there is no monitoring object. The object recognition model can be a model obtained by a convolutional neural network, a cyclic neural network or a deep neural network or a combination of the convolutional neural network and the deep neural network, and the like. For more explanation of the monitoring data and the monitored object, reference may be made to fig. 2 and its associated description.
In some embodiments, the sampling rate of the acquisition device and/or the recognition frequency of the object recognition model may be related to the monitoring level.
The sampling rate may refer to the frequency at which the acquisition device acquires the monitored data. For example, the sampling rate may be 3 times per minute.
The recognition frequency may refer to a frequency at which the object recognition model analyzes the monitoring data. For example, the recognition frequency may be 5 times/minute.
When the monitoring level is higher, the sampling rate of the acquisition device and/or the identification frequency of the object identification model can be increased accordingly, so that different monitoring requirements of users are met, and the efficiency of behavior monitoring and early warning is improved.
In some embodiments, the sampling rate of the acquisition device and/or the recognition frequency of the object recognition model may also be related to the output of the sound recognition model. For example, when the object recognition model outputs that the number of monitoring objects is greater than or equal to 1 but the sound recognition model outputs that no monitoring object exists, the sampling rate of the acquisition device and/or the recognition frequency of the object recognition model may be appropriately increased; likewise, when the object recognition model outputs that the number of monitoring objects is 0 but the sound recognition model outputs that a monitoring object exists, the sampling rate of the acquisition device and/or the recognition frequency of the object recognition model may be increased. Dynamically adjusting the sampling rate of the acquisition device and/or the recognition frequency of the object recognition model helps ensure the accuracy of behavior monitoring and early warning.
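One possible sketch of this dynamic adjustment, assuming a simple rule that raises both rates when the two models disagree; the scaling factor and base rates are illustrative assumptions.

```python
# Illustrative rule for adjusting rates when the image-based and sound-based
# results disagree about the presence of a monitoring object.
def adjust_rates(sampling_rate, recognition_frequency,
                 image_object_count, sound_detects_object, scale=1.5):
    image_detects_object = image_object_count >= 1
    if image_detects_object != sound_detects_object:
        sampling_rate *= scale
        recognition_frequency *= scale
    return sampling_rate, recognition_frequency

print(adjust_rates(3, 5, image_object_count=0, sound_detects_object=True))  # (4.5, 7.5)
```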
As shown in fig. 3, the object recognition model 320 may include an image feature extraction layer 321 and a recognition layer 322. The image feature extraction layer 321 and the recognition layer 322 may be models obtained by a convolutional neural network, a cyclic neural network, a deep neural network, or a combination thereof.
In flow 300, an input of image feature extraction layer 321 may include image data 310, an output of which may include image features 330; the input to the recognition layer 322 may include image features 330, the output of which may include a monitoring object number 340. For more explanation of the image data, reference may be made to fig. 2 and its associated description.
Image feature 330 may refer to a feature vector associated with the image data. For example, the image features 330 may be in the form of a matrix, where each element in the matrix may represent a gray value for a corresponding image location. For another example, image features 330 may include, but are not limited to, features such as skin wrinkles, walking speed, degree of hunchback, hair color, and add-ons, where the add-ons may include crutches, wheelchairs, and the like.
In some embodiments, the image feature extraction layer may be obtained by building a discrimination model and training it. The discrimination model may include two embedding layers and one discrimination layer. The two embedding layers, embedding layer 3 and embedding layer 4, may have the same initial parameters and share parameters, so that when the parameters are iteratively updated during training, the parameters of the two embedding layers are updated synchronously. The training sample data of the discrimination model may be multiple samples belonging to several classes, where a class may be the number of monitoring objects in the image, such as 0, 1 or 3; the training label may indicate whether two training samples belong to the same class.
Two training samples of the same class are input into embedding layer 3 and embedding layer 4 respectively, the embedded feature 3 and embedded feature 4 output by embedding layer 3 and embedding layer 4 are input into the discrimination layer, and a loss function is constructed based on the prediction output by the discrimination layer and the label to update the parameters of the two embedding layers and the discrimination layer, yielding a trained discrimination model and thus a trained image feature extraction layer.
In some embodiments, the output of the image feature extraction layer 321 may be the input of the recognition layer 322, and the image feature extraction layer 321 and the recognition layer 322 may be obtained by joint training.
In some embodiments, the sample data for the joint training includes sample image data, and the label is the sample number of monitoring objects. Sample data may be obtained from historical data, and the label may be determined by manual annotation.
Inputting sample image data into the image feature extraction layer 321 to obtain image features output by the image feature extraction layer 321; and inputting the image characteristics as training sample data into the recognition layer 322 to obtain the number of the monitoring objects output by the recognition layer 322. And constructing a loss function based on the number of the sample monitoring objects and the number of the monitoring objects output by the identification layer 322, and synchronously updating the image feature extraction layer 321 and the identification layer 322. Through parameter updating, a trained image feature extraction layer 321 and a trained recognition layer 322 are obtained.
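A rough PyTorch sketch of one joint training step, in which the image feature extraction layer 321 and the recognition layer 322 are updated synchronously on a single loss; layer sizes and the loss choice are assumptions for illustration.

```python
# Sketch of joint training of the image feature extraction layer (321) and
# the recognition layer (322); all sizes are illustrative assumptions.
import torch
import torch.nn as nn

image_feature_extraction = nn.Sequential(      # stands in for layer 321
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 4 * 4, 32))
recognition_layer = nn.Linear(32, 1)           # stands in for layer 322: object count

optimizer = torch.optim.Adam(
    list(image_feature_extraction.parameters()) + list(recognition_layer.parameters()),
    lr=1e-3)
loss_fn = nn.MSELoss()

def joint_training_step(sample_images, sample_object_counts):
    image_features = image_feature_extraction(sample_images)        # output of 321
    predicted_counts = recognition_layer(image_features).squeeze(-1)
    loss = loss_fn(predicted_counts, sample_object_counts.float())
    optimizer.zero_grad()
    loss.backward()   # gradients flow through both layers, updating them in sync
    optimizer.step()
    return loss.item()
```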
In some embodiments of the present description, the number of monitoring objects can be determined relatively quickly and accurately for further analysis by analyzing the image data using a well-trained object recognition model.
FIG. 4 is an exemplary diagram of a behavior recognition model and an environment recognition model, shown in accordance with some embodiments of the present description.
In some embodiments, the behavior recognition model may be used to process the monitoring data to obtain behavior characteristics, and the environment recognition model may be used to process the monitoring data to obtain environment characteristics. The behavior recognition model and the environment recognition model can be models obtained by a convolutional neural network, a cyclic neural network or a deep neural network or a combination of the convolutional neural network and the cyclic neural network. For more explanation of the behavioral and environmental characteristics, see fig. 2 and its associated description.
As shown in fig. 4, the behavior recognition model 430 may include an image feature extraction layer 321 and a behavior determination layer 432; the environment recognition model 440 may include an image feature extraction layer 321 and an environment determination layer 441. The behavior determination layer 432 and the environment determination layer 441 may be models of a convolutional neural network, a cyclic neural network, a deep neural network, or a combination thereof, and the like. For more explanation of the image feature extraction layer 321, refer to fig. 3 and its related description.
In some embodiments, the behavior recognition model 430 and the environment recognition model 440 may also include a sound feature extraction layer 431. For more description of the sound feature extraction layer 431, reference may be made to fig. 2 in the present specification regarding the relevant contents of the sound recognition model.
In flow 400, the input to image feature extraction layer 321 may include image data 310, the output of which may include image features 330; the input of the sound feature extraction layer 431 includes sound data 410, the output of which may include sound features 450; inputs to the behavior determination layer 432 and the environment determination layer 441 may include the image feature 330 and the sound feature 450, an output of the behavior determination layer 432 may include the behavior feature 460, and an output of the environment determination layer 441 may include the environment feature 470. For more description of the relevant features, reference may be made to the contents of other parts of this specification (e.g., fig. 2 or fig. 3 and their associated description).
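A minimal PyTorch sketch of the data flow in flow 400, assuming flattened image and sound inputs and simple linear layers standing in for the extraction and determination layers; all dimensions are illustrative assumptions.

```python
# Sketch of FIG. 4: image features and sound features feed both the behavior
# determination layer and the environment determination layer.
import torch
import torch.nn as nn

class BehaviorEnvironmentRecognizer(nn.Module):
    def __init__(self, image_dim=32, sound_dim=16, behavior_dim=8, environment_dim=8):
        super().__init__()
        self.image_feature_extraction = nn.Linear(1024, image_dim)  # stands in for layer 321
        self.sound_feature_extraction = nn.Linear(256, sound_dim)   # stands in for layer 431
        self.behavior_determination = nn.Linear(image_dim + sound_dim, behavior_dim)      # 432
        self.environment_determination = nn.Linear(image_dim + sound_dim, environment_dim)  # 441

    def forward(self, image_data, sound_data):
        fused = torch.cat([self.image_feature_extraction(image_data),
                           self.sound_feature_extraction(sound_data)], dim=-1)
        behavior_features = self.behavior_determination(fused)        # behavior features 460
        environment_features = self.environment_determination(fused)  # environment features 470
        return behavior_features, environment_features
```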
In some embodiments, sound data 410 may include, but is not limited to, sound data of the monitoring subject and/or ambient sound data. For example, the sound data 410 may include, but is not limited to, voice information of the monitoring object, object drop sound, and/or appliance switch sound, etc.
In some embodiments, the environmental characteristics 470 may also relate to a preset condition corresponding to the warning coefficient. For example, the pre-alarm threshold in the preset condition may be determined based on the environmental characteristics 470, and different environmental characteristics 470 may correspond to different pre-alarm thresholds. For more explanation of the warning threshold, refer to fig. 2 and its related description.
In some embodiments, the outputs of the image feature extraction layer 321 and the sound feature extraction layer 431 may be used as inputs of the behavior determination layer 432 and the environment determination layer 441, respectively, and the image feature extraction layer 321, the sound feature extraction layer 431, and the behavior determination layer 432 and the environment determination layer 441 may be obtained by joint training. For more details on the joint training, reference may be made to the contents of the rest of the present specification (e.g., fig. 3 and its related description).
In some embodiments, the sound feature extraction layer 431 may be obtained by training a plurality of labeled training samples. Wherein, the training sample can include the sound data of a plurality of predetermined different grade type, like sound data such as impact sound, fall down, explosion sound, shouting. The tag may be a sound feature. For more details on model training, see the contents of model training in the rank determination layer.
In some embodiments of the present specification, by processing the image data and the sound data using the trained behavior recognition model and the trained environment recognition model, the behavior characteristics and the environment characteristics can be obtained relatively quickly and accurately for further analysis and processing in time. In addition, by using the image feature extraction layer and the sound feature extraction layer which are already established and trained in the foregoing embodiments as part of the model, the efficiency of establishing and training the model can be improved, and the cost of training the model can be reduced.
Fig. 5 is an exemplary flow chart of a method of determining an early warning coefficient according to some embodiments of the present description. In some embodiments, the process may be performed by the behavior early warning system 100 for the monitored object. As shown in fig. 5, the process 500 may include the following steps:
Step 510, determining a first risk coefficient corresponding to the behavior characteristics and a second risk coefficient corresponding to the environmental characteristics based on the behavior characteristics and the environmental characteristics.
For more details on the behavior characteristics and the environment characteristics, reference may be made to the contents of other parts of the present specification (e.g., fig. 4 and its related description).
The first risk coefficient may refer to a coefficient related to a behavior characteristic of the monitored object. For example, the first risk factor may be numerically represented, such as 1.2. It is understood that the greater the first risk factor, the greater the degree of risk of monitoring the behavior of the subject.
The second risk factor may refer to a factor relating to a characteristic of the environment in which the monitored object is located. For example, the second risk factor may be numerically represented, such as 1.5. It will be appreciated that the greater the second risk factor, the greater the risk level of the environment in which the monitored object is located.
In some embodiments, the determination of the first risk coefficient and the second risk coefficient may be affected by different behavior characteristics and environmental characteristics at the same time. For example, for a falling behavior of the monitored object, if the object falls on a bed the risk is likely smaller, so a smaller first risk coefficient may be determined; if the object falls in a kitchen or at a stairway opening the risk is likely greater, so a greater first risk coefficient may be determined. As another example, for a walking behavior of the monitored object, if the object is walking in the kitchen the risk is likely smaller, so a smaller first risk coefficient may be determined; if the object is walking on the bed the risk is likely greater, so a greater first risk coefficient may be determined. As another example, for environmental characteristics including a kitchen, if the monitored object is sitting still in the kitchen the danger is likely smaller, so a smaller second risk coefficient may be determined; if the monitored object is using a tool in the kitchen the danger is likely greater, so a greater second risk coefficient may be determined.
In some embodiments, the first risk factor and the second risk factor may be determined by table lookup, statistical analysis, modeling, or the like. For example, according to different behavior characteristics and environmental characteristics or combinations thereof, the first risk coefficient and the second risk coefficient corresponding to the behavior characteristics and the environmental characteristics may be preset and a data table may be established.
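A small sketch of such a preset data table, keyed by a (behavior, environment) combination; all values are assumptions chosen to mirror the examples above.

```python
# Illustrative data table of preset risk coefficients; values are assumed.
RISK_TABLE = {
    ("fall", "bed"):      (0.3, 0.2),
    ("fall", "kitchen"):  (0.9, 0.8),
    ("fall", "stairway"): (0.9, 0.9),
    ("walk", "kitchen"):  (0.2, 0.5),
    ("walk", "bed"):      (0.7, 0.3),
}

def lookup_risk_coefficients(behavior, environment, default=(0.5, 0.5)):
    """Return (first_risk_coefficient, second_risk_coefficient)."""
    return RISK_TABLE.get((behavior, environment), default)

print(lookup_risk_coefficients("fall", "kitchen"))  # (0.9, 0.8)
```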
In some embodiments, the behavioral and environmental characteristics may also be analyzed using a coefficient model to determine a first risk coefficient and a second risk coefficient. The coefficient model can be a model obtained by a convolutional neural network, a cyclic neural network or a deep neural network or a combination of the convolutional neural network and the deep neural network, and the like.
In some embodiments, the coefficient model may include a first determination layer and a second determination layer. The first determination layer and the second determination layer may be models obtained by a convolutional neural network, a cyclic neural network, a deep neural network, or a combination thereof.
The inputs of the first determination layer may include behavioral and environmental characteristics, and the outputs thereof may include a first risk factor; the inputs to the second determination layer may include environmental characteristics and behavioral characteristics, and the outputs thereof may include a second risk factor.
In some embodiments, the first determination layer and the second determination layer may each be obtained by training with multiple labeled training samples. The training samples may include sample behavior characteristics and sample environmental characteristics or combinations thereof, and the labels may be manually set first risk coefficients or second risk coefficients corresponding to those sample characteristics. The training samples and labels may be determined based on historical data. For example, multiple labeled training samples may be input into an initial first determination layer or an initial second determination layer, a loss function is constructed based on the labels and the output of the initial layer, and the parameters of the initial layer are iteratively updated by gradient descent or other methods based on the loss function. When a preset condition is satisfied, model training is completed, yielding a trained first determination layer and/or a trained second determination layer. The preset condition may be that the loss function converges, that the number of iterations reaches a threshold, and the like.
Within the coefficient model, the two coefficients may be calculated and output based on the similarity between the actually input behavior characteristic and environmental characteristic and the sample behavior characteristics and sample environmental characteristics described above. It can be understood that the higher the similarity, the closer the output of the coefficient model is to the output obtained when the corresponding sample behavior characteristic and sample environmental characteristic are used as the input of the coefficient model.
In some embodiments of the present specification, by analyzing the behavior characteristic and the environmental characteristic with the trained coefficient model, the first risk coefficient and the second risk coefficient can be determined quickly and accurately, so that further analysis can be performed in time and the efficiency of data analysis and processing is improved.
Step 520, fusing the first risk coefficient and the second risk coefficient to determine the early warning coefficient.
For more details on the warning coefficient, refer to the contents of other parts of this specification (e.g., fig. 2 and its related description).
The fusion may refer to a process of performing a calculation on the first risk coefficient and the second risk coefficient according to a preset method. The preset method may include various methods, for example, weighted summation.
In some embodiments, the weighted summation may refer to pre-determining two different weighting coefficients α and β, so that the early warning coefficient may be expressed as: first risk coefficient × α + second risk coefficient × β. The values of α and β may be determined according to the degree of danger of the behavior of the monitored object and of the environment, respectively. For example, according to different behaviors of the monitored object, such as walking, sitting still, or falling, the α value corresponding to walking may be determined as 0.4, the α value corresponding to sitting still as 0.2, the α value corresponding to falling as 0.85, and so on; for another example, according to different environments in which the monitored object is located, such as a bedroom, a kitchen, or a corridor, the β value corresponding to the bedroom may be determined as 0.3, the β value corresponding to the kitchen as 0.85, and the β value corresponding to the corridor as 0.5.
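A short sketch of this weighted summation is given below; the α and β lookup values mirror the examples in the preceding paragraph, while the default value and the function name are assumptions.

```python
# Illustrative weighting coefficients taken from the examples above.
ALPHA_BY_BEHAVIOR = {"walk": 0.4, "sit_still": 0.2, "fall": 0.85}
BETA_BY_ENVIRONMENT = {"bedroom": 0.3, "kitchen": 0.85, "corridor": 0.5}

def fuse(first_risk, second_risk, behavior, environment, default_weight=0.5):
    """Early warning coefficient = first risk coefficient * alpha + second risk coefficient * beta."""
    alpha = ALPHA_BY_BEHAVIOR.get(behavior, default_weight)
    beta = BETA_BY_ENVIRONMENT.get(environment, default_weight)
    return first_risk * alpha + second_risk * beta
```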
In some embodiments, the fusion may further include a process of calculating the sound risk coefficient, the first risk coefficient, and the second risk coefficient according to a preset method. The preset method may include various methods, for example, weighted summation, etc.
The sound risk coefficient may refer to a coefficient related to the sound made by the monitored object. For example, the sound risk coefficient may be represented numerically, such as 1.2. Different sound risk coefficients may be preset according to different sound feature types: for example, the sound risk coefficient of a scream may be set to 0.9, the sound risk coefficient of crying may be set to 0.85, and the sound risk coefficients of laughter and talking may be set to 0. For more description of the sound characteristics, reference may be made to fig. 2 and its related description in this specification.
In some embodiments, the early warning coefficient may also be expressed as: first risk coefficient × α + second risk coefficient × β + sound risk coefficient. For example, if the monitored object falls down in a bedroom and screams, the first risk coefficient and the second risk coefficient determined by the coefficient model are 0.7 and 0.3 respectively, α and β are 0.85 and 0.3 respectively, and the sound risk coefficient is 0.9, then the early warning coefficient is 0.7 × 0.85 + 0.3 × 0.3 + 0.9 = 1.585.
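Extending the earlier sketch with a sound term reproduces this worked example; the sound category labels and the helper name are assumptions.

```python
# Illustrative sound risk coefficients taken from the examples above.
SOUND_RISK = {"scream": 0.9, "cry": 0.85, "laugh": 0.0, "talk": 0.0}

def fuse_with_sound(first_risk, second_risk, alpha, beta, sound_type):
    """Early warning coefficient with an additive sound risk term."""
    return first_risk * alpha + second_risk * beta + SOUND_RISK.get(sound_type, 0.0)

# Worked example from the text: a fall in a bedroom accompanied by a scream.
coefficient = fuse_with_sound(0.7, 0.3, 0.85, 0.3, "scream")
print(coefficient)  # approximately 1.585
```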
By taking the sound risk coefficient into account when determining the early warning coefficient, the early warning coefficient can be determined more accurately and better reflects the actual situation, so that the use requirement is fully met.
In some embodiments, the sampling rate of the acquisition device and the recognition frequency of the image recognition model may also be related to the early warning coefficient; for example, the larger the early warning coefficient, the higher the sampling rate of the acquisition device and the recognition frequency of the image recognition model. By associating the sampling rate of the acquisition device and the recognition frequency of the image recognition model with the magnitude of the early warning coefficient, both can better match actual needs and can be adjusted in time as the actual situation changes, thereby meeting the use requirement.
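As one possible illustration, the sketch below ties both rates to the early warning coefficient with a simple linear rule; the base rates and the scaling rule itself are assumptions, since this specification only requires that both increase as the early warning coefficient increases.

```python
def adjust_monitoring(warning_coefficient, base_sampling_rate_hz=1.0, base_recognition_hz=0.5):
    """Return (sampling rate, recognition frequency); both grow with the early warning coefficient."""
    scale = 1.0 + max(0.0, warning_coefficient)
    return base_sampling_rate_hz * scale, base_recognition_hz * scale
```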
For more description of the sampling rate of the acquisition device and the recognition frequency of the image recognition model, refer to the contents of the rest of this specification (e.g., fig. 3 and its related description).
In some embodiments of the present specification, different first risk coefficients and second risk coefficients are determined according to different behavior characteristics and environmental characteristics, and the early warning coefficient is further determined by fusion, so that an early warning is given once the behavior of the monitored object and/or the environment reaches a certain degree of risk, thereby meeting the safety requirements of users. In addition, expressing the degree of risk of different behaviors and/or environments as a quantitative value makes it possible to accurately evaluate whether the monitored object is at risk or about to be at risk, with high accuracy, and misjudgment can be avoided to a certain extent.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Although not explicitly stated herein, various modifications, improvements, and adaptations of the present specification may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested by this specification and thus fall within the spirit and scope of the exemplary embodiments of this specification.
Also, this specification uses specific words to describe its embodiments. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of this specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of this specification may be combined as appropriate.
Additionally, the order in which elements and sequences are described in this specification, the use of numerical letters, or other designations are not intended to limit the order of the processes and methods described in this specification, unless explicitly stated in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that, in the preceding description of the embodiments of this specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed subject matter may lie in fewer than all features of a single embodiment disclosed above.
Where numerals describing quantities of components, attributes, or the like are used in some embodiments, it is to be understood that such numerals are in some instances modified by the qualifier "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the stated number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, a numerical parameter should take into account the specified significant digits and employ a general digit-retention approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of this specification are approximations, in specific embodiments such numerical values are set forth as precisely as practicable.
All patents, patent applications, patent application publications, and other materials, such as articles, books, specifications, publications, and documents, cited in this specification are hereby incorporated by reference in their entirety, except for any application history document that is inconsistent with or conflicts with the contents of this specification, and except for any document (whether presently or later appended to this specification) that limits the broadest scope of the claims of this specification. It is to be understood that, if the descriptions, definitions, and/or uses of terms in the materials accompanying this specification are inconsistent with or contrary to those set forth in this specification, the descriptions, definitions, and/or uses of terms in this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. A behavior early warning method for a monitored object, the method comprising:
acquiring monitoring data acquired by an acquisition device, wherein the monitoring data comprises at least one of image data and sound data;
determining whether a monitoring object exists based on the monitoring data;
in response to the presence of the monitoring object, acquiring behavior characteristics of the monitored object and environmental characteristics of the monitored object based on the monitoring data;
determining an early warning coefficient based on the behavior characteristics and the environment characteristics;
in response to the early warning coefficient meeting a preset condition, sending out an early warning notice.
2. The method of claim 1, wherein the determining whether a monitoring object is present based on the monitoring data comprises:
processing the monitoring data based on an object recognition model, and determining the number of monitoring objects, wherein the object recognition model is a machine learning model.
3. The method of claim 1, wherein the obtaining of the behavioral characteristic of the monitored object and the environmental characteristic of the monitored object based on the monitoring data comprises:
processing the monitoring data based on a behavior recognition model to determine the behavior characteristics;
processing the monitoring data based on an environment recognition model to determine the environment characteristics;
the behavior recognition model and the environment recognition model are machine learning models.
4. The method of claim 3, wherein determining an early warning coefficient based on the behavioral and environmental characteristics comprises:
determining a first risk coefficient corresponding to the behavior characteristic and a second risk coefficient corresponding to the environment characteristic based on the behavior characteristic and the environment characteristic;
and fusing the first risk coefficient and the second risk coefficient to determine the early warning coefficient.
5. A behavioral early warning system for monitoring a subject, the system comprising:
a first acquisition module, configured to acquire monitoring data acquired by an acquisition device, wherein the monitoring data comprises at least one of image data and sound data;
a first determination module, configured to determine whether a monitoring object exists based on the monitoring data;
a second acquisition module, configured to acquire, in response to the presence of the monitoring object, the behavior characteristic of the monitoring object and the environmental characteristic of the monitoring object based on the monitoring data;
a second determination module, configured to determine an early warning coefficient based on the behavioral characteristics and the environmental characteristics;
and an early warning module, configured to send out an early warning notice in response to the early warning coefficient meeting the preset condition.
6. The system of claim 5, wherein the first determination module is further configured to:
processing the monitoring data based on an object recognition model, and determining the number of monitoring objects, wherein the object recognition model is a machine learning model.
7. The system of claim 5, wherein the second obtaining module is further configured to:
processing the monitoring data based on a behavior recognition model to determine the behavior characteristics;
processing the monitoring data based on an environment recognition model to determine the environmental characteristics;
the behavior recognition model and the environment recognition model are machine learning models.
8. The system of claim 7, wherein the second determination module is further configured to:
determining a first risk coefficient corresponding to the behavior characteristic and a second risk coefficient corresponding to the environment characteristic based on the behavior characteristic and the environment characteristic;
and fusing the first risk coefficient and the second risk coefficient to determine the early warning coefficient.
9. A behavior early warning apparatus for a monitored object, comprising a processor for executing the behavior early warning method for the monitored object according to any one of claims 1 to 4.
10. A computer-readable storage medium storing computer instructions, wherein when the computer instructions in the storage medium are read by a computer, the computer executes the behavior early warning method for the monitored object according to any one of claims 1 to 4.
CN202211598375.3A 2022-08-26 2022-12-14 Behavior early warning method, system, device and medium for monitoring object Pending CN115797868A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211035476X 2022-08-26
CN202211035476 2022-08-26

Publications (1)

Publication Number Publication Date
CN115797868A true CN115797868A (en) 2023-03-14

Family

ID=85419741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211598375.3A Pending CN115797868A (en) 2022-08-26 2022-12-14 Behavior early warning method, system, device and medium for monitoring object

Country Status (1)

Country Link
CN (1) CN115797868A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503335A (en) * 2023-03-31 2023-07-28 江苏省秦淮河水利工程管理处 Aquatic organism monitoring system, method, device and storage medium
CN116503335B (en) * 2023-03-31 2024-02-20 江苏省秦淮河水利工程管理处 Aquatic organism monitoring system, method, device and storage medium
CN116405771A (en) * 2023-06-08 2023-07-07 深圳市华卓智能科技有限公司 Intelligent audio and video acquisition method and system for tablet personal computer
CN116405771B (en) * 2023-06-08 2023-09-05 深圳市华卓智能科技有限公司 Intelligent audio and video acquisition method and system for tablet personal computer
CN116991108A (en) * 2023-09-25 2023-11-03 四川公路桥梁建设集团有限公司 Intelligent management and control method, system and device for bridge girder erection machine and storage medium
CN116991108B (en) * 2023-09-25 2023-12-12 四川公路桥梁建设集团有限公司 Intelligent management and control method, system and device for bridge girder erection machine and storage medium
CN117130016A (en) * 2023-10-26 2023-11-28 深圳市麦微智能电子有限公司 Personal safety monitoring system, method, device and medium based on Beidou satellite
CN117130016B (en) * 2023-10-26 2024-02-06 深圳市麦微智能电子有限公司 Personal safety monitoring system, method, device and medium based on Beidou satellite

Similar Documents

Publication Publication Date Title
CN115797868A (en) Behavior early warning method, system, device and medium for monitoring object
Deep et al. A survey on anomalous behavior detection for elderly care using dense-sensing networks
US20190216333A1 (en) Thermal face image use for health estimation
US10726846B2 (en) Virtual health assistant for promotion of well-being and independent living
Feng et al. Deep learning for posture analysis in fall detection
CN112364696B (en) Method and system for improving family safety by utilizing family monitoring video
CN109492595B (en) Behavior prediction method and system suitable for fixed group
CN110853620A (en) Sound detection
US10610109B2 (en) Emotion representative image to derive health rating
Tasoulis et al. Statistical data mining of streaming motion data for activity and fall recognition in assistive environments
CN110480656B (en) Accompanying robot, accompanying robot control method and accompanying robot control device
CN111227789A (en) Human health monitoring method and device
KR101584685B1 (en) A memory aid method using audio-visual data
Warunsin et al. Wristband fall detection system using deep learning
JP2004157614A (en) Behavior analysis system
KR101420189B1 (en) User recognition apparatus and method using age and gender as semi biometrics
CN110473616B (en) Voice signal processing method, device and system
Nahar et al. Twins and Similar Faces Recognition Using Geometric and Photometric Features with Transfer Learning
JP2019197509A (en) Nursing-care robot, nursing-care robot control method and nursing-care robot control program
CN114830257A (en) Cough detection system and method
Mithil et al. An interactive voice controlled humanoid smart home prototype using concepts of natural language processing and machine learning
CN113566395B (en) Air conditioner, control method and device thereof and computer readable storage medium
Pham et al. A proposal model using deep learning model integrated with knowledge graph for monitoring human behavior in forest protection
CN113887332B (en) Skin operation safety monitoring method based on multi-mode fusion
US20210142047A1 (en) Salient feature extraction using neural networks with temporal modeling for real time incorporation (sentri) autism aide

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination