CN111079560B - Fall monitoring method and device and terminal equipment - Google Patents


Info

Publication number
CN111079560B
CN111079560B (application CN201911176646.4A)
Authority
CN
China
Prior art keywords
target
image data
target object
target image
candidate window
Prior art date
Legal status
Active
Application number
CN201911176646.4A
Other languages
Chinese (zh)
Other versions
CN111079560A (en)
Inventor
李晓刚
Current Assignee
Zdst Communication Technology Co ltd
Original Assignee
Zdst Communication Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zdst Communication Technology Co ltd
Priority to CN201911176646.4A
Publication of CN111079560A
Application granted
Publication of CN111079560B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/40 — Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application belongs to the technical field of risk monitoring and provides a fall monitoring method comprising the following steps: acquiring target image data; identifying a target object in the target image data; acquiring a candidate window of the target object in the target image data; calculating, within a preset time period, the aspect ratio of the candidate window of the target object and the height difference of its diagonal intersection point; and, if the absolute value of the height difference of the diagonal intersection point in the candidate window is greater than a first preset threshold and the aspect ratio is smaller than a second preset threshold, judging that the target object has fallen. By identifying the target object in the target image data, obtaining a rectangular window of the target object, and monitoring the characteristics of that rectangular window to judge whether the target object has fallen, the application improves both the speed and the accuracy of judging whether the monitored target has fallen.

Description

Fall monitoring method and device and terminal equipment
Technical Field
The application belongs to the technical field of risk monitoring, and particularly relates to a fall monitoring method, a fall monitoring device and terminal equipment.
Background
In recent years, the number of elderly people living alone has been increasing, and their quality of life and health have become important topics of social concern.
Because the physical condition of an elderly person living alone is weaker than that of a young person, a fall may cause serious injury, and if the person is not rescued in time, it may even be life-threatening.
The prior art proposes a diagnosis method for judging whether an elderly person has fallen that mainly fits the motion information of the human body with an ellipse and concatenates the motion features into an image sequence. However, this method is easily affected by other activities, such as picking up objects, sitting down, or lying down, so its accuracy is not high.
Disclosure of Invention
The embodiment of the application provides a fall monitoring method, a fall monitoring device, and terminal equipment, which can solve the problem that the prior art is easily affected by other factors and therefore has low accuracy.
In a first aspect, an embodiment of the present application provides a fall monitoring method, including:
acquiring target image data;
identifying a target object in the target image data;
acquiring a candidate window of a target object in the target image data;
calculating, within a preset time period, the aspect ratio of the candidate window of the target object and the height difference of the intersection point of its diagonals;
and if the absolute value of the height difference is greater than a first preset threshold and the aspect ratio is smaller than a second preset threshold, judging that the target object has fallen.
In a second aspect, an embodiment of the present application provides a fall monitoring device, including:
the first acquisition module is used for acquiring target image data;
the first identification module is used for identifying a target object in the target image data;
the second acquisition module is used for acquiring candidate windows of the target objects in the target image data;
the calculating module is used for calculating, within a preset time period, the aspect ratio of the candidate window of the target object and the height difference of the intersection point of its diagonals;
and the judging module is used for judging that the target object has fallen if the absolute value of the height difference is greater than a first preset threshold and the aspect ratio is smaller than a second preset threshold.
In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the fall monitoring method according to any one of the first aspects when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements a fall monitoring method as in any one of the first aspects above.
In a fifth aspect, an embodiment of the present application provides a computer program product, which when run on a terminal device, causes the terminal device to perform the fall monitoring method according to any one of the first aspects above.
It will be appreciated that, for the advantageous effects of the second to fifth aspects, reference may be made to the relevant description of the first aspect, which is not repeated here.
According to the embodiment of the application, the target object in the target image data is identified, a rectangular window of the target object is obtained, and the characteristics of the rectangular window are monitored to judge whether the target object has fallen, which improves both the speed and the accuracy of judging whether the monitored target has fallen.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a fall monitoring method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an application scenario based on a fall monitoring method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario in which a regression-based deep learning target detection model obtains a candidate window of the target object in target image data according to an embodiment of the present application;
FIG. 4 is a schematic structural view of a fall monitoring device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted as "when," "upon," "in response to determining," or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted to mean "upon determining," "in response to determining," "upon detecting [the described condition or event]," or "in response to detecting [the described condition or event]."
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The fall monitoring method provided by the embodiment of the application can be applied to terminal devices such as mobile phones, tablet computers, vehicle-mounted devices, notebook computers, Ultra-Mobile Personal Computers (UMPC), netbooks, and Personal Digital Assistants (PDA); the embodiment of the application does not limit the specific type of the terminal device.
Fig. 1 shows a schematic flow chart of a fall monitoring method provided by the present application, which can be applied to any of the above terminal devices by way of example and not limitation.
S101, acquiring target image data.
In a specific application, the target image data includes, but is not limited to, a video or a picture containing the target object to be monitored.
S102, identifying a target object in the target image data.
In a specific application, the target object in the target image data can be identified by a face recognition algorithm, a human body recognition algorithm, or another person-identification algorithm. The target object may be an elderly person, a disabled person, or a child who needs to be attended by others.
S103, obtaining a candidate window of the target object in the target image data.
In a specific application, the target image data is processed by a trained regression-based deep learning target detection model to obtain a candidate window of the target object in the target image data. The candidate window is a window of a preset shape, cut out from the target image data, that contains the target object; the preset shape may be a rectangle or a diamond.
S104, calculating, within a preset time period, the aspect ratio of the candidate window of the target object and the height difference of the intersection point of its diagonals.
In a specific application, the preset time period may be set according to the actual situation; for example, with a preset time period of 2 s, the height difference is the difference between the height of the diagonal intersection point in the candidate window of the target object at second i and its height at second i−2. The aspect ratio is the quotient of the length and the width of the candidate window.
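As an illustrative sketch only (not part of the patent's disclosure), the two quantities of step S104 can be computed from a candidate window given as (x, y, w, h) in pixel coordinates with the y-axis pointing down; the function name and box convention are assumptions:

```python
def window_metrics(box):
    """Aspect ratio and diagonal-intersection height of a candidate window.

    box is (x, y, w, h): top-left corner plus width and height, in pixels,
    with the image y-axis pointing down. The intersection of a rectangle's
    diagonals is simply its centre point.
    """
    x, y, w, h = box
    aspect_ratio = h / w          # vertical extent over horizontal extent
    center_height = y + h / 2.0   # y-coordinate of the diagonal intersection
    return aspect_ratio, center_height

# Windows observed 2 s apart: standing, then lying on the floor.
box_t0 = (100, 50, 60, 160)    # tall, narrow window
box_t2 = (80, 200, 160, 50)    # short, wide window

ar0, h0 = window_metrics(box_t0)
ar2, h2 = window_metrics(box_t2)
height_diff = h2 - h0
print(ar2, abs(height_diff))
```

With these sample boxes the fallen window's aspect ratio is about 0.31, well below an upright value above 1, and the centre has moved substantially.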
And S105, judging that the target object falls down if the absolute value of the height difference is larger than a first preset threshold value and the length-width ratio is smaller than a second preset threshold value.
In a specific application, the first preset threshold and the second preset threshold may be set according to the actual situation. The first preset threshold is a positive number, and the second preset threshold is a non-zero positive number. For example, if the first preset threshold is set to 1.1 and the second preset threshold is set to 0.46, then a height difference with absolute value 2 and an aspect ratio of 0.3 lead to the judgment that the target object has fallen.
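The decision rule of step S105 reduces to two comparisons. A minimal, hypothetical sketch (the threshold values 1.1 and 0.46 are the examples given in the text; the function name is an assumption):

```python
def is_fall(height_diff, aspect_ratio,
            height_threshold=1.1, ratio_threshold=0.46):
    """Report a fall when the diagonal-intersection height changed by more
    than the first threshold AND the candidate window is now wider than it
    is tall (aspect ratio below the second threshold)."""
    return abs(height_diff) > height_threshold and aspect_ratio < ratio_threshold

print(is_fall(2.0, 0.30))   # True: both conditions met
print(is_fall(2.0, 0.80))   # False: window still upright (e.g. bending to pick something up)
print(is_fall(0.5, 0.30))   # False: centre height barely changed
```

Requiring both conditions is what filters out the confounding actions mentioned in the background section: picking up an object changes the aspect ratio only briefly, and sitting down changes the centre height without flattening the window.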
In one embodiment, after step S105, it includes:
acquiring identity information and rescue information of a target object; wherein the rescue information includes at least one of location information and medical information;
and sending the identity information and the rescue information to terminal equipment bound with the identity information.
In particular applications, the identity information includes, but is not limited to, name, gender, age, identification number. The medical information includes, but is not limited to, at least one of the physical health of the target subject, the name of the disease, the medication condition, and the allergy condition.
Through sending the identity information and the rescue information to the terminal equipment bound with the identity information, family members, friends and other rescue personnel of the target object can be helped to discover the fact that the target object falls in time and rescue the target object, and the safety of the target object is ensured.
Fig. 2 schematically illustrates an application scenario in which the fall monitoring method determines whether a target object has fallen. In the figure, X denotes the width of the candidate window, Y denotes the length of the candidate window, and h denotes the change in height of the diagonal intersection of the target object's candidate window within the preset time period.
In one embodiment, after step S102, further includes:
if the number of target objects in the target image data is not 1, returning to identifying the target object in the target image data until the number of target objects in the target image data is 1.
In a specific application, if more than one target object is present in the target image data, it is assumed that if any one of them falls, the others can come to its rescue; identification therefore continues until only one target object remains in the target image data, preventing a fall from going unnoticed.
If no target object exists in the target image data, identification likewise continues until the number of target objects in the target image data is 1.
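The gating loop described above can be sketched as follows; `wait_for_single_target` and `detect` are illustrative names, not from the patent, and `detect` stands in for whatever person-identification algorithm is used in S102:

```python
def wait_for_single_target(frames, detect):
    """Keep identifying frame by frame until exactly one target object is
    present; with zero or several targets, monitoring is not engaged."""
    for frame in frames:
        targets = detect(frame)
        if len(targets) == 1:
            return frame, targets[0]
    return None, None

# Toy frames where each frame is just its own list of detected targets.
frames = [["alice", "bob"], [], ["alice"]]
frame, target = wait_for_single_target(frames, lambda f: f)
print(frame, target)  # ['alice'] alice
```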
In one embodiment, before step S103, the method further includes:
identifying a state of the target object;
and if the target object is in a non-standing state, returning to a state of identifying the target object until the target object is in a standing state.
In a specific application, the state of the target object is identified by comparing the background, the location, and the height of the target object in the candidate window with the target object's own height. If the identified state of the target object (or the directly obtained state) is a non-standing state such as sitting, lying down, or reclining, the method returns to identifying the state of the target object until the target object is standing; only then is a candidate window of the target object in the target image data obtained to judge whether the target object falls.
In one embodiment, step S103 includes:
s1031, obtaining a candidate window of a target object in the target image data through a trained deep learning target monitoring model based on a regression method; the deep learning target monitoring model based on the regression method comprises at least one of a single-shot multi-frame monitoring model and a single-shot target monitoring model.
In a specific application, the regression method-based deep learning object monitoring model includes, but is not limited to, at least one of a single-shot multi-frame monitoring model (Single Shot MultiBox Detector, SSD) and a single-shot object monitoring model (YouOnly Look Once, YOLO).
Fig. 3 illustrates an application scenario of obtaining a candidate window of the target object in the target image data through a regression-based deep learning target detection model.
In one embodiment, step S1031 includes:
dividing the target image data into N grids; wherein N is a positive integer;
predicting M frames for each grid cell to obtain N×N×M target windows; wherein M is a positive integer and M is not equal to N;
and removing error windows in all the target windows to obtain candidate windows of the target objects in the target image data.
In a specific application, the target image data is divided into a plurality of grid cells, and frame prediction can be performed independently for each grid cell. N is a positive integer that can be set according to the actual situation; for example, in the YOLO algorithm N defaults to 7, dividing the target image data into a 7×7 grid. M is a positive integer not equal to N and can likewise be set according to the actual situation; for example, setting M to 2 yields 7×7×2 = 98 target windows.
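The N×N×M window count can be made concrete with a small sketch (illustrative only; 448×448 is YOLO's customary input size, not a value stated in the text):

```python
def grid_window_count(image_w, image_h, n=7, m=2):
    """Divide the image into an n x n grid; each cell predicts m frames,
    giving n*n*m target windows in total (7*7*2 = 98 with the defaults
    mentioned in the text)."""
    cell_w, cell_h = image_w / n, image_h / n
    # Top-left corner of every grid cell, row by row.
    cells = [(col * cell_w, row * cell_h)
             for row in range(n) for col in range(n)]
    return len(cells) * m

print(grid_window_count(448, 448))  # 98
```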
In one embodiment, removing error windows in all target windows to obtain candidate windows of a target object in target image data includes:
calculating the confidence that each candidate window contains the target object and the probability of each candidate window over a plurality of categories, so as to remove candidate windows whose confidence is lower than a preset confidence or whose probability is lower than a preset probability;
redundant candidate windows may also be removed by non-maximum suppression (Non-Maximum Suppression, NMS).
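A minimal sketch of the suppression step — a standard greedy NMS, not the patent's specific implementation; the threshold values are illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, conf_threshold=0.5, iou_threshold=0.45):
    """Drop low-confidence windows, then greedily keep the highest-scoring
    window and suppress any remaining window that overlaps it too much."""
    order = sorted((i for i, s in enumerate(scores) if s >= conf_threshold),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.75]
print(nms(boxes, scores))  # [0, 2] — the two overlapping windows collapse to one
```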
In one embodiment, step S1031 further includes:
acquiring a multi-scale feature map of the target image data;
and predicting on the multi-scale feature maps with convolution kernels to obtain a candidate window of the target object in the target image data.
In a specific application, the target image data is processed by the SSD algorithm to obtain multi-scale feature maps of the target image data, and prediction is performed with the convolution kernels of convolutional predictors (Convolutional Predictors for Detection) as the basic prediction elements to obtain default candidate frames of the target image data, from which the candidate window of the target object is obtained.
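For orientation, SSD's multi-scale prediction can be quantified. With the feature-map sizes and per-cell default-box counts of the classic SSD300 configuration (an assumption for illustration — the patent does not fix these numbers), the predictor scores 8732 default boxes per image:

```python
def ssd_default_box_count(feature_map_sizes=(38, 19, 10, 5, 3, 1),
                          boxes_per_cell=(4, 6, 6, 6, 4, 4)):
    """Total default candidate frames in an SSD-style head: every cell of
    every multi-scale feature map carries a small fixed set of default
    boxes that convolutional predictors score and refine."""
    return sum(s * s * b for s, b in zip(feature_map_sizes, boxes_per_cell))

print(ssd_default_box_count())  # 8732 for the classic SSD300 configuration
```

The larger (38×38) maps catch small objects while the 3×3 and 1×1 maps cover objects spanning most of the image — which is why a multi-scale head suits a person who may appear tall (standing) or wide (fallen).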
In one embodiment, prior to step S103, comprising:
acquiring pre-training data;
and pre-training the regression-based deep learning target detection model with the pre-training data to obtain a trained regression-based deep learning target detection model.
In a specific application, pre-training data containing different article types, sizes, and position information is acquired, and the regression-based deep learning target detection model is pre-trained so that the trained model can identify whether the pre-training data contains target objects (in this embodiment, people who need to be attended by others) and obtain the number of target objects, their position information, their state information, and the like.
According to this embodiment, the target object in the target image data is identified, a rectangular window of the target object is obtained, and the characteristics of the rectangular window are monitored to judge whether the target object has fallen, which improves both the speed and the accuracy of judging whether the monitored target has fallen.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the fall monitoring method described in the above embodiments, fig. 4 shows a block diagram of the fall monitoring device according to the embodiment of the present application, and for convenience of explanation, only the portions related to the embodiment of the present application are shown.
Referring to fig. 4, the fall monitoring device 200 includes:
a first acquisition module 101 for acquiring target image data;
a first identifying module 102, configured to identify a target object in the target image data;
a second obtaining module 103, configured to obtain a candidate window of a target object in the target image data;
a calculating module 104, configured to calculate, within a preset time period, the aspect ratio of the candidate window of the target object and the height difference of its diagonal intersection point;
and the judging module 105 is configured to judge that the target object falls if the absolute value of the height difference is greater than a first preset threshold and the aspect ratio is less than a second preset threshold.
In one embodiment, the fall monitoring device 200 further comprises:
and the first circulation module is used for returning and identifying the target objects in the target image data if the number of the target objects in the target image data is not 1 until the number of the target objects in the target image data is 1.
In one embodiment, the fall monitoring device 200 further comprises:
the second identification module is used for identifying the state of the target object;
and the second circulation module is used for returning to the state of identifying the target object if the target object is in a non-standing state until the target object is in a standing state.
In one embodiment, the fall monitoring device 200 further comprises:
the third acquisition module is used for acquiring pre-training data;
and the pre-training module is used for pre-training the regression-based deep learning target detection model with the pre-training data to obtain a trained regression-based deep learning target detection model.
According to this embodiment, the target object in the target image data is identified, a rectangular window of the target object is obtained, and the characteristics of the rectangular window are monitored to judge whether the target object has fallen, which improves both the speed and the accuracy of judging whether the monitored target has fallen.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
Fig. 5 is a schematic structural diagram of a terminal device according to this embodiment. As shown in fig. 5, the terminal device 5 of this embodiment includes: at least one processor 50 (only one shown in fig. 5), a memory 51, and a computer program 52 stored in the memory 51 and executable on the at least one processor 50, the processor 50 implementing the steps in any of the various fall monitoring method embodiments described above when executing the computer program 52.
The terminal device 5 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 50 and the memory 51. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the terminal device 5 and does not constitute a limitation of the terminal device 5, which may include more or fewer components than shown, combine certain components, or use different components, and may for example also include input-output devices, network access devices, etc.
The processor 50 may be a central processing unit (Central Processing Unit, CPU), the processor 50 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 51 may in some embodiments be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. The memory 51 may in other embodiments also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used for storing an operating system, application programs, boot loader (BootLoader), data, other programs, etc., such as program codes of the computer program. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiment of the application also provides a terminal device, which comprises: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, which when executed by the processor performs the steps of any of the various method embodiments described above.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that enable the implementation of the method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), electrical carrier signals, telecommunication signals, and software distribution media, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, according to legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunication signals.
In the foregoing embodiments, the descriptions of the respective embodiments have different emphases; for parts that are not described or detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solutions of the present application, not for limiting them. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that they may still modify the technical solutions described in the foregoing embodiments, or substitute equivalents for some of their technical features; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (7)

1. A fall monitoring method, comprising:
acquiring target image data;
identifying a target object in the target image data;
acquiring a candidate window of the target object in the target image data; the candidate window is a window of a preset shape cut out from the target image data and containing the target object, and the preset shape is a rectangle;
calculating, within a preset time period, the aspect ratio of the candidate window of the target object and the height difference of the intersection point of its diagonals;
if the absolute value of the height difference is greater than a first preset threshold and the aspect ratio is less than a second preset threshold, determining that the target object has fallen;
the obtaining the candidate window of the target object in the target image data includes:
obtaining a candidate window of a target object in the target image data through a trained deep learning target monitoring model based on a regression method; the deep learning target monitoring model based on the regression method comprises at least one of a single-shot multi-frame monitoring model and a single-shot target monitoring model;
the obtaining the candidate window of the target object in the target image data through the trained deep learning target monitoring model based on the regression method comprises the following steps:
dividing the target image data into N grids; wherein N is a positive integer;
predicting M frames corresponding to each grid to obtain N, N and M target windows; wherein M is a positive integer and M is not equal to N;
removing error windows in all target windows to obtain candidate windows of target objects in target image data;
the obtaining of the candidate window of the target object in the target image data through the trained regression-based deep learning target detection model further comprises the following steps:
acquiring a multi-scale feature map of the target image data;
and predicting on the multi-scale feature map using convolution kernels to obtain the candidate window of the target object in the target image data.
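The fall-judgment rule recited in claim 1 can be illustrated with a short sketch (not part of the claims): over a preset time period, the height of the candidate window's diagonal intersection (its centre) and the window's aspect ratio are compared against two preset thresholds. The function names, the height-over-width convention for the aspect ratio, and the threshold values below are all illustrative assumptions, as the claims do not fix them.

```python
# Boxes are (x1, y1, x2, y2) in pixels, with the y axis pointing downward
# as is usual in image coordinates.

def diagonal_intersection(box):
    """Point where the rectangle's diagonals cross, i.e. its centre."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def aspect_ratio(box):
    """Height-to-width ratio of the candidate window (assumed convention)."""
    x1, y1, x2, y2 = box
    return (y2 - y1) / float(x2 - x1)

def is_fall(box_before, box_after, height_thresh=40.0, ratio_thresh=1.0):
    """Claim-1 test: |centre height change| > first threshold AND
    aspect ratio < second threshold."""
    _, cy_before = diagonal_intersection(box_before)
    _, cy_after = diagonal_intersection(box_after)
    height_diff = cy_after - cy_before  # positive = centre moved down the image
    return abs(height_diff) > height_thresh and aspect_ratio(box_after) < ratio_thresh
```

Intuitively, a standing person yields a tall, narrow candidate window; after a fall the window becomes wide and low and its centre drops, so both conditions trigger at once, which is why the two tests are conjoined rather than used individually.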
2. The fall monitoring method according to claim 1, wherein, after the identifying of the target object in the target image data, the method further comprises:
if the number of target objects in the target image data is not 1, returning to the step of identifying the target object in the target image data until the number of target objects in the target image data is 1.
3. The fall monitoring method according to claim 1, wherein, before the step of acquiring the candidate window of the target object in the target image data, the method further comprises:
identifying a state of the target object;
and if the target object is in a non-standing state, returning to the step of identifying the state of the target object until the target object is in a standing state.
4. The fall monitoring method according to claim 1, wherein, before the step of acquiring the candidate window of the target object in the target image data, the method comprises:
acquiring pre-training data;
and pre-training the regression-based deep learning target detection model with the pre-training data to obtain the trained regression-based deep learning target detection model.
5. A fall monitoring device, comprising:
the first acquisition module is used for acquiring target image data;
the first identification module is used for identifying a target object in the target image data;
the second acquisition module is used for acquiring a candidate window of the target object in the target image data; the candidate window is a window of a preset shape cut out from the target image data and containing the target object, and the preset shape is a rectangle;
the calculating module is used for calculating, within a preset time period, the aspect ratio of the candidate window of the target object and the height difference of the intersection point of its diagonals;
the judging module is used for determining that the target object has fallen if the absolute value of the height difference is greater than a first preset threshold and the aspect ratio is less than a second preset threshold;
the obtaining of the candidate window of the target object in the target image data comprises:
obtaining the candidate window of the target object in the target image data through a trained regression-based deep learning target detection model; the regression-based deep learning target detection model comprises at least one of a single-shot multi-box detection model and a single-shot target detection model;
the obtaining of the candidate window of the target object in the target image data through the trained regression-based deep learning target detection model comprises the following steps:
dividing the target image data into N grids; wherein N is a positive integer;
predicting M bounding boxes for each grid to obtain N × M target windows; wherein M is a positive integer and M is not equal to N;
removing erroneous windows from all the target windows to obtain the candidate window of the target object in the target image data;
the obtaining of the candidate window of the target object in the target image data through the trained regression-based deep learning target detection model further comprises the following steps:
acquiring a multi-scale feature map of the target image data;
and predicting on the multi-scale feature map using convolution kernels to obtain the candidate window of the target object in the target image data.
6. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 4 when executing the computer program.
7. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 4.
CN201911176646.4A 2019-11-26 2019-11-26 Tumble monitoring method and device and terminal equipment Active CN111079560B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911176646.4A CN111079560B (en) 2019-11-26 2019-11-26 Tumble monitoring method and device and terminal equipment


Publications (2)

Publication Number Publication Date
CN111079560A CN111079560A (en) 2020-04-28
CN111079560B true CN111079560B (en) 2023-09-01

Family

ID=70311740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911176646.4A Active CN111079560B (en) 2019-11-26 2019-11-26 Tumble monitoring method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111079560B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113940667B (en) * 2021-09-08 2022-08-09 中国科学院深圳先进技术研究院 Anti-falling walking aid method and system based on walking aid and terminal equipment
CN114220119B (en) * 2021-11-10 2022-08-12 深圳前海鹏影数字软件运营有限公司 Human body posture detection method, terminal device and computer readable storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009034709A1 (en) * 2007-09-11 2009-03-19 Panasonic Corporation Image processing device and image processing method
CN102722715A (en) * 2012-05-21 2012-10-10 华南理工大学 Tumble detection method based on human body posture state judgment
CN105448041A (en) * 2016-01-22 2016-03-30 苏州望湖房地产开发有限公司 A human body falling intelligent control system and method
CN106709471A (en) * 2017-01-05 2017-05-24 宇龙计算机通信科技(深圳)有限公司 Fall detection method and device
CN106991790A (en) * 2017-05-27 2017-07-28 重庆大学 Old man based on multimode signature analysis falls down method of real-time and system
CN108573228A (en) * 2018-04-09 2018-09-25 杭州华雁云态信息技术有限公司 A kind of electric line foreign matter intrusion detection method and device
CN108647589A (en) * 2018-04-24 2018-10-12 南昌大学 It is a kind of based on regularization form than fall down detection method
CN109269556A (en) * 2018-09-06 2019-01-25 深圳市中电数通智慧安全科技股份有限公司 A kind of equipment Risk method for early warning, device, terminal device and storage medium
CN110263634A (en) * 2019-05-13 2019-09-20 平安科技(深圳)有限公司 Monitoring method, device, computer equipment and the storage medium of monitoring objective
CN110378297A (en) * 2019-07-23 2019-10-25 河北师范大学 A kind of Remote Sensing Target detection method based on deep learning
CN110390303A (en) * 2019-07-24 2019-10-29 深圳前海达闼云端智能科技有限公司 Tumble alarm method, electronic device, and computer-readable storage medium


Also Published As

Publication number Publication date
CN111079560A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
US20180232603A1 (en) Method and program for computing bone age by deep neural network
US8751414B2 (en) Identifying abnormalities in resource usage
EP3219254B1 (en) Method and system for removing corruption in photoplethysmogram signals for monitoring cardiac health of patients
CN111079560B (en) Tumble monitoring method and device and terminal equipment
US20220375106A1 (en) Multi-target tracking method, device and computer-readable storage medium
CN111104925B (en) Image processing method, image processing apparatus, storage medium, and electronic device
JP2020518396A5 (en)
CN109864705B (en) Method and device for filtering pulse wave and computer equipment
WO2021051547A1 (en) Violent behavior detection method and system
US20190139233A1 (en) System and method for face position tracking and alerting user
CN114627345A (en) Face attribute detection method and device, storage medium and terminal
CN113887463A (en) ICU ward monitoring method and device, electronic equipment and medium
CN111428198B (en) Method, device, equipment and storage medium for determining abnormal medical list
CN112949785A (en) Object detection method, device, equipment and computer storage medium
CN112818946A (en) Training of age identification model, age identification method and device and electronic equipment
CN114140751B (en) Examination room monitoring method and system
CN114943695A (en) Medical sequence image anomaly detection method, device, equipment and storage medium
CN112348121B (en) Target detection method, target detection equipment and computer storage medium
CN114332720A (en) Camera device shielding detection method and device, electronic equipment and storage medium
CN114913567A (en) Mask wearing detection method and device, terminal equipment and readable storage medium
CN110689112A (en) Data processing method and device
CN111260692A (en) Face tracking method, device, equipment and storage medium
CN117456562B (en) Attitude estimation method and device
CN116912203B (en) Abnormal fundus image low-consumption detection method and system based on combination of multiple intelligent models
Selvakumar et al. Efficient diabetic retinopathy diagnosis through U-Net–KNN integration in retinal fundus images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant