CN111767888A - Object state detection method, computer device, storage medium, and electronic device - Google Patents


Info

Publication number
CN111767888A
Authority
CN
China
Prior art keywords
target object
video frame
type
determining
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010650507.7A
Other languages
Chinese (zh)
Inventor
李燕超
申省梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Pengsi Technology Co ltd
Original Assignee
Beijing Pengsi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Pengsi Technology Co ltd
Priority to CN202010650507.7A
Publication of CN111767888A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Abstract

Embodiments of the present application provide an object state detection method, a computer device, a storage medium, and an electronic device. The method includes the following steps: acquiring a plurality of temporally consecutive video frames captured of a target area; calculating the pose type of a target object in each video frame; determining the pose change speed of the target object and the duration for which the target object is in a target pose type according to the time sequence of each video frame and the pose type of the target object in each video frame; and determining the state of the target object according to the determined pose change speed and duration. This improves the accuracy of the determined state of the target object.

Description

Object state detection method, computer device, storage medium, and electronic device
Technical Field
The present application relates to the field of information technology, and in particular, to an object state detection method, a computer device, a storage medium, and an electronic device.
Background
When detecting a subject in a monitored area, a video stream containing the subject is typically processed by a trained fall detection algorithm.
If the subject is detected to be in a fallen state in a certain video frame of the video stream, an alarm is raised. However, determining that the subject has fallen from a single video frame yields a high false detection rate, which causes false alarms and wastes the resources of the responding personnel.
Disclosure of Invention
In view of the above, an object of the embodiments of the present application is to provide an object state detection method, a computer device, a storage medium, and an electronic device that improve the accuracy of determining the state of a target object.
In a first aspect, an embodiment of the present application provides an object state detection method, where the method includes:
acquiring a plurality of temporally consecutive video frames captured of a target area;
calculating the pose type of the target object in each video frame;
determining the pose change speed of the target object and the duration for which the target object is in a target pose type according to the time sequence of each video frame and the pose type of the target object in each video frame;
and determining the state of the target object according to the determined pose change speed and duration.
In one embodiment, the calculating the pose type of the target object in each video frame comprises:
performing target detection on the plurality of video frames to obtain the position information of the target object in each video frame;
for each video frame, determining the pose type of the target object in the video frame according to the position information of the target object in the video frame; or,
performing skeletal feature extraction on the plurality of video frames to obtain the skeletal feature information of the target object in each video frame;
and for each video frame, determining the pose type of the target object in the video frame according to the skeletal feature information of the target object in the video frame.
In one embodiment, for each video frame, determining the pose type of the target object in the video frame according to the position information of the target object in the video frame includes:
determining the aspect ratio of the region to which the target object belongs in the video frame according to the position information of the target object in the video frame;
and determining the pose type of the target object in the video frame based on the determined aspect ratio and an aspect ratio range preset for each pose type.
In one embodiment, for each video frame, determining the pose type of the target object in the video frame according to the skeletal feature information of the target object in the video frame includes:
determining a distance variation of the target object in a predetermined direction according to the skeletal feature information of the target object in the video frame;
determining proportion information of the target object in the video frame based on the distance variation and the height information of the target object;
and determining the pose type of the target object in the video frame based on the proportion information and a proportion range preset for each pose type.
In one embodiment, determining the pose change speed of the target object and the duration for which the target object is in the target pose type according to the time sequence of each video frame and the pose type of the target object in each video frame includes:
determining, from the plurality of video frames according to the pose type of the target object in each video frame, a first video frame in which the pose type of the target object changes and the changed pose type is the target pose type, and a second video frame in which the target object is in the target pose type and which is temporally consecutive with the first video frame;
determining the time sequence of the first video frame and the time sequence of the second video frame according to the time sequence of each video frame;
determining the pose change speed of the target object according to the time sequence of the first video frame;
and determining the duration for which the target object is in the target pose type according to the time sequence of the second video frame.
In one embodiment, determining the state of the target object based on the determined pose change speed and duration includes:
if the pose change speed is greater than or equal to a preset speed threshold and the duration is greater than a preset duration threshold, determining that the target object is in a first state;
if the pose change speed is less than the preset speed threshold and/or the duration is less than or equal to the preset duration threshold, determining that the target object is in a second state; where the risk level of the target object in the first state is higher than the risk level of the target object in the second state.
In a second aspect, an embodiment of the present application provides a computer device, including a processor and a storage medium storing machine-readable instructions executable by the processor; when the computer device runs, the processor executes the machine-readable instructions to perform the steps of the object state detection method described above.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the object state detection method.
In a fourth aspect, an embodiment of the present application provides an electronic device including the computer device of the second aspect and an imaging element coupled to the processor; the imaging element is configured to acquire a plurality of consecutive video frames of a target area, and the processor is configured to execute the machine-readable instructions to perform the steps of any method of the first aspect.
In one embodiment, the electronic device further includes a communication apparatus coupled to a target device; the communication apparatus is configured to send the state of the target object to the target device when the target object enters a preset state.
The object state detection method provided by the embodiments of the present application acquires a plurality of temporally consecutive video frames captured of a target area, calculates the pose type of a target object in each video frame, determines the pose change speed of the target object and the duration for which the target object is in a target pose type according to the time sequence of each video frame and the pose type of the target object in each frame, and determines the state of the target object according to the determined pose change speed and duration. In this way, the state of the target object over a period of time is determined by jointly considering the time sequence of each video frame and the pose type of the target object in each frame, rather than from a single moment, which improves the accuracy of the determined state.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flowchart illustrating an object state detection method according to an embodiment of the present application;
fig. 2 is a schematic diagram illustrating a warning of a fall behavior provided by an embodiment of the present application;
FIG. 3 is a diagram illustrating a calculation of aspect ratio provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram illustrating an object state detection apparatus provided in an embodiment of the present application;
fig. 5 shows a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. It should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application; additionally, the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flowcharts may be performed out of order, and steps without logical dependency may be performed in reverse order or simultaneously. Moreover, under the guidance of this application, one skilled in the art may add one or more other operations to, or remove one or more operations from, each flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
Detecting falls of elderly people and of young students and raising timely alarms has become a key problem. For different application scenarios (such as a nursing home or an ordinary household), a camera can be installed in the scene to collect a video stream of the monitored area in real time; a plurality of video frames are extracted from the video stream and uploaded to a cloud or a server, which inputs them into a preset fall detection model to obtain the state of the object to be detected in each video frame. When the state of the object in a certain video frame is a fallen state, warning information is generated and an alarm is raised. However, judging whether the object has fallen from the state corresponding to a single video frame may result in false detections.
In addition, since the fall detection is performed at the cloud or the server, the alarm reaching the relevant personnel may be delayed; when an elderly person or a young student falls and is not helped in time, the injury may become more serious.
Based on this, an embodiment of the present application provides an object state detection method, which acquires a plurality of temporally consecutive video frames captured of a target area, calculates the pose type of a target object in each video frame, determines the pose change speed of the target object and the duration for which the target object is in a target pose type according to the time sequence of each video frame and the pose type of the target object in each frame, and determines the state of the target object according to the determined pose change speed and duration. The embodiments of the present application are described in detail based on this idea.
In view of the above situation, an embodiment of the present application provides an object state detection method; as shown in Fig. 1, the method includes the following steps:
S101, acquiring a plurality of temporally consecutive video frames captured of a target area;
S102, calculating the pose type of the target object in each video frame;
S103, determining the pose change speed of the target object and the duration for which the target object is in a target pose type according to the time sequence of each video frame and the pose type of the target object in each video frame;
S104, determining the state of the target object according to the determined pose change speed and duration.
The object state detection method provided by the embodiments of the present application can be applied to a computer device or to a video acquisition device. The computer device includes a terminal device, a server, and the like; the video acquisition device includes an RGB camera, a depth camera, a near-infrared camera, an infrared thermal camera, and the like, and can be selected according to the actual scene. For example, in a nursing home or household scenario, the method can be applied to the video acquisition device itself: the device acquires the video stream, analyzes the video frames in it, and raises an alarm through a computer or mobile phone terminal, as shown in Fig. 2. Of course, the method may also be applied to a computer device.
In S101, the target area is an area within the field of view of the video acquisition device. For example, when the video acquisition device is installed in a household, the target area may be a living room or a bedroom; when it is installed in a nursing home, the target area may be a rest area, an activity area, and the like in a bedroom.
The video frames may be obtained by sampling a video stream acquired by the video acquisition device, and the sampling frequency may be determined according to actual requirements, for example, sampling one video frame every 50 ms. The times corresponding to the acquired video frames are consecutive: for example, if 3 video frames are acquired, the time of the first may be 15:00:01, the time of the second 15:00:02, and the time of the third 15:00:03.
The total time span of the temporally consecutive video frames generally does not exceed a preset duration. Considering that the target object is usually an elderly person or a young student, the preset duration should not be too long, for example 2 minutes or 5 minutes. If the preset duration were too long, say 30 minutes, the state of the target object would be determined from video frames collected over a 30-minute video stream, and the result would not be timely: the target object might fall and be injured at the beginning of those 30 minutes but could not be treated until the video frames were received, by which time the injury could be serious. A short preset duration therefore allows the state of the target object to be determined in real time and avoids the target object going untreated.
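As an illustration of S101, the following Python sketch samples temporally consecutive frames from a camera stream at a fixed interval and discards frames older than the preset duration. OpenCV and the deque buffer are implementation assumptions; the 50 ms interval and 2-minute cap merely echo the examples above.

    import time
    from collections import deque

    import cv2

    SAMPLE_INTERVAL_S = 0.05   # sample one video frame every 50 ms
    PRESET_DURATION_S = 120.0  # keep at most 2 minutes of frames

    def sample_frames(source=0):
        cap = cv2.VideoCapture(source)
        frames = deque()  # (timestamp, frame) pairs, oldest first
        last_sample = 0.0
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            now = time.monotonic()
            if now - last_sample >= SAMPLE_INTERVAL_S:
                frames.append((now, frame))
                last_sample = now
                # Drop frames older than the preset duration so the state is
                # always determined from a recent, short window.
                while now - frames[0][0] > PRESET_DURATION_S:
                    frames.popleft()
                yield list(frames)
        cap.release()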
In S102, the pose type may include a standing type, a transition type, and a fallen type. The risk level of the target object is highest when it is of the fallen type, lowest when it is of the standing type, and intermediate when it is of the transition type, where the transition type may be a state between the standing state and the fallen state, for example a half-squat state.
The pose type of the target object in each video frame may be calculated in either of two ways, described in detail below.
Method 1: determine the pose type of the target object using the aspect ratio of the region to which the target object belongs in the video frame.
After the plurality of video frames are obtained, target detection is performed on them to obtain the position information of the target object in each video frame; then, for each video frame, the pose type of the target object in that frame is determined according to the position information of the target object in the frame.
Here, when performing target detection on a video frame, the frame may be processed by a target detection module, which may be a target detector based on a target detection algorithm and is configured to detect the objects contained in the frame (for example, an elderly person). The position information is the coordinate information of the region in which the target object is located in the video frame; this region may be a quadrilateral, a circle, or another shape. When the region is a quadrilateral, the position information may be the coordinate information of the quadrilateral's boundary points: for example, if the quadrilateral has four boundary points A1, A2, A3, A4, with A1 and A4 (and likewise A2 and A3) at opposite corners, the position information may be the coordinates of A1 and A4, or the coordinates of A2 and A3. The aspect ratio range may be determined from data of the object in each pose type, which is not limited in this application.
The target detection algorithm can be any of various common implementations in the field of machine learning, such as R-CNN, Fast R-CNN, Faster R-CNN, and Context R-CNN, as well as SSD, YOLO, YOLOv2, YOLOv3, and the like.
In a specific implementation, when the plurality of video frames are input to the target detection module, the data input to the module are the pixel data of the video frames, for example with shape (x, y, z), where x is the width of the video frame, y is its height, and z is the number of channels: z = 3 when the frame is an RGB image and z = 1 when it is a depth map.
Each video frame is input into the target detector to obtain the position information of the target object contained in that frame.
After the position information of the target object in each video frame is obtained, that is, once the region in which the target object is located in each frame is determined, the coordinates of two boundary points at opposite corners of the target object's region are taken for each frame. A first difference L in the abscissa direction and a second difference H in the ordinate direction between the two boundary points are calculated, and the aspect ratio of the region to which the target object belongs in the frame is the ratio of the first difference to the second difference, as shown in Fig. 3.
For example, taking one video frame A, if the position information of the target object in frame A is (x1, y1) and (x2, y2), the aspect ratio of the region to which the target object belongs in frame A is |x2 - x1| / |y2 - y1|.
The aspect ratio of the target object in the video frame is then compared with the aspect ratio range preset for each pose type; if the aspect ratio falls within a particular range, the pose type of the target object is the pose type corresponding to that range.
For example, suppose the aspect ratio of the target object in video frame A is α, the aspect ratio range corresponding to the standing type is (T1, T2), the range corresponding to the squatting type is (T3, T4), and the range corresponding to the fallen type is (T5, T6); if α belongs to (T5, T6), the pose type of the target object in frame A is determined to be the fallen type.
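The following Python sketch illustrates Method 1. The diagonal-corner box format follows the description above, while the concrete ranges standing in for (T1, T2) through (T5, T6) are illustrative assumptions, not values given in this application.

    STANDING, SQUATTING, FALLEN = "standing", "squatting", "fallen"

    # Hypothetical stand-ins for the preset ranges (T1, T2), (T3, T4), (T5, T6).
    ASPECT_RANGES = {
        STANDING:  (0.0, 0.6),
        SQUATTING: (0.6, 1.4),
        FALLEN:    (1.4, 10.0),
    }

    def aspect_ratio(box):
        """box = (x1, y1, x2, y2): two boundary points at opposite corners."""
        x1, y1, x2, y2 = box
        # L / H, cf. Fig. 3; the small epsilon guards against a zero height.
        return abs(x2 - x1) / max(abs(y2 - y1), 1e-6)

    def pose_from_box(box):
        alpha = aspect_ratio(box)
        for pose, (lo, hi) in ASPECT_RANGES.items():
            if lo < alpha <= hi:
                return pose
        return None  # aspect ratio outside all preset ranges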
Method 2: determine the pose type of the target object using the skeletal feature information of the target object in the video frame.
After the plurality of video frames are obtained, skeletal feature extraction is performed on them to obtain the skeletal feature information of the target object in each video frame; then, for each video frame, the pose type of the target object in that frame is determined according to the skeletal feature information of the target object in the frame.
When performing skeletal feature extraction on a video frame, the frame may be processed by a skeletal feature extraction module, which is configured to extract the position information of the skeletal feature points of the target object from the frame. The input data of the module are the pixel data of the video frame, for example with shape (x, y, z), where x is the width of the video frame, y is its height, and z is the number of channels: z = 3 when the frame is an RGB image and z = 1 when it is a depth map. The skeletal feature information includes the coordinate information of each skeletal feature point of the target object in the video frame; the skeletal feature points include a head feature point, a center feature point, arm feature points, foot feature points, and the like.
The skeletal feature extraction module can be implemented in various ways: for example, an OpenPose, AlphaPose, or VNect model can be used, or a GAN, CNN, or similar network can be trained on sample data.
In a specific implementation, each video frame is input into the skeletal feature extraction module to obtain the position information of a plurality of skeletal feature points of the target object in that frame.
After the skeletal feature information of the target object in each video frame is obtained, a distance variation of the target object in a predetermined direction may be determined according to the skeletal feature information of the target object in the frame; proportion information of the target object in the frame may be determined based on the distance variation and the height information of the target object; and the pose type of the target object in the frame may be determined based on the proportion information and the proportion range preset for each pose type.
Here, the predetermined direction may be the height direction in the video frame, i.e., the y-axis direction of the coordinate system in which the frame lies. The distance variation represents the difference between two target feature points of the target object along the predetermined direction; the target feature points may be the head feature point and the center feature point, or the center feature point and a foot feature point. For example, when the target feature points are the head and center feature points, the distance variation is the absolute value of the difference between the ordinate of the head feature point and that of the center feature point. The height information of the target object may be the distance between the head feature point and the center feature point, or the distance between the center feature point and a foot feature point. The proportion information characterizes the pose type of the target object: the larger the proportion, the higher the probability that the target object is in a standing state; the smaller the proportion, the higher the probability that it is in a fallen state. The proportion range is determined from data of the object in each pose type.
In a specific implementation, after the skeletal feature information of the target object in each video frame is obtained, the difference between the ordinates of two target feature points of the target object is calculated for each frame, and this difference is taken as the distance variation of the target object in that frame along the ordinate direction.
For example, taking one video frame, if the head feature point of the target object has coordinates (x1, y1) and the center feature point has coordinates (x2, y2), the distance variation of the target object in that frame is the absolute value of y1 - y2.
After the distance variation of the target object in the video frame is obtained, the ratio of the distance variation to a preset height value is calculated, and the ratio is used as the proportion of the target object in the video frame.
Continuing the previous example, where the distance variation is the change between the head feature point and the center feature point along the y-axis, the height information may be the distance between the head feature point and the center feature point in the video frame, and the proportion is the ratio of |y1 - y2| to that distance.
The proportion is compared with the proportion range corresponding to each pose type; when the proportion falls within a particular range, the pose type corresponding to that range is the pose type of the target object.
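The following Python sketch illustrates Method 2 under stated assumptions: the keypoint names, the choice of head/center target feature points, and the proportion ranges are all illustrative, not values from this application.

    # Hypothetical proportion ranges for each pose type.
    PROPORTION_RANGES = {
        "standing":  (0.8, 1.2),
        "squatting": (0.4, 0.8),
        "fallen":    (0.0, 0.4),
    }

    def pose_from_skeleton(keypoints, height):
        """keypoints: dict mapping point names to (x, y) coordinates;
        height: reference head-to-center distance of the target object."""
        head_y = keypoints["head"][1]
        center_y = keypoints["center"][1]
        # Distance variation along the predetermined (y-axis) direction.
        dy = abs(head_y - center_y)
        proportion = dy / height  # large when standing, small when fallen
        for pose, (lo, hi) in PROPORTION_RANGES.items():
            if lo < proportion <= hi:
                return pose
        return None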
In S103, the time sequence of a video frame may be the time at which the frame was acquired by the video acquisition device, or the frame's sequence number among the plurality of video frames, where a larger number indicates a later acquisition time and a smaller number an earlier one. The target pose type may be the fallen type. The pose change speed may be the length of time taken to change from one pose type to another, determined from the time sequence of the video frame corresponding to the one pose type and the time sequence of the video frame corresponding to the other. The duration represents how long the target object remains in one pose type and can be determined based on the time sequence of the video frames corresponding to that pose type.
When S103 is executed, the following steps may be specifically included:
determining, from the plurality of video frames according to the pose type of the target object in each video frame, a first video frame in which the pose type of the target object changes and the changed pose type is the target pose type, and a second video frame in which the target object is in the target pose type and which is temporally consecutive with the first video frame;
determining the time sequence of the first video frame and the time sequence of the second video frame according to the time sequence of each video frame;
determining the pose change speed of the target object according to the time sequence of the first video frame;
and determining the duration for which the target object is in the target pose type according to the time sequence of the second video frame.
Here, the first video frame is a frame pair in which the pose changes from the standing or squatting type to the fallen type and which is continuous in time; the second video frame comprises the frames in which the target object is of the fallen type and which are temporally consecutive with the first video frame.
In a specific implementation, after the pose type of the target object in each video frame is obtained, the consecutive frame pairs in which the pose type changes are selected from the plurality of video frames; among these, the pairs in which the changed pose type is the target pose type are kept, i.e., the pairs in which the pose type of the second frame of the pair is the target pose type.
For example, suppose the pose types include standing, squatting, and fallen, and there are 2 frame pairs in which the pose type changes: the first pair changes from the standing type to the squatting type, and the second pair changes from the standing type to the fallen type; then the second pair is taken as the first video frame.
Considering that when the target object falls, the motion from standing or squatting to fallen is continuous, after the first video frame is determined, a plurality of frames that correspond to the target pose type and are temporally consecutive with the first video frame may be selected from the plurality of video frames and taken as the second video frame.
The time sequence of the frame pair constituting the first video frame and the time sequence of the frames constituting the second video frame are then determined from the time sequences corresponding to the plurality of video frames. For example, if the plurality of video frames are numbered A1, A2, A3, …, A10 with corresponding time sequences T1, T2, T3, …, T10, and the first video frame comprises frames A2 and A3, then the time sequence of frame A2 is T2 and that of frame A3 is T3.
For the frame pair constituting the first video frame, the difference between the time sequence of the later frame of the pair and that of the earlier frame is calculated and taken as the pose change speed of the target object. Continuing the previous example, the pose change speed between frames A2 and A3 is T3 - T2.
For the frames constituting the second video frame, a sequence of frames that is continuous in time and consecutive with the first video frame is selected, and the difference between the time sequence of the last frame in that sequence and the time sequence of the first video frame is calculated and taken as the duration for which the target object is in the target pose type.
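The following Python sketch illustrates S103: deriving the pose change interval and the fall duration from per-frame (time sequence, pose type) results. Treating the pose change speed as the time taken to change into the target pose type is a reading of the description above; the function and variable names are assumptions.

    TARGET_POSE = "fallen"

    def change_and_duration(labeled):
        """labeled: list of (timestamp, pose) pairs in time order."""
        for i in range(1, len(labeled)):
            prev_t, prev_pose = labeled[i - 1]
            cur_t, cur_pose = labeled[i]
            # First video frame (pair): the pose changes and the changed
            # pose is the target pose type.
            if prev_pose != TARGET_POSE and cur_pose == TARGET_POSE:
                change_interval = cur_t - prev_t  # e.g. T3 - T2
                # Second video frame: frames consecutive with the change in
                # which the object stays in the target pose type.
                j = i
                while j + 1 < len(labeled) and labeled[j + 1][1] == TARGET_POSE:
                    j += 1
                duration = labeled[j][0] - cur_t
                return change_interval, duration
        return None  # no change into the target pose type was observed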
In S104, the states of the target object include a standing state, a squatting state, and a fallen state.
When S104 is executed, the following steps may be included:
if the pose change speed is greater than or equal to a preset speed threshold and the duration is greater than a preset duration threshold, determining that the target object is in a first state;
if the pose change speed is less than the preset speed threshold and/or the duration is less than or equal to the preset duration threshold, determining that the target object is in a second state; where the risk level of the target object in the first state is higher than the risk level of the target object in the second state.
Here, the speed threshold and the duration threshold may be determined according to the actual scene. The risk level of the target object in the first state is higher than that in the second state: the first state indicates that the target object is in a fallen state, and the second state indicates that it is in a squatting or standing state.
In a specific implementation, after the pose change speed and the duration of the target object are obtained, the pose change speed is compared with the preset speed threshold and the duration is compared with the preset duration threshold.
When the pose change speed is less than the preset speed threshold, or the duration is less than or equal to the preset duration threshold, or both, the target object has probably performed a squatting or standing action, and it is determined to be in the second state; that is, the risk level of its current state is low.
When the pose change speed is greater than or equal to the preset speed threshold and the duration is greater than the preset duration threshold, the target object changed from one pose type to another in a short time and has remained in the target pose type (the fallen type) for a long time; the target object is therefore determined to be in a fallen state, and a warning can be raised at this point.
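The following Python sketch illustrates S104. The description compares a pose change speed against a speed threshold; since the sketch above computes a change interval (a time), a short interval is mapped to a high speed here, and both threshold values are assumptions.

    MAX_CHANGE_INTERVAL_S = 0.5  # hypothetical: an interval this short ~ "speed >= threshold"
    MIN_FALL_DURATION_S = 5.0    # hypothetical duration threshold

    def object_state(change_interval, duration):
        fast_change = change_interval <= MAX_CHANGE_INTERVAL_S
        long_stay = duration > MIN_FALL_DURATION_S
        if fast_change and long_stay:
            return "first state"   # fallen: generate warning information
        return "second state"      # squatting or standing: low risk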
When raising a warning, if the video acquisition device is installed in a nursing home, warning information can be generated and sent to a terminal device in the monitoring room; the warning information carries the device identifier of the video acquisition device that issued it and the duration for which the target object has been in the fallen type. The terminal device in the monitoring room can determine the position of the video acquisition device (for example, room 301) from the device identifier in the warning information, display the warning information on its display interface, and at the same time issue an audible alarm.
If the video acquisition device is installed in an ordinary household, warning information including the duration for which the target object has been in the fallen type can be generated and sent to a guardian of the target object, so that the guardian can check the surveillance video in time and carry out rescue accordingly.
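As an illustration only, the following Python sketch packages the device identifier and the fall duration into warning information; the payload fields and the send_to_monitor_terminal helper are hypothetical, not part of this application.

    import json
    import time

    def make_warning(device_id, fall_duration_s):
        # device_id locates the camera (e.g. room 301); fall_duration_s is
        # how long the target object has been in the fallen type.
        return json.dumps({
            "device_id": device_id,
            "fall_duration_s": fall_duration_s,
            "timestamp": time.time(),
        })

    # send_to_monitor_terminal(make_warning("cam-301", 12.0))  # hypothetical transport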
Referring to Fig. 4, a schematic diagram of an object state detection apparatus provided by an embodiment of the present application is shown; the apparatus includes:
an obtaining module 41, configured to obtain a plurality of temporally consecutive video frames captured of a target area;
a calculation module 42, configured to calculate the pose type of the target object in each video frame;
a first determining module 43, configured to determine, according to the time sequence of each video frame and the pose type of the target object in each video frame, the pose change speed of the target object and the duration for which the target object is in the target pose type;
and a second determining module 44, configured to determine the state of the target object according to the determined pose change speed and duration.
In one embodiment, the calculation module 42 is configured to calculate the pose type of the target object in each video frame according to the following steps:
performing target detection on the plurality of video frames to obtain the position information of the target object in each video frame;
for each video frame, determining the pose type of the target object in the video frame according to the position information of the target object in the video frame; or,
performing skeletal feature extraction on the plurality of video frames to obtain the skeletal feature information of the target object in each video frame;
and for each video frame, determining the pose type of the target object in the video frame according to the skeletal feature information of the target object in the video frame.
In one embodiment, the calculation module 42 is configured to determine the pose type of the target object in the video frame according to the following steps:
determining the aspect ratio of the region to which the target object belongs in the video frame according to the position information of the target object in the video frame;
and determining the pose type of the target object in the video frame based on the determined aspect ratio and an aspect ratio range preset for each pose type.
In one embodiment, the calculation module 42 is configured to determine the pose type of the target object in the video frame according to the following steps:
determining a distance variation of the target object in a predetermined direction according to the skeletal feature information of the target object in the video frame;
determining proportion information of the target object in the video frame based on the distance variation and the height information of the target object;
and determining the pose type of the target object in the video frame based on the proportion information and a proportion range preset for each pose type.
In one embodiment, the first determining module 43 is configured to determine the pose change speed of the target object and the duration for which the target object is in the target pose type according to the following steps:
determining, from the plurality of video frames according to the pose type of the target object in each video frame, a first video frame in which the pose type of the target object changes and the changed pose type is the target pose type, and a second video frame in which the target object is in the target pose type and which is temporally consecutive with the first video frame;
determining the time sequence of the first video frame and the time sequence of the second video frame according to the time sequence of each video frame;
determining the pose change speed of the target object according to the time sequence of the first video frame;
and determining the duration for which the target object is in the target pose type according to the time sequence of the second video frame.
In one embodiment, the second determining module 44 is configured to determine the state of the target object according to the following steps:
if the pose change speed is greater than or equal to a preset speed threshold and the duration is greater than a preset duration threshold, determining that the target object is in a first state;
if the pose change speed is less than the preset speed threshold and/or the duration is less than or equal to the preset duration threshold, determining that the target object is in a second state; where the risk level of the target object in the first state is higher than the risk level of the target object in the second state.
An embodiment of the present application further provides a computer device 50; Fig. 5 shows its schematic structural diagram. The computer device 50 includes a processor 51, a memory 52, and a bus 53. The memory 52 stores machine-readable instructions executable by the processor 51 (for example, the execution instructions corresponding to the obtaining module 41, the calculation module 42, the first determining module 43, and the second determining module 44 of the apparatus in Fig. 4). When the computer device 50 runs, the processor 51 communicates with the memory 52 through the bus 53, and the processor 51 executes the instructions to perform the following processing:
acquiring a plurality of temporally consecutive video frames captured of a target area;
calculating the pose type of the target object in each video frame;
determining the pose change speed of the target object and the duration for which the target object is in a target pose type according to the time sequence of each video frame and the pose type of the target object in each video frame;
and determining the state of the target object according to the determined pose change speed and duration.
In one possible embodiment, the instructions executed by the processor 51 for calculating the pose type of the target object in each video frame separately include:
performing target detection on the plurality of video frames to obtain the position information of the target object in each video frame;
for each video frame, determining the pose type of the target object in the video frame according to the position information of the target object in the video frame; or,
performing skeletal feature extraction on the plurality of video frames to obtain the skeletal feature information of the target object in each video frame;
and for each video frame, determining the pose type of the target object in the video frame according to the skeletal feature information of the target object in the video frame.
In one possible embodiment, the instructions executed by the processor 51 for determining, for each video frame, the pose type of the target object in the video frame according to the position information of the target object in the video frame include:
determining the aspect ratio of the region to which the target object belongs in the video frame according to the position information of the target object in the video frame;
and determining the pose type of the target object in the video frame based on the determined aspect ratio and an aspect ratio range preset for each pose type.
In one possible embodiment, the instructions executed by the processor 51 for determining, for each video frame, the pose type of the target object in the video frame according to the skeletal feature information of the target object in the video frame include:
determining a distance variation of the target object in a predetermined direction according to the skeletal feature information of the target object in the video frame;
determining proportion information of the target object in the video frame based on the distance variation and the height information of the target object;
and determining the pose type of the target object in the video frame based on the proportion information and a proportion range preset for each pose type.
In one possible embodiment, the instructions executed by the processor 51 for determining the pose change speed of the target object and the duration for which the target object is in the target pose type according to the time sequence of each video frame and the pose type of the target object in each video frame include:
determining, from the plurality of video frames according to the pose type of the target object in each video frame, a first video frame in which the pose type of the target object changes and the changed pose type is the target pose type, and a second video frame in which the target object is in the target pose type and which is temporally consecutive with the first video frame;
determining the time sequence of the first video frame and the time sequence of the second video frame according to the time sequence of each video frame;
determining the pose change speed of the target object according to the time sequence of the first video frame;
and determining the duration for which the target object is in the target pose type according to the time sequence of the second video frame.
In one possible embodiment, the instructions executed by the processor 51 for determining the state of the target object according to the determined pose change speed and duration include:
if the pose change speed is greater than or equal to a preset speed threshold and the duration is greater than a preset duration threshold, determining that the target object is in a first state;
if the pose change speed is less than the preset speed threshold and/or the duration is less than or equal to the preset duration threshold, determining that the target object is in a second state; where the risk level of the target object in the first state is higher than the risk level of the target object in the second state.
As is known to those skilled in the art, as computer hardware evolves, the specific implementation and nomenclature of the bus may change, and the bus as referred to herein conceptually encompasses any information transfer line capable of servicing components within a computer device, including, but not limited to, FSB, HT, QPI, Infinity Fabric, etc.
In the embodiment of the present application, the processor may be a general-purpose processor including a Central Processing Unit (CPU), and may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a neural Network Processor (NPU), a Tensor Processor (TPU), or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the object state detection method are performed.
Specifically, the storage medium can be a general-purpose storage medium, such as a removable disk or a hard disk. When executed, the computer program on the storage medium performs the object state detection method described above, thereby addressing the low accuracy with which the state of a target object is determined in the prior art. The present application acquires a plurality of temporally consecutive video frames captured of a target area, calculates the pose type of the target object in each video frame, determines the pose change speed of the target object and the duration for which the target object is in the target pose type according to the time sequence of each video frame and the pose type of the target object in each frame, and determines the state of the target object according to the determined pose change speed and duration. In this way, the state of the target object over a period of time is determined by jointly considering the time sequence of each video frame and the pose type of the target object in each frame, rather than from a single moment, which improves the accuracy of the determined state.
An embodiment of the present application provides an electronic device including the computer device shown in Fig. 5 and an imaging element coupled to the processor; the imaging element is configured to acquire a plurality of consecutive video frames of a target area, and the processor is configured to execute the machine-readable instructions to perform the steps of the object state detection method described above.
In one embodiment, the electronic device further includes a communication apparatus coupled to a target device; the communication apparatus is configured to send the state of the target object to the target device when the target object enters a preset state.
Alternatively, the electronic device to which the present application relates may be a camera device (e.g., a camera, a video camera, an edge computing box, etc.) for use in a home, classroom, nursing home, etc.; the target device can be an associated, bound device such as a mobile phone or tablet. The communication apparatus may be based on technologies such as Bluetooth, the fourth-generation mobile communication technology (4G), the fifth-generation mobile communication technology (5G), or wireless local area network (Wi-Fi) technology, and it sends the alarm information about the state of the target object to the target device through a router, or directly through a wireless wide area network (WWAN).
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to corresponding processes in the method embodiments, and are not described in detail in this application. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and there may be other divisions in actual implementation, and for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or modules through some communication interfaces, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An object state detection method, characterized in that the method comprises:
acquiring a plurality of temporally consecutive video frames captured of a target area;
calculating the pose type of the target object in each video frame;
determining the pose change speed of the target object and the duration for which the target object is in a target pose type according to the time sequence of each video frame and the pose type of the target object in each video frame;
and determining the state of the target object according to the determined pose change speed and duration.
2. The method of claim 1, wherein separately calculating the pose type of the target object in each video frame comprises:
performing target detection on the plurality of video frames to obtain the position information of the target object in each video frame;
for each video frame, determining the pose type of the target object in the video frame according to the position information of the target object in the video frame; or,
performing skeletal feature extraction on the plurality of video frames to obtain the skeletal feature information of the target object in each video frame;
and for each video frame, determining the pose type of the target object in the video frame according to the skeletal feature information of the target object in the video frame.
3. The method of claim 2, wherein, for each video frame, determining the posture type of the target object in the video frame according to the position information of the target object in the video frame comprises:
determining the aspect ratio of the region occupied by the target object in the video frame according to the position information of the target object in the video frame;
and determining the posture type of the target object in the video frame based on the determined aspect ratio and a preset aspect ratio range for each posture type.
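For illustration, a hedged sketch of this aspect-ratio rule follows; the range values are invented, since the claim only presets "an aspect ratio range for each posture type".

    # Width/height ranges per posture type (illustrative placeholder values).
    ASPECT_RANGES = {
        "standing": (0.0, 0.6),
        "sitting":  (0.6, 1.2),
        "lying":    (1.2, float("inf")),
    }

    def posture_from_box(x1: float, y1: float, x2: float, y2: float) -> str:
        # Aspect ratio of the region the target object occupies in the frame.
        width, height = x2 - x1, y2 - y1
        ratio = width / height if height > 0 else float("inf")
        for posture, (lo, hi) in ASPECT_RANGES.items():
            if lo <= ratio < hi:
                return posture
        return "unknown"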
4. The method of claim 2, wherein, for each video frame, determining the posture type of the target object in the video frame according to the bone feature information of the target object in the video frame comprises:
determining a distance variation in a predetermined direction for the target object in the video frame according to the bone feature information of the target object in the video frame;
determining proportion information of the target object in the video frame based on the distance variation and the height information of the target object;
and determining the posture type of the target object in the video frame based on the proportion information and a preset proportion range for each posture type.
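One possible reading of this claim, sketched below: the vertical drop of a head keypoint relative to the object's height serves as the proportion information. The keypoint name, the vertical "predetermined direction" and the proportion ranges are all assumptions.

    # Proportion ranges per posture type (illustrative placeholder values).
    PROPORTION_RANGES = {
        "standing":  (0.0, 0.2),
        "crouching": (0.2, 0.5),
        "lying":     (0.5, 1.01),
    }

    def posture_from_skeleton(
        keypoints: dict,     # current frame, e.g. {"head": (x, y), ...}
        reference: dict,     # keypoints from a reference (upright) frame
        height: float,       # height of the target object, in pixels
    ) -> str:
        # Distance variation in the predetermined (vertical) direction:
        # how far the head keypoint has moved down; image y grows downward.
        drop = keypoints["head"][1] - reference["head"][1]
        # Proportion information: the drop relative to the object's height.
        proportion = max(0.0, min(drop / height, 1.0))
        for posture, (lo, hi) in PROPORTION_RANGES.items():
            if lo <= proportion < hi:
                return posture
        return "unknown"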
5. The method of claim 1, wherein determining a posture change speed of the target object and a duration for which the target object is in a target posture type according to the time sequence of each video frame and the posture type of the target object in each video frame comprises:
determining, from the plurality of video frames and according to the posture type of the target object in each video frame, a first video frame in which the posture type of the target object has changed and the changed posture type is the target posture type, and a second video frame which is temporally continuous with the first video frame and in which the target object is in the target posture type;
determining the time sequence of the first video frame and the time sequence of the second video frame according to the time sequence of each video frame;
determining the posture change speed of the target object according to the time sequence of the first video frame;
and determining the duration for which the target object is in the target posture type according to the time sequence of the second video frame.
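A plausible implementation of this split into a transition frame and a run of holding frames is sketched below; the exact speed formula is not fixed by the claim, so the reciprocal of the transition interval is an assumption.

    from typing import List, Optional, Tuple

    def speed_and_duration(
        timestamps: List[float],   # one timestamp in seconds per video frame
        postures: List[str],       # posture type per frame, in the same order
        target_posture: str,
    ) -> Optional[Tuple[float, float]]:
        for i in range(1, len(postures)):
            # First video frame: the posture has just changed to the target type.
            if postures[i] == target_posture and postures[i - 1] != target_posture:
                last = i
                # Second video frames: temporally continuous frames that stay
                # in the target posture type.
                while last + 1 < len(postures) and postures[last + 1] == target_posture:
                    last += 1
                dt = timestamps[i] - timestamps[i - 1]
                speed = 1.0 / dt if dt > 0 else float("inf")
                duration = timestamps[last] - timestamps[i]
                return speed, duration
        return None   # the target posture type was never entered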
6. The method of claim 1, wherein determining the state of the target object according to the determined posture change speed and the duration comprises:
if the posture change speed is greater than or equal to a preset speed threshold and the duration is greater than a preset duration threshold, determining that the target object is in a first state;
if the posture change speed is less than the preset speed threshold and/or the duration is less than or equal to the preset duration threshold, determining that the target object is in a second state; wherein the risk level of the target object in the first state is higher than the risk level of the target object in the second state.
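The decision rule itself reduces to two threshold comparisons; in the sketch below the threshold values are placeholders, not values from the patent.

    def classify_state(change_speed: float, duration: float,
                       speed_threshold: float = 2.0,     # placeholder value
                       duration_threshold: float = 5.0   # placeholder, seconds
                       ) -> str:
        # A fast posture change that is then held for a long time indicates
        # the higher-risk first state (e.g. a sudden, sustained fall).
        if change_speed >= speed_threshold and duration > duration_threshold:
            return "first_state"
        return "second_state"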
7. A computer device, comprising: a processor and a storage medium storing machine-readable instructions executable by the processor; when the computer device runs, the processor executes the machine-readable instructions to perform the steps of the method of any one of claims 1 to 6.
8. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, is adapted to carry out the steps of the method according to any one of claims 1 to 6.
9. An electronic device, comprising the computer device of claim 7 and an imaging element coupled with the processor; the imaging element is configured to acquire a plurality of temporally consecutive video frames of a target area; and the processor is configured to execute the machine-readable instructions to perform the steps of the method of any one of claims 1 to 6.
10. The electronic device of claim 9, further comprising a communication device coupled to a target device; the communication device is configured to send the state of the target object to the target device when a preset state of the target object occurs.
CN202010650507.7A 2020-07-08 2020-07-08 Object state detection method, computer device, storage medium, and electronic device Pending CN111767888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010650507.7A CN111767888A (en) 2020-07-08 2020-07-08 Object state detection method, computer device, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010650507.7A CN111767888A (en) 2020-07-08 2020-07-08 Object state detection method, computer device, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
CN111767888A true CN111767888A (en) 2020-10-13

Family

ID=72725101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010650507.7A Pending CN111767888A (en) 2020-07-08 2020-07-08 Object state detection method, computer device, storage medium, and electronic device

Country Status (1)

Country Link
CN (1) CN111767888A (en)


Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170109991A1 (en) * 2011-07-12 2017-04-20 Cerner Innovation, Inc. Method and process for determining whether an individual suffers a fall requiring assistance
CN103955699A (en) * 2014-03-31 2014-07-30 北京邮电大学 Method for detecting tumble event in real time based on surveillance videos
CN103903281A (en) * 2014-04-04 2014-07-02 西北工业大学 Old people tumbling detecting method based on multi-feature analyzing and scene studying
CN106571014A (en) * 2016-10-24 2017-04-19 上海伟赛智能科技有限公司 Method for identifying abnormal motion in video and system thereof
CN108509897A (en) * 2018-03-29 2018-09-07 同济大学 A kind of human posture recognition method and system
CN108629300A (en) * 2018-04-24 2018-10-09 北京科技大学 A kind of fall detection method
CN108898108A (en) * 2018-06-29 2018-11-27 炬大科技有限公司 A kind of user's abnormal behaviour monitoring system and method based on sweeping robot
US20200211154A1 (en) * 2018-12-30 2020-07-02 Altumview Systems Inc. Method and system for privacy-preserving fall detection
CN109685037A (en) * 2019-01-08 2019-04-26 北京汉王智远科技有限公司 A kind of real-time action recognition methods, device and electronic equipment
CN110111016A (en) * 2019-05-14 2019-08-09 深圳供电局有限公司 Precarious position monitoring method, device and the computer equipment of operating personnel
CN110378244A (en) * 2019-05-31 2019-10-25 曹凯 The detection method and device of abnormal posture
CN110287923A (en) * 2019-06-29 2019-09-27 腾讯科技(深圳)有限公司 Human body attitude acquisition methods, device, computer equipment and storage medium
CN110390303A (en) * 2019-07-24 2019-10-29 深圳前海达闼云端智能科技有限公司 Tumble alarm method, electronic device, and computer-readable storage medium
CN110458061A (en) * 2019-07-30 2019-11-15 四川工商学院 A kind of method and company robot of identification Falls in Old People
CN110765860A (en) * 2019-09-16 2020-02-07 平安科技(深圳)有限公司 Tumble determination method, tumble determination device, computer apparatus, and storage medium
CN110852237A (en) * 2019-11-05 2020-02-28 浙江大华技术股份有限公司 Object posture determining method and device, storage medium and electronic device
CN111241913A (en) * 2019-12-19 2020-06-05 北京文安智能技术股份有限公司 Method, device and system for detecting falling of personnel
CN111242030A (en) * 2020-01-13 2020-06-05 平安国际智慧城市科技股份有限公司 Video data processing method, device, equipment and computer readable storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Robert John Gripentog: "Fall Detection by Using Video", HTTP://DIGITALSCHOLARSHIP.UNLV.EDU/CGI/VIEWCONTENT.CGI?ARTICLE=3537&CONTENT=THESEDISSERTATIONS, pages 1-48 *
Xueyi Wang et al.: "Elderly Fall Detection Systems: A Literature Survey", Frontiers in Robotics and AI, vol. 7, pages 1-23 *
Tian Guohui et al.: "Human action recognition based on multi-feature fusion", Journal of Shandong University (Engineering Science), vol. 39, no. 5, pages 43-47 *
Wei Zhengang et al.: "Human fall detection algorithm based on multi-camera surveillance", Periodical of Ocean University of China, vol. 49, no. 7, pages 142-148 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597899A (en) * 2020-12-24 2021-04-02 北京市商汤科技开发有限公司 Behavior state detection method and device, electronic equipment and storage medium
CN112651865A (en) * 2020-12-30 2021-04-13 北京市商汤科技开发有限公司 Behavior state prompting method and device, electronic equipment and storage medium
CN112735198A (en) * 2020-12-31 2021-04-30 深兰科技(上海)有限公司 Experiment teaching system and method
CN112381072A (en) * 2021-01-11 2021-02-19 西南交通大学 Human body abnormal behavior detection method based on time-space information and human-object interaction
WO2022183661A1 (en) * 2021-03-03 2022-09-09 上海商汤智能科技有限公司 Event detection method and apparatus, electronic device, storage medium, and program product
CN112949503A (en) * 2021-03-05 2021-06-11 齐齐哈尔大学 Site monitoring management method and system for ice and snow sports
CN113657163A (en) * 2021-07-15 2021-11-16 浙江大华技术股份有限公司 Behavior recognition method, electronic device, and storage medium
CN114220119A (en) * 2021-11-10 2022-03-22 深圳前海鹏影数字软件运营有限公司 Human body posture detection method, terminal device and computer readable storage medium
CN114220119B (en) * 2021-11-10 2022-08-12 深圳前海鹏影数字软件运营有限公司 Human body posture detection method, terminal device and computer readable storage medium
CN114724335A (en) * 2022-03-24 2022-07-08 慧之安信息技术股份有限公司 Nursing home safety monitoring system and method based on edge calculation

Similar Documents

Publication Publication Date Title
CN111767888A (en) Object state detection method, computer device, storage medium, and electronic device
JP6942488B2 (en) Image processing equipment, image processing system, image processing method, and program
EP2919153A1 (en) Event detection apparatus and event detection method
CN107657244B (en) Human body falling behavior detection system based on multiple cameras and detection method thereof
WO2016167017A1 (en) Image processing device, image processing method, and image processing system
CN109145696B (en) Old people falling detection method and system based on deep learning
CN111241913A (en) Method, device and system for detecting falling of personnel
KR20180020123A (en) Asynchronous signal processing method
CN109543607A (en) Object abnormal state detection method, system, monitor system and storage medium
CN111597879A (en) Gesture detection method, device and system based on monitoring video
CN111666821A (en) Personnel gathering detection method, device and equipment
JP2020504383A (en) Image foreground detection device, detection method, and electronic apparatus
JP2018151834A (en) Lost child detection apparatus and lost child detection method
KR101860062B1 (en) Fall detection system and method
CN111797776A (en) Infant monitoring method and device based on posture
JP6599644B2 (en) Motion determination device, motion determination method, and motion determination program
US11393091B2 (en) Video image processing and motion detection
CN111597889B (en) Method, device and system for detecting target movement in video
JP2016129008A (en) Video surveillance system and method for fraud detection
CN111144260A (en) Detection method, device and system of crossing gate
US20140147011A1 (en) Object removal detection using 3-d depth information
CN111191499A (en) Fall detection method and device based on minimum center line
JP6939065B2 (en) Image recognition computer program, image recognition device and image recognition method
CN115731563A (en) Method for identifying falling of remote monitoring personnel
JP6451418B2 (en) Gaze target determination device, gaze target determination method, and gaze target determination program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination