WO2019128304A1 - Human body fall detection method and device - Google Patents
Human body fall detection method and device
- Publication number
- WO2019128304A1 (PCT/CN2018/104734)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- human body
- image
- sample data
- state
- target
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
Definitions
- the present application relates to the field of human body detection technology, and in particular, to a human body fall detection method and device.
- most existing methods arrange a plurality of cameras in the human activity area in advance to collect video stream data, and then analyze changes of the human body in the video stream data to determine whether the human body has fallen.
- processing and analyzing the video stream data involves a large workload and is inefficient.
- judging whether the human body has fallen by analyzing changes of the human body is comparatively complicated and error-prone.
- when the existing methods are implemented, they often suffer from poor fall-recognition accuracy, large error, and low efficiency.
- the embodiment of the present invention provides a method and a device for detecting a fall of a human body, so as to solve the technical problems of poor fall-recognition accuracy, large error, and low efficiency in the existing methods, and to achieve the technical effect of accurately and efficiently identifying a fall state.
- the embodiment of the present application provides a human body fall detection method, including:
- acquiring a target image, and performing human body detection on the target image through a target detection network to determine whether the target image is an image including a human body;
- in a case where the target image is determined to be an image including a human body, performing fall recognition on the target image through a convolutional neural network to determine whether the human body in the target image is in a falling state.
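The two-stage flow just described can be sketched as follows. This is a minimal illustration in Python; the dict-based "frames" and the function names `contains_human`, `classify_state`, and `detect_fall` are hypothetical stand-ins for the target detection network and the convolutional neural network, not part of the application:

```python
# Stage 1 (target detection network) decides whether the frame contains
# a human body at all; stage 2 (classification CNN) decides fall vs.
# non-fall. Frames without a human are skipped, as described above.

def contains_human(frame) -> bool:
    """Stand-in for the SSD-based target detection network."""
    return bool(frame.get("person"))

def classify_state(frame) -> str:
    """Stand-in for the trained CNN: 'fall' or 'no_fall'."""
    return "fall" if frame.get("state") in ("lying down", "prone") else "no_fall"

def detect_fall(frame):
    """Full pipeline; returns None for invalid frames (no human),
    signalling that a new target image should be acquired."""
    if not contains_human(frame):
        return None
    return classify_state(frame) == "fall"
```

Returning `None` for frames without a human mirrors the "re-acquire the target image" branch described later in the method.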
- the acquiring the target image includes:
- the target detection network is established in the following manner:
- collecting human body image sample data, wherein the human body image sample data includes a plurality of images covering different human body states;
- the labeled human body image sample data is used for training to obtain a target detection network based on the target detection algorithm.
- the human body state includes a state in which the human body is standing, a state in which the human body is sitting, a state in which the human body is lying down, a state in which the human body is squatting, a state in which the human body is tilted, and a state in which the human body is kneeling.
- the method further includes: re-acquiring the target image.
- the convolutional neural network is established in the following manner:
- the image in the positive sample data includes at least one of the following: an image containing a state in which the human body is standing, an image containing a state in which the human body is sitting, an image containing a state in which the human body is squatting, and an image containing a state in which the human body is tilted; and the image in the negative sample data includes at least one of the following: an image containing a state in which the human body is lying down, and an image containing a state in which the human body is lying prone;
- Training is performed using the positive sample data and the negative sample data to establish the convolutional neural network for identifying a human body state type.
- the compliant image comprises an image in which the human body region occupies more than 80% of the image area.
- the embodiment of the present application further provides a human body fall detection device, including:
- a human body detecting module configured to perform human body detection on the target image through a target detection network to determine whether the target image is an image including a human body
- a fall identification module configured to perform fall recognition on the target image by using a convolutional neural network to determine whether the human body in the target image is in a falling state, in a case where the target image is determined to be an image including a human body.
- the obtaining module comprises:
- a sound collector for collecting sound information in a target area
- a locator configured to determine a target orientation according to the sound information
- a mobile device and a camera, wherein the camera is disposed on the mobile device; the mobile device is configured to move the camera according to the target orientation, and the camera is configured to acquire a target image.
- the apparatus further includes an alarm module for issuing an alarm and/or transmitting an alert message if the human body in the target image is determined to be in a falling state.
- the target image of a single frame is acquired instead of a video stream for analysis and processing; the target detection network based on the target detection algorithm first identifies images containing the human body, and the convolutional neural network based on the classification algorithm then classifies the human body state in the target image to identify the specific state of the human body, thereby solving the technical problems of poor fall-recognition accuracy, large error, and low efficiency in the existing methods, and achieving the technical effect of accurately and efficiently identifying the fall state.
- FIG. 1 is a schematic diagram of a process flow of a human body fall detection method according to an embodiment of the present application
- FIG. 2 is a schematic structural diagram of a human fall detection device according to an embodiment of the present application
- FIG. 3 is a schematic structural diagram of an electronic device according to a human fall detection method provided by an embodiment of the present application
- FIG. 4 is a schematic structural diagram of a human fall detection robot designed by applying a human fall detection method and apparatus provided by an embodiment of the present application;
- FIG. 5 is a flow chart showing the application of a human fall detection robot to perform human fall detection in a scene example.
- because the existing methods mostly collect and analyze video stream data, the amount of data to be analyzed is large, consuming substantial resources and resulting in low efficiency.
- most of the existing methods detect human body fall by analyzing human body changes. This identification method is inherently complicated, has poor precision, and is prone to errors.
- when the existing methods are implemented, they often suffer from poor fall-recognition accuracy and low efficiency.
- the present application considers that image data of a single frame can be acquired instead of analyzing video stream data, so as to effectively reduce the amount of data processing; in addition, in view of the characteristics and advantages of image data, whether the human body has fallen is determined by analyzing the human body state in the image rather than changes of the human body, thereby solving the technical problems of poor fall-recognition accuracy, large error, and low efficiency in the existing methods and achieving accurate and efficient recognition of the fall state.
- the embodiment of the present application provides a human body fall detection method.
- for details of the human body fall detection method, refer to the process flow diagram of the method provided by the embodiment of the present application shown in FIG. 1.
- the method for detecting a human fall in the embodiment of the present application may include the following steps.
- a target image of a single frame may be acquired instead of the video stream collected by the existing methods, for subsequent analysis and processing. Compared with a video stream, only a single frame needs to be analyzed, detected, and recognized, thereby effectively reducing the amount of calculation, lowering the calculation cost, and improving the recognition speed.
- this also avoids having to acquire images repeatedly before obtaining an image containing the human body.
- in a specific implementation, an effective image should be obtained preferentially whenever possible as the target image. The above effective image can be specifically understood as an image including a human body; correspondingly, an image that does not contain a human body can be understood as an invalid image. In this way, repeatedly acquiring target images that cannot be used later can be avoided, which helps to improve processing efficiency.
- the above-mentioned acquisition target image may be specifically implemented to include the following contents:
- S11-1 collecting sound information in the target area
- the target orientation may specifically be the direction of the sound source.
- human activity is more likely in this direction; therefore, an image including a human body, that is, an effective image, is more likely to be acquired in the target orientation than in other orientations.
- the microphone array may be used as a sound collector to collect sound information in the target area; the locator then determines the direction of the sound source, that is, the target orientation, according to the collected sound information.
- the camera may be specifically disposed on the mobile device, that is, the camera is movable within the target area rather than fixedly disposed.
- the camera can be placed on a mobile device consisting of a pulley and a motor.
- the camera can be flexibly moved in the target area by the mobile device, so that the range in which target images can be acquired is effectively expanded, and more target images can be acquired over a larger detection range. That is, the manner in which the camera is used in the embodiment of the present application differs from the manner in which the camera is used in the existing methods.
- the camera is fixedly disposed at a certain fixed position in the target area to collect video stream data.
- the range that can be detected by a single camera is limited, and in order to increase the total detection range, it is necessary to separately arrange cameras at a plurality of positions in the target area. In this way, it will increase the implementation cost.
- the method for using the camera provided in the embodiment of the present application is to set the camera on the mobile device; then, according to the situation, the camera can be moved in real time by the mobile device to acquire target images at different positions in the target area, thereby using one or a small number of cameras to acquire target images over a large range and reducing implementation costs.
- the angle and distance between the camera and the human body can be adjusted according to the specific conditions of the human body, so that the target image with higher quality can be obtained, so that the fall recognition can be performed more accurately.
- the above-described mobile devices are only for better explaining the embodiments of the present application.
- other movable structures can also be selected as mobile devices according to specific situations and precision requirements, such as a mobile robot, a remote control car, etc., so that the position of the camera can be flexibly moved.
- the application is not limited.
- the sound information in the target area may first be collected through the microphone array; the locator determines the source direction of the sound, which is taken as the direction in which human activity may exist, that is, the target orientation; the mobile device then moves the camera toward the sound source according to the determined target orientation, so that an effective image of relatively high quality can be obtained with an ordinary camera.
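As a rough illustration of the sound-localization step, the following sketch estimates a bearing from the time delay between two microphones via cross-correlation. This is a strong simplification: a real microphone-array locator would use more channels and a robust estimator such as GCC-PHAT, and all names and values here are assumptions:

```python
import numpy as np

# Two-microphone far-field bearing estimate: find the inter-microphone
# delay by cross-correlation, then convert it to an angle.

def estimate_direction(sig_left, sig_right, fs, mic_distance, c=343.0):
    """Estimate the source bearing (degrees) from the time delay
    between the left and right microphone signals."""
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)  # delay in samples
    tau = lag / fs                                 # delay in seconds
    # clip to the physically possible range before taking arcsin
    s = np.clip(c * tau / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Synthetic check: delay the right channel by 5 samples
fs, d = 16000, 0.2
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
left = x
right = np.concatenate([np.zeros(5), x[:-5]])
angle = estimate_direction(left, right, fs, d)
```

With the right channel delayed, the recovered lag is negative and the bearing is on the left side of the array; the sign convention depends on the microphone geometry.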
- S12 Perform human body detection on the target image through the target detection network to determine whether the target image is an image including a human body.
- the target image is first subjected to human body detection to determine whether the target image to be analyzed is an image including the human body, that is, an effective image.
- an image that does not contain the human body is treated as an invalid image, and no fall recognition is performed on it. By eliminating such images in advance, meaningless fall recognition is avoided, reducing the data processing load of fall recognition and further improving the processing rate.
- the target image may be re-acquired in the case of determining that the target image is an image that does not include a human body, so as to perform real-time monitoring on an area in the target area where there may be human activities.
- the acquired data to be analyzed is a single-frame image
- the acquired target image may be subjected to human body detection by the target detection network based on the target detection algorithm to determine whether the target image is an image including the human body.
- the target detection network for performing human body detection may be established in advance by performing the following manner before the step S12 is performed:
- S1 collecting human body image sample data, where the human body image sample data includes human body images in different states;
- S3 training is performed by using the labeled human body image sample data to obtain a target detection network based on the target detection algorithm.
- the target detection algorithm may be a deep-learning-based detection algorithm, such as the SSD (Single Shot MultiBox Detector) algorithm.
- the core of the algorithm is to use convolution kernels to predict the category scores and offsets of a series of default bounding boxes on the feature map, so that it can quickly and accurately detect whether the target image to be detected is a valid image containing the human body.
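The core matching idea behind an SSD-style detector can be illustrated as follows. This is a simplified sketch assuming `(x1, y1, x2, y2)` boxes; the real SSD additionally predicts per-class scores and encodes offsets relative to box dimensions:

```python
# Each default (prior) box is compared to a ground-truth human-body box
# by intersection-over-union; boxes above the threshold become positive
# matches whose offsets the network learns to regress.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def match_default_boxes(defaults, ground_truth, threshold=0.5):
    """Indices of default boxes whose IoU with the ground-truth box
    exceeds the matching threshold."""
    return [i for i, d in enumerate(defaults)
            if iou(d, ground_truth) >= threshold]
```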
- the human body image sample data is required to specifically include a plurality of images of the human body state in different states.
- the human body state may specifically include: a state in which the human body is standing, a state in which the human body is sitting, a state in which the human body is lying down, a state in which the human body is squatting, a state in which the human body is tilted, a state in which the human body is kneeling, and the like.
- a plurality of images including different human body states can be learned by the target detection algorithm, so that a network capable of simultaneously detecting and recognizing a plurality of different human body states can be established.
- specifically, the human body region in each image of the human body image sample data may be annotated, so that training related to human-body-region feature recognition can be performed subsequently.
- the SSD target detection network ie, the initial model corresponding to the target detection, may be constructed prior to training with the annotated human body image sample data.
- the above SSD target detection network can be constructed on the TensorFlow framework, using inception_v2 as the feature extractor.
- the labeled human body image sample data is used for training to obtain a target detection network based on the target detection algorithm.
- specifically, the following may be included: using the labeled human body image sample data as input data, the above SSD target detection network, that is, the initial model for target detection, is trained to obtain a trained target detection network; the trained network is then adjusted and optimized according to the human body image sample data and the accuracy requirements to obtain the human body detection SSD network, that is, the target detection network based on the target detection algorithm.
- in a case where the target image is determined to be an image including a human body, the target image is subjected to fall recognition by a convolutional neural network to determine whether the human body in the target image is in a falling state.
- a convolutional neural network may be used to perform fall recognition on the target image to determine whether the human body in the target image is in a falling state.
- the trained convolutional neural network can be used as the fall recognition model, with the target image determined to include a human body as the input data; the fall recognition model then recognizes whether the human body in the target image is in a falling state, so that whether the human body has fallen can be determined from a single-frame image.
- a convolutional neural network with higher fall-recognition accuracy and a faster recognition speed may be established in advance in the following manner:
- S1 acquiring human body image sample data, wherein the human body image sample data includes a human body image in different states;
- S3 dividing the images in the preprocessed sample data into positive sample data and negative sample data according to the human body state in each image, wherein the image in the positive sample data includes at least one of the following: an image including a state in which the human body is standing, an image including a state in which the human body is sitting, an image including a state in which the human body is squatting, and an image including a state in which the human body is tilted; and the image in the negative sample data includes at least one of the following: an image including a state in which the human body is lying down, and an image including a state in which the human body is lying prone;
- since, in order to establish a more accurate target detection network based on the target detection algorithm, the human body image sample data already includes a plurality of images covering different human body states, images that meet the requirements can be extracted from the human body image sample data as the preprocessed sample data in this embodiment.
- the images in the pre-processed sample data need to be classified according to the two states of falling and non-falling.
- the images representing a non-falling state in the preprocessed sample data, including an image including a state in which the human body is standing, an image including a state in which the human body is sitting, an image including a state in which the human body is squatting, and an image including a state in which the human body is tilted, are divided into the positive sample data, that is, a positive image data set.
- the images representing a fall in the preprocessed sample data, including an image including a state in which the human body is lying down and an image including a state in which the human body is lying prone, are divided into the negative sample data, that is, a negative image data set.
- the positive sample data and the negative sample data are used for training to establish the convolutional neural network for identifying a human body state type.
- specifically, the following may be included: constructing an initial convolutional neural network, and, using the above positive sample data and negative sample data as input data, training the initial convolutional neural network on the fall state and the non-fall state of the human body, so as to obtain a convolutional neural network with higher recognition accuracy and a faster recognition speed. Further, the convolutional neural network can be used to accurately recognize whether the human body state in the target image corresponds to a fall state of the human body.
- if the identified human body state in the target image corresponds to the fall state of the human body, it can be judged that the human body is in a falling state; if it corresponds to the non-fall state, it can be judged that the human body is not in a falling state.
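This two-way decision can be sketched as follows, assuming, purely for illustration, that the network emits one logit per class; the score values and function names are hypothetical:

```python
import math

# Two logits (fall, non-fall) -> softmax probabilities -> reported
# state. In the two-class case, p_fall > 0.5 exactly when the fall
# logit exceeds the non-fall logit.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decide_state(fall_logit, non_fall_logit):
    p_fall, _ = softmax([fall_logit, non_fall_logit])
    return "fall" if p_fall > 0.5 else "no_fall"
```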
- the specific implementation may further include the following content:
- S1 acquiring image sample data that does not include a human body
- S2 Perform error detection training on the convolutional neural network by using the image sample data that does not include the human body.
- in this way, target images that do not include a human body can be identified and filtered out first, which improves the processing efficiency of the convolutional neural network when performing fall recognition.
- in the embodiment of the present application, a single-frame target image is acquired instead of a video stream for analysis and processing; the target detection network based on the target detection algorithm first identifies images containing the human body, and the convolutional neural network based on the classification algorithm then classifies the human body state in the target image to identify the specific state of the human body, thereby solving the technical problems of poor recognition accuracy, large error, and low efficiency in the existing methods and achieving the technical effect of accurately and efficiently identifying the fall state.
- the compliant image may specifically include: an image in which the human body region occupies more than 80% of the image area.
- sample data suitable for fall recognition training can be extracted from the body image sample data, thereby avoiding re-acquisition of sample data for fall recognition, reducing training cost and improving learning efficiency.
- the initial convolutional neural network may specifically be an inception_v3 network.
- the above-mentioned inception_v3 network is specifically a convolutional neural network suitable for image recognition.
- the above-listed convolutional neural networks are only for better explaining the embodiments of the present application.
- other suitable convolutional neural networks may also be selected according to specific situations and specific characteristics identified. In this regard, the application is not limited.
- in one embodiment, the method further comprises preprocessing the images in the positive sample data and the negative sample data according to the initial convolutional neural network, so that the images in the positive sample data and the negative sample data match the initial convolutional neural network.
- specifically, the foregoing preprocessing may include: transforming the images in the positive sample data and the negative sample data to a specified size, for example, converting them to a size of 299 × 299 pixels.
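A minimal sketch of this resizing step, assuming nearest-neighbour interpolation; a real pipeline would typically use bilinear interpolation plus pixel-value normalization:

```python
import numpy as np

# Resize an arbitrary H x W image to the 299 x 299 input size expected
# by an inception_v3-style network, by nearest-neighbour row/column
# index mapping.

def resize_to_input(image, size=299):
    h, w = image.shape[:2]
    rows = (np.arange(size) * h // size).clip(0, h - 1)
    cols = (np.arange(size) * w // size).clip(0, w - 1)
    return image[rows][:, cols]

img = np.zeros((480, 640, 3), dtype=np.uint8)  # illustrative frame
out = resize_to_input(img)
```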
- when a convolutional neural network is used for fall recognition, only two classes need to be distinguished, namely the fall state and the non-fall state of the human body. Therefore, given this low classification complexity, in order to improve processing efficiency and reduce the occupation and waste of computing resources, the convolutional neural network can be simplified when the initial convolutional neural network is established. The simplification may specifically include: reducing the number of layers of the convolutional neural network, and/or reducing the number of convolution kernels of the convolutional neural network.
- specifically, the simplification of the inception_v3 network may include: reducing the number of inception layers (structures) of the network from 11 to 6 or 5, and/or truncating the number of convolution kernels of the inception_v3 network, so that a simplified convolutional neural network can be obtained.
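The effect of cutting the kernel count to two-thirds can be checked with simple parameter arithmetic; the layer sizes below are illustrative, not taken from inception_v3:

```python
# A conv layer with k_in input channels and k_out kernels of size f x f
# carries k_out * (f*f*k_in + 1) parameters (weights plus one bias per
# kernel). Scaling k_out by 2/3 scales this layer's parameters by 2/3;
# since the next layer's k_in shrinks too, its cost falls roughly 4/9.

def conv_params(k_in, k_out, f):
    """Parameter count of one convolutional layer."""
    return k_out * (f * f * k_in + 1)

full = conv_params(192, 96, 3)     # illustrative 3x3 layer, 96 kernels
reduced = conv_params(192, 64, 3)  # 96 * 2/3 = 64 kernels
```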
- the simplified convolutional neural network described above may be implemented in the following manner:
- S1 Simplify the existing inception_V3 network.
- the last five inception structures of the inception_V3 network can be deleted, and the simplified inception_v3 network is obtained.
- S3 the number of convolution kernels of all convolutional layers of the simplified inception_v3 network is reduced to two-thirds of the original, and the parameter model Fa1 is modified to adapt to the network with the reduced number of convolution kernels.
- the verification may specifically include: comparing the fall-detection accuracy of the network before and after the convolution kernels are reduced. If the accuracy does not decrease significantly, the convolution kernels may be reduced further, with corresponding training and fine-tuning operations, to obtain a more compact convolutional neural network; if the accuracy decreases significantly, the reduction, training, and fine-tuning operations are stopped, and the last acceptable network and parameter model are taken as the convolutional neural network for fall detection.
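The stop-when-accuracy-drops loop just described can be sketched as follows; the `accuracy_of` callback, the 2/3 shrink factor, and the tolerated drop are illustrative assumptions:

```python
# Keep shrinking the kernel count while the measured fall-detection
# accuracy stays within a tolerated drop of the baseline; roll back to
# the last acceptable size once the drop becomes significant.

def prune_until_accuracy_drops(kernels, accuracy_of, max_drop=0.02,
                               shrink=2 / 3, min_kernels=8):
    """Return the smallest kernel count whose accuracy stays within
    `max_drop` of the unpruned baseline."""
    baseline = accuracy_of(kernels)
    best = kernels
    while True:
        candidate = max(min_kernels, int(best * shrink))
        if candidate == best:
            return best
        if baseline - accuracy_of(candidate) > max_drop:
            return best  # significant drop: keep the last model
        best = candidate
```

In practice `accuracy_of` would retrain/fine-tune the shrunken network and evaluate it on a validation set.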
- the issuing of the alarm may specifically include: sounding an alarm through a buzzer to give a reminder that a person has fallen; or sending a warning message (for example, an alarm short message) through a communication device to the person in charge of the target area or nearby medical staff, requesting timely treatment, and the like.
- in the embodiment of the present application, the human fall detection method analyzes and processes a single-frame target image instead of a video stream: the target detection network based on the target detection algorithm first identifies images containing the human body, and the convolutional neural network based on the classification algorithm then classifies the human body state in the target image to identify the specific state of the human body, thereby solving the technical problems of poor fall-recognition accuracy, large error, and low efficiency in the existing methods and achieving the technical effect of accurately and efficiently identifying the fall state.
- in addition, by collecting sound information to determine the target orientation, and moving the camera according to the target orientation to acquire a valid target image, the detection range of fall detection is effectively expanded, the probability of obtaining an effective target image is improved, and the detection effect and user experience are improved.
- by acquiring images covering multiple human body states as sample data to establish the target detection network and the convolutional neural network, the accuracy of recognizing a human body fall from a single-frame image is improved.
- based on the low complexity of the state types to be identified, the convolutional neural network is correspondingly simplified, improving implementation efficiency and reducing the occupation of computing resources.
- an embodiment of the present invention further provides a human body fall detection device, as described in the following embodiments. Since the principle by which the human fall detection device solves the problem is similar to that of the human fall detection method, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted.
- the term "unit” or “module” may implement a combination of software and/or hardware of a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, hardware, or a combination of software and hardware, is also possible and contemplated.
- FIG. 2 is a schematic structural diagram of a human body fall detection device according to an embodiment of the present disclosure. The device may specifically include: an acquisition module 21, a human body detection module 22, and a fall recognition module 23. This structure is described below.
- the obtaining module 21 is specifically configured to acquire a target image.
- the human body detecting module 22 is specifically configured to perform human body detection on the target image through the target detection network to determine whether the target image is an image including a human body;
- the fall identification module 23 may be specifically configured to: when determining that the target image is an image including a human body, perform fall recognition on the target image by using a convolutional neural network to determine whether the human body in the target image is in a falling state.
- the human body fall detection device may specifically be a human body fall detection robot capable of realizing human body fall detection.
- the above-mentioned human fall detection robot can be applied to various places such as homes, hospitals, and shopping malls, to monitor these places in real time and detect in time when a person falls, so that an alarm can be raised and related assistance provided promptly.
- the acquisition module 21 may specifically include the following structural units in order to expand the detection range and efficiently obtain an effective target image:
- the sound collector can be specifically used to collect sound information in the target area
- a locator, specifically configured to determine a target orientation according to the sound information;
- the mobile device and the camera wherein the camera may be specifically disposed on the mobile device, and the mobile device may be specifically configured to move the camera according to the target orientation, and the camera may be specifically configured to acquire a target image.
- the moving device may specifically include a pulley and a motor.
- in a specific implementation, the camera can be moved by the moving device with its pulley and motor, so as to better acquire an effective target image.
- the mobile device may also be other types of mobile devices, such as a mobile robot, a remote control car, and the like. In this regard, the application is not limited.
- the effective target image may specifically be an image including a human body.
- the camera can be moved according to the target orientation, and the effective target image can be acquired as much as possible, so that the workload of the human body detecting module 22 can be reduced, and the work efficiency can be improved.
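- The embodiment does not specify how the locator derives the target orientation from the collected sound information. One common approach with a two-microphone array is time-difference-of-arrival estimation; the sketch below illustrates only that geometry (the function name, delay convention, and microphone spacing are illustrative assumptions, not part of the embodiment):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def azimuth_from_tdoa(delay_s: float, mic_spacing_m: float) -> float:
    """Estimate the sound-source azimuth (degrees) from the arrival-time
    difference between two microphones a fixed distance apart.

    0 degrees means the source is straight ahead (equal arrival times);
    the sign of the angle follows the sign of the delay.
    """
    # Path-length difference between the microphones, clamped to the
    # physically possible range [-spacing, +spacing].
    path_diff = max(-mic_spacing_m,
                    min(mic_spacing_m, SPEED_OF_SOUND * delay_s))
    return math.degrees(math.asin(path_diff / mic_spacing_m))

# A source directly ahead produces no delay:
print(azimuth_from_tdoa(0.0, 0.2))          # 0.0
# A delay of spacing/c puts the source fully to one side (about 90 deg):
print(azimuth_from_tdoa(0.2 / 343.0, 0.2))
```

The mobile device would then rotate or translate the camera toward the estimated azimuth before capturing the target image.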
- in order to respond promptly after a fall of the human body is detected, the device may further include an alarm module for issuing an alarm.
- the alarm module may specifically include a buzzer.
- an alarm may be issued by the buzzer when it is determined that the human body in the target image is in a falling state.
- the alarm module may further include a communication device such as a signal transmitter.
- when it is determined that the human body in the target image is in a falling state, the communication device such as a signal transmitter may send an alarm message to the relevant responsible person (such as a guardian or store security) or to nearby medical staff, reminding them of the fall so that treatment can be provided as soon as possible.
- the device may further include a target detection network establishing module, where the target detection network establishing module may operate according to the following procedure: acquiring human body image sample data, wherein the human body image sample data includes a plurality of images containing a human body in various states; annotating the human body region in each image of the human body image sample data; and training with the annotated human body image sample data to obtain a target detection network based on a target detection algorithm.
- the human body state may specifically include: a state in which the human body is standing, a state in which the human body is sitting, a state in which the human body is lying down, a state in which the human body is squatting, a state in which the human body is lying prone, a state in which the human body is tilted, and the like.
- the above-described human body states are only for better explaining the embodiments of the present application.
- other states than the above-described states may be introduced as the human body state according to specific conditions and requirements. In this regard, the application is not limited.
- the human body detecting module 22 is connected to the acquiring module 21.
- when determining that the target image is an image that does not include a human body, the human body detecting module 22 may send this information to the acquiring module 21, so that the acquiring module 21 reacquires a target image.
- the apparatus may further include a convolutional neural network establishing module, configured to establish a convolutional neural network for identifying a human state type, wherein the convolutional neural network establishing module may specifically include:
- the acquiring unit may be specifically configured to acquire human body image sample data, where the human body image sample data includes a plurality of images including a human body state;
- the extracting unit may be specifically configured to extract, from the human body image sample data, an image that meets the requirements as the pre-processed sample data;
- the dividing unit may be specifically configured to divide the images in the preprocessed sample data into positive sample data and negative sample data according to the human body state in each image, wherein the images in the positive sample data include at least one of the following: an image containing a standing human body, an image containing a sitting human body, an image containing a squatting human body, and an image containing a tilted human body;
- the images in the negative sample data include at least one of the following: an image containing a human body lying down and an image containing a human body lying prone;
- the establishing unit may be specifically configured to perform training by using the positive sample data and the negative sample data to establish a convolutional neural network for identifying a human state type.
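- The division rule above can be expressed as a small labeling helper. The English state strings below are illustrative labels for the states named in the embodiment, not identifiers from the original:

```python
# States treated as "not fallen" (positive) vs "fallen" (negative),
# following the division described above.
POSITIVE_STATES = {"standing", "sitting", "squatting", "tilted"}
NEGATIVE_STATES = {"lying", "prone"}

def split_samples(samples):
    """Partition (image_id, state) pairs into positive and negative sets."""
    positive, negative = [], []
    for image_id, state in samples:
        if state in POSITIVE_STATES:
            positive.append(image_id)
        elif state in NEGATIVE_STATES:
            negative.append(image_id)
        else:
            raise ValueError(f"unknown human-body state: {state}")
    return positive, negative

pos, neg = split_samples([("a.jpg", "standing"),
                          ("b.jpg", "lying"),
                          ("c.jpg", "sitting")])
print(pos, neg)  # ['a.jpg', 'c.jpg'] ['b.jpg']
```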
- the convolutional neural network establishing module may further include:
- the erroneous detection training unit may be specifically configured to acquire image sample data that does not include a human body; and perform error detection training on the convolutional neural network by using the image sample data that does not include the human body.
- the image that meets the requirements may specifically include: an image in which the human body region occupies more than 80% of the image, and the like.
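- The 80% criterion can be checked directly from the detected bounding box. A minimal sketch, assuming pixel-coordinate boxes of the form (x1, y1, x2, y2) and an image size of (width, height):

```python
def body_area_ratio(body_box, image_size):
    """Fraction of the image covered by the detected human-body box.

    body_box:   (x1, y1, x2, y2) in pixels
    image_size: (width, height) in pixels
    """
    x1, y1, x2, y2 = body_box
    w, h = image_size
    return ((x2 - x1) * (y2 - y1)) / (w * h)

def meets_requirement(body_box, image_size, threshold=0.8):
    """True when the human body occupies more than `threshold` of the image."""
    return body_area_ratio(body_box, image_size) > threshold

print(meets_requirement((0, 0, 90, 90), (100, 100)))  # True  (81% coverage)
print(meets_requirement((0, 0, 50, 50), (100, 100)))  # False (25% coverage)
```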
- system, device, module or unit illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
- the above devices are described separately in terms of functions divided into various units.
- in implementing the present application, the functions of each unit may be implemented in one or more pieces of software and/or hardware.
- the human fall detection device analyzes and processes a single-frame target image instead of a video stream: it first uses a target detection network based on a target detection algorithm to identify whether the image contains a human body, and then uses a convolutional neural network based on a classification algorithm to classify the human body state in the target image and identify the specific state of the human body. This solves the technical problems of poor fall-recognition accuracy and low efficiency in existing methods and achieves the technical effect of accurately and efficiently identifying a falling state. In addition, by collecting sound information to determine the target orientation and moving the camera according to that orientation to acquire a valid target image, the detection range of fall detection is effectively expanded, the accuracy of obtaining an effective target image is improved, and the detection effect is improved.
- the embodiment of the present application further provides an electronic device.
- the electronic device may specifically include an input device 31, a processor 32, and a memory 33.
- the input device 31 can be specifically configured to receive the acquired target image.
- the processor 32 may be specifically configured to perform human body detection on the target image through a target detection network to determine whether the target image is an image including a human body; and, when it is determined that the target image is an image including a human body, to perform fall recognition on the target image through a convolutional neural network to determine whether the human body in the target image is in a falling state.
- the memory 33 may be specifically configured to store the target image, the target detection network, the convolutional neural network, and intermediate data generated during the detection process.
- the input device may specifically be one of the main devices for exchanging information between a user and a computer system.
- the input device may include a keyboard, a mouse, a camera, a scanner, a light pen, a handwriting input pad, a voice input device, and the like; the input device is used to input raw data and the programs that process these data into the computer.
- the input device can also acquire data transmitted by other modules, units, and devices.
- the processor can be implemented in any suitable manner.
- for example, the processor may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, or an application-specific integrated circuit (ASIC).
- the memory may specifically be a memory device for storing information in modern information technology.
- the memory may include multiple levels.
- a circuit having a storage function without a physical form is also called a memory, such as a RAM, a FIFO, or the like;
- a storage device having a physical form is also called a memory, such as a memory stick, a TF card, or the like.
- the embodiment of the present application further provides a computer storage medium based on the human body fall detection method, wherein the computer storage medium stores computer program instructions that, when executed, implement: acquiring a target image; performing human body detection on the target image through a target detection network to determine whether the target image is an image including a human body; and, when it is determined that the target image is an image including a human body, performing fall recognition on the target image through a convolutional neural network to determine whether the human body in the target image is in a falling state.
- the storage medium includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a cache (Cache), a hard disk drive (HDD), or a memory card (Memory Card).
- the memory can be used to store computer program instructions.
- the network communication unit may be an interface for performing network connection communication in accordance with a standard stipulated by the communication protocol.
- in a specific application scenario, the human fall detection method and device provided by the present application are used to design a corresponding human fall detection robot, and the robot is applied to perform specific human fall detection.
- the specific implementation process can refer to the following.
- the human fall detection robot may be built by applying the human fall detection method and device provided in the embodiment of the present application.
- the robot can specifically use the sound source positioning module to locate the general orientation of the human body (ie, the target orientation), and then use the camera to collect data (ie, the target image), and realize the human fall detection based on the single frame image through the deep learning algorithm.
- the fall detection robot may specifically include the following functional modules: a movable robot body 12, a camera module 13, an alarm module 14 (optional), a sound source positioning module 15 (optional), a human body detection module 16, and a fall recognition module 17.
- the sound source locating module 15 can be specifically configured to determine a general orientation of the human body, and use the camera module 13 to capture a single frame image.
- the human body detection module 16 and the fall recognition module 17 can be specifically configured to determine, according to the captured image, whether a person has fallen, and to transmit the result to the movable robot body 12; if a fall is detected, the movable robot body 12 can raise an alarm by controlling the alarm module 14.
- the movable robot body 12 includes at least a structure such as a robot body, a motor, and a pulley.
- the camera module 13 can be used to collect a single image and send it to the human body detection module 16 for determining whether there is a human body (ie, determining whether the image is an image containing a human body).
- the alarm module 14 can include at least a mobile phone communication function and a 110 (police) alarm function. In a specific implementation, the mobile phone communication function can be used to send the fall information and picture information, and the 110 alarm function can place an emergency call for timely rescue.
- the sound source locating module 15 can specifically determine the source direction of the sound through the microphone array for conveniently searching for people.
- the human body detecting module 16 can specifically implement human body detection by using an SSD target detection algorithm in deep learning.
- the fall identification module 17 implements fall state recognition by a deep learning convolutional neural network.
- the above-described human fall detection robot can be considered as a specific human fall detection device, and the main principle of the implementation is the same as the human fall detection device.
- S4: The human body detection module determines whether a person exists in the collected image. If yes, continue with S5; if not, return to S1;
- S5: Send the detected human body area to the fall recognition module to determine whether the human body has fallen;
- S8: Raise an alarm, transmitting the fall information and images to the bound mobile phone or other terminal.
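- The steps above can be sketched as a single detection cycle. The four callables are hypothetical stand-ins for the camera, the SSD detector, the fall classifier, and the alarm module; none of the names comes from the embodiment:

```python
def detection_cycle(capture, detect_human, is_fallen, alarm):
    """One pass of the robot's detection loop (steps S4-S8 above).

    capture()           -> one camera frame
    detect_human(frame) -> list of human-body regions (may be empty)
    is_fallen(region)   -> True when the region is classified as fallen
    alarm()             -> side effect that raises the alarm
    """
    image = capture()
    regions = detect_human(image)          # S4: find human-body regions
    if not regions:
        return "no-person"                 # return to image acquisition
    if any(is_fallen(r) for r in regions): # S5: classify each region
        alarm()                            # S8: notify the bound terminal
        return "fall-alarm"
    return "no-fall"

# Exercising the loop with trivial stand-ins:
events = []
result = detection_cycle(
    capture=lambda: "frame",
    detect_human=lambda img: ["crop"],
    is_fallen=lambda region: True,
    alarm=lambda: events.append("alarm"),
)
print(result, events)  # fall-alarm ['alarm']
```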
- the human body detection module described above is implemented based on an SSD target detection algorithm in deep learning.
- the SSD algorithm training can be performed according to the following process:
- S1: Collect human body image sample data containing a human body (the proportion of the person in the picture is not limited). Because the human body region needs to be detected, and a human body in any state needs to be detected, the collected image data may specifically include human bodies in different states, such as standing, squatting, lying, and leaning.
- S2: Label the collected human body image sample data. The SSD target detection network calibrates the human body region during detection, so the human body region must be annotated in the human body image sample data during training.
- S3: The SSD target detection network can be constructed on the TensorFlow framework, using inception_v2 as the feature extractor.
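- One common way to represent the annotation produced in S2 is a record with the body box normalized by the image size, the form most SSD training pipelines consume regardless of resolution. The field names below are illustrative, not a format specified by the embodiment:

```python
def make_annotation(filename, image_size, body_box, label="person"):
    """Build one training annotation with the body box normalized to [0, 1]."""
    w, h = image_size
    x1, y1, x2, y2 = body_box
    if not (0 <= x1 < x2 <= w and 0 <= y1 < y2 <= h):
        raise ValueError("box must lie inside the image")
    return {
        "filename": filename,
        "label": label,
        # (x1, y1, x2, y2) scaled into the unit square
        "bbox": (x1 / w, y1 / h, x2 / w, y2 / h),
    }

ann = make_annotation("fall_001.jpg", (640, 480), (160, 120, 480, 360))
print(ann["bbox"])  # (0.25, 0.25, 0.75, 0.75)
```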
- the fall detection module may specifically include a convolutional neural network in deep learning. Before the image recognition, the fall recognition module can perform convolutional neural network training through the following process:
- S1: Collect preprocessed sample data containing a human body (the proportion of the person in the picture is more than 80%, i.e., the human body region images detected by the human body detection module).
- the positive samples (that is, the positive sample data) contain all non-fallen human body pictures, that is, human body states such as standing, sitting, squatting, and tilting;
- the negative samples (i.e., the negative sample data) contain pictures of the human body after a fall, that is, states such as lying down and lying prone.
- S3: Preprocess the images in the sample data. Specifically, all image data can be converted to a specified size, for example, 299 × 299 pixels.
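- The resizing step can be illustrated with a minimal nearest-neighbour implementation. A real pipeline would use an image library such as OpenCV or PIL; this stand-in only shows the geometry of mapping an arbitrary region to the 299 × 299 input size of inception_v3:

```python
def resize_nearest(pixels, out_w=299, out_h=299):
    """Nearest-neighbour resize of a row-major 2-D pixel grid."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [
        # Each output pixel samples the nearest input pixel.
        [pixels[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

patch = [[1, 2], [3, 4]]              # a 2 x 2 stand-in "image"
resized = resize_nearest(patch)
print(len(resized), len(resized[0]))  # 299 299
```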
- the fall identification module may use an inception_v3 network.
- S4-1: Reduce the inception structure (for example, the number of layers) while ensuring recognition accuracy. This simplifies the network structure, improves recognition speed, and saves computing resources.
- S4-2: Reduce the number of convolution kernels while ensuring recognition accuracy. This reduces the network size, improves recognition speed, and saves computing resources.
- S5: Input the preprocessed picture sample data into the inception_v3 network for training to obtain a fall identification network (i.e., a convolutional neural network).
- when the human body detection module and the fall recognition module are used to perform human body fall detection, the process may specifically include the following:
- S1 Input the captured image into the SSD target detection network, detect the area where the human body is located, and save the result.
- S2 Convert all detected human body regions to a specified size, such as a 299 ⁇ 299 pixel size.
- S3: Input the result obtained in S2 into the trained inception_v3 model, predicting in a multi-threaded manner to give a recognition result.
- the fall detection result is then displayed, indicating whether the human body has fallen.
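- The multi-threaded prediction of S3 can be sketched with Python's standard thread pool. The `predict` callable is a hypothetical stand-in for the trained inception_v3 model:

```python
from concurrent.futures import ThreadPoolExecutor

def classify_regions(regions, predict, max_workers=4):
    """Run the fall classifier over all detected body regions concurrently.

    `predict` maps one resized region to True (fallen) or False (not fallen).
    Returns (frame_has_fall, per_region_results).
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(predict, regions))
    # The frame is flagged as a fall if any region is classified as fallen.
    return any(results), results

fallen, per_region = classify_regions(
    ["crop_a", "crop_b"],
    predict=lambda region: region == "crop_b",
)
print(fallen, per_region)  # True [False, True]
```

Threads are a reasonable fit here because each prediction releases the interpreter while waiting on the underlying inference runtime; a process pool would be the alternative for CPU-bound pure-Python work.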
- by using the target detection algorithm SSD and the image classification algorithm CNN, the above-mentioned human fall detection robot can achieve high-precision fall detection and alarm handling from a single frame image in complicated scenes, overcoming the problem of inaccurate human body detection in existing methods. At the same time, since analysis and processing of a video stream is not required and fall detection can be realized with only a single frame image, the amount of calculation is reduced and detection efficiency is improved; the movable robot serves as a carrier that enables wide-range monitoring.
- it can be seen from the above that the human fall detection method and device of the embodiment of the present application acquire and process a single-frame target image instead of a video stream: a target detection network based on a target detection algorithm first identifies whether the image contains a human body, and a convolutional neural network based on a classification algorithm then classifies the human body state in the target image to identify the specific state of the human body. This indeed solves the technical problems of poor fall-recognition accuracy and low efficiency in existing methods and achieves the technical effect of accurately and efficiently identifying a falling state.
- the device or module and the like set forth in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
- the above devices are described as being separately divided into various modules by function.
- the functions of the modules may be implemented in one or more pieces of software and/or hardware, or a module implementing one function may be realized by a combination of multiple sub-modules.
- the device embodiments described above are merely illustrative.
- the division of the modules is only a logical function division; in actual implementation there may be other division manners. For example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the controller may be implemented, for example, by logically programming the method steps by means of logic gates, switches, ASICs, programmable logic controllers, and embedded microcontrollers.
- the application can be described in the general context of computer-executable instructions executed by a computer, such as a program module.
- program modules include routines, programs, objects, components, data structures, classes, and the like that perform particular tasks or implement particular abstract data types.
- the present application can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
- program modules can be located in both local and remote computer storage media including storage devices.
- the present application can be implemented by means of software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and includes instructions for causing a computer device (which may be a personal computer, mobile terminal, server, or network device, etc.) to perform the methods described in the various embodiments of the present application or in portions of the embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Alarm Systems (AREA)
Abstract
Description
Claims (8)
- 1. A human body fall detection method, comprising: acquiring a target image; performing human body detection on the target image through a target detection network to determine whether the target image is an image containing a human body; and, when it is determined that the target image is an image containing a human body, performing fall recognition on the target image through a convolutional neural network to determine whether the human body in the target image is in a falling state.
- 2. The method according to claim 1, wherein acquiring the target image comprises: collecting sound information in a target area; determining a target orientation according to the sound information; and moving a camera according to the target orientation to acquire the target image.
- 3. The method according to claim 1, wherein the convolutional neural network is established as follows: acquiring human body image sample data, wherein the human body image sample data includes a plurality of images containing a human body state; extracting images that meet the requirements from the human body image sample data as preprocessed sample data; dividing the images in the preprocessed sample data into positive sample data and negative sample data according to the human body state in each image, wherein the images in the positive sample data include at least one of the following: an image containing a standing human body, an image containing a sitting human body, an image containing a squatting human body, and an image containing a tilted human body, and the images in the negative sample data include at least one of the following: an image containing a human body lying down and an image containing a human body lying prone; and training with the positive sample data and the negative sample data to establish a convolutional neural network for identifying human body state types.
- 4. The method according to claim 3, wherein, in the course of establishing the convolutional neural network, the method further comprises: acquiring image sample data that does not contain a human body; and performing false-detection training on the convolutional neural network by using the image sample data that does not contain a human body.
- 5. A human body fall detection device, comprising: an acquisition module configured to acquire a target image; a human body detection module configured to perform human body detection on the target image through a target detection network to determine whether the target image is an image containing a human body; and a fall recognition module configured to, when it is determined that the target image is an image containing a human body, perform fall recognition on the target image through a convolutional neural network to determine whether the human body in the target image is in a falling state.
- 6. The device according to claim 5, wherein the acquisition module comprises: a sound collector configured to collect sound information in a target area; a locator configured to determine a target orientation according to the sound information; and a mobile device and a camera, wherein the camera is disposed on the mobile device, the mobile device is configured to move the camera according to the target orientation, and the camera is configured to acquire the target image.
- 7. The device according to claim 5, further comprising a convolutional neural network establishing module configured to establish a convolutional neural network for identifying human body state types, the convolutional neural network establishing module comprising: an acquiring unit configured to acquire human body image sample data, wherein the human body image sample data includes a plurality of images containing a human body state; an extracting unit configured to extract images that meet the requirements from the human body image sample data as preprocessed sample data; a dividing unit configured to divide the images in the preprocessed sample data into positive sample data and negative sample data according to the human body state in each image, wherein the images in the positive sample data include at least one of the following: an image containing a standing human body, an image containing a sitting human body, an image containing a squatting human body, and an image containing a tilted human body, and the images in the negative sample data include at least one of the following: an image containing a human body lying down and an image containing a human body lying prone; and an establishing unit configured to perform training with the positive sample data and the negative sample data to establish the convolutional neural network for identifying human body state types.
- 8. The device according to claim 7, wherein the convolutional neural network establishing module further comprises: a false-detection training unit configured to acquire image sample data that does not contain a human body, and to perform false-detection training on the convolutional neural network by using the image sample data that does not contain a human body.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711468689.0 | 2017-12-29 | ||
CN201711468689.0A CN108090458B (zh) | 2017-12-29 | 2017-12-29 | 人体跌倒检测方法和装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019128304A1 true WO2019128304A1 (zh) | 2019-07-04 |
Family
ID=62179860
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/104734 WO2019128304A1 (zh) | 2017-12-29 | 2018-09-08 | 人体跌倒检测方法和装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108090458B (zh) |
WO (1) | WO2019128304A1 (zh) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110633643A (zh) * | 2019-08-15 | 2019-12-31 | 青岛文达通科技股份有限公司 | 一种面向智慧社区的异常行为检测方法及系统 |
CN111178134A (zh) * | 2019-12-03 | 2020-05-19 | 广东工业大学 | 一种基于深度学习与网络压缩的摔倒检测方法 |
CN111461042A (zh) * | 2020-04-07 | 2020-07-28 | 中国建设银行股份有限公司 | 跌倒检测方法及系统 |
CN111639546A (zh) * | 2020-05-07 | 2020-09-08 | 金钱猫科技股份有限公司 | 一种基于神经网络的小尺度目标云计算识别方法和装置 |
CN113221621A (zh) * | 2021-02-04 | 2021-08-06 | 宁波卫生职业技术学院 | 一种基于深度学习的重心监测与识别方法 |
CN113478485A (zh) * | 2021-07-06 | 2021-10-08 | 上海商汤智能科技有限公司 | 机器人及其控制方法、装置、电子设备、存储介质 |
CN113762219A (zh) * | 2021-11-03 | 2021-12-07 | 恒林家居股份有限公司 | 一种移动会议室内人物识别方法、系统和存储介质 |
CN114229646A (zh) * | 2021-12-28 | 2022-03-25 | 苏州汇川控制技术有限公司 | 电梯控制方法、电梯及电梯检测系统 |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108090458B (zh) * | 2017-12-29 | 2020-02-14 | 南京阿凡达机器人科技有限公司 | 人体跌倒检测方法和装置 |
CN108961675A (zh) * | 2018-06-14 | 2018-12-07 | 江南大学 | 基于卷积神经网络的跌倒检测方法 |
CN108985214A (zh) * | 2018-07-09 | 2018-12-11 | 上海斐讯数据通信技术有限公司 | 图像数据的标注方法和装置 |
CN111127837A (zh) * | 2018-10-31 | 2020-05-08 | 杭州海康威视数字技术股份有限公司 | 一种报警方法、摄像机及报警系统 |
CN111382610B (zh) * | 2018-12-28 | 2023-10-13 | 杭州海康威视数字技术股份有限公司 | 一种事件检测方法、装置及电子设备 |
US11179064B2 (en) * | 2018-12-30 | 2021-11-23 | Altum View Systems Inc. | Method and system for privacy-preserving fall detection |
CN110008853B (zh) * | 2019-03-15 | 2023-05-30 | 华南理工大学 | 行人检测网络及模型训练方法、检测方法、介质、设备 |
CN111967287A (zh) * | 2019-05-20 | 2020-11-20 | 江苏金鑫信息技术有限公司 | 一种基于深度学习的行人检测方法 |
CN110443150A (zh) * | 2019-07-10 | 2019-11-12 | 思百达物联网科技(北京)有限公司 | 一种跌倒检测方法、装置、存储介质 |
CN110532966A (zh) * | 2019-08-30 | 2019-12-03 | 深兰科技(上海)有限公司 | 一种基于分类模型进行跌倒识别的方法及设备 |
CN111352349A (zh) * | 2020-01-27 | 2020-06-30 | 东北石油大学 | 对老年人居住环境进行信息采集和调节的系统及方法 |
CN112149511A (zh) * | 2020-08-27 | 2020-12-29 | 深圳市点创科技有限公司 | 基于神经网络的驾驶员违规行为检测方法、终端、装置 |
CN112418096A (zh) * | 2020-11-24 | 2021-02-26 | 京东数科海益信息科技有限公司 | 检测跌的方法、装置和机器人 |
CN112784676A (zh) * | 2020-12-04 | 2021-05-11 | 中国科学院深圳先进技术研究院 | 图像处理方法、机器人及计算机可读存储介质 |
CN112733618A (zh) * | 2020-12-22 | 2021-04-30 | 江苏艾雨文承养老机器人有限公司 | 人体跌倒检测方法、防跌倒机器人及防跌倒系统 |
CN113158733B (zh) * | 2020-12-30 | 2024-01-02 | 北京市商汤科技开发有限公司 | 图像过滤方法、装置、电子设备及存储介质 |
CN113065473A (zh) * | 2021-04-07 | 2021-07-02 | 浙江天铂云科光电股份有限公司 | 一种适用于嵌入式系统的口罩人脸检测和体温测量方法 |
CN113221661A (zh) * | 2021-04-14 | 2021-08-06 | 浪潮天元通信信息系统有限公司 | 一种智能化人体摔倒检测系统及方法 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102722715A (zh) * | 2012-05-21 | 2012-10-10 | 华南理工大学 | 一种基于人体姿势状态判决的跌倒检测方法 |
US20130128051A1 (en) * | 2011-11-18 | 2013-05-23 | Syracuse University | Automatic detection by a wearable camera |
CN105678267A (zh) * | 2016-01-08 | 2016-06-15 | 浙江宇视科技有限公司 | 一种场景识别方法及装置 |
CN107331118A (zh) * | 2017-07-05 | 2017-11-07 | 浙江宇视科技有限公司 | 跌倒检测方法及装置 |
CN107408308A (zh) * | 2015-03-06 | 2017-11-28 | 柯尼卡美能达株式会社 | 姿势检测装置以及姿势检测方法 |
CN108090458A (zh) * | 2017-12-29 | 2018-05-29 | 南京阿凡达机器人科技有限公司 | 人体跌倒检测方法和装置 |
-
2017
- 2017-12-29 CN CN201711468689.0A patent/CN108090458B/zh active Active
-
2018
- 2018-09-08 WO PCT/CN2018/104734 patent/WO2019128304A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130128051A1 (en) * | 2011-11-18 | 2013-05-23 | Syracuse University | Automatic detection by a wearable camera |
CN102722715A (zh) * | 2012-05-21 | 2012-10-10 | 华南理工大学 | 一种基于人体姿势状态判决的跌倒检测方法 |
CN107408308A (zh) * | 2015-03-06 | 2017-11-28 | 柯尼卡美能达株式会社 | 姿势检测装置以及姿势检测方法 |
CN105678267A (zh) * | 2016-01-08 | 2016-06-15 | 浙江宇视科技有限公司 | 一种场景识别方法及装置 |
CN107331118A (zh) * | 2017-07-05 | 2017-11-07 | 浙江宇视科技有限公司 | 跌倒检测方法及装置 |
CN108090458A (zh) * | 2017-12-29 | 2018-05-29 | 南京阿凡达机器人科技有限公司 | 人体跌倒检测方法和装置 |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110633643A (zh) * | 2019-08-15 | 2019-12-31 | 青岛文达通科技股份有限公司 | 一种面向智慧社区的异常行为检测方法及系统 |
CN111178134A (zh) * | 2019-12-03 | 2020-05-19 | 广东工业大学 | 一种基于深度学习与网络压缩的摔倒检测方法 |
CN111178134B (zh) * | 2019-12-03 | 2023-05-30 | 广东工业大学 | 一种基于深度学习与网络压缩的摔倒检测方法 |
CN111461042A (zh) * | 2020-04-07 | 2020-07-28 | 中国建设银行股份有限公司 | 跌倒检测方法及系统 |
CN111639546A (zh) * | 2020-05-07 | 2020-09-08 | 金钱猫科技股份有限公司 | 一种基于神经网络的小尺度目标云计算识别方法和装置 |
CN113221621A (zh) * | 2021-02-04 | 2021-08-06 | 宁波卫生职业技术学院 | 一种基于深度学习的重心监测与识别方法 |
CN113221621B (zh) * | 2021-02-04 | 2023-10-31 | 宁波卫生职业技术学院 | 一种基于深度学习的重心监测与识别方法 |
CN113478485A (zh) * | 2021-07-06 | 2021-10-08 | 上海商汤智能科技有限公司 | 机器人及其控制方法、装置、电子设备、存储介质 |
CN113762219A (zh) * | 2021-11-03 | 2021-12-07 | 恒林家居股份有限公司 | 一种移动会议室内人物识别方法、系统和存储介质 |
CN114229646A (zh) * | 2021-12-28 | 2022-03-25 | 苏州汇川控制技术有限公司 | 电梯控制方法、电梯及电梯检测系统 |
CN114229646B (zh) * | 2021-12-28 | 2024-03-22 | 苏州汇川控制技术有限公司 | 电梯控制方法、电梯及电梯检测系统 |
Also Published As
Publication number | Publication date |
---|---|
CN108090458A (zh) | 2018-05-29 |
CN108090458B (zh) | 2020-02-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019128304A1 (zh) | 人体跌倒检测方法和装置 | |
US20220175287A1 (en) | Method and device for detecting driver distraction | |
CN112183166B (zh) | 确定训练样本的方法、装置和电子设备 | |
WO2019000929A1 (zh) | 垃圾分类回收方法、垃圾分类设备以及垃圾分类回收系统 | |
US20090051787A1 (en) | Apparatus and method for photographing image using digital camera capable of providing preview images | |
WO2019129255A1 (zh) | 一种目标跟踪方法及装置 | |
EP3702957B1 (en) | Target detection method and apparatus, and computer device | |
WO2022227490A1 (zh) | 行为识别方法、装置、设备、存储介质、计算机程序及程序产品 | |
WO2021031954A1 (zh) | 对象数量确定方法、装置、存储介质与电子设备 | |
CN110705500A (zh) | 基于深度学习的人员工作图像的注意力检测方法及系统 | |
KR101337554B1 (ko) | 차량용 블랙박스의 영상인식을 이용한 수배자 및 실종자 추적 장치 및 그 방법 | |
US10621424B2 (en) | Multi-level state detecting system and method | |
KR20220078893A (ko) | 영상 속 사람의 행동 인식 장치 및 방법 | |
CN111723671A (zh) | 一种智慧灯杆呼救系统及方法 | |
CN104392201B (zh) | 一种基于全向视觉的人体跌倒识别方法 | |
KR102116396B1 (ko) | 사회약자 인식장치 및 그 장치의 구동방법 | |
EP3035238A1 (en) | Video surveillance system and method for fraud detection | |
CN115331386B (zh) | 一种基于计算机视觉的防垂钓检测告警系统 | |
WO2023164370A1 (en) | Method and system for crowd counting | |
CN112733722B (zh) | 姿态识别方法、装置、系统及计算机可读存储介质 | |
CN111178134B (zh) | 一种基于深度学习与网络压缩的摔倒检测方法 | |
KR102134771B1 (ko) | 객체 인식을 통해 위급 상황을 판단하는 장치 및 방법 | |
Varghese et al. | An Intelligent Voice Assistance System for Visually Impaired using Deep Learning | |
CN112883876A (zh) | 室内行人检测的方法、装置、设备及计算机存储介质 | |
CN111611979A (zh) | 基于面部扫描的智能健康监测系统及方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18894977 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18894977 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 09.03.2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18894977 Country of ref document: EP Kind code of ref document: A1 |