CN112784987A - Target nursing method and device based on multistage neural network cascade


Info

Publication number
CN112784987A
Authority
CN
China
Prior art keywords
model
deep network
deep
network model
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110296284.3A
Other languages
Chinese (zh)
Other versions
CN112784987B (en)
Inventor
陈辉
熊章
张智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Xingxun Intelligent Technology Co ltd
Original Assignee
Wuhan Xingxun Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Xingxun Intelligent Technology Co ltd filed Critical Wuhan Xingxun Intelligent Technology Co ltd
Priority to CN202110296284.3A priority Critical patent/CN112784987B/en
Publication of CN112784987A publication Critical patent/CN112784987A/en
Application granted granted Critical
Publication of CN112784987B publication Critical patent/CN112784987B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063: Physical realisation using electronic means
    • G06N 3/08: Learning methods
    • G06N 3/082: Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00: Digital computers in general; Data processing equipment in general
    • G06F 15/16: Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs


Abstract

The invention discloses a target nursing method and device based on a multistage neural network cascade. The method comprises the following steps: acquiring a video stream image collected by a camera; selecting a logic combination relation, and determining, according to the logic combination relation, the cascade relation among the deep network models in the deep network module and the corresponding output actions; loading the corresponding deep network models; calling a multi-core dynamic allocation management instruction, calculating the complexity of each loaded deep network model, and allocating a corresponding memory and a predetermined number of core processors to each deep network model according to its complexity; inputting the collected video stream image into the corresponding deep network model; and analyzing the scene information in the video stream image according to the specified output information produced by each cascaded deep network model, and executing the corresponding output action. The invention requires little memory space, flexibly combines various deep network models into a building-block style of development, and improves development efficiency and user engagement.

Description

Target nursing method and device based on multistage neural network cascade
This application is a divisional application of invention patent application No. 201910088001.9, filed on January 29, 2019 and entitled "Multi-core processor-based multi-element deep network model reconstruction method and device".
Technical Field
The invention relates to data processing technology, in particular to data processing with deep network models on a multi-core processor, and specifically to a target nursing method and device based on a multistage neural network cascade.
Background
Existing deep network models can only handle tasks such as detection, classification and segmentation in a single scene; when complex scenes are involved, they become inefficient and ineffective.
Existing approaches to complex scene analysis generally fall into two categories. One is to use a more expensive GPU to directly and serially combine the output information of several deep network models; but the GPU cost is too high, which hinders large-scale use. The other analyzes the scene directly over time, using the spatial-position correlations and temporal correlations among the actions or targets in the scene; this requires a specialized and laborious training data set, and the continuous time-series analysis of video demands a large memory space and has high algorithmic complexity.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a multi-core processor-based multi-element deep network model reconstruction method and device with low memory requirements, low algorithmic complexity, flexible combination and low cost.
In order to achieve the above object, in one aspect, the present invention provides a target nursing method based on multi-stage neural network cascade, which is used for nursing old people and/or infants, and comprises:
acquiring a video stream image acquired by a camera;
selecting a logic combination relation according to a nursing object, and determining a cascade relation and a corresponding output action among all the deep network models in the deep network module according to the logic combination relation;
loading a corresponding depth network model according to the logic combination relation;
calling a multi-core dynamic allocation management instruction, calculating the complexity of the loaded deep network models, and allocating a corresponding memory and a preset number of core processors for each deep network model according to the complexity;
inputting the collected video stream image into a corresponding depth network model;
analyzing scene information in the video stream image according to the specified output information obtained after processing of each cascaded deep network model, and executing corresponding output action;
the logic combination relationship is as follows: and the effective identification area is appointed in the acquired video stream image.
Preferably, if the nursing object is an infant, the deep network models comprise a child deep network detection model and a deep network tracking identification model, which are cascaded in two levels;
if the nursing object is an elderly person, the deep network models comprise an elderly deep network detection model, a deep network tracking identification model and a deep network fall classification model;
the elderly deep network detection model, the deep network tracking identification model and the deep network fall classification model are cascaded in three levels: the output of the elderly deep network detection model is the input of the deep network tracking identification model, and the output of the deep network tracking identification model is the input of the deep network fall classification model.
Preferably, the output action is a voice prompt, where the prompt content includes that the child has left the defined area or that the elderly person has fallen.
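For illustration only, the two-level and three-level cascades described above can be sketched as a chain in which each model's output feeds the next model. The function and stage names below are hypothetical stand-ins for the patent's networks, not its actual implementation:

```python
# Hedged sketch: compose cascade stages so the output of each deep network
# model is the input of the next, as in the elderly-care cascade above.
# The stage functions are toy stand-ins, not real neural networks.

def build_cascade(*stages):
    """Compose model stages into a single pipeline function."""
    def cascaded(frame):
        result = frame
        for stage in stages:
            result = stage(result)
            if result is None:        # a stage found nothing: stop early
                return None
        return result
    return cascaded

# Stand-in stages for the three-level elderly-care cascade.
def elderly_detector(frame):          # returns a detected person, or None
    return frame.get("person")

def tracker(person):                  # returns the tracked person region
    return {"tracked": person}

def fall_classifier(tracked):         # returns "fall" or "normal"
    return "fall" if tracked["tracked"] == "lying" else "normal"

elderly_pipeline = build_cascade(elderly_detector, tracker, fall_classifier)
```

A two-level infant cascade would simply be `build_cascade(child_detector, tracker)` under the same assumptions.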
Preferably, the allocating a corresponding memory and a predetermined number of core processors for each deep network model according to the complexity includes:
obtaining a parameter file and a weight file of each deep network model;
respectively calculating the space complexity and the time complexity of each deep network model according to the parameter file and the weight file corresponding to each deep network model;
and distributing a corresponding memory and a preset number of core processors to each deep network model according to the space complexity and the time complexity of each deep network model.
Preferably, the parameter file comprises at least one of: the convolutional layers and their number, the depthwise convolutional layers and their number, and the convolution kernel size.
Preferably, the inputting the acquired video stream image into the corresponding depth network model comprises:
obtaining the number of detection models contained in the deep network model group;
dividing the video stream image into subcode streams corresponding to the number of the models;
and respectively sending the sub-code streams into the detection models after scaling.
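The steps above can be sketched as follows, under the assumption that each detection model receives its own copy of the frame tagged with the resolution it should be scaled to before inference (the function and field names are illustrative, not from the patent):

```python
# Hypothetical sketch of splitting a video stream image into one sub-code
# stream entry per detection model, each tagged with the model's input size.

def split_and_scale(frame, num_models, input_size=(300, 300)):
    """Produce one sub-code stream entry per detection model."""
    return [{"frame": frame, "model": i, "scale_to": input_size}
            for i in range(num_models)]
```

The default 300 x 300 resolution follows the figure used later in application example 1.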
Preferably, the detection model is a network detection model trained on MobilenetV2-SSD.
Preferably, the allocating a corresponding memory and a predetermined number of core processors to each deep network model according to the spatial complexity and the temporal complexity of each deep network model includes:
acquiring the computing capacity of each vector computing unit of the multi-core processor;
determining, from the time complexity and space complexity of each deep network model and the computing capacity of each vector computing unit, the number of vector computing units required by each deep network model, and allocating that number of core processors to each deep network model;
the number of core processors allocated is the same as the number of vector computing units required.
Preferably, the effective identification area is: a region selected in the middle of the image whose area is 1/2 of the entire image area.
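A minimal sketch of such an effective identification area, assuming the region keeps the image's aspect ratio (the patent does not specify the rectangle's shape), scales each side by 1/sqrt(2) so that the resulting area is half the full image:

```python
# Hedged sketch: a centered rectangle covering 1/2 of the image area.
# Preserving the aspect ratio is our assumption, not stated in the patent.
import math

def central_half_area_roi(width, height):
    """Return (x, y, w, h) of a centered rectangle with half the image area."""
    scale = 1 / math.sqrt(2)          # (w * s) * (h * s) = w * h / 2
    w, h = int(width * scale), int(height * scale)
    x, y = (width - w) // 2, (height - h) // 2
    return x, y, w, h
```

For a 1920 x 1080 frame this yields a roughly 1357 x 763 central region.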
In another aspect, the present invention further provides a device for monitoring a target scene based on a multistage neural network cascade, the device comprising:
a video stream image input module for acquiring a video stream image collected by the camera;
a logic combination module for selecting a logic combination relation according to the nursing object and determining, according to the logic combination relation, the cascade relation among the deep network models in the deep network module and the corresponding output actions;
a loading module for loading the corresponding deep network models according to the logic combination relation;
a multi-core dynamic allocation management module for calling a multi-core dynamic allocation management instruction, calculating the complexity of the loaded deep network models, and allocating a corresponding memory and a predetermined number of core processors to each deep network model according to the complexity;
a deep network module for inputting the collected video stream image into the corresponding deep network model;
an execution module for analyzing the scene information in the video stream image according to the specified output information produced by each cascaded deep network model, and executing the corresponding output action;
the logic combination relation including: designating an effective identification area in the acquired video stream image.
According to the target nursing method and device based on a multistage neural network cascade, a multi-core processor determines the cascade relation among the deep network models according to a preset logic combination relation, then allocates a corresponding memory and core processors to each deep network model according to its complexity, and comprehensively analyzes the scene in the video stream image. The requirements on memory space and algorithmic complexity are therefore low and the product cost is low; various deep network models (deep network learning algorithms) can be flexibly combined into a building-block style of development, which improves development efficiency and user engagement.
Drawings
FIG. 1 is a flowchart of a preferred embodiment of a multi-core processor-based multivariate deep network model reconstruction method of the present invention;
FIG. 2 is a video stream image classification structure diagram of the multi-core processor-based multi-element depth network model reconstruction method in FIG. 1;
fig. 3 is a schematic diagram of classification of tasks specified by a user in the multi-core processor-based multi-element deep network model reconstruction method in fig. 1.
Fig. 4 is a schematic structural diagram of a multi-core dynamic allocation management module according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a preferred embodiment of the multi-core processor-based multi-element deep network model reconstruction device of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples. It should be noted that, if not conflicted, the embodiments of the invention and the individual features of the embodiments can be combined with each other within the scope of protection of the invention.
Example 1
Referring to figs. 1 to 4, an embodiment of the present invention provides a multi-core processor-based multi-element deep network model reconstruction method. The multi-core processor may be a multi-core neural network processing chip or another integrated chip with a plurality of core processors, and includes a predetermined number of vector computing units, commonly 12 or 16 at present, although other numbers are possible. The computing power and on-chip cache size of each vector computing unit can be configured. In the embodiment of the invention, a multi-core neural network processing chip is selected and connected with a CCD camera; with an external infrared fill light, the CCD camera captures a predetermined area of a scene (such as a home, work or conference scene) under visible or infrared light to obtain a real-time image of the current scene. The multi-core processor-based multi-element deep network model reconstruction method mainly comprises the following steps:
S10, acquiring a video stream image collected by the camera; here the video stream image is visible light, but it may also be infrared light.
S20, selecting a logic combination relation, and determining a cascade relation and a corresponding output action among all the deep network models in the deep network module according to the logic combination relation; the logical combination relationship is designed in advance, and the depth network models are selected and determined to be cascaded according to the user requirements.
S30, loading a corresponding deep network model according to the logic combination relation;
S40, calling a multi-core dynamic allocation management instruction, calculating the complexity of the loaded deep network models, and allocating a corresponding memory and a predetermined number of core processors to each deep network model according to the complexity. The multi-core dynamic allocation management instruction mainly comprises a memory management instruction and a multi-core allocation management instruction. The memory management instruction manages a plurality of memory blocks, such as memory block 1, memory block 2, memory block 3, ..., memory block n; the multi-core allocation management instruction manages the allocation of a plurality of core processors, such as processor 1, processor 2, processor 3, ..., processor n. The number of processors may be equal to or different from the number of memory blocks.
S50, inputting the collected video stream image into a corresponding depth network model;
and S60, analyzing scene information in the video stream image according to the specified output information obtained after the processing of each cascaded depth network model, and executing corresponding output action.
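Purely for illustration, the memory-block and core-processor management invoked in step S40 could be sketched as a small registry that hands out free cores and memory blocks to each loaded model. The class and method names are assumptions, not the patent's implementation:

```python
# Hypothetical stand-in for the multi-core dynamic allocation management
# instruction: tracks free core processors and memory blocks, and assigns
# them to deep network models on request.

class DynamicAllocator:
    def __init__(self, num_cores, memory_blocks_kb):
        self.free_cores = list(range(num_cores))
        self.free_memory = list(memory_blocks_kb)   # e.g. [128, 128, 128, 128]
        self.assignments = {}

    def assign(self, model_name, cores_needed, memory_kb):
        """Reserve cores and the first memory block large enough."""
        cores = [self.free_cores.pop() for _ in range(cores_needed)]
        block = next(m for m in self.free_memory if m >= memory_kb)
        self.free_memory.remove(block)
        self.assignments[model_name] = (cores, block)
        return self.assignments[model_name]
```

With 12 cores and 128K blocks, assigning an elderly detection model needing 2 cores and 120K of memory leaves 10 free cores, matching the figures in application example 1.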
The multi-core processor-based multi-element deep network model reconstruction method analyzes the scene with no special requirement on the training data set and with simple processing; it can continuously analyze the time series of the acquired video stream images with a low memory requirement and a simple algorithm, accurately understand the semantics of the scene, and execute the corresponding output action promptly after analyzing the scene information. The invention can flexibly combine various deep learning algorithms (i.e. deep learning models) into a building-block style of development, improving development efficiency and user engagement.
In a preferred embodiment, the video stream images include a main code stream video image and a plurality of sub-code stream video images, which are respectively input into their corresponding deep network models. The resolution and frame rate of the main code stream video image and of the sub-code stream video images are customized by the user as needed. As shown in fig. 2, the video stream image includes a main code stream video image and a plurality of sub-code stream images, namely sub-code stream image 1, sub-code stream image 2, sub-code stream image 3, ..., sub-code stream image W, where W is an integer greater than 3.
In a preferred embodiment, the invoking a multi-core dynamic allocation management instruction, calculating the complexity of the loaded deep network models, and allocating a corresponding memory and a predetermined number of core processors to each deep network model according to the complexity further includes:
and calling a multi-core dynamic allocation management instruction, calculating the time complexity and the space complexity of the corresponding depth network model according to the loaded parameter file of the depth network model, and determining the memory space and the number of core processors dynamically allocated to the corresponding depth network model according to the time complexity and the space complexity.
In a preferred embodiment, the calculation formula of the space complexity is as follows:

Space ~ O( Σ_{l=1}^{D} K_l² · C_{l-1} · C_l + Σ_{l=1}^{D} M_l² · C_l )
the calculation formula of the time complexity is as follows:
(1) the time complexity formula for a single convolutional layer is calculated first as follows:
Time ~ O(M² · K² · C_in · C_out)
(2) the time complexity formula for calculating the whole depth network model is as follows:
Time ~ O( Σ_{l=1}^{D} M_l² · K_l² · C_{l-1} · C_l )
where M is the size of the output feature map, K is the size of the convolution kernel, C_in is the number of input channels, C_out is the number of output channels, and D is the total number of convolutional layers of the deep network model; l indexes the l-th convolutional layer, C_l is the number of output channels of the l-th convolutional layer (which is also the number of convolution kernels of that layer), and C_{l-1} is the number of input channels of the l-th convolutional layer. M and K are numbers greater than 0 and D is an integer greater than 0.
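These formulas can be checked with a short sketch that sums raw multiply and parameter counts over a list of convolutional layers. This is our reading of the formulas above, with each layer described by the tuple (M, K, C_in, C_out); the patent reports the results in units of G (1e9) and K (1e3):

```python
# Sketch of the time and space complexity formulas for a stack of
# convolutional layers. Each layer is (M, K, c_in, c_out): output
# feature-map size, kernel size, input channels, output channels.

def time_complexity(layers):
    """Sum of M^2 * K^2 * C_in * C_out over all layers (formula (2))."""
    return sum(M**2 * K**2 * c_in * c_out for M, K, c_in, c_out in layers)

def space_complexity(layers):
    """Weight parameters plus output feature maps, per the space formula."""
    weights = sum(K**2 * c_in * c_out for _, K, c_in, c_out in layers)
    features = sum(M**2 * c_out for M, _, _, c_out in layers)
    return weights + features
```

For example, a single layer with M = 2, K = 3, C_in = 1, C_out = 2 costs 4 * 9 * 1 * 2 = 72 multiplies and stores 18 weights plus 8 feature-map values.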
In a preferred embodiment, the determining, according to the temporal complexity and the spatial complexity, the memory space and the number of core processors dynamically allocated to the corresponding deep network model further includes:
According to the N deep network models specified by the logic combination relation, the space complexity M_i(K) of each deep network model is calculated and a corresponding memory space is allocated to it; the time complexity T_i(G) of each deep network model is also calculated. Let the computing capacity of each core processor in the multi-core processor be H(G), where G is the unit for measuring computing time complexity; the total number of core processors required by the deep network models is then the sum of T_i(G) / H(G) over the N models, where N is an integer greater than 0. Each deep network model represents one deep learning algorithm.
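The allocation rule above, with the illustrative figures used in application example 1 (a 16(G) detection model and two 0.585(G) models running on 10(G) vector computing units), can be sketched as follows; rounding each model's share up to a whole unit is our assumption:

```python
# Hedged sketch of the per-model core allocation: each model with time
# complexity T_i(G) gets ceil(T_i / H) vector computing units, where H(G)
# is the computing capacity of one unit.
import math

def allocate_cores(model_gflops, unit_capacity_gflops):
    """Return per-model core counts and the total number of cores needed."""
    per_model = [max(1, math.ceil(t / unit_capacity_gflops))
                 for t in model_gflops]
    return per_model, sum(per_model)
```

With the example figures this yields 2 cores for the detection model and 1 core each for the tracking and fall classification models, matching the allocations described later.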
In a preferred embodiment, the logical combination relationship includes at least one of: the method comprises the steps of specifying a cascade relation among various depth network models, a selected video stream image area, a specified task and a specified depth network model. The selected video stream image area comprises: selected rectangular areas, circular areas or polygonal areas with the number of sides larger than 4 in the video stream image.
In a preferred embodiment, the N deep network models include: a deep network detection model, a deep network classification model, a deep network semantic segmentation model, a deep network tracking identification model and a deep network voice recognition model. The deep network detection model is a target detection model based on deep learning that detects targets the user has trained in advance. The deep network classification model is a target classification model based on deep learning that extracts the depth features of images, classifies the images, and judges which scene or target an image belongs to. The deep network semantic segmentation model is a semantic segmentation model based on deep learning that mainly segments objects with specific meanings. The deep network tracking model is a tracking model based on deep learning that extracts the depth features of images for tracking. The deep network voice recognition model is a voice recognition model based on deep learning that recognizes the user's voice information and extracts its semantics.
The cascade relation among the deep network models is as follows:
a single-layer or multilayer cascade between one deep network detection model and another deep network detection model; between a deep network detection model and a deep network classification model; between one deep network classification model and another deep network classification model; between a deep network detection model and a deep network tracking model; between a deep network detection model and a deep network semantic segmentation model; or between a deep network tracking model and a deep network semantic segmentation model. Cascading yields more accurate output information and reduces judgment errors.
In a preferred embodiment, the specified tasks include at least one of:
safety zone detection and protection, timed program starting, article care, stranger intrusion alarm, face whitelist and/or blacklist reminders, highlight-moment capture (commonly referred to as snapshot), elderly care and child care.
In a preferred embodiment, the multi-core processor adopts a front-end embedded processing chip, and includes: at least one of a multi-core DSP, a multi-core CPU and a multi-core FPGA.
In a preferred embodiment, the corresponding output action includes at least one of voice prompt, automatic video recording, automatic photo taking, and flashing.
In a preferred embodiment, when a plurality of deep network models are cascaded, the output information of one deep network model serves as the input information of another; after calculation, new output information is generated so as to analyze the deeper meaning of the scene in the video stream image.
Application example 1
In an application embodiment of the present invention, a nursing scene for fall detection of an elderly person is taken as an example to describe the present invention in detail.
S100, a multi-core neural network processing chip is selected, wherein the multi-core neural network processing chip comprises 12 vector computing units, the computing power of each vector computing unit is 10(G), and the on-chip cache of the vector computing unit is 2M.
S200, shooting a scene where the old man is located by adopting a CCD camera, and acquiring a video stream image of infrared light or visible light; in this example, a visible light video stream image is used as a test example.
A predetermined area of the home scene is captured through the on-board CCD camera of the multi-core neural network processing chip, with an external infrared fill light enabling visible-light or infrared imaging, to obtain a real-time image of the elderly person in the current scene.
s300, selecting a logic combination relation (logic combination module), and determining a cascade relation and a corresponding output action among all the depth network models in the depth network module according to the logic combination relation; the logical combination relationship here is mainly: the cascade relation among the deep network models specified by the user, the areas selected by the user and the deep network types specified by the user. The output action can be voice prompt or sending a remote network alarm signal to a specified nursing person mobile phone terminal. Meanwhile, the picture can be automatically photographed, and the picture and the alarm signal are sent to the electronic equipment of the caregiver together.
Specifically, the selected logical combination relationship is to preferentially select the 1/2 area in the middle of the captured video stream image as the effective identification area, and of course, other areas in the video stream image may also be selected to implement the old people fall detection function, where the selected deep network model includes: the system comprises three depth network models, namely an old people depth network detection model, a depth network tracking identification model, a depth network falling classification model and the like. The method comprises the steps of carrying out three-level cascade on an old people deep network detection model, a deep network tracking recognition model and a deep network falling classification model, wherein the output of the old people deep network detection model is used as the input of the deep network tracking recognition model, and the output of the deep network tracking recognition model is used as the input of the deep network falling classification model. The appointed output task is to judge the old people to fall down and then send out voice to give an alarm.
S400, loading a corresponding depth network model, namely loading an old people depth network detection model, a depth network tracking and identifying model, a depth network falling classification model and the like which are included in the depth network model.
S500, calling a multi-core dynamic allocation management instruction, calculating the complexity of the loaded deep network model, and allocating a corresponding memory and a preset number of core processors for each deep network model according to the complexity.
Specifically, the parameter files and weight files of the three deep network models (the elderly deep network detection model, the deep network tracking identification model and the deep network fall classification model) are imported into memory through the multi-core dynamic allocation management module, and the space complexity and time complexity of each deep network model are calculated. Each deep network model has a uniquely corresponding parameter file and weight file: the parameter file describes the calculation rule of each layer, and the weight file is obtained by data training.
In this embodiment, the elderly deep network detection model adopts a detection model trained on a deep neural network (for example MobilenetV2-SSD, or another deep neural network). Its parameter file is analyzed; the parameter file mainly includes: the convolutional layers and their number, the depthwise convolutional layers and their number, the convolution kernel size, and so on.
The parameter file of the elderly detection model describes 78 convolutional layers. The time complexity formula gives a computation amount of 16(G), and the space complexity formula gives 120K. Since each vector computing unit of the multi-core processor has a computing capacity of 10(G), the multi-core dynamic allocation management module computes that 2 vector computing units must be called. The system therefore automatically allocates two core processors and a memory space of 128K. The input video to the elderly detection model is sub-code stream 1, scaled to a resolution of 300 × 300.
The deep network tracking identification model provided by the embodiment of the invention mainly tracks and identifies the features of the elderly person in the scene. The embodiment of the invention adopts an ECO tracking model; a C-COT tracking model may also be adopted. The main computation in the ECO tracking model is divided into two parts: one part is the correlation filtering computation, which requires less than 1G; the other is the depth feature computation, which uses a feature-extraction model from a deep neural network (such as MobileNetV2). The structure of the deep neural network adopted by the invention is shown in Table 1:
Table 1: structure of the adopted deep neural network (the original table appears only as an image and is not reproduced here).
The computation amount is 0.585G and the space complexity is 25K; the multi-core dynamic allocation management module calls 1 vector computing unit, and the allocated memory space is 128K.
For the elderly deep network fall classification model, a classification model trained on MobilenetV2 can be adopted; its space complexity is 25K and its computation amount is 0.585G. Here the system automatically allocates 1 core with a 128K memory space.
S600, inputting the two subcode streams separated by the video input model into the corresponding deep network models.
S700, the video subcode streams are automatically sent to the two elderly deep network detection models for detection, and the detection results of subcode stream 1 and subcode stream 2 are sent directly to the deep network tracking identification model. The elderly person is tracked through the deep network tracking identification model, and the tracking result is displayed in the main code stream. When the elderly deep network detection model detects an elderly person in the monitored area of the scene, the deep network tracking identification model tracks that person; its output, an image containing the elderly person, serves as the input image of the deep network fall classification model. The fall classification model classifies this image, and if the elderly person has fallen, the logic combination model loads the alarm audio as the output action corresponding to the preset task, thereby reporting the fall. By tracking the elderly person in real time, sending an alarm signal when a fall occurs in this area, or capturing a picture of the fall and sending it to the caregiver's electronic device, the risk is greatly reduced; the user can promptly learn whether the elderly person has fallen and been injured even while busy with other matters, which relieves the caregiver's burden.
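The three-stage cascade of step S700 (detection, tracking, fall classification, then the alarm action) can be sketched as follows. All five callables are hypothetical placeholders standing in for the deep network models and the logic combination model of the embodiment:

```python
def elderly_care_pipeline(frame, detect, track, classify_fall, alarm):
    """Three-stage cascade sketched from the embodiment:
    detection -> tracking -> fall classification -> alarm action."""
    boxes = detect(frame)                # elderly detection model
    if not boxes:
        return None                      # no elderly person in the scene
    person_img = track(frame, boxes[0])  # tracking model yields the person image
    if classify_fall(person_img):        # fall classification model
        alarm()                          # logic combination model loads alarm audio
        return "fall"
    return "ok"
```

A usage sketch: wiring in trivial stand-ins for the models shows the control flow without any real networks.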
Application example 2
Take the nursing of an infant who cannot yet walk upright as an example. The main steps of application example 2 are the same as those of application example 1. Since the system is here used for infant care, the deep network model of application example 2 of the invention mainly comprises a child deep network detection model and a deep network tracking identification model in a two-stage cascade; the designated task is the child-care function, and the designated output task is a voice prompt when the child goes beyond a defined area.
After the child deep network detection model detects a child within the defined area, its output is a rectangular box containing the child. This rectangular area is used as the input of the tracking network, which continuously tracks the box containing the child; when the child moves, the box moves with the child. When the child crawls out of the area designated by the user, the logic combination model automatically loads the alarm audio and emits an alarm sound to indicate that the child has crawled out of the designated area.
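The boundary check described above — deciding whether the tracked rectangular box has left the user-designated area — can be sketched as a simple coordinate comparison. The function name and the (x1, y1, x2, y2) convention are illustrative assumptions:

```python
def box_outside_region(box, region):
    """True when the tracked child's bounding box (x1, y1, x2, y2)
    extends beyond the user-defined region (x1, y1, x2, y2)."""
    bx1, by1, bx2, by2 = box
    rx1, ry1, rx2, ry2 = region
    return bx1 < rx1 or by1 < ry1 or bx2 > rx2 or by2 > ry2

# Box fully inside the region -> no prompt.
print(box_outside_region((120, 120, 200, 200), (100, 100, 500, 400)))  # False
# Box crosses the region boundary -> trigger the voice prompt.
print(box_outside_region((80, 120, 200, 200), (100, 100, 500, 400)))   # True
```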
The child detection model can likewise adopt a child deep network detection model trained on MobilenetV2-SSD. Its parameter file is parsed and, as before, the time and space complexity calculations give a computation amount of 16 (G) and a space complexity of 120K; the multi-core dynamic allocation management model determines that 2 vector computing units need to be called. The system automatically allocates 2 core processors here, with a memory space of 128K. The input video to the model is subcode stream 2 scaled to 300 × 300 resolution.
When the deep network tracking identification model tracks the child beyond the set effective area, the system automatically issues a voice prompt.
Application example 3
Application example 3 of the present invention, building on application examples 1 and 2, provides a multi-core processor-based multi-element deep network model reconstruction method combining elderly care and child care, which includes the following steps:
selecting a multi-core neural network processing chip;
collecting video stream images of visible light or infrared light;
selecting a corresponding logic combination relation, and determining a cascade relation and an output action among all the deep network models in the deep network module;
in a preferred embodiment of the invention, the selected logical combination designates a region in the middle of the image, whose area is 1/2 of the entire image area, as the valid recognition region.
Taking the simultaneous implementation of elderly fall detection and child care as an example, four deep network models are selected: an elderly detection model, a child detection model, a deep tracking identification model, and a fall classification model. The elderly detection model, deep tracking identification model, and fall classification model form a three-level cascade: the elderly detection model feeds the deep tracking identification model, which in turn feeds the fall classification model; the designated output task is a voice alarm after judging that the elderly person has fallen. The child detection model and the deep tracking identification model form a two-level cascade; the designated task is the child-care function, and the designated output task is a voice prompt when the child exceeds the defined area. The same deep tracking identification model can track both targets when an elderly person and a child are both present in the detection area, so that one set of equipment suffices to care for both, for example when the elderly person is looking after or playing with the child.
The four deep network models are imported into memory through the multi-core dynamic allocation management module, and processing cores are allocated to each network model:
specifically, in this embodiment, the detection model for the elderly people is a detection model trained based on MobilenetV2-SSD, and the input video sent to the model is a subcode stream 1 that is scaled to 300 × 300, and the model is loaded to a memory with addresses of 0-256K in Movidius. Since the calculation amount of the training MobilenetV2-SSD is 6G, Movidius has 12 vector calculation units, and the calculation power of each calculation unit is 10G, in order to obtain better real-time capability, calculation vector units of jump 1 and jump 2 are called here for calculation. The child detection model is also based on the detection model trained by the Movilent V2-SSD, and the input video sent into the model is scaled to a 300 x 300 sub-stream 2. And the model is loaded to a memory with 256-512K addresses in the Movidius chip for calculation, and calculation vector units SHAVE3 and SHAVE4 are called for calculation. And the ECO tracking model adopted by the depth tracking identification model has the calculated amount of 3G, and in order to obtain a better tracking effect, the SHAVE6-SHAVE9 are used for calculating four SHAVEs. And the memory of the Movidius on-chip address 512-640K is called for calculation. The old people falling model adopts a classification model trained based on MobilenetV2, the classification model is loaded to a memory of Movidius on-chip addresses 640-758K for calculation, the calculated amount is 1G, and a calculation vector unit SHAVE10 is called.
The two subcode streams separated by the video stream image input module are automatically sent to the two detection models respectively, and the detection results of subcode stream 1 and subcode stream 2 are sent directly to the tracking model. The elderly person and the child are tracked through the tracking model, and the tracking results are displayed in the main code stream. By tracking both in real time, the system automatically issues a voice prompt when the child exceeds the set effective area, and raises an alarm when the elderly person falls within the effective area.
Through multi-core-processor-based reconstruction of multiple deep network models, the invention can be applied to the recognition of various scenes, such as safe-region detection and protection, timed program start, important-article monitoring, stranger-intrusion alarm, face whitelist and/or blacklist reminders, capturing highlight moments, real-time dynamic snapshots, pet snapshots, smile snapshots, and so on. One or more of the foregoing may also be combined into a single set of equipment, such as a device that both monitors important articles and raises a stranger-intrusion alarm, or one that combines highlight-moment capture with real-time dynamic snapshots, or any other combination.
Example 2
Referring to fig. 5, corresponding to the multi-core processor-based multi-element deep network model reconstruction method provided in embodiment 1 and application examples 1 to 3, an embodiment of the present invention further provides a multi-core processor-based multi-element deep network model reconstruction apparatus, comprising:
the video stream image input module 10 is used for acquiring a video stream image acquired by a camera;
The logic combination module 20 is configured to select a logic combination relationship, and determine a cascade relationship and a corresponding output action between the depth network models in the depth network module according to the logic combination relationship;
a loading module 30, configured to load a corresponding deep network model according to the logical combination relationship;
the multi-core dynamic allocation management module 40 is used for calling a multi-core dynamic allocation management instruction, calculating the complexity of the loaded deep network models, and allocating corresponding memories and a preset number of core processors to each deep network model according to the complexity;
the deep network processing module 50 is used for inputting the acquired video stream images into corresponding deep network models;
and the execution module 60 is configured to analyze scene information in the video stream image according to the specified output information obtained after the processing by the deep network model, and execute a corresponding output action.
The multi-core processor-based multi-element deep network model reconstruction apparatus uses a multi-core processor to determine the cascade relations among the deep network models according to the logic combination relation, then allocates a corresponding memory and core processors to each deep network model according to its complexity, and comprehensively analyzes the scene in the video stream image. The requirements on memory space and algorithm complexity are therefore low and the product cost is low; moreover, various deep network models (deep network learning algorithms) can be flexibly combined in a building-block style of development, improving development efficiency and user engagement.
The multi-core dynamic allocation management module 40 is specifically configured to invoke a multi-core dynamic allocation management instruction, calculate time complexity and space complexity of a corresponding deep network model according to a loaded parameter file of the deep network model, and determine memory space and the number of core processors dynamically allocated to the corresponding deep network model according to the time complexity and the space complexity.
The multi-core dynamic allocation management module 40 includes a space complexity calculation submodule and a time complexity calculation submodule. The space complexity calculation submodule is used for calculating the space complexity, whose calculation formula is as follows:
Space ~ O( ∑_{l=1}^{D} K_l² · C_{l-1} · C_l + ∑_{l=1}^{D} M_l² · C_l )
The time complexity calculation submodule is used for calculating the time complexity. The calculation formula of the time complexity is as follows:
(1) the time complexity formula for a single convolutional layer is calculated first as follows:
Time ~ O( M² · K² · C_in · C_out )
(2) the time complexity formula for calculating the whole depth network model is as follows:
Time ~ O( ∑_{l=1}^{D} M_l² · K_l² · C_{l-1} · C_l )
where M is the size of the output feature map, K is the size of the convolution kernel, C_in is the number of input channels, C_out is the number of output channels, and D is the total number of convolutional layers of the deep network model; l denotes the l-th convolutional layer, C_l is the number of output channels of the l-th convolutional layer (which is also the number of convolution kernels of that layer), and C_{l-1} is the number of input channels of the l-th convolutional layer.
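The two formulas above can be evaluated directly from a per-layer parameter list. The sketch below is an illustrative transcription (the tuple layout and function names are assumptions, not part of the patent): time complexity sums M_l² · K_l² · C_{l-1} · C_l over the D layers, and space complexity sums the parameter and feature-map terms:

```python
def time_complexity(layers):
    """Sum of M_l^2 * K_l^2 * C_{l-1} * C_l over all convolutional
    layers; `layers` is a list of (M, K, C_in, C_out) tuples."""
    return sum(m * m * k * k * c_in * c_out for m, k, c_in, c_out in layers)

def space_complexity(layers):
    """Parameter term (K_l^2 * C_{l-1} * C_l) plus feature-map
    term (M_l^2 * C_l), summed over all layers."""
    params = sum(k * k * c_in * c_out for _m, k, c_in, c_out in layers)
    feats = sum(m * m * c_out for m, _k, _c_in, c_out in layers)
    return params + feats

# A single toy layer: 10x10 output map, 3x3 kernel, 4 -> 8 channels.
print(time_complexity([(10, 3, 4, 8)]))   # 28800
print(space_complexity([(10, 3, 4, 8)]))  # 288 parameters + 800 activations = 1088
```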
The multi-core dynamic allocation management module 40 further includes:
a memory space allocation submodule, configured to calculate the space complexity M_i(K) of each of the N deep network models specified by the logic combination relation, and to allocate a corresponding memory space to each deep network model;
a core processor number allocation submodule, configured to calculate the time complexity T_i(G) of each deep network model; with the computing power of each core processor in the multi-core processor denoted H(G), where G is the unit for measuring computational time complexity, the total number of core processors required by the deep network models is ∑_{i=1}^{N} ⌈T_i(G) / H(G)⌉, where N is an integer greater than 0.
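The core-count rule above — each of the N models needs its time complexity divided by the per-core computing power, rounded up, and the totals are summed — can be sketched as follows (an illustrative sketch; names and the example figures are taken from the application examples, not a definitive implementation):

```python
import math

def total_cores(time_complexities_g, capacity_g):
    """Total core processors for N cascaded models: model i with time
    complexity T_i (G) needs ceil(T_i / H) cores of capacity H (G)."""
    return sum(math.ceil(t / capacity_g) for t in time_complexities_g)

# Elderly-care cascade of application example 1: detection 16 G,
# tracking ~3 G, fall classification ~1 G, with H = 10 G per core.
print(total_cores([16, 3, 1], 10))  # 2 + 1 + 1 = 4
```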
The multi-core processor-based multi-element deep network model reconstruction method and apparatus are described in detail above; specific examples are used herein to explain the principle and implementation of the invention, and the description of the embodiments is intended only to help in understanding the method and its core idea. A person skilled in the art may, following the idea of the invention, vary the specific embodiments and the scope of application. In summary, the content of this specification is only an embodiment of the invention and is not intended to limit its scope; all equivalent structures or equivalent process transformations made using the contents of the description and the drawings, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of the invention.

Claims (10)

1. A target nursing method based on multistage neural network cascade is characterized in that a nursing object is an old man and/or an infant, and the method comprises the following steps:
acquiring a video stream image which is acquired by a camera and comprises a nursing object;
selecting a logic combination relation according to a nursing object, and determining a cascade relation and a corresponding output action among all the deep network models in the deep network module according to the logic combination relation;
loading a corresponding depth network model according to the logic combination relation;
calling a multi-core dynamic allocation management instruction, calculating the complexity of the loaded deep network models, and allocating a corresponding memory and a preset number of core processors for each deep network model according to the complexity;
inputting the collected video stream image into a corresponding depth network model;
analyzing scene information in the video stream image according to the specified output information obtained after processing of each cascaded deep network model, and executing corresponding output action;
the logic combination relationship is as follows: and the effective identification area is appointed in the acquired video stream image.
2. The target nursing method based on the multistage neural network cascade of claim 1, wherein, if the nursing object is an infant, the deep network model comprises a child deep network detection model and a deep network tracking identification model, the two being cascaded in two stages;
if the nursing object is an elderly person, the deep network model comprises an elderly deep network detection model, a deep network tracking identification model, and a deep network fall classification model;
the old people deep network detection model, the deep network tracking identification model and the deep network falling classification model are cascaded in three levels; the output of the old people deep network detection model is used as the input of the deep network tracking identification model, and the output of the deep network tracking identification model is used as the input of the deep network falling classification model.
3. The method as claimed in claim 2, wherein the output action is a voice prompt, and the prompt content includes the child exceeding a defined area or the elderly person falling.
4. The method for target care based on multi-stage neural network cascade of any one of claims 1 to 3, wherein the allocating a corresponding memory and a predetermined number of core processors for each deep network model according to the complexity comprises:
obtaining a parameter file and a weight file of each deep network model;
respectively calculating the space complexity and the time complexity of each deep network model according to the parameter file and the weight file corresponding to each deep network model;
and distributing a corresponding memory and a preset number of core processors to each deep network model according to the space complexity and the time complexity of each deep network model.
5. The method of claim 4, wherein the profile includes at least one of: convolutional layers and their number of layers, depth convolutional layers and their number of layers, and convolutional kernel size.
6. The method of claim 4, wherein the inputting the captured video stream images into a corresponding depth network model comprises:
obtaining the number of detection models contained in the deep network model group;
dividing the video stream image into subcode streams corresponding to the number of the models;
and respectively sending the sub-code streams into the detection models after scaling.
7. The method for target nursing based on multi-stage neural network cascade of claim 4, wherein the detection model is a network detection model trained on MobilenetV2-SSD.
8. The method of claim 4, wherein the allocating a corresponding memory and a predetermined number of core processors to each deep network model according to the spatial complexity and the temporal complexity of each deep network model comprises:
acquiring the computing capacity of each vector computing unit of the multi-core processor;
determining the number of vector computing units required by each deep network model through each computing capacity according to the time complexity and the space complexity of each deep network model, and distributing the number of core processors to each deep network model;
the number of the core processors is the same as that of the vector computing units.
9. The method of claim 1, wherein the effective identification area is: a region selected in the middle of the image whose area is 1/2 of the entire image area.
10. A target nursing device based on multistage neural network cascade is characterized in that a nursing object is an old man and/or an infant, and the target nursing device comprises:
the video stream image input module is used for acquiring video stream images which are acquired by the camera and comprise a nursing object;
the logic combination module is used for selecting a logic combination relation according to a nursing object and determining a cascade relation and a corresponding output action among all the depth network models in the depth network module according to the logic combination relation;
the loading module is used for loading the corresponding deep network model according to the logic combination relation;
the multi-core dynamic allocation management module is used for calling a multi-core dynamic allocation management instruction, calculating the complexity of the loaded deep network models, and allocating corresponding memories and a preset number of core processors for each deep network model according to the complexity;
the depth network module is used for inputting the collected video stream image into a corresponding depth network model;
the execution module is used for analyzing scene information in the video stream image according to the specified output information obtained after the processing of each cascaded depth network model and executing corresponding output actions;
the logic combination relationship is as follows: and the effective identification area is appointed in the acquired video stream image.
CN202110296284.3A 2019-01-29 2019-01-29 Target nursing method and device based on multistage neural network cascade Active CN112784987B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110296284.3A CN112784987B (en) 2019-01-29 2019-01-29 Target nursing method and device based on multistage neural network cascade

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910088001.9A CN109829542B (en) 2019-01-29 2019-01-29 Multi-core processor-based multi-element deep network model reconstruction method and device
CN202110296284.3A CN112784987B (en) 2019-01-29 2019-01-29 Target nursing method and device based on multistage neural network cascade

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910088001.9A Division CN109829542B (en) 2019-01-29 2019-01-29 Multi-core processor-based multi-element deep network model reconstruction method and device

Publications (2)

Publication Number Publication Date
CN112784987A true CN112784987A (en) 2021-05-11
CN112784987B CN112784987B (en) 2024-01-23

Family

ID=66862999

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110296284.3A Active CN112784987B (en) 2019-01-29 2019-01-29 Target nursing method and device based on multistage neural network cascade
CN201910088001.9A Active CN109829542B (en) 2019-01-29 2019-01-29 Multi-core processor-based multi-element deep network model reconstruction method and device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910088001.9A Active CN109829542B (en) 2019-01-29 2019-01-29 Multi-core processor-based multi-element deep network model reconstruction method and device

Country Status (1)

Country Link
CN (2) CN112784987B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313098A (en) * 2021-07-30 2021-08-27 阿里云计算有限公司 Video processing method, device, system and storage medium
CN115937743A (en) * 2022-12-09 2023-04-07 武汉星巡智能科技有限公司 Image fusion-based infant nursing behavior identification method, device and system

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN110363303B (en) * 2019-06-14 2023-07-07 平安科技(深圳)有限公司 Memory training method and device for intelligent distribution model and computer readable storage medium
CN110472531B (en) * 2019-07-29 2023-09-01 腾讯科技(深圳)有限公司 Video processing method, device, electronic equipment and storage medium
CN110516795B (en) * 2019-08-28 2022-05-10 北京达佳互联信息技术有限公司 Method and device for allocating processors to model variables and electronic equipment
CN111729283B (en) * 2020-06-19 2021-07-06 杭州赛鲁班网络科技有限公司 Training system and method based on mixed reality technology
CN113627620A (en) * 2021-07-29 2021-11-09 上海熠知电子科技有限公司 Processor module for deep learning

Citations (8)

Publication number Priority date Publication date Assignee Title
JP2015057630A (en) * 2013-08-13 2015-03-26 日本電信電話株式会社 Acoustic event identification model learning device, acoustic event detection device, acoustic event identification model learning method, acoustic event detection method, and program
CN105917354A (en) * 2014-10-09 2016-08-31 微软技术许可有限责任公司 Spatial pyramid pooling networks for image processing
CN106846729A (en) * 2017-01-12 2017-06-13 山东大学 A kind of fall detection method and system based on convolutional neural networks
CN107220604A (en) * 2017-05-18 2017-09-29 清华大学深圳研究生院 A kind of fall detection method based on video
CN107239790A (en) * 2017-05-10 2017-10-10 哈尔滨工程大学 A kind of service robot target detection and localization method based on deep learning
CN108171117A (en) * 2017-12-05 2018-06-15 南京南瑞信息通信科技有限公司 Electric power artificial intelligence visual analysis system based on multinuclear heterogeneous Computing
CN108683724A (en) * 2018-05-11 2018-10-19 江苏舜天全圣特科技有限公司 A kind of intelligence children's safety and gait health monitoring system
CN108764190A (en) * 2018-06-04 2018-11-06 山东财经大学 The elderly is from bed and in the video monitoring method of bed state

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
CN104217214B (en) * 2014-08-21 2017-09-19 广东顺德中山大学卡内基梅隆大学国际联合研究院 RGB D personage's Activity recognition methods based on configurable convolutional neural networks
CN106295668A (en) * 2015-05-29 2017-01-04 中云智慧(北京)科技有限公司 Robust gun detection method
CN105095866B (en) * 2015-07-17 2018-12-21 重庆邮电大学 A kind of quick Activity recognition method and system
CN106569574A (en) * 2015-10-10 2017-04-19 中兴通讯股份有限公司 Frequency management method and device for multicore CPU (Central Processing Unit)
CN205123923U (en) * 2015-11-20 2016-03-30 杭州电子科技大学 Many information fusion's old man monitor system equipment
US10083378B2 (en) * 2015-12-28 2018-09-25 Qualcomm Incorporated Automatic detection of objects in video images
US20190265319A1 (en) * 2016-07-22 2019-08-29 The Regents Of The University Of California System and method for small molecule accurate recognition technology ("smart")
CN107301456B (en) * 2017-05-26 2020-05-12 中国人民解放军国防科学技术大学 Deep neural network multi-core acceleration implementation method based on vector processor
CN107766406A (en) * 2017-08-29 2018-03-06 厦门理工学院 A kind of track similarity join querying method searched for using time priority
CN107872776B (en) * 2017-12-04 2021-08-27 泰康保险集团股份有限公司 Method and device for indoor monitoring, electronic equipment and storage medium
CN108491261A (en) * 2018-01-19 2018-09-04 西安电子科技大学 Multichannel frame sequence sort method based on many-core parallel processor
CN109189580B (en) * 2018-09-17 2021-06-29 武汉虹旭信息技术有限责任公司 Multi-task development model based on multi-core platform and method thereof


Non-Patent Citations (2)

Title
HANCHUAN XU ET AL: "Activity recognition method for home-based elderly care service based on random forest and activity similarity", 《IEEE》, vol. 7, pages 16217 - 16225, XP011709225, DOI: 10.1109/ACCESS.2019.2894184 *
吴鹏: "监控视频人体状态识别方法研究", 《中国优秀硕士学位论文全文数据库信息科技辑》, pages 138 - 3056 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN113313098A (en) * 2021-07-30 2021-08-27 阿里云计算有限公司 Video processing method, device, system and storage medium
CN113313098B (en) * 2021-07-30 2022-01-04 阿里云计算有限公司 Video processing method, device, system and storage medium
CN115937743A (en) * 2022-12-09 2023-04-07 武汉星巡智能科技有限公司 Image fusion-based infant nursing behavior identification method, device and system
CN115937743B (en) * 2022-12-09 2023-11-14 武汉星巡智能科技有限公司 Infant care behavior identification method, device and system based on image fusion

Also Published As

Publication number Publication date
CN112784987B (en) 2024-01-23
CN109829542A (en) 2019-05-31
CN109829542B (en) 2021-04-16

Similar Documents

Publication Publication Date Title
CN109829542B (en) Multi-core processor-based multi-element deep network model reconstruction method and device
US11055516B2 (en) Behavior prediction method, behavior prediction system, and non-transitory recording medium
WO2020107847A1 (en) Bone point-based fall detection method and fall detection device therefor
CN108875481B (en) Method, device, system and storage medium for pedestrian detection
KR20190099443A (en) Systems and Methods for Appearance Navigation
CN108875476B (en) Automatic near-infrared face registration and recognition method, device and system and storage medium
CN109154976A (en) Pass through the system and method for machine learning training object classifier
CN106845352B (en) Pedestrian detection method and device
US9183431B2 (en) Apparatus and method for providing activity recognition based application service
Fan et al. Fall detection via human posture representation and support vector machine
CN113111767A (en) Fall detection method based on deep learning 3D posture assessment
CN110659391A (en) Video detection method and device
CN109117882B (en) Method, device and system for acquiring user track and storage medium
CN115424171A (en) Flame and smoke detection method, device and storage medium
Cameron et al. A fall detection/recognition system and an empirical study of gradient-based feature extraction approaches
CN114359618A (en) Training method of neural network model, electronic equipment and computer program product
CN113963438A (en) Behavior recognition method and device, equipment and storage medium
CN113920585A (en) Behavior recognition method and device, equipment and storage medium
CN114764895A (en) Abnormal behavior detection device and method
JP2019020820A (en) Video recognition system
CN113902030A (en) Behavior identification method and apparatus, terminal device and storage medium
Paul Ijjina Human fall detection in depth-videos using temporal templates and convolutional neural networks
Mobsite et al. A Deep Learning Dual-Stream Framework for Fall Detection
Iazzi et al. A new method for fall detection of elderly based on human shape and motion variation
CN110659624A (en) Group personnel behavior identification method and device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant