CN115187888A - Infant monitoring system based on deep learning and construction method - Google Patents

Infant monitoring system based on deep learning and construction method

Info

Publication number
CN115187888A
CN115187888A
Authority
CN
China
Prior art keywords
model
infant
monitoring system
deep learning
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210556558.2A
Other languages
Chinese (zh)
Inventor
田昶
潘宇航
黄柏翰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN202210556558.2A priority Critical patent/CN115187888A/en
Publication of CN115187888A publication Critical patent/CN115187888A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02Alarms for ensuring the safety of persons
    • G08B21/0202Child monitoring systems using a transmitter-receiver system carried by the parent and the child
    • G08B21/0261System arrangements wherein the object is to detect trespassing over a fixed physical boundary, e.g. the end of a garden
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/01Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
    • G08B25/08Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium using communication transmission lines

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Emergency Management (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-learning-based infant monitoring system and a method for constructing it. Infant image data are collected in multiple scene environments, and a high-robustness YOLOv3 model is trained in a "validate while training" regime, so that the head and body of an infant can be located accurately even in complex environments; the system communicates with a WeChat mini program (applet) to transmit video data, electronic fence data and alarm prompts. The user can define a virtual fence boundary, and an alarm is raised when the detected subject is found to have crossed that boundary, reducing the possibility of accidents; because the alarm is highly reliable, it also relieves the anxiety of parents who would otherwise receive frequent prompts. The monitoring system accurately and specifically monitors infant activity and is not disturbed by night-time conditions or scenes in which an adult accompanies the infant. It uses a lightweight network to detect the infant's coordinates and directly judges whether the infant is out of bounds, which suits embedded computing power and generalizes well.

Description

Infant monitoring system based on deep learning and construction method
Technical Field
The invention relates to the technical field of infant monitoring, in particular to an infant monitoring system based on deep learning and a construction method.
Background
Existing surveillance cameras cannot reliably distinguish adults from infants, which leads to false alarms and erroneous prompts. Alarm prompts must be both accurate and effective: the monitoring system should specifically recognize infants rather than adults, and should not raise an alarm when an adult enters the electronic fence area. A targeted intelligent monitoring system is therefore needed to help parents monitor their infants.
A fence-crossing behavior detection method has been proposed ("Fence crossing behavior detection method based on deep learning" [J], Computer Systems & Applications, 2021, 30(2): 147-153). That method classifies behaviors through deep learning, using both per-frame image information and temporal information to judge whether a crossing has occurred. The resulting model is very bulky and currently suitable only for running on a desktop computer, not on an embedded system, and the algorithm itself fits actions with obvious differences, such as an adult climbing over a fence; it is unsuitable for infants, whose action types are few. For example, when an infant crawls into a dangerous area, it is difficult to make an alarm judgement from behavior alone.
Disclosure of Invention
To solve the problems that the prior art is unsuitable for embedded systems and that algorithmic limitations make behavior-based alarm judgement difficult, the invention provides a deep-learning-based infant monitoring system whose hardware comprises a Hi3516DV300, a Sony camera and a Hi3861. The invention collects infant image data in multiple scene environments and trains a high-robustness YOLOv3 model in a "validate while training" regime, so the head and body of an infant can be located accurately even in fairly complex environments, and communicates with a WeChat mini program to transmit video data, electronic fence data and alarm prompts. The user can define a virtual fence boundary, and an alarm is raised when the subject is detected crossing that boundary, reducing the possibility of accidents; the high reliability of the alarm also relieves the anxiety of parents who would otherwise receive frequent prompts.
The invention specifically adopts the following technical scheme:
A deep-learning-based infant monitoring system comprises a user side, several home surveillance cameras and a server. The user side runs a mini program in which the user can view the video streamed from the cameras in real time and set the region of the electronic fence; the home surveillance cameras are respectively installed in the infant's activity areas; the server contains a Hi3516DV300 development board on which the YOLOv3 model and corresponding code are stored. The home surveillance cameras transmit surveillance video to the server in real time; the server judges, against the electronic fence region set by the user, whether the infant has left that region, and if so sends an SMS prompt to the user side.
A method for building the deep-learning-based infant monitoring system comprises the following steps:
Step 1: select several scenes and acquire infant video data with a suspended camera or a mobile-phone recording device;
Step 2: capture and screen frames from the video data acquired in step 1, manually draw and label bounding boxes around the infant's head and body in each image, and divide the labelled data set into a training set and a validation set;
Step 3: feed the training set and validation set from step 2 into a YOLOv3 network built on the PyTorch platform for training and validation respectively;
Step 4: test the trained model, observe its performance, analyze the causes of any defects, and add data to the data set in a targeted manner;
Step 5: convert the high-robustness PyTorch model obtained in step 4 into a caffe model;
Step 6: apply INT8 quantization to the floating-point yolov3.caffemodel obtained in step 5 to generate the wk file usable by the hardware, verify the model precision, and import the model and corresponding code into the Hi3516DV300 development board;
Step 7: set up a server, and write an SMS-sending program on the Hi3516DV300;
Step 8: develop the mini program, connect it to the server, let the server communicate with the Hi3516DV300 to transmit real-time video data, and let the client set the electronic fence region in the mini program;
Step 9: post-process the model predictions, judge boundary crossing, and raise the alarm by SMS.
Further, step 1 specifically comprises: acquiring video data with home surveillance cameras and mobile-phone cameras, with the video scenes including a daytime crib scene, a daytime living-room scene, a daytime outdoor scene, a night-time crib scene, and a scene in which the infant wears a mask.
Further, step 2 specifically comprises: capturing frames from the video data with the ffmpeg tool, typically at 1 frame per second; screening the captured pictures, deleting any picture that contains no infant target, is blurred, differs little from adjacent frames, or is hard even for a human to judge, and keeping the rest as the data set; drawing and labelling bounding boxes with the labelimg tool, with two label targets: the infant's head and the infant's whole body; and, after labelling, dividing the data set into a training set and a validation set at a 9:1 ratio.
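The frame capture and 9:1 split above can be sketched as follows. This is a minimal illustration, not the patent's code: the ffmpeg `-vf fps=1` flag samples one frame per second as stated in the text, while the file names, output pattern and random seed are illustrative assumptions.

```python
import random

def build_ffmpeg_cmd(video_path, out_pattern, fps=1):
    # -vf fps=N samples N frames per second of video; one image per frame
    return ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", out_pattern]

def split_dataset(items, train_ratio=0.9, seed=0):
    # Deterministic shuffle, then a 9:1 train/validation split
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

cmd = build_ffmpeg_cmd("crib_day_01.mp4", "frames/crib_day_01_%04d.jpg")
train, val = split_dataset(range(100))
```

The screening step (dropping blurred or infant-free frames) remains manual, as described in the text.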
Further, step 3 specifically comprises: slightly modifying the model to avoid errors during the later model conversion, setting the input image size to 416 × 416, and specifying 50 epochs of frozen-backbone training followed by 150 epochs of unfrozen training.
Further, step 4 specifically comprises: applying the trained model to the different environments for detection; if performance is poor in some environment, enlarging the data set for that environment and training again; after several repetitions the data set totals 5000 images.
Further, step 5 specifically comprises: using the pth parameter file and the model trained in step 4 to generate an onnx model with the torch export method, and then, in an environment configured under Ubuntu, converting it with the onnx2caffe toolkit into yolov3.caffemodel and yolov3.prototxt.
Further, step 6 specifically comprises: putting the caffemodel file and the cfg file converted in step 5 into RuyiStudio, quantizing them into a fixed-point model, and generating the wk file required by the Hi3516DV300 hardware; testing the model precision, and burning the program onto the development board when the precision drop is below 0.5 percent, i.e. within the allowable range. If the precision is not within the allowable range, the model is rejected, or the model operator responsible for the loss is identified and replaced.
Further, step 7 specifically comprises: setting up a server on the target host and writing a program that provides MQTT-protocol communication among multiple devices on a given network; and writing, on the Hi3516DV300, a program capable of sending an SMS to a number designated by the user.
Further, step 8 specifically comprises: importing the mqtt.js file into the mini program so that it supports MQTT connections and services. The mini program is divided into two interfaces: a network connection interface and an electronic fence setting interface. The network connection interface supports one-key connection to the corresponding server for viewing video in real time; the electronic fence setting interface lets the user zoom and drag a canvas to select the region, and clicking the confirm button sends the modified data to the server.
Further, step 9 specifically comprises: writing the non-maximum-suppression post-processing of the YOLOv3 network in C to obtain the final output center coordinates and the class of each box; applying a median filtering algorithm to obtain trustworthy coordinates and class; and comparing the coordinates with the boundary defined by the user. If the coordinates are not within the user-defined virtual fence, the local Hi3516DV300 device judges that the infant has left the electronic fence and promptly sends an SMS prompt to the user, thereby realizing the monitoring.
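The filter-then-compare judgement described above can be sketched in a few lines. This is a Python illustration of the logic (the patent's version runs in C on the board); the (x_min, y_min, x_max, y_max) fence representation and the history length are assumptions.

```python
import statistics

def stable_point(history):
    # Median over a short history of center coordinates suppresses
    # one-frame mispredictions that would make the box jump
    return (statistics.median(p[0] for p in history),
            statistics.median(p[1] for p in history))

def outside_fence(point, fence):
    # fence = (x_min, y_min, x_max, y_max) in image pixels
    x, y = point
    x0, y0, x1, y1 = fence
    return not (x0 <= x <= x1 and y0 <= y <= y1)

p = stable_point([(10, 10), (11, 9), (500, 500)])  # one outlier detection
```

Here the outlier (500, 500) is rejected by the median, and only the stabilized point is tested against the fence, which is what keeps the SMS alarm reliable.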
The invention has the beneficial effects that:
the monitoring system can accurately and pertinently monitor the activities of the infants and is not interfered by night environment and adult accompanying scenes. The YOLOV 3-based infant monitoring system established by the invention can solve the problem that infants easily crawl into dangerous areas by means of accurate identification of characteristics of the infants and setting of the electronic fence, reduces the risk of accidents of the infants, and simultaneously meets the requirements of users on self-defining the electronic fence and real-time monitoring.
The monitoring system of the invention uses a lighter network to detect the coordinates of the baby, directly judges whether the baby is out of range, and better accords with embedded computing power and better universality.
Drawings
FIG. 1 is a flow chart of an embodiment construction method;
FIG. 2 is a merged view of partial scene styles;
FIG. 3 is a picture style labeled in labelimg software;
FIG. 4 is a partial training set picture;
FIG. 5 is a partial authentication set picture;
FIG. 6 is a prediction of a model with good robustness and accuracy;
FIG. 7 is a status page of the MQTT server;
FIG. 8 is a developed WeChat applet camera interface;
FIG. 9 is a screenshot of the top half of a developed WeChat applet electronic fence setup interface;
FIG. 10 is a screenshot of the lower half of the developed WeChat applet electronic fence setup interface;
FIG. 11 is a prompt screenshot of the SMS message.
Detailed Description
In order to make the objects, technical solutions and technical effects of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments.
As shown in fig. 1, a method for building an infant monitoring system based on deep learning includes the following steps:
Step 1: infant video data are acquired with a suspended camera recording device. Specifically, in the daytime crib scene, the home camera is oriented either directly above the crib or obliquely above it, and the shooting distance is divided into long range (camera on the ceiling), middle range (camera view covers the crib) and close range (camera view covers the infant's whole body or upper body). In the daytime living-room scene, the camera orientation and distance are random; in the night scene, the camera is placed directly above the crib at middle range. Scenes in which the infant wears a mask and in which an adult accompanies the infant are included in the collected samples to improve recognition accuracy. A merged view of some scene styles is shown in fig. 2.
Step 2: the ffmpeg tool is applied to the video data acquired in step 1. Videos of scene types with plenty of data are sampled at 0.2-0.5 frames/second, avoiding capturing near-identical pictures and preventing any one scene from carrying excessive weight during model learning, while videos that are hard to collect widely, such as night scenes and scenes with a mask-wearing child, are sampled at 0.8-1.0 frames/second; this balances the proportions of the different scene data sets. A picture is deleted if it contains no infant target, is blurred, differs little from adjacent frames, or is hard even for a human to judge. During labelling, the box-drawing standard is: for the head, the four sides of the label box should be tangent to the infant's head, i.e. the smallest rectangle containing it; the body is deemed recognizable when more than half of the infant's body area appears in the picture, in which case the label box should be the smallest rectangle containing the infant's body. The annotation criteria are shown in fig. 3. After labelling, the data set is divided into a training set and a validation set at a 9:1 ratio.
Step 3: the model's upsampling mode is changed from bilinear to nearest, and the results of the two upsampling operations are made static constants, to avoid errors during the later model conversion. The input image size is set to 416 × 416. Fifty epochs of frozen-backbone training are specified with a learning rate of 0.01 decayed by 2% per epoch using the SGD optimizer, followed by 150 epochs of unfrozen training with a learning rate of 0.0005, also decayed by 2% per epoch.
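The two-phase schedule above works out numerically as follows; a minimal sketch assuming the 2% decay is multiplicative per epoch and restarts at each phase, which is how the text reads.

```python
def lr_schedule(freeze_epochs=50, unfreeze_epochs=150,
                freeze_lr=0.01, unfreeze_lr=0.0005, decay=0.98):
    # Frozen phase: lr = 0.01 * 0.98**epoch; unfrozen phase restarts at 0.0005
    lrs = [freeze_lr * decay ** e for e in range(freeze_epochs)]
    lrs += [unfreeze_lr * decay ** e for e in range(unfreeze_epochs)]
    return lrs

lrs = lr_schedule()  # 200 per-epoch learning rates in total
```

Restarting at a much smaller rate for the unfrozen phase is the usual transfer-learning pattern: the newly unfrozen backbone layers are fine-tuned gently rather than disturbed by the larger head-only rate.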
Step 4: after training, the model is run on previously prepared test videos of the different scenes, with the predicted target box displayed in real time. If in some scene the detection box jumps sharply, an object or adult is misidentified as an infant, or the infant in the video goes undetected, the data set for that scene type is expanded. If in a scene the detection box is lost or misjudged for no more than 0.5 second and more than 80% of the target area is framed continuously, no extra data are added for that scene. The final data set contains 5000 images (model test performance is shown in fig. 6).
Step 5: for the high-robustness model verified in step 4, the export function of the torch library is called to convert the pth parameter file and the model-building program into an onnx file. Then, under an Ubuntu system, a caffe environment is built, the onnx2caffe package is downloaded, the Upsample layer is added to the set of caffe-supported layers, and finally the model-conversion program is executed to obtain the yolov3.caffemodel and yolov3.prototxt files.
Step 6: RuyiStudio is downloaded and configured, the chip model Hi3516DV300 is selected, and 40 representative pictures covering all training scenes are selected for quantization. The wk file for board-side operation is generated, yielding yolov3.wk, and the precision of the converted model is verified in the IDE: compared with the original model's mAP = 96.18%, the converted model reaches mAP = 95.7%, a precision loss of under 0.5%, within the reasonable range.
Step 7: a server environment is built on a host (a cloud host is chosen), and a program is written on the server to establish an MQTT communication mechanism according to the MQTT 5.0 protocol standard; the status page of the MQTT server is shown in fig. 7. For the communication parameters, message quality QoS 0 is selected and the heartbeat interval is 30 s. For online-state awareness, the will message is set as an alarm message, i.e. the device group under the topic is alerted when a device disconnects. The communication topic is the device ID, and a topic hierarchy is established so that several devices can be controlled simultaneously. Meanwhile, a program is written on the Hi3516DV300 that calls the SMS-sending API of a third-party SMS platform through an HTTP POST, realizing the function of sending SMS messages to specific numbers.
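The board-side alarm call over HTTP POST might look like the sketch below. The endpoint URL, JSON field names, phone number and token are hypothetical stand-ins; the third-party SMS platform's real API is not specified in the text, and the board's version is written in C rather than Python.

```python
import json
import urllib.request

def build_sms_request(phone, text, token="EXAMPLE_TOKEN"):
    # Assemble a JSON POST to a placeholder SMS endpoint
    payload = json.dumps({"to": phone, "content": text, "token": token}).encode()
    return urllib.request.Request(
        "https://sms.example.com/send",  # placeholder endpoint, not a real API
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_sms_request("13800000000", "Infant has left the fenced area")
# urllib.request.urlopen(req) would actually send it; not executed here
```

Only the request is constructed here; sending is deliberately left out so the sketch has no side effects.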
Step 8: the open-source mqtt.js file is downloaded and added to the WeChat mini program. The "Baby Guard" mini program has two pages. The first connects the camera: pressing one-key connect links the self-networking Hi3861 development board, which is controlled by the Hi3516 main controller, so that both the camera and the client connect to the server configured in step 7 and the real-time video is displayed; this page is shown in fig. 8. The second page is the electronic fence setting interface, where the user can drag the zoomed canvas to set the fence region; after clicking confirm, the mini program sends the final position and size of the canvas over the MQTT protocol (this page is shown in figs. 9 and 10). Meanwhile the Hi3861 receives the subscribed position information, modifies the corresponding global variables, and continuously judges in real time whether the infant has crossed the electronic fence.
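Turning the applet's canvas position and size into a pixel rectangle the board can compare against is a simple scaling step; a sketch under the assumption that both canvas and frame are addressed by (width, height) tuples and that the canvas rectangle is sent as position plus size, as the text describes.

```python
def fence_from_canvas(pos, size, canvas_wh, frame_wh):
    # Map the applet's canvas rectangle into frame-pixel coordinates
    sx = frame_wh[0] / canvas_wh[0]   # horizontal canvas-to-frame scale
    sy = frame_wh[1] / canvas_wh[1]   # vertical canvas-to-frame scale
    x0, y0 = pos[0] * sx, pos[1] * sy
    x1, y1 = (pos[0] + size[0]) * sx, (pos[1] + size[1]) * sy
    return (x0, y0, x1, y1)

# e.g. a 50x50 selection at (10, 10) on a 100x100 canvas, 416x416 frame
fence = fence_from_canvas((10, 10), (50, 50), (100, 100), (416, 416))
```

The resulting rectangle is what the board stores in its global variables and tests detections against on every frame.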
Step 9: because the neural network returns several candidate prediction boxes, and non-maximum suppression is not part of the network itself, a non-maximum-suppression post-processing routine for the YOLOv3 network is written in C to obtain the final output center coordinates and the class of each box. Since a momentary misprediction can make the detection box jump, a median filtering algorithm is applied to obtain reliable, stable head and body coordinates for the infant. If the infant's body is detected and its coordinates are output, the body coordinates are used preferentially; if the body is not recognized, the head coordinates are checked against the boundary range, i.e. the admissible range of horizontal and vertical coordinates computed by a simple calculation from the position and size of the fence region sent in step 8. The coordinates are compared with the user-defined boundary; if they are not within the user-defined virtual fence, the local Hi3516DV300 device judges that the infant has left the electronic fence and promptly sends an SMS prompt to the user (as shown in fig. 11), realizing the monitoring.
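The greedy IoU-based suppression that this C post-processing performs can be sketched compactly in Python. Boxes are (x0, y0, x1, y1) corner pairs, and the 0.45 overlap threshold is an assumed typical value, not one stated in the text.

```python
def iou(a, b):
    # Intersection-over-union of two (x0, y0, x1, y1) boxes
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.45):
    # Greedy NMS: keep the highest-scoring box, drop boxes that overlap
    # it by more than `thresh`, then repeat on what remains
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep

kept = nms([(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)],
           [0.9, 0.8, 0.7])
```

The two heavily overlapping boxes collapse to the higher-scoring one, while the distant box survives, leaving one box per detected object to feed the median filter.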
It should be understood that parts of the present invention not specifically set forth are within the prior art.
It should be understood by those skilled in the art that the above-mentioned embodiments are only specific embodiments and procedures of the present invention, and the scope of the present invention is not limited thereto. The scope of the invention is limited only by the appended claims.

Claims (10)

1. A deep-learning-based infant monitoring system, characterized by comprising a user side, several home surveillance cameras and a server, wherein the user side runs a mini program in which the user can view the video streamed from the cameras in real time and set the region of the electronic fence; the home surveillance cameras are respectively installed in the infant's activity area; the server is provided with a Hi3516DV300 development board on which a YOLOv3 model and corresponding code are stored; the home surveillance cameras transmit surveillance video to the server in real time, the server judges, against the electronic fence region set by the user, whether the infant has left that region, and if so sends an SMS prompt to the user side.
2. A method for building the deep-learning-based infant monitoring system according to claim 1, comprising the following steps:
Step 1: select several scenes and acquire infant video data with a suspended camera or a mobile-phone recording device;
Step 2: capture and screen frames from the video data acquired in step 1, manually draw and label bounding boxes around the infant's head and body in each image, and divide the labelled data set into a training set and a validation set;
Step 3: feed the training set and validation set from step 2 into a YOLOv3 network built on the PyTorch platform for training and validation respectively;
Step 4: test the trained model, observe its performance, analyze the causes of any defects, and add data to the data set in a targeted manner;
Step 5: convert the high-robustness PyTorch model obtained in step 4 into a caffe model;
Step 6: apply INT8 quantization to the floating-point yolov3.caffemodel obtained in step 5 to generate the wk file usable by the hardware, verify the model precision, and import the model and corresponding code into the Hi3516DV300 development board;
Step 7: set up a server, and write an SMS-sending program on the Hi3516DV300;
Step 8: develop the mini program, connect it to the server, let the server communicate with the Hi3516DV300 to transmit real-time video data, and let the client set the electronic fence region in the mini program;
Step 9: post-process the model predictions, judge boundary crossing, and raise the alarm by SMS.
3. The method for building an infant monitoring system based on deep learning according to claim 2, wherein the scene in step 1 comprises: a day crib scene, a day living room scene, a day outdoor scene, a night crib scene, and a child mask wearing scene.
4. The method for building an infant monitoring system based on deep learning according to claim 3, wherein step 2 specifically comprises: capturing frames from the video data with the ffmpeg tool at 1 frame per second; screening the captured pictures, deleting any picture that contains no infant target, is blurred, differs little from adjacent frames, or is hard even for a human to judge, and keeping the rest as the data set; drawing and labelling bounding boxes with the labelimg tool, with two label targets, namely the infant's head and the infant's whole body; and, after labelling, dividing the data set into a training set and a validation set at a 9:1 ratio.
5. The method for building an infant monitoring system based on deep learning according to claim 4, wherein step 3 specifically comprises: setting the input image size to 416 × 416, and specifying 50 epochs of frozen-backbone training and 150 epochs of unfrozen training.
6. The method for building an infant monitoring system based on deep learning according to claim 5, wherein step 4 specifically comprises: applying the trained model to the different environments for detection, and if it performs poorly in some environment, enlarging the data set for that environment and training again, repeating several times.
7. The method for building an infant monitoring system based on deep learning according to claim 6, wherein step 5 specifically comprises: using the pth parameter file and the model trained in step 4 to generate an onnx model with the torch export method, and then, in an environment configured under Ubuntu, converting it with the onnx2caffe toolkit into yolov3.caffemodel and yolov3.prototxt.
8. The method for building an infant monitoring system based on deep learning according to claim 7, wherein step 6 specifically comprises: putting the caffemodel file and the cfg file converted in step 5 into RuyiStudio, quantizing them into a fixed-point model, generating the wk file required by the Hi3516DV300 hardware, testing the model precision, and burning the program onto the development board when the precision drop is below 0.5 percent, i.e. within the allowable range.
9. The method for building an infant monitoring system based on deep learning according to claim 8, wherein step 8 specifically comprises: importing the mqtt.js file into the WeChat mini program so that it supports MQTT connections and services, the mini program being divided into two interfaces, namely a network connection interface and an electronic fence setting interface; the network connection interface supports one-key connection to the corresponding server for viewing video in real time, and the electronic fence setting interface supports the user zooming and dragging a canvas for region selection, with clicking the confirm button sending the modified data to the server.
10. The method for building an infant monitoring system based on deep learning according to claim 9, wherein the step 9 specifically comprises: writing a non-maximum suppression post-processing algorithm for the YOLOV3 network in C language to obtain the final output center coordinates and class of each bounding box; applying a median filtering algorithm to obtain reliable coordinates and classes; and comparing the coordinates with the boundary defined by the user. If the infant is not within the range of the user-defined virtual electronic fence, the local Hi3516DV300 device judges that the infant has left the electronic fence and promptly sends an SMS alert to the user, realizing monitoring.
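The iterative data-collection loop of claim 6 (retrain whenever the model underperforms in some environment) could be sketched as below. This is an illustrative assumption, not the patent's implementation; the function name and the 0.90 acceptance threshold are hypothetical.

```python
# Sketch of claim 6's loop: flag environments where the detector
# performs poorly so their data sets can be enlarged before retraining.
ACCURACY_THRESHOLD = 0.90  # assumed acceptance level, not from the patent


def environments_needing_more_data(per_env_accuracy):
    """Return the environments whose measured accuracy falls below the
    threshold and therefore need additional training data."""
    return [env for env, acc in per_env_accuracy.items()
            if acc < ACCURACY_THRESHOLD]


# Example: accuracy measured on held-out clips from each room setup.
scores = {"bedroom": 0.95, "living_room": 0.84, "low_light": 0.78}
print(environments_needing_more_data(scores))
```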
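Claim 8's acceptance criterion (burn the program to the board only when quantization costs less than 0.5% precision) can be expressed as a small check. Whether the 0.5% is measured in absolute percentage points or relative terms is not specified in the patent; the sketch below assumes percentage points.

```python
def quantization_acceptable(float_precision, quantized_precision,
                            max_drop_percent=0.5):
    """Accept the quantized wk model when precision drops by less than
    max_drop_percent percentage points (claim 8's 0.5% criterion,
    interpreted here as an absolute drop)."""
    drop = (float_precision - quantized_precision) * 100.0
    return drop < max_drop_percent


# Example: 90.0% before quantization vs. 89.7% after -> 0.3 pt drop, accepted.
print(quantization_acceptable(0.900, 0.897))
```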
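On the server side of claim 9, the fence data sent by the mini program's confirm button must be parsed into a region the device can test against. The patent does not specify the wire format; the JSON field names below are illustrative assumptions for a rectangular fence.

```python
import json


def parse_fence_message(payload: bytes):
    """Parse a fence-update message from the mini program into a
    rectangle (x_min, y_min, x_max, y_max). The 'fence' key and
    coordinate field names are hypothetical, not from the patent."""
    data = json.loads(payload.decode("utf-8"))
    fence = data["fence"]
    return (fence["x_min"], fence["y_min"], fence["x_max"], fence["y_max"])


# Example MQTT payload as the mini program might publish it.
msg = b'{"fence": {"x_min": 10, "y_min": 20, "x_max": 300, "y_max": 240}}'
print(parse_fence_message(msg))  # (10, 20, 300, 240)
```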
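The post-processing pipeline of claim 10 (non-maximum suppression, median filtering of the detected center, then a fence-containment test) is specified in C on the Hi3516DV300; the sketch below restates the same three steps in Python for brevity, with assumed thresholds and a rectangular fence.

```python
from statistics import median


def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def nms(boxes, scores, thresh=0.45):
    """Greedy non-maximum suppression; 0.45 is an assumed IoU threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep


def smoothed_center(history):
    """Median-filter recent center coordinates to reject outlier frames."""
    xs, ys = zip(*history)
    return (median(xs), median(ys))


def outside_fence(center, fence):
    """True when the smoothed center lies outside the user-defined
    rectangular fence (x1, y1, x2, y2) -> trigger the SMS alert."""
    x, y = center
    x1, y1, x2, y2 = fence
    return not (x1 <= x <= x2 and y1 <= y <= y2)
```

For example, two heavily overlapping infant detections collapse to one box under `nms`, the surviving centers from the last few frames are median-filtered, and `outside_fence` decides whether the alert in claim 10 fires.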
CN202210556558.2A 2022-05-20 2022-05-20 Infant monitoring system based on deep learning and construction method Pending CN115187888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210556558.2A CN115187888A (en) 2022-05-20 2022-05-20 Infant monitoring system based on deep learning and construction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210556558.2A CN115187888A (en) 2022-05-20 2022-05-20 Infant monitoring system based on deep learning and construction method

Publications (1)

Publication Number Publication Date
CN115187888A true CN115187888A (en) 2022-10-14

Family

ID=83513785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210556558.2A Pending CN115187888A (en) 2022-05-20 2022-05-20 Infant monitoring system based on deep learning and construction method

Country Status (1)

Country Link
CN (1) CN115187888A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116229674A (en) * 2023-02-07 2023-06-06 广州后为科技有限公司 Infant monitoring method, device, electronic equipment and storage medium
CN116311780A (en) * 2023-03-16 2023-06-23 宁波星巡智能科技有限公司 Intelligent monitoring method, device and equipment for preventing infants from falling from high place

Similar Documents

Publication Publication Date Title
CN115187888A (en) Infant monitoring system based on deep learning and construction method
CN105120217B (en) Intelligent camera mobile detection alert system and method based on big data analysis and user feedback
CN108416979A (en) A kind of intelligence the elderly's tumbling alarm system
CN111918039B (en) Artificial intelligence high risk operation management and control system based on 5G network
CN110769195B (en) Intelligent monitoring and recognizing system for violation of regulations on power transmission line construction site
CN109447448A (en) A kind of method, client, server and the system of fire Safety Assessment management
CN111920129A (en) Intelligent safety helmet system
CN108739490A (en) A kind of wild animal tracking and monitoring system and method
CN114863489B (en) Virtual reality-based movable intelligent auxiliary inspection method and system for construction site
CN213128247U (en) Intelligent safety helmet system
CN115376269B (en) Fire monitoring system based on unmanned aerial vehicle
CN108319892A (en) A kind of vehicle safety method for early warning and system based on genetic algorithm
CN114022810A (en) Method, system, medium and terminal for detecting working state of climbing frame protective net in construction site
CN113887318A (en) Embedded power violation detection method and system based on edge calculation
CN112434827A (en) Safety protection identification unit in 5T fortune dimension
CN112434828A (en) Intelligent identification method for safety protection in 5T operation and maintenance
CN105929392A (en) Radar and video multi-system interaction offshore platform system
CN110414360A (en) A kind of detection method and detection device of abnormal behaviour
CN116129490A (en) Monitoring device and monitoring method for complex environment behavior recognition
CN115346157A (en) Intrusion detection method, system, device and medium
CN113723701A (en) Forest fire monitoring and predicting method and system, electronic equipment and storage medium
CN111507294B (en) Classroom security early warning system and method based on three-dimensional face reconstruction and intelligent recognition
CN113392706A (en) Device and method for detecting smoking and using mobile phone behaviors
CN112422895A (en) Image analysis tracking and positioning system and method based on unmanned aerial vehicle
CN116824725A (en) Forest resource intelligent management and protection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination