CN112580593A - Behavior monitoring method and apparatus, behavior monitoring device, and computer storage medium - Google Patents

Info

Publication number
CN112580593A
CN112580593A (application CN202011595416.4A)
Authority
CN
China
Prior art keywords
behavior; information; video; monitored object; characteristic information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011595416.4A
Other languages
Chinese (zh)
Inventor
徐花平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Skyworth RGB Electronics Co Ltd
Original Assignee
Shenzhen Skyworth RGB Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Skyworth RGB Electronics Co Ltd
Priority to CN202011595416.4A
Publication of CN112580593A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443: Local feature extraction by matching or filtering
    • G06V10/449: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a behavior monitoring method and apparatus, a behavior monitoring device, and a computer storage medium. The behavior monitoring method comprises the following steps: collecting audio and video information of the environment where a monitored object is located; determining the child-rearing scene where the monitored object is located according to the audio and video information; and pushing prompt information corresponding to that scene. This solves the prior-art problem that an effective solution cannot be recommended as the education scene changes, thereby improving the education atmosphere.

Description

Behavior monitoring method and apparatus, behavior monitoring device, and computer storage medium
Technical Field
The present invention relates to the field of intelligent monitoring technologies, and in particular to a behavior monitoring method and apparatus, a behavior monitoring device, and a computer storage medium.
Background
At present, parents born in the 1980s and 1990s pay increasing attention to childcare. A good education is the essential basis for a child to grow up happily, and it involves both knowledge and emotion. However, tools such as current childcare software and early-education machines mostly address only the knowledge side and neglect emotional education. Emotional education comes mainly from parents, but when a parent's approach is wrong, no effective solution can be recommended as the current education scene changes; the result is improper emotion and behavior, which can cause irreparable psychological or physical harm to the child.
Therefore, in order to develop children fully in both knowledge and emotion, it is important to design an invention that can monitor emotion in real time and recommend a suitable child-rearing scene.
Disclosure of Invention
The invention mainly aims to provide a behavior monitoring method and apparatus, a behavior monitoring device, and a computer storage medium, so as to solve the prior-art problem that an effective solution cannot be recommended as the education scene changes.
In order to achieve the above object, the present invention provides a behavior monitoring method, which in one embodiment includes the following steps:
collecting audio and video information of an environment where a monitored object is located;
determining the child-rearing scene where the monitored object is located according to the audio and video information;
and pushing prompt information corresponding to the scene.
In an embodiment, the step of determining the child-rearing scene where the monitored object is located according to the audio and video information includes:
extracting the behavior characteristic information of the monitored object from the audio and video information;
acquiring preset characteristic information matched with the extracted behavior characteristic information;
and determining the child-rearing scene of the monitored object according to the matched preset characteristic information.
In an embodiment, extracting the behavior characteristic information of the monitored object from the audio and video information includes:
cutting the video information of the environment where the monitored object is located into a plurality of video blocks of the same size that are continuous in time sequence;
cutting each video block into a plurality of video frames with the same resolution;
extracting video frames corresponding to two adjacent video blocks to obtain a video frame set;
and inputting the video frame set into a deep convolutional network to extract behavior characteristic information.
In an embodiment, determining the child-rearing scene where the monitored object is located according to the matched preset feature information includes:
acquiring the behavior characteristic information of the monitored object, and searching for preset characteristic information matched with the behavior characteristic information;
and determining the child-rearing scene of the monitored object according to the matched preset characteristic information.
In an embodiment, the matched preset feature information comprises keywords of inappropriate behavior and inappropriate actions, and the prompt information is soothing music.
In an embodiment, after the pushing of the prompt information corresponding to the scene, the method includes:
detecting whether behavior characteristic information of the monitored object is matched with preset characteristic information within a preset time interval after the prompt information is output;
and if so, pushing the preset feature information to a prompt terminal corresponding to the monitored object in a voice mode.
In one embodiment, the behavior monitoring method further includes:
after receiving an instruction of entering a learning mode, playing directory information of the stored books and audio files;
and after receiving a playing instruction, playing the book or audio file corresponding to the playing instruction.
To achieve the above object, the present invention also provides a behavior monitoring device, including:
a behavior data acquisition module: used for acquiring the audio and video information of the environment where the monitored object is located;
a behavior data analysis module: used for determining the child-rearing scene where the monitored object is located according to the audio and video information;
a behavior pushing module: used for pushing the prompt information corresponding to the scene.
To achieve the above object, the present invention further provides a behavior monitoring device, which includes a memory, a processor, and a behavior monitoring program stored in the memory and executable on the processor, wherein the behavior monitoring program, when executed by the processor, implements the steps of the behavior monitoring method as described above.
To achieve the above object, the present invention further provides a computer storage medium storing a behavior monitoring program, which when executed by a processor implements the steps of the behavior monitoring method as described above.
The behavior monitoring method and device, the behavior monitoring equipment and the computer storage medium provided by the invention at least have the following technical effects:
the method comprises the steps of acquiring audio and video information of the environment where a monitored object is located, comparing the monitored behavior characteristic information with preset behavior characteristics, determining a child-rearing scene where the monitored object is located according to the audio and video information if the behavior characteristic information is not matched with the preset behavior characteristic information, sending corresponding prompt information under the child-rearing scene to the monitored object, adjusting behaviors and languages, and solving the problem that an effective solution cannot be recommended according to changes of education scenes in the prior art so as to improve the education atmosphere.
Drawings
Fig. 1 is a schematic diagram of an architecture of a behavior monitoring device according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram illustrating a behavior monitoring method according to a first embodiment of the present invention;
FIG. 3 is a detailed flowchart of step S120 in the second embodiment of the behavior monitoring method according to the present invention;
fig. 4 is a detailed flowchart of step S121 in the third embodiment of the behavior monitoring method according to the present invention;
FIG. 5 is a detailed flowchart of step S123 in the fourth embodiment of the behavior monitoring method according to the present invention;
FIG. 6 is a schematic flow chart diagram of a fifth embodiment of the behavior monitoring method of the present invention;
FIG. 7 is a flowchart illustrating a behavior monitoring method according to a sixth embodiment of the present invention;
FIG. 8 is a functional block diagram of the behavior monitoring device of the present invention.
The implementation, functional features, and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In order to solve the prior-art problem that an effective solution cannot be recommended as the education scene changes, the method collects audio and video information of the environment where the monitored object is located, determines the child-rearing scene where the monitored object is located according to that information, and pushes the prompt information corresponding to the scene, thereby improving the education atmosphere.
For a better understanding of the above technical solutions, exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a hardware operating environment according to an embodiment of the present invention.
It should be noted that fig. 1 is a schematic diagram of a hardware operating environment of a behavior monitoring device.
As shown in fig. 1, the behavior monitoring device may include: a processor 1001, such as a CPU; a memory 1005; a user interface 1003; a network interface 1004; and a communication bus 1002, which enables communication among these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory), and may alternatively be a storage device separate from the processor 1001.
Optionally, the behavior monitoring device may further include a camera, a Radio Frequency (RF) wireless charging module, a sensor, an audio circuit, a wireless transmission module, and the like.
The wireless charging module is mainly used to charge other modules, such as the camera module, so that those modules can work autonomously for long periods. The sensors may include light sensors, motion sensors, voice sensors, and others. Specifically, the light sensor may include an ambient light sensor that adjusts display brightness according to ambient light, and a proximity sensor that turns off the display and/or backlight when the device is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration along each axis (generally three) and, when the device is stationary, the magnitude and direction of gravity; it can be used for posture-recognition applications (such as portrait/landscape switching, related games, and magnetometer attitude calibration) and vibration-recognition functions (such as a pedometer and tap detection). The wireless transmission module mainly uses high-capacity transmission such as 5G or Wi-Fi 6 to carry high-definition video streams, pictures, and the like. Taking a single-point video stream of 4 Mbps as an example, 5G and Wi-Fi 6 can theoretically reach 10 Gbps and can therefore bear roughly 2000 channels of data access; combined with the asynchronous transmission of each terminal, this capacity can meet the data transmission requirement. Of course, the device may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here again.
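The channel estimate above follows from simple arithmetic. A minimal sketch in Python, assuming a flat protocol-overhead factor of roughly 20% (the text does not state how the figure of about 2000 channels is derived):

```python
# Sanity check of the quoted capacity figures (illustrative only; the
# 20% overhead factor is an assumption, not stated in the text).
link_rate_bps = 10_000_000_000   # 10 Gbps theoretical 5G / Wi-Fi 6 rate
stream_rate_bps = 4_000_000      # 4 Mbps single-point video stream

ideal_channels = link_rate_bps // stream_rate_bps   # 2500 with zero overhead
practical_channels = int(ideal_channels * 0.8)      # ~2000 after assumed overhead

print(ideal_channels, practical_channels)           # 2500 2000
```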
Those skilled in the art will appreciate that the configuration shown in FIG. 1 does not constitute a limitation of the behavior monitoring device, which may include more or fewer components than shown, combine some components, or arrange the components differently.
As shown in fig. 1, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a behavior monitoring program. The operating system is a program that manages and controls the hardware and software resources of the behavior monitoring device and supports the execution of the behavior monitoring program and other software or programs.
In the behavior monitoring device shown in fig. 1, the user interface 1003 is mainly used to connect a terminal and exchange data with it; the network interface 1004 is mainly used to connect a background server and exchange data with it; and the processor 1001 may be used to invoke the behavior monitoring program stored in the memory 1005.
In this embodiment, the behavior monitoring device includes: a memory 1005, a processor 1001, and a behavior monitoring program stored on the memory and executable on the processor, wherein:
in this embodiment, the processor 1001 may be configured to call the behavior monitoring program stored in the memory 1005 and perform the following operations:
collecting audio and video information of an environment where a monitored object is located;
determining the child-rearing scene where the monitored object is located according to the audio and video information;
and pushing prompt information corresponding to the scene.
In this embodiment, the processor 1001 may be configured to call the behavior monitoring program stored in the memory 1005 and perform the following operations:
extracting the behavior characteristic information of the monitored object from the audio and video information;
acquiring preset characteristic information matched with the extracted behavior characteristic information;
and determining the child-rearing scene of the monitored object according to the matched preset characteristic information.
In this embodiment, the processor 1001 may be configured to call the behavior monitoring program stored in the memory 1005 and perform the following operations:
cutting the video information of the environment where the monitored object is located into a plurality of video blocks of the same size that are continuous in time sequence;
cutting each video block into a plurality of video frames with the same resolution;
extracting video frames corresponding to two adjacent video blocks to obtain a video frame set;
and inputting the video frame set into a deep convolutional network to extract behavior characteristic information.
In this embodiment, the processor 1001 may be configured to call the behavior monitoring program stored in the memory 1005 and perform the following operations:
acquiring the behavior characteristic information of the monitored object, and searching for preset characteristic information matched with the behavior characteristic information;
and determining the child-rearing scene of the monitored object according to the matched preset characteristic information.
In this embodiment, the processor 1001 may be configured to call the behavior monitoring program stored in the memory 1005 and perform the following operations:
detecting whether behavior characteristic information of the monitored object is matched with preset characteristic information within a preset time interval after the prompt information is output;
and if so, pushing the preset feature information to a prompt terminal corresponding to the monitored object in a voice mode.
In this embodiment, the processor 1001 may be configured to call the behavior monitoring program stored in the memory 1005 and perform the following operations:
after receiving an instruction of entering a learning mode, playing directory information of the stored books and audio files;
and after receiving a playing instruction, playing the book or audio file corresponding to the playing instruction.
Since the behavior monitoring device provided in the embodiments of the present application is the device used to implement the methods of those embodiments, a person skilled in the art can derive its specific structure and variations from the methods described herein, so details are not repeated here. All behavior monitoring devices used in the methods of the embodiments of the present application fall within the intended scope of protection of the present application. The serial numbers of the embodiments of the present invention above are merely for description and do not represent the merits of the embodiments.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this disclosure. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Based on the above structure, an embodiment of the present invention is proposed.
Referring to fig. 2, fig. 2 is a schematic flow chart of a behavior monitoring method according to a first embodiment of the present invention, which includes the following steps:
and step S110, collecting audio and video information of the environment where the monitoring object is located.
In this embodiment, the monitored object refers to a person exhibiting improper behavior or improper language. The audio and video information of the environment where the monitored object is located may be collected by an intelligent terminal device worn by the monitored object, such as a smart phone or a smart watch, that carries a camera module and a voice module and can track and monitor the behavior and language of the monitored object in real time within a certain range: the camera module collects the behavior, expressions, and other performances of the monitored object, and the voice module collects the monitored object's speech. Alternatively, the camera module or a pickup module may be mounted on a rotatable track so that it can follow the monitored object. The audio and video information collected by the camera module or voice module is sent through a wireless module to the processor of the intelligent device for processing.
In this embodiment, the environment where the monitored object is located may be a home, for example to monitor the communication and interaction between parents and children; it may be a school, for example for the communication and interaction between teachers and students; or it may be the internet, for example to monitor an anchor's behavior during a live broadcast in order to detect bad performance or illegal operation and improve the communication environment. The present application takes the monitoring of communication and interaction between parents and children as its running example.
And step S120, determining the child-rearing scene of the monitored object according to the audio and video information.
In this embodiment, the audio and video information of the environment where the monitored object is located is collected and transmitted to an intelligent terminal through a wireless device. The intelligent terminal processes the audio and video information to extract behavior feature information, searches for preset feature information matched with that behavior feature information, and determines the child-rearing scene where the monitored object is located according to the matched preset feature information.
In this embodiment, the collected audio and video data may be stored in a local folder of the intelligent terminal after being sent there, and the processor processes the data by reading it from that folder. Alternatively, the collected data may be stored on a cloud server that also holds the matched preset behavior feature information together with an improper-behavior trigger threshold and an improper-keyword trigger threshold. The collected video data is processed, behavior feature information is extracted and compared with the preset behavior feature information, and when they match, that is, when the improper-behavior or improper-keyword trigger threshold is reached, the cloud server pushes the correct child-rearing scene corresponding to the preset behavior feature information to the monitored object.
And step S130, pushing prompt information corresponding to the scene.
In this embodiment, the prompt information is soothing music, though other prompts, such as light or ring tones emitted by the intelligent device, may also be used and can be changed according to user requirements. When the processor identifies an improper behavior or improper keyword in the communication process, it pushes the prompt information corresponding to the scene to the monitored object. For example, when hitting, profanity, words that defame or disrespect the other party, or negative words of reproach and belittlement are detected, light music is inserted and then a warm voice prompt is played to calm the people concerned and end the unpleasant exchange; when no such behavior appears, monitoring simply continues.
The video information of the environment where the monitored object is located is divided into a plurality of video blocks of the same size that are continuous in time sequence, and the video frames corresponding to two adjacent blocks are extracted to obtain a video frame set. The frame set is input into a deep convolutional network to extract behavior characteristic information; the preset characteristic information matched with that behavior characteristic information is searched, the grade corresponding to the matched preset characteristic information is looked up, and the child-rearing scene where the monitored object is located is determined according to the grade. The monitored object can thus receive targeted education, and the education atmosphere is improved.
Referring to fig. 3, fig. 3 is a detailed flowchart of step S120 in the second embodiment of the behavior monitoring method of the present invention, and in this embodiment, step S120 in the first embodiment includes:
and step S121, extracting the behavior characteristic information of the monitoring object in the audio and video information.
In this embodiment, after the acquired audio and video information is sent to the processor, the processor extracts the behavior feature information of the monitored object from it. Specifically, the processor divides the video information of the environment where the monitored object is located into a plurality of video blocks of the same size that are continuous in time sequence, divides each video block into a plurality of video frames with the same resolution, extracts the video frames corresponding to two adjacent video blocks to obtain a video frame set, and inputs the video frame set into the deep convolutional network to extract the behavior feature information.
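A minimal sketch of this slicing step, assuming OpenCV for decoding and an arbitrary block length of 16 frames (the patent specifies neither); frames read from one capture share a resolution, so the equal-resolution requirement is implicit here:

```python
import cv2

def split_into_blocks(video_path, frames_per_block=16):
    """Cut a video into same-sized blocks of frames, continuous in time."""
    cap = cv2.VideoCapture(video_path)
    blocks, current = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break                   # end of stream
        current.append(frame)
        if len(current) == frames_per_block:
            blocks.append(current)  # one complete block
            current = []
    cap.release()
    return blocks                   # a trailing partial block is dropped so sizes match

def frame_sets(blocks):
    """Join the frames of every two adjacent blocks into one frame set."""
    return [blocks[i] + blocks[i + 1] for i in range(len(blocks) - 1)]
```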
And step S122, acquiring preset characteristic information matched with the extracted behavior characteristic information.
In this embodiment, the matched preset feature information comprises keywords of inappropriate behavior and inappropriate actions. The preset feature information is stored in a memory and includes material drawn from classic child-rearing books, books on child developmental psychology and education, books on marriage and family, and the like; it can be loaded at the development stage, and a user may add favorite electronic books or delete existing ones. The memory also stores, as reference information, the common behaviors, pictures, and language that may harm children.
In this embodiment, the comparison of behavior feature information with preset feature information must consider multiple aspects. The frequency with which an improper keyword occurs in the behavior feature information may be detected, or its tone and volume, and the matched preset feature information searched from the memory. For example, if the memory stores a dismissive phrase such as "get out" as preset feature information, then when the processor detects that phrase during communication, or detects it 3 times in succession, it judges the current behavior improper, indicating that the emotion of the monitored object has changed, and that phrase is the preset feature information matched with the current behavior feature information. The gesture of an improper action, or its duration, may likewise be detected. For example, if the memory stores a clenched-fist photo as preset feature information, then when the processor detects a fist-clenching action whose duration exceeds a preset time, it judges the current action improper and matched with the preset feature information. The times at which an improper action and an improper keyword occur may also be correlated: for example, when an improper keyword such as "hit" appears during communication and the monitored object is detected raising a hand at the same time, the current behavior is judged improper and the matched preset feature information is searched from the database.
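The three matching strategies just described, keyword frequency, gesture duration, and keyword/gesture co-occurrence, can be sketched as follows; every name, threshold, and data shape below is an assumption for illustration, not the patent's actual data model:

```python
IMPROPER_KEYWORDS = {"get out": 3}          # phrase -> repetitions that trigger (assumed)
IMPROPER_GESTURES = {"clenched_fist": 5.0}  # gesture -> seconds it must persist (assumed)

def keyword_triggered(words, keyword, times):
    """True when an improper keyword occurs at least `times` times."""
    return words.count(keyword) >= times

def gesture_triggered(events, gesture, min_duration):
    """True when `gesture` persists for at least `min_duration` seconds.
    `events` holds (name, start_ts, end_ts) tuples from the video analysis."""
    return any(name == gesture and end - start >= min_duration
               for name, start, end in events)

def cooccurrence_triggered(keyword_ts, gesture_ts, window=1.0):
    """True when an improper word and an improper action fall within one window."""
    return any(abs(k - g) <= window for k in keyword_ts for g in gesture_ts)
```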
And step S123, determining the child-rearing scene where the monitored object is located according to the matched preset characteristic information.
In this embodiment, a plurality of preset feature information entries are stored in the memory, and each child-rearing scene corresponds to at least one matched entry. The behavior feature information of the monitored object is obtained, the preset feature information matched with it is searched, and the child-rearing scene where the monitored object is located is determined according to the matched preset feature information.
Referring to fig. 4, fig. 4 is a detailed flowchart of step S121 in the third embodiment of the behavior monitoring method of the present invention, and in this embodiment, step S121 in the second embodiment includes:
step S1211, the video information of the environment where the monitoring object is located is divided into a plurality of video blocks with the same size and continuous in time sequence.
In this embodiment, because the collected video information is long, processing it as a whole would be slow, and not all of it is informative. The collected video is therefore divided into a plurality of video blocks of the same size that are continuous in time sequence, which makes it convenient to distinguish blocks containing effective behaviors from those containing invalid ones and to store them separately; processing only the effective blocks improves the processing speed and makes the detected behavior feature information more accurate.
In step S1212, video frames corresponding to two adjacent video blocks are extracted to obtain a video frame set.
In this embodiment, the segmented video blocks are obtained and the video frames corresponding to two adjacent blocks are extracted to obtain a video frame set. Because a video block may capture only part of the monitored object's body, or may not capture the monitored object at all, the frames must be preprocessed before the frame set is assembled, filtering out frames that do not meet the preset condition. The processor may mark frames without the monitored object, for example as 0, and screen those out, leaving frames that contain the monitored object. The processor may also mark a body part of the monitored object within a frame, for example marking pixels where the monitored object appears as 1 and all other pixels as 0; when the count of pixels marked 1 is below a preset threshold, the frame is screened out as one in which the monitored object is absent. After the invalid video information has been screened out, the frames corresponding to two adjacent video blocks are extracted to obtain the video frame set, whose data comprises adjacent continuous effective behavior actions or adjacent discontinuous effective behavior actions.
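A minimal sketch of the 0/1 marking and screening step, assuming per-frame binary masks from some upstream person detector and an arbitrary pixel threshold (the patent names no detector and no number):

```python
import numpy as np

def filter_frames(frames, subject_masks, min_subject_pixels=500):
    """Keep only frames in which the monitored object is visible.

    `subject_masks` are per-frame binary arrays (1 where the monitored
    object appears, 0 elsewhere), mirroring the marking described above.
    Frames whose mask holds fewer than `min_subject_pixels` ones are
    screened out as frames without the monitored object.
    """
    return [frame for frame, mask in zip(frames, subject_masks)
            if int(np.count_nonzero(mask)) >= min_subject_pixels]
```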
And step S1213, inputting the video frame set into a deep convolution network to extract behavior feature information.
In this embodiment, before the data in the video frame set is input into the deep convolutional network to extract behavior feature information, the frames in the set undergo image processing: each image is converted to grayscale and binarized, so that the data in the frame set becomes binarized data that the network can process. The deep convolutional network model may be an existing mainstream convolutional network, for example a binarized neural network. The binarized data is input into the deep convolutional network model to extract the behavior feature information, which includes the behavior, expression, and similar attributes of the monitored object; the behavior and expression of the monitored object can then be obtained from the processed feature information.
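With OpenCV, the grayscale-plus-binarization preprocessing might look like the sketch below; the threshold of 127 is an assumed value, since the patent names no number:

```python
import cv2

def binarize(frame, threshold=127):
    """Convert one BGR frame to grayscale, then binarize it for the network."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    return binary
```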
By dividing the video information of the environment where the monitored object is located into a plurality of video blocks of the same size that are continuous in time sequence, extracting the video frames corresponding to two adjacent blocks to obtain a video frame set, and inputting the frame set into a deep convolutional network, the behavior characteristic information of the monitored object is extracted.
Referring to fig. 5, fig. 5 is a detailed flowchart of step S123 in the fourth embodiment of the behavior monitoring method of the present invention, and in this embodiment, step S123 in the third embodiment includes:
step S1231, behavior feature information of the monitored object is obtained, and preset feature information matched with the behavior feature information is searched.
In this embodiment, matching levels for the preset feature information of the monitored object are pre-stored in the memory. The levels include mild, medium, heavy, and critical; each level corresponds to a different child-rearing scene, and the definition of each level can be adjusted according to the behaviors of the monitored object. The behavior characteristic information of the monitored object is acquired, and the preset characteristic information matched with it is searched.
And S1232, determining the nursery scene where the monitored object is located according to the matched preset characteristic information.
In one embodiment, the child-rearing scene for the monitored object is recommended according to the level of the matched preset feature information: when the matched preset feature information is extracted, its level is judged, and if the level is mild, the child-rearing scene corresponding to the mild level is played.
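The level-to-scene recommendation reduces to a small lookup table; the scene contents below are placeholders, since the patent does not enumerate them:

```python
SCENES_BY_LEVEL = {  # level -> recommended child-rearing scene (placeholder content)
    "mild": "gentle voice reminder with soothing music",
    "medium": "guided calming exercise",
    "heavy": "matched parenting case study and its solution",
    "critical": "immediate alert to the prompt terminal",
}

def scene_for(level):
    """Return the child-rearing scene recommended for a matched level."""
    return SCENES_BY_LEVEL.get(level, "continue monitoring")
```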
By acquiring the behavior characteristic information of the monitored object, searching for the preset characteristic information matched with it, looking up the grade corresponding to the matched preset characteristic information, and determining the child-rearing scene of the monitored object according to that grade, the monitored object receives targeted education and the education atmosphere is improved.
Referring to fig. 6, fig. 6 is a schematic flowchart of a fifth embodiment of the behavior monitoring method according to the present invention, in this embodiment, step S240 and step S250 are located after step S130 of the first embodiment, and include:
and step S210, collecting audio and video information of the environment where the monitoring object is located.
And step S220, determining the child-rearing scene of the monitored object according to the audio and video information.
Step S230, pushing the prompt information corresponding to the scene.
Step S240, detecting whether the behavior characteristic information of the monitored object matches the preset characteristic information within a preset time interval after the prompt information is output.
In this embodiment, when the acquired audio and video information reveals a wrong behavior mode, a voice prompt is sent through the voice module, for example "please pause", together with a suggestion of what to do next. When the collected information is found to be unfavorable to the overall atmosphere, the voice reminds the user to watch their wording, and the episode is stored for the user to review and reflect on later. Whether the improper behavior or improper keyword of the monitored object changes within a preset time interval after the voice prompt is then detected: for example, if within 20 seconds the monitored object still keeps a clenched fist, the processor judges that the improper behavior has not changed, that is, the behavior characteristic information still matches the preset characteristic information.
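The 20-second re-check can be sketched as a simple polling loop; the callable name and poll rate are assumptions, since the patent only gives the interval:

```python
import time

def still_improper_after(get_gesture, gesture="clenched_fist",
                         interval_s=20.0, poll_s=1.0):
    """Poll for `interval_s` seconds after the prompt; return True when the
    improper gesture is still present at the end of the interval.
    `get_gesture` stands in for whatever the analysis pipeline exposes."""
    deadline = time.monotonic() + interval_s
    present = False
    while time.monotonic() < deadline:
        present = (get_gesture() == gesture)  # keep the latest observation
        time.sleep(poll_s)
    return present
```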
And step S250, if the preset characteristic information is matched with the preset characteristic information, pushing the preset characteristic information to a prompt terminal corresponding to the monitored object in a voice mode.
In this embodiment, when the behavior feature information of the monitored object is detected to match the preset feature information, the current behavior of the monitored object is judged improper. The processor extracts keywords from the received audio and video information, compares them with the cases in the memory, finds a similar case and its solution, and then pushes the solution by voice to the prompt terminal of the monitored object. For example, when a child is learning to walk, the prompt terminal can push information or voice telling young parents how to child-proof the home, pad sharp corners, and move away fragile articles that might hurt the child; when a child resists studying or dawdles over homework, the prompt terminal tells the parents by voice how to guide the child.
By collecting the audio and video information of the environment where the monitored object is located, determining the child-rearing scene where the monitored object is located according to that information, pushing the prompt information corresponding to the scene, detecting whether the behavior characteristic information of the monitored object matches the preset characteristic information within a preset time interval after the prompt information is output, and, if it matches, pushing the preset characteristic information by voice to the prompt terminal corresponding to the monitored object, a timely prompt is delivered after improper behavior occurs and the atmosphere of interaction is improved.
Referring to fig. 7, fig. 7 is a flowchart illustrating a sixth embodiment of the behavior monitoring method according to the present invention. In this embodiment, step S340 and step S350 may be located at any position before or after the steps of the first embodiment, and include:
and step S310, collecting audio and video information of the environment where the monitoring object is located.
And step S320, determining the child-rearing scene of the monitored object according to the audio and video information.
And step S330, pushing prompt information corresponding to the scene.
Step S340, after receiving the instruction to enter the learning mode, playing the directory information of the stored books and audio files.
In this embodiment, the directory information is the book list of the stored books and audio files. After an instruction to enter the learning mode is received, the prompt function is closed, and only the directory information of the electronic books is broadcast through the voice module for the monitored object to choose from. The monitored object can selectively play the stored book or audio file suited to the actual situation, improving self-education ability and speaking skill, so that when the same scene appears next time the monitored object can respond correctly.
Step S350, after receiving the play instruction, playing the book or audio file corresponding to the play instruction.
In this embodiment, upon receiving a play instruction from the monitored object, the processor retrieves the matching electronic book from the memory and plays the book or audio file corresponding to the instruction.
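Steps S340 and S350 together amount to a catalogue-then-play loop; a minimal sketch, in which the library contents and the `play` callable are assumptions:

```python
LIBRARY = {  # title -> media file path (placeholder entries)
    "Child Developmental Psychology": "books/dev_psych.epub",
    "Soothing Stories": "audio/stories.mp3",
}

def enter_learning_mode(play):
    """Broadcast the catalogue, then play whichever stored title is requested.
    `play` stands in for the device's real playback routine."""
    print("Available titles:", ", ".join(sorted(LIBRARY)))
    choice = input("Title to play: ").strip()
    if choice in LIBRARY:
        play(LIBRARY[choice])
    else:
        print("Title not found; staying in learning mode.")
```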
By collecting the audio and video information of the environment where the monitored object is located, determining the child-rearing scene where the monitored object is located according to that information, pushing the prompt information corresponding to the scene, and playing the directory information of the stored books and audio files after the instruction to enter the learning mode is received, the education ability of the monitored object is improved, the speaking skill is improved, and the education atmosphere is improved.
Based on the same inventive concept, the present invention further provides a behavior monitoring device. As shown in fig. 8, fig. 8 is a functional block diagram of the behavior monitoring device of the present invention. The behavior monitoring device includes the behavior data acquisition module 10, the behavior data analysis module 20, and the behavior data pushing module 30, each of which is described below:
the behavior data acquisition module 10: the system is used for collecting audio and video information of the environment where a monitored object is located, dividing the video information of the environment where the monitored object is located into a plurality of video blocks which are identical in size and continuous in time sequence, dividing each video block into a plurality of video frames with identical resolution, extracting the video frames corresponding to two adjacent video blocks to obtain a video frame set, inputting the video frame set into a deep convolution network to extract behavior characteristic information, wherein the audio and video signals are non-contact signals, namely non-electric signals, and converting the non-electric signals into electric signals.
The behavior data analysis module 20: used for determining the child-rearing scene where the monitored object is located according to the audio and video information. The behavior data analysis module 20 is further configured to extract the behavior feature information of the monitored object from the audio and video information, acquire the preset feature information matched with the extracted behavior feature information, and determine the child-rearing scene of the monitored object according to the matched preset feature information; specifically, it obtains the behavior feature information of the monitored object, searches for the preset feature information matched with it, and determines the child-rearing scene accordingly. The matched preset feature information comprises keywords of inappropriate behavior and inappropriate actions, and the prompt information is soothing music.
The behavior data pushing module 30: used for selecting and pushing the prompt information corresponding to the scene. Specifically, the behavior data pushing module 30 is further configured to detect whether the behavior feature information of the monitored object matches the preset feature information within a preset time interval after the prompt information is output and, if so, to push the preset feature information by voice to the prompt terminal corresponding to the monitored object.
By collecting the audio and video information of the environment where the monitored object is located, dividing the video information into a plurality of video blocks of the same size that are continuous in time sequence, dividing each block into a plurality of frames with the same resolution, extracting the frames corresponding to two adjacent blocks to obtain a video frame set, inputting the frame set into a deep convolutional network to extract behavior characteristic information, acquiring the preset characteristic information matched with the extracted behavior characteristic information, and determining the child-rearing scene of the monitored object according to the match, a complete behavior monitoring device is formed that can improve the education atmosphere in a manner targeted to the child-rearing scene and the behavior of the monitored object.
Based on the same inventive concept, the embodiment of the present application further provides a computer storage medium, where a behavior monitoring program is stored in the computer storage medium, and when the behavior monitoring program is executed by a processor, the steps of the behavior monitoring method described above are implemented, and the same technical effect can be achieved, and no further description is provided here to avoid repetition.
Since the computer storage medium provided in the embodiments of the present application is a computer storage medium used for implementing the method in the embodiments of the present application, based on the method described in the embodiments of the present application, a person skilled in the art can understand a specific structure and a modification of the computer storage medium, and thus details are not described here. Computer storage media used in the methods of embodiments of the present application are all intended to be protected by the present application.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable computer storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A behavior monitoring method is characterized by being applied to a behavior monitoring device; the method comprises the following steps:
collecting audio and video information of an environment where a monitored object is located;
determining the child-rearing scene where the monitored object is located according to the audio and video information;
and pushing prompt information corresponding to the scene.
2. The behavior monitoring method according to claim 1, wherein the step of determining the child-rearing scene where the monitored object is located according to the audio and video information includes:
extracting the behavior characteristic information of the monitored object from the audio and video information;
acquiring preset characteristic information matched with the extracted behavior characteristic information;
and determining the child-rearing scene of the monitored object according to the matched preset characteristic information.
3. The behavior monitoring method according to claim 2, wherein extracting the behavior characteristic information of the monitored object from the audio and video information includes:
cutting the video information of the environment where the monitored object is located into a plurality of video blocks of the same size that are continuous in time sequence;
cutting each video block into a plurality of video frames with the same resolution;
extracting video frames corresponding to two adjacent video blocks to obtain a video frame set;
and inputting the video frame set into a deep convolutional network to extract behavior characteristic information.
4. The behavior monitoring method according to claim 3, wherein determining the child-rearing scene of the monitored object according to the matched preset feature information comprises:
acquiring the behavior characteristic information of the monitored object, and searching for preset characteristic information matched with the behavior characteristic information;
and determining the child-rearing scene of the monitored object according to the matched preset characteristic information.
5. The behavior monitoring method according to claim 4, wherein the matched preset characteristic information comprises keywords of inappropriate behavior and inappropriate actions, and the prompt information is soothing music.
6. The behavior monitoring method according to claim 1, wherein after the pushing of the prompt information corresponding to the scene, the method includes:
detecting whether behavior characteristic information of the monitored object is matched with preset characteristic information within a preset time interval after the prompt information is output;
and if so, pushing the preset feature information to a prompt terminal corresponding to the monitored object in a voice mode.
7. The behavior monitoring method of claim 1, further comprising:
after receiving an instruction of entering a learning mode, playing directory information of the stored books and audio files;
and after receiving a playing instruction, playing the book or audio file corresponding to the playing instruction.
8. A behavior monitoring device, comprising:
a behavior data acquisition module: the system is used for acquiring audio and video information of the environment where the monitored object is located;
a behavior data analysis module: used for determining the child-rearing scene where the monitored object is located according to the audio and video information;
a behavior pushing module: and the system is used for pushing prompt information corresponding to the scene.
9. A behavior monitoring device, comprising a memory, a processor, and a behavior monitoring program stored in the memory and executable on the processor, the behavior monitoring program, when executed by the processor, implementing the steps of the behavior monitoring method according to any one of claims 1-7.
10. A computer storage medium, characterized in that the computer storage medium stores a behavior monitoring program, which when executed by a processor implements the steps of the behavior monitoring method according to any one of claims 1-7.
CN202011595416.4A (priority and filing date 2020-12-28): Behavior monitoring method and apparatus, behavior monitoring device, and computer storage medium. Status: Pending.

Priority Applications (1)

Application Number: CN202011595416.4A; Priority Date: 2020-12-28; Filing Date: 2020-12-28; Title: Behavior monitoring method and apparatus, behavior monitoring device, and computer storage medium

Applications Claiming Priority (1)

Application Number: CN202011595416.4A; Priority Date: 2020-12-28; Filing Date: 2020-12-28; Title: Behavior monitoring method and apparatus, behavior monitoring device, and computer storage medium

Publications (1)

Publication Number: CN112580593A; Publication Date: 2021-03-30

Family

ID=75144170

Family Applications (1)

Application Number: CN202011595416.4A; Status: Pending; Publication: CN112580593A (en); Priority Date: 2020-12-28; Filing Date: 2020-12-28; Title: Behavior monitoring method and apparatus, behavior monitoring device, and computer storage medium

Country Status (1)

Country: CN; Publication: CN112580593A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination