CN115240715A - Child care monitoring method based on child bracelet - Google Patents


Publication number
CN115240715A
CN115240715A
Authority
CN
China
Prior art keywords
emotion
child
audio data
wearing
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210927413.9A
Other languages
Chinese (zh)
Inventor
李兴昶
熊佳
左俊
陈瑶
Current Assignee
Shanghai Sanli Information Technology Co ltd
Original Assignee
Shanghai Sanli Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sanli Information Technology Co ltd
Priority to CN202210927413.9A
Publication of CN115240715A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 — Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51 — Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L25/63 — Speech or voice analysis techniques specially adapted for estimating an emotional state
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/08 — Learning methods
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 — Speech synthesis; Text to speech systems
    • G10L13/02 — Methods for producing synthetic speech; Speech synthesisers
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 — Speech or voice analysis techniques characterised by the analysis technique
    • G10L25/30 — Speech or voice analysis techniques characterised by the analysis technique using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Alarm Systems (AREA)

Abstract

The invention discloses a childcare monitoring method based on a child bracelet, and relates to the technical field of childcare monitoring. The childcare monitoring method first acquires emotion characterization data collected in real time for the wearing child by an emotion characterization data acquisition module, then imports the emotion characterization data into a pre-trained emotion recognition model based on an artificial neural network to obtain an emotion recognition result, and finally switches from a silent mode to an AI automatic conversation mode when the emotion recognition result shows that the wearing child is in a certain preset emotion requiring soothing intervention by a guardian.

Description

Child care monitoring method based on child bracelet
Technical Field
The invention belongs to the technical field of childcare monitoring, and particularly relates to a childcare monitoring method based on a child bracelet.
Background
Narrowly defined, childcare service is a mechanism or system whereby an infant under 3 years old, whose normal home care is insufficient or disrupted, must leave the guardian (typically but not limited to a parent) for some period of the day and receive substitute care from other people or institutions. As times develop and educational concepts progress, guardians also expect childcare services to provide an educational function in addition to care and protection, so the boundary between childcare and early education becomes hard to draw; in other words, childcare service in the broad sense can extend to care and early education for preschool children aged 0 to 6.
After a child is placed in a childcare institution (such as a childcare center or kindergarten), the disappearance of the guardian or an accident during play (such as a toy failing or falling) may trigger emotions such as anger, fear, sadness or disgust, which a childcare teacher then needs to soothe promptly to prevent further deterioration. However, such emotions are not easy to discover in time, and factors such as a temporary shortage of staff or a teacher's poor soothing skill can let the situation worsen, for example into unstoppable crying and screaming. In other words, the existing childcare monitoring technology may fail to respond to a child's emotions in time to soothe them.
Disclosure of Invention
The invention aims to provide a childcare monitoring method, a childcare monitoring device, a childcare monitoring system, a computer device and a computer-readable storage medium, so as to solve the problem that the existing childcare monitoring technology may not respond to a child's emotions in time to soothe them.
In order to achieve the above purpose, the invention adopts the following technical solutions:
in a first aspect, a child care monitoring method based on a child bracelet is provided, which includes:
acquiring emotion characterization data collected in real time for the wearing child by an emotion characterization data acquisition module, wherein the emotion characterization data acquisition module is arranged in the child bracelet;
importing the emotion characterization data into a pre-trained emotion recognition model based on an artificial neural network to obtain an emotion recognition result;
when the emotion recognition result shows that the wearing child is in a certain preset emotion requiring soothing intervention by a guardian, switching from a silent mode to an AI automatic conversation mode: generating AI automatic conversation audio data based on the guardian's voiceprint characteristic information, playing the AI automatic conversation audio data through a voice speaker, acquiring audio data collected in real time from the wearing child by a sound pickup, and generating new AI automatic conversation audio data in response to that audio data, so that the child bracelet carries on an AI automatic conversation with the wearing child until the certain preset emotion disappears, wherein the certain preset emotion is anger, fear, sadness or disgust, and the voice speaker and the sound pickup are respectively arranged in the child bracelet.
Based on the invention, a child bracelet scheme is provided that automatically responds to and soothes a child's emotions using emotion recognition and AI automatic conversation: emotion characterization data collected in real time for the wearing child by an emotion characterization data acquisition module is acquired; the emotion characterization data is imported into a pre-trained emotion recognition model based on an artificial neural network to obtain an emotion recognition result; and when the result shows that the wearing child is in a certain preset emotion requiring soothing intervention by a guardian, the bracelet switches from the silent mode to the AI automatic conversation mode and generates AI automatic conversation audio data based on the guardian's voiceprint characteristic information. The child's emotions can thus be responded to in time, the guardian can be simulated to soothe the child automatically and effectively, deterioration of the situation can be curbed, the workload of childcare teachers is reduced, and the childcare monitoring function is expanded, facilitating practical application and popularization.
In one possible design, the emotion characterization data acquisition module comprises an electrode type heart rate sensor and/or a sound pickup, and the emotion characterization data comprises heart rate data and/or audio data.
In one possible design, when the emotion recognition result shows that the wearing child is in a certain preset emotion requiring soothing intervention by a guardian, switching from the silent mode to the AI automatic conversation mode includes:
when the emotion recognition result shows that the wearing child is in a certain preset emotion requiring soothing intervention by a guardian, starting a first timer, wherein the certain preset emotion is anger, fear, sadness or disgust;
and before the first timer reaches a preset first duration threshold, continuously judging according to the emotion recognition result whether the wearing child remains in the certain preset emotion, and if so, switching from the silent mode to the AI automatic conversation mode when the first timer reaches the first duration threshold.
In one possible design, after entering the AI automatic dialog mode, the method further includes:
starting a second timer when entering an AI automatic dialogue mode;
when the second timer reaches a preset second duration threshold, if the certain preset emotion is still found to exist according to the emotion recognition result, automatically calling the guardian terminal bound to the wearing child, and switching from the AI automatic conversation mode to a manual conversation mode when the call is answered: audio data from the guardian terminal is played through the voice speaker, and audio data collected in real time from the wearing child by the sound pickup is transmitted to the guardian terminal, so that the guardian and the wearing child can hold a manual conversation.
In one possible design, after automatically calling the guardian terminal bound to the wearing child, the method further includes:
starting a wireless positioning unit to determine the current position of the child bracelet;
sending a warning message to a childcare teacher terminal, wherein the warning message contains the certain preset emotion, the personal identification information of the wearing child, and an electronic map of the childcare area marked with the current position.
In one possible design, after entering the AI automatic dialog mode, the method further includes the following steps S51 to S53:
s51, continuously judging whether the wearing child is still in the certain preset emotion according to the emotion recognition result, if so, returning to execute the step S51, otherwise, starting a third timer, and then executing the step S52;
s52, before the timing of the third timer reaches a preset third duration threshold, continuously judging whether the wearing child is in the certain preset emotion again according to the emotion recognition result, if so, finishing the timing and initializing of the third timer, and then returning to execute the step S51, otherwise, executing the step S53;
and S53, when the timing of the third timer reaches the third duration threshold, determining that the certain preset emotion disappears, and switching from the AI automatic conversation mode to the silence mode.
In a second aspect, a child care monitoring device based on a child bracelet is provided, which comprises a data acquisition unit, an emotion recognition unit and a mode switching unit which are sequentially in communication connection;
the data acquisition unit is used for acquiring emotion characterization data collected in real time for the wearing child by an emotion characterization data acquisition module, wherein the emotion characterization data acquisition module is arranged in the child bracelet;
the emotion recognition unit is used for importing the emotion characterization data into a pre-trained emotion recognition model based on an artificial neural network to obtain an emotion recognition result;
the mode switching unit is used for switching from the silent mode to the AI automatic conversation mode when the emotion recognition result shows that the wearing child is in a certain preset emotion requiring soothing intervention by a guardian: generating AI automatic conversation audio data based on the guardian's voiceprint characteristic information, playing the AI automatic conversation audio data through the voice speaker, acquiring audio data collected in real time from the wearing child by the sound pickup, and generating new AI automatic conversation audio data in response to that audio data, so that the child bracelet carries on an AI automatic conversation with the wearing child until the certain preset emotion disappears, wherein the certain preset emotion is anger, fear, sadness or disgust, and the voice speaker and the sound pickup are respectively arranged in the child bracelet.
In a third aspect, a child bracelet system is provided, which includes an emotion characterization data acquisition module, a voice loudspeaker, a sound pickup and a processing module, wherein the processing module is respectively in communication connection with the emotion characterization data acquisition module, the voice loudspeaker and the sound pickup;
the emotion representation data acquisition module is used for acquiring emotion representation data of a wearing child in real time and transmitting the emotion representation data to the processing module;
the voice loudspeaker is used for playing the audio data from the processing module;
the sound pickup is used for collecting the audio data of the wearing child and transmitting the audio data of the wearing child to the processing module;
the processing module is configured to perform the method for remote monitoring as described in the first aspect or any possible design of the first aspect.
In a fourth aspect, a computer device is provided, which comprises a memory, a processor and a transceiver, wherein the memory is used for storing a computer program, the transceiver is used for transmitting and receiving messages, and the processor is used for reading the computer program and executing the childcare monitoring method according to the first aspect or any possible design of the first aspect.
In a fifth aspect, a computer-readable storage medium is provided, having instructions stored thereon which, when run on a computer, perform the childcare monitoring method according to the first aspect or any of its possible designs.
In a sixth aspect, a computer program product is provided, comprising instructions which, when run on a computer, cause the computer to perform the childcare monitoring method as described in the first aspect or any of its possible designs.
Advantageous effects:
(1) The invention provides a child bracelet scheme that automatically responds to and soothes a child's emotions using emotion recognition and AI automatic conversation: emotion characterization data collected in real time for the wearing child by an emotion characterization data acquisition module is acquired; the emotion characterization data is imported into a pre-trained emotion recognition model based on an artificial neural network to obtain an emotion recognition result; and when the result shows that the wearing child is in a certain preset emotion requiring soothing intervention by a guardian, the bracelet switches from the silent mode to the AI automatic conversation mode and generates AI automatic conversation audio data based on the guardian's voiceprint characteristic information. The child's emotions can thus be responded to in time, the guardian can be simulated to soothe the child automatically and effectively, deterioration of the situation can be prevented, the workload of childcare teachers is reduced, and the childcare monitoring function is expanded, facilitating practical application and popularization.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow chart of a childcare monitoring method based on a child bracelet according to this embodiment.
Fig. 2 is a schematic structural diagram of the child bracelet-based childcare monitoring device according to this embodiment.
Fig. 3 is a schematic structural diagram of the child bracelet system provided in this embodiment.
Fig. 4 is a schematic structural diagram of the computer device provided in this embodiment.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the present invention is described below with reference to the accompanying drawings and embodiments. Obviously, the described embodiments are only some embodiments of the present invention, and those skilled in the art can obtain other embodiments from them without creative effort. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto.
It will be understood that, although the terms first, second, etc. may be used herein to describe various objects, these objects should not be limited by these terms. These terms are only used to distinguish one object from another. For example, a first object may be referred to as a second object, and similarly, a second object may be referred to as a first object, without departing from the scope of example embodiments of the present invention.
It should be understood that, for the term "and/or" as may appear herein, it merely describes an association relationship of associated objects, meaning that three relationships may exist; e.g., A and/or B may mean: A exists alone, B exists alone, or A and B exist at the same time. For the term "/and" as may appear herein, it describes another association relationship, meaning that two relationships may exist; e.g., A/and B may mean: A exists alone, or A and B exist simultaneously. In addition, the character "/" as may appear herein generally means that the associated objects before and after it are in an "or" relationship.
As shown in fig. 1, the childcare monitoring method based on the child bracelet according to the first aspect of this embodiment may be, but is not limited to being, executed by a computer device that has certain computing resources and is respectively in communication connection with an emotion characterization data acquisition module, a voice speaker and a sound pickup, for example a processing device such as a microcontroller, a single-chip microcomputer, an FPGA (Field-Programmable Gate Array) or a PLC (Programmable Logic Controller). As shown in fig. 1, the childcare monitoring method is executed by a processing module in the child bracelet and may include, but is not limited to, the following steps S1 to S3.
S1, acquiring emotion characterization data collected in real time for the wearing child by an emotion characterization data acquisition module, wherein the emotion characterization data acquisition module is arranged in the child bracelet.
In step S1, the wearing child is the child wearer of the child bracelet. Specifically, the emotion characterization data acquisition module includes, but is not limited to, an electrode-type heart rate sensor and/or a sound pickup, and the emotion characterization data includes, but is not limited to, heart rate data and/or audio data, wherein the heart rate data is collected by the electrode-type heart rate sensor and the audio data is collected by the sound pickup, both based on the prior art.
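To make the notion of emotion characterization data concrete, the sketch below (Python) turns one heart-rate reading and one audio frame into a small feature vector. The specific features chosen here (normalized heart rate, RMS loudness, zero-crossing rate) are illustrative assumptions; the patent only specifies that heart rate data and/or audio data are collected.

```python
import numpy as np

def extract_features(heart_rate_bpm, audio_samples):
    """Combine one heart-rate reading and one audio frame into a feature
    vector. The features are illustrative, not the patent's own choices."""
    audio = np.asarray(audio_samples, dtype=float)
    rms = float(np.sqrt(np.mean(audio ** 2)))        # loudness: crying is loud
    signs = np.sign(audio)
    zcr = float(np.mean(signs[1:] != signs[:-1]))    # roughness of the waveform
    return np.array([heart_rate_bpm / 200.0, rms, zcr])

# Example: an agitated reading (high heart rate, loud 440 Hz tone)
t = np.linspace(0, 1, 16000, endpoint=False)
features = extract_features(140, 0.8 * np.sin(2 * np.pi * 440 * t))
```

A real bracelet would compute such features continuously on short sliding windows before handing them to the recognition model of step S2.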
S2, importing the emotion characterization data into a pre-trained emotion recognition model based on an artificial neural network to obtain an emotion recognition result.
In step S2, an Artificial Neural Network (ANN) is a complex network structure formed by connecting a large number of processing units (i.e., neurons). It abstracts, simplifies and simulates the organization and operating mechanism of the human brain, modelling neuron activity mathematically, and is an information processing system built by imitating the structure and function of brain neural networks. It therefore has self-learning, self-organization and self-adaptation capabilities, strong nonlinear function approximation and fault tolerance, and can realize functions such as simulation, binary image recognition, prediction and fuzzy control, making it a powerful tool for processing nonlinear systems. The emotion recognition model can thus be obtained in advance by conventional learning and training, so that after the emotion characterization data is input, a corresponding emotion recognition result is output, wherein the emotion recognition result includes, but is not limited to, happiness, anger, fear, sadness, disgust and surprise.
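As a minimal sketch of the inference side of step S2, the code below (Python/NumPy) shows the forward pass of a one-layer softmax classifier mapping a feature vector to one of the six emotion labels. The weights are random placeholders and the four-feature input layout is an assumption; an actual deployment would use a network trained on labelled data as the patent describes.

```python
import numpy as np

EMOTIONS = ["happiness", "anger", "fear", "sadness", "disgust", "surprise"]

rng = np.random.default_rng(0)
W = rng.normal(size=(4, len(EMOTIONS)))   # placeholder weights (untrained)
b = np.zeros(len(EMOTIONS))

def recognize_emotion(features):
    """One-layer softmax classifier: features -> (emotion label, probabilities)."""
    logits = np.asarray(features, dtype=float) @ W + b
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    probs = exp / exp.sum()
    return EMOTIONS[int(np.argmax(probs))], probs

# Hypothetical normalized features: [heart rate, HRV, loudness, pitch]
label, probs = recognize_emotion([0.7, 0.3, 0.6, 0.4])
```

With trained weights, the returned label would feed directly into the mode-switching decision of step S3.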
S3, when the emotion recognition result shows that the wearing child is in a certain preset emotion requiring soothing intervention by a guardian, switching from the silent mode to an AI automatic conversation mode: generating AI automatic conversation audio data based on the guardian's voiceprint characteristic information, playing the AI automatic conversation audio data through the voice speaker, acquiring audio data collected in real time from the wearing child by the sound pickup, and generating new AI automatic conversation audio data in response to that audio data, so that the child bracelet carries on an AI automatic conversation with the wearing child until the certain preset emotion disappears, wherein the certain preset emotion is anger, fear, sadness, disgust or the like, and the voice speaker and the sound pickup are respectively arranged in the child bracelet.
In step S3, in order to prevent mistakenly entering the AI automatic conversation mode because of an accidental situation or inaccurate recognition, preferably, switching from the silent mode to the AI automatic conversation mode when the wearing child is found to be in a certain preset emotion requiring soothing intervention by a guardian includes, but is not limited to, the following steps S31 to S32: S31, when the emotion recognition result shows that the wearing child is in the certain preset emotion, starting a first timer, wherein the certain preset emotion is anger, fear, sadness, disgust or the like; S32, before the first timer reaches a preset first duration threshold, continuously judging according to the emotion recognition result whether the wearing child remains in the certain preset emotion, and if so, switching from the silent mode to the AI automatic conversation mode when the first timer reaches the first duration threshold. Specifically, the first duration threshold may be, for example, 30 seconds or 60 seconds.
Thus, the childcare monitoring method based on the child bracelet described in steps S1 to S3 provides a child bracelet scheme that automatically responds to and soothes a child's emotions using emotion recognition and AI automatic conversation: emotion characterization data collected in real time for the wearing child by the emotion characterization data acquisition module is acquired; the emotion characterization data is imported into a pre-trained emotion recognition model based on an artificial neural network to obtain an emotion recognition result; and when the result shows that the wearing child is in a certain preset emotion requiring soothing intervention by a guardian, the bracelet switches from the silent mode to the AI automatic conversation mode and generates AI automatic conversation audio data based on the guardian's voiceprint characteristic information. The child's emotions can thus be responded to in time, the guardian can be simulated to soothe the child automatically and effectively, deterioration of the situation can be curbed, the workload of childcare teachers is reduced, and the childcare monitoring function is expanded, facilitating practical application and popularization.
On the basis of the technical solution of the first aspect, this embodiment further provides a first possible design of how to switch to manual soothing: after entering the AI automatic conversation mode, the method further includes, but is not limited to, the following steps S41 to S42.
S41, starting a second timer when the AI automatic conversation mode is entered.
S42, when the second timer reaches a preset second duration threshold, if the certain preset emotion is still found to exist according to the emotion recognition result, automatically calling the guardian terminal bound to the wearing child, and switching from the AI automatic conversation mode to the manual conversation mode when the call is answered: audio data from the guardian terminal is played through the voice speaker, and audio data collected in real time from the wearing child by the sound pickup is transmitted to the guardian terminal, so that the guardian and the wearing child can hold a manual conversation. The guardian can thus be called in time for remote soothing, preventing the situation from deteriorating further. Meanwhile, after the guardian terminal bound to the wearing child is automatically called, the method further includes, but is not limited to: first, starting a wireless positioning unit to determine the current position of the child bracelet; then, sending a warning message to a childcare teacher terminal, wherein the warning message contains, but is not limited to, the certain preset emotion, the personal identification information of the wearing child, and an electronic map of the childcare area marked with the current position.
Thus, based on the first possible design, the relevant personnel can be reminded in time to soothe the child manually, further curbing deterioration of the situation.
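Steps S41–S42 together with the teacher alert can be sketched as follows (Python). The guardian call, positioning unit and teacher notification are stubbed out as injected callables, since they are hardware- and network-dependent; the function and field names are this sketch's own inventions.

```python
SOOTHE_EMOTIONS = {"anger", "fear", "sadness", "disgust"}

def escalate_if_needed(emotion_at_deadline, child_id,
                       call_guardian, locate, notify_teacher):
    """When the second timer expires with the preset emotion still present:
    call the bound guardian terminal, locate the bracelet, and send the
    childcare teacher a warning with emotion, identity and position."""
    if emotion_at_deadline not in SOOTHE_EMOTIONS:
        return None                        # emotion gone: no escalation needed
    call_guardian(child_id)                # AI dialog -> manual dialog on answer
    position = locate()                    # wireless positioning unit
    warning = {
        "emotion": emotion_at_deadline,
        "child_id": child_id,              # personal identification information
        "position": position,              # to be marked on the area map
    }
    notify_teacher(warning)
    return warning

# Example with stub hardware/network callables
calls = []
warning = escalate_if_needed(
    "sadness", "child-042",
    call_guardian=lambda cid: calls.append(("call", cid)),
    locate=lambda: (31.23, 121.47),        # hypothetical coordinates
    notify_teacher=lambda msg: calls.append(("notify", msg["child_id"])),
)
```

Injecting the side effects keeps the escalation decision itself testable in isolation from the bracelet hardware.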
On the basis of the technical solution of the first aspect, this embodiment further provides a second possible design of how to automatically return to the silent mode: after entering the AI automatic conversation mode, the method further includes, but is not limited to, the following steps S51 to S53.
S51, continuously judging according to the emotion recognition result whether the wearing child is still in the certain preset emotion; if so, returning to step S51; otherwise, starting a third timer and then executing step S52.
S52, before the third timer reaches a preset third duration threshold, continuously judging according to the emotion recognition result whether the wearing child is in the certain preset emotion again; if so, stopping and resetting the third timer and returning to step S51; otherwise, executing step S53.
S53, when the third timer reaches the third duration threshold, determining that the certain preset emotion has disappeared, and switching from the AI automatic conversation mode to the silent mode.
Thus, based on the second possible design, the AI automatic conversation can end automatically once the wearing child's emotion is stable.
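The third-timer recovery logic of steps S51–S53 mirrors the entry debounce: the bracelet leaves the AI conversation mode only after the preset emotion has been absent for a full window. A sketch (Python; names invented for this illustration, timestamps passed in explicitly):

```python
SOOTHE_EMOTIONS = {"anger", "fear", "sadness", "disgust"}

class RecoveryWatcher:
    """Return to the silent mode only after the preset emotion has been
    absent for a full third-timer window (steps S51-S53)."""

    def __init__(self, threshold=60.0):      # third duration threshold, seconds
        self.threshold = threshold
        self.mode = "ai_dialog"
        self._calm_since = None              # the "third timer"

    def on_emotion(self, emotion, now):
        """Feed one emotion recognition result with its timestamp."""
        if self.mode != "ai_dialog":
            return self.mode
        if emotion in SOOTHE_EMOTIONS:
            self._calm_since = None                          # S52: emotion back, reset
        elif self._calm_since is None:
            self._calm_since = now                           # S51: calm seen, start timer
        elif now - self._calm_since >= self.threshold:
            self.mode = "silent"                             # S53: emotion gone
        return self.mode
```

A brief calm moment followed by renewed crying therefore resets the timer instead of ending the conversation prematurely.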
As shown in fig. 2, a second aspect of this embodiment provides a virtual device for implementing the childcare monitoring method according to the first aspect or any of its possible designs, including a data acquisition unit, an emotion recognition unit and a mode switching unit that are sequentially in communication connection;
the data acquisition unit is used for acquiring emotion characterization data collected in real time for the wearing child by an emotion characterization data acquisition module, wherein the emotion characterization data acquisition module is arranged in the child bracelet;
the emotion recognition unit is used for importing the emotion characterization data into a pre-trained emotion recognition model based on an artificial neural network to obtain an emotion recognition result;
the mode switching unit is used for switching from the silent mode to the AI automatic conversation mode when the emotion recognition result shows that the wearing child is in a certain preset emotion requiring soothing intervention by a guardian: generating AI automatic conversation audio data based on the guardian's voiceprint characteristic information, playing the AI automatic conversation audio data through the voice speaker, acquiring audio data collected in real time from the wearing child by the sound pickup, and generating new AI automatic conversation audio data in response to that audio data, so that the child bracelet carries on an AI automatic conversation with the wearing child until the certain preset emotion disappears, wherein the certain preset emotion is anger, fear, sadness or disgust, and the voice speaker and the sound pickup are respectively arranged in the child bracelet.
For the working process, working details, and technical effects of the foregoing device provided in the second aspect of this embodiment, reference may be made to the childcare monitoring method according to the first aspect or any possible design thereof, which is not described herein again.
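The mode switching unit described above behaves as a small state machine: the bracelet stays silent until a preset emotion is recognized, then loops between playing guardian-voiced audio and responding to the child's picked-up audio. The sketch below illustrates that control flow under stated assumptions; all names (`ModeSwitchingUnit`, `synthesize_with_voiceprint`, `pick_up`, etc.) are hypothetical stand-ins, not APIs from the patent.

```python
from enum import Enum, auto

# Preset emotions that require soothing intervention, per the description.
PRESET_EMOTIONS = {"anger", "fear", "sadness", "disgust"}

class Mode(Enum):
    SILENT = auto()
    AI_DIALOG = auto()

class ModeSwitchingUnit:
    def __init__(self, synthesize_with_voiceprint, play, pick_up):
        self.mode = Mode.SILENT
        self.synthesize = synthesize_with_voiceprint  # TTS seeded with guardian voiceprint
        self.play = play                              # voice speaker output
        self.pick_up = pick_up                        # sound pickup input

    def on_emotion(self, emotion):
        """Called with each new emotion recognition result."""
        if self.mode is Mode.SILENT and emotion in PRESET_EMOTIONS:
            self.mode = Mode.AI_DIALOG
            self.play(self.synthesize("opening comfort prompt"))
        elif self.mode is Mode.AI_DIALOG:
            if emotion in PRESET_EMOTIONS:
                child_audio = self.pick_up()             # child's real-time audio
                self.play(self.synthesize(child_audio))  # reply in guardian's voice
            else:
                self.mode = Mode.SILENT                  # emotion gone: back to silent
```

A usage example: constructing the unit with stub callables and feeding it a sequence of emotion labels exercises the silent → AI dialogue → silent transitions.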
As shown in fig. 3, a third aspect of this embodiment provides a child bracelet system for implementing the childcare monitoring method according to the first aspect or any possible design thereof, including an emotion characterization data acquisition module, a voice speaker, a sound pickup, and a processing module, where the processing module is respectively in communication connection with the emotion characterization data acquisition module, the voice speaker, and the sound pickup; the emotion characterization data acquisition module is used for collecting emotion characterization data of a wearing child in real time and transmitting the emotion characterization data to the processing module; the voice speaker is used for playing audio data from the processing module; the sound pickup is used for collecting audio data of the wearing child and transmitting the audio data to the processing module; and the processing module is configured to perform the childcare monitoring method according to the first aspect or any possible design thereof.
For the working process, working details, and technical effects of the foregoing system provided in the third aspect of this embodiment, reference may be made to the childcare monitoring method according to the first aspect or any possible design thereof, which is not described herein again.
As shown in fig. 4, a fourth aspect of this embodiment provides a computer device for executing the childcare monitoring method according to the first aspect or any possible design thereof, including a memory, a processor, and a transceiver which are sequentially in communication connection, where the memory is used for storing a computer program, the transceiver is used for transmitting and receiving messages, and the processor is used for reading the computer program to execute the childcare monitoring method according to the first aspect or any possible design thereof. For example, the memory may include, but is not limited to, a Random-Access Memory (RAM), a Read-Only Memory (ROM), a Flash Memory, a First-In First-Out memory (FIFO), and/or a First-In Last-Out memory (FILO); the processor may be, but is not limited to, a microprocessor of the STM32F105 family. In addition, the computer device may also include, but is not limited to, a power module, a display screen, and other necessary components.
For the working process, working details, and technical effects of the foregoing computer device provided in the fourth aspect of this embodiment, reference may be made to the childcare monitoring method according to the first aspect or any possible design thereof, which is not described herein again.
A fifth aspect of this embodiment provides a computer-readable storage medium storing instructions for implementing the childcare monitoring method according to the first aspect or any possible design thereof; when the instructions are run on a computer, the childcare monitoring method according to the first aspect or any possible design thereof is performed. The computer-readable storage medium refers to a carrier for storing data, and may include, but is not limited to, a floppy disk, an optical disc, a hard disk, a flash memory, a flash drive, and/or a memory stick; the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
For the working process, working details, and technical effects of the foregoing computer-readable storage medium provided in the fifth aspect of this embodiment, reference may be made to the childcare monitoring method according to the first aspect or any possible design thereof, which is not described herein again.
A sixth aspect of this embodiment provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the childcare monitoring method according to the first aspect or any possible design thereof. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
Finally, it should be noted that: the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A childcare monitoring method based on a child bracelet, characterized in that the method is executed by a processing module in the child bracelet and comprises the following steps:
acquiring emotion characterization data collected in real time from a wearing child by an emotion characterization data acquisition module, wherein the emotion characterization data acquisition module is arranged in the child bracelet;
importing the emotion characterization data into an emotion recognition model which is based on an artificial neural network and trained in advance, to obtain an emotion recognition result;
when the wearing child is found, according to the emotion recognition result, to be in a certain preset emotion requiring soothing intervention by a guardian, switching from a silent mode to an AI automatic dialogue mode: generating AI automatic dialogue audio data based on voiceprint characteristic information of the guardian, playing the AI automatic dialogue audio data through a voice speaker, acquiring audio data collected in real time from the wearing child by a sound pickup, and generating new AI automatic dialogue audio data in response to the audio data, so that the child bracelet conducts an AI automatic dialogue with the wearing child until the certain preset emotion disappears, wherein the certain preset emotion is anger, fear, sadness, or disgust, and the voice speaker and the sound pickup are respectively arranged in the child bracelet.
2. The childcare monitoring method according to claim 1, wherein the emotion characterization data acquisition module comprises an electrode-based heart rate sensor and/or a microphone, and the emotion characterization data correspondingly comprises heart rate data and/or audio data.
3. The childcare monitoring method according to claim 1, wherein, when the wearing child is found to be in a certain preset emotion requiring soothing intervention by a guardian according to the emotion recognition result, entering the AI automatic dialogue mode comprises:
when the wearing child is found to be in the certain preset emotion requiring soothing intervention by the guardian according to the emotion recognition result, starting a first timer, wherein the certain preset emotion is anger, fear, sadness, or disgust;
before the first timer reaches a preset first duration threshold, continuously judging, according to the emotion recognition result, whether the wearing child remains in the certain preset emotion, and if so, entering the AI automatic dialogue mode when the first timer reaches the first duration threshold.
4. The childcare monitoring method according to claim 1, wherein, after entering the AI automatic dialogue mode, the method further comprises:
starting a second timer upon entering the AI automatic dialogue mode;
when the second timer reaches a preset second duration threshold, if it is continuously found, according to the emotion recognition result, that the certain preset emotion still has not disappeared, automatically calling a guardian terminal bound to the wearing child, and switching from the AI automatic dialogue mode to a manual dialogue mode once the call is answered: playing audio data from the guardian terminal through the voice speaker, and transmitting the audio data collected in real time from the wearing child by the sound pickup to the guardian terminal, so that the guardian conducts a manual dialogue with the wearing child.
5. The childcare monitoring method according to claim 4, wherein, after automatically calling the guardian terminal bound to the wearing child, the method further comprises:
if the call to the guardian terminal is not answered before timing out, starting a wireless positioning unit to determine the current position of the child bracelet;
sending a warning message to a childcare teacher terminal, wherein the warning message comprises the certain preset emotion, personal identification information of the wearing child, and an electronic map of the childcare area marked with the current position.
6. The childcare monitoring method according to claim 1, wherein, after entering the AI automatic dialogue mode, the method further comprises the following steps S51 to S53:
S51, continuously judging, according to the emotion recognition result, whether the wearing child is still in the certain preset emotion; if so, returning to step S51; otherwise, starting a third timer and then executing step S52;
S52, before the third timer reaches a preset third duration threshold, continuously judging, according to the emotion recognition result, whether the wearing child is in the certain preset emotion again; if so, stopping and resetting the third timer and then returning to step S51; otherwise, executing step S53;
S53, when the third timer reaches the third duration threshold, determining that the certain preset emotion has disappeared, and switching from the AI automatic dialogue mode back to the silent mode.
7. A child care monitoring device based on a child bracelet is characterized by comprising a data acquisition unit, an emotion recognition unit and a mode switching unit which are sequentially in communication connection;
the data acquisition unit is used for acquiring emotion characterization data collected in real time from a wearing child by an emotion characterization data acquisition module, wherein the emotion characterization data acquisition module is arranged in the child bracelet;
the emotion recognition unit is used for importing the emotion characterization data into an emotion recognition model which is based on an artificial neural network and is pre-trained to obtain an emotion recognition result;
the mode switching unit is used for switching from a silent mode to an AI automatic dialogue mode when the wearing child is found, according to the emotion recognition result, to be in a certain preset emotion requiring soothing intervention by a guardian: generating AI automatic dialogue audio data based on voiceprint characteristic information of the guardian, playing the AI automatic dialogue audio data through a voice speaker, acquiring audio data collected in real time from the wearing child by a sound pickup, and generating new AI automatic dialogue audio data in response to the audio data, so that the child bracelet conducts an AI automatic dialogue with the wearing child until the certain preset emotion disappears, wherein the certain preset emotion is anger, fear, sadness, or disgust, and the voice speaker and the sound pickup are respectively arranged in the child bracelet.
8. A child bracelet system is characterized by comprising an emotion representation data acquisition module, a voice loudspeaker, a sound pickup and a processing module, wherein the processing module is respectively in communication connection with the emotion representation data acquisition module, the voice loudspeaker and the sound pickup;
the emotion characterization data acquisition module is used for acquiring emotion characterization data of a wearing child in real time and transmitting the emotion characterization data to the processing module;
the voice loudspeaker is used for playing the audio data from the processing module;
the sound pickup is used for collecting the audio data of the wearing child and transmitting the audio data of the wearing child to the processing module;
the processing module is used for executing the childcare monitoring method according to any one of claims 1-6.
9. A computer device, comprising a memory, a processor, and a transceiver which are sequentially in communication connection, wherein the memory is used for storing a computer program, the transceiver is used for transmitting and receiving messages, and the processor is used for reading the computer program to execute the childcare monitoring method according to any one of claims 1-6.
10. A computer-readable storage medium having stored thereon instructions which, when run on a computer, perform the childcare monitoring method according to any one of claims 1-6.
CN202210927413.9A 2022-08-03 2022-08-03 Child care monitoring method based on child bracelet Pending CN115240715A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210927413.9A CN115240715A (en) 2022-08-03 2022-08-03 Child care monitoring method based on child bracelet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210927413.9A CN115240715A (en) 2022-08-03 2022-08-03 Child care monitoring method based on child bracelet

Publications (1)

Publication Number Publication Date
CN115240715A true CN115240715A (en) 2022-10-25

Family

ID=83677707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210927413.9A Pending CN115240715A (en) 2022-08-03 2022-08-03 Child care monitoring method based on child bracelet

Country Status (1)

Country Link
CN (1) CN115240715A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108653898A (en) * 2018-03-26 2018-10-16 广东小天才科技有限公司 A kind of method for playing music and wearable device for pacifying children
CN110202587A (en) * 2019-05-15 2019-09-06 北京梧桐车联科技有限责任公司 Information interacting method and device, electronic equipment and storage medium
CN110599999A (en) * 2019-09-17 2019-12-20 寇晓宇 Data interaction method and device and robot
CN111368053A (en) * 2020-02-29 2020-07-03 重庆百事得大牛机器人有限公司 Mood pacifying system based on legal consultation robot
CN112102850A (en) * 2019-06-18 2020-12-18 杭州海康威视数字技术股份有限公司 Processing method, device and medium for emotion recognition and electronic equipment
CN113723292A (en) * 2021-08-31 2021-11-30 平安科技(深圳)有限公司 Driver-ride abnormal behavior recognition method and device, electronic equipment and medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108653898A (en) * 2018-03-26 2018-10-16 广东小天才科技有限公司 A kind of method for playing music and wearable device for pacifying children
CN110202587A (en) * 2019-05-15 2019-09-06 北京梧桐车联科技有限责任公司 Information interacting method and device, electronic equipment and storage medium
CN112102850A (en) * 2019-06-18 2020-12-18 杭州海康威视数字技术股份有限公司 Processing method, device and medium for emotion recognition and electronic equipment
CN110599999A (en) * 2019-09-17 2019-12-20 寇晓宇 Data interaction method and device and robot
CN111368053A (en) * 2020-02-29 2020-07-03 重庆百事得大牛机器人有限公司 Mood pacifying system based on legal consultation robot
CN113723292A (en) * 2021-08-31 2021-11-30 平安科技(深圳)有限公司 Driver-ride abnormal behavior recognition method and device, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN107030691B (en) Data processing method and device for nursing robot
Solomon Transactional analysis theory: The basics
Weist Time concepts in language and thought: Filling the Piagetian void from two to five years
Shemmings et al. Understanding disorganized attachment: Theory and practice for working with children and adults
CN108108340A (en) For the dialogue exchange method and system of intelligent robot
Hwang et al. TalkBetter: family-driven mobile intervention care for children with language delay
CN108109622A (en) A kind of early education robot voice interactive education system and method
CN106024016A (en) Children's guarding robot and method for identifying crying of children
CN106774845B (en) intelligent interaction method, device and terminal equipment
CN107590953B (en) Alarm method and system of intelligent wearable device and terminal device
Easterbrooks et al. Helping deaf and hard of hearing students to use spoken language: A guide for educators and families
Day Deaf children's expression of communicative intentions
EP3191934A1 (en) Systems and methods for cinematic direction and dynamic character control via natural language output
CN108492829A (en) A kind of baby cry based reminding method, apparatus and system
KR20190031128A (en) Method And Apparatus for Providing Speech Therapy for Developmental Disability Child
CN110531849A (en) A kind of intelligent tutoring system of the augmented reality based on 5G communication
CN115240715A (en) Child care monitoring method based on child bracelet
CN113496586A (en) Infant monitoring method, infant monitoring device, infant monitoring equipment and storage medium
KR20170111875A (en) Apparatus and method for doll control using app
CN110349461A (en) Education and entertainment combination method and system based on children special-purpose smart machine
Reynell A developmental approach to language disorders
Thida et al. VOIS: The First Speech Therapy App Specifically Designed for Myanmar Hearing-Impaired Children
US20210202096A1 (en) Method and systems for speech therapy computer-assisted training and repository
Spencer 27 Play and Theory of Mind: Indicators and Engines of Early Cognitive Growth
Blair, Grant* & Cowley Language in iterating activity: microcognition re-membered

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221025