CN111383422A - Monitoring method and system thereof - Google Patents
- Publication number
- CN111383422A (publication number); CN202010481772.7A (application number)
- Authority
- CN
- China
- Prior art keywords
- data
- abnormal
- processing
- instruction
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/04—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
- G08B21/0438—Sensor means for detecting
- G08B21/0492—Sensor dual technology, i.e. two or more technologies collaborate to extract unsafe condition, e.g. video tracking and RFID tracking
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/04—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
- G08B21/0438—Sensor means for detecting
- G08B21/0476—Cameras to detect unsafe condition, e.g. video cameras
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/80—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for detecting, monitoring or modelling epidemics or pandemics, e.g. flu
Abstract
The application discloses a monitoring method and a system thereof. The monitoring method comprises the following steps: acquiring in-campus information; extracting the in-campus information to obtain separated data, the separated data comprising: video data and audio data; analyzing the separated data to obtain an analysis result; and issuing alarm information according to the analysis result. The method and system monitor abnormal sounds and abnormal actions of young children in real time, raise timely alarms for abnormal conditions, and effectively prevent cross-infection of viruses among young children.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a monitoring method and a system thereof.
Background
Generally, kindergartens pay close attention to the physical health of young children: before children enter the kindergarten, each child's body temperature is checked and symptoms of infectious diseases are screened, so that illness is detected early, prevented in time, and infectious diseases are kept from spreading on campus. Most young children are lively; after entering the kindergarten they play or exercise with other children, sweat easily, and alternate between being hot and cold, which makes them prone to catching a chill that causes, for example, coughing or sneezing. However, a kindergarten campus covers a large area, and teachers cannot monitor every child in real time, so a sudden illness may not be treated promptly.
Disclosure of Invention
The application aims to provide a monitoring method and a system thereof that monitor abnormal sounds and abnormal actions of young children in real time, raise timely alarms for abnormal conditions, and effectively prevent cross-infection of viruses among young children.
In order to achieve the above purpose, the present application provides a monitoring method comprising the following steps: acquiring in-campus information; extracting the in-campus information to obtain separated data, the separated data comprising: video data and audio data; analyzing the separated data to obtain an analysis result; and issuing alarm information according to the analysis result.
As above, the sub-steps of analyzing the separated data to obtain the analysis result are as follows: preprocessing the audio data to obtain a first processing instruction; analyzing the video data according to the first processing instruction to generate a second processing instruction; accessing a pre-stored identity table according to the second processing instruction to obtain sound features to be compared; processing the abnormal audio data to obtain abnormal sound features; and comparing the abnormal sound features with the sound features to be compared to generate an analysis result, the analysis result comprising: the abnormal object, the cause of the abnormality, and the first client and second client corresponding to the abnormal object.
As above, the sub-steps of preprocessing the audio data to obtain the first processing instruction are as follows: preprocessing the audio data according to a pre-stored sound abnormality library to generate a comparison result; and generating a first processing instruction according to the comparison result, wherein the type of the first processing instruction comprises an instruction Y and an instruction W.
As above, the sub-steps of analyzing the video data according to the first processing instruction to generate the second processing instruction are as follows: receiving the first processing instruction and judging the type of the first processing instruction; if the first processing instruction is an instruction Y, preprocessing the video data to obtain an abnormal video segment; processing the abnormal video segment to determine an object to be analyzed; and generating a second processing instruction according to the object to be analyzed.
As above, the sub-steps of processing the abnormal video segment to determine the object to be analyzed are as follows: processing the abnormal video segment to obtain an image to be identified; and identifying the image to be identified to determine the object to be analyzed.
As above, the sub-steps of processing the abnormal audio data to obtain the abnormal sound features are as follows: converting the abnormal audio data into an abnormal audio signal; decomposing the abnormal audio signal to obtain a plurality of component signals, each component signal containing certain frequency-band information of the abnormal audio signal; and obtaining the ratio of the energy of each order component signal to the energy of the abnormal audio signal, combining all the ratios into an initial vector, normalizing the initial vector to obtain a normalized vector, and taking the normalized vector as the abnormal sound feature of the abnormal audio signal.
As above, the abnormal audio signal is decomposed through the decomposition model of the audio processing unit itself, and the expression of the decomposition model is as follows:
c_1(t) = (1/N) · Σ_{i=1…N} c_1^i(t);
c_{j+1}(t) = (1/N) · Σ_{i=1…N} E_1(r_j(t) + ε · E_j(w^i(t))), with r_j(t) = r_{j-1}(t) - c_j(t);
x(t) = c_1(t) + c_2(t) + … + c_J(t) + r(t);
wherein x(t) is the abnormal audio signal to be decomposed; r(t) is the final remainder; c_j(t) is a component signal of the abnormal audio signal, j is the component order, 1 ≤ j ≤ J, and j is a natural number; c_j^i(t) is the j-th order component signal of the i-th time; N is the number of noise-adding times of the abnormal audio signal; i is a natural number, 1 ≤ i ≤ N; ε is the noise amplitude; w^i(t) is the added noise superimposed on the remainder; c_1^i(t) is the first-order component signal of the mixed signal obtained by decomposing the noise-added abnormal audio signal; E_j(w^i(t)) is the j-th order component signal of the added noise; and s(t) is the original signal in the abnormal audio signal, with r_0(t) = s(t).
The present application further provides a monitoring system, comprising: a monitoring center, a plurality of data acquisition devices, a plurality of first clients, and a plurality of second clients; wherein the data acquisition devices are configured to collect data of the monitored persons in the monitoring area and upload the collected data to the monitoring center as in-campus information; the monitoring center is configured to receive the in-campus information and to carry out the monitoring method described above; the first clients are configured to receive alarm information sent by the monitoring center; and the second clients are configured to receive alarm information sent by the monitoring center.
As above, the monitoring center comprises: a data receiving device, a data analysis device, an alarm device, and a storage device; wherein the data receiving device is configured to receive the in-campus information and send the in-campus information to the data analysis device; the data analysis device is configured to receive the in-campus information, access the storage device, process the in-campus information, generate an analysis result, and send the analysis result to the alarm device; the alarm device is configured to receive the analysis result, generate alarm information according to the analysis result, and send the alarm information to the corresponding first client and second client; and the storage device is configured to store a sound abnormality library, an action abnormality library, and an identity table.
As above, the data analysis device comprises: a data extraction unit, an audio processing unit, and a video processing unit; wherein the data extraction unit is configured to extract the in-campus information to obtain separated data, send the video data in the separated data to the video processing unit, and send the audio data in the separated data to the audio processing unit; the audio processing unit is configured to receive and process the audio data, generate a first processing instruction, send the first processing instruction to the video processing unit, receive a second processing instruction sent by the video processing unit, process the audio data according to the second processing instruction, generate an analysis result, and send the analysis result to the alarm device; and the video processing unit is configured to receive the first processing instruction, process the video data according to the first processing instruction, generate a second processing instruction, and send the second processing instruction to the audio processing unit.
The method and system monitor abnormal sounds and abnormal actions of young children in real time, raise timely alarms for abnormal conditions, and effectively prevent cross-infection of viruses among young children.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an embodiment of a monitoring system;
FIG. 2 is a flow chart of one embodiment of a monitoring method.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art on the basis of the embodiments herein without creative effort shall fall within the protection scope of the present application.
The application provides a monitoring method and a system thereof that monitor abnormal sounds and abnormal actions of young children in real time, raise timely alarms for abnormal conditions, and effectively prevent cross-infection of viruses among young children.
As shown in FIG. 1, the present application provides a monitoring system comprising: a monitoring center 110, a plurality of data acquisition devices 120, a plurality of first clients 130, and a plurality of second clients 140.
The data acquisition devices 120 are configured to collect data of the monitored persons in the monitoring area and upload the collected data to the monitoring center as in-campus information.
The monitoring center 110 is configured to receive the in-campus information and to perform the monitoring method described below.
The first clients 130 are configured to receive alarm information sent by the monitoring center.
The second clients 140 are configured to receive alarm information sent by the monitoring center.
Further, the monitoring center 110 comprises: a data receiving device, a data analysis device, an alarm device, and a storage device.
The data receiving device is configured to receive the in-campus information and send the in-campus information to the data analysis device.
The data analysis device is configured to receive the in-campus information, access the storage device, process the in-campus information, generate an analysis result, and send the analysis result to the alarm device.
The alarm device is configured to receive the analysis result, generate alarm information according to the analysis result, and send the alarm information to the corresponding first client and second client.
Specifically, the first client is the client of the teacher corresponding to the monitored object, and the second client is the client of the parent corresponding to the monitored object.
The storage device is configured to store a sound abnormality library, an action abnormality library, and an identity table.
Further, the identity table comprises a plurality of levels, each level comprising: the identity of a monitored object, the monitored object's sound features and face features, and the corresponding first client and second client. Within a level, these items are connected through a chain structure, and adjacent levels are likewise connected through a chain structure.
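The chained identity table described above can be sketched as a linked structure. The field names, types, and example values below are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class IdentityLevel:
    """One level of the identity table: a monitored object's identity and
    features, plus the clients to notify, chained to the next level."""
    identity: str                      # monitored object's ID (illustrative)
    sound_feature: List[float]         # stored voiceprint vector (illustrative)
    face_feature: List[float]          # stored face-feature vector (illustrative)
    first_client: str                  # teacher client address (illustrative)
    second_client: str                 # parent client address (illustrative)
    next: Optional["IdentityLevel"] = None  # chain link to the next level

def find_by_identity(head: Optional[IdentityLevel],
                     identity: str) -> Optional[IdentityLevel]:
    """Walk the chain until the level holding `identity` is found."""
    node = head
    while node is not None:
        if node.identity == identity:
            return node
        node = node.next
    return None

# Two chained levels, matching the upper-level/lower-level description above.
lvl2 = IdentityLevel("child-002", [0.2, 0.9], [0.4, 0.1], "teacher-B", "parent-B")
lvl1 = IdentityLevel("child-001", [0.7, 0.3], [0.8, 0.5], "teacher-A", "parent-A", next=lvl2)
```

A lookup such as `find_by_identity(lvl1, "child-002")` walks the chain from the head level, mirroring the traversal used later when sound features are compared level by level.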
Further, the data analysis device comprises: a data extraction unit, an audio processing unit, and a video processing unit.
The data extraction unit is configured to extract the in-campus information to obtain separated data, send the video data in the separated data to the video processing unit, and send the audio data in the separated data to the audio processing unit.
The audio processing unit is configured to receive and process the audio data, generate a first processing instruction, and send the first processing instruction to the video processing unit; and to receive a second processing instruction sent by the video processing unit, process the audio data according to the second processing instruction, generate an analysis result, and send the analysis result to the alarm device.
The video processing unit is configured to receive the first processing instruction, process the video data according to the first processing instruction, generate a second processing instruction, and send the second processing instruction to the audio processing unit.
As shown in FIG. 2, the present application provides a monitoring method, comprising:
S210: acquiring the in-campus information.
Specifically, the plurality of data acquisition devices collect data of the monitored persons in the monitoring area, the collected data are uploaded to the monitoring center as in-campus information, and S220 is executed.
S220: extracting the in-campus information to obtain separated data, the separated data comprising: video data and audio data.
Specifically, after receiving the in-campus information through the data receiving device, the monitoring center sends the in-campus information to the data analysis device; the data extraction unit in the data analysis device extracts the in-campus information to obtain the separated data and sends the separated data to the video processing unit and the audio processing unit, and S230 is executed.
S230: analyzing the separated data to obtain an analysis result.
Specifically, the analysis result at least comprises: the abnormal object and the cause of the abnormality.
Further, the sub-steps of analyzing the separated data to obtain the analysis result are as follows:
T1: preprocessing the audio data to obtain a first processing instruction.
Further, the sub-steps of preprocessing the audio data to obtain the first processing instruction are as follows:
T110: preprocessing the audio data according to a pre-stored sound abnormality library to generate a comparison result, the comparison result comprising: no abnormality or abnormality present.
Specifically, after receiving the audio data, the audio processing unit accesses the sound abnormality library stored in advance in the storage device and compares the received audio data with it. If the received audio data contain an abnormal sound of the same type as, or similar to, an abnormal sound stored in the library, the generated comparison result is abnormality present; otherwise the generated comparison result is no abnormality.
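A minimal sketch of this comparison step, assuming each library entry is stored as a fixed-length feature vector and "similar" means Euclidean distance under a threshold (both are assumptions; the disclosure does not fix a representation or metric):

```python
import math
from typing import Dict, List, Optional

# Illustrative sound-abnormality library: abnormality type -> stored template.
SOUND_ABNORMALITY_LIBRARY: Dict[str, List[float]] = {
    "continuous_cough": [0.9, 0.1, 0.0],
    "intermittent_cough": [0.5, 0.4, 0.1],
    "sneeze": [0.1, 0.2, 0.7],
}

def match_abnormality(features: List[float],
                      threshold: float = 0.2) -> Optional[str]:
    """Return the abnormality type whose template is closest to `features`
    if it lies within `threshold`; otherwise None ("no abnormality")."""
    best_type, best_dist = None, float("inf")
    for kind, template in SOUND_ABNORMALITY_LIBRARY.items():
        dist = math.dist(features, template)
        if dist < best_dist:
            best_type, best_dist = kind, dist
    return best_type if best_dist <= threshold else None
```

A `None` result corresponds to the comparison result "no abnormality" (instruction W); any other result corresponds to "abnormality present" (instruction Y).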
Specifically, the abnormality types in the sound abnormality library at least include a cough type and a sneeze type. The cough type contains multiple cough sound patterns, such as continuous coughing and intermittent coughing; the sneeze type likewise contains multiple sneeze sound patterns.
T120: generating a first processing instruction according to the comparison result, wherein the type of the first processing instruction comprises an instruction Y and an instruction W.
Specifically, if the comparison result is no abnormality, the generated first processing instruction is instruction W, which indicates that the audio data in the collected in-campus information contain no abnormality type from the sound abnormality library; if the comparison result is abnormality present, the generated first processing instruction is instruction Y, which indicates that the audio data in the collected in-campus information contain an abnormality type from the sound abnormality library. After generating the first processing instruction, the audio processing unit sends it to the video processing unit, and T2 is executed.
Instruction Y at least comprises: an abnormal audio data segment and an abnormal time period.
The abnormal audio data segment is a section of audio data containing the abnormality, intercepted from the whole of the received audio data; the abnormal time period is the time period within the in-campus information to which the abnormal audio data segment corresponds.
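The interception of the abnormal audio data segment, together with its abnormal time period, can be sketched as index arithmetic over the sample buffer. The sample rate and time values below are illustrative, not from the disclosure:

```python
from typing import List, Tuple

def intercept_segment(audio: List[float], sample_rate: int,
                      start_s: float, end_s: float
                      ) -> Tuple[List[float], Tuple[float, float]]:
    """Cut the samples between start_s and end_s (seconds) out of `audio`
    and return them together with the abnormal time period."""
    lo = max(0, int(start_s * sample_rate))
    hi = min(len(audio), int(end_s * sample_rate))
    return audio[lo:hi], (start_s, end_s)

# A 3-second dummy recording at 10 Hz; intercept second 1..2 as the segment.
audio = [float(i) for i in range(30)]
segment, period = intercept_segment(audio, 10, 1.0, 2.0)
```

The returned pair corresponds to the two fields of instruction Y: the abnormal audio data segment and its abnormal time period.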
T2: analyzing the video data according to the first processing instruction to generate a second processing instruction.
Further, the sub-steps of analyzing the video data according to the first processing instruction to generate the second processing instruction are as follows:
T210: receiving the first processing instruction and judging its type; if the first processing instruction is an instruction Y, executing T220; if it is an instruction W, sending a continue-acquisition instruction to the data acquisition devices.
T220: preprocessing the video data to obtain an abnormal video segment.
Specifically, the video processing unit preprocesses the video data according to the abnormal time period in instruction Y to obtain the abnormal video segment, i.e. the section of the video data corresponding to the abnormal time period.
T230: processing the abnormal video segment to determine an object to be analyzed.
Specifically, the object to be analyzed is the monitored person exhibiting an abnormal action in the abnormal video segment.
Further, the sub-steps of processing the abnormal video segment and determining the object to be analyzed are as follows:
F1: processing the abnormal video segment to obtain an image to be identified.
Further, the sub-steps of processing the abnormal video segment to obtain the image to be identified are as follows:
F110: accessing the pre-stored action abnormality library.
Specifically, after obtaining the abnormal video segment, the video processing unit accesses the action abnormality library stored in advance in the storage device.
F120: comparing the abnormal video segment with the action abnormality library to generate a judgment result, the judgment result at least comprising the image to be identified.
Specifically, the abnormal video segment is compared with the action abnormality library; if at least one monitored object in the abnormal video segment exhibits an abnormal action of the same type as, or similar to, an abnormal action stored in the library, that monitored object is taken as the abnormal target, a picture of the abnormal target is taken as the image to be identified, a judgment result is generated, and F2 is executed.
Specifically, if no monitored object in the abnormal video segment exhibits an abnormal action of the same type as, or similar to, the abnormal actions stored in the action abnormality library, this is fed back to the audio processing unit, which rechecks the abnormal audio data after receiving the feedback.
F2: identifying the image to be identified and determining the object to be analyzed.
Specifically, the video processing unit identifies the face features in the image to be identified through AI image recognition; the monitored object to whom the identified face features belong is the object to be analyzed.
T240: generating a second processing instruction according to the object to be analyzed.
Specifically, after generating the second processing instruction, the video processing unit feeds it back to the audio processing unit, and T3 is executed. The second processing instruction comprises: the identity of the object to be analyzed, obtained from the pre-established identity table according to the face features of the object to be analyzed.
T3: accessing the pre-stored identity table according to the second processing instruction to obtain the sound features to be compared.
Specifically, after receiving the second processing instruction, the audio processing unit accesses the pre-stored identity table according to it, obtains the sound features of the object to be analyzed, and takes them as the sound features to be compared.
T4: processing the abnormal audio data to obtain the abnormal sound features.
Further, the sub-steps of processing the abnormal audio data to obtain the abnormal sound features are as follows:
T410: converting the abnormal audio data into an abnormal audio signal.
Specifically, after receiving the abnormal audio data, the audio processing unit converts the abnormal audio data into a binary stream and then converts the binary stream into an abnormal audio signal.
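The conversion from a binary stream to a signal can be sketched as decoding the bytes into 16-bit samples and scaling them to floats. Little-endian signed 16-bit PCM is an assumed encoding; the disclosure does not specify one:

```python
import struct
from typing import List

def bytes_to_signal(raw: bytes) -> List[float]:
    """Decode little-endian signed 16-bit PCM bytes into floats in [-1, 1)."""
    n = len(raw) // 2
    samples = struct.unpack("<%dh" % n, raw[: 2 * n])
    return [s / 32768.0 for s in samples]

# Two samples, 0 and 16384, i.e. amplitudes 0.0 and 0.5.
raw = struct.pack("<2h", 0, 16384)
signal = bytes_to_signal(raw)
```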
T420: decomposing the abnormal audio signal to obtain a plurality of component signals, each component signal containing certain frequency-band information of the abnormal audio signal.
Further, the audio processing unit decomposes the abnormal audio signal through its own decomposition model, and the expression of the decomposition model is as follows:
c_1(t) = (1/N) · Σ_{i=1…N} c_1^i(t);
c_{j+1}(t) = (1/N) · Σ_{i=1…N} E_1(r_j(t) + ε · E_j(w^i(t))), with r_j(t) = r_{j-1}(t) - c_j(t);
x(t) = c_1(t) + c_2(t) + … + c_J(t) + r(t);
wherein x(t) is the abnormal audio signal to be decomposed; r(t) is the final remainder; c_j(t) is a component signal of the abnormal audio signal, j is the component order, 1 ≤ j ≤ J, and j is a natural number; c_j^i(t) is the j-th order component signal of the i-th time; N is the number of noise-adding times of the abnormal audio signal; i is a natural number, 1 ≤ i ≤ N; ε is the noise amplitude; w^i(t) is the added noise superimposed on the remainder; c_1^i(t) is the first-order component signal of the mixed signal obtained by decomposing the noise-added abnormal audio signal; E_j(w^i(t)) is the j-th order component signal of the added noise; and s(t) is the original signal in the abnormal audio signal, with r_0(t) = s(t).
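The order-by-order, additive structure of such a decomposition can be sketched as follows. A faithful implementation would need an empirical-mode-decomposition routine and the noise-adding ensemble; as a hedged stand-in, this sketch splits the signal into groups of discrete-Fourier-transform frequency bins, which preserves only the property the later steps rely on: each component carries certain frequency-band information, and the components sum back to the decomposed signal.

```python
import cmath
from typing import List

def _dft(x: List[complex]) -> List[complex]:
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def _idft(X: List[complex]) -> List[complex]:
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def decompose(x: List[float], orders: int) -> List[List[float]]:
    """Split x into `orders` component signals, each keeping one contiguous
    group of frequency bins, so that the components sum back to x; the
    last component plays the role of the remainder."""
    n = len(x)
    X = _dft([complex(v) for v in x])
    size = -(-n // orders)  # ceil(n / orders) bins per component
    comps = []
    for j in range(orders):
        masked = [X[k] if j * size <= k < (j + 1) * size else 0j
                  for k in range(n)]
        comps.append([v.real for v in _idft(masked)])
    return comps

# A short test signal: its three components reconstruct it additively.
x = [float(t % 5) for t in range(10)]
comps = decompose(x, 3)
```

The additive-reconstruction property demonstrated here is what makes the per-order energy ratios in the next step a meaningful partition of the signal's energy.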
T430: obtaining the ratio of the energy of each order component signal to the energy of the abnormal audio signal, combining all the ratios into an initial vector, normalizing the initial vector to obtain a normalized vector, and taking the normalized vector as the abnormal sound feature of the abnormal audio signal.
In particular, the initial vector P is composed of all the ratios p_j as elements; normalizing the initial vector P maps each element to a fraction in (0, 1), where the expression of the initial vector is as follows:
P = (p_1, p_2, …, p_J);
wherein each element is expressed as:
p_j = E_j / E;
and the expression of the abnormal sound feature is:
Q = (q_1, q_2, …, q_J), with q_j = p_j / (p_1 + p_2 + … + p_J);
wherein Q is the abnormal sound feature; p_j is a ratio; E_j is the energy of the j-th order component signal; E is the energy of the abnormal audio signal; 1 ≤ j ≤ J, and j is a natural number.
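The energy-ratio feature of T430 can be sketched directly: per-component energy over total signal energy, then normalized so the elements are fractions. Sum-normalization is an assumption; the disclosure states only that normalization maps each element into (0, 1):

```python
from typing import List

def energy(sig: List[float]) -> float:
    """Signal energy as the sum of squared samples."""
    return sum(v * v for v in sig)

def abnormal_sound_feature(components: List[List[float]],
                           signal: List[float]) -> List[float]:
    """Ratio of each order component's energy to the signal energy,
    combined into a vector and sum-normalized to fractions in (0, 1)."""
    e_total = energy(signal)
    ratios = [energy(c) / e_total for c in components]
    s = sum(ratios)
    return [r / s for r in ratios]

# Two toy components of a two-sample signal.
feature = abnormal_sound_feature([[1.0, 0.0], [0.0, 2.0]], [1.0, 2.0])
```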
T5: comparing the abnormal sound features with the sound features to be compared to generate an analysis result, the analysis result comprising: the abnormal object, the cause of the abnormality, and the first client and second client corresponding to the abnormal object.
Specifically, if the comparison determines that the monitored object corresponding to the sound features to be compared is the abnormal object, the audio processing unit generates an analysis result from the abnormal object, the cause of the abnormality, and the first client and second client corresponding to the abnormal object, sends the analysis result to the alarm device, and S240 is executed.
Further, if the comparison does not match, the audio processing unit traverses the identity table and compares the abnormal sound features with the sound features in each level until a match is found, then generates the analysis result and executes S240.
The first client and second client corresponding to the abnormal object are obtained from the identity table.
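The comparison and fallback traversal of T5 can be sketched with a similarity threshold over stored sound features. Cosine similarity and the 0.95 threshold are illustrative assumptions; the disclosure does not fix a metric:

```python
import math
from typing import Dict, List, Optional

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def find_abnormal_object(abnormal_feature: List[float],
                         candidate: str,
                         identity_table: Dict[str, List[float]],
                         threshold: float = 0.95) -> Optional[str]:
    """First compare against the candidate named by the second processing
    instruction; if that fails, traverse every level of the identity table."""
    if cosine(abnormal_feature, identity_table[candidate]) >= threshold:
        return candidate
    for identity, feature in identity_table.items():
        if identity != candidate and cosine(abnormal_feature, feature) >= threshold:
            return identity
    return None

# A flat stand-in for the identity table: identity -> stored sound feature.
table = {"child-001": [1.0, 0.0], "child-002": [0.0, 1.0]}
who = find_abnormal_object([0.05, 1.0], "child-001", table)
```

Here the candidate `child-001` fails the threshold, so the traversal falls through to `child-002`, mirroring the level-by-level comparison described above.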
S240: issuing alarm information according to the analysis result.
Specifically, the alarm information comprises: the abnormal object and the cause of the abnormality. After receiving the analysis result, the alarm device sends the alarm information to the corresponding first client and second client according to the analysis result.
The method and system monitor abnormal sounds and abnormal actions of young children in real time, raise timely alarms for abnormal conditions, and effectively prevent cross-infection of viruses among young children.
While the preferred embodiments of the present application have been described, those skilled in the art may make additional variations and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the scope of protection is intended to be interpreted as covering the preferred embodiments and all variations and modifications falling within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications can be made to the present application without departing from its spirit and scope; if such modifications and variations fall within the scope of the present application and its equivalents, the present application is intended to include them as well.
Claims (10)
1. A monitoring method, comprising the steps of:
acquiring in-campus information;
extracting the in-campus information to obtain separated data, the separated data comprising: video data and audio data;
analyzing the separated data to obtain an analysis result; and
issuing alarm information according to the analysis result.
2. The monitoring method according to claim 1, wherein the sub-steps of analyzing the separated data to obtain the analysis result are as follows:
preprocessing the audio data to obtain a first processing instruction;
analyzing the video data according to the first processing instruction to generate a second processing instruction;
accessing a pre-stored identity table according to the second processing instruction to obtain sound features to be compared;
processing the abnormal audio data to obtain abnormal sound features; and
comparing the abnormal sound features with the sound features to be compared to generate an analysis result, the analysis result comprising: the abnormal object, the cause of the abnormality, and the first client and second client corresponding to the abnormal object.
3. The monitoring method according to claim 2, wherein the sub-steps of preprocessing the audio data to obtain the first processing instruction are as follows:
preprocessing the audio data according to a pre-stored sound abnormality library to generate a comparison result; and
generating a first processing instruction according to the comparison result, wherein the type of the first processing instruction comprises an instruction Y and an instruction W.
4. The monitoring method according to claim 3, wherein the sub-steps of analyzing the video data according to the first processing instruction to generate the second processing instruction are as follows:
receiving the first processing instruction and judging the type of the first processing instruction;
if the first processing instruction is an instruction Y, preprocessing the video data to obtain an abnormal video segment;
processing the abnormal video segment to determine an object to be analyzed; and
generating a second processing instruction according to the object to be analyzed.
5. The monitoring method according to claim 4, wherein the step of processing the abnormal video segment and determining the object to be analyzed comprises the following sub-steps:
processing the abnormal video segment to obtain an image to be identified;
and identifying the image to be identified and determining the object to be analyzed.
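Claims 4 and 5 reduce an abnormal video segment to an image to be identified. The patent does not state how that image is chosen; the sketch below uses simple inter-frame differencing as one assumed approach, with a synthetic segment standing in for real video.

```python
import numpy as np

def extract_image_to_identify(frames, motion_threshold=10.0):
    """Pick the frame showing the largest change from its predecessor."""
    best_idx, best_score = None, motion_threshold
    for i in range(1, len(frames)):
        # Mean absolute pixel difference between consecutive frames.
        diff = np.abs(frames[i].astype(np.int16) - frames[i - 1].astype(np.int16))
        score = float(diff.mean())
        if score > best_score:
            best_idx, best_score = i, score
    return None if best_idx is None else frames[best_idx]

# Synthetic abnormal segment: static frames with one sudden bright region.
frames = [np.zeros((8, 8), dtype=np.uint8) for _ in range(5)]
frames[3][2:6, 2:6] = 200   # "motion" appears in frame 3
picked = extract_image_to_identify(frames)
```

The returned frame would then go to the identification step that determines the object to be analyzed.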
6. The monitoring method according to claim 2, wherein the step of processing the abnormal audio data to obtain the abnormal sound features comprises the following sub-steps:
converting the abnormal audio data into an abnormal audio signal;
decomposing the abnormal audio signal to obtain a plurality of component signals, wherein each component signal contains the information of a certain frequency band of the abnormal audio signal;
and obtaining the ratio of the energy of each order component signal to the energy of the abnormal audio signal, combining all the ratios into an initial vector, normalizing the initial vector, and taking the normalized vector as the abnormal sound feature of the abnormal audio signal.
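The feature construction in this claim — per-component energy ratios combined into a normalized vector — is straightforward to sketch. The decomposition below uses simple FFT band splitting purely as a stand-in for the claim-7 model; the band count and test signal are assumptions.

```python
import numpy as np

def abnormal_sound_feature(signal, n_bands=4):
    """Energy-ratio feature: per-band energy over total energy, normalized."""
    spectrum = np.fft.rfft(signal)
    bands = np.array_split(spectrum, n_bands)    # stand-in "component signals"
    total_energy = np.sum(np.abs(spectrum) ** 2)
    # Ratio of each component's energy to the whole signal's energy.
    ratios = np.array([np.sum(np.abs(b) ** 2) for b in bands]) / total_energy
    return ratios / np.linalg.norm(ratios)       # normalized feature vector

# A pure low-frequency tone should concentrate its energy in the first band.
t = np.linspace(0, 1, 256, endpoint=False)
feature = abnormal_sound_feature(np.sin(2 * np.pi * 5 * t))
```

Because the vector is normalized, the feature describes how energy is distributed across bands rather than how loud the sound is, which is what makes it usable for comparison against stored voiceprints.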
7. The monitoring method according to claim 6, wherein the abnormal audio signal is decomposed by a decomposition model of the abnormal audio signal, and the expression of the decomposition model is as follows:
x(t) = c_1(t) + c_2(t) + … + c_K(t) + r(t),
c_1(t) = (1/N) · Σ_{i=1..N} E_1( s(t) + ε·n_i(t) ),
c_{k+1}(t) = (1/N) · Σ_{i=1..N} E_1( r_k(t) + ε·E_k(n_i(t)) ), k = 1, 2, …, K − 1,
wherein x(t) is the abnormal audio signal to be decomposed; r(t) is the final remainder; c_k(t) is the k-th order component signal of the abnormal audio signal at time t, k being the component order, k = 1, 2, …, K, with K a natural number; r_k(t) is the remainder after the first k component signals have been extracted; N is the number of times noise is added to the abnormal audio signal; i is a natural number, 1 ≤ i ≤ N; ε is the noise amplitude; n_i(t) is the added noise superimposed on the remainder; E_1(·) denotes decomposing a mixed signal to obtain its first-order component signal; E_k(n_i(t)) is the k-th order component signal of the added noise; and s(t) is the original signal in the abnormal audio signal.
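The noise-assisted, ensemble-averaged structure of this decomposition can be illustrated numerically. In the toy code below, the true EMD-style sifting operator E_k is replaced by a crude moving-average detail split; the operator, parameters, and test signal are all illustrative assumptions, chosen only to show the "average components over noisy copies" pattern.

```python
import numpy as np

def toy_decompose(x, levels=3, width=5):
    """Split x into `levels` detail components plus a final remainder.
    Stand-in for the E_k operator of the decomposition model."""
    comps, residual = [], x.astype(float).copy()
    kernel = np.ones(width) / width
    for _ in range(levels):
        trend = np.convolve(residual, kernel, mode="same")  # local mean
        comps.append(residual - trend)   # detail at this level
        residual = trend                 # coarser signal continues down
    return comps, residual

def ensemble_components(s, n_trials=20, eps=0.05, levels=3, seed=0):
    """Average per-trial components over noisy copies of s:
    c_k(t) ~ (1/N) * sum_i E_k(s(t) + eps * n_i(t))."""
    rng = np.random.default_rng(seed)
    acc = [np.zeros_like(s, dtype=float) for _ in range(levels)]
    for _ in range(n_trials):
        comps, _ = toy_decompose(s + eps * rng.standard_normal(s.shape), levels)
        for k in range(levels):
            acc[k] += comps[k]
    return [c / n_trials for c in acc]
```

Averaging over many noise realizations is what lets the added noise cancel out while stabilizing the component estimates, which is the point of the N noise additions in the model.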
8. A monitoring system, comprising: a monitoring center, a plurality of data acquisition devices, a plurality of first clients and a plurality of second clients;
wherein the plurality of data acquisition devices are used for acquiring data of monitored persons in a monitoring area and uploading the data to the monitoring center as in-campus information;
the monitoring center is used for receiving the in-campus information and for performing the monitoring method of any one of claims 1-7;
the first clients are used for receiving alarm information sent by the monitoring center;
and the second clients are used for receiving alarm information sent by the monitoring center.
9. The monitoring system of claim 8, wherein the monitoring center comprises: a data receiving device, a data analysis device, an alarm device and a storage device;
wherein the data receiving device is used for receiving the in-campus information and sending the in-campus information to the data analysis device;
the data analysis device is used for analyzing the in-campus information to obtain an analysis result and sending the analysis result to the alarm device;
the alarm device is used for receiving the analysis result, generating alarm information according to the analysis result, and sending the alarm information to the corresponding first client and second client;
and the storage device is used for storing a sound exception library, an action exception library and an identity table.
10. The monitoring system of claim 9, wherein the data analysis device comprises: a data extraction unit, an audio processing unit and a video processing unit;
wherein the data extraction unit is used for extracting the in-campus information to obtain separation data, sending the video data in the separation data to the video processing unit, and sending the audio data in the separation data to the audio processing unit;
the audio processing unit is used for receiving and processing the audio data, generating a first processing instruction, and sending the first processing instruction to the video processing unit; and for receiving a second processing instruction sent by the video processing unit, processing the audio data according to the second processing instruction, generating an analysis result, and sending the analysis result to the alarm device;
and the video processing unit is used for receiving the first processing instruction, processing the video data according to the first processing instruction, generating a second processing instruction, and sending the second processing instruction to the audio processing unit.
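The instruction hand-offs between the audio and video units can be sketched as below. The classes and message payloads are assumptions; only the first-instruction/second-instruction exchange follows the claims.

```python
# Hedged sketch of the unit wiring in claim 10.

class VideoProcessingUnit:
    def handle_first_instruction(self, instruction, video_data):
        # Process video only when the audio side flagged an anomaly ('Y').
        if instruction != "Y":
            return None
        # Placeholder: pretend an object was found in the abnormal segment.
        return {"second_instruction": "compare", "object": "child-03"}

class AudioProcessingUnit:
    def __init__(self, video_unit):
        self.video_unit = video_unit

    def process(self, audio_data, video_data):
        # Generate the first processing instruction and hand it to video.
        first = "Y" if audio_data.get("abnormal") else "W"
        second = self.video_unit.handle_first_instruction(first, video_data)
        if second is None:
            return None          # nothing to forward to the alarm device
        # Placeholder comparison result destined for the alarm device.
        return {"object": second["object"], "cause": "abnormal sound"}

unit = AudioProcessingUnit(VideoProcessingUnit())
result = unit.process({"abnormal": True}, {"frames": []})
```

The round trip — audio gates video, video identifies the object, audio does the final voiceprint comparison — matches the sub-step ordering of claim 2.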
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010481772.7A CN111383422B (en) | 2020-06-01 | 2020-06-01 | Monitoring method and system thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010481772.7A CN111383422B (en) | 2020-06-01 | 2020-06-01 | Monitoring method and system thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111383422A true CN111383422A (en) | 2020-07-07 |
CN111383422B CN111383422B (en) | 2020-12-01 |
Family
ID=71216086
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010481772.7A Active CN111383422B (en) | 2020-06-01 | 2020-06-01 | Monitoring method and system thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111383422B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103279905A (en) * | 2013-05-12 | 2013-09-04 | 安徽工程大学 | Integrated kindergarten management system |
CN108962280A (en) * | 2018-07-31 | 2018-12-07 | 周兆楷 | A kind of baby's monitoring method, equipment and its system |
CN108992079A (en) * | 2018-06-12 | 2018-12-14 | 珠海格力电器股份有限公司 | A kind of Infant behavior monitoring method based on emotion recognition and Application on Voiceprint Recognition |
CN110049284A (en) * | 2019-03-20 | 2019-07-23 | 江苏润和软件股份有限公司 | A kind of kindergarten's video and audio monitoring system based on artificial intelligent voice analysis |
KR20190116778A (en) * | 2018-04-05 | 2019-10-15 | (주)로뎀기술 | An system and method for preventing child safety accident using an intelligent boarding and alerting apparatus |
CN110706449A (en) * | 2019-09-04 | 2020-01-17 | 中移(杭州)信息技术有限公司 | Infant monitoring method and device, camera equipment and storage medium |
CN110718235A (en) * | 2019-09-20 | 2020-01-21 | 精锐视觉智能科技(深圳)有限公司 | Abnormal sound detection method, electronic device and storage medium |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112006702A (en) * | 2020-09-04 | 2020-12-01 | 北京伟杰东博信息科技有限公司 | Safety monitoring method and system |
CN112006702B (en) * | 2020-09-04 | 2021-09-24 | 北京伟杰东博信息科技有限公司 | Safety monitoring method and system |
CN112951267A (en) * | 2021-02-23 | 2021-06-11 | 恒大新能源汽车投资控股集团有限公司 | Passenger health monitoring method and vehicle-mounted terminal |
Also Published As
Publication number | Publication date |
---|---|
CN111383422B (en) | 2020-12-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111383422B (en) | Monitoring method and system thereof | |
Khan et al. | Detecting unseen falls from wearable devices using channel-wise ensemble of autoencoders | |
Jack et al. | Genetic algorithms for feature selection in machine condition monitoring with vibration signals | |
McCowan et al. | The fallacy of ‘signature whistles’ in bottlenose dolphins: a comparative perspective of ‘signature information’in animal vocalizations | |
Müller et al. | Human perception of audio deepfakes | |
Minvielle et al. | Fall detection using smart floor sensor and supervised learning | |
WO2020078214A1 (en) | Child state analysis method and apparatus, vehicle, electronic device and storage medium | |
KR20170021216A (en) | Method and system for providing monitoring service for kids | |
Droghini et al. | Audio metric learning by using siamese autoencoders for one-shot human fall detection | |
CN114489561A (en) | Intelligent audio volume adjusting method and device, electronic equipment and storage medium | |
Manikanta et al. | Deep learning based effective baby crying recognition method under indoor background sound environments | |
Thangavel et al. | The IoT based embedded system for the detection and discrimination of animals to avoid human–wildlife conflict | |
Agrawal et al. | A survey on video-based fake news detection techniques | |
Onie et al. | The use of closed-circuit television and video in suicide prevention: narrative review and future directions | |
CN112699744A (en) | Fall posture classification identification method and device and wearable device | |
CN110782622A (en) | Safety monitoring system, safety detection method, safety detection device and electronic equipment | |
US20140122074A1 (en) | Method and system of user-based jamming of media content by age category | |
Charest | Cumulative semantic interference in young children's picture naming | |
Roohi et al. | Recognizing emotional expression in game streams | |
Kim et al. | Modeling of child stress-state identification based on biometric information in mobile environment | |
Zhang | The Understanding of Spatial-Temporal Behaviors | |
US8752073B2 (en) | Generating event definitions based on spatial and relational relationships | |
Rizvi et al. | Immune inspired dendritic cell algorithm for stock price manipulation detection | |
Vesangi et al. | A Novel Approach to Predict the Reason for Baby Cry using Machine Learning | |
Deng et al. | Language-assisted deep learning for autistic behaviors recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||
TR01 | Transfer of patent right || Effective date of registration: 20220511; Address after: 518000 208, block Yuheng, Xinggang Tongchuang Hui, No. 6099, Bao'an Avenue, Xinhe community, Fuhai street, Bao'an District, Shenzhen, Guangdong Province; Patentee after: Central-South Information technology (Shenzhen) Co.,Ltd.; Address before: 101300 Beijing Shunyi District Airport Street, No. 1 Anhua Street, 1st Building, 1st Floor, No. 2159; Patentee before: BEIJING LONGPU INTELLIGENT TECHNOLOGY Co.,Ltd. |