CN115440000A - Campus early warning protection method and device - Google Patents

Campus early warning protection method and device

Info

Publication number
CN115440000A
Authority
CN
China
Prior art keywords
early warning
user
emotion
campus
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110607953.4A
Other languages
Chinese (zh)
Inventor
张腾飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Imoo Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Imoo Electronic Technology Co Ltd filed Critical Guangdong Imoo Electronic Technology Co Ltd
Priority to CN202110607953.4A priority Critical patent/CN115440000A/en
Publication of CN115440000A publication Critical patent/CN115440000A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/0202 Child monitoring systems using a transmitter-receiver system carried by the parent and the child
    • G08B21/0205 Specific application combined with child monitoring using a transmitter-receiver system
    • G08B21/0211 Combination with medical sensor, e.g. for measuring heart rate, temperature
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Abstract

The invention belongs to the field of wearable devices and discloses a campus early warning protection method and device. The campus early warning protection method comprises the following steps: acquiring an emotion fluctuation value of a user; when the emotion fluctuation value is greater than a preset value, triggering a voice recognition algorithm to recognize the voice emotion of the user and a semantic recognition algorithm to recognize the current scene; and when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, automatically triggering an early warning protection mode to protect the user. The method can help the user automatically start a self-protection plan in time when facing a campus early warning scene; meanwhile, by combining voice emotion recognition with semantic recognition, invalid scenes can be effectively screened out and the probability of false triggering is effectively reduced.

Description

Campus early warning protection method and device
Technical Field
The invention relates to the field of wearable equipment, in particular to a campus early warning protection method and device.
Background
Campus dangers occur from time to time; however, when a danger occurs, children are usually panicked and cannot take help-seeking measures in time. After the danger has occurred, fear and timidity often prevent children from reconstructing the campus early warning scene, so the same incident cannot be effectively prevented from happening again.
Therefore, enabling pupils to call for help in time and to preserve evidence when facing a campus early warning scene is particularly important for their self-protection.
Disclosure of Invention
The invention aims to provide a campus early warning protection method and device that solve the above problems.
The technical scheme provided by the invention is as follows:
In one aspect, a campus early warning protection method is provided, comprising the following steps:
acquiring an emotion fluctuation value of a user;
when the emotion fluctuation value is greater than a preset value, triggering a voice recognition algorithm to recognize the voice emotion of the user and a semantic recognition algorithm to recognize the current scene;
and when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, automatically triggering an early warning protection mode to protect the user.
Further preferably, the obtaining of the emotion fluctuation value of the user includes:
obtaining the emotion fluctuation value of the user through a biosensor of a wearable device worn by the user.
Further preferably, when the emotion fluctuation value is greater than the preset value, triggering a voice recognition algorithm to recognize the voice emotion of the user and a semantic recognition algorithm to recognize the current scene includes:
when the heart rate within a preset time and in a non-motion state reaches a heart rate preset value, triggering the voice recognition algorithm to recognize the voice emotion of the user and the semantic recognition algorithm to recognize the current scene;
and/or;
when the galvanic skin response value within the preset time and in a worn state reaches a galvanic skin response preset value, triggering the voice recognition algorithm to recognize the voice emotion of the user and the semantic recognition algorithm to recognize the current scene;
wherein the emotion fluctuation value includes the heart rate and the galvanic skin response value.
Further preferably, when the speech emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, an early warning protection mode is automatically triggered to protect the user, including:
when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, detecting a current mode of wearable equipment worn by the user;
and automatically triggering a corresponding early warning protection mode based on the current mode of the wearable device to protect the user.
Further preferably, the automatically triggering a corresponding early warning protection mode based on the current mode of the wearable device to protect the user includes:
when the current mode of the wearable device is a first mode, automatically adjusting the external early warning volume of the wearable device to a first preset volume, and sending out continuous early warning sound;
when the current mode of the wearable device is the second mode, communication is automatically established with a preset contact person, and after the communication is established, the volume of the outgoing communication of the wearable device is adjusted to the second preset volume.
Further preferably, the method further comprises:
automatically triggering the wearable device to start recording and video functions of a preset duration, so as to acquire the campus early warning information in the campus early warning scene.
A campus early warning protection device, comprising:
the obtaining module is used for obtaining the emotion fluctuation value of the user;
the triggering module is used for, when the emotion fluctuation value is greater than a preset value, triggering a voice recognition algorithm to recognize the voice emotion of the user and a semantic recognition algorithm to recognize the current scene;
and the protection module is used for automatically triggering an early warning protection mode to protect the user when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene.
Further preferably, the obtaining module includes:
a biosensor, used for obtaining the emotion fluctuation value of the user through the biosensor of the wearable device worn by the user.
Further preferably, the triggering module includes:
a first triggering module, used for triggering a voice recognition algorithm to recognize the voice emotion of the user and a semantic recognition algorithm to recognize the current scene when the heart rate within a preset time and in a non-motion state reaches a heart rate preset value;
and/or;
a second triggering module, used for triggering the voice recognition algorithm to recognize the voice emotion of the user and the semantic recognition algorithm to recognize the current scene when the galvanic skin response value within the preset time and in a worn state reaches a galvanic skin response preset value;
wherein the emotion fluctuation value includes the heart rate and the galvanic skin response value.
Further preferably, the protection module is specifically configured to:
when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, detecting a current mode of wearable equipment worn by the user;
and based on the current mode of the wearable device, automatically triggering a corresponding early warning protection mode to protect the user.
The campus early warning protection method and device provided by the invention have at least the following technical effects: the method can help the user automatically start a self-protection plan in time when facing a campus early warning scene; meanwhile, by combining voice emotion recognition with semantic recognition, invalid scenes can be effectively screened out and the probability of false triggering is effectively reduced.
Drawings
The invention is described in further detail below with reference to the following figures and detailed description:
FIG. 1 is a schematic diagram of an embodiment of a campus early warning protection method of the present invention;
fig. 2 is a schematic diagram of another embodiment of a campus early warning protection method according to the present invention;
fig. 3 is a schematic diagram of an embodiment of a campus early warning protection device according to the present invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
For the sake of simplicity, only the parts relevant to the present invention are schematically shown in the drawings, and they do not represent the actual structure as a product. Moreover, in the interest of brevity and understanding, only one of the components having the same structure or function is illustrated schematically or designated in some of the drawings. In this document, "one" means not only "only one" but also a case of "more than one".
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
In this context, it is to be understood that, unless otherwise explicitly stated or limited, terms such as "mounted" and "connected" are to be construed broadly: a connection may be a fixed connection, a removable connection, or an integral connection; it may be mechanical or electrical; and it may be direct, indirect through an intervening medium, or internal between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
In addition, in the description of the present application, the terms "first," "second," and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
Example one
The invention provides an embodiment of a campus early warning protection method, as shown in fig. 1, comprising:
s100, acquiring the emotion fluctuation value of the user.
Specifically, when the user wears the wearable device, the emotion fluctuation value of the user is obtained through a biosensor of the wearable device, for example, by using biosensors such as a heart rate sensor and a galvanic skin response sensor to sense the user's emotion fluctuation.
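As an illustration only, the following Python sketch shows what this acquisition step could look like, assuming a hypothetical sensor interface (`sensors.heart_rate()`, `sensors.galvanic_skin_response()`, and so on); the patent does not specify a concrete sensor API.

```python
from dataclasses import dataclass

@dataclass
class EmotionFluctuation:
    heart_rate_bpm: float   # current heart rate in beats per minute
    gsr_value: float        # galvanic skin response reading
    is_worn: bool           # True when the device is on the wrist
    is_moving: bool         # True when the user is in a motion state

def read_emotion_fluctuation(sensors) -> EmotionFluctuation:
    """Poll the wearable's biosensors once and bundle the raw readings."""
    return EmotionFluctuation(
        heart_rate_bpm=sensors.heart_rate(),
        gsr_value=sensors.galvanic_skin_response(),
        is_worn=sensors.wear_detected(),
        is_moving=sensors.motion_detected(),
    )
```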
In this embodiment, the emotion fluctuation value of the user is used as a preset condition for automatically triggering speech semantic recognition and as a preliminary judgment, so that subsequent triggering recognition and early warning protection are more convenient and timely.
S200, when the emotion fluctuation value is larger than a preset value, triggering a voice recognition algorithm to recognize the voice emotion of the user, and recognizing the current scene through a semantic recognition algorithm.
Specifically, when the emotion fluctuation is greater than the preset value, the voice emotion recognition and semantic recognition strategies are triggered to make an accurate judgment of the user's emotion and scene; if the user's emotion is judged to be fear and the semantically analyzed scene resembles a campus early warning scene, the self-protection plan is started.
Exemplary voice emotions recognized through speech include joy, excitement, fear, sadness, and the like.
When the emotion fluctuation value is greater than the preset value, it indicates that the user's emotion is unstable at that moment; the user may be happy, excited, fearful, or sad. Triggering the voice recognition algorithm in time to recognize the user's voice emotion allows the user's real emotion at that moment to be judged more accurately.
For example, if the voice recognition algorithm identifies obvious emotional characteristics such as crying or sobbing, the voice emotion at that moment is sadness.
If the user screams, and/or the heart rate is not suppressed by the cardiac inhibition effect but the heartbeat is instead abnormally strong and fast and the heart rate fluctuation curve matches the characteristic fear heart rate curve, this indicates that the user is producing an adaptive response to the intense cardiac contractions caused by fear.
Meanwhile, when the voice recognition algorithm recognizes voice features common during fear, such as screaming, the user's voice emotion can be determined to be fear by combining them with the emotion fluctuation feature. In addition, when the user's emotion fluctuation is monitored to be abnormally excited, the user may be either happy or angry; therefore, this embodiment judges the user's voice emotion through the voice recognition algorithm, and when the algorithm recognizes that the user's current speech contains characteristic angry speech, the voice emotion at that moment can be identified as anger rather than happiness.
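As a hedged illustration of this judgment, the sketch below combines recognized vocal cues with the heart-rate pattern. The cue labels and the simple rule-based classifier are placeholders: the patent does not disclose a concrete voice emotion model.

```python
# Placeholder cue sets; the patent does not list the actual vocal features.
FEAR_CUES = {"scream", "cry_for_help"}
ANGER_CUES = {"shout", "angry_words"}
SADNESS_CUES = {"crying", "sobbing"}

def classify_voice_emotion(vocal_cues: set, heart_rate_matches_fear_curve: bool) -> str:
    """Combine recognized vocal cues with the heart-rate fluctuation pattern."""
    if vocal_cues & FEAR_CUES and heart_rate_matches_fear_curve:
        return "fear"
    if vocal_cues & SADNESS_CUES:
        return "sadness"
    if vocal_cues & ANGER_CUES:
        # Abnormal excitement plus angry speech is treated as anger, not joy.
        return "anger"
    return "neutral"
```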
In this embodiment, the user's emotion is accurately judged through the emotion fluctuation value and the voice recognition algorithm, so that the user can be protected in time in a campus early warning scene while ordinary situations are prevented from being mistaken for campus early warning scenes.
Meanwhile, in this embodiment, campus early warning scenes are identified through a semantic recognition algorithm; for example, characteristic campus early warning language appears in campus early warning scenes.
When the semantic recognition identifies obvious campus early warning scene language, for example language that clearly reflects violent behaviors such as hitting or fighting, or when it can be deduced from the language that the user is suffering such behaviors, it is determined that the user is in a campus early warning scene.
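A minimal sketch of such a semantic scene check follows; the keyword list is purely illustrative, since the patent discloses neither the actual warning vocabulary nor the recognition model.

```python
# Placeholder warning phrases; the actual vocabulary and model are not disclosed.
WARNING_PHRASES = ("hit", "beat", "give me your money", "don't tell anyone")

def is_campus_warning_scene(transcript: str) -> bool:
    """Return True when the ambient-sound transcript contains warning language."""
    text = transcript.lower()
    return any(phrase in text for phrase in WARNING_PHRASES)
```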
S300, when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, automatically triggering an early warning protection mode to protect the user.
For example, a campus early warning scene generally refers to: during the journey to or from school or during the school's educational activities, teachers, students, or persons from outside the school intentionally abusing language, physical force, networks, instruments, and the like to inflict a certain degree of infringement on the physiology, psychology, reputation, rights, property, and the like of teachers and students.
In this embodiment, the preset campus early warning emotions include fear, sadness, anger, and the like, which are emotions that obviously may occur in a campus early warning scene.
Specifically, the self-protection plan for campus early warning proceeds in the following order (a code sketch follows this list):
1) The volume is increased and a continuous, sharp buzzing is emitted to attract people's attention;
2) In a silent state, the preset contact is called; after the call is connected, the watch-end volume is adjusted to the maximum so that the parent's voice deters the person causing the campus early warning scene;
3) When the buzzing ends, the recording and video functions of a preset duration are started in a silent state, preserving evidence for resolving the campus safety problem later.
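The Python sketch below shows one possible ordering of these three steps; the device calls (`set_volume`, `buzz`, `call`, `record`) are assumed interfaces of the wearable rather than an API disclosed in the patent.

```python
def run_self_protection_plan(device, contact, record_seconds=60):
    # 1) Raise the volume and emit a continuous sharp buzz to attract attention.
    device.set_volume(device.MAX_VOLUME)
    device.buzz(continuous=True)

    # 2) Silently call the preset contact; once connected, turn the volume up
    #    so the parent's voice can deter the aggressor.
    call = device.call(contact, silent=True)
    if call.connected:
        device.set_volume(device.MAX_VOLUME)

    # 3) After the buzzing ends, silently record audio and video for a preset
    #    duration to preserve evidence.
    device.stop_buzz()
    device.record(audio=True, video=True, duration_s=record_seconds, silent=True)
```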
According to the invention, the user can be helped to automatically start the self-protection plan in time when facing a campus early warning; meanwhile, by combining voice emotion recognition with semantic recognition, invalid scenes can be effectively screened out and the probability of false triggering is effectively reduced.
Example two
Based on the foregoing embodiment, parts in this embodiment that are the same as the foregoing embodiment are not repeated, and as shown in fig. 2, this embodiment provides an embodiment of a campus early warning protection method, which includes:
preferably, the obtaining of the mood swing value of the user includes:
obtaining an emotional fluctuation value of the user through a biosensor of a wearable device worn by the user.
Preferably, when the emotion fluctuation value is greater than a preset value, triggering a speech recognition algorithm to recognize the speech emotion of the user, and recognizing the current scene by a semantic recognition algorithm, includes:
when the heart rate in a preset time and in a non-motion state reaches a heart rate preset value, triggering a voice recognition algorithm to recognize the voice emotion of the user, and recognizing the current scene by a semantic recognition algorithm;
and/or;
when the skin electricity response value in the wearing state reaches the skin electricity preset value within the preset time, triggering a voice recognition algorithm to recognize the voice emotion of the user, and recognizing the current scene through a semantic recognition algorithm;
wherein the mood swing values include heart rate and galvanic skin response values.
Specifically, the manner of sensing mood fluctuation by the heart rate sensor is that the heart rate rises to a preset value x in a short time and in a non-motion state, wherein the preset value x is determined according to an actual test result, the preset value x is too large to trigger in time, the preset value x is too small to trigger by mistake easily, and a fixed value cannot be given.
The skin electric sensor senses the emotion fluctuation mode, namely, in a short time and in a wearing state, the skin electric reaction value floats and exceeds a preset value y, the preset value y is determined according to an actual test result, the preset value y is too large and cannot be triggered in time, the preset value y is too small and is easily triggered in error, and a fixed value cannot be given.
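A hedged sketch of this trigger condition follows, reusing the fields of the `EmotionFluctuation` sample from the earlier sketch; the numeric thresholds are placeholders, since the patent states that x and y must be calibrated through testing.

```python
# Placeholder thresholds; the patent says x and y must be calibrated by testing.
HEART_RATE_THRESHOLD_X = 120.0   # assumed bpm threshold for the non-motion state
GSR_THRESHOLD_Y = 8.0            # assumed galvanic skin response threshold

def should_trigger_recognition(sample: "EmotionFluctuation") -> bool:
    """Heart-rate condition and/or GSR condition within the preset window."""
    hr_trigger = (not sample.is_moving) and sample.heart_rate_bpm >= HEART_RATE_THRESHOLD_X
    gsr_trigger = sample.is_worn and sample.gsr_value >= GSR_THRESHOLD_Y
    return hr_trigger or gsr_trigger
```

Combining the two conditions with a logical or mirrors the "and/or" wording above: either biosignal alone is enough to start the more expensive voice and semantic recognition.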
Illustratively, triggering the voice emotion recognition and semantic recognition strategies to make an accurate judgment of the user's emotion and scene specifically includes:
when the emotion fluctuation signal is received, turning on the microphone (MIC) of the device, detecting the nearby environmental sound, and performing voice and semantic recognition.
The user's emotion, such as happiness, excitement, fear, or sadness, can be judged more accurately through the voice algorithm.
Scene recognition is carried out through the semantics of the environmental sound, and it is judged whether situations such as quarreling or fighting exist in the current scene.
Preferably, when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, automatically triggering an early warning protection mode to protect the user includes:
when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, detecting a current mode of wearable equipment worn by the user;
and based on the current mode of the wearable device, automatically triggering a corresponding early warning protection mode to protect the user.
Specifically, the wearable device may be in different modes, such as a silent state or a non-silent state.
Therefore, a different triggering manner and protection scheme is preset for each mode of the wearable device, which saves time and makes the early warning and alarm process more reasonable and faster.
Preferably, the automatically triggering a corresponding early warning protection mode based on the current mode of the wearable device to protect the user includes:
when the current mode of the wearable device is a first mode, the loudspeaker-out early warning volume of the wearable device is automatically adjusted to a first preset volume, and continuous early warning sound is sent out.
When the current mode of the wearable device is the second mode, communication is automatically established with a preset contact person, and after the communication is established, the volume of the outgoing communication of the wearable device is adjusted to the second preset volume.
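As an illustration of this mode-dependent dispatch, a minimal sketch follows; the mode names and device methods are assumptions made for illustration, not the device's actual interface.

```python
def trigger_protection(device, contact, first_volume, second_volume):
    """Dispatch the early warning protection according to the device's mode."""
    if device.current_mode() == "first":
        device.set_volume(first_volume)       # first preset volume
        device.buzz(continuous=True)          # continuous early warning sound
    elif device.current_mode() == "second":
        call = device.call(contact)           # automatically contact the preset person
        if call.connected:
            device.set_volume(second_volume)  # second preset volume for the call
```

Keeping the per-mode behavior in one dispatch function reflects the idea that a protection scheme is preset for each device mode, so no time is lost choosing one at alarm time.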
Preferably, the method further comprises:
automatically triggering the wearable device to start recording and video functions of a preset duration, so as to acquire the campus early warning information in the campus early warning scene.
Illustratively, the volume is turned up and a continuous sharp beep is emitted to attract people's attention; in a silent state, the preset contact is called, and after the call is connected the watch-end volume is adjusted to the maximum so that the parent's voice deters the person causing the campus early warning scene; when the buzzing ends, the recording and video functions of a preset duration are started in a silent state so that evidence is preserved for resolving the campus safety problem later.
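A short sketch of the evidence-preservation step is given below; the default duration, the `record`/`store` calls, and the label are illustrative assumptions.

```python
def preserve_evidence(device, duration_s=60):
    """Silently record audio and video of a preset duration and store the clip."""
    clip = device.record(audio=True, video=True, duration_s=duration_s, silent=True)
    device.store(clip, label="campus-early-warning-evidence")
    return clip
```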
In this embodiment, the method can help the user automatically start a corresponding and reasonable self-protection plan in time when facing a campus early warning; meanwhile, by combining voice emotion recognition with semantic recognition, invalid scenes can be effectively screened out and the probability of false triggering is effectively reduced.
EXAMPLE III
Based on the foregoing embodiment, the same parts as those in the foregoing embodiment are not repeated in this embodiment, and as shown in fig. 3, this embodiment provides a campus early warning protection device, which includes:
an obtaining module 201, configured to obtain an emotion fluctuation value of a user;
the triggering module 202 is configured to trigger a speech recognition algorithm to recognize speech emotion of the user when the emotion fluctuation value is greater than a preset value, and trigger a semantic recognition algorithm to recognize a current scene;
and the protection module 203 is used for automatically triggering an early warning protection mode to protect the user when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene.
Specifically, the campus early warning protection device comprises a wearable device on which biosensors are arranged; the biosensors include a heart rate sensor, a galvanic skin response sensor, and the like.
Preferably, the obtaining module includes:
a biosensor, used for obtaining the emotion fluctuation value of the user through the biosensor of the wearable device worn by the user.
Preferably, the triggering module includes:
a first triggering module, used for triggering a voice recognition algorithm to recognize the voice emotion of the user and a semantic recognition algorithm to recognize the current scene when the heart rate within a preset time and in a non-motion state reaches a heart rate preset value;
and/or;
a second triggering module, used for triggering the voice recognition algorithm to recognize the voice emotion of the user and the semantic recognition algorithm to recognize the current scene when the galvanic skin response value within the preset time and in a worn state reaches a galvanic skin response preset value;
wherein the emotion fluctuation value includes the heart rate and the galvanic skin response value.
Preferably, the protection module is specifically configured to:
when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, detecting a current mode of wearable equipment worn by the user;
and automatically triggering a corresponding early warning protection mode based on the current mode of the wearable device to protect the user.
Illustratively, triggering the voice emotion recognition and semantic recognition strategies to make an accurate judgment of the user's emotion and scene specifically includes:
when the emotion fluctuation signal is received, turning on the microphone (MIC) of the device, detecting the nearby environmental sound, and performing voice and semantic recognition.
The user's emotion, such as happiness, excitement, fear, or sadness, can be judged more accurately through the voice algorithm.
Scene recognition is carried out through the semantics of the environmental sound, and it is judged whether situations such as quarreling or fighting exist in the current scene.
Preferably, the automatically triggering a corresponding early warning protection mode based on the current mode of the wearable device to protect the user includes:
when the current mode of the wearable device is a first mode, the loudspeaker-out early warning volume of the wearable device is automatically adjusted to a first preset volume, and continuous early warning sound is sent out.
When the current mode of the wearable device is the second mode, communication is automatically established with a preset contact person, and after the communication is established, the volume of the outgoing communication of the wearable device is adjusted to the second preset volume.
Specifically, the self-protection plan for campus early warning is as follows:
1) The volume is increased and a continuous, sharp buzzing is emitted to attract people's attention;
2) In a silent state, the preset contact is called; after the call is connected, the watch-end volume is adjusted to the maximum so that the parent's voice deters the person causing the campus early warning scene;
3) When the buzzing ends, the recording and video functions of a preset duration are started in a silent state, preserving evidence for resolving the campus safety problem later.
Preferably, the method further comprises:
automatically triggering the wearable device to start recording and video functions of a preset duration, so as to acquire the campus early warning information in the campus early warning scene.
Illustratively, the volume is turned up and a continuous sharp beep is emitted to attract people's attention; in a silent state, the preset contact is called, and after the call is connected the watch-end volume is adjusted to the maximum so that the parent's voice deters the person causing the campus early warning scene; when the buzzing ends, the recording and video functions of a preset duration are started in a silent state so that evidence is preserved for resolving the campus safety problem later.
In this embodiment, the device can help the user automatically start a corresponding and reasonable self-protection plan in time when facing a campus early warning; meanwhile, by combining voice emotion recognition with semantic recognition, invalid scenes can be effectively screened out and the probability of false triggering is effectively reduced.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or recited in detail in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. The above-described apparatus/electronic device embodiments are merely exemplary; for example, the division of the described modules or units is only a logical functional division, and there may be other division manners in actual implementation. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
It should be noted that the above embodiments can be freely combined as necessary. The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and amendments can be made without departing from the principle of the present invention, and these modifications and amendments should also be considered as the protection scope of the present invention.

Claims (10)

1. A campus early warning protection method is characterized by comprising the following steps:
acquiring an emotion fluctuation value of a user;
when the emotion fluctuation value is greater than a preset value, triggering a voice recognition algorithm to recognize the voice emotion of the user and a semantic recognition algorithm to recognize the current scene;
and when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, automatically triggering an early warning protection mode to protect the user.
2. The campus early warning protection method according to claim 1, wherein the obtaining of the emotion fluctuation value of the user comprises:
obtaining the emotion fluctuation value of the user through a biosensor of a wearable device worn by the user.
3. The campus early warning protection method as claimed in claim 1, wherein when the emotion fluctuation value is greater than the preset value, triggering a voice recognition algorithm to recognize the voice emotion of the user and a semantic recognition algorithm to recognize the current scene comprises:
when the heart rate within a preset time and in a non-motion state reaches a heart rate preset value, triggering the voice recognition algorithm to recognize the voice emotion of the user and the semantic recognition algorithm to recognize the current scene;
and/or;
when the galvanic skin response value within the preset time and in a worn state reaches a galvanic skin response preset value, triggering the voice recognition algorithm to recognize the voice emotion of the user and the semantic recognition algorithm to recognize the current scene;
wherein the emotion fluctuation value comprises the heart rate and the galvanic skin response value.
4. The campus early warning protection method according to any one of claims 1 to 3, wherein when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, automatically triggering an early warning protection mode to protect the user comprises:
when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, detecting a current mode of wearable equipment worn by the user;
and based on the current mode of the wearable device, automatically triggering a corresponding early warning protection mode to protect the user.
5. The campus early warning protection method of claim 4, wherein the automatically triggering a corresponding early warning protection mode based on the current mode of the wearable device to protect the user comprises:
when the current mode of the wearable device is a first mode, automatically adjusting the external early warning volume of the wearable device to a first preset volume, and sending out continuous early warning sound;
when the current mode of the wearable device is the second mode, communication is automatically established with a preset contact person, and after the communication is established, the volume of the outgoing communication of the wearable device is adjusted to the second preset volume.
6. The campus early warning protection method of claim 5, further comprising:
and automatically triggering the wearable equipment to start a sound recording and video recording function with preset time so as to acquire the campus early warning information in the campus early warning scene.
7. A campus early warning protection device, characterized by comprising:
the obtaining module is used for obtaining the emotion fluctuation value of the user;
the triggering module is used for, when the emotion fluctuation value is greater than a preset value, triggering a voice recognition algorithm to recognize the voice emotion of the user and a semantic recognition algorithm to recognize the current scene;
and the protection module is used for automatically triggering an early warning protection mode to protect the user when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene.
8. The campus early warning protection device of claim 7, wherein the obtaining module comprises:
the biosensor is used for acquiring the emotion fluctuation value of the user through the biosensor of the wearable device worn by the user.
9. The campus early warning protection device of claim 7, wherein the triggering module comprises:
the first trigger module is used for triggering a voice recognition algorithm to recognize the voice emotion of the user and a semantic recognition algorithm to recognize the current scene when the heart rate in a preset time and in a non-motion state reaches a preset heart rate value;
and/or;
the second triggering module is used for triggering a voice recognition algorithm to recognize the voice emotion of the user and a semantic recognition algorithm to recognize the current scene when the skin electricity reaction value in the wearing state reaches the skin electricity preset value within the preset time;
wherein the mood swing values include heart rate and galvanic skin response values.
10. The campus early warning protection device according to any one of claims 7 to 9, wherein the protection module is specifically configured to:
when the voice emotion of the user is a preset campus early warning emotion and the current scene is a campus early warning scene, detecting a current mode of wearable equipment worn by the user;
and based on the current mode of the wearable device, automatically triggering a corresponding early warning protection mode to protect the user.
CN202110607953.4A 2021-06-01 2021-06-01 Campus early warning protection method and device Pending CN115440000A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110607953.4A CN115440000A (en) 2021-06-01 2021-06-01 Campus early warning protection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110607953.4A CN115440000A (en) 2021-06-01 2021-06-01 Campus early warning protection method and device

Publications (1)

Publication Number Publication Date
CN115440000A true CN115440000A (en) 2022-12-06

Family

ID=84240330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110607953.4A Pending CN115440000A (en) 2021-06-01 2021-06-01 Campus early warning protection method and device

Country Status (1)

Country Link
CN (1) CN115440000A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765869A (en) * 2018-05-31 2018-11-06 深圳市零度智控科技有限公司 Children's safety wrist-watch based on recognition of face
CN109275070A (en) * 2018-10-23 2019-01-25 广东小天才科技有限公司 A kind of early warning and reminding method, device and terminal device for preventing microphone impaired
CN109407504A (en) * 2018-11-30 2019-03-01 华南理工大学 A kind of personal safety detection system and method based on smartwatch
CN110263653A (en) * 2019-05-23 2019-09-20 广东鼎义互联科技股份有限公司 A kind of scene analysis system and method based on depth learning technology
CN111564021A (en) * 2019-02-14 2020-08-21 阿里巴巴集团控股有限公司 Wearable prompting device, prompting method and smart watch
CN111820880A (en) * 2020-07-16 2020-10-27 深圳鞠慈云科技有限公司 Campus overlord early warning system and method


Similar Documents

Publication Publication Date Title
CN109407504B (en) Personal safety detection system and method based on smart watch
US20210056981A1 (en) Systems and methods for managing an emergency situation
CN103124944B (en) Use the bio signal for controlling user's alarm
US20210287522A1 (en) Systems and methods for managing an emergency situation
US20090315719A1 (en) Fall accident detection apparatus and method
CN109410521A (en) Voice monitoring alarm method and system
CN107146386B (en) Abnormal behavior detection method and device, and user equipment
CN110525456B (en) Train safe driving monitoring system and method
KR20120133979A (en) System of body gard emotion cognitive-based, emotion cognitive device, image and sensor controlling appararus, self protection management appararus and method for controlling the same
CN106303961A (en) Automatic alarm method and equipment
CN110493474A (en) A kind of data processing method, device and electronic equipment
US20190282127A1 (en) System and method for early detection of transient ischemic attack
CN113205661A (en) Anti-cheating implementation method and system, intelligent wearable device and storage medium
CN115590516A (en) Emotion reminding method, bracelet with emotion reminding function and related device
CN106856416B (en) Incoming call reminding method of wearable device and wearable device
CN107644509A (en) Intelligent watch and Related product
EP3756176A1 (en) A wearable alarm device and a method of use thereof
CN115440000A (en) Campus early warning protection method and device
US10880655B2 (en) Method for operating a hearing apparatus system, and hearing apparatus system
CN112089970A (en) User state monitoring method, neck massager and device
CN114821962B (en) Triggering method, triggering device, triggering terminal and storage medium for emergency help function
CN105997084B (en) A kind of detection method and device of human body implication
CN109343710A (en) Message back method and apparatus
CN108494956A (en) A kind of intelligent wearable device based reminding method and intelligent wearable device
JP2002183859A (en) Abnormality discriminating and reporting apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230816

Address after: No.168, Dongmen Middle Road, Xiaobian community, Chang'an Town, Dongguan City, Guangdong Province

Applicant after: Guangdong Xiaotiancai Technology Co.,Ltd.

Address before: 523851 east side of the 15th floor, 168 dongmenzhong Road, Xiaobian community, Chang'an Town, Dongguan City, Guangdong Province

Applicant before: GUANGDONG AIMENG ELECTRONIC TECHNOLOGY CO.,LTD.