CN118262475A - AI intelligent sound wave auxiliary campus anti-cheating system - Google Patents

AI intelligent sound wave auxiliary campus anti-cheating system

Info

Publication number
CN118262475A
CN118262475A (Application CN202410685832.5A)
Authority
CN
China
Prior art keywords
sound
spoofing
behavior
campus
amplitude
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410685832.5A
Other languages
Chinese (zh)
Other versions
CN118262475B (en
Inventor
傅海鑫
王春风
贾雪丽
马晓东
杨晓广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Xiong'an Yijing Cloud Technology Co ltd
Original Assignee
Hebei Xiong'an Yijing Cloud Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Xiong'an Yijing Cloud Technology Co ltd filed Critical Hebei Xiong'an Yijing Cloud Technology Co ltd
Priority to CN202410685832.5A priority Critical patent/CN118262475B/en
Publication of CN118262475A publication Critical patent/CN118262475A/en
Application granted granted Critical
Publication of CN118262475B publication Critical patent/CN118262475B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01D - MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00 - Measuring or testing not otherwise provided for
    • G01D21/02 - Measuring two or more variables by means not covered by a single other subclass
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education
    • G06Q50/205 - Education administration or guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 - Recognition of crowd images, e.g. recognition of crowd congestion
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00 - Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/01 - Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
    • G08B25/08 - Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium using communication transmission lines
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/02 - Feature extraction for speech recognition; Selection of recognition unit
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 - Training
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Educational Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Evolutionary Computation (AREA)
  • Emergency Management (AREA)
  • Signal Processing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Educational Administration (AREA)
  • Evolutionary Biology (AREA)
  • General Business, Economics & Management (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Alarm Systems (AREA)

Abstract

The invention relates to the technical field of campus safety alarms, and in particular to an AI intelligent sound-wave-assisted campus anti-bullying system, which comprises: capturing ambient sound through a plurality of sound sensors deployed inside the campus; a sound feature extraction module that extracts sound features related to bullying behavior and integrates a machine learning classification algorithm, so that sound signals in the environment are monitored in real time, the sound features extracted in real time are fed into the machine learning classification algorithm, and whether bullying behavior exists is determined from the analysis result; an environment monitoring module whose observations are combined with the analysis result of the sound feature extraction module to assist in confirming bullying behavior; triggering an alarm when bullying behavior is identified; and a positioning module that determines the precise location of the alarm event from the position information of the sound sensors combined with a positioning algorithm and transmits it to the receiving equipment. The invention achieves timely response to and handling of bullying events and effectively improves campus safety.

Description

AI intelligent sound wave auxiliary campus anti-cheating system
Technical Field
The invention relates to the technical field of campus safety alarms, and in particular to an AI intelligent sound-wave-assisted campus anti-bullying system.
Background
With the development of society and the popularization of education, campus safety has drawn increasing attention. Bullying has become a serious social problem on campuses and causes severe harm to students' physical and mental health. Traditional campus safety monitoring systems generally rely on equipment such as cameras and alarms, but such equipment usually only records incidents after they occur; it lacks real-time monitoring and early-warning functions and cannot effectively prevent bullying. An innovative campus anti-bullying system is therefore urgently needed that can monitor the campus environment in real time and accurately identify bullying behavior, so as to improve campus safety and protect the physical and mental health of teachers and students.
To address these problems, the invention provides an AI intelligent sound-wave-assisted campus anti-bullying system. By deploying a plurality of sound sensors and quick alarm buttons at key positions in the campus, the sound signals in the campus environment are monitored in real time using the sound feature extraction module, and whether bullying behavior exists is accurately identified from the comparative analysis results, thereby improving campus safety and safeguarding the physical and mental health of teachers and students.
Disclosure of Invention
Based on the above purpose, the invention provides an AI intelligent sound-wave-assisted campus anti-bullying system.
The AI intelligent sound-wave-assisted campus anti-bullying system includes:
Capturing ambient sound by a plurality of sound sensors deployed inside the campus;
a sound feature extraction module is introduced, which analyzes the frequency spectrum, amplitude and duration of the sound signals, extracts sound features related to bullying behavior, and constructs a sound pattern library of bullying behavior; a machine learning classification algorithm is integrated with the sound pattern library, sound signals in the environment are monitored in real time, the sound features extracted in real time are fed into the machine learning classification algorithm, and whether bullying behavior exists is determined from the analysis result;
an environment monitoring module is introduced, which uses cameras and temperature sensors deployed in the campus; when the environment monitoring module detects abnormal gatherings of people or abnormal temperature changes, it is combined with the analysis result of the sound feature extraction module to assist in confirming bullying behavior;
when bullying behavior is identified, an alarm is triggered and related information is sent to preset receiving equipment, including smartphones of the campus security office or relevant teaching staff;
a positioning module is introduced, and the precise location of the alarm event is determined from the position information of the sound sensors combined with a positioning algorithm and transmitted to the receiving equipment.
Further, the sound sensors are arranged according to the different areas in the campus, including classrooms, corridors, playgrounds, canteens, libraries and stairwell blind-spot areas.
Further, the sound feature extraction module specifically includes:
Spectral analysis: the sound signal is converted into a spectrogram using a fast Fourier transform algorithm, frequency distribution features in the spectrogram are extracted, including the dominant frequency and the spectral bandwidth, and spectral analysis techniques are used to identify, based on the spectrogram features, frequency ranges or spectral shapes related to bullying behavior, including sharp sound waves and high-frequency noise;
Amplitude feature extraction: an amplitude detection algorithm is used to measure the amplitude value of the sound signal and analyze its variation trend, and amplitude features are extracted from the fluctuation of the amplitude, including the variation frequency of the amplitude and the absolute amplitude value; the relation between the amplitude features and bullying behavior is analyzed and amplitude patterns characteristic of bullying behavior are identified. The amplitude is calculated as $A = \sqrt{\tfrac{1}{N}\sum_{n=1}^{N} x(n)^2}$, where $A$ is the amplitude value, $x(n)$ is the time-domain signal and $N$ is the number of sampling points of the signal. The short-time amplitude change rate reflects the change of the sound amplitude over a short time and is expressed as $R = \dfrac{A_{t+\Delta t} - A_t}{\Delta t}$,
where $R$ is the short-time amplitude change rate, $A_t$ is the amplitude value at the current moment, $A_{t+\Delta t}$ is the amplitude value at the next moment and $\Delta t$ is the time interval;
Duration analysis: a duration analysis algorithm is used to calculate the duration of the sound signal and determine its duration characteristics; the persistence of the sound signal is analyzed comprehensively, combining the duration characteristics with the frequency spectrum and amplitude of the sound signal, in order to distinguish bullying behavior from other normal sounds. The duration is calculated as $D = \dfrac{N}{f_s}$,
where $D$ is the duration, $N$ is the number of samples of the signal and $f_s$ is the sampling rate of the signal.
Further, the machine learning classification algorithm uses a support vector machine model to classify and pattern-recognize the extracted sound features; based on a large number of sound samples of known bullying behavior, the model is trained to recognize the sound features of bullying behavior, and a sound pattern library of bullying behavior is constructed from the classification and pattern recognition results, including the sound features of bullying behavior and their corresponding classification labels, for subsequent use in sound recognition and behavior detection.
Further, the step of classifying and pattern identifying the extracted sound features by the support vector machine model includes:
spectral features, amplitude features and duration features are extracted, and the extracted sound features are standardized so that all features share the same dimension and scale;
Data splitting: the prepared sound sample data set is divided into a training set and a test set, where the training set is used to train the support vector machine model and the test set is used to evaluate the model's performance;
Model training: the support vector machine model is trained with the sound features of the training set and their corresponding labels (bullying or non-bullying); the support vector machine separates the two classes of sound samples by finding the optimal decision boundary, such that the minimum distance (i.e., the margin) from the samples of each class to the hyperplane is maximized;
Model evaluation: the trained support vector machine model is evaluated with the sound features of the test set; metrics such as accuracy, recall and F1 score are calculated to assess the model's performance and generalization ability, and the parameters of the support vector machine model are optimized according to the evaluation results;
the trained support vector machine model is applied to real-time monitoring to classify and pattern-recognize the sound features extracted in real time; when the sound features monitored in real time match the known sound features of bullying behavior, the support vector machine model outputs a bullying classification result, an alarm is triggered and corresponding measures are taken.
Further, the support vector machine model separates different classes of sound samples by learning an optimal decision boundary, and the decision function is expressed as:
$f(x) = \operatorname{sign}\left(\sum_{i=1}^{n} \alpha_i y_i K(x_i, x) + b\right)$
where $f(x)$ is the decision function used to judge which category the sample $x$ belongs to, i.e. the bullying label or the non-bullying label, $\alpha_i$ is the Lagrange multiplier of training sample $i$, used to weight the support vectors, $y_i$ is the label of training sample $i$, representing its category, $x_i$ is the feature vector of training sample $i$, $K(x_i, x)$ is a kernel function used to measure the similarity between samples, and $b$ is a bias term used to adjust the position of the decision boundary.
Further, the environment monitoring module specifically includes:
Camera image analysis: camera images in the campus are monitored and analyzed in real time using computer vision techniques; crowd-dense areas are identified with the object detection algorithm YOLO, and the degree of crowding and the gathering situation are analyzed;
Temperature sensor data analysis: temperature data of each area in the campus are acquired in real time and a threshold for temperature change is set; when the temperature exceeds or falls below the set threshold, it is regarded as an abnormal condition;
the sound features extracted by the sound feature extraction module and the data from the cameras and temperature sensors are subjected to comprehensive decision analysis; when a crowd-dense area or an abnormal temperature change is detected, the sound features are analyzed at the same time, and if the sound features contain sound patterns related to bullying behavior, it is determined that bullying behavior has occurred.
Further, the comprehensive decision analysis is based on fuzzy logic and specifically includes:
Analysis of camera data: the crowd density detected by the camera is denoted by the variable $C$, with value range $[0, 1]$, where 0 represents no people and 1 represents extremely dense;
Analysis of temperature sensor data: the temperature change detected by the temperature sensor is denoted by the variable $T$, with value range $[0, 1]$, where 0 represents normal temperature and 1 represents an abnormal change;
Analysis result of the sound feature extraction module: the bullying-related sound feature obtained by the sound feature extraction module is denoted by the variable $S$, with value range $[0, 1]$, where 0 indicates that no sound feature of bullying behavior is detected and 1 indicates that a distinct bullying behavior is detected;
Comprehensive evaluation with fuzzy logic: fuzzy logic is used to evaluate the three variables $C$, $T$ and $S$ jointly to obtain a comprehensive determination of bullying behavior:
fuzzy rules and membership functions are set, the values of the three variables $C$, $T$ and $S$ are input into the fuzzy logic for fuzzy inference, and the probability of bullying behavior is obtained, expressed as a variable $B$ with value range $[0, 1]$, where 0 indicates no bullying and 1 indicates obvious bullying.
Further, the positioning algorithm adopts a triangulation algorithm that uses the position information of the sound sensors, which includes the specific position coordinates of each sound sensor within the campus, and calculates the location of the sound event by combining the time-difference-of-arrival information of the sound signals detected by the sound sensors, specifically including:
recording the position information of each sound sensor, including longitude and latitude coordinates or relative positional relationships;
time-difference-of-arrival measurement of the sound signals: when a sound event occurs, the arrival time of the sound signal at each sound sensor is recorded, and the relative position of the sound event is calculated from the arrival time differences;
the position information of the sound sensors and the arrival time differences of the sound signals are input into the triangulation algorithm, and the precise position coordinates of the sound event are calculated.
Further, quick alarm buttons are also provided at the installation positions of the plurality of sound sensors, and the quick alarm buttons are connected to the preset receiving equipment.
The invention has the beneficial effects that:
In the intelligent sound-wave-assisted campus anti-bullying system, a plurality of sound sensors and quick alarm buttons are deployed together at key positions in the campus, which effectively improves campus security. The sound feature extraction module is used to construct a sound pattern library of bullying behavior, so the system can monitor the sound signals in the campus environment in real time and accurately identify whether bullying behavior exists from the classification analysis results. At the same time, a victim can raise an alarm quickly through the quick alarm button, and related information is sent in real time to the campus security office or relevant teaching staff, so that bullying events are responded to and handled in a timely manner and campus safety is effectively improved.
By introducing the environment monitoring module and combining equipment such as the cameras and temperature sensors deployed in the campus, environmental changes such as abnormal gatherings of people and temperature changes are monitored, which strengthens the campus early-warning and prevention mechanism. The system judges whether bullying behavior exists more accurately by comprehensively evaluating the analysis results of the environment monitoring module and the sound feature extraction module; at the same time, the search range for the event position is limited according to campus map information and path planning, which improves the accuracy and reliability of the early-warning and prevention mechanism and effectively reduces the occurrence of campus bullying events.
A support vector machine classification algorithm is used to classify and pattern-recognize the extracted sound features, and the model is trained on a large number of sound samples of known bullying behavior, so the system can recognize the sound features of bullying behavior more accurately. At the same time, the position coordinates of the alarm event are accurately determined from the position information of the sound sensors and the positioning algorithm and transmitted to the receiving equipment, which optimizes the handling and rescue process: the campus security office or relevant teaching staff can quickly learn where and how the event is occurring and take effective measures in time, thereby protecting the safety and health of teachers and students to the greatest extent.
Drawings
In order to illustrate the invention or the technical solutions of the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings described below are merely drawings of the invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of functional modules of a system according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a machine learning classification algorithm according to an embodiment of the invention.
Detailed Description
The present invention will be further described in detail with reference to specific embodiments in order to make the objects, technical solutions and advantages of the present invention more apparent.
It is to be noted that, unless otherwise defined, technical or scientific terms used herein should be taken in the general sense understood by one of ordinary skill in the art to which the present invention belongs. The terms "first", "second" and the like used herein do not denote any order, quantity or importance, but are merely used to distinguish one element from another. Words such as "comprising" or "comprises" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Terms such as "connected" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right" and the like are used merely to indicate relative positional relationships, which may change when the absolute position of the object being described changes.
As shown in figs. 1-2, the AI intelligent sound-wave-assisted campus anti-bullying system includes:
Capturing ambient sound by a plurality of sound sensors deployed inside the campus;
a sound feature extraction module is introduced, which analyzes the frequency spectrum, amplitude and duration of the sound signals, extracts sound features related to bullying behavior, and constructs a sound pattern library of bullying behavior; a machine learning classification algorithm is integrated with the sound pattern library, sound signals in the environment are monitored in real time, the sound features extracted in real time are fed into the machine learning classification algorithm, and whether bullying behavior exists is determined from the analysis result;
an environment monitoring module is introduced, which uses cameras and temperature sensors deployed in the campus; when it detects abnormal gatherings of people or abnormal temperature changes, it is combined with the analysis result of the sound feature extraction module to assist in confirming bullying behavior. The environment monitoring module first uses the cameras deployed in the campus to monitor the activity of people in each area in real time. Through image analysis, the system can identify abnormal gatherings, such as a dense crowd or a group of people surrounding a student. At the same time, temperature sensors are used to monitor temperature changes in each area of the campus; abnormal temperature changes may indicate abnormal activity, such as a gathering crowd or the emotional agitation of certain people. When the environment monitoring module detects an abnormal gathering of people or an abnormal temperature change, the system makes a comprehensive judgment in combination with the analysis results of the sound feature extraction module;
when bullying behavior is identified, an alarm is triggered and related information is sent to preset receiving equipment, including smartphones of the campus security office or relevant teaching staff;
a positioning module is introduced, and the precise location of the alarm event is determined from the position information of the sound sensors combined with a positioning algorithm and transmitted to the receiving equipment.
The sound sensors are arranged according to the different areas in the campus, covering areas where campus bullying is likely to occur and where it is easily concealed, such as classrooms, corridors, playgrounds, canteens, libraries and stairwell blind spots.
Classroom: a sound sensor is installed in a corner or on the ceiling of each classroom so that sound in the classroom is fully covered and blind spots are reduced as much as possible;
Corridor: sound sensors are installed along both sides of the corridor to capture the sound of students' activities and conversations in the corridor;
Playground: sound sensors are installed around the playground or at elevated positions, covering every corner of the playground, to monitor students' activities on the playground;
Canteen: sound sensors are installed on the ceiling or walls of the canteen to monitor students' conversations and any quarrels that occur in the canteen;
Library: sound sensors are installed in the reading and study areas of the library to monitor the students' learning environment.
The sound feature extraction module specifically comprises:
Spectral analysis: the sound signal is converted into a spectrogram using a fast Fourier transform algorithm, frequency distribution features in the spectrogram are extracted, including the dominant frequency and the spectral bandwidth, and spectral analysis techniques are used to identify, based on the spectrogram features, frequency ranges or spectral shapes related to bullying behavior, including sharp sound waves and high-frequency noise. The fast Fourier transform converts a time-domain signal into a frequency-domain signal:
$X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi k n / N}$
where $X(k)$ is the spectral signal, $x(n)$ is the input time-domain signal, $N$ is the number of samples of the signal and $k$ is the index of the spectral component. Sound signals in the campus environment are collected and converted into spectrograms by the fast Fourier transform algorithm, and the spectrograms are used to analyze the frequency distribution characteristics of the sound signals;
Amplitude feature extraction: an amplitude detection algorithm is used to measure the amplitude value of the sound signal and analyze its variation trend, and amplitude features are extracted from the fluctuation of the amplitude, including the variation frequency of the amplitude and the absolute amplitude value; the relation between the amplitude features and bullying behavior is analyzed and amplitude patterns characteristic of bullying behavior are identified. The amplitude is calculated as $A = \sqrt{\tfrac{1}{N}\sum_{n=1}^{N} x(n)^2}$, where $A$ is the amplitude value, $x(n)$ is the time-domain signal and $N$ is the number of sampling points of the signal. The short-time amplitude change rate reflects the change of the sound amplitude over a short time and is expressed as $R = \dfrac{A_{t+\Delta t} - A_t}{\Delta t}$,
where $R$ is the short-time amplitude change rate, $A_t$ is the amplitude value at the current moment, $A_{t+\Delta t}$ is the amplitude value at the next moment and $\Delta t$ is the time interval;
Duration analysis: a duration analysis algorithm is used to calculate the duration of the sound signal and determine its duration characteristics; the persistence of the sound signal is analyzed comprehensively, combining the duration characteristics with the frequency spectrum and amplitude of the sound signal, in order to distinguish bullying behavior from other normal sounds. The duration is calculated as $D = \dfrac{N}{f_s}$, where $D$ is the duration, $N$ is the number of samples of the signal and $f_s$ is the sampling rate of the signal. A sound of long duration with abnormal spectral or amplitude characteristics indicates the possible presence of bullying behavior.
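As a purely illustrative sketch (not part of the patent text), the spectral, amplitude and duration features described above could be computed roughly as follows in Python; the function name, the 50 ms frame length and the RMS-based amplitude definition are assumptions made for the example.

```python
import numpy as np

def extract_sound_features(x, fs):
    """Compute simple spectral, amplitude and duration features of a mono signal x sampled at fs Hz."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    # Spectral analysis: magnitude spectrum, dominant frequency and spectral bandwidth.
    spectrum = np.abs(np.fft.rfft(x)) + 1e-12
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    dominant_freq = freqs[np.argmax(spectrum)]
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    bandwidth = np.sqrt(np.sum(((freqs - centroid) ** 2) * spectrum) / np.sum(spectrum))

    # Amplitude features: frame-wise RMS amplitude and short-time change rate R = (A_{t+dt} - A_t) / dt.
    frame = max(1, int(0.05 * fs))                 # 50 ms analysis frames (assumed)
    starts = range(0, max(N - frame, 1), frame)
    rms = np.array([np.sqrt(np.mean(x[i:i + frame] ** 2)) for i in starts])
    change_rate = np.diff(rms) / (frame / fs)

    # Duration: D = N / fs.
    duration = N / fs

    return {
        "dominant_freq": float(dominant_freq),
        "bandwidth": float(bandwidth),
        "rms_mean": float(np.mean(rms)),
        "rms_change_rate_max": float(np.max(np.abs(change_rate))) if change_rate.size else 0.0,
        "duration": duration,
    }
```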
The machine learning classification algorithm uses a support vector machine model to classify and pattern-recognize the extracted sound features; based on a large number of sound samples of known bullying behavior, the model is trained to recognize the sound features of bullying behavior, and a sound pattern library of bullying behavior is constructed from the classification and pattern recognition results, including the sound features of bullying behavior and their corresponding classification labels, for subsequent sound recognition and behavior detection.
The extracted sound features are stored as sample data in the pattern library, where each sample is associated with a bullying incident and carries a label indicating whether it belongs to the bullying category. The sound pattern library of bullying behavior is updated and maintained periodically to keep it consistent with actual conditions: newly collected sound samples of bullying behavior are added to the pattern library after feature extraction, and each sound sample records related details such as the collection time, the place and the specific content of the sound, for subsequent analysis and reference.
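By way of illustration only, one possible way to store such pattern-library records is sketched below; the field names and types are assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class SoundSample:
    """One labelled sound sample in the bullying sound-pattern library."""
    features: Dict[str, float]        # e.g. output of extract_sound_features()
    label: int                        # 1 = bullying, 0 = non-bullying
    collected_at: datetime            # collection time
    location: str                     # where the sample was recorded
    note: str = ""                    # specific content of the sound, etc.

@dataclass
class SoundPatternLibrary:
    samples: List[SoundSample] = field(default_factory=list)

    def add(self, sample: SoundSample) -> None:
        # Newly collected samples are added after feature extraction.
        self.samples.append(sample)
```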
The step of classifying and pattern recognition of the extracted sound features by the support vector machine model comprises the following steps:
spectral features, amplitude features and duration features are extracted, and the extracted sound features are standardized so that all features share the same dimension and scale;
Data splitting: the prepared sound sample data set is divided into a training set and a test set, where the training set is used to train the support vector machine model and the test set is used to evaluate the model's performance;
Model training: the support vector machine model is trained with the sound features of the training set and their corresponding labels (bullying or non-bullying); the support vector machine separates the two classes of sound samples by finding the optimal decision boundary, such that the minimum distance (i.e., the margin) from the samples of each class to the hyperplane is maximized;
Model evaluation: the trained support vector machine model is evaluated with the sound features of the test set; metrics such as accuracy, recall and F1 score are calculated to assess the model's performance and generalization ability, and the parameters of the support vector machine model are optimized according to the evaluation results;
the trained support vector machine model is applied to real-time monitoring to classify and pattern-recognize the sound features extracted in real time; when the sound features monitored in real time match the known sound features of bullying behavior, the support vector machine model outputs a bullying classification result, an alarm is triggered and corresponding measures are taken.
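A hedged sketch of the training and evaluation workflow described above is shown below, using scikit-learn's SVC as one possible support vector machine implementation (the patent does not name a library); the feature files and the RBF-kernel hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, f1_score

# X: (n_samples, n_features) array of spectral/amplitude/duration features
# y: array of labels, 1 = bullying, 0 = non-bullying (file names are placeholders)
X = np.load("sound_features.npy")
y = np.load("sound_labels.npy")

# Split the data set into a training set and a test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Standardize the features and train an SVM with an RBF kernel.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)

# Evaluate accuracy, recall and F1 score on the test set.
y_pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print("recall:  ", recall_score(y_test, y_pred))
print("F1 score:", f1_score(y_test, y_pred))
```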
The support vector machine model separates different classes of sound samples by learning an optimal decision boundary, and the decision function is expressed as:
$f(x) = \operatorname{sign}\left(\sum_{i=1}^{n} \alpha_i y_i K(x_i, x) + b\right)$
where $f(x)$ is the decision function used to judge which category the sample $x$ belongs to, i.e. the bullying label or the non-bullying label, $\alpha_i$ is the Lagrange multiplier of training sample $i$, used to weight the support vectors, $y_i$ is the label of training sample $i$, representing its category, $x_i$ is the feature vector of training sample $i$, $K(x_i, x)$ is a kernel function used to measure the similarity between samples, and $b$ is a bias term used to adjust the position of the decision boundary.
The environment monitoring module specifically comprises:
Camera image analysis: camera images in the campus are monitored and analyzed in real time using computer vision techniques; crowd-dense areas are identified with the object detection algorithm YOLO, and the degree of crowding and the gathering situation are analyzed;
Temperature sensor data analysis: temperature data of each area in the campus are acquired in real time and a threshold for temperature change is set; when the temperature exceeds or falls below the set threshold, it is regarded as an abnormal condition;
the sound features extracted by the sound feature extraction module and the data from the cameras and temperature sensors are subjected to comprehensive decision analysis; when a crowd-dense area or an abnormal temperature change is detected, the sound features are analyzed at the same time, and if the sound features contain sound patterns related to bullying behavior, it is determined that bullying behavior has occurred, which improves accuracy. The sound patterns related to bullying behavior include screams and the sounds of quarrels and, more specifically, phrases such as "help", "stop it", "someone is being beaten", "don't hit me" and "teacher, come quickly".
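The camera and temperature inputs to this fusion step could look roughly like the sketch below; the use of the ultralytics YOLO package, the normalization by an assumed maximum crowd size and the temperature thresholds are all illustrative assumptions rather than details from the patent.

```python
from ultralytics import YOLO  # one possible YOLO implementation; the patent only names "YOLO"

detector = YOLO("yolov8n.pt")

def crowd_density(frame, max_people=30):
    """Return a crowd-density score in [0, 1] from the number of detected persons in a camera frame."""
    result = detector(frame, verbose=False)[0]
    persons = sum(1 for c in result.boxes.cls if int(c) == 0)   # class 0 = "person" in COCO
    return min(persons / max_people, 1.0)

def temperature_anomaly(temp_c, low=15.0, high=30.0):
    """Return 1.0 if the temperature leaves the configured normal band, else 0.0 (thresholds assumed)."""
    return 1.0 if (temp_c < low or temp_c > high) else 0.0
```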
The comprehensive decision analysis is based on fuzzy logic and specifically includes:
Analysis of camera data: the crowd density detected by the camera is denoted by the variable $C$, with value range $[0, 1]$, where 0 represents no people and 1 represents extremely dense;
Analysis of temperature sensor data: the temperature change detected by the temperature sensor is denoted by the variable $T$, with value range $[0, 1]$, where 0 represents normal temperature and 1 represents an abnormal change;
Analysis result of the sound feature extraction module: the bullying-related sound feature obtained by the sound feature extraction module is denoted by the variable $S$, with value range $[0, 1]$, where 0 indicates that no sound feature of bullying behavior is detected and 1 indicates that a distinct bullying behavior is detected;
Comprehensive evaluation with fuzzy logic: fuzzy logic is used to evaluate the three variables $C$, $T$ and $S$ jointly to obtain a comprehensive determination of bullying behavior:
fuzzy rules and membership functions are set, the values of the three variables $C$, $T$ and $S$ are input into the fuzzy logic for fuzzy inference, and the probability of bullying behavior is obtained, expressed as a variable $B$ with value range $[0, 1]$, where 0 indicates no bullying and 1 indicates obvious bullying.
The fuzzy rules and membership functions are as follows.
Fuzzification of the camera data: the crowd density $C$ is divided into three membership functions, Low, Medium and High, defined as follows:
Low: the membership decreases linearly over the lower part of the range of $C$;
Medium: the membership rises and then falls linearly over the middle part of the range of $C$;
High: the membership increases linearly over the upper part of the range of $C$;
Fuzzification of the temperature sensor data: the temperature change $T$ is divided into three membership functions, Low, Medium and High, defined as follows:
Low: the membership decreases linearly over the lower part of the range of $T$;
Medium: the membership rises and then falls linearly over the middle part of the range of $T$;
High: the membership increases linearly over the upper part of the range of $T$;
Fuzzification of the sound feature data: the sound feature $S$ is divided into two membership functions, Low and High, defined as follows:
Low: the membership decreases linearly over the lower part of the range of $S$;
High: the membership increases linearly over the upper part of the range of $S$.
The fuzzy rules are defined as follows:
IF $C$ is High AND $T$ is High AND $S$ is High THEN $B$ is High (when the crowd density, the temperature change and the sound feature are all high, bullying behavior is likely present);
IF $C$ is Low AND $T$ is Low AND $S$ is Low THEN $B$ is Low (when the crowd density, the temperature change and the sound feature are all low, bullying behavior is unlikely).
Other fuzzy rules may be further defined based on actual conditions and empirical knowledge.
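A minimal plain-Python sketch of the fuzzy evaluation described above follows; the membership-function breakpoints (0.4 and 0.6) and the weighted-average defuzzification are illustrative assumptions, since the patent text does not reproduce the exact values or method.

```python
def mu_low(v, hi=0.4):
    """Membership in 'Low': 1 at v = 0, decreasing linearly to 0 at v = hi (breakpoint assumed)."""
    return max(0.0, 1.0 - v / hi)

def mu_high(v, lo=0.6):
    """Membership in 'High': 0 at v = lo, increasing linearly to 1 at v = 1 (breakpoint assumed)."""
    return max(0.0, min(1.0, (v - lo) / (1.0 - lo)))

def bullying_probability(C, T, S):
    """Fuzzy inference over crowd density C, temperature change T and sound feature S, all in [0, 1]."""
    # Rule 1: IF C is High AND T is High AND S is High THEN B is High
    r_high = min(mu_high(C), mu_high(T), mu_high(S))
    # Rule 2: IF C is Low AND T is Low AND S is Low THEN B is Low
    r_low = min(mu_low(C), mu_low(T), mu_low(S))
    # Defuzzify with a weighted average of the rule consequents (assumed method).
    if r_high + r_low == 0.0:
        return S  # fall back on the sound evidence when neither rule fires strongly
    return (r_high * 1.0 + r_low * 0.0) / (r_high + r_low)

print(bullying_probability(C=0.8, T=0.7, S=0.9))   # close to 1: bullying is likely
```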
The positioning algorithm adopts a triangulation algorithm that uses the position information of the sound sensors, which includes the specific position coordinates of each sound sensor within the campus, and calculates the location of the sound event by combining the time-difference-of-arrival information of the sound signals detected by the sound sensors, specifically including:
recording the position information of each sound sensor, including longitude and latitude coordinates or relative positional relationships;
time-difference-of-arrival measurement of the sound signals: when a sound event occurs, the arrival time of the sound signal at each sound sensor is recorded, and the relative position of the sound event is calculated from the arrival time differences;
the position information of the sound sensors and the arrival time differences of the sound signals are input into the triangulation algorithm, and the precise position coordinates of the sound event are calculated.
Assume three sound sensors located at $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$. When a sound event occurs, the sensors record arrival times $t_1$, $t_2$ and $t_3$, and the distances from the sensors to the event position are denoted $d_1$, $d_2$ and $d_3$.
Calculating the distance from each sensor to the event location: using the arrival times and the sound propagation velocity $v$, the distance from sound sensor $i$ to the event location is $d_i = v\,(t_i - t_0)$,
where $t_0$ is the moment at which the event occurs.
Calculating the event position coordinates: the position coordinates $(x, y)$ of the event are calculated with the triangulation algorithm from the three distances. First, the relative distances are obtained from the arrival time differences $\Delta t_{21} = t_2 - t_1$ and $\Delta t_{31} = t_3 - t_1$;
the corresponding relative distances are $\Delta d_{21} = v\,\Delta t_{21}$ and $\Delta d_{31} = v\,\Delta t_{31}$;
the position of the event is then obtained from the equations $\sqrt{(x - x_2)^2 + (y - y_2)^2} - \sqrt{(x - x_1)^2 + (y - y_1)^2} = \Delta d_{21}$ and $\sqrt{(x - x_3)^2 + (y - y_3)^2} - \sqrt{(x - x_1)^2 + (y - y_1)^2} = \Delta d_{31}$, and solving this system of equations yields the event position coordinates $(x, y)$.
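One way to solve the reconstructed time-difference-of-arrival equations numerically is a least-squares fit, sketched below with SciPy; the sensor coordinates, arrival times and speed of sound are example values only.

```python
import numpy as np
from scipy.optimize import least_squares

V = 343.0                                                     # speed of sound in m/s (room temperature)
sensors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 15.0]])    # example sensor coordinates in metres
t = np.array([0.052, 0.031, 0.044])                           # example arrival times in seconds

def residuals(p):
    """Residuals of the TDOA equations relative to sensor 1 for a candidate position p = (x, y)."""
    d = np.linalg.norm(sensors - p, axis=1)   # distances from p to each sensor
    dd_meas = V * (t[1:] - t[0])              # measured relative distances
    dd_model = d[1:] - d[0]                   # modelled relative distances
    return dd_model - dd_meas

guess = sensors.mean(axis=0)                  # start from the centroid of the sensors
sol = least_squares(residuals, guess)
print("estimated event position:", sol.x)
```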
The positioning algorithm also incorporates campus map information, including buildings, room layouts and floor structures. This information is obtained from campus floor plans, three-dimensional models and the like.
The position information of the sound sensors is mapped onto the campus map, and the specific position of each sensor on the map is determined.
The campus map information is introduced into the positioning algorithm as constraint conditions to limit the search range for the event position; a constrained optimization method is used, with the boundary of the campus map and the known building outlines as constraints, which the positioning algorithm takes into account when searching for the event position, ensuring that the result lies within a reasonable range.
It is ensured that the calculated position coordinates are consistent with the actual layout on the campus map, so that the event can be located accurately.
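Restricting the search to the campus boundary, as described above, can be sketched by adding box bounds to the same least-squares fit from the TDOA example; the 200 m by 150 m campus rectangle below is a placeholder, and building-outline checks would be added on top of it in the same way.

```python
# Continuing the TDOA sketch above: constrain the solution to an assumed campus boundary.
sol = least_squares(residuals, guess, bounds=([0.0, 0.0], [200.0, 150.0]))
print("constrained event position:", sol.x)
```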
Quick alarm buttons are also installed at the installation positions of the plurality of sound sensors and are connected to the preset receiving equipment.
The quick alarm buttons are placed near the installed sound sensors, covering places in the campus where bullying events are likely to occur, such as corridors, classroom entrances and playgrounds, so that victims can trigger an alarm at any time.
The quick alarm button is designed to be easy to recognize and trigger: the button is red and is marked with alarm wording or an alarm symbol so that a victim can identify it quickly in an emergency.
When a victim presses the alarm button, an alarm signal is sent immediately to the preset receiving equipment; the alarm signal includes information on the victim's location, and the quick alarm button supports two-way communication to help deter bullying behavior.
Those of ordinary skill in the art will appreciate that: the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the invention is limited to these examples; the technical features of the above embodiments or in the different embodiments may also be combined within the idea of the invention, the steps may be implemented in any order and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity.
The present invention is intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omission, modification, equivalent replacement, improvement, etc. of the present invention should be included in the scope of the present invention.

Claims (10)

  1. An AI intelligent sound-wave-assisted campus anti-bullying system, characterized by comprising:
    Capturing ambient sound by a plurality of sound sensors deployed inside the campus;
    a sound feature extraction module that analyzes the frequency spectrum, amplitude and duration of the sound signals, extracts sound features related to bullying behavior, and constructs a sound pattern library of bullying behavior; a machine learning classification algorithm is integrated with the sound pattern library, sound signals in the environment are monitored in real time, the sound features extracted in real time are fed into the machine learning classification algorithm, and whether bullying behavior exists is determined from the analysis result;
    an environment monitoring module that uses cameras and temperature sensors deployed in the campus; when the environment monitoring module detects abnormal gatherings of people or abnormal temperature changes, it is combined with the analysis result of the sound feature extraction module to assist in confirming bullying behavior;
    when bullying behavior is identified, an alarm is triggered and related information is sent to preset receiving equipment, including smartphones of the campus security office or relevant teaching staff; and
    a positioning module, wherein the precise location of the alarm event is determined from the position information of the sound sensors combined with a positioning algorithm and transmitted to the receiving equipment.
  2. The AI intelligent sound-wave-assisted campus anti-bullying system of claim 1, wherein the sound sensors are arranged according to the different areas in the campus, including classrooms, corridors, playgrounds, canteens, libraries and stairwell blind-spot areas.
  3. The AI intelligent sound-wave-assisted campus anti-bullying system of claim 1, wherein the sound feature extraction module specifically comprises:
    Spectral analysis: the sound signal is converted into a spectrogram using a fast Fourier transform algorithm, frequency distribution features in the spectrogram are extracted, including the dominant frequency and the spectral bandwidth, and spectral analysis techniques are used to identify, based on the spectrogram features, frequency ranges or spectral shapes related to bullying behavior, including sharp sound waves and high-frequency noise;
    Amplitude feature extraction: an amplitude detection algorithm is used to measure the amplitude value of the sound signal and analyze its variation trend, and amplitude features are extracted from the fluctuation of the amplitude, including the variation frequency of the amplitude and the absolute amplitude value; the relation between the amplitude features and bullying behavior is analyzed and amplitude patterns characteristic of bullying behavior are identified, the amplitude being calculated as $A = \sqrt{\tfrac{1}{N}\sum_{n=1}^{N} x(n)^2}$, where $A$ is the amplitude value, $x(n)$ is the time-domain signal and $N$ is the number of sampling points of the signal; the short-time amplitude change rate reflects the change of the sound amplitude over a short time and is expressed as $R = (A_{t+\Delta t} - A_t)/\Delta t$,
    where $R$ is the short-time amplitude change rate, $A_t$ is the amplitude value at the current moment, $A_{t+\Delta t}$ is the amplitude value at the next moment and $\Delta t$ is the time interval;
    Duration analysis: a duration analysis algorithm is used to calculate the duration of the sound signal and determine its duration characteristics, and the persistence of the sound signal is analyzed comprehensively, combining the duration characteristics with the frequency spectrum and amplitude of the sound signal, in order to distinguish bullying behavior from other normal sounds, the duration being calculated as $D = N / f_s$,
    where $D$ is the duration, $N$ is the number of samples of the signal and $f_s$ is the sampling rate of the signal.
  4. The AI intelligent sound-wave-assisted campus anti-bullying system of claim 3, wherein the machine learning classification algorithm classifies and pattern-recognizes the extracted sound features using a support vector machine model, trains the model to recognize the sound features of bullying behavior based on sound samples of known bullying behavior, and constructs a sound pattern library of bullying behavior from the classification and pattern recognition results, including the sound features of bullying behavior and their corresponding classification labels, for subsequent sound recognition and behavior detection.
  5. The AI intelligent sound-wave-assisted campus anti-bullying system of claim 4, wherein the step of classifying and pattern-recognizing the extracted sound features by the support vector machine model comprises:
    spectral features, amplitude features and duration features are extracted, and the extracted sound features are standardized so that all features share the same dimension and scale;
    Data splitting: the prepared sound sample data set is divided into a training set and a test set, where the training set is used to train the support vector machine model and the test set is used to evaluate the model's performance;
    Model training: the support vector machine model is trained with the sound features of the training set and their corresponding labels, and separates the two classes of sound samples by finding the optimal decision boundary;
    Model evaluation: the trained support vector machine model is evaluated with the sound features of the test set, and the parameters of the support vector machine model are optimized according to the evaluation results;
    the trained support vector machine model is applied to real-time monitoring to classify and pattern-recognize the sound features extracted in real time; when the sound features monitored in real time match the known sound features of bullying behavior, the support vector machine model outputs a bullying classification result, an alarm is triggered and corresponding measures are taken.
  6. The AI intelligent sound-wave-assisted campus anti-bullying system of claim 5, wherein the support vector machine model separates different classes of sound samples by learning an optimal decision boundary, the decision function being expressed as:
    $f(x) = \operatorname{sign}\left(\sum_{i=1}^{n} \alpha_i y_i K(x_i, x) + b\right)$, where $f(x)$ is the decision function used to judge which category the sample $x$ belongs to, i.e. the bullying label or the non-bullying label, $\alpha_i$ is the Lagrange multiplier of training sample $i$, used to weight the support vectors, $y_i$ is the label of training sample $i$, representing its category, $x_i$ is the feature vector of training sample $i$, $K(x_i, x)$ is a kernel function used to measure the similarity between samples, and $b$ is a bias term used to adjust the position of the decision boundary.
  7. 7. The AI intelligent sound wave assisted campus anti-spoofing system of claim 1, wherein the environment monitoring module specifically comprises:
    And (3) camera image analysis: the camera images in the campus are monitored and analyzed in real time by using a computer vision technology, a crowd-intensive area is identified by utilizing a target detection algorithm YOLO, and the crowd-intensive degree and the aggregation condition are analyzed;
    temperature sensor data analysis: acquiring temperature data of each area in the campus in real time, setting a threshold value of temperature change, and regarding the temperature as an abnormal condition when the temperature exceeds or is lower than the set threshold value;
    And carrying out comprehensive decision analysis on the sound characteristics extracted by the sound characteristic extraction module and the data of the camera and the temperature sensor, and simultaneously analyzing the sound characteristics when the crowd-intensive area or abnormal temperature change is detected, and if the sound characteristics contain sound modes related to the deceptive behavior, determining that the deceptive behavior occurs.
  8. 8. The AI intelligent sound wave assisted campus anti-spoofing system of claim 7, wherein the comprehensive decision analysis is based on fuzzy logic, and specifically comprises:
    analysis of camera data: the crowd density detected by the camera is represented by a variable whose value range is [0, 1], wherein 0 represents no person and 1 represents extremely dense;
    analysis of temperature sensor data: the temperature change detected by the temperature sensor is represented by a variable whose value range is [0, 1], wherein 0 represents normal temperature and 1 represents abnormal change;
    analysis result of the sound feature extraction module: the spoofing-behavior sound feature obtained by the sound feature extraction module is represented by a variable whose value range is [0, 1], wherein 0 indicates that no sound feature of spoofing behavior is detected and 1 indicates that a distinct spoofing behavior is detected;
    comprehensive evaluation by fuzzy logic: the three variables are evaluated together using fuzzy logic to obtain a comprehensive determination result of spoofing behavior:
    fuzzy rules and membership functions are set, the values of the three variables are input into the fuzzy logic for fuzzy inference, and the probability of spoofing behavior is obtained, expressed as a variable whose value range is [0, 1], wherein 0 indicates no spoofing and 1 indicates significant spoofing (a minimal sketch follows this claim).
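A minimal fuzzy-inference sketch of this comprehensive evaluation, written without an external fuzzy-logic library; the variable names C, T and S, the membership functions, the rule base and the defuzzification constants are all illustrative assumptions rather than the patent's exact definitions:

```python
# Sketch: fuzzy evaluation of crowd density C, temperature anomaly T and
# spoofing-sound score S (all in [0, 1]) into a spoofing probability in [0, 1].

def high(x):
    # Degree to which a value in [0, 1] is "high": 0 below 0.4, rising to 1 at 0.8.
    return min(1.0, max(0.0, (x - 0.4) / 0.4))

def low(x):
    # Degree to which a value in [0, 1] is "low": 1 below 0.2, falling to 0 at 0.6.
    return min(1.0, max(0.0, (0.6 - x) / 0.4))

def spoofing_probability(C, T, S):
    # Rule 1: IF sound is high AND (crowd is dense OR temperature is abnormal) THEN probability is high.
    r1 = min(high(S), max(high(C), high(T)))
    # Rule 2: IF sound is high AND crowd is low AND temperature is normal THEN probability is medium.
    r2 = min(high(S), low(C), low(T))
    # Rule 3: IF sound is low THEN probability is low.
    r3 = low(S)
    # Weighted-average defuzzification with representative outputs 0.9 / 0.5 / 0.1.
    total = r1 + r2 + r3
    return (0.9 * r1 + 0.5 * r2 + 0.1 * r3) / total if total > 0 else 0.0

print(round(spoofing_probability(C=0.8, T=0.2, S=0.9), 2))  # dense crowd + spoofing sound -> 0.9
```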
  9. The AI intelligent sound wave assisted campus anti-spoofing system of claim 1, wherein the positioning algorithm employs a triangulation algorithm that utilizes the position information of the sound sensors, the position information including the specific position coordinates of each sound sensor within the campus, and the location at which a sound event occurs is calculated by the triangulation algorithm in combination with the time-difference-of-arrival information of the sound signals detected by the sound sensors, comprising:
    recording the position information of each sound sensor, including longitude and latitude coordinates or relative position relation;
    time difference of arrival measurement of sound signals: when a sound event occurs, recording the arrival time of the sound signals received by each sound sensor, and calculating the relative position of the occurrence of the sound event by utilizing the arrival time difference;
    position calculation: the position information of the sound sensors and the arrival time differences of the sound signals are input into the triangulation algorithm, and the accurate position coordinates of the sound event are calculated (an illustrative sketch follows this claim).
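A minimal sketch of time-difference-of-arrival (TDOA) source localization consistent with this claim, assuming numpy/scipy and a 2-D campus coordinate frame; the sensor coordinates, the simulated arrival times and the speed-of-sound constant are illustrative assumptions:

```python
# Sketch: locate a sound event from sensor positions and time differences of arrival (TDOA).
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0                            # m/s, approximate value at room temperature

def locate(sensors, arrival_times):
    """sensors: (n, 2) array of sensor coordinates in meters; arrival_times: length-n array in seconds."""
    sensors = np.asarray(sensors, dtype=float)
    t = np.asarray(arrival_times, dtype=float)
    ref = 0                                       # use the first sensor as the reference
    dt = t - t[ref]                               # time differences of arrival

    def residuals(p):
        d = np.linalg.norm(sensors - p, axis=1)   # distances from candidate point p to each sensor
        return (d - d[ref]) - SPEED_OF_SOUND * dt # range differences should match c * dt

    return least_squares(residuals, x0=sensors.mean(axis=0)).x

# Illustrative example: four sensors at the corners of a 100 m x 100 m yard,
# true source at (30, 40); arrival times are simulated from that geometry.
sensors = np.array([[0, 0], [100, 0], [0, 100], [100, 100]])
true_source = np.array([30.0, 40.0])
times = np.linalg.norm(sensors - true_source, axis=1) / SPEED_OF_SOUND
print(locate(sensors, times))                     # -> approximately [30. 40.]
```

With real recordings the arrival times would come from time-stamped detections at each sensor, and the temperature-dependent speed of sound could be refined using the temperature sensor data.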
  10. The AI intelligent sound wave assisted campus anti-spoofing system of claim 1, wherein a quick alarm button connected to a preset receiving device is further provided at the installation locations of the plurality of sound sensors.
CN202410685832.5A 2024-05-30 2024-05-30 AI intelligent sound wave auxiliary campus anti-cheating system Active CN118262475B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410685832.5A CN118262475B (en) 2024-05-30 2024-05-30 AI intelligent sound wave auxiliary campus anti-cheating system

Publications (2)

Publication Number Publication Date
CN118262475A true CN118262475A (en) 2024-06-28
CN118262475B CN118262475B (en) 2024-08-27

Family

ID=91605967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410685832.5A Active CN118262475B (en) 2024-05-30 2024-05-30 AI intelligent sound wave auxiliary campus anti-cheating system

Country Status (1)

Country Link
CN (1) CN118262475B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101685634A (en) * 2008-09-27 2010-03-31 上海盛淘智能科技有限公司 Children speech emotion recognition method
US20120308971A1 (en) * 2011-05-31 2012-12-06 Hyun Soon Shin Emotion recognition-based bodyguard system, emotion recognition device, image and sensor control apparatus, personal protection management apparatus, and control methods thereof
CN103198595A (en) * 2013-03-11 2013-07-10 成都百威讯科技有限责任公司 Intelligent door and window anti-invasion system
CN109147267A (en) * 2018-10-16 2019-01-04 温州洪启信息科技有限公司 Intelligent campus big data safe early warning platform based on cloud platform
CN109920203A (en) * 2019-02-12 2019-06-21 合肥极光科技股份有限公司 A kind of campus security intelligent monitor system based on technology of Internet of things
US20200258363A1 (en) * 2019-02-11 2020-08-13 Soter Technologies, Llc System and method for notifying detection of vaping, smoking, or potential bullying
CN215182302U (en) * 2021-04-23 2021-12-14 深圳市巨龙科教网络有限公司 Campus safety prevention and control terminal and system based on security brain
CN114220446A (en) * 2021-12-08 2022-03-22 漳州立达信光电子科技有限公司 Adaptive background noise detection method, system and medium
CN114283845A (en) * 2020-09-21 2022-04-05 亚旭电脑股份有限公司 Model construction method for audio recognition
CN116403377A (en) * 2023-04-06 2023-07-07 湘潭大学 Abnormal behavior and hidden danger detection device in public place
CN117197884A (en) * 2023-06-16 2023-12-08 南方科技大学 Campus-unfriendly behavior prevention and control method, device, equipment and storage medium
CN117912190A (en) * 2024-01-25 2024-04-19 济南信息工程学校 Anti-deception intelligent watch system
CN118053261A (en) * 2024-04-16 2024-05-17 深圳市巨龙科教网络有限公司 Anti-spoofing early warning method, device, equipment and medium for smart campus

Also Published As

Publication number Publication date
CN118262475B (en) 2024-08-27

Similar Documents

Publication Publication Date Title
Ahamad et al. Person detection for social distancing and safety violation alert based on segmented ROI
US20050271266A1 (en) Automated rip current detection system
JPWO2007138811A1 (en) Suspicious behavior detection apparatus and method, program, and recording medium
US11093738B2 (en) Systems and methods for detecting flying animals
Liu et al. A sound monitoring system for prevention of underground pipeline damage caused by construction
US20180313950A1 (en) CNN-Based Remote Locating and Tracking of Individuals Through Walls
CN102855508B (en) Opening type campus anti-following system
KR102173241B1 (en) Method and system for detecting abnormal sign in construction site
Kongrattanaprasert et al. Detection of road surface states from tire noise using neural network analysis
CN114782897A (en) Dangerous behavior detection method and system based on machine vision and deep learning
US11495111B2 (en) Indoor occupancy estimation, trajectory tracking and event monitoring and tracking system
CN112257533B (en) Perimeter intrusion detection and identification method
JP2021064364A (en) Information recognition system and method of the same
Aydın et al. Development of a new light-weight convolutional neural network for acoustic-based amateur drone detection
CN118053261B (en) Anti-spoofing early warning method, device, equipment and medium for smart campus
Drira et al. Occupant-detection strategy using footstep-induced floor vibrations
Haq et al. Implementation of smart social distancing for COVID-19 based on deep learning algorithm
CN114355462A (en) Human hidden dangerous object detection method and medium based on micro-Doppler characteristics
CN118262475B (en) AI intelligent sound wave auxiliary campus anti-cheating system
CN116978152B (en) Noninductive safety monitoring method and system based on radio frequency identification technology
CN116204784B (en) DAS-based subway tunnel external hazard operation intrusion recognition method
Jagirdar et al. Development and Evaluation of Traffic Count Sensor with Low-Cost Light-Detection and Ranging and Continuous Wavelet Transform: Initial Results
CN116910662A (en) Passenger anomaly identification method and device based on random forest algorithm
CN116509382A (en) Human body activity intelligent detection method and health monitoring system based on millimeter wave radar
CN202736085U (en) Anti-trailing system for open type schoolyard

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant