CN116259101A - Method for examination room or classroom discipline patrol and patrol robot - Google Patents


Info

Publication number
CN116259101A
CN116259101A
Authority
CN
China
Prior art keywords
behaviors
characteristic
behavior
tour
image
Prior art date
Legal status
Pending
Application number
CN202211572157.2A
Other languages
Chinese (zh)
Inventor
张坛 (Zhang Tan)
钟浩洋 (Zhong Haoyang)
钟智伶 (Zhong Zhiling)
陈晨阳 (Chen Chenyang)
梁文斌 (Liang Wenbin)
张政 (Zhang Zheng)
Current Assignee
Shenzhen Technology University
Original Assignee
Shenzhen Technology University
Priority date
Filing date
Publication date
Application filed by Shenzhen Technology University
Priority to CN202211572157.2A
Publication of CN116259101A
Legal status: Pending

Classifications

    • H04N 7/185 - Closed-circuit television [CCTV] systems for receiving images from a single remote source, from a mobile camera, e.g. for remote control
    • G06N 3/08 - Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners, strokes; connectivity analysis
    • G06V 10/765 - Image or video recognition using machine-learning classification, using rules for classification or partitioning the feature space
    • G06V 10/82 - Image or video recognition using machine learning with neural networks
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 40/20 - Recognition of human movements or behaviour, e.g. gesture recognition
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a method and a patrol robot for examination room or classroom discipline patrol, and relates to the technical field of image analysis. The method comprises the following steps: acquiring a patrol route, photographing the examination room or classroom scene along the patrol route to obtain a current scene image, and preprocessing the current scene image to obtain a target image; extracting characteristic behaviors in the target image, inputting them into a convolutional neural network, judging in the convolutional neural network whether they belong to preset abnormal behaviors, and making a voice announcement if they are judged to belong to the preset abnormal behaviors. By collecting images of examination room or classroom scenes and combining a convolutional neural network with a target detection algorithm, the invention can identify abnormal behaviors in scene images and announce them by voice, thereby reducing cheating in examinations and improving classroom discipline.

Description

Method for examination room or classroom discipline patrol and patrol robot
Technical Field
The present invention relates to the field of image analysis technologies, and in particular to a method, a system, a patrol robot, and a computer-readable storage medium for examination room or classroom discipline patrol.
Background
Examinations serve both as a mechanism for selecting talent and as a means of measuring whether students have mastered what they have learned; they promote the development of education and society in China, and their fairness and importance are self-evident. Because examination results are often closely tied to the examinee's interests, some examinees take risks and cheat in various ways, typically by communicating with one another in the examination room to pass answers.
In addition, students in class are often found playing on mobile phones, walking about, sneaking snacks, talking and leaving early. Traditional video monitoring of the examination room or classroom is mostly handled manually; it cannot analyze and process abnormal behaviors of examinees or students in real time, and manual judgment is inefficient.
Accordingly, the prior art still needs improvement and development.
Disclosure of Invention
The main object of the present invention is to provide a method, a system, a patrol robot and a computer-readable storage medium for examination room or classroom discipline patrol, so as to solve the problem that video monitoring in the examination room or classroom cannot analyze and handle abnormal behaviors of examinees or students in real time.
To achieve the above object, the present invention provides a method for examination room or classroom discipline patrol, comprising the following steps:
acquiring a patrol route, photographing the examination room or classroom scene along the patrol route to obtain a current scene image, and preprocessing the current scene image to obtain a target image;
extracting characteristic behaviors in the target image, inputting them into a convolutional neural network, judging in the convolutional neural network whether they belong to preset abnormal behaviors, and making a voice announcement if they are judged to belong to the preset abnormal behaviors.
Optionally, in the method for examination room or classroom discipline patrol, a patrol route is acquired, the examination room or classroom scene is photographed along the patrol route to obtain a current scene image, and the current scene image is preprocessed to obtain a target image;
before the step of acquiring a patrol route, photographing the examination room or classroom scene along the patrol route to obtain a current scene image, and preprocessing the current scene image to obtain a target image, the method comprises the steps of:
establishing a storage database, wherein the storage database comprises preset abnormal behaviors, preset normal behaviors and skeleton key points;
establishing a target data set in the convolutional neural network, wherein the target data set comprises video data sets of different scenes;
splitting the video data sets into frames to obtain a plurality of frame images, labeling the abnormal behaviors and the normal behaviors in the frame images to obtain the preset abnormal behaviors and preset normal behaviors, and enhancing the frame images corresponding to the preset abnormal behaviors and preset normal behaviors;
the method comprises the steps of obtaining a tour route, shooting a examination room or class scene according to the tour route to obtain a current scene image, preprocessing the current scene image to obtain a target image, and further comprises the following steps:
establishing a decision model in a dynamic environment, and obtaining the tour route according to the decision model;
the decision model comprises a multitasking decision function and a predicted obstacle movement function;
the multitasking decision function includes: path planning, obstacle avoidance decision, target detection and autonomous patrol;
the predicted obstacle-movement function includes: 3D visual detection, visual SLAM and characteristic semantic information identification;
the method comprises the steps of obtaining a tour route, shooting a examination room or class scene according to the tour route to obtain a current scene image, preprocessing the current scene image to obtain a target image, and specifically comprises the following steps:
after the tour route is obtained according to the decision model, shooting the examination room or class scene passing through the tour route in real time to obtain a current scene image;
and filtering, removing noise and convolving the current scene image to obtain the target image.
Optionally, in the method for examination room or classroom discipline patrol, characteristic behaviors in the target image are extracted and input into a convolutional neural network, whether they belong to preset abnormal behaviors is judged in the convolutional neural network, and a voice announcement is made if they are judged to belong to the preset abnormal behaviors;
the step of extracting characteristic behaviors in the target image, inputting them into a convolutional neural network, judging whether they belong to preset abnormal behaviors, and making a voice announcement if they do, specifically comprises:
extracting the characteristic behaviors of the target image according to a target detection algorithm, and inputting the characteristic behaviors into the convolutional neural network;
comparing the characteristic behaviors with the preset abnormal behaviors in the convolutional neural network, and judging whether the characteristic behaviors belong to the preset abnormal behaviors;
if the characteristic behaviors are judged to belong to the preset abnormal behaviors, judging that they are abnormal behaviors and making a voice announcement about them;
after the step of extracting characteristic behaviors in the target image, inputting them into a convolutional neural network, judging whether they belong to preset abnormal behaviors, and making a voice announcement if they do, the method comprises the steps of:
after the characteristic behavior is judged to be abnormal and announced by voice, extracting the skeleton key points in the target image corresponding to the abnormal behavior according to the target detection algorithm;
storing the skeleton key points in the storage database in CSV form, and extracting behavior features in later detected classroom images according to the skeleton key points.
Optionally, in the method for examination room or classroom discipline patrol, the target detection algorithm includes: a YOLO target extraction algorithm, an HRNet human skeleton extraction algorithm and a FERNet emotion recognition algorithm.
In addition, to achieve the above object, the present invention further provides a system for classroom discipline patrol, wherein the system comprises:
an image processing module for acquiring a patrol route, photographing the examination room or classroom scene along the patrol route to obtain a current scene image, and preprocessing the current scene image to obtain a target image;
a behavior analysis module for extracting characteristic behaviors in the target image, inputting them into a convolutional neural network, judging whether they belong to preset abnormal behaviors in the convolutional neural network, and making a voice announcement if they are judged to belong to the preset abnormal behaviors.
In addition, to achieve the above object, the present invention also provides a patrol robot, comprising: a memory, a processor, and a classroom discipline patrol program stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the method for examination room or classroom discipline patrol described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium storing a classroom discipline patrol program which, when executed by a processor, implements the steps of the method for examination room or classroom discipline patrol described above.
According to the invention, a patrol route is acquired, the examination room or classroom scene is photographed along the patrol route to obtain a current scene image, and the current scene image is preprocessed to obtain a target image;
characteristic behaviors in the target image are extracted and input into a convolutional neural network, whether they belong to preset abnormal behaviors is judged in the network, and a voice announcement is made if they do. The patrol robot carries a camera and photographs the examination room or classroom along a planned route to obtain the current scene image, which is transmitted to the background; there the scene image is analyzed by the convolutional neural network and the target detection algorithm to judge whether abnormal behaviors exist, and if so, a voice announcement is made. The invention can analyze and handle abnormal behaviors of examinees or students in the examination room or classroom in real time, helping to reduce cheating in examinations and improve classroom discipline.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the method for examination room or classroom discipline patrol of the present invention;
FIG. 2 is a schematic illustration of a patrol robot product for the method for examination room or classroom discipline patrol of the present invention;
FIG. 3 is a schematic illustration of manual annotation in the method for examination room or classroom discipline patrol of the present invention;
FIG. 4 is a schematic diagram of behavior extraction in the system for classroom discipline patrol of the present invention;
FIG. 5 is a schematic illustration of human skeleton points in the system for classroom discipline patrol of the present invention;
FIG. 6 is a block diagram of a preferred embodiment of the system for classroom discipline patrol of the present invention;
FIG. 7 is a block diagram of a preferred embodiment of the patrol robot of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for illustration only and are not intended to limit the scope of the invention.
As shown in FIG. 1, the method for examination room or classroom discipline patrol according to the preferred embodiment of the present invention comprises the following steps:
S10: acquire a patrol route, photograph the examination room or classroom scene along the patrol route to obtain a current scene image, and preprocess the current scene image to obtain a target image.
As shown in FIG. 2, the patrol robot adopts an advanced SLAM (Simultaneous Localization and Mapping) control system and is equipped with intelligent devices such as a front-view camera, an ultrasonic radar and a binocular vision camera. It walks autonomously, stably and reliably, performs moving multi-point patrol, and can automatically, efficiently and accurately patrol indoor and outdoor areas along a set route.
In addition, the patrol robot automatically detects low battery and returns to the charging pile to recharge, avoids various obstacles, and runs unattended around the clock. The user can control it through a mobile phone APP, separating person from machine, reducing personnel contact and improving safety; the robot is simple to deploy and easy to manage and maintain.
Specifically, a storage database is established, comprising preset abnormal behaviors, preset normal behaviors and skeleton key points.
The preset abnormal behaviors may include: in the examination room, carrying paper slips, whispering to a neighbor, using a mobile phone, glancing sideways, exchanging secret signals, and the like; in class, playing on a mobile phone, whispering, eating snacks, and the like.
A target data set is established in the convolutional neural network, comprising video data sets of different scenes.
These video data sets come from scene images collected by the patrol robot's monitoring cameras under different environmental conditions.
As shown in FIG. 3, the video data sets are split into frames to obtain a plurality of frame images, the abnormal and normal behaviors in these frame images are labelled to obtain the preset abnormal and preset normal behaviors, and the corresponding frame images are enhanced.
As shown in FIG. 4, video data sets from different examination rooms or classrooms are collected in advance and input into the convolutional neural network for training. Because the network is trained on images, the video data must first be split into frames. LabelImg (an open-source data annotation tool that supports three label formats) is used to manually annotate the abnormal and normal behaviors in the frame images and convert the annotations to VOC format (Visual Object Classes, a standardized annotation specification for detection and recognition data sets). In addition, the frame images corresponding to the annotated abnormal and normal behaviors are enhanced, so that feature-behavior extraction in later training is smoother and the training effect more pronounced.
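The framing-and-labeling pipeline above can be sketched as follows; the stride value, the function names and the VOC-like record layout are illustrative assumptions, not taken from the patent.

```python
def sample_frame_indices(total_frames, stride):
    """Indices of the frames kept when a video is split at a fixed stride."""
    return list(range(0, total_frames, stride))

def make_voc_record(frame_index, label, box):
    """A minimal VOC-like annotation: one labelled bounding box per record."""
    xmin, ymin, xmax, ymax = box
    return {
        "frame": frame_index,
        "name": label,  # e.g. "whispering" (abnormal) or "writing" (normal)
        "bndbox": {"xmin": xmin, "ymin": ymin, "xmax": xmax, "ymax": ymax},
    }

# A 30 fps, 10 s clip sampled every 15 frames yields 20 training images.
indices = sample_frame_indices(total_frames=300, stride=15)
record = make_voc_record(indices[0], "whispering", (120, 80, 220, 210))
```

In practice the annotations would be serialized as VOC XML by a tool such as LabelImg; the dictionary here only mirrors the essential fields.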
A decision model is established for the dynamic environment and the patrol route is obtained according to it. The route of the patrol robot is planned from two aspects, motion control and motion decision, which improves the robot's early training efficiency and provides an important guarantee for real-time online path planning.
Establishing the patrol route comprises creating and saving a map, setting patrol points and setting working time. Before the route is established, the mobile phone APP is connected to the patrol robot over WIFI; all parameters can also be set on the robot's display screen.
Map creation and saving: the robot is remote-controlled through the whole area of its daily work to build the map. During operation the user should stay behind the robot's direction of travel, within 10 m of it. Every 10 m the robot walks, it should be rotated 360 degrees in place so the map is more accurate; near glass walls, corners and intersections the robot should be driven through repeatedly. If gray points or partly duplicated black lines appear on the map during construction, the robot must be driven over the area again until the gray points disappear and only a single black wall line remains. After the map is built, the robot is driven to within 0.5 m of the charging pile, facing it, and the map is saved.
Setting patrol points: the daily patrol points are set at chosen positions in the map (preferably at the middle of a corridor; the corridor must be wider than 1.5 m, points cannot be set outside the map, and unreasonably placed points are invalid; 4-8 patrol points are generally set). After setting, the robot is driven to 0.5 m in front of the charging pile, and tapping Start Patrol begins the work. If the robot is already patrolling, tap Patrol -> Pause Patrol to stop it first, then tap Set Points to set the patrol points.
Setting working time: tap System -> Work Time to set two working periods, morning and afternoon; the system uses 24-hour time, and the setting takes effect after tapping OK.
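The patrol-point rules quoted above (points inside the map, a corridor wider than 1.5 m, 4-8 points in total) lend themselves to a simple validation sketch; the rectangular map representation and the function name are assumptions for illustration.

```python
def validate_patrol_points(points, map_w, map_h, corridor_width):
    """Return a list of rule violations for a proposed set of patrol points."""
    errors = []
    if not 4 <= len(points) <= 8:
        errors.append("expect 4-8 patrol points")
    if corridor_width <= 1.5:
        errors.append("corridor must be wider than 1.5 m")
    for i, (x, y) in enumerate(points):
        if not (0 <= x <= map_w and 0 <= y <= map_h):
            errors.append(f"point {i} lies outside the map")
    return errors

# A valid 4-point route, and one breaking all three rules.
ok = validate_patrol_points([(1, 1), (2, 5), (6, 5), (8, 1)],
                            map_w=10, map_h=6, corridor_width=2.0)
bad = validate_patrol_points([(1, 1), (2, 5), (12, 3)],
                             map_w=10, map_h=6, corridor_width=1.2)
```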
The decision model comprises a multitasking decision function and an obstacle-movement prediction function; it is a URDF model (URDF, the Unified Robot Description Format, describes the patrol robot) and is displayed in the Gazebo simulation environment (Gazebo simulates the environment in which the patrol robot operates).
The multitasking decision function includes: path planning, obstacle avoidance decision, target detection and autonomous patrol.
The path planning function collects and integrates the data sent by sensors such as the lidar, the odometer, the IMU (Inertial Measurement Unit) and the motor driver, sends them to the patrol robot for comprehensive processing, and, after processing by the robot's ROS navigation module (a navigation framework), sends the corresponding decisions to the control module to carry out path planning.
The obstacle-avoidance decision function means that when the robot has planned a path through the perceived environment and the position of some obstacle changes, possibly invalidating the previously planned path, this function is triggered so that the robot can quickly re-plan according to the change in the current scene.
The target detection function detects people in the robot's application scene image: when a target person is detected, photographing starts; when no person is detected in the application scene image, photographing stops.
The autonomous patrol function means that once the patrol robot receives the planned route, it can patrol along the route autonomously without real-time supervision by the user.
The obstacle-movement prediction function includes 3D visual detection, visual SLAM (simultaneous localization and mapping) and characteristic semantic information recognition; it ensures the robot's awareness and understanding of the live environment and improves the robustness of the target detection algorithm.
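A toy sketch of how the multitasking decision function described above might prioritise its sub-tasks; the event and action names are illustrative assumptions, not from the patent.

```python
def decide(events):
    """Prioritised decision: obstacle avoidance beats recording beats patrol."""
    if "path_blocked" in events:      # obstacle-avoidance decision
        return "replan_path"
    if "person_detected" in events:   # target detection gates the camera
        return "start_recording"
    return "continue_patrol"          # autonomous patrol is the default

action = decide({"person_detected"})
```

A real controller would run such a dispatch in a loop fed by the sensor fusion layer; the fixed priority order here simply encodes that safety-related replanning preempts everything else.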
After the patrol route is obtained according to the decision model, the examination room or classroom scenes along the route are photographed in real time to obtain the current scene image.
The invention preferably has the patrol robot carry a camera to photograph the examination room or classroom scene; the patrol route is preset so that the robot photographs the scenes it passes along the route and uploads the footage to the background for analysis.
The current scene image is filtered, denoised and convolved to obtain the target image.
The examination room or classroom image captured by the patrol robot's camera may be disturbed by noise or impurities, which affects the pixels of the current scene image and makes the error of the target detection algorithm too large, so the captured current scene image needs to be filtered, denoised and convolved.
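The denoising step mentioned above can be illustrated with a minimal 3x3 median filter over a plain list-of-lists grayscale image; a production system would use an image-processing library, so this is only a sketch of the idea.

```python
import statistics

def median_filter_3x3(img):
    """Replace each interior pixel with the median of its 3x3 neighbourhood;
    border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = statistics.median(window)
    return out

# A single salt-noise pixel (255) in a flat region is removed.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
clean = median_filter_3x3(noisy)
```

Median filtering is a typical choice against salt-and-pepper noise because, unlike a mean filter, an isolated outlier never survives into the output.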
S20: extract characteristic behaviors in the target image, input them into a convolutional neural network, judge in the convolutional neural network whether they belong to preset abnormal behaviors, and make a voice announcement if they are judged to belong to the preset abnormal behaviors.
Specifically, the characteristic behaviors of the target image are extracted according to the target detection algorithm and input into the convolutional neural network.
The characteristic behaviors are compared with the preset abnormal behaviors in the convolutional neural network to judge whether they belong to the preset abnormal behaviors.
As shown in FIG. 5, the target detection algorithm performs target extraction, human skeleton extraction and emotion recognition on the target image. When the patrol robot transmits the current scene image to the background, the target in the scene is extracted and each human skeleton point in the target image is recognized. The skeleton point combination in the target image is matched against the skeleton point combinations of the preset abnormal behaviors in the convolutional neural network; if the match succeeds, the behavior is judged abnormal. Otherwise, abnormal behavior can also be judged through emotion recognition, which captures the facial expression in the target image, for example eyes glancing around; frequently looking around can likewise be recognized by the combined action of human skeleton extraction and emotion recognition.
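The skeleton-matching step described above might be sketched as follows; the behavior templates, the joint count and the distance threshold are invented for illustration and do not come from the patent.

```python
import math

def pose_distance(pose_a, pose_b):
    """Mean Euclidean distance between corresponding (x, y) key points."""
    return sum(math.dist(a, b) for a, b in zip(pose_a, pose_b)) / len(pose_a)

def match_abnormal(pose, templates, threshold=0.1):
    """Name of the closest abnormal template within the threshold, else None."""
    name, template = min(templates.items(),
                         key=lambda kv: pose_distance(pose, kv[1]))
    return name if pose_distance(pose, template) < threshold else None

# Three-joint toy templates in normalized image coordinates.
templates = {
    "turning_around": [(0.5, 0.2), (0.7, 0.3), (0.6, 0.6)],
    "raising_hand":   [(0.5, 0.1), (0.5, 0.4), (0.4, 0.8)],
}
detected = [(0.52, 0.21), (0.69, 0.31), (0.61, 0.58)]
label = match_abnormal(detected, templates)
```

Real skeleton matching would normalize for body size and camera angle and use many more joints; the nearest-template-with-threshold structure, however, is the essence of the comparison step.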
If the characteristic behavior is judged to belong to the preset abnormal behaviors, it is judged to be an abnormal behavior and announced by voice.
When an abnormal behavior appears in a scene image captured by the patrol robot, the robot announces the content of the abnormal behavior by voice and displays the image of the abnormal behavior.
For example, when the patrol robot detects during patrol that an examinee's behavior in the current scene image is abnormal, and judges it to be sneakily reading material, it makes a voice announcement: an examinee shows abnormal behavior, judged to be sneakily reading material; please check. The screenshot of the part of the current scene image judged abnormal is displayed on the screen, so the invigilator can decide from the screenshot and the actual situation whether the examinee's behavior is cheating.
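A minimal sketch of how the announcement text in the example could be assembled before being handed to a text-to-speech engine; the seat identifier and the exact wording are assumptions.

```python
def build_alert(seat, behavior):
    """Compose the voice announcement for one detected abnormal behavior."""
    return (f"Abnormal behavior detected at seat {seat}: "
            f"suspected {behavior}. Please verify.")

msg = build_alert("A12", "sneakily reading material")
```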
After the characteristic behavior is judged to belong to the abnormal behaviors and the abnormal behavior is broadcast by voice, the skeleton key points in the target image corresponding to the abnormal behavior are extracted by the target detection algorithm.
The skeleton key points are stored in the storage database as a CSV (Comma-Separated Values) file, and behavior features in later detected class images are extracted with reference to these skeleton key points.
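The CSV storage of skeleton key points might look like the following. The column layout (a behavior label followed by flattened x/y coordinates, here fixed at three keypoints for the example) is an assumption; the patent specifies only that the key points are stored in CSV form.

```python
import csv, io

def keypoints_to_row(label, keypoints):
    """Flatten one labeled skeleton into a CSV row: label, x0, y0, x1, y1, ..."""
    row = [label]
    for x, y in keypoints:
        row.extend([x, y])
    return row

def save_keypoints(fileobj, records):
    writer = csv.writer(fileobj)
    # Header assumes three keypoints per behavior, for the example only.
    writer.writerow(["behavior", "x0", "y0", "x1", "y1", "x2", "y2"])
    for label, kps in records:
        writer.writerow(keypoints_to_row(label, kps))

buf = io.StringIO()
save_keypoints(buf, [("turning_around", [(0.5, 0.1), (0.3, 0.3), (0.7, 0.3)])])
print(buf.getvalue().splitlines()[1])  # turning_around,0.5,0.1,0.3,0.3,0.7,0.3
```

Rows written this way can later be re-read with `csv.reader` to recover the stored behavior templates for matching.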
Judging whether a characteristic behavior in the scene image is an abnormal behavior or a normal behavior is based on the connections between skeleton key points: each behavior is composed of a plurality of human skeleton key points.
When the characteristic behavior in the current scene image is judged to be an abnormal behavior, the skeleton key points corresponding to the abnormal behavior are obtained by the target detection algorithm and stored in the storage database as a CSV file.
When the skeleton key points corresponding to an abnormal behavior are obtained, the same abnormal behavior may be represented by several different skeleton-key-point combinations: abnormal behaviors are detected mainly through the actions of the detection targets, and the same behavior can be expressed by several actions. The skeleton-key-point combinations corresponding to the abnormal behaviors detected in the current scene are therefore also stored, which expands the richness of the preset abnormal behaviors in the storage database and facilitates accurate recognition of characteristic behaviors at a later stage.
In addition, characteristic behaviors in the current scene image are predicted: when a characteristic behavior is detected not to belong to the abnormal behaviors, but the skeleton-key-point combination detected by the target detection algorithm is close to that of a preset abnormal behavior, the characteristic behavior is tracked and continuously detected within a preset time (which may be set to 3-5 minutes), improving the accuracy with which the inspection robot judges abnormal behaviors.
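This tracking step can be sketched as a confirmation window over a tracked person's pose distances. The 180-second default (the low end of the patent's 3-5 minute range) and the 0.15 hit threshold are illustrative assumptions, as is the idea of representing each observation as a (timestamp, distance) pair.

```python
def confirm_abnormal(observations, watch_seconds=180, hit_threshold=0.15):
    """observations: list of (t_seconds, pose_distance) for a tracked person,
    starting when the pose first came 'near' a preset abnormal template.
    Confirm abnormal only if the distance falls below hit_threshold inside
    the watch window; otherwise the track is cleared."""
    t0 = observations[0][0]
    for t, d in observations:
        if t - t0 > watch_seconds:
            break  # watch window expired without a confirmed match
        if d < hit_threshold:
            return True
    return False

obs = [(0, 0.25), (30, 0.20), (60, 0.12)]
print(confirm_abnormal(obs))  # True: crossed the threshold within the window
```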
The target detection algorithm comprises: the YOLO target extraction algorithm, the HRNet human skeleton extraction algorithm and the FerNet emotion recognition algorithm.
If the characteristic behavior matches a preset normal behavior, it is judged that the examinee in the current scene image has no cheating behavior.
According to the invention, the inspection robot carries a camera to perform the shooting task and uploads the images to the background, where an intelligent video processing algorithm filters out a large amount of worthless redundant information, automatically detects abnormal behaviors in the scene and raises an alarm, thereby realizing real-time uninterrupted monitoring and sharing the inspection task of security personnel.
Such an intelligent system, capable of automatically analyzing individual or group behavior information, not only improves the ability and efficiency of the monitoring system in finding potential hazards, but can also replace manpower continuously over long periods to reduce monitoring costs; considerable business opportunities and economic benefits are involved in the construction of today's intelligent society and safe cities.
In addition, the invention has broad applicability to public safety protection over large areas: for example, it can detect falls and similar behaviors of elderly people and children at home; in public services such as hospitals and prisons, abnormal behaviors of patients or inmates can be detected remotely by video monitoring technology and an alarm raised in time, so that staff can quickly handle the corresponding problems.
Further, as shown in fig. 6, based on the above method for examination room or classroom discipline tour, the present invention further correspondingly provides a system for examination room or classroom discipline tour, where the system includes:
the image processing module 51 is configured to obtain a tour route, shoot an examination room or classroom scene according to the tour route to obtain a current scene image, and preprocess the current scene image to obtain a target image.
The behavior analysis module 52 is configured to extract a characteristic behavior in the target image, input the characteristic behavior into a convolutional neural network, determine whether the characteristic behavior belongs to a preset abnormal behavior in the convolutional neural network, and if it is determined that the characteristic behavior belongs to the preset abnormal behavior, perform voice broadcast.
Further, as shown in fig. 7, based on the above method and system for examination hall or classroom discipline tour, the present invention further provides a tour robot, which includes a processor 10, a memory 20, and a display 30. Fig. 7 shows only some of the components of the inspection robot, but it should be understood that not all of the illustrated components are required to be implemented, and that more or fewer components may be implemented instead.
The memory 20 may in some embodiments be an internal storage unit of the inspection robot, such as a hard disk or a memory of the inspection robot. The memory 20 may also be an external storage device of the inspection robot in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card or the like. Further, the memory 20 may also include both an internal storage unit and an external storage device of the inspection robot. The memory 20 is used for storing application software and various data installed on the inspection robot, such as program codes of the inspection robot. The memory 20 may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 20 stores a program 40 for examination room or classroom discipline tour, and the program 40 may be executed by the processor 10, thereby implementing the method for examination room or classroom discipline tour in the present application.
The processor 10 may in some embodiments be a central processing unit (Central Processing Unit, CPU), a microprocessor or another data processing chip for executing the program code or processing the data stored in the memory 20, for example performing the method for examination room or classroom discipline tour.
The display 30 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like in some embodiments. The display 30 is used for displaying information on the inspection robot and for displaying a visual user interface. The components 10-30 of the inspection robot communicate with each other via a system bus.
In one embodiment, the following steps are implemented when the processor 10 executes the program 40 for examination room or classroom discipline tour in the memory 20:
acquiring a tour route, shooting an examination room or classroom scene according to the tour route to obtain a current scene image, and preprocessing the current scene image to obtain a target image;
extracting characteristic behaviors in the target image, inputting the characteristic behaviors into a convolutional neural network, judging whether the characteristic behaviors belong to preset abnormal behaviors in the convolutional neural network, and performing voice broadcasting if the characteristic behaviors are judged to belong to the preset abnormal behaviors.
The obtaining a tour route, shooting an examination room or classroom scene according to the tour route to obtain a current scene image, and preprocessing the current scene image to obtain a target image includes:
establishing a storage database, wherein the storage database comprises preset abnormal behaviors, preset normal behaviors and skeleton key points;
establishing a target data set in the convolutional neural network, wherein the target data set comprises video data sets in different scenes;
and carrying out framing processing on the video data set to obtain a plurality of framing images, marking the abnormal behaviors and the normal behaviors in the plurality of framing images to obtain preset abnormal behaviors and preset normal behaviors, and enhancing the framing images corresponding to the preset abnormal behaviors and the preset normal behaviors.
The obtaining a tour route, shooting an examination room or classroom scene according to the tour route to obtain a current scene image, and preprocessing the current scene image to obtain a target image further includes:
establishing a decision model in a dynamic environment, and obtaining the tour route according to the decision model;
the decision model comprises a multitasking decision function and a predicted obstacle movement function;
the multitasking decision function includes: path planning, obstacle avoidance decision, target detection and autonomous patrol;
the predicted obstacle-movement function includes: 3D visual inspection, visual SLAM, and feature semantic information identification.
The obtaining a tour route, shooting an examination room or classroom scene according to the tour route to obtain a current scene image, and preprocessing the current scene image to obtain a target image specifically includes:
after the tour route is obtained according to the decision model, shooting the examination room or class scene passing through the tour route in real time to obtain a current scene image;
and filtering, removing noise and convolving the current scene image to obtain the target image.
The extracting a characteristic behavior in the target image, inputting the characteristic behavior into a convolutional neural network, judging whether the characteristic behavior belongs to a preset abnormal behavior in the convolutional neural network, and performing voice broadcasting if the characteristic behavior is judged to belong to the preset abnormal behavior, specifically includes:
extracting characteristic behaviors of the target image according to a target detection algorithm, and inputting the characteristic behaviors into the convolutional neural network;
comparing the characteristic behavior with a preset abnormal behavior in the convolutional neural network, and judging whether the characteristic behavior belongs to the preset abnormal behavior;
if the characteristic behavior is judged to belong to the preset abnormal behavior, the characteristic behavior is judged to belong to the abnormal behavior, and the abnormal behavior is subjected to voice broadcasting.
The extracting a characteristic behavior in the target image, inputting the characteristic behavior into a convolutional neural network, judging whether the characteristic behavior belongs to a preset abnormal behavior in the convolutional neural network, and performing voice broadcasting if the characteristic behavior is judged to belong to the preset abnormal behavior, is then followed by:
when the characteristic behavior is judged to belong to the abnormal behavior, and the abnormal behavior is subjected to voice broadcasting, extracting skeleton key points in a target image corresponding to the abnormal behavior according to the target detection algorithm;
and storing the skeleton key points into the storage database through a CSV mode, and extracting behavior characteristics in the later-stage detection class image according to the skeleton key points.
Wherein the target detection algorithm comprises: the YOLO target extraction algorithm, the HRNet human skeleton extraction algorithm and the FerNet emotion recognition algorithm.
The present invention also provides a computer-readable storage medium storing a program for examination room or classroom discipline tour, which, when executed by a processor, implements the steps of the method for examination room or classroom discipline tour described above.
In summary, the present invention provides a method, a system, an inspection robot and a computer-readable storage medium for examination room or classroom discipline tour, wherein the method includes: acquiring a tour route, shooting an examination room or classroom scene according to the tour route to obtain a current scene image, and preprocessing the current scene image to obtain a target image; extracting characteristic behaviors in the target image, inputting the characteristic behaviors into a convolutional neural network, judging whether the characteristic behaviors belong to preset abnormal behaviors in the convolutional neural network, and performing voice broadcasting if so. According to the invention, the inspection robot is provided with a camera, photographs the examination room or classroom along a planned route to obtain a current scene image, and transmits the current scene image to the background; the scene image is analyzed by the convolutional neural network and the target detection algorithm to judge whether abnormal behaviors exist in the current scene image, and if so, voice broadcasting is performed. The invention can analyze and process abnormal behaviors of examinees or students in the examination room or classroom in real time, which helps reduce cheating in the examination room and improve classroom discipline.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or inspection robot that comprises a list of elements does not include only those elements but may include other elements not expressly listed or may include elements inherent to such process, method, article, or inspection robot. Without further limitation, an element defined by the statement "comprising one … …" does not exclude that there are additional identical elements in a process, method, article or inspection robot comprising the element.
Of course, those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program instructing relevant hardware (e.g., a processor or controller); the program may be stored on a computer-readable storage medium and, when executed, may include the steps of the above-described methods. The computer-readable storage medium may be a memory, a magnetic disk, an optical disk, etc.
It is to be understood that the invention is not limited in its application to the examples described above, but is capable of modification and variation in light of the above teachings by those skilled in the art, and that all such modifications and variations are intended to be included within the scope of the appended claims.

Claims (10)

1. A method for examination room or classroom discipline tour, the method comprising:
acquiring a tour route, shooting an examination room or classroom scene according to the tour route to obtain a current scene image, and preprocessing the current scene image to obtain a target image;
extracting characteristic behaviors in the target image, inputting the characteristic behaviors into a convolutional neural network, judging whether the characteristic behaviors belong to preset abnormal behaviors in the convolutional neural network, and performing voice broadcasting if the characteristic behaviors are judged to belong to the preset abnormal behaviors.
2. The method for examination room or classroom discipline tour according to claim 1, wherein the obtaining a tour route, photographing an examination room or classroom scene according to the tour route, obtaining a current scene image, preprocessing the current scene image, and obtaining a target image, includes:
establishing a storage database, wherein the storage database comprises preset abnormal behaviors, preset normal behaviors and skeleton key points;
establishing a target data set in the convolutional neural network, wherein the target data set comprises video data sets in different scenes;
and carrying out framing processing on the video data set to obtain a plurality of framing images, marking the abnormal behaviors and the normal behaviors in the plurality of framing images to obtain preset abnormal behaviors and preset normal behaviors, and enhancing the framing images corresponding to the preset abnormal behaviors and the preset normal behaviors.
3. The method for examination room or classroom discipline tour according to claim 1, wherein the obtaining a tour route, photographing an examination room or classroom scene according to the tour route, obtaining a current scene image, preprocessing the current scene image, and obtaining a target image, further comprises:
establishing a decision model in a dynamic environment, and obtaining the tour route according to the decision model;
the decision model comprises a multitasking decision function and a predicted obstacle movement function;
the multitasking decision function includes: path planning, obstacle avoidance decision, target detection and autonomous patrol;
the predicted obstacle-movement function includes: 3D visual inspection, visual SLAM, and feature semantic information identification.
4. A method for a examination room or class discipline tour according to claim 3, wherein the obtaining a tour route, photographing an examination room or class scene according to the tour route, obtaining a current scene image, and preprocessing the current scene image to obtain a target image, specifically includes:
after the tour route is obtained according to the decision model, shooting the examination room or class scene passing through the tour route in real time to obtain a current scene image;
and filtering, removing noise and convolving the current scene image to obtain the target image.
5. The method for examination hall or classroom discipline tour according to claim 2, wherein the extracting the characteristic behavior in the target image, inputting the characteristic behavior into a convolutional neural network, determining whether the characteristic behavior belongs to a preset abnormal behavior in the convolutional neural network, and if it is determined that the characteristic behavior belongs to the preset abnormal behavior, performing voice broadcasting specifically includes:
extracting characteristic behaviors of the target image according to a target detection algorithm, and inputting the characteristic behaviors into the convolutional neural network;
comparing the characteristic behavior with a preset abnormal behavior in the convolutional neural network, and judging whether the characteristic behavior belongs to the preset abnormal behavior;
if the characteristic behavior is judged to belong to the preset abnormal behavior, the characteristic behavior is judged to belong to the abnormal behavior, and the abnormal behavior is subjected to voice broadcasting.
6. The method for examination hall or classroom discipline tour according to claim 5, wherein the extracting the characteristic behavior in the target image, inputting the characteristic behavior into a convolutional neural network, determining whether the characteristic behavior belongs to a preset abnormal behavior in the convolutional neural network, and if it is determined that the characteristic behavior belongs to the preset abnormal behavior, performing voice broadcasting, and then comprising:
when the characteristic behavior is judged to belong to the abnormal behavior, and the abnormal behavior is subjected to voice broadcasting, extracting skeleton key points in a target image corresponding to the abnormal behavior according to the target detection algorithm;
and storing the skeleton key points into the storage database through a CSV mode, and extracting behavior characteristics in the later-stage detection class image according to the skeleton key points.
7. A method for an examination room or classroom discipline tour according to claim 5 or 6, wherein the target detection algorithm includes: the YOLO target extraction algorithm, the HRNet human skeleton extraction algorithm and the FerNet emotion recognition algorithm.
8. A system for examination room or classroom discipline tour, the system comprising:
the image processing module is used for acquiring a tour route, shooting an examination room or classroom scene according to the tour route to obtain a current scene image, and preprocessing the current scene image to obtain a target image;
the behavior analysis module is used for extracting characteristic behaviors in the target image, inputting the characteristic behaviors into a convolutional neural network, judging whether the characteristic behaviors belong to preset abnormal behaviors in the convolutional neural network, and performing voice broadcasting if the characteristic behaviors are judged to belong to the preset abnormal behaviors.
9. A patrol robot, comprising: a memory, a processor and a program for examination room or classroom discipline tour stored on the memory and executable on the processor, which, when executed by the processor, implements the steps of the method for examination room or classroom discipline tour as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a program for examination room or classroom discipline tour, which, when executed by a processor, implements the steps of the method for examination room or classroom discipline tour as claimed in any one of claims 1-7.
CN202211572157.2A 2022-12-08 2022-12-08 Method for inspection hall or classroom discipline inspection tour and inspection robot Pending CN116259101A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211572157.2A CN116259101A (en) 2022-12-08 2022-12-08 Method for inspection hall or classroom discipline inspection tour and inspection robot


Publications (1)

Publication Number Publication Date
CN116259101A true CN116259101A (en) 2023-06-13

Family

ID=86683304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211572157.2A Pending CN116259101A (en) 2022-12-08 2022-12-08 Method for inspection hall or classroom discipline inspection tour and inspection robot

Country Status (1)

Country Link
CN (1) CN116259101A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437696A (en) * 2023-12-20 2024-01-23 山东山大鸥玛软件股份有限公司 Behavior monitoring analysis method, system, equipment and medium based on deep learning


Similar Documents

Publication Publication Date Title
CN107679471B (en) Indoor personnel air post detection method based on video monitoring platform
CN108596148B (en) System and method for analyzing labor state of construction worker based on computer vision
CN109298785A (en) A kind of man-machine joint control system and method for monitoring device
US11935297B2 (en) Item monitoring for doorbell cameras
US11676360B2 (en) Assisted creation of video rules via scene analysis
US11200435B1 (en) Property video surveillance from a vehicle
CN110458794B (en) Quality detection method and device for accessories of rail train
CN114445780A (en) Detection method and device for bare soil covering, and training method and device for recognition model
CN116259101A (en) Method for inspection hall or classroom discipline inspection tour and inspection robot
US11257355B2 (en) System and method for preventing false alarms due to display images
CN112949457A (en) Maintenance method, device and system based on augmented reality technology
CN112785564B (en) Pedestrian detection tracking system and method based on mechanical arm
CN115083229B (en) Intelligent recognition and warning system of flight training equipment based on AI visual recognition
CN115063921B (en) Building site intelligent gate system and building method
US11631245B2 (en) Smart glasses for property evaluation using AI and ML
US11893714B2 (en) Precipitation removal from video
CN114429677A (en) Coal mine scene operation behavior safety identification and assessment method and system
CN111696194A (en) Three-dimensional visualization implementation method and system based on field investigation and storage medium
JP7467311B2 (en) Inspection system and inspection method
CN114612068B (en) Automatic attendance checking and supplementing method and device
US20240020963A1 (en) Object embedding learning
CN113762096A (en) Health code identification method and device, storage medium and electronic equipment
CN115100871A (en) Pedestrian traffic violation identification method and system
CN118097198A (en) Automatic dressing compliance management and control system and method based on artificial intelligence
CN113989698A (en) Automatic regional equipment linkage method based on video intelligent object detection technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination