CN117437696A - Behavior monitoring analysis method, system, equipment and medium based on deep learning


Info

Publication number
CN117437696A
CN117437696A
Authority
CN
China
Prior art keywords
image
video frame
behavior
examinee
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311754373.3A
Other languages
Chinese (zh)
Inventor
马磊 (Ma Lei)
陈义学 (Chen Yixue)
侯庆 (Hou Qing)
梁延灼 (Liang Yanzhuo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANDONG SHANDA OUMA SOFTWARE CO Ltd
Original Assignee
SHANDONG SHANDA OUMA SOFTWARE CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANDONG SHANDA OUMA SOFTWARE CO Ltd
Priority to CN202311754373.3A
Publication of CN117437696A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B3/00 - Audible signalling systems; Audible personal calling systems
    • G08B3/10 - Audible signalling systems; Audible personal calling systems using electric transmission; using electromagnetic transmission

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a behavior monitoring and analysis method, system, equipment and medium based on deep learning. It mainly relates to the technical field of behavior monitoring and analysis and addresses the low accuracy of the analysis results of existing schemes. The method comprises the following steps: collecting a behavior video of an examinee in real time through a mobile acquisition device to obtain video frame images; performing image enhancement on each video frame image and adjusting the enhanced image to a preset format to obtain a processed image; feeding the processed image into a trained position detection convolutional neural network to obtain the position coordinates of the examinee and, from them, the examinee position matting (a cropped image of the examinee region); feeding the examinee position matting into a human skeleton key point detection algorithm to obtain skeletal key point positions; determining whether the skeletal key point positions meet preset requirements and issuing a voice reminder when they do not; obtaining the final behavior category with the position detection convolutional neural network; and issuing a voice reminder when the behavior category does not meet the requirements.

Description

Behavior monitoring analysis method, system, equipment and medium based on deep learning
Technical Field
The application relates to the technical field of behavior monitoring analysis, in particular to a behavior monitoring analysis method, system, equipment and medium based on deep learning.
Background
With the development of network technology, online examination is becoming a mainstream examination form. However, proctoring an online examination is difficult, and examinees may engage in various rule-violating actions, such as standing up and leaving the examination interface to consult books, which seriously affect the fairness of the examination. How to effectively monitor the behavior of online examinees and discover and correct violations in time is therefore an urgent problem.
To address this problem, one approach dispatches a large number of invigilators to watch the monitoring feeds of multiple examinees in real time. Another uses a face recognition algorithm to monitor multiple examinees in real time: face images are captured continuously during the examination process, and the examination interface is locked when no face, or a face that is not the examinee's, is captured. This keeps the examination content confidential during the examination, effectively improves the safeguarding of examination security, and supports analysis of examination behavior and answering state against online examination cheating.
However, in the invigilator-based monitoring method, video analysis is subjective owing to the invigilators' physiological limits: watching video feeds intensively, for long periods and under high tension quickly produces visual fatigue, making it difficult to judge timely and accurately the cheating behaviors that occasionally appear in the monitoring video. The face-capture monitoring method, for its part, focuses mainly on face analysis, and the accuracy of its analysis results is low.
Disclosure of Invention
Aiming at the defects in the prior art, the application provides a behavior monitoring analysis method, a behavior monitoring analysis system, behavior monitoring analysis equipment and a behavior monitoring analysis medium based on deep learning, so as to solve the problem of low accuracy of analysis results of the existing scheme.
In a first aspect, the present application provides a behavior monitoring analysis method based on deep learning, where the method includes: acquiring a behavior video of an examinee in real time through mobile acquisition equipment so as to obtain a video frame image; performing image enhancement on the video frame image, and adjusting the image after image enhancement into a preset format to obtain a processed image; the processed image is used as the input of a trained position detection convolutional neural network to obtain the position coordinates of the examinee, and further the position matting of the examinee is obtained; the prediction output layer of the position detection convolutional neural network adopts a decoupling head mode; taking the position matting of the examinee as the input of a human skeleton key point detection algorithm to obtain skeleton key point positions; determining whether the positions of the skeletal key points meet preset requirements or not, and performing voice reminding when the positions of the skeletal key points do not meet the preset requirements; the position detection convolutional neural network is utilized, output data of branches after decoupling heads of a prediction output layer are taken, and a Softmax activation function is input to obtain a final behavior class; and when the behavior category does not meet the requirements, carrying out voice reminding.
Further, after collecting the behavior video of the examinee in real time through the mobile acquisition device to obtain the video frame image, the method further comprises: determining whether the video frame image meets a preset standard and, if not, popping up a prompt box to correct the mobile acquisition device.
Further, the image enhancement of the video frame image specifically includes: enhancing the video frame image by a preset enhancement formula, in which each output pixel value is computed from the corresponding original image pixel value using preset enhancement constants, one of which is denoted A.
Further, the method comprises: after a voice prompt is triggered, storing the video frame image that triggered it.
In a second aspect, the present application provides a behavior monitoring analysis system based on deep learning, the system comprising: an acquisition module configured to collect the behavior video of the examinee in real time through the mobile acquisition device to obtain a video frame image; perform image enhancement on the video frame image and adjust the enhanced image to a preset format to obtain a processed image; and feed the processed image into a trained position detection convolutional neural network to obtain the position coordinates of the examinee and, further, the examinee position matting, the prediction output layer of the position detection convolutional neural network adopting a decoupled-head structure; and a reminding module configured to feed the examinee position matting into a human skeleton key point detection algorithm to obtain skeletal key point positions; determine whether the skeletal key point positions meet preset requirements and issue a voice reminder when they do not; take the output data of the branch after the decoupled head of the prediction output layer and input it into a Softmax activation function to obtain the final behavior category; and issue a voice reminder when the behavior category does not meet the requirements.
Further, the system also comprises a prompt module, configured to determine whether the video frame image meets a preset standard and, if not, pop up a prompt box to correct the mobile acquisition device.
Further, the obtaining module includes a prompting unit configured to enhance the video frame image by a preset enhancement formula, in which each output pixel value is computed from the corresponding original image pixel value using preset enhancement constants, one of which is denoted A.
Further, the system also comprises a storage module, configured to store, after a voice prompt is triggered, the video frame image that triggered the voice prompt.
In a third aspect, the present application provides a behavior monitoring analysis device based on deep learning, the device comprising: a processor; and a memory having executable code stored thereon that, when executed, causes the processor to perform a deep learning based behavior monitoring analysis method as in any of the above.
In a fourth aspect, the present application provides a non-volatile computer storage medium having stored thereon computer instructions that, when executed, implement a deep learning based behavior monitoring analysis method as in any of the above.
As can be appreciated by those skilled in the art, the present application has at least the following beneficial effects:
The mobile acquisition device collects video frame images of the online examinee in real time; the captured video frame images are preprocessed, e.g. by image enhancement and image resizing; the position information of the examinee is extracted by the position detection convolutional neural network; a human skeleton key point detection algorithm judges whether the examinee's limbs are completely within the frame, and a Softmax activation function checks the behavior category (whether a standing behavior exists); when the examinee stands up or a limb leaves the monitored picture, the abnormal behavior reminding module issues a reminder prompting the examinee to normalize their answering behavior. In addition, the method and device save video frame images showing suspected cheating, which facilitates later manual review and verification.
Drawings
Some embodiments of the present disclosure are described below with reference to the accompanying drawings, in which:
fig. 1 is a flowchart of a behavior monitoring analysis method based on deep learning according to an embodiment of the present application.
Fig. 2 is a schematic diagram of an internal structure of a behavior monitoring analysis system based on deep learning according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an internal structure of a behavior monitoring and analyzing device based on deep learning according to an embodiment of the present application.
Detailed Description
It should be understood by those skilled in the art that the embodiments described below are only preferred embodiments of the present disclosure, and do not represent that the present disclosure can be realized only by the preferred embodiments, which are merely for explaining the technical principles of the present disclosure, not for limiting the scope of the present disclosure. Based on the preferred embodiments provided by the present disclosure, all other embodiments that may be obtained by one of ordinary skill in the art without inventive effort shall still fall within the scope of the present disclosure.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises an element.
The following describes in detail the technical solution proposed in the embodiments of the present application through the accompanying drawings.
The embodiment of the application provides a behavior monitoring analysis method based on deep learning, as shown in fig. 1, and the method mainly comprises the following steps:
Step 110: collect the behavior video of the examinee in real time through the mobile acquisition device to obtain a video frame image; perform image enhancement on the video frame image and adjust the enhanced image to a preset format to obtain a processed image; and feed the processed image into a trained position detection convolutional neural network to obtain the position coordinates of the examinee and, from them, the examinee position matting (a cropped image of the examinee region).
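As a sketch of the matting step, the detector's bounding box can simply be used to crop the examinee region out of the frame. The function below is illustrative only (grayscale frame as a list of pixel rows; the clamping of out-of-range coordinates is an assumption, since the patent does not specify how they are handled):

```python
def crop_person(frame, box):
    """Cut the examinee region ("position matting") out of a frame.

    frame: grayscale image given as a list of pixel rows.
    box:   (x1, y1, x2, y2) corner coordinates from the position
           detector, clamped to the frame bounds before cropping.
    """
    h, w = len(frame), len(frame[0])
    x1, y1, x2, y2 = box
    x1, y1 = max(0, x1), max(0, y1)
    x2, y2 = min(w, x2), min(h, y2)
    return [row[x1:x2] for row in frame[y1:y2]]
```

In a real system the frame would be a camera image array and the box a tensor from the network, but the clamping-then-slicing logic is the same.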
The prediction output layer of the position detection convolutional neural network adopts a decoupled-head structure and is divided into three branches, which respectively predict foreground/background, position frame coordinates, and categories.
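The structural idea of a decoupled head is that each prediction task gets its own branch over a shared feature, rather than one coupled output tensor. The toy sketch below shows this in plain Python; the feature dimension, random weights, and class count are illustrative assumptions, not the patent's actual architecture:

```python
import random

class DecoupledHead:
    """Toy decoupled prediction head: three independent linear branches
    over a shared feature vector, one each for the foreground/background
    score, the position frame coordinates, and the behavior-class logits."""

    def __init__(self, feat_dim, num_classes, seed=0):
        rng = random.Random(seed)

        def branch(out_dim):  # random weight matrix, illustration only
            return [[rng.uniform(-0.1, 0.1) for _ in range(feat_dim)]
                    for _ in range(out_dim)]

        self.w_obj = branch(1)            # foreground vs. background
        self.w_box = branch(4)            # position frame coordinates
        self.w_cls = branch(num_classes)  # behavior categories

    @staticmethod
    def _matvec(w, x):
        return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

    def forward(self, feat):
        # Each branch sees the same feature but has its own weights.
        return (self._matvec(self.w_obj, feat),
                self._matvec(self.w_box, feat),
                self._matvec(self.w_cls, feat))
```

In a real detector each branch would be a small convolutional stack rather than a single linear map, but the three-way split is the defining property.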
In addition, after the behavior video of the examinee is collected in real time through the mobile acquisition device to obtain a video frame image, the video frame image can be checked to determine whether it meets a preset standard; when it does not (for example, when image quality is poor), a prompt box is popped up to correct the mobile acquisition device, prompting the examinee to adjust position, lighting, and so on.
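One minimal way to realize such a "preset standard" check is a mean-brightness gate. The thresholds and the criterion itself are assumptions, since the patent does not state what the standard is:

```python
def frame_meets_standard(pixels, min_mean=40, max_mean=220):
    """Crude quality gate: reject frames whose mean brightness suggests
    the picture is too dark or too bright, so the examinee can be
    prompted to adjust position and lighting."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return min_mean <= mean <= max_mean
```

A production check would likely also consider blur and whether a face or person is visible at all.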
The image enhancement of the video frame image may specifically be performed by a preset enhancement formula, in which each output pixel value is computed from the corresponding original image pixel value using preset enhancement constants, one of which is denoted A.
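The patent's enhancement formula itself is not reproduced in this text. As a stand-in, the sketch below applies a common two-constant power-law (gamma) enhancement, out = A * (in / 255)^gamma * 255, which matches the shape of the description (output pixel from input pixel via two preset constants) but is an assumption, not the claimed formula:

```python
def enhance(pixels, A=1.2, gamma=0.8, max_val=255):
    """Power-law enhancement of a grayscale image given as pixel rows.
    A and gamma stand in for the preset enhancement constants; results
    are clipped to the valid pixel range."""
    return [[min(max_val, round(A * (p / max_val) ** gamma * max_val))
             for p in row]
            for row in pixels]
```

With gamma < 1 and A > 1 the transform brightens mid-tones, which is a plausible goal for webcam footage shot in poor lighting.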
Step 120: feed the examinee position matting into a human skeleton key point detection algorithm to obtain skeletal key point positions; determine whether the skeletal key point positions meet preset requirements and issue a voice reminder when they do not; take the output data of the branch after the decoupled head of the prediction output layer of the position detection convolutional neural network and input it into a Softmax activation function to obtain the final behavior category; and issue a voice reminder when the behavior category does not meet the requirements.
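The two checks in this step can be sketched as a bounds test over the skeletal key points (the "limbs fully in frame" requirement) and a Softmax over the class-branch outputs. Function names and the label set below are illustrative assumptions:

```python
import math

def keypoints_in_frame(keypoints, width, height):
    """True iff every (x, y) skeletal key point lies inside the frame,
    i.e. the examinee's limbs are fully within the monitored picture."""
    return all(0 <= x < width and 0 <= y < height for x, y in keypoints)

def softmax(logits):
    """Numerically stable Softmax over the class-branch output data."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def behavior_class(logits, labels):
    """Pick the behavior category with the highest Softmax probability."""
    probs = softmax(logits)
    return labels[probs.index(max(probs))]
```

A voice reminder would then be triggered whenever `keypoints_in_frame` is false or `behavior_class` returns a disallowed category such as standing.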
The method can also store data, which facilitates later manual review and verification. Specifically: after a voice prompt is triggered, the video frame image that triggered it is stored.
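Storing the triggering frame can be as simple as writing it out under a timestamped name for later manual review; the directory layout and naming scheme below are assumptions:

```python
import os
import time

def save_alert_frame(frame_bytes, out_dir):
    """Persist the encoded video frame that triggered a voice prompt,
    returning the saved path so it can be logged for later review."""
    os.makedirs(out_dir, exist_ok=True)
    name = "alert_%d.jpg" % int(time.time() * 1000)  # millisecond stamp
    path = os.path.join(out_dir, name)
    with open(path, "wb") as f:
        f.write(frame_bytes)
    return path
```

A real deployment would likely also record the examinee ID and the behavior category alongside the frame.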
In addition, fig. 2 is a schematic diagram of a behavior monitoring analysis system based on deep learning according to an embodiment of the present application. As shown in fig. 2, the system provided in the embodiment of the present application mainly includes:
the obtaining module 210 is configured to collect the behavior video of the examinee in real time through the mobile collection device, so as to obtain a video frame image; performing image enhancement on the video frame image, and adjusting the image after image enhancement into a preset format to obtain a processed image; and taking the processed image as the input of a trained position detection convolutional neural network to obtain the position coordinates of the examinee, and further obtaining the position matting of the examinee.
The obtaining module 210 includes a prompting unit configured to enhance the video frame image by a preset enhancement formula, in which each output pixel value is computed from the corresponding original image pixel value using preset enhancement constants, one of which is denoted A.
The prediction output layer of the position detection convolutional neural network adopts a decoupling head mode.
The reminding module 220 is configured to feed the examinee position matting into a human skeleton key point detection algorithm to obtain skeletal key point positions; determine whether the skeletal key point positions meet preset requirements and issue a voice reminder when they do not; take the output data of the branch after the decoupled head of the prediction output layer of the position detection convolutional neural network and input it into a Softmax activation function to obtain the final behavior category; and issue a voice reminder when the behavior category does not meet the requirements.
The system also comprises a prompt module, configured to determine whether the video frame images meet a preset standard and, when they do not, pop up a prompt box to correct the mobile acquisition device.
The system also comprises a storage module, configured to store, after a voice prompt is triggered, the video frame image that triggered the voice prompt.
The above is a method embodiment in the present application, and based on the same inventive concept, the embodiment of the present application further provides a behavior monitoring analysis device based on deep learning. As shown in fig. 3, the apparatus includes: a processor; and a memory having executable code stored thereon that, when executed, causes the processor to perform a deep learning based behavior monitoring analysis method as in the above embodiments.
Specifically, the server acquires the behavior video of the examinee in real time through the mobile acquisition equipment, so as to obtain a video frame image; performing image enhancement on the video frame image, and adjusting the image after image enhancement into a preset format to obtain a processed image; the processed image is used as the input of a trained position detection convolutional neural network to obtain the position coordinates of the examinee, and further the position matting of the examinee is obtained; the prediction output layer of the position detection convolutional neural network adopts a decoupling head mode; taking the position matting of the examinee as the input of a human skeleton key point detection algorithm to obtain skeleton key point positions; determining whether the positions of the skeletal key points meet preset requirements or not, and performing voice reminding when the positions of the skeletal key points do not meet the preset requirements; the position detection convolutional neural network is utilized, output data of branches after decoupling heads of a prediction output layer are taken, and a Softmax activation function is input to obtain a final behavior class; and when the behavior category does not meet the requirements, carrying out voice reminding.
In addition, the embodiment of the application also provides a nonvolatile computer storage medium, on which executable instructions are stored, and when the executable instructions are executed, the behavior monitoring analysis method based on deep learning is realized.
Thus far, the technical solution of the present disclosure has been described in connection with the foregoing embodiments, but it is easily understood by those skilled in the art that the protective scope of the present disclosure is not limited to only these specific embodiments. The technical solutions in the above embodiments may be split and combined by those skilled in the art without departing from the technical principles of the present disclosure, and equivalent modifications or substitutions may be made to related technical features, which all fall within the scope of the present disclosure.

Claims (10)

1. A behavior monitoring analysis method based on deep learning, the method comprising:
acquiring a behavior video of an examinee in real time through mobile acquisition equipment so as to obtain a video frame image; performing image enhancement on the video frame image, and adjusting the image after image enhancement into a preset format to obtain a processed image; the processed image is used as the input of a trained position detection convolutional neural network to obtain the position coordinates of the examinee, and further the position matting of the examinee is obtained; the prediction output layer of the position detection convolutional neural network adopts a decoupling head mode;
taking the position matting of the examinee as the input of a human skeleton key point detection algorithm to obtain skeleton key point positions; determining whether the positions of the skeletal key points meet preset requirements or not, and performing voice reminding when the positions of the skeletal key points do not meet the preset requirements; the position detection convolutional neural network is utilized, output data of branches after decoupling heads of a prediction output layer are taken, and a Softmax activation function is input to obtain a final behavior class; and when the behavior category does not meet the requirements, carrying out voice reminding.
2. The behavior monitoring analysis method based on deep learning according to claim 1, wherein after collecting the behavior video of the examinee in real time by the mobile collection device, thereby obtaining the video frame image, the method further comprises:
determining whether the video frame image meets a preset standard and, if not, popping up a prompt box to correct the mobile acquisition device.
3. The behavior monitoring analysis method based on deep learning according to claim 1, wherein the image enhancement is performed on the video frame image, and specifically comprises:
the video frame image is enhanced by a preset enhancement formula;
wherein each output pixel value is computed from the corresponding original image pixel value using preset enhancement constants, one of which is denoted A.
4. The deep learning based behavioral monitoring and analysis method of claim 1 further comprising:
after triggering the voice prompt, storing a video frame image triggering the voice prompt.
5. A behavior monitoring analysis system based on deep learning, the system comprising:
the acquisition module is used for acquiring the behavior video of the examinee in real time through the mobile acquisition equipment so as to acquire a video frame image; performing image enhancement on the video frame image, and adjusting the image after image enhancement into a preset format to obtain a processed image; the processed image is used as the input of a trained position detection convolutional neural network to obtain the position coordinates of the examinee, and further the position matting of the examinee is obtained; the prediction output layer of the position detection convolutional neural network adopts a decoupling head mode;
the reminding module is used for taking the examinee position matting as the input of a human skeleton key point detection algorithm so as to obtain skeletal key point positions; determining whether the skeletal key point positions meet preset requirements, and performing voice reminding when they do not; taking, with the position detection convolutional neural network, the output data of the branch after the decoupled head of the prediction output layer and inputting it into a Softmax activation function to obtain the final behavior category; and performing voice reminding when the behavior category does not meet the requirements.
6. The deep learning based behavioral monitoring and analysis system of claim 5 further comprising a prompt module,
configured to determine whether the video frame images meet a preset standard and, when they do not, pop up a prompt box to correct the mobile acquisition device.
7. The deep learning based behavioral monitoring and analysis system of claim 5 where the acquisition module includes a prompt unit,
configured to enhance the video frame image by a preset enhancement formula;
wherein each output pixel value is computed from the corresponding original image pixel value using preset enhancement constants, one of which is denoted A.
8. The deep learning based behavioral monitoring and analysis system of claim 5 further comprising a storage module,
configured to store, after a voice prompt is triggered, the video frame image that triggered the voice prompt.
9. A behavior monitoring analysis device based on deep learning, the device comprising:
a processor;
and a memory having executable code stored thereon that, when executed, causes the processor to perform a deep learning based behavior monitoring analysis method as recited in any one of claims 1-4.
10. A non-transitory computer storage medium having stored thereon computer instructions that, when executed, implement a deep learning based behavior monitoring analysis method according to any of claims 1-4.
CN202311754373.3A 2023-12-20 2023-12-20 Behavior monitoring analysis method, system, equipment and medium based on deep learning Pending CN117437696A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311754373.3A CN117437696A (en) 2023-12-20 2023-12-20 Behavior monitoring analysis method, system, equipment and medium based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311754373.3A CN117437696A (en) 2023-12-20 2023-12-20 Behavior monitoring analysis method, system, equipment and medium based on deep learning

Publications (1)

Publication Number Publication Date
CN117437696A (en) 2024-01-23

Family

ID=89553832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311754373.3A Pending CN117437696A (en) 2023-12-20 2023-12-20 Behavior monitoring analysis method, system, equipment and medium based on deep learning

Country Status (1)

Country Link
CN (1) CN117437696A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815816A (en) * 2018-12-24 2019-05-28 山东山大鸥玛软件股份有限公司 A kind of examinee examination hall abnormal behaviour analysis method based on deep learning
CN112036299A (en) * 2020-08-31 2020-12-04 山东科技大学 Examination cheating behavior detection method and system under standard examination room environment
CN113537005A (en) * 2021-07-02 2021-10-22 福州大学 On-line examination student behavior analysis method based on attitude estimation
CN114120242A (en) * 2021-12-03 2022-03-01 山东山大鸥玛软件股份有限公司 Monitoring video behavior analysis method, system and terminal based on time sequence characteristics
CN114333070A (en) * 2022-03-10 2022-04-12 山东山大鸥玛软件股份有限公司 Examinee abnormal behavior detection method based on deep learning
CN114973097A (en) * 2022-06-10 2022-08-30 广东电网有限责任公司 Method, device, equipment and storage medium for recognizing abnormal behaviors in electric power machine room
CN115546899A (en) * 2022-11-09 2022-12-30 山东山大鸥玛软件股份有限公司 Examination room abnormal behavior analysis method, system and terminal based on deep learning
CN115880647A (en) * 2023-02-22 2023-03-31 山东山大鸥玛软件股份有限公司 Method, system, equipment and storage medium for analyzing abnormal behaviors of examinee examination room
CN116259101A (en) * 2022-12-08 2023-06-13 深圳技术大学 Method for inspection hall or classroom discipline inspection tour and inspection robot
CN116895098A (en) * 2023-07-20 2023-10-17 桂林电子科技大学 Video human body action recognition system and method based on deep learning and privacy protection


Similar Documents

Publication Publication Date Title
CN109726663A (en) Online testing monitoring method, device, computer equipment and storage medium
CN110570358A (en) vehicle loss image enhancement method and device based on GAN network
Hu et al. Research on abnormal behavior detection of online examination based on image information
CN111563422B (en) Service evaluation acquisition method and device based on bimodal emotion recognition network
CN109919079A (en) Method and apparatus for detecting learning state
EP4300417A1 (en) Method and apparatus for evaluating image authenticity, computer device, and storage medium
CN111241883B (en) Method and device for preventing cheating of remote tested personnel
WO2020029608A1 (en) Method and apparatus for detecting burr of electrode sheet
KR102154953B1 (en) Apparutus and method for automatically determining ring size
CN107832721B (en) Method and apparatus for outputting information
CN111382672A (en) Cheating monitoring method and device for online examination
Nakabayashi et al. Dissociating positive and negative influences of verbal processing on the recognition of pictures of faces and objects.
CN112637568B (en) Distributed security monitoring method and system based on multi-node edge computing equipment
CN115205764B (en) Online learning concentration monitoring method, system and medium based on machine vision
CN115511329A (en) Electric power operation compliance monitoring system and method
CN116132637A (en) Online examination monitoring system and method, electronic equipment and storage medium
CN117557941A (en) Video intelligent analysis system and method based on multi-mode data fusion
CN113794759B (en) Examination cloud platform system based on block chain
CN113128522B (en) Target identification method, device, computer equipment and storage medium
CN117437696A (en) Behavior monitoring analysis method, system, equipment and medium based on deep learning
CN115132228B (en) Language capability grading method and system
CN116433029A (en) Power operation risk assessment method, system, equipment and storage medium
CN116206373A (en) Living body detection method, electronic device and storage medium
CN116091963A (en) Quality evaluation method and device for clinical test institution, electronic equipment and storage medium
CN108968892A (en) The system and method that blind area monitors under a kind of colonoscopy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination