CN114049585B - Mobile phone operation detection method based on motion foreground extraction

Mobile phone operation detection method based on motion foreground extraction

Info

Publication number
CN114049585B
Authority
CN
China
Prior art keywords
mobile phone
motion
layer
network
module
Prior art date
Legal status
Active
Application number
CN202111187354.8A
Other languages
Chinese (zh)
Other versions
CN114049585A (en)
Inventor
夏祎
葛宪莹
倪亮
Current Assignee
Beijing Institute of Control and Electronic Technology
Original Assignee
Beijing Institute of Control and Electronic Technology
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Control and Electronic Technology
Priority to CN202111187354.8A
Publication of CN114049585A
Application granted
Publication of CN114049585B
Legal status: Active

Classifications

    • G06T 7/11 Image analysis; Segmentation; Region-based segmentation
    • G06F 18/214 Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N 3/045 Neural networks; Architecture; Combinations of networks
    • G06T 7/194 Image analysis; Segmentation; Edge detection involving foreground-background segmentation
    • Y02D 30/70 Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a mobile phone operation detection method based on motion foreground extraction. Background modeling and background comparison analysis are used to extract the motion foreground from a video sequence, a small-size image containing the motion region is obtained by segmentation, and a convolutional neural network then detects the mobile phone target within the motion-region image, thereby realizing mobile phone operation detection. The invention makes full use of the spatio-temporal information provided by the video to realize a coarse-to-fine detection process. The steps are simple and practical, and fixed monitoring cameras installed in places such as laboratories, meeting rooms and classrooms can be used to detect people using mobile phones, improving the monitoring effect.

Description

Mobile phone operation detection method based on motion foreground extraction
Technical Field
The invention relates to an action detection method, and in particular to a mobile phone operation detection method based on motion foreground extraction.
Background
With the rapid development of computer vision and the steady growth of computing power, intelligent video surveillance technology has gradually entered the public eye. The technology analyzes video collected by monitoring cameras with methods such as image processing and pattern recognition, automatically recognizing specific targets or abnormal conditions in the video picture so that early warnings can be issued in time. The application and popularization of intelligent video surveillance greatly improve social security and are of great significance for improving quality of life and guarding against disasters. However, owing to the limitations of detection and recognition algorithms and of hardware platforms, some deployed intelligent video surveillance systems suffer from low recognition accuracy and poor real-time performance, and there is not yet a mature detection method applicable to all scenes and requirements; detection methods that perform well and are simple to implement therefore need to be provided for different scenes.
At present, in fixed indoor scenes such as laboratories, meeting rooms and classrooms, methods for detecting mobile phone use mainly process and analyze single frames from the video, treating the mobile phone as the object of target detection and using its presence as the basis for judging whether a phone is in use. Such methods adopt a typical deep-learning target detection algorithm: a detection model is trained with image samples annotated with mobile phone bounding boxes; in application, single frames are sampled from the video at intervals and fed to the trained model for mobile phone target detection, and when a mobile phone is detected, mobile phone use is considered to exist. However, in surveillance video the mobile phone is small relative to the large background area, its features are not salient, and it closely resembles other objects such as notebooks; its apparent shape and size vary with the field of view and angle of the monitoring camera, and it is easily occluded while the user holds it, so the mobile phone target is often unclear in the image. Relying on the phone alone as the detection basis therefore easily causes false detections and missed detections. In addition, such methods detect on single frames and use only the spatial-domain features of the image, i.e., they detect the mobile phone target at a single moment and ignore the temporal information that the video provides.
Disclosure of Invention
The invention aims to provide a mobile phone operation detection method based on motion foreground extraction, solving the problems of false detection and missed detection that arise when mobile phone target detection is performed on single frame images.
The mobile phone operation detection method based on motion foreground extraction comprises the following specific steps:
Step one, a mobile phone operation detection system based on motion foreground extraction is built
A mobile phone operation detection system based on motion foreground extraction comprises: a background model construction module, a motion foreground extraction module, an offline training module and a mobile phone operation detection module.
The background model construction module has the following functions: fitting the background image with a function to obtain a model, and updating the background model according to the actual scene changes in the video.
The motion foreground extraction module has the following functions: comparing the video sequence with the background model, extracting the motion foreground, and segmenting the motion region through connectivity analysis.
The offline training module has the following functions: determining the detection network model, constructing a motion-region image sample library, and performing offline network training with the sample library.
The mobile phone operation detection module has the following functions: computing on the motion-region image with the network model, and detecting whether a mobile phone use action exists.
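To make the division of labour between the modules concrete, the following minimal Python skeleton sketches the coarse-to-fine flow; the class and method names are hypothetical illustrations, not part of the patent:

```python
class PhoneUseDetector:
    """Coarse-to-fine pipeline: background model -> motion foreground
    -> motion-region crops -> CNN-based mobile phone detection."""

    def __init__(self, background_model, segment_regions, phone_net):
        self.background_model = background_model  # built and updated per scene
        self.segment_regions = segment_regions    # connectivity analysis + cropping
        self.phone_net = phone_net                # detection network trained offline

    def process_frame(self, frame):
        # Coarse stage: compare the frame against the background model.
        fg_mask = self.background_model.apply(frame)
        # Segment small-size motion-region images from the frame.
        crops = self.segment_regions(frame, fg_mask)
        # Fine stage: detect a mobile phone inside each motion region.
        return any(self.phone_net.contains_phone(crop) for crop in crops)
```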
Step two, the background model construction module completes background modeling and background updating for the use scene
The background model construction module quantifies the background precisely with Gaussian probability density functions, fitting each pixel with K Gaussian distributions to establish a background model for the use scene, represented by formula (1):

$$P(X_t)=\sum_{i=1}^{K} w_{i,t}\,\eta(X_t,\mu_{i,t},\Sigma_{i,t}),\qquad \eta(X_t,\mu,\Sigma)=\frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\,e^{-\frac{1}{2}(X_t-\mu)^{T}\Sigma^{-1}(X_t-\mu)} \tag{1}$$

In formula (1), $X_t$ is the value of pixel $(x,y)$ at time $t$, $w_{i,t}$ is the weight of the $i$-th Gaussian distribution, $\eta(X_t,\mu_{i,t},\Sigma_{i,t})$, $\mu_{i,t}$ and $\Sigma_{i,t}$ are respectively the $i$-th Gaussian probability density function, its mean and its covariance matrix, and $n$ is the dimension of the Gaussian distribution.

The background model is updated in real time according to changes in the scene, expressed by formulas (2) to (4):

$$w_{i,t}=(1-\alpha)\,w_{i,t-1}+\alpha \tag{2}$$

$$\mu_{i,t}=(1-\rho)\,\mu_{i,t-1}+\rho X_t \tag{3}$$

$$\Sigma_{i,t}=(1-\rho)\,\Sigma_{i,t-1}+\rho\,(X_t-\mu_{i,t})(X_t-\mu_{i,t})^{T} \tag{4}$$

In formulas (2) to (4), $\alpha$ is the learning rate and $\rho$ is the update rate of the model. After the update is completed, the $w_{i,t}/\sigma_{i,t}$ values of the distributions at each pixel are computed and sorted, and the largest $B$ distributions are selected as the background model, i.e. the number of Gaussian distributions describing the background is $B$, where $T$ is a weight accumulation threshold, $T\in(0.5,1)$, expressed by formula (5):

$$B=\arg\min_{b}\left(\sum_{i=1}^{b} w_{i,t}>T\right) \tag{5}$$
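For illustration only: OpenCV's MOG2 background subtractor implements a per-pixel Gaussian mixture model of this kind, with online weight/mean/variance updates analogous to formulas (2) to (4). A minimal sketch, assuming OpenCV is installed and "monitor.mp4" stands in for the camera stream (both assumptions, not from the patent):

```python
import cv2

# Per-pixel mixture-of-Gaussians background model, updated online in the
# spirit of formulas (2)-(4); the parameters below are illustrative.
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500,       # number of frames that influence the Gaussians
    varThreshold=16,   # squared Mahalanobis distance used in the match test
    detectShadows=False)

cap = cv2.VideoCapture("monitor.mp4")  # hypothetical surveillance clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # apply() matches each pixel against its background Gaussians and returns
    # a binarized mask (0 = background, 255 = motion foreground), playing the
    # role of formula (6); the learning rate is chosen automatically.
    fg_mask = subtractor.apply(frame)
cap.release()
```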
the third step of motion foreground extraction module extracts motion foreground and divides motion area to complete coarse extraction
The motion foreground extraction module compares the current frame of the video sequence against the background model, extracts the motion foreground, and segments a target region containing human motion from the current frame according to the motion foreground.
Starting from detection time $t$, the frame image is input and compared with the background model, and the matching relation between each pixel value $X_t$ and the $B$ background Gaussian distributions is computed pixel by pixel. When the pixel value matches one of the first $B$ Gaussian distributions, the pixel is a background point; otherwise it is classified as motion foreground. Evaluating every pixel in the frame against this matching relation determines whether it matches a Gaussian distribution and yields a binarized image. The matching relation is expressed by formula (6):

$$F_t(x,y)=\begin{cases}0, & X_t \text{ matches one of the first } B \text{ Gaussian distributions}\\ 1, & \text{otherwise}\end{cases} \tag{6}$$

In formula (6), a point with gray value 0 is a background point, and a point with gray value 1 is a motion foreground point.
After the motion foreground is extracted, connectivity analysis is performed on it, and a target-region image containing human motion is segmented from the current frame, yielding a small-size image of size w × h; this completes the coarse extraction.
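A sketch of the connectivity analysis and coarse extraction, continuing the previous sketch: cv2.connectedComponentsWithStats labels the 8-connected foreground blobs, and each sufficiently large blob is cropped from the frame as a w × h motion-region image. The noise-filter kernel and the area threshold are illustrative choices, not values from the patent:

```python
import cv2

def extract_motion_regions(frame, fg_mask, min_area=1500):
    """Segment motion regions from the binarized foreground mask and
    return the cropped small-size images (coarse extraction)."""
    # Morphological opening suppresses isolated noise pixels in the mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    clean = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)

    # Connectivity analysis: label the 8-connected foreground components.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(clean, connectivity=8)
    crops = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:  # keep regions large enough to contain a person
            crops.append(frame[y:y + h, x:x + w])
    return crops
```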
Step four, the offline training module completes determination and training of the mobile phone detection network
The offline training module annotates the mobile phones in the motion-region images obtained by the motion foreground extraction module, completing construction of a training sample library. It then determines and builds a deep convolutional neural network model for detecting mobile phones in motion-region images containing human bodies: the number of network layers, the definition of each layer, the number of convolution planes per layer, the convolution kernel size, the pooling size, the pooling-layer calculation function, the activation function and the loss function are determined. The unknown parameters of each convolution kernel of the deep convolutional neural network are then trained offline with the constructed sample library.
The basic convolution-layer operation of the network is represented by formula (7):

$$X_{a,b+1}=f\left(\sum X_b\cdot W_{a,b}+b_{a,b}\right) \tag{7}$$

In formula (7), $f$ is the activation function, $W_{a,b}$ and $b_{a,b}$ are respectively the convolution kernel and bias of the $a$-th convolution plane in the $b$-th layer of the network, $X_b$ denotes the inputs of the channels of the $b$-th layer, and $X_{a,b+1}$ denotes the output of the $a$-th convolution plane of the $b$-th layer.
The basic pooling-layer operation of the network is represented by formula (8):

$$X_{a,b+1}=p\left(X_{a,b}\right) \tag{8}$$

In formula (8), $X_{a,b}$ denotes the input of the $a$-th channel of the $b$-th layer of the network, $X_{a,b+1}$ denotes its output, and $p$ is the pooling-layer calculation function.
The basic fully connected layer operation of the network is represented by formula (9):

$$y_b=f\left(\sum x_b\cdot w_b+b_b\right) \tag{9}$$

In formula (9), $w_b$ and $b_b$ denote respectively the weight and bias of the $b$-th fully connected layer, $x_b$ denotes its input, and $y_b$ its output.
During training, the parameters are updated with formula (10):

$$W^{(m+1)}=W^{(m)}-\eta\,\frac{\partial\,loss}{\partial W^{(m)}} \tag{10}$$

In formula (10), $\eta$ denotes the learning rate designed for the training process, and the superscript $(m)$ denotes the quantity computed in the $m$-th iteration.
After iterative computation, the loss function converges to its minimum, yielding a deep convolutional neural network model suited to mobile phone detection; this completes the offline preparation stage.
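The patent fixes the layer operations and the update rule but not a concrete architecture. As one hedged example, the following PyTorch sketch composes the convolution (7), pooling (8) and fully connected (9) operations into a small phone/no-phone classifier and applies the gradient-descent update of formula (10) through SGD; the input size, channel counts and the random batch standing in for the sample library are all illustrative assumptions:

```python
import torch
import torch.nn as nn

class PhoneNet(nn.Module):
    """Toy detector head over 64 x 64 motion-region crops."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),  # formula (7)
            nn.MaxPool2d(2),                                        # formula (8)
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # formula (9): phone / no phone

    def forward(self, x):                 # x: (batch, 3, 64, 64)
        return self.classifier(self.features(x).flatten(1))

model = PhoneNet()
loss_fn = nn.CrossEntropyLoss()
# SGD realizes formula (10): W(m+1) = W(m) - eta * d(loss)/dW(m).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# One illustrative iteration on a random batch standing in for the sample library.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```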
Step five, the mobile phone operation detection module completes the final detection
The mobile phone operation detection module detects mobile phones with the network model obtained by the offline training module: the motion-region image obtained by the motion foreground extraction module is input into the network model for computation, and the mobile phone detection result is output. When a mobile phone is detected in the motion-region image, a mobile phone use action is considered to exist; when no mobile phone is detected in the motion-region image, no mobile phone use action is considered to exist.
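Putting the final step together, a sketch that assumes the PhoneNet model and the extract_motion_regions helper from the earlier sketches; the 64 × 64 input size and the 0.5 decision threshold are illustrative:

```python
import cv2
import torch

def detect_phone_use(model, crops, input_size=(64, 64), threshold=0.5):
    """Return True if any motion-region crop is classified as containing a phone."""
    model.eval()
    with torch.no_grad():
        for crop in crops:
            resized = cv2.resize(crop, input_size)
            x = torch.from_numpy(resized).float().permute(2, 0, 1).unsqueeze(0) / 255.0
            prob_phone = torch.softmax(model(x), dim=1)[0, 1].item()
            if prob_phone >= threshold:
                return True   # mobile phone detected: phone-use action exists
    return False              # no phone in any motion region
```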
Thus, mobile phone operation detection based on motion foreground extraction is realized.
The invention realizes detection of mobile phone use actions: in the use scene, the motion foreground is extracted as coarse detection, and a deep learning network then performs fine detection of the mobile phone within the small-size motion-foreground images obtained by the coarse detection. This coarse-to-fine detection procedure makes full use of spatio-temporal feature information and improves detection accuracy.

Claims (5)

1. A mobile phone operation detection method based on motion foreground extraction, characterized by comprising the following specific steps:
Step one, a mobile phone operation detection system based on motion foreground extraction is built
A mobile phone operation detection system based on motion foreground extraction comprises: a background model construction module, a motion foreground extraction module, an offline training module and a mobile phone operation detection module;
the second step of background model construction module completes background modeling and background updating of the use scene
The background model construction module quantifies the background precisely with Gaussian probability density functions, fitting each pixel with K Gaussian distributions to establish a background model for the use scene, represented by formula (1):

$$P(X_t)=\sum_{i=1}^{K} w_{i,t}\,\eta(X_t,\mu_{i,t},\Sigma_{i,t}),\qquad \eta(X_t,\mu,\Sigma)=\frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\,e^{-\frac{1}{2}(X_t-\mu)^{T}\Sigma^{-1}(X_t-\mu)} \tag{1}$$

in formula (1), $X_t$ is the value of pixel $(x,y)$ at time $t$, $w_{i,t}$ is the weight of the $i$-th Gaussian distribution, $\eta(X_t,\mu_{i,t},\Sigma_{i,t})$, $\mu_{i,t}$ and $\Sigma_{i,t}$ are respectively the $i$-th Gaussian probability density function, its mean and its covariance matrix, and $n$ is the dimension of the Gaussian distribution;

the background model is updated in real time according to changes in the scene, expressed by formulas (2) to (4):

$$w_{i,t}=(1-\alpha)\,w_{i,t-1}+\alpha \tag{2}$$

$$\mu_{i,t}=(1-\rho)\,\mu_{i,t-1}+\rho X_t \tag{3}$$

$$\Sigma_{i,t}=(1-\rho)\,\Sigma_{i,t-1}+\rho\,(X_t-\mu_{i,t})(X_t-\mu_{i,t})^{T} \tag{4}$$

in formulas (2) to (4), $\alpha$ is the learning rate and $\rho$ is the update rate of the model; after the update is completed, the $w_{i,t}/\sigma_{i,t}$ values of the distributions at each pixel are computed and sorted, and the largest $B$ distributions are selected as the background model, i.e. the number of Gaussian distributions describing the background is $B$, where $T$ is a weight accumulation threshold, $T\in(0.5,1)$, expressed by formula (5):

$$B=\arg\min_{b}\left(\sum_{i=1}^{b} w_{i,t}>T\right) \tag{5}$$
the third step of motion foreground extraction module extracts motion foreground and divides motion area to complete coarse extraction
The motion foreground extraction module compares and calculates a current frame image of the video sequence with a background image model, extracts a motion foreground, and segments a target area containing human body motion from the current frame image according to the motion foreground;
from the detection time t, inputting the frame image, comparing with the background model, and calculating pixel value X one by one t The matching relation with the obtained B Gaussian distributions, when the pixel value is matched with one of the previous B Gaussian distributions, the pixel point is a background point, otherwise, the pixel point is divided into a motion foreground; calculating pixel points in the frame image one by one according to the matching relation, determining whether the pixel points can be matched with Gaussian distribution, and obtaining a binarized image; the matching relationship is expressed by formula (6):
in the formula (6), a point with a gray value of 0 is a background point, and a point with a gray value of 1 moves to the foreground point;
after the motion foreground is extracted, connectivity analysis is carried out on the motion foreground, a target area image containing human motion is segmented from the current frame image, a small-size image with the size of w x h is obtained, and rough extraction is completed;
the fourth step of off-line training module completes the determination and training of detecting the mobile phone network
The off-line training module marks the mobile phone in the moving area image obtained by the moving foreground extraction module, completes the construction of a training sample library, determines and builds a deep convolutional neural network model, is used for detecting the mobile phone from the moving area image containing the human body, determines the number of network layers, definition of each layer, the number of convolutional surfaces of each layer, the size of a convolutional kernel, the pooling size, a pooling layer calculation function, an activation function and a loss function, and then performs off-line learning training on unknown parameters of each convolutional kernel of the deep convolutional neural network by utilizing the constructed sample library;
the convolution layer basic operation of the network is represented by formula (7):
X a,b+1 =f(∑X b ·W a,b +b a,b ) (7)
in the formula (7), f is an activation function, W a,b And b a,b Convolution kernels and offset values, X, of an a-th convolution plane in a b-th layer of the network respectively b Representing the input of each channel of the b layer of the network, X a,b+1 An output representing a layer b, a, convolution plane of the network;
the pooling layer basic operation of the network is represented by formula (8):
X a,b+1 =p(X a,b ) (8)
in the formula (8), X a,b X represents an input of a layer b, a channel of the network a,b+1 Representing the output of a channel a of a layer b of the network, wherein p is a calculation function of a pooling layer;
the basic operation of the network full connection layer is expressed by a formula (9):
y b =f(∑x b ·w b +b b ) (9)
in the formula (9), w b And b b Respectively representing the weight and bias of the b layer in the full connection layer, x b Representing the input of layer b, y, of the fully connected layers b Representing the output of layer b in the fully connected layer;
during training, the parameters are updated with formula (10):

$$W^{(m+1)}=W^{(m)}-\eta\,\frac{\partial\,loss}{\partial W^{(m)}} \tag{10}$$

in formula (10), $\eta$ denotes the learning rate designed for the training process, and the superscript $(m)$ denotes the quantity computed in the $m$-th iteration;
after iterative computation, the loss function converges to its minimum, yielding a deep convolutional neural network model suited to mobile phone detection and completing the offline preparation stage;
Step five, the mobile phone operation detection module completes the final detection
The mobile phone operation detection module detects mobile phones with the network model obtained by the offline training module: the motion-region image obtained by the motion foreground extraction module is input into the network model for computation, and the mobile phone detection result is output; when a mobile phone is detected in the motion-region image, a mobile phone use action is considered to exist; when no mobile phone is detected in the motion-region image, no mobile phone use action is considered to exist;
thus, mobile phone operation detection based on motion foreground extraction is realized.
2. The mobile phone operation detection method based on motion foreground extraction according to claim 1, characterized in that the background model construction module has the following functions: fitting the background image with a function to obtain a model, and updating the background model according to the actual scene changes in the video.
3. The mobile phone operation detection method based on motion foreground extraction according to claim 1, characterized in that the motion foreground extraction module has the following functions: comparing the video sequence with the background model, extracting the motion foreground, and segmenting the motion region through connectivity analysis.
4. The mobile phone operation detection method based on motion foreground extraction according to claim 1, characterized in that the offline training module has the following functions: determining the detection network model, constructing a motion-region image sample library, and performing offline network training with the sample library.
5. The mobile phone operation detection method based on motion foreground extraction according to claim 1, characterized in that the mobile phone operation detection module has the following functions: computing on the motion-region image with the network model, and detecting whether a mobile phone use action exists.
CN202111187354.8A 2021-10-12 2021-10-12 Mobile phone operation detection method based on motion foreground extraction Active CN114049585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111187354.8A CN114049585B (en) 2021-10-12 2021-10-12 Mobile phone operation detection method based on motion foreground extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111187354.8A CN114049585B (en) 2021-10-12 2021-10-12 Mobile phone operation detection method based on motion foreground extraction

Publications (2)

Publication Number Publication Date
CN114049585A CN114049585A (en) 2022-02-15
CN114049585B (en) 2024-04-02

Family

ID=80205355

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111187354.8A Active CN114049585B (en) 2021-10-12 2021-10-12 Mobile phone operation detection method based on motion foreground extraction

Country Status (1)

Country Link
CN (1) CN114049585B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133974A (en) * 2017-06-02 2017-09-05 南京大学 The vehicle type classification method that Gaussian Background modeling is combined with Recognition with Recurrent Neural Network
CN107749067A (en) * 2017-09-13 2018-03-02 华侨大学 Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks
WO2019237567A1 (en) * 2018-06-14 2019-12-19 江南大学 Convolutional neural network based tumble detection method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133974A (en) * 2017-06-02 2017-09-05 南京大学 The vehicle type classification method that Gaussian Background modeling is combined with Recognition with Recurrent Neural Network
CN107749067A (en) * 2017-09-13 2018-03-02 华侨大学 Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks
WO2019237567A1 (en) * 2018-06-14 2019-12-19 江南大学 Convolutional neural network based tumble detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A practical moving object detection and tracking algorithm; Zhao Hongwei; Feng Jia; Zang Xuebai; Song Botao; Journal of Jilin University (Engineering and Technology Edition); 2009-09-30 (No. A2); pp. 386-390 *

Also Published As

Publication number Publication date
CN114049585A (en) 2022-02-15

Similar Documents

Publication Publication Date Title
US20230289979A1 (en) A method for video moving object detection based on relative statistical characteristics of image pixels
Huang et al. Radial basis function based neural network for motion detection in dynamic scenes
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN103971386A (en) Method for foreground detection in dynamic background scenario
CN108537829B (en) Monitoring video personnel state identification method
CN113011367A (en) Abnormal behavior analysis method based on target track
CN108804992B (en) Crowd counting method based on deep learning
CN111402298A (en) Grain depot video data compression method based on target detection and trajectory analysis
CN111882586A (en) Multi-actor target tracking method oriented to theater environment
Kumar et al. Background subtraction based on threshold detection using modified K-means algorithm
Yang et al. A method of pedestrians counting based on deep learning
Wang et al. Video background/foreground separation model based on non-convex rank approximation RPCA and superpixel motion detection
CN109359530B (en) Intelligent video monitoring method and device
CN113052136B (en) Pedestrian detection method based on improved Faster RCNN
Elbaşi Fuzzy logic-based scenario recognition from video sequences
CN114049585B (en) Mobile phone operation detection method based on motion prospect extraction
CN115188081B (en) Complex scene-oriented detection and tracking integrated method
Chen et al. Intrusion detection of specific area based on video
Wang et al. Video Smoke Detection Based on Multi-feature Fusion and Modified Random Forest.
CN114038011A (en) Method for detecting abnormal behaviors of human body in indoor scene
Weng et al. Crowd density estimation based on a modified multicolumn convolutional neural network
CN113591607A (en) Station intelligent epidemic prevention and control system and method
Yang et al. A hierarchical approach for background modeling and moving objects detection
Sujatha et al. An innovative moving object detection and tracking system by using modified region growing algorithm
Setyoko et al. Gaussian Mixture Model in Dynamic Background of Video Sequences for Human Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant