CN110245628B - Method and device for detecting a person discussion scene

Method and device for detecting a person discussion scene

Info

Publication number
CN110245628B
CN110245628B
Authority
CN
China
Prior art keywords
candidate
person
discussion
region
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910531632.3A
Other languages
Chinese (zh)
Other versions
CN110245628A (en)
Inventor
Request not to publish name
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Century Photosynthesis Technology Co ltd
Original Assignee
Chengdu Century Photosynthesis Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Century Photosynthesis Technology Co ltd filed Critical Chengdu Century Photosynthesis Technology Co ltd
Priority to CN201910531632.3A priority Critical patent/CN110245628B/en
Publication of CN110245628A publication Critical patent/CN110245628A/en
Application granted granted Critical
Publication of CN110245628B publication Critical patent/CN110245628B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G06F18/232 - Non-hierarchical techniques
    • G06F18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/088 - Non-supervised learning, e.g. competitive learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a device for detecting a person discussion scene. The method comprises the following steps: acquiring a current scene image; identifying the person regions in the scene image, and obtaining the position and area of each person region; and detecting a person discussion scene in the scene image according to the determined positions and areas of the person regions. With the method and device for detecting a person discussion scene, the person discussion scene in a scene image can be detected by processing the determined positions and areas of the person regions, and the operation is simple and convenient.

Description

Method and device for detecting a person discussion scene
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and a device for detecting a person discussion scene.
Background
At present, the analysis of human behavior, such as action recognition and detection, is a basic and difficult task in the field of computer vision and has a wide range of applications, such as intelligent monitoring systems, human-computer interaction, game control and robots. The main goal of dynamic scene recognition is to determine the behavior category of a single person or a group of persons in a video and to enable a device to interact with them based on that category.
Dynamic scenes in which a group of people are holding a discussion are very difficult to identify.
Disclosure of Invention
In order to solve the above problem, an object of the embodiments of the present invention is to provide a method and an apparatus for detecting a person discussion scene.
In a first aspect, an embodiment of the present invention provides a method for detecting a person discussion scene, including:
acquiring a current scene image;
identifying the person regions in the current scene image, and obtaining the position and area of each person region;
and detecting a person discussion scene in the current scene image according to the determined positions and areas of the person regions.
In a second aspect, an embodiment of the present invention further provides an apparatus for detecting a person discussion scene, including:
the acquisition module is used for acquiring a current scene image;
the processing module is used for identifying the person regions in the current scene image and obtaining the position and area of each person region;
and the detection module is used for detecting a person discussion scene in the current scene image according to the determined positions and areas of the person regions.
In the solutions provided in the first and second aspects above, the person discussion scene in the current scene image is detected according to the positions and areas of the person regions identified from the current scene image. Unlike the related art, in which a person discussion scene cannot be identified, the detection is achieved simply by processing the determined positions and areas of the person regions, and the operation is simple and convenient.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart illustrating a method for detecting a person discussion scenario according to embodiment 1 of the present invention;
fig. 2 is a schematic structural diagram illustrating an apparatus for detecting a person discussion scene according to embodiment 2 of the present invention.
Detailed Description
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise explicitly stated or limited, the terms "mounted," "connected," "fixed," and the like are to be construed broadly and may, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
Analysis of human behavior, such as action recognition and detection, is a fundamental and difficult task in the field of computer vision and has a wide range of applications, such as intelligent monitoring systems, human-computer interaction, game control and robots. The main goal of dynamic scene recognition is to determine the behavior category of an individual or a group of individuals in a video. The difficulty of scene action recognition mainly lies in the following four aspects: 1. Intra-class and inter-class differences are not well quantified; for the same action, the performances of different people may differ greatly. 2. Environmental differences, such as occlusion, multiple viewing angles and lighting effects. 3. Temporal variation: the speed at which a person's body moves when performing an action varies greatly, making it difficult to determine the starting point of the action, which in turn hampers characterizing the action by features extracted from the video frames. 4. There is a lack of suitable data sets for specific scene modes.
Existing scene action recognition methods divide into the traditional iDT (improved Dense Trajectories) algorithm based on trajectory tracking, the optical flow method that has benefited from the development of deep learning, neural networks, and the skeleton method based on skeleton extraction. The iDT method was the best method before deep learning entered the field, but it suffers from high time complexity and slow speed. Its basic idea is to use an optical flow field to obtain trajectories in a video sequence, extract various features along the trajectories, combine the features with Fisher vector encoding, and train a support vector machine to classify the behavior categories. Methods based on convolutional neural network (CNN) models or on skeleton extraction are superior at extracting high-level information and have also been used to learn spatio-temporal features from skeletons. These CNN-based methods may represent a skeletal sequence as an image by encoding the temporal dynamics and the skeletal joints into rows and columns respectively, and then feed the image into a CNN, as in image classification, to recognize the action contained therein. In this case, however, only the neighboring joints within the convolution kernel are considered when learning co-occurrence features. Although the receptive field covers all joints of the skeleton in subsequent convolutional layers, it is difficult to efficiently mine co-occurrence features from all joints. Due to the weight-sharing mechanism in the spatial dimension, the CNN model cannot learn free parameters for every joint. CNN- or skeleton-based models can extract the characteristics of a scene well at the present stage, but their training depends heavily on the size and labeling of the data set, and their real-time performance is low compared with the traditional iDT method.
Compared with existing methods, the invention provides a novel unsupervised decision algorithm for the person discussion scene. The method combines the ability of a convolutional neural network to extract high-dimensional features with the ability of clustering to quickly filter outliers. First, a convolutional neural network is used to localize the heads of the persons in the scene. The detected box position is then stored for each frame. Two clusterers are used to detect the occurrence of discussion activity: the first filters out detections that do not move for a certain time, which may be falsely detected targets or non-moving persons; the second calculates the number of targets belonging to the same cluster and their respective locations. Here, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), a density clustering method, is selected, and the clustering method is improved based on DBSCAN. DBSCAN is a well-known density clustering algorithm that characterizes how closely samples are distributed based on a set of "neighborhood" parameters, and defines a "cluster" (the data points belonging to one class) as the largest set of density-connected samples derived from the density-reachability relationship. In a monitored scene, however, there are some defects: for example, the distance cannot be measured with a simple Euclidean distance, because the image space acquired by a monocular camera exhibits a certain distortion and perspective transformation, and the distance between two points in the image is not proportional to the three-dimensional distance in the real world. This is why a dynamic scene in which a group of people are holding a discussion is very difficult for computer vision to identify.
Based on this, the present embodiments provide a method and an apparatus for detecting a person discussion scene, which detect the person discussion scene in a scene image from the positions and areas of the person regions identified in the image, and which are simple and convenient to operate.
Example 1
This embodiment provides a method for detecting a person discussion scene; the execution subject is a server.
The server may be any computing device capable of performing computer vision processing in the prior art, and details are not repeated here.
Referring to fig. 1, a flow of a method for detecting a people discussion scene may include the following specific steps:
step 100, acquiring a current scene image.
The current scene image is the scene image of the current frame acquired by the camera.
In step 100, the server may acquire the scene image through a camera connected to the server itself.
The scenes refer to various public places such as office areas, libraries, coffee houses and the like.
The camera is installed in various public places such as office areas, libraries, coffee houses and the like and is used for collecting images of different scenes.
Each camera carries its own identifier.
After a camera collects a scene image, it adds its identifier to the collected scene image and then sends the scene image, with the identifier added, to the server.
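As an illustration of the camera-side flow described above, a minimal Python sketch is given below; the OpenCV capture call, the dictionary message format and its field names are assumptions, since the patent does not specify a transport format.

    import time
    import cv2  # OpenCV; assumed available on the capture device

    def capture_and_tag(camera_id: str, device_index: int = 0) -> dict:
        """Grab one frame and stamp it with the camera's identifier."""
        cap = cv2.VideoCapture(device_index)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise RuntimeError("failed to grab a frame from the camera")
        # The identifier lets the server group stored positions per camera.
        return {"camera_id": camera_id, "timestamp": time.time(), "frame": frame}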
Step 102, identifying the person regions in the current scene image, and obtaining the position and area of each person region.
Specifically, step 102 may include the following specific steps (1) to (2):
(1) Preprocessing the current scene image;
(2) Identifying the person regions from the preprocessed current scene image, and obtaining the position and area of each person region.
In the above step (1), the image preprocessing steps include, but are not limited to: compression (cropping), color space conversion and noise reduction.
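A minimal sketch of this preprocessing step follows, assuming OpenCV and arbitrary parameter choices (target size, blur kernel); the patent names only the operation types.

    import cv2

    def preprocess(frame, size=(512, 512)):
        """Apply the preprocessing steps named above."""
        resized = cv2.resize(frame, size)               # compression to a fixed size
        rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)  # color space conversion
        denoised = cv2.GaussianBlur(rgb, (3, 3), 0)     # simple noise reduction
        return denoised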
In order to perform the above step (2), the server feeds the preprocessed scene image into a trained deep convolutional neural network model to acquire the head-localization detection boxes of the person regions in the current scene, the classification scores of the detection boxes, and the areas of the person regions.
The convolutional neural network has the advantages of good generalization and the ability to detect all possible targets in a scene; its disadvantage is that it produces a relatively large number of false detections.
Therefore, the server filters the person regions. The server first filters out the detection boxes in the person regions that are less object-like using a score filter, and then uses a background subtraction algorithm to filter out the detection boxes that have not moved for a long time, i.e., that are not targets. However, background subtraction may also filter out targets that are correct but simply not moving, and the server uses a position-based delay operation to avoid this. Finally, the server outputs the finally determined positions and area sizes of the person regions detected in each frame.
The position of a person region comprises the abscissa and ordinate of the center point of the person region.
The server stores the finally determined positions and area sizes of the person regions in the current scene image.
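The filtering and storage flow of step 102 might be sketched as follows; the score threshold, motion ratio, delay length, coarse position key and the use of OpenCV's MOG2 background subtractor are illustrative assumptions, since the patent fixes none of these.

    import cv2
    import numpy as np

    SCORE_THRESHOLD = 0.5   # assumed cutoff for the score filter
    MOTION_RATIO = 0.05     # assumed fraction of foreground pixels counting as motion
    STATIC_FRAMES = 30      # assumed delay before a static box is discarded

    bg_subtractor = cv2.createBackgroundSubtractorMOG2()
    stored_regions = []     # one list of (camera_id, center_x, center_y, area) per frame
    static_counts = {}      # coarse box position -> consecutive static frames

    def filter_and_store(camera_id, frame, detections):
        """detections: list of (x, y, w, h, score) head boxes from the CNN."""
        fg_mask = bg_subtractor.apply(frame)
        frame_records = []
        for (x, y, w, h, score) in detections:
            if score < SCORE_THRESHOLD:          # score filter: drop un-object-like boxes
                continue
            roi = fg_mask[y:y + h, x:x + w]
            moving = np.count_nonzero(roi) / max(roi.size, 1)
            key = (camera_id, x // 10, y // 10)  # coarse position key for the delay step
            if moving >= MOTION_RATIO:
                static_counts[key] = 0
            else:
                static_counts[key] = static_counts.get(key, 0) + 1
            # Position-based delay: only discard a box after it has been static
            # for a while, so a correct but momentarily still person survives.
            if static_counts[key] < STATIC_FRAMES:
                frame_records.append((camera_id, x + w / 2, y + h / 2, w * h))
        stored_regions.append(frame_records)
        return frame_records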
Step 104, detecting a person discussion scene in the current scene image according to the determined positions and areas of the person regions.
In order to detect a person discussion scene in the scene image, the above step 104 may include the following specific steps (1) to (5):
(1) Acquiring the positions of the person regions in multiple frames of scene images preceding the current scene image;
(2) Determining all identified person regions as first candidate discussion person regions;
(3) Determining the distances between the first candidate discussion person regions according to the positions and areas of the candidate discussion person regions;
(4) Removing, from the first candidate discussion person regions and according to the determined distances between the candidate discussion person regions, the second candidate discussion person regions where persons not in discussion are located;
(5) Determining, from the first candidate discussion person regions excluding the second candidate discussion person regions, the third candidate discussion person regions where the persons currently in discussion are located, and determining the determined third candidate discussion person regions as the person discussion scene region in the current scene image.
In the above step (1), the server may obtain, from the stored set of person-region positions and areas, the positions of the person regions in the multiple frames of scene images preceding the current scene image captured by the same camera, according to the camera identifier in the scene image.
In the above step (3), the distance between the candidate discussion person regions is calculated by the following formula:
d(p1, p2) = sqrt(λx(x1 - x2)^2 + λy(y1 - y2)^2 + λz(z1 - z2)^2)
wherein p1 and p2 respectively represent different candidate discussion person regions, x1 and x2 respectively represent the abscissas of the center points of the different candidate discussion person regions, y1 and y2 respectively represent the ordinates of the center points, z1 and z2 respectively represent the areas of the different candidate discussion person regions, and λx, λy and λz respectively represent different weight values.
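Assuming the weighted square-root form given above (the formula is reconstructed from the variable definitions) and illustrative weight values, the distance might be implemented as:

    import numpy as np

    # Assumed weights; the patent states only that the three weights differ.
    LAMBDA_X, LAMBDA_Y, LAMBDA_Z = 1.0, 1.0, 0.5

    def weighted_distance(p1, p2):
        """Distance between candidate regions given as (center_x, center_y, area)."""
        p1 = np.asarray(p1, dtype=float)
        p2 = np.asarray(p2, dtype=float)
        return float(np.sqrt(LAMBDA_X * (p1[0] - p2[0]) ** 2
                             + LAMBDA_Y * (p1[1] - p2[1]) ** 2
                             + LAMBDA_Z * (p1[2] - p2[2]) ** 2))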
In the above step (4), in order to determine the second candidate discussion person regions where persons not in discussion are located, the server may first perform a clustering operation on the positions of the first candidate discussion person regions in the current frame and the previous frames to obtain the position change value of each first candidate discussion person region over those frames; if a calculated position change value is smaller than the first distance threshold, the first candidate discussion person region corresponding to that position change value is determined as a second candidate discussion person region.
In the above step (5), in order to determine the third candidate discussion person regions where the persons currently in discussion are located, the server may perform the clustering operation again on the first candidate discussion person regions from which the second candidate discussion person regions have been removed, to obtain the distance values between those regions; when a distance value is less than or equal to the second distance threshold, the first candidate discussion person regions corresponding to that distance value may be determined as third candidate discussion person regions where persons currently in discussion are located.
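Steps (4) and (5) might then be sketched as follows, reusing the weighted_distance function above with scikit-learn's DBSCAN; the two thresholds are assumed values, and the first clustering operation is approximated here by a simple per-region position-range check.

    import numpy as np
    from sklearn.cluster import DBSCAN

    FIRST_DISTANCE_THRESHOLD = 5.0    # assumed; below this a region counts as static
    SECOND_DISTANCE_THRESHOLD = 80.0  # assumed eps for grouping discussing persons

    def detect_discussion(position_history, current_regions):
        """position_history: region id -> [(x, y), ...] over recent frames.
        current_regions: region id -> (x, y, area) in the current frame."""
        # Step (4): drop regions whose position barely changes (second candidates).
        moving_ids = []
        for rid, track in position_history.items():
            change = float(np.linalg.norm(np.ptp(np.asarray(track, float), axis=0)))
            if change >= FIRST_DISTANCE_THRESHOLD:
                moving_ids.append(rid)
        if len(moving_ids) < 2:
            return []
        # Step (5): cluster the remaining regions with the weighted distance;
        # members of one cluster form the person discussion scene region.
        pts = np.array([current_regions[rid] for rid in moving_ids], dtype=float)
        labels = DBSCAN(eps=SECOND_DISTANCE_THRESHOLD, min_samples=2,
                        metric=weighted_distance).fit_predict(pts)
        return [rid for rid, lab in zip(moving_ids, labels) if lab != -1]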
After determining the person discussion scene in the current scene image, the lighting devices in the area covered by the camera may be controlled accordingly, for example by changing the color temperature and brightness of the lighting devices to transform the scene color of the scene area.
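A purely illustrative sketch of this actuation step follows; LightingController and its methods are hypothetical placeholders, since the patent does not name a lighting API.

    class LightingController:
        """Hypothetical stand-in for the lighting equipment near the camera."""
        def set_color_temperature(self, kelvin: int) -> None:
            print(f"color temperature -> {kelvin} K")

        def set_brightness(self, percent: int) -> None:
            print(f"brightness -> {percent}%")

    def on_discussion_detected(controller: LightingController) -> None:
        # Example adjustment once a discussion scene region has been found.
        controller.set_color_temperature(3500)
        controller.set_brightness(70)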
In summary, the method for detecting a person discussion scene provided in this embodiment detects the person discussion scene in the current scene image from the positions and areas of the person regions identified in that image. Unlike the related art, in which a person discussion scene cannot be identified, the detection is achieved simply by processing the determined positions and areas of the person regions, and the operation is simple and convenient.
Example 2
The present embodiment provides an apparatus for detecting a person discussion scene, which is used to execute the method for detecting a person discussion scene provided in embodiment 1 above.
Referring to fig. 2, which shows a schematic structural diagram of the apparatus, this embodiment provides an apparatus for detecting a person discussion scene, including:
an obtaining module 200, configured to obtain a current scene image;
the processing module 202 is configured to identify a person region in a current scene image, and obtain a position and an area of the person region;
and the detecting module 204 is configured to detect the person discussion scene in the current scene image according to the determined position and area of the person region.
Specifically, the processing module 202 is configured to:
preprocessing the current scene image;
and identify the person regions from the preprocessed current scene image, and obtain the position and area of each person region.
Specifically, the detection module 204 is configured to:
acquire the positions of the person regions in multiple frames of scene images preceding the current scene image;
determine all identified person regions as first candidate discussion person regions;
determine the distances between the first candidate discussion person regions according to the positions and areas of the candidate discussion person regions;
remove, from the first candidate discussion person regions and according to the determined distances between the candidate discussion person regions, the second candidate discussion person regions where persons not in discussion are located;
and determine, from the first candidate discussion person regions excluding the second candidate discussion person regions, the third candidate discussion person regions where the persons currently in discussion are located, and determine the determined third candidate discussion person regions as the person discussion scene region in the current scene image.
The detection module 204 calculates the distances between the candidate discussion person regions according to the positions and areas of the candidate discussion person regions as follows:
the distance between the candidate discussion person regions is calculated by the following formula:
d(p1, p2) = sqrt(λx(x1 - x2)^2 + λy(y1 - y2)^2 + λz(z1 - z2)^2)
wherein p1 and p2 respectively represent different candidate discussion person regions, x1 and x2 respectively represent the abscissas of the center points of the different candidate discussion person regions, y1 and y2 respectively represent the ordinates of the center points, z1 and z2 respectively represent the areas of the different candidate discussion person regions, and λx, λy and λz respectively represent different weight values.
In summary, the apparatus for detecting a person discussion scene provided in this embodiment detects the person discussion scene in the current scene image from the positions and areas of the person regions identified in that image. Unlike the related art, in which a person discussion scene cannot be identified, the detection is achieved simply by processing the determined positions and areas of the person regions, and the operation is simple and convenient.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A method for detecting a person discussion scene, comprising:
acquiring a current scene image;
identifying the person regions in the current scene image, and obtaining the position and area of each person region;
detecting a person discussion scene in the current scene image according to the determined positions and areas of the person regions, comprising the following steps:
acquiring the positions of the person regions in multiple frames of scene images preceding the current scene image;
determining all identified person regions as first candidate discussion person regions;
determining the distances between the first candidate discussion person regions according to the positions and areas of the candidate discussion person regions;
removing, from the first candidate discussion person regions and according to the determined distances between the candidate discussion person regions, the second candidate discussion person regions where persons not in discussion are located, wherein a clustering operation is performed on the positions of the first candidate discussion person regions in the current frame and the previous frames to obtain the position change value of each first candidate discussion person region over the current frame and the previous frames, and if the calculated position change value is smaller than a first distance threshold, the first candidate discussion person region corresponding to the position change value is determined as a second candidate discussion person region;
determining, from the first candidate discussion person regions excluding the second candidate discussion person regions, the third candidate discussion person regions where the persons currently in discussion are located, and determining the determined third candidate discussion person regions as the person discussion scene region in the current scene image, wherein the third candidate discussion person regions where the persons currently in discussion are located are determined by performing the clustering operation again on the first candidate discussion person regions excluding the second candidate discussion person regions.
2. The method of claim 1, wherein identifying the person regions in the current scene image and obtaining the position and area of each person region comprises:
preprocessing the current scene image;
and identifying the person regions from the preprocessed current scene image, and obtaining the position and area of each person region.
3. The method of claim 1, wherein calculating the distances between the candidate discussion person regions according to the positions and areas of the candidate discussion person regions comprises:
calculating the distance between the candidate discussion person regions by the following formula:
d(p1, p2) = sqrt(λx(x1 - x2)^2 + λy(y1 - y2)^2 + λz(z1 - z2)^2)
wherein p1 and p2 respectively represent different candidate discussion person regions, x1 and x2 respectively represent the abscissas of the center points of the different candidate discussion person regions, y1 and y2 respectively represent the ordinates of the center points, z1 and z2 respectively represent the areas of the different candidate discussion person regions, and λx, λy and λz respectively represent different weight values.
4. An apparatus for detecting a person discussion scene, comprising:
the acquisition module is used for acquiring a current scene image;
the processing module is used for identifying the person regions in the current scene image and obtaining the position and area of each person region;
a detection module, configured to detect a person discussion scene in the current scene image according to the determined positions and areas of the person regions, comprising:
acquiring the positions of the person regions in multiple frames of scene images preceding the current scene image;
determining all identified person regions as first candidate discussion person regions;
determining the distances between the first candidate discussion person regions according to the positions and areas of the candidate discussion person regions;
removing, from the first candidate discussion person regions and according to the determined distances between the candidate discussion person regions, the second candidate discussion person regions where persons not in discussion are located, wherein a clustering operation is performed on the positions of the first candidate discussion person regions in the current frame and the previous frames to obtain the position change value of each first candidate discussion person region over the current frame and the previous frames, and if the calculated position change value is smaller than a first distance threshold, the first candidate discussion person region corresponding to the position change value is determined as a second candidate discussion person region;
and determining, from the first candidate discussion person regions excluding the second candidate discussion person regions, the third candidate discussion person regions where the persons currently in discussion are located, and determining the determined third candidate discussion person regions as the person discussion scene region in the current scene image, wherein the third candidate discussion person regions where the persons currently in discussion are located are determined by performing the clustering operation again on the first candidate discussion person regions excluding the second candidate discussion person regions.
5. The apparatus according to claim 4, wherein the processing module is specifically configured to:
preprocessing the current scene image;
and identify the person regions from the preprocessed current scene image, and obtain the position and area of each person region.
6. The apparatus of claim 4, wherein the detection module calculates the distances between the candidate discussion person regions according to the positions and areas of the candidate discussion person regions by:
calculating the distance between the candidate discussion person regions by the following formula:
d(p1, p2) = sqrt(λx(x1 - x2)^2 + λy(y1 - y2)^2 + λz(z1 - z2)^2)
wherein p1 and p2 respectively represent different candidate discussion person regions, x1 and x2 respectively represent the abscissas of the center points of the different candidate discussion person regions, y1 and y2 respectively represent the ordinates of the center points, z1 and z2 respectively represent the areas of the different candidate discussion person regions, and λx, λy and λz respectively represent different weight values.
CN201910531632.3A 2019-06-19 2019-06-19 Method and device for detecting a person discussion scene Active CN110245628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910531632.3A CN110245628B (en) 2019-06-19 2019-06-19 Method and device for detecting a person discussion scene

Publications (2)

Publication Number Publication Date
CN110245628A CN110245628A (en) 2019-09-17
CN110245628B true CN110245628B (en) 2023-04-18

Family

ID=67888102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910531632.3A Active CN110245628B (en) 2019-06-19 2019-06-19 Method and device for detecting a person discussion scene

Country Status (1)

Country Link
CN (1) CN110245628B (en)

Also Published As

Publication number Publication date
CN110245628A (en) 2019-09-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant