CN110245628A - Method and device for detecting a discussion scene of personnel - Google Patents

Method and device for detecting a discussion scene of personnel

Info

Publication number
CN110245628A
CN110245628A
Authority
CN
China
Prior art keywords
personnel
area
candidate
scene image
discussion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910531632.3A
Other languages
Chinese (zh)
Other versions
CN110245628B (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Century Photosynthesis Science And Technology Ltd
Original Assignee
Chengdu Century Photosynthesis Science And Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Century Photosynthesis Science And Technology Ltd
Priority to CN201910531632.3A
Publication of CN110245628A
Application granted
Publication of CN110245628B
Active (current legal status)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition

Abstract

The present invention provides a method and a device for detecting a discussion scene of personnel. The method comprises: acquiring a current scene image; identifying the person regions in the scene image, and obtaining the position and area of each person region; and detecting a discussion scene among the persons in the scene image according to the determined positions and areas of the person regions. With the method and device provided by the embodiments of the present invention, the purpose of detecting a discussion scene among the persons in a scene image can be achieved simply by processing the positions and areas of the determined person regions, and the operation is simple.

Description

Method and device for detecting a discussion scene of personnel
Technical field
The present invention relates to the technical field of image processing, and in particular to a method and a device for detecting a discussion scene of personnel.
Background technique
At present, the analysis of human behavior, such as action recognition and detection, is a fundamental and difficult task in the field of computer vision, with a very wide range of applications, for example in intelligent monitoring systems, human-computer interaction, game control and robotics. The main goal of dynamic scene recognition is to judge the behavior category of a single person or a group of people in a video, so that the above devices can interact with the person or group according to that behavior category.
The dynamic scene in which a group of people holds a discussion is extremely difficult to recognize.
Summary of the invention
To solve the above problems, embodiments of the present invention aim to provide a method and a device for detecting a discussion scene of personnel.
In a first aspect, an embodiment of the invention provides a method for detecting a discussion scene of personnel, comprising:
acquiring a current scene image;
identifying the person regions in the current scene image, and obtaining the position and area of each person region;
detecting a discussion scene among the persons in the current scene image according to the determined positions and areas of the person regions.
In a second aspect, an embodiment of the invention further provides a device for detecting a discussion scene of personnel, comprising:
an acquisition module, configured to acquire a current scene image;
a processing module, configured to identify the person regions in the current scene image and obtain the position and area of each person region;
a detection module, configured to detect a discussion scene among the persons in the current scene image according to the determined positions and areas of the person regions.
In the solutions provided by the above first and second aspects of the embodiments of the present invention, a discussion scene among the persons in the current scene image is detected from the positions and areas of the person regions identified in the current scene image. Compared with the related art, in which a personnel discussion scene cannot be recognized, the purpose of detecting a personnel discussion scene in the current scene image can be achieved simply by processing the positions and areas of the determined person regions, and the operation is simple.
To make the above objects, features and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Detailed description of the invention
In order to explain the technical solutions of the embodiments of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 shows a flowchart of a method for detecting a discussion scene of personnel provided by Embodiment 1 of the present invention;
Fig. 2 shows a structural schematic diagram of a device for detecting a discussion scene of personnel provided by Embodiment 2 of the present invention.
Specific embodiment
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise" and the like indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description, and do not indicate or imply that the referred devices or elements must have a particular orientation or be constructed and operated in a particular orientation, and therefore cannot be understood as limiting the present invention.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "plurality" means two or more, unless otherwise specifically defined.
In the present invention, unless otherwise specified and limited, the terms "installation", "connected", "connection", "fixation" and the like shall be understood in a broad sense; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection, an indirect connection through an intermediary, or an internal communication between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
The analysis of human behavior, such as action recognition and detection, is a fundamental and difficult task in the field of computer vision, with a very wide range of applications, such as intelligent monitoring systems, human-computer interaction, game control and robotics. The main goal of dynamic scene recognition is to judge the category of the behavior of a single person or a group of people in a video. The difficulty of scene action recognition mainly lies in the following four aspects: 1. Intra-class and inter-class differences are hard to quantify: the performance of the same action by different people may be very different. 2. Environmental differences, such as occlusion, multiple viewing angles and illumination effects. 3. Temporal variation: the speed of body movement varies greatly when a person performs an action, and it is difficult to determine the starting point of the action, which makes it hard to extract features from the video to characterize the behavior; temporal variation has the greatest influence on action representation. 4. Lack of suitable data sets for specific scene patterns.
Existing scene action recognition methods fall into the traditional trajectory-based iDT (improved dense trajectories) algorithm and methods that benefit from the development of deep learning: optical-flow methods, neural networks, and skeleton methods based on skeleton extraction. The iDT method achieved the best results in this field before deep learning entered it; its disadvantage is its high time complexity, which makes the algorithm very slow. Its basic idea is to obtain trajectories in a video sequence using an optical flow field, extract various features from these trajectories, combine them, encode them with Fisher vectors, and train a support vector machine for behavior classification. Methods based on convolutional neural network (Convolutional Neural Network, CNN) models or on skeleton extraction have outstanding ability in extracting high-level information, and have also been used to learn spatio-temporal features from skeletons. These CNN-based methods can encode the temporal dynamics and the skeleton joints separately into rows and columns, express a frame sequence as an image, and then input that image into a CNN, just as in image classification, to recognize the action contained in it. In this case, however, only the joints adjacent within the convolution kernel are considered when learning co-occurrence features. Although the receptive field can cover all joints of the skeleton in later convolutional layers, it is difficult to effectively mine co-occurrence features from all joints, and due to the weight-sharing mechanism in the spatial dimension, a CNN model cannot learn free parameters for each joint. Models based on CNNs or skeletons at this stage can extract scene features relatively well, but training such models depends heavily on the size and annotation of the data set, and compared with the traditional iDT method, the real-time performance of such methods is not very high.
Compared with existing methods, the present invention proposes a novel unsupervised decision algorithm for the personnel-discussion scene. The present invention combines the ability of convolutional neural networks to extract high-dimensional features with the ability of clustering to quickly filter out outliers. First, a convolutional neural network is likewise used to locate the heads of the persons in the scene. Then the detection-box positions of every frame are stored. Two clusterers are used to detect the occurrence of discussion activity: the first clusterer filters out detections that have not moved within a certain period of time, which may be false detections or people who are not moving; the second clusterer calculates the number and the respective positions of the targets belonging to the same cluster. As clusterer, a density-based clustering method with noise handling, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), has been selected. The clustering method used in the present invention is an improvement based on DBSCAN. DBSCAN is a well-known density clustering algorithm: it characterizes the tightness of the sample distribution based on a set of "neighborhood" parameters, and defines a "cluster" (a set of data points belonging to one class) as the maximal set of density-connected samples derived from the density-reachability relation. At the same time, however, it also has some defects in a monitoring scene. For example, distance cannot be measured with a simple Euclidean distance, because the image space acquired by a monocular camera contains certain distortions and perspective transforms, and the distance between two points in the image is not proportional to their three-dimensional distance in the real world. Therefore, the dynamic scene in which a group of people holds a discussion is extremely difficult for computer vision to recognize.
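To make the clustering behavior concrete, the following is a minimal, self-contained demonstration using scikit-learn's stock DBSCAN as a stand-in for the improved clusterer described here; eps and min_samples correspond to the "neighborhood" parameters mentioned above, and all values are illustrative assumptions.

```python
# Minimal DBSCAN demonstration: dense groups of points form clusters,
# isolated points are labeled -1 (noise). All values are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.array([
    [10, 10], [12, 11], [11, 13],   # a tight group -> one cluster
    [50, 52], [51, 50], [49, 51],   # a second tight group
    [90, 15],                       # an isolated point -> noise
])
labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(points)
print(labels)  # [0 0 0 1 1 1 -1]
```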
Based on this, the present embodiment proposes a method and a device for detecting a discussion scene of personnel. A discussion scene among the persons in a scene image is detected from the positions and areas of the person regions identified in the scene image, so that the purpose of detecting a personnel discussion scene in the scene image can be achieved, and the operation is simple.
Embodiment 1
The present embodiment proposes a method for detecting a discussion scene of personnel, the executing subject of which is a server.
The server may be any computing device in the prior art capable of performing computer vision processing, which is not described one by one here.
Referring to Fig. 1, the flow of the method for detecting a discussion scene of personnel may include the following detailed steps:
Step 100: acquire a current scene image.
The current scene image is the scene image acquired by the camera in the current frame.
In the above step 100, the server may acquire the scene image through a camera connected to the server itself.
The scene refers to various public places such as administrative areas, libraries and coffee shops.
The cameras are installed in these public places and are used to acquire images of different scenes.
Each camera carries its own camera identifier.
After acquiring a scene image, a camera may add its own identifier to the acquired scene image, and then send the scene image carrying the camera identifier to the server, as illustrated by the sketch below.
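Purely as an illustration of this identifier-tagging step, a minimal sketch follows; the message fields, the encoding, and the function name are hypothetical assumptions, not part of the disclosed embodiment.

```python
# Hypothetical camera-to-server message carrying the camera identifier.
import json
import time

def build_frame_message(camera_id: str, jpeg_bytes: bytes) -> str:
    """Packs an acquired frame together with the camera's own identifier."""
    return json.dumps({
        "camera_id": camera_id,         # identifier added by the camera
        "timestamp": time.time(),       # acquisition time of the frame
        "image_hex": jpeg_bytes.hex(),  # encoded frame payload
    })
```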
Step 102: identify the person regions in the current scene image, and obtain the position and area of each person region.
Specifically, the above step 102 may be executed as the following detailed steps (1) to (2):
(1) pre-process the current scene image;
(2) identify the person regions in the pre-processed current scene image, and obtain the position and area of each person region.
In the above step (1), the image pre-processing steps include, but are not limited to: compression (cropping), color-space conversion, and noise reduction.
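A minimal sketch of such a pre-processing step, assuming OpenCV is used; the target size, color space and denoising filter are assumptions rather than values fixed by the embodiment.

```python
# Illustrative pre-processing: compression (resize), color-space
# conversion, and noise reduction. Parameter choices are assumptions.
import cv2

def preprocess(frame, size=(640, 480)):
    resized = cv2.resize(frame, size)               # compression / cropping
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)  # color-space conversion
    denoised = cv2.GaussianBlur(rgb, (3, 3), 0)     # simple noise reduction
    return denoised
```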
In the above step (2), the server may feed the pre-processed scene image into a trained deep convolutional neural network model to obtain the head detection boxes of the person regions in the current scene, together with the classification scores of these detection boxes and the areas of the person regions.
The advantage of a convolutional neural network is its good generalization ability, so it can detect all possible targets in the scene; its disadvantage is that it produces many false detections.
Therefore, the server may filter the person regions. Here the server may first use a score filter to filter out the detection boxes in the person regions that are unlikely to be targets, and then use a background subtraction algorithm to filter out the detection boxes that have not moved for a long time and are not targets. However, the background subtractor may also filter out targets that were detected correctly but happen not to be moving; the server uses a position-based delay operation to avoid this situation. Finally, the server outputs the position and size of the finally determined person regions detected in each frame of the scene.
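The following sketch shows one way the described filtering chain could look: a score filter, a background-subtraction check, and a simple position-based delay so that a briefly static but valid target is not dropped at once. The MOG2 subtractor, all thresholds, and the grace-period counter are illustrative assumptions, not the patented implementation.

```python
# Illustrative detection filtering: score filter + background
# subtraction + position-based delay. All thresholds are assumptions.
import cv2

bg_subtractor = cv2.createBackgroundSubtractorMOG2()
static_frames = {}   # target_id -> consecutive frames without motion
STATIC_GRACE = 30    # frames a valid target may stay static before removal

def filter_detections(frame, detections, score_thresh=0.5):
    """detections: list of (target_id, x, y, w, h, score) head boxes."""
    fg_mask = bg_subtractor.apply(frame)
    kept = []
    for tid, x, y, w, h, score in detections:
        if score < score_thresh:            # drop unlikely targets
            continue
        roi = fg_mask[y:y + h, x:x + w]
        if roi.size and roi.mean() > 10:    # foreground activity in the box
            static_frames[tid] = 0
            kept.append((tid, x, y, w, h))
        else:
            # position-based delay: tolerate a static target for a while
            static_frames[tid] = static_frames.get(tid, 0) + 1
            if static_frames[tid] < STATIC_GRACE:
                kept.append((tid, x, y, w, h))
    return kept
```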
The position of a person region comprises: the abscissa and the ordinate of the center point of the person region.
The server stores the positions and sizes of the person regions finally determined in the current scene image.
Step 104: detect a discussion scene among the persons in the current scene image according to the determined positions and areas of the person regions.
In order to detect a discussion scene among the persons in the scene image, the above step 104 may be executed as the following detailed steps (1) to (5):
(1) obtain the positions of the person regions in the multiple frames of scene images before the current scene image;
(2) determine all identified person regions as first candidate discussion regions;
(3) determine the distances between the first candidate discussion regions according to the positions and areas of the candidate discussion regions;
(4) according to the determined distances between the candidate discussion regions, remove from the first candidate discussion regions the second candidate discussion regions where the persons who are not discussing are located;
(5) from the first candidate discussion regions from which the second candidate discussion regions have been removed, determine the third candidate discussion regions where the persons currently holding a discussion are located, and determine the determined third candidate discussion regions as the personnel discussion scene regions in the current scene image.
In the above step (1), the server may, according to the camera identifier carried in the scene image, obtain from the stored set of positions and regions the positions of the person regions in the multiple frames of scene images preceding the scene image currently captured by that camera.
In the above step (3), the distance between each pair of candidate discussion regions is calculated by the following formula:
wherein p1 and p2 denote different candidate discussion regions, x1 and x2 denote the center-point abscissas of the different candidate discussion regions, y1 and y2 denote the center-point ordinates of the different candidate discussion regions, z1 and z2 denote the areas of the different candidate discussion regions, and λx, λy and λz denote different weight values.
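The formula itself is not reproduced in the text of this publication. From the variable definitions above, a weighted distance of the following form is one plausible reading; the exact combination of terms and exponents is an assumption:

$$d(p_1, p_2) = \lambda_x (x_1 - x_2)^2 + \lambda_y (y_1 - y_2)^2 + \lambda_z (z_1 - z_2)^2$$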
In the above step (4), in order to determine the second candidate discussion regions where the persons who are not discussing are located, the server may first perform a clustering operation on the positions of the first candidate discussion regions in the current frame and the multiple frames before it, to obtain the position-change value of each first candidate discussion region across the current frame and the previous frames; if a calculated position-change value is smaller than a first distance threshold, the first candidate discussion region corresponding to that position-change value is determined as a second candidate discussion region.
In the above step (5), in order to determine the third candidate discussion regions where the persons currently holding a discussion are located, the server may perform a clustering operation again on the first candidate discussion regions from which the second candidate discussion regions have been removed, to obtain the distance values between these first candidate discussion regions; when a distance value is smaller than or equal to a second distance threshold, the first candidate discussion regions corresponding to that distance value can be determined as the third candidate discussion regions where the persons currently holding a discussion are located.
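As a sketch only, steps (4) and (5) could be wired together as follows. The weighted pairwise distance mirrors the plausible form given above, the first clustering step is reduced here to a per-region position-spread test, and every threshold, weight and helper name is a hypothetical assumption rather than the claimed implementation.

```python
# Hypothetical sketch of steps (4)-(5): remove static candidates,
# then group the remainder with a weighted (x, y, area) distance.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_discussion_regions(history, current, areas,
                              move_thresh=5.0, second_thresh=1500.0,
                              lx=1.0, ly=1.0, lz=0.01):
    """history: {rid: [(x, y), ...]} positions over previous frames;
    current:   {rid: (x, y)} center positions in the current frame;
    areas:     {rid: z} detection-box areas in the current frame."""
    # Step (4): a region whose position change across the buffered
    # frames is below the first distance threshold is a second
    # candidate (persons who are not discussing) and is removed.
    active = []
    for rid, pts in history.items():
        pts = np.asarray(pts + [current[rid]], dtype=float)
        change = np.linalg.norm(pts.max(axis=0) - pts.min(axis=0))
        if change >= move_thresh:
            active.append(rid)
    if not active:
        return {}
    # Step (5): pairwise weighted distances over center x, y and area,
    # then clustering; pairs within the second threshold group together.
    n = len(active)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            a, b = active[i], active[j]
            d = (lx * (current[a][0] - current[b][0]) ** 2 +
                 ly * (current[a][1] - current[b][1]) ** 2 +
                 lz * (areas[a] - areas[b]) ** 2)
            dist[i, j] = dist[j, i] = d
    labels = DBSCAN(eps=second_thresh, min_samples=2,
                    metric="precomputed").fit_predict(dist)
    groups = {}
    for rid, lab in zip(active, labels):
        if lab != -1:   # -1 = DBSCAN noise, i.e. not in any discussion
            groups.setdefault(lab, []).append(rid)
    return groups       # third candidate discussion regions, grouped
```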
After the personnel discussion scene in the current scene image has been determined, the lighting devices in the region covered by the camera may be controlled accordingly, for example by changing the color temperature and brightness of the lighting devices in the scene region so as to change the scene color.
In conclusion the method that a kind of testing staff that the present embodiment proposes discusses scene, by from current scene figure The position for the personnel area identified as in and area discuss that scene detects to the personnel in current scene image, with phase Can not personnel be discussed with scene carry out identification and compare in the technology of pass, by position to the determining personnel area and area into Row processing, so that it may realize the purpose detected to personnel's discussion scene in current scene image, it is simple to operate.
Embodiment 2
The present embodiment proposes a device for detecting a discussion scene of personnel, which is used to execute the method for detecting a discussion scene of personnel proposed in the above Embodiment 1.
Referring to Fig. 2, which shows a structural schematic diagram of the device, the device for detecting a discussion scene of personnel proposed in the present embodiment comprises:
an acquisition module 200, configured to acquire a current scene image;
a processing module 202, configured to identify the person regions in the current scene image and obtain the position and area of each person region;
a detection module 204, configured to detect a discussion scene among the persons in the current scene image according to the determined positions and areas of the person regions.
Specifically, the processing module 202 is configured to:
pre-process the current scene image;
identify the person regions in the pre-processed current scene image, and obtain the position and area of each person region.
Specifically, the detection module 204 is configured to:
obtain the positions of the person regions in the multiple frames of scene images before the current scene image;
determine all identified person regions as first candidate discussion regions;
determine the distances between the first candidate discussion regions according to the positions and areas of the candidate discussion regions;
according to the determined distances between the candidate discussion regions, remove from the first candidate discussion regions the second candidate discussion regions where the persons who are not discussing are located;
from the first candidate discussion regions from which the second candidate discussion regions have been removed, determine the third candidate discussion regions where the persons currently holding a discussion are located, and determine the determined third candidate discussion regions as the personnel discussion scene regions in the current scene image.
The detection module 204 calculates the distances between the candidate discussion regions according to their positions and areas as follows:
the distance between each pair of candidate discussion regions is calculated by the following formula:
wherein p1 and p2 denote different candidate discussion regions, x1 and x2 denote the center-point abscissas of the different candidate discussion regions, y1 and y2 denote the center-point ordinates of the different candidate discussion regions, z1 and z2 denote the areas of the different candidate discussion regions, and λx, λy and λz denote different weight values.
In conclusion a kind of testing staff that the present embodiment proposes discusses the device of scene, by from current scene figure The position for the personnel area identified as in and area discuss that scene detects to the personnel in current scene image, with phase Can not personnel be discussed with scene carry out identification and compare in the technology of pass, by position to the determining personnel area and area into Row processing, so that it may realize the purpose detected to personnel's discussion scene in current scene image, it is simple to operate.
The above description is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can easily think of changes or replacements within the technical scope disclosed by the present invention, and these should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A method for detecting a discussion scene of personnel, characterized by comprising:
acquiring a current scene image;
identifying person regions in the current scene image, and obtaining the position and area of each person region;
detecting a discussion scene among the persons in the current scene image according to the determined positions and areas of the person regions.
2. The method according to claim 1, characterized in that identifying the person regions in the current scene image and obtaining the position and area of each person region comprises:
pre-processing the current scene image;
identifying the person regions in the pre-processed current scene image, and obtaining the position and area of each person region.
3. The method according to claim 1, characterized in that detecting a discussion scene among the persons in the current scene image according to the determined positions and areas of the person regions comprises:
obtaining the positions of the person regions in the multiple frames of scene images before the current scene image;
determining all identified person regions as first candidate discussion regions;
determining the distances between the first candidate discussion regions according to the positions and areas of the candidate discussion regions;
according to the determined distances between the candidate discussion regions, removing from the first candidate discussion regions the second candidate discussion regions where the persons who are not discussing are located;
from the first candidate discussion regions from which the second candidate discussion regions have been removed, determining the third candidate discussion regions where the persons currently holding a discussion are located, and determining the determined third candidate discussion regions as the personnel discussion scene regions in the current scene image.
4. The method according to claim 3, characterized in that calculating the distances between the candidate discussion regions according to their positions and areas comprises:
calculating the distance between each pair of candidate discussion regions by the following formula:
wherein p1 and p2 denote different candidate discussion regions, x1 and x2 denote the center-point abscissas of the different candidate discussion regions, y1 and y2 denote the center-point ordinates of the different candidate discussion regions, z1 and z2 denote the areas of the different candidate discussion regions, and λx, λy and λz denote different weight values.
5. A device for detecting a discussion scene of personnel, characterized by comprising:
an acquisition module, configured to acquire a current scene image;
a processing module, configured to identify the person regions in the current scene image and obtain the position and area of each person region;
a detection module, configured to detect a discussion scene among the persons in the scene image according to the determined positions and areas of the person regions.
6. The device according to claim 5, characterized in that the processing module is configured to:
pre-process the current scene image;
identify the person regions in the pre-processed current scene image, and obtain the position and area of each person region.
7. The device according to claim 5, characterized in that the detection module is configured to:
obtain the positions of the person regions in the multiple frames of scene images before the current scene image;
determine all identified person regions as first candidate discussion regions;
determine the distances between the first candidate discussion regions according to the positions and areas of the candidate discussion regions;
according to the determined distances between the candidate discussion regions, remove from the first candidate discussion regions the second candidate discussion regions where the persons who are not discussing are located;
from the first candidate discussion regions from which the second candidate discussion regions have been removed, determine the third candidate discussion regions where the persons currently holding a discussion are located, and determine the determined third candidate discussion regions as the personnel discussion scene regions in the current scene image.
8. The device according to claim 7, characterized in that the detection module calculates the distances between the candidate discussion regions according to their positions and areas by:
calculating the distance between each pair of candidate discussion regions by the following formula:
wherein p1 and p2 denote different candidate discussion regions, x1 and x2 denote the center-point abscissas of the different candidate discussion regions, y1 and y2 denote the center-point ordinates of the different candidate discussion regions, z1 and z2 denote the areas of the different candidate discussion regions, and λx, λy and λz denote different weight values.
CN201910531632.3A 2019-06-19 2019-06-19 Method and device for detecting discussion scene of personnel Active CN110245628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910531632.3A CN110245628B (en) 2019-06-19 2019-06-19 Method and device for detecting discussion scene of personnel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910531632.3A CN110245628B (en) 2019-06-19 2019-06-19 Method and device for detecting discussion scene of personnel

Publications (2)

Publication Number Publication Date
CN110245628A (en) 2019-09-17
CN110245628B CN110245628B (en) 2023-04-18

Family

ID=67888102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910531632.3A Active CN110245628B (en) 2019-06-19 2019-06-19 Method and device for detecting discussion scene of personnel

Country Status (1)

Country Link
CN (1) CN110245628B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101332362A * 2008-08-05 2008-12-31 北京中星微电子有限公司 Interactive entertainment system based on human posture recognition and implementation method thereof
CN102129682A * 2011-03-09 2011-07-20 深圳市融创天下科技发展有限公司 Foreground and background region division method and system
CN102688603A * 2011-03-22 2012-09-26 王鹏勃 System and method for real-time magic stage performance based on augmented-reality and action-recognition technologies
US20150032254A1 * 2012-02-03 2015-01-29 Nec Corporation Communication draw-in system, communication draw-in method, and communication draw-in program
CN103679189A * 2012-09-14 2014-03-26 华为技术有限公司 Method and device for scene recognition
CN104834893A * 2015-03-13 2015-08-12 燕山大学 Front-view pedestrian gait period detection method
JP2016206849A * 2015-04-20 2016-12-08 河村電器産業株式会社 Surveillance camera system
CN105760141A * 2016-04-05 2016-07-13 中兴通讯股份有限公司 Multi-dimensional control method, intelligent terminal and controllers
CN106778650A * 2016-12-26 2017-05-31 深圳极视角科技有限公司 Scene-adaptive pedestrian detection method and system based on multi-type information fusion
CN107329569A * 2017-06-29 2017-11-07 合肥步瑞吉智能家居有限公司 Automatic television-angle adjustment system based on the position of a single viewer

Also Published As

Publication number Publication date
CN110245628B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN106778595B Method for detecting abnormal behaviors in a crowd based on a Gaussian mixture model
Ke et al. Multi-dimensional traffic congestion detection based on fusion of visual features and convolutional neural network
CN104200237B High-speed automatic multi-object tracking method based on kernelized correlation filtering
CN104392468B Moving target detection method based on improved visual background extraction
CN109816689A Moving target tracking method with adaptive fusion of multi-layer convolutional features
CN103279765B Steel wire rope surface damage detection method based on image matching
CN106355604B Image target tracking method and system
CN103426179B Target tracking method and device based on mean-shift multi-feature fusion
CN105160310A Human body behavior recognition method based on 3D (three-dimensional) convolutional neural networks
CN106203274A Real-time pedestrian detection system and method in video monitoring
CN107564062A Pose abnormality detection method and device
CN106204640A Moving object detection system and method
CN112883820B Road target 3D detection method and system based on laser radar point cloud
CN105046719B Video monitoring method and system
CN106815578A Gesture recognition method based on scale-invariant feature transform of depth motion maps
CN105243356B Method and device for establishing a pedestrian detection model, and pedestrian detection method
CN110490905A Target tracking method based on the YOLOv3 and DSST algorithms
CN112836640A Single-camera multi-target pedestrian tracking method
CN112926522B Behavior recognition method based on skeleton pose and spatio-temporal graph convolutional network
CN109740609A Gauge detection method and device
CN106529441B Depth-motion-map human behavior recognition method based on fuzzy boundary segmentation
CN107808524A Intersection vehicle detection method based on an unmanned aerial vehicle
CN104200218B Cross-view action recognition method and system based on temporal information
CN109840905A Power equipment rust stain detection method and system
CN109410248A Flotation froth motion feature extraction method based on the r-K algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant