CN105550678A - Human action feature extraction method based on global salient edge regions - Google Patents

Human action feature extraction method based on global salient edge regions

Info

Publication number
CN105550678A
CN105550678A (application CN201610075788.1A; granted as CN105550678B)
Authority
CN
China
Prior art keywords
corner point
frame
pixel
color
strong corner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610075788.1A
Other languages
Chinese (zh)
Other versions
CN105550678B (en)
Inventor
胡瑞敏
徐增敏
陈军
陈华锋
李红阳
王中元
郑淇
吴华
王晓
周立国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University (WHU)
Priority claimed from application CN201610075788.1A
Publication of CN105550678A
Application granted
Publication of CN105550678B
Legal status: Expired - Fee Related
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20036: Morphological image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a human action feature extraction method based on global salient edge regions. The method computes saliency from the contrast between each region and the whole image: the number of colors in the color space is reduced, the saliency is smoothed over the color space, and salient regions are computed from the spatial relations of neighboring regions. A morphological gradient transform is applied to the foreground region segmented with a binarization threshold to generate the global salient edge region. Strong corner points are extracted by traversing all grids of the video frame at several scales, and the key feature points whose optical flow magnitude is non-zero inside the salient edge region are collected. The displacement of each strong corner is computed from the corrected optical flow field, and the multi-frame displacement trajectories of the strong corners together with their neighborhood gradient vectors form local spatio-temporal human action features. By extracting motion features in the global salient edge region, the invention removes background noise points irrelevant to human motion, eliminates the influence of camera motion on the optical flow computation, improves the accuracy of the local spatio-temporal description of human actions, and raises the human action recognition rate.

Description

Human action feature extraction method based on global salient edge regions
Technical field
The invention belongs to the field of video analysis and relates to a method for automatic human behavior recognition, and more specifically to a human action feature extraction method based on global salient edge regions.
Background technology
With the development of the Internet and the continuing spread of video surveillance systems, the volume of video data has grown sharply. Faced with this massive influx of video data, how to analyze human behavior in video has become an urgent problem. Because video data are easily affected by indistinct foreground motion regions, large camera shake and complex scene environments, human motion in video contains a large number of noisy corner points, the key feature points of video frames are extracted inaccurately, and the accuracy of human action recognition is limited.
Human action feature extraction is an important component of human action recognition and an important research topic in the field of video analysis. Its purpose is to let a computer automatically extract human action features and automatically judge and predict human behavior. An effective motion feature extraction method therefore helps improve the accuracy of action recognition.
Current human action feature extraction methods fall into three classes: methods that extract low-level local spatio-temporal interest points from single frames or multi-frame video streams, motion feature description methods based on mid-level semantic learning, and methods based on high-level semantic feature point tracking and deformable limb templates.
Methods based on low-level local spatio-temporal interest points extract local spatio-temporal interest points on the target object, model the motion of the target object with some optical flow estimation, and express limb actions with various descriptors. The drawback of these methods is that they are easily affected by background noise, camera shake and target occlusion, and they lack analysis and understanding of the global characteristics of human behavior and of the behavior model as a whole.
Methods based on mid-level semantic learning usually build on extracted low-level motion features and model them at a higher semantic level through foreground salient regions, moving object detection, object contour segmentation, discriminative dictionary learning, multi-channel feature fusion, convolutional neural networks and similar techniques, obtaining a global or local spatio-temporal feature representation of the target object's motion in a multi-frame video stream. The problem with this approach is that it depends heavily on the expressiveness of the input features and on the performance of the mid-level semantic learning framework.
Methods based on high-level semantic feature points depend on manual annotation or motion-sensing cameras to calibrate and track human skeleton joints in real time, construct a limb tree-structure model or deformable template, and characterize human action features by combining joint motion history with common descriptors. The drawback of this approach is that it requires either a great deal of human effort and time to annotate video samples or intelligent motion-sensing devices to calibrate the skeleton joints.
Patents related to motion feature extraction methods include the following:
Human-computer interaction field: in 2015 the Institute of Automation, Chinese Academy of Sciences published the invention patent "Human action collection and recognition system and control method thereof", which captures human actions with a wireless transceiver and 3-axis acceleration sensor circuits and aims to improve the effect of stage performances and speeches; in 2015 Xidian University published the invention patent "A smart watch based on action recognition and an action recognition method", which controls a smart watch through preset forearm gesture motions; in 2015 Beijing Zhigu Rui Tuo Tech published the invention patent "Head action determination method and device", which acquires EEG detection information of the human body and determines the head action corresponding to that information; in 2015 Lenovo (Beijing) Ltd. published the invention patent "An action recognition method, device and electronic equipment", which adds a trigger condition to action acquisition so that action recognition is triggered only when the physical distance to the electronic device inside the monitored area satisfies the condition.
Video analysis field: in 2015 Zhejiang University of Technology published the invention patent "An action recognition method based on time-pyramid local matching windows", which extracts 3D joint points from human depth maps acquired by a stereo camera and uses the 3D displacement differences between poses as the feature representation of each depth frame; in 2015 Beijing Zhongke Pangu Technology published the invention patent "Human limb gesture and action recognition method based on partitioned-space learning", which matches human joint data against a library of given pose sequences; in 2015 Southwest University of Science and Technology published the invention patent "A pose-sequence finite state machine action recognition method", which transforms the limb node data acquired by a Kinect sensor into a unified spatial grid model for measurement; in 2015 the Institute of Computing Technology, Chinese Academy of Sciences published the invention patent "A cross-view action recognition method and system based on temporal information", which uses interest point motion intensity as the feature description and combines the coarse-grained annotation of the source multi-view video to obtain coarse-grained information of the target.
Saliency-based video analysis: in 2015 Southwest University of Science and Technology published the invention patent "A human action recognition algorithm based on STDF features", which uses the depth information of the video to determine salient human motion regions, takes the optical flow energy function of each region as a measure of its activity, samples the motion-salient regions with a Gaussian distribution so that sample points concentrate in regions of intense motion, and uses the collected sample points as low-level action features; Soochow University published the invention patent "Person behavior recognition method based on threshold matrices and feature-fusion visual words", which locates the person region through video-frame saliency and then detects interest points with different thresholds inside and outside the region as motion features; in 2015 Nanjing University of Posts and Telecommunications published the invention patent "A human action recognition method based on RGB-D video", which extracts dense MovingPose, SHOPC and HOG3D features from RGB-D video and fuses the three features with a margin-constrained multiple kernel learning method; in 2014 Tianjin University published the invention patent "A human motion recognition method based on local features", which extracts spatio-temporal interest point features and coordinates from motion image sequences and trains bag-of-words dictionary models for separate human body regions to encode the local features.
Summary of the invention
In order to solve the above technical problems, the object of the present invention is to provide a human action feature extraction method based on global salient edge regions.
The technical solution adopted by the present invention is a human action feature extraction method based on global salient edge regions, characterized by comprising the following steps:
Step 1: reduce the number of colors of the RGB color space and smooth the saliency over the color space;
Step 2: compute salient regions from the spatial relations of neighboring regions;
Step 3: segment the foreground salient region with a binarization threshold;
Step 4: apply a morphological gradient transform to the segmented foreground region to generate the global salient edge region;
Step 5: correct the optical flow field with feature point pairs and random sample consensus (RANSAC);
Step 6: traverse all grids of the video frame at different scales and extract strong corner points;
Step 7: within the salient edge region, collect the strong corners whose corrected optical flow magnitude is non-zero as the key-feature strong corners;
Step 8: check the number of key-feature strong corners obtained in step 7, and if it is zero take the strong corners of step 6 as the key-feature strong corners;
Step 9: compute the displacement of each key-feature strong corner from the corrected optical flow;
Step 10: form local spatio-temporal human action features from the multi-frame coordinate displacement trajectories of the strong corners and their neighborhood gradient vectors.
In step 1, the number of colors of the RGB color space is reduced and the saliency is smoothed over the color space; the specific implementation is:
The saliency S(·) of the k-th pixel I_k of image I is defined as
S(I_k) = Σ_{I_i∈I} D(I_k, I_i) = Σ_{I_i∈I} ||I_k − I_i||,   (1)
where D(I_k, I_i) is the distance between pixels I_k and I_i in color space.
First each of the three RGB channels is quantized to 12 values, reducing the number of pixel colors to 12^3 = 1728. Then, by keeping the most frequently occurring colors, the number of colors is reduced to n = 85 while ensuring that these colors cover more than 95% of the pixels. Finally the saliency of each quantized color c is smoothed by replacing it with the weighted average of the saliency of its m nearest-neighbor colors:
S'(c) = (1/((m−1)T)) Σ_{i=1}^{m} (T − D(c, c_i)) S(c_i),   (2)
where T = Σ_{i=1}^{m} D(c, c_i) is the sum of the distances between color c and its m nearest-neighbor colors c_i.
In step 2, salient regions are computed from the spatial relations of neighboring regions; the implementation is:
First an image segmentation algorithm divides the input video frame into multiple regions, and a color histogram is built for each region. For each region r_k, the saliency is computed from its color contrast with the other regions:
S(r_k) = Σ_{r_i ≠ r_k} w(r_i) D_r(r_k, r_i),   (3)
where w(r_i) is the total number of pixels of the i-th region, used as the weight of region r_i so that the color contrast of large regions is emphasized, and D_r(·,·) is the color distance between two regions. The color distance between two regions r_1 and r_2 is
D_r(r_1, r_2) = Σ_{i=1}^{n_1} Σ_{j=1}^{n_2} f(c_{1,i}) f(c_{2,j}) D(c_{1,i}, c_{2,j}),   (4)
where c_{1,i} is the color value of the i-th pixel of region r_1 and f(c_{1,i}) is the probability of c_{1,i} occurring in image I; c_{2,j} is the color value of the j-th pixel of region r_2 and f(c_{2,j}) is the probability of c_{2,j} occurring in image I; D(c_{1,i}, c_{2,j}) is the color distance between the two pixels c_{1,i} and c_{2,j}.
Spatial information of neighboring regions is then added on top of formula (3) to increase the influence of nearby regions:
S(r_k) = Σ_{r_i ≠ r_k} exp(−D_s(r_k, r_i)/σ_s^2) w(r_i) D_r(r_k, r_i),   (5)
where D_s(r_k, r_i) is the spatial distance between regions r_k and r_i (the Euclidean distance between the two region centroids) and σ_s is the spatial weighting strength.
In step 3, the foreground salient region is segmented with a binarization threshold; the implementation is: the video-frame saliency map computed with formula (5) is converted from floating point to an 8-bit unsigned gray-scale image, a threshold in [0, 255] is set to binarize it, and the resulting binary image is taken as the foreground region RCmap of the input video frame.
In step 4, a morphological gradient transform is applied to the segmented foreground region to generate the new global salient edge region RCBmap; the implementation is: the morphological gradient of RCmap is computed as
RCBmap = morph_grad(RCmap) = dilate(RCmap) − erode(RCmap),   (6)
where morph_grad(·) denotes the morphological gradient operation, and dilate(·) and erode(·) denote dilation and erosion respectively.
In step 5, the optical flow field is corrected with feature point pairs and random sample consensus (RANSAC); the implementation is: first a dense optical flow field vector ω_t of the current video frame is obtained with an optical flow algorithm; the SURF feature points of the two consecutive frames and the key-feature strong corners form feature point pairs; the corrected optical flow field vector ω'_t is then obtained from these feature point pairs with the RANSAC algorithm.
In step 6, all grids of the video frame are traversed at different scales and strong corner points are extracted; the implementation is:
For each down-sampled scale of the video frame, the frame is first divided into grids of n×n pixels, and the strong corners of the current video frame I are then extracted with the threshold
T = 0.001 × max_{i∈I} min(λ_i^1, λ_i^2),   (7)
where λ_i^1 and λ_i^2 are the eigenvalues of the 2×2 gradient covariance matrix computed from the image derivatives in the neighborhood of each pixel i of video frame I. For every pixel whose corresponding eigenvalue exceeds the threshold T, its coordinate position in video frame I is recorded; if the pixel coordinate falls within the n×n pixel range of a grid of the video frame, the center pixel of that grid is taken as a strong corner P.
In step 7, the strong corners whose corrected optical flow magnitude is non-zero are collected inside the salient edge region as the key-feature strong corners; the implementation is: using the salient edge region RCBmap of the video frame at each scale obtained with formula (6), the strong corners whose coordinates fall inside the salient edge region are selected from all strong corners of all grids of frame t; if the normalized corrected optical flow magnitude of such a corner in the next frame exceeds the minimum optical flow threshold, the corner is taken as a key-feature strong corner P_t. The normalized motion vector magnitude of the optical flow field at the i-th pixel is mag(I_i), computed as
mag(I_i) = sqrt((I_i^u)^2 + (I_i^v)^2) / max_{i∈I} sqrt((I_i^u)^2 + (I_i^v)^2),   (8)
where I_i = (I_i^u, I_i^v) is the motion vector of the current optical flow field at the i-th pixel, and I_i^u and I_i^v are its horizontal and vertical components.
In step 8, the number of key-feature strong corners obtained in step 7 is checked, and if it is zero the strong corners of step 6 are taken as the key-feature strong corners; the implementation is: the number of key-feature strong corners collected in step 7 for the current video frame is checked; if it is zero, the restriction to the salient edge region and the minimum optical flow threshold are cancelled and, directly following the method of step 6, all strong corners of frame t at the current scale are taken as the key-feature strong corners P_t.
In step 9, the displacement of each key-feature strong corner is computed from the corrected optical flow; the implementation is: using the corrected optical flow field vector ω'_t computed in step 5, the displaced coordinate P_{t+1} of the recorded key-feature strong corner P_t in frame t+1 is
P_{t+1} = (x_{t+1}, y_{t+1}) = (x_t, y_t) + (M ∗ ω'_t)|_{(x_t, y_t)},   (9)
where M is the kernel of a median filter and (x_t, y_t) is the coordinate position of corner P_t in the video frame.
In step 10, local spatio-temporal human action features are composed from the multi-frame coordinate displacement trajectory of each strong corner and its neighborhood gradient vectors; the implementation is: the coordinates P_t to P_{t+L} of each key-feature strong corner over L consecutive frames are recorded; the neighborhood gradient vectors of the corner over these frames are described with the HOG, HOF and MBH descriptors, each local feature space-time volume covering 16 pixels × 16 pixels × 5 frames; the L = 15 consecutive frames of the corner form 3 such local feature space-time volumes, and the HOG, HOF and MBH descriptors computed over the corner neighborhood compose the local spatio-temporal human action feature.
Compared with the prior art, the beneficial effects of the present invention are: the salient region of the moving foreground is segmented with a global-contrast saliency algorithm; exploiting the visual property that the gradient change of moving edge regions is highly discriminative, key-feature strong corners with non-zero corrected optical flow magnitude are extracted frame by frame in the global salient edge region generated by the morphological gradient transform, in combination with the corrected optical flow motion vectors; the deformation trajectories of these strong corners over consecutive frames are estimated and combined with descriptors to form local feature space-time volumes, realizing a motion feature extraction method at the mid-semantic level. The invention removes background noise unrelated to human motion, eliminates the influence of camera shake on the HOF and MBH descriptors of the motion features, improves the accuracy of the local spatio-temporal description of human actions, and increases the human action recognition rate.
Brief description of the drawings
Fig. 1 is the flow chart of the embodiment of the present invention.
Fig. 2 is an example of the local feature space-time volume of an action in the embodiment of the present invention.
Embodiment
To make the present invention easy for those of ordinary skill in the art to understand and implement, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the embodiments described here only serve to illustrate and explain the present invention and are not intended to limit it.
Referring to Fig. 1, the human action feature extraction method based on global salient edge regions provided by the embodiment of the present invention comprises the following steps:
Step 1: reduce the number of colors of the RGB color space and smooth the saliency over the color space. The specific implementation is: the saliency S(·) of the k-th pixel I_k of image I is defined as
S(I_k) = Σ_{I_i∈I} D(I_k, I_i) = Σ_{I_i∈I} ||I_k − I_i||,   (1)
where D(I_k, I_i) is the distance between pixels I_k and I_i in color space. Throughout this application S(·) denotes saliency (S is short for Saliency); for example S(I_k) is the saliency of the k-th pixel of image I. The symbol i in formula (1) denotes the i-th pixel of the image.
First each of the three RGB channels is quantized to 12 values, reducing the number of pixel colors to 12^3 = 1728. Then, by keeping the most frequently occurring colors, the number of colors is reduced to n = 85 while ensuring that these colors cover more than 95% of the pixels. Finally the saliency of each quantized color c is smoothed by replacing it with the weighted average of the saliency of its m nearest-neighbor colors:
S'(c) = (1/((m−1)T)) Σ_{j=1}^{m} (T − D(c, c_j)) S(c_j),   (2)
where T = Σ_{j=1}^{m} D(c, c_j) is the sum of the distances between color c and its m nearest-neighbor colors c_j. The subscript j denotes the j-th neighboring color; c is short for color. After quantization, color c takes only 1728 values, and after the further reduction to n = 85 colors the time complexity of the distance computation decreases. S(c_j) is the saliency of the j-th neighbor of the quantized color c.
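As an illustration only (not part of the claimed method), a minimal Python sketch of step 1 is given below, assuming NumPy and an OpenCV-style BGR frame; the function name quantized_color_saliency, the neighbor count m = 21 and the remapping of dropped colors to their nearest kept color are assumptions of this sketch rather than requirements of the patent.

```python
import numpy as np

def quantized_color_saliency(frame_bgr, levels=12, coverage=0.95, m=21):
    """Per-color global-contrast saliency on a quantized palette (sketch of
    step 1); levels=12 and coverage=0.95 follow the text, m is an assumption."""
    h, w, _ = frame_bgr.shape
    # Quantize each channel to 'levels' values -> at most levels**3 colors.
    q = frame_bgr.astype(np.int64) * levels // 256               # 0..levels-1
    idx = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]
    colors, inverse, counts = np.unique(idx.ravel(), return_inverse=True,
                                        return_counts=True)
    # Keep the high-frequency colors covering at least 'coverage' of the pixels.
    order = np.argsort(-counts)
    cum = np.cumsum(counts[order]) / (h * w)
    n_keep = int(np.searchsorted(cum, coverage)) + 1
    keep = order[:n_keep]
    # Map every dropped color to its nearest kept color in quantized RGB space.
    def code_to_rgb(code):
        return np.stack([code // (levels * levels), (code // levels) % levels,
                         code % levels], axis=-1).astype(np.float64)
    kept_rgb, all_rgb = code_to_rgb(colors[keep]), code_to_rgb(colors)
    nearest = np.argmin(((all_rgb[:, None] - kept_rgb[None]) ** 2).sum(-1), axis=1)
    freq = np.bincount(nearest[inverse], minlength=n_keep).astype(np.float64)
    freq /= freq.sum()
    # Formula (1) per color: saliency = frequency-weighted distance to all others.
    dist = np.sqrt(((kept_rgb[:, None] - kept_rgb[None]) ** 2).sum(-1))
    sal = (dist * freq[None, :]).sum(axis=1)
    # Formula (2): smooth with the m nearest-neighbor colors.
    m = min(m, n_keep)
    nn = np.argsort(dist, axis=1)[:, :m]
    d_nn = np.take_along_axis(dist, nn, axis=1)
    T = d_nn.sum(axis=1, keepdims=True)
    sal = ((T - d_nn) * sal[nn]).sum(axis=1) / (np.maximum(m - 1, 1) * T[:, 0] + 1e-12)
    # Back-project the per-color saliency onto the pixels.
    return sal[nearest[inverse]].reshape(h, w)
```

In such a pipeline the returned per-pixel map would then feed the region-contrast refinement of step 2.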
Step 2: compute salient regions from the spatial relations of neighboring regions. The specific implementation is:
First an image segmentation algorithm divides the input video frame into multiple regions, and a color histogram is built for each region. For each region r_k, the saliency is computed from its color contrast with the other regions:
S(r_k) = Σ_{r_i ≠ r_k} w(r_i) D_r(r_k, r_i),   (3)
where w(r_i) is the weight of region r_i and D_r(·,·) is the color distance between two regions. Every function written in the form D(·,·) in this application can use the distance metric of formula (1); D(I_k, I_i) and D(c, c_j) both denote the color distance between their two arguments. D is short for Distance, r is short for region and w is short for weight; w(r_i) is the total number of pixels of the i-th region of the image, so that the color contrast of large regions is emphasized. The subscript i in formulas (3) and (5) denotes the i-th region. The color distance between two regions r_1 and r_2 is
D_r(r_1, r_2) = Σ_{i=1}^{n_1} Σ_{j=1}^{n_2} f(c_{1,i}) f(c_{2,j}) D(c_{1,i}, c_{2,j}),   (4)
where c_{1,i} is the color value of the i-th pixel of region r_1 and f(c_{1,i}) is the probability of c_{1,i} occurring in image I; c_{2,j} is the color value of the j-th pixel of region r_2 and f(c_{2,j}) is the probability of c_{2,j} occurring in image I; D(c_{1,i}, c_{2,j}) is the color distance between the two pixels c_{1,i} and c_{2,j}.
Spatial information of neighboring regions is then added on top of formula (3) to increase the influence of nearby regions:
S(r_k) = Σ_{r_i ≠ r_k} exp(−D_s(r_k, r_i)/σ_s^2) w(r_i) D_r(r_k, r_i),   (5)
where D_s(r_k, r_i) is the spatial distance between regions r_k and r_i (the Euclidean distance between the two region centroids) and σ_s is the spatial weighting strength.
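A possible Python sketch of step 2, assuming a label map from any segmentation algorithm and the quantized palette of step 1 as inputs; the function name, the normalization of region centroids to [0, 1] and σ_s = 0.4 are assumptions of this sketch.

```python
import numpy as np

def region_contrast_saliency(labels, quant_idx, palette_rgb, sigma_s=0.4):
    """Region-contrast saliency, formulas (3)-(5), as a sketch.
    labels      : HxW integer label map from any image segmentation
    quant_idx   : HxW index of each pixel into the quantized palette
    palette_rgb : Kx3 array of quantized palette colors"""
    h, w = labels.shape
    n_reg, n_col = int(labels.max()) + 1, palette_rgb.shape[0]
    # Per-region color histogram f(c) and region centroid.
    hist = np.zeros((n_reg, n_col))
    np.add.at(hist, (labels.ravel(), quant_idx.ravel()), 1.0)
    sizes = hist.sum(axis=1)                      # w(r_i): pixels per region
    hist /= np.maximum(sizes[:, None], 1.0)
    ys, xs = np.mgrid[0:h, 0:w]
    cx = np.bincount(labels.ravel(), weights=xs.ravel(), minlength=n_reg) / np.maximum(sizes, 1.0) / w
    cy = np.bincount(labels.ravel(), weights=ys.ravel(), minlength=n_reg) / np.maximum(sizes, 1.0) / h
    # Formula (4): D_r(r1, r2) = sum_i sum_j f1(i) f2(j) D(c_i, c_j).
    col_dist = np.sqrt(((palette_rgb[:, None].astype(float)
                         - palette_rgb[None].astype(float)) ** 2).sum(-1))
    Dr = hist @ col_dist @ hist.T                 # (n_reg, n_reg)
    # Formula (5): spatial weight between region centroids.
    Ds = np.sqrt((cx[:, None] - cx[None]) ** 2 + (cy[:, None] - cy[None]) ** 2)
    Wspace = np.exp(-Ds / sigma_s ** 2)
    S = (Wspace * sizes[None, :] * Dr).sum(axis=1)
    S -= sizes * np.diag(Dr)                      # drop the r_i == r_k term
    return S[labels]                              # per-pixel saliency map
```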
Step 3: segment the foreground salient region with a binarization threshold. The specific implementation is: the video-frame saliency map computed with formula (5) is converted from floating point to an 8-bit unsigned gray-scale image, binarization is performed with the mean value of this gray-scale image as the threshold, and the resulting binary image is the foreground salient region RCmap of the input video frame.
Step 4: apply a morphological gradient transform to the segmented foreground region to generate the new global salient edge region RCBmap. The specific implementation is: two morphological gradient transforms are applied to RCmap to widen the salient edge region:
RCBmap = morph_grad(RCmap) = dilate(RCmap) − erode(RCmap),   (6)
where morph_grad(·) denotes the morphological gradient operation, and dilate(·) and erode(·) denote dilation and erosion respectively.
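A minimal OpenCV sketch of steps 3 and 4: the 3×3 structuring element, the function name and the interpretation of the two gradient passes as the iterations argument are choices of this sketch, not requirements of the patent.

```python
import cv2
import numpy as np

def salient_edge_region(saliency_map, kernel_size=3, iterations=2):
    """Steps 3-4 sketch: binarize the saliency map into the foreground region
    RCmap using its mean gray value as threshold, then take the morphological
    gradient (formula (6)) to obtain the salient edge region RCBmap."""
    # Convert the floating-point saliency map to an 8-bit gray-scale image.
    gray = cv2.normalize(saliency_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, rcmap = cv2.threshold(gray, float(gray.mean()), 255, cv2.THRESH_BINARY)
    # Morphological gradient = dilate(RCmap) - erode(RCmap).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    rcbmap = cv2.morphologyEx(rcmap, cv2.MORPH_GRADIENT, kernel,
                              iterations=iterations)
    return rcmap, rcbmap
```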
Step 5: correct the optical flow field with feature point pairs and random sample consensus (RANSAC). The implementation is: first a dense optical flow field vector ω_t of the current video frame is obtained with an optical flow algorithm; the SURF feature points of the two consecutive frames and the key-feature strong corners form feature point pairs; the corrected optical flow field vector ω'_t is then obtained from these feature point pairs with the RANSAC algorithm.
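A possible sketch of step 5 in Python with OpenCV: the patent does not name the dense optical flow algorithm, so Farneback flow is used here as a stand-in, SURF requires the opencv-contrib build, and the homography-based camera-motion cancellation is one way to realize the RANSAC correction; all of these are assumptions of this sketch.

```python
import cv2
import numpy as np

def corrected_flow(prev_gray, curr_gray, prev_corners=None):
    """Step 5 sketch: dense flow plus RANSAC-estimated camera motion removal."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # SURF feature point pairs between the two frames (needs opencv-contrib).
    surf = cv2.xfeatures2d.SURF_create(400)
    kp1, des1 = surf.detectAndCompute(prev_gray, None)
    kp2, des2 = surf.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    if prev_corners is not None and len(prev_corners):
        # Strong corners displaced by the raw flow also serve as point pairs.
        ix = prev_corners[:, 0].astype(int)
        iy = prev_corners[:, 1].astype(int)
        src = np.vstack([src, prev_corners.astype(np.float32)])
        dst = np.vstack([dst, (prev_corners + flow[iy, ix]).astype(np.float32)])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # Flow induced by the estimated camera motion, subtracted from the raw flow.
    h, w = prev_gray.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    grid = np.dstack([xs, ys]).reshape(-1, 1, 2)
    cam_flow = cv2.perspectiveTransform(grid, H).reshape(h, w, 2) - np.dstack([xs, ys])
    return flow - cam_flow
```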
Step 6: traverse all grids of the video frame at different scales and extract strong corner points. The implementation is: for each down-sampled scale of the video frame, the frame is first divided into grids of n×n pixels, and the strong Harris corners of the current video frame I are then extracted with the threshold
T = 0.001 × max_{i∈I} min(λ_i^1, λ_i^2),   (7)
where λ_i^1 and λ_i^2 are the eigenvalues of the 2×2 gradient covariance matrix computed from the image derivatives over the 3×3 pixel neighborhood of each pixel i of video frame I. For every pixel whose corresponding eigenvalue exceeds the threshold T, its coordinate position in video frame I is recorded; if the pixel coordinate falls within the n×n pixel range of a grid of the video frame, the center pixel of that grid is taken as a strong corner P.
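A small sketch of step 6 using OpenCV's minimum-eigenvalue corner response; the grid size n = 5 and the function name are assumptions, and only a single scale is shown (the multi-scale case would repeat this on down-sampled frames).

```python
import cv2
import numpy as np

def grid_strong_corners(gray, grid=5, quality=0.001, block_size=3):
    """Step 6 sketch: threshold min(lambda1, lambda2) of the 2x2 gradient
    covariance matrix at quality * max (formula (7)); every n*n grid cell that
    contains an above-threshold pixel contributes its center as a strong corner."""
    eig = cv2.cornerMinEigenVal(gray, block_size)        # min-eigenvalue map
    T = quality * eig.max()
    h, w = gray.shape
    corners = []
    for y in range(0, h - grid + 1, grid):
        for x in range(0, w - grid + 1, grid):
            if (eig[y:y + grid, x:x + grid] > T).any():
                corners.append((x + grid // 2, y + grid // 2))
    return np.array(corners, dtype=np.float32).reshape(-1, 2)
```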
Step 7: collect, inside the salient edge region, the strong corners whose corrected optical flow magnitude is non-zero as the key-feature strong corners. The implementation is: using the salient edge region RCBmap of the video frame at each scale obtained with formula (6), the strong corners whose coordinates fall inside the salient edge region are selected from all strong corners of all grids of frame t; if the normalized corrected optical flow magnitude of such a corner in the next frame exceeds the minimum optical flow threshold (which can be set to 0.001), the corner is taken as a key-feature strong corner P_t. The normalized motion vector magnitude of the optical flow field at the i-th pixel is mag(I_i), computed as
mag(I_i) = sqrt((I_i^u)^2 + (I_i^v)^2) / max_{i∈I} sqrt((I_i^u)^2 + (I_i^v)^2),   (8)
where I_i = (I_i^u, I_i^v) is the motion vector of the current optical flow field at the i-th pixel, and I_i^u and I_i^v are its horizontal and vertical components.
Step 8: check the number of key-feature strong corners obtained in step 7, and if it is zero take the strong corners of step 6 as the key-feature strong corners. The implementation is: the number of key-feature strong corners collected in step 7 for the current video frame is checked; if it is zero, the restriction to the salient edge region and the minimum optical flow threshold are cancelled and, directly following the method of step 6, all strong corners of frame t at the current scale are taken as the key-feature strong corners P_t.
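Steps 7 and 8 reduce to a filtering pass over the corners from step 6; a sketch assuming the RCBmap and corrected flow computed above (the function name and array layout are assumptions):

```python
import numpy as np

def key_feature_corners(corners, rcbmap, flow_corrected, min_flow=0.001):
    """Steps 7-8 sketch: keep corners inside the salient edge region whose
    normalized corrected-flow magnitude (formula (8)) exceeds min_flow;
    if none survive, fall back to all strong corners (step 8)."""
    if len(corners) == 0:
        return corners
    mag = np.linalg.norm(flow_corrected, axis=2)
    mag = mag / (mag.max() + 1e-12)                     # normalized magnitude
    xs = corners[:, 0].astype(int)
    ys = corners[:, 1].astype(int)
    keep = (rcbmap[ys, xs] > 0) & (mag[ys, xs] > min_flow)
    return corners[keep] if keep.any() else corners
```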
Step 9: compute the displacement of each key-feature strong corner from the corrected optical flow. The implementation is: using the corrected optical flow field vector ω'_t computed in step 5, the displaced coordinate P_{t+1} of the recorded key-feature strong corner P_t in frame t+1 is
P_{t+1} = (x_{t+1}, y_{t+1}) = (x_t, y_t) + (M ∗ ω'_t)|_{(x_t, y_t)},   (9)
where M is the kernel of a median filter and (x_t, y_t) is the coordinate position of corner P_t in the video frame.
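A short sketch of step 9; the 3×3 median kernel size is an assumption, as the patent only specifies that M is a median filter kernel.

```python
import cv2
import numpy as np

def track_corners(corners, flow_corrected, median_ksize=3):
    """Step 9 sketch: displace each key-feature corner by the corrected flow
    smoothed with a median filter M, i.e. formula (9)."""
    u = cv2.medianBlur(flow_corrected[..., 0].astype(np.float32), median_ksize)
    v = cv2.medianBlur(flow_corrected[..., 1].astype(np.float32), median_ksize)
    xs = corners[:, 0].astype(int)
    ys = corners[:, 1].astype(int)
    return corners + np.stack([u[ys, xs], v[ys, xs]], axis=1)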
Step 10: compose local spatio-temporal human action features from the multi-frame coordinate displacement trajectories of the strong corners and their neighborhood gradient vectors. The implementation is: the coordinates P_t to P_{t+L} of each key-feature strong corner over L consecutive frames are recorded; the neighborhood gradient vectors of the corner over these frames are described with descriptors such as HOG, HOF and MBH, each local feature space-time volume covering 16 pixels × 16 pixels × 5 frames; the L = 15 consecutive frames of the corner form 3 such local feature space-time volumes, and the HOG, HOF and MBH descriptors computed over the corner neighborhood are concatenated in series into the local spatio-temporal human action feature.
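Only the final assembly of step 10 is sketched below; the per-volume HOG, HOF and MBH descriptor functions are assumed to exist (for example reimplemented or borrowed from a dense-trajectories codebase), and the trajectory normalization is an assumption of this sketch.

```python
import numpy as np

def trajectory_feature(point_track, hog_vols, hof_vols, mbh_vols):
    """Step 10 sketch: concatenate the L-frame displacement trajectory of a
    key-feature corner with the HOG/HOF/MBH descriptors of its three
    16x16x5 space-time volumes into one local spatio-temporal feature."""
    # Displacement trajectory over the L consecutive frames (L = len(track)-1).
    traj = np.diff(np.asarray(point_track, dtype=np.float32), axis=0)
    norm = np.abs(traj).sum() + 1e-12
    parts = [(traj / norm).ravel()]
    # One descriptor vector per space-time volume (3 volumes when L = 15).
    for vols in (hog_vols, hof_vols, mbh_vols):
        parts.extend(np.asarray(v, dtype=np.float32).ravel() for v in vols)
    return np.concatenate(parts)
```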
It should be understood that the parts not elaborated in this specification belong to the prior art.
It should also be understood that the above description of the preferred embodiment is relatively detailed and should therefore not be regarded as limiting the scope of the invention; those of ordinary skill in the art, under the teaching of the present invention, may make substitutions or variations without departing from the scope protected by the claims, and all such substitutions and variations fall within the protection scope of the present invention, which shall be defined by the appended claims.

Claims (10)

1. A human action feature extraction method based on global salient edge regions, characterized by comprising the following steps:
Step 1: reduce the number of colors of the RGB color space and smooth the saliency over the color space;
Step 2: compute salient regions from the spatial relations of neighboring regions;
Step 3: segment the foreground salient region with a binarization threshold;
Step 4: apply a morphological gradient transform to the segmented foreground region to generate the global salient edge region;
Step 5: correct the optical flow field with feature point pairs and random sample consensus;
Step 6: traverse all grids of the video frame at different scales and extract strong corner points;
Step 7: within the salient edge region, collect the strong corners whose corrected optical flow magnitude is non-zero as the key-feature strong corners;
Step 8: check the number of key-feature strong corners obtained in step 7, and if it is zero take the strong corners of step 6 as the key-feature strong corners;
Step 9: compute the displacement of each key-feature strong corner from the corrected optical flow;
Step 10: form local spatio-temporal human action features from the multi-frame coordinate displacement trajectories of the strong corners and their neighborhood gradient vectors.
2. The human action feature extraction method based on global salient edge regions according to claim 1, characterized in that in step 1 the number of colors of the RGB color space is reduced and the saliency is smoothed over the color space; the specific implementation is:
the saliency S(·) of the k-th pixel I_k of image I is defined as
S(I_k) = Σ_{I_i∈I} D(I_k, I_i) = Σ_{I_i∈I} ||I_k − I_i||,   (1)
where D(I_k, I_i) is the distance between pixels I_k and I_i in color space;
first each of the three RGB channels is quantized to 12 values, reducing the number of pixel colors to 12^3 = 1728; then, by keeping the most frequently occurring colors, the number of colors is reduced to n = 85 while ensuring that these colors cover more than 95% of the pixels; finally the saliency of each quantized color c is smoothed by replacing it with the weighted average of the saliency of its m nearest-neighbor colors:
S'(c) = (1/((m−1)T)) Σ_{j=1}^{m} (T − D(c, c_j)) S(c_j),   (2)
where T = Σ_{j=1}^{m} D(c, c_j) is the sum of the distances between color c and its m nearest-neighbor colors c_j, and the subscript j denotes the j-th neighboring color.
3. The human action feature extraction method based on global salient edge regions according to claim 2, characterized in that in step 2 the salient regions are computed from the spatial relations of neighboring regions; the implementation is:
first an image segmentation algorithm divides the input video frame into multiple regions and a color histogram is built for each region; for each region r_k, the saliency is computed from its color contrast with the other regions:
S(r_k) = Σ_{r_i ≠ r_k} w(r_i) D_r(r_k, r_i),   (3)
where w(r_i) is the total number of pixels of the i-th region of the image, used as the weight of region r_i so that the color contrast of large regions is emphasized, and D_r(·,·) is the color distance between two regions; the color distance between two regions r_1 and r_2 is
D_r(r_1, r_2) = Σ_{i=1}^{n_1} Σ_{j=1}^{n_2} f(c_{1,i}) f(c_{2,j}) D(c_{1,i}, c_{2,j}),   (4)
where c_{1,i} is the color value of the i-th pixel of region r_1 and f(c_{1,i}) is the probability of c_{1,i} occurring in image I; c_{2,j} is the color value of the j-th pixel of region r_2 and f(c_{2,j}) is the probability of c_{2,j} occurring in image I; D(c_{1,i}, c_{2,j}) is the color distance between the two pixels c_{1,i} and c_{2,j};
spatial information of neighboring regions is then added on top of formula (3) to increase the influence of nearby regions:
S(r_k) = Σ_{r_i ≠ r_k} exp(−D_s(r_k, r_i)/σ_s^2) w(r_i) D_r(r_k, r_i),   (5)
where D_s(r_k, r_i) is the spatial distance between regions r_k and r_i (the Euclidean distance between the two region centroids) and σ_s is the spatial weighting strength.
4. The human action feature extraction method based on global salient edge regions according to claim 3, characterized in that in step 3 the foreground salient region is segmented with a binarization threshold; the implementation is: the video-frame saliency map computed with formula (5) is converted from floating point to an 8-bit unsigned gray-scale image, a threshold in [0, 255] is set to binarize it, and the resulting binary image is taken as the foreground region RCmap of the input video frame.
5. The human action feature extraction method based on global salient edge regions according to claim 4, characterized in that in step 4 a morphological gradient transform is applied to the segmented foreground region to generate the new global salient edge region RCBmap; the implementation is: the morphological gradient of RCmap is computed as
RCBmap = morph_grad(RCmap) = dilate(RCmap) − erode(RCmap),   (6)
where morph_grad(·) denotes the morphological gradient operation, and dilate(·) and erode(·) denote dilation and erosion respectively.
6. The human action feature extraction method based on global salient edge regions according to claim 5, characterized in that in step 5 the optical flow field is corrected with feature point pairs and random sample consensus; the implementation is: first a dense optical flow field vector ω_t of the current video frame is obtained with an optical flow algorithm; the SURF feature points of the two consecutive frames and the key-feature strong corners form feature point pairs; the corrected optical flow field vector ω'_t is then obtained from these feature point pairs with the RANSAC algorithm.
7. The human action feature extraction method based on global salient edge regions according to claim 6, characterized in that in step 6 all grids of the video frame are traversed at different scales and strong corner points are extracted; the implementation is:
for each down-sampled scale of the video frame, the frame is first divided into grids of n×n pixels, and the strong corners of the current video frame I are then extracted with the threshold
T = 0.001 × max_{i∈I} min(λ_i^1, λ_i^2),   (7)
where λ_i^1 and λ_i^2 are the eigenvalues of the 2×2 gradient covariance matrix computed from the image derivatives in the neighborhood of each pixel i of video frame I; for every pixel whose corresponding eigenvalue exceeds the threshold T, its coordinate position in video frame I is recorded; if the pixel coordinate falls within the n×n pixel range of a grid of the video frame, the center pixel of that grid is taken as a strong corner P.
8. The human action feature extraction method based on global salient edge regions according to claim 7, characterized in that in step 7 the strong corners whose corrected optical flow magnitude is non-zero are collected inside the salient edge region as the key-feature strong corners; the implementation is: using the salient edge region RCBmap of the video frame at each scale obtained with formula (6), the strong corners whose coordinates fall inside the salient edge region are selected from all strong corners of all grids of frame t; if the normalized corrected optical flow magnitude of such a corner in the next frame exceeds the minimum optical flow threshold, the corner is taken as a key-feature strong corner P_t; the normalized motion vector magnitude of the optical flow field at the i-th pixel is mag(I_i), computed as
mag(I_i) = sqrt((I_i^u)^2 + (I_i^v)^2) / max_{i∈I} sqrt((I_i^u)^2 + (I_i^v)^2),   (8)
where I_i = (I_i^u, I_i^v) is the motion vector of the current optical flow field at the i-th pixel, and I_i^u and I_i^v are its horizontal and vertical components;
and in that in step 8 the number of key-feature strong corners obtained in step 7 is checked, and if it is zero the strong corners of step 6 are taken as the key-feature strong corners; the implementation is: the number of key-feature strong corners collected in step 7 for the current video frame is checked; if it is zero, the restriction to the salient edge region and the minimum optical flow threshold are cancelled and, directly following the method of step 6, all strong corners of frame t at the current scale are taken as the key-feature strong corners P_t.
9. The human action feature extraction method based on global salient edge regions according to claim 8, characterized in that in step 9 the displacement of each key-feature strong corner is computed from the corrected optical flow; the implementation is: using the corrected optical flow field vector ω'_t computed in step 5, the displaced coordinate P_{t+1} of the recorded key-feature strong corner P_t in frame t+1 is
P_{t+1} = (x_{t+1}, y_{t+1}) = (x_t, y_t) + (M ∗ ω'_t)|_{(x_t, y_t)},   (9)
where M is the kernel of a median filter and (x_t, y_t) is the coordinate position of corner P_t in the video frame.
10. The human action feature extraction method based on global salient edge regions according to claim 9, characterized in that in step 10 local spatio-temporal human action features are composed from the multi-frame coordinate displacement trajectory of each strong corner and its neighborhood gradient vectors; the implementation is: the coordinates P_t to P_{t+L} of each key-feature strong corner over L consecutive frames are recorded; the neighborhood gradient vectors of the corner over these frames are described with the HOG, HOF and MBH descriptors, each local feature space-time volume covering 16 pixels × 16 pixels × 5 frames; the L = 15 consecutive frames of the corner form 3 such local feature space-time volumes, and the HOG, HOF and MBH descriptors computed over the corner neighborhood compose the local spatio-temporal human action feature.
CN201610075788.1A 2016-02-03 2016-02-03 Human action feature extracting method based on global prominent edge region Expired - Fee Related CN105550678B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610075788.1A CN105550678B (en) 2016-02-03 2016-02-03 Human action feature extracting method based on global prominent edge region

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610075788.1A CN105550678B (en) 2016-02-03 2016-02-03 Human action feature extracting method based on global prominent edge region

Publications (2)

Publication Number Publication Date
CN105550678A true CN105550678A (en) 2016-05-04
CN105550678B CN105550678B (en) 2019-01-18

Family

ID=55829861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610075788.1A Expired - Fee Related CN105550678B (en) 2016-02-03 2016-02-03 Human action feature extracting method based on global prominent edge region

Country Status (1)

Country Link
CN (1) CN105550678B (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295564A (en) * 2016-08-11 2017-01-04 南京理工大学 The action identification method that a kind of neighborhood Gaussian structures and video features merge
CN106339666A (en) * 2016-08-11 2017-01-18 中科爱芯智能科技(深圳)有限公司 Human body target nighttime monitoring method
CN106709933A (en) * 2016-11-17 2017-05-24 南京邮电大学 Unsupervised learning-based motion estimation method
CN106952269A (en) * 2017-02-24 2017-07-14 北京航空航天大学 The reversible video foreground object sequence detection dividing method of neighbour and system
CN107330385A (en) * 2017-06-21 2017-11-07 华东师范大学 A kind of multiple features pedestrian detection method based on semantic passage
CN107635099A (en) * 2017-10-09 2018-01-26 深圳市天视通电子科技有限公司 A kind of double optical-fiber network video cameras of human body sensing and safety defense monitoring system
CN107784266A (en) * 2017-08-07 2018-03-09 南京理工大学 Motion detection method based on spatiotemporal object statistical match model
CN108053410A (en) * 2017-12-11 2018-05-18 厦门美图之家科技有限公司 Moving Object Segmentation method and device
CN108288016A (en) * 2017-01-10 2018-07-17 武汉大学 The action identification method and system merged based on gradient boundaries figure and multimode convolution
CN108550159A (en) * 2018-03-08 2018-09-18 佛山市云米电器科技有限公司 A kind of flue gas concentration identification method based on the segmentation of three color of image
CN109118493A (en) * 2018-07-11 2019-01-01 南京理工大学 A kind of salient region detecting method in depth image
CN109118507A (en) * 2018-08-27 2019-01-01 明超 Shell bears air pressure real-time alarm system
CN109284667A (en) * 2018-07-26 2019-01-29 同济大学 A kind of three streaming human motion action space area detecting methods towards video
CN109299665A (en) * 2018-08-29 2019-02-01 上海悠络客电子科技股份有限公司 A kind of humanoid profile based on LSD algorithm describes method
CN109583341A (en) * 2018-11-19 2019-04-05 清华大学深圳研究生院 To more people's bone bone critical point detection method and devices of the image comprising portrait
CN109697409A (en) * 2018-11-27 2019-04-30 北京文香信息技术有限公司 A kind of feature extracting method of moving image and the recognition methods for motion images of standing up
CN109883609A (en) * 2018-08-27 2019-06-14 明超 Shell bears air pressure Realtime Alerts method
CN110096938A (en) * 2018-01-31 2019-08-06 腾讯科技(深圳)有限公司 A kind for the treatment of method and apparatus of action behavior in video
CN110163129A (en) * 2019-05-08 2019-08-23 腾讯科技(深圳)有限公司 Method, apparatus, electronic equipment and the computer readable storage medium of video processing
CN110222616A (en) * 2019-05-28 2019-09-10 浙江大华技术股份有限公司 Pedestrian's anomaly detection method, image processing apparatus and storage device
CN111461036A (en) * 2020-04-07 2020-07-28 武汉大学 Real-time pedestrian detection method using background modeling enhanced data
CN111614965A (en) * 2020-05-07 2020-09-01 武汉大学 Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering
CN111753590A (en) * 2019-03-28 2020-10-09 杭州海康威视数字技术股份有限公司 Behavior identification method and device and electronic equipment
CN113205043A (en) * 2021-04-30 2021-08-03 武汉大学 Video sequence two-dimensional attitude estimation method based on reinforcement learning
TWI742690B (en) * 2019-09-27 2021-10-11 大陸商北京市商湯科技開發有限公司 Method and apparatus for detecting a human body, computer device, and storage medium
CN113902760A (en) * 2021-10-19 2022-01-07 深圳市飘飘宝贝有限公司 Object edge optimization method, system, device and storage medium in video segmentation
CN116012283A (en) * 2022-09-28 2023-04-25 逸超医疗科技(北京)有限公司 Full-automatic ultrasonic image measurement method, equipment and storage medium
CN116095363A (en) * 2023-02-09 2023-05-09 西安电子科技大学 Mobile terminal short video highlight moment editing method based on key behavior recognition

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200490A (en) * 2014-08-14 2014-12-10 华南理工大学 Rapid retrograde detecting and tracking monitoring method under complex environment
CN104424642A (en) * 2013-09-09 2015-03-18 华为软件技术有限公司 Detection method and detection system for video salient regions

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424642A (en) * 2013-09-09 2015-03-18 华为软件技术有限公司 Detection method and detection system for video salient regions
CN104200490A (en) * 2014-08-14 2014-12-10 华南理工大学 Rapid retrograde detecting and tracking monitoring method under complex environment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANTONIOS OIKONOMOPOULOS等: "Spatiotemporal salient points for visual recognition of human actions", 《IEEE TRANSACTIONS ON SYSTEMS,MAN,AND CYBERNETICS,PART B(CYBERNETICS)》 *
KARLA BRKIC等: "Combining spatio-temporal appearance descriptors and optical flow for human action recognition in video data", 《CCVW》 *
王晓等: "基于显著区域的行人检测算法", 《计算机工程与设计》 *

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106339666A (en) * 2016-08-11 2017-01-18 中科爱芯智能科技(深圳)有限公司 Human body target nighttime monitoring method
CN106295564A (en) * 2016-08-11 2017-01-04 南京理工大学 The action identification method that a kind of neighborhood Gaussian structures and video features merge
CN106339666B (en) * 2016-08-11 2019-08-20 中科亿和智慧物联(深圳)有限公司 A kind of night monitoring method of human body target
CN106295564B (en) * 2016-08-11 2019-06-07 南京理工大学 A kind of action identification method of neighborhood Gaussian structures and video features fusion
CN106709933A (en) * 2016-11-17 2017-05-24 南京邮电大学 Unsupervised learning-based motion estimation method
CN106709933B (en) * 2016-11-17 2020-04-07 南京邮电大学 Motion estimation method based on unsupervised learning
CN108288016B (en) * 2017-01-10 2021-09-03 武汉大学 Action identification method and system based on gradient boundary graph and multi-mode convolution fusion
CN108288016A (en) * 2017-01-10 2018-07-17 武汉大学 The action identification method and system merged based on gradient boundaries figure and multimode convolution
CN106952269A (en) * 2017-02-24 2017-07-14 北京航空航天大学 The reversible video foreground object sequence detection dividing method of neighbour and system
CN107330385A (en) * 2017-06-21 2017-11-07 华东师范大学 A kind of multiple features pedestrian detection method based on semantic passage
CN107784266A (en) * 2017-08-07 2018-03-09 南京理工大学 Motion detection method based on spatiotemporal object statistical match model
CN107635099A (en) * 2017-10-09 2018-01-26 深圳市天视通电子科技有限公司 A kind of double optical-fiber network video cameras of human body sensing and safety defense monitoring system
CN107635099B (en) * 2017-10-09 2020-08-18 深圳市天视通电子科技有限公司 Human body induction double-optical network camera and security monitoring system
CN108053410A (en) * 2017-12-11 2018-05-18 厦门美图之家科技有限公司 Moving Object Segmentation method and device
CN110096938B (en) * 2018-01-31 2022-10-04 腾讯科技(深圳)有限公司 Method and device for processing action behaviors in video
CN110096938A (en) * 2018-01-31 2019-08-06 腾讯科技(深圳)有限公司 A kind for the treatment of method and apparatus of action behavior in video
CN108550159B (en) * 2018-03-08 2022-02-15 佛山市云米电器科技有限公司 Flue gas concentration identification method based on image three-color segmentation
CN108550159A (en) * 2018-03-08 2018-09-18 佛山市云米电器科技有限公司 A kind of flue gas concentration identification method based on the segmentation of three color of image
CN109118493B (en) * 2018-07-11 2021-09-10 南京理工大学 Method for detecting salient region in depth image
CN109118493A (en) * 2018-07-11 2019-01-01 南京理工大学 A kind of salient region detecting method in depth image
CN109284667A (en) * 2018-07-26 2019-01-29 同济大学 A kind of three streaming human motion action space area detecting methods towards video
CN109284667B (en) * 2018-07-26 2021-09-03 同济大学 Three-stream type human motion behavior space domain detection method facing video
CN109118507B (en) * 2018-08-27 2019-09-13 嵊州市万睿科技有限公司 Shell bears air pressure real-time alarm system
CN109883609A (en) * 2018-08-27 2019-06-14 明超 Shell bears air pressure Realtime Alerts method
CN109118507A (en) * 2018-08-27 2019-01-01 明超 Shell bears air pressure real-time alarm system
CN109883609B (en) * 2018-08-27 2020-11-27 青田县元元科技有限公司 Real-time alarm method for air pressure borne by shell
CN109299665A (en) * 2018-08-29 2019-02-01 上海悠络客电子科技股份有限公司 A kind of humanoid profile based on LSD algorithm describes method
CN109299665B (en) * 2018-08-29 2023-04-14 上海悠络客电子科技股份有限公司 Human-shaped contour description method based on LSD algorithm
CN109583341A (en) * 2018-11-19 2019-04-05 清华大学深圳研究生院 To more people's bone bone critical point detection method and devices of the image comprising portrait
CN109697409A (en) * 2018-11-27 2019-04-30 北京文香信息技术有限公司 A kind of feature extracting method of moving image and the recognition methods for motion images of standing up
CN111753590A (en) * 2019-03-28 2020-10-09 杭州海康威视数字技术股份有限公司 Behavior identification method and device and electronic equipment
CN111753590B (en) * 2019-03-28 2023-10-17 杭州海康威视数字技术股份有限公司 Behavior recognition method and device and electronic equipment
CN110163129B (en) * 2019-05-08 2024-02-13 腾讯科技(深圳)有限公司 Video processing method, apparatus, electronic device and computer readable storage medium
CN110163129A (en) * 2019-05-08 2019-08-23 腾讯科技(深圳)有限公司 Method, apparatus, electronic equipment and the computer readable storage medium of video processing
CN110222616A (en) * 2019-05-28 2019-09-10 浙江大华技术股份有限公司 Pedestrian's anomaly detection method, image processing apparatus and storage device
TWI742690B (en) * 2019-09-27 2021-10-11 大陸商北京市商湯科技開發有限公司 Method and apparatus for detecting a human body, computer device, and storage medium
CN111461036B (en) * 2020-04-07 2022-07-05 武汉大学 Real-time pedestrian detection method using background modeling to enhance data
CN111461036A (en) * 2020-04-07 2020-07-28 武汉大学 Real-time pedestrian detection method using background modeling enhanced data
CN111614965A (en) * 2020-05-07 2020-09-01 武汉大学 Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering
CN111614965B (en) * 2020-05-07 2022-02-01 武汉大学 Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering
CN113205043A (en) * 2021-04-30 2021-08-03 武汉大学 Video sequence two-dimensional attitude estimation method based on reinforcement learning
CN113205043B (en) * 2021-04-30 2022-06-07 武汉大学 Video sequence two-dimensional attitude estimation method based on reinforcement learning
CN113902760A (en) * 2021-10-19 2022-01-07 深圳市飘飘宝贝有限公司 Object edge optimization method, system, device and storage medium in video segmentation
CN116012283A (en) * 2022-09-28 2023-04-25 逸超医疗科技(北京)有限公司 Full-automatic ultrasonic image measurement method, equipment and storage medium
CN116012283B (en) * 2022-09-28 2023-10-13 逸超医疗科技(北京)有限公司 Full-automatic ultrasonic image measurement method, equipment and storage medium
CN116095363A (en) * 2023-02-09 2023-05-09 西安电子科技大学 Mobile terminal short video highlight moment editing method based on key behavior recognition
CN116095363B (en) * 2023-02-09 2024-05-14 西安电子科技大学 Mobile terminal short video highlight moment editing method based on key behavior recognition

Also Published As

Publication number Publication date
CN105550678B (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN105550678A (en) Human body motion feature extraction method based on global remarkable edge area
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN104143079B (en) The method and system of face character identification
CN110120020A (en) A kind of SAR image denoising method based on multiple dimensioned empty residual error attention network
CN108133188A (en) A kind of Activity recognition method based on motion history image and convolutional neural networks
CN104615983A (en) Behavior identification method based on recurrent neural network and human skeleton movement sequences
CN105718879A (en) Free-scene egocentric-vision finger key point detection method based on depth convolution nerve network
CN108830196A (en) Pedestrian detection method based on feature pyramid network
CN108961675A (en) Fall detection method based on convolutional neural networks
CN106682569A (en) Fast traffic signboard recognition method based on convolution neural network
CN106407889A (en) Video human body interaction motion identification method based on optical flow graph depth learning model
CN108198147A (en) A kind of method based on the multi-source image fusion denoising for differentiating dictionary learning
CN110084165A (en) The intelligent recognition and method for early warning of anomalous event under the open scene of power domain based on edge calculations
CN103077537B (en) Novel L1 regularization-based real-time moving target tracking method
CN106408030A (en) SAR image classification method based on middle lamella semantic attribute and convolution neural network
CN107689052A (en) Visual target tracking method based on multi-model fusion and structuring depth characteristic
CN106815578A (en) A kind of gesture identification method based on Depth Motion figure Scale invariant features transform
CN114758288A (en) Power distribution network engineering safety control detection method and device
CN107452022A (en) A kind of video target tracking method
CN106991666A (en) A kind of disease geo-radar image recognition methods suitable for many size pictorial informations
CN112950780B (en) Intelligent network map generation method and system based on remote sensing image
CN104298974A (en) Human body behavior recognition method based on depth video sequence
CN109685045A (en) A kind of Moving Targets Based on Video Streams tracking and system
CN109934095A (en) A kind of remote sensing images Clean water withdraw method and system based on deep learning
CN106910188A (en) The detection method of airfield runway in remote sensing image based on deep learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190118

Termination date: 20200203

CF01 Termination of patent right due to non-payment of annual fee