CN101436301A - Method for detecting characteristic movement region of video encode - Google Patents


Info

Publication number
CN101436301A
CN101436301A (application CNA2008102039797A / CN200810203979A; granted as CN101436301B)
Authority
CN
China
Prior art keywords
macroblock
motion
region
feature motion
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008102039797A
Other languages
Chinese (zh)
Other versions
CN101436301B (en)
Inventor
张锦辉
石旭利
沈礼权
郭健
张兆扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN2008102039797A priority Critical patent/CN101436301B/en
Publication of CN101436301A publication Critical patent/CN101436301A/en
Application granted granted Critical
Publication of CN101436301B publication Critical patent/CN101436301B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for detecting feature motion regions in video coding. The method can rapidly detect the feature motion regions in a video sequence and, in particular, can effectively detect the regions of a video frame that interest the human eye even when the background moves violently or has complex texture. First, the video frame undergoes a filtering pre-process; second, a feature motion equation is applied to compute the feature motion region of the frame; third, an eight-direction motion model is used to obtain the local motion region; finally, the macroblocks of the feature motion region and the local motion region are combined to obtain the final feature motion region. The method adapts well, the algorithm is simple, and target objects are detected accurately.

Description

Method for detecting feature motion regions in video coding
Technical field
The present invention relates to a method for detecting feature motion regions in video coding. It effectively detects the motion macroblocks that reflect visual-perception features and, in particular, can still find the feature motion macroblocks of interest to the human eye when the background moves violently or has very complex texture. The method first applies low-pass filtering to the video frame; it then establishes a feature motion equation and uses the motion vectors and the eigenvalues of a matrix to find the feature motion macroblocks; next it builds an eight-direction motion model and uses each macroblock's motion direction to find the local motion macroblocks; finally it uses the correlation between the feature motion region and the local motion region to obtain the final feature motion region.
Background technology
Motion in a video sequence is complex. In the most basic case the camera is fixed and moving objects move relative to the background; because the camera is fixed, the background stays static across consecutive frames, and the motion exhibited by the moving objects is called local motion. In image processing, however, the original video sequence is usually captured by a camera that is often mounted on a moving platform, and to capture the scene better the camera itself performs scanning motions over the whole space, such as zooming, horizontal panning, vertical panning, and rotation. The fixed background then exhibits a global two-dimensional motion on the image plane, which is called global motion.
Moving-object detection has become a prominent research direction in computer vision in recent years. It has wide applications and potential economic value in security surveillance, video conferencing, human motion analysis, image compression, and content-based image storage and retrieval. In video image processing, the first task of moving-object detection is to segment the foreground targets from the background image. Three implementation approaches are common: background subtraction, inter-frame motion analysis, and optical flow. Background subtraction, currently the most widely used, detects the moving region from the difference between the current image and a background image; it generally yields the most complete features but performs poorly in dynamic scenes. Inter-frame motion analysis, i.e. the frame-difference method, thresholds the pixel-wise time differences between two or three consecutive frames of the image sequence to extract the moving region; it adapts well to dynamic environments but tends to produce holes. The optical-flow method uses the time-varying optical-flow characteristics of the moving target to extract and track it effectively; it can detect targets even when the camera moves, but most optical-flow algorithms are complex and their noise robustness is poor.
Finding the moving regions within a motion sequence that are subjectively interesting to us, or to the human eye, is a harder problem. The usual approach is to exploit the temporal masking effect: discontinuous changes in luminance over time raise or lower the visual threshold. In other words, when the scene of a video frame changes abruptly or a target moves rapidly, the visual threshold rises and the perceptual acuity of the human eye drops sharply.
Summary of the invention
The object of the present invention is to overcome the defects of the prior art and provide a method for detecting feature motion regions in video coding that uses the motion vectors and directions of macroblocks to find the feature motion macroblocks of interest to the human eye. Applying perceptual coding to these feature motion macroblocks can significantly improve subjective visual quality.
For achieving the above object, design of the present invention is:
As shown in Fig. 1, the method first applies low-pass filtering to the video frame; it then establishes the feature motion equation and uses the motion vectors and matrix eigenvalues to find the feature motion macroblocks; next it builds the eight-direction motion model and uses each macroblock's motion direction to find the local motion macroblocks; finally it uses the correlation between the feature motion region and the local motion region to obtain the final feature motion region.
According to the above design, the technical scheme of the present invention is:
A method for detecting feature motion regions in video coding, which rapidly detects the feature motion regions in a video sequence and, in particular, effectively detects the feature motion regions in a frame of video image that interest the human eye. It is characterized in that the video frame is first pre-filtered; the feature motion equation is then applied to compute the feature motion region of the frame; the eight-direction motion model is next used to obtain the local motion region; finally, the macroblocks of the feature motion region and the local motion region are combined to obtain the final feature motion region. The concrete steps are as follows:
(1) Pre-process the video frame: apply low-pass filtering to the uncoded original sequence;
(2) Establish the feature motion equation: use motion vectors and matrix eigenvalues to find the feature motion macroblocks;
(3) Establish the eight-direction motion model: use macroblock motion directions to find the local motion macroblocks;
(4) Use the correlation between the feature motion region and the local motion region to obtain the final feature motion region.
The above four steps are further specified as follows:
(1) Apply low-pass filtering to the current frame with the following 5 × 5 Gaussian template:
2  4  5  4  2
4  9 12  9  4
5 12 15 12  5
4  9 12  9  4
2  4  5  4  2
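As a concrete illustration, here is a minimal sketch of this low-pass pre-processing in Python, assuming a single-channel luminance frame stored as a NumPy array and replicated borders (the patent does not specify the border handling); the function and constant names are our own:

```python
import numpy as np

# The 5x5 Gaussian template from the description; its entries sum to 159,
# so we normalize by that sum to keep the luminance range unchanged.
KERNEL = np.array([
    [2, 4, 5, 4, 2],
    [4, 9, 12, 9, 4],
    [5, 12, 15, 12, 5],
    [4, 9, 12, 9, 4],
    [2, 4, 5, 4, 2],
], dtype=np.float64)
KERNEL /= KERNEL.sum()

def gaussian_lowpass(frame: np.ndarray) -> np.ndarray:
    """Low-pass filter a luminance frame with the 5x5 Gaussian template.

    Border pixels are handled by replicating the frame edge (a common
    choice; the patent leaves the border mode unspecified).
    """
    padded = np.pad(frame.astype(np.float64), 2, mode="edge")
    h, w = frame.shape
    out = np.zeros((h, w))
    # Accumulate the weighted, shifted copies of the frame; the kernel is
    # symmetric, so correlation and convolution coincide.
    for dy in range(5):
        for dx in range(5):
            out += KERNEL[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out
```

A constant frame passes through unchanged, which is a quick sanity check that the normalization is right.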
(2) Establish the feature motion equation:
1. Define a 2 × 2 matrix:
Z = \sum_{y \in \omega} \sum_{x \in \omega} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}   (1)
where ω is the size of the window over which the feature region is obtained (ω = 16 in this method), and I_x, I_y are the first-order partial derivatives of image P in the x and y directions. Central differences replace the exact partial derivatives in the computation:
I_x = \frac{\partial P(x,y)}{\partial x} = \frac{P_{cur}(x+1,\,y) - P_{pre}(x+mv\_x-1,\,y+mv\_y)}{2}   (2)
I_y = \frac{\partial P(x,y)}{\partial y} = \frac{P_{cur}(x,\,y+1) - P_{pre}(x+mv\_x,\,y+mv\_y-1)}{2}   (3)
Here P_cur(x, y) is the luminance of the current image P at coordinate (x, y), P_pre(x, y) is the luminance of the previous frame at coordinate (x, y), and mv_x, mv_y are the horizontal and vertical motion vector components of the current macroblock.
2. After computing the matrix Z, calculate its two eigenvalues; denote the larger one λ1 and the smaller one λ2, and keep λ2.
3. Compute the matrix Z and its minimal eigenvalue for every macroblock in the current frame by the method above, sort the minimal eigenvalues in descending order, and take the macroblocks corresponding to the top 20% of eigenvalues as the feature motion macroblocks.
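The test in steps 1-3 is essentially the minimal eigenvalue of a structure tensor computed per 16 × 16 macroblock. A hedged sketch follows, assuming the gradient fields of equations (2)-(3) have already been computed into arrays gx, gy; the function names are our own:

```python
import numpy as np

def min_eigenvalue(gx: np.ndarray, gy: np.ndarray) -> float:
    """Smaller eigenvalue lambda_2 of the 2x2 matrix Z of equation (1),
    accumulated over one window of gradient values gx, gy."""
    a = np.sum(gx * gx)   # sum of I_x^2
    b = np.sum(gx * gy)   # sum of I_x * I_y
    c = np.sum(gy * gy)   # sum of I_y^2
    # Closed form for the symmetric 2x2 matrix [[a, b], [b, c]].
    tr, det = a + c, a * c - b * b
    disc = max(tr * tr / 4.0 - det, 0.0)
    return tr / 2.0 - np.sqrt(disc)

def feature_motion_macroblocks(gx: np.ndarray, gy: np.ndarray, top: float = 0.2):
    """Rank 16x16 macroblocks by lambda_2 and return the (row, col) indices
    of the top `top` fraction, in descending eigenvalue order."""
    h, w = gx.shape
    scores = {}
    for by in range(0, h - 15, 16):
        for bx in range(0, w - 15, 16):
            scores[(by // 16, bx // 16)] = min_eigenvalue(
                gx[by:by + 16, bx:bx + 16], gy[by:by + 16, bx:bx + 16])
    ranked = sorted(scores, key=scores.get, reverse=True)
    keep = max(1, int(len(ranked) * top))
    return ranked[:keep]
```

A window with strong, independent gradients in both directions gets a large λ2, while flat or one-directional windows score near zero, which matches the intent of keeping the smaller eigenvalue.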
(3) Establish the eight-direction motion model:
1. Collect the motion vectors of all macroblocks in the last coded frame and divide them into eight directions according to the direction of the motion vector; as shown in Fig. 2, the eight directions are numbered direction 1 through direction 8 counterclockwise.
Every motion vector within 15 degrees of direction 1 is normalized to a direction-1 motion vector, and likewise every motion vector within 15 degrees of any of the eight directions is normalized to that direction. Count the motion macroblocks of each direction, then compute the mean horizontal and vertical motion magnitudes ave_mv_x, ave_mv_y over all macroblocks of that direction. If a macroblock's motion vector is 0, the macroblock is static and is marked as a direction-0 macroblock.
2. Count the macroblocks of the eight motion directions and the static macroblocks; the direction with the most motion macroblocks is assumed to be the global motion direction, and the average motion vector of that direction is the global motion vector.
3. Let mv_x_i and mv_y_i be the horizontal and vertical motion vector components of the i-th macroblock, and recompute the macroblock motion vectors with formulas (4) and (5), where ave_mv_x and ave_mv_y are the horizontal and vertical global motion vector components:
mv_x_i = mv_x_i − ave_mv_x   (4)
mv_y_i = mv_y_i − ave_mv_y   (5)
4. Recompute the global motion direction with the new macroblock motion vectors, i.e. jump back to steps 1 and 2, to obtain the final global motion direction.
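The building blocks of this model can be sketched as follows, under stated assumptions: motion vectors are (mv_x, mv_y) pairs, directions are binned into 45-degree sectors (the text normalizes vectors lying within 15 degrees of each model direction), and static macroblocks are treated as background in the classification of step 5, a point the patent leaves open. All function names are our own:

```python
import math

def direction_of(mv_x, mv_y):
    """Direction 0 for a zero vector, else 1..8 counterclockwise as in
    Fig. 2, starting from the positive x axis (45-degree sectors assumed)."""
    if mv_x == 0 and mv_y == 0:
        return 0
    angle = math.degrees(math.atan2(mv_y, mv_x)) % 360.0
    return int(((angle + 22.5) % 360.0) // 45.0) + 1

def global_motion(mvs):
    """Steps 1-2: the direction holding the most motion macroblocks and
    the mean motion vector (ave_mv_x, ave_mv_y) of that direction."""
    bins = {}
    for x, y in mvs:
        bins.setdefault(direction_of(x, y), []).append((x, y))
    moving = {d: v for d, v in bins.items() if d != 0}
    if not moving:
        return 0, (0.0, 0.0)
    d = max(moving, key=lambda k: len(moving[k]))
    xs, ys = zip(*moving[d])
    return d, (sum(xs) / len(xs), sum(ys) / len(ys))

def subtract_global(mvs, ave_mv_x, ave_mv_y):
    """Formulas (4) and (5): remove the global motion vector from each MV;
    re-running global_motion on the result gives the refinement of step 4."""
    return [(x - ave_mv_x, y - ave_mv_y) for x, y in mvs]

def is_local(mv_x, mv_y, global_dir):
    """Step 5: a macroblock is local unless its direction is the global
    direction or one of its two neighbors on the eight-direction ring.
    Static macroblocks count as background here (our assumption)."""
    d = direction_of(mv_x, mv_y)
    allowed = {global_dir, global_dir % 8 + 1, (global_dir - 2) % 8 + 1}
    return d != 0 and d not in allowed
```

For example, with a dominant rightward pan (direction 1), a macroblock moving straight up (direction 3) is flagged as local, while one moving up-right (direction 2, adjacent to 1) is not.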
5. Suppose direction i is the global motion direction. For each macroblock, if its motion direction is direction i or one of the two directions adjacent to i, mark it as a global motion macroblock; otherwise mark it as a local motion macroblock.
(4) The correlation between the feature motion region and the local motion region is used to obtain the final feature motion region as follows:
1. Center on the current local motion macroblock: if the surrounding 3 × 3 macroblock neighborhood contains a certain number of feature motion macroblocks, the current local motion macroblock is a final feature motion macroblock;
2. Center on the current non-feature macroblock: if the surrounding 3 × 3 macroblock neighborhood contains a certain number of local motion macroblocks, the current non-feature macroblock is a final feature motion macroblock.
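The two merging rules can be sketched over boolean macroblock maps; `min_count` is a hypothetical threshold, since the patent only says "a certain number":

```python
import numpy as np

def merge_regions(feature: np.ndarray, local: np.ndarray,
                  min_count: int = 2) -> np.ndarray:
    """Combine the feature-motion and local-motion macroblock maps.

    `feature` and `local` are boolean arrays over the macroblock grid.
    Rule 1: a local motion macroblock whose 3x3 neighborhood holds at
    least `min_count` feature macroblocks becomes a final feature
    macroblock. Rule 2: a non-feature macroblock whose 3x3 neighborhood
    holds at least `min_count` local motion macroblocks also becomes one.
    """
    h, w = feature.shape
    final = feature.copy()
    for y in range(h):
        for x in range(w):
            # Clip the 3x3 neighborhood at the frame border.
            y0, y1 = max(0, y - 1), min(h, y + 2)
            x0, x1 = max(0, x - 1), min(w, x + 2)
            if local[y, x] and feature[y0:y1, x0:x1].sum() >= min_count:
                final[y, x] = True   # rule 1
            elif not feature[y, x] and local[y0:y1, x0:x1].sum() >= min_count:
                final[y, x] = True   # rule 2
    return final
```

Rule 1 pulls isolated local motion macroblocks into the feature region when they border it; rule 2 fills gaps surrounded by local motion.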
Compared with the prior art, the present invention has the following obvious substantive features and notable advantages:
The present invention combines the feature motion equation with the eight-direction motion model to detect the objects of interest to the human eye in a motion sequence; it adapts well, the algorithm is simple, and target objects are detected accurately.
Description of drawings
Fig. 1 is a flow block diagram of the method of the present invention, based on the spatio-temporal masking effect.
Fig. 2 is a schematic diagram of the eight-direction motion model in Fig. 1.
Fig. 3 is the feature-motion-region detection result for the football sequence in CIF format.
Fig. 4 is the feature-motion-region detection result for the foreman sequence in CIF format.
Fig. 5 is the feature-motion-region detection result for the coastguard sequence in CIF format.
Fig. 6 is the feature-motion-region detection result for the deadline sequence in CIF format.
Fig. 7 is the feature-motion-region detection result for the hall sequence in CIF format.
Fig. 8 is the feature-motion-region detection result for the children sequence in CIF format.
Embodiment
An embodiment of the present invention is described in detail below with reference to the drawings. The detection method follows the flow chart of Fig. 1 and was implemented in software on a PC test platform with an Athlon X2 2.0 GHz CPU and 1024 MB of memory. Referring to Fig. 1, the method first applies low-pass filtering to the video frame; it then establishes the feature motion equation and uses the motion vectors and matrix eigenvalues to find the feature motion macroblocks; next it builds the eight-direction motion model and uses macroblock motion directions to find the local motion macroblocks; finally it uses the correlation between the feature motion region and the local motion region to obtain the final feature motion region. The steps are:
(1) Pre-process the video frame: apply low-pass filtering to the uncoded original sequence;
(2) Establish the feature motion equation: use motion vectors and matrix eigenvalues to find the feature motion macroblocks;
(3) Establish the eight-direction motion model: use macroblock motion directions to find the local motion macroblocks;
(4) Use the correlation between the feature motion region and the local motion region to obtain the final feature motion region.
The pre-processing of the video frame in step (1) proceeds as follows:
1. Apply Gaussian low-pass filtering to the current frame with the following 5 × 5 Gaussian template:
2  4  5  4  2
4  9 12  9  4
5 12 15 12  5
4  9 12  9  4
2  4  5  4  2
The feature motion equation of step (2) is established as follows:
1. Define a 2 × 2 matrix:
Z = \sum_{y \in \omega} \sum_{x \in \omega} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}   (1)
where ω is the size of the window over which the feature region is obtained (ω = 16 in this method), and I_x, I_y are the first-order partial derivatives of image P in the x and y directions. Central differences replace the exact partial derivatives in the computation:
I_x = \frac{\partial P(x,y)}{\partial x} = \frac{P_{cur}(x+1,\,y) - P_{pre}(x+mv\_x-1,\,y+mv\_y)}{2}   (2)
I_y = \frac{\partial P(x,y)}{\partial y} = \frac{P_{cur}(x,\,y+1) - P_{pre}(x+mv\_x,\,y+mv\_y-1)}{2}   (3)
Here P_cur(x, y) is the luminance of the current image P at coordinate (x, y), P_pre(x, y) is the luminance of the previous frame at coordinate (x, y), and mv_x, mv_y are the horizontal and vertical motion vector components of the current macroblock.
2. After computing the matrix Z, calculate its two eigenvalues; denote the larger one λ1 and the smaller one λ2, and keep λ2.
3. Compute the matrix Z and its minimal eigenvalue for every macroblock in the current frame by the method above, sort the minimal eigenvalues in descending order, and take the macroblocks corresponding to the top 20% of eigenvalues as the feature motion macroblocks.
The eight-direction motion model of step (3) is established as follows:
1. Collect the motion vectors of all macroblocks in the last coded frame and divide them into eight directions according to the direction of the motion vector; as shown in Fig. 2, the eight directions are numbered direction 1 through direction 8 counterclockwise.
Every motion vector within 15 degrees of direction 1 is normalized to a direction-1 motion vector, and likewise every motion vector within 15 degrees of any of the eight directions is normalized to that direction. Count the motion macroblocks of each direction, then compute the mean horizontal and vertical motion magnitudes ave_mv_x, ave_mv_y over all macroblocks of that direction. If a macroblock's motion vector is 0, the macroblock is static and is marked as a direction-0 macroblock.
2. Count the macroblocks of the eight motion directions and the static macroblocks; the direction with the most motion macroblocks is assumed to be the global motion direction, and the average motion vector of that direction is the global motion vector.
3. Let mv_x_i and mv_y_i be the horizontal and vertical motion vector components of the i-th macroblock, and recompute the macroblock motion vectors with formulas (4) and (5), where ave_mv_x and ave_mv_y are the horizontal and vertical global motion vector components:
mv_x_i = mv_x_i − ave_mv_x   (4)
mv_y_i = mv_y_i − ave_mv_y   (5)
4. Recompute the global motion direction with the new macroblock motion vectors, i.e. jump back to steps 1 and 2, to obtain the final global motion direction.
5. Suppose direction i is the global motion direction. For each macroblock, if its motion direction is direction i or one of the two directions adjacent to i, mark it as a global motion macroblock; otherwise mark it as a local motion macroblock.
The final feature motion region of step (4) is obtained from the correlation between the feature motion region and the local motion region as follows:
1. Center on the current local motion macroblock: if the surrounding 3 × 3 macroblock neighborhood contains a certain number of feature motion macroblocks, the current local motion macroblock is a final feature motion macroblock;
2. Center on the current non-feature macroblock: if the surrounding 3 × 3 macroblock neighborhood contains a certain number of local motion macroblocks, the current non-feature macroblock is a final feature motion macroblock.
An example is given below for 352 × 288 CIF input video, encoded with the JM10.2 H.264 reference encoder. The encoder was configured as follows: Baseline Profile, IPPP structure with one I frame inserted every 15 frames, 1 reference frame, bandwidth 256 kbps, frame rate 30 fps, and initial quantization parameter 32.
Typical 352 × 288 CIF standard test sequences were used as input, including sequences with complex backgrounds and sequences with static backgrounds. As the detection results in Fig. 3 to Fig. 8 show, the method detects well both on static backgrounds and on sequences with large global motion.

Claims (5)

1. A method for detecting feature motion regions in video coding, which rapidly detects the feature motion regions in a video sequence and, in particular, effectively detects the feature motion regions in a frame of video image that interest the human eye, characterized in that the video frame is first pre-filtered, the feature motion equation is then applied to compute the feature motion region of the frame, the eight-direction motion model is next used to obtain the local motion region, and finally the macroblocks of the feature motion region and the local motion region are combined to obtain the final feature motion region; the concrete steps are as follows:
(1) pre-process the video frame: apply low-pass filtering to the uncoded original sequence;
(2) establish the feature motion equation: use motion vectors and matrix eigenvalues to find the feature motion macroblocks;
(3) establish the eight-direction motion model: use macroblock motion directions to find the local motion macroblocks;
(4) use the correlation between the feature motion region and the local motion region to obtain the final feature motion region.
2. The method for detecting feature motion regions in video coding according to claim 1, characterized in that the pre-processing of the video frame in step (1) applies low-pass filtering to the current frame with the following 5 × 5 Gaussian template:
2  4  5  4  2
4  9 12  9  4
5 12 15 12  5
4  9 12  9  4
2  4  5  4  2
3. The method for detecting feature motion regions in video coding according to claim 1, characterized in that the feature motion equation of step (2) is established as follows:
1. define a 2 × 2 matrix:
Z = \sum_{y \in \omega} \sum_{x \in \omega} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}   (1)
where ω is the size of the window over which the feature region is obtained (ω = 16), and I_x, I_y are the first-order partial derivatives of image P in the x and y directions, computed with central differences in place of the exact partial derivatives:
I_x = \frac{\partial P(x,y)}{\partial x} = \frac{P_{cur}(x+1,\,y) - P_{pre}(x+mv\_x-1,\,y+mv\_y)}{2}   (2)
I_y = \frac{\partial P(x,y)}{\partial y} = \frac{P_{cur}(x,\,y+1) - P_{pre}(x+mv\_x,\,y+mv\_y-1)}{2}   (3)
where P_cur(x, y) is the luminance of the current image P at coordinate (x, y), P_pre(x, y) is the luminance of the previous frame at coordinate (x, y), and mv_x, mv_y are the horizontal and vertical motion vector components of the current macroblock;
2. after computing the matrix Z, calculate its two eigenvalues, denote the larger one λ1 and the smaller one λ2, and keep λ2;
3. compute the matrix Z and its minimal eigenvalue for every macroblock in the current frame by the method above, sort the minimal eigenvalues in descending order, and take the macroblocks corresponding to the top 20% of eigenvalues as the feature motion macroblocks.
4. The method for detecting feature motion regions in video coding according to claim 1, characterized in that the eight-direction motion model of step (3) is established as follows:
1. collect the motion vectors of all macroblocks in the last coded frame and divide them into eight directions according to the direction of the motion vector, numbering the eight directions 1 through 8 counterclockwise;
every motion vector within 15 degrees of direction 1 is normalized to a direction-1 motion vector, and likewise every motion vector within 15 degrees of any of the eight directions is normalized to that direction; count the motion macroblocks of each direction and compute the mean horizontal and vertical motion magnitudes ave_mv_x, ave_mv_y of that direction; if a macroblock's motion vector is 0, the macroblock is static and is marked as a direction-0 macroblock;
2. count the macroblocks of the eight motion directions and the static macroblocks; the direction with the most motion macroblocks is assumed to be the global motion direction, and the average motion vector of that direction is the global motion vector;
3. let mv_x_i and mv_y_i be the horizontal and vertical motion vector components of the i-th macroblock, and recompute the macroblock motion vectors with formulas (4) and (5), where ave_mv_x and ave_mv_y are the horizontal and vertical global motion vector components:
mv_x_i = mv_x_i − ave_mv_x   (4)
mv_y_i = mv_y_i − ave_mv_y   (5)
4. recompute the global motion direction with the new macroblock motion vectors obtained in step 3, i.e. jump back to steps 1 and 2, to obtain the final global motion direction;
5. suppose direction i is the global motion direction; for each macroblock, if its motion direction is direction i or one of the two directions adjacent to i, mark it as a global motion macroblock, otherwise mark it as a local motion macroblock.
5. The method for detecting feature motion regions in video coding according to claim 1, characterized in that the correlation between the feature motion region and the local motion region in step (4) is used to obtain the final feature motion region as follows:
1. center on the current local motion macroblock: if the surrounding 3 × 3 macroblock neighborhood contains a certain number of feature motion macroblocks, the current local motion macroblock is a final feature motion macroblock;
2. center on the current non-feature macroblock: if the surrounding 3 × 3 macroblock neighborhood contains a certain number of local motion macroblocks, the current non-feature macroblock is a final feature motion macroblock.
CN2008102039797A 2008-12-04 2008-12-04 Method for detecting characteristic movement region of video encode Expired - Fee Related CN101436301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008102039797A CN101436301B (en) 2008-12-04 2008-12-04 Method for detecting characteristic movement region of video encode


Publications (2)

Publication Number Publication Date
CN101436301A 2009-05-20
CN101436301B 2012-01-18

Family

ID=40710731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008102039797A Expired - Fee Related CN101436301B (en) 2008-12-04 2008-12-04 Method for detecting characteristic movement region of video encode

Country Status (1)

Country Link
CN (1) CN101436301B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5214507A (en) * 1991-11-08 1993-05-25 At&T Bell Laboratories Video signal quantization for an mpeg like coding environment
CN100544446C (en) * 2007-07-06 2009-09-23 浙江大学 The real time movement detection method that is used for video monitoring
CN101184221A (en) * 2007-12-06 2008-05-21 上海大学 Vision attention based video encoding method

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101882316A (en) * 2010-06-07 2010-11-10 深圳市融创天下科技发展有限公司 Method, device and system for regional division/coding of image
WO2011153869A1 (en) * 2010-06-07 2011-12-15 深圳市融创天下科技股份有限公司 Method, device and system for partition/encoding image region
CN102568006A (en) * 2011-03-02 2012-07-11 上海大学 Visual saliency algorithm based on motion characteristic of object in video
CN102568006B (en) * 2011-03-02 2014-06-11 上海大学 Visual saliency algorithm based on motion characteristic of object in video
CN103858075A (en) * 2011-10-14 2014-06-11 三星电子株式会社 Apparatus and method for recognizing motion by using event-based vision sensor
CN105338315A (en) * 2015-10-29 2016-02-17 宁波大学 Intelligent mobile terminal-based warehouse anti-theft video monitoring system
CN105335717A (en) * 2015-10-29 2016-02-17 宁波大学 Intelligent mobile terminal video jitter analysis-based face recognition system
CN105338315B (en) * 2015-10-29 2018-08-31 宁波大学 Storehouse security video monitoring system based on intelligent mobile terminal
CN105335717B (en) * 2015-10-29 2019-03-05 宁波大学 Face identification system based on the analysis of intelligent mobile terminal video jitter
CN115114466A (en) * 2022-08-30 2022-09-27 成都实时技术股份有限公司 Method, system, medium and electronic device for searching target information image
CN115114466B (en) * 2022-08-30 2022-12-13 成都实时技术股份有限公司 Method, system, medium and electronic device for searching target practice information image

Also Published As

Publication number Publication date
CN101436301B (en) 2012-01-18

Similar Documents

Publication Publication Date Title
CN101236656B (en) Movement target detection method based on block-dividing image
JP4623135B2 (en) Image recognition device
CN101436301B (en) Method for detecting characteristic movement region of video encode
Chen et al. An advanced moving object detection algorithm for automatic traffic monitoring in real-world limited bandwidth networks
US8582915B2 (en) Image enhancement for challenging lighting conditions
US20090027502A1 (en) Portable Apparatuses Having Devices for Tracking Object's Head, and Methods of Tracking Object's Head in Portable Apparatus
US10297016B2 (en) Video background removal method
Yang et al. Recurrent multi-frame deraining: Combining physics guidance and adversarial learning
Yao et al. Detecting video frame-rate up-conversion based on periodic properties of edge-intensity
US9736493B2 (en) System and method for achieving computationally efficient motion estimation in video compression based on motion direction and magnitude prediction
Biswas et al. Anomaly detection in compressed H. 264/AVC video
CN108200432A (en) A kind of target following technology based on video compress domain
CN102222321A (en) Blind reconstruction method for video sequence
CN114359626A (en) Visible light-thermal infrared obvious target detection method based on condition generation countermeasure network
CN101877135B (en) Moving target detecting method based on background reconstruction
Huang et al. Deep learning based moving object detection for video surveillance
Solana-Cipres et al. Real-time moving object segmentation in H. 264 compressed domain based on approximate reasoning
Yao et al. Detection and localization of video transcoding from AVC to HEVC based on deep representations of decoded frames and PU maps
JPH0759108A (en) Cut pattern detection method for motion picture
Boujut et al. No-reference video quality assessment of H. 264 video streams based on semantic saliency maps
Nguyen et al. Moving object detection in compressed domain for high resolution videos
Hamida et al. Video pre-analyzing and coding in the context of video surveillance applications
Li et al. Detection of information hiding by modulating intra prediction modes in H. 264/AVC
Yang et al. Surveillance video coding with dynamic textural background detection
He et al. Compressed Video Anomaly Detection of Human Behavior Based on Abnormal Region Determination

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120118

Termination date: 20141204

EXPY Termination of patent right or utility model