CN106651918A - Method for extracting foreground under shaking background - Google Patents

Method for extracting foreground under shaking background

Info

Publication number
CN106651918A
Authority
CN
China
Prior art keywords
image
motion vector
frame
frame image
center of gravity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710083910.4A
Other languages
Chinese (zh)
Other versions
CN106651918B (en)
Inventor
何冰
侯晓明
顾俊杰
印明骋
陆涛
柴忠良
赖志超
王欣庭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Shanghai Electric Power Co Ltd
Original Assignee
State Grid Shanghai Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Shanghai Electric Power Co Ltd filed Critical State Grid Shanghai Electric Power Co Ltd
Priority to CN201710083910.4A priority Critical patent/CN106651918B/en
Publication of CN106651918A publication Critical patent/CN106651918A/en
Application granted granted Critical
Publication of CN106651918B publication Critical patent/CN106651918B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The invention relates to a method for extracting the foreground under a shaking background. The method comprises the following steps: S1) initializing the size of an image block, the step length of the block movement and the position of the block; S2) reading two adjacent frames from a video and obtaining their binary images be_frame and af_frame; S3) extracting the image block at the same position in be_frame and af_frame, calculating the difference between the centers of gravity of the two extracted regions, and obtaining the motion vector between the two frames at the current block position; S4) moving the image block by the step length and repeating step S3 until the whole image has been covered; S5) calculating the average motion vector of the two frames from all the obtained motion vectors; S6) reversely translating the following frame according to the obtained average motion vector, thereby obtaining an image in which the shake error is reduced; S7) extracting the foreground from the shake-compensated frames using a Gaussian mixture model. Compared with the prior art, the method has advantages such as a wide scope of application.

Description

Foreground extraction method under a shaking background
Technical field
The present invention relates to a foreground extraction method, and more particularly to a foreground extraction method under a shaking background.
Background technology
To improve the quality and level of urban security, cameras have been deployed in almost all public places, but the large volume of video they produce still cannot be analyzed comprehensively, intelligently and accurately. Checking surveillance video manually is neither practical nor economical, and studies have found that after watching surveillance video for 20 minutes a person's attention drops to an unacceptable level, so that unusual events appearing in the image are easily overlooked. A video foreground extraction algorithm that adapts to complex scenes can extract the foreground from the video automatically, and the extracted foreground can be used as the input of pattern recognition and motion analysis systems for intelligent analysis of the video. The development of video foreground extraction therefore contributes to the improvement of social security systems and to the level of public safety.
Foreground extraction also appears in many other fields of research. After a patient takes certain drugs, a doctor can use foreground extraction to track the trace of the drug inside the patient's body and check whether the drug accurately reaches the lesion and takes effect. Researchers of animal behavior no longer need to watch the behavior of a target individual for long periods themselves, but can rely on an intelligent video analysis system instead. In the military field, foreground extraction combined with target tracking can be used to capture the behavior of key targets, strengthening defense and assisting attack.
When the device shooting the video is almost static, a Gaussian mixture model can extract foreground objects relatively accurately, but it lacks adaptability for videos that contain camera shake.
The content of the invention
The purpose of the present invention is to provide a foreground extraction method under a shaking background in order to overcome the above-mentioned defects of the prior art.
The purpose of the present invention can be achieved through the following technical solutions:
A foreground extraction method under a shaking background, comprising:
Step S1: initializing the size of the image block, the step length of the block movement and the position of the block;
Step S2: reading two adjacent frames from the video, and obtaining the binary image be_frame of the previous frame and the binary image af_frame of the following frame;
Step S3: extracting the image block at the same position in the binary image of the previous frame and the binary image of the following frame, calculating the difference between the centers of gravity of the two extracted regions, and obtaining the motion vector between the two frames at the current block position;
Step S4: moving the image block by the step length and repeating step S3 until the extraction of the whole image is completed;
Step S5: calculating the average motion vector of the two frames from all the obtained motion vectors;
Step S6: reversely translating the following frame according to the obtained average motion vector to obtain an image in which the shake error is reduced;
Step S7: extracting the foreground from the shake-compensated frames using a Gaussian mixture model.
Step S6 specifically includes:
Step S61: judging whether the magnitude of the current average motion vector is greater than or equal to 2; if so, executing step S62, otherwise executing step S64;
Step S62: enlarging the size of the image block to twice its current size, repeating steps S3 to S5 to obtain another average motion vector, taking the mean of this average motion vector and the original average motion vector as the final average motion vector, and executing step S63;
Step S63: reversely translating the following frame according to the final average motion vector to obtain an image in which the shake error is reduced;
Step S64: enlarging the binary image of the previous frame and the binary image of the following frame to 10 times their current size, repeating steps S3 to S5 to obtain another average motion vector, taking this average motion vector as the final average motion vector, and executing step S65;
Step S65: enlarging the following frame to 10 times its current size, reversely translating the enlarged following frame according to the final average motion vector, and then shrinking it back to its original size to obtain an image in which the shake error is reduced.
The initial size of the image block is 50 × 50 pixels, and the step length is 25 pixels.
The motion vector between the two frames is specifically:
V = (Cx_af − Cx_be, Cy_af − Cy_be)
where Cx_be and Cy_be are the x and y coordinates of the center of gravity of the region extracted by the image block from the previous frame, and Cx_af and Cy_af are the x and y coordinates of the center of gravity of the region extracted by the image block from the following frame.
The coordinates of the center of gravity are specifically:
Cx = (Σ_{i=1}^{m} w_i x_i) / W,  Cy = (Σ_{i=1}^{m} w_i y_i) / W,  W = Σ_{i=1}^{m} w_i
where Cx is the x coordinate of the center of gravity, Cy is the y coordinate of the center of gravity, W is the sum of the pixel values of all pixels in the region extracted by the image block, w_i is the pixel value of pixel i, x_i is the x coordinate of pixel i, y_i is the y coordinate of pixel i, and m is the total number of pixels in the region extracted by the image block.
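By way of illustration only, the center-of-gravity formula above can be sketched in Python/NumPy as follows; the function name centroid is an assumption of this sketch and is not part of the patent text:
import numpy as np

def centroid(block):
    # block: 2-D array of the pixel values w_i covered by the image block
    # returns (Cx, Cy) = (sum(w_i * x_i) / W, sum(w_i * y_i) / W), with W = sum(w_i)
    ys, xs = np.indices(block.shape)   # y_i and x_i for every pixel i
    W = block.sum()
    if W == 0:                         # block contains no edge pixels
        return None
    return (block * xs).sum() / W, (block * ys).sum() / W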
Compared with the prior art, the present invention has the following advantages:
1) Converting the color image into a binary image both speeds up the computation and reduces the memory required by the program. On this basis, extracting with a sliding image block makes it easy for the algorithm to cover the whole image iteratively, and averaging the motion vectors computed over several passes amplifies the differences, improves sensitivity and therefore improves the shake correction effect.
2) If there is a large difference in the numerator of the center-of-gravity expression of an image block, the size of the block can be adjusted so that the denominator of the center-of-gravity calculation becomes larger; the change in the numerator is thereby weakened and the center of gravity is computed more accurately. The following frame is then reversely translated according to the average motion vector to obtain an image in which the shake error is reduced. If the offset produced by the shake is small, the two frames are enlarged N times, the motion vector is computed from the centers of gravity, the following frame is reversely translated according to the average motion vector, and the result is finally reduced by a factor of N to obtain an image in which the shake error is reduced.
Description of the drawings
Fig. 1 is a flow chart of the main steps of the method of the invention;
Fig. 2 is the previous frame image of an example;
Fig. 3 is the following frame image of an example;
Fig. 4(a) is an image of non-shaking video 1 in the experiment;
Fig. 4(b) is the foreground image extracted from non-shaking video 1 by the Gaussian mixture model method in the experiment;
Fig. 4(c) is the foreground image extracted from non-shaking video 1 by the method of the present application in the experiment;
Fig. 5(a) is an image of shaking video 2 in the experiment;
Fig. 5(b) is the foreground image extracted from shaking video 2 by the Gaussian mixture model method in the experiment;
Fig. 5(c) is the foreground image extracted from shaking video 2 by the method of the present application in the experiment;
Fig. 6(a) is an image of shaking video 3 in the experiment;
Fig. 6(b) is the foreground image extracted from shaking video 3 by the Gaussian mixture model method in the experiment;
Fig. 6(c) is the foreground image extracted from shaking video 3 by the method of the present application in the experiment.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and a specific operating process, but the protection scope of the present invention is not limited to the following embodiment.
During video shooting, if the camera moves, an object that is actually stationary will change its position in the picture. If its position changes from (x1, y1) to (x2, y2), the motion vector is:
V = (x2 − x1, y2 − y1)
The direct result of camera shake is that the objects in the video undergo the same displacement. Block motion estimation divides the current frame into blocks of equal size, searches the previous frame for the block most similar to each of these blocks, and compares the coordinates of the two blocks to obtain the displacement of each image block. Considering the displacements of all image blocks together gives the motion of the capturing device, i.e. the motion vector of the background.
The block motion estimation algorithm used here is based on moments. A moment is a numerical characteristic of a random variable: for a random variable X, its k-th order central moment is E(X − EX)^k, where EX is the expectation of the random variable. Moments of order n can be used to describe an image block and to estimate its motion vector; in the present application, the center of gravity, which is the first moment of the image block, is used to estimate the motion of the block.
Fig. 2 and Fig. 3 show image blocks at the same position in the frames of a video at time t−1 and time t. Because of camera shake, the image inside the block has been translated (the open circle has shifted), and the center of gravity of the block changes with it. If the difference between the pixels removed from the previous frame and the pixels moved into the following frame is ignored, the movement vector of the image block can be regarded as the motion vector of the image.
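A small numerical check, reusing the hypothetical centroid helper sketched earlier, illustrates this assumption: when the content of a block is purely translated, the shift of its center of gravity equals the translation.
import numpy as np

block_t1 = np.zeros((50, 50))
block_t1[20:25, 10:15] = 255.0                               # object inside the block at time t-1
block_t = np.roll(np.roll(block_t1, 2, axis=0), 3, axis=1)   # same object moved 3 px right, 2 px down

cx1, cy1 = centroid(block_t1)
cx2, cy2 = centroid(block_t)
print(cx2 - cx1, cy2 - cy1)                                  # prints 3.0 2.0, equal to the imposed shift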
A foreground extraction method under a shaking background, as shown in Fig. 1, comprises:
Step S1: initializing the size of the image block, the step length of the block movement and the position of the block, where the initial size of the image block is 50 × 50 pixels and the step length is 25 pixels;
Step S2: reading two adjacent frames from the video, and obtaining the binary image be_frame of the previous frame and the binary image af_frame of the following frame;
Step S3: extracting the image block at the same position in the binary image of the previous frame and the binary image of the following frame, calculating the difference between the centers of gravity of the two extracted regions, and obtaining the motion vector between the two frames at the current block position;
Step S4: moving the image block by the step length and repeating step S3 until the extraction of the whole image is completed;
Step S5: calculating the average motion vector of the two frames from all the obtained motion vectors,
where the motion vector between the two frames is specifically:
V = (Cx_af − Cx_be, Cy_af − Cy_be)
where Cx_be and Cy_be are the x and y coordinates of the center of gravity of the region extracted by the image block from the previous frame, and Cx_af and Cy_af are the x and y coordinates of the center of gravity of the region extracted by the image block from the following frame;
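Steps S1 to S5 just described can be sketched as follows. This is an illustrative, non-authoritative sketch: the function name average_motion_vector is an assumption, and the centroid helper from the earlier sketch is reused.
import numpy as np

def average_motion_vector(be_frame, af_frame, block=50, step=25):
    # be_frame, af_frame: binary edge images of the previous and following frames
    h, w = be_frame.shape
    vectors = []
    for y in range(0, h - block + 1, step):                       # S4: slide the block over the image
        for x in range(0, w - block + 1, step):
            c_be = centroid(be_frame[y:y + block, x:x + block])   # S3: center of gravity in the previous frame
            c_af = centroid(af_frame[y:y + block, x:x + block])   #     center of gravity in the following frame
            if c_be is None or c_af is None:
                continue                                          # skip blocks without edge pixels
            # motion vector at this block position: (Cx_af - Cx_be, Cy_af - Cy_be)
            vectors.append((c_af[0] - c_be[0], c_af[1] - c_be[1]))
    if not vectors:
        return np.zeros(2)
    return np.mean(np.array(vectors), axis=0)                     # S5: average motion vector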
Step S6: reversely translating the following frame according to the obtained average motion vector to obtain an image in which the shake error is reduced, which specifically includes:
Step S61: judging whether the magnitude of the current average motion vector is greater than or equal to 2; if so, executing step S62, otherwise executing step S64;
Step S62: enlarging the size of the image block to twice its current size, repeating steps S3 to S5 to obtain another average motion vector, taking the mean of this average motion vector and the original average motion vector as the final average motion vector, and executing step S63;
Step S63: reversely translating the following frame according to the final average motion vector to obtain an image in which the shake error is reduced;
Step S64: enlarging the binary image of the previous frame and the binary image of the following frame to 10 times their current size, repeating steps S3 to S5 to obtain another average motion vector, taking this average motion vector as the final average motion vector, and executing step S65;
Step S65: enlarging the following frame to 10 times its current size, reversely translating the enlarged following frame according to the final average motion vector, and then shrinking it back to its original size to obtain an image in which the shake error is reduced.
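The branching of step S6 might be sketched as follows, under the reading that the comparison in step S61 is on the magnitude of the average motion vector. OpenCV's cv2.resize and cv2.warpAffine are used for scaling and reverse translation, average_motion_vector is the helper from the sketch above, and the function name compensate_shake is an assumption; this is an illustration, not the authoritative implementation.
import cv2
import numpy as np

def compensate_shake(be_frame, af_frame, af_color, block=50, step=25):
    # be_frame/af_frame: binary edge images; af_color: the following frame to be translated back
    v = average_motion_vector(be_frame, af_frame, block, step)
    h, w = af_color.shape[:2]
    if np.linalg.norm(v) >= 2:                                            # S61 -> S62/S63: larger shake
        v2 = average_motion_vector(be_frame, af_frame, 2 * block, step)   # S62: block enlarged to twice its size
        v_final = (v + v2) / 2.0
        M = np.float32([[1, 0, -v_final[0]], [0, 1, -v_final[1]]])
        return cv2.warpAffine(af_color, M, (w, h))                        # S63: reverse translation
    # S61 -> S64/S65: smaller shake, work on 10x enlarged images for sub-pixel accuracy
    be_big = cv2.resize(be_frame, None, fx=10, fy=10, interpolation=cv2.INTER_NEAREST)
    af_big = cv2.resize(af_frame, None, fx=10, fy=10, interpolation=cv2.INTER_NEAREST)
    v_final = average_motion_vector(be_big, af_big, block, step)          # S64
    big = cv2.resize(af_color, None, fx=10, fy=10, interpolation=cv2.INTER_LINEAR)
    M = np.float32([[1, 0, -v_final[0]], [0, 1, -v_final[1]]])
    shifted = cv2.warpAffine(big, M, (10 * w, 10 * h))                    # S65: translate the enlarged frame
    return cv2.resize(shifted, (w, h), interpolation=cv2.INTER_AREA)      # back to the original size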
The coordinates of the center of gravity are specifically:
Cx = (Σ_{i=1}^{m} w_i x_i) / W,  Cy = (Σ_{i=1}^{m} w_i y_i) / W,  W = Σ_{i=1}^{m} w_i
where Cx is the x coordinate of the center of gravity, Cy is the y coordinate of the center of gravity, W is the sum of the pixel values of all pixels in the region extracted by the image block, w_i is the pixel value of pixel i, x_i is the x coordinate of pixel i, y_i is the y coordinate of pixel i, and m is the total number of pixels in the region extracted by the image block.
In the block analysis of a video sequence, if the difference between the pixels removed from the previous frame and the pixels moved into the following frame is ignored, the movement vector of the block center of gravity can be regarded as the motion vector of the block. In most cases, however, there is some difference between the removed and the incoming pixels, and sometimes this difference is large. To reduce its influence as much as possible, the present application adopts the following two measures:
A color image described in the RGB color space often has large pixel values, and when the image blocks are large the computation of the block center of gravity becomes expensive. The present application therefore uses the Canny algorithm to convert the color image into an edge binary image, which both speeds up the computation and reduces the memory required by the program.
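With OpenCV, for example, this conversion can look like the following sketch; the file name and the Canny thresholds 100 and 200 are illustrative assumptions, not values specified by the patent.
import cv2

frame = cv2.imread('frame.png')                    # a color frame taken from the video (hypothetical file)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # drop the RGB description of the pixels
be_frame = cv2.Canny(gray, 100, 200)               # black-background binary edge image (values 0 or 255)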
(1) Motion vector computation when the shake between the two frames is larger
Directly comparing blocks of the binary images can, however, be problematic, because the relationship between the removed and the incoming pixels cannot be determined: if the removed pixels are black and the incoming pixels are white, the center of gravity of the block shifts in one direction, and in the opposite case it shifts the other way. In the experiments of this application, black-background binary edge images of the two frames are used for the block comparison; in this way, the pixels removed from the previous frame and those moved into the following frame are essentially black points, and the motion vector of the block center of gravity is obtained. When the image blocks are small, a large difference in the numerator of the center-of-gravity expression causes a large error in the computed center of gravity. To reduce this error, the block size can be enlarged so that the denominator of the center-of-gravity calculation becomes larger; the change in the numerator is thereby weakened and the center of gravity is computed more accurately. Finally, the current frame is reversely translated according to the average motion vector to obtain an image in which the shake error is reduced.
(2) Motion vector computation when the shake between the two frames is smaller
Image translation can only be performed at the pixel level. If the offset produced by the shake is small, for example less than one pixel, the computed center-of-gravity offset will contain an error. Therefore, when the center-of-gravity offset computed by the current method is 0 or 1, the two frames are enlarged N times, the motion vector is computed from the centers of gravity, the current frame is reversely translated according to the average motion vector, and the result is finally reduced by a factor of N to obtain an image in which the shake error is reduced.
Step S7: extracting the foreground from the shake-compensated frames using a Gaussian mixture model. Foreground extraction with a Gaussian mixture model is by now a mature prior-art technique and is not described in detail in this application; a brief introduction follows:
Given a set of observed data X = {x_1, x_2, ..., x_N}, the data are assumed to be generated by M single Gaussian models in total, but it is unknown which single Gaussian model a particular data point x_i belongs to; the proportion α_j of each single Gaussian model in the mixture, its mathematical expectation μ_j and its covariance C_j are all unknown. Samples mixed in this way from different Gaussian distributions form a Gaussian mixture model. The probability density function of the Gaussian mixture model is
p(x) = Σ_{j=1}^{M} α_j N(x; μ_j, C_j),
where N(x; μ_j, C_j) denotes the density of the j-th single Gaussian model and the mixing weights satisfy Σ_{j=1}^{M} α_j = 1. Let θ = {α_j, μ_j, C_j : j = 1, ..., M} denote all parameters of the Gaussian mixture model, which are estimated from the sample set X. The probability density of the sample set X is then
p(X; θ) = Π_{i=1}^{N} p(x_i; θ).
The Gaussian mixture model is used to build a background model for the video. When a new pixel is read, it is matched one by one against the known single Gaussian models, in order of model priority. If the pixel is found to match a certain single Gaussian model, the pixel is regarded as a background point and the parameters of that single Gaussian model are updated with the pixel. If the pixel does not belong to any single Gaussian model, it is a foreground point.
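For step S7, OpenCV's built-in Gaussian-mixture background subtractor can serve as the mature prior-art component. The following sketch ties it to the shake compensation above; the video file name and the parameter values are illustrative assumptions, and compensate_shake is the helper sketched earlier.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)

cap = cv2.VideoCapture('shaking_video.avi')        # hypothetical input video
ok, prev = cap.read()
while ok:
    ok, curr = cap.read()
    if not ok:
        break
    be = cv2.Canny(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), 100, 200)
    af = cv2.Canny(cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY), 100, 200)
    compensated = compensate_shake(be, af, curr)   # shake-adjusted following frame
    mask = subtractor.apply(compensated)           # pixels matching no Gaussian component -> foreground
    prev = curr
cap.release()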
Experiment
For foreground extraction under a complex background, two algorithms are used here: the Gaussian mixture model (prior art) and the method of the present application. To compare and analyze the adaptability of the algorithms to different video scenes, 20 groups of videos were recorded for the foreground extraction experiments, of which 5 groups are non-shaking videos and 15 groups are shaking videos. Foreground extraction was performed on each group of videos with both algorithms. Three of the 20 groups are presented below.
For each sample group, three frames of the video are shown, and for each frame the extraction results of the Gaussian mixture model and of the method of the present application are displayed. The regions circled in red in the figures are the regions where the extraction result of the Gaussian mixture model differs noticeably from that of the present method.
The video shown in Fig. 4(a)(b)(c) is a non-shaking video; as can be seen from the figure, when there is no shake the extraction results of the Gaussian mixture model and of the present method show no significant difference. Fig. 5(a)(b)(c) and Fig. 6(a)(b)(c) show the foreground extraction results for shaking videos. As can be seen from the figures, the algorithm of the present application gives extraction results largely consistent with the Gaussian mixture model, but with fewer wrongly extracted foreground regions. This shows that the algorithm of the present application corrects the movement between the frames to a certain degree, so that the amount of wrongly extracted foreground is reduced.
Each of the 20 experimental videos contains roughly 100 to 200 frames. From the two classes of video, 10 frames and 20 frames respectively were chosen at random. Table 1 gives, for the randomly selected frames of each video, the number and proportion of frames for which the foreground extracted by the present algorithm is better than that of the Gaussian mixture model, where V1 to V5 are non-shaking videos and the rest are shaking videos.
Table 1 Comparison of foreground extraction results
In the non-shaking videos, the performance of the present algorithm is consistent with that of the Gaussian mixture model, so the proportion of improved frames is always 0. In the shaking videos, if the selected frames happen to shake violently, the improvement is obvious; if the shake amplitude of the selected frames is very small, the room for improvement correspondingly shrinks. Taking the data of the table above together, the improvement of the present method for foreground extraction under complex scenes is about 21%.
The above experiments show that, compared with the Gaussian mixture model, the present algorithm adapts well to foreground extraction from videos with shake.

Claims (5)

1. A foreground extraction method under a shaking background, characterized by comprising:
Step S1: initializing the size of an image block, the step length of the block movement and the position of the block;
Step S2: reading two adjacent frames from a video, and obtaining the binary image be_frame of the previous frame and the binary image af_frame of the following frame;
Step S3: extracting the image block at the same position in the binary image of the previous frame and the binary image of the following frame, calculating the difference between the centers of gravity of the two extracted regions, and obtaining the motion vector between the two frames at the current block position;
Step S4: moving the image block by the step length and repeating step S3 until the extraction of the whole image is completed;
Step S5: calculating the average motion vector of the two frames from all the obtained motion vectors;
Step S6: reversely translating the following frame according to the obtained average motion vector to obtain an image in which the shake error is reduced;
Step S7: extracting the foreground from the shake-compensated frames using a Gaussian mixture model.
2. The foreground extraction method under a shaking background according to claim 1, characterized in that step S6 specifically includes:
Step S61: judging whether the magnitude of the current average motion vector is greater than or equal to 2; if so, executing step S62, otherwise executing step S64;
Step S62: enlarging the size of the image block to twice its current size, repeating steps S3 to S5 to obtain another average motion vector, taking the mean of this average motion vector and the original average motion vector as the final average motion vector, and executing step S63;
Step S63: reversely translating the following frame according to the final average motion vector to obtain an image in which the shake error is reduced;
Step S64: enlarging the binary image of the previous frame and the binary image of the following frame to 10 times their current size, repeating steps S3 to S5 to obtain another average motion vector, taking this average motion vector as the final average motion vector, and executing step S65;
Step S65: enlarging the following frame to 10 times its current size, reversely translating the enlarged following frame according to the final average motion vector, and then shrinking it back to its original size to obtain an image in which the shake error is reduced.
3. The foreground extraction method under a shaking background according to claim 1 or 2, characterized in that the initial size of the image block is 50 × 50 pixels and the step length is 25 pixels.
4. The foreground extraction method under a shaking background according to claim 1, characterized in that the motion vector between the two frames is specifically:
V = (Cx_af − Cx_be, Cy_af − Cy_be)
where Cx_be and Cy_be are the x and y coordinates of the center of gravity of the region extracted by the image block from the previous frame, and Cx_af and Cy_af are the x and y coordinates of the center of gravity of the region extracted by the image block from the following frame.
5. The foreground extraction method under a shaking background according to claim 1, characterized in that the coordinates of the center of gravity are specifically:
Cx = (Σ_{i=1}^{m} w_i x_i) / W,  Cy = (Σ_{i=1}^{m} w_i y_i) / W,  W = Σ_{i=1}^{m} w_i
where Cx is the x coordinate of the center of gravity, Cy is the y coordinate of the center of gravity, W is the sum of the pixel values of all pixels in the region extracted by the image block, w_i is the pixel value of pixel i, x_i is the x coordinate of pixel i, y_i is the y coordinate of pixel i, and m is the total number of pixels in the region extracted by the image block.
CN201710083910.4A 2017-02-16 2017-02-16 Foreground extraction method under shaking background Active CN106651918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710083910.4A CN106651918B (en) 2017-02-16 2017-02-16 Foreground extraction method under shaking background


Publications (2)

Publication Number Publication Date
CN106651918A (en) 2017-05-10
CN106651918B CN106651918B (en) 2020-01-31

Family

ID=58846281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710083910.4A Active CN106651918B (en) 2017-02-16 2017-02-16 Foreground extraction method under shaking background

Country Status (1)

Country Link
CN (1) CN106651918B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1211872A (en) * 1997-06-04 1999-03-24 株式会社日立制作所 Image signal system converter and TV set
CN1647113A (en) * 2002-04-11 2005-07-27 皇家飞利浦电子股份有限公司 Motion estimation unit and method of estimating a motion vector
US8325810B2 (en) * 2002-06-19 2012-12-04 Stmicroelectronics S.R.L. Motion estimation method and stabilization method for an image sequence
CN1921628A (en) * 2005-08-23 2007-02-28 松下电器产业株式会社 Motion vector detection apparatus and motion vector detection method
CN101090456A (en) * 2006-06-14 2007-12-19 索尼株式会社 Image processing device and method, image pickup device and method
CN104410855A (en) * 2014-11-05 2015-03-11 广州中国科学院先进技术研究所 Jitter detection method of monitoring video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李尊民: "Basic Principles of Automatic Tracking of Television Images", 30 September 1998 *
胡彦婷: "A nonlinear aspect-ratio conversion method based on visual perception", Journal of Image and Graphics *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697689A (en) * 2017-10-23 2019-04-30 北京京东尚科信息技术有限公司 Storage medium, electronic equipment, image synthesizing method and device
CN109697689B (en) * 2017-10-23 2023-09-01 北京京东尚科信息技术有限公司 Storage medium, electronic device, video synthesis method and device
CN109724992A (en) * 2018-07-23 2019-05-07 永康市柴迪贸易有限公司 Cabinet for TV cleannes analytical mechanism
CN110458820A (en) * 2019-08-06 2019-11-15 腾讯科技(深圳)有限公司 A kind of multimedia messages method for implantation, device, equipment and storage medium

Also Published As

Publication number Publication date
CN106651918B (en) 2020-01-31

Similar Documents

Publication Publication Date Title
WO2020238560A1 (en) Video target tracking method and apparatus, computer device and storage medium
CN107358623B (en) Relevant filtering tracking method based on significance detection and robustness scale estimation
EP2864933B1 (en) Method, apparatus and computer program product for human-face features extraction
CN111861925B (en) Image rain removing method based on attention mechanism and door control circulation unit
CN111127308B (en) Mirror image feature rearrangement restoration method for single sample face recognition under partial shielding
CN108109162B (en) Multi-scale target tracking method using self-adaptive feature fusion
CN112184759A (en) Moving target detection and tracking method and system based on video
CN102156995A (en) Video movement foreground dividing method in moving camera
CN104751484B (en) A kind of moving target detecting method and the detecting system for realizing moving target detecting method
CN109919002B (en) Yellow stop line identification method and device, computer equipment and storage medium
CN112581540B (en) Camera calibration method based on human body posture estimation in large scene
CN110334589A (en) A kind of action identification method of the high timing 3D neural network based on empty convolution
CN107862680B (en) Target tracking optimization method based on correlation filter
CN110827304B (en) Traditional Chinese medicine tongue image positioning method and system based on deep convolution network and level set method
CN106651918A (en) Method for extracting foreground under shaking background
CN102314591B (en) Method and equipment for detecting static foreground object
CN105095898B (en) A kind of targeted compression cognitive method towards real-time vision system
CN114170570A (en) Pedestrian detection method and system suitable for crowded scene
CN101184235B (en) Method and apparatus for implementing background image extraction from moving image
Yang et al. Background extraction from video sequences via motion-assisted matrix completion
CN109784215B (en) In-vivo detection method and system based on improved optical flow method
CN113989556A (en) Small sample medical image classification method and system
Chen Moving object detection based on background extraction
Li et al. CDMY: A lightweight object detection model based on coordinate attention
CN111145221A (en) Target tracking algorithm based on multi-layer depth feature extraction

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant