CN101277447A - Method for rapidly predicting frame space of aerophotographic traffic video - Google Patents

Method for rapidly predicting frame space of aerophotographic traffic video Download PDF

Info

Publication number
CN101277447A
Authority
CN
China
Prior art keywords
frame
search
execution
aerophotographic
estimation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200810104091
Other languages
Chinese (zh)
Other versions
CN100579228C (en)
Inventor
罗喜伶
施健勇
陈煦阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Beijing University of Aeronautics and Astronautics
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN 200810104091 priority Critical patent/CN100579228C/en
Publication of CN101277447A publication Critical patent/CN101277447A/en
Application granted granted Critical
Publication of CN100579228C publication Critical patent/CN100579228C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

A fast inter-frame prediction method for aerial traffic video comprises: step 100, acquiring an image frame sequence and judging whether the frame currently to be processed is the first frame; if so, performing step 200, otherwise performing step 300; step 200, intra-coding the current frame without region segmentation; step 300, determining the search start point for global motion estimation, performing global motion estimation on the current frame, and obtaining the global motion vector and the best-match frame difference of the current frame; step 400, comparing the best-match frame difference with a predetermined threshold to judge whether a scene change occurs in the current frame; if so, performing step 200, otherwise performing step 500; step 500, segmenting each frame without a scene change into a background region and a motion region before predictive coding; step 600, performing motion estimation on the current frame macroblock by macroblock and carrying out inter-frame predictive coding based on the background/motion segmentation. The invention reduces the time spent on inter-frame prediction while keeping the reconstructed image quality and bit rate essentially unchanged.

Description

Method for rapidly predicting frame space of aerophotographic traffic video
Technical field
The present invention relates to a video coding method applicable to space-based traffic monitoring, and in particular to a fast inter-frame prediction method based on background and motion region segmentation. It belongs to the fields of traffic monitoring and video coding.
Background technology
With the worsening of urban traffic congestion in recent years, traditional ground-based traffic monitoring equipment has gradually exposed inherent defects such as a narrow monitoring range and a lack of macroscopic information.
To address these defects, the United States and several developed European countries (Spain, France, etc.) have been exploring aerial road-traffic surveillance technology since the 1990s. This technology uses a space-based platform carrying a video sensor to monitor ground traffic. It offers a wide monitoring range and high flexibility, can capture the macroscopic traffic situation, and effectively compensates for the shortcomings of ground-based monitoring; it has therefore become a research focus in the intelligent transportation field both at home and abroad in recent years.
Aerial traffic surveillance poses a contradiction between its demands — high image quality and low bit rate — and the limited computing capability of the space-based platform. This presents a great challenge for fast and efficient video coding algorithms.
The H.264 video coding standard achieves a very high compression ratio: at the same reconstructed image quality, the required bit rate is 51% of that of H.263 and 61% of that of MPEG-4, which matches the high-quality, low-bit-rate demand of aerial traffic surveillance. Its drawback is slow encoding, mainly because the H.264 inter-frame prediction process uses a complex variable-block-size motion estimation procedure.
Variable-block-size motion estimation means that a macroblock has 7 inter partition modes (16×16, 16×8, 8×16, 8×8, 8×4, 4×8, 4×4), 13 intra prediction modes, and one Skip mode. An H.264 encoder must traverse all modes and select, via the rate-distortion model, the mode with the minimum rate-distortion cost as the final coding mode. In practice the 13 intra prediction modes contribute little and are often ignored. Computing the rate-distortion cost of each inter partition mode requires motion estimation, which consists of two stages: search start point prediction and best-match point search. Although variable-block-size motion estimation reduces the bit rate well and improves compression efficiency and image quality, it also increases encoder complexity, hindering the application of H.264 on space-based platforms. A dedicated fast inter-frame prediction algorithm tailored to the characteristics of aerial traffic video is therefore needed, to simplify inter mode selection and remove part of the motion estimation search points.
The Department of Electronic and Information Engineering of The Hong Kong Polytechnic University has studied fast compression of traffic video for ground traffic monitoring: motion and background regions are segmented by frame differencing and background blocks are coded in SKIP mode, which effectively improves coding speed. However, because video sensor jitter is not considered, applying this method to aerial video causes a rapid drop in image quality.
The Vision Technologies Laboratory in Princeton has studied video sensor monitoring on space-based platforms; its video compression encoder uses object-based image coding. Moving objects are detected in the image, the moving-target regions are compressed at a low ratio with high quality, and the background is compressed with a high-ratio mosaic method. This compression method achieves better image quality than video compression standards at the same compression ratio, but it requires moving-object detection before compression and has high computational complexity.
Summary of the invention
Purpose of the invention: to overcome the difficulty of applying H.264 coding on space-based traffic monitoring platforms caused by its large time consumption. By segmenting the video into background and motion regions and applying different inter-frame prediction methods to the two regions, the invention reduces inter-frame prediction time while keeping the reconstructed image quality and bit rate essentially unchanged.
The object of the invention is realized as follows: the fast inter-frame prediction method for aerial traffic video comprises the following steps:
Step 100: acquire an image frame sequence and judge whether the frame currently to be processed is the first frame of the sequence; if it is the first frame, perform step 200, otherwise perform step 300;
Step 200: apply conventional intra-frame coding to the current frame without region segmentation, completing the coding of this frame;
Step 300: determine the search start point for global motion estimation, perform global motion estimation on the current frame, and obtain the global motion vector and the best-match frame difference of the current frame;
Step 400: compare the best-match frame difference with a predetermined threshold to judge whether a scene change occurs in the current frame; if a scene change occurs, perform step 200, otherwise perform step 500;
Step 500: segment the frame in which no scene change occurs into a background region and a motion region;
Step 600: perform motion estimation on the current frame macroblock by macroblock and carry out inter-frame predictive coding based on the background/motion segmentation, completing the coding of this frame.
Principle of the invention: first, an image frame sequence is acquired. If the current frame is the first frame of the sequence, conventional intra-frame coding is performed, after which the next frame is processed. Otherwise, with the previous frame as reference, global motion estimation is performed on the current frame to obtain the global motion vector and the best-match frame difference. If this frame difference is below a predetermined threshold, the frame is segmented into background and motion regions, which are predictively coded separately; otherwise the frame is intra-coded, completing the coding of this frame. By applying different inter-frame prediction methods to the background and motion regions, the computation of mode selection and motion estimation is reduced, realizing more effective compression of aerial traffic video.
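The per-frame decision flow described above can be sketched as follows (an illustrative sketch only: the function name, the mode labels, and the scalar interface are our assumptions, not taken from the patent):

```python
def choose_coding_mode(frame_index, best_match_sad, threshold):
    """Per-frame decision of steps 100-600 (simplified sketch).

    Returns "intra" for frames coded without region segmentation
    (step 200) and "inter_segmented" for frames that go through
    background/motion segmentation and inter prediction (steps 500-600).
    best_match_sad is the best-match frame difference produced by global
    motion estimation (step 300); it is ignored for the first frame.
    """
    if frame_index == 0:                 # step 100: first frame of sequence
        return "intra"                   # step 200: plain intra coding
    if best_match_sad > threshold:       # step 400: scene change detected
        return "intra"                   # step 200 again
    return "inter_segmented"             # steps 500 and 600
```

Only the first frame and detected scene changes are intra-coded; every other frame is inter-coded with background/motion segmentation.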
Compared with the prior art, the invention has the following advantages:
(1) The invention overcomes the difficulty of applying H.264 coding on space-based traffic monitoring platforms caused by its large time consumption. By segmenting the video into background and motion regions and applying different inter-frame prediction methods to the two regions, inter-frame prediction time is reduced while the reconstructed image quality and bit rate stay essentially unchanged. The invention reduces inter-frame prediction time by 21.6% on average while keeping the reconstructed image quality and bit rate essentially unchanged.
(2) By segmenting the image into background and motion regions and applying different inter-frame prediction methods to the two regions to reduce the computation of mode selection and motion estimation, the invention adapts to the small global motion that characterizes aerial traffic video.
(3) The method is fully compatible with the H.264 standard; no dedicated video decoder needs to be developed.
Description of drawings
Fig. 1 is the overall flowchart of the method of the invention;
Fig. 2 is the flowchart of global motion estimation of the invention;
Fig. 3 is the flowchart of background and motion region segmentation of the invention;
Fig. 4 is the flowchart of motion estimation of the invention;
Fig. 5 is the flowchart of the search strategy for background blocks of the invention;
Fig. 6 is the flowchart of the search strategy for motion blocks of the invention;
Fig. 7 shows the region segmentation result of one frame obtained with the method of the invention,
where Fig. 7a is a frame from the image frame sequence,
and Fig. 7b shows the effect after this frame is segmented into background and motion regions.
Embodiment
As shown in Fig. 1, the invention comprises the following steps:
Step 100: acquire an image frame sequence and judge whether the frame currently to be processed is the first frame of the sequence; if it is the first frame, perform step 200, otherwise perform step 300;
Step 200: apply conventional intra-frame coding to the current frame without region segmentation, completing the coding of this frame;
Step 300: determine the search start point for global motion estimation, perform global motion estimation on the current frame, and obtain the global motion vector and the best-match frame difference of the current frame;
Step 400: compare the best-match frame difference with a predetermined threshold to judge whether a scene change occurs in the current frame; if a scene change occurs, perform step 200, otherwise perform step 500;
Step 500: segment the frame in which no scene change occurs into a background region and a motion region;
Step 600: perform motion estimation on the current frame macroblock by macroblock and carry out inter-frame predictive coding based on the background/motion segmentation, completing the coding of this frame.
The above scheme proposes a method of segmenting background and motion regions based on global motion estimation. To avoid inaccurate segmentation caused by strong global motion of the image, global motion estimation is performed before segmentation. Common global motion estimation uses a simplified two-parameter translation model:

$$x' = G_x + x, \qquad y' = G_y + y \qquad (1)$$

where (x, y) and (x′, y′) are the coordinates of the best matching points in the current frame and the previous frame respectively, and (G_x, G_y) is the global motion vector.
The invention improves the global motion estimation method. As shown in Fig. 2, step 300 comprises the following steps:
Step 310: determine the search start point of global motion estimation: for a frame other than the first, judge whether the global motion vector of the previous frame exists; if it exists, use that vector as the search start point of global motion estimation; if not, use the zero vector as the search start point;
Step 320: take the previous frame as the reference frame, subsample both the reference frame and the current frame by keeping 1 pixel out of every 2×2 (one row out of every two rows and one column out of every two columns), and search with the diamond template;
Step 330: compute the frame difference by formula (2) and judge whether the point with the minimum frame difference in the search template is the centre of the diamond search template or has reached the search window border; if so, end the search and perform step 340; otherwise take the stopping point as the new search start point and return to step 320 to continue searching;
Step 340: record the frame difference obtained in step 330 as the best-match frame difference, and take 4 times the motion vector of the template centre relative to the search start point at the end of the search as the global motion vector, completing one global motion estimation.
$$\mathrm{SAD}_g^n(V_x, V_y) = \sum_{y \in Y} \sum_{x \in X} \left| C_{xy} - R_{(x-V_x)(y-V_y)} \right| \qquad (2)$$
where C and R denote the luma values of the original image of frame n and the reconstructed image of frame n−1 respectively; (x, y) denotes pixel coordinates; X is the set {4m | 0 ≤ 4m ≤ w, m ∈ Z} and Y is the set {4m | 0 ≤ 4m ≤ h, m ∈ Z}, where w and h are the numbers of pixels per row and per column of the image frame; (V_x, V_y) is the motion vector of the template centre relative to the search start point during the search. When the point with the minimum frame difference in the search template is the template centre or reaches the search window border, (G_x, G_y) = (V_x, V_y).
When the frame difference is computed with formula (2), if x < V_x or y < V_y, the point (x − V_x, y − V_y) lies outside the pixel coordinate range of the frame; points outside the border of the reconstructed image of frame n−1 are therefore extended by formula (3):

$$R_{(x-V_x)(y-V_y)} = R_{xy} \qquad (3)$$
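A minimal sketch of this global motion estimation, assuming the frames are NumPy luma arrays. The names `subsampled_sad` and `global_motion_search` are ours, the template here is a plain four-neighbour diamond rather than the exact template of the patent, and the 2×2 pre-subsampling plus the final ×4 scaling of step 340 are omitted for brevity:

```python
import numpy as np

def subsampled_sad(cur, ref, vx, vy):
    """Frame difference of formula (2): SAD over the pixels whose
    coordinates are multiples of 4, comparing the current frame with the
    reference shifted by (vx, vy).  A reference sample that falls outside
    the frame is replaced by the unshifted sample, as in formula (3)."""
    h, w = cur.shape
    sad = 0
    for y in range(0, h, 4):
        for x in range(0, w, 4):
            if 0 <= x - vx < w and 0 <= y - vy < h:
                r = int(ref[y - vy, x - vx])
            else:
                r = int(ref[y, x])   # formula (3): R_(x-Vx)(y-Vy) = R_xy
            sad += abs(int(cur[y, x]) - r)
    return sad

def global_motion_search(cur, ref, start=(0, 0), window=16):
    """Steps 320-340, simplified: descend over a four-neighbour diamond
    until the centre has the smallest SAD or the window border is hit;
    return the winning vector and its best-match frame difference."""
    vx, vy = start
    while True:
        cands = [(vx, vy), (vx + 1, vy), (vx - 1, vy),
                 (vx, vy + 1), (vx, vy - 1)]
        best = min(cands, key=lambda v: subsampled_sad(cur, ref, v[0], v[1]))
        if best == (vx, vy) or max(abs(best[0]), abs(best[1])) >= window:
            return best, subsampled_sad(cur, ref, best[0], best[1])
        vx, vy = best
```

On a synthetically shifted frame the search recovers the shift, and the returned SAD is the best-match frame difference used by step 400.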
The method of the invention for judging whether a scene change occurs is as follows:
Threshold T1 is computed according to formula (4). If the best-match frame difference is greater than T1, a scene change is considered to have occurred; the frame is intra-coded and no region segmentation is performed. Otherwise no scene change is considered to have occurred, and the frame is predictively coded.
$$T_1 = \frac{2}{n-2} \sum_{i=2}^{n-1} \mathrm{SAD}_g^i(G_x, G_y) \qquad (4)$$
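Formula (4) makes T1 adaptive: twice the running average of the best-match frame differences of the frames coded so far. A small sketch under that reading (the function names are ours):

```python
def scene_change_threshold(past_best_sads):
    """Threshold T1 of formula (4): twice the mean of the best-match
    frame differences SAD_g^i recorded for the previously coded frames."""
    return 2.0 * sum(past_best_sads) / len(past_best_sads)

def is_scene_change(current_best_sad, past_best_sads):
    """Step 400: intra-code the frame when its best-match frame
    difference exceeds T1, otherwise segment and inter-predict it."""
    return current_best_sad > scene_change_threshold(past_best_sads)
```

A frame whose best-match SAD is more than double the historical average is treated as a scene change.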
Fig. 3 is the flowchart of background and motion region segmentation; step 500 is specifically:
Step 510: register the current frame against the previous frame using the global motion vector;
Step 520: compute the SAD of the luma component in units of 4×4 blocks;
Step 530: take threshold T2 as the fixed value 300 and judge whether the SAD is less than T2; if so, perform step 540, otherwise perform step 550;
Step 540: judge the current block to be a background block and assign it to the background region;
Step 550: judge the current block to be a motion block and assign it to the motion region.
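Steps 510-550 can be sketched as follows, assuming the previous frame has already been registered with the global motion vector (the function name and the NumPy representation are our assumptions):

```python
import numpy as np

def segment_regions(cur, ref_aligned, t2=300):
    """Classify every 4x4 luma block: SAD below T2 means background block
    (step 540), otherwise motion block (step 550).  Returns a boolean
    map with True marking motion blocks."""
    h, w = cur.shape
    motion = np.zeros((h // 4, w // 4), dtype=bool)
    for by in range(h // 4):
        for bx in range(w // 4):
            c = cur[4 * by:4 * by + 4, 4 * bx:4 * bx + 4].astype(int)
            r = ref_aligned[4 * by:4 * by + 4, 4 * bx:4 * bx + 4].astype(int)
            motion[by, bx] = np.abs(c - r).sum() >= t2   # step 530
    return motion
```

A 4×4 block whose 16 luma samples each change by 50 has SAD 800 and is classified as motion; an unchanged block has SAD 0 and stays background.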
The above steps realize the background/motion segmentation of a frame to be predictively coded. In the inter-frame prediction method based on background and motion region segmentation, macroblocks belonging to the background region are not partitioned: if the current macroblock belongs to the background region and does not pass the SKIP-mode pre-judgement, it is coded directly in 16×16 mode without macroblock partitioning; similarly, if an 8×8 sub-macroblock belongs to the background region, no sub-macroblock partitioning is performed.
The flowchart of motion estimation is shown in Fig. 4. This part is further divided into two sub-parts, search start point prediction and search strategy selection. The motion estimation of step 600 is specifically as follows:
Step 610: according to the result of step 500, if the current block is a background block, perform step 620, otherwise perform step 640;
Step 620: set the start point prediction set to the set of the global motion vector and the median prediction vector;
Step 630: determine the search template and range according to the size of the block, completing the motion estimation of one block;
Step 640: set the start point prediction set to the set of the median prediction vector and the motion vectors of adjacent motion blocks in the current frame;
Step 650: determine the search template and range according to the rate-distortion cost of the start point, completing the motion estimation of one block.
The invention uses an improved start point prediction set: the prediction set of a background block need contain only the median predictor and the global motion vector, while the prediction set of a block belonging to the motion region contains, besides the median prediction vector, the motion vectors of the adjacent blocks in the current frame that belong to the motion region. The improved start point prediction set is given by formula (5):

$$V = \begin{cases} \{V_p, V_{GM}\} & MB_c \in R_b \\ \{V_p, V_0\} \cup \{V_m \mid 1 \le m \le 4,\ MB_m \in R_m\} & MB_c \in R_m \end{cases} \qquad (5)$$

where MB_c denotes the current macroblock; R_m and R_b denote the motion region and the background region respectively; MB_m denotes the m-th macroblock; V_p is the median prediction vector; V_0 is the zero vector; V_GM denotes the global motion vector.
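A literal reading of formula (5) as code (a sketch; the names are ours, and V_0 is taken to be the zero vector):

```python
def start_point_candidates(in_background, median_mv, global_mv,
                           neighbour_mvs_in_motion_region):
    """Start point prediction set of formula (5): background blocks use
    {median predictor, global motion vector}; motion blocks use the
    median predictor, the zero vector, and the motion vectors of the
    (up to four) neighbouring blocks that lie in the motion region."""
    if in_background:                       # MB_c in R_b
        return {median_mv, global_mv}
    return {median_mv, (0, 0)} | set(neighbour_mvs_in_motion_region)
```

Keeping the set small for background blocks is what removes most of the start point prediction work.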
The invention uses an improved motion estimation search strategy: in the background region of aerial traffic video a small search range is used. The detailed background-region search strategy is shown in Fig. 5; step 630 is specifically:
Step 631: judge whether the size of the background block is 16×16; if so, perform step 632, otherwise perform step 633;
Step 632: for a 16×16 background block, search once with the small diamond template and search range ±1, completing the motion estimation of one background block;
Step 633: for a background block whose size is not 16×16, do not search, completing the motion estimation of one background block.
For the motion region, the size of the search range is determined by the SAD value of the start point. The detailed motion-region search strategy is shown in Fig. 6; step 650 is specifically:
Step 651: compute the rate-distortion cost Cost of the search start point;
Step 652: judge whether Cost is less than threshold T3; if so, perform step 653, otherwise perform step 654;
Step 653: search once with the small diamond template and search range ±2, completing the motion estimation of one motion block;
Step 654: search once with the hexagon template and search range ±16, then search once more with the small diamond template and search range ±1, completing the motion estimation of one motion block.
A motion block is a 4×4 block belonging to the motion region. If the rate-distortion cost of the start point is less than threshold T3 (T3 is taken as the minimum SAD of the left, upper, and upper-right macroblocks), the small diamond template is used with search range ±2; otherwise the hexagon template is used with search range ±16, followed by one final small diamond template search.
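Steps 630-654 reduce to a small dispatch. The sketch below returns the list of (template, range) passes to run for a block; the function and template names are ours:

```python
def pick_search_passes(is_background, block_size, start_cost, t3):
    """Search strategy of Figs. 5 and 6: background 16x16 blocks get one
    small-diamond pass with range 1 (step 632); smaller background blocks
    are not searched at all (step 633); motion blocks get a small-diamond
    range-2 pass when the start point cost is already below T3 (step 653),
    otherwise a hexagon range-16 pass refined by a small-diamond range-1
    pass (step 654)."""
    if is_background:
        if block_size == (16, 16):
            return [("small_diamond", 1)]   # step 632
        return []                           # step 633: skip the search
    if start_cost < t3:                     # step 652
        return [("small_diamond", 2)]       # step 653
    return [("hexagon", 16), ("small_diamond", 1)]  # step 654
```

Most background blocks thus cost at most one tiny search, which is where the bulk of the time saving comes from.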
Fig. 7 shows the region segmentation result of one frame, where Fig. 7a is a frame from the image frame sequence and Fig. 7b shows the effect after this frame is segmented into background and motion regions; the black region is the background region and the rest is the motion region. By segmenting the frame of Fig. 7a into the background and motion regions shown in Fig. 7b and applying different search strategies to blocks in different regions, more effective data compression can be realized.
A specific embodiment further illustrates the technical effect of the invention. Suppose the invention uses x264 as the platform to encode a segment of aerial traffic video (352×240, 1000 frames in total) for testing, with the following configuration:
1. frame types are I frames and P frames;
2. P frames use inter-frame predictive coding with 1 reference frame;
3. the baseline algorithm for comparison uses small diamond template search with search range ±16;
4. CABAC entropy coding is used.
Table 1 gives the experimental results under different quantization parameters (QP), including the Y-component PSNR, the bit rate, and the inter-frame prediction time. As the table shows, the inter-frame prediction time of this method is reduced by 21.6% on average, while the PSNR drop is no more than 0.1 dB and the bit rate increase is no more than 1%.
Table 1: comparison of the experimental results of the method of the invention and the x264 algorithm

Claims (11)

1. A fast inter-frame prediction method for aerial traffic video, characterized by comprising the following steps:
Step 100: acquire an image frame sequence and judge whether the frame currently to be processed is the first frame of the sequence; if it is the first frame, perform step 200, otherwise perform step 300;
Step 200: apply conventional intra-frame coding to the current frame without region segmentation, completing the coding of this frame;
Step 300: determine the search start point for global motion estimation, perform global motion estimation on the current frame, and obtain the global motion vector and the best-match frame difference of the current frame;
Step 400: compare the best-match frame difference with a predetermined threshold to judge whether a scene change occurs in the current frame; if a scene change occurs, perform step 200, otherwise perform step 500;
Step 500: segment the frame in which no scene change occurs into a background region and a motion region;
Step 600: perform motion estimation on the current frame macroblock by macroblock and carry out inter-frame predictive coding based on the background/motion segmentation, completing the coding of this frame.
2. The fast inter-frame prediction method for aerial traffic video according to claim 1, characterized in that step 300 comprises the following steps:
Step 310: determine the search start point of global motion estimation: for a frame other than the first, judge whether the global motion vector of the previous frame exists; if it exists, use that vector as the search start point of global motion estimation; if not, use the zero vector as the search start point;
Step 320: take the previous frame as the reference frame, subsample both the reference frame and the current frame by keeping 1 pixel out of every 2×2 (one row out of every two rows and one column out of every two columns), and search with the diamond template;
Step 330: compute the frame difference by formula (2) and judge whether the point with the minimum frame difference in the search template is the centre of the diamond search template or has reached the search window border; if so, end the search and perform step 340; otherwise take the stopping point as the new search start point and return to step 320 to continue searching;

$$\mathrm{SAD}_g^n(V_x, V_y) = \sum_{y \in Y} \sum_{x \in X} \left| C_{xy} - R_{(x-V_x)(y-V_y)} \right| \qquad (2)$$

where C and R denote the luma values of the original image of frame n and the reconstructed image of frame n−1 respectively; (x, y) denotes pixel coordinates; X is the set {4m | 0 ≤ 4m ≤ w, m ∈ Z} and Y is the set {4m | 0 ≤ 4m ≤ h, m ∈ Z}, where w and h are the numbers of pixels per row and per column of the image frame; (V_x, V_y) is the motion vector of the template centre relative to the search start point during the search;
Step 340: record the frame difference obtained in step 330 as the best-match frame difference, and take 4 times the motion vector of the template centre relative to the search start point at the end of the search as the global motion vector, completing one global motion estimation; when the point with the minimum frame difference in the search template is the template centre or reaches the search window border, (G_x, G_y) = (V_x, V_y).
3. The fast inter-frame prediction method for aerial traffic video according to claim 2, characterized in that when the frame difference is computed with formula (2), if x < V_x or y < V_y, the point (x − V_x, y − V_y) lies outside the pixel coordinate range of the frame; points outside the border of the reconstructed image of frame n−1 are therefore extended by formula (3):

$$R_{(x-V_x)(y-V_y)} = R_{xy} \qquad (3)$$
4. The fast inter-frame prediction method for aerial traffic video according to claim 1, characterized in that in step 400 the threshold T1 is computed according to formula (4); if the best-match frame difference is greater than T1, a scene change is considered to have occurred and the frame is intra-coded without region segmentation; otherwise no scene change is considered to have occurred and the frame is predictively coded;

$$T_1 = \frac{2}{n-2} \sum_{i=2}^{n-1} \mathrm{SAD}_g^i(G_x, G_y) \qquad (4)$$
5. The fast inter-frame prediction method for aerial traffic video according to claim 1, characterized in that step 500 is specifically:
Step 510: register the current frame against the previous frame using the global motion vector;
Step 520: compute the SAD of the luma component in units of 4×4 blocks;
Step 530: judge whether the SAD is less than a preset threshold T2; if so, perform step 540, otherwise perform step 550;
Step 540: judge the current block to be a background block and assign it to the background region;
Step 550: judge the current block to be a motion block and assign it to the motion region.
6. The fast inter-frame prediction method for aerial traffic video according to claim 1, characterized in that the threshold in step 530 is T2 = 300.
7. The fast inter-frame prediction method for aerial traffic video according to claim 1, characterized in that step 600 is specifically:
Step 610: according to the result of step 500, if the current block is a background block, perform step 620, otherwise perform step 640;
Step 620: set the start point prediction set to the set of the global motion vector and the median prediction vector;
Step 630: determine the search template and range according to the size of the block, completing the motion estimation of one block;
Step 640: set the start point prediction set to the set of the median prediction vector and the motion vectors of adjacent motion blocks in the current frame;
Step 650: determine the search template and range according to the rate-distortion cost of the start point, completing the motion estimation of one block.
8. The fast inter-frame prediction method for aerial traffic video according to claim 7, characterized in that the start point prediction set in step 620 is set as follows:

$$V = \begin{cases} \{V_p, V_{GM}\} & MB_c \in R_b \\ \{V_p, V_0\} \cup \{V_m \mid 1 \le m \le 4,\ MB_m \in R_m\} & MB_c \in R_m \end{cases}$$

where MB_c denotes the current macroblock; R_m and R_b denote the motion region and the background region respectively; MB_m denotes the m-th macroblock; V_p is the median prediction vector; V_0 is the zero vector; V_GM denotes the global motion vector.
9. The fast inter-frame prediction method for aerial traffic video according to claim 7, characterized in that step 630 is specifically:
Step 631: judge whether the size of the background block is 16×16; if so, perform step 632, otherwise perform step 633;
Step 632: for a 16×16 background block, search once with the small diamond template and search range ±1, completing the motion estimation of one background block;
Step 633: for a background block whose size is not 16×16, do not search, completing the motion estimation of one background block.
10. The fast inter-frame prediction method for aerial traffic video according to claim 7, characterized in that step 650 is specifically:
Step 651: compute the rate-distortion cost Cost of the search start point;
Step 652: judge whether Cost is less than a preset threshold T3; if so, perform step 653, otherwise perform step 654;
Step 653: search once with the small diamond template and search range ±2, completing the motion estimation of one motion block;
Step 654: search once with the hexagon template and search range ±16, then search once more with the small diamond template and search range ±1, completing the motion estimation of one motion block.
11. The fast inter-frame prediction method for aerial traffic video according to claim 7, characterized in that the threshold T3 is taken as the minimum SAD of the left macroblock, the upper macroblock, and the upper-right macroblock.
CN 200810104091 2008-04-15 2008-04-15 Method for rapidly predicting frame space of aerophotographic traffic video Expired - Fee Related CN100579228C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200810104091 CN100579228C (en) 2008-04-15 2008-04-15 Method for rapidly predicting frame space of aerophotographic traffic video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200810104091 CN100579228C (en) 2008-04-15 2008-04-15 Method for rapidly predicting frame space of aerophotographic traffic video

Publications (2)

Publication Number Publication Date
CN101277447A true CN101277447A (en) 2008-10-01
CN100579228C CN100579228C (en) 2010-01-06

Family

ID=39996391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200810104091 Expired - Fee Related CN100579228C (en) 2008-04-15 2008-04-15 Method for rapidly predicting frame space of aerophotographic traffic video

Country Status (1)

Country Link
CN (1) CN100579228C (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102263958A (en) * 2011-07-26 2011-11-30 中兴通讯股份有限公司 method and device for obtaining initial point based on H264 motion estimation algorithm
CN102291577A (en) * 2010-06-21 2011-12-21 北京中星微电子有限公司 Method and device for calculating macroblock motion vector
CN102821275A (en) * 2011-06-08 2012-12-12 中兴通讯股份有限公司 Data compression method, data compression device, data decompression method and data decompression device
CN103024384A (en) * 2012-12-14 2013-04-03 深圳百科信息技术有限公司 Method and device for encoding and decoding videos
CN103006332A (en) * 2012-12-27 2013-04-03 广东圣洋信息科技实业有限公司 Scalpel tracking method and device and digital stereoscopic microscope system
CN113055670A (en) * 2021-03-08 2021-06-29 杭州裕瀚科技有限公司 HEVC/H.265-based video coding method and system
WO2024040535A1 (en) * 2022-08-25 2024-02-29 深圳市大疆创新科技有限公司 Video processing method and apparatus, device, and computer storage medium


Also Published As

Publication number Publication date
CN100579228C (en) 2010-01-06

Similar Documents

Publication Publication Date Title
CN100579228C (en) Method for rapidly predicting frame space of aerophotographic traffic video
CN101448159B (en) Rapid interframe mode selection method based on rate-distortion cost and mode frequency
CN101640802B (en) Video inter-frame compression coding method based on macroblock features and statistical properties
CN103581647B (en) Depth map sequence fractal coding method based on color video motion vectors
CN100571390C (en) Fast mode selection method and device for H.264 video coding
CN103248893B (en) Fast inter-frame transcoding method and transcoder from the H.264/AVC standard to the HEVC standard
CN101217663B (en) Quick selection method for the encoding mode of image pixel blocks in an encoder
CN103188496B (en) Fast motion estimation video coding method based on motion vector distribution prediction
CN107087200B (en) Skip coding mode advanced decision method for high-efficiency video coding standard
CN103873861A (en) Coding mode selection method for HEVC (high efficiency video coding)
CN101494792A (en) H.264/AVC intra-frame prediction method based on edge characteristics
CN101621694B (en) Motion estimation method, motion estimation system and display terminal
CN106210721B (en) Fast bitrate transcoding method for HEVC
CN103338370B (en) Fast encoding method for multi-view depth video
CN105657420B (en) HEVC-oriented fast intra-frame prediction mode decision method and device
CN101022555B (en) Quick selection method for inter-frame predictive coding mode
CN103546758A (en) Rapid depth map sequence interframe mode selection fractal coding method
CN108347605B (en) Quick decision-making method for 3D video depth image quad-tree coding structure division
CN105187826A (en) Rapid intra-frame mode decision method specific to high efficiency video coding standard
CN104853191A (en) HEVC fast coding method
CN101389023B (en) Adaptive motion estimation method
CN102196272B (en) P frame encoding method and device
CN100586186C (en) Quick inter-frame prediction mode selection method
CN101202915A (en) Method and apparatus for selecting intra-frame prediction mode
CN101883275B (en) Video coding method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100106

Termination date: 20160415
