CN102938074A - Self-adaptive extraction method of badminton field or tennis field in virtual advertising system during sports live broadcast - Google Patents

Publication number: CN102938074A (granted as CN102938074B)
Application number: CN201210488119.9A
Authority: CN (China)
Inventors: 陈临强, 王振兴, 杨礼坤
Applicant and current assignee: Hangzhou Electronic Science and Technology University
Legal status: Granted; Expired - Fee Related
Abstract

The invention discloses an adaptive method for extracting a badminton or tennis court in a virtual advertising system for live sports broadcasts. The method comprises the following steps: (1) extracting the main region of the badminton or tennis court and filling it in completely; (2) preprocessing the main region from step (1) to obtain the court edges; (3) processing the edges from step (2) to extract the court lines; and (4) computing the intersections of the court lines from step (3) so as to establish a correspondence with a standard badminton or tennis court. The method extracts the main video region automatically, adapts its thresholds, computes all court lines accurately, and achieves good court-detection results.

Description

Adaptive extraction method for badminton or tennis courts in a virtual advertising system for live sports broadcasts
Technical field
The invention belongs to the field of pattern recognition, and in particular relates to a method for automatically identifying a badminton or tennis court when inserting virtual advertisements into a live sports video.
Background technology
Virtual advertising is an emerging research field with a very large range of applications and great development potential, and it has therefore attracted wide attention. At present, many research institutions at home and abroad are working in this field; domestically, research in this area started comparatively late.
A virtual advertising system uses video analysis to process the relayed sports video in real time, extract the court framework from the video, apply a geometric transformation to a prepared virtual advertisement (a static image or an animation), and fuse it onto a designated court position in the video frame, producing a virtual advertisement that looks much like a physical one. In this process, only accurate court information can determine where the virtual content is inserted, and the extracted court information is also used to handle certain special cases.
Therefore, for virtual advertising systems, the present invention designs a fairly complete and strongly adaptive method for detecting and extracting badminton or tennis courts in sports video.
Summary of the invention
The present invention provides a fairly complete and strongly adaptive method for detecting and extracting badminton or tennis courts in sports video for a virtual advertising system, called the adaptive extraction method for badminton or tennis courts in a virtual advertising system for live sports broadcasts.
To achieve the above object, the present invention adopts the following technical scheme, an adaptive extraction method comprising the steps of:
One, extracting the main region of the badminton or tennis court and filling it in completely;
Two, preprocessing the main region from step one to obtain the court edges;
Three, processing the edges from step two to extract the court lines;
Four, computing the intersections of the court lines from step three, and establishing a correspondence with a standard badminton or tennis court.
Preferably, in step one, the main region of the badminton or tennis court is extracted as follows: extract the dominant court color with a color-space histogram statistic, then find the contour of maximum area, and finally obtain a fairly complete main court region through erosion and dilation operations.
Further preferably, after extracting the main region, noise is removed and the court is completed by applying, in order, a morphological method, a block-operation method, and a maximum-connected-region method.
Preferably, in step two, the court edges are obtained by applying a top-hat transform to the main region.
Preferably, in step three, the court edge image is binarized with an adaptive threshold obtained by an improved Otsu (maximum between-class variance) method, and the court lines are then extracted by a combination of Hough line detection and least-squares fitting.
Further preferably, binarizing the court edge image with the improved Otsu method comprises the steps of:
stretching the gray levels of the image: first find the minimum and maximum gray levels (MinGray, MaxGray), then map (MinGray, MaxGray) onto (0, 255) with the mapping formula
f'(i, j) = 255 · (f(i, j) − MinGray + 1) / (MaxGray − MinGray + 1)
where f(i, j) is the gray value of the image at (i, j);
removing noise by Gaussian filtering;
updating the new maximum and minimum gray values (MinGray, MaxGray) produced by the Gaussian filtering;
binarizing the court edge image with the adaptive threshold obtained by the Otsu method restricted to the range (MinGray, MaxGray).
Preferably, in step four, the intersections are obtained by intersecting the lines, and the correspondence with the standard court is established as follows:
transform the coordinates of the court line intersections into normalized coordinates;
match the transformed points against the points of the standard court model.
Further preferably, transforming the intersection coordinates into the relative coordinate system comprises the steps of:
(1) determining the equations of the lines Li_x and Li_y;
(2) determining the coordinates of the points M_i and N_i;
(3) computing the distance Dx_i between P1 and M_i: Dx_i = sqrt((x_1 − m_i)² + (y_1 − n_i)²) (i = 5, 6, …);
computing the distance Dy_i between P1 and N_i: Dy_i = sqrt((x_1 − p_i)² + (y_1 − q_i)²) (i = 5, 6, …);
computing the distance between P1 and P2, D_1 = sqrt((x_1 − x_2)² + (y_1 − y_2)²);
computing the distance between P1 and P4, D_2 = sqrt((x_1 − x_4)² + (y_1 − y_4)²);
(4) converting each point p_i into its coordinates Pt_i(u_i, v_i) (i = 5, 6, …, n) in the normalized coordinate system with the conversion formulas u_i = Dx_i / D_1 and v_i = Dy_i / D_2 (i = 5, 6, …, n).
Further preferably, matching the transformed points against the points of the standard court model comprises the steps of:
(1) selecting standard court coordinate model 1;
(2) testing each transformed point against every point of the court coordinate model with the criterion |u_i − a_i| + |v_i − b_i| < ε/4, where ε = min_{i≠j}(|a_i − a_j| + |b_i − b_j|); if the criterion holds the match succeeds and is counted, otherwise the next standard court point is tried;
(3) after all transformed points have been tested, checking whether the successfully matched points account for more than 80% of the standard court points; if so, the court is matched, the result is returned, and matching ends; if not, the next court model is selected in turn and the above steps are repeated; if matching fails against all court coordinate models, the failure is returned and matching ends.
The adaptive extraction method of the present invention extracts the main video region automatically, adapts its thresholds, computes all court lines accurately, and achieves good court-detection results.
Description of drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 is a schematic diagram of computing the dominant color.
Fig. 3 is a schematic diagram of the main-region extraction process.
Fig. 4 is the flow chart of the least-squares fitting method of step three of the present invention.
Fig. 5 shows the coordinates of the court line intersections.
Embodiment
The embodiments of the invention are described in detail below with reference to the accompanying drawings.
The adaptive extraction method of this embodiment proceeds as follows:
One, main-region extraction
In sports video the court is generally a single uniform color, and during a broadcast the camera generally keeps the court as the main picture, so the colors of a video frame are mostly concentrated in one dense region of the color space. The region of the color space where the distribution is most concentrated can therefore be taken as the main court color.
The dominant color can be obtained by this method in either the HSV or the RGB color space: the pixel values that occupy the overwhelming majority of the color histogram serve as the main court color range. During a broadcast the playing area is generally in the middle of the shot, so the middle of the video frame can be assumed to be the playing field. Playing areas vary, however; a table-tennis table, for example, is rather small and occupies a comparatively small share of the frame, but statistically the playing area generally covers more than one third of the whole video frame.
The present invention adopts the following improved method for extracting the main region of the playing field:
1. First, take from the video stream (from a video file or a camera) a video frame that contains the whole court.
2. Then, around the center point of the frame from step 1, select a rectangular region of 1/2 the original width and 1/2 the original height.
3. Compute the dominant court color as follows:
1) Compute the histogram of each color component.
2) Take an interval [i_min, i_max] of the histogram that contains the histogram peak H(i_peak); its left and right endpoints must satisfy formula 1.
3) Compute the centroid of the interval [i_min, i_max] with formula 2 and use it as the dominant value of that histogram component.
H[i_min] ≥ K × H[i_peak]
H[i_min − 1] < K × H[i_peak]
H[i_max] ≥ K × H[i_peak]          (formula 1)
H[i_max + 1] < K × H[i_peak]

Mean = ( Σ_{i=i_min}^{i_max} H[i] × i ) / ( Σ_{i=i_min}^{i_max} H[i] )          (formula 2)
In the formulas above and in Fig. 2, K is a constant coefficient.
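The interval rule of formula 1 and the centroid rule of formula 2 can be sketched as follows. The function name and the value K = 0.2 are illustrative assumptions; the patent only states that K is a constant coefficient.

```python
def dominant_component(hist, K=0.2):
    """Estimate the dominant value of one color component from its histogram.

    Grow [i_min, i_max] outward from the histogram peak while the bin count
    stays >= K * H[i_peak] (formula 1), then return the weighted mean of the
    interval (formula 2).  K = 0.2 is an assumed constant.
    """
    peak = max(range(len(hist)), key=lambda i: hist[i])
    thresh = K * hist[peak]
    i_min = peak
    while i_min > 0 and hist[i_min - 1] >= thresh:
        i_min -= 1
    i_max = peak
    while i_max < len(hist) - 1 and hist[i_max + 1] >= thresh:
        i_max += 1
    total = sum(hist[i] for i in range(i_min, i_max + 1))
    mean = sum(hist[i] * i for i in range(i_min, i_max + 1)) / total
    return i_min, i_max, mean
```

Running this once per color component (H, S, V or R, G, B) gives one dominant value per channel.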
The main region obtained by the above steps contains many holes caused by occlusion and noise, so the extracted main region is incomplete. To obtain complete court information, the holes must be filled in and the separate connected regions merged. Finally, the connected main region is processed morphologically to remove burrs and fill cavities, and the complete main court region is used as a mask over the original video frame to obtain the main region.
Noise can be removed by the following two methods:
1. Removing noise by block operations
First set a moving window of size N×N, dividing the image into blocks of size N×N, and compute the proportion of dominant-color pixels within each window. If the proportion exceeds a threshold T, the window region is considered part of the main region; if the proportion is below T, the region covered by the window is not part of the main region. The block-operation denoising rule is
B(i, j) = 1 if the dominant-color proportion of the N×N block containing (i, j) in G is at least T, and 0 otherwise,
where G(x, y) is the main-region map with holes and noise extracted by the previous method, B(i, j) is the cleaned map obtained by block denoising, and T is the threshold that decides membership in the main region.
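The block-operation rule above can be sketched as a small grid routine; the function name and the 0/1 mask encoding are assumptions for illustration.

```python
def block_denoise(G, N, T):
    """Block-operation denoising sketch.

    G is a 2-D 0/1 grid where 1 marks a dominant-color pixel.  The grid is
    tiled into N*N blocks; a block whose dominant-color ratio reaches T is
    filled entirely with 1 (kept as main region), otherwise cleared to 0.
    """
    h, w = len(G), len(G[0])
    B = [[0] * w for _ in range(h)]
    for by in range(0, h, N):
        for bx in range(0, w, N):
            ys = range(by, min(by + N, h))
            xs = range(bx, min(bx + N, w))
            area = len(ys) * len(xs)
            ones = sum(G[y][x] for y in ys for x in xs)
            fill = 1 if ones / area >= T else 0
            for y in ys:
                for x in xs:
                    B[y][x] = fill
    return B
```

Note how a mostly-filled block has its interior holes filled, while isolated noise blocks are cleared.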
2. Removing noise by the maximum connected region
Although the main court region contains holes and noise inside and around it, the region as a whole is basically complete and its parts are basically connected, so it suffices to take the maximum connected region. The connected region of maximum area is taken to be the required main court region, and its interior is filled with the court color. This removes the noise and the holes, and at the same time removes interference such as the grandstand and the areas outside the court.
Referring to Fig. 3.
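The maximum-connected-region cleanup can be sketched with a breadth-first flood fill; 4-connectivity and the function name are illustrative assumptions.

```python
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected region of 1s in a 0/1 grid,
    a sketch of the maximum-connected-region cleanup described above."""
    h, w = len(mask), len(mask[0])
    label = [[-1] * w for _ in range(h)]
    best_id, best_size, next_id = -1, 0, 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and label[sy][sx] < 0:
                # flood-fill this component and measure its size
                q, size = deque([(sy, sx)]), 0
                label[sy][sx] = next_id
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and label[ny][nx] < 0:
                            label[ny][nx] = next_id
                            q.append((ny, nx))
                if size > best_size:
                    best_id, best_size = next_id, size
                next_id += 1
    return [[1 if label[y][x] == best_id else 0 for x in range(w)] for y in range(h)]
```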
Two, preprocessing for court edge extraction
The previous step produced the main-region image of the court; a Top-Hat transform is then applied to it. The Top-Hat operator of mathematical morphology is an excellent high-pass filtering operator: with a suitable structuring element it can extract the desired targets from a complex background. Here it is mainly used as preprocessing before court edge extraction.
The court lines of a playing field are generally painted white, and white has very high gray values in all three RGB channels, so the white lines can be regarded as the "peaks" of the gray-level surface. The Top-Hat transform therefore detects these crest properties, extracts the white-line crests in the image, eliminates the background of the main-region image, and preserves and emphasizes the white court lines. The main-region image M(x, y) is combined with the line-enhancement map T(x, y) produced by the Top-Hat transform using an AND operation, giving the enhanced court-line image L(x, y), i.e. L(x, y) = M(x, y) ∧ T(x, y).
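A white top-hat is the image minus its morphological opening (erosion followed by dilation). A minimal grayscale sketch with a square structuring element follows; the 3×3 kernel size and function names are assumptions, and a real system would use an optimized library routine instead.

```python
def _erode(img, k):
    """Grayscale erosion with a k x k square structuring element (min filter)."""
    h, w, r = len(img), len(img[0]), k // 2
    return [[min(img[y2][x2]
                 for y2 in range(max(0, y - r), min(h, y + r + 1))
                 for x2 in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)] for y in range(h)]

def _dilate(img, k):
    """Grayscale dilation with a k x k square structuring element (max filter)."""
    h, w, r = len(img), len(img[0]), k // 2
    return [[max(img[y2][x2]
                 for y2 in range(max(0, y - r), min(h, y + r + 1))
                 for x2 in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)] for y in range(h)]

def top_hat(img, k=3):
    """White top-hat: image minus its opening.  Thin bright structures
    (the white court lines) survive; the smooth background is suppressed."""
    opened = _dilate(_erode(img, k), k)
    return [[img[y][x] - opened[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
```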
After the top-hat transform yields the gray-level map of the court, an edge detection operator detects the court lines easily; the Sobel, Canny, or Laplace operator can be used to extract the court lines in the sports video. The edge image is then binarized with the improved Otsu method, as follows:
Step 1: gray-level stretching.
First find the minimum and maximum gray levels (MinGray, MaxGray), then map (MinGray, MaxGray) onto (0, 255) with the mapping formula
f'(i, j) = 255 · (f(i, j) − MinGray + 1) / (MaxGray − MinGray + 1)
where f(i, j) is the gray value of the image at (i, j).
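Step 1 can be sketched directly; the explicit 255 factor is assumed from the stated (0, 255) target range, and integer division is used for 8-bit output.

```python
def gray_stretch(img):
    """Stretch the gray levels of a 2-D integer image onto (0, 255) using
    the mapping above (with its +1 guards against a zero denominator)."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    return [[255 * (v - lo + 1) // (hi - lo + 1) for v in row] for row in img]
```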
Step 2: remove noise by Gaussian filtering:
f'(i, j) = Σ_{(x, y) ∈ A} g(x − i, y − j) · f(x, y)
where A is the set of neighborhood points centered at (i, j) and g is the Gaussian kernel.
Step 3: update the new maximum and minimum gray values produced by the Gaussian filtering, i.e. (MinGray, MaxGray).
Step 4: binarize the court edge image with the adaptive threshold obtained by the Otsu method within the range (MinGray, MaxGray). The concrete method is as follows:
Let f(i, j) be the gray value at point (i, j) of an N × M image whose gray levels lie in the range [0, m − 1]. Denote by p(k) the proportion of pixels with gray value k in the whole image:

p(k) = (1 / MN) Σ_{f(i,j)=k} 1

With a threshold t, the two segmented classes are [f(i, j) ≤ t] and [f(i, j) > t], so the target (foreground) proportion is

ω_0(t) = Σ_{0 ≤ i ≤ t} p(i)

the target pixel count is

N_0(t) = MN Σ_{0 ≤ i ≤ t} p(i)

the background proportion is

ω_1(t) = Σ_{t < i ≤ m−1} p(i)

the background pixel count is

N_1(t) = MN Σ_{t < i ≤ m−1} p(i)

the target mean is

μ_0(t) = Σ_{0 ≤ i ≤ t} i · p(i) / ω_0(t)

the background mean is

μ_1(t) = Σ_{t < i ≤ m−1} i · p(i) / ω_1(t)

and the overall mean is μ = ω_0(t) μ_0(t) + ω_1(t) μ_1(t).

The maximum between-class variance method gives the optimum threshold g of the image as

g = argmax_{0 ≤ t ≤ m−1} [ω_0(t)(μ_0(t) − μ)² + ω_1(t)(μ_1(t) − μ)²]

or, maximizing the product of the two terms instead:

g = argmax_{0 ≤ t ≤ m−1} [(μ_0(t) − μ)² (μ_1(t) − μ)²]
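The between-class-variance search, restricted to the (MinGray, MaxGray) range as in the improved variant above, can be sketched over a 256-bin histogram; the function name and the sum-form variance are illustrative.

```python
def otsu_threshold(hist, lo, hi):
    """Otsu threshold by maximizing the between-class variance
    w0*(mu0-mu)^2 + w1*(mu1-mu)^2, searching t only inside [lo, hi]."""
    total = sum(hist[lo:hi + 1])
    total_mean = sum(i * hist[i] for i in range(lo, hi + 1)) / total
    best_t, best_var = lo, -1.0
    w0, sum0 = 0, 0.0
    for t in range(lo, hi):
        w0 += hist[t]                       # class 0: levels <= t
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum0 / w0
        mu1 = (total_mean * total - sum0) / w1
        var = (w0 / total) * (mu0 - total_mean) ** 2 \
            + (w1 / total) * (mu1 - total_mean) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```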
Three, extracting lines by combining Hough line detection with least-squares fitting
For line extraction the present invention proposes a new method: extracting multiple lines by combining the Hough transform with the least-squares method. To speed up the Hough transform, some data are precomputed into the arrays sinv and cosv, which hold the sine and cosine values for 0–179 degrees; the two-dimensional array Param is the accumulator counting the points on each candidate line; and the white points to be processed are stored in advance in the array imagewhite, which avoids repeatedly scanning the whole image.
The lines extracted so far take only discrete parameter values, so step three of the flow chart (see Fig. 4) refines the line parameters extracted by the Hough transform with least-squares fitting. Refitting each line by least squares yields the best-fitting line through all points near the Hough parameters, which improves the precision.
First, extract the longest line (ρ, θ) in the parameter space; then collect the points (x_i, y_i) within three pixels of this line, i.e. all points satisfying |x_i · cos θ + y_i · sin θ − ρ| < 3; finally fit a new line through these points by least squares. The formulas for the corrected line are:

costemp = (Σx_i)(Σy_i) − n Σx_i y_i

sintemp = n Σx_i² − (Σx_i)²

sin θ_i = |sintemp| / sqrt(sintemp² + costemp²)

cos θ_i = ±sqrt(1 − sin²θ_i) (with the same sign as costemp), θ_i = arccos(cos θ_i)

ρ_i = sin θ_i · ((Σy_i)(Σx_i²) − (Σx_i)(Σx_i y_i)) / (n Σx_i² − (Σx_i)²)

where n is the number of points used in the fit. The resulting (ρ_i, θ_i) are the line parameters finally output.
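The costemp/sintemp refinement can be sketched as follows; the function name is an assumption, and the degenerate vertical case (sintemp = 0, i.e. all x equal) is handled separately since the ρ formula divides by sintemp.

```python
import math

def refine_line(points):
    """Least-squares refinement of a Hough line in (rho, theta) form,
    following the costemp/sintemp formulas above, for the line model
    x*cos(theta) + y*sin(theta) = rho.  points are the (x, y) pixels
    collected within 3 px of the coarse Hough line."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    sintemp = n * sxx - sx * sx
    if sintemp == 0:                         # vertical line x = const
        return sx / n, 0.0
    costemp = sx * sy - n * sxy
    norm = math.hypot(sintemp, costemp)
    sin_t = abs(sintemp) / norm
    cos_t = math.copysign(math.sqrt(1.0 - sin_t * sin_t), costemp)
    theta = math.acos(cos_t)
    rho = sin_t * (sy * sxx - sx * sxy) / sintemp
    return rho, theta
```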
Four, establishing the correspondence with the standard court
The previous step yields the four court boundary lines, denoted L1, L2, L3, L4, with general equations:
L1: A_1x + B_1y + C_1 = 0
L2: A_2x + B_2y + C_2 = 0
L3: A_3x + B_3y + C_3 = 0
L4: A_4x + B_4y + C_4 = 0
with coefficients A_i, B_i, C_i (i = 1, 2, 3, 4).
Let the four corner points of the court be P_1(x_1, y_1), P_2(x_2, y_2), P_3(x_3, y_3), P_4(x_4, y_4), and let the other court-line intersections in the court be P_i(x_i, y_i), i = 5, 6, 7, …. Suppose the extensions of the opposite boundary lines L1 and L3 meet at the point S(s_1, t_1), and those of L2 and L4 meet at the point T(s_2, t_2). The line Li_x determined by the point T and a point p_i meets the boundary L1 at the point M_i(m_i, n_i), and the line Li_y determined by the point S and p_i meets the boundary L4 at the point N_i(p_i, q_i). See Fig. 5.
Step 1: determine the coordinates of the points M_i and N_i
1. If A_3B_1 − A_1B_3 ≠ 0, carry out the following steps:
(1) Obtain the coordinates of the intersections S and T.
From the general equations of the four boundary lines, obtain the intersection S(s_1, t_1) of the opposite sides L1 and L3, and the intersection T(s_2, t_2) of L2 and L4.
Solving the system
L1: A_1x + B_1y + C_1 = 0
L3: A_3x + B_3y + C_3 = 0
gives s_1 = (A_1C_3 − A_3C_1)/(A_3B_1 − A_1B_3), t_1 = (B_3C_1 − B_1C_3)/(A_3B_1 − A_1B_3).
Solving the system
L2: A_2x + B_2y + C_2 = 0
L4: A_4x + B_4y + C_4 = 0
gives s_2 = (A_2C_4 − A_4C_2)/(A_4B_2 − A_2B_4), t_2 = (B_4C_2 − B_2C_4)/(A_4B_2 − A_2B_4).
(2) Determine the equations of the lines Li_x and Li_y.
The line determined by the point S and p_i is denoted Li_y, with equation A_ix + B_iy + C_i = 0 (i = 5, 6, …), where A_i = y_i − t_1, B_i = s_1 − x_i, C_i = x_it_1 − s_1y_i (i = 5, 6, …).
The line determined by the point T and p_i is denoted Li_x, with equation D_ix + E_iy + F_i = 0 (i = 5, 6, …), where D_i = y_i − t_2, E_i = s_2 − x_i, F_i = x_it_2 − s_2y_i (i = 5, 6, …).
(3) Determine the coordinates of the points M_i and N_i.
Find the coordinates of the intersection M_i(m_i, n_i) of the line Li_x with the boundary L1, and of the intersection N_i(p_i, q_i) of Li_y with the boundary L4.
Solving the systems
D_ix + E_iy + F_i = 0, A_1x + B_1y + C_1 = 0
and
A_ix + B_iy + C_i = 0, A_4x + B_4y + C_4 = 0
gives
m_i = (A_1F_i − D_iC_1)/(D_iB_1 − A_1E_i), n_i = (E_iC_1 − B_1F_i)/(D_iB_1 − A_1E_i);
p_i = (A_iC_4 − A_4C_i)/(A_4B_i − A_iB_4), q_i = (B_4C_i − B_iC_4)/(A_4B_i − A_iB_4).
2. If A_3B_1 − A_1B_3 = 0, carry out the following steps:
(1) The line through p_i parallel to L1 is denoted Li_y, with equation A_1x + B_1y + C_i = 0; the line through p_i parallel to L2 is denoted Li_x, with equation A_2x + B_2y + F_i = 0, where C_i = −(A_1x_i + B_1y_i) and F_i = −(A_2x_i + B_2y_i).
(2) Determine the coordinates of the points M_i and N_i.
Find the coordinates of the intersection M_i(m_i, n_i) of the line Li_x with the boundary L1, and of the intersection N_i(p_i, q_i) of Li_y with the boundary L4.
Solving the system
A_2x + B_2y + F_i = 0, L1: A_1x + B_1y + C_1 = 0
gives m_i = (A_2C_1 − A_1F_i)/(A_1B_2 − A_2B_1), n_i = (B_1F_i − B_2C_1)/(A_1B_2 − A_2B_1);
solving the system
A_1x + B_1y + C_i = 0, L4: A_4x + B_4y + C_4 = 0
gives p_i = (A_1C_4 − A_4C_i)/(A_4B_1 − A_1B_4), q_i = (B_4C_i − B_1C_4)/(A_4B_1 − A_1B_4).
Step 2: normalize the coordinates, i.e. transform them into coordinates relative to the standard court model.
1. Compute the distance Dx_i between P_1 and M_i:
Dx_i = sqrt((x_1 − m_i)² + (y_1 − n_i)²) (i = 5, 6, …);
compute the distance Dy_i between P_1 and N_i:
Dy_i = sqrt((x_1 − p_i)² + (y_1 − q_i)²) (i = 5, 6, …);
compute the distance between P_1 and P_2, D_1 = sqrt((x_1 − x_2)² + (y_1 − y_2)²);
compute the distance between P_1 and P_4, D_2 = sqrt((x_1 − x_4)² + (y_1 − y_4)²).
2. Convert each point p_i into its coordinates Pt_i(u_i, v_i) (i = 5, 6, …, n) in the normalized coordinate system with the conversion formulas u_i = Dx_i / D_1 and v_i = Dy_i / D_2 (i = 5, 6, …, n).
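The normalization of a single intersection reduces to two distance ratios; the function and argument names below are illustrative.

```python
import math

def normalized_coords(P1, P2, P4, M, N):
    """Normalize one court intersection: u is the P1->M distance over the
    boundary length |P1 P2|, and v is the P1->N distance over |P1 P4|,
    following step 2 above.  Points are (x, y) tuples."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    u = dist(P1, M) / dist(P1, P2)
    v = dist(P1, N) / dist(P1, P4)
    return u, v
```

Because u and v are ratios of distances along the two boundaries, they are independent of the image scale, which is what allows matching against the unit-square court models that follow.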
Step 3: compare the transformed points Pt_i with the points of the standard court model.
(1) Denote the coordinates of the standard court model points by (a_i, b_i), and choose court model 1.
(2) Test whether |u_i − a_i| + |v_i − b_i| < ε/4 holds, where ε = min_{i≠j}(|a_i − a_j| + |b_i − b_j|); if it holds, add 1 to the match count; if not, try the next point.
(3) Test whether count / (total number of standard court points) > 80% holds. If it does, the court is matched, the result is returned, and matching ends. If not, the next court model (model 2, and so on) is chosen and matched by the same rule until done.
Here ε = 0.056 for a standard badminton court and ε = 0.125 for a standard tennis court.
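The ε/4 criterion and the 80% acceptance rule can be sketched as follows; the function name is an assumption, and counting distinct matched model points (rather than raw matches) is one reasonable reading of the counting rule.

```python
def match_model(points, model, eps):
    """Match normalized intersections against one standard-court model.

    A transformed point (u, v) matches a model point (a, b) when
    |u - a| + |v - b| < eps / 4; the court is accepted when the matched
    model points exceed 80% of the model, per step 3 above.
    """
    matched = set()
    for u, v in points:
        for idx, (a, b) in enumerate(model):
            if abs(u - a) + abs(v - b) < eps / 4:
                matched.add(idx)
                break
    return len(matched) / len(model) > 0.8
```

In use, the detected point set would be tested against models 1 through 4 in turn (with ε = 0.056 for badminton and ε = 0.125 for tennis), stopping at the first model that is accepted.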
Standard court coordinate model 1 is:
(0.000,0.075)、(0.000,0.500)、(0.000,0.925)、(0.056,0)、(0.056,0.075)、(0.056,0.500)、(0.056,0.925)、(0.056,1.000)、(0.352,0)、(0.352,0.075)、(0.352,0.500)、(0.352,0.925)、(0.352,1.000)、(0.648,0)、(0.648,0.075)、(0.648,0.500)、(0.648,0.925)、(0.648,1.000)、(0.944,0)、(0.944,0.075)、(0.944,0.500)、(0.944,0.925)、(0.944,1.000)、(1.000,0.075)、(1.000,0.500)、(1.000,0.925)。
Standard court coordinate model 2 is:
(0.075,0.000)、(0.500,0.000)、(0.925,0.000)、(0,0.056)、(0.075,0.056)、(0.500,0.056)、(0.925,0.056)、(1.000,0.056)、(0,0.352)、(0.075,0.352)、(0.500,0.352)、(0.925,0.352)、(1.000,0.352)、(0,0.648)、(0.075,0.648)、(0.500,0.648)、(0.925,0.648)、(1.000,0.648)、(0,0.944)、(0.075,0.944)、(0.500,0.944)、(0.925,0.944)、(1.000,0.944)、(0.075,1.000)、(0.500,1.000)、(0.925,1.000)。
Standard court coordinate model 3 is:
(0.000,0.125)、(0.000,0.875)、(0.231,0.125)、(0.231,0.500)、(0.231,0.875)、(0.769,0.125)、(0.769,0.500)、(0.769,0.875)、(1.000,0.125)、(1.000,0.875)。
Standard court coordinate model 4 is:
(0.125,0.000)、(0.875,0.000)、(0.125,0.231)、(0.500,0.231)、(0.875,0.231)、(0.125,0.769)、(0.500,0.769)、(0.875,0.769)、(0.125,1.000)、(0.875,1.000)。
Those of ordinary skill in the art will appreciate that the above embodiments illustrate the present invention and do not limit it; variations and modifications of the above embodiments that remain within the scope of the invention all fall within its scope of protection.

Claims (9)

1. An adaptive extraction method for badminton or tennis courts in a virtual advertising system for live sports broadcasts, characterized by the steps of:
One, extracting the main region of the badminton or tennis court and filling it in completely;
Two, preprocessing the main region from step one to obtain the court edges;
Three, processing the edges from step two to extract the court lines;
Four, computing the intersections of the court lines from step three, and establishing a correspondence with a standard badminton or tennis court.
2. The adaptive extraction method of claim 1, characterized in that in step one the main region of the badminton or tennis court is extracted as follows: extract the dominant court color with a color-space histogram statistic, then find the contour of maximum area, and finally obtain a fairly complete main court region through erosion and dilation operations.
3. The adaptive extraction method of claim 2, characterized in that after extracting the main region, noise is removed and the court is completed by applying, in order, a morphological method, a block-operation method, and a maximum-connected-region method.
4. The adaptive extraction method of claim 1, characterized in that in step two the court edges are obtained by applying a top-hat transform to the main region.
5. The adaptive extraction method of claim 1, characterized in that in step three the court edge image is binarized with an adaptive threshold obtained by an improved Otsu (maximum between-class variance) method, and the court lines are then extracted by a combination of Hough line detection and least-squares fitting.
6. The adaptive extraction method of claim 5, characterized in that binarizing the court edge image with the improved Otsu method comprises the steps of:
stretching the gray levels of the image: first find the minimum and maximum gray levels (MinGray, MaxGray), then map (MinGray, MaxGray) onto (0, 255) with the mapping formula
f'(i, j) = 255 · (f(i, j) − MinGray + 1) / (MaxGray − MinGray + 1)
where f(i, j) is the gray value of the image at (i, j);
removing noise by Gaussian filtering;
updating the new maximum and minimum gray values (MinGray, MaxGray) produced by the Gaussian filtering;
binarizing the court edge image with the adaptive threshold obtained by the Otsu method restricted to the range (MinGray, MaxGray).
7. The adaptive extraction method of claim 1, characterized in that in step four the intersections are obtained by intersecting the lines, and the correspondence with the standard court is established as follows:
transform the coordinates of the court line intersections into normalized coordinates;
match the transformed points against the points of the standard court model.
8. The adaptive extraction method of claim 7, characterized in that transforming the intersection coordinates into the relative coordinate system comprises the steps of:
(1) determining the equations of the lines Li_x and Li_y;
(2) determining the coordinates of the points M_i and N_i;
(3) computing the distance Dx_i between P1 and M_i: Dx_i = sqrt((x_1 − m_i)² + (y_1 − n_i)²) (i = 5, 6, …);
computing the distance Dy_i between P1 and N_i: Dy_i = sqrt((x_1 − p_i)² + (y_1 − q_i)²) (i = 5, 6, …);
computing the distance between P1 and P2, D_1 = sqrt((x_1 − x_2)² + (y_1 − y_2)²);
computing the distance between P1 and P4, D_2 = sqrt((x_1 − x_4)² + (y_1 − y_4)²);
(4) converting each point p_i into its coordinates Pt_i(u_i, v_i) (i = 5, 6, …, n) in the normalized coordinate system with the conversion formulas u_i = Dx_i / D_1 and v_i = Dy_i / D_2 (i = 5, 6, …, n).
9. The self-adaptive extraction method according to claim 7, characterized in that matching the coordinates of the transformed points against the points in the standard field model comprises the steps of:
(1) selecting a standardized field coordinate model;
(2) matching each transformed point against all points in this field coordinate model with the judgment condition: whether |u_i − a_i| + |v_i − b_i| < ε/4 holds, where ε = Min(|a_i − a_j| + |b_i − b_j|); if it holds, the point is counted as successfully matched; if not, the point is matched against the next standard field point;
(3) after all transformed points to be checked have been judged, determining whether the successfully matched points account for more than 80% of the total points of the standard field; if so, the field is successfully matched, the result is returned, and the matching process ends; if not, the next field model is selected in turn and the above steps are repeated; if matching against all field coordinate models finishes without success, the failure result is returned and the matching process ends.
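The model-matching loop of claim 9 can be sketched as follows. This is an illustrative reading of the claim, not the patent's code: the dictionary of named field models, the `ratio` parameter defaulting to the claim's 80% threshold, and the function name are all assumptions.

```python
def match_field(points, models, ratio=0.8):
    """Try each standard field model in turn.

    A field matches when more than `ratio` of its model points are hit
    by some detected point within eps/4 in L1 distance, where eps is
    the smallest pairwise L1 gap between the model's own points."""
    for name, model in models.items():
        # eps = Min(|a_i - a_j| + |b_i - b_j|) over distinct model points
        eps = min(abs(a[0] - b[0]) + abs(a[1] - b[1])
                  for i, a in enumerate(model)
                  for b in model[i + 1:])
        # count model points matched by at least one detected point
        hits = sum(
            1 for (a, b) in model
            if any(abs(u - a) + abs(v - b) < eps / 4 for (u, v) in points)
        )
        if hits > ratio * len(model):
            return name  # field successfully matched
    return None  # no model matched; matching failed
```

The eps/4 threshold ties the matching tolerance to the model's own point spacing, so a detected point can never be counted for two different model points.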
CN201210488119.9A 2012-11-26 2012-11-26 Self-adaptive extraction method of badminton field or tennis field in virtual advertising system during sports live broadcast Expired - Fee Related CN102938074B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210488119.9A CN102938074B (en) 2012-11-26 2012-11-26 Self-adaptive extraction method of badminton field or tennis field in virtual advertising system during sports live broadcast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210488119.9A CN102938074B (en) 2012-11-26 2012-11-26 Self-adaptive extraction method of badminton field or tennis field in virtual advertising system during sports live broadcast

Publications (2)

Publication Number Publication Date
CN102938074A true CN102938074A (en) 2013-02-20
CN102938074B CN102938074B (en) 2017-04-12

Family

ID=47696969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210488119.9A Expired - Fee Related CN102938074B (en) 2012-11-26 2012-11-26 Self-adaptive extraction method of badminton field or tennis field in virtual advertising system during sports live broadcast

Country Status (1)

Country Link
CN (1) CN102938074B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101753852A (en) * 2008-12-15 2010-06-23 姚劲草 Sports event dynamic mini- map based on target detection and tracking
CN102201052A (en) * 2010-03-26 2011-09-28 新奥特(北京)视频技术有限公司 Method for court detection in basketball broadcast video

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YIN Weiliang et al., "Automatic field detection method in sports video", Computer Systems & Applications *
WANG Lei, "Research on the application of feature-mark detection and scene recognition techniques in sports video", China Master's Theses Full-text Database (electronic journal) *
DONG Min, "Research on several techniques for tennis match video analysis", China Master's Theses Full-text Database (electronic journal) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680560A (en) * 2015-02-28 2015-06-03 东华大学 Fast sports venues detection method based on image line element correspondence
CN104680560B (en) * 2015-02-28 2017-10-24 东华大学 Fast sports venues detection method based on image line element correspondence
TWI584228B (en) * 2016-05-20 2017-05-21 銘傳大學 Method of capturing and reconstructing court lines
CN109146973A (en) * 2018-09-05 2019-01-04 鲁东大学 Robot Site characteristic identifies and positions method, apparatus, equipment and storage medium
CN109146973B (en) * 2018-09-05 2022-03-04 鲁东大学 Robot site feature recognition and positioning method, device, equipment and storage medium
CN111521128A (en) * 2020-04-15 2020-08-11 中国科学院海洋研究所 Shellfish external form automatic measurement method based on optical projection
CN115311573A (en) * 2022-10-08 2022-11-08 浙江壹体科技有限公司 Site line detection and target positioning method, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN102938074B (en) 2017-04-12

Similar Documents

Publication Publication Date Title
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN102938074A (en) Self-adaptive extraction method of badminton field or tennis field in virtual advertising system during sports live broadcast
CN104504388B (en) A kind of pavement crack identification and feature extraction algorithm and system
CN105005766B (en) A kind of body color recognition methods
CN105675623B (en) It is a kind of based on the sewage color of sewage mouth video and the real-time analysis method of flow detection
CN109145708B (en) Pedestrian flow statistical method based on RGB and D information fusion
CN106127205A (en) A kind of recognition methods of the digital instrument image being applicable to indoor track machine people
CN103345755A (en) Chessboard angular point sub-pixel extraction method based on Harris operator
CN105046206B (en) Based on the pedestrian detection method and device for moving prior information in video
CN105069751B (en) A kind of interpolation method of depth image missing data
CN102982545B (en) A kind of image depth estimation method
CN109636732A (en) A kind of empty restorative procedure and image processing apparatus of depth image
CN104537651B (en) Proportion detecting method and system for cracks in road surface image
CN104408724A (en) Depth information method and system for monitoring liquid level and recognizing working condition of foam flotation
CN101807352A (en) Method for detecting parking stalls on basis of fuzzy pattern recognition
CN206322194U (en) A kind of anti-fraud face identification system based on 3-D scanning
CN102436575A (en) Method for automatically detecting and classifying station captions
CN108256467B (en) Traffic sign detection method based on visual attention mechanism and geometric features
CN103035013A (en) Accurate moving shadow detection method based on multi-feature fusion
CN104166983A (en) Motion object real time extraction method of Vibe improvement algorithm based on combination of graph cut
CN102915433A (en) Character combination-based license plate positioning and identifying method
EP2813973A1 (en) Method and system for processing video image
CN107264570A (en) steel rail light band distribution detecting device and method
CN103440492A (en) Hand-held cigarette recognizer
CN106447673A (en) Chip pin extraction method under non-uniform illumination condition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170412

Termination date: 20211126