WO2010083021A1 - Detection of field lines in sports videos - Google Patents


Info

Publication number
WO2010083021A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
field lines
laplacian
pixels
playfield
Prior art date
Application number
PCT/US2010/000032
Other languages
French (fr)
Inventor
Mithun George Jacob
Sitaram Bhagavathy
Jesus Barcon-Palau
Joan Llach
Original Assignee
Thomson Licensing
Priority date
Filing date
Publication date
Priority to US 61/205,428
Application filed by Thomson Licensing
Publication of WO2010083021A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20048 Transform domain processing
    • G06T2207/20061 Hough transform
    • G06T2207/20112 Image segmentation details
    • G06T2207/20164 Salient point detection; Corner detection
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30221 Sports video; Sports image
    • G06T2207/30228 Playing field

Abstract

A method for accurately and robustly detecting field lines in an image, such as a frame of a sports video, includes: convolving the image with a Laplacian operator to generate a Laplacian image emphasizing likely line pixels; removing non-playfield related pixels from the Laplacian image, including pixels lying outside of the playfield and pixels representing players within the playfield; contrast stretching the resultant Laplacian image; iteratively applying a Hough transform to the contrast-stretched Laplacian image to detect lines, wherein each iteration results in the detection of one or more field lines which are removed prior to the next iteration, thus reducing the complexity of the next iteration; identifying fragments along each field line detected; filling gaps in the field lines, wherein the gaps are analyzed so that only those gaps that are not likely due to occlusion by players or other non-line objects are filled; and providing a binary mask indicating the pixels representing field lines.

Description

DETECTION OF FIELD LINES IN SPORTS VIDEOS

Related Patent Applications

[0001] This application claims the benefit under 35 U.S.C. § 119(e) of United States Provisional Application No. 61/205,428, filed January 16, 2009, the entire contents of which are hereby incorporated by reference for all purposes into this application.

Field of Invention

[0002] The present invention generally relates to digital image analysis, and more particularly to the detection of features in video images.

Background

[0003] In sports video analysis, the detection of field lines can provide useful a priori information for a variety of applications. For example, "virtual" occlusion, such as occurs when the ball crosses a field line, can be accurately detected given accurate detection of the field line. Additionally, detected field lines can be compared to a field model in order to obtain global motion estimates, detect events, such as the scoring of a goal, and/or provide enhanced reality, such as by insertion of graphics. Field line detection also allows enhancing or preserving field lines to improve the user experience.

[0004] Several approaches have been taken to the problem of field line detection in sports videos. Cai and Tai, for example, have described an RGB-model based thresholding approach to remove non-white pixels, followed by a line-patching algorithm based on analyzing white pixels in the neighborhood. (Z. Q. Cai et al., "Line detection in soccer video," IEEE International Conference on Information, Communications and Signal Processing (ICICS), 2005.) RGB values, however, are subject to the vagaries of lighting and exhibit high variance between images for the same "white" colors.

[0005] Ribeiro and Lopes have proposed using edge detection and a selection grid to identify key points. Collinear points are then extracted and pieced together to obtain the field lines. (F. Ribeiro et al., "Real time game field limits recognition for robot self-localization using collinearity in middle-size RoboCup soccer," Robótica: automação, controlo, instrumentação, 2003.)

[0006] Liu et al. have incorporated field line detection into their framework for American football analysis and use the parallel and uniform distribution of lines in the field to refine detection results. Using the distribution of lines, broken lines are filled by finding the best possible line to fit across the detected line fragments. (Tie-Yan Liu et al., "Effective feature extraction for play detection in American football video," Proceedings of the 11th International Multimedia Modelling Conference (MMM), 2005.)

Summary

[0007] In an exemplary embodiment in accordance with the principles of the invention, a method for accurately and robustly detecting field lines in an image, such as a frame of a sports video, includes: convolving the image with a Laplacian operator to generate a Laplacian image emphasizing likely line pixels; removing non-playfield related pixels from the Laplacian image, including pixels lying outside of the playfield and pixels representing players within the playfield; contrast stretching the resultant Laplacian image; iteratively applying a Hough transform to the contrast-stretched Laplacian image to detect lines, wherein each iteration results in the detection of one or more field lines which are removed prior to the next iteration, thus reducing the complexity of the next iteration; identifying fragments along each field line detected; filling gaps in the field lines, wherein the gaps are analyzed so that only those gaps that are not likely due to occlusion by players or other non-line objects are filled; and providing a binary mask indicating the pixels representing field lines.

[0008] In view of the above, and as will be apparent from the detailed description, other embodiments and features are also possible and fall within the principles of the invention.

Brief Description of the Figures

[0009] Some embodiments of apparatus and/or methods in accordance with embodiments of the present invention are now described, by way of example only, and with reference to the accompanying figures, in which:

[0010] FIG. 1 is a high-level flowchart of an exemplary method in accordance with the principles of the invention;

[0011] FIG. 2 is a more detailed flowchart of the method of FIG. 1;

[0012] FIG. 3 is a grayscale image of a frame grabbed from a video to be processed in accordance with the method of FIG. 2;

[0013] FIG. 4 is the Laplacian of the grayscale image of FIG. 3;

[0014] FIG. 5 is a Harris feature mask generated by applying Harris corner detection to the grayscale image of FIG. 3;

[0015] FIG. 6 is a contrast-stretched Laplacian image;

[0016] FIG. 7 is a noisy, thresholded contrast-stretched Laplacian image;

[0017] FIG. 8 is the image of FIG. 7 after noise removal, showing the line fragments that remain;

[0018] FIG. 9 is an image showing fragments detected along a line;

[0019] FIG. 10 is an image indicating the start and end points of the line segments of the fragments shown in FIG. 9;

[0020] FIG. 11 shows the gaps detected between the start and end points shown in FIG. 10;

[0021] FIG. 12 is an image showing detected lines, including incorrect cross-lines;

[0022] FIG. 13 shows the detected field lines in the original image; and

[0023] FIG. 14 is a block diagram of an exemplary system in accordance with the principles of the invention.

Description of Embodiments

[0024] Other than the inventive concept, the elements shown in the figures are well known and will not be described in detail. For example, other than the inventive concept, familiarity with digital image processing techniques is assumed and not described herein. It should also be noted that embodiments of the invention may be implemented using various combinations of hardware and software. Finally, like-numbers in the figures represent similar elements.

[0025] FIG. 1 is a high-level flowchart summarizing an exemplary method in accordance with the principles of the invention. The exemplary method operates on image frames grabbed from a video stream to detect field lines therein. The image frames can be in any suitable colorspace, such as the RGB, YUV or HSV colorspace. The exemplary method relies on one or more expected features of the lines of the field depicted in the video, such as the fact that in the illustrative case of soccer, field lines are almost always a high-contrast feature in the image and that, except for the ellipses in the center of the field and near the goal-posts, most field lines are straight.

[0026] The method of FIG. 1 can be broadly divided into three main procedures: pre-processing 110, line detection 120, and gap filling 130. Generally, in pre-processing 110, field line fragments are extracted and noise is minimized. Pre-processing 110 includes: generating the Laplacian of the image, 111; removing from the Laplacian image the audience and other non-playfield features using playfield detection, 112; removing from the Laplacian image non-playfield features on the playfield, such as players, using Harris corner detection, 113; performing contrast stretching and binarization (thresholding) of the Laplacian image, 114; and performing removal of noise (such as small non-linear objects) on the contrast-stretched and thresholded Laplacian image, 115.

[0027] In line detection 120, fragments lying along a line are identified. In gap filling 130, any gaps (not caused by occluding objects) between fragments on a line are filled.

[0028] FIG. 2 shows a more detailed flowchart of the exemplary method of FIG. 1. Using a frame grabbed from a video stream at 201, the intensity, or grayscale, values of the image are obtained at 202. For a YUV colorspace frame, for example, the Y-component of the frame can be used to generate the corresponding grayscale image; for an HSV colorspace frame, the V-component can be used; and for an RGB frame, the grayscale image can be generated using a weighted sum of the three components. FIG. 3 shows an illustrative grayscale image of a frame grabbed from a video of a soccer match.

[0029] At step 203, the Laplacian of the grayscale image is generated. FIG. 4 illustrates the Laplacian image that results from convolving the original grayscale image of FIG. 3 with a Laplacian operator. Because the Laplacian of an image highlights rapid intensity changes, convolving the image with a 3x3 Laplacian operator efficiently detects the line pixels. Because the field lines are relatively thin features, the Laplacian operator should have a relatively small window, such as 3x3, for example. However, due to perspective distortion, some of the field line pixels farther away from the camera, such as those towards the top of the frame, may be weaker in contrast with respect to other field lines. In order to compensate for weaker field line pixels and to utilize the high-contrast nature of the field line pixels, contrast stretching is performed, as described below.
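The grayscale conversion and Laplacian steps described above can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the luma weights are the standard ITU-R BT.601 ones, and the kernel is the common 3x3 Laplacian.

```python
import numpy as np

# Standard 3x3 Laplacian kernel; a small window suits thin field lines.
LAPLACIAN_3X3 = np.array([[0,  1, 0],
                          [1, -4, 1],
                          [0,  1, 0]], dtype=float)

def rgb_to_gray(rgb):
    """Weighted sum of the R, G, B components (BT.601 luma weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def laplacian(gray):
    """Convolve a 2-D image with the 3x3 Laplacian (edge-replicated borders)."""
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    # The kernel is symmetric, so correlation equals convolution here.
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN_3X3[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out
```

An isolated bright pixel produces the expected -4 response at its center and +1 at its 4-neighbors, which is the "rapid intensity change" behavior the text relies on.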

[0030] Because field line fragments are high-contrast features compared to the playfield but not to other features such as the players and the audience, pixels representing such other features should be removed from the image before contrast stretching is performed. To this end, playfield detection is performed at 204, in which a playfield mask is generated. The purpose of playfield detection 204 is to remove pixels outside of the playfield boundaries, including pixels representing the audience, sign boards, and the like. In an exemplary embodiment, the playfield mask that is generated at 204 is a binary mask, with pixels corresponding to the playfield having maximum intensity (white) and pixels outside of the playfield having minimum intensity (black). Playfield detection 204 can be performed in accordance with any suitable technique, such as that described in Y. Liu et al., "Playfield Detection Using Adaptive GMM and Its Application," IEEE ICASSP '05, pp. 421-424, March 2005. Note that some playfield detection techniques based on the detection of grass or another such playfield feature of a distinct color may require that the frame first be converted to the RGB colorspace.

[0031] To identify those pixels representing non-line features within the playfield, such as players, Harris corner detection is performed at 205. Harris corner detection is sensitive to pixels with high corner strength. FIG. 5 illustrates the Harris feature mask that results from the application of Harris corner detection to the original grayscale image of FIG. 3. Primarily, the white blobs in FIG. 5 correspond to the players on the field to be removed. Some blobs, however, may correspond to corners or intersections of lines, removal of which would lead to an undesirable loss of line pixels. To minimize such loss, the following strategy can be applied. First, all of the blobs obtained by applying Harris corner detection are morphologically dilated to encompass surrounding pixels. Then, only those blobs in the Harris feature mask larger than a certain threshold, which are more likely to be players, are maintained in the mask, whereas the smaller blobs are removed from the mask (set to black). In an exemplary embodiment, any blob with an area of at least 60 pixels is deemed to correspond to a player.

[0032] At step 206, the Laplacian image (FIG. 4) of the grayscale image generated at 203, the playfield mask generated at 204 and the Harris feature mask (FIG. 5) generated at 205 are combined so that pixels corresponding to the players and non-field portions of the image are removed from the Laplacian image. In an exemplary embodiment, the combination step 206 can be carried out by multiplying the intensities of the Laplacian image, the playfield mask, and the inverse of the Harris feature mask.

[0033] In order to compensate for weaker field line pixels and to utilize the high-contrast nature of the acquired field line pixels, contrast stretching (or normalization) is performed at 210 on the Laplacian of the playfield without players. Contrast in the image is improved by "stretching" (or normalizing) the range of intensity values in the Laplacian image so that they lie within a desired range of values. In an exemplary embodiment, contrast stretching is performed in accordance with the following expression:

P_out = (P_in - c) * (b - a) / (d - c) + a     (1)

where:

P_out = contrast-stretched Laplacian of the image;
P_in = Laplacian of the image;
a, b = minimum and maximum Laplacian values in the image, respectively; and
c, d = lower and upper limits specifying the bottom 1% and top 1% of all Laplacian values, respectively.

[0034] FIG. 6 shows the contrast-stretched image generated from the Laplacian of the playfield without players. Note the enhanced field lines in FIG. 6.
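The percentile-based contrast stretch described above can be sketched in a few lines of NumPy. This is an illustrative sketch under the definitions in the text (the 1% limits c, d are taken as the 1st and 99th percentiles, and the stretched output is clipped back to the image's own [min, max] range); it is not the patented implementation.

```python
import numpy as np

def contrast_stretch(lap, low_pct=1.0, high_pct=99.0):
    """Map the [1st, 99th] percentile range of Laplacian values onto the
    full [min, max] range of the image, clipping the result."""
    a, b = lap.min(), lap.max()                      # full range of the image
    c, d = np.percentile(lap, [low_pct, high_pct])   # robust 1% limits
    if d == c:                                       # flat image: nothing to stretch
        return lap.copy()
    out = (lap - c) * (b - a) / (d - c) + a
    return np.clip(out, a, b)
```

The effect is that weak line pixels just inside the percentile limits are pushed toward the extremes, which is what lets the later thresholds S and W separate line pixels from the playfield.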

[0035] The contrast-stretched Laplacian image (FIG. 6) is then thresholded at step 211 using a high-value threshold S to identify line pixels. In other words, those pixels with intensity values of at least S are set to white (maximum intensity), whereas the remaining pixels are set to black (minimum intensity). For a contrast-stretched Laplacian image whose intensity has been normalized, an exemplary value for S is 0.7. FIG. 7 shows the resultant thresholded, contrast-stretched binary image. In an exemplary embodiment, the value of threshold S is preferably dependent on the vertical location within the frame to offset the weaker contrast of line pixels towards the top of the frame due to perspective distortion.

[0036] The thresholding process of step 211, however, returns a substantial amount of noise because of the contrast stretching. Removal of small, non-linear objects is therefore performed at step 212, where region analysis is used to identify non-line regions. First, the 8-connected components in the binary image (FIG. 7) are labeled. Each labeled region is then analyzed using the following properties: 1) area, the number of pixels in the region; and 2) eccentricity, the eccentricity of the ellipse that has the same second moment of area as the region, a measure of how elongated the region is. Because line segments have very high values of eccentricity, they can be identified by values greater than 0.99. Some curvilinear line fragments from curved field lines (e.g., the center circle in a soccer field) may have a lower value of eccentricity but will usually have a fairly large area. Such fragments are to be retained. Therefore, only labeled regions with comparatively low eccentricity and low area are determined to be noise, and those regions are removed.
[0037] After removal of the labeled regions determined to be noise, a morphological thinning operation is performed until each remaining labeled region is shrunk to a minimally connected, one-pixel-wide region, completing step 212. FIG. 8 illustrates the resultant binary image.
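The region analysis of step 212 can be sketched as follows. This is a simplified pure-NumPy version: the BFS labeling and the moment-based eccentricity are illustrative stand-ins for standard library routines, the names `label_8`, `eccentricity`, and `remove_noise` are made up for this sketch, and morphological thinning is omitted.

```python
import numpy as np
from collections import deque

def label_8(mask):
    """Label the 8-connected components of a boolean mask (simple BFS)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        count += 1
        labels[sy, sx] = count
        q = deque([(sy, sx)])
        while q:
            y, x = q.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                            and mask[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = count
                        q.append((ny, nx))
    return labels, count

def eccentricity(ys, xs):
    """Eccentricity of the ellipse with the same second moments as the region."""
    if len(ys) < 2:
        return 0.0
    y = ys - ys.mean()
    x = xs - xs.mean()
    cov = np.cov(np.vstack([x, y])) + np.eye(2) * 1e-12
    l1, l2 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    return np.sqrt(max(0.0, 1.0 - l2 / l1))

def remove_noise(mask, min_ecc=0.99, min_area=60):
    """Keep only regions that are elongated (line-like) or large (curved lines)."""
    labels, n = label_8(mask)
    keep = np.zeros_like(mask)
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if len(ys) >= min_area or eccentricity(ys, xs) > min_ecc:
            keep[labels == i] = True
    return keep
```

A straight one-pixel-wide segment has eccentricity essentially 1.0 and is retained, while a small compact blob has low eccentricity and low area and is discarded, matching the criteria in the text.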

[0038] As illustrated in FIG. 8, after the above-described pre-processing, the resultant image will typically contain a large number of field line fragments. A fragment analysis is then performed to identify fragments lying along field lines. In the exemplary method of FIG. 2, the fragment analysis is performed in the following iterative procedure.

[0039] At step 214, a Hough transform is applied to the binary image (FIG. 8) obtained as a result of pre-processing. The peak of the Hough transform corresponds to the line containing the largest number of fragment pixels. As described below, it is contemplated that several iterations of step 214 will be performed on the image, with the line corresponding to the peak of the Hough transform in the image being removed from the image in each subsequent iteration.
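The Hough voting of step 214 can be sketched as follows. This is a minimal accumulator in pure NumPy (the function name `hough_peak` and the discretization choices are illustrative, not from the patent): each fragment pixel votes for every line x*cos(theta) + y*sin(theta) = rho passing through it, and the accumulator peak identifies the line with the most fragment pixels.

```python
import numpy as np

def hough_peak(mask, n_theta=180):
    """Vote in (rho, theta) space and return (votes, rho, theta) of the
    strongest line in a boolean fragment mask."""
    h, w = mask.shape
    diag = int(np.ceil(np.hypot(h, w)))       # largest possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    ys, xs = np.nonzero(mask)
    for t_idx, t in enumerate(thetas):
        # Each foreground pixel contributes one vote per theta.
        rhos = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc, (rhos, t_idx), 1)
    r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
    return acc.max(), r_idx - diag, thetas[t_idx]
```

In the iterative procedure of the text, the pixels of the winning line would then be removed from `mask` and `hough_peak` re-applied, so that each iteration works on fewer pixels.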

[0040] At 216, the highest peak in the Hough transform is located and compared to the highest Hough transform peak found for the image thus far. If the peak of the Hough transform is comparable to the highest peak found so far in the iterative process, operation proceeds to step 218, otherwise operation jumps to step 228, described below. In an exemplary embodiment, a peak is considered comparable to the highest peak if it is at least 10% of the highest peak.

[0041] At step 218, a refinement of the line corresponding to the Hough transform peak is performed. The "best" line found by the Hough transform is a coarse fit to the optimal line; finding a fine fit directly with the Hough transform would be computationally expensive. The line found by the Hough transform can be characterized by the parameters ρ and θ, where ρ indicates the perpendicular distance of the line from the origin of the image and θ indicates the angle of the line with respect to the horizontal axis of the image. At 218, keeping θ constant, the parameter ρ is varied over a small range about the line, and the line containing the largest number of fragment pixels is retained. A local search is thus performed in the proximity of the line found by the Hough transform for a more accurate location of the line in the image with the largest number of fragment pixels. This line is highlighted in FIG. 9.

[0042] Once the aforementioned line has been identified and refined, the gaps between the fragments on the line are detected at step 222. First, the pixels in a fragment are sorted according to their locations (row and column values) in order to identify the start- and end-points of the line segments of each fragment. The segment start- and end-points for the currently processed line are highlighted in FIG. 10. Second, each fragment end-point is then matched with the fragment start-point closest to it (in Euclidean distance) such that there are no fragment pixels between the two points. Ensuring that there are no fragment pixels between matched end- and start-points avoids matching up the start- and end-points of the same fragment with each other. Once the end- and start-points have been matched, the pixels between each pair (representing each gap) are stored for later use. FIG. 11 shows the detected gaps highlighted.
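The gap detection of step 222 can be sketched in one dimension. This simplified version (the helper name `find_gaps` is made up for this sketch) assumes the fragment pixels have already been projected onto positions along the refined line; the method in the text instead matches end-points to their nearest start-points in 2-D Euclidean distance.

```python
import numpy as np

def find_gaps(positions):
    """Given pixel positions along a detected line, return the gaps between
    consecutive fragments as (end_of_fragment, start_of_next_fragment) pairs."""
    positions = np.sort(np.asarray(positions))
    gaps = []
    for prev, cur in zip(positions[:-1], positions[1:]):
        if cur - prev > 1:                 # a break between two fragments
            gaps.append((int(prev), int(cur)))
    return gaps
```

Sorting the positions first mirrors the sorting of fragment pixels by row and column described in the text, and the `> 1` test ensures that adjacent pixels of the same fragment are never paired as a gap.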
[0043] Once the gaps in the line have been identified, all of the fragments along the line are removed at step 224 before operation proceeds to step 226 at which a determination is made, as described below, as to whether the iterative fragment analysis is done. If not, the fragment analysis described above is repeated starting from step 214 for the line corresponding to the next highest peak in the Hough transform. Each iteration of the fragment analysis involves re-application of the Hough transform and results in the removal of the line containing the maximum number of fragment pixels (i.e. the peak in the transform).

[0044] Removing the detected fragment pixels after each iteration allows efficient detection of shorter field lines, such as the lines near the soccer goal area, which are substantially shorter than the sidelines. Due to the discretization of Hough transform parameters, however, some collections of fragments from different field lines may be incorrectly detected as belonging to the same field line, as illustrated in FIG. 12. This can be avoided by removing the line with the largest number of fragment pixels and repeating the fragment analysis, as described. Since pre-processing has reduced the number of pixels, and since the number of pixels to be analyzed decreases with each successive iteration, this should be computationally feasible.

[0045] Note that the fragment analysis performed in steps 214-224 is repeated for successively smaller lines until it is determined at step 226 that the fragment analysis is done. This determination can be based on one or more criteria. For example, if a given number of lines have been analyzed, a determination can be made at step 226 to end further analysis. This given number may be based, for example, on a priori knowledge of the number of lines on the play field appearing in the video being analyzed. If the play field is known to have eleven lines, for example, processing can be completed after eleven lines have been analyzed. Another criterion can be based on the number of fragment pixels found to be on a line. Once most of the field lines have been detected and removed, the method will proceed to find successively smaller lines. To prevent unnecessary analysis of trivial lines, a threshold can be placed on the minimum number of fragment pixels required to be on a "line." This value can be specified as a factor of the first line detected, ostensibly the line in the frame with the largest number of fragment pixels. One or more of the aforementioned or other criteria can be used independently or combined for step 226.

[0046] After the field lines in the image and the gaps in them have been identified in accordance with the above-described iterative procedure, a gap filling process is applied at 228 to each gap. In an exemplary embodiment, the gap filling process entails extracting all pixels in the gap which pass a weaker threshold W on the contrast-stretched Laplacian image (FIG. 6). These pixels represent potential field line pixels in the gap. A gap is filled only if there are enough pixels in the gap with intensities which equal or exceed the threshold W. Applying this criterion prevents the filling of gaps in field lines caused by occluding objects. If this is not a concern, this criterion can be eliminated and all gaps between fragments on a field line filled. For normalized intensity, an exemplary value for the threshold W is 0.1.

[0047] In an exemplary embodiment, the condition to fill a gap is defined as follows:

|P_weak ∩ P_gap| / |P_gap| >= F     (2)

where:

P_weak = set of pixels whose values in P_out (see FIG. 6 and Eq. (1)) are greater than W;
P_gap = set of pixels in the detected gap; and
F = minimum fraction of weak pixels along the gap required to allow gap filling (e.g., F = 0.3).

[0048] Thus, if a gap meets condition (2), it is filled. Note that the entire gap is treated equally, meaning that it is either all filled or all left unfilled. In a further embodiment, different pixels in the gap can be treated differently, so that, for instance, line pixels in a shadow are filled whereas line pixels behind a player are not. In an exemplary embodiment, the value of threshold W is preferably dependent on the vertical location within the frame to compensate for the weaker contrast of line pixels towards the top of the frame due to perspective distortion. The gap filling step 228 is repeated for all of the gaps in all of the lines detected as described above.
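The gap-filling test described above can be sketched as a single predicate. This is an illustrative sketch (the function name `should_fill_gap` is made up): a gap is filled only when a large enough fraction of its pixels pass the weak threshold W in the contrast-stretched Laplacian image, so gaps caused by occluding players, whose pixels fail the threshold, are left unfilled.

```python
import numpy as np

def should_fill_gap(p_out, gap_pixels, W=0.1, F=0.3):
    """Return True if at least a fraction F of the gap's pixels exceed the
    weak threshold W in the contrast-stretched Laplacian image p_out."""
    vals = np.array([p_out[y, x] for y, x in gap_pixels])
    return (vals > W).mean() >= F
```

With the exemplary values W = 0.1 and F = 0.3 from the text, a gap in which most pixels retain some line contrast (e.g., a faint or shadowed stretch of line) is filled, while a gap fully occluded by a player is not.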

[0049] The detected field lines are then output at 230, such as in a binary mask identifying the field line pixels, in which field line pixels are white and all other pixels are black. The field line mask can then be used for a variety of applications, such as to highlight the field lines in the original image, as illustrated in FIG. 13.

[0050] In an exemplary embodiment, temporal information may be used to improve the line detection process. If one or more field lines are detected with high accuracy, then global motion estimates can be used to track those lines between frames, thereby reducing the computational overhead of finding those field lines again.

[0051] In an exemplary embodiment, texture information can be used to obtain more coherent lines. Texture information can be obtained, for example, by computing the local standard deviation at each pixel to generate an image with sharper lines in the pre-processing phase.

[0052] In an exemplary embodiment, machine learning can be used to generate values for one or more parameters, such as the limits for contrast stretching the Laplacian image (see Eq. (1)), and the strong and weak thresholds described above for detecting line pixels using the Laplacian image (FIG. 7 and Eq. (2)).

[0053] FIG. 14 is a block diagram of an exemplary system 1400 in accordance with the principles of the invention. The system 1400 can be used to generate a field line mask from a video stream of a sports event, such as a soccer match. The system 1400 comprises a frame grabber 1410 and a digital video editor 1420. Frame grabber 1410 captures one or more frames of the video stream for processing by digital video editor 1420 in accordance with the principles of the invention. Digital video editor 1420 comprises a processor 1421, memory 1422 and I/O 1423. In an exemplary embodiment, digital video editor 1420 may be implemented as a general purpose computer executing software loaded in memory 1422 for carrying out field line detection as described above.

[0054] In view of the above, the foregoing merely illustrates the principles of the invention and it will thus be appreciated that those skilled in the art will be able to devise numerous alternative arrangements which, although not explicitly described herein, embody the principles of the invention and are within its spirit and scope. For example, although illustrated in the context of a particular sequence of steps, the steps shown may be combined, divided or re-ordered to accomplish the principles of the invention. Similarly, although shown as separate elements, some or all of the elements may be implemented in a stored-program-controlled processor, e.g., a digital signal processor or a general purpose processor, which executes associated software, e.g., corresponding to one, or more, steps, which software may be embodied in any of a variety of suitable storage media. Further, the principles of the invention are applicable to various types of systems. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the invention.

[0055] Additionally, it will be appreciated by those skilled in the art that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

[0056] In the claims hereof any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements which performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function.

The invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. Applicant thus regards any means which can provide those functionalities as equivalent to those shown herein. Finally, and unless otherwise explicitly specified herein, the drawings are not drawn to scale.

Claims

1. A computer implemented method for detecting field lines in an image comprising the steps of: generating a Laplacian image from the image; detecting field lines by iteratively applying a Hough transform to the Laplacian image, wherein each iteration results in the detection of a field line which is removed from the Laplacian image prior to the next iteration; performing gap filling in the detected field lines; and generating a binary mask indicating pixels representing the field lines.
2. The method of claim 1, wherein the image is a frame of a sports video.
3. The method of claim 1 further comprising: removing non-playfield related pixels from the Laplacian image, including pixels lying outside of the playfield and pixels representing players within the playfield.
4. The method of claim 3 further comprising: contrast stretching the resultant Laplacian image.
5. The method of claim 4 further comprising: performing a thresholding operation on the contrast-stretched Laplacian image.
6. The method of claim 4 further comprising: removing noise from the contrast-stretched Laplacian image.
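A minimal sketch of the pre-processing chain in claims 3 through 6 (Laplacian image, contrast stretching, thresholding), for illustration only. The 4-neighbour Laplacian kernel and the threshold value are assumptions; the claims leave both open:

```python
import numpy as np

def laplacian_image(img):
    """4-neighbour Laplacian (kernel choice is an assumption)."""
    img = img.astype(np.float64)
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return lap

def contrast_stretch(lap):
    """Linearly map Laplacian magnitudes onto the full [0, 255] range."""
    mag = np.abs(lap)
    lo, hi = mag.min(), mag.max()
    if hi == lo:
        return np.zeros(mag.shape, dtype=np.uint8)
    return ((mag - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def threshold(img, t=128):
    """Binarize the stretched image, keeping only strong line responses."""
    return (img >= t).astype(np.uint8)
```

On a flat playfield with a bright painted line, only the line itself survives the stretch-then-threshold step; weaker responses on either side of the line fall below the cut.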
7. The method of claim 3 further comprising: generating a playfield mask; and applying the playfield mask to the Laplacian image.
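Claim 7 does not say how the playfield mask is generated (that is covered in the specification). One common approach in sports-video analysis, shown here purely as an illustrative assumption, is dominant-colour segmentation: the most frequent colour is taken to be the grass, and pixels near it form the mask. The bin size and distance margin are invented parameters:

```python
import numpy as np

def playfield_mask(rgb, margin=40):
    """Mark pixels close to the dominant (most frequent) colour, assumed grass."""
    # quantise colours into coarse bins to find the dominant one quickly
    q = (rgb // 32).reshape(-1, 3)
    colors, counts = np.unique(q, axis=0, return_counts=True)
    dom = colors[counts.argmax()] * 32 + 16   # centre of the dominant bin
    # L1 colour distance to the dominant colour
    dist = np.abs(rgb.astype(np.int32) - dom).sum(axis=2)
    return dist < margin
```

Applying this mask to the Laplacian image discards edge responses from the crowd, billboards, and other regions outside the playfield.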
8. The method of claim 3 further comprising: generating a player mask; and applying the player mask to the Laplacian image.
9. The method of claim 8, wherein generating the player mask includes: performing Harris corner detection on the image.
10. The method of claim 9 further comprising: dilating features generated by the Harris corner detection; and removing features smaller than a predetermined threshold from the player mask.
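The player-mask steps of claims 8 through 10 (Harris corner detection, then dilation) can be sketched as below. This is an illustration, not the patent's implementation: the gradient scheme, window size, Harris constant `k`, response threshold, and dilation count are all assumptions, and the small-feature removal of claim 10 is omitted for brevity:

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response from central-difference gradients and a 3x3 window."""
    img = img.astype(np.float64)
    Ix = np.zeros_like(img); Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    def box3(a):  # sum over each 3x3 neighbourhood
        s = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                s[1:-1, 1:-1] += a[1 + dy:a.shape[0] - 1 + dy,
                                   1 + dx:a.shape[1] - 1 + dx]
        return s
    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    tr = Sxx + Syy
    return det - k * tr * tr  # high at corners, negative along straight edges

def player_mask(img, thresh, iters=2):
    """Threshold the Harris response, then dilate to merge nearby corner blobs."""
    mask = harris_response(img) > thresh
    for _ in range(iters):  # 4-neighbour binary dilation
        m = mask.copy()
        m[1:, :] |= mask[:-1, :]; m[:-1, :] |= mask[1:, :]
        m[:, 1:] |= mask[:, :-1]; m[:, :-1] |= mask[:, 1:]
        mask = m
    return mask
```

Players are corner-rich while painted field lines are straight-edge structures, so the Harris response separates the two: straight edges score negatively and are excluded from the mask.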
11. The method of claim 1, wherein performing gap filling includes: analyzing gaps in the detected field lines so that gaps caused by occluding objects are not filled.
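The occlusion-aware gap filling of claim 11 can be illustrated on samples taken along one detected line: short interior gaps are bridged, but a gap whose samples fall on an occluding object (e.g. a pixel from the player mask) is left open. The one-dimensional framing and the `max_gap` length are assumptions for the sketch:

```python
import numpy as np

def fill_gaps(on, occluded, max_gap=10):
    """on: 1-D bool array of samples along a detected line (True = edge pixel).
    occluded: 1-D bool array marking samples covered by occluding objects.
    Fill short gaps between edge pixels unless the gap crosses an occluder."""
    on = on.copy()
    idx = np.nonzero(on)[0]
    for a, b in zip(idx[:-1], idx[1:]):
        gap = np.arange(a + 1, b)
        if 0 < len(gap) <= max_gap and not occluded[gap].any():
            on[gap] = True
    return on
```

This is the point of the claim: a break caused by worn paint is repaired, while a break caused by a player standing on the line is preserved so the mask does not paint through the player.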
12. The method of claim 1, wherein detecting field lines includes: refining locations of field lines detected by applying the Hough transform.
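One plausible reading of the refinement in claim 12 (again an illustrative assumption, not the specification's method) is to refit each coarse Hough line by total least squares over the edge pixels that support it, which compensates for the accumulator's quantisation of rho and theta:

```python
import numpy as np

def refine_line(binary, rho, theta, tol=3.0):
    """Refit (rho, theta) over the edge pixels near the coarse Hough estimate."""
    ys, xs = np.nonzero(binary)
    d = np.abs(xs * np.cos(theta) + ys * np.sin(theta) - rho)
    xs, ys = xs[d <= tol], ys[d <= tol]   # supporting pixels only
    if len(xs) < 2:
        return rho, theta
    # total least squares: the line's normal is the eigenvector of the
    # covariance matrix with the smallest eigenvalue
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    mean = pts.mean(axis=0)
    w, v = np.linalg.eigh(np.cov((pts - mean).T))
    n = v[:, 0]                           # minor eigenvector = line normal
    theta_r = np.arctan2(n[1], n[0]) % np.pi
    rho_r = mean[0] * np.cos(theta_r) + mean[1] * np.sin(theta_r)
    return rho_r, theta_r
```

Because the refit uses the actual pixel coordinates rather than accumulator bins, sub-pixel line positions are recovered even from a coarse Hough grid.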
13. A computer program recorded on a computer-readable recording medium, said program causing the computer to detect field lines in an image by executing the steps of: generating a Laplacian image from the image; detecting field lines by iteratively applying a Hough transform to the Laplacian image, wherein each iteration results in the detection of a field line which is removed from the Laplacian image prior to the next iteration; performing gap filling in the detected field lines; and generating a binary mask indicating pixels representing the field lines.
14. The computer program of claim 13, wherein the image is a frame of a sports video.
15. The computer program of claim 13 causing the computer to detect field lines in an image by executing the further steps of: removing non-playfield related pixels from the Laplacian image, including pixels lying outside of the playfield and pixels representing players within the playfield.
16. The computer program of claim 15 causing the computer to detect field lines in an image by executing the further steps of: contrast stretching the resultant Laplacian image.
17. The computer program of claim 16 causing the computer to detect field lines in an image by executing the further steps of: performing a thresholding operation on the contrast-stretched Laplacian image.
18. The computer program of claim 16 causing the computer to detect field lines in an image by executing the further steps of: removing noise from the contrast-stretched Laplacian image.
19. The computer program of claim 15 causing the computer to detect field lines in an image by executing the further steps of: generating a playfield mask; and applying the playfield mask to the Laplacian image.
20. The computer program of claim 15 causing the computer to detect field lines in an image by executing the further steps of: generating a player mask; and applying the player mask to the Laplacian image.
21. The computer program of claim 20, wherein generating the player mask includes: performing Harris corner detection on the image.
22. The computer program of claim 21 causing the computer to detect field lines in an image by executing the further steps of: dilating features generated by the Harris corner detection; and removing features smaller than a predetermined threshold from the player mask.
23. The computer program of claim 13, wherein performing gap filling includes: analyzing gaps in the detected field lines so that gaps caused by occluding objects are not filled.
24. The computer program of claim 13, wherein detecting field lines includes: refining locations of field lines detected by applying the Hough transform.
PCT/US2010/000032 2009-01-16 2010-01-07 Detection of field lines in sports videos WO2010083021A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US20542809P true 2009-01-16 2009-01-16
US61/205,428 2009-01-16

Publications (1)

Publication Number Publication Date
WO2010083021A1 true WO2010083021A1 (en) 2010-07-22

Family

ID=42340046

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/000032 WO2010083021A1 (en) 2009-01-16 2010-01-07 Detection of field lines in sports videos

Country Status (1)

Country Link
WO (1) WO2010083021A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680560A (en) * 2015-02-28 2015-06-03 东华大学 Fast sports venues detection method based on image line element correspondence

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030123736A1 (en) * 2001-12-12 2003-07-03 Xun Xu Imlementation of hough transform and its application in line detection and video motion analysis
US20040130567A1 (en) * 2002-08-02 2004-07-08 Ahmet Ekin Automatic soccer video analysis and summarization
US20080037876A1 (en) * 1999-08-09 2008-02-14 Michael Galperin Object based image retrieval
US20080138029A1 (en) * 2004-07-23 2008-06-12 Changsheng Xu System and Method For Replay Generation For Broadcast Video
US20080199044A1 (en) * 2007-02-20 2008-08-21 Shingo Tsurumi Image Processing Apparatus, Image Processing Method, and Program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080037876A1 (en) * 1999-08-09 2008-02-14 Michael Galperin Object based image retrieval
US20030123736A1 (en) * 2001-12-12 2003-07-03 Xun Xu Imlementation of hough transform and its application in line detection and video motion analysis
US20040130567A1 (en) * 2002-08-02 2004-07-08 Ahmet Ekin Automatic soccer video analysis and summarization
US20080138029A1 (en) * 2004-07-23 2008-06-12 Changsheng Xu System and Method For Replay Generation For Broadcast Video
US20080199044A1 (en) * 2007-02-20 2008-08-21 Shingo Tsurumi Image Processing Apparatus, Image Processing Method, and Program

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHOI ET AL.: "Where are the ball and players?: Soccer Game Analysis with Color-based Tracking and Image Mosaick", ICIAP, 31 December 1997 (1997-12-31), Retrieved from the Internet <URL:http://academic.research.microsoft.com/Paper/191053.aspx?viewType=1> [retrieved on 20100217] *
CHOI ET AL.: "Where are the ball and players?: Soccer Game Analysis with Color-based Tracking and Image Mosaick", ICIAP, 31 December 1997 (1997-12-31), Retrieved from the Internet <URL:http://portal.acm.org/citation.cfm?id=686879> [retrieved on 20100217] *
STAHL ET AL.: "Globally Optimal Grouping for Symmetric Closed Boundaries by Combining Boundary and Region Information", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 30, no. 3, March 2008 (2008-03-01), Retrieved from the Internet <URL:http://www.cse.sc.edu/~songwang/document/pami08a.pdf> [retrieved on 20100217] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680560A (en) * 2015-02-28 2015-06-03 东华大学 Fast sports venues detection method based on image line element correspondence
CN104680560B (en) * 2015-02-28 2017-10-24 东华大学 Fast sports venues detection method based on image line element correspondence

Similar Documents

Publication Publication Date Title
Zhang et al. Image segmentation based on 2D Otsu method with histogram analysis
Gatos et al. ICDAR 2009 document image binarization contest (DIBCO 2009)
Jung Efficient background subtraction and shadow removal for monochromatic video sequences
EP1683105B1 (en) Object detection in images
Cheriet et al. A recursive thresholding technique for image segmentation
US7783118B2 (en) Method and apparatus for determining motion in images
JP3679512B2 (en) Image extraction apparatus and method
Giakoumis et al. Digital image processing techniques for the detection and removal of cracks in digitized paintings
Dorini et al. White blood cell segmentation using morphological operators and scale-space analysis
US20150071530A1 (en) Image processing apparatus and method, and program
CN102388391B (en) Video matting based on foreground-background constraint propagation
Gllavata et al. A robust algorithm for text detection in images
Peng et al. Parameter selection for graph cut based image segmentation
EP1840798A1 (en) Method for classifying digital image data
CN102722891B (en) Method for detecting image significance
US6707940B1 (en) Method and apparatus for image segmentation
Nouar et al. Improved object tracking with camshift algorithm
Kranthi et al. Automatic number plate recognition
JP2008192131A (en) System and method for performing feature level segmentation
US9292759B2 (en) Methods and systems for optimized parameter selection in automated license plate recognition
US20060008147A1 (en) Apparatus, medium, and method for extracting character(s) from an image
Yu et al. Detecting circular and rectangular particles based on geometric feature detection in electron micrographs
Jiang et al. Mathematical-morphology-based edge detectors for detection of thin edges in low-contrast regions
JP4373840B2 (en) Object Tracking method, object tracking program and a recording medium, and, animal tracking apparatus
Qian et al. Video background replacement without a blue screen

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10731919

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10731919

Country of ref document: EP

Kind code of ref document: A1