WO2000034918A1 - Interactive edge detection markup process - Google Patents

Interactive edge detection markup process

Info

Publication number
WO2000034918A1
Authority
WO
WIPO (PCT)
Prior art keywords
edges
image
edge
threshold level
annulus
Prior art date
Application number
PCT/US1999/028778
Other languages
French (fr)
Other versions
WO2000034918A9 (en)
Inventor
Jean-Pierre Schott
Original Assignee
Synapix, Inc.
Priority date
Filing date
Publication date
Application filed by Synapix, Inc. filed Critical Synapix, Inc.
Priority to AU20416/00A priority Critical patent/AU2041600A/en
Publication of WO2000034918A1 publication Critical patent/WO2000034918A1/en
Publication of WO2000034918A9 publication Critical patent/WO2000034918A9/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence


Abstract

A technique is disclosed for iteratively and interactively identifying edges in a graphic image editing system. The system uses a variable contrast edge detection threshold to first find only strong, sharp edges, then uses a lower threshold to examine areas indicated by the user for weaker edges missed during the first pass. The system can also fill in gaps in identified edges, locating edges at subpixel resolution by parabolic interpolation and matching segments by line direction and endpoint separation. With such a system, the user time and the computer time required to identify and capture an image on a computer vision system may be reduced, and the accuracy of the captured image may be increased to subpixel levels.

Description

INTERACTIVE EDGE DETECTION MARKUP PROCESS
BACKGROUND OF THE INVENTION
The television, movie, video game, computer graphics, industrial design and architecture fields all have a need to analyze and manipulate images. Special effects may add a new dimension to movies and television shows, and the ability to effectively rotate a flat architectural drawing may help observers to better visualize what the finished three dimensional structure will look like.
To make such image analysis and manipulation techniques economically viable, it is beneficial to have an automated system identify and mark the edges of objects in the image field. Automated image analysis techniques in the current art use a digitized stream of image data points. These image data points are typically in the form of what are known in the art as pixels. Each pixel contains the data indicating a gray level, in the case of a black and white image, or color intensity levels, in the case of color images. The automated image system analyzes and manipulates the image by grouping the pixels together in predefined ways. For example, the automated image system may create an abstraction of the image in the form of a wire frame or set of edge boundaries. The abstraction of the image may then be more easily mathematically manipulated due to its greater simplicity.
There exists a problem in the automated acquisition of images due to the large number of potential edges found in images having non-uniform lighting, resulting in the misidentification of non-edges. If the user sets the contrast edge detection threshold to a low level, many weak edges may be found in addition to strong sharp edges. Some of the edges found in the low contrast threshold case will not be actual edges of objects, but will be due to shadows, lighting irregularities and variations in texture and color of the object. The large number of edges found in this case may require a great deal of manual user time to eliminate the unwanted spurious edges.
There also exists a problem of missing correct edges because of low edge contrast and because of edges being partly obscured by intervening objects. If the user sets the contrast edge detection threshold to a high level, only sharp strong edges will be found, and some real object edges may be missed. This situation also results in the expenditure of a great deal of manual user time to add in the missing edges. There is another problem in the art of discontinuous object edges. An obstruction or a shadow may cause the edge detector to find two disconnected edges rather than a single continuous edge. Some of these discontinuities may be so small that the user may not notice them, but the discontinuities may cause problems during image processing. For example, if a user wishes to shade a particular object in the image, typically they would select the desired object in the image field and initiate an automatic fill operation. If the object has a discontinuity somewhere in what appears to be the edge, even an effectively invisible gap, then the shading will spill over into an area that is not within the desired object.
SUMMARY OF THE INVENTION
Thus the image processing art has a problem with the large number of supposed edges found when using a low contrast threshold value. There is a problem with missing many real edges when using a high contrast threshold value. In both cases there is a problem with discontinuities in the edges found. It is not possible to simply find the correct contrast threshold since each image is different, and within each image there are areas that will require different contrast thresholds for optimum edge detection. Thus there exists a need in the art for a rapid and accurate method to analyze an image, whether real or synthetic, and correctly identify the edges of the objects in the image.
In a preferred embodiment of the invention, a system is described for an iterative and interactive precision edge detection process for a computer image processing system that has a variable contrast edge detection threshold. The system first scans the image with a high contrast threshold, producing only a few strong edges. The initial edge detection may be flawed due to shadows in the image that locally weaken the edge contrast, or due to part of the edge being obscured by an intervening object. The user then identifies either the correct ones of the detected strong edges, or indicates the correct region for the computer to reexamine, using either roto-splines or free hand scribbles. The oriented edge detector estimates the direction of the edge and the precise subpixel edge location by computing a parabolic interpolation of the edge gradient magnitude value in the direction perpendicular to the edge tangent. The parabolic interpolation uses the gradient magnitude of the current edge pixel and the two neighboring pixels on either side of the edge in the perpendicular direction. If the direction is not a multiple of 45 degrees, the value of the neighboring pixel magnitude can be obtained by bilinear interpolation of neighboring pixel magnitude values. The edge detector displays the results of the next estimation done with a lower contrast threshold, allowing weaker edges to be found. The user again indicates which of the detected edges is the correct edge or again identifies the region in which to search further. The edge detector repeats the process in the newly defined region with a lowered contrast threshold. This iterative and interactive process continues until all of the correct edges are identified.
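A short sketch may make the perpendicular sampling concrete. The following Python fragment is an illustration only, not code from the patent; the gradient-magnitude array `mag` and the per-pixel gradient angle array `theta` (in radians) are assumed inputs, and the edge pixel is assumed to lie at least one pixel inside the image border.

```python
import numpy as np

def bilinear(mag, y, x):
    """Bilinearly interpolate the gradient-magnitude array at fractional (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * mag[y0,     x0    ] +
            (1 - dy) * dx       * mag[y0,     x0 + 1] +
            dy       * (1 - dx) * mag[y0 + 1, x0    ] +
            dy       * dx       * mag[y0 + 1, x0 + 1])

def perpendicular_samples(mag, theta, y, x):
    """Magnitude at an interior edge pixel (y, x) and at its two neighbors one
    pixel away on either side along the gradient direction, i.e. perpendicular
    to the edge tangent. Off-grid neighbors fall back to bilinear interpolation,
    as needed whenever the direction is not a multiple of 45 degrees."""
    ny, nx = np.sin(theta[y, x]), np.cos(theta[y, x])  # unit step along the gradient
    return (bilinear(mag, y - ny, x - nx),
            mag[y, x],
            bilinear(mag, y + ny, x + nx))
```

The three returned magnitudes are exactly the inputs the parabolic fit needs; a sketch of that fit appears with the Fig. 2 discussion below.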
In a further embodiment of the invention, gaps between identified edges are automatically filled in with a best guess curve fit by examining all the edges in the indicated region, and matching the two edges that have the best combination of the longest segment length, the closest endpoints and the closest slope. With such a process the large number of computer detected edges found when using a low enough threshold to assure detection of the desired edge, may be greatly reduced, and the time required to identify the correct edges may be greatly reduced.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a representation of pixel intensity.
Fig. 2 is a bar graph of pixel intensity at an edge.
Fig. 3 is a drawing of a scribble.
Fig. 4 is a drawing showing discontinuities.
Fig. 5 is a flow chart showing the interactive method according to the invention.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
DETAILED DESCRIPTION OF THE INVENTION
Referring to Fig. 1, an image field has been analyzed into an 11 by 13 array 10 of pixels, with each of the 130 pixels having a typical luminosity value on the scale of 1 to 256. The image field shown in pixel array 10 contains an edge 12, which divides the image field into two parts in this illustrative example, a generally brighter part 14 having a typical luminosity value of 180 out of 256, and a generally duller part 16 having a typical luminosity value of 100. Edges such as 12 have a different luminosity than the surrounding regions 14 and 16. In the illustrative example shown, the edge 12 has lower luminosity than either of the two surrounding regions 14 or 16. In other cases the edge 12 might have a higher luminosity than the surrounding regions, the direction of the ambient light having a major effect on the direction of edge luminosity.
It is apparent in Fig. 1 that the actual edge 12 does not equally affect all the pixels that it crosses, since pixels containing a long segment of edge 12, such as the pixel labeled 18, will have a very low luminosity in this illustrative example, whereas pixels such as 20, which have only a short segment of edge 12, will have luminosity similar to the adjacent pixels having no segment of edge 12. Thus it is clear that using the low valued pixels to determine the location of edge 12 would result in a non-smooth and discontinuous line, because pixels such as 20 in this example would not be of low enough luminosity to be considered as part of the edge 12. It would therefore be beneficial to have some measure of where an edge such as 12 crosses a particular pixel. It would also be beneficial to have a method to connect line segments together whenever the identification of an edge is interrupted by a bright pixel such as 20.
Referring now to Fig. 2, a series of adjacent pixels 30 to 42 are shown, each having a gradient magnitude indicated by the height of the bar. In this illustrative example the edge is brighter than the surrounding regions and the edge is somewhere in pixel 34. Using an interpolation, for example a parabolic function, the true position of the peak illumination, and therefore in this example the location of the edge, may be determined with subpixel resolution. In the illustrative example the location of the true edge is about 40% of the way from the center of pixel 36 toward pixel 38. This peak location provides an estimate of the edge location in units of measure which are smaller than a single pixel, and is stored in memory for use in future edge calculations.
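As a minimal sketch of that parabolic fit, assuming the three gradient magnitudes have already been sampled at the candidate pixel and its two perpendicular neighbors (the numeric magnitudes below are hypothetical, chosen only to reproduce the 40% offset of the example):

```python
def parabolic_peak_offset(g_left, g_center, g_right):
    """Vertex of the parabola through (-1, g_left), (0, g_center), (1, g_right),
    as a subpixel offset from the center pixel; positive means toward g_right."""
    curvature = g_left - 2.0 * g_center + g_right  # negative at a local maximum
    if curvature == 0:
        return 0.0                                 # degenerate: keep the pixel center
    return 0.5 * (g_left - g_right) / curvature

# Hypothetical magnitudes whose peak lies 40% of the way toward the right
# neighbor, mirroring the Fig. 2 example.
assert abs(parabolic_peak_offset(110.0, 200.0, 190.0) - 0.4) < 1e-12
```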
Referring now to Fig. 3, an object 50 in an image field is shown. In this illustrative example, the edges of object 50 are presumed to have had too low a contrast for the edge detector to have found an edge. In other words, object 50 was not seen by the vision system. The user draws a free hand line, known as a scribble, such as dashed line 52 around the area where the user desires the edge detector to look again for the object 50, but with a lower edge contrast threshold detection level. The edge detector asymmetrically fattens up the user drawn line and creates an inner line 54, typically 5 pixels inside of line 52, and an outer line 56, typically 6 pixels outside of line 52. This creates a toroid shape, and the edge detector looks for edges within the toroid with greater sensitivity, thereby improving the chances of finding the edges of object 50.
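One plausible way to build such a toroid-shaped search band is sketched below with SciPy's binary morphology routines; the boolean-mask representation and the fill-dilate-erode construction are assumptions for illustration, not the patent's implementation, and they presume the scribble forms a closed curve.

```python
from scipy.ndimage import binary_dilation, binary_erosion, binary_fill_holes

def scribble_to_annulus(scribble, inner=5, outer=6):
    """scribble: boolean H x W mask of the closed hand-drawn curve.
    Returns a toroid-shaped mask reaching `inner` pixels inside the
    curve and `outer` pixels outside it, per the 5/6 pixel widths above."""
    region = binary_fill_holes(scribble)                # the curve plus its interior
    grown = binary_dilation(region, iterations=outer)   # push the boundary outward
    shrunk = binary_erosion(region, iterations=inner)   # pull the boundary inward
    return grown & ~shrunk                              # the band between the two
```

The edge detector would then run only on pixels where this mask is true, with the lowered contrast threshold.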
Referring now to Fig. 4, an object 60 is shown which has, in this illustrative embodiment, a discontinuity in the real edge of the object. Thus the right hand edge of object 60 as detected consists of line 62 and line 64. The edge detector in this example has also found two spurious edges, lines 66 and 68. The problem is to connect the correct two lines, namely 62 and 64, together. This is done by having the user indicate the area to be reexamined by means of a scribble, as was discussed above, or by means of a formula for a known curve, known as a roto-spline. The edge detector then looks at all detected edges within the toroid area, as was done above with reference to Fig. 3, and determines the endpoint locations 72-78. The edge detector then measures the average slope of each line and the length of each line. The edge detector connects the two endpoints that offer the best combination of being closest together, having the closest slopes, and joining the two longest lines, as sketched below.
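A sketch of that selection follows, with an illustrative scoring function; the segment representation, the equal weights, and the linear score are assumptions, since the patent names only the three criteria.

```python
import math

def pair_score(a, b, w_len=1.0, w_gap=1.0, w_slope=1.0):
    """Score a candidate pair of edge segments; lower is better. Each segment
    is a dict with endpoints 'p0' and 'p1' as (x, y) tuples, an undirected
    'angle' in radians, and a 'length' in pixels."""
    gap = min(math.dist(p, q) for p in (a['p0'], a['p1'])
                              for q in (b['p0'], b['p1']))
    slope_diff = abs(a['angle'] - b['angle']) % math.pi
    slope_diff = min(slope_diff, math.pi - slope_diff)  # lines have no direction
    return w_gap * gap + w_slope * slope_diff - w_len * (a['length'] + b['length'])

def best_pair(segments):
    """Pick the two segments to join, e.g. lines 62 and 64 rather than the
    spurious lines 66 and 68 of Fig. 4."""
    _, i, j = min((pair_score(a, b), i, j)
                  for i, a in enumerate(segments)
                  for j, b in enumerate(segments) if i < j)
    return segments[i], segments[j]
```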
Referring now to Fig. 5, a flow chart is shown of the steps of the interactive and iterative edge detection process, which may be performed in an image processing system such as a Silicon Graphics Octane workstation or an NT workstation. After acquiring the image in step 80, typically as a data stream of pixels, the image is displayed on some form of user output device, typically a computer screen. The graphics workstation provides the user with a tool in state 82 that permits the user to mark the image with either a free form curve such as a scribble, or with a calculated roto-spline. The user thus may either manually mark the image or let the system attempt to find the edges of the objects in the image automatically without user input. The edges are preferably specified using the pixel peak location technique discussed in connection with Fig. 2. In step 84, the system proceeds to create an outline around any scribbles that the user may have made, with a toroid shape of a thickness controlled by the user, typically 5 to 6 pixels in width. The toroid thickness is chosen as the amount that will capture the desired object edge without including overly much of the surrounding image.
The toroid and image then go through edge detection process 86, using an initial edge contrast threshold value predetermined by the user, typically a high value such as a change of 10 gray scale levels per pixel. The edge detector 86 highlights the found edges and sends the data to the user screen in state 88, where the user decides if the image has been correctly processed. If the image edge detection is not good enough, the user lowers the edge contrast detection threshold in state 92, typically to one gray scale change per pixel, and goes back to the free form curve tool in state 82 to mark missing edges and delete extraneous edges. This process of interaction between the edge detection system and the user continues iteratively until the user accepts the image edge detection and ends the process in state 94. It should be understood that the process flow chart could also be implemented with hardware designed to perform the tasks described, and therefore the invention encompasses apparatus and should not be limited to only the disclosed process.
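The loop of Fig. 5 can be summarized in a short skeleton. Here `detect_edges`, `show_and_ask_user`, and `annulus_from_marks` are hypothetical stand-ins for the workstation's detector, display, and scribble tools, and the halving schedule between the 10 and 1 gray-level thresholds is an assumption; the patent leaves the amount of lowering to the user.

```python
def interactive_edge_markup(image, start_threshold=10.0, min_threshold=1.0):
    """States 80-94 of Fig. 5: detect, display, and iterate until the user
    accepts the edge set. `edges` is assumed to be a set of edge segments."""
    threshold = start_threshold
    edges = detect_edges(image, region=None, threshold=threshold)        # step 86
    while True:
        verdict, marks = show_and_ask_user(image, edges)                 # state 88
        if verdict == "accept":
            return edges                                                 # state 94
        threshold = max(min_threshold, threshold / 2.0)                  # state 92
        region = annulus_from_marks(marks)                               # step 84
        edges |= detect_edges(image, region=region, threshold=threshold) # step 86
```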
While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims

CLAIMS
What is claimed is:
1. A method for interactive edge detection in an automated image analysis system comprising the steps of:
defining an edge contrast threshold level;
examining said image using said threshold level and marking all image edges having a greater contrast level than said threshold level;
identifying and marking incorrect edges for removal;
identifying a region for reexamination by drawing an annulus around said region;
reducing said edge contrast threshold level by a predetermined amount and reexamining said image in said annulus at said reduced threshold level and marking all image edges having a greater contrast level than said reduced threshold level; and
repeating steps b to d until correct edges have been identified.
2. The method of claim 1 wherein said marking of edges further comprises the steps of:
marking the location of the edge at a subpixel resolution;
selecting the pixel having a local maximum gradient magnitude;
examining the gradient magnitude of a predetermined number of adjacent pixels on both sides of said local maximum; and
determining the subpixel location of a peak of the local maximum by interpolation of the gradient magnitude of the adjacent pixels.
3. The method of claim 2 wherein said interpolation uses a parabolic method.
4. The method of claim 1 wherein said step of identifying a region for reexamination by drawing an annulus around said region further comprises using a roto-spline.
5. The method of claim 1 wherein said step of identifying a region for reexamination by drawing an annulus around said region further comprises using a free form curve.
6. The method of claim 1 wherein said annulus has a predetermined number of pixels of line width on the inside and outside of said annulus.
7. The method of claim 1 wherein said step of defining an edge contrast threshold level further comprises a threshold of 10 gray levels per pixel.
8. The method of claim 1 wherein said step of reducing said edge contrast threshold level further comprises a threshold of 1 gray level per pixel.
9. The method of claim 1 wherein said step of reexamining said image in said annulus includes filling gaps between lines marked as edges, comprising the steps of:
examining said annulus for the longest image edge segments;
determining the difference in average direction of said longest edge segments;
determining the distance between endpoints of said longest edge segments; and
connecting the two image edge segments having the longest edges and the least difference in average direction and endpoint difference.
10. An apparatus for interactive edge detection in an automated image analysis system comprising:
means for defining an edge contrast threshold level;
means for examining said image using said threshold level;
means for marking all image edges having a greater contrast level than said threshold level;
means for identifying and marking incorrect edges for removal;
means for identifying a region for reexamination by drawing an annulus around said region;
means for reducing said edge contrast threshold level by a predetermined amount;
means for reexamining said image in said annulus at said reduced threshold level; and
means for marking all image edges having a greater contrast level than said reduced threshold level.
11. The apparatus of claim 10 wherein said means for marking of edges further comprises:
means for marking the location of the edge at a subpixel resolution;
means for selecting the pixel having a local maximum gradient magnitude;
means for examining the gradient magnitude of a predetermined number of adjacent pixels on both sides of said local maximum; and
means for determining the subpixel location of a peak of the local maximum by interpolation of the gradient magnitude of the adjacent pixels.
12. The apparatus of claim 11 wherein said interpolation uses a parabolic method.
13. The apparatus of claim 10 wherein said means for identifying a region for reexamination by drawing an annulus around said region further comprises using a roto-spline.
14. The apparatus of claim 10 wherein said means for defining an edge contrast threshold level further comprises a threshold of 10 gray levels per pixel.
15. The apparatus of claim 10 wherein said means for reducing said edge contrast threshold level further comprises a threshold of 1 gray level per pixel.
16. The apparatus of claim 10 wherein said means for reexamining said image in said annulus includes means for filling gaps between lines marked as edges, comprising:
means for examining said annulus for the longest image edge segments;
means for determining the difference in average direction of said longest edge segments;
means for determining the distance between endpoints of said longest edge segments; and
means for connecting the two image edge segments having the longest edges and the least difference in average direction and endpoint difference.
PCT/US1999/028778 1998-12-11 1999-12-06 Interactive edge detection markup process WO2000034918A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU20416/00A AU2041600A (en) 1998-12-11 1999-12-06 Interactive edge detection markup process

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US11183398P 1998-12-11 1998-12-11
US60/111,833 1998-12-11
US45428299A 1999-12-03 1999-12-03
US09/454,282 1999-12-03

Publications (2)

Publication Number Publication Date
WO2000034918A1 true WO2000034918A1 (en) 2000-06-15
WO2000034918A9 WO2000034918A9 (en) 2000-11-30

Family

ID=26809293

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/028778 WO2000034918A1 (en) 1998-12-11 1999-12-06 Interactive edge detection markup process

Country Status (2)

Country Link
AU (1) AU2041600A (en)
WO (1) WO2000034918A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001067392A2 (en) * 2000-03-07 2001-09-13 Koninklijke Philips Electronics N.V. System and method for improving the sharpness of a video image
WO2011039684A1 (en) * 2009-09-30 2011-04-07 Nokia Corporation Selection of a region of an image
US8780134B2 (en) 2009-09-30 2014-07-15 Nokia Corporation Access to control of multiple editing effects

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815660B (en) * 2020-06-16 2023-07-25 北京石油化工学院 Method and device for detecting edges of goods in dangerous chemical warehouse and terminal equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997006631A2 (en) * 1995-08-04 1997-02-20 Ehud Spiegel Apparatus and method for object tracking
WO1997021189A1 (en) * 1995-12-06 1997-06-12 Cognex Corporation Edge peak boundary tracker

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997006631A2 (en) * 1995-08-04 1997-02-20 Ehud Spiegel Apparatus and method for object tracking
WO1997021189A1 (en) * 1995-12-06 1997-06-12 Cognex Corporation Edge peak boundary tracker

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HOI JEONG YOO ET AL: "Line drawing extraction from gray level images by feature integration", INTELLIGENT ROBOTS AND COMPUTER VISION XIII: ALGORITHMS AND COMPUTER VISION, BOSTON, MA, USA, 31 OCT.-2 NOV. 1994, vol. 2353, Proceedings of the SPIE - The International Society for Optical Engineering, 1994, SPIE-Int. Soc. Opt. Eng, USA, pages 96 - 107, XP000890054, ISSN: 0277-786X *
KOHLER R: "A SEGMENTATION SYSTEM BASED ON THRESHOLDING", COMPUTER GRAPHICS AND IMAGE PROCESSING,US,ACADEMIC PRESS. NEW YORK, vol. 15, no. 4, 1 April 1981 (1981-04-01), pages 319 - 338, XP000611793 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001067392A2 (en) * 2000-03-07 2001-09-13 Koninklijke Philips Electronics N.V. System and method for improving the sharpness of a video image
WO2001067392A3 (en) * 2000-03-07 2002-01-03 Koninkl Philips Electronics Nv System and method for improving the sharpness of a video image
WO2011039684A1 (en) * 2009-09-30 2011-04-07 Nokia Corporation Selection of a region of an image
US8780134B2 (en) 2009-09-30 2014-07-15 Nokia Corporation Access to control of multiple editing effects

Also Published As

Publication number Publication date
AU2041600A (en) 2000-06-26
WO2000034918A9 (en) 2000-11-30

Similar Documents

Publication Publication Date Title
JP3862140B2 (en) Method and apparatus for segmenting a pixelated image, recording medium, program, and image capture device
KR100591470B1 (en) Detection of transitions in video sequences
JP4017489B2 (en) Segmentation method
KR100459893B1 (en) Method and apparatus for color-based object tracking in video sequences
JP2642215B2 (en) Edge and line extraction method and apparatus
JPH07302328A (en) Method for extracting area of moving object based upon background difference
US6728400B1 (en) Apparatus, method, and storage medium for setting an extraction area in an image
US20030039402A1 (en) Method and apparatus for detection and removal of scanned image scratches and dust
US20050002566A1 (en) Method and apparatus for discriminating between different regions of an image
US20030053692A1 (en) Method of and apparatus for segmenting a pixellated image
CN109993797B (en) Door and window position detection method and device
US8311269B2 (en) Blocker image identification apparatus and method
CN105787870A (en) Graphic image splicing fusion system
JPH0793561A (en) Edge and contour extractor
US6999621B2 (en) Text discrimination method and related apparatus
WO2000034918A1 (en) Interactive edge detection markup process
CN109448010B (en) Automatic four-side continuous pattern generation method based on content features
JPH08249471A (en) Moving picture processor
US20040146201A1 (en) System and method for edge detection of an image
KR100353792B1 (en) A device for protecting the right to one's portraits and a method
JP6114559B2 (en) Automatic unevenness detector for flat panel display
JP2007006216A (en) Image processing apparatus and image processing method for extracting telop in image
JPH0624014B2 (en) Gray image processing method
MEDINA-RODRÍGUEZ et al. Adaptive method for image segmentation based in local feature
Abhilash et al. Rain Streaks Detection and Removal from an Image using Canny Edge Detection and Combination of Filter

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: C2

Designated state(s): AU CA JP

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

COP Corrected version of pamphlet

Free format text: PAGES 1/5-5/5, DRAWINGS, REPLACED BY NEW PAGES 1/5-5/5; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

122 Ep: pct application non-entry in european phase