GB2454214A - Detecting Edge Pixels In An Image - Google Patents

Detecting Edge Pixels In An Image

Info

Publication number
GB2454214A
GB2454214A (application GB0721406A)
Authority
GB
United Kingdom
Prior art keywords
image
pixel
images
matrix
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0721406A
Other versions
GB0721406D0 (en)
Inventor
Alan Peter Birtles
Adam Wacey
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Priority to GB0721406A priority Critical patent/GB2454214A/en
Publication of GB0721406D0 publication Critical patent/GB0721406D0/en
Publication of GB2454214A publication Critical patent/GB2454214A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Edge pixels in an image are detected by replicating at least part of a first image to create at least one replicated image which is then spatially transformed by a distance of one or more pixels relative to the first image. An edge pixel matrix indicating the presence of edge pixels in the at least part of the first image is then produced, wherein the matrix comprises a plurality of matrix elements, each matrix element corresponding to a pixel in the at least part of the first image. A value for each element of the edge pixel matrix is then generated in accordance with a logical exclusive OR (XOR) combination of a same attribute of a pixel in the first image with one corresponding pixel from at least part of the transformed image, wherein said corresponding pixel from the transformed image is of a matching position to the pixel from the first image, the logical combination resulting in a first value only if both the pixels have the same attribute.

Description

METHOD AND APPARATUS FOR DETECTING EDGE PIXELS

The present invention relates to a method for detecting edge pixels of objects.
Edge detection is a technique frequently used in image processing in order to identify various features and characteristics present in an image. For example the presence of an edge often indicates the location of a prominent or important feature.
Various techniques are known in the art for detecting edges. Some involve applying mathematical functions to the image such as taking a first or second order derivative or applying a frequency transform and then searching the resultant output for features indicative of an edge. Alternatively other more simplistic methods examine the image on a pixel by pixel basis. Most edge detection methods are computationally intensive and are difficult to implement in real time without using powerful and expensive processing hardware.
It is the aim of the present invention to address this and other problems associated with conventional edge detection methods and apparatus.
In one aspect of the present invention there is provided a method of detecting edge pixels including the steps of: replicating at least part of a first image to create at least one replicated image; spatially transforming the replicated image by a distance of one or more pixels relative to the first image; producing an edge pixel matrix indicating the presence of edge pixels in the at least part of the first image, the matrix comprising a plurality of matrix elements, each matrix element corresponding to a pixel in the at least part of the first image; and providing a value for each element of the edge pixel matrix in accordance with a logical combination of a same attribute of a pixel in the first image with one corresponding pixel from at least part of the transformed image, wherein said corresponding pixel from the transformed image is of a matching position to the pixel from the first image, the logical combination resulting in a first value only if both the pixels have the same attribute.
The method of edge pixel detection according to the present invention provides an advantage in that the operations required to identify edge pixels, comprising replicating and shifting the image and then applying a logical operation to corresponding pixels, require a reduced level of computational activity. Furthermore, dedicated graphics cards typically include graphics processing hardware which is well suited to low level manipulation of pixels such as shifting and duplication, and also include hardware which is well suited to applying simple logical operations to pixel variables such as colour. The method for detecting edge pixels according to the present invention can thus provide an increased benefit when used in conjunction with dedicated graphics cards.
The term "spatially transforming" includes the situation where the value of each of the pixels is shifted in any one of a number of directions.
In some embodiments, there is provided a method of searching through stored images, comprising: detecting edge pixels in an image according to any one of the preceding embodiments; generating feature data representative of a property of at least part of an image; and comparing the generated feature data with other feature data representative of at least part of a plurality of stored images.
Various further aspects and features of the invention are defined in the appended claims.
Embodiments of the present invention will now be described by way of example only and with reference to the accompanying drawings where like parts are provided with corresponding reference numerals and in which: Figure 1A provides a diagram of a system for implementing an embodiment of the present invention; Figure 1B provides a diagram of a graphical user interface for use in conjunction with an embodiment of the present invention; Figure 2A provides a diagram of a segmented image; Figure 2B provides an illustration of possible segmentation areas of an image; Figure 3 provides a diagram of an image with an area of interest selected; Figure 4 provides a diagram of an image after undergoing colour resolution reduction; Figure 5 provides a diagram of separating an image into colour planes; Figure 6 shows a diagram of a histogram; Figure 7 shows a diagram of an expanded area of an image to be tested for edge pixels; Figure 8 shows an illustration of the processing of replicated and transformed pixels; Figure 9 shows an illustration of the processing of replicated and transformed pixels; Figure 10 is a diagram of a matrix indicating edge pixels; and Figure 11 is a diagram of a plurality of images divided into a set of segments.
Figure 1A is a schematic diagram of an image processing system based around a general-purpose computer 10 having a processor unit 20 including disk storage 30 for programs and data, a network interface card 40 connected to a network 50 such as an Ethernet network or the internet, a display device such as a cathode ray tube or liquid crystal display device 60, a keyboard 70 and a user input device such as a mouse 80. The system operates under program control, the programs being stored on the disk storage 30 and provided, for example, by the network 50, a removable disk (not shown) or a pre-installation on the disk storage 30.
In general terms, the image processing system is arranged such that a user may search through a large number of images from an image repository in order to identify images which correspond to various search criteria specified by the user. Typically the user will specify the search criteria by taking a first image and selecting parts or features of this first image. The first image (or the selected part or parts) will, in embodiments, be subject to processing. It should be noted here that the processing carried out on the first image may also be carried out on one or more of the images in the repository through which the searching will take place. The processing on the images in the repository may take place before the search is conducted (termed "pre-analysis") or as the search through the images is carried out (termed "on the fly"). This processing will be explained later.
The image processing system will then search the image repository with reference to the parts or features of the first image selected by the user. For example, the user may wish to identify images from the repository including birds. In this case, the user selects a first image that includes a bird and selects the specific parts or features of the image which encapsulate a bird. After the search has been conducted, a list of images from the image repository will be generated. This identifies images in the repository which are deemed to be similar or contain similar elements to the parts or features of the first image selected by the user. This provides the user with the ability to pick out only features of an image that are relevant to them for the particular search. For instance, in this example, the beak of the bird may be selected and only images having similar beaks will be returned in the search. This makes more efficient use of computer resources because only relevant sections are returned to the user.
Additionally, by searching only selected parts which are processed in the manner discussed below, the returned images are scale invariant. In other words, in the example above, it will not matter whether the beak is 20% of the image or 70% of the image; both will be returned as relevant. This improves the searching mechanism. In some embodiments the system will rank the images in the generated list by identifying those images which most closely match the selected search criteria.
The image repository may comprise a plurality of images stored within the system for example on the disk storage 30. Alternatively the image repository may be stored on some form of storage media which is remote from the system and which the system gains access to via some form of intermediate link such as the network interface card connected to the network 50. The images may be distributed over a number of storage nodes connected to the network 50.
The images may be in various forms for example "still" images captured by a camera or the images may be taken from a series of images comprising a video stream.
Figure 1 B is a schematic diagram showing a graphical user interface 11 for display on the display device 60. The graphical user interface 11 includes a search window 114 and a results window 113. The search window 114 displays the first image 112 from which the search criteria are derived.
As noted above, the first image (or, in embodiments, the selected part) is subjected to image processing.
The Image Searching Mechanism

In order to search the images in the image repository, the image processing system undertakes the following steps:
A first image from which the search criteria are to be derived is selected. The image might be selected from the image repository or be a new image loaded onto the system from an external source via the network 50 or from a disk or other storage media attached to the system.
The image is typically presented to the user on the display device 60 and the user selects an area of the image using an input device such as the mouse 80. In some embodiments the image is segmented into a grid and the user selects one or more segments of the grid which contain the features of the image upon which the user bases the search. However, the invention is not so limited and a user can define their own area using the mouse 80 as noted below. Figure 2A shows an example of an image which has been segmented into a grid and an area of interest highlighted by selecting blocks of the grid which contain a feature of interest from which search criteria are to be derived.
As noted above, in some embodiments at least some of the images from the image repository will be pre-analysed. The pre-analysis of the images in the repository reduces the processing load on the system at the time of searching and thus increases the speed at which the searching through images takes place. To further increase the speed with which the search is conducted, the pre-analysis of the images in the repository is carried out using a similar technique to that used to analyse the first image. Additionally, as part of the pre-analysis of the images in the repository, at least some of the pre-analysed images may be segmented into blocks, for example by the application of a grid such as a 2x2 grid, 3x3 grid or 4x4 grid. Alternatively a non-square grid could be used, such as a 2x3 or 3x4 grid. Individual blocks or groups of blocks may be analysed independently of the image as a whole, therefore allowing not only images from the image repository to be searched but also different parts of each image from the image repository. Furthermore, the system may be operable to search for parts of images which correspond in shape to an area selected by the user, as described above. Thus if a user selects an area as shown in Figure 2A, areas of corresponding shape will be searched from the images in the image repository. This principle is illustrated in Figure 2B, in which four areas 21, 22, 23, 24 corresponding in shape to the area of interest highlighted in Figure 2A are shown. Although Figure 2B only shows four areas of corresponding shape, it will be understood that a 4x4 grid in fact comprises more areas of corresponding shape.
In another embodiment as noted above, the user may simply define an area which contains the features of the images upon which the search is to be based. This is indicated in Figure 3 by the dashed box. The definition of the area of interest will typically be performed using the user input device 80.
In another embodiment of the invention the images from the image repository are divided into a plurality of sets of segments. The plurality of sets of segments which are stored on the image repository are analysed to derive feature data representing an attribute of each set of segments. The results of this analysis are then stored in association with the image.
The user can then select a set of segments from the first image corresponding for example to a feature of interest. The system is operable to search the sets of segments from the images of the image repository which correspond in some respect to the selected segments. Figure 11 shows an example of this. Figure 11 shows a simplified diagram illustrating a first image 1111 divided into a number of segments (the number of segments shown is nine corresponding to a three by three grid but it will be appreciated that this is merely illustrative and in fact the first image may be divided into other numbers of segments such as sixteen for a four by four grid or twenty five for a five by five grid etc.). Further, it will be appreciated that not all possible combinations of segments are shown in Figure 11. A selected segment set 1112 selected by the user in accordance with the selection methods discussed above is indicated by the dashed line. Once the user has selected the segment set the system is operable to search the stored sets of segments from the images of the image repository which correspond to the selected segments. Figure 11 shows a plurality of images 1113 to 1126 representing images from the image repository. The plurality of images 1113 to 1126 are divided into segments and sets of segments some of which correspond to the segment set selected by the user. The segment sets searched in the plurality of images 1113 to 1126 are shown by shaded segments. As can be seen, in a first group of the plurality of images 1113, 1114, 1115, 1116, 1117, 1118 the set of segments searched corresponds to the shape, size and orientation of the segment set selected by the user. In a second group of the plurality of images 1119, 1120, 1121, 1122, 1123 the set of segments searched corresponds to the shape and size of the segment set selected by the user.
In a third group of the plurality of images 1124, 1125, 1126 the set of segments searched corresponds to the shape of the segment set selected by the user.
After the area containing the features of interest has been selected the search through the repository of images continues. In order to perform the search, the first image (or the selected part) needs to be subjected to processing.
Image Processing

In order to commence the search the system, in embodiments, performs a colour resolution reduction procedure on the image. As will be understood, each pixel of an image is typically defined by data representing pixel colour component values such as "R", "G" and "B" values (defining red, green and blue components respectively) or colour encoding schemes providing colour component values such as "Y", "CB" and "CR" (defining a "luma" value and "chroma" values respectively). Such values determine the colour of each pixel. The number of possible colours that can be used to provide pixel colours is determined by the number of bits used to represent the pixel colour component values. Typically this is 16 million colours although this is only exemplary. The colour resolution reduction procedure will typically involve a "down-sampling" or decimation operation on each colour component value, the result of which is to reduce the total number of possible colours for a pixel. After the colour resolution reduction procedure has been applied to the image, the number of colours in the image will be reduced. An effect that arises in many images after a colour resolution reduction procedure has been applied is that the image is segmented into areas of the same colour. This effect manifests itself as lending the image a "blocky" appearance. A simplified example of this is shown in Figure 4.
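One simple way to realise the down-sampling described above is to truncate the low-order bits of each colour component. The following sketch is purely illustrative (the function name and the choice of bit truncation are assumptions, not the patented procedure); it assumes the image is held as a NumPy array of 8-bit R, G, B components.

```python
import numpy as np

def reduce_colour_resolution(rgb, bits_per_channel=2):
    """Keep only the top `bits_per_channel` bits of each 8-bit colour
    component.  With 2 bits per channel every R, G and B value is
    quantised to one of 4 levels, so at most 64 colours survive,
    producing the flat, "blocky" areas described above."""
    drop = 8 - bits_per_channel
    return (rgb >> drop) << drop
```

After this step, nearby pixels that differed only slightly in colour share an identical value, which is what segments the image into areas of the same colour.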
Figure 4 shows a result of the colour resolution reduction procedure applied to the selected area of the image as shown in Figure 3, in which the image has been segmented into image elements 41, 42, 43, 44 of the same colour. For the sake of simplicity the number of colours represented has been reduced to four, although as will be appreciated the number of colours will typically be greater than this. In some embodiments the number of colours in an image after it has undergone the colour resolution reduction procedure is 67, although any number less than that of the original image is envisaged.
After the colour resolution reduction procedure has segmented the image into a number of areas of identical colour, the image is further divided into a number of colour planes in which each plane comprises only the image elements of one colour.
Thus the number of colour planes will be the same as the total number of colours in the image after the colour resolution reduction procedure. The division of the image into colour planes comprising image elements of each colour is shown in Figures 5A to 5D.
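The division into colour planes can be sketched as follows. This is an illustrative reconstruction (the function name is an assumption); it takes a 2-D array in which each pixel holds a single colour label, as produced by the colour resolution reduction step, and returns one boolean plane per remaining colour.

```python
import numpy as np

def split_colour_planes(labels):
    """Split a colour-reduced image into one boolean plane per colour.

    Each returned plane is True exactly where the image has that
    colour, so the number of planes equals the number of colours
    remaining after colour resolution reduction.
    """
    return {colour: labels == colour for colour in np.unique(labels)}
```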
Each plane is then analysed in order to derive feature data such as a feature vector corresponding to a property of the image element or elements contained therein.
The property may relate to one or many aspects of the image element, for example simple size or colour, or more complex considerations such as the form of the shape of the elements. Furthermore, as will be understood, a feature vector is one example of an abstract measure of a property of the image element. Another example might be the sum of the absolute differences. In some embodiments the feature vector for one or more colour planes is generated by first detecting the edge pixels for each image element and then counting the pixels around the perimeter of each image element in the colour plane. Although detecting the edge pixels is discussed further below, known techniques such as blob analysis may be used. A mean of this perimeter value is then calculated producing a single scalar value for each colour plane. This procedure is repeated for each colour plane. The calculated mean scalar value for each colour plane is taken and a histogram produced. A simplified histogram is shown in Figure 6.
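A full version of the per-plane scalar described above would label each connected image element (e.g. with a blob analysis pass) and average the perimeter counts. The simplified sketch below is an assumption, not the patented procedure: it just counts the boundary pixels of a whole plane, i.e. pixels of the colour that have at least one 4-neighbour of another colour, yielding one scalar per colour plane.

```python
import numpy as np

def count_boundary_pixels(plane):
    """Count pixels of `plane` (a 2-D boolean mask) that touch a
    pixel outside the mask in one of the four axial directions.
    Padding with False treats the image border as another colour."""
    p = np.pad(plane, 1, constant_values=False)
    # A pixel is interior when all four axial neighbours are in the mask.
    interior = (p[:-2, 1:-1] & p[2:, 1:-1] &
                p[1:-1, :-2] & p[1:-1, 2:])
    return int((plane & ~interior).sum())
```

Collecting one such value per colour plane gives the data from which the histogram of Figure 6 is built.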
The histogram is then compared to similarly generated histograms for each of the images from the image repository.
There are many techniques for comparing the histogram derived from the first image with those similarly derived from the repository of images. In a very simple example, corresponding bins of the two histograms can be aligned and the absolute difference between the histograms calculated. The result of this subtraction can be represented as a further histogram. The bins from the resulting histogram can be summed to produce a single value. The closer this value to zero, the more similar the histograms. A similar image in the repository is identified when the summed data is below a threshold. Although only a simple technique for comparing histograms has been described, the skilled person will appreciate that more sophisticated techniques exist.
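The simple bin-by-bin comparison just described can be sketched in a few lines (the function names and the threshold convention are assumptions for illustration):

```python
def histogram_distance(hist_a, hist_b):
    """Sum of absolute bin-by-bin differences between two aligned
    histograms; the closer to zero, the more similar the images."""
    return sum(abs(a - b) for a, b in zip(hist_a, hist_b))

def is_match(hist_a, hist_b, threshold):
    """A stored image counts as a "hit" when the summed difference
    falls below a chosen similarity threshold."""
    return histogram_distance(hist_a, hist_b) < threshold
```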
The result of the histogram comparison will typically generate a number of "hits" corresponding to similar images from the image repository. These similar images can then be presented to the user on the display screen. As will be understood, the number of returned images can be controlled by specifying certain parameters. For example the system may be arranged to return the first 10 images with histograms which most closely correspond to that of the first image. Alternatively the system can be arranged to return all images the histograms of which meet a certain threshold level of similarity with the histogram derived from the first image, as noted above. In order to aid the user, the set of segments in the "hit" image which correspond to the set of segments selected by the user is outlined in the "hit" image.
In some embodiments the total number of pixels on the perimeter of each image element is counted in order to provide a feature vector for each colour plane.
Methods known in the art for detecting edge pixels are typically computationally intensive and require pixel by pixel analysis. This often makes real time edge detection for high resolution images quite difficult. In some embodiments of the system, the following edge detection technique is used in the image processing method. It is understood that, in other embodiments, a different edge detection technique may be used.

Edge Detection

The technique comprises replicating eight times the image to be tested for edge pixels. Each duplication is shifted (i.e. spatially transformed) by one pixel in each of the eight possible directions (i.e. x+1, y+0; x-1, y+0; x+0, y+1; x+0, y-1; x+1, y+1; x+1, y-1; x-1, y-1; x-1, y+1). An XOR function is then taken of all of the corresponding pixels from the eight transformed replicated images. The result of this XOR function is a binary matrix with a "1" indicating an edge pixel and a "0" indicating a non-edge pixel. A simplified version of this technique is illustrated in Figures 7, 8 and 9.
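The replicate-shift-XOR procedure above can be sketched as follows. This is an illustrative reconstruction rather than the patent's reference implementation: it assumes the image is held as a 2-D NumPy array of integer colour labels, and uses np.roll for the eight one-pixel shifts (np.roll wraps around at the borders, unlike a true spatial shift, so border pixels are only approximate here).

```python
import numpy as np

def detect_edge_pixels(image):
    """XOR together the eight one-pixel-shifted copies of `image`.

    In a uniform region all eight shifted values are equal, so XORing
    them (an even number of equal operands) gives 0; a non-zero
    result marks the pixel as a candidate edge pixel, yielding the
    binary edge pixel matrix described in the text.
    """
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1),
              (1, 1), (1, -1), (-1, 1), (-1, -1)]
    acc = np.zeros_like(image)
    for dy, dx in shifts:
        acc ^= np.roll(image, (dy, dx), axis=(0, 1))
    return (acc != 0).astype(np.uint8)
```

For a two-colour image this reduces to the parity of the eight neighbouring pixel values, as in the worked examples of Figures 8 and 9. Each step is a whole-array shift or a whole-array logical operation, which is exactly the kind of work graphics hardware handles cheaply.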
Figure 7 shows a simplified expanded area 71 corresponding to a point 72 on an image 73. The simplified expanded area 71 is used to illustrate the technique operating on a small area of the image 73. As will be understood, the following is merely illustrative and in practice the technique may be applied to all parts of the image 73. As described above, in accordance with the technique the system is arranged to replicate the entire image 73 eight times such that each duplication is shifted by one pixel in each possible direction (i.e. up, up and right, right, down and right, down, down and left, left, up and left). The relative movement in relation to one pixel is shown by arrows 76 on the simplified expanded area 71. Once the transformed duplications have been created, an XOR function is applied to the colour data for corresponding pixels from each replicated image and from the pixel being tested. This is illustrated in Figures 8 and 9. In order to further simplify the illustration, Figure 8 shows only a section of the area 71 indicated in Figure 7 by a hashed area 75. As can be seen, the centre pixel 76 of the hashed area 75 is an edge pixel. As shown in Figure 8, this area is replicated eight times and each duplication 81, 82, 83, 84, 85, 86, 87, 88 is shifted by one pixel in each possible direction. An XOR function is applied to the colour data of the pixel 76 being tested and to each corresponding pixel (indicated in Figure 8 by the hashed boxes 89) from the replicated and transformed images 81, 82, 83, 84, 85, 86, 87, 88. As there are only two colours present in the illustration shown in the enlarged section of Figure 7, the colour data can be considered to be either a "1" or a "0". This is consistent with using the edge detection technique in the searching of the image repository embodiment. However, as will be appreciated, this particular technique is not so limited and there may be more colours than two.
With reference to Figure 8, the XOR function resulting for the pixel 76 being tested is: 1 XOR 1 XOR 0 XOR 0 XOR 0 XOR 1 XOR 1 XOR 1 = 1. Thus the pixel 76 being tested is shown to be an edge pixel.
Figure 9 shows an example of a non edge pixel, i.e. one in which the centre pixel 91 is surrounded by pixels of the same colour. With reference to Figure 9, the XOR function resulting for the pixel 91 being tested is: 1 XOR 1 XOR 1 XOR 1 XOR 1 XOR 1 XOR 1 XOR 1 = 0. Thus the pixel 91 being tested is shown to be a non-edge pixel.
As described above, once the XOR function has been carried out for every pixel in the area 71, a binary matrix with a "1" indicating an edge pixel and a "0" indicating a non-edge pixel is produced. This is shown in Figure 10. As will be understood, the data from the pixels need not be operated on only with an XOR logical function but may be operated on with a combination of other logical functions, such as a combination of NAND functions, OR functions or any combination of logical functions. This particular technique for detecting an edge pixel is advantageous because shifting an image by one pixel is computationally inexpensive. Further, applying logical functions to the shifted image is also computationally inexpensive compared with the prior art. Therefore this technique of edge detection allows real time edge detection. Further, when the searching technique is used with the edge detection technique described in Figures 7, 8 and 9, substantially real time searching of video images stored in the repository can be achieved.
As noted earlier, although the foregoing processing has been described in respect of the first image (or the selected part thereof), it is understood that in embodiments, the same or similar processing may be carried out on one or more of the images stored in the repository. This may form "pre-analysed" images or may be performed "on the fly".
Other embodiments may be used in image restoration, for example to detect scratches in a digital representation of image material originally on film stock which has been scanned into digital formats. Other applications of the embodiments of the invention relate to general video processing. For instance, an object may be isolated from the image, processed and then replicated into the image. Processing might be for example colour correction or indeed other special effects. Another application may be to mark or tag an object within an image with a target hyper-link accurately. Systems for manually tagging faces in photographs often allow the user to define a face using a rectangle which may often overlap another face, causing confusion when a user clicks on a hyper-link. Embodiments of the present invention may assist in more accurately defining a region to which the hyper-link may be assigned.
Although the foregoing processing describes the colour resolution reduction procedure as taking place on the whole image, it is envisaged that this could instead take place on only the selected part of the image. This would reduce the processing load on the system.
Although some embodiments in the foregoing have been described with reference to finding feature data of segments (i.e. the foreground and background components are treated relatively equally), in some embodiments, it is possible to find feature data of a foreground object and feature data of a background in an image or part of an image (for example, a segment). Using this, in embodiments, the feature data of the foreground object will be generated. Additionally, feature data for a part of, or all of, the background in the segment will be generated. The generated feature data of both the foreground object and the background will then be compared with feature data of similar combinations of foreground feature data and background feature data in the stored images. As a result of this comparison, it is possible, in embodiments to generate a relevancy indicator which can be used to generate an ordered list. The most relevant stored images will be seen first by the user, in embodiments. This allows more relevant results to be returned to the user because the foreground object is seen in context. For instance, if the segment under test consists of an image of a beak in a wooded surrounding, a similar beak in a wooded surrounding is more relevant that a similar beak in a desert. Thus, this embodiment returns more relevant images.
In some embodiments the image to be tested may not be replicated and spatially transformed eight times (thus not allowing a spatial transform to be applied for every possible one pixel displacement); rather, the image may be replicated and spatially transformed fewer than eight times. Although this will give an incomplete analysis as to the presence of edge pixels, the information generated may be sufficient in some applications to provide enough information regarding edge pixels to be useful. As will be understood, various modifications can be made to the embodiments described above without departing from the inventive concepts of the present invention. For example, although the present invention has been described with reference to a discrete computer apparatus, the invention could be implemented in a more distributed system operating across a number of connected computers. A server may store the images from the image repository and execute the search whilst a remote computer connected via a network connection to the server may specify the search criteria. This may be achieved by integrating parts of the system, for example the graphical user interface, into a "plug-in" for a web browser.

Claims (16)

1. A method of detecting edge pixels in an image comprising the steps of: replicating at least part of a first image to create at least one replicated image; spatially transforming the replicated image by a distance of one or more pixels relative to the first image; producing an edge pixel matrix indicating the presence of edge pixels in the at least part of the first image, the matrix comprising a plurality of matrix elements, each matrix element corresponding to a pixel in the at least part of the first image; and providing a value for each element of the edge pixel matrix in accordance with a logical combination of a same attribute of a pixel in the first image with one corresponding pixel from at least part of the transformed image, wherein said corresponding pixel from the transformed image is of a matching position to the pixel from the first image, the logical combination resulting in a first value only if both the pixels have the same attribute.
2. A method according to claim 1, wherein said replication step creates a plurality of replicated images.
3. A method according to claim 1 or 2, wherein the or each replicated image or images are spatially transformed by a distance of one pixel in accordance with each possible one pixel positional displacement of the first image.
4. A method according to any one of claims 1, 2 or 3, wherein the attribute is a colour value.
5. A method according to any one of the preceding claims, wherein the logical combination is an exclusive OR operation.
6. A method of searching through stored images, comprising: detecting edge pixels in an image according to any one of the preceding claims; generating feature data representative of a property of at least part of an image; and comparing the generated feature data with other feature data representative of at least part of a plurality of stored images.
7. An apparatus for detecting edge pixels comprising: a replicator operable to replicate at least part of a first image to create at least one replicated image; a transformer operable to spatially transform the replicated image by a distance of one or more pixels relative to the first image; a matrix producer operable to produce an edge pixel matrix indicating the presence of edge pixels in the at least part of the first image, the matrix comprising a plurality of matrix elements, each matrix element corresponding to a pixel of the at least part of the first image; logic means for providing a value for each element of the edge pixel matrix in accordance with a logical combination of a same attribute of a pixel in the first image with one corresponding pixel from at least part of the transformed image, wherein said corresponding pixel from the transformed image is of a matching position to the pixel from the first image, the logical combination resulting in a first value only if both the pixels have the same attribute.
8. An apparatus according to claim 7, wherein said replicator is operable to create more than one replicated image.
9. An apparatus according to either claim 7 or 8, wherein the transformer is operative such that the or each replicated image or images are spatially transformed by a distance of one pixel in accordance with each possible one pixel positional displacement of the first image.
10. An apparatus according to any one of claims 7, 8 or 9, wherein the attribute is a colour value.
11. An apparatus according to any one of claims 7 to 10, wherein the logical combination is an exclusive OR operation.
12. A device operable to search through stored images, comprising: an apparatus according to any one of claims 7 to 11; a feature data generator operable to generate feature data representative of a property of at least part of an image; and a comparing device operable to compare the generated feature data with other feature data representative of at least part of a plurality of stored images.
13. Computer software comprising program code which, when executed on a computer, configures the computer to perform a method according to any one of claims 1 to 6.
14. A medium by which computer software according to claim 13 is provided.
15. A medium according to claim 14, the medium being a storage medium.
16. A method, apparatus, software or medium substantially as hereinbefore described with reference to the attached drawings.
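Claims 6 and 12 recite searching stored images by comparing generated feature data against stored feature data, without fixing a particular feature or comparison. The sketch below is purely illustrative and assumes, as an example not taken from the patent, that the feature data is the fraction of edge pixels in each cell of a grid laid over the edge pixel matrix, and that comparison is by squared Euclidean distance; every name here is hypothetical.

```python
def edge_density_feature(edges, grid=4):
    """Hypothetical feature data: fraction of edge pixels in each cell of a
    grid x grid partition of the edge-pixel matrix (a list of bool rows)."""
    h, w = len(edges), len(edges[0])
    feat = []
    for gy in range(grid):
        for gx in range(grid):
            cells = [edges[y][x]
                     for y in range(gy * h // grid, (gy + 1) * h // grid)
                     for x in range(gx * w // grid, (gx + 1) * w // grid)]
            feat.append(sum(cells) / max(len(cells), 1))
    return feat

def best_match(query_feat, stored_feats):
    """Compare generated feature data with stored feature data by squared
    Euclidean distance; return the index of the closest stored image."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(range(len(stored_feats)), key=lambda i: dist(query_feat, stored_feats[i]))
```

In the distributed arrangement described earlier, the stored feature vectors would live with the image repository on the server, and only the query feature vector would need to travel over the network connection.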
GB0721406A 2007-10-31 2007-10-31 Detecting Edge Pixels In An Image Withdrawn GB2454214A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0721406A GB2454214A (en) 2007-10-31 2007-10-31 Detecting Edge Pixels In An Image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0721406A GB2454214A (en) 2007-10-31 2007-10-31 Detecting Edge Pixels In An Image

Publications (2)

Publication Number Publication Date
GB0721406D0 GB0721406D0 (en) 2007-12-12
GB2454214A true GB2454214A (en) 2009-05-06

Family

ID=38834625

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0721406A Withdrawn GB2454214A (en) 2007-10-31 2007-10-31 Detecting Edge Pixels In An Image

Country Status (1)

Country Link
GB (1) GB2454214A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110189279A (en) * 2019-06-10 2019-08-30 北京字节跳动网络技术有限公司 Model training method, device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Creative Computing Vol 9, No. 1/January 1983 - pages 146-156 *

Also Published As

Publication number Publication date
GB0721406D0 (en) 2007-12-12

Similar Documents

Publication Publication Date Title
Zeng et al. Image splicing localization using PCA-based noise level estimation
Sadeghi et al. State of the art in passive digital image forgery detection: copy-move image forgery
Uliyan et al. Copy move image forgery detection using Hessian and center symmetric local binary pattern
EP2058741A2 (en) A method and apparatus for analysing a plurality of stored images
GB2431793A (en) Image comparison
EP2235680A1 (en) Invariant visual scene and object recognition
WO2007051992A1 (en) Image processing
CN113111947A (en) Image processing method, apparatus and computer-readable storage medium
Yarlagadda et al. Shadow removal detection and localization for forensics analysis
US8121437B2 (en) Method and apparatus of searching for images
Ajlan et al. A comparative study of edge detection techniques in digital images
Wang et al. Semantic segmentation of sewer pipe defects using deep dilated convolutional neural network
CN115861922B (en) Sparse smoke detection method and device, computer equipment and storage medium
CN116524357A (en) High-voltage line bird nest detection method, model training method, device and equipment
GB2454214A (en) Detecting Edge Pixels In An Image
KR101106448B1 (en) Real-Time Moving Object Detection For Intelligent Visual Surveillance
Katukam et al. Image comparison methods & tools: a review
Zhu et al. Image blind detection based on LBP residue classes and color regions
Mahmoudabadi et al. Detecting sudden moving objects in a series of digital images with different exposure times
Chaitra et al. Digital image forgery: taxonomy, techniques, and tools–a comprehensive study
JP6336827B2 (en) Image search device, image search method, and search system
Sari et al. An Approach For Stitching Satellite Images In A Bigdata Mapreduce Framework
Bian et al. Towards Stronger Illumination Robustness of Local Feature Detection and Description based on Auxiliary Learning
Iqbal et al. Seam Carve Detection Using Convolutional Neural Networks
Smith et al. Colour Histogram Segmentation for Object Tracking in Remote Laboratory Environments

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)