GB2522663A - A method of selecting a region of interest - Google Patents

A method of selecting a region of interest

Info

Publication number
GB2522663A
GB2522663A
Authority
GB
United Kingdom
Prior art keywords
region
border
preliminary
image
interest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1401697.6A
Other versions
GB2522663B (en)
GB201401697D0 (en)
Inventor
Ilya Romanenko
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apical Ltd
Original Assignee
Apical Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apical Ltd filed Critical Apical Ltd
Priority to GB1401697.6A priority Critical patent/GB2522663B/en
Publication of GB201401697D0 publication Critical patent/GB201401697D0/en
Priority to US14/609,204 priority patent/US9384561B2/en
Publication of GB2522663A publication Critical patent/GB2522663A/en
Application granted granted Critical
Publication of GB2522663B publication Critical patent/GB2522663B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/12 Edge-based segmentation
    • G06T7/13 Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching

Abstract

A method of selecting at least one region of interest within an image is disclosed. The method includes the selection of a preliminary region having a preliminary border using one or more characteristics of the image. An inner probability distribution of colour components is determined within the preliminary region and an outer probability distribution of colour components outside the preliminary region. The preliminary border is adjusted in dependence on the probability distributions, the adjusted border defining the at least one region of interest. The processing of the image can comprise selecting a border region including the border and applying a first algorithm to the region excluding at least partially the border region. The probability distributions can be weighted with the weight increasing with distance from the preliminary border. The method can further comprise determining a vector indicative of a distance between colour component histograms and assigning the pixel to within or outside the region of interest based on the vector.

Description

A METHOD OF SELECTING A REGION OF INTEREST
Technical Field
The present invention relates to a method of selecting a region of interest in an image.
Background
Methods are known for selecting a region of interest (ROI) in an image. For example, it may be desirable to identify a region corresponding to a human face, or to identify a region corresponding to sky in a landscape image.
An ROI may be manually identified by a user, for example by using a mouse to outline the region. However, it is often desirable to select this region automatically, for example to achieve improved speed or accuracy compared with manual selection by the user.
Methods are known for automatic region selection, for example face recognition algorithms which identify the presence and location of human faces, or colour analysis algorithms which select a region corresponding to a given range of colours.
In order to produce a natural-looking result when applying different processing methods to different regions of an image, it is often necessary to identify the border of the ROI to a high degree of accuracy, for example within one pixel. Existing methods are often unable to achieve this level of precision.
Summary
According to a first aspect of the present invention, there is provided a method of identifying at least one region of interest within an image, the method comprising: selecting a preliminary region having a preliminary border using one or more characteristics of the image; determining an inner probability distribution of colour components within the preliminary region and an outer probability distribution of colour components outside the preliminary region; adjusting the preliminary border in dependence on the probability distributions, the adjusted border defining the at least one region of interest.
The method refines a previously approximately identified ROI by separately analysing the distribution of colours within and outside the preliminary region. Pixels close to the border of the preliminary ROI are then either included within, or excluded from, the ROI by determining their degree of correspondence to the colour distributions within and outside the ROI.
A second aspect of the invention relates to a method of processing an image, the method including: selecting a region having a border using one or more characteristics of the image; selecting a border region including the border; applying a first algorithm to the region, excluding at least partially the border region.
The invention further relates to an apparatus for processing an image adapted to carry out one or both of the above methods.
The invention further relates to a computer program product comprising a non-transitory computer readable storage medium having a computer program in the form of computer readable instructions stored thereon, the computer program causing a computerized device to perform one or both of the above methods. Further features and advantages of the invention will become apparent from the following description of preferred embodiments of the invention, given by way of example only, which is made with reference to the accompanying drawings.
Brief Description of the Drawings
Figure 1 shows a method for identifying the border of an object in an image.
Figures 2a-2e show an image containing an object, with various methods for defining a border region.
Figure 3 shows a histogram of the hue and saturation components of colour space for a part of an image.
Figure 4 shows a second border region of smaller width than a first border region.
Figure 5 shows an apparatus for implementing an object border detection method.
Detailed Description
Region of interest selection is the process by which a region within an image may be selected. For example, it may be desirable to identify a region corresponding to grass within a landscape image, or to identify a region corresponding to a human face.
Many methods for identifying regions of interest are known in the art.
Figure 1 shows schematically a method according to one embodiment, in which a region of interest corresponding to an object in an image is identified. A preliminary border corresponding approximately to the border of the region of interest is defined 101. This is then used to determine a border region 102 which defines an inner region inside the inner edge of the border region, and an outer region outside the outer edge of the border region. The colour components of the image are then analysed separately for the inner region and the outer region 103, following which the border of the region is refined to more accurately approximate the actual border of the region of interest 104.
Figure 2a shows an image 201 including an object 202. The part of the image occupied by the object is the region of interest (ROI) 202. The ROI has a border 203.
The method approximates the location and extent of the ROI 202 by determining the location of the border 203.
A preliminary region surrounded by a preliminary border corresponding approximately to the object 202 may be identified using one or more known object identification techniques. For example, colours in the image may be analysed to identify a region of blue pixels corresponding approximately to sky in a landscape image. Other examples of methods which may be used to identify the preliminary region include the use of a computer facial recognition algorithm to identify a region corresponding to a human face or, where the image contains depth information, the preliminary region may be identified as containing pixels corresponding to a given depth range. In a particular embodiment the identification technique does not use colour.
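As an illustration of such a colour-based preselection, the sketch below marks pixels whose blue component clearly dominates as a preliminary "sky" region. The thresholds and the `preliminary_sky_mask` helper are illustrative assumptions, not values taken from the patent.

```python
# Sketch of a colour-based preliminary selection: mark a pixel as "sky"
# when blue is bright and clearly dominates red and green. The thresholds
# (128, +30) are illustrative, not taken from the patent.

def preliminary_sky_mask(image):
    """image: 2-D list of (r, g, b) tuples; returns a 2-D list of booleans."""
    mask = []
    for row in image:
        mask.append([b > 128 and b > r + 30 and b > g + 30 for (r, g, b) in row])
    return mask

# Tiny 2x2 example: top row sky-blue, bottom row grass-green.
image = [[(40, 60, 220), (50, 70, 200)],
         [(60, 180, 40), (70, 160, 50)]]
mask = preliminary_sky_mask(image)
# mask → [[True, True], [False, False]]
```

The resulting boolean mask plays the role of the preliminary region; its boundary is the preliminary border that the later steps refine.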
A first border region is determined that should be sufficiently wide to include all or substantially all of the border 203 of the ROI. The first border region may extend outside or inside the preliminary border, in which case it borders on the preliminary border. Alternatively, the first border region may extend on both sides of the preliminary border. The width of the first border region depends on the shape of the ROI and the properties of the identification technique used for determining the preliminary border. Its width should be large enough to include substantially all of the border 203 but small enough not to include too much of the image without border. In general, the choice of border region (inside, outside, or on both sides of the preliminary border) and its width depend on the specific image and object being processed and on the identification technique.
Figure 2b shows the image 201 containing the object 202 defined by the border 203. A preliminary border 204 has been identified, defining a preliminary region 205.
In this embodiment the preliminary border 204 is wholly outside the region of interest corresponding to the object 202, due to a specific choice of the identification technique.
According to an embodiment, a first border region 206 (hatched in the figure) is determined as extending from the preliminary border 204 to an edge 207. The edge 207, indicated by a dashed line, is wholly within the preliminary border 204, and thus the first border region is also wholly within the preliminary border 204. The first border region 206 defines an inner region 208 entirely within the edge 207, and an outer region 209 entirely outside the preliminary border 204.
Figure 2c also shows the image 201 containing the object 202 defined by the border 203. According to another embodiment, a preliminary border 210 has been determined, the preliminary border being wholly inside the region of interest corresponding to the object 202. The identification technique used in Figure 2c is different from that used in Figure 2b.
A first border region 211, hatched in the figure, is determined as extending from the preliminary border 210 to an edge 212. The edge 212, indicated by a dashed line, is entirely outside the preliminary border 210, and thus the first border region is also entirely outside the preliminary border 210. The first border region 211 defines an inner region 213 entirely within the preliminary border 210, and an outer region 214 entirely outside the edge 212.
Figure 2d also shows the image 201 containing the object 202 defined by the border 203. According to another embodiment, using an identification technique different from those used in Figures 2b and 2c, a preliminary border 215 has been determined, the preliminary border being partially inside and partially outside the region of interest corresponding to the object 202; hence, the preliminary border crosses the border 203 of the region of interest.
A first border region 216, hatched in the figure, is identified as extending from an outer edge 217 to an inner edge 218. The outer edge 217 is entirely outside the preliminary border 215, and the inner edge 218 is entirely inside the preliminary border 215. The preliminary border 215 is thus contained within the first border region 216.
The first border region 216 defines an inner region 219 entirely within the inner edge 218, and an outer region 220 wholly outside the outer edge 217.
Figure 2e depicts a method by which the preliminary region and the first border region may be determined. The image 201 containing the object 202 is divided into a grid of zones 221. For example, a 1000x1000 pixel image may be divided into a grid of 100x100 zones, each zone measuring 10x10 pixels. A preliminary border 222, indicated by a bold line in figure 2e, is determined based on the zones. The preliminary border follows the boundaries between zones such that each zone is either entirely within or entirely outside the preliminary border. A first border region 223, hatched in the figure, is determined as extending from the preliminary border 222 to an edge 224, the edge being within the preliminary border and indicated by a dashed line.
The edge 224 may be selected by, for example, requiring the first border region 223 to have a width of one zone. In the example shown in figure 2e, zones 225 and 226 are not within the border region but contain pixels of the ROI. If this situation is undesirable, the width of the border region may be increased such that the border 203 of the object is entirely within the first border region 223. The first border region 223 defines an outer region 228 wholly outside the preliminary border 222, and an inner region 227 wholly within the edge 224. Analysing the image zone-by-zone in this manner is typically computationally faster than pixel-by-pixel analysis.
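One simple way to obtain a zone-aligned preliminary region of the kind shown in figure 2e, sketched below under the assumption that a pixel-level test (such as the colour test above) is already available, is to mark a whole zone as inside when a majority of its pixels pass the test. The majority-vote rule and the `zone_mask` helper are illustrative choices, not the patent's prescribed method.

```python
# Sketch of zone-based preselection: divide the pixel-level mask into
# fixed-size square zones and mark a zone as inside the preliminary
# region when a majority of its pixels are inside. Zone size and the
# majority threshold are illustrative.

def zone_mask(pixel_mask, zone):
    h, w = len(pixel_mask), len(pixel_mask[0])
    zones = []
    for zy in range(0, h, zone):
        row = []
        for zx in range(0, w, zone):
            count = sum(pixel_mask[y][x]
                        for y in range(zy, min(zy + zone, h))
                        for x in range(zx, min(zx + zone, w)))
            row.append(count * 2 > zone * zone)  # majority vote
        zones.append(row)
    return zones

# 4x4 pixel mask split into 2x2 zones: only the top-left zone has a
# majority of inside pixels.
pm = [[True, True, False, False],
      [True, False, False, False],
      [False, False, False, False],
      [False, False, False, False]]
# zone_mask(pm, 2) → [[True, False], [False, False]]
```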
According to another embodiment, a preliminary border is determined on a pixel (as opposed to zone) basis and the image 201 is divided into a grid of zones 221 as depicted in figure 2e. The preliminary border may now pass through the zones instead of only following the zone boundaries. The first border region may then be defined to comprise the zones through which the preliminary border passes. The first border region may be enlarged relative to this by including zones adjacent to those through which the preliminary border passes. For example, zones inside the preliminary border, outside the preliminary border, or both inside and outside may be added in the cases in which the preliminary border is wholly outside the region of interest, wholly inside the region of interest, or partially inside and outside the region of interest, respectively.
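The zones through which a pixel-level preliminary border passes can be found directly from the pixel mask: a zone straddles the border exactly when it contains both inside and outside pixels. The sketch below assumes such a mask; the `border_zones` helper name is hypothetical.

```python
# Sketch: with a pixel-level preliminary mask, the first border region can
# be taken as the set of zones containing both inside and outside pixels,
# i.e. the zones the preliminary border passes through.

def border_zones(pixel_mask, zone):
    h, w = len(pixel_mask), len(pixel_mask[0])
    result = []
    for zy in range(0, h, zone):
        row = []
        for zx in range(0, w, zone):
            vals = {pixel_mask[y][x]
                    for y in range(zy, min(zy + zone, h))
                    for x in range(zx, min(zx + zone, w))}
            row.append(len(vals) == 2)  # zone holds both True and False pixels
        result.append(row)
    return result

# Only the bottom-left 2x2 zone mixes inside and outside pixels.
pm = [[True, True, False, False],
      [True, True, False, False],
      [True, False, False, False],
      [False, False, False, False]]
# border_zones(pm, 2) → [[False, False], [True, False]]
```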
The approximation of the ROI defined by the preliminary border 204, 210, 215, 222 is then refined by statistical analysis of pixel colour components, by determining one or more probability distributions of the pixel colour components. Examples of colour components are red, green and blue in red-green-blue (RGB) colour space, or hue, saturation and value in hue-saturation-value (HSV) colour space. The probability distributions may be determined separately for the preliminary region and the region outside the preliminary region, in other words, inside the preliminary border 204, 210, 215, 222, and outside the preliminary border. Alternatively, the probability distributions may be determined separately for the inner region 208, 213, 219, 227 and the outer region 209, 214, 220, 228. In this case pixels in the first border region 206, 211, 216, 223 are at least partly excluded from the determination of either or both probability distributions. The probability distributions will be better defined than those in the previous case, because the inner region includes no, or only a relatively small number of, pixels that do not belong to the object. Similarly, the outer region includes no, or only a relatively small number of, pixels that belong to the object. A better definition of the probability distributions allows a more accurate assignment of pixels to the region of interest 202 and to outside the region of interest.
Pixels in the first border region may be excluded from either or both of these probability distributions. Alternatively, the pixels in the first border region may be partly excluded from either or both probability distributions by giving the pixels a weight in their contribution to the probability distributions. For example, in the case depicted in figure 2d, the weight for a given pixel may increase with perpendicular distance between the preliminary border and the pixel, from a weight of zero for pixels adjacent to the preliminary border to a weight of one for pixels at an edge of the border region.
As another example, in the embodiment depicted in figure 2e, the pixels in the first border region may also be included in either/both histograms, with a weight applied. The weight of a given pixel may be determined as described above, based on the perpendicular distance between the pixel and the preliminary border. Alternatively, all pixels within a given zone may have the same weight depending on the perpendicular distance between the preliminary border and the centre of that zone.
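The linear distance-based weighting described above can be sketched as a small helper. The zero-to-one linear ramp and the `width` parameter (the border-region width in pixels) are illustrative assumptions consistent with the text, not a formula given in the patent.

```python
# Sketch of the distance-based weighting: a pixel's contribution to a
# histogram grows linearly from 0 at the preliminary border to 1 at the
# edge of the border region. `width` is the border-region width in pixels.

def contribution_weight(distance, width):
    """distance: perpendicular distance (pixels) from the preliminary border."""
    if distance <= 0:
        return 0.0
    if distance >= width:
        return 1.0
    return distance / width

# Weights at the border, mid-region, and edge of a 10-pixel-wide region:
# contribution_weight(0, 10) → 0.0
# contribution_weight(5, 10) → 0.5
# contribution_weight(10, 10) → 1.0
```

For the zone-based variant, the same function can be applied once per zone, using the distance from the preliminary border to the zone centre.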
The probability distributions may be two-dimensional histograms indicating the number of pixels having colour components in given ranges. According to one embodiment, the colour of each pixel is expressed in hue-saturation-value (HSV) colour space; the two-dimensional histograms may then show distributions of pixels over hue and saturation. Typically such histograms contain peaks corresponding to the most common colours in the two regions; for example, an inner region showing grass would contain peaks corresponding to green, yellow and brown, and an outer region covering sky would contain peaks corresponding to blue and white. Figure 3 shows a hue-saturation histogram 301 of this type, containing two peaks 302 at different hue and saturation values for two different regions. The accuracy of the method may be increased by using three-dimensional histograms which, for example, include the value component in addition to hue and saturation.
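A weighted two-dimensional hue-saturation histogram of the kind described above might be sketched as follows. The bin count, the assumption that HSV components lie in [0, 1), and the `hs_histogram` helper name are illustrative choices, not taken from the patent.

```python
# Sketch of a two-dimensional hue-saturation histogram: pixels, given as
# (h, s, v) triples with components in [0, 1), are binned over hue and
# saturation; an optional per-pixel weight supports the distance-based
# weighting scheme. 8 bins per axis is an illustrative choice.

def hs_histogram(pixels, weights=None, bins=8):
    hist = [[0.0] * bins for _ in range(bins)]
    if weights is None:
        weights = [1.0] * len(pixels)
    for (h, s, v), w in zip(pixels, weights):
        hist[min(int(h * bins), bins - 1)][min(int(s * bins), bins - 1)] += w
    return hist

# Two similar saturated blue-ish pixels land in the same (hue, saturation)
# bin: hue 0.6*8 = 4.8 → bin 4, saturation 0.9*8 = 7.2 → bin 7.
pixels = [(0.6, 0.9, 0.8), (0.62, 0.93, 0.7)]
hist = hs_histogram(pixels)
# hist[4][7] → 2.0
```

Such histograms would be built once for the inner region and once for the outer region, with weights supplied for any partially included border-region pixels.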
The adjustment of the preliminary border involves analysing the pixels in the first border region 206, 211, 216, 223 to determine whether they should be within or outside the ROI. This operation is performed by determining whether a given pixel corresponds more closely to the histogram of pixel colour components in the inner region 208, 213, 219, 227 or to the histogram of pixel colour components in the outer region 209, 214, 220, 228, or, alternatively, to the histogram of pixel colour components in the preliminary region or to the histogram of pixel colour components outside the preliminary region.
One method for determining to which histogram a pixel most closely corresponds is to produce a vector describing the distance between the two histograms, methods for this being known to those skilled in the art. The pixel in question may then be mapped onto this vector. A threshold can then be assigned to the vector, with pixels being assigned to the ROI if they lie between the threshold and the histogram corresponding to the inner region 208. Similarly, pixels are excluded from the ROI if they lie between the threshold and the histogram corresponding to the outer region 209.
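One plausible reading of this vector-based assignment, sketched below, flattens the two histograms, takes their difference as a direction, and projects the pixel's one-hot histogram onto that direction; with the threshold placed at zero, this reduces to comparing the pixel's bin value in the two histograms. This is an illustrative interpretation under stated assumptions, not the patent's definitive formulation.

```python
# Sketch of vector-based assignment. The projection of a one-hot pixel
# histogram onto the difference vector (inner - outer) is just the
# difference of the two bin values at the pixel's bin; a zero threshold
# then assigns the pixel to whichever histogram favours that bin.

def assign_pixel(pixel_bin, inner, outer):
    """pixel_bin: (i, j) histogram bin of the pixel.
    Returns True if the pixel is assigned to the region of interest."""
    i, j = pixel_bin
    return inner[i][j] - outer[i][j] > 0.0

inner = [[5.0, 0.0], [1.0, 0.0]]
outer = [[0.0, 4.0], [1.0, 2.0]]
# assign_pixel((0, 0), inner, outer) → True   (bin favoured by inner)
# assign_pixel((0, 1), inner, outer) → False  (bin favoured by outer)
```

A nonzero threshold on the projection would bias assignment towards one region, which may be useful when the two histograms overlap heavily.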
In this manner, the accuracy of the determination of the border 203 of the ROI may be improved with respect to previous methods. The determination of distance between the histograms also provides a measure of the accuracy of the assignment of the pixels to the ROI or to outside the ROI, with a larger difference corresponding to a greater degree of accuracy. For example, in the embodiment shown in figure 2e in which the image is divided into a grid of zones, for sufficiently distant histograms the border may be determined to an accuracy of a single pixel, whereas in the worst case of very similar histograms the border may only be determined to an accuracy of one zone. Thus, the method is more accurate when identifying a region whose colour component distribution differs significantly from that of the rest of the image.
Figure 4 shows a close-up view of an area in figure 2d around a segment of the border 203 of the object 202. Also shown are the preliminary border 215 and the outer edge 217 and inner edge 218 of the first border region 216. A second outer edge 401 and a second inner edge 402 may be defined, the area between these edges being a second border region 403. If the width of the second border region is smaller than the width of the first border region, the computational speed of the method described above to assign pixels to regions may be increased by using the second border region instead of the first border region, because there are fewer pixels to analyse.
Although several method steps have been explained with reference to figure 2b, the same method steps can be applied to the situations shown in figures 2c, 2d and 2e.
Once the border 203 of the ROI has been identified, different image processing techniques, well known to those skilled in the art, may be applied within the ROI from those applied outside the ROI. For example, a sharpening algorithm may be applied, sharpening a region corresponding to grass to a greater degree than a region corresponding to sky. In this manner, the undesired production of image artefacts in the sky region, where they may be more noticeable, may be averted. By way of another example, a colour correction algorithm may be applied to human faces, with a different colour correction algorithm being applied to the background of the image. Other image processing techniques include reduction of noise and white balancing.
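As a toy illustration of region-dependent processing, the sketch below applies a stronger gain inside the refined region than outside it. The per-pixel gain is a stand-in for a real sharpening or colour-correction filter, and the helper name and gain values are illustrative assumptions.

```python
# Sketch of region-dependent processing on a single-channel image: apply a
# strong adjustment inside the region of interest and a weak one outside,
# using the refined boolean mask. The gain stands in for a real filter.

def process(image, mask, inner_gain=1.2, outer_gain=1.0):
    return [[min(255, round(v * (inner_gain if m else outer_gain)))
             for v, m in zip(row, mrow)]
            for row, mrow in zip(image, mask)]

image = [[100, 200], [100, 200]]
mask = [[True, True], [False, False]]   # top row inside the ROI
# process(image, mask) → [[120, 240], [100, 200]]
```

In practice the two branches would invoke different filters (e.g. different sharpening kernels) rather than scalar gains, but the masked dispatch is the same.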
The first border region may also be used in other contexts in which an algorithm is applied to an image. An algorithm may be applied to a selected region of an image. A border region is determined that includes the border of the selected region. The border region is excluded at least in part from the algorithm. Any irregularities near the border of the selected region will then not be included in the processing, thereby reducing artefacts in the processed image. The border region may be determined as described above with reference to the first border region in figures 2b-e. The border region may be excluded from the algorithm wholly or in part. In the latter case, parts of the border region to which the algorithm is applied may be weighted with a weight increasing with distance of the parts from the border. The algorithm may be applied to the region inside or to the region outside the border region, the border region itself being at least partly excluded.
If a first algorithm is applied to the region within the border region, a second algorithm, possibly different to the first one, may be applied to the region outside the border region.
One or both of the algorithms may be the known determination of a region of interest in an image using a single probability distribution, such as a histogram. The algorithm may be the above determination of the probability distribution of colour components.
The result of the algorithm will be more accurate if the probability distribution is determined over the region excluding at least partly the border region. One or both of the algorithms may be a known image processing technique such as sharpening, colour correction, reduction of noise or white balancing.
In addition to identifying a single region in an image, the technique described here may be used to identify multiple regions of interest simultaneously, for example corresponding to multiple human faces in an image, or corresponding to grass, sky and water in a landscape image. If multiple regions of interest relating to similar objects, such as faces, are to be determined in an image, the histogram of the outer region should cover only pixels outside all preliminary regions corresponding to the regions of interest. The technique may also be applied automatically, for example to images captured by a camera, or it may be applied at the manual request of a user, for example in image manipulation software.
An apparatus for carrying out the above described method is shown in figure 5.
An input 501 for images is connected to a processor 502 and a memory 503 which includes computer program instructions. The instructions are configured to cause the processor to identify the border of an object in the image in the manner described above.
The shape and position of the border is then output 504. The apparatus may for example be implemented in a camera or computer, and the image may be input from a camera sensor or from a memory. The output may for example be to a screen, or stored in memory.
The invention may be implemented in a computer program product comprising a non-transitory computer readable storage medium having computer readable instructions stored thereon, the computer readable instructions being executable by a computerized device to cause the computerized device to identify the border of an object in an image in the manner described above.
The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged. For example, the invention may be implemented in a camera, a computer, or other image processing means. The invention may also be implemented in hardware or software. In addition, while the description is directed to a rectangular grid of "zones", it will be appreciated that the method may equally well be applied given an alternative method for approximate determination of the border of an ROI. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. In particular, features of the embodiments shown in Figures 2 and 4 relating to determining the border region and to determining a probability distribution can also be used in the method where an image processing technique is applied to a selected region of an image. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims (32)

  1. A method of selecting at least one region of interest within an image, the method including: selecting a preliminary region having a preliminary border using one or more characteristics of the image; determining an inner probability distribution of colour components within the preliminary region and an outer probability distribution of colour components outside the preliminary region; adjusting the preliminary border in dependence on the probability distributions, the adjusted border defining the at least one region of interest.
  2. A method according to claim 1, including selecting a first border region including the preliminary border.
  3. A method according to claim 1 or 2, including selecting the first border region to be inside, outside, or both inside and outside the preliminary border for the preliminary border being wholly outside the region of interest, wholly inside the region of interest, and partially inside and partially outside the region of interest, respectively.
  4. A method according to claim 2 or 3, in which the determining of the inner and/or outer probability distributions excludes the first border region.
  5. A method according to claim 2 or 3, in which the determining of the inner and/or outer probability distributions includes the first border region, and contributions to the inner and/or outer probability distributions from the first border region are weighted with a weight increasing with distance of the contribution from the preliminary border.
  6. A method according to any one of claims 1 to 5, in which the inner and outer probability distributions of colour components are an inner colour component histogram and an outer colour component histogram, respectively.
  7. A method according to claim 6, in which the colour component histograms are in at least two colour components of colour space.
  8. A method according to claim 6 or 7, in which the colour component histograms are two-dimensional histograms in hue and saturation components of hue-saturation-value colour space.
  9. A method according to claim 6, 7 or 8, in which the adjusting of the preliminary border includes determining whether a pixel corresponds more closely to the inner histogram or to the outer histogram, and assigning the pixel to be within or outside the region of interest accordingly.
  10. A method according to claim 9, including: determining a vector indicative of a distance between the two histograms; defining a threshold on the vector; mapping the pixel onto the vector; assigning the pixel to within the region of interest if its mapping to the vector is on the side of the threshold closest to the inner histogram; and assigning the pixel to outside the region of interest if its mapping to the vector is on the side of the threshold closest to the outer histogram.
  11. A method according to any one of claims 1 to 10, in which the analysing of at least one of the probability distributions of colour components is carried out in hue-saturation-value colour space.
  12. 12. A method according to any one of claims ito 11, including selecting a second border region including the preliminary border.
  13. 13. A method according to claim 12, in which the adjusting the preliminary border includes assigning pixels of the image only in the second border region to the region of interest or to outside the region of interest. In I 3
14. A method according to claim 2 and 12, in which the first border region has a first width and the second border region has a second width, the first width being larger than or equal to the second width.
15. A method according to any one of claims 1 to 14, including dividing at least part of the image including the region of interest into a grid of zones.
16. A method according to claim 15, in which the preliminary region is defined in zones.
17. A method according to claim 15 or 16 and claim 2, in which at least part of the image is defined in zones.
18. A method according to any one of claims 1 to 17, including applying one or more image processing techniques to the image after adjusting the extent of the regions of interest.
19. A method according to claim 18, in which the one or more processing techniques applied to the image differs between the region of interest and outside the region of interest.
20. A method according to claim 18 or 19, in which the one or more image processing techniques applied to the image include increasing or decreasing of sharpness, reduction of noise, adjustment of colour and/or white balancing.
21. A method according to any one of claims 1 to 20, in which one of the characteristics of the image is colour information, or depth information, or the presence of one or more faces recognised by facial recognition.
22. Apparatus for processing an image, the apparatus comprising: at least one processor; and at least one memory including computer program instructions, the at least one memory and the computer program instructions being configured to, with the at least one processor, cause the apparatus to perform: a method of selecting at least one region of interest within an image, the method comprising: selecting a preliminary region having a preliminary border using one or more characteristics of the image; determining an inner probability distribution of colour components within the preliminary region and an outer probability distribution of colour components outside the selected region; adjusting the preliminary border in dependence on the probability distributions, the adjusted border defining the at least one region of interest.
23. An apparatus according to claim 22, in which the method includes selecting a first border region including the preliminary border, in which the determining of the inner and/or outer probability distributions excludes or only partially includes the first border region.
24. A computer program product comprising a non-transitory computer readable storage medium having computer readable instructions stored thereon, the computer readable instructions being executable by a computerized device to cause the computerized device to perform a method of selecting at least one region of interest within an image, the method comprising: selecting a preliminary region having a preliminary border using one or more characteristics of the image; determining an inner probability distribution of colour components within the preliminary region and an outer probability distribution of colour components outside the selected region; adjusting the preliminary border in dependence on the probability distributions, the adjusted border defining the at least one region of interest.
25. A computer program product according to claim 24, in which the method includes selecting a first border region including the preliminary border; in which the determining of the inner and/or outer probability distributions excludes or only partially includes the first border region.
26. A method of processing an image, the method including: selecting a region having a border using one or more characteristics of the image; selecting a border region including the border; applying a first algorithm to the region excluding at least partially the border region.
27. A method according to claim 26, in which the first algorithm is determining a probability distribution of colour components.
28. A method according to claim 26 or 27, in which the first algorithm is applied to the region excluding entirely the border region.
29. A method according to claim 26 or 27, in which the application of the first algorithm to a part of the border region is weighted with a weight increasing with distance of the part from the border.
30. A method according to any one of claims 26 to 29, including selecting the border region to be inside, outside, and both inside and outside the border, for the border being wholly outside the region, wholly inside the region, and partially outside the region, respectively.
31. A method according to any one of claims 26 to 30, including: applying a second algorithm to the region outside the border region.
32. Apparatus for processing an image, the apparatus comprising: at least one processor; and at least one memory including computer program instructions, the at least one memory and the computer program instructions being configured to, with the at least one processor, cause the apparatus to perform: a method of processing an image, the method including: selecting a region having a border using one or more characteristics of the image; selecting a border region including the border; applying a first algorithm to the region excluding at least partially the border region.
33. A computer program product comprising a non-transitory computer readable storage medium having computer readable instructions stored thereon, the computer readable instructions being executable by a computerized device to cause the computerized device to perform a method of processing an image, the method including: selecting a region having a border using one or more characteristics of the image; selecting a border region including the border; applying a first algorithm to the region excluding at least partially the border region.
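The distance-weighted exclusion of the border region (claims 2, 5 and 29) can likewise be sketched. This is a hedged illustration under stated assumptions: the function name, the 16-bin default, and the linear ramp from weight 0 at the border to weight 1 at the edge of the border region are choices made here, not details fixed by the claims.

```python
import numpy as np

def weighted_histograms(hue_bins, sat_bins, inside_mask, dist_to_border,
                        border_width, n_bins=16):
    """Build inner/outer hue-saturation histograms in which contributions
    from pixels lying inside a border region of the given width are
    down-weighted the closer they sit to the preliminary border.

    hue_bins, sat_bins: per-pixel bin indices in [0, n_bins).
    inside_mask: True where the pixel is inside the preliminary region.
    dist_to_border: per-pixel distance (in pixels) to the preliminary border.
    """
    # Weight ramps linearly from 0 at the border to 1 at the edge of the
    # border region; pixels beyond the border region count fully.
    w = np.clip(dist_to_border / border_width, 0.0, 1.0)
    inner = np.zeros((n_bins, n_bins))
    outer = np.zeros((n_bins, n_bins))
    np.add.at(inner, (hue_bins[inside_mask], sat_bins[inside_mask]), w[inside_mask])
    np.add.at(outer, (hue_bins[~inside_mask], sat_bins[~inside_mask]), w[~inside_mask])
    # Normalise so the two histograms are comparable probability distributions.
    inner /= max(inner.sum(), 1e-12)
    outer /= max(outer.sum(), 1e-12)
    return inner, outer
```

Setting the weight of border-region pixels to zero instead of a ramp would reproduce the full-exclusion variant of claims 4 and 28.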
GB1401697.6A 2014-01-31 2014-01-31 A method of selecting a region of interest Active GB2522663B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1401697.6A GB2522663B (en) 2014-01-31 2014-01-31 A method of selecting a region of interest
US14/609,204 US9384561B2 (en) 2014-01-31 2015-01-29 Method of selecting a region of interest

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1401697.6A GB2522663B (en) 2014-01-31 2014-01-31 A method of selecting a region of interest

Publications (3)

Publication Number Publication Date
GB201401697D0 GB201401697D0 (en) 2014-03-19
GB2522663A true GB2522663A (en) 2015-08-05
GB2522663B GB2522663B (en) 2020-02-12

Family

ID=50344203

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1401697.6A Active GB2522663B (en) 2014-01-31 2014-01-31 A method of selecting a region of interest

Country Status (2)

Country Link
US (1) US9384561B2 (en)
GB (1) GB2522663B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201410635D0 (en) 2014-06-13 2014-07-30 Univ Bangor Improvements in and relating to the display of images
US10607525B2 (en) 2015-05-19 2020-03-31 Irystec Software Inc. System and method for color retargeting
US9563824B2 (en) * 2015-06-15 2017-02-07 Qualcomm Incorporated Probabilistic color classification
US10255675B2 (en) * 2016-01-25 2019-04-09 Toshiba Medical Systems Corporation Medical image processing apparatus and analysis region setting method of texture analysis
US11776129B2 (en) * 2020-12-16 2023-10-03 Qualcomm Incorporated Semantic refinement of image regions

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002052739A1 (en) * 2000-12-21 2002-07-04 Adobe Systems Incorporated Image extraction from complex scenes in digital video
US20070133862A1 (en) * 1999-07-25 2007-06-14 Orbotech Ltd. Detection of surface defects employing subsampled images
US20090196349A1 (en) * 2008-02-01 2009-08-06 Young-O Park Method for estimating contour of video object
US20090300553A1 (en) * 2008-05-28 2009-12-03 Daniel Pettigrew Defining a border for an image
US20090297031A1 (en) * 2008-05-28 2009-12-03 Daniel Pettigrew Selecting a section of interest within an image
US20100278424A1 (en) * 2009-04-30 2010-11-04 Peter Warner Automatically Extending a Boundary for an Image to Fully Divide the Image
US20120148151A1 (en) * 2010-12-10 2012-06-14 Casio Computer Co., Ltd. Image processing apparatus, image processing method, and storage medium
US8391594B1 (en) * 2009-05-28 2013-03-05 Adobe Systems Incorporated Method and apparatus for generating variable-width border masks
US8406566B1 (en) * 2010-05-27 2013-03-26 Adobe Systems Incorporated Methods and apparatus for soft edge masking
US20130336582A1 (en) * 2012-06-14 2013-12-19 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5086496A (en) * 1988-05-02 1992-02-04 Arch Development Corporation Method for hidden line and surface removal in a three dimensional display
GB2231749B (en) * 1989-04-27 1993-09-29 Sony Corp Motion dependent video signal processing
US7486811B2 (en) * 1996-09-16 2009-02-03 The Research Foundation Of State University Of New York System and method for performing a three-dimensional virtual examination of objects, such as internal organs
US7194117B2 (en) * 1999-06-29 2007-03-20 The Research Foundation Of State University Of New York System and method for performing a three-dimensional virtual examination of objects, such as internal organs
US6631136B1 (en) * 1998-08-26 2003-10-07 Hypercom Corporation Methods and apparatus for data communication using a hybrid transport switching protocol


Also Published As

Publication number Publication date
US20150221101A1 (en) 2015-08-06
GB2522663B (en) 2020-02-12
US9384561B2 (en) 2016-07-05
GB201401697D0 (en) 2014-03-19

Similar Documents

Publication Publication Date Title
US9384561B2 (en) Method of selecting a region of interest
CN113781402A (en) Method and device for detecting chip surface scratch defects and computer equipment
US9489751B2 (en) Image processing apparatus and image processing method for color correction based on pixel proximity and color similarity
US10455123B2 (en) Method for increasing the saturation of an image, and corresponding device
US8787666B2 (en) Color analytics for a digital image
JP5914688B2 (en) Color analysis for digital images
US20160307306A1 (en) Method and apparatus for image colorization
US8559714B2 (en) Post processing for improved generation of intrinsic images
CN109903275B (en) Fermented grain mildewing area detection method based on self-adaptive multi-scale filtering and histogram comparison
CN110175967B (en) Image defogging processing method, system, computer device and storage medium
US9672447B2 (en) Segmentation based image transform
CN107239761B (en) Fruit tree branch pulling effect evaluation method based on skeleton angular point detection
US8428352B1 (en) Post processing for improved generation of intrinsic images
KR20180064028A (en) Method and apparatus of image processing
JP5203159B2 (en) Image processing method, image processing system, and image processing program
Wang et al. Saliency-based adaptive object extraction for color underwater images
JP4742068B2 (en) Image processing method, image processing system, and image processing program
JP5860970B2 (en) Post-processing to improve eigenimage generation
US10083516B2 (en) Method for segmenting a color image and digital microscope
Tran et al. Determination of Injury Rate on Fish Surface Based on Fuzzy C-means Clustering Algorithm and L* a* b* Color Space Using ZED Stereo Camera
CN112308791B (en) Color constancy method based on gray pixel statistics
CN111161157B (en) Color enhancement method, graphic processor and display device
CN117115490A (en) Image pre-segmentation method, device, medium and electronic equipment
Kwon et al. Color constancy algorithm without segmentation
CN117789242A (en) Gesture segmentation method, device, equipment and storage medium

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20220929 AND 20221005