GB2227346A - Image analysis

Image analysis

Info

Publication number
GB2227346A
Authority
GB
United Kingdom
Prior art keywords
objects
pixels
scene
data
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB8919350A
Other versions
GB8919350D0 (en)
Inventor
A R Mcdonald
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BAE Systems PLC
Original Assignee
British Aerospace PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by British Aerospace PLC
Publication of GB8919350D0
Publication of GB2227346A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/457 Local feature extraction by analysis of parts of the pattern by analysing connectivity, e.g. edge linking, connected component analysis or slices

Abstract

Objects 1-12 (Fig. 2) making up an image are identified and labelled by assigning labels to the objects, calculating and storing information about said objects as a plurality of lists of data items (Fig. 3), and using pointer values associated with the lists which are indicative of the label of either the next object contained within the relevant object or the next object in the same list.

Description

IMAGE ANALYSIS

This invention relates to image analysis, and more specifically to a method for identifying and labelling a series of objects making up an image.
According to the invention there is provided, in image analysis, a method of identifying and labelling objects making up an image, the method comprising: forming a series of digital signals indicative of whether or not respective pixels of said image represent object edge points; taking said signals in sequence and evaluating each in conjunction with others of the signals to determine whether the associated pixel belongs to a newly detected object or an object which has already been detected by reference to preceding pixels belonging thereto; assigning label values to said pixels in dependence upon which object they belong to, the assignment differentiating between different objects made up of interconnected edge point pixels and also differentiating those objects from objects made up of interconnected not-edge point pixels; and calculating and storing information about said objects as a plurality of lists of data items, each list comprising data about all objects which are contained within the same number of other objects, and each data item being associated with a particular object and comprising two pointer values, one indicative of the label assigned to the next object determined to be contained within the object associated with the data item and the other indicative of the label assigned to the next object associated with a data item in the same list.
For a better understanding of the invention, reference will now be made, by way of example, to the accompanying drawings, in which:
Figure 1 is a block diagram for illustrating the operating principles of the invention;
Figure 2 is a representation of an image made up of a series of objects;
Figure 3 is a diagram for explaining the form of a series of pointer lists associated with objects in Figure 2;
Figure 4 is a diagram for showing the relationship between live and dead structures;
Figure 5 is a diagram showing the relationship between pixel mask content and operations;
Figure 6 is a diagram for showing the modification to provide simple filling; and
Figure 7 is a diagram for showing typical behaviour of the simple filler.
The device is a 'filling-in' object number allocator. It is used to control collator devices so that image data may be segmented into object data.
The device accepts a bit sequence representing the presence or absence of edges in a raster-scanned scene. It then partitions that scene into objects and generates object numbers to which data associated with those pixels may be allocated. The device also generates signals to control the collation of the object data, and provides an interface through which an external processor may unload the object data and the structure of the objects within the scene. These functions are accomplished in a single pass of the scene.
The operation of the device is selectable between two modes, namely single-level with a simplistic filler, and multi-level. In single-level mode a binary scene is generated with all filled pixels of objects set to a defined level, and with the collated data matching this modified image. In multi-level mode nested object structures can be handled without hiding the image structure, but the binary scene is not generated.
The operating principle of the device is to represent the scene as a binary tree (which may be considered as a list structure) and to analyse it by modifying the structure as the scene is scanned.
The operation of 'filling in' a closed object is accomplished by splitting the scene into connected edge and interior areas for which data is collated separately. Collated interior data sets are then merged with their edges in each case where an edge object is found to have surrounded an interior object. In this way the effect of object filling in a single pass is achieved, without restriction on the object shape or the nesting of objects within objects.
Operation in Single Level mode may be achieved by modifying the action of the control logic to actually fill the binary scene, a process which masks any inner object structure.
An overall system block diagram is shown in Figure 1. The edge scene is input to a mask decoder 1 together with its line-delayed version. The mask is decoded to give instructions to a tracer 2 which analyses the scene, using a pointer memory 3, and outputs object numbers and instructions to one or more collators (not shown). The tracer gets new object numbers as required from a freelist 4, and when it determines that an object is complete it places it on an output queue 5, if it is to be unloaded by an external system (not shown), or back on the freelist if not. An unloader 6 forms the interface to the external system and allows access to object data as well as replacing unloaded objects back on the freelist.
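By way of illustration only (the patent gives no code, and the component interfaces and names below are assumptions made for this sketch), the dataflow of Figure 1 might be skeletonised as follows, with the freelist as a pool of reusable object numbers and the mask formed from the current and line-delayed edge bits:

class FreeList:
    """Pool of reusable object numbers (block 4 of Figure 1)."""
    def __init__(self, size):
        self.free = list(range(size))
    def take(self):
        return self.free.pop()        # new object number for the tracer
    def give_back(self, n):
        self.free.append(n)           # number recycled after unloading

def process_scene(edge_bits, width, decoder, tracer, freelist, out_queue):
    """Single pass over a raster-scanned edge scene.

    'decoder' and 'tracer' stand in for blocks 1 and 2 of Figure 1; their
    decode/apply interfaces are assumptions made for this sketch."""
    prev_line = [0] * width                      # line-delayed version of the scene
    for start in range(0, len(edge_bits), width):
        curr_line = edge_bits[start:start + width]
        for x in range(width):
            op = decoder.decode(prev_line, curr_line, x)   # N / O / L instruction
            tracer.apply(op, freelist, out_queue)          # numbers + collator control
        prev_line = curr_line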
The scene representation is input as a stream of bit data.
The operation of the unit involves generating an object list structure representation of the scene which is used to control the allocation of object numbers to pixels.
The scene input is represented by an array of pixels.
These pixels are spatially arranged in a rectangular grid. Each pixel has an edge value of 1 or 0 depending on some externally derived criterion and may also have a qualifying value of 1 or 0.
The external edge identification system is arranged such that pixels at the border have edge values of 0 so that edges do not touch the scene border.
The output structure of the scene is represented as a binary tree where each element represents an object A and contains two pointers, the first being either a null or a pointer to an object B totally surrounded by the object A, and the second being a null or a pointer to another object C which is at the same level as A. Objects in the scene have levels, where the level of an object is the number of objects surrounding that object. A null pointer signifies that there is no object pointed to. A typical scene processed in this way is shown in Figure 2.
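A minimal sketch of this representation (the names are assumptions; the patent supplies no code): each element carries a 'surrounded' pointer to the first object it totally surrounds and a 'same-level' pointer to the next object at its own level, so an object's level falls out of the depth at which it sits in the tree:

from dataclasses import dataclass
from typing import Optional

@dataclass
class SceneObject:
    """One element of the binary scene tree (child/sibling form)."""
    label: int                                    # object number from the scan
    is_edge: bool                                 # edge object or interior object
    surrounded: Optional["SceneObject"] = None    # object B totally surrounded by this one
    same_level: Optional["SceneObject"] = None    # object C at the same level

def level_of(root: SceneObject, target: SceneObject) -> Optional[int]:
    """Level = number of objects surrounding 'target' (whole-scene object = 0)."""
    def walk(node, depth):
        while node is not None:
            if node is target:
                return depth
            below = walk(node.surrounded, depth + 1)   # descend one nesting level
            if below is not None:
                return below
            node = node.same_level                     # stay at this level
        return None
    return walk(root, 0)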
Objects are considered as being of two classes, edge and interior, with all pixels being assigned to one or other class of object. An edge or exterior object is either an edge point, a collection of connected edge points or a collection of connected edge points surrounding an interior object. An interior object is either an interior point, a collection of connected interior points or a collection of connected interior points surrounding or being surrounded by an edge object. It is possible for data on interior objects to be combined with that for their surrounding edges, and vice-versa, on completion. This results in a hierarchy of objects where each object data set refers to both the object and all objects which it surrounds. Optionally it is then possible to delete some data, for instance that on internal objects, whilst preserving meaningful object statistics. This combination of data means that the top level object for the scene will accumulate the total scene statistics.
The top-level object represents the whole scene; it is an interior object and has a pointer to the first-level object list.
The first-level list contains exterior objects. These objects have pointers to lists of the objects which are immediately surrounded by first-level objects; below those come objects immediately surrounded by second-level objects, which are interior objects, and so on.
The binary tree structure used may be thought of as a list structure. Two lists are used: the final list structure, and a temporary one which reflects the object structure of the line currently being processed, for example line 'a' of Figure 4. The final list structure is built up by unloading objects from the temporary list structure as they are completed. The temporary list structure contains pointers to objects which are 'live' on the particular line, and also pointers to the most recently unloaded objects at each level. This list is referred to as the live list.
Objects are created in this list and stay in it until completed or merged.
The final list structure is constructed externally as the object data is unloaded in a bottom-up manner (this is necessary because the upper level structure is unknown until the scene has been completely traversed). The pointers used in the temporary list will be object addresses within the collators but the pointers used in the final list are arbitrary numbers since particular object addresses can be reused. When an object is unloaded it is given an arbitrary number and will have pointers to previously unloaded objects similarly numbered. This process is illustrated in Figure 4.
It is necessary for 'append' instructions to be issued to the device building the final list structure when separated complex sections of a scene are found to be joined. This occurs when two objects in the live structure merge where both have dead pointers. An example of such a situation may be seen in Figure 4, where object 5 and object 7 both have dead pointers. If objects 5 and 7 were to merge, as would happen if there were a break in the line at point 'b', then their substructures would need to be merged within the dead structure (i.e. object 9 would be given a tail pointer to object 10). The instructions take the form of two pointers, both referring to already unloaded objects; the action is then for the device building the final list to find the null at the end of the string of pointers indicated by one of the appended objects and replace it with a pointer to the other structure.
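As a sketch of that append action (a hypothetical helper, not the patent's own code), the builder of the final list walks one appended object's chain to its terminating null and replaces the null with a pointer to the other structure:

from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class DeadEntry:
    tail: Optional[int] = None    # pointer to the next unloaded object in the chain

def append_structures(dead: Dict[int, DeadEntry], a: int, b: int) -> None:
    """Merge two already-unloaded substructures when their live parents join."""
    node = a
    while dead[node].tail is not None:
        node = dead[node].tail    # find the null at the end of a's chain
    dead[node].tail = b           # e.g. object 9 is given a tail pointer to object 10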
The application of the system: an edge scene is scanned in a raster manner, i.e. line by line, left to right. The object representation can then be built up by decoding a mask formed from some of the current and immediately preceding line of edge scene bits. A typical mask 10 is shown in Figure 5. It is sufficiently large to cover 5 pixels of the scene at any one time. Each section 11 to 15 of the mask may contain either an edge pixel or an interior (non-edge) pixel. If an edge is detected the section (11 to 15) contains a 1, and if an interior pixel is detected the section contains a 0. By examining the mask content the type of object (if there is one) can be determined, in other words whether it is a new (N), old (O) or link (L) object. The defined system allocates an object number to each pixel as it is scanned. Should it be found that two objects are in fact joined, a signal is generated and the relevant object numbers indicated. This allows an external system to merge the data for those objects.
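The exact mask layout and decode table are defined by Figure 5; purely as an illustration (the bit layout and the table entries below are assumptions, not taken from the patent), the mask value can be assembled from the line-delayed and current edge bits and looked up to give the tracer its instruction:

def mask_value(prev_line, curr_line, x):
    """Assemble an illustrative 5-bit mask around position x.

    Assumes three bits from the previous (line-delayed) line and two from
    the current line; valid for 1 <= x < width - 1, which holds because
    border pixels are guaranteed to be non-edge."""
    bits = (prev_line[x - 1], prev_line[x], prev_line[x + 1],
            curr_line[x - 1], curr_line[x])
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

# Hypothetical decode entries: new object (N), old object (O), link/merge (L).
DECODE = {0b00001: "N", 0b00011: "O", 0b01001: "L"}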
When an object is completed or surrounded the defined system detects this, indicates to an external system the object number of the completed object, and also assigns two pointer values, one for the substructure located within the completed object and the other pointing to the last object completed at that level.
If a merge occurs between objects which already have completed sub-objects, it is indicated by giving the heads of the sub-object lists to be merged, so that an external system may keep track of the list structure.
When an object is completed it will either be unloaded or merged with its surroundings. The basis for this is object qualification, which may be applied at the discretion of the user. One form is interior/edge merging which, if in force, means that all interior objects are merged on completion with their surrounding edge, so that the only objects unloaded are filled-in edge objects. The other form is a bit signal which is loaded in parallel with the edge data from the scene and logically ORed with a status flag for the current object. If this qualification is in operation then only objects having this flag set at completion will be unloaded. This enables seed-point qualification of objects.
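A minimal sketch of this qualification decision (the attribute names are assumptions made for illustration):

def should_unload(obj, interior_edge_merging: bool, seed_qualification: bool) -> bool:
    """Decide whether a completed object is unloaded or merged away.

    'obj.qualified' is taken to be the OR of the qualifying bits of every
    pixel the object touched during the scan; 'obj.is_edge' distinguishes
    edge from interior objects. Both names are assumptions for this sketch."""
    if interior_edge_merging and not obj.is_edge:
        return False    # interior object: merged with its surrounding edge
    if seed_qualification and not obj.qualified:
        return False    # no seed point touched this object; not unloaded
    return True         # unload to the external system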
When an object is unloaded it is desirable that a pointer to it is preserved so that the total scene structure can be built up externally. To this end a third pointer, the dead pointer, is maintained internally on each object; it points to the last substructure object to be unloaded, that is, the last unloaded object for which that object was the parent. Since the parent object is always known (from an internal stack) this pointer is easily accessed as objects are unloaded, to provide the link information.
When an object is completed it is passed into an unloading queue. As it is completed it is assigned a number in the final object number system. The set of data items associated with an object is therefore:
    • collator object number
    • final object number
    • pointer to substructure
    • pointer to previously unloaded object
    • level at unload time
    • internal/external bit
The pointer to substructure (all of which will have been unloaded before the object completes) is the contents of the object's last-unloaded-object pointer. The pointer to a previously unloaded object is the object's parent's last-unloaded-object pointer. The level at unload time is the stack pointer value.
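Collected into one record (a sketch with assumed field names, since the patent lists these items only in prose):

from dataclasses import dataclass
from typing import Optional

@dataclass
class UnloadRecord:
    collator_object_number: int          # reusable address within the collators
    final_object_number: int             # arbitrary permanent number given at unload
    substructure: Optional[int]          # the object's own last-unloaded-object pointer
    previously_unloaded: Optional[int]   # the parent's last-unloaded-object pointer
    level_at_unload: int                 # stack pointer value at unload time
    is_internal: bool                    # internal/external bit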
The internal/external bit is only of interest if internal/external merging is not used. If internal/external merging is operational then completed internal objects are merged with their surrounding external object and not separately unloaded.

Claims (1)

1. A method of identifying and labelling objects making up an image, the method comprising: forming a series of digital signals indicative of whether or not respective pixels of said image represent object edge points; taking said signals in sequence and evaluating each in conjunction with others of the signals to determine whether the associated pixel belongs to a newly detected object or an object which has already been detected by reference to preceding pixels belonging thereto; assigning label values to said pixels in dependence upon which object they belong to, the assignment differentiating between different objects made up of interconnected edge point pixels and also differentiating those objects from objects made up of interconnected not-edge point pixels; and calculating and storing information about said objects as a plurality of lists of data items, each list comprising data about all objects which are contained within the same number of other objects, and each data item being associated with a particular object and comprising two pointer values, one indicative of the label assigned to the next object determined to be contained within the object associated with the data item and the other indicative of the label assigned to the next object associated with a data item in the same list.
GB8919350A 1988-08-26 1989-08-25 Image analysis Withdrawn GB2227346A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB8820279 1988-08-26

Publications (2)

Publication Number Publication Date
GB8919350D0 (en) 1990-04-25
GB2227346A (en) 1990-07-25

Family

ID=10642752

Family Applications (1)

Application Number Title Priority Date Filing Date
GB8919350A Withdrawn GB2227346A (en) 1988-08-26 1989-08-25 Image analysis

Country Status (1)

Country Link
GB (1) GB2227346A (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1333439A (en) * 1969-12-30 1973-10-10 Texas Instruments Inc Expanded search method and system in training processors

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994001528A1 (en) * 1992-07-13 1994-01-20 Minnesota Mining And Manufacturing Company A technique to count objects in a scanned image
US5403722A (en) * 1992-07-13 1995-04-04 Minnesota Mining And Manufacturing Company Technique to count objects in a scanned image
US5510246A (en) * 1993-05-14 1996-04-23 Minnesota Mining And Manufacturing Company Method for rapid quantification of microorganism growth
US5744322A (en) * 1993-12-17 1998-04-28 Minnesota Mining And Manufacturing Company Automated incubating and imaging system for a disposable microorganism culturing device and method of use
US5694478A (en) * 1994-12-15 1997-12-02 Minnesota Mining And Manufacturing Company Method and apparatus for detecting and identifying microbial colonies
US9834748B2 (en) 2007-07-09 2017-12-05 3M Innovative Properties Company Modular system and method for detecting microorganisms
US10190089B2 (en) 2007-07-09 2019-01-29 3M Innovative Properties Company Modular system and method for detecting microorganisms

Also Published As

Publication number Publication date
GB8919350D0 (en) 1990-04-25

Similar Documents

Publication Publication Date Title
EP0924653B1 (en) Blending graphical objects using planar maps
US3987412A (en) Method and apparatus for image data compression utilizing boundary following of the exterior and interior borders of objects
Li et al. Highly efficient forward and backward propagation of convolutional neural networks for pixelwise classification
US4622545A (en) Method and apparatus for image compression and manipulation
CA1187184A (en) Method for automatic recognition of white blocks as well as text, graphics and/or gray image areas on a printed master
US4777651A (en) Method of pixel to vector conversion in an automatic picture coding system
US5970170A (en) Character recognition system indentification of scanned and real time handwritten characters
US8577143B2 (en) Label reuse method and connected component labeling
US8577144B2 (en) Connected component labeling system and method
US20060001681A1 (en) Method of rendering graphic objects
US11657306B2 (en) Form structure extraction by predicting associations
US8811731B2 (en) Modified propagated last labeling system and method for connected components
US8111919B2 (en) Feature encoding system and method for connected component labeling
US9846825B2 (en) Method, apparatus and system for generating an intermediate region-based representation of a document
GB2227346A (en) Image analysis
US7570811B2 (en) Segmenting an image via a graph
JP2004288158A (en) Division of image by shortest cycle
US20040120593A1 (en) System and method for flattening spans
KR20040028945A (en) Color image processing method, and computer readable recording medium having program to perform the method
Blake Partitioning graph matching with constraints
US20050238235A1 (en) Run length based connected components and contour following for enhancing the performance of circled region extraction algorithm
JPH08167028A (en) Image processing method
CN100543766C (en) Image segmentation methods, compact representation production method, image analysis method and device
Webb Architecture-independent global image processing
Chung et al. Finding neighbors on bincode-based images in O (n log log n) time

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)