WO2012089262A1 - Method and apparatus for use in forming an image - Google Patents

Method and apparatus for use in forming an image

Info

Publication number
WO2012089262A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
attribute
value
image
sets
Application number
PCT/EP2010/070896
Other languages
French (fr)
Inventor
Radoslaw Chmielewski
Wojciech Tomasz Nowak
Original Assignee
Tele Atlas Polska Sp.Z.O.O
Application filed by Tele Atlas Polska Sp.Z.O.O
Priority to PCT/EP2010/070896
Publication of WO2012089262A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 10/7515 Shifting the patterns to accommodate for positional errors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Navigation (AREA)

Abstract

A method for use in forming an image of a scene comprising at least one object (134), comprises acquiring a plurality of sets of image data each representative of at least part of the scene; determining a value of an attribute for the object or part of the object (140) from each of the plurality of sets of image data; and setting a value of an attribute of the object or part of the object (140) according to the determined values of the attribute of the object or the part of the object (140) obtained from the plurality of sets of image data.

Description

METHOD AND APPARATUS FOR USE IN FORMING AN IMAGE
FIELD
The present invention relates to a method and apparatus for use in forming an image of an object and, in particular, though not exclusively, for use in providing a realistic image of a road surface for use in a digital map.
BACKGROUND
Known satellite navigation devices may display an image to a user according to a position of the user as determined by at least one of a satellite navigation system receiver such as a Global Positioning System (GPS) or a Galileo receiver, a Distance Measurement Instrument (DMI) and an Inertial Measurement Unit (IMU). The image displayed to the user is generally selected from a map database according to the determined position of the user. The map database stores electronic maps and associated data or images, and is generally stored locally within the satellite navigation device but may be stored remotely from the satellite navigation device. The images may comprise representations of real-world scenes.
The use of representations of real-world scenes is not limited to navigation devices or electronic maps, but can be applicable to a wide range of different devices or applications that require the display of a representation of a particular location.
The image displayed to the user may be artificially generated and may have an appearance which does not closely resemble the environment around the satellite navigation device or other location which it represents. A known solution to this problem is to capture images of the environment and store the captured images in a map database. The images of the environment around the satellite navigation device are selected from the map database according to the determined position of the user and are displayed to the user. The images may be subject to additional processing or rendering processes before storage or display. One problem with such known methods is that capturing images of the environment can be time-consuming and costly, particularly if the capture of any images has to be repeated at a later time. Furthermore, storing the captured images may require the storage of large quantities of data.
In addition, the captured images often have noise and/or unwanted obstructions such as vehicles, pedestrians and the like which may be distracting when displayed to the user of the satellite navigation device. Furthermore, if images of different parts of a route are captured at different times or under different conditions, the images ultimately displayed to a user can vary significantly along different parts of the route which can be distracting and unrealistic.
Noise and/or unwanted obstructions can be particularly distracting if they are present on a plain surface, such as a road surface. For example, in some known methods a road surface image for display is generated from a captured image of the road surface obtained at a particular time. If another vehicle or object is present on the road surface at that time, then the resulting road surface image, displayed for example to a user at a later time, will include all or part of the vehicle or object, which can be particularly distracting and unrealistic for the user. Such extraneous objects can be removed by an operator; however, that usually requires a manual or semi-manual process which can be time-consuming and inefficient.
SUMMARY
In a first, independent aspect of the invention there is provided a method for use in forming an image of a scene comprising at least one object, the method comprising:- acquiring a plurality of sets of image data each representative of at least part of the scene; determining a value of an attribute for the object or part of the object from each of the plurality of sets of image data; and setting a value of an attribute of the object or part of the object according to the determined values of the attribute of the object or the part of the object obtained from the plurality of sets of image data.
By setting image attributes based on a plurality of sets of image data, images of extraneous objects temporarily present in the scene may be eliminated or made substantially indistinguishable, for example substantially indistinguishable to a user.
Each set of image data may be representative of an image of the scene. The method may comprise storing the attributes as digital map data, or associating the attributes with digital map data. Said at least part of the object may be represented by a single pixel, for example.
The object may comprise a substantially plain surface. The object may comprise the whole or part of a surface for vehicular or other traffic, for example a road or pavement.
Each set of image data may be obtained from a different perspective. Thus, extraneous objects present in one of the sets of image data may not be present, or may be present in a different position, in at least one other of the sets of image data. Alternatively or additionally, each set of image data may be obtained at a different time and/or under different conditions.
The method may further comprise determining a median value of the attribute from each of the plurality of sets of image data, and setting the value of the attribute of the image of the object or the part of the object according to the median value.
Alternatively or additionally the method may comprise determining any suitable statistical measure, for example the mode or mean, of the attribute from each of the plurality of sets of image data, and setting the value of the attribute of the image of the object or the part of the object according to the statistical measure.
The attribute may comprise colour. The attribute may comprise at least one of an R value, a G value and a B value. The attribute may comprise each of an R value, a G value and a B value. The R value may be determined from R values obtained from the plurality of sets of image data. The G value may be determined from G values obtained from the plurality of sets of image data. The B value may be determined from B values obtained from the plurality of sets of image data. The R value may, for example, be the value of a red channel, the G value the value of a green channel, and the B value the value of a blue channel.
The attribute may comprise at least one of brightness, lightness, intensity, grayscale intensity, saturation, or contrast. The attribute may comprise a plurality of attributes.
The method may further comprise determining an image of the object and/or scene from the plurality of sets of image data. The image may comprise a plurality of points or pixels. The setting of the value of the attribute may comprise determining a value of an attribute of each point or pixel of the image of the object based on the determined values of the attribute for a corresponding at least one point or pixel obtained from each of the sets of image data. The determining of the image may comprise generating or otherwise determining image data to represent the image or the object and/or scene. The method may comprise storing the image data for example as, or associated with, digital map data.
The method may comprise, for each of a plurality of positions on the object, identifying at least one point or pixel from each of the sets of image data that represents that position.
Each set of image data may be representative of a substantially panoramic image. Each set of image data may be representative of an image having a field of view greater than at least one of 120 degrees, 180 degrees and 240 degrees, optionally substantially equal to 360 degrees. Each set of image data may be representative of an image having a field of view of at least one of:- between 120 degrees and 360 degrees; between 180 degrees and 360 degrees; or between 240 degrees and 360 degrees.
Each set of image data may be captured by a mapping vehicle, for example a mapping vehicle travelling along a road.
The method may comprise capturing each of the sets of image data from a different position, for example a different position along a road. The method may comprise determining the position from which each of the sets of image data was obtained using a satellite-based position determining system, for example a GPS system.
In a further independent aspect of the invention there is provided an apparatus for use in forming an image of a scene comprising at least one object, the apparatus comprising means for acquiring a plurality of sets of image data each representative of at least part of the scene; means for determining a value of an attribute for the object or part of the object from each of the plurality of sets of image data; and means for setting a value of an attribute of the object or part of the object according to the determined values of the attribute of the object or the part of the object obtained from the plurality of sets of image data.
In another independent aspect of the invention there is provided an apparatus for use in forming an image of a scene comprising at least one object, the apparatus comprising a processing resource that is configured to:- acquire a plurality of sets of image data each representative of at least part of the scene; determine a value of an attribute for the object or part of the object from each of the plurality of sets of image data; and set a value of an attribute of the object or part of the object according to the determined values of the attribute of the object or the part of the object obtained from the plurality of sets of image data.
The apparatus may comprise means for obtaining each set of image data from a different perspective. The apparatus may comprise means for determining a median value of the attribute from each of the plurality of sets of image data, and setting the value of the attribute of the image of the object or the part of the object according to the median value.
The apparatus may comprise means for determining an image of the object comprising a plurality of points or pixels. The means for setting the value of the attribute may be configured to determine a value of an attribute of each point or pixel of the image of the object based on the determined values of the attribute for a corresponding point or pixel obtained from each of the sets of image data.
The apparatus may be installed, or installable, in a mapping vehicle. The mapping vehicle may comprise a satellite-based positioning system, for example a GPS or Galileo system.
In a further independent aspect of the invention there is provided a computer program product that comprises computer-readable code that is executable to perform at least one aspect or feature of any method as claimed or described herein.
Features of any independent or optional aspect of the invention may be applied to any other independent or optional aspect of the invention, in any suitable combination. For example, method features may be applied to apparatus features and vice versa.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will now be described by way of non-limiting example only with reference to the drawings of which:
Figure 1 is a schematic of a mobile mapping vehicle;
Figure 2 is a schematic illustration of the mobile mapping vehicle of Figure 1 in use;
Figure 3 is a flow chart illustrating a method constituting an embodiment of the present invention;
Figure 4a is a schematic representation of a path followed by a mapping vehicle, with the positions at which panoramic images were obtained being indicated;
Figure 4b is a schematic representation of the position of a road surface determined from measurement data captured by the mapping vehicle of Figure 4a;
Figures 5a and 5b are representations of the road surface of Figure 4b;
Figure 6 is a schematic representation, in three dimensions, of the path of a mapping vehicle on a road surface;
Figure 7 is a panoramic image captured by the mapping vehicle;
Figures 8a to 8d are different images of the same part of a scene;
Figure 9a is an image of a road surface generated from a single panoramic image, in which extraneous objects such as vehicles are visible;
Figure 9b is an image of the road surface of Figure 9a generated according to an embodiment from a plurality of panoramic images, in which the extraneous objects are no longer visible; and
Figure 10 is a schematic representation of a user navigation system.
DETAILED DESCRIPTION OF THE DRAWINGS
Referring initially to Figure 1 there is provided a survey or mobile mapping system generally designated 2. The mobile mapping system 2 comprises a survey vehicle 4 and an imaging device 40 mounted on the roof 8 of the survey vehicle 4, optionally together with a laser scanner 6 or other sensor devices. The imaging device 40 comprises a plurality of cameras configured to capture images of a location. The cameras are displaced (for example, circumferentially displaced) around the imaging device 40, which allows images to be captured in different directions at particular locations. An example of such an imaging device 40 is sold under the trade name LadyBug®2, provided by Point Grey, 12051 Riverside Way, Richmond, British Columbia, V6W 1K7, Canada. Generally, when the imaging device 40 is fitted to the vehicle 4, the height, h, of the imaging device 40 above the road surface can be determined or approximated. An exemplary height is approximately 3 metres, although in different examples the height h may differ, for example owing to the use of a different sized vehicle 4. Embodiments are not limited to any particular type of imaging device and, although use of a panoramic camera may be advantageous, a non-panoramic camera or cameras may be used in some embodiments.
The survey vehicle 4 further comprises a processor 10, a memory 12 and a transceiver 14. In addition, the survey vehicle 4 comprises an absolute positioning device 20 having a GPS or a Galileo satellite navigation receiver and a relative positioning device 22 having an Inertial Measurement Unit (IMU) and a Distance Measurement Instrument (DMI). The absolute positioning device 20 may provide global co-ordinates of the vehicle, and the relative positioning device 22 may serve to enhance the accuracy of the global co-ordinates measured by the absolute positioning device 20. As indicated by the dotted lines 24, the laser scanner 6, the memory 12, the transceiver 14, the absolute positioning device 20 and the relative positioning device 22 are all configured for communication with the processor 10.
In use, as shown in Figure 2, the survey vehicle 4 travels along a road 30 comprising a surface 32, which may also have road markings 34 painted thereon. Typically, the surface 32 may be formed of asphalt, tarmac or the like. Thus, in areas 36 outside the road markings 34, the surface 32 has a relatively dark, relatively rough texture. The road markings 34 are typically formed by painting the surface 32 white or yellow so as to provide areas having a relatively light, relatively smooth texture, giving a contrast in appearance with the other areas of the surface 32.
As the survey vehicle 4 travels along the road 30, the imaging device 40 repeatedly captures images of the surrounding scene, including the road surface 32, to provide a plurality of images at different locations. The images from each of the cameras can be combined to provide a single panoramic image for each location, for example having a field of view substantially equal to 360 degrees. The processor 10 time-stamps each panoramic image (or each image that can be combined to make up the panoramic image) and stores the image in the memory 12 as a set of image data for post-processing.
The processor 10 also determines the position and the orientation of the vehicle 4 at any instant of time from position and orientation data measured using the absolute positioning device 20 and the relative positioning device 22. The processor 10 time-stamps the position and the orientation of the vehicle 4 and stores them in the memory 12 for post-processing.
Figure 3 illustrates a method of post-processing the measured data stored in the memory 12. The processor 10 performs all of the steps 100 to 112 shown in Figure 3, for example according to instructions provided to the processor 10.
At step 100 the plurality of sets of image data are acquired, for example by being read from the memory 12. At step 102 the processor determines the location of an object that is to be the subject of the processing, in this case a road surface. The location of the road surface can be determined manually or automatically. One method of determining the location of a road surface is described with reference to Figures 4a and 4b, and comprises determining the road surface geometry automatically based upon GPS position data obtained by the mapping vehicle.
Figure 4a shows a schematic, top-down representation of a track 120 or sequence of GPS positions determined by the mapping vehicle 4. The points where panoramic images have been captured by the imaging device 40 are indicated by dots 122. For each position 122 a 360 degree panoramic image has been captured.
The position of the road surface is determined by taking the position (for example, the longitude, latitude and altitude) of each GPS point and adding a selected width (also referred to as a side buffer) 130a, 130b; 132a, 132b at each side of the GPS point to determine the position of the edges of the road surface, as shown schematically in Figure 4b. The edge points are then joined to define the road surface 134. In the embodiment of Figure 4b the same predetermined widths are added to each GPS point, but the widths can vary in other embodiments, for example based on other, complementary measurements.
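By way of illustration only, the edge points might be derived as in the following Python sketch. It assumes that the GPS positions have already been projected into a local planar frame in metres and that a constant side buffer is applied throughout; the function name and the 4 m half-width are illustrative assumptions, not values taken from the application.

```python
import math

def road_edges(track, half_width=4.0):
    """Offset each track point perpendicular to the local direction of
    travel to estimate the left and right road edges.

    track: list of (x, y) positions in a local metric frame.
    half_width: side buffer, in metres, added at each side of the track.
    Returns (left_edge, right_edge) as lists of (x, y) points.
    """
    left, right = [], []
    for i, (x, y) in enumerate(track):
        # Direction of travel, estimated from the neighbouring points.
        x0, y0 = track[max(i - 1, 0)]
        x1, y1 = track[min(i + 1, len(track) - 1)]
        heading = math.atan2(y1 - y0, x1 - x0)
        # Unit normal, perpendicular to the direction of travel.
        nx, ny = -math.sin(heading), math.cos(heading)
        left.append((x + half_width * nx, y + half_width * ny))
        right.append((x - half_width * nx, y - half_width * ny))
    return left, right

if __name__ == "__main__":
    track = [(0.0, 0.0), (10.0, 1.0), (20.0, 3.0), (30.0, 6.0)]
    left, right = road_edges(track)
    # Joining one edge to the reversed other edge yields a closed
    # polygon defining the road surface.
    polygon = left + right[::-1]
    print(polygon)
```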
An example of a resulting road surface 134 is shown schematically in Figures 5a and 5b, filled with diagonal lines for clarity in Figure 5a, and coloured in Figure 5b by way of illustration.
The processor 10 next generates image data to represent the road surface, the image data comprising a plurality of pixels that represent the road surface, in accordance with stages 104 to 106.
At stage 104 the processor 10 initiates a loop counter. For each point of the road surface, the processor 10 then assigns, at stage 106, an absolute position (for example, a longitude, latitude and altitude) to a pixel representing that point.
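A minimal sketch of the position-assignment step follows, under the simplifying assumption that the longitude, latitude and altitude co-ordinates have been projected into a local planar metric frame and that the output image covers an axis-aligned patch of the road surface at a fixed ground resolution; the names, such as metres_per_pixel, are illustrative.

```python
def pixel_positions(x_min, y_min, width_px, height_px, metres_per_pixel=0.05):
    """Assign a world position to each pixel of the output road-surface
    image: yields (row, col, x, y), where (x, y) is the centre of the
    ground cell that the pixel represents."""
    for row in range(height_px):
        for col in range(width_px):
            x = x_min + (col + 0.5) * metres_per_pixel
            y = y_min + (row + 0.5) * metres_per_pixel
            yield row, col, x, y

# Example: a 2 x 2 pixel patch anchored at the local origin.
for row, col, x, y in pixel_positions(0.0, 0.0, 2, 2):
    print(row, col, x, y)
```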
At stage 108, the processor 10 selects, from each set of image data taken from different perspectives, attribute value(s) for the position in question. The process then determines an attribute value for the pixel representing the point in question from the attribute values obtained from the different sets of image data. In the embodiment of Figure 1, the attribute that is determined is the colour assigned to the pixel, and the colour value that is assigned to the pixel is the median colour value for that position obtained from the different sets of image data measured from different perspectives. By setting the colour value of the pixel as the median colour value obtained for that position in images taken from different perspectives, the effect of any extraneous objects (for example, vehicles or people) present in any of the images can be greatly reduced or eliminated. That will be discussed further below, but first the process of determining the attribute value for each pixel is described in more detail with reference to Figures 6 and 7.
Figure 6 shows, in three-dimensional space, four successive positions 142a, 142b, 142c, 142d of the imaging device 40 at which panoramic images of a scene were obtained. The scene includes the three-dimensional road surface 134. A position 140 whose colour is being determined is also shown.
For each panoramic image, each pixel of the image is mapped to an absolute location which the pixel represents. Any suitable method for mapping each pixel of the panoramic image to an absolute location can be used. For example, in the embodiment of Figure 1, a virtual sphere is defined around each imaging device location for which a panoramic image was obtained. Each pixel of the panoramic image is mapped to a point on the virtual sphere. Figure 7 shows a panoramic image and, by way of example, shows four points 160, 162, 164, 166 that have been mapped to different points on the virtual sphere. In this case, point 160 has been mapped to a point on the sphere at a horizontal angle of 0 degrees and a vertical angle of 90 degrees; point 162 has been mapped to a point on the sphere at a horizontal angle of 360 degrees and a vertical angle of 90 degrees; point 164 has been mapped to a point on the sphere at a horizontal angle of 0 degrees and a vertical angle of -90 degrees; and point 166 has been mapped to a point on the sphere at a horizontal angle of 360 degrees and a vertical angle of -90 degrees.
In order to determine which pixel of the panoramic image corresponds to the absolute position assigned to the point 140 of the road surface that is under consideration, a ray 146a is taken as passing from the point 140 to the position 142a of the imaging device 40 at which the panoramic image of the scene was obtained. The horizontal and vertical angles of the ray in three-dimensional space are calculated. Having the vertical and horizontal angles, the point 148a at which the ray intersects the virtual sphere 144a can be determined. The pixel of the panoramic image that was mapped to that point 148a on the virtual sphere can then be selected as being the pixel that represents the image of the road surface at the point 140.
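The geometry of that selection can be sketched as follows. This is a minimal illustration only: it assumes an equirectangular panorama laid out as in Figure 7 (horizontal angle 0 to 360 degrees across the width, vertical angle +90 down to -90 degrees over the height), positions expressed in a shared local metric frame, and an imaging device with no roll or pitch; the function name and example values are illustrative assumptions.

```python
import math

def panorama_pixel(point, camera, image_w, image_h):
    """Find the pixel of an equirectangular panorama that images a given
    ground point.

    point, camera: (x, y, z) positions in a shared local metric frame,
    with `camera` the centre of the virtual sphere (positions 142a-142d).
    Returns (col, row) pixel co-ordinates in the W x H panorama.
    """
    # Direction in which the ray from the ground point to the imaging
    # device crosses the virtual sphere, as seen from the sphere centre.
    dx = point[0] - camera[0]
    dy = point[1] - camera[1]
    dz = point[2] - camera[2]
    # Horizontal angle (azimuth), wrapped into [0, 360) degrees.
    h_deg = math.degrees(math.atan2(dy, dx)) % 360.0
    # Vertical angle (elevation); negative for points below the device.
    v_deg = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    # Map the angles onto the panorama: 0..360 degrees across the width,
    # +90..-90 degrees down the height (as in Figure 7).
    col = round(h_deg / 360.0 * (image_w - 1))
    row = round((90.0 - v_deg) / 180.0 * (image_h - 1))
    return col, row

# A point 10 m ahead of a device mounted 3 m above the road maps to a
# pixel in the lower half of the panorama:
print(panorama_pixel((10.0, 0.0, 0.0), (0.0, 0.0, 3.0), 2048, 1024))
```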
The process is repeated for each of several sets of panoramic image data taken from successive imaging device positions 142a, 142b, 142c, 142d. Thus, in this example, four different pixels are extracted from the different sets of panoramic image data, each representing the same point 140 on the road surface but obtained from different perspectives. The number of different pixels or other measurement signals from different perspectives used to represent a single point varies in different embodiments or different modes of operation, and may depend, for example, on the rate of acquisition of panoramic image data and the acceptable resolution for a particular application. For example, in some embodiments between two and twenty sets of panoramic image data are used for each point, although usually four or five sets of panoramic image data are used in the embodiment of Figure 1.
Stages 104 to 108 are repeated for each point of the road surface 134, so that for each point four pixels (in this case) obtained from different perspectives are extracted.
Having several different pixels for each position on the road surface, it is possible to filter or otherwise process them automatically to determine attributes of a pixel used to represent that position in a way that effectively removes, or renders indistinguishable to the user's eye, extraneous objects such as cars or people from a resulting image of the road surface. In the embodiment of Figure 1 , the red (R), green (G) and blue (B) values obtained from the pixels from the multiple sets of panoramic image data that represent the same point of the road surface are used to set R, G, and B values of a pixel used to represent that point. It has been found to be particularly useful to set the R value to be the median of the R values obtained for the selected pixels of the multiple sets of panoramic image data. Similarly the G value (or B value) can be set to be the median of the G values (or B values) obtained for the selected pixels of the multiple sets of panoramic image data.
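A minimal sketch of this per-channel median combination is given below; the sample values are hypothetical (they are not the values of Table 1) and are chosen so that one observation is contaminated by a passing vehicle.

```python
from statistics import median

def combine_samples(samples):
    """Combine RGB samples of the same road-surface point taken from
    different perspectives, using the per-channel median.

    samples: list of (R, G, B) tuples, one per panoramic image.
    Returns the (R, G, B) value used for the output pixel.
    """
    reds, greens, blues = zip(*samples)
    return (median(reds), median(greens), median(blues))

# Three hypothetical observations of the same point; the second is
# dominated by a red vehicle, yet the result stays road-like:
print(combine_samples([(118, 115, 110), (201, 30, 25), (122, 119, 116)]))
# -> (122, 115, 110)
```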
Table 1 shows the R, G and B values obtained for pixels representing the same point on a road surface, obtained from three different panoramic images each captured from a different perspective. The resulting R, G and B values used to represent the point, obtained by taking the median values from the pixel signals of the panoramic images, are also shown. It can be understood that a new colour may be produced which, particularly in the case of a road surface, tends to be a grey-scale colour.
[Table 1, showing the R, G and B pixel values from three panoramic images and the resulting median values, is reproduced as an image (imgf000012_0001) in the published application and is not available in this text.]
As has already been mentioned, vehicles, people or other mobile or otherwise extraneous objects may be present in the images captured by the mapping vehicle. Figures 8a to 8d show images of the same section of road captured by a mapping vehicle at different times and from different perspectives. The images shown in Figures 8a to 8d are mapped to common absolute position co-ordinates. Using any one of the images of Figures 8a to 8d alone to produce a representation of the road surface would result in the inclusion of distracting extraneous objects, in this case other vehicles, in the representation. However, taking, for each point on the road surface, the median values of the R, G and B signals obtained from the different images results in the vehicles or other extraneous objects effectively disappearing from, or becoming indistinguishable within, the representation.

Figures 9a and 9b show representations of a road surface 180. In Figure 9a the representation has been produced automatically from a single panoramic image obtained using a mapping vehicle, and it can be seen that there is a poor quality texture, with various extraneous vehicles or parts of such vehicles being included in the representation. If the representation were included in a digital map, for example for use in a navigation device, then the presence of the vehicles would be distracting for the user.
In Figure 9b, the representation has been produced automatically using the process of Figure 3, in which median R, G and B values for each point, taken from several panoramic images, are used. It can be seen that the extraneous vehicles are no longer present, while road markings remain visible and the representation of the road surface 180 has a realistic texture. The representation of Figure 9b can be obtained from images captured by a mapping vehicle during a single pass along the section of road.
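As an illustrative sketch only, and assuming the several images have already been resampled to a common pixel grid as in Figures 8a to 8d, compositing of the kind that produces a result such as Figure 9b can be expressed compactly in Python with NumPy:

import numpy as np

def composite_road_surface(images):
    # images: list of co-registered H x W x 3 uint8 arrays of the same
    # road section, mapped to common absolute position co-ordinates.
    # Returns a composite in which transient objects are suppressed by
    # taking the per-pixel, per-channel median across the images.
    stack = np.stack(images, axis=0)  # shape: (N, H, W, 3)
    return np.median(stack, axis=0).astype(np.uint8)

A pixel covered by a passing vehicle in only one or two of the N images takes its median value from the unobstructed views, which is why the vehicles effectively disappear from the composite.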
Having generated an enhanced image of a road section as described above, data associated with such an enhanced image may be stored in a map database. For example, the enhanced image data may be stored in a map database and transferred to a navigation system, or the enhanced image data may be stored in a map database of a navigation system such as a satellite navigation system. Figure 10 shows a navigation system, generally designated 202, having such a map database stored in a memory 212. Such a map database may provide navigational assistance to a user of a vehicle 204 in which the satellite navigation system 202 is located or to which the satellite navigation system 202 is attached. The satellite navigation system 202 of Figure 10 shares many features with the mobile mapping system 2 of Figure 1. Accordingly, like features in Figure 10 have the same reference numerals as corresponding features in Figure 1 incremented by "200". For example, the satellite navigation system 202 comprises absolute and relative positioning sensors 220 and 206 respectively. The satellite navigation system 202 also comprises object detection sensors 206 which may take the form of laser rangefinder scanners. The memory 212 contains the map database comprising enhanced image data for a plurality of road surface sections, where the image data for each road surface section is generated according to the method described with reference to Figure 3. In addition, the satellite navigation system 202 comprises a display 250 for displaying one or more of the enhanced images stored in the memory 212 according to an absolute position of the vehicle 204 as determined by the absolute and relative positioning sensors 220, 206.

In the embodiment described in relation to Figure 3, median values of the R, G and B values obtained for the same point in different images were used. The use of a median value can be particularly useful in eliminating or reducing the effect of extraneous objects in a computationally simple fashion. However, in alternative embodiments any suitable filtering or processing of the multiple pixel signals representative of the same point can be used. For example, mean or mode values can be used instead of median values. High pass, low pass or other filters can be used, or outlying values can be eliminated in a pre-processing procedure. A fitting process can be used if desired to fit image parameters obtained from the different sets of image data.
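By way of illustration, the outlier-elimination pre-processing mentioned above can be approximated by a trimmed mean, which discards a proportion of the most extreme values per pixel and channel before averaging. This sketch assumes the SciPy library and the same co-registered image stack as in the earlier sketch; the 0.2 trimming proportion is an assumption chosen for illustration:

import numpy as np
from scipy import stats

def trimmed_mean_composite(images, proportion=0.2):
    # Alternative to the median composite: at each pixel and channel,
    # discard the most extreme `proportion` of values at each end of the
    # sorted stack, then average what remains.
    stack = np.stack(images, axis=0).astype(np.float64)
    return stats.trim_mean(stack, proportion, axis=0).astype(np.uint8)

With proportion=0.0 this reduces to the plain mean mentioned above; larger proportions behave increasingly like the median.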
In the embodiment of Figure 1, the colour of a particular point of a road surface is determined from R, G and B values determined for corresponding points in multiple images of the road surface taken from different perspectives. In alternative embodiments, other image attributes, as well as or instead of colour, are determined from the multiple images, for example any one or more of brightness, lightness, intensity, grayscale intensity, saturation, or contrast.
In the embodiment of Figure 1, panoramic images from different perspectives are used. An advantage of using panoramic images is that the same point of a scene may be present in a larger number of different images than if conventional, narrower-angle images were used. For example, a point may be present in panoramic images obtained by a mapping vehicle both on approach to, and after having passed by, the point. However, in alternative embodiments non-panoramic images can be used if desired.
In the embodiments described above, a plurality of images obtained from different perspectives is used. That can be particularly useful in the context of a mapping vehicle used to image sections of road, as a particular section of road can, for example, be represented using multiple sets of image data obtained from a single pass of the mapping vehicle. Nevertheless, in alternative embodiments, sets of image data obtained from the same or similar perspective at different times can be used.
The method has been found to be particularly useful for determining attributes of substantially plain surfaces, for example roads or pavements, but is not limited to determining attributes of such surfaces.
In alternative embodiments, rather than determining an attribute value of each pixel of an image of a section of a road surface separately, the processor 10 may determine one or more different attributes of a portion of the image of the section of the road surface. Such a portion may, for example, comprise a plurality of pixels.
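Purely as a hypothetical illustration of such portion-level processing, the following sketch assigns each portion of an image a single colour, namely the per-channel median of the pixels it contains; the 8 x 8 pixel block size is an assumption chosen for illustration:

import numpy as np

def blockwise_median(image, block=8):
    # Assign each `block` x `block` portion of the image a single colour:
    # the per-channel median of the pixels within that portion.
    # Assumes the image height and width are multiples of `block`.
    h, w, c = image.shape
    out = image.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = image[y:y + block, x:x + block].reshape(-1, c)
            out[y:y + block, x:x + block] = np.median(patch, axis=0)
    return out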
Rather than storing and post-processing image or other measurement data, the processor 10 may process the measurement data in real-time and store only processed data in the memory 12.
Rather than being located in the survey vehicle 4, the processor 10 may be located remotely from the survey vehicle 4. In such an embodiment, the transceiver 14 may transmit unprocessed or partially processed data to such a remote processor for processing.
Alternative embodiments of the invention can be implemented as a computer program product for use with a computer system, the computer program product being, for example, a series of computer instructions stored on a tangible data recording medium, such as a diskette, CD-ROM, ROM, or fixed disk, or embodied in a computer data signal, the signal being transmitted over a tangible medium or a wireless medium, for example, microwave or infrared. The series of computer instructions can constitute all or part of the functionality described above, and can also be stored in any memory device, volatile or non-volatile, such as semiconductor, magnetic, optical or other memory device.
It will also be well understood by persons of ordinary skill in the art that, whilst the described embodiments implement certain functionality by means of software, that functionality could equally be implemented solely in hardware (for example by means of one or more ASICs (application specific integrated circuits)) or indeed by a mix of hardware and software. As such, the scope of the present invention should not be interpreted as being limited only to being implemented in software.
It will be appreciated that whilst various aspects and embodiments of the present invention have heretofore been described, the scope of the present invention is not limited to the particular arrangements set out herein and instead extends to encompass all arrangements, and modifications and alterations thereto, which fall within the scope of the appended claims.
Lastly, it should also be noted that whilst the accompanying claims set out particular combinations of features disclosed herein, the scope of the present invention is not limited to the particular combinations hereafter claimed, but instead extends to encompass any combination of features or embodiments herein disclosed irrespective of whether or not that particular combination has been specifically enumerated in the accompanying claims at this time.

Claims

1. A method for use in forming an image of a scene comprising at least one object (134), the method comprising:
acquiring a plurality of sets of image data each representative of at least part of the scene;
determining a value of an attribute for the object or part of the object (140) from each of the plurality of sets of image data; and
setting a value of an attribute of the object or part of the object (140) according to the determined values of the attribute of the object or the part of the object (140) obtained from the plurality of sets of image data.
2. A method according to Claim 1, wherein the object (134) comprises a substantially plain surface.
3. A method according to Claim 1 or 2, wherein the object (134) comprises the whole or part of a surface for vehicular or other traffic, for example a road or pavement.
4. A method according to any preceding claim, wherein each set of image data is obtained from a different perspective.
5. A method according to any preceding claim, further comprising determining a median value of the attribute from each of the plurality of sets of image data, and setting the value of the attribute of the image of the object or the part of the object (140) according to the median value.
6. A method according to any preceding claim, wherein the attribute comprises colour.
7. A method according to Claim 6, wherein the attribute comprises at least one of an R value, a G value and a B value.
8. A method according to any preceding claim, wherein the attribute comprises at least one of brightness, lightness, intensity, grayscale intensity, saturation, or contrast.
9. A method according to any preceding claim, further comprising determining an image of the object (134) comprising a plurality of points or pixels, and the setting of the value of the attribute comprises determining a value of an attribute of each point or pixel of the image of the object based on the determined values of the attribute for a corresponding at least one point or pixel obtained from each of the sets of image data.
10. A method according to any preceding claim, wherein each set of image data is representative of a substantially panoramic image.
11. A method according to any preceding claim, wherein each set of image data is captured by a mapping vehicle (4), for example a mapping vehicle (4) travelling along a road (30).
12. An apparatus for use in forming an image of a scene comprising at least one object (134), the apparatus comprising:
means (10, 40) for acquiring a plurality of sets of image data each representative of at least part of the scene;
means (10) for determining a value of an attribute for the object or part of the object from each of the plurality of sets of image data; and
means (10) for setting a value of an attribute of the object or part of the object according to the determined values of the attribute of the object or the part of the object obtained from the plurality of sets of image data.
13. A computer program product comprising computer-readable instructions that are executable to perform a method according to any of Claims 1 to 11.
PCT/EP2010/070896 2010-12-29 2010-12-29 Method and apparatus for use in forming an image WO2012089262A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2010/070896 WO2012089262A1 (en) 2010-12-29 2010-12-29 Method and apparatus for use in forming an image

Publications (1)

Publication Number Publication Date
WO2012089262A1 true WO2012089262A1 (en) 2012-07-05

Family

ID=44624964

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/070896 WO2012089262A1 (en) 2010-12-29 2010-12-29 Method and apparatus for use in forming an image

Country Status (1)

Country Link
WO (1) WO2012089262A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090033540A1 (en) * 1997-10-22 2009-02-05 Intelligent Technologies International, Inc. Accident Avoidance Systems and Methods
WO2008093321A1 (en) * 2007-02-01 2008-08-07 Yissum Research Development Company Of The Hebrew University Of Jerusalem Method and system for video indexing and video synopsis
WO2008139465A2 (en) * 2007-05-10 2008-11-20 Yeda Research And Development Co. Ltd. Bidirectional similarity of signals

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
DAVIES E ROY ED - DAVIES E ROY: "Machine vision: theory, algorithms, practicalities", 1 January 2005, MACHINE VISION: THEORY, ALGORITHMS, PRACTICALITIES, ELSEVIER, AMSTERDAM, PAGE(S) 1 - 973, ISBN: 978-0-12-206093-9, XP040425677 *
HAOJIE LI ET AL: "Automatic Detection and Analysis of Player Action in Moving Background Sports Video Sequences", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 20, no. 3, 1 March 2010 (2010-03-01), pages 351 - 364, XP011297067, ISSN: 1051-8215 *
HAOJIE LI ET AL: "Automatic Video-based Analysis of Athlete Action", 14TH INTERNATIONAL CONFERENCE ON IMAGE ANALYSIS AND PROCESSING, 2007. ICIAP 2007, 10-13 SEPT. 2007 - MODENA, ITALY, IEEE, PISCATAWAY, NJ, USA, 1 September 2007 (2007-09-01), pages 205 - 210, XP031152343, ISBN: 978-0-7695-2877-9 *
HSU C-T ET AL: "Mosaics of video sequences with moving objects", SIGNAL PROCESSING. IMAGE COMMUNICATION, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 19, no. 1, 1 January 2004 (2004-01-01), pages 81 - 98, XP004476840, ISSN: 0923-5965, DOI: 10.1016/J.IMAGE.2003.10.001 *
IRANI M ET AL: "Efficient representations of video sequences and their applications", SIGNAL PROCESSING. IMAGE COMMUNICATION, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 8, no. 4, 1 May 1996 (1996-05-01), pages 327 - 351, XP004069965, ISSN: 0923-5965, DOI: 10.1016/0923-5965(95)00055-0 *
LAURA TEODOSIO, WALTER BENDER: "Salient Stills", ACM TRANSACTIONS ON MULTIMEDIA COMPUTING, COMMUNICATIONS AND APPLICATIONS,, 1 February 2005 (2005-02-01), pages 16 - 36, XP040015399 *
MICHAL IRANI ET AL: "Video Indexing Based on Mosaic Representations", PROCEEDINGS OF THE IEEE, IEEE. NEW YORK, US, vol. 86, no. 5, 1 May 1998 (1998-05-01), XP011044016, ISSN: 0018-9219 *
N. FRIETSCH: "Detection and tracking of objects in an image sequence captured by a VTOL-UAV", SPIE, PO BOX 10 BELLINGHAM WA 98227-0010 USA, 9 April 2007 (2007-04-09), XP040240185 *
WINKELMAN F ET AL: "Online globally consistent mosaicing using an efficient representation", SYSTEMS, MAN AND CYBERNETICS, 2004 IEEE INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, vol. 4, 10 October 2004 (2004-10-10), pages 3116 - 3121, XP010773234, ISBN: 978-0-7803-8566-5 *
ZHU Z ET AL: "Fast construction of dynamic and multi-resolution 360° panoramas from video sequences", IMAGE AND VISION COMPUTING, ELSEVIER, GUILDFORD, GB, vol. 24, no. 1, 1 January 2006 (2006-01-01), pages 13 - 26, XP025135374, ISSN: 0262-8856, [retrieved on 20060101], DOI: 10.1016/J.IMAVIS.2005.09.006 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106097357A (en) * 2016-06-17 2016-11-09 深圳市灵动飞扬科技有限公司 The bearing calibration of auto-panorama photographic head
CN106097357B (en) * 2016-06-17 2019-04-16 深圳市灵动飞扬科技有限公司 The bearing calibration of auto-panorama camera

Similar Documents

Publication Publication Date Title
TWI693422B (en) Integrated sensor calibration in natural scenes
CN110926474B (en) Satellite/vision/laser combined urban canyon environment UAV positioning and navigation method
Manyoky et al. Unmanned aerial vehicle in cadastral applications
US8571354B2 (en) Method of and arrangement for blurring an image
WO2011023244A1 (en) Method and system of processing data gathered using a range sensor
EP2659458B1 (en) System and method for generating textured map object images
CN111448591A (en) System and method for locating a vehicle in poor lighting conditions
JP6833668B2 (en) Image feature enhancement device, road surface feature analysis device, image feature enhancement method and road surface feature analysis method
JP2016189184A (en) Real time multi dimensional image fusing
JP6060682B2 (en) Road surface image generation system, shadow removal apparatus, method and program
CN112419385B (en) 3D depth information estimation method and device and computer equipment
CN111436216A (en) Method and system for color point cloud generation
JP2021508815A (en) Systems and methods for correcting high-definition maps based on the detection of obstructing objects
AU2008241689A1 (en) Method of and apparatus for producing road information
JP6278791B2 (en) Vehicle position detection device, vehicle position detection method, vehicle position detection computer program, and vehicle position detection system
CN111145362B (en) Virtual-real fusion display method and system for airborne comprehensive vision system
JP6854195B2 (en) Image processing device, image processing method and program for image processing
KR20130034528A (en) Position measuring method for street facility
CN108195359B (en) Method and system for acquiring spatial data
JP6773473B2 (en) Survey information management device and survey information management method
WO2012089262A1 (en) Method and apparatus for use in forming an image
US10859377B2 (en) Method for improving position information associated with a collection of images
CN109840920A (en) It takes photo by plane object space information method for registering and aircraft spatial information display methods
KR101393273B1 (en) System for advanced road texture image
CN113421325B (en) Three-dimensional reconstruction method for vehicle based on multi-sensor fusion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10798362

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08/10/2013)

122 Ep: pct application non-entry in european phase

Ref document number: 10798362

Country of ref document: EP

Kind code of ref document: A1