WO2012046671A1 - Positioning system - Google Patents

Positioning system

Info

Publication number
WO2012046671A1
WO2012046671A1 (PCT/JP2011/072702)
Authority
WO
WIPO (PCT)
Prior art keywords
landscape image
database
feature
landscape
image
Prior art date
Application number
PCT/JP2011/072702
Other languages
French (fr)
Japanese (ja)
Inventor
Katsuhiko Takahashi (高橋 勝彦)
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation
Priority to US 13/877,944 (granted as US 9104702 B2)
Priority to JP 2012-537688 (published as JPWO2012046671A1)
Publication of WO2012046671A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5854 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using shape and object relationship
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • The present invention relates to a positioning system for identifying the current position of a mobile body based on a landscape image photographed by an imaging means mounted on the mobile body, and to the landscape image database, database construction device, database construction method, landscape image database construction program, and positioning target device used therein.
  • The Global Positioning System (GPS) is generally known as a technique for identifying the position of a moving object. In GPS, radio waves transmitted from GPS satellites are received by a vehicle-mounted receiver, and positioning is performed based on the time difference between transmission and reception.
  • A positioning system based on wireless technology such as GPS has the problem that positioning cannot be performed in places where the required number of radio waves cannot be received, such as between tall buildings or in underpasses; in urban areas such positioning failures occur easily.
  • Techniques are disclosed for identifying the current position of a moving object by comparing a landscape image acquired by a camera mounted on the moving object with a previously stored landscape image database.
  • Patent Document 1 discloses the following positioning method: from the landscape image acquired by the camera mounted on the mobile body, shape data indicating the planar shape of the road and information such as the height and color of surrounding buildings are extracted and compared with a stored database to identify the current position of the mobile body.
  • In Patent Document 2, the positions of feature points of road markings are extracted from a landscape image acquired by a camera mounted on a moving body and compared with a stored database to identify the current position of the moving body.
  • Patent Document 3 discloses a method, for a moving object operating in an indoor environment, of identifying the current position and posture by capturing the ceiling with a camera mounted on the moving object and checking the image against a stored database.
  • Non-Patent Document 1 discloses an obstacle detection technique that, like Patent Document 2, collates feature points extracted from a landscape image against a database. In this method, a landscape image acquired by a camera mounted on a mobile body is associated with a stored database via feature points called SIFT features, and an obstacle is detected by calculating the difference between road surface area images determined to correspond.
  • SIFT is an abbreviation for Scale-invariant feature transform.
  • RANSAC is an abbreviation for Random Sample Consensus.
  • Patent Document 4 discloses the following change area recognition device based on a landscape image acquired by a camera mounted on a moving body: the acquired landscape image is matched against a stored database, and any area that does not match after registration is extracted as a change area, that is, an area containing an object that did not exist when the database was created.
  • Since Patent Document 3 can only be applied in an indoor environment, it cannot be applied to a moving body that operates outdoors, such as an automobile.
  • Patent Document 2 likewise does not function except in environments where road markings exist.
  • Patent Document 1 is considered applicable regardless of location, because collation is performed based on predetermined feature amounts relating to the appearance of a road and the buildings adjacent to it. However, it is not easy to automatically extract and store in a computer features such as the road width, the road length (the distance from one intersection to the next), and the number of buildings on each side of the road.
  • By contrast, a method that identifies the current position of the moving body from the position information of feature points extracted from a landscape image, as in Non-Patent Document 1, is advantageous in that it can be applied in a wide range of environments, indoors and outdoors. In addition, feature point extraction is easily realized with existing methods, so the database construction cost is low.
  • However, a method based on feature point matching has the following problem: unless the estimated correspondences for the randomly extracted feature points are all correct, the current position cannot be estimated. Here a correct correspondence means that a feature point on the landscape image is associated with the database feature point corresponding to the same part of the same object in the real world. Since 100% correct matching is difficult with existing image recognition technology, the method of Non-Patent Document 1 and the like introduces a trial-and-error process called RANSAC: sets of a fixed number of feature points are extracted many times so that, by chance, a set in which all feature points are correctly associated is included.
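The cost of this trial-and-error process can be quantified with the standard RANSAC trial-count formula (not stated in the documents cited here, but standard for RANSAC): if a fraction w of the extracted correspondences is correct, the number of random samples of size s needed to include at least one all-correct sample with confidence p is log(1 - p) / log(1 - w^s). A minimal sketch:

```python
import math

def ransac_trials(inlier_ratio: float, sample_size: int = 8,
                  confidence: float = 0.99) -> int:
    """Iterations needed so that, with the given confidence, at least one
    random sample consists only of correct correspondences."""
    p_all_good = inlier_ratio ** sample_size  # one sample is all-correct
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_all_good))
```

With eight-point samples, raising the fraction of correct correspondences from 0.5 to 0.7 cuts the required trials from roughly 1200 to under 100, which is exactly why excluding movable objects from the database (as this invention does) reduces computation.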
  • an object of the present invention is to provide a positioning system that enables a positioning process in a positioning target moving body with a smaller amount of calculation.
  • The landscape image database of the present invention stores a plurality of landscape image data together with the image acquisition position at which each was collected, and each of the plurality of landscape image data includes feature amounts of feature points only for objects whose position and shape in the real world are likely to be maintained for a predetermined period or longer.
  • The database construction method of the present invention captures a landscape image, acquires current position information, extracts feature amounts of feature points from the landscape image, and extracts from the landscape image the regions corresponding to objects whose position or shape will change in the real world in the future.
  • The landscape image database construction program of the present invention causes a computer to execute: a first imaging step of capturing a landscape image; a position information acquisition step of acquiring current position information; a first feature point extraction step of extracting feature amounts of feature points from the landscape image acquired by the first imaging means; and a future variation area extraction step of extracting, from the landscape image acquired by the first imaging means, the regions corresponding to objects whose position or shape will change in the real world in the future.
  • FIG. 2 shows an example of the description of the landscape image database. FIG. 3 shows an example of the image ahead of the moving body acquired by the first imaging module 101. FIG. 4 shows the positions of feature points extracted from an image acquired by the first imaging module 101. FIG. 5 shows the positional relationship of four points at which data recorded in the landscape image database was collected and the current position of the positioning target moving body. FIG. 6 shows an example of the probability of the collation process for the four points, the position information of each point, and the relative position (direction) information obtained by the collation process.
  • the first embodiment of the present invention includes a database construction mobile mounting device 100 and a positioning target mobile mounting device 110.
  • the database construction mobile is, for example, a car or robot dedicated to database construction.
  • The positioning target moving body is, for example, a private or commercial vehicle, or a robot provided with moving means such as wheels or legs.
  • the database construction mobile body mounting apparatus 100 includes a first imaging module 101 that captures a landscape image, and a position information acquisition module 102 that acquires the current position of the database construction mobile body.
  • the database construction mobile body mounting apparatus 100 includes a first feature point extraction module 103 that extracts feature amounts of feature points from a landscape image acquired by the first imaging module 101. Furthermore, the database construction mobile body mounting device 100 includes a landscape image database 104 that stores the feature quantities of the extracted feature points and the positional information in association with each other.
  • the feature points to be extracted by the first feature point extraction module 103 and their feature amounts include, for example, well-known SIFT (Scale-invariant feature transform) features and SURF (Speeded Up Robust Features) features.
  • the database construction mobile unit mounting apparatus 100 includes a future variation area extraction module 105 that extracts an area whose position or shape is likely to change in the near future from a landscape image captured by the imaging module.
  • the database construction mobile unit mounting apparatus 100 includes a landscape image database construction module 106 having the following functions. That is, the landscape image database construction module 106 stores the feature quantities of the feature points extracted from areas other than those extracted by the future variation area extraction module 105 in the landscape image database in association with the current position information.
  • The positioning target moving body mounting apparatus 110 includes a second imaging module 111 that captures a landscape image, and a second feature point extraction module 112 that extracts feature amounts of feature points from the landscape image acquired by the second imaging module.
  • the positioning object mobile unit mounting apparatus 110 includes an image collation / position identification module 113 and a landscape image database 114.
  • the image collation / position identification module 113 collates the feature points extracted by the second feature point extraction module 112 with the feature point information in the landscape image database 104 to identify the current position of the positioning target mobile unit mounting apparatus 110.
  • the first imaging module 101 is composed of an in-vehicle camera and the like, and captures a landscape image. For example, while the moving object for database construction moves on a predetermined road course, a landscape image in front of the vehicle is taken every moment.
  • The position information acquisition module 102 includes a high-accuracy positioning module, mounted on the database construction mobile body, that uses RTK-GPS (Real-Time Kinematic GPS) and vehicle speed pulse information.
  • the position information acquisition module 102 acquires position information of a point where the first imaging module 101, that is, the database construction mobile body has taken an image.
  • the position information is expressed by two types of numerical values, for example, latitude and longitude.
  • the first feature point extraction module 103 extracts feature points from the landscape image acquired by the first imaging module 101, and extracts the coordinate positions and feature amounts of the feature points on the image as feature point information.
  • The landscape image database 104 stores the feature point information extracted from each landscape image in association with the shooting position of that image. An example of the database description is shown in FIG. 2.
  • A record 201 for a landscape image taken at a certain time consists of shooting position information 202, composed of latitude 203 and longitude 204, and feature point information 205. The feature point information 205 includes the number of feature points 206 extracted at that time and a set of pairs of the coordinate position 207 of each feature point in the image and the image feature value 208 near that feature point.
  • a SURF feature or the like may be used as the image feature amount 208.
  • the image feature amount 208 is expressed by a 64-dimensional vector value obtained by quantifying the change direction and change intensity of the pixel value in the local region.
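As a sketch, one record of FIG. 2 could be represented as follows; the field names are illustrative rather than taken from the patent, and the 64-dimensional SURF descriptor is kept as a plain tuple:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FeaturePoint:
    x: float                       # coordinate position 207 in the image
    y: float
    descriptor: Tuple[float, ...]  # image feature value 208 (e.g. 64-d SURF)

@dataclass
class LandscapeRecord:
    latitude: float                # latitude 203 of the shooting position
    longitude: float               # longitude 204 of the shooting position
    points: List[FeaturePoint]     # feature point info 205; len() gives 206
```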
  • the landscape image database 104 is initially constructed in the database construction mobile unit mounting apparatus 100, but is then copied and placed in the positioning target mobile unit mounting apparatus 110 described later.
  • The future variation area extraction module 105 extracts, from the image acquired by the first imaging module 101, the areas corresponding to objects whose position or shape in the real world is likely to change in the future. For example, it extracts areas corresponding to vehicles or pedestrians by image recognition, using existing person detection, vehicle detection, or general object detection methods. Note that the extracted areas are not limited to objects actually moving at the moment the first imaging module 101 acquires the image; they cover any movable object, such as a person or a vehicle, including stationary ones.
  • FIG. 3 is a diagram illustrating an example of an image ahead of a moving object acquired by the first imaging module 101, and includes a road surface area 301, a building area 302, and a parked vehicle area 303.
  • The future variation area extraction module 105 extracts the parked vehicle area 303, since the parked vehicle may move in the future.
  • The landscape image database construction module 106 removes, from the feature points extracted by the first feature point extraction module 103, those extracted from the regions corresponding to vehicles or pedestrians extracted by the future variation area extraction module 105, and records the remaining feature point information in the landscape image database 104 in association with the position information of the database construction mobile body acquired by the position information acquisition module 102.
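This feature point selection can be sketched as follows, assuming the future variation areas are delivered as axis-aligned bounding boxes (a simplification; the patent does not fix a particular region representation):

```python
from typing import List, Tuple

Point = Tuple[float, float]              # (x, y) image coordinates
Box = Tuple[float, float, float, float]  # (x0, y0, x1, y1)

def keep_static_points(points: List[Point],
                       variation_areas: List[Box]) -> List[Point]:
    """Discard feature points falling inside any future variation area."""
    def inside(p: Point, b: Box) -> bool:
        x0, y0, x1, y1 = b
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1
    return [p for p in points
            if not any(inside(p, b) for b in variation_areas)]
```

Only the surviving points are written to the landscape image database together with the shooting position.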
  • The feature point selection by the landscape image database construction module 106 will be described with reference to FIG. 4.
  • FIG. 4 is a diagram illustrating an image acquired by the first imaging module 101 and the positions of feature points extracted therefrom. The star in the figure indicates the position of the feature point.
  • the second imaging module 111 is mounted on the positioning target moving body and captures a landscape image every moment.
  • the second feature point extraction module 112 extracts feature points from the landscape image acquired by the second imaging module 111, and extracts coordinate position information and image feature amounts on the images.
  • The same feature point extraction algorithm as that of the first feature point extraction module 103 may be used.
  • the image collation / position identification module 113 collates the feature point information extracted by the second feature point extraction module 112 with the feature point information in the landscape image database 104 to identify the current position of the positioning target moving body.
  • From the feature point group extracted by the second feature point extraction module 112 and the feature point group of one record in the landscape image database 104, that is, the feature points stored in association with one point, pairs of feature points with similar feature amounts are extracted.
  • the feature amount of the feature point is expressed by a vector value.
  • Specifically, for one feature point in the group extracted by the second feature point extraction module 112 and one feature point in the landscape image database 104, the L2 norm of the difference between the two feature vectors is calculated; if it is at or below a threshold, the two feature points are extracted as a corresponding pair. After all pairs of feature points with similar feature amounts have been extracted, eight pairs are chosen at random, and an estimate of the relative position of the moving body with respect to the moving body position associated with the record is obtained using the so-called eight-point algorithm. The probability (plausibility) of the estimated relative position can then be calculated from the set of all pairs consistent with the estimate.
  • The selection of the eight pairs is repeated many times, the relative position and its probability are calculated each time, and the relative position giving the maximum probability is selected. The above describes collation against one record selected from the landscape image database 104; collation is also performed against the feature points of other records, and roughly one to three results for records with a high estimation probability are selected.
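The collation loop can be sketched as below. The descriptor pairing follows the text (L2 norm below a threshold). For the model-fitting step the patent uses the eight-point algorithm to estimate relative pose; to keep this sketch self-contained, a simple 2-D translation model stands in for it, with the inlier count playing the role of the "probability":

```python
import random
import numpy as np

def match_descriptors(query, record, threshold=0.3):
    """Pair indices whose descriptors differ by an L2 norm below threshold."""
    return [(i, j)
            for i, q in enumerate(query)
            for j, r in enumerate(record)
            if np.linalg.norm(np.asarray(q) - np.asarray(r)) <= threshold]

def ransac(correspondences, fit, count_inliers,
           sample_size=8, trials=500, seed=0):
    """Sample `sample_size` correspondences repeatedly, fit a model each
    time, and keep the model supported by the most correspondences."""
    rng = random.Random(seed)
    best_model, best_support = None, -1
    for _ in range(trials):
        model = fit(rng.sample(correspondences, sample_size))
        support = count_inliers(model, correspondences)
        if support > best_support:
            best_model, best_support = model, support
    return best_model, best_support

# Stand-in model: a 2-D translation between matched point coordinates.
def fit_translation(sample):
    src = np.array([s for s, _ in sample], float)
    dst = np.array([d for _, d in sample], float)
    return (dst - src).mean(axis=0)

def translation_inliers(t, correspondences, tol=1.0):
    return sum(1 for s, d in correspondences
               if np.linalg.norm(np.asarray(d, float)
                                 - np.asarray(s, float) - t) <= tol)
```

In the full system the same loop runs per database record, and the one to three records with the highest support are retained.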
  • FIG. 5 is a diagram schematically showing four points (points 501, 502, 503, and 504) from which data recorded in the landscape image database 104 is collected and the current position 507 of the positioning target moving body.
  • FIG. 6 shows the probability of the matching process between the feature point data corresponding to points 501 to 504 and the feature point data extracted at the current time, the position information of points 501 to 504, and the relative position information obtained by the matching process.
  • The relative position information obtained by the eight-point algorithm consists of three-dimensional relative direction information of the moving body (without absolute distance information) and three-dimensional rotation information between the optical axis of the imaging module when the landscape image database 104 was created and that of the current positioning target moving body. FIG. 6 shows, for simplicity, only the relative direction information in the horizontal plane. In the case of FIGS. 5 and 6, for example, the determination can be made based on the relative position information extracted at points 502 and 503, where relatively high probabilities were obtained.
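As a simplified illustration of how such results fix a position on the horizontal plane, two database points with collation-derived bearings toward the vehicle suffice: the position is the intersection of the two bearing lines. This 2-D sketch illustrates that geometry only, not the patent's full 3-D formulation:

```python
import math
import numpy as np

def fix_from_two_bearings(p1, bearing1, p2, bearing2):
    """Intersect two bearing lines: p_k is a database shooting position and
    bearing_k (radians, math convention) the relative direction toward the
    positioning target obtained by collation with that record."""
    d1 = np.array([math.cos(bearing1), math.sin(bearing1)])
    d2 = np.array([math.cos(bearing2), math.sin(bearing2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2).
    t = np.linalg.solve(np.column_stack([d1, -d2]),
                        np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1
```

Using more than two high-probability records would allow a least-squares fix instead of an exact intersection.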
  • the first imaging module 101 captures an image in front of the moving body (step S701). Further, the position information acquisition module 102 acquires the current accurate position information of the mobile body on which the database construction mobile body mounting apparatus 100 is mounted (step S702). Subsequently, the first feature point extraction module 103 extracts feature points from the image captured by the first imaging module 101 (step S703). Further, the future fluctuation area extraction module 105 extracts an area corresponding to a vehicle and a person from the image captured by the first imaging module (step S704).
  • Next, the landscape image database construction module 106 keeps, from the feature points extracted by the first feature point extraction module 103, only those that do not belong to the vehicle or person regions extracted by the future variation area extraction module 105. The positions and feature amounts of these feature points are stored in the landscape image database 104 in association with the position information acquired by the position information acquisition module 102 (step S705). Steps S701 to S705 are repeated each time the first imaging module 101 acquires a new image. Step S702 is most preferably executed in synchronization with step S701. Next, the operation of the positioning target moving body mounting apparatus 110 will be described with reference to the flowchart of FIG. 8.
  • the second imaging module 111 captures an image in front of the positioning target moving body mounting apparatus 110 (step S801).
  • the second feature point extraction module 112 extracts feature points from the image captured by the second imaging module 111 (step S802).
  • the image collation / position identification module 113 collates the feature points extracted by the second feature point extraction module 112 with the feature point information in the landscape image database 104 to identify and output the current position of the positioning target moving body. (Step S803).
  • The database construction mobile unit mounting apparatus 100 described above includes an ECU (Electronic Control Unit) 1801, an in-vehicle camera 1802, a hard disk 1803, and a high-precision GPS 1804, as shown in FIG. 18.
  • The database construction mobile unit mounting apparatus 100 has been described as a configuration in which the modules described above are implemented and operated on an apparatus of this hardware configuration.
  • the ECU 1801 controls the entire apparatus, and includes, for example, a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), a signal processing circuit, a power supply circuit, and the like.
  • The embodiment described above is realized by the ECU 1801 reading out and executing a computer program that implements the functions of the flowcharts and decision logic referred to in the description, except for the functions of the imaging module and the position information acquisition module. It is also possible to implement the functions executed by the ECU 1801 in hardware, configuring a microcomputer.
  • the hard disk 1803 is a device for storing a landscape image database, and there is no problem even if it is constituted by a storage medium other than the hard disk such as a flash memory.
  • The positioning target mobile unit mounting apparatus 110 described above includes an ECU 1901, an in-vehicle camera 1902, and a hard disk 1903, as shown in FIG. 19, and has been described as a configuration in which the modules described above are implemented and operated on an apparatus of this hardware configuration.
  • the ECU 1901 controls the entire apparatus, and includes, for example, a CPU, RAM, ROM, signal processing circuit, power supply circuit, and the like.
  • The embodiment described above is realized by the ECU 1901 reading out and executing a computer program that implements the functions of the flowcharts and decision logic referred to in the description, except for the function of the second imaging module. It is also possible to implement the functions executed by the ECU 1901 in hardware, configuring a microcomputer. Furthermore, some functions may be realized in hardware, with the same functions realized by cooperative operation of that hardware and a software program.
  • a copy of the landscape image database constructed by the database construction mobile body may be stored in the hard disk 1903. Further, as another configuration example, it may be stored in a ROM in the ECU 1901 without using a hard disk.
  • the landscape image database 104 is constructed by removing specific feature points from the feature points extracted by the first feature point extraction module 103.
  • The specific feature points are those belonging to vehicles or pedestrians, whose future movement would reduce the efficiency of image collation in the image collation / position identification module of the positioning target moving body mounting apparatus 110. Accordingly, the image collation / position identification module 113 can correctly collate images with a smaller number of RANSAC trials.
  • the description has been made assuming that the image acquired by the first imaging module is an image capturing a visible light wavelength band such as a color image or a monochrome image.
  • the first imaging module may be configured by an apparatus that can also acquire an image in a wavelength band other than the visible light wavelength region, such as a multispectral camera.
  • The future variation area extraction module 105 may further extract plants and the like. Many feature points are extracted from the leaf areas of roadside trees, but the shape of a tree changes with wind and growth, so the positions of these feature points move and become noise during image matching. However, it is not always easy to accurately determine the region corresponding to a plant such as a roadside tree from information in the visible light wavelength region alone.
  • Since chlorophyll is known to reflect light in the near-infrared wavelength band well, using a multispectral image that includes near-infrared information makes it relatively easy to identify the areas corresponding to plants.
  • Although fibers such as clothing also often show high reflection intensity in the near-infrared wavelength band, pedestrians are themselves detection targets of the future variation area extraction module 105, so even if a pedestrian region cannot be distinguished from a plant region, there is no functional problem.
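One standard way to exploit the chlorophyll property described above is the normalized difference vegetation index (NDVI). The patent does not name a specific index, so this is one plausible realization, assuming a red band and a near-infrared band from a multispectral camera:

```python
import numpy as np

def vegetation_mask(red: np.ndarray, nir: np.ndarray,
                    threshold: float = 0.3) -> np.ndarray:
    """Chlorophyll reflects near-infrared strongly and red weakly, so
    NDVI = (NIR - red) / (NIR + red) is high for plant pixels."""
    red = red.astype(float)
    nir = nir.astype(float)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)  # avoid divide-by-zero
    return ndvi > threshold
```

The threshold is scene-dependent; pixels flagged by the mask would simply be added to the future variation areas before feature point selection.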
  • In this way, vehicles and people that are likely to move in the future are extracted by the future variation area extraction module 105 from the images used to construct the landscape image database, as are the regions corresponding to plants such as roadside trees whose shape changes with growth. Since the landscape image database 104 is then constructed using only the feature points extracted outside these regions, the probability of selecting correctly corresponding pairs of feature points in the RANSAC process increases, and the number of RANSAC iterations can be reduced.
  • RANSAC is an abbreviation for RANdom Sample Consensus.
  • positioning can be performed with a smaller amount of calculation.
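The saving can be made concrete with the standard RANSAC iteration-count formula: the number of random draws needed to find, with a given confidence, at least one all-inlier sample grows steeply as the inlier ratio falls. Removing feature points on vehicles, pedestrians, and plants raises the inlier ratio and so shrinks the iteration count. A minimal sketch (the ratios below are illustrative, not from the patent):

```python
import math

def ransac_iterations(inlier_ratio, sample_size, confidence=0.99):
    """Number of RANSAC draws needed so that, with the given confidence,
    at least one drawn sample contains only correctly matched points."""
    good = inlier_ratio ** sample_size   # P(one sample is all-inlier)
    if good >= 1.0:
        return 1
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - good))

# With half the feature points on movable objects (inlier ratio 0.5),
# versus a cleaned database (inlier ratio 0.8), for 4-point samples:
print(ransac_iterations(0.5, 4))  # 72 iterations
print(ransac_iterations(0.8, 4))  # 9 iterations
```

The roughly eightfold drop in iterations illustrates why excluding future-variation regions reduces the amount of calculation for positioning.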
  • the future variation region extraction module 105, which extracts regions corresponding to vehicles, people, and plants, is applied to the database construction mobile body mounting apparatus 100 for the following reasons. First, the apparatus 100 is assumed to be mounted on a small number of special mobile bodies that can carry expensive, high-performance equipment, so the added cost is easy to tolerate. Second, in the configuration described later in which a server device constructs the landscape image database, there is no real-time constraint on the construction processing. (Second Embodiment) A second embodiment of the present invention will now be described with reference to the drawing.
  • the positioning target moving body mounting apparatus 910 includes the approximate position acquisition module 901.
  • the functions are the same as those of the first embodiment shown in FIG. 1, except for the approximate position acquisition module 901 and the image collation / position identification module 113.
  • the approximate position acquisition module 901 includes an inexpensive GPS or map matching mechanism, and acquires the current approximate position of the positioning target moving body.
  • the image collation / position identification module 113 collates the feature point information extracted by the second feature point extraction module 112 with the landscape image database 104, and determines the current position of the moving body with higher accuracy than the approximate position.
  • the second imaging module 111 captures an image in front of the positioning target moving body (step S1001).
  • the second feature point extraction module 112 extracts feature points from the image acquired in step S1001 (step S1002).
  • the approximate position acquisition module 901 acquires the current approximate position of the positioning target moving body (step S1003).
  • the image collation / position identification module 113 collates the feature points extracted in step S1002 against only the feature point information in the landscape image database narrowed down by the approximate position information obtained in step S1003, and identifies and outputs the exact current position.
  • the feature point information in the landscape image database to be collated by the image collation / position identification module can be narrowed down, so that the amount of calculation required for collation can be reduced.
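The narrowing step above can be sketched as a simple spatial filter over database records, assuming a flat local coordinate frame in metres and an illustrative record layout (the patent prescribes neither):

```python
def nearby_records(database, approx_pos, radius_m=100.0):
    """Select only landscape-image records whose capture position lies
    within `radius_m` of the approximate GPS position.

    `database` is a list of dicts with an (x, y) 'position' in metres;
    the field names here are hypothetical, chosen for illustration."""
    ax, ay = approx_pos
    return [rec for rec in database
            if ((rec["position"][0] - ax) ** 2 +
                (rec["position"][1] - ay) ** 2) <= radius_m ** 2]

db = [{"id": 1, "position": (0.0, 0.0)},
      {"id": 2, "position": (50.0, 50.0)},
      {"id": 3, "position": (500.0, 0.0)}]
close = nearby_records(db, (10.0, 10.0))
# records 1 and 2 survive; record 3 is too far away to be a match candidate
```

Only the surviving records need to be collated against the extracted feature points, which is the source of the reduced calculation noted above.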
  • a third embodiment of the present invention will be described with reference to FIG. 11. Compared with the first embodiment shown in FIG. 1, the third embodiment differs in the following points.
  • the modules of the database construction mobile body mounting apparatus 100 are divided between two devices, a new database construction mobile body mounting apparatus 1100 and a server apparatus 1120, so the points where the modules are installed differ.
  • in addition, a position / image data recording module 1102 and a position / image data storage unit 1103 are added as a mechanism for exchanging landscape images and position information between the database construction mobile body mounting apparatus 1100 and the server apparatus 1120.
  • a database construction mobile body mounting apparatus 1100 according to the third embodiment will be described.
  • the first imaging module 101 and the position information acquisition module 102 are the same as those in the first embodiment shown in FIG. 1.
  • the position / image data storage unit 1103 stores, in association with each other, the image information and position information acquired at the same time by the first imaging module 101 and the position information acquisition module 102.
  • the position / image data recording module 1102 records the images and position information acquired at the same time by the first imaging module 101 and the position information acquisition module 102 in the position / image data storage unit 1103.
  • the server device 1120 will be described.
  • the first feature point extraction module 103 extracts feature points from the landscape image recorded in the position / image data storage unit 1103, and outputs the positions and feature amounts of the feature points in the image.
  • the landscape image database 104 is the same as in the first embodiment shown in FIG. 1.
  • the landscape image database 104 is first constructed on the server; after construction, it is copied and installed on the positioning target moving body.
  • the future variation area extraction module 105 extracts areas corresponding to vehicles, pedestrians, or plants whose future position or shape may change from the landscape images in the position / image data storage unit 1103 referred to by the first feature point extraction module 103.
  • the scenery image database construction module 106 records specific feature points among the feature points extracted by the first feature point extraction module 103 in association with the position information acquired by the position information acquisition module 102 in the landscape image database 104.
  • the specific feature points are other feature points excluding the feature points extracted from the regions corresponding to the vehicle, the pedestrian, and the plant extracted by the future variation region extraction module 105.
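The exclusion described in the two bullets above — keeping only feature points outside the extracted future-variation regions — can be sketched as follows. Representing regions as axis-aligned boxes and feature points as `(x, y, descriptor)` tuples is a simplification for illustration; the patent does not fix these representations:

```python
def outside_regions(feature_points, regions):
    """Keep only feature points that fall outside every extracted
    future-variation region (vehicle, pedestrian, plant)."""
    def inside(px, py, box):
        x0, y0, x1, y1 = box
        return x0 <= px <= x1 and y0 <= py <= y1
    return [fp for fp in feature_points
            if not any(inside(fp[0], fp[1], box) for box in regions)]

points = [(10, 10, "corner-of-building"),
          (120, 40, "edge-of-parked-car"),
          (300, 200, "road-marking")]
car_box = (100, 0, 200, 100)          # region flagged by the extractor
kept = outside_regions(points, [car_box])
# only the building corner and the road marking are stored in the database
```

The surviving points are then recorded together with the position information acquired at capture time, as described above.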
  • the positioning target moving body mounting apparatus 110 is the same as that of the first embodiment shown in FIG. 1. Next, the operation of the third embodiment will be described in detail with reference to the drawings. Since the operation of the positioning target moving body mounting apparatus 110 is the same as that shown in the flowchart of FIG. 8, its description is omitted.
  • the operations of the database construction mobile body mounting apparatus 1100 and the server apparatus 1120 are shown.
  • the first imaging module 101 captures an image in front of the moving body (step S1201).
  • the position information acquisition module 102 acquires the current accurate position information of the moving body in synchronization with step S1201 (step S1202).
  • the position / image data recording module 1102 records the image and position information acquired in steps S1201 and S1202 in association with each other in the position / image data storage unit 1103 (step S1203).
  • the operation of the server apparatus 1120 will be described with reference to the flowchart of FIG.
  • the first feature point extraction module 103 extracts feature points from the image data of the position / image data generated by the database construction mobile body mounting apparatus 1100, and outputs their positions and feature amounts on the image (step S1301).
  • the future variation area extraction module 105 extracts areas corresponding to vehicles, pedestrians, plants, and the like from the same image data as the image data referenced in step S1301 (step S1302).
  • the landscape image database construction module 106 removes points corresponding to the region extracted in step S1302 from the feature points extracted in step S1301.
  • a landscape image database is constructed by associating the feature amount information (position and feature amount on the image) regarding the feature point after removal with the imaging position information stored in association with the image data referred to in step S1301. (Step S1303).
  • the database construction mobile body mounting apparatus 1100 described above includes an image recording apparatus 2001, an in-vehicle camera 2002, a hard disk 2003, and a high-accuracy GPS 2004 as shown in FIG. 20, and has been described as a configuration in which the modules described above are mounted and operated on this hardware.
  • the image recording apparatus 2001 controls the entire apparatus, and includes, for example, a CPU, RAM, ROM, signal processing circuit, power supply circuit, and the like. That is, the present embodiment described above is realized by the image recording apparatus 2001 reading out and executing a computer program that implements the functions of the flowcharts and decision logic referred to in the description, except for the functions of the imaging module and the position information acquisition module.
  • the server device 1120 described above is configured by a computer 2101 as shown in FIG.
  • the computer 2101 includes, for example, a CPU, RAM, ROM, signal processing circuit, power supply circuit, hard disk, and the like. That is, the above-described embodiment is realized by reading out and executing a computer program capable of realizing the function and determination logic of the flowchart referred to in the description on the computer 2101.
  • a microcomputer can also be configured by implementing the functions executed by the computer 2101 as hardware.
  • some functions may be realized by hardware, and similar functions may be realized by cooperative operation of the hardware and the software program.
  • the landscape image database constructed by the database construction mobile body may be exchanged with the database construction mobile body via the hard disk in the computer 2101.
  • the fourth embodiment further includes a server download module 1401 in the server device 1420, relative to the configuration shown in FIG. 11.
  • the positioning target moving body mounting apparatus 1410 includes an approximate position acquisition module 901 that acquires the current approximate position, and a positioning target moving body download module 1402 that communicates with the server device 1420 to acquire part of the landscape image database.
  • the functions of the database construction mobile body mounting device 1400 and the server device 1420 are substantially the same as those in the example of FIG. 11, and only the server download module 1401 is different.
  • the server download module 1401 operates as follows when it receives approximate position information and a landscape image database download request from the positioning target moving body download module 1402 of the positioning target moving body mounting apparatus 1410.
  • the server download module 1401 extracts record data generated in the vicinity of the approximate position from the landscape image database stored in the server, and transmits it to the positioning target mobile unit download module 1402.
  • the functions of the second imaging module 111 and the second feature point extraction module 112 are the same as those in the embodiment of FIG.
  • the positioning target moving body download module 1402 sends the approximate position information acquired by the approximate position acquisition module 901 and a request message for landscape image data to the server download module 1401 of the server device 1420. Thereafter, the positioning target moving body download module 1402 receives the corresponding landscape image data from the server download module 1401.
  • the image collation / position identification module 113 collates the feature point information extracted by the second feature point extraction module 112 with the landscape image data received from the server download module 1401 to determine the accurate current position of the moving object.
  • the operation of the database construction mobile unit mounting apparatus is the same as the flowchart shown in FIG.
  • the operation of the server device 1420 will be described with reference to the flowchart of FIG. Steps S1501 to S1503 are the same as steps S1201 to S1203.
  • the server download module 1401 operates as follows only when a landscape image database is requested from the positioning target mobile unit download module 1402 of the positioning target mobile unit mounting apparatus 1410. That is, the server download module 1401 extracts and transmits corresponding landscape image data (step S1504).
  • the second imaging module 111 captures a landscape image (step S1601).
  • the second feature point extraction module 112 outputs the feature amount of the feature point from the image captured in step S1601 (step S1602).
  • the approximate position acquisition module 901 acquires the approximate position of the positioning target moving body mounting apparatus 1410 (step S1603).
  • the positioning target mobile unit download module 1402 communicates with the server download module 1401 of the server device 1420 and receives landscape image data corresponding to the approximate position (step S1604).
  • the image collation / position identification module 113 collates the landscape image data received by the positioning target mobile unit download module 1402 with the feature amounts of the feature points extracted by the second feature point extraction module 112. Thereafter, the image collation / position identification module 113 determines the current position of the moving body (step S1605).
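The client-side flow of steps S1601 to S1605 can be sketched as a simple pipeline in which each module is passed in as a plain function. The function names and stubbed modules below are illustrative assumptions, not from the patent:

```python
def locate(capture, extract, get_approx, download, match):
    """One positioning cycle of the fourth embodiment (steps S1601-S1605)."""
    image = capture()            # S1601: second imaging module
    feats = extract(image)       # S1602: feature point extraction
    approx = get_approx()        # S1603: rough GPS position
    records = download(approx)   # S1604: fetch nearby database records
    return match(feats, records) # S1605: identify the exact position

# Stubbed modules demonstrating only the data flow between steps:
pos = locate(lambda: "img",
             lambda img: ["f1", "f2"],
             lambda: (35.0, 139.0),
             lambda p: ["record-near-" + str(p)],
             lambda f, r: (35.0001, 139.0002))
# pos == (35.0001, 139.0002)
```

The point of the structure is that only step S1604 touches the server; everything else runs on the positioning target moving body.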
  • the server device 1420 described above is configured by a computer 2201 and a communication device 2202 as shown in FIG. 22, and has been described as a configuration in which the module described above is mounted and operated in the device of this configuration.
  • the computer 2201 includes, for example, a CPU, RAM, ROM, power supply circuit, hard disk, and the like.
  • the above-described embodiment is realized by the computer 2201 reading out and executing a computer program that implements the functions and decision logic of the flowcharts referred to in the description, excluding the function of the server download module 1401.
  • a microcomputer can also be configured by implementing the functions executed by the computer 2201 as hardware.
  • some functions may be realized by hardware, and similar functions may be realized by cooperative operation of the hardware and the software program.
  • the communication device 2202 is hardware for wireless LAN (Local Area Network) or mobile phone data communication that executes the function of the server download module 1401.
  • the above-described positioning target moving body mounting apparatus 1410 includes an ECU (Electronic Control Unit) 2301, a camera 2302, a hard disk 2303, a GPS 2304, and a communication device 2305 as shown in FIG.
  • the ECU 2301 controls the entire apparatus, and includes, for example, a CPU, RAM, ROM, signal processing circuit, power supply circuit, and the like. That is, the above-described embodiment is realized by reading out and executing a computer program capable of realizing the functions and determination logic of the flowchart referred to in the description to the ECU 2301. It is also possible to configure a microcomputer by implementing the functions executed by the ECU 2301 as hardware.
  • the landscape image database constructed by the database construction mobile body may be stored in the hard disk 2303 as a copy. As another configuration example, it may be stored in a ROM in the ECU 2301 without using a hard disk.
  • the communication device 2305 is hardware for wireless LAN or cellular phone data communication that executes the function of the positioning target moving body download module 1402. According to the present embodiment, since the landscape image database need not be held in the positioning target moving body mounting apparatus 1410, the capacity of the magnetic disk mounted on the apparatus 1410 can be reduced. Furthermore, since the landscape image database is stored in the server device 1420, there is the advantage that it can be updated easily.
  • FIG. 24 shows a landscape image database according to the fifth embodiment of the present invention.
  • the landscape image database 2401 of this embodiment stores a plurality of landscape image data in association with the image acquisition positions, i.e., the positions where the landscape image data were collected. Furthermore, the landscape image database 2401 of the present embodiment is characterized in that each of the plurality of landscape image data includes feature amounts of feature points only for objects other than those unlikely to maintain their current position or shape in the real world for a predetermined period or longer.
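One possible in-memory layout of such a database record, mirroring the reference numerals 201 to 208 listed at the end of this description (record number, shooting position as latitude/longitude, and per-feature-point coordinate position and image feature amount). This is a sketch of the data relationships, not the patent's actual storage format:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FeaturePoint:
    coord: Tuple[float, float]    # coordinate position on the image (207)
    descriptor: List[float]       # image feature amount, e.g. a SIFT vector (208)

@dataclass
class LandscapeRecord:
    record_number: int            # (201)
    latitude: float               # shooting position information (202-204)
    longitude: float
    features: List[FeaturePoint]  # only points on long-lived objects (205-208)

# A record holding one surviving feature point (values are illustrative):
rec = LandscapeRecord(1, 35.6812, 139.7671,
                      [FeaturePoint((12.0, 34.0), [0.1, 0.2, 0.3, 0.4])])
```

Because points on vehicles, pedestrians, and plants were never stored, `features` contains only candidates that can still be matched when the positioning target moving body passes the same location later.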
  • the following effects can be obtained.
  • a landscape image database for a positioning system can be provided that enables positioning processing on the positioning target moving body with a smaller amount of calculation, by identifying in advance feature points that are likely to be difficult to associate correctly and not storing them in the landscape image database.
  • in each embodiment a dedicated device is assumed, but the following may also be used. For example, a personal computer that performs various data processing may be loaded with a board or card that performs the processing of this example, with each process executed on the computer side. Alternatively, software for executing the processing may be installed on the personal computer and executed there.
  • the program installed in a data processing device such as a personal computer may be distributed via various recording (storage) media such as optical disks and memory cards, or via communication means such as the Internet.
  • each of the above embodiments can be combined with other embodiments. While the present invention has been described with reference to the embodiments, the present invention is not limited to the above embodiments.
  • (Appendix 2) The landscape image database according to appendix 1, wherein an object that is unlikely to maintain its current position or shape in the real world for a predetermined period or longer is a vehicle.
  • (Appendix 3) The landscape image database according to appendix 1 or appendix 2, wherein an object that is unlikely to maintain its current position or shape in the real world for a predetermined period or longer is a person.
  • the feature amounts of the feature points remaining after removing, from the feature points extracted from the landscape image acquired by the first imaging unit, those extracted from the area extracted by the future variation area extraction unit, are recorded in the landscape image database in association with the current position information;
  • a database construction device comprising the above. (Appendix 5) The database construction device according to appendix 4, wherein the first imaging unit captures not only images in the visible light wavelength band but also images in wavelength bands other than the visible region, and the future variation area extraction unit extracts a region corresponding to a plant from the landscape image acquired by the first imaging unit.
  • (Appendix 6) A landscape image database according to any one of appendices 1 to 3; second imaging means for capturing a landscape image; second feature point extraction means for extracting feature amounts of feature points from the landscape image acquired by the second imaging means; and
  • image collation / position identification means for collating the feature amounts of the feature points extracted by the second feature point extraction means against the landscape image database to identify the current position of the positioning target moving body;
  • a positioning target device comprising the above. (Appendix 7) The positioning target device according to appendix 6, further comprising approximate position acquisition means for acquiring the current approximate position of the positioning target device, wherein the image collation / position identification means performs image collation using only the landscape image data, among the landscape image data stored in the landscape image database, that was generated near the approximate position.
  • positioning target moving body landscape image database download means for acquiring landscape image data in the landscape image database associated with position information near the approximate position;
  • the image collation / position identification means identifies the current position by collating the landscape image data acquired by the landscape image database download means against the feature amounts of the feature points extracted by the second feature point extraction means;
  • the positioning system according to appendix 8, characterized by the above. (Appendix 10) Capturing a landscape image; acquiring current position information; extracting feature amounts of feature points from the landscape image; and extracting from the landscape image an area corresponding to an object whose position or shape will change in the future in the real world;
  • a database construction method characterized by that.
  • (Appendix 11) Capturing not only images in the visible light wavelength band but also images in wavelength bands other than the visible range, and extracting an area corresponding to a plant from the landscape image;
  • the database construction method characterized by the above. (Appendix 12) A first imaging step of capturing a landscape image; a position information acquisition step of acquiring current position information; a first feature point extraction step of extracting feature amounts of feature points from the landscape image acquired in the first imaging step; and a future variation area extraction step of extracting, from the landscape image acquired in the first imaging step, an area corresponding to an object whose position or shape will change in the future in the real world;
  • a landscape image database construction program that causes a computer to execute the above steps to construct the landscape image database according to any one of appendices 1 to 3.
  • the present invention relates to a positioning system, a landscape image database, a database construction device, a database construction method, and a landscape image database construction program for identifying the current position of the mobile body based on a landscape image taken by an imaging means mounted on the mobile body.
  • the present invention relates to a positioning target device and has industrial applicability.
  • DESCRIPTION OF SYMBOLS
    100 database construction mobile body mounting apparatus
    101 first imaging module
    102 position information acquisition module
    103 first feature point extraction module
    104 landscape image database
    105 future variation area extraction module
    106 landscape image database construction module
    110 positioning target moving body mounting apparatus
    111 second imaging module
    112 second feature point extraction module
    113 image collation / position identification module
    114 landscape image database
    201 record number
    202 shooting position information
    203 latitude
    204 longitude
    205 feature point information
    206 number of feature points
    207 coordinate position
    208 image feature amount
    301 road surface area
    302 building area
    303 parked vehicle area
    401 feature points corresponding to the parked vehicle area 303
    402 feature points excluding the feature points 401
    501, 502, 503, 504 four points where data was collected
    505 angle, relative to the optical axis, of the current position with respect to point 502, estimated by the image collation / position identification module
    506 angle, relative to the optical axis, of the current position with respect to point 503, estimated by the image collation / position identification module
    507 current position of the positioning


Abstract

In a positioning system that identifies the current position of a mobile body by comparing a landscape image captured by image capture means provided in the mobile body with a landscape image database describing landscape features, the landscape image database is provided so that positioning processing can be performed in the mobile body to be positioned with less calculation. In the landscape image database of the present invention, a plurality of landscape image data and image acquisition positions, the positions where the landscape image data were gathered, are stored in association with each other, and each of the plurality of landscape image data includes feature values of feature points corresponding only to entities other than those for which the likelihood is low that their position and shape in the real world will be maintained as-is for a predetermined period or longer.

Description

Positioning system
The present invention relates to a positioning system, a landscape image database, a database construction device, a database construction method, a landscape image database construction program, and a positioning target device for identifying the current position of a mobile body based on landscape images captured by imaging means mounted on the mobile body.
A global positioning system (GPS) is generally known as a technique for identifying the position of a moving object. In GPS, a radio wave transmitted from a GPS satellite is received by a vehicle-mounted receiving device, and positioning is performed based on a time difference between transmission and reception.
A positioning system based on wireless technology such as GPS has the problem that positioning cannot be performed in places where the required number of radio signals cannot be received. Specific examples include street canyons between buildings and underpasses; in urban areas, situations where positioning is difficult arise easily.
As a positioning technique based on a completely different principle that can avoid such problems, techniques have been disclosed that identify the current position of a moving body by comparing a landscape image acquired by a camera mounted on the moving body with a database of previously stored landscape images.
Patent Document 1 discloses the following positioning method. From a landscape image acquired by a camera mounted on the moving body, shape data indicating the planar shape of the road and information such as the height and color of surrounding buildings are extracted, and these are compared against a stored database to identify the current position of the moving body.
Patent Document 2 discloses a positioning method in which the positions of feature points of road markings are extracted from a landscape image acquired by a camera mounted on a moving body and compared against a stored database to identify the current position of the moving body.
Patent Document 3 discloses a method of identifying the current position and orientation of a moving body operating in an indoor environment by capturing the ceiling with a camera mounted on the moving body and matching the image against a stored database.
Although its purpose is not positioning, Non-Patent Document 1 discloses a technique for detecting obstacles by matching against a database based on feature points extracted from a landscape image, as in Patent Document 2. In this method, a landscape image acquired by a vehicle-mounted camera is associated with a stored database via feature points called SIFT, and obstacles are detected by computing the difference between road surface area images judged to correspond. Here, SIFT is an abbreviation for Scale-invariant feature transform. The method introduces RANSAC (RANdom SAmple Consensus), a trial-and-error parameter estimation technique, as a mechanism for correctly associating the landscape image with the stored database even when erroneously matched feature points are mixed in.
Patent Document 4 discloses the following change area recognition device based on landscape images acquired by a camera mounted on a moving body. The acquired landscape image is aligned against a stored database, and areas that do not match after alignment are extracted as change areas, that is, areas containing objects that did not exist when the database was created.
Japanese Patent No. 4206036
JP 2007-107043 A
JP 2004-12429 A
Japanese Patent No. 3966419
The larger the proportion of erroneously matched feature points, the more times a set of a fixed number of feature points must be drawn, and the processing time required for position identification increases. When the images used to construct the landscape image database contain vehicles or people that are likely to move, or plants such as roadside trees whose shape changes as they grow, the following problem arises. For feature points extracted from the regions corresponding to these moving objects, it is extremely likely that no correct correspondence can exist when the positioning target moving body performs positioning. Conventionally, however, these feature points were also treated as feature points to be matched, which increased the processing time required for position identification.
(Object of the Invention)
In order to solve the above problems, an object of the present invention is to provide a positioning system that enables positioning processing on a positioning target moving body with a smaller amount of calculation.
However, various problems actually arise when trying to determine the current position of a moving body using the related techniques described above.
First, since the technique disclosed in Patent Document 3 can be applied only in indoor environments, it cannot be applied to moving bodies that operate outdoors, such as automobiles. Patent Document 2 likewise has the problem that it functions only in environments where road markings exist.
Patent Document 1 is considered to be a technique that can be applied regardless of location because collation is performed based on information such as a predetermined feature amount related to the appearance of a road and a building adjacent to the road. However, features such as the road width and road length (distance from the intersection to the next intersection) and the number of buildings existing on each side of the road are automatically stored in the computer. It is difficult for the current image recognition technology to extract them. Therefore, since it is necessary to manually construct the database, there is a problem that the construction cost of the database is high.
On the other hand, the method of specifying the current position of the moving body based on the position information of the feature point extracted from the landscape image as in Non-Patent Document 1 is advantageous in that it can be applied in a wide range of environments regardless of indoors and outdoors. It is. In addition, the feature point extraction can be easily realized by an existing method, so that it is effective in that the database construction cost is low.
However, such a method based on feature point matching has the following problems. In other words, after estimating the correspondence between the feature points extracted from the landscape image and the feature points stored in the database, the estimated correspondence is found for all the feature points that are randomly extracted. If it is not completely correct, the current position cannot be estimated. Here, the correct correspondence means that the feature point on the landscape image corresponding to the same part of the same object in the real world is associated with the feature point in the database.
Since it is difficult to match 100% correctly with the feature point matching process using the existing image recognition technology, the method disclosed in Non-Patent Document 1 or the like introduces a trial-and-error process called RANSAC. By this RANSAC, a set of a certain number of feature points is extracted many times, and the case where all feature points are correctly associated is included by chance.
As the ratio of feature points that are erroneously associated with each other increases, a certain number of feature point sets must be extracted more times, and the processing time required for position identification increases. There are the following problems when a vehicle or a person whose location is likely to move in an image used for constructing a landscape image database, and a plant such as a roadside tree whose shape changes as it grows. There is a high possibility that a correct correspondence relationship does not exist at the time of positioning of the positioning target moving body with respect to the feature points extracted from the region corresponding to the moving object or the like. Conventionally, however, these feature points are also handled as feature points to be collated, which increases the processing time required for position identification.
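The RANSAC process described above can be sketched as follows. This is a minimal illustration only, not the method of the patent or of Non-Patent Document 1: the eight-point model fit is replaced by a simple 2D translation model so the sample-and-score loop stays self-contained.

```python
import random

def ransac_translation(pairs, n_sample=3, n_iter=200, tol=2.0, seed=0):
    """Minimal RANSAC sketch: estimate a 2D translation from noisy
    point correspondences (query_xy, db_xy), some of which are wrong.
    Each iteration hypothesizes a model from a small random sample and
    counts how many pairs are consistent with it (the inliers)."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        sample = rng.sample(pairs, n_sample)
        # Hypothesize a translation from the minimal sample.
        dx = sum(q[0] - d[0] for q, d in sample) / n_sample
        dy = sum(q[1] - d[1] for q, d in sample) / n_sample
        # Count inliers: pairs consistent with the hypothesis.
        inliers = [(q, d) for q, d in pairs
                   if abs(q[0] - d[0] - dx) < tol and abs(q[1] - d[1] - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

# Correct correspondences shifted by (10, 5), plus two wrong matches.
good = [((x + 10.0, y + 5.0), (x, y)) for x, y in [(0, 0), (3, 1), (7, 4), (2, 8)]]
bad = [((50.0, 50.0), (0.0, 1.0)), ((-3.0, 20.0), (6.0, 6.0))]
model, inliers = ransac_translation(good + bad)
```

A sample containing a wrong match yields a hypothesis that almost no pairs support, so only all-correct samples win; this is exactly why a higher fraction of wrong matches forces more iterations.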
(Object of the Invention)
In order to solve the above problems, an object of the present invention is to provide a positioning system that enables the positioning process on a positioning target moving body to be performed with a smaller amount of computation.
The landscape image database of the present invention stores a plurality of landscape image data items in association with the image acquisition positions at which they were collected, and each landscape image data item contains feature quantities of feature points belonging only to things other than those whose real-world position or shape is unlikely to be maintained unchanged for a predetermined period or longer.
The database construction method of the present invention captures a landscape image, acquires current position information, extracts feature quantities of feature points from the landscape image, and extracts from the landscape image the regions corresponding to objects whose real-world position or shape will change in the future.
The landscape image database construction program of the present invention causes a computer to execute: a first imaging step of capturing a landscape image; a position information acquisition step of acquiring current position information; a first feature point extraction step of extracting feature quantities of feature points from the landscape image acquired in the first imaging step; and a future variation region extraction step of extracting, from the landscape image acquired in the first imaging step, regions corresponding to objects whose real-world position or shape will change in the future.
As described above, according to the present invention, it is possible to provide a positioning system that enables the positioning process on a positioning target moving body to be performed with a smaller amount of computation.
FIG. 1 is a diagram explaining the present invention. FIG. 2 shows an example of the description of the landscape image database. FIG. 3 shows an example of an image ahead of the moving body acquired by the first imaging module 101. FIG. 4 shows an image acquired by the first imaging module 101 and the positions of feature points extracted from it. FIG. 5 shows the positional relationship between the four points at which the data recorded in the landscape image database 104 were collected and the current position of the positioning-target mobile-mounted device 110. FIG. 6 shows an example of the certainty of the matching process for the four points, the position information of the points, and the relative position (direction) information obtained by the matching process. FIG. 7 is a flowchart of the database-construction mobile-mounted device. FIG. 8 is a flowchart of the positioning-target mobile-mounted device. FIG. 9 is a diagram explaining the second embodiment of the present invention. FIG. 10 is a flowchart of the positioning-target mobile-mounted device in the second embodiment. FIG. 11 is a diagram explaining the third embodiment of the present invention. FIG. 12 is a flowchart of the database-construction mobile-mounted device in the third embodiment.
FIG. 13 is a flowchart of the server device in the third embodiment. FIG. 14 is a diagram explaining the fourth embodiment of the present invention. FIG. 15 is a flowchart of the server device in the fourth embodiment. FIG. 16 is a flowchart of the positioning-target mobile-mounted device in the fourth embodiment. FIG. 17 shows another flowchart of the server device in the fourth embodiment. FIG. 18 shows the hardware configuration of the database-construction mobile-mounted device in the fourth embodiment. FIG. 19 shows the hardware configuration of the positioning-target mobile-mounted device in the fourth embodiment. FIG. 20 shows the hardware configuration of the database-construction mobile-mounted device in the fourth embodiment. FIG. 21 shows the hardware configuration of the server device in the fourth embodiment. FIG. 22 shows the hardware configuration of the server device in the fourth embodiment. FIG. 23 shows the hardware configuration of the positioning-target mobile-mounted device in the fourth embodiment. FIG. 24 shows the configuration of the database-construction moving body in the fifth embodiment.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
(First Embodiment)
A first embodiment of the present invention will be described in detail with reference to the drawings.
Referring to FIG. 1, the first embodiment of the present invention includes a database-construction mobile-mounted device 100 and a positioning-target mobile-mounted device 110. The database-construction moving body is, for example, an automobile or robot dedicated to database construction. The positioning-target moving body is, for example, a private or commercial vehicle, or a robot equipped with means of locomotion such as wheels or legs.
The database-construction mobile-mounted device 100 includes a first imaging module 101 that captures landscape images, and a position information acquisition module 102 that acquires the current position of the database-construction moving body.
The database-construction mobile-mounted device 100 further includes a first feature point extraction module 103 that extracts feature quantities of feature points from the landscape images acquired by the first imaging module 101, and a landscape image database 104 that stores the extracted feature quantities in association with the position information. The feature points and feature quantities extracted by the first feature point extraction module 103 are, for example, the well-known SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features) features.
The database-construction mobile-mounted device 100 further includes a future variation region extraction module 105 that extracts, from the landscape images captured by the imaging module, regions whose position or shape is likely to change in the near future. Such regions contain not only people or vehicles that are currently moving, but also people or vehicles that are only temporarily stationary and may move after a certain time.
The database-construction mobile-mounted device 100 further includes a landscape image database construction module 106 with the following function: it stores in the landscape image database the feature quantities of the feature points extracted outside the regions extracted by the future variation region extraction module 105, in association with the current position information.
The positioning-target mobile-mounted device 110 includes a second imaging module 111 that captures landscape images, and a second feature point extraction module 112 that extracts feature quantities of feature points from the landscape images acquired by the second imaging module. It further includes an image matching / position identification module 113 and a landscape image database 114. The image matching / position identification module 113 matches the feature points extracted by the second feature point extraction module 112 against the feature point information in the landscape image database 104 to identify the current position of the positioning-target mobile-mounted device 110.
First, the configuration of the database-construction mobile-mounted device 100 will be described in detail.
The first imaging module 101 consists of, for example, an in-vehicle camera, and captures landscape images. For example, while the database-construction moving body travels a predetermined road course, it continuously captures landscape images ahead of the vehicle.
The position information acquisition module 102 consists of a high-precision positioning module using, for example, RTK-GPS (Real-Time Kinematic GPS) and vehicle speed pulse information mounted on the database-construction moving body. It acquires the position of the point at which the first imaging module 101, that is, the database-construction moving body, captured each image. The position information is expressed, for example, by two numerical values: latitude and longitude.
The first feature point extraction module 103 extracts feature points from the landscape images acquired by the first imaging module 101, and outputs the coordinate position of each feature point in the image and its feature quantity as feature point information. The extracted feature points and feature quantities can be, for example, the well-known SIFT or SURF features.
The landscape image database 104 stores the feature point information extracted from each landscape image in association with the position at which that image was captured.
An example of the database description is shown in FIG. 2. A record 201 consists of shooting position information 202 for a landscape image captured at one time, comprising latitude 203 and longitude 204, together with feature point information 205. The feature point information 205 consists of a feature point count 206 indicating the total number of feature points extracted, and a set of pairs of the coordinate position 207 of each feature point in the image and the image feature quantity 208 in the neighborhood of that feature point.
The image feature quantity 208 may be, for example, a SURF feature, expressed as a 64-dimensional vector quantifying the direction and strength of pixel value changes within a local region. One record is created for each processed landscape image. In the present embodiment, the landscape image database 104 is initially constructed in the database-construction mobile-mounted device 100, and is then copied into the positioning-target mobile-mounted device 110 described later.
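The record layout of FIG. 2 can be expressed, for illustration only, as a simple data structure. The field names below are assumptions introduced here, not notation from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FeaturePoint:
    xy: Tuple[float, float]      # coordinate position in the image (207)
    descriptor: List[float]      # e.g. a 64-dimensional SURF vector (208)

@dataclass
class LandscapeRecord:
    latitude: float              # shooting position information (203)
    longitude: float             # shooting position information (204)
    points: List[FeaturePoint] = field(default_factory=list)

    @property
    def n_points(self) -> int:   # feature point count (206)
        return len(self.points)

# One record for one landscape image captured at one position.
rec = LandscapeRecord(35.6812, 139.7671,
                      [FeaturePoint((120.0, 45.0), [0.0] * 64)])
```

One such record per processed image, keyed by the shooting position, is all the matching stage described later needs.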
The future variation region extraction module 105 extracts, from the images acquired by the first imaging module 101, the image regions corresponding to objects whose real-world position or shape is likely to vary in the future. In outdoor environments, vehicles and pedestrians are typical such objects.
The future variation region extraction module 105 extracts the regions corresponding to vehicles and pedestrians by image recognition techniques, specifically existing person detection, vehicle detection, or generic object detection methods.
Note that the regions extracted by the future variation region extraction module 105 are not merely the regions of objects that are moving at the time the first imaging module 101 acquired the image, but the regions of objects, such as people and vehicles, that are capable of moving. In other words, regions corresponding to people or vehicles that are stationary at image acquisition time are also extracted by this module.
An example of the regions extracted by the future variation region extraction module 105 is described with reference to FIG. 3. FIG. 3 shows an example of an image ahead of the moving body acquired by the first imaging module 101, containing a road surface region 301, a building region 302, and a parked vehicle region 303. In this case, the future variation region extraction module 105 extracts the parked vehicle region 303, which contains a parked vehicle that may move in the future.
The landscape image database construction module 106 takes the feature point information of the feature points extracted by the first feature point extraction module 103, excluding those extracted from the vehicle and pedestrian regions extracted by the future variation region extraction module 105, and records it in the landscape image database 104 in association with the position information of the database-construction moving body acquired by the position information acquisition module 102. FIG. 4 illustrates this feature point selection: it shows an image acquired by the first imaging module 101 and the positions of the feature points extracted from it, marked by stars. Information on the feature points 402, excluding the feature points 401 corresponding to the parked vehicle region 303 extracted by the future variation region extraction module 105, is recorded in the landscape image database 104.
Next, the configuration of the positioning-target mobile-mounted device 110 will be described.
The second imaging module 111 is mounted on the positioning-target moving body and continuously captures landscape images.
The second feature point extraction module 112 extracts feature points from the landscape images acquired by the second imaging module 111, together with their coordinate positions in the image and image feature quantities. The same extraction algorithm as in the first feature point extraction module 103 may be used.
The image matching / position identification module 113 matches the feature point information extracted by the second feature point extraction module 112 against the feature point information in the landscape image database 104 to identify the current position of the positioning-target moving body.
The specific feature point matching method and position identification method are described in detail below.
First, pairs of feature points with similar feature quantities are extracted from the feature points extracted by the second feature point extraction module 112 and the feature points stored in one record of the landscape image database 104, that is, those associated with one location. Since a feature quantity is expressed as a vector, one can, for example, compute the norm of the difference between the feature vector of a point extracted by the second feature point extraction module 112 and that of a point in the landscape image database 104, judge the two points similar when the norm is at or below a threshold, and extract them as a corresponding pair.
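The matching rule just described, pairing feature points whose descriptor difference has a small norm, can be sketched as follows. The threshold value and 2-dimensional descriptors are illustrative assumptions; in practice the descriptors would be, for example, 64-dimensional SURF vectors.

```python
import math

def match_features(query, database, threshold=0.4):
    """Pair each query descriptor with every database descriptor whose
    Euclidean distance (the norm of the vector difference) is at or
    below the threshold, as in the matching step described above."""
    pairs = []
    for qi, q in enumerate(query):
        for di, d in enumerate(database):
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, d)))
            if dist <= threshold:
                pairs.append((qi, di))
    return pairs

q = [[1.0, 0.0], [0.0, 1.0]]      # descriptors from the current image
db = [[0.9, 0.1], [5.0, 5.0]]     # descriptors from one database record
pairs = match_features(q, db)     # only q[0] and db[0] are close enough
```

The resulting pair list is the input to the sampling step that follows.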
After all pairs of feature points with similar feature quantities have been extracted, eight pairs are sampled at random from them. An algorithm known as the eight-point method then yields an estimate of the relative position of the moving body with respect to the moving-body position associated with the record in the landscape image database 104. Using the information of all similar feature point pairs, the certainty of the estimated relative position can also be computed.
For the relative positional relationship to be computed accurately, however, all eight pairs must be correct correspondences. Since incorrect correspondences may be included among the pairs, the selection of eight pairs is changed many times, the relative position and its certainty are computed each time, and the relative position giving the maximum certainty is selected.
The above is the matching against one record selected from the landscape image database 104. Matching is also performed against the feature points of the other records, and the results for roughly one to three records with high estimate certainty are selected. The current position relative to the points where the records' data were captured is then computed according to the principle of triangulation, and the current position of the moving body is determined by adding the absolute position information of the points where the data in the landscape image database 104 were captured.
A concrete example is described with reference to FIGS. 5 and 6. FIG. 5 schematically shows four points (501, 502, 503, 504) at which the data recorded in the landscape image database 104 were collected and the current position 507 of the positioning-target moving body. FIG. 6 shows the certainty of matching the feature point data of points 501 to 504 against the feature points extracted at the current time, the position information of points 501 to 504, and the bearing values, that is, the relative positional relationships obtained by the matching process.
The raw relative position information obtained by the eight-point method consists of three-dimensional relative bearing or absolute distance information of the moving body, together with the three-dimensional rotation of the imaging module's optical axis between the time the landscape image database 104 was created and the current positioning-target moving body. FIG. 6, however, simply shows only the relative bearing information in the horizontal plane.
In the case of FIGS. 5 and 6, the judgment can be based, for example, on the relative position information extracted at points 502 and 503, where relatively high certainty was obtained. FIGS. 5 and 6 show the angle 505 of the current position with respect to the optical axis at point 502 and the angle 506 of the current position with respect to the optical axis at point 503, both estimated by the image matching / position identification module 113. That is, the point lying on the 80-degree rightward angle 505 from point 502 and also on the 45-degree rightward angle 506 from point 503 (the point indicated by 507 in FIG. 5) can be determined geometrically to be the current position of the positioning-target moving body.
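The geometric step of locating a point from two bearings can be illustrated by intersecting two rays from known database positions. The coordinates and angles below are arbitrary examples, not the values of FIG. 5 or FIG. 6.

```python
import math

def intersect_bearings(p1, theta1, p2, theta2):
    """Locate the point seen from two known positions p1 and p2 along
    bearings theta1 and theta2 (radians, measured from the x-axis).
    Solves p1 + t1*u1 = p2 + t2*u2 for the ray parameters t1, t2."""
    u1 = (math.cos(theta1), math.sin(theta1))
    u2 = (math.cos(theta2), math.sin(theta2))
    # 2x2 determinant of the system [u1, -u2] [t1; t2] = p2 - p1.
    det = u1[0] * (-u2[1]) - (-u2[0]) * u1[1]
    if abs(det) < 1e-12:
        return None  # parallel bearings: no unique intersection
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (bx * (-u2[1]) - (-u2[0]) * by) / det
    return (p1[0] + t1 * u1[0], p1[1] + t1 * u1[1])

# Rays from (0, 0) at 45 degrees and from (10, 0) at 135 degrees.
pos = intersect_bearings((0.0, 0.0), math.radians(45),
                         (10.0, 0.0), math.radians(135))
```

Two high-certainty records are enough for a unique fix, as in the example of points 502 and 503; a third bearing would over-determine the position and could be used as a consistency check.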
Next, the operation of the first embodiment will be described in detail with reference to the drawings. FIGS. 7 and 8 are flowcharts showing the operation of the database-construction mobile-mounted device 100 and the positioning-target mobile-mounted device 110, respectively.
First, the operation of the database-construction mobile-mounted device 100 is described with reference to FIG. 7.
The first imaging module 101 captures an image ahead of the moving body (step S701). The position information acquisition module 102 acquires the current, accurate position of the moving body carrying the database-construction mobile-mounted device 100 (step S702). The first feature point extraction module 103 then extracts feature points from the image captured by the first imaging module 101 (step S703). The future variation region extraction module 105 further extracts the regions corresponding to vehicles and people from the captured image (step S704).
The landscape image database construction module 106 then extracts, from the feature points extracted by the first feature point extraction module 103, only those that do not belong to the vehicle and person regions extracted by the future variation region extraction module 105, and stores their positions and feature quantities in the landscape image database 104 in association with the position information acquired by the position information acquisition module 102 (step S705).
Steps S701 to S705 are repeated each time the first imaging module 101 acquires a new image. Step S702 is most preferably executed in synchronization with step S701.
Next, the operation of the positioning-target mobile-mounted device 110 is described with reference to the flowchart of FIG. 8.
The second imaging module 111 captures an image ahead of the positioning-target mobile-mounted device 110 (step S801). The second feature point extraction module 112 extracts feature points from the captured image (step S802).
The image matching / position identification module 113 matches the feature points extracted by the second feature point extraction module 112 against the feature point information in the landscape image database 104, and identifies and outputs the current position of the positioning-target moving body (step S803).
The database-construction mobile-mounted device 100 described above consists of an ECU (Electronic Control Unit) 1801, an in-vehicle camera 1802, a hard disk 1803, and a high-precision GPS 1804, as shown in FIG. 18, with the modules described above mounted and operated on this hardware. The ECU 1801 controls the entire device and consists of, for example, a CPU (Central Processing Unit), RAM (Random Access Memory), ROM (Read-Only Memory), a signal processing circuit, and a power supply circuit. That is, apart from the functions of the imaging module and the position information acquisition module, the embodiment described above is realized by loading into the ECU 1801 and executing a computer program that implements the flowchart functions and decision logic referred to in its description. The functions executed by the ECU 1801 may also be implemented in hardware to form a microcontroller, or some functions may be implemented in hardware and the same functions realized by cooperative operation of that hardware and software programs. The hard disk 1803 stores the landscape image database; a storage medium other than a hard disk, such as flash memory, may be used without problem.
The positioning-target mobile-mounted device 110 described above consists of an ECU (Electronic Control Unit) 1901, an in-vehicle camera 1902, and a hard disk 1903, as shown in FIG. 19, with the modules described above mounted and operated on this hardware. The ECU 1901 controls the entire device and consists of, for example, a CPU, RAM, ROM, a signal processing circuit, and a power supply circuit. That is, apart from the function of the second imaging module, the embodiment described above is realized by loading into the ECU 1901 and executing a computer program that implements the flowchart functions and decision logic referred to in its description. The functions executed by the ECU 1901 may also be implemented in hardware to form a microcontroller, or some functions may be implemented in hardware and the same functions realized by cooperative operation of that hardware and software programs. A copy of the landscape image database constructed by the database-construction moving body may be stored in the hard disk 1903, or, as another configuration example, in the ROM within the ECU 1901 without using a hard disk.
According to the present embodiment, the landscape image database 104 is constructed excluding particular feature points from those extracted by the first feature point extraction module 103: namely, feature points belonging to vehicles and pedestrians, whose future movement would reduce the efficiency of image matching in the image matching / position identification module of the positioning-target mobile-mounted device 110. The image matching / position identification module 113 can therefore match images correctly with fewer RANSAC trials.
The above embodiment was described assuming that the images acquired by the first imaging module are images in the visible wavelength range, such as color or monochrome images. However, the first imaging module may be a device, such as a multispectral camera, that can also acquire images in wavelength bands outside the visible range.
In that case, the future variation region extraction module 105 may additionally extract plants and the like. Many feature points are extracted from the leaf regions of roadside trees, but since the shape of a roadside tree changes with wind and growth and the feature point positions move, such points become a noise factor in image matching.
It is not necessarily easy to accurately identify regions corresponding to plants such as roadside trees from visible-wavelength information alone. However, since chlorophyll is known to reflect light in the near-infrared band strongly, regions corresponding to plants can be extracted comparatively easily using multispectral images that include near-infrared wavelength information.
Specifically, for example, regions whose reflection intensity in the near-infrared band is at or above a fixed threshold may be extracted. Fibers such as clothing also often show high reflection intensity in the near-infrared band, but since pedestrians are also detection targets of the future variation region extraction module 105, and there is no need to distinguish plant regions from pedestrian regions, this poses no functional problem.
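The near-infrared thresholding just described can be sketched as a per-pixel mask. The reflectance values and the 0.5 threshold below are illustrative assumptions; a deployed system would calibrate the threshold to the sensor.

```python
def plant_mask(nir_image, threshold=0.5):
    """Mark pixels whose near-infrared reflectance is at or above a
    fixed threshold as candidate vegetation (or clothing) regions, as
    described above. nir_image is a 2-D list of reflectance values
    normalized to [0, 1]."""
    return [[1 if v >= threshold else 0 for v in row] for row in nir_image]

nir = [[0.8, 0.2],
       [0.6, 0.1]]
mask = plant_mask(nir)   # left column (foliage-like reflectance) is masked
```

Feature points falling inside the mask would simply be dropped before database registration, exactly as with the vehicle and pedestrian regions.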
In the first embodiment of the present invention, the future variation region extraction module 105 extracts, from the images used to construct the landscape image database, vehicles and people whose location is likely to move in the future, as well as regions corresponding to plants such as roadside trees whose shape changes as they grow.
Because the landscape image database 104 is then constructed using only the information on feature points extracted outside these regions, the probability of selecting a set of correctly corresponding feature points in the RANSAC process increases, and the number of RANSAC iterations can be reduced. RANSAC is an abbreviation of RANdom SAmple Consensus.
As a result, in a positioning system that identifies the current position of a positioning-target moving body by matching feature point positions against a database, positioning can be performed with a smaller amount of computation.
The computation reduction effect is explained in detail. Suppose that all the feature points extracted by the first feature point extraction module 103 were stored in the landscape image database 104, that 100 of them were matched to feature points in the landscape image database 104, and that 60% of those matches are correct.
If the current position can be correctly identified when all 8 pairs randomly sampled from the 100 pairs are correct, the probability of correct positioning in one RANSAC trial is approximately (0.6)^8 ≈ 1.7%. In expectation, therefore, about 60 trials are needed for one correct positioning.
In contrast, if the landscape image database is constructed excluding the feature points corresponding to people, vehicles, roadside trees, and the like, the proportion of correctly corresponding pairs improves; if this proportion is 0.8, the probability of correct positioning in one trial is approximately (0.8)^8 ≈ 17%, so in expectation one in 6 trials succeeds. In this example, the computation required for image matching for positioning on the positioning-target moving body is thus reduced to about one tenth.
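The trial-count arithmetic above can be checked directly: a sample of 8 pairs must be all-correct, so the per-trial success probability is the inlier ratio raised to the 8th power, and the expected number of trials is its reciprocal (the mean of a geometric distribution).

```python
def expected_ransac_trials(inlier_ratio, sample_size=8):
    """Expected number of RANSAC iterations until one sample contains
    only correct correspondences: 1 / (inlier_ratio ** sample_size)."""
    p_success = inlier_ratio ** sample_size
    return 1.0 / p_success

t60 = expected_ransac_trials(0.6)   # about 60 trials at 60% inliers
t80 = expected_ransac_trials(0.8)   # about 6 trials at 80% inliers
speedup = t60 / t80                 # roughly a tenfold reduction
```

This reproduces the figures in the text: (0.6)^8 ≈ 1.7% and (0.8)^8 ≈ 17% per-trial success, hence roughly 60 versus 6 expected trials.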
In practice, this reduction in computation translates into a lower cost for the positioning-target mobile-mounted device, since less computation allows the device to be built with a cheaper embedded processor.
In this first embodiment, the future variation region extraction module 105, which extracts regions corresponding to vehicles, people, and plants, is applied specifically to the database-construction mobile-mounted device 100 for the following reasons. The database-construction mobile-mounted device 100 is assumed to be one of a small number of special moving bodies that can be equipped with expensive, high-performance devices, so the additional computation and price increase due to the module are easier to accept. Moreover, in the configuration described later in which the landscape image database is constructed on a server device, there is no real-time constraint on the database construction processing, so the increased computation due to the module is not a problem either.
(Second Embodiment)
A second embodiment of the present invention is described with reference to FIG. 9. Referring to FIG. 9, in addition to the configuration of the first embodiment shown in FIG. 1, the positioning-target mobile-mounted device 910 of this embodiment includes an approximate position acquisition module 901. The functions other than the approximate position acquisition module 901 and the image matching / position identification module 113 are the same as in the first embodiment shown in FIG. 1.
The approximate position acquisition module 901 consists of an inexpensive GPS or a map matching mechanism, and acquires the current approximate position of the positioning-target moving body.
The image matching / position identification module 113 matches the feature point information extracted by the second feature point extraction module 112 against the landscape image database 104 to determine the current position of the moving body with higher accuracy than the approximate position. In this embodiment, however, only the records in the landscape image database 104 that are associated with positions near the approximate position acquired by the approximate position acquisition module 901 are used for matching. This limits the data to be matched and reduces the computation required for matching.
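Restricting matching to records near the approximate position can be sketched as a simple distance filter. The record format, the 100 m radius, and the equirectangular distance approximation are assumptions introduced for illustration.

```python
import math

def records_near(records, approx_lat, approx_lon, radius_m=100.0):
    """Keep only the database records whose shooting position lies
    within radius_m of the approximate position. Uses an
    equirectangular approximation, adequate over short distances."""
    lat0 = math.radians(approx_lat)
    out = []
    for rec in records:
        dlat = math.radians(rec["lat"] - approx_lat)
        dlon = math.radians(rec["lon"] - approx_lon) * math.cos(lat0)
        dist = 6371000.0 * math.hypot(dlat, dlon)  # mean Earth radius in m
        if dist <= radius_m:
            out.append(rec)
    return out

db = [{"lat": 35.0000, "lon": 135.0000},
      {"lat": 35.0005, "lon": 135.0000},   # roughly 55 m north
      {"lat": 35.0100, "lon": 135.0000}]   # roughly 1.1 km north
near = records_near(db, 35.0, 135.0)
```

Only the surviving records are passed to the feature matching stage, so the expensive per-record RANSAC loop runs on a small neighborhood instead of the whole database.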
Next, the operation of the second embodiment is described in detail with reference to the drawings. Since the operation of the database-construction mobile-mounted device 100 is the same as in FIG. 7, the operation of the positioning-target mobile-mounted device is described in detail with reference to the flowchart of FIG. 10.
First, the second imaging module 111 captures an image ahead of the positioning-target moving body (step S1001). The second feature point extraction module 112 extracts feature points from the image acquired in step S1001 (step S1002). The approximate position acquisition module 901 acquires the current approximate position of the positioning vehicle (step S1003). The image matching / position identification module 113 matches the feature points extracted in step S1002 using only the feature point information in the landscape image database narrowed down by the approximate position information obtained in step S1003, and identifies and outputs the accurate current position of the moving body.
According to the second embodiment, the feature point information in the landscape image database against which the image matching / position identification module performs matching can be narrowed down, which has the effect of reducing the computation required for matching.
(Third Embodiment)
A third embodiment of the present invention is described with reference to FIG. 11. Referring to FIG. 11, the third embodiment differs in the following respects. Compared with the first embodiment shown in FIG. 1, the modules of the database-construction mobile-mounted device 100 are divided between a new database-construction mobile-mounted device 1100 and a server device 1120. Furthermore, as a mechanism for exchanging landscape images and position information between the database-construction mobile-mounted device 1100 and the server device 1120, a position/image data recording module 1102 and position/image data storage means 1103 are added.
First, the database-construction mobile-mounted device 1100 of the third embodiment is described.
The first imaging module 101 and the position information acquisition module 102 are the same as in the embodiment shown in FIG. 1.
The position/image data storage means 1103 holds the image information and position information acquired at the same time by the first imaging module 101 and the position information acquisition module 102, stored in association with each other.
The position/image data recording module 1102 is a module that records the image and position information acquired at the same time by the first imaging module 101 and the position information acquisition module 102 in the position/image data storage means 1103, in association with each other.
Next, the server device 1120 is described.
The first feature point extraction module 103 extracts feature points from the landscape images recorded in the position/image data storage means 1103, and outputs their positions in the image and the feature quantities of the feature points.
The landscape image database 104 is the same as in the embodiment shown in FIG. 1. It is first constructed on the server and, after construction, is copied and installed on the positioning vehicle.
The future variation region extraction module 105 extracts, from the landscape images in the position/image data storage means 1103 referred to by the first feature point extraction module 103, the regions corresponding to vehicles, pedestrians, and plants whose position or shape may vary in the future.
The landscape image database construction module 106 records particular feature points among those extracted by the first feature point extraction module 103 in the landscape image database 104, in association with the position information acquired by the position information acquisition module 102. The particular feature points are those remaining after excluding the feature points extracted from the vehicle, pedestrian, and plant regions extracted by the future variation region extraction module 105.
The positioning-target mobile-mounted device 110 is the same as in the embodiment shown in FIG. 1.
Next, the operation of the third embodiment is described in detail with reference to the drawings. Since the operation of the positioning-target mobile-mounted device 110 is the same as the flowchart of FIG. 8, its description is omitted. The operations of the database-construction mobile-mounted device 1100 and the server device 1120 are shown here.
First, the operation of the database-construction mobile-mounted device 1100 is shown in the flowchart of FIG. 12.
The first imaging module 101 captures an image ahead of the moving body (step S1201). The position information acquisition module 102, in synchronization with step S1201, acquires the current accurate position of the moving body (step S1202). The position/image data recording module 1102 records the image and position information acquired in steps S1201 and S1202 in the position/image data storage means 1103, in association with each other (step S1203).
Next, the operation of the server device 1120 is described with reference to the flowchart of FIG. 13.
First, the first feature point extraction module 103 extracts feature points from the image data of the position/image data generated by the database-construction mobile-mounted device 1100, together with their positions in the image and their feature quantities (step S1301). Next, from the same image data as that referred to in step S1301, the future variation region extraction module 105 extracts the regions corresponding to vehicles, pedestrians, plants, and the like (step S1302).
The landscape image database construction module 106 then removes, from the feature points extracted in step S1301, the points corresponding to the regions extracted in step S1302. It constructs the landscape image database by associating the feature information of the remaining feature points (their positions in the image and their feature quantities) with the shooting position information stored in association with the image data referred to in step S1301 (step S1303).
The database-construction mobile-mounted device 1100 described above consists of an image recording device 2001, an in-vehicle camera 2002, a hard disk 2003, and a high-precision GPS 2004, as shown in FIG. 20, with the modules described above mounted and operated on this hardware.
The image recording device 2001 controls the entire device and consists of, for example, a CPU, RAM, ROM, a signal processing circuit, and a power supply circuit. That is, apart from the functions of the imaging module and the position information acquisition module, this embodiment is realized by loading into the image recording device 2001 and executing a computer program that implements the flowchart functions and decision logic referred to in its description. The functions executed by the image recording device 2001 may also be implemented in hardware to form a microcontroller, or some functions may be implemented in hardware and the same functions realized by cooperative operation of that hardware and software programs.
The server device 1120 described above consists of a computer 2101 as shown in FIG. 21, with the modules described above mounted and operated on it. The computer 2101 consists of, for example, a CPU, RAM, ROM, a signal processing circuit, a power supply circuit, and a hard disk. That is, this embodiment is realized by loading into the computer 2101 and executing a computer program that implements the flowchart functions and decision logic referred to in its description. The functions executed by the computer 2101 may also be implemented in hardware to form a microcontroller, or some functions may be implemented in hardware and the same functions realized by cooperative operation of that hardware and software programs. The landscape image database may exchange data with the database-construction moving body via the hard disk in the computer 2101.
The device mounted on the positioning vehicle may have the same configuration as in FIG. 19.
According to this embodiment, the image and position information used for landscape image database construction are first accumulated as position/image data, which is then moved to the server device 1120 for processing. By strengthening the computing power of the server device 1120, the relatively computation-heavy processing of the future variation region extraction module 105 can therefore be performed faster. This embodiment, in which computing power is easy to increase, is particularly suitable when, for example, the number of region types extracted by the future variation region extraction module 105 is increased.
(Fourth Embodiment)
A fourth embodiment of the present invention is described with reference to FIG. 14. Referring to FIG. 14, compared with the configuration shown in FIG. 11, the server device 1420 further includes a server download module 1401. Furthermore, the positioning-target mobile-mounted device 1410 includes an approximate position acquisition module 901 that acquires the current approximate position, and a positioning-target mobile download module 1402 that can communicate with the server device 1420 to obtain part of the landscape image database.
The functions of the database-construction mobile-mounted device 1400 and the server device 1420 are almost the same as in the example of FIG. 11; only the server download module 1401 differs.
When the server download module 1401 receives approximate position information and a landscape image database download request from the positioning-target mobile download module 1402 of the positioning-target mobile-mounted device 1410, it operates as follows: it extracts, from the landscape image database stored in the server, the data of the records generated near that approximate position, and transmits them to the positioning-target mobile download module 1402.
Next, the functions of the positioning-target mobile-mounted device 1410 are described.
The functions of the second imaging module 111 and the second feature point extraction module 112 are the same as in the embodiment of FIG. 11.
The positioning-target mobile download module 1402 sends the approximate position information acquired by the approximate position acquisition module 901 and a landscape image data request message to the server download module 1401 of the server device 1420, and then receives the corresponding landscape image data from the server download module 1401.
The image matching / position identification module 113 matches the feature point information extracted by the second feature point extraction module 112 against the landscape image data received from the server download module 1401 to determine the accurate current position of the moving body.
Next, the operation of this embodiment is described. The operation of the database-construction mobile-mounted device is the same as the flowchart shown in FIG. 12.
The operation of the server device 1420 is described with reference to the flowchart of FIG. 15. Steps S1501 to S1503 are the same as steps S1201 to S1203.
The server download module 1401 operates only when there is a landscape image database request from the positioning-target mobile download module 1402 of the positioning-target mobile-mounted device 1410: it extracts the corresponding landscape image data and transmits it (step S1504).
Next, the operation of the positioning-target mobile-mounted device 1410 is described with reference to the flowchart of FIG. 16.
First, the second imaging module 111 captures a landscape image (step S1601). The second feature point extraction module 112 outputs the feature quantities of the feature points in the image captured in step S1601 (step S1602). The approximate position acquisition module 901 acquires the approximate position of the positioning-target mobile-mounted device 1410 (step S1603).
The positioning-target mobile download module 1402 communicates with the server download module 1401 of the server device 1420 and receives the landscape image data corresponding to the approximate position (step S1604).
Finally, the image matching / position identification module 113 matches the landscape image data received by the positioning-target mobile download module 1402 against the feature quantities of the feature points extracted by the second feature point extraction module 112, and then determines the current position of the moving body (step S1605).
The server device 1420 described above consists of a computer 2201 and communication equipment 2202 as shown in FIG. 22, with the modules described above mounted and operated on this hardware. The computer 2201 consists of, for example, a CPU, RAM, ROM, a power supply circuit, and a hard disk. That is, apart from the function of the server download module 1401, this embodiment is realized by loading into the computer 2201 and executing a computer program that implements the flowchart functions and decision logic referred to in its description. The functions executed by the computer 2201 may also be implemented in hardware to form a microcontroller, or some functions may be implemented in hardware and the same functions realized by cooperative operation of that hardware and software programs. The communication equipment 2202 is hardware for wireless LAN (Local Area Network) or mobile phone data communication that carries out the function of the server download module 1401.
The positioning-target mobile-mounted device 1410 described above consists of an ECU (Electronic Control Unit) 2301, a camera 2302, a hard disk 2303, a GPS 2304, and communication equipment 2305, as shown in FIG. 23, with the modules described above mounted and operated on this hardware. The ECU 2301 controls the entire device and consists of, for example, a CPU, RAM, ROM, a signal processing circuit, and a power supply circuit. That is, this embodiment is realized by loading into the ECU 2301 and executing a computer program that implements the flowchart functions and decision logic referred to in its description. The functions executed by the ECU 2301 may also be implemented in hardware to form a microcontroller, or some functions may be implemented in hardware and the same functions realized by cooperative operation of that hardware and software programs.
A copy of the landscape image database constructed by the database-construction moving body may be stored in the hard disk 2303, or, as another configuration example, in the ROM within the ECU 2301 without using a hard disk. The communication equipment 2305 is hardware for wireless LAN or mobile phone data communication that communicates with the server download module 1401.
According to this embodiment, since the landscape image database need not be held in the positioning-target mobile-mounted device 1410, the capacity of the magnetic disk mounted in the device can be reduced. Furthermore, since the landscape image database is stored in the server device 1420, it has the advantage of being easy to update; if each positioning-target mobile-mounted device 1410 held the landscape image database, the copy held in every such device would have to be updated.
In the flowchart of the server device 1420 above, an example was described in which steps S1501 to S1504 are executed as one sequence, but the construction of the landscape image database and the transmission of landscape image data need not be performed at the same time. That is, depending on the operation mode specified by the user, either a landscape image database construction mode or a landscape image data transmission mode may be executed, as shown in FIG. 17.
(Fifth Embodiment)
Next, a fifth embodiment for carrying out the present invention is described.
FIG. 24 shows the landscape image database of the fifth embodiment of the present invention.
The landscape image database 2401 of this embodiment stores a plurality of landscape image data items in association with the image acquisition positions at which they were collected.
Furthermore, in the landscape image database 2401 of this embodiment, each of the plurality of landscape image data items contains feature quantities of feature points for things other than those whose real-world position or shape is unlikely to be maintained unchanged for a predetermined period or longer.
The fifth embodiment described above provides the following effect: by identifying in advance the feature points for which correct correspondence is likely to be difficult and not storing them in the landscape image database, it provides a landscape image database for a positioning system that enables the positioning process on a positioning-target moving body to be performed with a smaller amount of computation.
Although dedicated devices have been assumed in the embodiments described so far, the following is also possible: for example, a board or card that performs the processing of this example may be installed in a personal computer that performs various data processing, and each process executed on the computer side. In this way, the software that executes the processing may be installed on and executed by a personal computer.
The program installed in a data processing device such as a personal computer may be distributed via various recording (storage) media such as optical disks and memory cards, or via communication means such as the Internet.
The above embodiments can each be combined with the other embodiments.
Although the present invention has been described above with reference to the embodiments, the present invention is not limited to the above embodiments. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within its scope.
This application claims priority based on Japanese Patent Application No. 2010-226844 filed on October 6, 2010, the entire disclosure of which is incorporated herein.
Some or all of the embodiments described above can also be described as in the following supplementary notes, but are not limited to the following.
(Supplementary note 1)
A landscape image database in which a plurality of landscape image data items are stored in association with the image acquisition positions at which they were collected,
wherein each of the plurality of landscape image data items contains feature quantities of feature points for things other than those whose real-world position or shape is unlikely to be maintained unchanged for a predetermined period or longer.
(Supplementary note 2)
The landscape image database according to supplementary note 1, wherein the thing whose real-world position or shape is unlikely to be maintained unchanged for a predetermined period or longer is a vehicle.
(Supplementary note 3)
The landscape image database according to supplementary note 1 or 2, wherein the thing whose real-world position or shape is unlikely to be maintained unchanged for a predetermined period or longer is a person.
(Supplementary note 4)
A database construction device comprising:
the landscape image database according to any one of supplementary notes 1 to 3;
first imaging means for capturing a landscape image;
position information acquisition means for acquiring current position information;
first feature point extraction means for extracting feature quantities of feature points from the landscape image acquired by the first imaging means;
future variation region extraction means for extracting, from the landscape image acquired by the first imaging means, regions corresponding to things whose real-world position or shape is unlikely to be maintained unchanged for a predetermined period or longer; and
landscape image database construction means for storing in the landscape image database, in association with the current position information, the feature quantities of the feature points acquired by the first imaging means excluding the feature points extracted from the regions extracted by the future variation region extraction means.
(Supplementary note 5)
The database construction device according to supplementary note 4, wherein the first imaging means captures not only images in the visible wavelength range but also images in wavelength bands outside the visible range, and the future variation region extraction means extracts regions corresponding to plants from the landscape image acquired by the first imaging means.
(Supplementary note 6)
A positioning target device comprising:
the landscape image database according to any one of supplementary notes 1 to 3;
second imaging means for capturing a landscape image; second feature point extraction means for extracting feature quantities of feature points from the landscape image acquired by the second imaging means; and image matching / position identification means for matching the feature quantities of the feature points extracted by the second feature point extraction means against the landscape image database to identify the current position of the positioning target moving body.
(Supplementary note 7)
The positioning target device according to supplementary note 6, further comprising approximate position acquisition means for acquiring the current approximate position of the positioning target device, wherein the image matching / position identification means performs image matching using only the landscape image data, among the landscape image data stored in the landscape image database, that was generated near the approximate position of the positioning target device.
(Supplementary note 8)
A positioning system comprising:
the database construction device according to any one of supplementary notes 1 to 3; and
the positioning target device according to supplementary note 6 or 7,
wherein the landscape image database of the positioning target device has the same content as the landscape image database constructed by the database construction device.
(Supplementary note 9)
The positioning system according to supplementary note 8, comprising a server device having in-server landscape image database download means for, upon receiving position information and a landscape image data transfer request from the positioning target moving body, transmitting the landscape image data in the landscape image database associated with position information near that position,
wherein the positioning target moving body further has approximate position acquisition means for acquiring the approximate position of the positioning target moving body, and on-board landscape image database download means for transmitting the position information and the landscape image data transfer request to the in-server landscape image database download means and acquiring the landscape image data in the landscape image database associated with position information near the approximate position, and
wherein the image matching / position identification means matches the landscape image data acquired by the landscape image database download means against the feature quantities of the feature points extracted by the second feature point extraction means to identify the current position.
(Supplementary note 10)
A database construction method comprising:
capturing a landscape image;
acquiring current position information;
extracting feature quantities of feature points from the landscape image; and
extracting, from the landscape image, regions corresponding to objects whose real-world position or shape will change in the future.
(Supplementary note 11)
The database construction method according to supplementary note 10, comprising:
capturing not only images in the visible wavelength range but also images in wavelength bands outside the visible range; and
extracting regions corresponding to plants from the landscape image.
(Supplementary note 12)
A landscape image database construction program causing a computer provided with the landscape image database according to any one of supplementary notes 1 to 3 to execute:
a first imaging step of capturing a landscape image;
a position information acquisition step of acquiring current position information;
a first feature point extraction step of extracting feature quantities of feature points from the landscape image acquired by the first imaging means; and
a future variation region extraction step of extracting, from the landscape image acquired by the first imaging means, regions corresponding to objects whose real-world position or shape will change in the future.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
(First embodiment)
A first embodiment of the present invention will be described in detail with reference to the drawings.
Referring to FIG. 1, the first embodiment of the present invention includes a database construction mobile mounting device 100 and a positioning target mobile mounting device 110. The database construction mobile is, for example, a car or robot dedicated to database construction. The positioning target moving body is, for example, a private vehicle or a business vehicle, or a robot provided with moving means such as wheels and feet.
The database construction mobile body mounting apparatus 100 includes a first imaging module 101 that captures a landscape image, and a position information acquisition module 102 that acquires the current position of the database construction mobile body.
Furthermore, the database construction mobile body mounting apparatus 100 includes a first feature point extraction module 103 that extracts feature amounts of feature points from a landscape image acquired by the first imaging module 101. Furthermore, the database construction mobile body mounting device 100 includes a landscape image database 104 that stores the feature quantities of the extracted feature points and the positional information in association with each other. The feature points to be extracted by the first feature point extraction module 103 and their feature amounts include, for example, well-known SIFT (Scale-invariant feature transform) features and SURF (Speeded Up Robust Features) features.
Further, the database construction mobile unit mounting apparatus 100 includes a future variation area extraction module 105 that extracts an area whose position or shape is likely to change in the near future from a landscape image captured by the imaging module. Areas that are likely to change in the near future are areas that record not only people and vehicles that are currently moving, but also people and vehicles that are temporary even if they are stationary and can move after a certain period of time. It is.
Furthermore, the database construction mobile unit mounting apparatus 100 includes a landscape image database construction module 106 having the following functions. That is, the landscape image database construction module 106 stores the feature quantities of the feature points extracted from areas other than those extracted by the future variation area extraction module 105 in the landscape image database in association with the current position information.
The positioning target moving body mounting apparatus 110 includes a second imaging module 111 that captures a landscape image, and a second feature point extraction module 112 that extracts feature amounts of feature points from the landscape image acquired by the second imaging module. including. Furthermore, the positioning object mobile unit mounting apparatus 110 includes an image collation / position identification module 113 and a landscape image database 114. The image collation / position identification module 113 collates the feature points extracted by the second feature point extraction module 112 with the feature point information in the landscape image database 104 to identify the current position of the positioning target mobile unit mounting apparatus 110. To do.
First, the configuration of the database construction mobile unit mounting apparatus 100 will be described in detail.
The first imaging module 101 is composed of an in-vehicle camera or the like, and captures landscape images. For example, while the database construction mobile body moves along a predetermined road course, it continuously captures landscape images ahead of the vehicle.
The position information acquisition module 102 includes a highly accurate positioning module, mounted on the database construction mobile body, that uses RTK-GPS (Real-Time Kinematic GPS) and vehicle speed pulse information. The position information acquisition module 102 acquires the position information of the point at which the first imaging module 101, that is, the database construction mobile body, captured an image. The position information is expressed by two numerical values, for example, latitude and longitude.
The first feature point extraction module 103 extracts feature points from the landscape image acquired by the first imaging module 101, and extracts the coordinate positions and feature amounts of the feature points on the image as feature point information. As the feature points and feature amounts extracted by the first feature point extraction module 103, for example, well-known SIFT (Scale-invariant feature transform) features, SURF (Speeded Up Robust Features) features, and the like can be used.
The landscape image database 104 is a database that stores the feature point information extracted from the landscape image and the shooting position of the landscape image in association with each other.
An example of the database contents is shown in FIG. 2. Each record is composed of a record number 201, shooting position information 202 consisting of latitude 203 and longitude 204, and feature point information 205 for a landscape image captured at a certain time. The feature point information 205 includes a feature point number 206 indicating the total number of feature points extracted at that time, and a set of pairs each consisting of the coordinate position 207 of a feature point in the image and the image feature amount 208 near that feature point.
For example, a SURF feature or the like may be used as the image feature amount 208; in that case it is expressed by a 64-dimensional vector value obtained by quantifying the change direction and change intensity of the pixel values in a local region. As many records are created as the number of landscape images processed. In this embodiment, the landscape image database 104 is initially constructed in the database construction mobile body mounting apparatus 100, and is then copied and placed in the positioning target mobile body mounting apparatus 110 described later.
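As a concrete illustration of this record layout, the following is a minimal sketch in Python; the class names `FeaturePoint` and `LandscapeRecord` and the sample coordinates are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FeaturePoint:
    # Coordinate position 207 of the feature point in the image (x, y)
    xy: Tuple[float, float]
    # Image feature amount 208 near the feature point,
    # e.g. a 64-dimensional SURF descriptor vector
    descriptor: List[float]

@dataclass
class LandscapeRecord:
    # Shooting position information 202: latitude 203 and longitude 204
    latitude: float
    longitude: float
    # Feature point information 205; its length plays the role of
    # the feature point number 206
    features: List[FeaturePoint] = field(default_factory=list)

# One record is created per processed landscape image.
record = LandscapeRecord(35.6812, 139.7671,
                         [FeaturePoint((120.0, 88.0), [0.0] * 64)])
```

A real database would hold one such record for every captured image, keyed by the shooting position.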
The future variation area extraction module 105 extracts, from the image acquired by the first imaging module 101, areas on the image corresponding to objects whose position or shape in the real world is likely to change in the future. In an outdoor environment, vehicles and pedestrians are examples of such objects.
The future variation area extraction module 105 extracts areas corresponding to vehicles or pedestrians by an image recognition method or the like; specifically, they can be detected by an existing person detection method, vehicle detection method, or general object detection method.
Note that the areas extracted by the future variation area extraction module 105 are not limited to areas corresponding to objects that are moving at the time the first imaging module 101 acquires the image; they are areas corresponding to movable objects such as people and vehicles. In other words, areas corresponding to people or vehicles that are stationary at the time the first imaging module 101 acquires the image are also extracted by this module.
An example of an area extracted by the future variation area extraction module 105 will be described with reference to FIG. 3. FIG. 3 is a diagram illustrating an example of an image ahead of the moving body acquired by the first imaging module 101, and includes a road surface area 301, a building area 302, and a parked vehicle area 303. In this case, the future variation area extraction module 105 extracts the parked vehicle area 303, which contains a parked vehicle that may move in the future.
The landscape image database construction module 106 removes, from the feature points extracted by the first feature point extraction module 103, the feature points belonging to the vehicle or pedestrian regions extracted by the future variation region extraction module 105, and records the remaining feature point information in the landscape image database 104 in association with the position information of the database construction mobile body acquired by the position information acquisition module 102. The feature point selection by the landscape image database construction module 106 will be described with reference to FIG. 4. FIG. 4 is a diagram illustrating an image acquired by the first imaging module 101 and the positions of the feature points extracted from it; the stars in the figure indicate feature point positions. Information about the feature points 402, excluding the feature points 401 belonging to the parked vehicle region 303 extracted by the future variation region extraction module 105, is recorded in the landscape image database 104.
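The selection step can be sketched as follows. Representing the extracted regions as axis-aligned bounding boxes is a simplifying assumption, and the function name `filter_feature_points` is illustrative.

```python
def filter_feature_points(points, regions):
    """Keep only feature points that fall outside every extracted region.

    points  -- iterable of (x, y) feature-point coordinates
    regions -- iterable of axis-aligned boxes (x_min, y_min, x_max, y_max)
               approximating areas returned by the future variation
               area extraction module
    """
    def inside(p, box):
        x, y = p
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    return [p for p in points if not any(inside(p, box) for box in regions)]

# The point inside the parked-vehicle box (cf. feature points 401) is
# dropped; the others (cf. feature points 402) remain.
kept = filter_feature_points(
    [(10, 10), (50, 60), (200, 120)],   # extracted feature points
    [(40, 40, 100, 100)])               # parked-vehicle bounding box
```

Only the surviving points would then be written to the landscape image database together with the shooting position.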
Next, the configuration of the positioning target moving body mounting apparatus 110 will be described.
The second imaging module 111 is mounted on the positioning target moving body and captures a landscape image every moment.
The second feature point extraction module 112 extracts feature points from the landscape image acquired by the second imaging module 111, and extracts their coordinate positions on the image and image feature amounts. The same feature point extraction algorithm as that of the first feature point extraction module 103 may be used.
The image collation / position identification module 113 collates the feature point information extracted by the second feature point extraction module 112 with the feature point information in the landscape image database 104 to identify the current position of the positioning target moving body.
Hereinafter, a specific matching method of feature points and a position identification method of the positioning target moving body will be described in detail.
First, pairs of feature points with similar feature amounts are extracted from the feature point group extracted by the second feature point extraction module 112 and the feature point group of one record in the landscape image database 104, that is, the feature points stored in association with one point. Since the feature amount of a feature point is expressed by a vector value, specifically, for one feature point extracted by the second feature point extraction module 112 and one feature point in the landscape image database 104, the norm of the difference between the two feature vectors is calculated; if this norm is equal to or less than a threshold, the two feature points are extracted as a corresponding pair.
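A minimal sketch of this matching step, assuming plain Euclidean-norm comparison of descriptor vectors and an exhaustive pairwise search; the threshold value is illustrative.

```python
import math

def match_feature_points(query, database, threshold):
    """Pair query descriptors with database descriptors whose Euclidean
    distance (norm of the difference vector) is at or below threshold."""
    pairs = []
    for qi, q in enumerate(query):
        for di, d in enumerate(database):
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, d)))
            if dist <= threshold:
                pairs.append((qi, di))
    return pairs

# Toy 2-dimensional descriptors; only the first query descriptor has a
# database descriptor within the distance threshold.
pairs = match_feature_points([[1.0, 0.0], [0.0, 1.0]],
                             [[1.1, 0.0], [5.0, 5.0]],
                             threshold=0.2)
```

In practice the descriptors would be the 64-dimensional SURF vectors described above, and an index structure would replace the exhaustive search.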
After all pairs of feature points with similar feature amounts have been extracted, eight pairs are selected at random. An estimate of the relative positional relationship of the moving body with respect to the moving body position associated with the record in the landscape image database 104 is then obtained using an algorithm known as the eight-point method. In addition, the probability of the estimated relative position can be calculated using the information of all pairs of feature points with similar feature amounts.
To obtain the relative positional relationship accurately, all eight pairs of feature points must correspond correctly; however, some of the pairs may contain incorrect correspondences. Therefore, the selection of the eight pairs is changed many times, the relative position and its probability are calculated each time, and the relative position giving the maximum probability is found and selected.
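The repeated-selection procedure described here is the RANSAC scheme. The sketch below implements the generic loop; because a full eight-point relative-pose estimator is beyond this illustration, a simple 2D translation model stands in for the pose model, as noted in the comments.

```python
import random

def ransac(pairs, estimate, score, sample_size=8, trials=200, seed=0):
    """Generic RANSAC loop: repeatedly draw sample_size correspondences
    at random, estimate a model (the eight-point method in the text),
    and keep the model scoring highest over all pairs."""
    rng = random.Random(seed)
    best_model, best_score = None, float("-inf")
    for _ in range(trials):
        sample = rng.sample(pairs, sample_size)
        model = estimate(sample)
        s = score(model, pairs)
        if s > best_score:
            best_model, best_score = model, s
    return best_model, best_score

# Stand-in model for illustration only: a 2D translation between point sets.
def estimate_translation(sample):
    dx = sum(b[0] - a[0] for a, b in sample) / len(sample)
    dy = sum(b[1] - a[1] for a, b in sample) / len(sample)
    return (dx, dy)

def inlier_count(model, pairs, tol=0.5):
    dx, dy = model
    return sum(abs(b[0] - a[0] - dx) <= tol and abs(b[1] - a[1] - dy) <= tol
               for a, b in pairs)

# 20 correct correspondences (translation (3, 1)) plus 2 wrong ones.
true_pairs = [((i, i), (i + 3.0, i + 1.0)) for i in range(20)]
outliers = [((0, 0), (40.0, -7.0)), ((1, 5), (-9.0, 2.0))]
model, n_inliers = ransac(true_pairs + outliers,
                          estimate_translation, inlier_count)
```

Any all-inlier sample recovers the true translation exactly, so the loop converges to it despite the wrong correspondences.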
The above is the collation against one record selected from the landscape image database 104; collation is also performed against the feature points of other records, and the results for roughly one to three records with high estimation probability are selected. The current relative position with respect to the positions where the data in those records were captured is then calculated according to the principle of triangulation, and the current position of the moving body is identified by also taking into account the absolute position information recorded when the data in the landscape image database 104 was captured.
A specific example will be described with reference to FIGS. 5 and 6. FIG. 5 is a diagram schematically showing four points (points 501, 502, 503, and 504) at which data recorded in the landscape image database 104 was collected, and the current position 507 of the positioning target moving body. FIG. 6 is a diagram showing, for each of the points 501 to 504, the probability of the matching process between the feature point data of that point and the feature point data extracted at the current time, the position information of the point, and the azimuth value representing the relative positional relationship obtained by the matching process.
The relative position information obtained by the eight-point method consists of three-dimensional relative azimuth information of the moving body (without absolute distance information) and three-dimensional rotation information between the optical axis of the imaging module when the landscape image database 104 was created and that of the current positioning target moving body. For simplicity, however, FIG. 6 shows only the relative azimuth information on the horizontal plane.
In the case of FIGS. 5 and 6, for example, the determination can be made based on the relative position information extracted at the points 502 and 503, for which relatively high probabilities were obtained. FIGS. 5 and 6 show the angle 505 of the current position with respect to the point 502 and the angle 506, relative to the optical axis, of the current position with respect to the point 503, both estimated by the image collation / position identification module 113. That is, the point (indicated by 507 in FIG. 5) that lies on the bearing 505 of 80 degrees rightward with respect to the point 502 and also on the bearing 506 of 45 degrees rightward with respect to the point 503 can be geometrically determined to be the current position of the positioning target moving body.
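The geometric determination of point 507 from two bearings can be sketched as a ray intersection. The example below assumes, for simplicity, that both database images share one reference direction and that bearings are measured clockwise from it; the sample angles are chosen for a clean result rather than taken from FIG. 6.

```python
import math

def intersect_bearings(p1, theta1, p2, theta2):
    """Intersect two bearing rays, each given by a base point and a
    clockwise angle in degrees from a shared reference direction (+y).
    Returns the intersection point, as in the triangulation step above."""
    def direction(theta):
        r = math.radians(theta)
        return (math.sin(r), math.cos(r))   # clockwise from +y

    d1, d2 = direction(theta1), direction(theta2)
    (x1, y1), (x2, y2) = p1, p2
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (Cramer's rule).
    det = d2[0] * d1[1] - d1[0] * d2[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; position is undetermined")
    t1 = (d2[0] * (y2 - y1) - d2[1] * (x2 - x1)) / det
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])

# Two database points 10 units apart; bearings of 90 and 135 degrees
# to the right of the shared reference direction meet at (10, 0).
pos = intersect_bearings((0.0, 0.0), 90.0, (0.0, 10.0), 135.0)
```

The same construction with the 80-degree and 45-degree bearings of FIG. 6 and the real coordinates of points 502 and 503 would yield the current position 507.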
Next, the operation of the first embodiment will be described in detail with reference to the drawings. FIGS. 7 and 8 are flowcharts showing the operations of the database construction mobile body mounting apparatus 100 and the positioning target mobile body mounting apparatus 110 of the present embodiment, respectively.
First, the operation of the database construction mobile unit mounting apparatus 100 will be described with reference to FIG.
First, the first imaging module 101 captures an image ahead of the moving body (step S701). The position information acquisition module 102 acquires the current accurate position information of the mobile body on which the database construction mobile body mounting apparatus 100 is mounted (step S702). Subsequently, the first feature point extraction module 103 extracts feature points from the image captured by the first imaging module 101 (step S703). Further, the future variation area extraction module 105 extracts areas corresponding to vehicles and people from the image captured by the first imaging module 101 (step S704).
Then, the landscape image database construction module 106 selects, from the feature points extracted by the first feature point extraction module 103, only the feature points that do not belong to the vehicle or person regions extracted by the future variation region extraction module 105. The feature point positions and feature amount information of these feature points are then associated with the position information acquired by the position information acquisition module 102 and stored in the landscape image database 104 (step S705).
Steps S701 to S705 are repeated each time the first imaging module 101 acquires a new image. Note that step S702 is most preferably executed in synchronization with step S701.
Next, the operation of the positioning target moving body mounting apparatus 110 will be described with reference to the flowchart of FIG.
The second imaging module 111 captures an image in front of the positioning target moving body mounting apparatus 110 (step S801). The second feature point extraction module 112 extracts feature points from the image captured by the second imaging module 111 (step S802).
The image collation / position identification module 113 collates the feature points extracted by the second feature point extraction module 112 against the feature point information in the landscape image database 104 to identify and output the current position of the positioning target moving body (step S803).
The database construction mobile body mounting apparatus 100 described above includes an ECU (Electronic Control Unit) 1801, an in-vehicle camera 1802, a hard disk 1803, and a high-precision GPS 1804, as shown in FIG. 18, and has been described as a configuration in which the modules described above are implemented and operated on an apparatus of this configuration. The ECU 1801 controls the entire apparatus, and includes, for example, a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), a signal processing circuit, and a power supply circuit. That is, except for the functions of the imaging module and the position information acquisition module, the above-described embodiment is realized by loading into the ECU 1801 and executing a computer program that implements the functions of the flowcharts and decision logic referred to in the description. It is also possible to implement the functions executed by the ECU 1801 in hardware to configure a microcomputer, or to realize some functions in hardware and the equivalent functions through the cooperative operation of that hardware and a software program. The hard disk 1803 is a device for storing the landscape image database; it may equally be constituted by a storage medium other than a hard disk, such as a flash memory.
The positioning target mobile body mounting apparatus 110 described above includes an ECU (Electronic Control Unit) 1901, an in-vehicle camera 1902, and a hard disk 1903, as shown in FIG. 19, and has been described as a configuration in which the modules described above are implemented and operated on an apparatus of this configuration. The ECU 1901 controls the entire apparatus, and includes, for example, a CPU, RAM, ROM, signal processing circuit, and power supply circuit. That is, except for the function of the second imaging module, the above-described embodiment is realized by loading into the ECU 1901 and executing a computer program that implements the functions of the flowcharts and decision logic referred to in the description. It is also possible to implement the functions executed by the ECU 1901 in hardware to configure a microcomputer, or to realize some functions in hardware and the equivalent functions through the cooperative operation of that hardware and a software program. A copy of the landscape image database constructed by the database construction mobile body may be stored on the hard disk 1903; as another configuration example, it may be stored in a ROM in the ECU 1901 without using a hard disk.
According to the present embodiment, the landscape image database 104 is constructed by removing specific feature points from the feature points extracted by the first feature point extraction module 103, namely feature points belonging to vehicles or pedestrians, which, by moving in the future, would reduce the efficiency of image collation in the image collation / position identification module of the positioning target mobile body mounting apparatus 110. Accordingly, the image collation / position identification module 113 can correctly collate images with a smaller number of RANSAC trials.
In the above embodiment, the description has been made assuming that the image acquired by the first imaging module is an image capturing a visible light wavelength band such as a color image or a monochrome image. However, the first imaging module may be configured by an apparatus that can also acquire an image in a wavelength band other than the visible light wavelength region, such as a multispectral camera.
In this case, the future variation region extraction module 105 may additionally extract plants and the like. Many feature points are extracted from the leaf areas of roadside trees, but the shape of a roadside tree changes with wind and growth, moving the positions of its feature points, which causes noise when performing image matching.
It is not always easy to accurately determine regions corresponding to plants such as roadside trees from information in the visible light wavelength region alone. However, since chlorophyll is known to reflect light in the near-infrared wavelength band strongly, regions corresponding to plants can be extracted relatively easily by using a multispectral image that includes information in the near-infrared wavelength band.
Specifically, for example, regions whose reflection intensity in the near-infrared wavelength band is equal to or greater than a certain threshold value may be extracted. Fibers such as clothing often also show high reflection intensity in the near-infrared wavelength band, but since pedestrians are themselves detection targets of the future variation area extraction module 105, there is no functional problem even if plant regions cannot be distinguished from pedestrian regions.
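One concrete realization of this thresholding, under the assumption that a red band is also available in the multispectral image, is the common normalized difference (NDVI) between near-infrared and red reflectance; a plain near-infrared threshold as stated in the text would be even simpler.

```python
def vegetation_mask(red, nir, ndvi_threshold=0.4):
    """Per-pixel vegetation flag from a multispectral image.

    Chlorophyll reflects near-infrared strongly, so the normalized
    difference (NIR - R) / (NIR + R) is high for plant areas.
    red, nir -- 2D lists of reflectance values in [0, 1]
    """
    mask = []
    for r_row, n_row in zip(red, nir):
        row = []
        for r, n in zip(r_row, n_row):
            ndvi = (n - r) / (n + r) if (n + r) else 0.0
            row.append(ndvi >= ndvi_threshold)
        mask.append(row)
    return mask

# One-row toy image: left pixel leafy, right pixel pavement-like.
red = [[0.10, 0.40]]
nir = [[0.60, 0.45]]
mask = vegetation_mask(red, nir)
```

The resulting mask would be merged with the person and vehicle regions before feature points are filtered out.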
In the first embodiment of the present invention, regions corresponding to vehicles and people that are likely to move in the future are extracted by the future variation area extraction module 105 from the images used for constructing the landscape image database, as are regions corresponding to plants such as roadside trees, whose shape changes with growth.
Since the landscape image database 104 is then constructed using only the information of feature points extracted from outside these regions, the probability of selecting correctly corresponding pairs of feature points in the RANSAC (RANdom SAmple Consensus) process increases, and the number of RANSAC iterations can be reduced.
As a result, in the positioning system that identifies the current position of the positioning target moving body by comparing the position of the feature point with the database, positioning can be performed with a smaller amount of calculation.
The effect of reducing the amount of calculation will be described in detail. Assume that all the feature points extracted by the first feature point extraction module 103 are stored in the landscape image database 104, that 100 pairs of corresponding feature points are obtained at positioning time, and that 60% of these pairs are correct correspondences.
Then, assuming that the current position can be correctly identified when all eight pairs randomly drawn from the 100 pairs of feature points are correct correspondences, the probability of positioning correctly in one RANSAC trial is approximately (0.6)^8 ≈ 1.7%. As an expected value, therefore, positioning succeeds once in about 60 trials.
On the other hand, when the landscape image database is constructed excluding the feature points corresponding to people, vehicles, roadside trees, and the like, the proportion of correctly corresponding feature point pairs increases; if this proportion is assumed to be 0.8, the probability of positioning correctly in one trial is approximately (0.8)^8 ≈ 17%. As an expected value, positioning then succeeds once in about six trials. In this example, the amount of calculation required for image collation for positioning in the positioning target moving body can thus be reduced to about one tenth.
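The arithmetic above can be reproduced directly:

```python
def expected_ransac_trials(inlier_ratio, sample_size=8):
    """Expected number of RANSAC trials before one all-inlier sample,
    following the text: success probability per trial is
    inlier_ratio ** sample_size."""
    p = inlier_ratio ** sample_size
    return 1.0 / p

before = expected_ransac_trials(0.6)   # about 60 trials at 60% correct pairs
after = expected_ransac_trials(0.8)    # about 6 trials at 80% correct pairs
```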
Further, this reduction in the amount of calculation translates into a reduction in the cost of the positioning target mobile body mounting apparatus, because with a smaller amount of calculation the apparatus can be configured using a lower-cost embedded processor.
In the first embodiment, the future variation region extraction module 105, which extracts regions corresponding to vehicles, people, and plants, is placed in the database construction mobile body mounting apparatus 100 for the following reasons. First, the database construction mobile body is assumed to be one of a small number of special mobile bodies that can be equipped with expensive, high-performance apparatuses, so a price increase is easier to tolerate there. Second, in the configuration in which the landscape image database is constructed by a server apparatus, described later, there is no real-time processing constraint when constructing the landscape image database, so the processing cost of the extraction does not become a problem.
(Second Embodiment)
A second embodiment of the present invention will be described with reference to FIG. 9. Referring to FIG. 9, in the second embodiment the positioning target mobile body mounting apparatus 910 includes an approximate position acquisition module 901 in addition to the configuration of the first embodiment shown in FIG. 1. Except for the approximate position acquisition module 901 and the image collation / position identification module 113, the functions are the same as those of the first embodiment shown in FIG. 1.
The approximate position acquisition module 901 includes an inexpensive GPS or map matching mechanism, and acquires the current approximate position of the positioning target moving body.
The image collation / position identification module 113 collates the feature point information extracted by the second feature point extraction module 112 against the landscape image database 104 and determines the current position of the moving body with higher accuracy than the approximate position. In the present embodiment, however, only the records in the landscape image database 104 stored in association with positions near the approximate position acquired by the approximate position acquisition module 901 are used for collation. This limits the data to be collated and reduces the amount of calculation required for collation.
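The narrowing of candidate records can be sketched as a radius query around the approximate position; the record layout, the radius, and the equirectangular distance approximation are illustrative assumptions.

```python
import math

def nearby_records(records, approx_lat, approx_lon, radius_m=100.0):
    """Select only landscape-image-database records whose shooting
    position lies within radius_m of the approximate position, so that
    collation runs against a small candidate set.  Uses an
    equirectangular approximation, adequate over short distances."""
    m_per_deg = 111_320.0  # metres per degree of latitude (approx.)
    out = []
    for rec in records:
        dy = (rec["lat"] - approx_lat) * m_per_deg
        dx = ((rec["lon"] - approx_lon) * m_per_deg
              * math.cos(math.radians(approx_lat)))
        if math.hypot(dx, dy) <= radius_m:
            out.append(rec)
    return out

# Only the first record lies within 100 m of the approximate position.
records = [{"lat": 35.6800, "lon": 139.7670},
           {"lat": 35.6900, "lon": 139.7670}]
hits = nearby_records(records, 35.6801, 139.7671)
```

A production implementation would replace the linear scan with a spatial index keyed on the shooting positions.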
Next, the operation of the second embodiment will be described in detail with reference to the drawings. Since the operation of the database construction mobile body mounting apparatus 100 is the same as that shown in FIG. 7, the operation of the positioning target mobile body mounting apparatus 910 will be described in detail with reference to the flowchart of FIG. 10.
First, the second imaging module 111 captures an image ahead of the positioning target moving body (step S1001). The second feature point extraction module 112 extracts feature points from the image acquired in step S1001 (step S1002). The approximate position acquisition module 901 acquires the current approximate position of the positioning vehicle (step S1003). The image collation / position identification module 113 then collates the feature points extracted in step S1002 using only the feature point information in the landscape image database narrowed down by the approximate position information obtained in step S1003, and identifies and outputs the accurate current position of the vehicle.
According to the second embodiment, the feature point information in the landscape image database to be collated by the image collation / position identification module can be narrowed down, so that the amount of calculation required for collation can be reduced.
(Third embodiment)
A third embodiment of the present invention will be described with reference to FIG. 11. Referring to FIG. 11, the third embodiment differs from the first embodiment shown in FIG. 1 in the following points. First, the modules in the database construction mobile body mounting apparatus 100 are divided between two units, a new database construction mobile body mounting apparatus 1100 and a server apparatus 1120. Second, a position / image data recording module 1102 and a position / image data storage unit 1103 are added as a mechanism for exchanging landscape images and position information between the database construction mobile body mounting apparatus 1100 and the server apparatus 1120.
First, a database construction mobile body mounting apparatus 1100 according to the third embodiment will be described.
The first imaging module 101 and the position information acquisition module 102 are the same as those in the embodiment shown in FIG. 1.
The position / image data storage unit 1103 stores, in association with each other, the image information and position information acquired at the same time by the first imaging module 101 and the position information acquisition module 102.
The position / image data recording module 1102 is a module that records the images and position information acquired at the same time by the first imaging module 101 and the position information acquisition module 102 into the position / image data storage unit 1103.
Next, the server device 1120 will be described.
The first feature point extraction module 103 extracts feature points from the landscape images recorded in the position / image data storage unit 1103, and outputs the position information together with the coordinate positions and feature amounts of the feature points in each image.
The landscape image database 104 is the same as in the embodiment shown in FIG. 1. The landscape image database 104 is first constructed on the server; after construction, it is copied and installed on the positioning vehicle.
The future variation area extraction module 105 extracts, from the landscape images in the position / image data storage unit 1103 referenced by the first feature point extraction module 103, areas corresponding to vehicles, pedestrians, and plants whose future position or shape may change.
The landscape image database construction module 106 records specific feature points among those extracted by the first feature point extraction module 103 in the landscape image database 104, in association with the position information acquired by the position information acquisition module 102. The specific feature points are the feature points remaining after excluding those extracted from the regions corresponding to vehicles, pedestrians, and plants extracted by the future variation region extraction module 105.
The positioning target moving body mounting apparatus 110 is the same as that of the embodiment shown in FIG. 1.
Next, the operation of the third embodiment will be described in detail with reference to the drawings. Since the operation of the positioning target mobile body mounting apparatus 110 is the same as that shown in the flowchart of FIG. 8, its description is omitted. Here, the operations of the database construction mobile body mounting apparatus 1100 and the server apparatus 1120 are described.
First, the operation of the database construction mobile body mounting apparatus 1100 is shown in the flowchart of FIG. 12.
The first imaging module 101 captures an image in front of the moving body (step S1201). The position information acquisition module 102 acquires the current accurate position information of the moving body in synchronization with step S1201 (step S1202). The position / image data recording module 1102 records the image and position information acquired in steps S1201 and S1202 in association with each other in the position / image data storage unit 1103 (step S1203).
Next, the operation of the server apparatus 1120 will be described with reference to the flowchart of FIG. 13.
First, the first feature point extraction module 103 extracts feature points from the image data of the position / image data generated by the database construction mobile body mounting apparatus 1100, and extracts their positions on the image and feature amounts (step S1301). Next, the future variation area extraction module 105 extracts areas corresponding to vehicles, pedestrians, plants, and the like from the same image data as that referenced in step S1301 (step S1302).
Then, the landscape image database construction module 106 removes, from the feature points extracted in step S1301, the points belonging to the regions extracted in step S1302. The landscape image database is constructed by associating the feature amount information (positions on the image and feature amounts) of the remaining feature points with the imaging position information stored in association with the image data referenced in step S1301 (step S1303).
The database construction mobile body mounting apparatus 1100 described above includes an image recording apparatus 2001, an in-vehicle camera 2002, a hard disk 2003, and a high-accuracy GPS 2004, as shown in FIG. 20, and has been described as a configuration in which the modules described above are implemented and operated on an apparatus of this configuration.
The image recording apparatus 2001 controls the entire apparatus, and includes, for example, a CPU, RAM, ROM, signal processing circuit, and power supply circuit. That is, except for the functions of the imaging module and the position information acquisition module, the present embodiment is realized by loading into the image recording apparatus 2001 and executing a computer program that implements the functions of the flowcharts and decision logic referred to in the description. It is also possible to implement the functions executed by the image recording apparatus 2001 in hardware to configure a microcomputer, or to realize some functions in hardware and the equivalent functions through the cooperative operation of that hardware and a software program.
Further, the server apparatus 1120 described above is configured by a computer 2101, as shown in FIG. 21, and has been described as a configuration in which the modules described above are implemented and operated on an apparatus of this configuration. The computer 2101 includes, for example, a CPU, RAM, ROM, signal processing circuit, power supply circuit, and hard disk. That is, the above-described embodiment is realized by loading onto the computer 2101 and executing a computer program that implements the functions and decision logic of the flowcharts referred to in the description. It is also possible to implement the functions executed by the computer 2101 in hardware to configure a microcomputer, or to realize some functions in hardware and the equivalent functions through the cooperative operation of that hardware and a software program. The landscape image database constructed from the data collected by the database construction mobile body may be exchanged with the mobile body via the hard disk in the computer 2101.
Moreover, the apparatus mounted in the vehicle for positioning may simply be given a configuration similar to that of FIG.
According to the present embodiment, the image and position information used for constructing the landscape image database are temporarily stored as position / image data and moved to the server device 1120 for processing. Therefore, by strengthening the calculation capability of the server device 1120, the processing of the future variation area extraction module 105, which has a relatively large calculation amount, can be performed at higher speed. In particular, when the number of types of areas to be extracted by the future variation area extraction module 105 is increased, this embodiment, in which the calculation capability is easily enhanced, is suitable.
(Fourth embodiment)
A fourth embodiment of the present invention will be described with reference to FIG. Referring to FIG. 14, the fourth embodiment further includes a server download module 1401 in the server device 1420, relative to the configuration shown in FIG. Further, the positioning target mobile unit mounting apparatus 1410 includes an approximate position acquisition module 901 that acquires the current approximate position, and a positioning target mobile unit download module 1402 that can communicate with the server device 1420 to acquire a part of the landscape image database.
The functions of the database construction mobile body mounting device 1400 and the server device 1420 are substantially the same as those in the example of FIG. 11, and only the server download module 1401 is different.
The server download module 1401 operates as follows when it receives approximate position information and a landscape image database download request from the positioning target mobile unit download module 1402 of the positioning target mobile unit mounting apparatus 1410. That is, the server download module 1401 extracts, from the landscape image database stored in the server, the record data generated in the vicinity of the approximate position, and transmits it to the positioning target mobile unit download module 1402.
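As a concrete illustration of this record extraction, the sketch below filters database records by great-circle distance from the reported approximate position. This is a minimal sketch, not the patented implementation: the record layout, the `records_near` name, and the 200 m default radius are assumptions for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def records_near(database, approx_lat, approx_lon, radius_m=200.0):
    """Return the records whose capture position lies within radius_m
    of the approximate position reported by the positioning target."""
    return [rec for rec in database
            if haversine_m(rec["lat"], rec["lon"], approx_lat, approx_lon) <= radius_m]
```

In this sketch the server would run `records_near` on the stored database and transmit only the matching records, which keeps the downloaded data proportional to the search radius rather than to the whole map.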
Next, the function of the positioning object moving body mounting apparatus 1410 will be described.
The functions of the second imaging module 111 and the second feature point extraction module 112 are the same as those in the embodiment of FIG.
The positioning target mobile unit download module 1402 sends the approximate position information acquired by the approximate position acquisition module 901 and a request message for landscape image data to the server download module 1401 of the server device 1420. Thereafter, the positioning target mobile unit download module 1402 receives the corresponding landscape image data from the server download module 1401.
The image collation / position identification module 113 collates the feature point information extracted by the second feature point extraction module 112 with the landscape image data received from the server download module 1401 to determine the accurate current position of the moving object.
Next, the operation of the present embodiment will be described. The operation of the database construction mobile unit mounting apparatus is the same as the flowchart shown in FIG.
The operation of the server device 1420 will be described with reference to the flowchart of FIG. Steps S1501 to S1503 are the same as steps S1201 to S1203.
The server download module 1401 operates as follows only when a landscape image database is requested from the positioning target mobile unit download module 1402 of the positioning target mobile unit mounting apparatus 1410. That is, the server download module 1401 extracts and transmits corresponding landscape image data (step S1504).
Next, the operation of the positioning target moving body mounting apparatus 1410 will be described with reference to the flowchart of FIG.
First, the second imaging module 111 captures a landscape image (step S1601). The second feature point extraction module 112 outputs the feature amount of the feature point from the image captured in step S1601 (step S1602). The approximate position acquisition module 901 acquires the approximate position of the positioning target moving body mounting apparatus 1410 (step S1603).
The positioning target mobile unit download module 1402 communicates with the server download module 1401 of the server device 1420 and receives landscape image data corresponding to the approximate position (step S1604).
Finally, the image collation / position identification module 113 collates the landscape image data received by the positioning target mobile unit download module 1402 with the feature amounts of the feature points extracted by the second feature point extraction module 112. Thereafter, the image collation / position identification module 113 determines the current position of the moving body (step S1605).
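The collation in step S1605 can be pictured as a nearest-descriptor vote: each query feature is matched against the descriptors stored in every downloaded record, and the record collecting the most confident matches yields the position estimate. This is a minimal sketch assuming Euclidean descriptors and Lowe's ratio test — the patent does not specify a particular matching rule, and the `best_matching_record` name and record layout are assumptions.

```python
import math

def best_matching_record(query_descriptors, records, ratio=0.8):
    """Score each candidate record by counting the query descriptors whose
    nearest stored descriptor passes a ratio test, then return the capture
    position and score of the best-scoring record."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    best_pos, best_score = None, -1
    for rec in records:
        score = 0
        for q in query_descriptors:
            d = sorted(dist(q, s) for s in rec["descriptors"])
            # Accept the match when the best distance is clearly better
            # than the second best (or when there is no second candidate).
            if len(d) == 1 or d[0] < ratio * d[1]:
                score += 1
        if score > best_score:
            best_pos, best_score = rec["position"], score
    return best_pos, best_score
```

A real system would refine the winning record's position geometrically (as the angle estimates 505 and 506 in the symbol list suggest); the vote above only selects the best-matching capture point.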
The server device 1420 described above is configured by a computer 2201 and a communication device 2202 as shown in FIG. 22, and has been described as a configuration in which the modules described above are mounted and operated on this device. The computer 2201 includes, for example, a CPU, RAM, ROM, power supply circuit, hard disk, and the like. That is, the above-described embodiment is realized, excluding the function of the server download module 1401, by reading out and executing on the computer 2201 a computer program capable of realizing the functions and determination logic of the flowcharts referred to in the description. It is also possible to configure a microcomputer by implementing the functions executed by the computer 2201 as hardware. Furthermore, some functions may be realized by hardware, and the remaining functions may be realized by cooperative operation of the hardware and a software program. The communication device 2202 is wireless LAN (Local Area Network) or mobile phone data communication hardware that executes the function of the server download module 1401.
Further, the above-described positioning target moving body mounting apparatus 1410 includes an ECU (Electronic Control Unit) 2301, a camera 2302, a hard disk 2303, a GPS 2304, and a communication device 2305 as shown in FIG. The modules described above are mounted and operated on the apparatus having this configuration. The ECU 2301 controls the entire apparatus, and includes, for example, a CPU, RAM, ROM, signal processing circuit, power supply circuit, and the like. That is, the above-described embodiment is realized by reading out and executing on the ECU 2301 a computer program capable of realizing the functions and determination logic of the flowcharts referred to in the description. It is also possible to configure a microcomputer by implementing the functions executed by the ECU 2301 as hardware. Furthermore, some functions may be realized by hardware, and the remaining functions may be realized by cooperative operation of the hardware and a software program.
The landscape image database constructed by the database construction mobile body may be stored in the hard disk 2303 as a copy. As another configuration example, it may be stored in a ROM in the ECU 2301 without using a hard disk. The communication device 2305 is wireless LAN or cellular phone data communication hardware that executes the function of the positioning target mobile unit download module 1402.
According to the present embodiment, since it is not necessary to hold the landscape image database in the positioning target mobile unit mounting apparatus 1410, the capacity of the magnetic disk mounted on the positioning target mobile unit mounting apparatus 1410 can be reduced. Furthermore, since the landscape image database is stored in the server device 1420, there is an advantage that the landscape image database can be easily updated. If the landscape image database were held in each positioning target mobile unit mounting apparatus 1410, it would be necessary to update the landscape image database stored in all the positioning target mobile unit mounting apparatuses 1410.
In the flowchart of the server device 1420, an example was described in which steps S1501 to S1504 are executed as a series of processes. However, the construction of the landscape image database and the transmission of the landscape image data need not be performed at the same time. That is, in accordance with the user's operation mode designation, the operation may be performed so as to execute either the landscape image database construction mode or the landscape image data transmission mode, as shown in FIG. 17.
(Fifth embodiment)
Next, a fifth embodiment for carrying out the present invention will be described.
FIG. 24 shows a landscape image database according to the fifth embodiment of the present invention.
The landscape image database 2401 of this embodiment stores a plurality of landscape image data in association with image acquisition positions, which are the positions where the landscape image data were collected.
Furthermore, the landscape image database 2401 of the present embodiment is characterized in that each of the plurality of landscape image data includes feature amounts of feature points for objects other than objects whose position or shape in the real world is unlikely to be maintained unchanged for a predetermined period or longer.
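The selection rule stated above — store only feature points lying outside regions judged likely to change — can be sketched as a simple geometric filter applied before a record is written to the database. This is illustrative only: the patent does not prescribe a region representation, and the axis-aligned bounding boxes and the `filter_stable_features` name are assumptions.

```python
def filter_stable_features(feature_points, variable_regions):
    """Drop feature points that fall inside any region judged likely to
    change (e.g. parked vehicles, pedestrians); keep the rest for the DB.

    feature_points: iterable of (x, y, descriptor) tuples.
    variable_regions: iterable of (x0, y0, x1, y1) bounding boxes.
    """
    def inside(x, y, box):
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    return [fp for fp in feature_points
            if not any(inside(fp[0], fp[1], box) for box in variable_regions)]
```

Filtering at construction time is what shrinks the database and the matching workload on the positioning side, since transient objects never contribute descriptors that could be mismatched later.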
The fifth embodiment described above provides the following effect. That is, by identifying in advance the feature points that are likely to be difficult to associate correctly and not storing them in the landscape image database, it is possible to provide a landscape image database for a positioning system that can perform positioning processing on the positioning target moving body with a smaller amount of calculation.
Each embodiment described so far assumes a dedicated device, but the following configuration may also be used. For example, a personal computer device that performs various data processing may be loaded with a board or a card that performs the processing corresponding to this example, with each process executed on the computer device side, or software for executing the processing may be installed in and executed on a personal computer device.
The program installed in a data processing device such as a personal computer device may be distributed via various recording (storage) media such as optical disks and memory cards, or via communication means such as the Internet.
In addition, each of the above embodiments can be combined with other embodiments.
While the present invention has been described with reference to the embodiments, the present invention is not limited to the above embodiments. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within the scope of the present invention.
This application claims priority based on Japanese Patent Application No. 2010-226844, filed on October 6, 2010, the entire disclosure of which is incorporated herein.
Some or all of the embodiments described above can be described as in the following supplementary notes, but are not limited thereto.
(Appendix 1)
A landscape image database in which a plurality of landscape image data and image acquisition positions, which are the positions where the landscape image data were collected, are stored in association with each other,
wherein each of the plurality of landscape image data includes feature amounts of feature points for objects other than objects whose position or shape in the real world is unlikely to be maintained unchanged for a predetermined period or longer.
(Appendix 2)
The landscape image database according to appendix 1, wherein the object whose position or shape in the real world is unlikely to be maintained unchanged for a predetermined period or longer is a vehicle.
(Appendix 3)
The landscape image database according to appendix 1 or appendix 2, wherein the object whose position or shape in the real world is unlikely to be maintained unchanged for a predetermined period or longer is a person.
(Appendix 4)
A landscape image database described in any one of supplementary notes 1 to 3;
A first imaging means for capturing a landscape image;
Position information acquisition means for acquiring current position information;
First feature point extracting means for extracting feature amounts of feature points from the landscape image acquired by the first imaging means;
Future variation area extraction means for extracting, from the landscape image acquired by the first imaging means, an area corresponding to an object whose position or shape in the real world is unlikely to be maintained unchanged for a predetermined period or longer;
Landscape image database construction means for storing, in the landscape image database in association with the current position information, the feature amounts of the feature points obtained by removing, from the feature points acquired by the first imaging means, the feature points extracted from the area extracted by the future variation area extraction means;
A database construction device comprising:
(Appendix 5)
The database construction device according to appendix 4, wherein the first imaging means captures not only images in the visible wavelength band but also images in wavelength bands outside the visible range, and the future variation area extraction means extracts a region corresponding to a plant from the landscape image acquired by the first imaging means.
(Appendix 6)
A landscape image database described in any one of supplementary notes 1 to 3;
Second imaging means for capturing a landscape image; second feature point extraction means for extracting feature amounts of feature points from the landscape image acquired by the second imaging means; and image collation / position identification means for collating the feature amounts of the feature points extracted by the second feature point extraction means with the landscape image database to identify the current position of the positioning target moving body;
A positioning target device comprising:
(Appendix 7)
The positioning target device according to appendix 6, further comprising approximate position acquisition means for acquiring a current approximate position of the positioning target device, wherein the image collation / position identification means performs image collation using only the landscape image data, among the landscape image data stored in the landscape image database, that was generated near the approximate position.
(Appendix 8)
The database construction device described in any one of the supplementary notes 1 to 3;
The positioning target device described in Appendix 6 or Appendix 7,
A positioning system comprising the above, wherein the landscape image database of the positioning target device has the same content as the landscape image database constructed by the database construction device.
(Appendix 9)
A server device having in-server landscape image database download means for, upon receiving position information and a landscape image data transfer request from the positioning target moving body, transmitting the landscape image data in the landscape image database associated with position information in the vicinity of the received position information,
wherein the positioning target moving body further has approximate position acquisition means for acquiring an approximate position of the positioning target moving body, and on-board landscape image database download means for transmitting the position information and the landscape image data transfer request to the in-server landscape image database download means and acquiring the landscape image data in the landscape image database associated with position information near the approximate position, and
the image collation / position identification means identifies the current position by collating the landscape image data acquired by the landscape image database download means with the feature amounts of the feature points extracted by the second feature point extraction means. The positioning system according to appendix 8.
(Appendix 10)
Capturing a landscape image;
Acquiring current position information;
Extracting feature amounts of feature points from the landscape image; and
Extracting, from the landscape image, an area corresponding to an object whose position or shape will change in the future in the real world.
A database construction method characterized by the above.
(Appendix 11)
Capturing not only images in the visible wavelength band but also images in wavelength bands outside the visible range; and
Extracting an area corresponding to a plant from the landscape image.
The database construction method according to appendix 10, characterized by the above.
(Appendix 12)
A first imaging step for capturing a landscape image;
A location information acquisition step for acquiring current location information;
A first feature point extracting step of extracting feature amounts of feature points from the landscape image acquired by the first imaging means;
A future change area extracting step of extracting an area corresponding to an object whose position or shape will change in the future in the real world from the landscape image acquired by the first imaging means;
A landscape image database construction program that causes a computer to execute the above steps to construct the landscape image database described in any one of appendices 1 to 3.
The present invention relates to a positioning system, a landscape image database, a database construction device, a database construction method, and a landscape image database construction program for identifying the current position of a mobile body based on a landscape image captured by imaging means mounted on the mobile body, as well as to a positioning target device, and has industrial applicability.
DESCRIPTION OF SYMBOLS
100 Database construction mobile body mounting apparatus
101 First imaging module
102 Position information acquisition module
103 First feature point extraction module
104 Landscape image database
105 Future variation area extraction module
106 Landscape image database construction module
110 Positioning target mobile body mounting apparatus
111 Second imaging module
112 Second feature point extraction module
113 Image collation / position identification module
114 Landscape image database
201 Record number
202 Shooting position information
203 Latitude
204 Longitude
205 Feature point information
206 Number of feature points
207 Coordinate position
208 Image feature amount
301 Road surface area
302 Building area
303 Parked vehicle area
401 Feature points corresponding to parked vehicle area 303
402 Feature points excluding feature points 401
501, 502, 503, 504 Four points where data was collected
505 Angle, with respect to the optical axis, of the current position relative to point 502, estimated by the image collation / position identification module
506 Angle, with respect to the optical axis, of the current position relative to point 503, estimated by the image collation / position identification module
507 Current position of the positioning target moving body
901 Approximate position acquisition module
910 Positioning target mobile body mounting apparatus
1102 Position / image data recording module
1103 Position / image data storage means
1120 Server device
1401 Server download module
1402 Positioning target mobile body download module
1420 Server device
1801 ECU
1802, 2002 Camera
1803, 2003 Hard disk
1804, 2004 High-accuracy GPS
1901, 2301 ECU
1902, 2302 Camera
1903, 2303 Hard disk
2001 Image recording apparatus
2101, 2201 Computer
2202 Communication device
2304 GPS
2305 Communication device

Claims (12)

1. A landscape image database in which a plurality of landscape image data and image acquisition positions, which are the positions where the landscape image data were collected, are stored in association with each other,
wherein each of the plurality of landscape image data includes feature amounts of feature points for objects other than objects whose position or shape in the real world is unlikely to be maintained unchanged for a predetermined period or longer.
2. The landscape image database according to claim 1, wherein the object whose position or shape in the real world is unlikely to be maintained unchanged for a predetermined period or longer is a vehicle.
3. The landscape image database according to claim 1 or claim 2, wherein the object whose position or shape in the real world is unlikely to be maintained unchanged for a predetermined period or longer is a person.
4. A database construction device comprising:
the landscape image database according to any one of claims 1 to 3;
first imaging means for capturing a landscape image;
position information acquisition means for acquiring current position information;
first feature point extraction means for extracting feature amounts of feature points from the landscape image acquired by the first imaging means;
future variation area extraction means for extracting, from the landscape image acquired by the first imaging means, an area corresponding to an object whose position or shape in the real world is unlikely to be maintained unchanged for a predetermined period or longer; and
landscape image database construction means for storing, in the landscape image database in association with the current position information, the feature amounts of the feature points obtained by removing, from the feature points acquired by the first imaging means, the feature points extracted from the area extracted by the future variation area extraction means.
5. The database construction device according to claim 4, wherein the first imaging means captures not only images in the visible wavelength band but also images in wavelength bands outside the visible range, and the future variation area extraction means extracts a region corresponding to a plant from the landscape image acquired by the first imaging means.
6. A positioning target device comprising:
the landscape image database according to any one of claims 1 to 3;
second imaging means for capturing a landscape image;
second feature point extraction means for extracting feature amounts of feature points from the landscape image acquired by the second imaging means; and
image collation / position identification means for collating the feature amounts of the feature points extracted by the second feature point extraction means with the landscape image database to identify the current position of the positioning target moving body.
7. The positioning target device according to claim 6, further comprising approximate position acquisition means for acquiring a current approximate position of the positioning target device, wherein the image collation / position identification means performs image collation using only the landscape image data, among the landscape image data stored in the landscape image database, that was generated near the approximate position.
8. A positioning system comprising:
the database construction device according to any one of claims 1 to 3; and
the positioning target device according to claim 6 or claim 7,
wherein the landscape image database of the positioning target device has the same content as the landscape image database constructed by the database construction device.
9. The positioning system according to claim 8, comprising a server device having in-server landscape image database download means for, upon receiving position information and a landscape image data transfer request from the positioning target moving body, transmitting the landscape image data in the landscape image database associated with position information in the vicinity of the received position information,
wherein the positioning target moving body further has approximate position acquisition means for acquiring an approximate position of the positioning target moving body, and on-board landscape image database download means for transmitting the position information and the landscape image data transfer request to the in-server landscape image database download means and acquiring the landscape image data in the landscape image database associated with position information near the approximate position, and
the image collation / position identification means identifies the current position by collating the landscape image data acquired by the landscape image database download means with the feature amounts of the feature points extracted by the second feature point extraction means.
10. A database construction method comprising:
capturing a landscape image;
acquiring current position information;
extracting feature amounts of feature points from the landscape image; and
extracting, from the landscape image, an area corresponding to an object whose position or shape will change in the future in the real world.
11. The database construction method according to claim 10, comprising:
capturing not only images in the visible wavelength band but also images in wavelength bands outside the visible range; and
extracting an area corresponding to a plant from the landscape image.
12. A program recording medium recording a landscape image database construction program that causes a computer to execute:
a first imaging step of capturing a landscape image;
a position information acquisition step of acquiring current position information;
a first feature point extraction step of extracting feature amounts of feature points from the landscape image acquired by the first imaging means; and
a future variation area extraction step of extracting, from the landscape image acquired by the first imaging means, an area corresponding to an object whose position or shape will change in the future in the real world.
PCT/JP2011/072702 2010-10-06 2011-09-26 Positioning system WO2012046671A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/877,944 US9104702B2 (en) 2010-10-06 2011-09-26 Positioning system
JP2012537688A JPWO2012046671A1 (en) 2010-10-06 2011-09-26 Positioning system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-226844 2010-10-06
JP2010226844 2010-10-06

Publications (1)

Publication Number Publication Date
WO2012046671A1 true WO2012046671A1 (en) 2012-04-12

Family

ID=45927666

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/072702 WO2012046671A1 (en) 2010-10-06 2011-09-26 Positioning system

Country Status (3)

Country Link
US (1) US9104702B2 (en)
JP (1) JPWO2012046671A1 (en)
WO (1) WO2012046671A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012099010A (en) * 2010-11-04 2012-05-24 Aisin Aw Co Ltd Image processing apparatus and image processing program
CN109492541A (en) * 2018-10-18 2019-03-19 广州极飞科技有限公司 Determination method and device, plant protection method, the plant protection system of target object type
KR102096784B1 (en) * 2019-11-07 2020-04-03 주식회사 휴머놀러지 Positioning system and the method thereof using similarity-analysis of image
US10997449B2 (en) 2017-01-24 2021-05-04 Fujitsu Limited Information processing system, computer-readable recording medium recording feature-point extraction program, and feature-point extraction method
JP2021096789A (en) * 2019-12-19 2021-06-24 株式会社豊田自動織機 Self-location estimation apparatus, moving body, self-location estimation method, and self-location estimation program
WO2023238344A1 (en) * 2022-06-09 2023-12-14 日産自動車株式会社 Parking assist method and parking assist device

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104636709B (en) * 2013-11-12 2018-10-02 中国移动通信集团公司 A kind of method and device of locating and monitoring target
KR102263731B1 (en) * 2014-11-11 2021-06-11 현대모비스 주식회사 System and method for correcting position information of surrounding vehicle
US10176718B1 (en) * 2015-09-25 2019-01-08 Apple Inc. Device locator
US10354531B1 (en) 2015-09-25 2019-07-16 Apple Inc. System and method for identifying available parking locations
US10783382B2 (en) * 2016-04-06 2020-09-22 Semiconductor Components Industries, Llc Systems and methods for buffer-free lane detection
CN108230232B (en) * 2016-12-21 2021-02-09 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and related device
CN109084781A (en) * 2017-06-13 2018-12-25 Zongmu Technology (Shanghai) Co., Ltd. Method and system for constructing a parking-garage panoramic database in a relative coordinate system
KR20210030147A (en) * 2019-09-09 2021-03-17 Samsung Electronics Co., Ltd. 3D rendering method and 3D rendering apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000356792A (en) 1999-06-16 2000-12-26 Canon Inc Optical element and photographing device
JP2007133816A (en) * 2005-11-14 2007-05-31 Nikon Corp Plant identification system and organism identification system
WO2007088453A1 (en) 2006-02-01 2007-08-09 Varioptic Optical electrowetting device
WO2007088452A1 (en) 2006-02-01 2007-08-09 Varioptic Use of bromine anions in an optical electrowetting device
JP2008310446A (en) * 2007-06-12 2008-12-25 Panasonic Corp Image retrieval system
JP2010229655A (en) 2009-03-26 2010-10-14 Sumitomo Osaka Cement Co Ltd Identification method for concrete test pieces, and concrete testing method

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100243317B1 (en) * 1997-04-18 2000-03-02 Yun Jong-yong Car classification equipment
US6266442B1 (en) * 1998-10-23 2001-07-24 Facet Technology Corp. Method and apparatus for identifying objects depicted in a videostream
JP2004012429A (en) 2002-06-11 2004-01-15 Mitsubishi Heavy Ind Ltd Self-position/attitude identification device and self-position/attitude identification method
US20080177994A1 (en) * 2003-01-12 2008-07-24 Yaron Mayer System and method for improving the efficiency, comfort, and/or reliability in Operating Systems, such as for example Windows
JP4206036B2 (en) 2003-12-09 2009-01-07 Zenrin Co., Ltd. Identification of landscape image capturing position using electronic map data
JP3966419B2 (en) 2004-12-15 2007-08-29 Mitsubishi Electric Corporation Change area recognition apparatus and change recognition system
JP2006208223A (en) 2005-01-28 2006-08-10 Aisin AW Co., Ltd. Vehicle position recognition device and vehicle position recognition method
JP4847090B2 (en) * 2005-10-14 2011-12-28 Clarion Co., Ltd. Positioning device and positioning method
JP5040258B2 (en) * 2006-10-23 2012-10-03 Hitachi, Ltd. Video surveillance apparatus, video surveillance system, and image processing method
US20100026519A1 (en) * 2008-07-30 2010-02-04 Wei-Chuan Hsiao Method of detecting and signaling deviation of motor vehicle
US8259998B2 (en) * 2008-09-30 2012-09-04 Mazda Motor Corporation Image processing device for vehicle
US20100097226A1 (en) * 2008-10-22 2010-04-22 Leviton Manufacturing Co., Inc. Occupancy sensing with image and supplemental sensing
JP5456023B2 (en) * 2009-04-07 2014-03-26 パナソニック株式会社 Image photographing apparatus, image photographing method, program, and integrated circuit
EP3660813A1 (en) * 2009-10-07 2020-06-03 iOnRoad Technologies Ltd. Automatic content analysis method and system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012099010A (en) * 2010-11-04 2012-05-24 Aisin AW Co., Ltd. Image processing apparatus and image processing program
US10997449B2 (en) 2017-01-24 2021-05-04 Fujitsu Limited Information processing system, computer-readable recording medium recording feature-point extraction program, and feature-point extraction method
CN109492541A (en) * 2018-10-18 2019-03-19 Guangzhou Xaircraft Technology Co., Ltd. Method and device for determining target object type, plant protection method, and plant protection system
KR102096784B1 (en) * 2019-11-07 2020-04-03 Humanology Co., Ltd. Positioning system and method thereof using similarity analysis of images
WO2021091053A1 (en) * 2019-11-07 2021-05-14 주식회사 휴머놀러지 Location measurement system using image similarity analysis, and method thereof
JP2021096789A (en) * 2019-12-19 2021-06-24 Toyota Industries Corporation Self-location estimation apparatus, moving body, self-location estimation method, and self-location estimation program
WO2021125171A1 (en) * 2019-12-19 2021-06-24 Toyota Industries Corporation Self-position estimation device, moving body, self-position estimation method, and self-position estimation program
JP7283665B2 (en) 2019-12-19 2023-05-30 Toyota Industries Corporation Self-position estimation device, mobile object, self-position estimation method, and self-position estimation program
WO2023238344A1 (en) * 2022-06-09 2023-12-14 Nissan Motor Co., Ltd. Parking assist method and parking assist device

Also Published As

Publication number Publication date
JPWO2012046671A1 (en) 2014-02-24
US20130188837A1 (en) 2013-07-25
US9104702B2 (en) 2015-08-11

Similar Documents

Publication Publication Date Title
WO2012046671A1 (en) Positioning system
Grassi et al. Parkmaster: An in-vehicle, edge-based video analytics service for detecting open parking spaces in urban environments
CN107133325B Internet photo geolocation method based on street view maps
US20200401617A1 (en) Visual positioning system
CN109029444B (en) Indoor navigation system and method based on image matching and space positioning
US9958269B2 (en) Positioning method for a surveying instrument and said surveying instrument
CN109520500B Accurate positioning and street view library acquisition method based on matching of terminal-captured images
US9465129B1 Image-based mapping locating system
KR102200299B1 System and method implementing a road facility management solution based on a 3D-VR multi-sensor system
US7860269B2 Auxiliary navigation system for use in urban areas
CN105339758A (en) Use of overlap areas to optimize bundle adjustment
JP2008065087A (en) Apparatus for creating stationary object map
CN111028358A (en) Augmented reality display method and device for indoor environment and terminal equipment
Amer et al. Convolutional neural network-based deep urban signatures with application to drone localization
Piras et al. Indoor navigation using Smartphone technology: A future challenge or an actual possibility?
WO2019097422A2 (en) Method and system for enhanced sensing capabilities for vehicles
CN112348887A (en) Terminal pose determining method and related device
Steinhoff et al. How computer vision can help in outdoor positioning
KR102189926B1 (en) Method and system for detecting change point of interest
Ruiz-Ruiz et al. A multisensor LBS using SIFT-based 3D models
CN110636248B (en) Target tracking method and device
CN112689234B (en) Indoor vehicle positioning method, device, computer equipment and storage medium
US20220196432A1 (en) System and method for determining location and orientation of an object in a space
KR102249381B1 (en) System for generating spatial information of mobile device using 3D image information and method therefor
CN113256731A (en) Target detection method and device based on monocular vision

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11830603

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2012537688

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 13877944

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11830603

Country of ref document: EP

Kind code of ref document: A1