WO2014045432A1 - Image processing apparatus and image processing method - Google Patents

Image processing apparatus and image processing method

Info

Publication number
WO2014045432A1
Authority
WO
WIPO (PCT)
Prior art keywords
reachable
identification information
information
area
contour
Prior art date
Application number
PCT/JP2012/074359
Other languages
English (en)
Japanese (ja)
Inventor
英士 松永
安士 光男
Original Assignee
パイオニア株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社
Priority to CN201280073561.3A (CN104335010A)
Priority to PCT/JP2012/074359 (WO2014045432A1)
Priority to JP2014536528A (JPWO2014045432A1)
Publication of WO2014045432A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3453Special cost functions, i.e. other than distance or default speed limit of road segments
    • G01C21/3469Fuel consumption; Energy use; Emission aspects
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60LPROPULSION OF ELECTRICALLY-PROPELLED VEHICLES; SUPPLYING ELECTRIC POWER FOR AUXILIARY EQUIPMENT OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRODYNAMIC BRAKE SYSTEMS FOR VEHICLES IN GENERAL; MAGNETIC SUSPENSION OR LEVITATION FOR VEHICLES; MONITORING OPERATING VARIABLES OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRIC SAFETY DEVICES FOR ELECTRICALLY-PROPELLED VEHICLES
    • B60L2240/00Control parameters of input or output; Target parameters
    • B60L2240/70Interactions with external data bases, e.g. traffic centres
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/60Other road transportation technologies with climate change mitigation effect
    • Y02T10/72Electric energy management in electromobility
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y02T90/10Technologies relating to charging of electric vehicles
    • Y02T90/16Information or communication technologies improving the operation of electric vehicles

Definitions

  • the present invention relates to an image processing apparatus and an image processing method for generating a reachable range of a moving body based on a residual energy amount of the moving body.
  • the use of the present invention is not limited to the image processing apparatus and the image processing method.
  • Conventionally, a processing device that generates the reachable range of a mobile object based on the current location of the mobile object is known (for example, see Patent Document 1 below).
  • In Patent Document 1, the map is divided radially in all directions around the current location of the moving object, and for each divided region the reachable intersection farthest from the current location is obtained as a node of the map information.
  • A Bézier curve obtained by connecting the plurality of acquired nodes is displayed as the reachable range of the moving object.
  • A processing device that generates a reachable range along each road from the current location of the moving body, based on the remaining battery capacity and power consumption of the moving body, is also known (for example, see Patent Document 2 below).
  • In Patent Document 2, the power consumption of the mobile body is calculated for a plurality of roads connected to its current location, and the travelable distance of the mobile body on each road is calculated based on the remaining battery capacity and the power consumption.
  • The current location of the mobile body and the plurality of reachable locations separated from the current location by the travelable distance are acquired as nodes of the map information, and the set of line segments connecting these nodes is displayed as the reachable range of the moving object.
  • In Patent Document 3, a device for displaying the travelable range of an electric vehicle is known.
  • In Patent Document 3, the map is divided into a mesh, and the travelable range is displayed in mesh units.
  • In these conventional techniques, however, the outer periphery connecting isolated points becomes a combination of straight line segments, so the range cannot be displayed smoothly and lacks visibility.
  • In addition, the drawing process takes time.
  • An image processing apparatus according to the present invention is an image processing apparatus for processing information relating to the reachable range of a mobile object, and includes: acquisition means for acquiring information relating to the current location of the mobile object and information on the initial stored energy amount, which is the amount of energy held by the mobile object at its current location; calculation means for calculating an estimated energy consumption, which is the energy consumed when the mobile object travels a predetermined section; search means for searching, based on map information, the initial stored energy amount, and the estimated energy consumption, for a plurality of reachable points that the mobile object can reach from the current location; dividing means for dividing the map information into a plurality of regions; granting means for giving, based on the plurality of reachable points searched by the search means, identification information identifying whether or not the mobile object can reach each of the plurality of regions divided by the dividing means; and display control means for extracting, from the map information, the contour of the reachable range of the mobile object based on the regions to which the identification information has been given, frequency-converting the declination defined for each vertex of the vertex group included in the extracted contour, removing the frequency components at or above a predetermined frequency, performing an inverse transformation, and displaying on display means the reachable range of the mobile object bounded by the contour from which those frequency components have been removed.
  • An image processing apparatus according to another aspect of the present invention includes: contour extraction means for extracting the contour of the reachable range of a moving object from map information; conversion means for frequency-converting only the declinations defined between adjacent vertices of the vertex group constituting the contour extracted by the contour extraction means; removal means for removing, from the frequency components of the declinations converted by the conversion means, the frequency components at or above a predetermined frequency; and inverse transformation means for inversely transforming the frequency components of the declinations after removal by the removal means, the apparatus displaying on display means the reachable range of the moving object bounded by the contour from which the frequency components have been removed through the inverse transformation means.
  • An image processing method according to the present invention is an image processing method in an image processing apparatus for processing information relating to the reachable range of a mobile object, and includes: an acquisition step of acquiring information relating to the current location of the mobile object and information on the initial stored energy amount held by the mobile object at that location; a search step of searching, based on map information, the initial stored energy amount, and the estimated energy consumption, for a plurality of reachable points that the mobile object can reach from the current location; a dividing step of dividing the map information into a plurality of regions; a granting step of giving, based on the plurality of reachable points searched in the search step, identification information identifying whether or not the mobile object can reach each of the plurality of regions divided in the dividing step; and a display control step of extracting the contour of the reachable range of the mobile object based on the regions to which the identification information has been given in the granting step, frequency-converting the declination defined for each vertex of the vertex group included in the extracted contour, removing the frequency components at or above a predetermined frequency, performing an inverse transformation, and displaying on display means the reachable range of the mobile object bounded by the contour from which the frequency components have been removed.
  • FIG. 1 is an explanatory diagram illustrating a display example of the outline of the reachable range of the moving object.
  • FIG. 2 is a block diagram of an example of a functional configuration of the image processing apparatus according to the first embodiment.
  • FIG. 3 is a block diagram illustrating a detailed functional configuration example of the display control unit 206 illustrated in FIG. 2.
  • FIG. 4 is a flowchart illustrating an example of an image processing procedure performed by the image processing apparatus.
  • FIG. 5 is a block diagram illustrating a hardware configuration of the navigation apparatus.
  • FIG. 6 is an explanatory diagram (part 1) schematically showing an example of reachable point search by the navigation device 500.
  • FIG. 7 is an explanatory diagram (part 2) schematically showing an example of reachable point search by the navigation device 500.
  • FIG. 8 is an explanatory diagram (part 3) schematically illustrating an example of a reachable point search by the navigation device 500.
  • FIG. 9 is an explanatory diagram (part 4) schematically illustrating an example of reachable point search by the navigation device 500.
  • FIG. 10 is an explanatory diagram showing an example of reachable point search by the navigation device 500.
  • FIG. 11 is an explanatory diagram showing another example of reachable point search by the navigation device 500.
  • FIG. 12 is an explanatory diagram of an example in which a reachable point by the navigation device 500 is indicated by longitude-latitude.
  • FIG. 13 is an explanatory diagram of an example in which the reachable points by the navigation device 500 are indicated by meshes.
  • FIG. 14 is an explanatory diagram illustrating an example of a closing process performed by the navigation device.
  • FIG. 15 is an explanatory diagram schematically illustrating an example of a closing process performed by the navigation device.
  • FIG. 16 is an explanatory diagram illustrating an example of an opening process performed by the navigation device.
  • FIG. 17 is an explanatory diagram schematically illustrating an example of vehicle reachable range extraction by the navigation device.
  • FIG. 18 is an explanatory diagram schematically illustrating an example of a mesh after the reachable range of the vehicle is extracted by the navigation device.
  • FIG. 19 is an explanatory diagram schematically illustrating another example of vehicle reachable range extraction by the navigation device.
  • FIG. 20A is an explanatory diagram of a contour data complementing process example.
  • FIG. 20B is an explanatory diagram of an example of a vector decomposition method in the complementing process illustrated in FIG. 20A.
  • FIG. 20C is an explanatory diagram schematically illustrating an example of vehicle reachable range extraction by the navigation device.
  • FIG. 20D is a flowchart illustrating an example of the procedure of the contour direction calculation process performed by the navigation device.
  • FIG. 21-1 is an explanatory diagram of a coordinate representation of the contour data.
  • FIG. 21-2 is a graph showing frequency conversion.
  • FIG. 22 is an explanatory diagram of an example of thinning out contour data.
  • FIG. 23 is a flowchart illustrating an example of a procedure of image processing by the navigation device.
  • FIG. 24 is a flowchart illustrating an example of a procedure of estimated power consumption calculation processing by the navigation device.
  • FIG. 25 is a flowchart (part 1) illustrating a procedure of reachable point search processing by the navigation device 500.
  • FIG. 26 is a flowchart (part 2) illustrating a procedure of reachable point search processing by the navigation device 500.
  • FIG. 27 is a flowchart illustrating an example of a procedure of link candidate determination processing by the navigation device.
  • FIG. 28 is a flowchart illustrating an example of a procedure of identification information provision processing by the navigation device.
  • FIG. 29 is a flowchart illustrating an example of a procedure of first identification information change processing by the navigation device 500.
  • FIG. 30 is a flowchart (part 1) illustrating an example of a procedure of reachable range contour extraction processing by the navigation device.
  • FIG. 31 is a flowchart (part 2) illustrating an example of a procedure of reachable range contour extraction processing by the navigation device.
  • FIG. 32 is a flowchart illustrating an example of a smoothing process performed by the navigation device 500.
  • FIG. 33 is an explanatory diagram schematically illustrating an example of acceleration applied to a vehicle traveling on a road having a gradient.
  • FIG. 34 is an explanatory diagram showing a display example after the reachable point search process by the navigation device 500.
  • FIG. 35 is an explanatory diagram illustrating a display example after the identification information providing process by the navigation device 500.
  • FIG. 36 is an explanatory diagram illustrating a display example after the first identification information change process by the navigation device.
  • FIG. 37 is an explanatory diagram illustrating a display example after the closing process (expansion) by the navigation device 500.
  • FIG. 38 is an explanatory diagram illustrating a display example after the closing process (reduction) by the navigation device 500.
  • FIG. 39 is an explanatory diagram illustrating a display example after the smoothing process by the navigation device 500.
  • FIG. 40 is a block diagram of an example of a functional configuration of the image processing system according to the second embodiment.
  • FIG. 41 is a block diagram of an example of a functional configuration of the image processing system according to the third embodiment.
  • FIG. 42 is an explanatory diagram of an example of a system configuration of the image processing apparatus according to the second embodiment.
  • FIG. 1 is an explanatory diagram illustrating a display example of the outline of the reachable range of the moving object.
  • (A) shows part of the contour data of the reachable range of the moving body before the smoothing process.
  • (B) is the next state of (A) and shows a state in which the contour shown in (A) is smoothed. Specifically, (B) shows the result of performing fast Fourier transform on the contour data shown in (A), removing high frequency components, and then performing inverse fast Fourier transform.
  • the contour data of (B) is a smooth curve compared to (A) because the high frequency component is removed.
  • (C) shows the next state of (B), and shows a state in which a part of the outer peripheral points on the contour data is thinned out.
  • the outer peripheral point to be thinned is, for example, a point where the absolute value of the declination is smaller than a predetermined value.
  • the contour data of (C) becomes a smooth curve compared with (B).
  • the image processing apparatus can improve the visibility of the reachable range of the moving object. Further, since the image processing apparatus according to the first embodiment uses fast Fourier transform or inverse fast Fourier transform, the smoothing process can be speeded up. In addition, since the image processing apparatus according to the first embodiment thins out the absolute value of the declination of each outer peripheral point, the smoothing process can be speeded up by a simple process.
  • FIG. 2 is a block diagram of an example of a functional configuration of the image processing apparatus according to the first embodiment.
  • the image processing apparatus 200 according to the first embodiment generates a reachable range of the moving object based on the reachable point of the moving object searched based on the remaining energy amount of the moving object and causes the display unit 210 to display the reachable range.
  • the image processing apparatus 200 includes an acquisition unit 201, a calculation unit 202, a search unit 203, a division unit 204, a grant unit 205, and a display control unit 206.
  • The energy is, for example, energy based on electricity in the case of an EV (Electric Vehicle); in the case of an HV (Hybrid Vehicle), a PHV (Plug-in Hybrid Vehicle), or the like, it is energy based on electricity together with other sources; and in the case of a fuel cell vehicle, it is based on hydrogen or on a fossil fuel serving as a hydrogen raw material (hereinafter, EV vehicles, HV vehicles, PHV vehicles, and fuel cell vehicles are simply referred to as "EV vehicles").
  • In the case of a gasoline vehicle, a diesel vehicle, or the like (hereinafter simply referred to as a "gasoline vehicle"), the energy is based on, for example, gasoline, light oil, or gas.
  • the residual energy is, for example, energy remaining in a fuel tank, a battery, a high-pressure tank, or the like of the moving body, and is energy that can be used for the subsequent traveling of the moving body.
  • the acquisition unit 201 acquires information on the current location of the moving object on which the image processing apparatus 200 is mounted and information on the initial stored energy amount that is the amount of energy held by the moving object at the current location of the moving object. Specifically, the acquisition unit 201 acquires information (position information) related to the current location by calculating the current position of the device itself using, for example, GPS information received from a GPS satellite.
  • The acquisition unit 201 also acquires, as the initial stored energy amount, the remaining energy amount of the moving body managed by an electronic control unit (ECU: Electronic Control Unit), via an in-vehicle communication network operating according to a communication protocol such as CAN (Controller Area Network).
  • the acquisition unit 201 may acquire information on the speed of the moving body, traffic jam information, and moving body information.
  • the information regarding the speed of the moving body is the speed and acceleration of the moving body.
  • The acquisition unit 201 may also acquire information regarding roads from the map information stored in the storage unit.
  • the information on the road is, for example, a running resistance generated in the moving body due to the road type, road gradient, road surface condition, and the like.
  • the calculating unit 202 calculates an estimated energy consumption that is energy consumed when the moving body travels in a predetermined section.
  • the predetermined section is, for example, a section (hereinafter referred to as “link”) connecting one predetermined point on the road (hereinafter referred to as “node”) and another node adjacent to the one node.
  • The node may be, for example, an intersection or a refueling or charging stand, or a connection point between links separated by a predetermined distance.
  • the nodes and links constitute map information stored in the storage unit.
  • the map information includes, for example, vector data in which intersections (points), roads (lines and curves), regions (surfaces), colors for displaying these, and the like are digitized.
  • the calculation unit 202 estimates an estimated energy consumption amount in a predetermined section based on a consumption energy estimation formula including first information, second information, and third information. More specifically, the calculation unit 202 estimates an estimated energy consumption amount in a predetermined section based on information related to the speed of the moving object and the moving object information.
  • the moving body information is information that causes a change in the amount of energy consumed or recovered during traveling of the moving body, such as the weight of the moving body (including the number of passengers and the weight of the loaded luggage) and the weight of the rotating body.
  • the calculation unit 202 may estimate the estimated energy consumption amount in the predetermined section based on the consumption energy estimation formula further including the fourth information.
  • the energy consumption estimation formula is an estimation formula for estimating the energy consumption of the moving body in a predetermined section.
  • the energy consumption estimation formula is a polynomial composed of first information, second information, and third information, which are different factors that increase or decrease energy consumption. Further, when the road gradient is clear, fourth information is further added to the energy consumption estimation formula. Detailed description of the energy consumption estimation formula will be described later.
  • the first information is information about energy consumed when the moving body is stopped in a state where the drive source mounted on the moving body is in operation.
  • Idling means running the engine at a low speed such that no load is applied to the engine of the moving body; that is, idling occurs when the moving body is stopped while the drive source remains operable.
  • In the case of an EV vehicle, when the moving body is stopped with the drive source operable, the vehicle is at rest and the motor serving as the drive source starts to move as soon as the accelerator is depressed.
  • the first information is, for example, energy consumption consumed when the vehicle is stopped with the engine running or when it is stopped by a signal or the like. That is, the first information is an energy consumption amount consumed due to factors not related to the traveling of the moving body, and is an energy consumption amount due to an air conditioner or an audio provided in the moving body.
  • the first information may be substantially zero in the case of an EV vehicle.
  • the second information is information related to energy consumed and recovered during acceleration / deceleration of the moving body.
  • the time of acceleration / deceleration of the moving body is a traveling state in which the speed of the moving body changes with time.
  • the time of acceleration / deceleration of the moving body is a traveling state in which the speed of the moving body changes within a predetermined time.
  • the predetermined time is a time interval at regular intervals, for example, per unit time.
  • the recovered energy is, for example, electric power charged in a battery when the mobile body is traveling.
  • the recovered energy is, for example, fuel that can be saved by reducing (fuel cut) the consumed fuel.
  • the third information is information related to energy consumed by the resistance generated when the mobile object is traveling.
  • the traveling time of the moving body is a traveling state where the speed of the moving body is constant, accelerated or decelerated within a predetermined time.
  • the resistance generated when the mobile body travels is a factor that changes the travel state of the mobile body when the mobile body travels. Specifically, the resistance generated when the mobile body travels is various resistances generated in the mobile body due to weather conditions, road conditions, vehicle conditions, and the like.
  • the resistance generated in the moving body due to the weather condition is, for example, air resistance due to weather changes such as rain and wind.
  • the resistance generated in the moving body according to the road condition is road resistance due to road gradient, pavement state of road surface, water on the road surface, and the like.
  • the resistance generated in the moving body depending on the vehicle condition is a load resistance applied to the moving body due to tire air pressure, number of passengers, loaded weight, and the like.
  • the third information is energy consumption when the moving body is driven at a constant speed, acceleration or deceleration while receiving air resistance, road resistance, and load resistance. More specifically, the third information is consumed when the moving body travels at a constant speed, acceleration or deceleration, for example, air resistance generated in the moving body due to the head wind or road surface resistance received from a road that is not paved. Energy consumption.
  • the fourth information is information related to energy consumed and recovered by a change in altitude where the moving object is located.
  • the change in altitude at which the moving body is located is a state in which the altitude at which the moving body is located changes over time.
  • the change in altitude at which the moving body is located is a traveling state in which the altitude changes when the moving body travels on a sloped road within a predetermined time.
  • the fourth information is additional information that can be obtained when the road gradient in the predetermined section is clear, thereby improving the estimation accuracy of energy consumption.
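  • To make the roles of the first to fourth information concrete, the following is a minimal Python sketch of a per-unit-time consumption estimate organized around those four terms. The coefficient values, parameter names, and the exact form of each term are illustrative assumptions, not the estimation formula claimed in this document.

    import math

    def estimated_power_consumption(v, a, grade_rad, *,
                                    mass=1500.0,          # vehicle + passengers + load [kg] (assumed)
                                    rolling_coeff=0.015,  # rolling resistance coefficient (assumed)
                                    drag_coeff=0.30, frontal_area=2.2, air_density=1.2,
                                    idle_power=300.0,     # accessories while stopped [W] (assumed)
                                    efficiency=0.85,      # net drivetrain efficiency (assumed)
                                    regen_rate=0.6):      # fraction of braking energy recovered (assumed)
        """Illustrative per-unit-time energy consumption [W] of a moving body.

        The terms mirror the first to fourth information described above:
        idling consumption, acceleration/deceleration, running resistance
        (rolling and air), and the gradient (altitude change).
        """
        g = 9.8
        rolling = rolling_coeff * mass * g * math.cos(grade_rad) * v   # third information
        air = 0.5 * air_density * drag_coeff * frontal_area * v ** 3   # third information
        grade = mass * g * math.sin(grade_rad) * v                     # fourth information
        accel = mass * a * v                                           # second information
        traction = rolling + air + grade + accel                       # power at the wheels
        if traction >= 0:
            return idle_power + traction / efficiency                  # first information + driving
        return idle_power + traction * regen_rate                      # deceleration: energy partly recovered
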
  • the search unit 203 is based on the map information stored in the storage unit, the current location and initial stored energy amount of the mobile body acquired by the acquisition unit 201, and the estimated energy consumption calculated by the calculation unit 202. Search for a plurality of reachable points that can be reached from the current point.
  • Specifically, starting from the current location of the moving object, the search unit 203 searches the predetermined points and the predetermined sections connecting them, over all routes that the moving object can travel, so that the total estimated energy consumption along each route is minimized. The search unit 203 then treats, among all routes that the moving object can travel from its current location, every predetermined point whose total estimated energy consumption is within the initial stored energy amount of the mobile object as a reachable point.
  • More specifically, with the current location of the moving object as the starting point, the search unit 203 successively searches all links that can be traveled from the current location, the nodes connected to those links, all links that can be traveled from those nodes, and so on, covering all nodes and links that the moving object can reach.
  • Each time a new link is searched, the search unit 203 accumulates the estimated energy consumption along the route containing that link, and searches for the node connected to the link whose accumulated estimated energy consumption is minimal and for the plurality of links connected to that node.
  • When a node can be reached through more than one of the links connected to it, the estimated energy consumption from the current location of the moving object to that node is calculated using the link with the smaller accumulated estimated energy consumption.
  • Then, in each of the routes including the searched nodes and links, the search unit 203 treats every node whose accumulated energy consumption is within the initial stored energy amount of the mobile object as a reachable point.
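  • As an illustration of the energy-limited search described above, the following is a minimal Python sketch assuming the map has already been converted into an adjacency list with a pre-computed estimated energy consumption per link; the graph, node names, and energy values below are hypothetical.

    import heapq

    def search_reachable_points(graph, start, initial_energy):
        """Return the set of nodes whose minimum accumulated estimated energy
        consumption from the start point is within the initial stored energy.

        graph: dict mapping node -> list of (adjacent node, estimated consumption)
        """
        best = {start: 0.0}                         # minimum accumulated consumption per node
        queue = [(0.0, start)]
        while queue:
            used, node = heapq.heappop(queue)
            if used > best.get(node, float("inf")):
                continue                            # a cheaper route to this node is already known
            for neighbor, cost in graph.get(node, []):
                total = used + cost
                if total <= initial_energy and total < best.get(neighbor, float("inf")):
                    best[neighbor] = total          # keep the route with the smallest total
                    heapq.heappush(queue, (total, neighbor))
        return set(best)                            # every key is a reachable point

    # Hypothetical example: consumption per link in Wh, 350 Wh initially stored
    graph = {"A": [("B", 120), ("C", 300)], "B": [("D", 200)], "C": [("D", 50)]}
    print(search_reachable_points(graph, "A", 350))  # {'A', 'B', 'C', 'D'}
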
  • the search unit 203 may search for a reachable point by excluding a predetermined section in which the movement of the mobile object is prohibited from candidates for searching for the reachable point of the mobile object.
  • the predetermined section in which the movement of the moving body is prohibited is, for example, a link that is one-way reverse running, or a link that is a passage-prohibited section due to time restrictions or seasonal restrictions.
  • the time restriction is, for example, that traffic is prohibited in a certain time zone by being set as a school road or an event.
  • the seasonal restriction is, for example, that traffic is prohibited due to heavy rain or heavy snow.
  • Depending on the importance of the predetermined sections, the search unit 203 may also exclude other predetermined sections from the candidates when searching for the reachable points of the mobile object.
  • the importance of the predetermined section is, for example, a road type.
  • the road type is a type of road that can be distinguished by differences in road conditions such as legal speed, road gradient, road width, and presence / absence of signals.
  • The road type is, for example, a general national road, a highway, a general road, or a narrow street passing through an urban area.
  • a narrow street is, for example, a road defined in the Building Standard Law with a width of less than 4 meters in an urban area.
  • the search unit 203 moves all the areas constituting one bridge or one tunnel of the map information divided by the dividing unit 204. It is preferable to search for a reachable point of the moving body so as to be included in the reachable range of the body. Specifically, for example, when the entrance of one bridge or one tunnel is a reachable point of the moving body, the search unit 203 moves on the one bridge or one tunnel from the entrance of the one bridge or one tunnel toward the exit. You may search the said reachable point so that several reachable points may be searched.
  • the entrance of one bridge or one tunnel is the starting point of one bridge or one tunnel on the side close to the current position of the moving object.
  • The dividing unit 204 divides the map information into a plurality of areas. Specifically, based on the reachable point farthest from the current location of the moving object among the plurality of reachable points searched by the search unit 203, the dividing unit 204 divides the map information into a plurality of rectangular regions, for example converting it into a mesh of m × m dots. The m × m dot mesh is handled as raster data (image data) to which identification information is added by the assigning unit 205 described later. Note that the two values of m in m × m may be the same or different.
  • More specifically, the dividing unit 204 extracts the maximum longitude, minimum longitude, maximum latitude, and minimum latitude of the reachable points and calculates their distances from the current position of the moving object. The dividing unit 204 then divides the map information into a plurality of areas, for example using, as the size of one area, the length obtained by equally dividing into n the distance between the current position of the moving object and the reachable point farthest from it.
  • Based on the plurality of reachable points searched by the search unit 203, the assigning unit 205 gives each of the plurality of areas divided by the dividing unit 204 identification information identifying whether or not the mobile body can reach it. Specifically, when a reachable point of the moving object is included in one of the divided areas, the assigning unit 205 gives that area identification information identifying that the moving object can reach it; when no reachable point is included in an area, the assigning unit 205 gives that area identification information identifying that the moving object cannot reach it.
  • For example, the assigning unit 205 assigns reachable identification information "1" or unreachable identification information "0" to each area of the mesh divided into m × m, converting it into a mesh of two-dimensional matrix data with m rows and m columns.
  • the dividing unit 204 and the assigning unit 205 divide the map information in this way, convert it into a mesh of two-dimensional matrix data of m rows and m columns, and handle it as binarized raster data.
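  • The division into an m × m mesh and the assignment of "1"/"0" identification information can be sketched as follows in Python; the mesh size, the square bounding box derived from the farthest reachable point, and the use of longitude/latitude as planar coordinates are simplifying assumptions.

    import numpy as np

    def rasterize_reachable_points(points, current, m=64):
        """Divide the region around the current position into an m x m mesh and
        set 1 (reachable) for cells containing at least one reachable point.

        points: iterable of (longitude, latitude) reachable points
        current: (longitude, latitude) of the moving body
        """
        lons, lats = zip(*points)
        # half-width of the square region: distance to the farthest reachable point
        half = max(max(abs(x - current[0]) for x in lons),
                   max(abs(y - current[1]) for y in lats))
        half = max(half, 1e-9)                   # guard against a degenerate case
        cell = (2 * half) / m
        mesh = np.zeros((m, m), dtype=np.uint8)  # 0 = unreachable identification information
        for lon, lat in points:
            col = min(int((lon - (current[0] - half)) / cell), m - 1)
            row = min(int((lat - (current[1] - half)) / cell), m - 1)
            mesh[row, col] = 1                   # reachable identification information
        return mesh
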
  • The assigning unit 205 includes a first changing unit 251 and a second changing unit 252 that perform identification information changing processing on the plurality of areas divided by the dividing unit 204. Specifically, the assigning unit 205 treats the mesh obtained by dividing the map information as binarized raster data and performs a closing process (a reduction process after an expansion process) using the first changing unit 251 and the second changing unit 252. The assigning unit 205 may also perform an opening process (an expansion process after a reduction process) using the first changing unit 251 and the second changing unit 252.
  • When another area adjacent to one area to which identification information has been given holds reachable identification information, the first changing unit 251 changes the identification information of the one area to reachable (expansion process). More specifically, if "1", the identification information indicating reachability, is assigned to any of the regions adjacent to one rectangular region in the eight directions of lower left, lower, lower right, right, upper right, upper, upper left, and left, the first changing unit 251 changes the identification information of the one region to "1".
  • When another area adjacent to one area to which identification information has been given holds unreachable identification information, the second changing unit 252 changes the identification information of the one area to unreachable (reduction process). More specifically, if "0", the identification information indicating unreachability, is assigned to any of the regions adjacent to one rectangular region in the eight directions of lower left, lower, lower right, right, upper right, upper, upper left, and left, the second changing unit 252 changes the identification information of the one region to "0".
  • the expansion process by the first change unit 251 and the reduction process by the second change unit 252 are performed the same number of times.
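  • The closing and opening described above correspond to standard binary morphology on the binarized raster data. A minimal sketch using SciPy, with the 8-direction neighborhood as the structuring element, follows; the use of scipy.ndimage here is an implementation choice for the sketch, not something specified by this document.

    import numpy as np
    from scipy import ndimage

    # 8-direction neighborhood (lower left, lower, ..., left) as a 3 x 3 structuring element
    NEIGHBORHOOD8 = np.ones((3, 3), dtype=bool)

    def closing_then_opening(mesh, iterations=1):
        """Closing (expansion then reduction) removes missing points inside the
        reachable range; opening (reduction then expansion) removes isolated
        points. Expansion and reduction are applied the same number of times."""
        closed = ndimage.binary_closing(mesh.astype(bool), structure=NEIGHBORHOOD8,
                                        iterations=iterations)
        opened = ndimage.binary_opening(closed, structure=NEIGHBORHOOD8,
                                        iterations=iterations)
        return opened.astype(np.uint8)
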
  • In other words, among the plurality of areas divided by the dividing unit 204, the granting unit 205 gives reachable identification information to every area that includes a reachable point, that is, a point the moving body can reach from its current location. Thereafter, the assigning unit 205 gives reachable identification information to areas adjacent to the areas that already hold reachable identification information, changing the identification information of each area so that no missing points remain within the reachable range of the moving object.
  • Further, when reachable identification information is assigned to the divided areas corresponding to the entrance and the exit of one bridge or one tunnel in the map information, the assigning unit 205 assigns reachable identification information to the divided areas corresponding to all the areas constituting that bridge or tunnel.
  • Specifically, when reachable identification information is given to each of the areas corresponding to the entrance and the exit of one bridge or one tunnel, the granting unit 205 gives reachable identification information to all the areas through which the moving body can move from the area corresponding to the entrance to the area corresponding to the exit.
  • For example, if the reachable identification information "1" is given to each of the regions corresponding to the entrance and the exit of one bridge or one tunnel before the expansion processing by the first changing unit 251, the assigning unit 205 changes the identification information of the regions between them to "1".
  • The section connecting the area corresponding to the entrance of one bridge or tunnel with the area corresponding to the exit may be a section corresponding to a road including a plurality of curves, or a section corresponding to a single straight road.
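  • A minimal sketch of this bridge/tunnel handling follows, assuming the ordered list of mesh cells that the structure passes through has already been obtained from the map data (how those cells are derived is outside this sketch).

    def fill_bridge_or_tunnel(mesh, cells_along_structure):
        """If the cells containing the entrance and the exit of one bridge or one
        tunnel are both reachable, mark every cell along the structure as
        reachable so that it is not broken up in the displayed range.

        mesh: numpy array of identification information (1 = reachable)
        cells_along_structure: ordered (row, col) cells from entrance to exit
        """
        entrance, exit_ = cells_along_structure[0], cells_along_structure[-1]
        if mesh[entrance] == 1 and mesh[exit_] == 1:
            for row, col in cells_along_structure:
                mesh[row, col] = 1
        return mesh
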
  • the display control unit 206 causes the display unit 210 to display the reachable range of the mobile object together with the map information based on the identification information of the area to which the identification information is given by the granting unit 205. Specifically, the display control unit 206 converts a mesh, which is a plurality of image data to which identification information is added by the adding unit 205, into vector data, and displays the mesh on the display unit 210 together with the map information stored in the storage unit. .
  • FIG. 3 is a block diagram showing a detailed functional configuration example of the display control unit 206 shown in FIG.
  • the display control unit 206 includes a contour extraction unit 261, a complement unit 262, a conversion unit 263, a removal unit 264, an inverse conversion unit 265, and a thinning unit 266.
  • The contour extraction unit 261 extracts the contour of the reachable range of the moving object based on the positional relationship between one area to which reachable identification information is assigned and adjacent areas to which reachable identification information is assigned, and displays it on the display unit 210. More specifically, for example, the contour extraction unit 261 extracts contour data indicating the contour of the reachable range of the moving object using a Freeman chain code, and causes the display unit 210 to display the reachable range of the moving object.
  • In the contour data, the line segments parallel to the X axis and the Y axis, which are perpendicular to each other on the display screen, all have the same length.
  • Alternatively, the contour extraction unit 261 may extract the reachable range of the moving object based on the longitude and latitude information of the areas to which reachable identification information has been given and display it on the display unit 210. Specifically, for example, the contour extraction unit 261 searches each row of the two-dimensional matrix data of m rows and m columns, from the first column onward, for the reachable identification information "1". The display control unit 206 then searches the continuous areas containing the reachable identification information "1" in each row, and displays, as the reachable range of the moving object, a rectangular area whose diagonal is the line segment connecting the minimum longitude and minimum latitude (coordinates of the area where "1" is first detected) and the maximum longitude and maximum latitude (lower-right coordinates of the area where "1" is last detected).
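  • The chain-code-style contour extraction mentioned above can be sketched as follows in Python (Moore-neighbor tracing over the binarized mesh); the simple stopping criterion and the raster-scan choice of starting cell are simplifying assumptions of this sketch.

    import numpy as np

    # Moore neighborhood scanned clockwise, starting from "west" ((row, col) offsets)
    CLOCKWISE = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
                 (0, 1), (1, 1), (1, 0), (1, -1)]

    def trace_contour(mesh):
        """Trace the outer contour of the reachable cells (value 1) and return the
        ordered list of boundary cell coordinates (a Freeman-chain-code-style path)."""
        grid = np.pad(mesh, 1)                          # zero border so every neighbor exists
        start = tuple(int(v) for v in np.argwhere(grid == 1)[0])
        contour = [start]
        current, backtrack = start, 0                   # conceptually entered from the west
        for _ in range(4 * grid.size):                  # safety bound for this sketch
            for i in range(8):
                d = (backtrack + i) % 8
                nxt = (current[0] + CLOCKWISE[d][0], current[1] + CLOCKWISE[d][1])
                if grid[nxt] == 1:
                    # next scan starts at the background cell examined just before nxt
                    backtrack = ((d // 2) * 2 + 6) % 8
                    current = nxt
                    break
            else:
                break                                   # isolated single cell
            if current == start:                        # simple stopping criterion
                break
            contour.append(current)
        return [(r - 1, c - 1) for r, c in contour]     # remove the padding offset
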
  • The complementing unit 262 complements the line segment data connecting the outer peripheral points of the contour data. Specifically, for example, when a line segment is not parallel to either the X axis or the Y axis, the complementing unit 262 decomposes that line segment into an X-direction component line segment and a Y-direction component line segment. As a result, every line segment constituting the contour data has the same length.
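  • The complementing step can be sketched as splitting each diagonal step of the traced contour into an X-direction component followed by a Y-direction component, so that every segment has the same (unit) length; the ordering of the two components is an arbitrary choice made for this sketch.

    def complement_segments(contour):
        """Insert intermediate points so that every contour segment is parallel to
        the X or Y axis and of unit length."""
        points = [contour[0]]
        for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
            if r0 != r1 and c0 != c1:        # diagonal step: add an intermediate point
                points.append((r0, c1))      # X-direction component first (columns = X)
            points.append((r1, c1))
        return points
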
  • the conversion unit 263 performs frequency conversion of the two-dimensional contour data using Fourier transform. Specifically, for example, the conversion unit 263 performs frequency conversion of the contour data by fast Fourier transform. More specifically, the conversion unit 263 calculates a vector sequence having adjacent vertices (outer peripheral points) on the contour data as start points and end points. When the P-type Fourier descriptor is used, the conversion unit 263 decomposes each vector into a declination and a line segment length. Thereby, the shape feature amount of the outline is obtained. Since the length of the line segment is constant, the transform unit 263 performs fast Fourier transform on the array of declinations.
  • the removal unit 264 removes the high frequency component from the conversion result converted by the conversion unit 263. Specifically, for example, the removal unit 264 removes a frequency component that is equal to or higher than a preset cutoff frequency from the conversion result. More specifically, the removal unit 264 removes the high frequency component by passing the low-frequency filter through the declination frequency component obtained by the conversion unit 263. Further, the user can adjust the smoothness of the contour data by changing the cutoff frequency.
  • the inverse conversion unit 265 returns the conversion result after removal by the removal unit 264 to the contour data.
  • the inverse transformation unit 265 returns the transformation result after removal by the removal unit 264 to the contour data by inverse fast Fourier transformation. Thereby, smoother contour data than the contour data before conversion can be obtained.
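  • The conversion, removal, and inverse conversion can be sketched as follows with NumPy, assuming the contour is given as equally spaced vertices. Handling the declination as a unit complex number is one common way of realizing a P-type-style descriptor and is an implementation choice made for this sketch; the cutoff is given as a fraction of the spectrum rather than an absolute frequency.

    import numpy as np

    def smooth_contour(points, cutoff_ratio=0.1):
        """Low-pass filter the declination sequence of a closed, equally spaced
        contour and rebuild the smoothed contour."""
        z = np.array([complex(x, y) for x, y in points])
        vectors = np.roll(z, -1) - z                    # segment vectors between adjacent vertices
        length = np.abs(vectors).mean()                 # segment length (constant by construction)
        unit = np.exp(1j * np.angle(vectors))           # declination kept as a unit complex number

        spectrum = np.fft.fft(unit)                     # frequency conversion (FFT)
        keep = max(1, int(len(spectrum) * cutoff_ratio))
        spectrum[keep:len(spectrum) - keep] = 0         # remove components above the cutoff
        filtered = np.fft.ifft(spectrum)                # inverse conversion (inverse FFT)
        filtered /= np.maximum(np.abs(filtered), 1e-12) # keep only the (smoothed) declination

        # rebuild the contour by chaining equal-length segments with smoothed declinations
        steps = length * filtered
        rebuilt = z[0] + np.concatenate(([0], np.cumsum(steps)[:-1]))
        return [(p.real, p.imag) for p in rebuilt]      # may not close exactly; acceptable for display
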
  • The thinning unit 266 thins out vertices connecting adjacent line segments, for example vertices at which the absolute value of the declination is smaller than a predetermined value. Thereafter, the thinning unit 266 corrects the contour data by connecting the remaining end vertices of the adjacent line segments. In this way, smoother contour data can be obtained by simple processing.
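  • The thinning of vertices whose declination change is small can be sketched as follows; the threshold value is an arbitrary example.

    import numpy as np

    def thin_vertices(points, angle_threshold=0.05):
        """Drop vertices where the contour turns by less than angle_threshold [rad],
        then reconnect the remaining vertices (assumes distinct, ordered points)."""
        z = np.array([complex(x, y) for x, y in points])
        incoming = z - np.roll(z, 1)                     # vector arriving at each vertex
        outgoing = np.roll(z, -1) - z                    # vector leaving each vertex
        turn = np.angle(outgoing / incoming)             # declination change at the vertex
        kept = [p for p, t in zip(points, turn) if abs(t) >= angle_threshold]
        return kept if len(kept) >= 3 else list(points)  # keep at least a triangle
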
  • FIG. 4 is a flowchart illustrating an example of an image processing procedure performed by the image processing apparatus.
  • In the flowchart of FIG. 4, the image processing apparatus 200 first acquires, by the acquisition unit 201, information on the current location of the moving body and information on the initial stored energy amount held by the moving body at that location (steps S401 and S402). At this time, the image processing apparatus 200 may also acquire moving body information.
  • Next, the image processing apparatus 200 uses the calculation unit 202 to calculate the estimated energy consumption, which is the energy consumed when the moving body travels a predetermined section (step S403). At this time, the image processing apparatus 200 calculates the estimated energy consumption for a plurality of predetermined sections connecting predetermined points on the paths of the moving body. The image processing apparatus 200 then uses the search unit 203 to search for reachable points based on the map information stored in the storage unit and on the initial stored energy amount and estimated energy consumption obtained in steps S402 and S403 (step S404).
  • the image processing apparatus 200 divides the map information made up of vector data into a plurality of regions by the dividing unit 204 and converts it into a mesh made up of raster data (step S405).
  • Next, based on the plurality of reachable points searched in step S404, the image processing apparatus 200 uses the assigning unit 205 to give reachable or unreachable identification information to the plurality of regions divided in step S405 (step S406).
  • the image processing apparatus 200 causes the display control unit 206 to display the reachable range of the moving object on the display unit 210 based on the identification information of the plurality of areas to which the identification information is assigned in step S406 (step S407).
  • the process according to the flowchart ends.
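  • Tying steps S401 to S407 together, an illustrative flow using the sketches above might look as follows; build_energy_graph, node_position, and draw_polygon are hypothetical helpers standing in for the map handling and display parts not shown here.

    def image_processing(current_node, initial_energy, map_info, display):
        """Illustrative flow of steps S401 to S407 (not the exact implementation)."""
        # S401-S402: current location and initial stored energy amount are given as arguments
        # S403: estimated energy consumption per link (calculation unit)
        graph = build_energy_graph(map_info, estimated_power_consumption)    # hypothetical
        # S404: search for reachable points (search unit)
        reachable = search_reachable_points(graph, current_node, initial_energy)
        points = [map_info.node_position(n) for n in reachable]              # hypothetical
        # S405-S406: divide into a mesh and give identification information
        mesh = rasterize_reachable_points(points, map_info.node_position(current_node))
        mesh = closing_then_opening(mesh)
        # S407: extract, smooth, thin, and display the contour of the reachable range
        contour = smooth_contour(complement_segments(trace_contour(mesh)))
        display.draw_polygon(thin_vertices(contour))                         # hypothetical
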
  • the image processing apparatus 200 divides the map information into a plurality of areas, searches for each area to determine whether or not the moving body can be reached, and moves the moving body to each area. Reachable or unreachable identification information that identifies whether the object is reachable or unreachable is assigned. Then, the image processing apparatus 200 generates a reachable range of the moving object based on the region to which reachable identification information is assigned. For this reason, the image processing apparatus 200 can generate the reachable range of the moving object in a state excluding areas where the moving object cannot travel, such as the sea, lakes, and mountain ranges. Therefore, the image processing apparatus 200 can accurately display the reachable range of the moving object.
  • the image processing apparatus 200 converts a plurality of areas obtained by dividing the map information into image data, and assigns identification information indicating that each of the plurality of areas is reachable or unreachable, and then performs a closing expansion process. For this reason, the image processing apparatus 200 can remove the missing points within the reachable range of the moving object.
  • the image processing apparatus 200 converts the plurality of areas obtained by dividing the map information into image data, and assigns identification information indicating reachability or unreachability to the plurality of areas, and then performs an opening reduction process. Therefore, the image processing apparatus 200 can remove isolated points in the reachable range of the moving object.
  • Since the image processing apparatus 200 can remove missing points and isolated points in the reachable range of the moving body, the travelable range of the moving body can be displayed as a smooth two-dimensional surface in an easy-to-read manner. Further, because the image processing apparatus 200 extracts the outline of the mesh generated by dividing the map information into a plurality of regions, it can display the outline of the reachable range of the moving object smoothly.
  • The image processing apparatus 200 also searches for the reachable points of the mobile object after narrowing down the roads on which reachable points are searched. For this reason, the image processing apparatus 200 can reduce the amount of processing required to search for the reachable points of the mobile body. Even if the number of searched reachable points is reduced by this narrowing down, the expansion process of the closing described above removes the missing points that would otherwise appear within the reachable range of the mobile object. Therefore, the image processing apparatus 200 can reduce the amount of processing for generating the reachable range of the moving object while still displaying the travelable range as a smooth, easy-to-see two-dimensional surface.
  • Example 1 of the present invention will be described.
  • In Example 1, the navigation apparatus 500 mounted on a vehicle is described as an example of the image processing apparatus 200.
  • FIG. 5 is a block diagram illustrating a hardware configuration of the navigation apparatus.
  • a navigation device 500 includes a CPU 501, ROM 502, RAM 503, magnetic disk drive 504, magnetic disk 505, optical disk drive 506, optical disk 507, audio I / F (interface) 508, microphone 509, speaker 510, input device 511, A video I / F 512, a display 513, a camera 514, a communication I / F 515, a GPS unit 516, and various sensors 517 are provided.
  • the components 501 to 517 are connected by a bus 520, respectively.
  • the CPU 501 governs overall control of the navigation device 500.
  • the ROM 502 stores programs such as a boot program, an estimated energy consumption calculation program, a reachable point search program, an identification information addition program, and a map data display program.
  • the RAM 503 is used as a work area for the CPU 501. That is, the CPU 501 controls the entire navigation device 500 by executing various programs recorded in the ROM 502 while using the RAM 503 as a work area.
  • In the estimated energy consumption calculation program, the estimated energy consumption in a link connecting one node and an adjacent node is calculated based on an energy consumption estimation formula for calculating the estimated energy consumption of the vehicle.
  • In the reachable point search program, a plurality of points (nodes) that can be reached with the remaining energy amount at the current location of the vehicle are searched based on the estimated energy consumption calculated by the estimation program.
  • In the identification information addition program, identification information identifying whether or not the vehicle can reach each of a plurality of areas obtained by dividing the map information is assigned based on the plurality of reachable points searched by the search program.
  • In the map data display program, the reachable range of the vehicle is displayed on the display 513 based on the plurality of areas to which identification information has been given by the identification information addition program.
  • the magnetic disk drive 504 controls the reading / writing of the data with respect to the magnetic disk 505 according to control of CPU501.
  • the magnetic disk 505 records data written under the control of the magnetic disk drive 504.
  • For example, an HD (hard disk) or an FD (flexible disk) can be used as the magnetic disk 505.
  • the optical disk drive 506 controls reading / writing of data with respect to the optical disk 507 according to the control of the CPU 501.
  • the optical disk 507 is a detachable recording medium from which data is read according to the control of the optical disk drive 506.
  • a writable recording medium can be used as the optical disc 507.
  • an MO, a memory card, or the like can be used as a detachable recording medium.
  • Examples of information recorded on the magnetic disk 505 and the optical disk 507 include map data, vehicle information, road information, travel history, and the like.
  • The map data is used to search for the reachable points of the vehicle and to display the reachable range of the vehicle in the car navigation system.
  • The map data is vector data including background data representing features such as buildings, rivers, and the ground surface, and road shape data expressing the shape of roads with links and nodes.
  • the voice I / F 508 is connected to a microphone 509 for voice input and a speaker 510 for voice output.
  • the sound received by the microphone 509 is A / D converted in the sound I / F 508.
  • the microphone 509 is installed in a dashboard portion of a vehicle, and the number thereof may be one or more. From the speaker 510, a sound obtained by D / A converting a predetermined sound signal in the sound I / F 508 is output.
  • Examples of the input device 511 include a remote controller, a keyboard, and a touch panel that are provided with a plurality of keys for inputting characters, numerical values, various instructions, and the like.
  • the input device 511 may be realized by any one form of a remote control, a keyboard, and a touch panel, but may be realized by a plurality of forms.
  • the video I / F 512 is connected to the display 513.
  • The video I/F 512 includes, for example, a graphic controller that controls the entire display 513, a buffer memory such as a VRAM (Video RAM) that temporarily records image information that can be displayed immediately, and a control IC that controls the display 513 based on the image data output from the graphic controller.
  • the display 513 displays icons, cursors, menus, windows, or various data such as characters and images.
  • a TFT liquid crystal display, an organic EL display, or the like can be used as the display 513.
  • the camera 514 captures images inside or outside the vehicle.
  • the image may be either a still image or a moving image.
  • For example, the outside of the vehicle is photographed by the camera 514, and the photographed image is analyzed by the CPU 501 or output via the video I/F 512 to a recording medium such as the magnetic disk 505 or the optical disk 507.
  • the communication I / F 515 is connected to the network via wireless and functions as an interface between the navigation device 500 and the CPU 501.
  • Communication networks that function as networks include in-vehicle communication networks such as CAN and LIN (Local Interconnect Network), public line networks and mobile phone networks, DSRC (Dedicated Short Range Communication), LAN, and WAN.
  • the communication I / F 515 is, for example, a public line connection module, an ETC (non-stop automatic fee payment system) unit, an FM tuner, a VICS (Vehicle Information and Communication System) / beacon receiver, or the like.
  • the GPS unit 516 receives radio waves from GPS satellites and outputs information indicating the current position of the vehicle.
  • the output information of the GPS unit 516 is used when the current position of the vehicle is calculated by the CPU 501 together with output values of various sensors 517 described later.
  • the information indicating the current position is information for specifying one point on the map data, such as latitude / longitude and altitude.
  • Various sensors 517 output information for judging the position and behavior of the vehicle, such as a vehicle speed sensor, an acceleration sensor, an angular velocity sensor, and a tilt sensor.
  • the output values of the various sensors 517 are used by the CPU 501 to calculate the current position of the vehicle and to calculate the amount of change in speed and direction.
  • The functions of the acquisition unit 201, the calculation unit 202, the search unit 203, the division unit 204, the assigning unit 205, and the display control unit 206 of the image processing apparatus 200 illustrated in FIG. 2 are realized by the CPU 501 executing predetermined programs using the programs and data recorded in the ROM 502, the RAM 503, the magnetic disk 505, the optical disk 507, and the like of the navigation device 500, and thereby controlling each unit in the navigation device 500.
  • the navigation apparatus 500 of the present embodiment calculates the estimated energy consumption of the vehicle on which the own apparatus is mounted.
  • Specifically, the navigation device 500 calculates the estimated energy consumption of the vehicle in a predetermined section using one or more energy consumption estimation formulas that include the first information, the second information, and the third information, based on, for example, the speed and acceleration of the vehicle and the road gradient.
  • the predetermined section is a link connecting one node (for example, an intersection) on the road and another node adjacent to the one node.
  • Specifically, the navigation device 500 calculates the travel time required for the vehicle to finish traveling a link, based on traffic jam information provided by probes, traffic jam prediction data acquired via a server, the link length and road type stored in the storage device, and the like. The navigation device 500 then calculates the estimated energy consumption per unit time using one of the energy consumption estimation formulas shown in equations (1) and (2) below, and from this calculates the estimated energy consumption for the vehicle to finish traveling the link in that travel time.
  • the energy consumption estimation formula shown in the above equation (1) is a theoretical formula for estimating the energy consumption per unit time during acceleration and traveling.
  • in the above equation (1), the respective coefficients represent the net thermal efficiency and the total transmission efficiency.
  • the case where the acceleration is negative is expressed by the above equation (2).
  • the energy consumption estimation formula shown in the above equation (2) is a theoretical formula for estimating the energy consumption per unit time during deceleration.
  • the energy consumption estimation formula per unit time during acceleration / deceleration and travel is expressed by the product of travel resistance, travel distance, net motor efficiency, and transmission efficiency.
  • the first term on the right side is the energy consumption (first information) during idling.
  • the second term on the right side is the energy consumption (fourth information) due to the gradient component and the energy consumption (third information) due to the rolling resistance component.
  • the third term on the right side is energy consumption (third information) due to the air resistance component.
  • the fourth term on the right side of the equation (1) is the energy consumption (second information) by the acceleration component.
  • the fourth term on the right side of equation (2) is the energy consumption (second information) due to the deceleration component.
  • the case where the acceleration is positive, that is, the empirical formula for calculating the estimated energy consumption per unit time during acceleration and traveling, is expressed by the following equation (3).
  • the case where the acceleration is negative, that is, the empirical formula for calculating the estimated energy consumption per unit time during deceleration, is expressed by the following equation (4).
  • the coefficients a1 and a2 are constants set according to the vehicle situation.
  • the coefficients k1, k2, and k3 are variables based on the energy consumption during acceleration and on the speed V; the other variables are the same as in the above equations (1) and (2).
  • the first term on the right side corresponds to the first term on the right side of the above equations (1) and (2).
  • the second term on the right side corresponds to the energy of the gradient resistance component in the second term on the right side and the energy of the acceleration resistance component in the fourth term on the right side of the above equations (1) and (2).
  • the third term on the right side corresponds to the energy of the rolling resistance component in the second term on the right side and the energy of the air resistance component in the third term on the right side in the above equations (1) and (2).
  • the coefficient in the second term on the right side of the equation (4) is the rate at which potential energy and kinetic energy are recovered (hereinafter referred to as the "recovery rate").
  • the navigation device 500 calculates the travel time required for the vehicle to travel the link as described above, and calculates the average speed and average acceleration when the vehicle travels the link. Then, using the average speed and average acceleration of the vehicle on the link, the navigation device 500 may calculate the estimated energy consumption when the vehicle finishes traveling the link in the travel time, based on the energy consumption estimation formula shown in the following equation (5) or (6).
  • the energy consumption estimation formula shown in the above equation (5) is a theoretical formula for calculating the estimated energy consumption in the link when the altitude difference Δh of the link on which the vehicle travels is positive.
  • the case where the altitude difference Δh is positive is a case where the vehicle is traveling uphill.
  • the energy consumption estimation formula shown in the above equation (6) is a theoretical formula for calculating the estimated energy consumption in the link when the altitude difference Δh of the link on which the vehicle travels is negative.
  • the case where the altitude difference Δh is negative is a case where the vehicle is traveling downhill.
  • the first term on the right side is the energy consumption (first information) during idling.
  • the second term on the right side is the energy consumption (second information) by the acceleration resistance.
  • the third term on the right side is energy consumption consumed as potential energy (fourth information).
  • the fourth term on the right side is the energy consumption (third information) due to the air resistance and rolling resistance (running resistance) received per unit area.
  • the recovery rate used in the above equations (1) to (6) will now be described.
  • the energy consumption P_acc of the acceleration component is a value obtained by subtracting the energy consumption during idling (first term on the right side) and the energy consumption due to running resistance (fourth term on the right side) from the total energy consumption of the link (left side), and is expressed by the following equation (7).
  • the recovery rate is about 0.7 to 0.9 for EV vehicles, about 0.6 to 0.8 for HV vehicles, and about 0.2 to 0.3 for gasoline vehicles.
  • the recovery rate of the gasoline vehicle is the ratio between the energy required for acceleration and the energy recovered during deceleration.
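  • as a rough illustration of how such a per-link estimate can be composed from the term structure described above (idling, acceleration, rolling and air resistance, gradient, and a recovery rate applied during deceleration), the following is a minimal sketch in Python. The coefficient names and values (k_idle, k_acc, k_roll, k_air, k_grade) are hypothetical placeholders and do not reproduce equations (1) to (7); only the structure of the terms follows the description above.

      # Minimal sketch of a per-link energy estimate mirroring the term structure
      # described above; all coefficients are hypothetical placeholders.
      def estimate_link_energy_wh(avg_speed_mps, avg_accel_mps2, grade, travel_time_s,
                                  k_idle=0.3, k_acc=0.5, k_roll=0.02, k_air=0.001,
                                  k_grade=2.0, recovery_rate=0.8):
          """Return a rough estimated energy consumption [Wh] for one link."""
          idling = k_idle * travel_time_s                                  # first information
          accel = k_acc * avg_accel_mps2 * avg_speed_mps * travel_time_s   # second information
          rolling = k_roll * avg_speed_mps * travel_time_s                 # third information (rolling)
          air = k_air * avg_speed_mps ** 3 * travel_time_s                 # third information (air drag)
          gradient = k_grade * grade * avg_speed_mps * travel_time_s       # fourth information
          if avg_accel_mps2 < 0:
              # During deceleration only part of the kinetic energy is recovered
              # (recovery rate of roughly 0.7-0.9 for EVs, as noted above).
              accel *= recovery_rate
          total = idling + accel + rolling + air + gradient                # pseudo-joules
          return max(total, 0.0) / 3600.0                                  # treat as J -> Wh

      if __name__ == "__main__":
          # Example: a 20 s link at about 10 m/s average speed on a slight uphill.
          print(round(estimate_link_energy_wh(10.0, 0.2, 0.02, 20.0), 3))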
  • the navigation device 500 searches for a plurality of nodes that can be reached from the current location of the vehicle on which the device is mounted, as reachable points of the vehicle. Specifically, the navigation device 500 calculates the estimated energy consumption in each link using any one or more of the energy consumption estimation formulas shown in the above equations (1) to (6). Then, the navigation device 500 searches for the nodes reachable by the vehicle and sets them as reachable points so that the total of the estimated energy consumption over the links is minimized. An example of the reachable point search by the navigation device 500 is described below.
  • FIGS. 6 to 9 are explanatory diagrams schematically showing an example of the reachable point search by the navigation device 500.
  • in FIGS. 6 to 9, the nodes of the map data (for example, intersections) are indicated by circles, and the links (predetermined sections of road) connecting adjacent nodes are indicated by line segments (the same applies to the nodes and links shown in FIGS. 10 and 11).
  • the navigation apparatus 500 first searches for the link L1_1 that is closest to the current location 600 of the vehicle. Then, navigation device 500 searches for node N1_1 connected to link L1_1, and adds it to a node candidate for searching for a reachable point (hereinafter simply referred to as “node candidate”).
  • the navigation apparatus 500 calculates the estimated energy consumption in the link L1_1 that connects the current location 600 of the vehicle and the node N1_1 that is the node candidate using the consumption energy estimation formula. Then, the navigation device 500 writes the estimated energy consumption 3wh in the link L1_1 to the storage device (magnetic disk 505 or optical disk 507) in association with the node N1_1, for example.
  • the navigation device 500 searches for all the links L2_1, L2_2, and L2_3 connected to the node N1_1 as link candidates for searching for reachable points (hereinafter simply referred to as "link candidates").
  • the navigation apparatus 500 calculates the estimated energy consumption amount in the link L2_1 using the energy consumption estimation formula.
  • the navigation device 500 associates the cumulative energy amount 7wh, obtained by accumulating the estimated energy consumption 4wh in the link L2_1 and the estimated energy consumption 3wh in the link L1_1, with the node N2_1 connected to the link L2_1, and writes it to the storage device (the magnetic disk 505 or the optical disk 507) (hereinafter, this operation is referred to as "setting the cumulative energy amount to the node").
  • the navigation apparatus 500 calculates the estimated energy consumption in the links L2_2 and L2_3, respectively, using the energy consumption estimation formula as in the case of the link L2_1. Then, the navigation apparatus 500 sets the accumulated energy amount 8wh obtained by accumulating the estimated energy consumption amount 5wh in the link L2_2 and the estimated energy consumption amount 3wh in the link L1_1 to the node N2_2 connected to the link L2_2.
  • the navigation apparatus 500 sets the accumulated energy amount 6wh obtained by accumulating the estimated energy consumption amount 3wh in the link L2_3 and the estimated energy consumption amount 3wh in the link L1_1 to the node N2_3 connected to the link L2_3. At this time, if the node for which the cumulative energy amount is set is not a node candidate, navigation device 500 adds the node to the node candidate.
  • the navigation apparatus 500 includes all links L3_1 and L3_2_1 connected to the node N2_1, all links L3_2_2, L3_3 and L3_4 connected to the node N2_2, and a link L3_5 connected to the node N2_3. Search for link candidates. Next, the navigation apparatus 500 calculates the estimated energy consumption in the links L3_1 to L3_5 using the consumption energy estimation formula.
  • the navigation apparatus 500 accumulates the estimated energy consumption 4wh in the link L3_1 to the accumulated energy amount 7wh set in the node N2_1, and sets the accumulated energy amount 11wh in the node N3_1 connected to the link L3_1.
  • similarly to the case of the link L3_1, the navigation device 500 sets the cumulative energy amounts 13wh, 12wh, and 10wh in the nodes N3_3 to N3_5 connected to the links L3_3 to L3_5, respectively.
  • the navigation apparatus 500 accumulates the estimated energy consumption 5wh in the link L3_3 to the accumulated energy amount 8wh set in the node N2_2, and sets the accumulated energy amount 13wh in the node N3_3.
  • the navigation device 500 accumulates the estimated energy consumption 4wh in the link L3_4 to the accumulated energy amount 8wh set in the node N2_2, and sets the accumulated energy amount 12wh in the node N3_4.
  • the navigation device 500 accumulates the estimated energy consumption 4wh in the link L3_5 to the accumulated energy amount 6wh set in the node N2_3, and sets the accumulated energy amount 10wh in the node N3_5.
  • when there are a plurality of routes from the vehicle current point 600 to one node N3_2, the navigation device 500 sets the minimum cumulative energy amount among those routes, 10wh, in the one node N3_2.
  • when there are a plurality of nodes of the same hierarchy from the current location 600 of the vehicle, such as the nodes N2_1 to N2_3 described above, the navigation device 500 calculates the estimated energy consumption and the cumulative energy amount in order, for example starting from the links connected to the node having the lowest cumulative energy amount among the nodes at the same level.
  • the navigation device 500 calculates the estimated energy consumption in the links connected to each node in the order of the node N2_3, the node N2_1, and the node N2_2, and accumulates the accumulated energy amount in each node.
  • the navigation apparatus 500 continues to accumulate the accumulated energy amount as described above from the nodes N3_1 to N3_5 to the deeper level nodes. Then, the navigation device 500 extracts all nodes set with a cumulative energy amount equal to or less than a preset designated energy amount as reachable points of the vehicle, and obtains longitude / latitude information of the nodes extracted as reachable points. Write to the storage device in association with each node.
  • for example, as indicated by the shaded circles in FIG. 9, the navigation device 500 extracts the nodes N1_1, N2_1, N2_2, N2_3, N3_2, and N3_5, to which a cumulative energy amount of 10wh or less is set, as reachable points of the vehicle.
  • the designated energy amount set in advance is, for example, the remaining energy amount (initial stored energy amount) at the current point 600 of the vehicle.
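  • the accumulation procedure illustrated in FIGS. 6 to 9 behaves like a Dijkstra-style search in which link costs are estimated energy amounts and every node whose minimum cumulative energy does not exceed the designated energy amount is kept as a reachable point. A minimal sketch follows; the adjacency-dictionary graph representation and node names are illustrative, and the link energies are chosen to mirror the example values in the figures where the text gives them (so that the minimum cumulative energy of N3_2 comes out as 10wh).

      import heapq

      def reachable_points(graph, start, designated_energy_wh):
          """Return {node: minimum cumulative energy} for every node whose cumulative
          energy does not exceed the designated energy amount (cf. extracting all
          nodes with a cumulative energy amount of 10wh or less in FIG. 9)."""
          best = {start: 0.0}
          heap = [(0.0, start)]
          while heap:
              acc, node = heapq.heappop(heap)
              if acc > best.get(node, float("inf")):
                  continue  # a cheaper route to this node was already settled
              for neighbor, link_energy in graph.get(node, []):
                  new_acc = acc + link_energy
                  if new_acc <= designated_energy_wh and new_acc < best.get(neighbor, float("inf")):
                      best[neighbor] = new_acc            # set cumulative energy amount to node
                      heapq.heappush(heap, (new_acc, neighbor))
          best.pop(start, None)                           # the current location itself is not a node candidate
          return best

      if __name__ == "__main__":
          # Hypothetical graph mirroring FIGS. 6 to 9 (estimated link energies in Wh).
          graph = {
              "P600": [("N1_1", 3)],
              "N1_1": [("N2_1", 4), ("N2_2", 5), ("N2_3", 3)],
              "N2_1": [("N3_1", 4), ("N3_2", 3)],
              "N2_2": [("N3_2", 2), ("N3_3", 5), ("N3_4", 4)],
              "N2_3": [("N3_5", 4)],
          }
          print(reachable_points(graph, "P600", 10))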
  • FIG. 9 is merely an example for explaining the reachable point search; in practice, as shown in FIG. 10, the navigation device 500 searches more nodes and links over a wider range than the map data 900 shown in FIG. 9.
  • FIG. 10 is an explanatory diagram showing an example of reachable point search by the navigation device 500.
  • when the cumulative energy amount is calculated continuously for all roads (excluding narrow streets), the cumulative energy amount at every node of each road can be searched in detail without omission, as shown in FIG. 10.
  • in that case, however, the estimated energy consumption of roughly two million links throughout Japan has to be calculated and accumulated, and the information processing amount of the navigation device 500 becomes enormous. For this reason, the navigation device 500 may narrow down the roads on which it searches for reachable points of the moving body, for example based on the importance of each link.
  • FIG. 11 is an explanatory diagram showing another example of the reachable point search by the navigation device 500.
  • in the example of FIG. 11, the navigation device 500 calculates the cumulative energy amount on all roads (excluding narrow streets) in the vicinity of the current point 600 of the vehicle, and calculates the cumulative energy amount only on roads of high importance beyond a certain distance.
  • the number of nodes and the number of links searched by the navigation device 500 can be reduced, and the information processing amount of the navigation device 500 can be reduced. Therefore, the processing speed of the navigation device 500 can be improved.
  • the navigation device 500 divides the map data stored in the storage device based on the reachable point searched as described above. Specifically, the navigation device 500 converts map data composed of vector data into, for example, a 64 ⁇ 64 dot mesh (X, Y), and converts the map data into raster data (image data).
  • FIG. 12 is an explanatory diagram of an example in which a reachable point by the navigation device 500 is indicated by longitude-latitude.
  • FIG. 13 is an explanatory diagram of an example in which the reachable points by the navigation device 500 are indicated by meshes.
  • FIG. 12 shows, for example, longitude and latitude information (x, y) of reachable points searched as shown in FIGS. 10 and 11 in absolute coordinates.
  • in FIG. 13, a 64 × 64 dot mesh (X, Y) to which identification information is assigned based on the reachable points is illustrated in screen coordinates.
  • the navigation device 500 first generates longitude/latitude information (x, y) having a point group 1200 in absolute coordinates, based on the longitude x and latitude y of each of the plurality of reachable points.
  • the origin (0, 0) of the longitude / latitude information (x, y) is at the lower left of FIG.
  • the navigation device 500 calculates the distances w1 and w2 from the longitude x of the current point 600 of the vehicle to the maximum longitude x_max and the minimum longitude x_min of the reachable points farthest in the longitude x direction.
  • the navigation device 500 also calculates the distances w3 and w4 from the latitude y of the current point 600 of the vehicle to the maximum latitude y_max and the minimum latitude y_min of the reachable points farthest in the latitude y direction.
  • next, the navigation device 500 obtains the longest of the distances w1 to w4 from the vehicle current point 600, w5 = max(w1, w2, w3, w4) (in FIG. 12, the distance from the current point 600 to the minimum longitude x_min), and configures a mesh (X, Y) of m × m dots centered on the current location 600 of the vehicle such that a length of 1/n of this longest distance becomes the length of one side of one rectangular element of the mesh, so that the mesh covers the map data including the plurality of reachable points.
  • when the navigation device 500 converts the longitude/latitude information (x, y) into the mesh (X, Y), it assigns identification information to each area of the mesh (X, Y) and converts the map data into a mesh of two-dimensional matrix data (Y, X) of m rows and m columns.
  • to an area of the mesh (X, Y) that includes a reachable point, the navigation device 500 assigns, for example, "1" as reachable identification information identifying that the vehicle can reach that area (in FIG. 13, such a dot is drawn in black, for example).
  • to an area of the mesh (X, Y) that does not include a reachable point, the navigation device 500 assigns, for example, "0" as unreachable identification information identifying that the vehicle cannot reach that area (in FIG. 13, such a dot is drawn in white, for example).
  • in this way, the navigation device 500 converts the map data into a mesh of two-dimensional matrix data (Y, X) of m rows and m columns in which identification information is assigned to each area obtained by dividing the map data, so that the map data is treated as binarized raster data.
  • each area of the mesh is represented by a rectangular area within a certain range. Specifically, as shown in FIG. 13, for example, a mesh (X, Y) of m × m dots in which a point group 1300 of a plurality of reachable points is drawn in black is generated. The origin (0, 0) of the mesh (X, Y) is at the upper left.
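  • a minimal sketch of the conversion of reachable-point longitude/latitude pairs into an m × m binary mesh follows. Mapping the bounding box of the current point and the reachable points onto the mesh, with the mesh origin (0, 0) at the upper left, is an assumption consistent with the description above; the variable names are illustrative.

      def rasterize_reachable_points(points, current, m=64):
          """Convert reachable points given as (longitude x, latitude y) pairs into an
          m x m mesh of two-dimensional matrix data (Y, X) holding 0/1 identification
          information (1 = reachable), with the origin (0, 0) at the upper left."""
          xs = [x for x, _ in points] + [current[0]]
          ys = [y for _, y in points] + [current[1]]
          x_min, x_max = min(xs), max(xs)
          y_min, y_max = min(ys), max(ys)
          dx = (x_max - x_min) or 1.0          # avoid division by zero for degenerate input
          dy = (y_max - y_min) or 1.0
          mesh = [[0] * m for _ in range(m)]   # mesh[Y][X]
          for x, y in points:
              X = int((x - x_min) / dx * (m - 1))
              Y = int((y_max - y) / dy * (m - 1))   # latitude grows upward, Y grows downward
              mesh[Y][X] = 1                        # reachable identification information "1"
          return mesh

      if __name__ == "__main__":
          pts = [(139.70, 35.68), (139.72, 35.69), (139.69, 35.66)]
          for row in rasterize_reachable_points(pts, current=(139.71, 35.67), m=8):
              print("".join(str(v) for v in row))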
  • the navigation apparatus 500 of the present embodiment changes the identification information given to each area of the m ⁇ m dot mesh (X, Y) divided as described above. Specifically, the navigation apparatus 500 performs a closing process (a process of performing a reduction process after the expansion process) on a mesh of two-dimensional matrix data (Y, X) of m rows and m columns.
  • FIG. 14 is an explanatory diagram showing an example of the closing process by the navigation device.
  • FIGS. 14A to 14C are meshes of two-dimensional matrix data (Y, X) of m rows and m columns in which identification information is assigned to each region.
  • FIG. 14A shows a mesh 1400 to which identification information is given for the first time after map data division processing. That is, the mesh 1400 shown in FIG. 14A is the same as the mesh shown in FIG.
  • FIG. 14B shows the mesh 1410 after the expansion of the closing process is performed on the mesh 1400 shown in FIG. 14A.
  • FIG. 14C shows the mesh 1420 after the reduction of the closing process is performed on the mesh 1410 shown in FIG. 14B.
  • in FIGS. 14A to 14C, the vehicle reachable ranges 1401, 1411, and 1421, each formed by a plurality of regions to which reachable identification information is assigned, are shown filled in black.
  • in the mesh 1400, a missing point 1402 (a white-background region inside the reachable range 1401 of the vehicle) has occurred.
  • the missing point 1402 is caused by the reduced number of nodes that become reachable points when the roads on which nodes and links are searched are narrowed down in order to reduce the load of the reachable point search process by the navigation device 500.
  • the navigation device 500 performs a closing expansion process on the mesh 1400 after the identification information is given.
  • in the expansion process of the closing, the identification information of each area adjacent to an area to which reachable identification information is assigned in the mesh 1400 after the identification information is given is changed to reachable identification information.
  • the identification information of all the areas adjacent to the outermost area of the reachable range 1401 of the vehicle before the expansion process is changed to the reachable identification information.
  • each time the expansion process is performed, the outer periphery of the reachable range 1411 of the vehicle spreads by one dot so as to surround the outer periphery of each outermost region of the reachable range 1401 of the vehicle before the expansion process.
  • the navigation apparatus 500 performs a closing reduction process on the mesh 1410.
  • in the reduction process of the closing, the identification information of each area adjacent to an area to which unreachable identification information is assigned in the mesh 1410 after the expansion process is changed to unreachable identification information.
  • each time the reduction process is performed, the areas on the outermost periphery of the reachable range 1411 of the vehicle after the expansion process become unreachable areas one dot at a time, and the outer periphery of the reachable range 1411 of the vehicle after the expansion process shrinks.
  • the outer periphery of the reachable range 1421 of the vehicle after the reduction process is substantially the same as the outer periphery of the reachable range 1401 of the vehicle before the expansion process.
  • the navigation device 500 performs the above-described expansion processing and reduction processing the same number of times. Specifically, when the expansion process is performed twice, the subsequent reduction process is also performed twice. By equalizing the numbers of expansion processes and reduction processes, the identification information of almost all areas on the outer periphery of the reachable range of the vehicle that was changed to reachable identification information by the expansion process can be changed back to the original unreachable identification information by the reduction process. In this way, the navigation device 500 can remove the missing point 1402 within the reachable range of the vehicle and generate the reachable range 1421 of the vehicle whose outer periphery can be clearly displayed.
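  • a minimal sketch of the closing process on the binary mesh follows: the expansion changes every area 8-adjacent to a reachable area to reachable, the reduction does the reverse, and performing both the same number of times fills interior missing points such as 1402 while restoring the outer periphery. This is plain Python over a list-of-lists mesh and only illustrates the operations, not the device's actual implementation; the opening process described below is the same two helpers applied in the reverse order.

      def _neighbors8(y, x, rows, cols):
          """Yield the in-range coordinates of the eight areas adjacent to (y, x)."""
          for dy in (-1, 0, 1):
              for dx in (-1, 0, 1):
                  if (dy or dx) and 0 <= y + dy < rows and 0 <= x + dx < cols:
                      yield y + dy, x + dx

      def dilate(mesh):
          """Expansion: an area adjacent to a reachable area ("1") becomes reachable."""
          rows, cols = len(mesh), len(mesh[0])
          out = [row[:] for row in mesh]
          for y in range(rows):
              for x in range(cols):
                  if mesh[y][x] == 0 and any(mesh[ny][nx] == 1 for ny, nx in _neighbors8(y, x, rows, cols)):
                      out[y][x] = 1
          return out

      def erode(mesh):
          """Reduction: a reachable area adjacent to an unreachable area ("0") becomes unreachable."""
          rows, cols = len(mesh), len(mesh[0])
          out = [row[:] for row in mesh]
          for y in range(rows):
              for x in range(cols):
                  if mesh[y][x] == 1 and any(mesh[ny][nx] == 0 for ny, nx in _neighbors8(y, x, rows, cols)):
                      out[y][x] = 0
          return out

      def closing(mesh, times=1):
          """Dilate `times` times, then erode the same number of times."""
          for _ in range(times):
              mesh = dilate(mesh)
          for _ in range(times):
              mesh = erode(mesh)
          return mesh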
  • FIG. 15 is an explanatory diagram schematically illustrating an example of a closing process performed by the navigation device.
  • 15A to 15C show an example of a mesh of two-dimensional matrix data (Y, X) of h rows and h columns in which identification information is given to each region.
  • FIG. 15A shows a mesh 1500 after identification information is given.
  • FIG. 15B shows the mesh 1510 after the expansion of the closing process is performed on the mesh 1500 of FIG. 15A.
  • FIG. 15C shows the mesh 1520 after the reduction of the closing process is performed on the mesh 1510 of FIG. 15B.
  • areas 1501 and 1502 to which reachable identification information is assigned are illustrated by different hatchings.
  • in the mesh 1500 after the identification information is given, reachable identification information is assigned to the three regions 1501 of c row f column, f row c column, and g row f column.
  • the regions 1501 to which the reachable identification information is assigned are arranged in a separated state so that the change in the identification information after the expansion process and the reduction process becomes clear.
  • the navigation device 500 performs a closing expansion process on the mesh 1500 after such identification information is applied.
  • specifically, the navigation device 500 changes the identification information of the eight areas 1502 adjacent to the lower left, lower, lower right, right, upper right, upper, upper left, and left of the area 1501 of c row f column (b row e column to b row g column, c row e column, c row g column, and d row e column to d row g column) from unreachable identification information to reachable identification information.
  • similarly to the processing performed for the area 1501 of c row f column, the navigation device 500 changes the identification information of the eight regions 1502 adjacent to each of the areas 1501 of f row c column and g row f column to reachable identification information. For this reason, the reachable range 1511 of the vehicle is wider than the reachable range of the vehicle in the mesh 1500 after the identification information is given, by the amount of the regions 1502 whose identification information has been changed to reachable identification information.
  • the navigation device 500 performs a closing reduction process on the mesh 1510 after the expansion process.
  • specifically, the navigation device 500 changes the identification information of the eight areas 1502 of b row e column to b row g column, c row e column, c row g column, and d row e column to d row g column, each of which is adjacent to an area to which unreachable identification information is given (the white background portion of the mesh 1510 after the expansion process), to unreachable identification information.
  • similarly to the processing performed for those eight areas 1502, the navigation device 500 changes the identification information of the remaining areas 1502 adjacent to areas having unreachable identification information, including the areas of g row g column, h row e column, and h row g column, to unreachable identification information.
  • as a result, in the mesh 1520 after the reduction process, similarly to the mesh 1500 after the identification information is given, a reachable range 1521 of the vehicle is generated that consists of the three areas 1501 to which reachable identification information is assigned and one region 1502 that still retains reachable identification information after the reduction process.
  • because the region 1502, which was given reachable identification information during the expansion process and retains it after the reduction process, fills what was a missing point in the reachable range of the mesh 1500 after the identification information was given, that missing point disappears.
  • the navigation device 500 may instead perform an opening process (a process of performing an expansion process after a reduction process) on the mesh of two-dimensional matrix data (Y, X) to generate a reachable range of the vehicle whose outer periphery can be clearly displayed. Specifically, the navigation device 500 performs the opening process as follows.
  • FIG. 16 is an explanatory diagram showing an example of the opening process by the navigation device.
  • FIGS. 16A to 16C are meshes of two-dimensional matrix data (Y, X) of m rows and m columns in which identification information is assigned to each region.
  • FIG. 16A shows a mesh 1600 after identification information is given.
  • FIG. 16B shows the mesh 1610 after the reduction of the opening process is performed on the mesh 1600 of FIG. 16A.
  • FIG. 16C shows the mesh 1620 after the expansion of the opening process is performed on the mesh 1610 of FIG. 16B.
  • in FIGS. 16A to 16C, the vehicle reachable ranges 1601, 1611, and 1621, each formed by a plurality of regions to which reachable identification information is assigned, are shown filled in black.
  • by performing the opening process on the mesh 1600 after the identification information is given, the isolated point 1602 can be removed.
  • the navigation device 500 performs an opening reduction process on the mesh 1600 after the identification information is given.
  • in the reduction process of the opening, the identification information of each area adjacent to an area to which unreachable identification information is assigned in the mesh 1600 after the identification information is given is changed to unreachable identification information.
  • thereby, the isolated point 1602 that occurred in the reachable range 1601 of the vehicle before the reduction process (after the identification information was given) is removed.
  • each time the reduction process is performed, the outermost areas of the reachable range 1601 of the vehicle after the identification information is given become unreachable areas one dot at a time, and the outer periphery of the reachable range 1601 shrinks.
  • the navigation device 500 performs an opening expansion process on the mesh 1610.
  • in the expansion process of the opening, the identification information of each area adjacent to an area to which reachable identification information is assigned in the mesh 1610 after the reduction process is changed to reachable identification information.
  • each time the expansion process is performed, the outer periphery of the reachable range 1621 of the vehicle spreads by one dot so as to surround the outer periphery of each outermost region of the reachable range 1611 of the vehicle after the reduction process.
  • the navigation device 500 performs the expansion process and the reduction process the same number of times as in the closing process.
  • thereby, the outer periphery of the reachable range 1611 of the vehicle that was shrunk by the reduction process is expanded again, and the outer periphery of the vehicle reachable range 1621 after the expansion process can be returned to substantially the outer periphery of the reachable range 1601 of the vehicle before the reduction process.
  • the navigation apparatus 500 can generate the vehicle reachable range 1621 in which the isolated point 1602 does not occur and the outer periphery can be clearly displayed.
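  • if NumPy and SciPy are available, the same closing and opening operations on the binary mesh can also be expressed with the library's morphology helpers; the sketch below uses an all-ones 3 × 3 structuring element to match the 8-neighborhood used above. The example mesh and the expected output are only illustrative.

      import numpy as np
      from scipy import ndimage

      mesh = np.zeros((16, 16), dtype=bool)
      mesh[4:10, 4:10] = True      # a reachable block ...
      mesh[6, 6] = False           # ... containing an interior missing point (cf. 1402)
      mesh[12, 12] = True          # and a separate isolated point (cf. 1602)

      struct = np.ones((3, 3), dtype=bool)                       # 8-neighborhood
      closed = ndimage.binary_closing(mesh, structure=struct)    # closing fills the missing point
      opened = ndimage.binary_opening(mesh, structure=struct)    # opening removes the isolated point

      print(bool(closed[6, 6]), bool(opened[12, 12]))            # True False under these assumptions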
  • the navigation device 500 of this embodiment extracts the contour of the reachable range of the vehicle based on the identification information given to the mesh of the two-dimensional matrix data (Y, X) of m rows and m columns. Specifically, the navigation apparatus 500 extracts the outline of the reachable range of the vehicle using, for example, a Freeman chain code. More specifically, the navigation apparatus 500 extracts the outline of the reachable range of the vehicle as follows.
  • FIG. 17 is an explanatory view schematically showing an example of vehicle reachable range extraction by the navigation device.
  • FIG. 18 is explanatory drawing which shows typically an example of the mesh after the reachable range of a vehicle is extracted by a navigation apparatus.
  • FIG. 17A shows numbers indicating the adjacent directions of the regions 1710 to 1717 adjacent to the region 1700 (hereinafter referred to as “direction index (chain code)”) and eight-direction arrows corresponding to the direction index.
  • FIG. 17B shows a mesh 1720 of two-dimensional matrix data (Y, X) of h rows and h columns as an example.
  • in FIG. 17B, the areas 1721 to 1734 to which reachable identification information is assigned, and the area with reachable identification information surrounded by the areas 1721 to 1734, are illustrated by hatching.
  • the direction index indicates the direction in which the line segment of the unit length is facing.
  • the coordinates corresponding to the direction index are (X + dx, Y + dy).
  • the direction index in the direction from the region 1700 toward the region 1710 adjacent to the lower left is “0”.
  • the direction index in the direction from the region 1700 toward the region 1711 adjacent to the bottom is “1”.
  • the direction index in the direction from the region 1700 toward the region 1712 adjacent to the lower right is “2”.
  • the direction index in the direction from the region 1700 toward the region 1713 adjacent to the right is “3”.
  • the direction index in the direction from the region 1700 toward the region 1714 adjacent to the upper right is “4”.
  • the direction index in the direction from the region 1700 toward the region 1715 adjacent above is "5".
  • the direction index in the direction from the region 1700 toward the region 1716 adjacent to the upper left is “6”.
  • the direction index in the direction from the region 1700 toward the region 1717 adjacent to the left is “7”.
  • the navigation device 500 searches counterclockwise for an area adjacent to the region 1700 to which the reachable identification information "1" is assigned. In addition, the navigation device 500 determines the search start point among the areas adjacent to the area 1700 based on the previous direction index. Specifically, when the direction index from another area toward the area 1700 is "0", the navigation device 500 starts the search from the area adjacent to the left of the area 1700, that is, from the region 1717 adjacent in the direction of direction index "7".
  • similarly, depending on the direction index from another area toward the region 1700, the navigation device 500 starts the search from the corresponding region adjacent to the lower left, lower, lower right, right, upper right, upper, or upper left of the region 1700, that is, from one of the regions 1710 to 1716 adjacent in the directions of direction indices "0", "1", "2", "3", "4", "5", and "6", respectively.
  • when the navigation device 500 detects the reachable identification information "1" in any of the areas 1710 to 1717 around the area 1700, it writes the direction index "0" to "7" corresponding to the area 1710 to 1717 in which the reachable identification information "1" was detected to the storage device in association with the area 1700.
  • using this direction index, the navigation device 500 extracts the contour of the reachable range of the vehicle as follows. As shown in FIG. 17B, the navigation device 500 first searches, row by row from the area of a row a column of the mesh 1720 of two-dimensional matrix data (Y, X) of h rows and h columns, for an area to which reachable identification information is assigned.
  • since unreachable identification information is assigned to all the regions in the a row of the mesh 1720, the navigation device 500 next searches for reachable identification information from the area of b row a column toward the area of b row h column of the mesh 1720.
  • the navigation device 500 detects the reachable identification information in the b row e column region 1721 of the mesh 1720, and then searches counterclockwise from the b row e column region 1721 for a region having reachable identification information that forms the contour of the reachable range of the vehicle.
  • since the navigation device 500 has already searched the area of b row d column adjacent to the left of the area 1721, it first searches counterclockwise, starting from the area 1722 adjacent to the lower left of the area 1721, for an area having reachable identification information.
  • the navigation apparatus 500 detects the reachable identification information of the area 1722 and stores the direction index “0” in the direction from the area 1721 toward the area 1722 in association with the area 1721 in the storage device.
  • the navigation apparatus 500 detects the reachable identification information of the area 1723 adjacent to the lower left of the area 1722, and stores the direction index “0” in the direction from the area 1722 to the area 1723 in association with the previous direction index. Store in the device.
  • thereafter, the navigation device 500 determines a search start point based on the previous direction index and searches counterclockwise from the search start point for an area having reachable identification information, repeating this until the arrow corresponding to the direction index returns to the area 1721. Specifically, the navigation device 500 searches counterclockwise, from the region adjacent to the left of the region 1722, for a region having reachable identification information, detects the reachable identification information of the region 1724 adjacent below the region 1723, and stores the direction index "1" in the storage device in association with the previous direction index.
  • in this way, the navigation device 500 searches counterclockwise from the search start point for an area having reachable identification information, and sequentially detects the areas 1724 to 1734 having reachable identification information. Then, every time the navigation device 500 acquires a direction index, it associates it with the previous direction index and stores it in the storage device.
  • finally, the navigation device 500 searches counterclockwise from the area of b row f column adjacent to the upper right of the area 1734 for an area having reachable identification information, detects the reachable identification information of the area 1721 adjacent above the area 1734, and stores the direction index "5" in the storage device in association with the previous direction index.
  • the direction index “0” ⁇ “0” ⁇ “1” ⁇ “0” ⁇ “2” ⁇ “3” ⁇ “4” ⁇ “3” ⁇ “2” ⁇ “5” ⁇ “5” ⁇ “6” ⁇ “6” ⁇ “5” is stored in this order.
  • in this way, the navigation device 500 sequentially searches counterclockwise, from the first detected area 1721, the areas 1722 to 1734 having reachable identification information and obtains the direction indices. Then, by filling in one region at a time in the direction corresponding to each direction index starting from the region 1721, the navigation device 500 generates, as shown in FIG. 18, a mesh having a vehicle reachable range 1800 that consists of the contour 1801 of the reachable range of the vehicle and the portion 1802 surrounded by the contour 1801.
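  • a minimal sketch of this contour tracing with the eight-direction chain code follows. The direction numbering follows FIG. 17A (0 = lower left, counterclockwise up to 7 = left, with Y increasing downward); the rule of starting the next counterclockwise search at (previous direction index + 5) mod 8, which examines the backtrack area last, is a common Moore-neighbor tracing choice consistent with the walk-through above and only an assumption about the device's exact start rule.

      # Direction index (chain code) as in FIG. 17A, with the mesh origin at the
      # upper left and Y increasing downward: 0 = lower left, 1 = lower,
      # 2 = lower right, 3 = right, 4 = upper right, 5 = upper, 6 = upper left, 7 = left.
      OFFSETS = [(1, -1), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1)]

      def find_start(mesh):
          """Row-by-row scan for the first area with reachable identification information "1"."""
          for y, row in enumerate(mesh):
              for x, value in enumerate(row):
                  if value == 1:
                      return (y, x)
          return None

      def trace_contour(mesh, start):
          """Trace one contour counterclockwise from `start` (Y, X) and return its chain code."""
          rows, cols = len(mesh), len(mesh[0])
          chain = []
          current, prev_dir = start, 3          # the start area is reached by scanning to the right
          while True:
              found = None
              for step in range(8):             # counterclockwise search around `current`
                  d = (prev_dir + 5 + step) % 8
                  ny, nx = current[0] + OFFSETS[d][0], current[1] + OFFSETS[d][1]
                  if 0 <= ny < rows and 0 <= nx < cols and mesh[ny][nx] == 1:
                      found = ((ny, nx), d)
                      break
              if found is None:                 # isolated area: nothing to trace
                  return chain
              current, prev_dir = found
              chain.append(prev_dir)
              if current == start:
                  return chain

      if __name__ == "__main__":
          mesh = [[0, 0, 0, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 0, 0, 0]]
          print(trace_contour(mesh, find_start(mesh)))   # counterclockwise outer contour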
  • the navigation device 500 may extract the outline of the reachable range of the vehicle based on the longitude and latitude information of the mesh of the two-dimensional matrix data (Y, X) to which reachable identification information is assigned.
  • the navigation apparatus 500 extracts the outline of the reachable range of the vehicle as follows.
  • FIG. 19 is an explanatory diagram schematically showing another example of vehicle reachable range extraction by the navigation device.
  • a mesh 1900 of two-dimensional matrix data (Y, X) of d rows and h columns as shown in FIG. 19 will be described as an example.
  • the navigation device 500 searches the mesh 1900 for the area to which the reachable identification information “1” is assigned. Specifically, the navigation apparatus 500 first searches for the identification information “1” that can be reached from the area of a row and a column toward the area of a row and h column.
  • since unreachable identification information "0" is assigned to all the regions in the a row of the mesh 1900, the navigation device 500 next searches for an area having reachable identification information "1" from the area of b row a column toward the area of b row h column. Then, the navigation device 500 acquires the minimum longitude px1 and the minimum latitude py1 (the upper left coordinates of the area 1901) of the area 1901 of b row c column having the reachable identification information "1".
  • next, the navigation device 500 searches for areas having reachable identification information "1" from the area of b row d column toward the area of b row h column. The navigation device 500 searches for the boundary between an area having reachable identification information "1" and an area having unreachable identification information "0", and acquires the maximum longitude px2 and the maximum latitude py2 (the lower right coordinates of the area 1902) of the area 1902 of b row f column having the reachable identification information "1".
  • then, the navigation device 500 fills the rectangular area whose vertices are the upper left coordinates (px1, py1) of the area 1901 of b row c column and the lower right coordinates (px2, py2) of the area 1902 of b row f column.
  • subsequently, the navigation device 500 searches for reachable identification information "1" from the area of b row g column to the area of b row h column, and further from the area of c row a column toward the area of c row h column.
  • then, the navigation device 500 acquires the minimum longitude px3 and the minimum latitude py3 (the upper left coordinates of the area 1903) of the area 1903 of c row d column having the reachable identification information "1".
  • next, the navigation device 500 searches for areas having reachable identification information "1" from the area of c row e column toward the area of c row h column. The navigation device 500 searches for the boundary between an area having reachable identification information "1" and an area having unreachable identification information "0", and acquires the maximum longitude px4 and the maximum latitude py4 (the lower right coordinates of the area 1904) of the area 1904 of c row f column having the reachable identification information "1".
  • then, the navigation device 500 fills the rectangular area whose vertices are the upper left coordinates (px3, py3) of the area 1903 of c row d column and the lower right coordinates (px4, py4) of the area 1904 of c row f column.
  • the navigation apparatus 500 searches for an area having identification information “1” that can be reached from the area of the c row and the g column to the area of the c row and the h column and further from the d row and the a column to the d row and the h column.
  • the navigation device 500 ends the process because the unreachable identification information “0” is assigned to all areas from the area of the c row and the g column to the d row and the h column.
  • in this way, a filled vehicle reachable range and the contour of the vehicle reachable range can be obtained.
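  • a minimal sketch of this row-scan variant follows: each row of the mesh is scanned for maximal runs of reachable identification information "1", and the rectangle spanned by the upper left coordinates of the first area of the run and the lower right coordinates of the last area of the run is recorded as a filled rectangle. The `cell_bounds` callback standing in for the mesh-to-longitude/latitude mapping is an assumption.

      def reachable_rectangles(mesh, cell_bounds):
          """For each maximal horizontal run of reachable areas ("1") in each row, return the
          rectangle spanned by the upper left corner of the first area of the run and the
          lower right corner of the last area of the run."""
          rectangles = []
          for y, row in enumerate(mesh):
              x = 0
              while x < len(row):
                  if row[x] == 1:
                      run_start = x
                      while x < len(row) and row[x] == 1:   # advance to the "1"/"0" boundary
                          x += 1
                      upper_left, _ = cell_bounds(y, run_start)
                      _, lower_right = cell_bounds(y, x - 1)
                      rectangles.append((upper_left, lower_right))
                  else:
                      x += 1
          return rectangles

      if __name__ == "__main__":
          mesh = [[0, 0, 0, 0],
                  [0, 1, 1, 0],
                  [0, 1, 0, 1]]
          # Hypothetical 1-degree cells anchored at longitude 100, latitude 40 (latitude decreasing downward).
          bounds = lambda Y, X: ((100 + X, 40 - Y), (101 + X, 39 - Y))
          for rect in reachable_rectangles(mesh, bounds):
              print(rect)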
  • FIG. 20-1 is an explanatory diagram of an example of the contour data complementing process.
  • (A) shows the contour data before complementation, in a state in which the vectors 2001 to 2003 are connected.
  • the vector 2001 is a vector parallel to the X-axis direction, and the vector 2003 is also a vector parallel to the X-axis direction; the vectors 2001 and 2003 have the same length.
  • the vector 2002 is a vector that intersects the X axis and the Y axis at 45 degrees, and the length of the vector 2002 is √2 times the length of the vectors 2001 and 2003.
  • (B) shows an example of the contour data after complementation, in a state in which the vector 2002 is decomposed into a vector 2004 and a vector 2005.
  • the vector 2004 is the X-axis direction component of the vector 2002, and the vector 2005 is the Y-axis direction component of the vector 2002; the vectors 2001, 2003, 2004, and 2005 have the same length.
  • (C) shows another example of the contour data after complementation, in a state in which the vector 2002 is decomposed into a vector 2006 and a vector 2007.
  • the vector 2006 is the Y-axis direction component of the vector 2002, and the vector 2007 is the X-axis direction component of the vector 2002; the vectors 2001, 2003, 2006, and 2007 have the same length.
  • FIG. 20-2 is an explanatory diagram showing an example of a vector decomposition method in the complementing process shown in FIG. 20-1.
  • the chain code shown in FIG. 17A is used.
  • as shown in FIGS. 20-1(B) and 20-1(C), there are two possible decompositions, but the vector is decomposed so that the decomposed vectors are included in the reachable range.
  • the filled area is the reachable range.
  • when the chain code of the vector 2101 is "0", the vector 2101 is decomposed into the vectors 2101a and 2101b corresponding to the chain codes "1" and "7".
  • when the chain code of the vector 2102 is "2", it is decomposed into the vectors 2102a and 2102b corresponding to the chain codes "3" and "1".
  • when the chain code of the vector 2103 is "4", it is decomposed into the vectors 2103a and 2103b corresponding to the chain codes "5" and "3".
  • when the chain code of the vector 2104 is "6", it is decomposed into the vectors 2104a and 2104b corresponding to the chain codes "7" and "5".
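  • a minimal sketch of this complementing step follows: each diagonal chain code is replaced by the two axis-parallel codes listed above, which is the decomposition of FIG. 20-2 that keeps the decomposed vectors inside the reachable range; axis-parallel codes are left unchanged.

      # Decomposition of diagonal chain codes into two axis-parallel codes,
      # following the rules of FIG. 20-2 (0 -> 1,7; 2 -> 3,1; 4 -> 5,3; 6 -> 7,5).
      DIAGONAL_SPLIT = {0: (1, 7), 2: (3, 1), 4: (5, 3), 6: (7, 5)}

      def complement_contour(chain):
          """Replace every diagonal chain code with its two axis-parallel codes."""
          out = []
          for code in chain:
              out.extend(DIAGONAL_SPLIT.get(code, (code,)))
          return out

      if __name__ == "__main__":
          print(complement_contour([0, 0, 1, 3, 4, 6]))   # -> [1, 7, 1, 7, 1, 3, 5, 3, 7, 5]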
  • the contour has an outer periphery (outer contour) and an inner periphery (hole) as shown in FIG. 20-2.
  • the direction of the contour is simply determined by the following procedure.
  • FIG. 20-3 is an explanatory diagram schematically illustrating an example of vehicle reachable range extraction by the navigation device.
  • FIG. 20-3(A) shows numbers indicating the adjacent directions of the regions 1110 to 1117 adjacent to the region 1100 (hereinafter referred to as "direction index (chain code)") and arrows in eight directions corresponding to the direction indices.
  • FIG. 20-3 (B) shows mesh data 1120 of two-dimensional matrix data (Y, X) of i rows and i columns as an example.
  • areas 1121 to 1138 to which reachable identification information is assigned are shown by hatching.
  • areas 1140 to 1142 to which unreachable identification information is assigned exist in areas 1121 to 1138 to which reachable identification information is assigned (illustrated in white).
  • the direction index indicates the direction in which the line segment of the unit length is facing.
  • the coordinates corresponding to the direction index are (X + dx, Y + dy).
  • the direction index in the direction from the region 1100 toward the region 1110 adjacent to the lower left is “0”.
  • the direction index in the direction from the region 1100 toward the region 1111 adjacent below is "1".
  • the direction index in the direction from the region 1100 toward the region 1112 adjacent to the lower right is “2”.
  • the direction index in the direction from the region 1100 toward the region 1113 adjacent to the right is “3”.
  • the direction index in the direction from the region 1100 toward the region 1114 adjacent to the upper right is “4”.
  • the direction index in the direction from the region 1100 toward the region 1115 adjacent above is "5".
  • the direction index in the direction from the region 1100 toward the region 1116 adjacent to the upper left is “6”.
  • the direction index in the direction from the region 1100 toward the region 1117 adjacent to the left is “7”.
  • the navigation apparatus 500 searches the area to which the reachable identification information “1” adjacent to the area 1100 is assigned counterclockwise. That is, the navigation device 500 searches for an area to which the reachable identification information “1” is assigned in the counterclockwise direction around the area 1100, for example, with the area 1110 as a search start point.
  • in addition, the navigation device 500 determines the search start point among the areas adjacent to the area 1100 based on the previous direction index.
  • specifically, depending on the direction index from another area toward the region 1100, the navigation device 500 determines as the search start point the matching area adjacent to the upper left, left, lower left, lower, lower right, right, or upper right of the region 1100, that is, the region 1116, 1117, 1110, 1111, 1112, 1113, or 1114 adjacent in the direction of direction index "6", "7", "0", "1", "2", "3", or "4", respectively, and starts the search from the determined region.
  • when the navigation device 500 detects the reachable identification information "1" for the first time after starting the search, it writes the direction index "0" to "7" corresponding to the area 1110 to 1117 in which the reachable identification information "1" was detected to the storage device in association with the area 1100.
  • using such a start region, determined based on the direction index to the target region, from which the counterclockwise search around the target region begins, the navigation device 500 extracts the contour of the reachable range of the vehicle as follows. Note that the relationship between the direction index to the target region and the start region where the search is started is an example; the contour of the reachable range of the vehicle can also be extracted with other relationships.
  • specifically, the navigation device 500 first detects, in units of rows starting from the area of a row a column of the mesh data 1120 of two-dimensional matrix data (Y, X) of i rows and i columns, an area at which an area provided with unreachable identification information changes to an area provided with reachable identification information.
  • since unreachable identification information is given to all the regions in the a row of the mesh data 1120, the navigation device 500 next searches for reachable identification information from the area of b row a column of the mesh data 1120 toward the area of b row i column.
  • by scanning the b row in the horizontal direction, the navigation device 500 detects reachable identification information in the area 1121 of b row f column of the mesh data 1120 (the first start point of contour detection).
  • because the direction index to the region 1121 by the horizontal scanning is "3", the navigation device 500 determines the region 1122 adjacent to the lower left of the region 1121, and searches counterclockwise around the region 1121, starting from the determined region 1122, for an area having reachable identification information.
  • the navigation device 500 detects the reachable identification information of the region 1122 and stores the direction index “0” in the direction from the region 1121 to the region 1122 in the storage device in association with the region 1121.
  • next, the navigation device 500 determines the area of b row e column adjacent to the area 1122, and searches counterclockwise around the area 1122, starting from the determined area of b row e column, for an area having reachable identification information.
  • the navigation apparatus 500 detects the reachable identification information of the region 1123 adjacent to the lower left of the region 1122, and stores the direction index “0” in the direction from the region 1122 to the region 1123 in association with the previous direction index. Store in the device.
  • thereafter, the navigation device 500 determines a search start point based on the previous direction index and searches counterclockwise from the search start point for an area having reachable identification information; the process is repeated until the arrow corresponding to the direction index returns to the area 1121. Specifically, the navigation device 500 determines the area adjacent above the area 1123, searches counterclockwise around the area 1123 from the determined area for an area having reachable identification information, detects the reachable identification information of the area 1124 adjacent below the area 1123, and stores the direction index "1" in the storage device in association with the previous direction index.
  • in this way, the navigation device 500 searches counterclockwise from the search start point for an area having reachable identification information, and sequentially detects the areas 1124 to 1134 having reachable identification information. Then, every time the navigation device 500 acquires a direction index, it associates it with the previous direction index and stores it in the storage device.
  • finally, the navigation device 500 determines the area of c row h column adjacent to the right of the area 1134, searches counterclockwise around the area 1134, starting from the determined area of c row h column, for an area having reachable identification information, detects the reachable identification information of the area 1121 adjacent to the upper left of the area 1134, and stores the direction index "6" in the storage device in association with the previous direction index.
  • the direction index “0” ⁇ “0” ⁇ “1” ⁇ “0” ⁇ “2” ⁇ “3” ⁇ “4” ⁇ “3” ⁇ “2” ⁇ “5” ⁇ “5” ⁇ “5” ⁇ “6” ⁇ “6” is stored in this order.
  • the continuous arrangement of the direction indices stored in this way represents the outline of the reachable range, as shown in FIG. 20-3 (B).
  • the direction of the outline of the reachable range represented by the direction of the continuous arrow of the direction index is counterclockwise as shown in FIG. 20-3 (B). This indicates that the region that is the outline of the reachable range is searched counterclockwise.
  • subsequently, the navigation device 500 extracts other contours of the reachable range. Specifically, as shown in FIG. 20-3(B), the navigation device 500 continues scanning the b row in the horizontal direction from the area 1121 of b row f column of the mesh data 1120, and detects another area at which an area to which unreachable identification information is given changes to an area to which reachable identification information is given. That is, the second start point of contour detection is detected. At this time, areas once extracted as the contour of the reachable range are excluded from the detection, so the areas 1122 and 1123 are not detected as the second start point of contour detection.
  • the region 1135 of the d row and the g column is detected as the start point of the other contour in the reachable range, that is, the second start point of the contour detection.
  • then, an area having reachable identification information that forms the contour of the reachable range of the vehicle is searched counterclockwise around the area 1135.
  • because the direction index to the area 1135 by the horizontal scanning is "3", the navigation device 500 determines the area 1142 adjacent to the lower left of the area 1135, and searches counterclockwise around the area 1135, starting from the determined area 1142, for an area having reachable identification information.
  • the navigation device 500 detects the reachable identification information of the region 1132 and stores the direction index “2” in the direction from the region 1135 toward the region 1132 in the storage device in association with the region 1135.
  • next, the navigation device 500 determines the area of e row g column adjacent to the left of the area 1132 and, searching counterclockwise around the area 1132 from the determined area of e row g column for an area having reachable identification information, detects the reachable identification information of the area 1129 and stores the direction index "0" in the direction from the area 1132 toward the area 1129 in the storage device in association with the previous direction index.
  • thereafter, the navigation device 500 determines a search start point based on the previous direction index and searches counterclockwise from the search start point for an area having reachable identification information, repeating this until the arrow corresponding to the direction index returns to the area 1135.
  • the direction index “2” ⁇ “0” ⁇ “7” ⁇ “6” ⁇ “5” ⁇ “4” is stored in the storage device as the outline of the reachable range starting from the area 1135 of the second start point. " ⁇ " 2 "is stored in this order.
  • the continuous arrangement of the direction indices stored in this way represents the second contour of the reachable range as shown in FIG. 20-3 (B).
  • the direction of the second outline of the reachable range represented by the direction of the continuous arrow of the direction index is clockwise as shown in FIG. 20-3 (B). This indicates that the region that is the outline of the reachable range has been searched clockwise.
  • subsequently, the navigation device 500 extracts further contours of the reachable range. Specifically, as shown in FIG. 20-3(B), the navigation device 500 continues scanning in the horizontal direction of the d row from the area 1135 of d row g column of the mesh data 1120, and detects other areas at which an area to which unreachable identification information is given changes to an area to which reachable identification information is given. That is, the third and subsequent start points of contour detection are detected. At this time, areas once searched as the contour of the reachable range are excluded from detection. This scanning continues up to the area of i row i column.
  • in this way, the navigation device 500 determines the search start point from the first detected area 1121 based on the previous direction index, and, by repeating the process of searching counterclockwise from the search start point for an area having reachable identification information until the arrow corresponding to the direction index returns to the area 1121, searches the areas 1122 to 1134 and acquires the direction indices. From the next detected area 1135, the search start point is similarly determined based on the previous direction index, and a counterclockwise search for an area having reachable identification information is performed from the search start point.
  • then, by filling in the series of regions in the directions corresponding to the direction indices starting from the areas 1121 and 1135, the navigation device 500 generates mesh data consisting of the outer and inner contours of the reachable range of the vehicle and the reachable portion surrounded by these outer and inner contours.
  • here, the continuous trajectory of the direction indices, obtained by repeatedly determining a search start point based on the previous direction index and searching counterclockwise from the search start point for an area having reachable identification information until the arrow corresponding to the direction index returns to the original area, is counterclockwise for the outer contour of the reachable range of the vehicle.
  • on the other hand, the trajectory of the direction indices of an inner contour of the reachable range, which is in contact with an unreachable range, is clockwise.
  • therefore, by determining whether the direction of the continuous arrows of the direction indices is clockwise or counterclockwise, it is possible to determine whether the contour indicated by those arrows is the outer contour of the reachable range or an inner contour, which exists when an unreachable range exists inside the reachable range.
  • the obtained contour direction (clockwise or counterclockwise) is obtained by changing the chain code (direction) for each coordinate of the reachable range contour data as shown in FIG. Index) 1150 and additional information 1160 indicating the direction of the contour are added and output to the display control unit 206.
  • the display control unit 206 can display the reachable range of the vehicle by the counterclockwise outer contour 1130, and if there is the clockwise inner contour 1140, the unreachable range of the vehicle is displayed inside the outer contour 1130. Will be able to.
  • FIG. 20D is a flowchart illustrating an example of the procedure of the contour direction calculation process performed by the navigation device.
  • the navigation device 500 sequentially scans the chain code sequence output by the contour detection (step S2201).
  • When the absolute value of the change in the approach direction caused by this scan is at least half the number of adjacent pixels (for example, 4 or more, or -4 or less), the number of adjacent pixels is added to or subtracted from it once so that the absolute value of the change becomes smaller than half the number of adjacent pixels (for example, between -3 and 3) (step S2202). The product of the corrected change in the approach direction and the unit rotation angle (2π / number of adjacent pixels) is then calculated, and the rotation angle accumulated by the transitions of the approach direction is updated (step S2203).
  • Next, it is determined whether all transitions of the approach direction have been scanned (step S2204). If the scanning has not been completed (step S2204: No), the process returns to step S2201; if it has been completed (step S2204: Yes), it is determined whether the accumulated rotation angle of the approach-direction transitions is 2π or 0 (step S2205). If the accumulated rotation angle is 2π or 0 (step S2205: Yes), the direction in which the approach direction circulates is found to be counterclockwise (step S2206), and the processing ends; an accumulated value of 0 is a special case of counterclockwise rotation. On the other hand, if the accumulated rotation angle is -2π (step S2205: No), the direction in which the approach-direction transitions circulate is found to be clockwise (step S2207), and the processing ends.
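A compact sketch of this cumulative-rotation test, assuming 8 adjacent pixels and therefore a unit rotation angle of 2π/8, is shown below; a change of exactly ±4 (a straight reversal) is ambiguous and is left uncorrected in this sketch.

```python
import math

def contour_direction(chain, n_neighbors=8):
    """Decide whether a closed chain code circulates counterclockwise or clockwise.

    Each change of the approach direction is wrapped into the range (-n/2, n/2),
    multiplied by the unit rotation angle 2*pi/n (step S2203), and accumulated over
    one full circuit; +2*pi (or 0, the special case) means counterclockwise and
    -2*pi means clockwise (steps S2205-S2207).
    """
    unit = 2 * math.pi / n_neighbors
    total = 0.0
    for prev, cur in zip(chain, chain[1:] + chain[:1]):   # include the wrap back to the start
        delta = cur - prev
        if delta > n_neighbors // 2:                       # correction of step S2202
            delta -= n_neighbors
        elif delta < -(n_neighbors // 2):
            delta += n_neighbors
        total += delta * unit
    if abs(total) < 1e-6 or abs(total - 2 * math.pi) < 1e-6:
        return "counterclockwise"
    return "clockwise"

# The chain codes of the worked example that follows:
outer = [0, 0, 1, 0, 2, 3, 4, 3, 2, 5, 5, 5, 6, 6, 0]
inner = [2, 0, 7, 6, 5, 4, 2, 2]
print(contour_direction(outer), contour_direction(inner))   # counterclockwise clockwise
```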
  • the contour direction processing will be specifically described using the contours (outer contour and inner contour) shown in FIG. 20-3 (B) described above.
  • For the outer contour, the direction index values over one circuit are "0" → "0" → "1" → "0" → "2" → "3" → "4" → "3" → "2" → "5" → "5" → "5" → "6" → "6" → "0".
  • The changes (differences) between successive values are 0, 1, -1, 2, 1, 1, -1, -1, 3, 0, 0, 1, 0 and -6, respectively.
  • For the inner contour, the direction index values over one circuit are "2" → "0" → "7" → "6" → "5" → "4" → "2" → "2".
  • The changes (differences) between successive values are -2, 7, -1, -1, -1, -2 and 0, respectively.
  • The change of 7 produced by the transition "0" → "7" corresponds, as can be seen in the figure, to a shift of only one adjacent pixel, so -8 is added and the change is treated as -1 instead of 7; likewise, the change of -6 in the outer contour is corrected to +2. With these corrections, the accumulated rotation angle becomes 2π for the outer contour (counterclockwise) and -2π for the inner contour (clockwise).
  • In this way the contour direction (clockwise or counterclockwise) is obtained, and the contour data calculation unit 106 (direction calculation unit 107) adds, to the contour data of the reachable range, the chain code (direction index) 1150 for each coordinate and the additional information 1160 indicating the direction of the contour, and outputs them to the display control unit 206.
  • the conversion unit 263 calculates a contour polygon of (latitude and longitude, real number coordinate system) from the line segment group constituting the contour data.
  • the conversion unit 263 calculates a vector sequence having adjacent vertices of the contour as start points and end points.
  • the conversion unit 263 decomposes each vector into the deflection angle ⁇ of each vector and the length U of the line segment.
  • FIG. 21A is an explanatory diagram of the representation of contour data by deflection angles and segment lengths.
  • As shown in the following equation (11), the conversion unit 263 calculates, from the vertices z_i, the line segment w_i formed by each pair of adjacent vertices.
  • Next, the conversion unit 263 performs a fast Fourier transform on the resulting array of line segments. Specifically, for example, the conversion unit 263 performs a fast Fourier transform on the complex number array w_i representing the line segment sequence.
  • FIG. 21-2 is a graph showing frequency conversion.
  • Let c be the cut-off rate and let hc = (1 − c) × M / 2, where M is the length of the line segment sequence. The removal unit 264 then sets to "0" all components w_k whose indices k lie in the middle of the spectrum, between M/2 − hc and M/2 + hc, thereby removing the high-frequency components.
  • Next, the inverse transform unit 265 performs an inverse fast Fourier transform to recover the line segment sequence from which the high-frequency components have been removed. Specifically, as shown in the following equation (13), the inverse transform unit 265 applies the inverse fast Fourier transform to the sequence w′_k from which the high-frequency components have been removed.
  • The inverse transform unit 265 then obtains the reproduction curve, that is, the contour polygon after smoothing, from the deflection angles and the lengths U of the line segments. In other words, the inverse transform unit 265 obtains the reproduction curve z′_i from the line segment sequence w′_i from which the high-frequency components have been removed, and this is the final result.
  • To do so, the inverse transformation unit 265 performs the inverse calculation of the above equation (11), as shown in the following equation (14). A smoothed curve z′_i is thus calculated from the original curve z_i, and the smoothing intensity can be adjusted through the cut-off rate c.
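Since equations (11) to (14) themselves are not reproduced in this excerpt, the following is only a sketch of the general technique the description points at: represent the vertices as complex numbers, Fourier-transform the closed sequence of segment vectors, zero the middle (high-frequency) part of the spectrum according to the cut-off rate c, and rebuild the curve with the inverse transform. The function name and the exact indexing of the removed band are assumptions.

```python
import numpy as np

def smooth_contour(points, c=0.5):
    """Low-pass smooth a closed contour given as (x, y) vertices.

    With M segment vectors w_i = z_{i+1} - z_i, the middle hc = (1 - c) * M / 2
    coefficients on each side of the centre of the FFT spectrum are zeroed (the
    high-frequency components), and the vertices are rebuilt by accumulating the
    inverse-transformed segments from the first vertex.
    """
    z = np.array([complex(x, y) for x, y in points])
    w = np.diff(np.append(z, z[0]))          # closed sequence of segment vectors
    M = len(w)
    W = np.fft.fft(w)
    hc = int((1 - c) * M / 2)
    mid = M // 2
    W[mid - hc: mid + hc] = 0                # remove the high-frequency components
    w_smooth = np.fft.ifft(W)
    z_smooth = z[0] + np.concatenate(([0], np.cumsum(w_smooth)[:-1]))
    return np.column_stack([z_smooth.real, z_smooth.imag])
```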
  • FIG. 22 is an explanatory diagram of an example of thinning out contour data.
  • In FIG. 22, the vectors u and v are vectors on the contour data, A to C are vertices on the contour data, and the vertex B is the boundary point that is a candidate for deletion.
  • The thinning unit 266 thins out vertices that connect adjacent pieces of line segment data. Specifically, the cosine of the angle θ between the vectors u and v is calculated from their inner product and compared with a predetermined threshold value cos θth to decide whether the vertex B may be removed.
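A minimal sketch of this angle-based thinning follows. The decision direction (a vertex is dropped when the bend at it is small, that is, when cos θ exceeds the threshold) and the default threshold value are assumptions, since the description above only states that cos θ is compared with cos θth.

```python
import math

def thin_vertices(points, cos_th=math.cos(math.radians(10))):
    """Drop vertices at which the contour is almost straight.

    For each interior vertex B, with A the last kept vertex and C the next vertex,
    cos(theta) is computed from the inner product of u = B - A and v = C - B; if it
    exceeds cos_th the bend at B is negligible and B is removed.
    """
    kept = [points[0]]
    for b, c in zip(points[1:], points[2:]):
        a = kept[-1]
        ux, uy = b[0] - a[0], b[1] - a[1]
        vx, vy = c[0] - b[0], c[1] - b[1]
        nu, nv = math.hypot(ux, uy), math.hypot(vx, vy)
        if nu == 0 or nv == 0:
            continue                              # degenerate (duplicate) vertex: drop it
        cos_theta = (ux * vx + uy * vy) / (nu * nv)
        if cos_theta <= cos_th:                   # a noticeable bend: keep vertex B
            kept.append(b)
    kept.append(points[-1])
    return kept
```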
  • As described above, the navigation apparatus 500 generates the reachable range of the moving body based on the reachable nodes searched on the basis of the remaining energy amount of the vehicle, and causes the display 513 to display the reachable range.
  • In the following, a case where the navigation apparatus 500 is mounted on an EV (electric vehicle) is described as an example.
  • FIG. 23 is a flowchart showing an example of the procedure of image processing by the navigation device.
  • the navigation apparatus 500 first acquires the current location (ofx, ofy) of the vehicle on which the apparatus is mounted, for example, via the communication I / F 515 (step S2301).
  • the navigation apparatus 500 acquires the initial stored energy amount of the vehicle at the current location (ofx, ofy) of the vehicle, for example, via the communication I / F 515 (step S2302).
  • the navigation device 500 performs a reachable node search process (step S2303).
  • the navigation apparatus 500 performs mesh generation and identification information provision processing (step S2304).
  • the navigation apparatus 500 extracts the outline of the reachable range of the vehicle (step S2305).
  • Next, the navigation apparatus 500 performs smoothing processing on the contour data indicating the extracted contour, including interpolation processing, fast Fourier transform processing, high-frequency component removal processing, inverse fast Fourier transform processing, and thinning processing (step S2306).
  • the navigation device 500 displays the reachable range of the vehicle on the display 513 based on the smoothed contour data (step S2307), and ends the processing according to this flowchart.
  • FIG. 24 is a flowchart illustrating an example of the procedure of the estimated power consumption calculation processing by the navigation device. The processing in the flowchart of FIG. 24 is performed within the reachable node search process of step S2303 described above.
  • the navigation apparatus 500 first acquires traffic jam information such as probe data and traffic jam prediction data via the communication I / F 515 (step S2401). Next, the navigation device 500 acquires the length of the link and the road type of the link (step S2402).
  • the navigation device 500 calculates the travel time of the link based on the information acquired in steps S2401 and S2402 (step S2403).
  • the travel time of the link is the time required for the vehicle to finish traveling on the link.
  • the navigation apparatus 500 calculates the average link speed based on the information acquired in steps S2401 to S2403 (step S2404).
  • the average speed of the link is an average speed when the vehicle travels on the link.
  • the navigation device 500 acquires the altitude data of the link (step S2405).
  • the navigation apparatus 500 acquires vehicle setting information (step S2406).
  • Next, based on the information acquired in steps S2401 to S2406, the navigation apparatus 500 calculates the estimated power consumption on the link using any one of the energy consumption estimation formulas (1) to (6) described above (step S2407), and ends the processing according to this flowchart.
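Equations (1) to (6) are not reproduced in this excerpt, so the sketch below only mirrors the per-link flow of FIG. 24 with a placeholder consumption model; `estimate_link_energy`, its inputs and its coefficients are illustrative assumptions, not the estimation formulas of the description.

```python
def estimate_link_energy(link, traffic, vehicle):
    """Per-link flow of FIG. 24 with a placeholder consumption model.

    link:    dict with 'length_m', 'altitude_start_m', 'altitude_end_m' (steps S2402, S2405)
    traffic: dict with 'travel_time_s' derived from congestion data (steps S2401, S2403)
    vehicle: dict with 'mass_kg' and placeholder efficiency coefficients (step S2406)
    Returns an estimated consumption in joules (step S2407).
    """
    travel_time = traffic['travel_time_s']
    avg_speed = link['length_m'] / travel_time                    # average link speed (S2404)
    climb = link['altitude_end_m'] - link['altitude_start_m']     # altitude difference (S2405)

    g = 9.81
    # Placeholder model, NOT equations (1)-(6): a distance/speed dependent driving term
    # plus the potential-energy term, with partial recovery of downhill energy.
    driving = vehicle['k_drive'] * link['length_m'] * (1 + vehicle['k_speed'] * avg_speed ** 2)
    potential = vehicle['mass_kg'] * g * climb
    if potential < 0:
        potential *= vehicle['regen_efficiency']
    return (driving + potential) / vehicle['drivetrain_efficiency']
```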
  • FIGS. 25 and 26 are flowcharts showing the procedure of reachable point search processing by the navigation device 500.
  • the navigation device 500 adds the node N (i) _j connected to the link L (i) _j closest to the search start point to the node candidates (step S2501).
  • The search start point is the current point (ofx, ofy) of the vehicle acquired in step S2301 described above.
  • The variables i and j are arbitrary index values.
  • The link and node closest to the search start point are denoted link L(1)_j and node N(1)_j, respectively, and the links further connected to the node N(1)_j are denoted L(2)_j, and so on.
  • The suffix j indicates that a plurality of links or nodes may exist in the same hierarchy.
  • the navigation apparatus 500 determines whether or not there are one or more node candidates (step S2502).
  • If there are one or more node candidates (step S2502: Yes), the navigation apparatus 500 selects the node candidate with the minimum cumulative power consumption from the current point of the vehicle (step S2503). In the following description, it is assumed that the navigation device 500 selects the node N(i)_j as this node candidate.
  • the navigation apparatus 500 determines whether or not the cumulative power consumption from the current point of the vehicle to the node N (i) _j is smaller than the specified energy amount (step S2504).
  • the designated energy amount is, for example, the remaining energy amount of the vehicle at the current location of the vehicle. If smaller than the specified energy amount (step S2504: Yes), the navigation apparatus 500 extracts all the links L (i + 1) _j connected to the node N (i) _j (step S2505).
  • the navigation apparatus 500 selects one link L (i + 1) _j among the links L (i + 1) _j extracted in step S2505 (step S2506).
  • the navigation apparatus 500 performs candidate determination processing for determining whether or not the one link L (i + 1) _j selected in step S2506 is a link candidate (steps S2507 and S2508).
  • the navigation apparatus 500 performs the power consumption calculation process for the one link L (i + 1) _j (step S2509). Next, the navigation apparatus 500 calculates the cumulative power consumption W (i + 1) _j up to the node N (i + 1) _j connected to the one link L (i + 1) _j (step S2510). Next, the navigation apparatus 500 determines whether there is another processed route connected to the node N (i + 1) _j (step S2511).
  • If there is another processed route (step S2511: Yes), the navigation apparatus 500 determines whether the cumulative power consumption W(i+1)_j from the current point of the vehicle to the node N(i+1)_j is smaller than the cumulative power consumption of that other route (step S2512). If it is smaller (step S2512: Yes), the navigation device 500 sets W(i+1)_j as the cumulative power consumption at the node N(i+1)_j (step S2513).
  • If there is no other processed route (step S2511: No), the navigation apparatus 500 proceeds directly to step S2513.
  • the navigation apparatus 500 determines whether or not the node N (i + 1) _j is a node candidate (step S2514). If not a node candidate (step S2514: No), the navigation device 500 adds the node N (i + 1) _j to the node candidate (step S2515).
  • When the one link L(i+1)_j is not a link candidate (step S2508: No), when the cumulative power consumption W(i+1)_j from the current point of the vehicle to the node N(i+1)_j is not smaller than that of the other route (step S2512: No), or when the node N(i+1)_j is already a node candidate (step S2514: Yes), the navigation device 500 proceeds to step S2516.
  • the navigation apparatus 500 determines whether or not the candidate determination process for all links L (i + 1) _j has been completed (step S2516).
  • When the candidate determination process for all links L(i+1)_j has been completed (step S2516: Yes), the node N(i)_j is excluded from the node candidates (step S2517), and the process returns to step S2502.
  • the navigation apparatus 500 selects a node candidate having the minimum cumulative power consumption from the current location of the vehicle from the node candidates (step S2503).
  • the node candidate selected in step S2503 is set as the next node N (i) _j, and the processes in and after step S2504 are performed.
  • If the candidate determination process for all links L(i+1)_j has not been completed (step S2516: No), the process returns to step S2506.
  • The navigation device 500 then selects another link L(i+1)_j connected to the node N(i)_j and repeats the processes from step S2507 to step S2515 until the candidate determination process has been completed for all links L(i+1)_j connected to the same node candidate (step S2516: Yes).
  • When there is no longer any node candidate (step S2502: No), or when the cumulative power consumption from the current point of the vehicle to the node N(i)_j is greater than or equal to the specified energy amount (step S2504: No), the navigation apparatus 500 ends the process according to this flowchart.
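Taken together, FIGS. 25 and 26 describe what is essentially a Dijkstra-style expansion that stops once the minimum cumulative consumption among the remaining candidates reaches the specified energy amount. A compact sketch, with the per-link energy model and the candidate test of FIG. 27 abstracted into callables, might look like this:

```python
import heapq
import itertools

def reachable_nodes(start, neighbours, link_energy, is_candidate_link, energy_budget):
    """Energy-bounded Dijkstra-style search sketch of FIGS. 25 and 26.

    neighbours(node)                   -> iterable of (link, next_node) pairs
    link_energy(link)                  -> estimated consumption on that link (S2509)
    is_candidate_link(link, prev_link) -> the candidate test of FIG. 27 (S2507/S2508)
    Returns {node: minimum cumulative consumption} for every node reachable within
    the energy budget.
    """
    best = {start: 0.0}
    counter = itertools.count()                    # tie-breaker so nodes are never compared
    heap = [(0.0, next(counter), start, None)]
    while heap:                                    # node candidates remain (S2502)
        cost, _, node, via = heapq.heappop(heap)   # minimum-consumption candidate (S2503)
        if cost > best.get(node, float("inf")):
            continue                               # stale heap entry
        if cost >= energy_budget:                  # S2504: every remaining candidate is worse
            break
        for link, nxt in neighbours(node):         # S2505/S2506
            if not is_candidate_link(link, via):
                continue
            new_cost = cost + link_energy(link)    # S2509/S2510
            if new_cost < best.get(nxt, float("inf")):      # S2511-S2513
                best[nxt] = new_cost
                heapq.heappush(heap, (new_cost, next(counter), nxt, link))   # S2514/S2515
    return {n: c for n, c in best.items() if c < energy_budget}
```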
  • FIG. 27 is a flowchart illustrating an example of a procedure of link candidate determination processing by the navigation device.
  • the flowchart in FIG. 27 is an example of the process performed in step S2507 described above.
  • the navigation device 500 first determines whether or not the one link L (i + 1) _j selected in step S2506 is prohibited from passing (step S2701). If the passage is not prohibited (step S2701: NO), the navigation device 500 determines whether one link L (i + 1) _j is one-way reverse running (step S2702). When it is not one-way reverse running (step S2702: No), the navigation apparatus 500 determines whether one link L (i + 1) _j is time-regulated or seasonally regulated (step S2703).
  • If neither time regulation nor seasonal regulation applies (step S2703: No), the navigation apparatus 500 determines whether the road importance of the one link L(i+1)_j is lower than that of the link L(i)_j connected to the node on the current-point side of the one link L(i+1)_j (step S2704). If the importance is not lower than that of the link L(i)_j (step S2704: No), the navigation device 500 determines the one link L(i+1)_j to be a link candidate (step S2705) and ends the processing according to this flowchart.
  • On the other hand, when passage is prohibited (step S2701: Yes), when the link would be driven against a one-way restriction (step S2702: Yes), when time regulation or seasonal regulation applies (step S2703: Yes), or when the importance is lower than that of the link L(i)_j (step S2704: Yes), the navigation apparatus 500 ends the process according to this flowchart without adopting the link as a candidate.
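The candidate test of FIG. 27 can be summarised as a simple predicate; the attribute names used below are illustrative assumptions.

```python
def is_candidate_link(link, prev_link):
    """Candidate determination of FIG. 27 (attribute names are illustrative).

    A link is rejected if passage is prohibited, if it would be driven against a
    one-way restriction, if a time or seasonal regulation applies, or if its road
    importance is lower than that of the link it is reached from.
    """
    if link.passage_prohibited:                                            # S2701
        return False
    if link.one_way_reverse:                                               # S2702
        return False
    if link.time_regulated or link.season_regulated:                       # S2703
        return False
    if prev_link is not None and link.importance < prev_link.importance:   # S2704
        return False
    return True                                                            # S2705
```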
  • FIG. 28 is a flowchart illustrating an example of a procedure of identification information provision processing by the navigation device.
  • the flowchart in FIG. 28 is the processing performed in step S2304 described above.
  • the navigation apparatus 500 first acquires longitude / latitude information (x, y) of a reachable node (searchable point) (step S2801). Next, the navigation apparatus 500 acquires maximum longitude x_max, minimum longitude x_min, maximum latitude y_max, and minimum latitude y_min (step S2802).
  • Next, the navigation apparatus 500 calculates the distance w1 from the current vehicle location (ofx, ofy) to the maximum longitude x_max, the distance w2 to the minimum longitude x_min, the distance w3 to the maximum latitude y_max, and the distance w4 to the minimum latitude y_min (step S2803).
  • Next, the navigation apparatus 500 converts the map data from the absolute coordinate system to the screen coordinate system using the magnification mag calculated in step S2805 and generates a mesh (X, Y) of m × m dots (step S2806).
  • The navigation apparatus 500 then gives reachable identification information to every mesh cell (X, Y) that includes a reachable node and unreachable identification information to every mesh cell (X, Y) that does not. Then, the navigation device 500 performs the first identification information change process, which removes the missing points in the mesh cells (X, Y) corresponding to bridges or tunnels (step S2807).
  • the navigation device 500 performs a second identification information change process (step S2808).
  • the navigation apparatus 500 performs a third identification information change process (step S2809), and ends the process according to this flowchart.
  • The second identification information change process is the expansion (dilation) step of the closing operation.
  • The third identification information change process is the reduction (erosion) step of the closing operation.
  • In this flowchart, the second identification information change process (step S2808) and the third identification information change process (step S2809) are performed after the first identification information change process (step S2807).
  • Alternatively, the first identification information change process (step S2807) may be performed after the second and third identification information change processes.
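A minimal sketch of the identification-information assignment and of the closing used by the second and third change processes follows, assuming a NumPy mesh and the SciPy binary morphology routines; the coordinate-to-cell conversion is simplified compared with the magnification-based conversion of step S2806.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def build_mesh(reachable_xy, x_min, y_min, cell, shape):
    """Rasterise reachable nodes into a mesh: 1 = reachable, 0 = unreachable (step S2806)."""
    mesh = np.zeros(shape, dtype=np.uint8)
    for x, y in reachable_xy:                     # (longitude, latitude) of reachable nodes
        col = int((x - x_min) / cell)
        row = int((y - y_min) / cell)
        if 0 <= row < shape[0] and 0 <= col < shape[1]:
            mesh[row, col] = 1
    return mesh

def closing(mesh, iterations=1):
    """Second (dilation, S2808) and third (erosion, S2809) identification information
    change processes: a morphological closing that fills small missing points."""
    grown = binary_dilation(mesh, iterations=iterations)
    return binary_erosion(grown, iterations=iterations).astype(np.uint8)
```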
  • FIG. 29 is a flowchart illustrating an example of a procedure of first identification information change processing by the navigation device 500.
  • The flowchart in FIG. 29 is an example of the process performed in step S2807 described above. Specifically, when the identification information of each of the areas corresponding to the two entrances of a bridge or tunnel is reachable identification information, the navigation device 500 removes the missing points generated in the areas corresponding to that bridge or tunnel.
  • the navigation apparatus 500 first acquires a mesh of two-dimensional matrix data of my rows and mx columns (step S2911). Next, the navigation device 500 assigns 1 to variables i and j in order to search for identification information of the area of the i-th row and j-th column of the mesh (steps S2912, S2913). Next, the navigation apparatus 500 determines whether or not the region in the i row and j column of the mesh is a bridge or a tunnel entrance (step S2914).
  • If it is a bridge or tunnel entrance (step S2914: Yes), the navigation apparatus 500 determines whether the identification information of the area in row i, column j of the mesh is "1" (step S2915). When the identification information of the area in row i, column j is "1" (step S2915: Yes), the navigation device 500 acquires the position information (i1, j1) of the area corresponding to the other entrance of the bridge or tunnel associated with the area in row i, column j of the mesh (step S2916).
  • the navigation apparatus 500 determines whether or not the identification information of the area in the i1 row j1 column of the mesh is “1” (step S2917).
  • If the identification information of the area in row i1, column j1 is "1" (step S2917: Yes), the navigation apparatus 500 acquires the position information of all the areas on the section connecting the area in row i, column j and the area in row i1, column j1 (step S2918).
  • the navigation apparatus 500 changes the identification information of each area acquired in step S2918 to “1” (step S2919). As a result, the missing point generated in the region corresponding to the bridge or tunnel connecting the region of i row and j column and the region of i1 row and j1 column is removed.
  • When the identification information of each area acquired in step S2918 is already "1", the navigation apparatus 500 may proceed to step S2920 without performing the process of step S2919.
  • When the area in row i, column j is not a bridge or tunnel entrance (step S2914: No), when the identification information of the area in row i, column j is not "1" (step S2915: No), or when the identification information of the area in row i1, column j1 is not "1" (step S2917: No), the navigation apparatus 500 proceeds to step S2920.
  • Next, the navigation apparatus 500 adds 1 to the variable j (step S2920) and determines whether the variable j exceeds mx columns (step S2921). If it does not (step S2921: No), the navigation device 500 returns to step S2914 and repeats the subsequent processing. If the variable j exceeds mx columns (step S2921: Yes), the navigation apparatus 500 adds 1 to the variable i (step S2922) and determines whether the variable i exceeds my rows (step S2923).
  • If the variable i does not exceed my rows (step S2923: No), the navigation device 500 returns to step S2913, substitutes 1 for the variable j, and repeats the subsequent processing. If the variable i exceeds my rows (step S2923: Yes), the navigation apparatus 500 ends the process according to this flowchart. In this way, the navigation apparatus 500 can remove all missing points on bridges or tunnels included in the mesh of my-row by mx-column two-dimensional matrix data.
  • Note that the navigation apparatus 500 does not need to determine again, in the processing of step S2914, whether the area in row i1, column j1 acquired in step S2916 as the other entrance of the bridge or tunnel is a bridge or tunnel entrance. Thereby, the navigation apparatus 500 can reduce the processing amount of the first identification information change process.
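A sketch of this first identification information change process follows; the `bridges` mapping from one entrance cell of a bridge or tunnel to the other is an assumed input, and the cells on the connecting section are obtained here by simple linear interpolation.

```python
def fill_bridge_gaps(mesh, bridges):
    """First identification information change process (FIG. 29 sketch, NumPy mesh).

    `bridges` maps one entrance cell (i, j) of a bridge or tunnel to the other
    entrance cell (i1, j1).  When both entrance cells carry reachable identification
    information ("1"), every cell on the section connecting them is set to "1",
    which removes the missing points over the bridge or tunnel.
    """
    for (i, j), (i1, j1) in bridges.items():
        if mesh[i, j] == 1 and mesh[i1, j1] == 1:
            steps = max(abs(i1 - i), abs(j1 - j))
            for t in range(1, steps):
                r = round(i + (i1 - i) * t / steps)
                c = round(j + (j1 - j) * t / steps)
                mesh[r, c] = 1
    return mesh
```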
  • FIGS. 30 and 31 are flowcharts showing an example of the procedure of the reachable range contour extraction process by the navigation device.
  • The flowcharts of FIGS. 30 and 31 are an example of the process performed in step S2305 described above, namely the reachable range contour extraction process described as the second contour extraction approach of the navigation apparatus 500.
  • In FIGS. 30 and 31, the navigation apparatus 500 first acquires a mesh of my-row by mx-column two-dimensional matrix data (step S3001). Next, the navigation apparatus 500 acquires the longitude/latitude information of each area of the mesh.
  • the navigation device 500 initializes the variable i and adds 1 to the variable i in order to search the identification information of the region of the i row and j column of the mesh (steps S3003 and S3004).
  • the navigation apparatus 500 determines whether or not the variable i exceeds the my line (step S3005).
  • When the variable i does not exceed my rows (step S3005: No), the navigation device 500 initializes the variable j and adds 1 to the variable j (steps S3006 and S3007). Next, the navigation apparatus 500 determines whether the variable j exceeds mx columns (step S3008).
  • the navigation apparatus 500 determines whether or not the identification information of the area in the i-th row and j-th column of the mesh is “1” (step S3009). If the identification information of the i-th row and j-th column region is “1” (step S3009: Yes), the navigation device 500 acquires the upper left coordinates (px1, py1) of the i-th row and j-th column region of the mesh (step S3010). ).
  • the upper left coordinates (px1, py1) of the region of i row and j column are the minimum longitude px1 and the minimum latitude py1 of the region of i row and j column.
  • the navigation apparatus 500 determines whether or not the variable j is smaller than the mx column (step S3011). If the variable j is greater than or equal to mx columns (step S3011: No), the navigation apparatus 500 acquires the lower right coordinates (px2, py2) of the region of i rows and j columns of the mesh (step S3012).
  • the lower right coordinates (px2, py2) of the area of i row and j column are the maximum longitude px2 and the maximum latitude py2 of the area of i row and j column.
  • Next, the navigation device 500 sets the upper-left coordinates (px1, py1) acquired in step S3010 and the lower-right coordinates (px2, py2) acquired in step S3012 in the map data (step S3016). Then, the navigation device 500 fills the rectangular area having the upper-left coordinates (px1, py1) and the lower-right coordinates (px2, py2) as opposite vertices (step S3017), returns to step S3004, and repeats the subsequent processing.
  • When the variable j is smaller than mx columns (step S3011: Yes), the navigation apparatus 500 adds 1 to the variable j (step S3013) and determines whether the identification information of the area in row i, column j of the mesh is "1" (step S3014). If it is not "1" (step S3014: No), the navigation apparatus 500 acquires the lower-right coordinates (px2, py2) of the area in row i, column j-1 of the mesh (step S3015) and performs the processing from step S3016 onward.
  • If the identification information of the area in row i, column j is "1" (step S3014: Yes), the process returns to step S3011 and the subsequent processing is repeated. If the variable i exceeds my rows (step S3005: Yes), the navigation device 500 ends the process according to this flowchart. If the variable j exceeds mx columns (step S3008: Yes), the process returns to step S3004 and the subsequent processing is repeated.
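In other words, the flowcharts of FIGS. 30 and 31 scan each row of the mesh for runs of reachable cells and emit one filled rectangle per run. A sketch, with the per-cell longitude/latitude lookup abstracted into an assumed helper, follows:

```python
def reachable_rectangles(mesh, cell_bounds):
    """Row-run rectangle extraction sketch of FIGS. 30 and 31 (NumPy mesh).

    cell_bounds(i, j) -> (min_lon, min_lat, max_lon, max_lat) of the cell in row i,
    column j; this helper stands in for the per-area longitude/latitude information
    acquired in the flowchart.
    """
    rects = []
    rows, cols = mesh.shape
    for i in range(rows):
        j = 0
        while j < cols:
            if mesh[i, j] == 1:
                x1, y1, _, _ = cell_bounds(i, j)            # upper-left of the run (S3010)
                while j + 1 < cols and mesh[i, j + 1] == 1:
                    j += 1
                _, _, x2, y2 = cell_bounds(i, j)            # lower-right of the run (S3012/S3015)
                rects.append((x1, y1, x2, y2))              # rectangle to be filled (S3016/S3017)
            j += 1
    return rects
```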
  • FIG. 32 is a flowchart illustrating an example of a smoothing process performed by the navigation device 500.
  • the flowchart in FIG. 32 is an example of the process performed in step S2306 described above.
  • In FIG. 32, the navigation apparatus 500 first causes the complementing unit 262 to perform complement processing on the contour data indicating the contour of the reachable range of the moving object (step S3201).
  • the navigation apparatus 500 performs a fast Fourier transform (FFT process) on the array of deflection angles obtained from the contour data after the complement processing by the conversion unit 263 (step S3202).
  • the navigation apparatus 500 removes the high frequency component by passing the low frequency filter through the frequency component of the declination obtained by the frequency conversion by the removing unit 264 (step S3203).
  • the navigation device 500 reproduces the contour data by performing inverse fast Fourier transform (inverse FFT processing) on the declination frequency component from which the high frequency component has been removed by the inverse transform unit 265 (step S3204).
  • the navigation apparatus 500 performs the thinning process which thins out a vertex by the thinning part 266 (step S3205). Thereby, the contour data is smoothed.
  • FIG. 33 is an explanatory diagram schematically illustrating an example of acceleration applied to a vehicle traveling on a road having a gradient.
  • The second term on the right side of the above equation (1) represents the combined acceleration C of the acceleration A accompanying the traveling of the vehicle and the traveling-direction component B of the gravitational acceleration g.
  • The distance D of the section in which the vehicle travels is obtained from the travel time T and the travel speed V.
  • In the navigation apparatus 500, the estimation accuracy is improved by estimating the energy consumption in consideration of the road gradient, that is, the fourth information.
  • the slope of the road on which the vehicle travels can be known using, for example, an inclinometer mounted on the navigation device 500. Further, when the inclinometer is not mounted on the navigation device 500, for example, road gradient information included in the map data can be used.
  • traveling resistance generated in the vehicle will be described.
  • the navigation device 500 calculates the running resistance by the following equation (16), for example.
  • traveling resistance is generated in a moving body during acceleration or traveling due to road type, road gradient, road surface condition, and the like.
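Equation (16) itself is not reproduced in this excerpt; the sketch below therefore uses a commonly used decomposition of running resistance (rolling, aerodynamic, grade and acceleration resistance) purely as an illustration, with placeholder coefficients.

```python
import math

def running_resistance(mass_kg, speed_mps, accel_mps2, grade_rad,
                       c_rr=0.015, rho=1.2, cd_a=0.6):
    """Illustrative running-resistance decomposition in newtons (NOT equation (16)):
    rolling resistance + aerodynamic drag + grade resistance + acceleration resistance."""
    g = 9.81
    rolling = c_rr * mass_kg * g * math.cos(grade_rad)
    aero = 0.5 * rho * cd_a * speed_mps ** 2
    grade = mass_kg * g * math.sin(grade_rad)
    inertial = mass_kg * accel_mps2
    return rolling + aero + grade + inertial
```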
  • FIG. 34 is an explanatory diagram showing an example of a display example after the reachable point search process by the navigation device 500.
  • FIG. 35 is an explanatory diagram illustrating an example of a display example after the identification information providing process by the navigation device 500.
  • FIG. 36 is an explanatory diagram illustrating an example of a display example after the first identification information change process by the navigation device.
  • FIG. 37 is an explanatory diagram showing an example of a display example after the closing process (expansion) by the navigation device 500.
  • FIG. 38 is an explanatory diagram illustrating an example of a display example after the closing process (reduction) by the navigation device 500.
  • the display 513 displays reachable points of a plurality of vehicles searched by the navigation device 500 together with map data.
  • the state of the display 513 illustrated in FIG. 34 is an example of information displayed on the display when the reachable point search process is performed by the navigation device 500. Specifically, this is a state in which the process of step S2303 in FIG. 23 has been performed.
  • The map data is divided into a plurality of areas by the navigation device 500, and identification information indicating whether each area is reachable or unreachable is given based on the reachable points; thereby, as shown in FIG. 35, the reachable range 3500 of the vehicle based on the reachable identification information is displayed on the display 513.
  • the reachable range 3500 of the vehicle includes, for example, an area corresponding to both entrances and exits of the Tokyo Bay Crossing Road (Tokyo Bay Aqua Line: registered trademark) 3510 that crosses Tokyo Bay.
  • the vehicle reachable range 3500 includes only one region 3511 out of all the regions on the Tokyo Bay crossing road 3510.
  • When the first identification information change process is performed by the navigation device 500, the missing points on the Tokyo Bay crossing road are removed, and, as shown in FIG. 36, a reachable range 3620 including the area 3621 is displayed.
  • When the closing (expansion) process is then performed by the navigation device 500, a reachable range 3700 of the vehicle from which the missing points have been removed is generated, as shown in FIG. 37.
  • Since the entire area 3621 on the Tokyo Bay crossing road has already been included in the reachable range 3620 by the first identification information change process, the entire area 3710 on the Tokyo Bay crossing road is likewise included in the vehicle reachable range 3700.
  • When the closing (reduction) process is performed by the navigation device 500, the outer periphery of the vehicle reachable range 3800 becomes substantially the same size as the outer periphery of the vehicle reachable range 3500 before the closing was performed, as shown in FIG. 38.
  • The boundary of the entire region 3710 on the Tokyo Bay crossing road in FIG. 37 and the boundary of the entire region 3810 on the Tokyo Bay crossing road in FIG. 38 would actually be displayed as mesh-dependent boundaries, but for ease of understanding they are shown here with hatched boundaries.
  • By this closing, the outline of the reachable range 3800 of the vehicle can be displayed smoothly, and since the missing points have been removed, the reachable range 3800 of the vehicle is displayed as a two-dimensionally smooth surface 3802. Even after the reduction step of the closing, the entire area 3810 on the Tokyo Bay crossing road remains displayed as the vehicle reachable range 3800 or its outline 3801.
  • FIG. 39 is an explanatory diagram showing an example of a display example after the smoothing process by the navigation device 500. Since the contour 3901 of the vehicle reachable range 3900 is smoothed from the state of FIG. 38 (contour 3801) by the navigation device 500, the vehicle reachable range 3900 is displayed with a two-dimensional smooth surface 3902. .
  • As described above, in the navigation apparatus 500, the map information is divided into a plurality of areas, it is searched whether the moving body can reach each area, and each area is given reachable or unreachable identification information identifying whether the moving body can reach it. The navigation apparatus 500 then generates the reachable range of the moving body from the areas to which reachable identification information has been given and displays it.
  • the navigation device 500 converts a plurality of areas obtained by dividing the map information into image data, and assigns identification information indicating that each of the plurality of areas is reachable or unreachable, and then performs a closing expansion process. For this reason, the navigation apparatus 500 can remove the missing point within the reachable range of the moving body.
  • the navigation device 500 converts the plurality of areas obtained by dividing the map information into image data, and assigns identification information indicating that each of the plurality of areas is reachable or unreachable, and then performs an opening reduction process. For this reason, the navigation apparatus 500 can remove the isolated points in the reachable range of the moving object.
  • the navigation device 500 can remove missing points and isolated points from the reachable range of the moving body, and thus can display the travelable range of the moving body on a two-dimensional smooth surface in an easy-to-read manner. .
  • the navigation apparatus 500 extracts the mesh outline generated by dividing the map information into a plurality of regions. For this reason, the navigation apparatus 500 can display the outline of the reachable range of a moving body smoothly.
  • The navigation device 500 searches for the reachable points of the mobile body while narrowing down the roads on which reachable points are searched. For this reason, the navigation apparatus 500 can reduce the processing amount when searching for the reachable points of the mobile body. Even if the number of searched reachable points decreases because of this narrowing down, the expansion step of the closing described above removes the missing points that would otherwise arise within the reachable range of the mobile body. Therefore, the navigation apparatus 500 can reduce the processing amount for detecting the reachable range of the mobile body and can still display the travelable range of the moving body as an easy-to-see, two-dimensionally smooth surface.
  • the navigation device 500 decomposes the line segment data constituting the contour data into an X-axis component and a Y-axis component as preprocessing of the fast Fourier transform, and uniformizes the length of each line segment data.
  • In the fast Fourier transform, it is then only necessary to transform the deflection angle and length obtained for each vertex of the contour data, so the smoothing process can be speeded up.
  • Since the line segment data is divided so as to be included in the reachable range in the complement processing, a sense of incongruity is suppressed and visibility is improved compared with the case where the line segment data is divided so as not to be included in the reachable range.
  • The smoothing process can also be speeded up by the simple thinning process.
  • In the image processing apparatus, since the thinning determination is made based on the magnitude of the deflection angle, the thinning itself remains simple and fast.
  • Since the frequency component values can be manipulated directly in this method, the degree of freedom of smoothing is high compared with other smoothing filters.
  • As a result, a more accurate reachable range can be displayed at a higher speed than when other smoothing filters are used.
  • FIG. 40 is a block diagram of an example of a functional configuration of the image processing system according to the second embodiment.
  • a functional configuration of the image processing system 4000 according to the second embodiment will be described.
  • the image processing system 4000 according to the second embodiment includes a server 4010 and a terminal 4020.
  • the image processing system 4000 according to the second embodiment includes the function of the image processing apparatus 200 according to the first embodiment in the server 4010 and the terminal 4020.
  • the server 4010 generates information to be displayed on the display unit 210 by the terminal 4020 mounted on the mobile object. Specifically, the server 4010 detects information related to the reachable range of the mobile object and transmits it to the terminal 4020.
  • the terminal 4020 may be mounted on a mobile body, may be used in the mobile body as a mobile terminal, or may be used outside the mobile body as a mobile terminal. Terminal 4020 receives information about the reachable range of the moving object from server 4010.
  • the server 4010 includes a calculation unit 202, a search unit 203, a division unit 204, a grant unit 205, a server reception unit 4011, and a server transmission unit 4012.
  • the terminal 4020 includes an acquisition unit 201, a display control unit 206, a terminal reception unit 4021, and a terminal transmission unit 4022.
  • the server reception unit 4011 receives information transmitted from the terminal 4020. Specifically, for example, the server reception unit 4011 receives information about a mobile unit from a terminal 4020 connected to a communication network such as a public line network, a mobile phone network, DSRC, LAN, WAN, etc. via wireless.
  • the information regarding the moving body is information regarding the current position of the moving body and information regarding the initial amount of energy that is the amount of energy held by the moving body at the current position of the moving body.
  • Information received by the server reception unit 4011 is information referred to by the calculation unit 202.
  • The server transmission unit 4012 transmits, to the terminal 4020, the plurality of areas of the divided map information to which the assigning unit 205 has given reachable identification information identifying that the moving body can reach them, as the reachable range of the moving body. Specifically, for example, the server transmission unit 4012 transmits the information to the terminal 4020 connected wirelessly to a communication network such as a public line network, a mobile phone network, DSRC, LAN, or WAN.
  • Terminal 4020 is connected to server 4010 in a communicable state via, for example, an information communication network of a mobile terminal or a communication unit (not shown) provided in its own device.
  • the terminal receiving unit 4021 receives information from the server 4010. Specifically, the terminal reception unit 4021 divides map information that is divided into a plurality of regions and each region is provided with identification information that is reachable or unreachable based on the reachable point of the mobile object. Receive. More specifically, for example, the terminal receiving unit 4021 receives information from a server 4010 connected to a communication network such as a public line network, a mobile phone network, DSRC, LAN, and WAN via a wireless connection.
  • the terminal transmission unit 4022 transmits information regarding the moving object acquired by the acquisition unit 201 to the server 4010. Specifically, for example, the terminal transmission unit 4022 transmits information about the mobile unit to a server 4010 connected to a communication network such as a public line network, a mobile phone network, DSRC, LAN, WAN, or the like via wireless communication.
  • The server 4010 performs the estimated energy consumption calculation processing, the reachable point search processing, and the identification information addition processing among the image processing by the image processing apparatus 200 according to the first embodiment. Specifically, in the flowchart of FIG. 4, the terminal 4020 performs the process of step S401 and transmits the information acquired in step S401 to the server 4010.
  • the server 4010 receives information from the terminal 4020.
  • the server 4010 performs the processes in steps S402 to S406 based on the information received from the terminal 4020, and transmits the information acquired in step S406 to the terminal 4020.
  • the terminal 4020 receives information from the server 4010. Then, the terminal 4020 performs step S407 based on the information received from the server 4010, and ends the process according to this flowchart.
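As an illustration of this split, the exchange between the terminal 4020 and the server 4010 can be thought of as a simple request/response pair; the payload field names below are assumptions, not part of the description.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ReachabilityRequest:
    """Sent by terminal 4020 (the information acquired in step S401)."""
    current_position: Tuple[float, float]   # (longitude, latitude) of the moving body
    initial_energy_wh: float                # initial stored energy amount

@dataclass
class ReachabilityResponse:
    """Returned by server 4010 after steps S402-S406: the areas given reachable
    identification information, i.e. the reachable range displayed in step S407."""
    reachable_cells: List[Tuple[int, int]]  # (row, column) indices of reachable mesh areas
    cell_size_deg: float                    # size of one mesh area in degrees
```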
  • the image processing system 4000 and the image processing method according to the second embodiment can obtain the same effects as the image processing apparatus 200 and the image processing method according to the first embodiment.
  • FIG. 41 is a block diagram of an example of a functional configuration of the image processing system according to the third embodiment.
  • a functional configuration of the image processing system 4100 according to the third embodiment will be described.
  • An image processing system 4100 according to the third exemplary embodiment includes a first server 4110, a second server 4120, a third server 4130, and a terminal 4140.
  • the first server 4110 has the function of the calculation unit 202 of the image processing apparatus 200 of the first embodiment
  • the second server 4120 has the function of the search unit 203 of the image processing apparatus 200 of the first embodiment.
  • The third server 4130 has the functions of the dividing unit 204 and the assigning unit 205 of the image processing apparatus 200 of the first embodiment.
  • The terminal 4140 has the functions of the acquisition unit 201 and the display control unit 206 of the image processing apparatus 200 of the first embodiment.
  • terminal 4140 has the same configuration as terminal 4020 of the second embodiment.
  • the terminal 4140 includes an acquisition unit 201, a display control unit 206, a terminal reception unit 4141, and a terminal transmission unit 4142.
  • Terminal reception unit 4141 has the same configuration as terminal reception unit 4021 of the second embodiment.
  • Terminal transmission unit 4142 has the same configuration as terminal transmission unit 4022 of Embodiment 2.
  • the first server 4110 includes a calculation unit 202, a first server reception unit 4111, and a first server transmission unit 4112.
  • the second server 4120 includes a search unit 203, a second server reception unit 4121, and a second server transmission unit 4122.
  • the third server 4130 includes a dividing unit 204, a granting unit 205, a third server receiving unit 4131, and a third server transmitting unit 4132.
  • the first server reception unit 4111 receives information transmitted from the terminal 4140. Specifically, for example, the first server reception unit 4111 receives information from the terminal transmission unit 4142 of the terminal 4140 that is connected to a communication network such as a public line network, a mobile phone network, DSRC, LAN, WAN, etc. by radio. Receive. Information received by the first server reception unit 4111 is information referred to by the calculation unit 202.
  • the first server transmission unit 4112 transmits the information calculated by the calculation unit 202 to the second server reception unit 4121. Specifically, the first server transmission unit 4112 transmits information to the second server reception unit 4121 that is wirelessly connected to a communication network such as a public network, a mobile phone network, DSRC, LAN, or WAN. Alternatively, the information may be transmitted to the second server reception unit 4121 connected by wire.
  • the second server reception unit 4121 receives the information transmitted by the terminal transmission unit 4142 and the first server transmission unit 4112.
  • Specifically, for example, the second server reception unit 4121 receives information from the first server transmission unit 4112 and the terminal transmission unit 4142, which are connected wirelessly to a communication network such as a public line network, a mobile phone network, DSRC, LAN, or WAN.
  • the second server reception unit 4121 may receive information from the first server transmission unit 4112 connected by wire.
  • Information received by the second server reception unit 4121 is information referred to by the search unit 203.
  • the second server transmission unit 4122 transmits the information searched by the search unit 203 to the third server reception unit 4131. Specifically, for example, the second server transmission unit 4122 transmits information to a third server reception unit 4131 connected to a communication network such as a public line network, a mobile phone network, DSRC, LAN, WAN, etc. by radio. Alternatively, the information may be transmitted to the third server reception unit 4131 connected by wire.
  • the third server reception unit 4131 receives the information transmitted by the terminal transmission unit 4142 and the second server transmission unit 4122.
  • Specifically, for example, the third server reception unit 4131 receives information from the second server transmission unit 4122 and the terminal transmission unit 4142, which are connected wirelessly to a communication network such as a public line network, a mobile phone network, DSRC, LAN, or WAN.
  • the third server reception unit 4131 may receive information from the second server transmission unit 4122 connected by wire.
  • Information received by the third server reception unit 4131 is information referred to by the division unit 204.
  • the third server transmission unit 4132 transmits the information generated by the provision unit 205 to the terminal reception unit 4141. Specifically, for example, the third server transmission unit 4132 transmits information to a terminal reception unit 4141 connected to a communication network such as a public line network, a mobile phone network, a DSRC, a LAN, and a WAN via a radio.
  • the first server 4110 performs estimated energy consumption calculation processing
  • the second server 4120 performs reachable point search processing.
  • the third server 4130 performs the identification information adding process.
  • the terminal 4140 performs the process of step S401 and transmits the information acquired in step S401 to the first server 4110.
  • the first server 4110 receives information from the terminal 4140.
  • the first server 4110 performs steps S402 and S403 based on the information received from the terminal 4140, and transmits the information calculated in step S403 to the second server 4120.
  • the second server 4120 receives information from the first server 4110.
  • the second server 4120 performs the process of step S404 based on the information received from the first server 4110, and transmits the information searched in step S404 to the third server 4130.
  • the third server 4130 receives information from the second server 4120.
  • the third server 4130 performs the processes of steps S405 and S406 based on the information from the second server 4120, and transmits the information generated in step S406 to the terminal 4140.
  • the terminal 4140 receives information from the third server 4130. Then, the terminal 4140 performs step S407 based on the information received from the third server 4130, and ends the processing according to this flowchart.
  • the image processing system 4100 and the image processing method according to the third embodiment can obtain the same effects as the image processing device 200 and the image processing method according to the first embodiment.
  • FIG. 42 is an explanatory diagram of an example of a system configuration of the image processing apparatus according to the second embodiment.
  • In FIG. 42, the image processing system 4200 includes a navigation device 4210 mounted on a vehicle 4230, a server 4220, and a network 4240.
  • Navigation device 4210 is mounted on vehicle 4230.
  • the navigation device 4210 transmits information on the current location of the vehicle and information on the initial stored energy amount to the server 4220.
  • the navigation device 4210 displays the information received from the server 4220 on a display to notify the user.
  • Server 4220 receives information on the current location of the vehicle and information on the initial stored energy amount from navigation device 4210.
  • Server 4220 generates information regarding the reachable range of vehicle 4230 based on the received vehicle information.
  • the hardware configuration of the server 4220 and the navigation device 4210 is the same as the hardware configuration of the navigation device 500 of the first embodiment.
  • the navigation device 4210 only needs to have a hardware configuration corresponding to a function of transmitting vehicle information to the server 4220 and a function of receiving information from the server 4220 and notifying the user.
  • The image processing system 4200 may also be configured such that the navigation device 4210 mounted on the vehicle serves as the terminal 4140 of the third embodiment and the functional configuration of the server 4220 is distributed among the first to third servers 4110 to 4130 of the third embodiment.
  • the image processing method described in the present embodiment can be realized by executing a program prepared in advance on a computer such as a personal computer or a workstation.
  • This program is recorded on a computer-readable recording medium such as a hard disk, a flexible disk, a CD-ROM, an MO, and a DVD, and is executed by being read from the recording medium by the computer.
  • the program may be a transmission medium that can be distributed via a network such as the Internet.

Abstract

The invention relates to an image processing apparatus (200) that generates, on the basis of the remaining energy amount of a movable body, a reachable range of the movable body and causes a display unit (210) to display the reachable range. An acquisition unit (201) acquires information on the current position of the movable body and also acquires information on an initial stored energy amount of the movable body at the current position. A calculation unit (202) calculates an estimated energy consumption amount, which is the amount of energy consumed by the movable body when it travels a predetermined section. A search unit (203) searches for a plurality of reachable positions, which are positions that the movable body can reach from the current position. A division unit (204) divides a piece of map information into a plurality of areas. An assigning unit (205) gives, to the plurality of areas divided by the division unit (204), respective pieces of identification information for distinguishing whether or not the movable body can reach each area. A display control unit (206) causes the display unit (210) to display the reachable range of the movable body together with the map information.
PCT/JP2012/074359 2012-09-24 2012-09-24 Appareil de traitement d'image et procédé de traitement d'image WO2014045432A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201280073561.3A CN104335010A (zh) 2012-09-24 2012-09-24 图像处理装置和图像处理方法
PCT/JP2012/074359 WO2014045432A1 (fr) 2012-09-24 2012-09-24 Appareil de traitement d'image et procédé de traitement d'image
JP2014536528A JPWO2014045432A1 (ja) 2012-09-24 2012-09-24 画像処理装置および画像処理方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/074359 WO2014045432A1 (fr) 2012-09-24 2012-09-24 Appareil de traitement d'image et procédé de traitement d'image

Publications (1)

Publication Number Publication Date
WO2014045432A1 true WO2014045432A1 (fr) 2014-03-27

Family

ID=50340781

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/074359 WO2014045432A1 (fr) 2012-09-24 2012-09-24 Appareil de traitement d'image et procédé de traitement d'image

Country Status (3)

Country Link
JP (1) JPWO2014045432A1 (fr)
CN (1) CN104335010A (fr)
WO (1) WO2014045432A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106646518B (zh) * 2016-11-18 2019-06-11 北京创业公社征信服务有限公司 基于三阶贝塞尔曲线及插值的gps轨迹数据补全方法
CN108241712B (zh) * 2016-12-27 2021-04-20 北京四维图新科技股份有限公司 一种地图数据处理方法和装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004213541A (ja) * 2003-01-08 2004-07-29 Sony Corp Data processing system and method, and computer program
JP4861534B1 (ja) * 2010-09-17 2012-01-25 パイオニア株式会社 Energy consumption estimation device, energy consumption estimation method, energy consumption estimation program, and recording medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0755484A (ja) * 1993-08-10 1995-03-03 Toyota Motor Corp In-vehicle navigation device
JPH08171643A (ja) * 1994-12-16 1996-07-02 Nec Corp Contour extraction method for mesh-like figures
JPH09102026A (ja) * 1995-10-04 1997-04-15 Hitachi Ltd Method for displaying a predicted range on a digital map
JPH1116094A (ja) * 1997-06-20 1999-01-22 Matsushita Electric Ind Co Ltd Reachable range display device
JP2007298744A (ja) * 2006-04-28 2007-11-15 Matsushita Electric Ind Co Ltd Map display device and map display method
JP2008096209A (ja) * 2006-10-10 2008-04-24 Matsushita Electric Ind Co Ltd Reachable range display device, reachable range display method, and program therefor
JP2010127678A (ja) * 2008-11-26 2010-06-10 Aisin Aw Co Ltd Travel guidance device, travel guidance method, and computer program
JP2011217509A (ja) * 2010-03-31 2011-10-27 Nissan Motor Co Ltd Display device and display method for electric vehicle

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017151108A (ja) * 2015-02-26 2017-08-31 インベンセンス・インコーポレーテッド Method and system for multipath smoothing
JP7133903B2 (ja) 2015-02-26 2022-09-09 インベンセンス・インコーポレーテッド Method and system for multipath smoothing

Also Published As

Publication number Publication date
JPWO2014045432A1 (ja) 2016-08-18
CN104335010A (zh) 2015-02-04

Similar Documents

Publication Publication Date Title
JP6047651B2 (ja) Image processing device and image processing method
WO2014045432A1 (fr) Image processing apparatus and image processing method
WO2013027270A1 (fr) Image processing device, image processing management device, terminal, processing device, and image processing method
JP2019049569A (ja) Image processing device, image processing method, and image processing program
JP2017187498A (ja) Image processing device, image processing method, and image processing program
WO2014016948A1 (fr) Image processing device and method
JP2022089850A (ja) Image processing device, image processing method, and image processing program
JP2017227652A (ja) Image processing device, image processing method, and image processing program
JP6058756B2 (ja) Image processing device and image processing method
WO2014038026A1 (fr) Image processing device and image processing method
JP2022070882A (ja) Image processing device, image processing method, and image processing program
JP2018200319A (ja) Image processing device, image processing method, and image processing program
WO2013114579A1 (fr) Image processing device, method, and program
JP5816705B2 (ja) Image processing device, image processing management device, terminal, image processing method, and data structure
JP2016006695A (ja) Replenishment facility search device, replenishment facility search method, and replenishment facility search program
WO2013125019A1 (fr) Image processing device and image processing method
JP5819445B2 (ja) Image processing device, image processing management device, terminal, and image processing method
CN104204728B (zh) Image processing device and image processing method
JP5619288B2 (ja) Image processing device, image processing management device, terminal, processing device, and image processing method
WO2014080535A1 (fr) Display control device, display control method, display control program, display control system, display control server, and terminal
WO2013105271A1 (fr) Image processing apparatus, image processing management apparatus, terminal, and image processing method
WO2014080506A1 (fr) Display control device, display control method, display control program, display control system, display control server, and terminal
JP2021073581A (ja) Reachable range calculation device, reachable range calculation method, and reachable range calculation program
JPWO2013125019A1 (ja) Image processing device and image processing method
WO2014068685A1 (fr) Display control device, server device, display control method, display control program, and recording medium

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 12884791

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014536528

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 12884791

Country of ref document: EP

Kind code of ref document: A1