WO2018134897A1 - Position and posture detection device, AR display device, position and posture detection method, and AR display method - Google Patents

Position and posture detection device, AR display device, position and posture detection method, and AR display method

Info

Publication number
WO2018134897A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
unit
orientation
content
map information
Application number
PCT/JP2017/001426
Other languages
English (en)
Japanese (ja)
Inventor
誠治 村田
川村 友人
孝弘 松田
俊輝 中村
Original Assignee
マクセル株式会社 (Maxell, Ltd.)
Application filed by マクセル株式会社 (Maxell, Ltd.)
Priority to PCT/JP2017/001426
Publication of WO2018134897A1

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 — Measuring arrangements characterised by the use of optical techniques
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 — Manipulating 3D models or images for computer graphics

Definitions

  • The present invention relates to a position / orientation detection technique and an AR (Augmented Reality) display technique, and in particular to such techniques for an apparatus including an imaging unit.
  • Patent Document 1 (Japanese Patent Laid-Open No. 2004-133867) discloses a navigation device in which, "when a user designates a three-dimensional object with a cursor on a three-dimensional map image, the image portion corresponding to the actual building of that three-dimensional object is extracted from the image captured by the camera at that time and registered as texture image data of the three-dimensional object; thereafter, the rendering processing unit texture-maps the registered texture image data as the surface texture" and draws it on the three-dimensional map image (summary excerpt).
  • Patent Document 2 (Japanese Patent Laid-Open No. 2004-151867) discloses a configuration comprising "an image feature storage unit that stores image features Fa of a recognition target, an image feature detection unit that detects image features Fb from a preview image, a posture estimation unit that estimates the initial posture of the recognition target based on the matching result of the image features Fa and Fb, a tracking point selection unit that selects tracking points Fe from the image features Fa based on the initial posture, a template generation unit that generates a template image of the recognition target based on the estimation result of the initial posture, a matching unit that matches the template image against the preview image at the tracking points Fe, and a posture tracking unit that tracks the posture of the recognition target in the preview image based on the tracking points Fe that have been successfully matched".
  • JP 2009-276266 A; Japanese Unexamined Patent Publication No. 2016-066187.
  • AR display is a technique for superimposing information, such as images and data related to the real scene (actual scene) viewed by the user, onto that scene. To display related information superimposed on an object in the actual scene, it is necessary to specify the position and the orientation (line-of-sight direction) on the user side with high accuracy.
  • In Patent Document 1, the user must designate the object on which related information is to be superimposed in the actual scene, which takes time and effort. In addition, a 3D map is displayed by pasting images of the scene around the vehicle onto a 3D model as texture images, so a three-dimensional map and a three-dimensional model corresponding to the scene around the vehicle are essential. Because image processing is performed using a 3D map and a 3D model corresponding to the real space, the amount of information to be processed increases.
  • Patent Document 2 tracks the posture change of the recognition target using an image. Although the posture change of the recognition target, that is, the object, can be detected, the position on the user side cannot be detected.
  • The present invention has been made in view of the above circumstances, and its object is to provide a technique for detecting the position and orientation on the user side with high accuracy, from a small amount of information, with a simple configuration. Problems, configurations, and effects other than those described above will be clarified by the following description of the embodiments.
  • To solve the above problems, the present invention provides a position and orientation detection apparatus comprising: a photographing unit that photographs a predetermined photographing range including two or more objects; an object detection unit that identifies the pixel position of each object in the image photographed by the photographing unit; a direction calculation unit that calculates, for each object, an object direction, that is, the direction of the object with respect to the photographing unit, using the pixel position, the map information of the object, and the focal length of the photographing unit; and a position / orientation calculation unit that calculates the position and orientation of the photographing unit using the object direction of each object and the map information, wherein the object detection unit extracts, from two-dimensional map information storing the positions and shapes of a plurality of objects in a predetermined area, the two-dimensional map information corresponding to the photographing range, and specifies the pixel positions using the extracted two-dimensional map information.
  • The present invention also provides an AR display device that displays content on a display having transparency and reflectivity in association with an object in the scene behind the display, the AR display device comprising: the above position and orientation detection device; a display content generation unit that generates the content to be displayed on the display; a superimposing unit that determines the display position of the generated content on the display using the position and orientation of the photographing unit determined by the position / orientation detection device and the pixel position of the object specified by the object detection unit; and a display unit that displays the generated content at the display position determined by the superimposing unit.
  • According to the present invention, the position and orientation on the user side can be detected with high accuracy from a small amount of information with a simple configuration.
  • FIG. 1 is a functional block diagram of the position and orientation detection apparatus of the first embodiment.
  • (a) is a block diagram of the imaging unit of the first embodiment, and (b) is a hardware block diagram of the position and orientation detection apparatus of the first embodiment.
  • (a) is a block diagram of the map server system of the first embodiment, and (b) is a block diagram of the content server system of the second embodiment.
  • (a) to (f) are explanatory diagrams for explaining the position and orientation detection method of the first embodiment. Another figure is a flowchart of the position and orientation detection processing of the first embodiment.
  • (a) and (b) are explanatory diagrams for explaining pattern matching.
  • (a) to (f) are explanatory diagrams for explaining how the representative point of the first embodiment is determined.
  • (a) and (b) are explanatory diagrams for explaining a modification of the position and orientation calculation method of the first embodiment.
  • (a) to (c) are explanatory diagrams for explaining a modification of the position and orientation calculation method of the first embodiment.
  • A functional block diagram of the AR display device of the second embodiment.
  • (a) and (b) are explanatory diagrams for explaining the display unit of the second embodiment and the display position of related content. Another figure is a flowchart of the AR display processing of the second embodiment.
  • (a) and (b) are explanatory diagrams for explaining a display example of the second embodiment.
  • (a) and (b) are explanatory diagrams for explaining a display example of the second embodiment.
  • (a) and (b) are explanatory diagrams for explaining a display example of the second embodiment.
  • (a) and (b) are explanatory diagrams for explaining a display example of the second embodiment.
  • (a) and (b) are explanatory diagrams for explaining a display example of the second embodiment.
  • The first embodiment is a position / orientation detection apparatus including an imaging unit. The position / orientation detection apparatus of the present embodiment detects the position and orientation of itself, including the imaging unit, using the image acquired by the imaging unit.
  • FIG. 1 is a functional block diagram of a position / orientation detection apparatus 100 according to the present embodiment.
  • As shown in this figure, the position / orientation detection apparatus 100 of the present embodiment includes a control unit 101, a photographing unit 102, a coarse detection sensor 103, an evaluation object extraction unit 104, an object detection unit 105, a direction calculation unit 106, a position / orientation calculation unit 107, a map management unit 108, and a gateway 110.
  • The position / orientation detection apparatus 100 of the present embodiment further includes holding units that temporarily hold information: a captured image holding unit 121, a positioning information holding unit 122, a two-dimensional map information holding unit 123, and a map information holding unit 124.
  • the control unit 101 monitors the operation status of each part of the position / orientation detection apparatus 100 and controls the entire position / orientation detection apparatus 100.
  • The control unit 101 may be composed of dedicated circuitry or the like, or its functions may be realized by a CPU executing a program stored in advance.
  • the photographing unit 102 photographs a scene in the real space (actual scene) and acquires a photographed image.
  • the captured image is stored in the captured image holding unit 121.
  • Here, a shooting range including at least two objects is photographed.
  • the photographing unit 102 includes a lens 131 that is an image forming optical system, and an image sensor 132 that converts the formed image into an electric signal.
  • The image sensor 132 is, for example, a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor or a CCD (Charge-Coupled Device) image sensor.
  • Reference numeral 133 denotes an optical axis of the lens.
  • the photographed image holding unit 121 may hold a plurality of photographed images as necessary. This is to extract temporal changes and to select a captured image with a good shooting state.
  • data obtained by performing various types of image processing on the acquired data may be stored as a captured image instead of the data itself acquired by the imaging unit 102.
  • the image processing performed here is, for example, removal of lens distortion, adjustment of color and brightness, and the like.
  • the coarse detection sensor 103 detects the position and orientation in the real space of the position / orientation detection apparatus 100 including the imaging unit 102 with coarse accuracy, and stores the detected position and orientation in the positioning information holding unit 122 as positioning information.
  • For position detection, GPS (Global Positioning System) is used; that is, the coarse detection sensor 103 includes a GPS receiver. For posture detection, an electronic compass is used. The electronic compass is composed of a combination of two or more magnetic sensors. If the magnetic sensors support three axes, direction detection in three-dimensional space is possible; if the measurement plane is limited to the horizontal plane, a biaxial magnetic sensor may be used, reducing the cost of the electronic compass.
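  • As an illustration only (not part of the original disclosure), the following is a minimal sketch of deriving a coarse heading from a calibrated two-axis magnetic sensor; the axis and sign conventions are assumptions.

```python
import math

def compass_heading_deg(mx: float, my: float) -> float:
    """Coarse heading from a calibrated 2-axis magnetometer.

    Assumes mx lies along the device's forward axis and my along its left
    axis, both in the horizontal plane, with hard-iron offsets already
    removed. Returns degrees clockwise from magnetic north.
    """
    heading = math.degrees(math.atan2(-my, mx))  # sign convention is an assumption
    return heading % 360.0

# Example: a reading of (mx, my) = (0.2, -0.2) yields a heading of 45 degrees.
print(compass_heading_deg(0.2, -0.2))
```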
  • the positioning information held by the positioning information holding unit 122 is not limited to the positioning information obtained by the coarse detection sensor 103.
  • it may be positioning information obtained by a position / orientation calculation unit 107, which will be described later, or both.
  • the positioning information is used for extraction of two-dimensional map information and map information described later, calculation of position and orientation, and the like.
  • The map management unit 108 acquires two-dimensional map information and map information for a range necessary and sufficient for the processing of each unit of the position / orientation detection apparatus 100, including the visual field range of the photographing unit 102, and stores them in the two-dimensional map information holding unit 123 and the map information holding unit 124, respectively.
  • The acquisition is performed via the gateway 110, for example over a network, from a server or storage device that holds such information.
  • the acquisition range is determined based on the positioning information.
  • the 2D map information held in the 2D map information holding unit 123 is object information of each object included in a predetermined area.
  • the object information includes the position (position in the map), shape (appearance), feature point, and the like of each object in the area.
  • 2D map information is created from images taken with a camera, for example.
  • When the images are captured, position information of the shooting range (the map absolute position) is simultaneously acquired as information for specifying the predetermined area.
  • the photographed image is analyzed, and the object and its feature point are extracted.
  • the pixel position in the image of the extracted object is specified and set as the map position.
  • the appearance shape of the object is acquired by using, for example, Google Street View.
  • The original shooting range constitutes one two-dimensional map, and for each two-dimensional map, the map absolute position of the shooting range is associated with the position, shape, and feature points of each object in the area to form the two-dimensional map information. Since each piece of two-dimensional map information has a map absolute position, the image can be deformed in accordance with the optical characteristics of the photographing unit 102.
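  • As an illustration, the data layout described above might be modeled as follows; the class and field names are hypothetical and are not the patent's own schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class MapObject:
    """One registered object within a two-dimensional map."""
    map_position: Tuple[int, int]            # pixel position of the object in the map image
    shape: object                            # appearance, e.g., an image patch of the object
    feature_points: List[Tuple[int, int]]    # feature-point pixel coordinates
    attributes: Dict[str, str] = field(default_factory=dict)  # e.g., {"type": "building"}

@dataclass
class TwoDimensionalMap:
    """Two-dimensional map information for one shooting range."""
    map_absolute_position: Tuple[float, float]          # e.g., latitude/longitude of the range
    registered_objects: List[MapObject] = field(default_factory=list)
```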
  • an object registered in the two-dimensional map information is referred to as a registered object.
  • the two-dimensional map information may further include attribute information for each registered object.
  • the attribute information includes, for example, the type of registered object.
  • the type of registered object is, for example, a building, a road, a signboard, or the like.
  • The two-dimensional map information may also be acquired by analyzing an image in which distortion has been removed based on information about the camera and in which the color and brightness of the photographed image have been adjusted.
  • the coordinates or addresses of each object in the real space are registered as map information.
  • As the map information, for example, Google Maps (Google Inc.) can be used. As the coordinates, for example, latitude and longitude are used. Height information may also be included; in that case, three-dimensional position measurement is possible.
  • a local coordinate system based on any actual location may be used.
  • a data structure in which the position measurement accuracy is increased by specializing in a limited area may be used.
  • the evaluation object extraction unit 104 extracts object candidates to be processed (hereinafter referred to as evaluation object candidates) from each registered object registered in the two-dimensional map information. For example, all registered objects in the two-dimensional map information may be set as evaluation object candidates.
  • When object attribute information is stored, registered objects whose attribute information matches a predetermined condition may be extracted as evaluation object candidates; for example, only building objects are extracted. For the extraction, the two-dimensional map information held in the two-dimensional map information holding unit 123 is used; note that the minimum necessary group of two-dimensional maps is stored in the two-dimensional map information holding unit 123.
  • the evaluation object extraction unit 104 may further narrow down 2D map information for extracting evaluation object candidates using the positioning information. For example, using the positioning information, the visual field range of the photographing unit 102 in the real space is calculated. Then, only the two-dimensional map information that matches the visual field range is scanned to extract evaluation object candidates.
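  • A minimal sketch of this narrowing step is shown below, assuming a pinhole camera model, a coarse position and heading taken from the positioning information, and hypothetical two-dimensional map records that carry their map absolute position as latitude/longitude.

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal field of view of the photographing unit (pinhole approximation)."""
    return 2.0 * math.degrees(math.atan((sensor_width_mm / 2.0) / focal_length_mm))

def maps_in_view(map_tiles, cam_lat, cam_lon, heading_deg, fov_deg, max_range_m=500.0):
    """Keep only the 2D map records whose absolute position lies in the view sector."""
    selected = []
    for tile in map_tiles:  # each record: {"lat": ..., "lon": ..., ...}
        # Local east/north offsets in metres (small-area flat-earth approximation).
        dn = (tile["lat"] - cam_lat) * 111_320.0
        de = (tile["lon"] - cam_lon) * 111_320.0 * math.cos(math.radians(cam_lat))
        dist = math.hypot(de, dn)
        bearing = math.degrees(math.atan2(de, dn)) % 360.0
        offset = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # signed angle from heading
        if dist <= max_range_m and abs(offset) <= fov_deg / 2.0:
            selected.append(tile)
    return selected
```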
  • the object detection unit 105 identifies the position (pixel position) of each extracted evaluation object candidate in the captured image.
  • An evaluation object candidate whose position is specified in the captured image is set as an evaluation object.
  • the positions of at least two evaluation objects are specified.
  • the position in the captured image is specified by pattern matching.
  • a template image used for pattern matching is created using shape information of two-dimensional map information.
  • the object detection unit 105 calculates the distance in the horizontal direction (horizontal distance) from the photographed image origin of each evaluation object using the specified pixel position.
  • the direction calculation unit 106 calculates the direction (object direction) of each evaluation object.
  • As the object direction, for example, the angle from the direction of the optical axis 133 of the lens 131 of the photographing unit 102 is obtained.
  • the angle is calculated using the pixel position calculated by the object detection unit 105, the horizontal distance, the map information of the evaluation object, and the focal length of the lens 131 of the photographing unit 102.
  • the direction calculation unit 106 may correct the angle error due to distortion of the lens 131 and calculate the direction.
  • the angle error due to distortion is calculated from the relationship between the angle of view of the lens 131 and the image height.
  • the relationship between the angle of view and the image height necessary for the calculation is acquired in advance.
  • the position / orientation calculation unit 107 calculates the position and orientation of the photographing unit 102 using the object direction of each evaluation object calculated by the direction calculation unit 106 and the map information.
  • the position to be calculated is a coordinate in the same coordinate system as the map information.
  • the calculated posture is the direction of the optical axis 133.
  • The posture can be represented by a direction angle, an elevation angle, a pitch angle, a yaw angle, and a roll angle.
  • the gateway 110 is a communication interface.
  • The position / orientation detection apparatus 100 transmits and receives data, for example to and from a server connected to a network, via the gateway 110. For example, two-dimensional map information and map information are downloaded from a server connected to the Internet, and, as will be described later, generated two-dimensional map information is uploaded to the server.
  • FIG. 3A shows an example of the configuration of the server (map server) system 620 from which the two-dimensional map information is acquired.
  • The map server system 620 includes a map server 621 that controls operations, a map information storage unit 622 that stores map information, a two-dimensional map information storage unit 623 that stores two-dimensional map information, and a communication I/F 625.
  • the map server 621 receives a request from the map management unit 108, and transmits map information and two-dimensional map information in the requested range from each storage unit to the request source.
  • each object 624 included in the two-dimensional map information may be held independently.
  • each object 624 is held in association with the two-dimensional map information including the object 624.
  • The map server 621 may further have functions for extracting feature points of the objects 624, and for analyzing the two-dimensional map information 210 transmitted from the position / orientation detection apparatus 100, extracting objects from it, and registering it as necessary.
  • The map server system 620 that manages the map information and the two-dimensional map information is not limited to a single system; such information may be divided and managed among a plurality of server systems on the network.
  • the position / orientation detection apparatus 100 of the present embodiment includes a CPU 141, a memory 142, a storage device 143, an input / output interface (I / F) 144, and a communication I / F 145.
  • Each function is realized by the CPU 141 loading a program stored in advance in the storage device 143 into the memory 142 and executing it. All or some of the functions may instead be realized by hardware such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field-Programmable Gate Array).
  • various data used for the processing of each function and various data generated during the processing are stored in the memory 142 or the storage device 143.
  • the captured image holding unit 121, the positioning information holding unit 122, the two-dimensional map information holding unit 123, and the map information holding unit 124 are constructed in, for example, the memory 142 provided in the position and orientation detection device 100.
  • Each holding unit may be realized by a separate memory 142, or some or all of them may be integrated into a single memory 142; they may also be divided according to the required capacity and speed. Note that the two-dimensional map information is used particularly in the processing for extracting objects, so it is desirable to construct the two-dimensional map information holding unit 123 that holds this information in a memory area that can be accessed at relatively high speed.
  • FIG. 4 (a) to 4 (f) are diagrams for explaining the position detection processing of the present embodiment, and FIG. 5 is a processing flow.
  • initial processing, shooting processing, object detection processing, direction calculation processing, and position / posture calculation processing are performed in this order.
  • In the initial processing, positioning information is acquired by the coarse detection sensor 103 as the approximate position and orientation of the photographing unit 102, and the two-dimensional map information and map information used for the processing are acquired.
  • the rough detection sensor 103 acquires the rough position and the rough posture of the position / orientation detection apparatus 100 as positioning information (step S1101).
  • the acquired positioning information is stored in the positioning information holding unit 122.
  • the map management unit 108 acquires the two-dimensional map information 210 and the map information 230 necessary for processing, and registers them in the two-dimensional map information holding unit 123 and the map information holding unit 124, respectively (step S1102).
  • Specifically, the map management unit 108 calculates the coordinates of the visual field range of the photographing unit 102 using the positioning information, then acquires the two-dimensional map information whose map absolute position corresponds to the calculated coordinates, and also acquires the map information corresponding to the calculated coordinates.
  • the photographing unit 102 photographs an actual scene including two or more objects, and obtains a photographed image 220 (step S1103).
  • object detection processing is performed.
  • the object is detected by specifying the pixel position of the evaluation object in the captured image 220.
  • the evaluation object extraction unit 104 accesses the two-dimensional map information holding unit 123 and acquires the two-dimensional map information 210. Then, as shown in FIG. 4A, registered objects in the acquired two-dimensional map information 210 are extracted (step S1104) and set as evaluation object candidates 211, 212, and 213.
  • FIG. 4A illustrates a case where three evaluation object candidates 211, 212, and 213 are extracted.
  • the object detection unit 105 generates a template image for each extracted evaluation object candidate 211, 212, 213 (step S1106). Then, the captured image 220 is scanned with the generated template image, and an evaluation object is specified in the captured image 220 (step S1107). The object detection unit 105 identifies the evaluation object in the captured image 220 for each of the extracted evaluation object candidates 211, 212, and 213, and repeats until at least two evaluation objects are detected (step S1105).
  • FIG. 4B shows an example in which two evaluation objects 221 and 222 are detected in the captured image 220.
  • The evaluation object 221 is detected as corresponding to the evaluation object candidate 211, and the evaluation object 222 as corresponding to the evaluation object candidate 212; no evaluation object corresponding to the evaluation object candidate 213 is detected.
  • the object detection unit 105 calculates the horizontal distances PdA and PdB of the evaluation objects 221 and 222 from the origin of the captured image 220, respectively (step S1108).
  • the horizontal distances PdA and PdB are the number of pixels on the image sensor 132.
  • the origin of the captured image 220 indicated by a black dot in FIG. 4B is the point of the image sensor 132 that coincides with the direction in which the image capturing unit 102 faces, that is, the center of the optical axis 133 of the lens 131.
  • the horizontal distance is a horizontal distance between the representative point in each evaluation object 221 and 222 and the origin of the captured image.
  • the representative point is, for example, a rectangular center point that constitutes the evaluation objects 221 and 222.
  • the midpoint of the shape associated with each of the evaluation objects 221 and 222 may be used as the representative point. Details of how to determine the representative points will be described later.
  • the object detection unit 105 refers to the map information 230 of FIG. 4C and acquires the map information of the evaluation objects 221 and 222 (step S1109).
  • Specifically, the object detection unit 105 uses the map absolute position of the two-dimensional map information 210 from which the evaluation objects 221 and 222 originate and the map positions of the evaluation object candidates 211 and 212 to associate them with the objects (real objects) 231 and 232 in the map information 230, and acquires the map information ((XA, YA), (XB, YB)) of the associated real objects 231 and 232. In FIG. 4C, a two-dimensional map is illustrated for convenience, but a three-dimensional map may be used.
  • Next, the direction calculation unit 106 performs the direction calculation process; that is, it calculates the object direction of each of the evaluation objects 221 and 222 (step S1110). As shown in FIGS. 4D and 4E, the direction calculation unit 106 calculates, as the object directions, the angles θA and θB with respect to the optical axis direction of the lens 131 of the photographing unit 102.
  • FIG. 4D is a diagram for explaining a method of estimating the existence direction of the photographing unit 102 using the horizontal distance PdA in the photographed image 220 of the evaluation object 221 corresponding to the real object 231.
  • FIG. 4E is a diagram for explaining a method of estimating the object direction using the horizontal distance PdB of the evaluation object 222 corresponding to the real object 232.
  • As shown in FIG. 4D, the image of the real object 231 is formed on the image sensor 132 through the lens 131. Therefore, the angle θA formed by the real object 231 and the optical axis 133 of the lens 131 of the photographing unit 102 is calculated geometrically and optically from the position PdA on the image sensor and the focal length f of the lens 131 by the following expression (1): θA = arctan(PdA / f). Similarly, the angle θB formed by the real object 232 and the optical axis 133 of the lens 131 of the photographing unit 102 shown in FIG. 4E is calculated by the following expression (2): θB = arctan(PdB / f).
  • If the lens 131 has distortion, errors are added to the calculated angles θA and θB. For this reason, it is desirable to use a lens 131 with little distortion, or to acquire in advance the relationship between the angle of view of the lens 131 and the image height and correct the angle error due to the distortion.
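  • As an illustrative numeric sketch of expressions (1) and (2) above, the following assumes a pinhole model and converts the horizontal distance from pixels to millimetres with the sensor's pixel pitch (the pitch value is a hypothetical parameter).

```python
import math

def object_direction_deg(pd_pixels: float, pixel_pitch_mm: float, focal_length_mm: float) -> float:
    """Angle between the lens optical axis and the ray toward the object.

    pd_pixels is the signed horizontal distance of the object's representative
    point from the captured-image origin (the sensor point crossed by the
    optical axis); the sign distinguishes the two sides of the axis.
    """
    pd_mm = pd_pixels * pixel_pitch_mm
    return math.degrees(math.atan2(pd_mm, focal_length_mm))

# Example: 600 px off-centre, 2 um pixel pitch, 4 mm focal length -> about 16.7 degrees.
print(object_direction_deg(600, 0.002, 4.0))
```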
  • the position / orientation calculation unit 107 performs position / orientation calculation processing.
  • In this process, the map information A (XA, YA) and B (XB, YB) of the matched evaluation objects 221 and 222 and the object directions θA and θB are referred to, the existence range of the photographing unit 102 in the real space is obtained for each, and the position and orientation of the photographing unit 102 are specified from the plurality of existence ranges (step S1111).
  • the position to be calculated is map information of the photographing unit 102.
  • the calculated posture is the direction of the optical axis 133 of the lens 131.
  • The optical axis 133 of the lens 131 forms the angle θA with the real object 231 and the angle θB with the real object 232. The locus of positions satisfying these two conditions is the locus of points at which the chord subtends a constant inscribed (circumferential) angle, that is, a circle 241 passing through the real object 231 and the real object 232. The radius R of the circle 241 is calculated by the following expression (3) using the positions A (XA, YA) and B (XB, YB) of the real object 231 and the real object 232 in the real space: R = |AB| / (2 sin ∠APB), where |AB| is the distance between A and B and ∠APB is the angle subtended at the photographing unit 102 by the two real objects, determined from θA and θB. The locus on which the photographing unit 102 exists can then be narrowed down to the arc AB indicated by the solid line of the circle 241. On the remaining arc, the left-right relationship between θA and θB is reversed, so it cannot be the locus on which the photographing unit 102 exists.
  • the position / orientation calculation unit 107 specifies the direction of the optical axis 133 of the lens 131 of the photographing unit 102 using the direction detected by the rough detection sensor 103.
  • the position of the imaging unit 102 existing on the arc AB is uniquely determined.
  • the detected position of the imaging unit 102 is the principal point of the lens 131 provided in the imaging unit 102.
  • the detection position is not limited to the principal point. Any position in the photographing unit 102 may be used.
  • By the above processing, the position and orientation of the photographing unit 102 can be detected.
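  • The geometry above can be sketched as follows, under these assumptions: a local X-east / Y-north metric coordinate system, a coarse optical-axis heading taken from the coarse detection sensor 103, and signed object directions θA and θB (positive to the right of the axis). With a known heading, the camera lies at the intersection of the back-bearings from the two known object positions; the circle 241 of constant inscribed angle corresponds to R = |AB| / (2 sin ∠APB).

```python
import math

def camera_position(obj_a, obj_b, theta_a_deg, theta_b_deg, heading_deg):
    """Intersect the back-bearings from two map objects to locate the camera.

    obj_a, obj_b: (X, Y) map positions of the real objects.
    theta_*_deg:  signed object directions from the optical axis (right = +).
    heading_deg:  coarse optical-axis heading, degrees clockwise from north (+Y).
    Returns the (X, Y) position of the photographing unit.
    """
    def unit(bearing_deg):
        b = math.radians(bearing_deg)
        return (math.sin(b), math.cos(b))  # X = east, Y = north

    ua = unit(heading_deg + theta_a_deg)   # direction camera -> object A
    ub = unit(heading_deg + theta_b_deg)   # direction camera -> object B
    # Solve P + t*ua = A and P + s*ub = B, i.e. t*ua - s*ub = A - B, for t.
    ax, ay = obj_a
    bx, by = obj_b
    det = ua[0] * (-ub[1]) - (-ub[0]) * ua[1]
    if abs(det) < 1e-9:
        raise ValueError("sight lines are parallel; no unique intersection")
    rx, ry = ax - bx, ay - by
    t = (rx * (-ub[1]) - (-ub[0]) * ry) / det
    return (ax - t * ua[0], ay - t * ua[1])

# Camera at the origin looking north, object A 100 m ahead-left, B 100 m ahead-right:
print(camera_position((-50.0, 100.0), (50.0, 100.0), -26.565, 26.565, 0.0))
# -> approximately (0.0, 0.0)
```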
  • the position / orientation detection apparatus 100 repeats the above processing at predetermined time intervals, and always detects the latest position and orientation of the imaging unit 102.
  • The processing order is not limited to the above; for example, the captured image acquisition process in step S1103 may be performed first.
  • the position / orientation detection apparatus 100 starts the above process when the photographing unit 102 acquires a photographed image.
  • The evaluation object candidates 211 and 212 may each be registered in different pieces of two-dimensional map information 210.
  • FIG. 6A shows template images 310, and FIG. 6B shows a captured image 220.
  • the pattern matching process is a process for determining whether or not there is the same image as the template image in a certain image. If there is the same image, the pixel position of the image can be specified.
  • the object detection unit 105 generates a plurality of template images 311 to 316 having different inclinations and sizes as shown in FIG.
  • Here, a case where six types of template images 311 to 316 are generated is illustrated; when there is no need to distinguish them, they are referred to representatively as the template image 311.
  • the object detection unit 105 scans the captured image 220 in the direction indicated by the arrow in FIG. 6B for each generated template image 311 and evaluates the degree of similarity.
  • As a method for comparing the similarity of images, there is, for example, a method of taking the difference and evaluating its histogram.
  • 0 is obtained as a difference if they completely match, and a value far from 0 is obtained as the degree of matching decreases.
  • the evaluation result of the similarity between the template image 311 and the applied area in the captured image 220 is recorded. This is repeated for each of the template images 311 to 316. The entire area of the captured image 220 is evaluated with respect to all prepared template images 311, and an area with the smallest difference is determined as a matching area.
  • a region 225 in the captured image 220 is a region where the similarity evaluation with the template image 311 is maximized. Therefore, the object detection unit 105 determines that an evaluation object exists in the captured image 220, and specifies the region 225 as the region (pixel position) of the evaluation object.
  • the object detection unit 105 can simultaneously acquire the pixel position of the evaluation object from the evaluation result. Using this, the horizontal distance PdA can be determined.
  • When the orientation of the evaluation object is known in advance and the template image to be used can be specified, only that one template image needs to be extracted. Further, when the approximate area in the captured image 220 where the evaluation object exists is known, the evaluation may be performed only for that area.
  • In this way, the matching region, that is, the pixel position of the evaluation object in the captured image, can be easily specified.
  • However, since the shooting conditions of the two images, such as lighting and shadows, differ, they cannot be exactly the same. For this reason, countermeasures for image information such as brightness and color, and countermeasures for the distortion of the object that depends on the shooting direction, may be applied to the captured image.
  • The distortion of the object due to the shooting direction refers to, for example, the deformation by which a plane that appears rectangular when the observer faces it directly in three-dimensional space appears almost trapezoidal when viewed from an oblique direction.
  • As a countermeasure for brightness and color, for example, the two-dimensional map information 210 and the captured image 220 are converted to grayscale, and their brightness is equalized using their respective histograms.
  • As a countermeasure for the distortion of the object, for example, as shown in FIG. 6A, the object is virtually deformed three-dimensionally when the template image 311 is generated from it. In other words, in order to detect the object even when it is deformed as described above, a shape conversion that turns a rectangle into a trapezoid is applied to the object to obtain the template images 311 to 316.
  • a template image is generated by performing shape conversion corresponding to each of the assumed shooting directions, and template matching is performed. Thereby, it can respond to arbitrary imaging directions.
  • a template image is generated by applying deformation such as rotation and enlargement / reduction to the object.
  • the pattern matching process described above is a general technique and has an advantage that it can be processed at high speed.
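  • As one possible realization of this matching step (an assumption, not the patent's stated implementation), OpenCV's squared-difference template matching can be used; the grayscale normalization and the perspective/scale template variants described above are assumed to have been prepared separately.

```python
import cv2
import numpy as np

def find_evaluation_object(captured_gray: np.ndarray, templates: list):
    """Scan the captured image with each template and return the best match.

    templates: grayscale template images of different sizes and inclinations.
    Returns (top_left_xy, template_index, score); a smaller score means a
    better match under the squared-difference criterion.
    """
    best = None
    for idx, tmpl in enumerate(templates):
        result = cv2.matchTemplate(captured_gray, tmpl, cv2.TM_SQDIFF_NORMED)
        min_val, _, min_loc, _ = cv2.minMaxLoc(result)
        if best is None or min_val < best[2]:
            best = (min_loc, idx, min_val)
    return best

# A matched template of size (h, w) at top-left (x, y) gives the object's pixel
# region; its centre (x + w/2, y + h/2) can then serve as the representative point.
```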
  • The representative point is determined within the template image used when the evaluation objects 221 and 222 were specified.
  • For example, the center of the outer shape is set as the representative point 331.
  • Alternatively, as shown in FIG. 7D, a rectangle (indicated by a broken line) circumscribing the outermost shape may be defined, and the center of that rectangle may be set as the representative point 331; a rectangle may also be defined around only a part of the object, and the center of that rectangle used as the representative point 331.
  • Among these, the method of defining a rectangle from the outermost shape and using its center as the representative point 331 is simple and most desirable.
  • The outermost shape need not necessarily be used; for example, any corner of the object may be used as the representative point.
  • the representative point 331 is used when calculating the horizontal distance from the origin of the evaluation object in the position and orientation detection process. For this reason, it is desirable that the representative point 331 is configured so as to be accurately associated with the map information.
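  • A minimal sketch of the bounding-rectangle approach, assuming the matched region is available as a binary mask of the object's outermost shape (the mask itself is a hypothetical input):

```python
import numpy as np

def representative_point(object_mask: np.ndarray):
    """Centre of the rectangle circumscribing the object's outermost shape.

    object_mask: 2D boolean/0-1 array marking the object's pixels inside the
    matched region. Returns (x, y) in the mask's pixel coordinates.
    """
    ys, xs = np.nonzero(object_mask)
    if xs.size == 0:
        raise ValueError("empty mask")
    return ((xs.min() + xs.max()) / 2.0, (ys.min() + ys.max()) / 2.0)
```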
  • As described above, the position / orientation detection apparatus 100 of the present embodiment includes the photographing unit 102 that photographs a predetermined photographing range including two or more objects; the object detection unit 105 that identifies the pixel position of each object in the image photographed by the photographing unit 102; the direction calculation unit 106 that calculates, for each object, the object direction, that is, the direction of the object with respect to the photographing unit 102, using the pixel position, the map information of the object, and the focal length of the photographing unit; and the position / orientation calculation unit 107 that calculates the position and orientation of the photographing unit using the object direction of each object and the map information. The object detection unit 105 extracts, from two-dimensional map information in which the positions and shapes of a plurality of objects in a predetermined area are stored, the two-dimensional map information corresponding to the photographing range, and specifies the pixel positions using that two-dimensional map information.
  • the position and orientation of the photographing unit 102 are calculated using an image actually taken by the photographing unit 102. At this time, two objects are extracted from the captured image and calculated using them. For this reason, image processing using a three-dimensional map or a three-dimensional model corresponding to the real space is unnecessary. Moreover, it does not depend on the accuracy of an external GPS or the like. Therefore, according to the present embodiment, the position and orientation of the photographing unit 102 can be detected with high accuracy with a simple configuration and a small amount of information processing. Therefore, the position and orientation of various devices whose relative position and relative direction with respect to the imaging unit 102 are known can be estimated with high accuracy.
  • the position and orientation of the photographing unit 102 included in the computer can be obtained by a pattern matching process that the computer is good at and simple geometric calculation. That is, the position / orientation detection apparatus 100 that detects its own position and orientation can be realized with a small amount of information.
  • the position and orientation detection by the above method is repeated at predetermined time intervals. Accordingly, the position and orientation of the position / orientation detection apparatus 100 including the imaging unit 102 can be identified with high accuracy and constantly with a small amount of information.
  • the position / orientation calculation unit 107 calculates the position and orientation of the photographing unit 102 using the two evaluation objects 221 and 222.
  • the calculation of the position and orientation by the position and orientation calculation unit 107 is not limited to this method. For example, three evaluation objects may be used.
  • In this case, the map information ((XA, YA), (XB, YB), (XC, YC)) of the real objects 231, 232, and 233 corresponding to the three evaluation objects, and their angles (θA, θB, θC) with respect to the optical axis 133 of the lens 131, are acquired.
  • From the information on the real object 231 and the real object 232, the circle 241 on which the photographing unit 102 exists is determined.
  • a circle 242 where the photographing unit 102 exists is determined from information on the real object 232 and the real object 233.
  • The intersection of the two circles 241 and 242 is the position of the photographing unit 102, and the direction that satisfies the angles with respect to the optical axis 133 of the lens 131 is the direction, that is, the posture, of the photographing unit 102.
  • the position and orientation of the photographing unit 102 can be obtained only from the detection result of the evaluation object.
  • Note that it is desirable for the circle 241 and the circle 242 to be separated as much as possible. Therefore, in the example of FIG. 8A, it is desirable to determine the position of the photographing unit 102 using the circle 241 and the circle 242, rather than the circle 241 and the circle passing through the real object 231 and the real object 233.
  • The number of evaluation objects used when specifying the position and orientation of the photographing unit 102 is not limited to three; 2 + N objects (where N is an integer of 1 or more), that is, three or more, may be used. As long as three or more objects can be detected, the position and orientation of the photographing unit 102 can be specified, and using more objects further improves the position and orientation detection accuracy.
  • the position / orientation calculation unit 107 may calculate the position and orientation of the photographing unit 102 using two evaluation objects and information about the traffic infrastructure around the photographing unit 102, for example. This method will be described with reference to FIG. Here, the case where the information on the road 234 is used as the information on the traffic infrastructure will be described as an example. In addition, the photographing unit 102 is assumed to be mounted on a moving body traveling on the road 234.
  • the shape of the traffic infrastructure such as the road 234 is defined by the standard.
  • Therefore, the road 234 on which the moving body carrying the photographing unit 102 is traveling can be recognized from the captured image, and the traveling direction (the optical axis direction of the photographing unit 102) can be obtained.
  • the obtained information on the optical axis direction is used in place of the result of the rough detection sensor 103 of the above embodiment, and the position and orientation of the photographing unit 102 are determined.
  • route information such as bridges and intersections and position information such as signs and traffic lights may be used.
  • the position and orientation of the photographing unit 102 may be detected using the two-dimensional map information 210 using the road 234 itself as an evaluation object. Thereby, the evaluation object to be used can be reduced.
  • the position and orientation can be specified without using the orientation information of the coarse detection sensor 103. For this reason, accuracy can be improved.
  • FIG. 9A to FIG. 9C are diagrams for explaining this modification. Here, two or more evaluation objects are used.
  • the appearance of the real object 231 corresponding to the evaluation object 221 has a rectangular shape 251 as shown in FIG.
  • the external shape obtained from the captured image 220 is a trapezoid 252 as shown in FIG.
  • the positional relationship between the real object 231 and the photographing unit 102 in the real space can be specified from the deformation amount.
  • When pattern-matching the evaluation object in the captured image, the object detection unit 105 generates template images by applying deformation parameters such as scaling, deformation, rotation, and distortion to the shape 251 viewed from the front.
  • the evaluation object 221 is pattern-matched with the generated template image.
  • a normal line (indicated by an arrow in the figure) of the front shape 261 of the real object 231 corresponding to the evaluation object 221 is obtained using the deformation parameter of the template image used for pattern matching.
  • That is, the direction of the normal of the planar front shape 261 can be obtained from the amount of deformation of the shape in the captured image 220, and, using this, the direction of the photographing unit 102 (the direction of the optical axis 133) can be obtained.
  • the direction of the building surface in the real space may be obtained from the map information 230 or the like.
  • Since the position and orientation detection method of this modification does not directly use the values of the coarse detection sensor 103 such as the GPS or the electronic compass, the position and orientation can be obtained with high accuracy without being affected by the accuracy of the coarse detection sensor 103.
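  • As one way such a deformation could be exploited (an assumption, not the patent's stated implementation), the mapping between the frontal rectangle and its appearance in the captured image can be expressed as a homography and decomposed with OpenCV into candidate rotations and plane normals, given the camera intrinsics.

```python
import cv2
import numpy as np

def front_face_normals(front_corners, image_corners, camera_matrix):
    """Estimate candidate normals of the planar front face from its deformation.

    front_corners: 4x2 corners of the rectangle in a (virtual) frontal view.
    image_corners: 4x2 corners of the same rectangle in the captured image.
    camera_matrix: 3x3 intrinsic matrix of the photographing unit.
    Returns candidate plane normals in the camera frame; the orientation of the
    optical axis relative to the building face follows from the chosen normal.
    """
    src = np.asarray(front_corners, dtype=np.float32)
    dst = np.asarray(image_corners, dtype=np.float32)
    H, _ = cv2.findHomography(src, dst)
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, camera_matrix)
    return normals  # up to four solutions; physically impossible ones must be pruned
```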
  • each position and orientation calculation method described above is preferably selected according to the required accuracy.
  • In the above description, the case where the position and orientation of the photographing unit 102 are calculated using the horizontal distance of each evaluation object has been described. However, the vertical distance may also be measured, and the position in the height direction may also be estimated.
  • In the above description, the map management unit 108 extracts the two-dimensional map information and the map information necessary for the processing using the positioning information acquired by the coarse detection sensor 103 and stores them in the holding units. However, the method is not limited to this; for example, the position and orientation obtained by the position / orientation calculation unit 107 may be used to extract the necessary two-dimensional map information and map information.
  • the map management unit 108 can extract necessary and sufficient two-dimensional map information with higher accuracy, and the processing accuracy is improved.
  • the position and orientation can be calculated even in a place where a signal from the GPS cannot be received.
  • the position / orientation detection apparatus 100 may further include a two-dimensional map generation unit 109.
  • the two-dimensional map generation unit 109 analyzes the captured image 220 acquired by the imaging unit 102 using the position and orientation of the imaging unit 102, and generates two-dimensional map information 210. At this time, the pixel position of the object in the captured image 220 and the appearance of the object are used. That is, the pixel position of each object detected by the object detection unit 105 is set as a map position, and the shape used for pattern matching is set as an object shape.
  • the two-dimensional map information 210 may be used as it is.
  • the two-dimensional map generation unit 109 associates the position of the object in the real space with the appearance of the photographed object, and generates two-dimensional map information.
  • As a result, an unknown object that is not registered in the two-dimensional map, for example a newly constructed building, can be newly registered in the two-dimensional map information by associating its appearance with its position information. Unknown objects include, for example, buildings that are not registered in the two-dimensional map information 210 or the map information 230, and buildings whose appearance has changed due to renovation.
  • When a plurality of pieces of two-dimensional map information 210 including the same object are acquired, learning may be performed using the information stored in the server via the network and the two-dimensional map information 210 stored in the two-dimensional map information holding unit 123 to calculate the feature points of the object, and the result may be used for the pattern matching of the object. With a method that matches by feature points, the object can be detected even when an obstruction exists between the object and the photographing unit 102.
  • the position of the object may be specified using a plurality of captured images 220 having different acquisition times. Using the position of the target object in the captured image 220 and the position and orientation of the imaging unit 102, it is possible to specify an area where an object in real space may exist.
  • From a single captured image, the specified area is a straight line. If the position / orientation detection apparatus 100 moves and similar processing is performed after a predetermined time, another straight line is obtained as the object existence region.
  • the intersection of the two straight lines obtained by the two processes is the position where the object actually exists (position in real space, latitude, longitude, coordinates, etc.).
  • the position of the object candidate in the real space may be calculated from the change in the position and orientation of the photographing unit 102 and the change in the horizontal distance of the object candidate.
  • 2D map information 210 can be updated in real time by additionally registering the obtained object information in the existing 2D map information 210.
  • the appearance can be updated if the object is already registered in the two-dimensional map and the position is known. If the object moves, it can be updated to the latest position information.
  • the position / orientation detection apparatus 100 obtains the position of the object in the real space using information used for detecting the position / orientation. Therefore, there is no need to calculate again to generate the two-dimensional map information. For this reason, it is possible to keep the two-dimensional map information up-to-date while reducing the calculation cost.
  • A 3D map data model, that is, map data that represents actual buildings by their positions and heights, may also be used to generate the two-dimensional map information. With such data, the shape of a building in the real world is known. Images taken from the side may be pasted onto the map data having the three-dimensional shape to obtain the two-dimensional map information; for the images taken from the side, for example, Google Street View can be used. Alternatively, an image taken from the side may be pasted onto the 3D map data and a virtual camera placed in the map data model to perform CG rendering.
  • the present embodiment is an AR display device 500 including the position and orientation detection device 100 of the first embodiment.
  • In the present embodiment, a case where the position / orientation detection apparatus 100 is mounted on an automobile and AR display is performed on the windshield of the automobile will be described as an example.
  • FIG. 10 is a functional block diagram of the AR display device 500 of the present embodiment.
  • As shown in this figure, the AR display device 500 of the present embodiment includes the position / orientation detection device 100, a display content generation unit 501, a display content selection unit 502, a superimposing unit 503, a display unit 504, an instruction receiving unit 506, an extraction unit 507, and a content holding unit 511. Here, the case where the control unit 101 and the gateway 110 are shared with the position / orientation detection apparatus 100 will be described as an example.
  • The content holding unit 511 temporarily holds content, including AR content, and is configured by a memory that can be accessed at high speed.
  • the normal content and the AR content are collectively referred to as content unless it is particularly necessary to distinguish them.
  • the position / orientation detection apparatus 100 basically has the same configuration as the position / orientation detection apparatus 100 of the first embodiment. However, in the position / orientation detection apparatus 100 of the present embodiment, the object detection unit 105 may be configured to detect and hold the pixel positions of all the evaluation objects extracted by the evaluation object extraction unit 104. This information is used when the superimposing unit 503, which will be described later, determines the position where the content is superimposed. In addition, the control unit 101 of the position / orientation detection apparatus 100 controls the operation of each unit of the entire AR display device 500.
  • the gateway 110 functions as a communication interface for the AR display device 500.
  • the instruction receiving unit 506 receives an operation input from the user (driver) 530.
  • selection of content or a condition of content to be displayed is received.
  • the reception is performed via an existing operation device such as an operation button, a touch panel, a keyboard, or a mouse.
  • the extraction unit 507 acquires content that may be used in the processing of the AR display device 500 from the content stored in the server or other storage device.
  • the acquired content is stored in the content holding unit 511. Since the information processed by the AR display device 500 is mainly peripheral information, the processing cost is reduced by acquiring and processing information limited to information that may be processed.
  • The content to be acquired may be determined in consideration of information such as the traveling direction and speed of the vehicle on which the AR display device 500 is mounted. For example, when the speed of the moving body is v and the time required to download the information and store it in the holding unit is T, the information within at least a circle of radius Tv centered on the photographing unit 102 is acquired and stored in the content holding unit 511. Further, when the travel route of the vehicle is specified, information around the route may be extracted and stored in the content holding unit 511.
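  • A minimal sketch of this prefetch rule follows; the content records with latitude/longitude fields and the extra safety margin are assumptions, and the distance uses a small-area flat-earth approximation.

```python
import math

def prefetch_contents(contents, cam_lat, cam_lon, speed_mps, download_time_s, margin=1.2):
    """Select contents within a circle of radius T*v around the photographing unit."""
    radius_m = speed_mps * download_time_s * margin  # margin: assumed safety factor
    picked = []
    for c in contents:  # each record: {"lat": ..., "lon": ..., ...}
        dn = (c["lat"] - cam_lat) * 111_320.0
        de = (c["lon"] - cam_lon) * 111_320.0 * math.cos(math.radians(cam_lat))
        if math.hypot(de, dn) <= radius_m:
            picked.append(c)
    return picked

# e.g. at 60 km/h (about 16.7 m/s) with a 30 s download time, T*v is about 500 m.
```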
  • FIG. 3B is a diagram for explaining the content server system 610.
  • advertisements, contents, and AR contents are held and provided as information.
  • the content server system 610 includes a content server 611, a content storage unit 612, an AR content storage unit 613, an advertisement storage unit 614, and a communication I / F 615.
  • the advertisement is the text, image, video, etc. of the advertisement provided by the advertiser.
  • The content is information corresponding to the service; examples include videos for entertainment, games, and the like.
  • the AR content is content that is assumed to be AR-superposed, and includes meta information such as a display position and a posture in the real space in addition to the content itself.
  • the meta information may be defined in advance or may be dynamically updated according to an instruction from the AR display device 500.
  • the content server 611 outputs information held in each holding unit to the outside via the communication I / F 615 in response to an instruction received via the communication I / F 615. Further, the information received via the communication I / F 615 is held in the corresponding holding unit.
  • the content server 611 may integrate the above-described information and provide the AR display device 500 through a network. As an example of integrating each information, for example, an advertisement is inserted into entertainment content.
  • the AR content may be selected and extracted by a method similar to the method by which the position / orientation detection apparatus 100 extracts the two-dimensional map information and the map information.
  • the content that the user desires to view may be instructed via the instruction receiving unit 506 and extracted according to the instruction.
  • The display content selection unit 502 selects the content to be displayed from the content held in the content holding unit 511 according to the content display conditions. The display conditions may be determined in advance, or may be designated by the user via the instruction receiving unit 506.
  • the instruction receiving unit 506 may be a motion recognition device that recognizes the user's motion.
  • recognition devices such as gesture recognition, which detects the user's movement with a camera, voice recognition, which uses a microphone, and gaze recognition, which detects the line of sight, can be used.
  • a voice recognition device can handle a wide variety of operations because it allows relatively detailed instructions. A gaze recognition device, on the other hand, allows operation that is hardly noticeable to people nearby, making it considerate of the surrounding environment.
  • an operator's voice may be registered in advance so that the user who has input voice can be identified.
  • the content accepted by voice recognition may be limited for each user.
  • the display unit 504 displays the content to be displayed on the windshield according to the instruction of the superimposing unit 503.
  • the display unit 504 of the present embodiment includes a projector 521 and a display (projection area) 522 as shown in FIG.
  • the display 522 is realized by combining optical components having transparency and reflectivity, and is disposed on the windshield.
  • the scene in real space behind the display 522 is transmitted through the display 522.
  • the video (image) generated by the projector 521 is reflected by the display 522.
  • the user 530 views an image in which a scene in real space that has passed through the display 522 and an image reflected by the display 522 are superimposed.
  • when the display 522 covers the entire windshield, it covers the field of view of the user 530 looking forward, and content can be displayed over a wide range of the real scene spreading ahead.
  • such a display unit 504 is generally called a HUD (Head-Up Display).
  • the display content generation unit 501 generates display content to be displayed on the windshield as a display destination from the selected content.
  • the content is generated in a display mode suitable for superimposed display on a scene that is seen by the user's eyes through the windshield. For example, the size, color, brightness, etc. are determined.
  • the display mode is determined in accordance with a user instruction or in accordance with a predetermined rule.
  • the superimposing unit 503 determines the display position on the display 522 of the display content generated by the display content generating unit 501. First, the superimposing unit 503 specifies the display position (arrangement position) of the object on the display 522. Based on the display position of the object, the display position of the related content of the object is determined. The display position of the object is calculated using the position and orientation of the image capturing unit 102 detected by the position / orientation detection apparatus 100 and the pixel position of each object on the captured image 220 captured by the image capturing unit 102.
  • the superimposing unit 503 of the present embodiment holds in advance the geometric relationship between a reference location of the automobile (the in-vehicle reference position) and the position and orientation of the photographing unit 102, and the geometric relationship between the in-vehicle reference position and the average visual field range 531 of the user (driver) 530.
  • using these relationships, the position on the display 522 corresponding to the pixel position of each evaluation object in the captured image is calculated.
  • the display position of the related content may be set at the intersection of the line-of-sight direction in which the user 530 looks at the evaluation object 223 and the display 522.
  • the photographing unit 102 photographs the evaluation object 223. From the photographing position of the evaluation object 223 in the photographed image, the direction in which the evaluation object 223 exists with respect to the photographing unit 102 is known.
  • the relative position of the evaluation object 223 with respect to the photographing unit 102 is obtained.
  • a line of sight 551 for viewing the evaluation object 223 from the eye position of the user 530 is obtained.
  • a point 552 where the line of sight 551 intersects the display 522 may be a display position of related content.
  • the related content may also be displayed at a designated offset from the evaluation object 223. A geometric sketch of the intersection calculation described above is given below.
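The geometric steps described above can be sketched as follows: a pixel position gives the direction of an evaluation object relative to the photographing unit 102 (pinhole model), and the display position is the intersection of the eye-to-object line of sight with the display plane. All coordinates are assumed to be expressed in a common vehicle-fixed frame, which the held geometric relationships make possible; the function names and the flat-plane model of the display 522 are illustrative assumptions.

```python
import numpy as np

def object_direction_from_pixel(pixel_xy, principal_point, focal_length_px):
    """Direction of an object relative to the camera, assuming a simple pinhole model:
    the pixel offset from the principal point divided by the focal length gives the ray."""
    u, v = pixel_xy
    cx, cy = principal_point
    d = np.array([(u - cx) / focal_length_px, (v - cy) / focal_length_px, 1.0])
    return d / np.linalg.norm(d)

def intersect_ray_with_plane(origin, direction, plane_point, plane_normal):
    """Point where a ray from `origin` along `direction` crosses the display plane
    (the point 552 where the line of sight 551 meets the display 522)."""
    denom = float(np.dot(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None                      # line of sight parallel to the display plane
    t = float(np.dot(plane_point - origin, plane_normal)) / denom
    return origin + t * direction if t > 0 else None

def related_content_position(eye_pos, object_pos, display_point, display_normal):
    """Display position of related content: intersection of the eye-to-object gaze
    with the display plane, everything expressed in one vehicle-fixed frame."""
    gaze = np.asarray(object_pos, float) - np.asarray(eye_pos, float)
    gaze /= np.linalg.norm(gaze)
    return intersect_ray_with_plane(np.asarray(eye_pos, float), gaze,
                                    np.asarray(display_point, float),
                                    np.asarray(display_normal, float))
```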
  • like the position and orientation detection device 100, the AR display device 500 is realized with a CPU 141, a memory 142, a storage device 143, an input/output interface (I/F) 144, and a communication I/F 145.
  • each function is realized by the CPU 141 loading a program stored in advance in the storage device 143 into the memory 142 and executing it. All or some of the functions may instead be realized by hardware such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field-Programmable Gate Array).
  • various data used for the processing of each function and various data generated during the processing are stored in the memory 142 or the storage device 143.
  • the content holding unit 511 is constructed in the memory 142 or the like, for example.
  • FIG. 12 is a processing flow of the AR display processing of the present embodiment.
  • the AR display process may be performed in synchronization with the position / orientation detection process performed by the position / orientation detection apparatus 100, or may be configured to be performed independently.
  • the instruction receiving unit 506 receives in advance the conditions (display conditions) of the content to be displayed.
  • the position / orientation detection apparatus 100 detects the position and orientation of the photographing unit 102 (step S2101). Note that the position / orientation detection apparatus 100 determines the position and orientation of the imaging unit 102 by the same method as in the first embodiment. This process is executed at predetermined time intervals.
  • the display content selection unit 502 selects the content to be displayed from the contents held in the content holding unit 511 (step S2102). Here, it is determined whether or not to display each content according to the display condition. Only content that matches the display conditions is displayed. The selection may be performed for each object to be superimposed, for example.
  • the display content generation unit 501, the superimposition unit 503, and the display unit 504 repeat the following processing for all selected display contents (step S2103).
  • the display content generation unit 501 determines the display mode of the selected content (step S2104).
  • the superimposing unit 503 determines the display position of the selected display content (step S2105). At this time, the superimposing unit 503 uses the position and orientation of the imaging unit 102 detected by the position / orientation detection apparatus 100 in step S2101 and the position of each object in the captured image 220.
  • the display unit 504 displays the content in the display mode determined by the display content generation unit 501 at the position calculated by the superimposing unit 503 (step S2106). The above processing is repeated for all contents.
  • in this way, the superimposing unit 503 determines the display position using the latest position and orientation of the photographing unit 102 at that time and the pixel position of each object in the captured image 220. A simplified sketch of this processing cycle is given below.
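The flow of steps S2101 to S2106 can be written as a plain loop. The objects and method names below are placeholders standing in for the units of the embodiment (position/orientation detection apparatus 100, display content selection unit 502, display content generation unit 501, superimposing unit 503, display unit 504); this is an illustrative sketch, not the claimed implementation.

```python
def ar_display_cycle(pose_detector, content_holder, selector, generator, superimposer, display):
    """One cycle of the AR display processing (steps S2101-S2106)."""
    # S2101: position and orientation of the photographing unit 102,
    #        plus the pixel positions of the detected objects
    position, orientation, detected_objects = pose_detector.detect()

    # S2102: keep only the content that matches the display conditions
    contents = selector.select(content_holder.contents, detected_objects)

    # S2103-S2106: repeat for every selected display content
    for content in contents:
        mode = generator.decide_display_mode(content)                  # S2104
        pos = superimposer.decide_display_position(                    # S2105
            content, position, orientation, detected_objects)
        display.show(content, mode, pos)                               # S2106
```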
  • the AR display device 500 is thus an AR display device that displays content on the display 522, which has transparency and reflectivity, in association with objects in the scene behind the display 522.
  • it includes the position and orientation detection apparatus 100 of the first embodiment, the display content generation unit 501 that generates the content to be displayed on the display 522, the superimposing unit 503 that determines the display position from the position and orientation of the photographing unit 102 determined by the position and orientation detection apparatus 100, and the display unit 504 that displays the generated content at the determined display position.
  • the position of the object is specified in the captured image acquired by the imaging unit 102, and the display position of the related content is determined using the specified position. Therefore, if only the amount of deviation between the photographing range 541 by the photographing unit 102 and the user's viewpoint is corrected, the display position of the object on the display can be accurately specified.
  • content can be displayed without using the relative position between the object and the AR display device 500.
  • the position and orientation of the photographing unit 102 are acquired by the position and orientation detection apparatus 100 of the first embodiment; that is, the values of the coarse detection sensor 103, such as GPS or an electronic compass, are not used directly. High accuracy is therefore obtained regardless of the accuracy of the coarse detection sensor 103, which makes it possible to provide an AR display device 500 that can superimpose AR content with high accuracy.
  • in the example above the display 522 has transparency; however, the display 522 may instead be opaque.
  • in that case, the content may be synthesized and displayed over the captured image 220 captured by the photographing unit 102. The brightness of the captured image 220 may be reduced, for example, so that the content is superimposed on a darkened image.
  • the case where the AR display device 500 is mounted on an automobile has been described as an example.
  • the usage form of the AR display device 500 is not limited to this.
  • the device may also be worn and used by the user.
  • a small AR display device 500 can be provided.
  • An example of such a configuration is HMD (Head Mounted Display).
  • the position / orientation detection apparatus 100 is also mounted on the HMD.
  • a position and orientation detection apparatus 100 for detecting the position of the HMD may be provided separately from the HMD.
  • in this case, the position/orientation detection apparatus 100 photographs the HMD. Using the method of generating the two-dimensional map information with the HMD treated as an object, the position (latitude, longitude, coordinates) of the HMD in real space is calculated, and the display position of the content is determined from this position information, so the content can be displayed at the desired position with higher accuracy.
  • since the AR display device 500 can detect the absolute position and orientation even when mounted on a moving body, the content can be displayed at the desired position with high accuracy even though the scene viewed by the user changes constantly.
  • the AR display device 500 can calculate the position and orientation of the photographing unit 102 with high accuracy and can superimpose AR with high accuracy. If high-precision AR superimposition becomes possible, the expression accuracy of AR content will increase and the expressive power will be enriched.
  • the AR display device 500 does not have to include the photographing unit 102 itself; image data captured by another image capturing device (for example, a navigation device or a drive recorder in the case of an in-vehicle device) may be used instead.
  • the AR display device 500 of the present embodiment may further include an eye tracking device 508 as shown in FIG.
  • the eye tracking device 508 is a device that tracks the user's line of sight.
  • the eye tracking device 508 is used to estimate the user's field of view and display the content in consideration of the direction of the user's line of sight. For example, the content is displayed on the display 522 in an area specified by the user's line-of-sight direction and field of view.
  • the superimposing unit 503 of the present embodiment calculates the position for displaying the content from the relative position between the photographing unit 102 and the display 522 and the user's line-of-sight direction and field of view calculated by the eye tracking device 508.
  • the user's line-of-sight direction calculated by the eye tracking device 508 is used to correct the position for displaying the content.
  • an object in the direction in which the user is facing in real space can be identified from the line-of-sight direction of the user detected by the eye tracking device 508 and the position and orientation of the photographing unit 102.
  • This object is an object that the user is watching. By using this, for example, it is possible to select and display content related to the object being watched by the user.
  • the output of the eye tracking device 508 and the detection result of the object detection unit 105 are input to the display content selection unit 502.
  • the display content selection unit 502 uses the user's line-of-sight direction and the pixel position of each object to identify the object the user is watching, and then selects content related to that object. A sketch of this gaze matching is given below.
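One plausible way to match the gaze with a detected object is to convert each object's pixel position into a direction (pinhole model) and pick the object whose direction is closest to the gaze direction, with both expressed in the same frame. The threshold, names, and data layout below are illustrative assumptions.

```python
import numpy as np

def gazed_object(gaze_dir, objects, principal_point, focal_length_px, max_angle_deg=3.0):
    """Return the id of the object the user is watching, or None.
    `objects` maps object id -> (u, v) pixel position in the captured image 220;
    `gaze_dir` is the line-of-sight direction from the eye tracking device 508,
    assumed to be expressed in the camera frame of the photographing unit 102."""
    cx, cy = principal_point
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze /= np.linalg.norm(gaze)

    best_id, best_angle = None, max_angle_deg
    for obj_id, (u, v) in objects.items():
        d = np.array([(u - cx) / focal_length_px, (v - cy) / focal_length_px, 1.0])
        d /= np.linalg.norm(d)
        angle = np.degrees(np.arccos(np.clip(np.dot(gaze, d), -1.0, 1.0)))
        if angle < best_angle:
            best_id, best_angle = obj_id, angle
    return best_id
```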
  • FIG. 14 (a) shows an example of the display position.
  • a building 711, a sign 712, and a guide plate 713 are illustrated as an example of an object (real object) that exists in real space. These real objects have known positions and appearances.
  • the superimposing unit 503 determines the display position so that the related contents 811, 812, and 813 are displayed on the display 522 based on the pixel position of the evaluation object corresponding to each real object.
  • FIG. 14A shows an example of superimposing display on a real object.
  • the display content generation unit 501 and the superimposition unit 503 may classify the display content according to the meta information, and determine the display mode and the display position according to the classification result. For example, the display mode and the display position may be determined based on the update frequency, the required timing, the importance level, and the like. A display example in this case is shown in FIG.
  • for example, the display mode and display position may be determined so that content is displayed in a distant display area (far display area) 731 such as the sky, or in a near-side display area (front-side display area) 732 where the bonnet is visible.
  • a display mode and a display position may be determined so that content is displayed in the road display area 733, a display area along the road.
  • the display mode and the display position may be determined so that content is displayed in the side display area 734, positioned to the side with respect to the user's traveling direction.
  • the display mode and the display position may be determined so that content is displayed in the aerial display area 735, an area in which no traffic signals or signs are seen and which therefore does not obstruct them.
  • the display mode and the display position may be determined so that content is displayed in the display area directly in front of the user (front display area) 736. A sketch of such a classification-based assignment is given below.
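A classification-based assignment of content to the areas 731-736 could look like the table-driven sketch below. The class names, meta-information keys, and thresholds are invented for illustration; the embodiment only states that the display mode and position are determined according to the classification result.

```python
# Hypothetical mapping from a content class to the display areas named above.
DISPLAY_AREA_BY_CLASS = {
    "static_low_priority": "far_display_area_731",    # e.g. decorative content in the sky
    "static_background":   "near_display_area_732",   # drawn over the bonnet
    "route_guidance":      "road_display_area_733",   # laid out along the road
    "needs_tracking":      "side_display_area_734",   # items the user may want to follow
    "important_notice":    "aerial_display_area_735", # kept clear of signals and signs
    "driver_facing":       "front_display_area_736",  # straight ahead of the user
}

def decide_display_area(meta: dict) -> str:
    """Tiny illustrative rule set based on meta information such as importance,
    update frequency, and whether the content needs to be tracked."""
    if meta.get("importance", 0) >= 8:
        return DISPLAY_AREA_BY_CLASS["important_notice"]
    if meta.get("needs_tracking", False):
        return DISPLAY_AREA_BY_CLASS["needs_tracking"]
    if meta.get("update_hz", 1.0) < 0.1:
        return DISPLAY_AREA_BY_CLASS["static_low_priority"]
    return DISPLAY_AREA_BY_CLASS["driver_facing"]
```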
  • the far display area 731 and the near display area 732 are hardly affected by movement of the vehicle or changes in the scene; that is, they are regions (small-change regions) in which the appearance of the real space changes little as the automobile moves.
  • in these regions, the processing by the superimposing unit 503 that calculates the display position according to the real space can be reduced. Therefore, by displaying static content in a small-change region, the AR superimposition processing can be reduced efficiently without giving the user a sense of incongruity, lowering the processing load of the AR display device 500.
  • even as the vehicle moves and objects flow out of the forward view, the side display area 734 remains in place, so displaying information that needs to be tracked there allows the user to keep track of it after the automobile has moved. Displaying important information in the aerial display area 735 gives high readability without impairing the visibility of the real scene, and displaying content in the front display area 736 lets the user view it without turning the line of sight far from the front.
  • AR overlay display with high visibility and readability for the user is possible. Moreover, this AR superimposed display can be realized without any instruction from the user.
  • the classification of display content is not limited to that based on meta information. For example, it may be performed according to a predetermined time for maintaining the display of the content.
  • the operation mode of the moving body may be determined, and the display mode and / or display position may be determined according to the determination result. For example, content is displayed in the front display area 736 only in the automatic operation mode.
  • the display content generation unit 501 and / or the superimposition unit 503 are configured to receive a signal indicating whether or not the automatic operation mode is set, for example, from the ECU or the like of the moving body.
  • the automatic operation mode of the moving object is a mode in which the user does not need to actively operate and the moving object automatically operates. At this time, the user does not need to pay attention to the actual surrounding scene. However, during actual driving, it may be necessary for the user to actively drive depending on the surrounding traffic environment, time, place, etc., and there are cases where attention must be paid to the actual surrounding scene.
  • when content is displayed elsewhere, the user's line of sight turns away from the front.
  • here, the content is displayed in front of the user, so it can be viewed without removing the line of sight from the front; even while viewing content during automatic driving, the user's line of sight stays forward. Consequently, when the automatic driving mode is canceled and attention must be returned to the real scene, the user's attention can be drawn forward smoothly.
  • the display mode and display position of the content displayed on the display 522 may be changed depending on the level of alerting required by the user.
  • the display content generation unit 501 and/or the superimposition unit 503 receive signals such as the user's consciousness level, fatigue level, and driving state from the moving body or from sensors attached to the user. They may also receive information such as the surrounding traffic conditions grasped by the vehicle or by a navigation system mounted in it.
  • the display content generation unit 501 and / or the superimposition unit 503 combine these to determine the alert level.
  • when little alerting is required, the displayed content is not restricted; the display mode and/or the display position are then determined so that the content is simplified as the alert level rises and more alerting becomes necessary.
  • as more alerting becomes necessary, the amount of information in the content decreases but its readability increases. Because less gazing is required, the user can obtain information from the content while still paying attention to the surroundings. A sketch of such level-based simplification is given below.
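The simplification could be driven by a small mapping from alert level to the amount of content kept, as in the sketch below. The levels and thresholds are arbitrary examples; the embodiment only states that display content is simplified as more alerting becomes necessary.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisplayContent:
    title: str
    body: str
    show_image: bool = False

def simplify_for_alert_level(content: DisplayContent, alert_level: int) -> Optional[DisplayContent]:
    """Reduce the displayed information as the alert level rises."""
    if alert_level >= 3:
        return None                                               # suppress the content entirely
    if alert_level == 2:
        return DisplayContent(content.title, "", False)           # title only
    if alert_level == 1:
        return DisplayContent(content.title, content.body[:40], False)  # shortened body
    return content                                                # level 0: show everything
```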
  • the retreat area 737 is an area that does not interfere with driving, for example a region of the windshield away from the area directly in front of the driver.
  • the retreat area 737 may be set smaller than the original display area.
  • the display in the retreat area 737 is performed, for example, when the automatic operation mode is switched to the normal operation mode.
  • the image may be changed continuously (animated) while it is being retracted or reduced.
  • the display content generation unit 501 and/or the superimposition unit 503 may be configured to detect in advance, using a navigation system or the like, a situation such as an upcoming switch to the normal operation mode, and to give advance notice of the retreat.
  • the advance notice is performed, for example, by displaying a warning text in the warning area 743.
  • instead of displaying warning text, a change in color tone, a countdown display, or an audio warning may be used.
  • the content display may be completely canceled when the alert level reaches a predetermined level or when a predetermined condition is satisfied.
  • the case where the predetermined condition is satisfied is, for example, a case where the automatic operation mode is canceled.
  • in that case, the display content generation unit 501 and/or the superimposition unit 503 increases the transparency of the displayed content so that it appears to fade out, letting the user see only the actual scene ahead. Before the transparency is raised, an advance notice may be displayed under the control of the display content generation unit 501 and/or the superimposition unit 503.
  • the display restriction may be canceled as shown in FIG.
  • the case where the predetermined condition in this case is satisfied is, for example, a case where the automatic operation mode is set.
  • the display content generation unit 501 and / or the superimposition unit 503 cancels the display restriction such as in the automatic operation mode, enlarges the size of the front display area 736, and determines the display mode and the display position to display the content.
  • meta information may be used so that entertainment content requiring sustained attention, such as a document or a video, is displayed on a large screen.
  • the display area change control may be performed by detecting the driving location, traffic conditions, and user conditions.
  • the user's driving skill level may also be added to the criteria used for this control.
  • the content display area and the display method can be changed according to the environmental condition, the driving condition, and the like.
  • with the AR display device 500 of this modification, since the content display can be switched in this way, both forward alerting and readability can be achieved.
  • FIG. 16A is a diagram for explaining a method of associating a real object with content.
  • FIG. 16A illustrates, for example, a case where the image is displayed in the display area 741 facing the user.
  • a ribbon-like drawing effect 751 in which the content is drawn from the real object 714 is displayed.
  • the superimposing unit 503 first obtains the coordinates of the display position of the real object 714 on the display 522 from the direction of the user's line of sight and the relative position of the AR display device 500 and the real object 714. Then, a ribbon-like image is generated so as to connect the display position of the real object 714 and the display coordinates of the display area 741.
  • the drawing effect 751 may instead be rendered as a string.
  • the drawing effect 751 may be translucent; when translucent, the association can be made clear without obscuring the actual scene.
  • the string may be tied between the real object 714 and the display area 741 so as not to disturb the field of view. A sketch of generating such a ribbon is given below.
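Generating the ribbon between the object's display coordinates and the information display area could be done with a simple curve, for example the quadratic Bezier sketch below; the sag, sample count, and rendering choice (semi-transparent polyline) are illustrative assumptions, not details of the embodiment.

```python
import numpy as np

def ribbon_points(object_xy, area_xy, sag_px=40.0, samples=16):
    """2-D points of a gently curved, string-like ribbon from the display position of
    the real object to the display area, as a quadratic Bezier curve that sags downward
    so it does not cut straight across the field of view."""
    p0 = np.asarray(object_xy, dtype=float)
    p2 = np.asarray(area_xy, dtype=float)
    p1 = (p0 + p2) / 2.0 + np.array([0.0, sag_px])   # control point pulled downward
    t = np.linspace(0.0, 1.0, samples)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

# The resulting polyline would be rendered semi-transparently (e.g. alpha around 0.4)
# so the association is visible without hiding the real scene behind it.
```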
  • when the content to be displayed is limited to information the user is interested in, a decrease in the readability of other AR content can be suppressed.
  • the user's line-of-sight direction detected by the eye tracking device 508 is used.
  • the degree of interest of the user is specified based on the degree of coincidence between the viewing direction of the user and the display position of the real object.
  • the degree of interest of the user may be determined by an active selection operation by the user. In this case, the intention of the user can be reflected.
  • this method may be used to accumulate user selection results and use them for other processing.
  • a reference marker 761 may be used as another method for associating a real object with content.
  • the display content generation unit 501 and / or the superimposition unit 503 superimpose the reference marker 761 on the real object 714. Then, the content related to the real object 714 is displayed in an arbitrary information display area 742. At this time, the reference marker 761 is also displayed in the information display area 742.
  • the configuration in which AR content related to an object is displayed using the above-described drawing effect 751 is particularly useful when the object itself carries information.
  • for example, the real object 715 is a signboard; the signboard itself carries information, and the associated content provides further information.
  • the display content generation unit 501 and/or the superimposition unit 503 determine the display mode and the display position so that the content related to the real object 715 is displayed in the information display area 742 set at an arbitrary position. At the same time, the display content generation unit 501 and/or the superimposition unit 503 display the drawing effect 751 between the real object 715 and the information display area 742.
  • the user can obtain more information than viewing the signboard.
  • in FIG. 17A, an image obtained by photographing the guide plate 713 is displayed in the display area 744 as content related to the guide plate 713.
  • the display area 744 may be at an arbitrary position, but the size is larger than that of the guide plate 713. This makes it easier for the user to grasp the information on the guide plate 713.
  • a reference marker 761 may be displayed on both.
  • the object detection unit 105 detects the guide plate 713 from the captured image captured by the imaging unit 102. Then, the display content selection unit 502 selects an image of the guide board 713 as the display content. In addition, the display content generation unit 501 and / or the superposition unit 503 determine a display mode and a display position so as to realize the display.
  • the AR content to be displayed may be hierarchized. As shown in FIG. 17B, a group of related contents 821 to 825 are displayed in the same display area 745. Here, a case where five contents are displayed is illustrated. However, the number of contents displayed in one display area 745 is arbitrary.
  • a set of contents to be displayed is selected and generated by the display content selection unit 502.
  • the display content selection unit 502 generates a set of contents using, for example, meta information.
  • Each content includes a company name, an icon indicating the company, a product description, a price display, a reference URL, and the like.
  • the display content selection unit 502, the display content generation unit 501, and the superimposition unit 503 determine the selection timing, the display mode, and the display position so that these contents are displayed according to the user's selection and at the appropriate timing.
  • the content display is often updated in real time, so if a large amount of information is displayed at once, the user's understanding may be hindered. To avoid this, the content presented to the user at any one moment is simplified and the items are displayed sequentially; a hierarchical set of contents is used for this purpose, so that the information can be provided with good readability. A sketch of stepping through such a set is given below.
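Sequential presentation of a hierarchical content set can be reduced to a small cursor over the related items, as in the sketch below; the class name and the example items are hypothetical.

```python
class ContentSet:
    """A hierarchical set of related contents (e.g. company name, icon, product
    description, price, reference URL) shown one element at a time so that the
    information presented at any moment stays simple."""

    def __init__(self, items):
        self._items = list(items)
        self._index = 0

    def current(self):
        return self._items[self._index]

    def advance(self):
        """Step to the next element on a user selection or a timer tick."""
        self._index = (self._index + 1) % len(self._items)
        return self.current()

# usage sketch
ad = ContentSet(["Company A", "icon: company_a.png", "Product X: outline",
                 "Price: 980 yen", "https://example.com/product-x"])
shown_first = ad.current()    # displayed initially
shown_next = ad.advance()     # displayed after the user's selection or a delay
```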
  • the content included in the content set may be tailored as a story, and the story may be advanced in time series.
  • content with mascots may be displayed.
  • the content with a mascot includes a content 831 and a mascot 841.
  • a popular mascot 841 is added and displayed.
  • the eye tracking device 508 determines whether or not the user's line of sight has remained in the display area 746 of the content 831 for a predetermined period.
  • the display content generation unit 501 and the superposition unit 503 display the mascot 841 in the display area 746. Thereby, it is possible to operate such that the mascot 841 is displayed only when the user reads the content 831. By performing such an operation, the user's interest can be effectively attracted to the content 831.
  • as a result, the viewing rate of the content 831 increases, and high-value-added content display can be realized with good readability for the user. A sketch of the gaze dwell-time check is given below.
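The trigger for the mascot 841 amounts to a dwell-time check on the gaze samples inside the display area 746. The sketch below assumes gaze coordinates on the display and a rectangular area; the threshold of 1.5 seconds is an arbitrary stand-in for the "predetermined period".

```python
import time

class DwellDetector:
    """Report True once the gaze has stayed inside a display area for a given period."""

    def __init__(self, dwell_seconds: float = 1.5):
        self.dwell_seconds = dwell_seconds
        self._entered_at = None

    def update(self, gaze_xy, area_rect, now=None) -> bool:
        """`gaze_xy` is the gaze point on the display; `area_rect` is (left, top, right, bottom)."""
        now = time.monotonic() if now is None else now
        x, y = gaze_xy
        left, top, right, bottom = area_rect
        if not (left <= x <= right and top <= y <= bottom):
            self._entered_at = None          # gaze left the area: reset the timer
            return False
        if self._entered_at is None:
            self._entered_at = now           # gaze just entered the area
        return (now - self._entered_at) >= self.dwell_seconds
```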
  • the display content may be transmitted to the outside and stored.
  • the transmission destination may be an information processing terminal 910 as shown in FIG., or another storage device connected to the network; transmission is performed via the gateway 110. Transmitting the information to another storage device for storage avoids consuming the capacity of the information processing terminal 910. With this configuration the displayed information can be reused, and being able to use displayed content later increases convenience for the user.
  • the display content selection unit 502 transmits the content designated by the user from among the selected content.
  • the instruction may be received via the above-mentioned motion recognition apparatus.
  • an identifier may be attached to each content so that it can be designated; the identifier is given by, for example, the display content selection unit 502 or the display content generation unit 501.
  • the display content selection unit 502 or the display content generation unit 501 adds an identifier by using meta information, sequentially assigning predetermined characters and numbers, or the like.
  • the content may be one that gives decoration to the object.
  • the decoration may be highly entertaining.
  • Fig. 10 shows a display example in this case.
  • contents 816 and 817 are respectively displayed at positions corresponding to a real building 716 and a car 717, and these are decorated.
  • this can liven up the atmosphere of the drive.
  • the display content selection unit 502 selects the contents 816 and 817 to be displayed.
  • in the above description the AR display device 500 is mounted on a moving body such as an automobile, but its use is not limited to this.
  • it may be mounted on the HMD and used by pedestrians.
  • a normal screen or the like may be used for the display 522, superimposed on other images, and used indoors.
  • the present invention is not limited to the embodiments described above and includes various modifications.
  • the above-described embodiments have been described in detail for easy understanding of the present invention, and are not necessarily limited to those having all the configurations described.
  • a part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • SYMBOLS: 100: position and orientation detection apparatus, 101: control unit, 102: imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Automation & Control Theory (AREA)
  • Studio Devices (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An object of the present invention is to detect, with a simple configuration, the position and orientation of a user with high accuracy from a small amount of information. A position and orientation detection device is provided, comprising: an image capturing unit (102) for capturing an image of a predetermined capture range including two or more objects; an object detection unit (105) for specifying the pixel position of each object in the image captured by the image capturing unit (102); an orientation calculation unit (106) for calculating the orientation of each object relative to the image capturing unit (102) using the pixel position, map information on each object, and the focal length of the image capturing unit; and a position and orientation calculation unit (107) for calculating the position and orientation of the image capturing unit (102) using the orientation of each object and the map information. The object detection unit (105) extracts two-dimensional map information corresponding to the capture range from two-dimensional map information storing the positions and shapes of a plurality of objects within a predetermined area, and specifies the pixel position using the two-dimensional map information.
PCT/JP2017/001426 2017-01-17 2017-01-17 Dispositif de détection de position et de posture, dispositif d'affichage ar, procédé de détection de position et de posture et procédé d'affichage ar WO2018134897A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/001426 WO2018134897A1 (fr) 2017-01-17 2017-01-17 Dispositif de détection de position et de posture, dispositif d'affichage ar, procédé de détection de position et de posture et procédé d'affichage ar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/001426 WO2018134897A1 (fr) 2017-01-17 2017-01-17 Dispositif de détection de position et de posture, dispositif d'affichage ar, procédé de détection de position et de posture et procédé d'affichage ar

Publications (1)

Publication Number Publication Date
WO2018134897A1 true WO2018134897A1 (fr) 2018-07-26

Family

ID=62908970

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/001426 WO2018134897A1 (fr) 2017-01-17 2017-01-17 Dispositif de détection de position et de posture, dispositif d'affichage ar, procédé de détection de position et de posture et procédé d'affichage ar

Country Status (1)

Country Link
WO (1) WO2018134897A1 (fr)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006287435A (ja) * 2005-03-31 2006-10-19 Pioneer Electronic Corp 情報処理装置、そのシステム、その方法、そのプログラム、および、そのプログラムを記録した記録媒体
JP2007121528A (ja) * 2005-10-26 2007-05-17 Fujifilm Corp 地図作成更新システムおよび地図作成更新方法
JP2008287379A (ja) * 2007-05-16 2008-11-27 Hitachi Ltd 道路標識データ入力システム
JP2010066042A (ja) * 2008-09-09 2010-03-25 Toshiba Corp 画像照射システムおよび画像照射方法
JP2011053163A (ja) * 2009-09-04 2011-03-17 Clarion Co Ltd ナビゲーション装置および車両制御装置
JP2011169808A (ja) * 2010-02-19 2011-09-01 Equos Research Co Ltd 運転アシストシステム
JP2012035745A (ja) * 2010-08-06 2012-02-23 Toshiba Corp 表示装置、画像データ生成装置及び画像データ生成プログラム
JP2014009993A (ja) * 2012-06-28 2014-01-20 Navitime Japan Co Ltd 情報処理システム、情報処理装置、サーバ、端末装置、情報処理方法、及びプログラム
JP2015217798A (ja) * 2014-05-16 2015-12-07 三菱電機株式会社 車載情報表示制御装置
JP2016070716A (ja) * 2014-09-29 2016-05-09 三菱電機株式会社 情報表示制御システムおよび情報表示制御方法
JP2016090557A (ja) * 2014-10-31 2016-05-23 英喜 菅沼 移動体用の測位システム

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230249618A1 (en) * 2017-09-22 2023-08-10 Maxell, Ltd. Display system and display method
CN110069136A (zh) * 2019-04-29 2019-07-30 努比亚技术有限公司 一种穿戴状态识别方法、设备及计算机可读存储介质
CN110069136B (zh) * 2019-04-29 2022-10-11 中食安泓(广东)健康产业有限公司 一种穿戴状态识别方法、设备及计算机可读存储介质
CN110399039A (zh) * 2019-07-03 2019-11-01 武汉子序科技股份有限公司 一种基于眼动跟踪的虚实场景融合方法
TWI731624B (zh) * 2020-03-18 2021-06-21 宏碁股份有限公司 估計頭戴顯示器位置的方法、電腦裝置及頭戴顯示器
CN111665943A (zh) * 2020-06-08 2020-09-15 浙江商汤科技开发有限公司 一种位姿信息展示方法及装置
CN111665943B (zh) * 2020-06-08 2023-09-19 浙江商汤科技开发有限公司 一种位姿信息展示方法及装置
CN112711982A (zh) * 2020-12-04 2021-04-27 科大讯飞股份有限公司 视觉检测方法、设备、系统以及存储装置
WO2022161140A1 (fr) * 2021-01-27 2022-08-04 上海商汤智能科技有限公司 Procédé et appareil de détection cible, et dispositif informatique et support de stockage

Similar Documents

Publication Publication Date Title
WO2018134897A1 (fr) Dispositif de détection de position et de posture, dispositif d'affichage ar, procédé de détection de position et de posture et procédé d'affichage ar
US10029700B2 (en) Infotainment system with head-up display for symbol projection
US11373357B2 (en) Adjusting depth of augmented reality content on a heads up display
EP2208021B1 (fr) Procédé et dispositif pour mappage de données de capteur de distance sur des données de capteur d'image
US8773534B2 (en) Image processing apparatus, medium recording image processing program, and image processing method
JP5443134B2 (ja) シースルー・ディスプレイに現実世界の対象物の位置をマークする方法及び装置
US8395490B2 (en) Blind spot display apparatus
JP6176541B2 (ja) 情報表示装置、情報表示方法及びプログラム
US20140285523A1 (en) Method for Integrating Virtual Object into Vehicle Displays
US20120224060A1 (en) Reducing Driver Distraction Using a Heads-Up Display
CN117058345A (zh) 增强现实显示器
US11525694B2 (en) Superimposed-image display device and computer program
JP2007080060A (ja) 対象物特定装置
KR101573576B1 (ko) Avm 시스템의 이미지 처리 방법
JP2007198962A (ja) 車両用案内表示装置
US20210327113A1 (en) Method and arrangement for producing a surroundings map of a vehicle, textured with image information, and vehicle comprising such an arrangement
JPWO2016031229A1 (ja) 道路地図作成システム、データ処理装置および車載装置
JP2004265396A (ja) 映像生成システム及び映像生成方法
JP5086824B2 (ja) 追尾装置及び追尾方法
CN115176457A (zh) 图像处理设备、图像处理方法、程序和图像呈现系统
JP2009077022A (ja) 運転支援システム及び車両
CN113011212B (zh) 图像识别方法、装置及车辆
CN111241946B (zh) 一种基于单dlp光机增大fov的方法和系统
CN111243102B (zh) 一种基于扩散膜改造增大fov的方法和系统
JP4858017B2 (ja) 運転支援装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17892855

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17892855

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP