WO2004006207A1 - Automatic Transportation Guidance Device - Google Patents
Automatic Transportation Guidance Device
- Publication number
- WO2004006207A1 PCT/JP2003/008465 WO2004006207A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- image
- transportation
- image data
- dimensional
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096708—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
- G08G1/096725—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information generates an automatic action on the vehicle control
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096733—Systems involving transmission of highway information, e.g. weather, speed limits where a selection of the information might take place
- G08G1/09675—Systems involving transmission of highway information, e.g. weather, speed limits where a selection of the information might take place where a selection from the received information takes place in the vehicle
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096766—Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission
- G08G1/096783—Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission where the origin of the information is a roadside individual element
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096766—Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission
- G08G1/096791—Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission where the origin of the information is another vehicle
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G3/00—Traffic control systems for marine craft
- G08G3/02—Anti-collision systems
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/04—Anti-collision systems
Definitions
- the present invention automatically recognizes the situation of a transportation vehicle in operation, notifies the operator of that situation by text, images, sounds, and the like, guides the vehicle along its operating route, and provides appropriate information during operation.
- the present invention thus relates to an automatic transportation guidance device for instructing various actions. Background art
- conventionally, there was no device that automatically read the signs and signals installed on roads and recognized their meanings, recognized the situation of parallel-running vehicles, overtaking vehicles, oncoming vehicles, and the like on the road, automatically notified the driver or passengers of that situation, and guided the driver along the road. Instead, a passenger sitting in the passenger seat would read the signs and signals installed on the road, inform the driver of their content, watch the parallel-running vehicles, overtaking vehicles, oncoming vehicles, and the like, inform the driver of the situation, and in some cases instruct the driver to perform operations such as applying the brake.
- Such conventional practice was not limited to vehicles running on road surfaces, such as automobiles; the same was true for vehicles running on tracks, such as trains, for airframes such as aircraft navigating airspace, and for hulls navigating the sea, such as ships. Disclosure of the invention
- Such problems likewise apply not only to vehicles running on road surfaces such as automobiles, but also to vehicles running on tracks such as trains, to airframes such as aircraft, and to hulls navigating the sea such as ships.
- the present invention has been made in view of such conventional problems. Its object is to provide an automatic transportation guidance device that can process an enormous amount of image data, audio data, and the like at high speed, automatically recognize the ever-changing conditions around a transportation vehicle such as an automobile, and automatically guide the vehicle along an operating route such as a road.
- the automatic transportation guidance device of the present invention includes an input device for acquiring the surroundings of a transportation vehicle in operation as images, sounds, and the like; a database in which image data, audio data, and the like related to the transportation vehicle are stored in advance;
- a comparison device that compares the image data, audio data, and the like acquired by the input device with the image data, audio data, and the like stored in the database;
- a recognition device that recognizes and specifies the content of the data when the comparison result matches; and an output device that notifies the driver or other operator of the result recognized and specified by the recognition device by text, images, sounds, or the like.
- when image data, audio data, and the like corresponding to an object acquired by the input device do not exist in the database,
- the image data, audio data, and the like corresponding to the new object are newly stored in the database by
- a storage device, in association with the position on the map. It is also preferable to have a data updating device that, when the image data, audio data, and the like corresponding to an object differ from those stored in the database, updates them to the new image data, audio data, and the like and stores the updated data in the database.
- it is preferable to have a judgment device that makes a judgment based on the items recognized or specified by the recognition device and notifies the operator of the judgment result through the output device by text, images, sounds, or the like.
- it is also preferable that the judgment device makes a judgment based on the items recognized or specified by the recognition device and further instructs the output device to perform a predetermined operation based on the judgment result, so that the device operates automatically. Further, in the automatic transportation guidance device, one or more of the devices constituting it may be connected to the other devices via a communication line.
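The pipeline described above (input, comparison against the database, recognition, judgment, output) can be sketched in code. This is an illustrative sketch only: the data are abstracted to hashable keys instead of image or audio features, and all names are invented, not taken from the patent.

```python
# Sketch of the compare -> recognize -> judge -> output pipeline.
# "Acquired data" is abstracted to a string key; a real system would
# match image/audio features against the database.

def compare(acquired, database):
    """Comparison device: return the matching database entry, or None."""
    return database.get(acquired)

def recognize(entry):
    """Recognition device: specify the content of the matched data."""
    return entry["meaning"] if entry else None

def judge(meaning):
    """Judgment device: choose an action based on the recognized item.
    The rule table here is purely illustrative."""
    rules = {
        "speed_limit_50": ("notify", "Speed limit is 50 km/h"),
        "crossing_red_blinking": ("brake", "Stop before the level crossing"),
    }
    return rules.get(meaning, ("none", ""))

def guide(acquired, database):
    """Run the full pipeline and return what the output device would do."""
    meaning = recognize(compare(acquired, database))
    action, message = judge(meaning)
    return {"recognized": meaning, "action": action, "message": message}

# Hypothetical database contents keyed by an abstract feature id.
database = {
    "sign_round_50": {"meaning": "speed_limit_50"},
    "signal_red_alarm": {"meaning": "crossing_red_blinking"},
}
```

For example, `guide("sign_round_50", database)` yields a "notify" action, while an object absent from the database yields no action, corresponding to the storage-device case discussed above.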
- a plane development processing device may also be provided, consisting of: a plane image conversion device that converts perspective image data of the surroundings of the transportation vehicle, acquired by the input device, into plane image data from which the perspective has been eliminated;
- a plane image recognition device that recognizes and specifies the content of the converted plane image data based on the result of comparing it, by the comparison device, with the image data stored in the database;
- and an image content measurement device that measures various spatial physical quantities of the recognized and specified object.
- in this case, the plane image conversion device may have a function of converting 360-degree omnidirectional image data of the surroundings of the transportation vehicle, acquired by the input device, into plane image data.
- a traffic information detection device that acquires the situation around the transportation vehicle as image data, measurement data, and the like may be installed along the route, and the guidance device may receive the image data, measurement data, and the like acquired by the traffic information detection device.
- the traffic information detection device may include a graphic device for generating computer graphics based on the acquired image data and measurement data.
- a positional relationship recognition device may also be attached, consisting of: an image acquisition unit that acquires images with an input device mounted on the transportation vehicle; a temporary image recording unit that records the acquired images for a certain period; a clue point
- automatic extraction unit that automatically extracts clue points for obtaining corresponding points between images; a corresponding point detection unit that extracts two or more images having a distance difference and obtains the corresponding points of a plurality of clue points in each image;
- an input device position and direction measurement unit that calculates the position and direction of the input device from the detected corresponding points; and an actual measurement scale conversion unit that converts the obtained relative distance values of the three-dimensional coordinates of the input device position into absolute distance values using actual measured values.
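The actual measurement scale conversion unit described above turns relative three-dimensional coordinates into absolute ones once a single real distance is known (for example, a measured camera displacement). A minimal sketch; the function name and data layout are illustrative, not taken from the patent.

```python
def to_absolute_scale(points, relative_dist, measured_dist):
    """Scale relative 3D coordinates to absolute values.

    points:        list of (x, y, z) in arbitrary relative units
    relative_dist: a distance expressed in those relative units
    measured_dist: the same distance as actually measured (e.g. metres)
    """
    s = measured_dist / relative_dist   # single global scale factor
    return [(x * s, y * s, z * s) for (x, y, z) in points]
```

Since reconstruction from corresponding points is only determined up to scale, one measured distance fixes the whole coordinate system, which is the role this unit plays.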
- FIG. 1 is a configuration diagram of the automatic transportation guidance device of the present invention.
- FIG. 2 is an explanatory diagram showing the operation of the automatic transportation guidance device of the present invention when the transportation vehicle is an automobile.
- FIG. 3 is an explanatory diagram showing the operation of the automatic transportation guidance device of the present invention when the transportation vehicle is a track vehicle.
- FIG. 4 is an explanatory diagram showing the operation of the automatic transportation guidance device of the present invention when the transportation vehicle is an aircraft.
- FIG. 5 is an explanatory diagram showing the operation of the automatic transportation guidance device of the present invention when the transportation vehicle is a ship.
- FIG. 6 is a configuration diagram of the plane development processing device attached to the automatic transportation guidance device.
- FIG. 7 is a diagram showing the variables used when developing a perspective image acquired by a video camera into a plane and converting it into a plane image excluding perspective.
- FIGS. 8 and 9 each show (A) a perspective image acquired by the input device and (B) the plane image converted by the plane image conversion device.
- FIG. 10 is an explanatory diagram showing the spatial physical quantities measured by the image content measurement device.
- FIG. 11 is an explanatory diagram showing the concept of developing a 360-degree omnidirectional image into a plane and converting it into a plane image.
- FIG. 12 is an example of a plane image converted from the entire surrounding image.
- FIG. 13 is an explanatory diagram showing an example of installation of the traffic information detection device on a road.
- FIG. 14 is an explanatory diagram showing an example of transmission to the guidance device.
- FIG. 15 is an example of an output screen displayed as a computer graphics image when a vehicle is running on a road.
- FIG. 16 is an example of an output screen displayed as a computer graphics image when a vehicle enters an intersection.
- FIG. 17 is a configuration diagram of the positional relationship recognition device attached to the automatic transportation guidance device of the present invention.
- FIG. 18 is a diagram showing an example of a target part selected as a clue point in the positional relationship recognition device of FIG. 17.
- FIG. 19 is a conceptual diagram showing the overlapping of fields of view by a plurality of in-vehicle cameras. BEST MODE FOR CARRYING OUT THE INVENTION
- the automatic transportation guidance device 1 of the present invention comprises an input device 2, a database 3, a comparison device 4, a recognition device 5, an output device 6, a judgment device 7, a storage device 8, and a data updating device 9.
- the input device 2 acquires the situation around the transportation vehicle as images, sounds, and the like while the vehicle is in operation. For example, images are acquired by a video camera and sounds by a microphone.
- in the case of an automobile, the database 3 stores in advance image data, audio data, and the like relating to road signs, road markings, traffic information boards, and the like, as well as to automobiles themselves.
- in the case of a track vehicle, it stores image data, audio data, and the like relating to trains, railroad crossings, level crossing signals, moving obstacles such as cars, platforms, and the like. In the case of an aircraft, it stores image data and audio data for airport shapes, airport runways, taxiways, structures such as control towers, moving obstacles such as cars, and aircraft.
- in the case of a ship, it stores image data and audio data relating to port shapes, quays, wharves, navigation signs, and ships.
- the comparison device 4 compares the image data, audio data, and the like acquired by the input device 2 with the image data, audio data, and the like stored in the database 3.
- the recognition device 5 recognizes and specifies the content of the data based on the result of the comparison of the image data, audio data, and the like.
- the output device 6 notifies the operator of the result recognized and specified by the recognition device 5 by text, images, sounds, or the like.
- when image data and audio data corresponding to an object acquired by the input device 2 do not exist in the database 3 in the first place, the storage device 8 newly stores the image data, audio data, and the like corresponding to the new object in the database 3 in correspondence with its position on the map.
- when an object exists at a given position on the map but its image data, audio data, and the like differ from those stored in the database 3, the data updating device 9 updates them to new image data, audio data, and the like and stores these in the database 3.
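Taken together, the behaviour of the storage device 8 and the data updating device 9 amounts to an insert-or-update keyed by map position. A hedged sketch; the key and value layout are invented for illustration.

```python
def upsert(database, position, observed):
    """Store-or-update an observation keyed by map position.

    - No entry at this position: store it (role of storage device 8).
    - Entry differs from observation: replace it (data updating device 9).
    - Entry matches: leave the database unchanged.
    Returns which case occurred, for illustration.
    """
    stored = database.get(position)
    if stored is None:
        database[position] = observed
        return "stored"
    if stored != observed:
        database[position] = observed
        return "updated"
    return "unchanged"
```

This keeps the database consistent with the route as signs are added, replaced, or removed over time, which is the maintenance role the two devices play.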
- the judgment device 7 makes a judgment based on the items recognized or specified by the recognition device 5, and the operator is notified of the judgment result through the output device 6 by text, images, sounds, or the like.
- the judgment device 7 may also instruct the output device 6 to perform a predetermined operation based on the judgment result, automatically operating a braking device such as a brake or a steering device such as a steering wheel.
- an example will now be specifically described in which the automatic transportation guidance device 1 automatically recognizes the situation of an automobile, notifies the driver or passengers of the situation by text, images, sounds, and the like, guides the vehicle along the road, and instructs appropriate operations while traveling on the road.
- Example 1: As shown in FIG. 2, when a road sign 12 indicating a speed limit of 50 km/h is acquired as image data by the input device 2, the comparison device 4 first compares it with the image data stored in the database 3, and the recognition device 5 recognizes it as road sign data indicating a speed limit of 50 km/h.
- the output device 6 then notifies the driver or passengers that the speed limit is 50 km/h by text, images, sounds, or the like. In this way, even if the driver or passengers miss the road sign 12, the automatic transportation guidance device 1 reliably informs them that the speed limit on the current road is 50 km/h, preventing traffic violations due to excessive speed and helping to prevent rear-end collisions.
- next, the comparison device 4 compares the image data with the image data stored in the database 3.
- the other vehicle 13 is recognized by the recognition device 5 as an overtaking vehicle.
- the judgment device 7 then judges that the own vehicle should not accelerate while the other vehicle 13 is overtaking, and the output device 6 notifies the driver or passengers not to accelerate by text, images, sounds, or the like.
- the comparison device 4 compares the image data and audio data with those stored in the database 3, and the recognition device 5 recognizes that the red light of the level crossing signal 14 ahead is blinking and the alarm is sounding.
- the judgment device 7 then judges that the vehicle must stop and, considering further that entering the level crossing is dangerous and prohibited by law, instructs the output device 6 to apply the brake. As a result, the output device 6 operates the brake and the vehicle is automatically stopped at a safe position before the railroad crossing.
- thus, even when the driver or passengers do not notice that the red light of the level crossing signal 14 ahead is blinking and the alarm is sounding, or notice it but do not take appropriate action, the automatic transportation guidance device 1 reliably notifies them that the signal is blinking, the alarm is sounding, and entering the crossing is dangerous, and can instruct and execute appropriate measures such as an automatic stop, preventing accidents that would cause enormous damage.
- next, an example will be specifically described in which the automatic transportation guidance device 1 automatically recognizes the situation of a track vehicle, notifies the driver or passengers of the situation by text, images, sounds, and the like,
- guides the vehicle along the track, and instructs appropriate operations while traveling on the track.
- when the input device 2 acquires the platform 21 as image data, the comparison device 4 first compares it with the image data stored in the database 3, and the recognition device 5 recognizes that a stop station is approaching.
- the judgment device 7 then judges that the vehicle should decelerate because the stop station is approaching, and the output device 6 notifies the driver or passengers of the instruction to decelerate by text, images, sounds, or the like.
- in this way, even if the driver or passengers do not notice the platform 21 ahead, the automatic transportation guidance device 1 reliably informs them that a stop station is approaching and instructs deceleration, preventing the mistake of passing a stop station by error.
- the comparison device 4 compares the image data and audio data with those stored in the database 3, and the recognition device 5 recognizes that the red light of the level crossing signal 23 ahead is blinking, the alarm is sounding, and a moving obstacle 24 such as a car is present on the level crossing 22.
- the judgment device 7 then judges that an emergency stop is required and, considering further that entering the level crossing 22 is dangerous, instructs the output device 6 to apply the brake immediately. As a result, the output device 6 immediately activates the brake, and the vehicle automatically comes to a sudden stop at a safe position just before the crossing 22.
- thus, even when the driver or passengers do not notice that the red light of the level crossing signal 23 ahead is blinking, the alarm is sounding, and a moving obstacle 24 such as a car is present on the level crossing 22, or notice it but do not take appropriate action, the device reliably notifies them that the signal is blinking, the alarm is sounding, and entering the crossing 22 is dangerous, and can instruct and execute the appropriate measure of automatically stopping the vehicle, preventing railroad crossing accidents and the like that would cause enormous damage.
- next, an example will be described in detail in which the automatic transportation guidance device automatically recognizes the situation of an aircraft, notifies the pilot or passengers of the situation by text, images, sounds, and the like,
- guides the aircraft along the navigation route and within the airport, and instructs appropriate operations when traveling along the route and through the airport.
- the judgment device 7 judges that an emergency stop is required and, considering further that moving forward is dangerous, instructs the output device 6 to apply the brake immediately. As a result, the output device 6 immediately activates the brake, and the own aircraft 31 automatically comes to a sudden stop at a safe position in front of the moving obstacle.
- thus, even when the pilot or passengers do not notice that moving obstacles such as the other aircraft 32 and the car 33 are present in front of the own aircraft 31, or notice them but do not take appropriate action, the device reliably notifies them that a moving obstacle is ahead and that moving forward is dangerous.
- since the appropriate measure of automatically stopping the aircraft can be instructed and executed, aircraft accidents and the like that would cause enormous damage can be prevented.
- next, an example will be specifically described in which the automatic transportation guidance device 1 automatically recognizes the situation of a ship's hull, notifies the pilot or passengers of the situation by text, images, sounds, and the like,
- guides the ship along the route and within the port, and instructs appropriate actions when navigating the route and the port.
- a vessel 41 navigates in order to dock in a port 45.
- the waterway, wharf 43, and quay 44 are acquired as image data by the input device 2.
- the comparison device 4 compares these with the image data and audio data stored in the database 3, and the recognition device 5 recognizes the waterway, the wharf 43, and the quay 44.
- the judgment device 7 determines the navigation route from the docking position on the quay 44, the current position of the vessel 41, and relevant laws and regulations such as the Port Regulations Law and the Maritime Traffic Safety Law, and the output device 6 notifies the pilot or passengers of the navigation route by text, images, sounds, or the like.
- in this way, even when the pilot or passengers do not know the shape of the port 45, the automatic transportation guidance device 1 makes the navigation route to the berthing position on the quay 44 easy to know, allowing the vessel to berth smoothly at the quay 44 and alleviating the congestion of ships in the port 45.
- Example 8: As shown in FIG. 5, when the input device 2 acquires another vessel 42, the wharf 43, the quay 44, shallow water, and the like in front of the own vessel 41 as image data and audio data, the comparison device 4 first compares these with the image data and audio data stored in the database 3, and the recognition device 5 recognizes the presence of the other vessel 42, the wharf 43, the quay 44, the shallows, and the like.
- the judgment device 7 then judges that an emergency stop or a change of course is necessary and, considering further that proceeding as is would be dangerous, instructs the output device 6 to take emergency stop or course change procedures for the vessel 41 immediately. As a result, the output device 6 causes the vessel 41 to stop immediately or change course, so that the vessel 41 automatically stops at a safe position or moves to a safe position.
- thus, even when the pilot or passengers do not notice that the other vessel 42, the wharf 43, the quay 44, the shallows, and the like are present ahead, or notice them but do not take appropriate measures, the device reliably notifies them that these objects are present and that sailing on as is would be dangerous, and can instruct and execute appropriate measures such as an automatic emergency stop or course change, preventing serious accidents such as marine accidents.
- any one or more of the input device 2, the database 3, the comparison device 4, the recognition device 5, the output device 6, the judgment device 7, the storage device 8, and the data updating device 9 constituting the device
- may be connected to the other devices constituting the automatic transportation guidance device 1 via a communication line. With such a configuration, it is also possible to control transportation vehicles remotely from a central control center or the like.
- according to the automatic transportation guidance device of the present invention, an enormous amount of image data, audio data, and the like can be processed at high speed, the ever-changing situation around the transportation vehicle can be automatically recognized, and the vehicle can be automatically guided along its operating route.
- the automatic transportation guidance device 1 of the present invention may further be provided with a plane development processing device 51.
- the plane development processing device 51 develops the perspective images of the surroundings of the operating transportation vehicle acquired by the input device 2 into a plane, converting them into plane images from which the perspective has been eliminated, and performs measurement processing of distance, area, and the like based on these. As shown in FIG. 6, it consists of a plane image conversion device 52, a plane image recognition device 53, a plane image combining device 54, and an image content measurement device 55.
- the surroundings of the vehicle are acquired by the input device 2 as a perspective image.
- the plane image conversion device 52 converts the acquired perspective image into a plane image by developing it into a plane and removing the perspective.
- the perspective image acquired by the video camera serving as the input device 2 is converted into a plane image by the following equations (1) and (2), using the variables shown in FIG. 7.
- y = v · 2^(1/2) · h · cos(π/4 − θ) · cos(β − θ) / (f · sin β)   (1)
- x = u · h · cos(β − θ) / (f · sin β)   (2)
- θ is the angle between the optical axis of the camera and the plane such as the road surface
- f is the focal length of the camera
- h is the height of the camera
- β is the angle between the plane such as the road surface and the line segment connecting the camera to the point at distance h + y from the point directly below the camera
- v is the vertical coordinate from the origin on the camera projection surface
- u is the horizontal coordinate from the origin on the camera projection surface
- y is the vertical coordinate on a plane such as the road surface, with the origin at the point at distance h from directly below the camera
- x is the horizontal coordinate on a plane such as the road surface, with the same origin
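Equations (1) and (2) above can be computed directly once u, v, h, f, θ, and β are known. In the sketch below, β is passed in as an input rather than derived, since the patent defines it geometrically per image point; this is an illustration of the reconstructed formulas, not the patent's implementation.

```python
import math

def image_to_plane(u, v, h, f, theta, beta):
    """Map image coordinates (u, v) to road-plane coordinates (x, y)
    using equations (1) and (2) as reconstructed above.

    h: camera height; f: focal length;
    theta: angle between optical axis and road plane (rad);
    beta: angle between the road plane and the ray to the target point (rad).
    """
    y = (v * math.sqrt(2) * h * math.cos(math.pi / 4 - theta)
         * math.cos(beta - theta) / (f * math.sin(beta)))
    x = u * h * math.cos(beta - theta) / (f * math.sin(beta))
    return x, y
```

Note that both coordinates are linear in the image coordinates u and v, as the formulas imply: doubling u doubles x, and doubling v doubles y, for fixed camera geometry.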
- the plane image recognition device 53 recognizes and specifies the content of the converted plane image data based on the result of comparison, by the comparison device 4, with the image data stored in the database 3.
- the plane image combining device 54 appropriately combines the converted plane image data to generate large-screen plane image data.
- the image content measuring device 55 measures various spatial physical quantities of the object recognized and specified by the planar image recognition device 53.
- the spatial physical quantities include position, distance, area, height, depth, volume, speed, acceleration, and the like.
- the perspective image data acquired by the input device 2, shown in FIG. 8(A), is converted by the plane image conversion device 52 into plane image data viewed vertically from above without perspective, as shown in FIG. 8(B).
- the plane image recognition device 53 then recognizes and specifies the content of the converted plane image data, recognizing and identifying that a traveling lane 56 exists in the plane image.
- various spatial physical quantities, such as the lane width and the parallelism of the lanes, are measured by the image content measurement device 55 with the traveling lane 56 as the object.
- furthermore, the traveling speed of the vehicle can be measured by measuring the moving distance per unit time.
- similarly, the perspective image data acquired by the input device 2, shown in FIG. 9(A), is converted by the plane image conversion device 52 into plane image data viewed vertically from above without perspective, as shown in FIG. 9(B).
- the plane image recognition device 53 recognizes and specifies the content of the converted plane image data, recognizing and identifying that a track 57 exists in the plane image.
- various spatial physical quantities, such as the rail width and the parallelism of the rails, are measured by the image content measurement device 55 with the track 57 as the object. Further, if the perspective image data are acquired in real time by the input device 2, the traveling speed of the own vehicle can be measured by measuring the moving distance per unit time.
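The speed measurement described above (moving distance per unit time across successive plane images) reduces to summing frame-to-frame displacements. A sketch with assumed units of metres and seconds; the function name is illustrative.

```python
import math

def speed_from_frames(positions, frame_interval):
    """Estimate speed (m/s) from per-frame positions (x, y), in metres,
    on the converted plane image, sampled every frame_interval seconds."""
    dists = [
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(positions, positions[1:])
    ]
    # Average distance per frame, divided by the time per frame.
    return sum(dists) / (frame_interval * len(dists))
```

Because the plane image has had perspective removed, distances on it are directly proportional to real distances, which is what makes this simple per-frame differencing valid.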
- in this way, in addition to the functions of the automatic transportation guidance device 1, various spatial physical quantities relating to the situation around the transportation vehicle can be acquired while it is operated. Further, based on the acquired spatial physical quantities, the automatic transportation guidance device 1 can issue more appropriate notifications and instructions.
- the plane development processing device 51 may not only have a function of converting a perspective image into a plane image but also have a function of converting a 360-degree omnidirectional image (spherical image) into a plane image. .
- the 360-degree omnidirectional image is an image that captures the entire surroundings of the operating transportation means, that is, all directions: front, rear, left, right, up, and down.
- such an image can be obtained by, for example: (1) compositing images captured by multiple cameras; (2) installing a curved mirror in front of the camera and capturing the image reflected in it; (3) rotating a single camera and compositing the images captured at each position; or (4) attaching a fisheye lens to the camera and processing the wide-range image it captures.
- the converted plane image can be regarded as a projection onto a plane viewed from a desired viewpoint.
- Figure 12 shows an example in which the surroundings of a car traveling on a road surface are acquired as a 360-degree omnidirectional image by the method described above and converted into a plane image viewed from above the car.
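The geometry behind sampling such a top-down view from a spherical image can be sketched as follows; the function name, the camera height, and the flat-ground assumption are illustrative, not taken from the specification:

```python
import math

def ground_to_sphere(x, y, cam_height):
    """Map a ground-plane point (x forward, y lateral, in metres) to the
    azimuth/elevation at which a camera at height cam_height sees it.
    Sampling the spherical (360-degree) image at these angles for a grid
    of ground points yields a top-down plan view of the surroundings.
    Assumes flat ground around the vehicle."""
    azimuth = math.atan2(y, x)                        # bearing of the point
    ground_dist = math.hypot(x, y)
    elevation = -math.atan2(cam_height, ground_dist)  # angle below the horizon
    return azimuth, elevation

# A point 2 m ahead of a camera mounted 2 m up is seen straight ahead,
# 45 degrees below the horizon.
az, el = ground_to_sphere(2.0, 0.0, 2.0)
```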
- a traffic information detection device 61 may be separately installed on the route of the transportation means, and the image data, measurement data, and the like acquired by the traffic information detection device 61 may be transmitted to the transportation automatic guidance device 1.
- the traffic information detection device 61 comprises an input device 2, a database 3, a comparison device 4, a recognition device 5, a storage device 8, a data updating device 9, and a plane development processing device 51, as in the transportation automatic guidance device 1, together with a graphic device 62.
- the graphic device 62 converts the image data and measurement data acquired from the input device 2 into computer graphics (CG). Next, an example in which image data, measurement data, and the like acquired by a traffic information detection device 61 installed on the operation route are transmitted to the transportation automatic guidance device 1 will be specifically described for the case where the transportation means is an automobile.
- the traffic information detection device 61 is installed, for example, on a gate 63, a lighting fixture 64, or the like attached to the road, as shown in the figure. Then, as shown in FIG. 14, the image data, measurement data, and the like acquired by the traffic information detection device 61 are first transmitted to a base station 65 installed for centralized management, then distributed to terminal stations 66 installed at appropriate locations, and from there sent to each vehicle equipped with the transportation automatic guidance device 1.
- the image data and measurement data distributed from the traffic information detection device 61 via the base station 65 and the terminal stations 66 to each vehicle equipped with the transportation automatic guidance device 1 are converted into computer graphic (CG) images.
- in each vehicle, the transportation automatic guidance device 1 displays these as easily understandable figures, symbols, characters, and the like on the output device 6, as shown in Fig. 15, which is very convenient.
- on the output screen, the coordinates of the own vehicle are fixed at an appropriate position, and the situation around the running vehicle, that is, the positions and displays of road signs, road markings, traffic information boards, other vehicles, and so on, changes moment by moment each time a traffic information detection device 61 is passed.
- what is displayed on the output screen is not limited to a two-dimensional computer graphic image; it may be a three-dimensional computer graphic image (3DCG), or an image synthesized with a real image developed onto a plane.
- the output screen shown in Fig. 15 illustrates the case where another vehicle violates, or is likely to violate, the preset safety zone of the own vehicle; the driver or passengers are notified by text, voice, or the like.
- recognition of traffic information transmitted from the traffic information detection device 61 to the transportation automatic guidance device 1 makes it easy to grasp traffic conditions on left and right roads that are blind spots from the own vehicle, particularly at intersections, as shown in Fig. 16, which is extremely useful for traffic safety measures.
- the output screen shown in Fig. 16 illustrates the case where another vehicle is located on a left or right road that is a blind spot from the own vehicle; the driver or passengers are notified by text, voice, or the like.
- if the transportation automatic guidance device 1 is provided with a device capable of accurately recognizing the positional relationship between the operating transportation means and its surroundings, the transportation means can be guided more effectively.
- the positional relationship recognition device 101 recognizes the three-dimensional position of the own vehicle using not only the white lines painted on the road but all the objects in the images acquired by the camera, and can thus position the vehicle three-dimensionally within its surrounding environment.
- in the positional relationship recognition device 101, not only the white lines painted on the road but all objects present in the video serve as clues for three-dimensional measurement. Moreover, it is not the object itself but parts of the object, detected and extracted by image processing, that are used as clues, and a plurality of parts likely to serve as clues for three-dimensional measurement are extracted from each object. Another characteristic is that, instead of specifying targets from the beginning, whatever targets are detected are used as clues.
- the clues in the video are automatically tracked by applying an image processing technique such as a matching method to each of the successive images generated as the vehicle travels.
- a predetermined calculation is performed to detect and recognize the three-dimensional position and direction of the vehicle.
- alternatively, multiple cameras may be mounted so that their fields of view overlap, three-dimensional measurement performed by detecting corresponding points in the overlapping fields of view, and three-dimensional coordinates formed from the surrounding clues; the three-dimensional position and direction of the vehicle can then be determined within these coordinates, or within a surrounding three-dimensional image acquired by another method.
- the position of the vehicle can also be obtained three-dimensionally from data on characteristic points acquired during past journeys, using those characteristic points as clues.
- the position and direction of the vehicle relative to areas not shown in the image can also be obtained by calculation.
- with the positional relationship recognition device 101, various functions become possible that cannot be achieved by detecting the vehicle position from white lines alone: the surrounding traffic situation can be determined and communicated to the operator, or the vehicle can be controlled directly on that basis, realizing more advanced automatic guidance and contributing effectively to traffic safety.
- the positional relationship recognition device 101 of the present invention detects the three-dimensional position of the vehicle and recognizes its positional relationship with the road surface, based on image information of the road surface, surrounding objects, and the like captured by an input device such as a video camera mounted on the vehicle.
- it comprises: an image acquisition unit 102 that acquires images using the input device mounted on the vehicle; an image temporary recording unit 103 that records the images for a certain period; a clue point automatic extraction unit 104 that automatically extracts clue points for obtaining corresponding points in the images; a corresponding point detection unit 105 that detects the corresponding points of a plurality of clue points across two or more images having a distance difference; an input device position/direction measuring unit 106 that calculates the position and direction of the input device from the detected corresponding points; and an actual measurement scale conversion unit 107 that converts the relative distances of the calculated three-dimensional input device positions into absolute distances using actually measured values.
- images from an input device such as a video camera mounted on the own vehicle are recorded, yielding images that change over time as the vehicle advances. The result is the same as if multiple cameras had been arranged side by side along the road on which the vehicle travels; in this way, even with only one camera, multiple parallaxes can be obtained by moving the camera.
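The "virtual stereo" idea above can be illustrated with the standard rectified-stereo relation Z = f·b/d, here with the baseline taken as the camera's travel between frames (the function name and the numbers are illustrative assumptions):

```python
def depth_from_motion_parallax(baseline_m, focal_px, disparity_px):
    """Triangulate depth from the parallax between two frames taken as the
    vehicle moves (a 'virtual stereo' pair), using the rectified-stereo
    relation Z = f * b / d.

    baseline_m   -- distance the camera travelled between the two frames
    focal_px     -- focal length expressed in pixels
    disparity_px -- shift of the clue point between the two frames
    """
    return focal_px * baseline_m / disparity_px

# A clue point shifting 40 px between frames taken 0.5 m apart, with an
# assumed focal length of 800 px, lies 10 m away.
z = depth_from_motion_parallax(0.5, 800.0, 40.0)
```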
- the clue point automatic extraction unit 104 extracts outlines in the image or divides it into color regions, so that characteristic portions of the image are automatically extracted as clue points.
- a clue point means a feature point in the image that serves as a corresponding point when finding correspondences between images taken at different points.
- the clue point may be the object itself, but it is more often a part of the object.
- the feature points mentioned here are not feature points for humans but for the computer that performs the image processing. This is extremely advantageous, because image processing requires only feature points: it is not necessary to specify from the beginning which clue points are to be detected, and whatever object, or part of an object, is easily detected at the site at that moment can be taken as a clue point.
- the corresponding point detection unit 105 and the input device position/direction measuring unit 106 detect a plurality of corresponding points by an image matching method or the like, and determine the position and direction of the input device, such as a camera, from the coordinates of each corresponding point.
- the speed of the input device, that is, the vehicle speed, as well as the vehicle acceleration and traveling direction, can thus be obtained.
- the camera position obtained by this calculation is a relative distance; to convert it to an absolute distance, a known distance is needed at one or more places among the coordinates of the corresponding points or vehicle positions being measured. It suffices to calibrate with a value that does not change as the vehicle travels, for example the installation height of the camera, or a distance in the image that is known from the beginning.
- if multiple cameras are used, the distance between the cameras can serve as the known distance, and the actual measurement scale conversion unit 107 converts the measured relative distances into absolute distances using it.
- the camera position and direction can be measured three-dimensionally.
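A sketch of the scale calibration just described, assuming the camera's known mounting height above the road is the one measured length (the function name and values are illustrative, not from the specification):

```python
def to_absolute_scale(relative_points, relative_cam_height, true_cam_height_m):
    """Convert a relative (scale-free) reconstruction to metric units using
    one known length -- here the camera's mounting height above the road.
    Every relative coordinate is multiplied by the same scale factor."""
    scale = true_cam_height_m / relative_cam_height
    return [(x * scale, y * scale, z * scale) for (x, y, z) in relative_points]

# If the reconstruction puts the camera 0.5 units above the road but it is
# really mounted at 2.0 m, every relative distance is scaled by 4.
pts = to_absolute_scale([(1.0, 2.0, 0.5)], 0.5, 2.0)
```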
- a corresponding point three-dimensional measuring unit 108 may be added to the positional relationship recognition device 101; it performs three-dimensional measurement of a plurality of clue points from their corresponding points in each image, and obtains the relationship between those points and the camera position as three-dimensional coordinates.
- the three-dimensional data of the clue points is obtained simultaneously in the course of the calculation that yields the three-dimensional position and direction of the camera.
- this makes it possible to arrange the camera positions within the three-dimensional positions, array, and distribution of the plurality of clue points; that is, the position and direction of the vehicle can be placed three-dimensionally in the same coordinate system as the three-dimensional distribution of the surrounding clue points.
- in other words, within three-dimensional coordinates that include the surrounding buildings, telephone poles, roads, and so on, the camera mounted on the vehicle, and hence the vehicle itself, can be positioned three-dimensionally.
- a three-dimensional data recording unit 109 for recording the three-dimensional coordinates of the corresponding point obtained by the corresponding point three-dimensional measuring unit 108 may be added.
- the clue points for which three-dimensional coordinates have already been acquired can then be used as an index for calculating the position and direction of the vehicle when it later travels around the same area.
- a three-dimensional data reading unit 110 that reads the accumulated three-dimensional data of the clue points from the three-dimensional data recording unit 109 on subsequent journeys, and a corresponding point comparison unit 111 that compares it with the three-dimensional data acquired during those journeys to find coincident points and increase the accuracy of the vehicle position calculation, may also be added.
- the three-dimensional data is read out from the three-dimensional data recording unit 109 by the three-dimensional data reading unit 110; if the coordinates of a previous clue point have changed each time the vehicle travels thereafter, the coordinates are updated and recorded again as new data.
- the data in the three-dimensional data recording unit 109 is thus updated and added to, increasing the number of clue points and improving their positional accuracy, so that the calculation accuracy of the vehicle position and direction also improves.
- the positional accuracy of the clue points thereby becomes extremely high. Furthermore, if three-dimensional map data in which the clue points are extended to pixel units is generated in advance by a dedicated device, a three-dimensional space is formed around the road being travelled, and the position and direction of the vehicle can be placed within it.
- an absolute coordinate conversion unit 112, which selects as corresponding points objects whose absolute coordinates are known and gives absolute coordinates to the three-dimensional data acquired by the input device position/direction measuring unit 106 and the corresponding point three-dimensional measuring unit 108, and a coordinate integration unit 113, which integrates the three-dimensional coordinates of the clue points existing in a certain area into the absolute coordinate system, may also be added. With the absolute coordinate conversion unit 112, for example, the absolute coordinates of the camera position can be obtained by GPS, or three-dimensional data obtained as relative distances can be converted to absolute coordinates using, as a clue, an object whose latitude and longitude are already known.
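One way such an absolute coordinate conversion could work, sketched for the planar (2-D) case with two landmarks of known absolute position; the similarity-transform approach and all names here are an illustrative assumption, not the patent's stated algorithm:

```python
def similarity_from_two_points(rel, abs_, p):
    """Register a relative 2-D reconstruction to absolute coordinates
    (e.g. map easting/northing) from two clue points whose absolute
    positions are known. Complex arithmetic encodes the 2-D similarity
    transform (rotation + uniform scale + translation) compactly."""
    r0, r1 = complex(*rel[0]), complex(*rel[1])
    a0, a1 = complex(*abs_[0]), complex(*abs_[1])
    s = (a1 - a0) / (r1 - r0)        # rotation and scale as one complex factor
    q = a0 + s * (complex(*p) - r0)  # map the query point into absolute coords
    return (q.real, q.imag)

# Two landmarks at relative (0,0) and (1,0) are known to sit at absolute
# (100,100) and (100,102): the frame is rotated 90 degrees and scaled by 2.
pt = similarity_from_two_points([(0, 0), (1, 0)], [(100, 100), (100, 102)],
                                (0.5, 0.0))
```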
- with the coordinate integration unit 113, the coordinates of several clue points can be unified and displayed in a common absolute coordinate system, and the three-dimensional data immediately acquires absolute coordinates.
- once a clue point has acquired absolute coordinates, it can be used as common three-dimensional data for that clue point on subsequent journeys or in other vehicles.
- when site data is matched against the accumulated three-dimensional data of the clue points, a match in the position, arrangement, and distribution of the clue points yields their absolute coordinates, from which the camera position, that is, the absolute coordinates of the vehicle, can be obtained immediately.
- if the map and the clue points are combined, a new database is created, and a map recording data on positions and directions along the road can be obtained.
- it is also possible to add a name and attribute adding unit 114, which records and stores the name and attributes of the object to which a clue point belongs in association with the position data of that clue point, adding them to the coordinate data of the clue point, and a database 115, which records the coordinates, names, and attributes of the added clue points in association with a map.
- if the name and attributes of the object to which a clue point belongs are known in advance, or if the name becomes known by image recognition, the name of the object, its general properties, and properties specific to that object can also be obtained as attribute data.
- if a clue point at the time of measurement is associated with a clue point in the database, the name and properties of the object at that clue point can be read out.
- a display unit 116 that displays the various calculation results as appropriate and informs the driver or others may also be added.
- by displaying the calculation results, the driver or administrator can observe them and judge the situation.
- a situation determination unit 117, which automatically determines the situation of the vehicle from the positional relationships between surrounding vehicles and between the vehicle and the road, and an automatic vehicle control unit 118, which uses the result of that determination to automatically perform the operations suited to the vehicle's purpose (brake operation, speed control, steering operation, alarm operation, and so on), may also be added.
- with the situation determination unit 117, the overall situation of the vehicle can be determined from the position and direction of the vehicle, the names and attributes of the objects to which the clue points belong, the positional relationship between the road surface and the vehicle, signs, road markings, and the like.
- the vehicle can then be operated automatically or semi-automatically via the automatic vehicle control unit 118; also, by judging the positional relationship between the surroundings and the vehicle, the vehicle position information can be conveyed to the driver and others.
- a multiple camera image acquisition unit 119, in which multiple cameras are installed and capture images so that their fields of view overlap in whole or in part, may also be added, together with a calibration unit 120 that uses both the three-dimensional distances calculated from the parallax due to the movement distance of a single camera and the three-dimensional distances calculated from the parallax between cameras, and that converts the three-dimensional distance data obtained by the single-camera movement parallax method into absolute distances, using as the reference length the three-dimensional distance data of clue points obtained from the overlapping fields of view of the multiple cameras.
- by overlapping the fields of view of the cameras, parallax between the cameras can be generated in the overlapping portion.
- the greatest feature of three-dimensional distance measurement by the multiple-camera overlapping-field method is that it can measure moving objects. The method also gives high accuracy when measuring short distances.
- on the other hand, the camera movement distance method makes the effective distance between camera positions substantially long, which is advantageous for three-dimensional measurement of long distances.
- the distance between cameras mounted on a vehicle is at most about 1 m, but the effective inter-camera distance in the camera movement distance method can be not only 1 m but 10 m or 100 m, and even 1 km or 10 km.
- therefore, short distances are measured three-dimensionally by the overlapping-field method, and long distances by the camera movement distance method, which can obtain large parallax.
- the principle of measurement is the same whether a single camera or multiple cameras are used.
- a feature of the multiple-camera system with overlapping fields of view is that the distance between the cameras can be taken as the reference length in parallax measurement, so absolute distances can be measured. Distances measured by the single-camera movement method can then be calibrated against the intermediate-distance data obtained from this parallax-based measurement, extending absolute measurement to long distances. That is, the position, speed, acceleration, and traveling direction of a vehicle traveling ahead can be measured.
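The calibration step described here, fitting a single scale factor that maps the motion-parallax distances onto the metric stereo distances of shared mid-range clue points, might look like the following least-squares sketch (names and data are illustrative assumptions):

```python
def calibrate_motion_distances(stereo_m, motion_rel):
    """Fit the scale factor k minimising sum((stereo - k*motion)^2) over
    clue points measured by BOTH methods: metric distances from the
    overlapping multi-camera parallax (stereo_m) and the same points'
    scale-free distances from the single-camera movement method
    (motion_rel). k can then be applied to far points seen only by the
    motion method."""
    num = sum(s * m for s, m in zip(stereo_m, motion_rel))
    den = sum(m * m for m in motion_rel)
    return num / den

k = calibrate_motion_distances([5.0, 10.0], [2.5, 5.0])  # shared mid-range points
far_metric = k * 40.0  # a far point at 40 relative units becomes metric
```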
- a shape coordinate attribute display unit 120 may be added, which represents the three-dimensional shapes of the objects to which the clue points belong by three-dimensional computer graphics, arranges them at their correct positions in a coordinate system defined on the display screen together with their names, attributes, and other data, and displays the other objects and the own vehicle on that screen, reproducing the three-dimensional space. A user interface unit 121 may also be added, with which the user can designate a target object by touching its three-dimensional computer graphics representation on the display screen or clicking it with a mouse, or, when only the real image is displayed, by touching the target object on the displayed real image or clicking it with the mouse; the name, coordinates, shape, and other related attribute data of the designated object are then displayed, data related to it can be input, and various operations and actions on the target object can be instructed.
- since the clue points correspond to the respective objects to which they belong, the vehicle and its surroundings can be represented by 3DCG (three-dimensional computer graphics) and displayed in the three-dimensional coordinate system. Of course, other attributes, and even objects having no clue points, can be displayed, because their shapes and position coordinates are known.
- by displaying the situation in this way, it can be judged more appropriately. Also, by designating an object on the displayed 3DCG screen by clicking it with the mouse or touching it by hand, or by giving an instruction directly by voice, the content of the instruction is understood by the voice recognition device, and the attributes of the target object are read from the database and displayed.
- the two-dimensional data of the real image and the two-dimensionally projected image data of the three-dimensional computer graphics are overlaid so that their shapes match. When only the real image is displayed and a target object is designated by clicking it with the mouse or touching it by hand, the data structure is configured so that the real image and the 3DCG image correspond to and overlap each other; by designating the data of the corresponding object in the 3DCG, the name, coordinates, attributes, and other related information of the object can be called from the database or written to it.
- an external communication unit 122 connected to another vehicle or another communication point via a communication line to transmit and receive information may be added.
- the three-dimensional clue point information and moving object information generated by the own vehicle can be transmitted, and the corresponding three-dimensional information and moving object information generated by other vehicles at different positions can be received from those vehicles.
- a fixed station can transmit clue point information analyzed from images acquired by cameras installed at fixed points in the surrounding area, position and speed information of vehicles including the own vehicle, and situation determination results such as traffic jam and accident information; the own vehicle can receive and display such information, including information it cannot acquire by itself, and can further add the received information to what it has acquired to determine the situation and display highly accurate information.
- next, the case where the positional relationship recognition device 101 recognizes the positional relationship between the operating transportation means and its surroundings, when the transportation means is a vehicle running on a road surface such as an automobile, will be specifically described.
- as shown in Fig. 19, four super-wide-angle cameras equipped with fisheye lenses are installed on the roof of the vehicle so that their fields of view partially overlap, and inter-vehicle distance and speed are measured using the overlapping parts of the cameras' fields of view.
- three-dimensional measurement of short-range clue points is performed from the parallax between the multiple cameras, while long-range clue points are measured three-dimensionally from the movement distance, by detecting their corresponding points as a single camera moves.
- the calculation based on the parallax between cameras and the calculation based on the motion parallax due to camera movement are basically the same kind of calculation; they differ only in which quantity is the unknown.
- in the case of motion parallax, the moving distance of the camera is an unknown; in the case of parallax between cameras, the distance between the cameras can be measured in advance and treated as a known quantity.
- it is convenient for the subsequent processing to develop the acquired images into a spherical coordinate format, as if pasted onto a spherical surface.
- as for the positional relationship between the cameras, it is desirable to measure the distance and direction between them accurately.
- however, the distance between the cameras can also be obtained by calculation as an unknown, so their positions may be set as appropriate.
- when the inter-camera distance is known, it can also serve as the reference for conversion to actually measured values.
- the direction of each camera can be treated as known, but if it is expected to shift slightly due to vibration or the like, depending on how the camera is fixed, the camera direction can be treated as an unknown.
- the number of clue points can be increased up to the number of pixels.
- one or more clue points can be accurately measured from the parallax of the cameras with overlapping fields of view and used as the reference value for conversion to the actual measurement scale; alternatively, the height of the camera above the road surface can be used as the reference value. An object of known length in the image can also be set as the reference length.
- corresponding clue points can be found by image recognition in the images captured simultaneously by the cameras, using the distance between the cameras as the known reference length.
- clue points can be automatically extracted in real time from the image by processing its outlines; for example, cross-shaped, triangular, and square intersections of contour lines are extracted from the outlined image. Since real-time processing is not required when collecting clue point data, a considerable number of clue points can be computed by offline processing, and it is even possible to extend them to every pixel.
- some of the many clue points are selected as characteristic points, and their corresponding points are determined by an image matching method or the like. Once a corresponding point is determined, the three-dimensional distance to the clue point can be obtained by calculation; at the same time, the position and direction of the camera are also calculated.
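A toy illustration of determining corresponding points by a matching method, here one-dimensional sum-of-squared-differences template matching (a real device would match 2-D image patches; the function name and data are illustrative):

```python
def match_clue_point(template, search_row):
    """Locate a clue point in a second image by a matching method: slide
    the template along a search row and return the offset with the
    smallest sum of squared differences (SSD)."""
    best, best_cost = 0, float("inf")
    w = len(template)
    for off in range(len(search_row) - w + 1):
        cost = sum((a - b) ** 2
                   for a, b in zip(template, search_row[off:off + w]))
        if cost < best_cost:
            best, best_cost = off, cost
    return best

# The intensity pattern [9, 1, 9] from the first image reappears at
# offset 3 in the second image's search row.
off = match_clue_point([9, 1, 9], [0, 0, 1, 9, 1, 9, 0])
```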
- since post-processing can be performed once a clue point is found, it is preferable to associate the three-dimensional position of the clue point with the map, link the name and attributes of the object to which it belongs, and record them in a database. Then, on subsequent journeys, merely finding the clue point reveals the name and attributes of the object to which the corresponding point belongs. Using the database, simply locating the position of a clue point makes it possible to recognize the object it belongs to, understand the situation in which the vehicle is placed, and decide the next action to take. Based on the result of that determination, the vehicle's direction, speed, and so on can be appropriately controlled, and the vehicle can be automatically and appropriately guided.
- the display screen functions as a user interface, allowing the user to designate a target vehicle, start communication, transmit and receive data, and identify vehicles.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/519,919 US7804424B2 (en) | 2002-07-03 | 2003-07-03 | Automatic guide apparatus for traffic facilities |
EP03741177A EP1536393A4 (en) | 2002-07-03 | 2003-07-03 | AUTOMATIC GUIDING APPARATUS FOR PUBLIC TRANSPORT |
AU2003281400A AU2003281400A1 (en) | 2002-07-03 | 2003-07-03 | Automatic guide apparatus for public transport |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002-194283 | 2002-07-03 | ||
JP2002194283 | 2002-07-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004006207A1 true WO2004006207A1 (ja) | 2004-01-15 |
Family
ID=30112299
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/008465 WO2004006207A1 (ja) | 2002-07-03 | 2003-07-03 | 交通機関自動案内装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US7804424B2 (ja) |
EP (1) | EP1536393A4 (ja) |
AU (1) | AU2003281400A1 (ja) |
WO (1) | WO2004006207A1 (ja) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE602004024930D1 (de) * | 2004-07-16 | 2010-02-11 | Fourie | Strassenzustands-informationsapparatus, -system und -verfahren |
US9207675B1 (en) * | 2005-02-11 | 2015-12-08 | Enovation Controls, Llc | Event sensor |
GB2437137A (en) * | 2006-04-03 | 2007-10-17 | Autoliv Development Ab | Drivers aid that sensors the surrounding of the vehicle, and with a positioning system compares the detected objects to predict the driving conditions |
JP4755556B2 (ja) * | 2006-09-04 | 2011-08-24 | クラリオン株式会社 | 車載装置 |
US20080137910A1 (en) * | 2006-11-27 | 2008-06-12 | Hanae Suzuki | Locating method for locating a predetermined spot on a road and a locating apparatus using the method |
JPWO2008099483A1 (ja) * | 2007-02-15 | 2010-05-27 | パイオニア株式会社 | 表示制御装置、表示制御方法、表示制御プログラムおよび記録媒体 |
US8862395B2 (en) * | 2011-01-31 | 2014-10-14 | Raytheon Company | Coded marker navigation system and method |
JP5594246B2 (ja) * | 2011-07-20 | 2014-09-24 | 株式会社デンソー | 車線認識装置 |
KR20130065114A (ko) * | 2011-12-09 | 2013-06-19 | 현대자동차주식회사 | Gps정보를 이용한 상대차량 위치파악방법 |
US9036025B2 (en) * | 2012-01-11 | 2015-05-19 | International Business Macines Corporation | System and method for inexpensive railroad track imaging for inspection |
CN102889892B (zh) * | 2012-09-13 | 2015-11-25 | 东莞宇龙通信科技有限公司 | 实景导航的方法及导航终端 |
US10410516B1 (en) | 2018-05-24 | 2019-09-10 | Veoneer Us, Inc. | Systems and methods for vehicle geofencing management |
US11270162B2 (en) * | 2018-10-30 | 2022-03-08 | Here Global B.V. | Method and apparatus for detecting objects of interest in an environment |
CN113022244B (zh) * | 2018-11-13 | 2022-08-09 | 黄奕坤 | 三栖智能驾驶电动轿车 |
JP7002823B2 (ja) * | 2018-12-06 | 2022-01-20 | アルパイン株式会社 | 案内音声出力制御システムおよび案内音声出力制御方法 |
US11624630B2 (en) * | 2019-02-12 | 2023-04-11 | International Business Machines Corporation | Using augmented reality to present vehicle navigation requirements |
US20230102205A1 (en) * | 2021-09-30 | 2023-03-30 | Lenovo (United States) Inc. | Road sign information presentation |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5844505A (en) * | 1997-04-01 | 1998-12-01 | Sony Corporation | Automobile navigation system |
JPH11271074A (ja) * | 1998-03-20 | 1999-10-05 | Fujitsu Ltd | Landmark image matching device, landmark image matching method, and program storage medium |
JP2000222681A (ja) * | 1999-01-29 | 2000-08-11 | Fujitsu Ltd | Road information system and recording medium |
JP2000283772A (ja) * | 1999-03-31 | 2000-10-13 | Matsushita Electric Ind Co Ltd | Traveling position display device |
JP2000293670A (ja) * | 1999-04-08 | 2000-10-20 | Asia Air Survey Co Ltd | Method and device for automatic recognition of road signs in video images, and storage medium storing a road sign automatic recognition program |
JP2001289631A (ja) * | 2000-04-06 | 2001-10-19 | The Nippon Signal Co., Ltd. | Distance measuring device and distance measuring method |
JP2002139327A (ja) * | 2000-11-02 | 2002-05-17 | Sony Corp | Navigation device, method for registering mark-associated action attributes, and method for executing actions in a navigation device |
JP2002163643A (ja) * | 2000-11-28 | 2002-06-07 | Toshiba Corp | Driving guidance device |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE59509929D1 (de) * | 1994-07-06 | 2002-01-24 | Volkswagen Ag | Method for determining visibility range, in particular for the movement of a motor vehicle |
DE19736774A1 (de) * | 1997-08-23 | 1999-02-25 | Robert Bosch GmbH | Method for displaying information in a motor vehicle |
DE19842176A1 (de) * | 1998-09-15 | 2000-03-16 | Robert Bosch GmbH | Method and device for traffic sign recognition and navigation |
US6213401B1 (en) * | 1998-11-19 | 2001-04-10 | Michael Louis Brown | Speed limit detecting system |
DE19952153A1 (de) * | 1999-10-29 | 2001-05-03 | Volkswagen Ag | Method and device for electronic recognition of traffic signs |
US6671615B1 (en) * | 2000-05-02 | 2003-12-30 | Navigation Technologies Corp. | Navigation system with sign assistance |
JP2002240659A (ja) * | 2001-02-14 | 2002-08-28 | Nissan Motor Co Ltd | Device for judging conditions around a vehicle |
JP4578795B2 (ja) * | 2003-03-26 | 2010-11-10 | Fujitsu Ten Ltd | Vehicle control device, vehicle control method, and vehicle control program |
US7444004B2 (en) * | 2004-03-29 | 2008-10-28 | Fujifilm Corporation | Image recognition system, image recognition method, and machine readable medium storing thereon an image recognition program |
JP4321821B2 (ja) * | 2005-01-28 | 2009-08-26 | Aisin AW Co., Ltd. | Image recognition device and image recognition method |
- 2003
- 2003-07-03 AU AU2003281400A patent/AU2003281400A1/en not_active Abandoned
- 2003-07-03 WO PCT/JP2003/008465 patent/WO2004006207A1/ja active Application Filing
- 2003-07-03 EP EP03741177A patent/EP1536393A4/en not_active Withdrawn
- 2003-07-03 US US10/519,919 patent/US7804424B2/en active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP1536393A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP1536393A4 (en) | 2009-11-18 |
US20060087453A1 (en) | 2006-04-27 |
EP1536393A1 (en) | 2005-06-01 |
AU2003281400A1 (en) | 2004-01-23 |
US7804424B2 (en) | 2010-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004006207A1 (ja) | Automatic transportation guidance device | |
CN110264783A (zh) | Vehicle collision-avoidance early-warning system and method based on vehicle-road cooperation | |
CN112307594B (zh) | Integrated system and method for road data collection and simulation scenario construction | |
CN110276988A (zh) | Driver-assistance system based on a collision early-warning algorithm | |
KR100675399B1 (ko) | Vehicle position information display device and method | |
CN106340197A (zh) | Vehicle-road cooperative driver-assistance system and method | |
CN106114217A (zh) | Travel control device | |
CN105008857A (zh) | Intelligent video navigation for automobiles | |
CN106652647A (zh) | Own-ship subsystem for a ship navigation training simulation system | |
CN106324618A (zh) | Lidar-based lane line detection system and its implementation method | |
JP4327062B2 (ja) | Navigation device | |
JP5898539B2 (ja) | Vehicle driving support system | |
CN110491156A (zh) | Sensing method, device, and system | |
CN114442101B (zh) | Vehicle navigation method, apparatus, device, and medium based on imaging millimeter-wave radar | |
CN110471085A (zh) | Track inspection system | |
JP3857698B2 (ja) | Driving environment recognition device | |
US11496707B1 (en) | Fleet dashcam system for event-based scenario generation | |
JP2004046875A (ja) | Automatic transportation guidance device | |
Palmer et al. | The autonomous Siemens tram | |
CN114397672A (zh) | Train active obstacle detection method and device based on positioning technology | |
Tagiew et al. | OSDaR23: Open sensor data for rail 2023 | |
Gehrig et al. | System architecture for an intersection assistant fusing image, map, and GPS information | |
JP7291015B2 (ja) | Surrounding object recognition method and surrounding object recognition device | |
JP2019049811A (ja) | Vehicle control device, vehicle control method, and program | |
JP4241350B2 (ja) | Mobile body auxiliary guidance device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2003741177 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2003741177 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2006087453 Country of ref document: US Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10519919 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 10519919 Country of ref document: US |