WO2021111858A1 - Moving body detection method, roadside device, and vehicle-mounted device - Google Patents
Moving body detection method, roadside device, and vehicle-mounted device
- Publication number
- WO2021111858A1 (PCT/JP2020/042726)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- moving body
- pedestrian
- observation device
- position information
- vehicle
- Prior art date
Classifications
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
Definitions
- the present disclosure relates to a moving body detection method for detecting a moving body existing in the vicinity, a roadside device, and an in-vehicle device.
- A vehicle detects moving bodies around it using in-vehicle sensors such as radar, LiDAR, and cameras.
- In an ITS (Intelligent Transport System), a roadside device installed at an intersection detects moving bodies (pedestrians, vehicles, etc.) existing on the road around the intersection using a roadside sensor such as radar, and provides information on those moving bodies to vehicles through road-to-vehicle communication. As a result, the vehicle side can recognize moving bodies that are out of sight from the vehicle.
- When the information on a moving body detected by the roadside device is provided to a vehicle such as an autonomous driving vehicle, the moving body detected by the roadside device and the moving body detected by the in-vehicle terminal may be erroneously determined to be different, so that the same moving body is recognized twice.
- the position information, the moving direction, and the moving locus of the moving body are generally used as the identification information.
- the mobile body is identified by comparing the identification information acquired by each of the roadside unit and the in-vehicle terminal.
- However, when the positioning error of the position information becomes large, the same moving body is erroneously determined to be a different moving body. A technique capable of performing the identification process of moving bodies accurately is therefore desired, but the above-mentioned conventional technique makes no proposal for suppressing such an erroneous determination in the identification process, and there was a problem that the identification could not be performed accurately.
- The main purpose of the present disclosure is to provide a moving body detection method, a roadside device, and an in-vehicle device capable of accurately determining whether or not the moving bodies detected by each of two observation devices are the same.
- the first observation device and the second observation device each detect a moving body existing on the road based on the output of a sensor, acquire the position information of the moving body, and detect the behavior characteristics of the moving body, and one of the first observation device, the second observation device, and a processing device different from the first and second observation devices performs an identification process that determines whether or not the moving bodies detected by the first observation device and the second observation device are the same, depending on whether or not the behavior characteristics generated by the first observation device and the second observation device match.
- the roadside device of the present disclosure detects a moving body existing on the road based on the output of the roadside sensor, acquires the position information of the moving body, detects the behavior characteristics of the moving body, and transmits the position information and the information regarding the behavior characteristics to the in-vehicle device as moving body identification information.
- the in-vehicle device of the present disclosure detects a moving body existing on the road based on the output of the in-vehicle sensor, acquires the position information of the moving body, detects the behavior characteristics of the moving body, and performs an identification process that determines whether or not the moving bodies detected by the roadside device and the own device are the same, depending on whether or not the behavior characteristics received from the roadside device and the behavior characteristics generated by the own device match.
- According to the present disclosure, the identification of the moving body, that is, whether or not the moving bodies detected by each of the two observation devices are the same, can be determined accurately.
- A sequence diagram showing a procedure of processing performed by the moving body detection system according to the first embodiment
- A block diagram showing a schematic configuration of the pedestrian terminal 5 according to the first embodiment
- An explanatory drawing showing the outline of the moving body detection system according to the second embodiment
- An explanatory drawing showing the outline of the moving body detection system according to the third embodiment
- A flow chart showing an operation procedure of the pedestrian terminal 5 according to the third embodiment
- An explanatory drawing showing the outline of the moving body detection system according to the fourth embodiment
- An explanatory drawing showing the outline of the moving body detection system according to the fifth embodiment
- A block diagram showing a schematic configuration of the pedestrian terminal 5 according to the sixth embodiment
- the first observation device and the second observation device detect a moving body existing on the road based on the output of the sensor, acquire the position information of the moving body, and detect the behavior characteristics of the moving body, and any one of the first observation device, the second observation device, and a processing device different from the first and second observation devices performs an identification process to determine whether or not the moving bodies detected by the first observation device and the second observation device are the same, depending on whether or not the behavior characteristics generated by the first observation device and the second observation device match.
- According to this, the identification of the moving body, that is, whether or not the moving bodies detected by each of the two observation devices are the same, can be determined accurately.
- the first observation device and the second observation device are configured to detect, as the behavior characteristics, at least one of the amplitude of the vertical movement of the moving body, the period of the vertical movement, the moving direction, and the moving speed.
- the moving body can be accurately identified based on the behavior characteristics of the moving body.
- the terminal device held by the mobile body transmits an identification signal having radio wave characteristics assigned to the own device, and the first observation device and the second observation device receive the identification signal.
- the moving body is identified based on the radio wave characteristics of the identification signal.
- the radio wave characteristics detected by each of the two observation devices are compared.
- the identification of the moving body can be performed more accurately. Further, even in a state of being shielded by an obstacle, the moving body can be identified based on the position of the radio wave transmission source, so that the accuracy of identifying the moving body can be improved.
- the first observation device and the second observation device generate a radio wave characteristic image that visualizes the radio wave characteristics of the received identification signal, and, based on the position information of the radio wave transmission source of the identification signal, superimpose and draw the radio wave characteristic image on the image captured by the camera to generate a moving body image including the radio wave characteristic image, and one of the first observation device, the second observation device, and the processing device compares, in the identification process, the moving body images including the radio wave characteristic images acquired by the first observation device and the second observation device.
- By extracting the moving body image from the image taken by the camera, the radio wave characteristic image corresponding to the moving body can be appropriately acquired. Further, by comparing the behavior characteristics of the moving body detected by each of the first observation device and the second observation device, and by comparing the moving body images including the radio wave characteristic image detected by each device, the identification signal can be distinguished from a reflected wave from the radio wave transmission source, and the accuracy of identification of the moving body can be further improved. Further, even when the radio wave characteristic image is the same for a plurality of moving bodies, the moving bodies can be identified based on the comparison of the moving body images including the radio wave characteristic image, so that the accuracy of identifying the moving body can be improved.
- the terminal device transmits the identification signal at a timing corresponding to the behavior characteristics of the moving body, and the first observation device and the second observation device acquire the behavior characteristics of the moving body based on the timing at which the identification signal is received.
- the behavior characteristics of the moving body such as vertical movement can be notified from the terminal device to the first observation device and the second observation device.
- the first observation device and the second observation device set a detection frame for the moving body on the captured image of the camera based on the position information of the moving body acquired from the output of the sensor, and extract a moving body image by cutting out the region of the detection frame from the captured image, and any one of the first observation device, the second observation device, and the processing device compares, in the identification process, the moving body images acquired by the first observation device and the second observation device.
- According to this, since the moving body images acquired by each of the two observation devices are compared, the identification of the moving body can be performed more accurately.
- any one of the first observation device, the second observation device, and the processing device compares, in the identification process, a plurality of the moving body images acquired at each time by the first observation device and the second observation device.
- the indicator light held by the moving body lights up with the lighting characteristics assigned to the own device, and the first observation device recognizes the indicator light from the captured image of the camera, detects the lighting characteristics, and identifies the moving body based on the lighting characteristics.
- any of the first observation device, the second observation device, and the processing device determines the position of the moving body detected by the first observation device and the second observation device by selecting, based on the distance from the first observation device to the moving body and the distance from the second observation device to the moving body, either the position information of the moving body acquired by the first observation device or the position information of the moving body acquired by the second observation device.
- the position of the moving body can be determined with the more accurate position information.
- In the tenth invention, when any one of the first observation device, the second observation device, and the processing device succeeds in the identification process, it shifts to a tracking mode in which the position of the moving body is determined by a tracking process, and when the tracking process fails, it returns to an identification mode in which the position of the moving body is determined by the identification process.
- the processing load can be reduced by omitting the identification process.
- the first observation device determines whether or not the moving body that is the source of the message and the moving body detected by using the sensor are the same, and, when the identification succeeds, transmits to the second observation device a message including instruction information for ignoring the message from the terminal device held by the identified moving body, and the second observation device determines the position of the moving body based on the position information of the moving body notified from the first observation device and the position information acquired by the own device.
- According to this, the highly accurate position information acquired by either the first observation device or the second observation device can be adopted, while the less accurate position information notified from the terminal device of the moving body is not adopted.
- the first observation device is a roadside device
- the second observation device is an in-vehicle device.
- the first observation device and the second observation device are roadside devices.
- In the fourteenth invention, a moving body existing on the road is detected based on the output of the roadside sensor, the position information of the moving body is acquired, and the behavior characteristics of the moving body are detected.
- the configuration is such that the position information and the information related to the behavior characteristics are transmitted to the in-vehicle device as mobile identification information.
- The fifteenth invention detects a moving body existing on the road based on the output of the vehicle-mounted sensor, acquires the position information of the moving body, detects the behavior characteristics of the moving body, and performs an identification process that determines whether or not the moving bodies detected by the roadside device and the own device are the same, depending on whether or not the behavior characteristics received from the roadside device and the behavior characteristics generated by the own device match.
- FIG. 1 is an overall configuration diagram of a mobile body detection system according to the first embodiment.
- This moving object detection system detects moving objects (pedestrians, vehicles, etc.) existing on the road and supports the driving of vehicle 1 (autonomous driving vehicle).
- This moving body detection system includes an in-vehicle terminal 2 (vehicle-mounted device, second observation device) mounted on the vehicle 1, an automatic driving ECU 3 (travel control device), a roadside device 4 (roadside device, first observation device) installed on the road, and a pedestrian terminal 5 (pedestrian device) possessed by a pedestrian.
- ITS communication is performed between the in-vehicle terminal 2, the pedestrian terminal 5, and the roadside device 4.
- This ITS communication is a wireless communication using a frequency band (for example, 700 MHz band or 5.8 GHz band) adopted in a safe driving support wireless system using ITS (Intelligent Transport System).
- a message including necessary information such as the position information of the vehicle 1 and the pedestrian is transmitted and received.
- the one performed between the in-vehicle terminals 2 is referred to as vehicle-to-vehicle communication
- the one performed between the roadside unit 4 and the in-vehicle terminal 2 is referred to as road-to-vehicle communication
- the in-vehicle terminal 2 and the roadside device 4 can also perform ITS communication (pedestrian-to-vehicle communication, pedestrian-to-roadside communication) with the pedestrian terminal 5.
- the in-vehicle terminal 2 transmits and receives a message including position information and the like to and from another in-vehicle terminal 2 by ITS communication (vehicle-to-vehicle communication), determines the risk of collision between vehicles 1, and, if there is a risk of collision, performs a warning activation operation for the driver. The warning activation operation may be performed using a car navigation device (not shown) connected to the in-vehicle terminal 2. Further, the in-vehicle terminal 2 transmits and receives a message to and from the pedestrian terminal 5 by ITS communication (pedestrian-to-vehicle communication), and determines the risk of collision between the pedestrian and the vehicle 1.
- the automatic driving ECU 3 detects obstacles around the vehicle 1 based on the output of the vehicle-mounted sensor 11, detects the state of the vehicle 1, and controls the running of the vehicle 1.
- the roadside device 4 notifies the in-vehicle terminal 2 and the pedestrian terminal 5 of the existence of a vehicle 1 or a pedestrian located in the vicinity of the own device by ITS communication (road-to-vehicle communication, road-to-pedestrian communication). This makes it possible to prevent a collision when turning left or right at an intersection outside the line of sight.
- the roadside machine 4 distributes traffic information to the in-vehicle terminal 2 and the pedestrian terminal 5.
- the pedestrian terminal 5 transmits and receives a message including position information and the like to and from the in-vehicle terminal 2 by ITS communication (pedestrian-vehicle communication), determines the risk of collision between the pedestrian and the vehicle 1, and determines the risk of collision. If there is a danger, a warning activation action for pedestrians will be performed.
- the vehicle 1 is equipped with an in-vehicle sensor 11 and an in-vehicle camera 12.
- the in-vehicle sensor 11 is a radar, a LiDAR, or the like.
- the in-vehicle terminal 2 detects a pedestrian (moving body) existing on the road around the own vehicle based on the output of the in-vehicle sensor 11.
- the in-vehicle camera 12 photographs the road around the own device.
- the vehicle-mounted camera 12 can also be used as the vehicle-mounted sensor 11.
- the roadside unit 4 includes a roadside sensor 42 and a roadside camera 43 in addition to an antenna 41 that transmits and receives radio waves for ITS communication.
- the roadside sensor 42 is a radar, a LiDAR, a camera, or the like.
- the roadside machine 4 detects a pedestrian (moving body) existing on the road around the own device based on the output of the roadside sensor 42.
- the roadside camera 43 photographs the road around the own device.
- the roadside camera 43 can also be used as the roadside sensor 42.
- In the present embodiment, the moving body to be processed is a pedestrian, but the moving body to be processed may be a vehicle.
- FIG. 2 is an explanatory diagram showing an outline of a moving object detection process and an identification process performed by the roadside machine 4. The same process is performed on the in-vehicle terminal 2.
- the roadside machine 4 detects a pedestrian (moving body) existing on the road around the own device based on the output of the roadside sensor 42 (moving body detection process).
- In the moving body detection process, the relative position information of the pedestrian with respect to the own device, that is, the direction in which the pedestrian exists as seen from the roadside device 4 and the distance from the roadside device 4 to the pedestrian, is obtained.
- the absolute position information (latitude, longitude) of the pedestrian is obtained based on the relative position information of the pedestrian and the position information (latitude, longitude) of the installation position of the own device.
- Further, three-dimensional position information of the pedestrian, that is, two-dimensional position information of the pedestrian and height information, is obtained based on the relative position information (direction, distance) of the pedestrian.
- As the two-dimensional position information, absolute position information (latitude, longitude) may be used, or position information on the horizontal plane (XY plane) of an appropriate coordinate system may be used.
- the height information is information on the height from the road surface under the feet of the representative point of the pedestrian's body.
- the road surface under the feet of a pedestrian is set as a reference horizontal plane (XY plane).
- It is preferable to use the center point of the person's upper body as the representative point.
- Further, the distance information of the pedestrian (the distance from the roadside device 4 to the pedestrian) is obtained based on the position information regarding the installation position of the roadside device 4 and the position information of the pedestrian.
- Note that the roadside device 4 may detect a pedestrian (moving body) from the image captured by the camera by image recognition using a machine learning model such as deep learning, acquire the position information (coordinates) of the pedestrian on the captured image, and acquire the absolute position information (latitude, longitude) of the pedestrian from the position information of the pedestrian on the captured image.
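- The conversion from the relative position (direction and distance as seen from the roadside device) to absolute latitude and longitude described above can be illustrated with the following Python sketch. It assumes a local equirectangular approximation and uses hypothetical function and parameter names; it is not taken from the specification.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius

def pedestrian_lat_lon(device_lat, device_lon, bearing_deg, distance_m):
    """Convert a pedestrian's relative position (bearing and distance as seen from
    the roadside device) into absolute latitude/longitude using a local
    equirectangular approximation, which is adequate at intersection scale."""
    bearing = math.radians(bearing_deg)        # 0 deg = north, clockwise
    north_m = distance_m * math.cos(bearing)   # displacement toward north
    east_m = distance_m * math.sin(bearing)    # displacement toward east
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(device_lat))))
    return device_lat + dlat, device_lon + dlon

# Example: a pedestrian 25 m to the north-east of the installation position
print(pedestrian_lat_lon(35.6812, 139.7671, 45.0, 25.0))
```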
- In the in-vehicle terminal 2, it is determined whether or not the pedestrians detected by the in-vehicle terminal 2 and the roadside device 4 are the same (identification process).
- Specifically, moving body identification information for identifying a pedestrian is acquired by each of the in-vehicle terminal 2 and the roadside device 4, and the pedestrian is identified by comparing the moving body identification information acquired by the in-vehicle terminal 2 with that acquired by the roadside device 4.
- In the present embodiment, the pedestrian is identified by using, as moving body identification information, the position information of the pedestrian acquired by each of the in-vehicle terminal 2 and the roadside device 4. Further, in the present embodiment, the pedestrian is identified by using the behavior characteristics of the pedestrian (the amplitude and period of the vertical movement of the body, the moving direction, the moving speed, etc.) as moving body identification information. Further, in the present embodiment, each pedestrian terminal 5 transmits an identification signal with the radio wave characteristics assigned to it, and the radio wave characteristics of the identification signal are used as moving body identification information to identify the pedestrian.
- In the identification process, the position information of the pedestrian acquired by each of the in-vehicle terminal 2 and the roadside device 4 is compared, the behavior characteristics of the pedestrian are compared, and the radio wave characteristics are compared; if these items match, it is determined that the pedestrians detected by the in-vehicle terminal 2 and the roadside device 4 are the same. Therefore, even if some of the comparison items are determined not to match due to a measurement error or the like, it is sufficient if the other comparison items match, and the pedestrian can be identified accurately.
- Note that pedestrians detected by the roadside device 4 but not among the pedestrians detected by the in-vehicle terminal 2, that is, pedestrians who cannot be seen from the vehicle due to a shield such as another moving body (pedestrian, vehicle) or a building, are excluded from the identification. For example, pedestrian A is detected by both the roadside device 4 and the in-vehicle terminal 2, whereas pedestrian B is blocked by another vehicle and is not detected by the in-vehicle terminal 2, and is therefore outside the target of identification.
- When the moving body identification information (position information, behavior characteristics, radio wave characteristics) acquired by each of the in-vehicle terminal 2 and the roadside device 4 is compared, it may be determined that the pedestrians are the same if the difference in the identification information is within a permissible range (within the range of the error).
- In the present embodiment, the position information, behavior characteristics, and radio wave characteristics of the moving body are used as moving body identification information to identify the pedestrian, but other features, such as the size of the detected moving body, can also be used as moving body identification information.
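- As an illustration of the identification process described above, the following Python sketch compares the kinds of moving body identification information (position, behavior characteristics, radio wave characteristics) item by item with permissible tolerances and accepts a majority of matches, so that a single item disturbed by measurement error does not break the identification. The field names, tolerance values, and the majority rule are assumptions, not taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class MovingBodyId:
    """Moving body identification information gathered by one observation device
    (field names are illustrative)."""
    lat: float
    lon: float
    bob_amplitude_m: float   # amplitude of the vertical (up-down) movement
    bob_period_s: float      # period of the vertical movement
    heading_deg: float
    speed_mps: float
    frequency_mhz: float     # radio wave characteristic of the identification signal
    tx_cycle_ms: float

def same_pedestrian(a: MovingBodyId, b: MovingBodyId,
                    pos_tol_m=3.0, amp_tol_m=0.05, period_tol_s=0.1,
                    heading_tol_deg=20.0, speed_tol_mps=0.3) -> bool:
    """Each comparison item matches when its difference is within a tolerance;
    a majority of matching items is treated as 'the same pedestrian'."""
    approx_pos_err_m = (abs(a.lat - b.lat) + abs(a.lon - b.lon)) * 111_000  # rough deg -> m
    d_heading = abs(a.heading_deg - b.heading_deg) % 360
    d_heading = min(d_heading, 360 - d_heading)                             # wrap-around
    checks = [
        approx_pos_err_m <= pos_tol_m,
        abs(a.bob_amplitude_m - b.bob_amplitude_m) <= amp_tol_m,
        abs(a.bob_period_s - b.bob_period_s) <= period_tol_s,
        d_heading <= heading_tol_deg,
        abs(a.speed_mps - b.speed_mps) <= speed_tol_mps,
        a.frequency_mhz == b.frequency_mhz and a.tx_cycle_ms == b.tx_cycle_ms,
    ]
    return sum(checks) >= 4  # majority of the six items
```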
- FIG. 3 is an explanatory diagram showing an outline of the behavior characteristic acquisition process performed by the roadside machine 4. The same process is performed on the in-vehicle terminal 2.
- The behavior characteristics of the pedestrian (moving body) detected by each of the in-vehicle terminal 2 and the roadside device 4 are compared, and it is determined whether or not the pedestrians detected by the in-vehicle terminal 2 and the roadside device 4 are the same (identification process).
- As the behavior characteristics, the amplitude and period of the pedestrian's body movement (vertical movement) in the vertical direction (Z-axis direction), and the pedestrian's moving direction and moving speed on the horizontal plane (XY plane), are acquired. Since such behavior characteristics differ for each pedestrian (moving body), they can be used as pedestrian identification information. For example, in the case of a pedestrian, the body moves up and down during walking, so vertical movement is detected, whereas in the case of a vehicle there is almost no vertical movement. In addition, the amplitude of the vertical movement is small for children and large for adults.
- the behavior characteristics of the pedestrian are detected by using the three-dimensional position information of the pedestrian, that is, the two-dimensional position information of the pedestrian and the height information. Specifically, the amplitude and cycle of the vertical movement of the pedestrian's body are detected based on the change state of the pedestrian's height information. Further, the movement direction and the movement speed of the pedestrian are obtained based on the change state of the two-dimensional position information of the pedestrian.
- In the present embodiment, the behavior characteristics of the pedestrian are detected based on the three-dimensional position information of the pedestrian acquired from the detection results of the roadside sensor 42 and the in-vehicle sensor 11, but the behavior characteristics of the pedestrian may instead be detected based on the position information of the radio wave transmission source acquired by the radio wave transmission source detection process, that is, the position information of the radio wave characteristic image.
- the movement locus obtained by connecting the positions of the pedestrians at each time may be acquired as a behavior characteristic.
- the three-dimensional movement locus of the pedestrian on the XYZ space may be acquired.
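- A minimal sketch of the behavior characteristic acquisition described with reference to FIG. 3, assuming a short time series of three-dimensional pedestrian positions is available: the amplitude and period of the vertical movement are estimated from the height signal, and the moving direction and speed from the change in two-dimensional position. The function name and the FFT-based period estimate are illustrative choices, not taken from the specification.

```python
import numpy as np

def behavior_characteristics(t, x, y, z):
    """Estimate (amplitude, period, heading_deg, speed) from a short time series of
    3-D positions: t in seconds, x/y in metres on the horizontal plane, z = height
    of the body's representative point above the road surface."""
    t, x, y, z = map(np.asarray, (t, x, y, z))

    # Amplitude of the vertical movement: half the peak-to-peak height swing.
    amplitude = (z.max() - z.min()) / 2.0

    # Period of the vertical movement: dominant frequency of the detrended height signal.
    dt = np.mean(np.diff(t))
    spectrum = np.abs(np.fft.rfft(z - z.mean()))
    freqs = np.fft.rfftfreq(len(z), d=dt)
    dominant = freqs[1:][np.argmax(spectrum[1:])]      # skip the DC component
    period = 1.0 / dominant if dominant > 0 else float("inf")

    # Moving direction and speed from the change in 2-D position.
    dx, dy = x[-1] - x[0], y[-1] - y[0]
    heading_deg = np.degrees(np.arctan2(dx, dy)) % 360  # 0 deg = +Y axis, clockwise
    speed = np.hypot(dx, dy) / (t[-1] - t[0])
    return amplitude, period, heading_deg, speed
```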
- FIG. 4 is an explanatory diagram showing an outline of radio wave characteristic image generation processing performed by the roadside machine 4. The same process is performed on the in-vehicle terminal 2.
- the pedestrian terminal 5 transmits an identification signal of the radio wave characteristic assigned to each person, and the pedestrian is identified by using this radio wave characteristic as the moving object identification information.
- the roadside machine 4 generates a radio wave characteristic image that visualizes the radio wave characteristics of the identification signal. This radio wave characteristic image is drawn in a color and shape corresponding to the radio wave characteristics of each radio wave transmission source (pedestrian terminal 5).
- the radio wave characteristic image is composed of a circular ring and a + mark.
- the + mark indicates the position of the detected radio wave transmission source (pedestrian terminal 5).
- the ring represents the frequency of the identification signal and the transmission cycle of the identification signal (the interval at which the identification signal is transmitted).
- the ring is drawn around the + mark.
- For example, when the frequency is 700 MHz the ring is drawn in red, and when the frequency is 800 MHz the ring is drawn in blue.
- When the frequency is switched between 700 MHz and 800 MHz, the red ring and the blue ring are drawn in an overlapping state.
- In the present embodiment, the radio wave characteristic image is composed of a circular ring and a + mark, and the frequency and transmission cycle of the identification signal are represented by the color of the ring (red, blue) and its form (single, double); however, the radio wave characteristic image is not limited to such a configuration, and other configurations are possible as long as the frequency and transmission cycle of the identification signal can be identified.
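- One possible way to draw such a radio wave characteristic image, using Pillow, is sketched below: a '+' mark at the transmission-source position surrounded by one ring per frequency (red for 700 MHz, blue for 800 MHz, two overlapping rings when the terminal switches between them). The sizes, exact colors, and how the transmission cycle would be encoded are illustrative assumptions.

```python
from PIL import Image, ImageDraw

FREQ_COLORS = {700: (255, 0, 0, 180), 800: (0, 0, 255, 180)}  # red / blue, semi-transparent

def radio_characteristic_image(frequencies_mhz, size=64):
    """Draw a '+' mark at the transmission-source position (image centre here) and
    one ring per frequency around it; overlapping rings indicate frequency switching."""
    img = Image.new("RGBA", (size, size), (0, 0, 0, 0))
    draw = ImageDraw.Draw(img)
    c = size // 2
    # '+' mark at the transmission-source position
    draw.line([(c - 5, c), (c + 5, c)], fill=(255, 255, 0, 255), width=2)
    draw.line([(c, c - 5), (c, c + 5)], fill=(255, 255, 0, 255), width=2)
    # one ring per frequency, slightly different radii so overlapping rings stay visible
    for i, f in enumerate(frequencies_mhz):
        r = size // 3 + 4 * i
        draw.ellipse([c - r, c - r, c + r, c + r],
                     outline=FREQ_COLORS.get(f, (128, 128, 128, 180)), width=3)
    return img

marker = radio_characteristic_image([700, 800])  # terminal alternating between 700/800 MHz
```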
- FIG. 5 is an explanatory diagram showing an outline of the image composition processing performed by the roadside machine 4. The same process is performed on the in-vehicle terminal 2.
- Based on the reception status of the identification signal transmitted from the pedestrian terminal 5 (radio wave transmission source), the relative position information of the radio wave transmission source with respect to the roadside device 4 can be obtained.
- Based on the relative position information of the radio wave transmission source, the roadside device 4 acquires the position information (coordinates) of the radio wave transmission source on the captured image of the roadside camera 43 shown in FIG. 5A. Then, the radio wave characteristic image is superimposed and drawn at the position of the radio wave transmission source on the captured image, and the composite image shown in FIG. 5B is generated.
- When the radio wave characteristic images for the respective pedestrians are displayed at overlapping positions, the radio wave characteristic images are drawn in a semi-transparent state.
- the radio wave characteristics of each pedestrian can be recognized from the radio wave characteristic images even when the radio wave characteristic images of each pedestrian overlap.
- When the radio wave characteristic image is located within the detection frame of the pedestrian on the image captured by the camera, the radio wave characteristic image visualizes the identification signal transmitted from that pedestrian's pedestrian terminal 5. On the other hand, a radio wave characteristic image located outside the detection frame of the pedestrian corresponds to a radio wave transmission source other than the pedestrian terminal 5, or to a reflected wave in which the identification signal transmitted from the pedestrian terminal 5 is reflected by a wall or the like. Therefore, whether or not the radio wave transmission source of a radio wave characteristic image is the pedestrian terminal 5 can be determined by whether or not the radio wave characteristic image is located within the detection frame of the pedestrian.
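- The image composition and the in-frame check described above can be sketched as follows (Pillow-based, with hypothetical function names): the semi-transparent radio wave characteristic image is alpha-composited at the pixel position of the transmission source, and a simple bounding-box test decides whether the source should be attributed to the pedestrian or treated as another source or a reflected wave.

```python
from PIL import Image

def composite_radio_marker(camera_image: Image.Image, marker: Image.Image, src_xy):
    """Superimpose the semi-transparent radio wave characteristic image on the camera
    image, centred on the pixel position of the radio wave transmission source."""
    out = camera_image.convert("RGBA")
    x, y = src_xy
    top_left = (int(x - marker.width / 2), int(y - marker.height / 2))
    out.alpha_composite(marker, dest=top_left)
    return out

def source_inside_detection_frame(src_xy, frame):
    """Return True when the transmission source lies inside the pedestrian detection
    frame (left, top, right, bottom): then the identification signal is attributed to
    that pedestrian's terminal; otherwise it is treated as another source or a
    reflected wave."""
    x, y = src_xy
    left, top, right, bottom = frame
    return left <= x <= right and top <= y <= bottom
```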
- FIG. 6 is an explanatory diagram showing an outline of a moving body image extraction process performed by the roadside machine 4. The same process is performed on the in-vehicle terminal 2.
- In the moving body image extraction process, the position information of the pedestrian on the captured image of the roadside camera 43 is obtained based on the relative position information (direction, distance) of the pedestrian acquired from the output of the roadside sensor 42. Then, as shown in FIG. 6A, a pedestrian detection frame is set on the captured image based on the position information of the pedestrian on the captured image (frame setting process).
- the image area of the detection frame of the moving body is cut out from the captured image, and the moving body image is extracted.
- the moving body image in which the radio wave characteristic image is superimposed and drawn can be acquired by cutting out the image area of the detection frame of the moving body in the state where the radio wave characteristic image is superimposed and drawn on the captured image.
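- A minimal sketch of the frame setting and moving body image extraction, assuming the pedestrian's pixel position on the (already composited) captured image is known; the fixed frame size is purely illustrative, since in practice it would follow from the sensor-derived distance.

```python
from PIL import Image

def extract_moving_body_image(composite: Image.Image, ped_xy, box_w=80, box_h=160):
    """Set a detection frame around the pedestrian's position on the composited
    captured image and cut the frame region out as the moving body image."""
    x, y = ped_xy                       # pedestrian position on the captured image
    left, top = int(x - box_w / 2), int(y - box_h / 2)
    frame = (left, top, left + box_w, top + box_h)
    return composite.crop(frame), frame
```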
- FIG. 7 is an explanatory diagram showing an outline of a position determination process performed by the in-vehicle terminal 2 and an identification process performed by the roadside machine 4.
- When the identification process is successful, the in-vehicle terminal 2 selects one of the pedestrian position information acquired by the roadside device 4 and the pedestrian position information acquired by the in-vehicle terminal 2, based on which has the higher accuracy (reliability), and determines the position of the pedestrian.
- On the other hand, when the identification process is not successful, the in-vehicle terminal 2 treats the pedestrians corresponding to the position information acquired by the roadside device 4 and the position information acquired by the in-vehicle terminal 2 as different persons, and the position of the pedestrian it detected is determined from the position information acquired by the in-vehicle terminal 2.
- In the present embodiment, the in-vehicle terminal 2 selects the position information based on the distance from the observing subject (in-vehicle terminal 2 or roadside device 4) to the pedestrian (moving body). Specifically, since the closer the observing subject is to the pedestrian, the higher the accuracy (reliability) of its position information is considered to be, the in-vehicle terminal 2 compares the distance from the roadside device 4 to the pedestrian with the distance from the vehicle to the pedestrian, and selects the position information acquired by the observing subject whose distance to the pedestrian is shorter.
- Since the vehicle and the pedestrian move, which of the pedestrian position information acquired by the roadside device 4 and the pedestrian position information acquired by the in-vehicle terminal 2 is selected changes according to the positional relationship among the vehicle, the pedestrian, and the roadside device 4. For example, when the vehicle is far from the pedestrian, the position information acquired by the roadside device 4 is selected, and when the vehicle approaches the pedestrian, the position information acquired by the in-vehicle terminal 2 is selected.
- When the identification is not successful, both pieces of position information are selected, with each pedestrian treated as a different person.
- In the present embodiment, the position information is selected based on the distance from the observing subject (the in-vehicle terminal 2 or the roadside device 4) to the pedestrian; however, when the position information acquired by the roadside device 4 is always more accurate than that acquired by the in-vehicle terminal 2, the position information acquired by the roadside device 4 may always be selected.
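- The position determination rule described above (when identification succeeded, adopt the position reported by the observer closer to the pedestrian; otherwise fall back to the in-vehicle result) can be sketched as follows; the function signature is an assumption.

```python
def determine_position(roadside_pos, roadside_dist_m, onboard_pos, onboard_dist_m,
                       identified: bool):
    """Select the pedestrian position on the assumption that a shorter observation
    distance means higher accuracy; use the in-vehicle result when the roadside and
    in-vehicle detections could not be identified as the same pedestrian."""
    if not identified:
        return onboard_pos
    return roadside_pos if roadside_dist_m <= onboard_dist_m else onboard_pos
```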
- The pedestrian terminal 5 periodically broadcasts a message including the position information of the pedestrian. The in-vehicle terminal 2 and the roadside device 4 can then acquire the position information of the pedestrian by receiving the message transmitted from the pedestrian terminal 5. However, the accuracy of the pedestrian position information acquired by the pedestrian terminal 5 is low. Therefore, the in-vehicle terminal 2 may determine that the pedestrian detected by the roadside device 4 using the roadside sensor 42 and the pedestrian possessing the pedestrian terminal 5 that transmitted the message are different persons, and recognize the same pedestrian twice.
- Therefore, in the present embodiment, the roadside device 4 performs an identification process to determine whether or not the pedestrian possessing the pedestrian terminal 5 from which the message was transmitted and the pedestrian detected by the roadside device 4 are the same. When the identification is successful, the roadside device 4 instructs the in-vehicle terminal 2 to ignore the message from the pedestrian terminal 5 possessed by the identified pedestrian. Specifically, in the message transmitted from the roadside device 4 to the in-vehicle terminal 2, the roadside device 4 includes the pedestrian's position information, behavior characteristic information, and radio wave characteristic image as detection results for the identified pedestrian, and adds, as instruction information for ignoring the message from the corresponding pedestrian terminal 5, the message ID (ignore message ID) given to the message transmitted from that pedestrian terminal 5. The message ID is information that identifies the pedestrian terminal 5 that is the source of the message.
- As a result, the in-vehicle terminal 2 is prevented from judging the pedestrian detected by the roadside device 4 using the roadside sensor 42 and the pedestrian possessing the pedestrian terminal 5 from which the message was transmitted to be different persons, so that recognizing the same pedestrian twice can be avoided.
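- A sketch of how the in-vehicle terminal 2 could apply the ignore message ID: the IDs announced by the roadside device are remembered, and pedestrian-terminal messages carrying one of those IDs are dropped so the same pedestrian is not recognized twice. Class and field names are illustrative.

```python
class PedestrianMessageFilter:
    """Keeps the 'ignore message IDs' announced by the roadside device and drops the
    corresponding (less accurate) messages coming directly from pedestrian terminals."""

    def __init__(self):
        self.ignore_ids = set()

    def on_roadside_message(self, msg: dict):
        # Remember the ignore message ID when the roadside device identified the pedestrian.
        if msg.get("ignore_message_id") is not None:
            self.ignore_ids.add(msg["ignore_message_id"])

    def accept_pedestrian_message(self, msg: dict) -> bool:
        # A pedestrian-terminal message already covered by the roadside detection
        # result is ignored, so the same pedestrian is not recognised twice.
        return msg["message_id"] not in self.ignore_ids
```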
- FIG. 8 is a sequence diagram showing a procedure of processing performed by the mobile detection system.
- the pedestrian terminal 5 periodically transmits an identification signal based on the radio wave characteristics (frequency, transmission cycle) assigned to the own device. Further, the pedestrian terminal 5 periodically transmits a message of ITS communication.
- This message includes pedestrian position information (latitude, longitude), radio wave characteristic information (frequency, transmission cycle) regarding the identification signal transmitted from the own device, and a message ID that identifies the source of the message.
- the above-mentioned mobile object detection process and behavior characteristic acquisition process are periodically performed. Further, when the roadside machine 4 receives the identification signal from the pedestrian terminal 5, the radio wave transmission source detection process, the radio wave characteristic image generation process, the image composition process, and the moving body image extraction process are performed. Further, when the roadside machine 4 receives the message from the pedestrian terminal 5, the identification process is performed. Then, the roadside machine 4 transmits a message of ITS communication.
- This message includes the pedestrian position information (latitude, longitude) detected by the roadside device 4, the moving body image including the radio wave characteristic image, the radio wave characteristic information, the moving body behavior information (amplitude, period, etc.), the distance information (distance from the own device to the pedestrian), and, as instruction information for ignoring the message from the pedestrian terminal 5 possessed by the identified pedestrian, the message ID (ignore message ID) included in the message received from that pedestrian terminal 5. If the identification is not successful, that is, if the pedestrian possessing the pedestrian terminal 5 from which the message was sent is different from the pedestrian detected using the roadside sensor 42, the ignore message ID is not added.
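- An illustrative layout of this roadside-to-vehicle message as a Python data class is shown below; the field names are assumptions and do not come from the specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoadsideMessage:
    """Illustrative layout of the ITS message sent from the roadside device to the
    in-vehicle terminal."""
    pedestrian_lat: float
    pedestrian_lon: float
    moving_body_image: bytes        # cut-out image including the radio wave characteristic image
    frequency_mhz: float            # radio wave characteristic information
    tx_cycle_ms: float
    bob_amplitude_m: float          # moving body behavior information
    bob_period_s: float
    heading_deg: float
    speed_mps: float
    distance_m: float               # distance from the roadside device to the pedestrian
    ignore_message_id: Optional[str] = None   # set only when the identification succeeded
```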
- The moving body detection process and the behavior characteristic acquisition process are periodically performed. Further, when the in-vehicle terminal 2 receives the identification signal from the pedestrian terminal 5, the radio wave transmission source detection process, the radio wave characteristic image generation process, the image composition process, and the moving body image extraction process are performed, as in the roadside device 4. Further, when the in-vehicle terminal 2 receives the message from the pedestrian terminal 5 and the message from the roadside device 4, the identification process and the position determination process are performed.
- FIG. 9 is a block diagram showing a schematic configuration of the pedestrian terminal 5.
- the pedestrian terminal 5 includes an ITS communication unit 51, an identification signal transmission unit 52, a positioning unit 53, a memory 54, and a processor 55.
- The ITS communication unit 51 transmits and receives messages to and from the in-vehicle terminal 2 and the roadside device 4 by ITS communication (pedestrian-to-vehicle communication, pedestrian-to-roadside communication). When a message is transmitted, it is broadcast.
- the identification signal transmission unit 52 transmits an identification signal based on the radio wave characteristics (frequency, transmission cycle) assigned to the own device at regular intervals.
- The positioning unit 53 measures the position of its own device by GNSS (Global Navigation Satellite System), that is, a satellite positioning system such as GPS (Global Positioning System) or QZSS (Quasi-Zenith Satellite System), and acquires the position information (latitude, longitude) of its own device.
- the memory 54 stores a program or the like executed by the processor 55.
- By executing the program stored in the memory 54, the processor 55 performs various processes, for example, processes related to message transmission and reception by the ITS communication unit 51 and processes related to transmission of the identification signal by the identification signal transmission unit 52.
- FIG. 10 is a block diagram showing a schematic configuration of the roadside machine 4.
- the roadside machine 4 includes an ITS communication unit 44, an identification signal receiving unit 45, a memory 46, and a processor 47, in addition to the roadside sensor 42 and the roadside camera 43.
- The ITS communication unit 44 transmits and receives messages to and from the in-vehicle terminal 2 and the pedestrian terminal 5 by ITS communication (road-to-vehicle communication, road-to-pedestrian communication). When a message is transmitted, it is broadcast.
- the identification signal receiving unit 45 receives the identification signal transmitted from the pedestrian terminal 5.
- the memory 46 stores a program or the like executed by the processor 47.
- the processor 47 performs various processes related to information collection by executing the program stored in the memory 46.
- the processor 47 performs mobile detection processing, behavior characteristic acquisition processing, radio wave transmission source detection processing, radio wave characteristic image generation processing, image composition processing, mobile image extraction processing, identification processing, and the like.
- In the moving body detection process, the processor 47 detects a pedestrian (moving body) based on the output of the roadside sensor 42 and acquires the position information of the pedestrian. Specifically, the processor 47 first acquires the relative position information (direction, distance) of the pedestrian with respect to the roadside device 4 based on the output of the roadside sensor 42. Next, the processor 47 acquires the absolute position information (latitude, longitude) of the pedestrian based on the relative position information (direction, distance) of the pedestrian and the position information (latitude, longitude) of the installation position of the own device.
- In addition, the processor 47 acquires the three-dimensional position information of the pedestrian based on the relative position information (direction, distance) of the pedestrian. Further, the processor 47 acquires the distance information of the pedestrian (the distance from the own device to the pedestrian) based on the pedestrian's position information (latitude, longitude) and the position information (latitude, longitude) of the installation position of the own device.
- In the behavior characteristic acquisition process, the processor 47 detects the behavior characteristics of the pedestrian (moving body) (the amplitude and period of the vertical movement of the body, the moving direction, the moving speed, etc.) based on the three-dimensional position information of the pedestrian, and acquires the behavior characteristic information of the pedestrian.
- In the radio wave transmission source detection process, the processor 47 detects the radio wave transmission source (pedestrian terminal 5) based on the reception status of the identification signal in the identification signal receiving unit 45, and acquires the position information of the radio wave transmission source.
- In the radio wave characteristic image generation process, the processor 47 detects the radio wave characteristics (frequency and transmission cycle) of the identification signal received by the identification signal receiving unit 45, and generates a radio wave characteristic image that visualizes the radio wave characteristics of the identification signal.
- In the image composition process, the processor 47 generates a composite image in which the radio wave characteristic image is superimposed and drawn on the captured image of the roadside camera 43.
- In the moving body image extraction process, the processor 47 acquires the position information (detection frame coordinates) of the pedestrian on the captured image of the roadside camera 43 based on the relative position information (direction, distance) of the pedestrian with respect to the own device. Then, the processor 47 cuts out the image area of the detection frame of the moving body from the captured image based on the position information of the pedestrian on the captured image, and acquires the moving body image. At this time, the moving body image including the radio wave characteristic image can be acquired by cutting out the image area of the detection frame of the moving body in the state where the radio wave characteristic image is superimposed and combined on the captured image.
- In the identification process, the processor 47 determines whether or not the pedestrian possessing the pedestrian terminal 5 from which the message was transmitted and the pedestrian detected by the own device using the roadside sensor 42 are the same, depending on whether or not the radio wave characteristics (frequency and transmission cycle) included in the message received from the pedestrian terminal 5 match the radio wave characteristics of the identification signal received from the pedestrian terminal 5.
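- A minimal sketch of this radio wave characteristic matching, assuming the declared and measured frequency and transmission cycle are compared with small tolerances (the tolerance values are illustrative):

```python
def radio_characteristics_match(msg_freq_mhz, msg_cycle_ms,
                                rx_freq_mhz, rx_cycle_ms,
                                freq_tol_mhz=1.0, cycle_tol_ms=5.0) -> bool:
    """The pedestrian who sent the ITS message and the pedestrian detected with the
    roadside sensor are judged to be the same when the radio wave characteristics
    declared in the message agree with those measured on the received identification
    signal."""
    return (abs(msg_freq_mhz - rx_freq_mhz) <= freq_tol_mhz and
            abs(msg_cycle_ms - rx_cycle_ms) <= cycle_tol_ms)
```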
- FIG. 11 is a block diagram showing a schematic configuration of the vehicle 1.
- the vehicle 1 includes a steering ECU 15, a driving ECU 16, and a braking ECU 17 in addition to the vehicle-mounted terminal 2, the automatic driving ECU 3, the vehicle-mounted sensor 11, and the vehicle-mounted camera 12.
- the automatic driving ECU 3 is connected to the steering ECU 15, the driving ECU 16, and the braking ECU 17, and controls the steering ECU 15, the driving ECU 16, and the braking ECU 17 based on the detection result of the in-vehicle sensor 11 to automatically drive the vehicle 1 (autonomous driving). ) Is realized.
- the steering ECU 15 controls the steering mechanism of the own vehicle
- the drive ECU 16 controls the drive mechanism (engine, electric motor, etc.) of the own vehicle
- the braking ECU 17 controls the braking mechanism of the own vehicle.
- the in-vehicle terminal 2 includes an ITS communication unit 21, an identification signal receiving unit 22, a positioning unit 23, a memory 24, and a processor 25.
- the ITS communication unit 21 transmits and receives a message between the pedestrian terminal 5, another in-vehicle terminal 2, and the roadside device 4 by ITS communication (pedestrian-to-vehicle communication, vehicle-to-vehicle communication, road-to-vehicle communication).
- the positioning unit 23 measures the position of its own device by GNSS, that is, a satellite positioning system such as GPS or QZSS, and acquires the position information (latitude, longitude) of its own device.
- the identification signal receiving unit 22 receives the identification signal transmitted from the pedestrian terminal 5.
- the memory 24 stores a program or the like executed by the processor 25.
- the processor 25 performs various processes related to information collection by executing the program stored in the memory 24.
- The processor 25 performs moving body detection processing, behavior characteristic acquisition processing, radio wave transmission source detection processing, radio wave characteristic image generation processing, image composition processing, moving body image extraction processing, identification processing, position determination processing, and the like.
- In the moving body detection process, the processor 25 detects a pedestrian (moving body) based on the output of the in-vehicle sensor 11 and acquires the position information of the pedestrian. Specifically, the processor 25 first acquires the relative position information (direction, distance) of the pedestrian with respect to the own device based on the output of the vehicle-mounted sensor 11. Next, the processor 25 acquires the absolute position information (latitude, longitude) of the pedestrian based on the relative position information (direction, distance) of the pedestrian and the position information (latitude, longitude) of the current position of the own device.
- In addition, the processor 25 acquires the three-dimensional position information of the pedestrian based on the relative position information (direction, distance) of the pedestrian. Further, the processor 25 acquires the distance information of the pedestrian (the distance from the own device to the pedestrian) based on the pedestrian's position information (latitude, longitude) and the position information (latitude, longitude) of the current position of the own device.
- In the behavior characteristic acquisition process, the processor 25 detects the behavior characteristics of the pedestrian (moving body) (the amplitude and period of the vertical movement of the body, the moving direction, the moving speed, etc.) based on the three-dimensional position information of the pedestrian, and acquires the behavior characteristic information of the pedestrian.
- In the radio wave transmission source detection process, the processor 25 detects the radio wave transmission source (pedestrian terminal 5) based on the reception status of the identification signal in the identification signal receiving unit 22, and acquires the position information of the radio wave transmission source.
- In the radio wave characteristic image generation process, the processor 25 detects the radio wave characteristics (frequency and transmission cycle) of the identification signal received by the identification signal receiving unit 22, and generates a radio wave characteristic image that visualizes the radio wave characteristics of the identification signal.
- In the image composition process, the processor 25 generates a composite image in which the radio wave characteristic image is superimposed and drawn on the captured image of the vehicle-mounted camera 12.
- In the moving body image extraction process, the processor 25 acquires the position information (detection frame coordinates) of the pedestrian on the captured image of the in-vehicle camera 12 based on the relative position information (direction, distance) of the pedestrian with respect to the own device. Then, the processor 25 cuts out the image area of the detection frame of the moving body from the captured image based on the position information of the pedestrian on the captured image, and acquires the moving body image. At this time, the moving body image including the radio wave characteristic image can be acquired by cutting out the image area of the detection frame of the moving body in the state where the radio wave characteristic image is superimposed and combined on the captured image.
- In the identification process, the processor 25 determines whether or not the pedestrian detected by the roadside device 4 and the pedestrian detected by the own device are the same. At this time, the position information of the pedestrian included in the message received from the roadside device 4 is compared with the position information of the pedestrian acquired by the own device, the moving body image (including the radio wave characteristic image) included in the message received from the roadside device 4 is compared with the moving body image (including the radio wave characteristic image) generated by the own device, and the pedestrian behavior characteristic information included in the message received from the roadside device 4 is compared with the behavior characteristic information of the pedestrian generated by the own device; based on these comparisons, it is determined whether or not the pedestrians detected by the roadside device 4 and the own device are the same.
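- One possible way to compare the two moving body images (each including the radio wave characteristic image) is a normalized cross-correlation of the resized, gray-scale cut-outs, as sketched below; the specification does not prescribe a particular image comparison method, so the approach and threshold are assumptions.

```python
import numpy as np
from PIL import Image

def moving_body_images_similar(img_a: Image.Image, img_b: Image.Image,
                               size=(32, 64), threshold=0.85) -> bool:
    """Resize both cut-outs to a common size, convert to gray scale, and compare them
    with a normalized cross-correlation of the pixel values."""
    a = np.array(img_a.convert("L").resize(size), dtype=float).ravel()
    b = np.array(img_b.convert("L").resize(size), dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return False
    return float(a @ b / denom) >= threshold
```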
- In the position determination process, the processor 25 selects either the pedestrian position information acquired by the roadside device 4 or the pedestrian position information acquired by the own device according to the result of the identification process, and determines the position of the pedestrian on the road. Specifically, when the pedestrians detected by the roadside device 4 and the own device are the same, of the pedestrian position information acquired by the roadside device 4 and the pedestrian position information acquired by the in-vehicle terminal 2, the position information with the shorter distance to the pedestrian (that is, the higher accuracy) is selected. On the other hand, when the pedestrians detected by the roadside device 4 and the own device are not the same, the position information acquired by the in-vehicle terminal 2 is selected.
- In the present embodiment, the distance from the roadside device 4 to the pedestrian is notified from the roadside device 4 to the in-vehicle terminal 2; however, the processor 25 of the in-vehicle terminal 2 may instead calculate the distance from the roadside device 4 to the pedestrian based on the position information regarding the installation position of the roadside device 4 and the position information of the pedestrian.
- In the present embodiment, the moving body to be processed is a pedestrian, but the moving body to be processed may be a vehicle. In that case, the in-vehicle terminal 2 is provided with an identification signal transmitting unit that transmits the identification signal.
- FIG. 12 is a flow chart showing an operation procedure of the pedestrian terminal 5.
- the identification signal transmitting unit 52 transmits an identification signal based on the radio wave characteristics (frequency, transmission cycle) assigned to the own device (ST101). Further, the positioning unit 53 measures the position of its own device by GNSS and acquires the position information of a pedestrian (ST102).
- the processor 55 generates an ITS communication message (ST103). Then, the ITS communication unit 51 transmits a message (ST104).
- This message includes pedestrian position information (latitude, longitude), radio wave characteristic information (frequency, transmission cycle) regarding the identification signal transmitted from the own device, and a message ID that identifies the source of the message.
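- One cycle of the pedestrian terminal operation of FIG. 12 (ST101 to ST104) could look like the following sketch, with the radio, positioning, and ITS interfaces passed in as abstract callables; all names are illustrative.

```python
import time

def pedestrian_terminal_cycle(identification_tx, gnss, its_tx,
                              frequency_mhz, tx_cycle_ms, message_id):
    """One cycle of the pedestrian terminal operation (ST101-ST104)."""
    identification_tx(frequency_mhz, tx_cycle_ms)   # ST101: transmit the identification signal
    lat, lon = gnss()                                # ST102: GNSS positioning
    message = {                                      # ST103: generate the ITS message
        "message_id": message_id,
        "lat": lat, "lon": lon,
        "frequency_mhz": frequency_mhz,
        "tx_cycle_ms": tx_cycle_ms,
        "timestamp": time.time(),
    }
    its_tx(message)                                  # ST104: broadcast the message
```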
- FIGS. 13 and 14 are flow charts showing the operation procedure of the roadside device 4.
- In the roadside device 4, the processor 47 detects a pedestrian (moving body) existing on the surrounding road based on the output of the roadside sensor 42, and determines whether or not there is a pedestrian on the surrounding road (ST201).
- When there is a pedestrian (Yes in ST201), the processor 47 acquires the relative position information (direction, distance) of the pedestrian with respect to the own device based on the output of the roadside sensor 42. Further, the processor 47 acquires the pedestrian's position information (latitude, longitude), the pedestrian's height information, and the pedestrian's distance information (the distance from the own device to the pedestrian) based on the pedestrian's relative position information and the position information (latitude, longitude) of the own device (moving body detection process) (ST202).
- Next, the processor 47 detects the behavior characteristics of the pedestrian (moving body) based on the position information and the height information (three-dimensional position information) of the pedestrian, and acquires the behavior characteristic information of the pedestrian (the amplitude and period of the vertical movement, the moving direction, and the moving speed) (behavior characteristic acquisition process) (ST203).
- When the identification signal receiving unit 45 receives the identification signal from the pedestrian terminal 5, the processor 47 detects the radio wave characteristics (frequency and transmission cycle) of the received identification signal and acquires the position information of the radio wave transmission source (radio wave transmission source detection process) (ST212).
- the processor 47 generates a radio wave characteristic image (radio wave source visualization image) that visualizes the radio wave characteristics of the identification signal for each pedestrian (radio wave characteristic image generation processing) (ST213).
- the processor 47 acquires a photographed image from the roadside camera 43 that photographs the surrounding road (ST214). Next, the processor 47 acquires the position information (coordinates) of the pedestrian on the captured image of the roadside camera 43 based on the relative position information (direction, distance) of the pedestrian with respect to the roadside machine 4 (ST215).
- the processor 47 generates a composite image in which the radio wave characteristic image is superimposed and drawn on the captured image of the roadside camera 43 based on the position information of the radio wave transmission source (image synthesis process) (ST216). Then, the processor 47 cuts out the image area of the pedestrian detection frame from the captured image based on the position information of the pedestrian on the captured image, and acquires a moving body image (including the radio wave characteristic image) (moving body image extraction process) (ST217).
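- A minimal sketch of the image synthesis and moving body image extraction steps is shown below; the pixel-array representation, marker shape, and coordinates are assumptions made only for illustration.

```python
# A 2-D list of pixel values stands in for the captured camera image. A small
# marker (the "radio wave characteristic image") is drawn at the pixel position
# of the radio wave transmission source, and the pedestrian detection frame is
# then cropped out to obtain the moving body image.

def draw_radio_marker(image, src_x, src_y, marker_value=255):
    """Overlay a 3x3 marker at the radio wave transmission source position."""
    h, w = len(image), len(image[0])
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            y, x = src_y + dy, src_x + dx
            if 0 <= y < h and 0 <= x < w:
                image[y][x] = marker_value
    return image


def crop_detection_frame(image, left, top, width, height):
    """Cut out the pedestrian detection frame to obtain the moving body image."""
    return [row[left:left + width] for row in image[top:top + height]]


frame = [[0] * 16 for _ in range(12)]                 # stand-in captured image
frame = draw_radio_marker(frame, src_x=6, src_y=5)    # image synthesis
crop = crop_detection_frame(frame, left=4, top=2, width=6, height=8)
print(len(crop), "x", len(crop[0]))                   # 8 x 6 moving body image
```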
- when the processor 47 receives the message from the pedestrian terminal 5, it determines whether or not the pedestrian possessing the pedestrian terminal 5 from which the message was transmitted and the pedestrian detected by the own device are the same, depending on whether or not the radio wave characteristics (frequency and transmission cycle) included in the message match the radio wave characteristics of the identification signal received from the pedestrian terminal 5.
- when the pedestrians are determined to be the same, the processor 47 creates an ITS communication message to which the message ID (ignore message ID) included in the message received from the pedestrian terminal 5 is added, as instruction information for ignoring the message from the pedestrian terminal 5 possessed by that pedestrian (ST223).
- in either case, the ITS communication unit 44 then transmits the message to the in-vehicle terminal 2 (ST225).
- regardless of whether or not the ignore message ID is added, the message transmitted to the in-vehicle terminal 2 includes the pedestrian position information (latitude, longitude) detected by the roadside machine 4, the moving body image including the radio wave characteristic image, the radio wave characteristic information, the moving body behavior information, and the distance information.
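- The roadside-side check and message assembly described above could look roughly like the following sketch; the tolerances, field names, and dictionary layout are assumptions for illustration, not the actual protocol.

```python
# The radio wave characteristics reported in the pedestrian terminal's message
# are compared with those of the identification signal actually received; when
# they match, the terminal's message ID is forwarded as an "ignore message ID"
# so that the in-vehicle terminal can discard the terminal's own (less accurate)
# position report.

def build_roadside_message(terminal_msg, received_freq_hz, received_cycle_ms,
                           detected_position, moving_body_image, behavior_info,
                           distance_m, freq_tol_hz=1e3, cycle_tol_ms=5.0):
    same_pedestrian = (
        abs(terminal_msg["id_signal_frequency_hz"] - received_freq_hz) <= freq_tol_hz
        and abs(terminal_msg["id_signal_tx_cycle_ms"] - received_cycle_ms) <= cycle_tol_ms
    )
    roadside_msg = {
        "position": detected_position,           # measured by the roadside sensor
        "moving_body_image": moving_body_image,  # includes the radio wave characteristic image
        "radio_characteristics": (received_freq_hz, received_cycle_ms),
        "behavior_info": behavior_info,
        "distance_m": distance_m,
    }
    if same_pedestrian:
        roadside_msg["ignore_message_id"] = terminal_msg["message_id"]
    return roadside_msg


terminal_msg = {"message_id": "PED-0001",
                "id_signal_frequency_hz": 920.6e6, "id_signal_tx_cycle_ms": 100.0}
print(build_roadside_message(terminal_msg, 920.6e6, 100.0,
                             (35.6810, 139.7670), "<image>", {"period_s": 0.5}, 12.0))
```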
- FIGS. 15 and 16 are flow charts showing an operation procedure of the in-vehicle terminal 2.
- the processor 25 detects a pedestrian (moving body) existing on the surrounding road based on the output of the vehicle-mounted sensor 11, and determines whether or not there is a pedestrian on the surrounding road (ST301).
- when there is a pedestrian on the surrounding road (Yes in ST301), the processor 25 acquires the relative position information (orientation, distance) of the pedestrian with respect to the own device based on the output of the in-vehicle sensor 11. Further, the processor 25 acquires the pedestrian's position information (latitude, longitude), the pedestrian's height information, and the pedestrian's distance information (the distance from the own device to the pedestrian) based on the pedestrian's relative position information and the position information (latitude, longitude) of the current position of the own device (moving body detection process) (ST302).
- the processor 25 detects the behavior characteristics of the pedestrian (moving body) based on the position information and the height information (three-dimensional position information) of the pedestrian, and acquires the behavior characteristic information of the pedestrian (the amplitude and period of vertical movement, the moving direction, and the moving speed) (behavior characteristic acquisition process) (ST303).
- when the identification signal transmitted from the pedestrian terminal 5 is received by the identification signal receiving unit 22, the processor 25 detects the radio wave characteristics (frequency and transmission cycle) of the received identification signal, and acquires the position information of the radio wave transmission source (radio wave transmission source detection process) (ST312).
- the processor 25 generates a radio wave characteristic image (radio wave source visualization image) that visualizes the radio wave characteristics of the identification signal for each pedestrian (radio wave characteristic image generation processing) (ST313).
- the processor 25 acquires a photographed image from the in-vehicle camera 12 that photographs the surrounding roads (ST314). Next, the processor 25 acquires the position information (coordinates) of the pedestrian on the captured image of the in-vehicle camera 12 based on the relative position information (direction, distance) of the pedestrian with respect to the vehicle (ST315).
- the processor 25 generates a composite image in which the radio wave characteristic image is superimposed and drawn on the captured image of the vehicle-mounted camera 12 based on the position information of the radio wave transmission source (image synthesis process) (ST316). Then, the processor 25 cuts out the image area of the pedestrian detection frame from the captured image based on the position information of the pedestrian on the captured image of the in-vehicle camera 12, and acquires a moving body image (including the radio wave characteristic image) (moving body image extraction process) (ST317).
- when the ITS communication unit 21 receives the message transmitted from the pedestrian terminal 5 (Yes in ST321) and further receives the message transmitted from the roadside machine 4 (Yes in ST322), the processor 25 determines whether or not the message ID included in the message received from the pedestrian terminal 5 is the same as the ignore message ID included in the message received from the roadside machine 4 (ST323).
- when the two message IDs are the same, the processor 25 ignores the message from the pedestrian terminal 5, and the position information included in the message from the pedestrian terminal 5 is excluded from the processing targets (ST324).
- the processor 25 determines whether or not the pedestrian detected by the roadside machine 4 and the pedestrian detected by the own device are the same (identification process) (ST325).
- at this time, the position information of the pedestrian included in the message received from the roadside machine 4 is compared with the position information of the pedestrian acquired by the own device, the moving body image (including the radio wave characteristic image) included in the message received from the roadside machine 4 is compared with the moving body image (including the radio wave characteristic image) generated by the own device, and the behavior characteristic information of the pedestrian included in the message received from the roadside machine 4 is compared with the behavior characteristic information of the pedestrian generated by the own device, to determine whether or not the pedestrian detected by the roadside machine 4 and the pedestrian detected by the own device are the same.
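- A simplified sketch of such an identification process is given below; the thresholds, the pixel-equality similarity measure, and the flat-earth distance approximation are illustrative assumptions rather than the comparison actually used.

```python
# The pedestrian is judged to be the same only when the positions, the moving
# body images, and the behavior characteristics reported by the roadside machine
# all agree with those generated by the in-vehicle terminal.
import math


def positions_match(p1, p2, max_dist_m=3.0):
    dlat = (p1[0] - p2[0]) * 111_000.0
    dlon = (p1[1] - p2[1]) * 111_000.0 * math.cos(math.radians(p1[0]))
    return math.hypot(dlat, dlon) <= max_dist_m


def images_match(img1, img2, min_similarity=0.8):
    # Placeholder similarity: fraction of equal pixels between equally sized crops.
    pixels = [(a, b) for r1, r2 in zip(img1, img2) for a, b in zip(r1, r2)]
    return sum(a == b for a, b in pixels) / len(pixels) >= min_similarity


def behaviors_match(b1, b2, period_tol_s=0.1, speed_tol_mps=0.3):
    return (abs(b1["period_s"] - b2["period_s"]) <= period_tol_s
            and abs(b1["speed_mps"] - b2["speed_mps"]) <= speed_tol_mps)


def same_pedestrian(roadside, onboard):
    return (positions_match(roadside["position"], onboard["position"])
            and images_match(roadside["image"], onboard["image"])
            and behaviors_match(roadside["behavior"], onboard["behavior"]))


roadside = {"position": (35.68100, 139.76700), "image": [[1, 0], [0, 1]],
            "behavior": {"period_s": 0.50, "speed_mps": 1.2}}
onboard = {"position": (35.68101, 139.76701), "image": [[1, 0], [0, 1]],
           "behavior": {"period_s": 0.52, "speed_mps": 1.1}}
print(same_pedestrian(roadside, onboard))  # True
```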
- when the pedestrians detected by the roadside machine 4 and the own device are the same (Yes in ST325), the processor 25 selects the position information with the shorter distance to the pedestrian (that is, the higher accuracy) from the position information of the pedestrian acquired by the roadside machine 4 and the position information of the pedestrian acquired by the in-vehicle terminal 2 (moving body position determination process) (ST326).
- on the other hand, when the pedestrian detected by the roadside machine 4 and the pedestrian detected by the own device are not the same (No in ST325), the pedestrian position information acquired by the roadside machine 4 and the pedestrian position information acquired by the in-vehicle terminal 2 are treated as belonging to different persons, and the position information acquired by the in-vehicle terminal 2 is selected (moving body position determination process) (ST327).
- the automatic driving ECU 3 acquires the position information of the pedestrian from the in-vehicle terminal 2, and controls the traveling of the own vehicle based on the position information of the pedestrian so as to avoid a collision with the pedestrian.
- FIG. 17 is an explanatory diagram showing an outline of the mobile body detection system according to the second embodiment.
- in the first embodiment, the vehicle-mounted terminal 2 identifies a pedestrian by comparing the moving body images, including the radio wave characteristic images, generated by each of the vehicle-mounted terminal 2 and the roadside machine 4, that is, it determines whether or not the pedestrians detected by each of the vehicle-mounted terminal 2 and the roadside machine 4 are the same (identification process).
- in the present embodiment, on the other hand, the vehicle-mounted terminal 2 identifies a pedestrian by comparing moving body images, not including the radio wave characteristic images, generated by each of the vehicle-mounted terminal 2 and the roadside machine 4.
- the point of comparing the position information of the pedestrian detected by each of the in-vehicle terminal 2 and the roadside machine 4, and the point of comparing the behavior characteristics of the pedestrian detected by each of the in-vehicle terminal 2 and the roadside machine 4, are the same as in the first embodiment.
- in this comparison, one of the moving body images is flipped left and right and then compared, since the roadside camera 43 and the in-vehicle camera 12 may view the pedestrian from opposite sides.
- when the in-vehicle terminal 2 succeeds in identifying the pedestrian, that is, when it is determined that the pedestrian detected by the roadside machine 4 and the pedestrian detected by the in-vehicle terminal 2 are the same, the latest position information and moving body image of the pedestrian are thereafter acquired by the pedestrian tracking process. Then, if the tracking process fails, the identification process is performed again.
- in the tracking process, pedestrians are tracked by image recognition. Specifically, the similarity between the person detected this time and the person detected last time is calculated by image recognition on the moving body images, and the person detected this time is associated with the person detected last time based on the similarity. This makes it possible to acquire the position information and the moving body image of the same pedestrian (same moving body ID) over time.
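- The association step of the tracking process could be sketched as follows; the similarity measure and threshold are placeholders, and the moving body IDs are hypothetical.

```python
# Each person detected this time is associated with the most similar person
# detected last time and inherits that person's moving body ID; a detection
# that cannot be associated is returned with None, meaning tracking failed and
# re-identification is needed.

def image_similarity(img1, img2):
    pixels = [(a, b) for r1, r2 in zip(img1, img2) for a, b in zip(r1, r2)]
    return sum(a == b for a, b in pixels) / len(pixels)


def track(previous, current, min_similarity=0.7):
    """previous: {moving_body_id: image}; current: list of images this frame."""
    assigned, used = [], set()
    for img in current:
        best_id, best_sim = None, min_similarity
        for mb_id, prev_img in previous.items():
            sim = image_similarity(img, prev_img)
            if mb_id not in used and sim >= best_sim:
                best_id, best_sim = mb_id, sim
        if best_id is not None:
            used.add(best_id)
        assigned.append((best_id, img))
    return assigned


previous = {"MB-1": [[1, 1], [0, 0]]}
print(track(previous, [[[1, 1], [0, 1]], [[0, 0], [0, 0]]]))
```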
- FIG. 18 is a block diagram showing a schematic configuration of the roadside machine 4.
- similarly to the first embodiment (see FIG. 10), the roadside machine 4 includes the roadside sensor 42, the roadside camera 43, the ITS communication unit 44, the memory 46, and the processor 47, but the identification signal receiving unit 45 is omitted.
- the processor 47 performs the moving body detection process, the behavior characteristic acquisition process, the moving body image extraction process, and the identification process as in the first embodiment, but does not perform the radio wave transmission source detection process, the radio wave characteristic image generation process, or the image synthesis process.
- FIG. 19 is a block diagram showing a schematic configuration of the vehicle 1.
- the vehicle 1 includes an in-vehicle terminal 2, an automatic driving ECU 3, an in-vehicle sensor 11, an in-vehicle camera 12, a steering ECU 15, a drive ECU 16, and a braking ECU 17.
- the in-vehicle terminal 2 includes an ITS communication unit 21, a positioning unit 23, a memory 24, and a processor 25, as in the first embodiment, but the identification signal receiving unit 22 is omitted.
- the processor 25 performs the moving body detection process, the behavior characteristic acquisition process, the moving body image extraction process, the identification process, and the position determination process, but does not perform the radio wave transmission source detection process, the radio wave characteristic image generation process, or the image synthesis process.
- FIG. 20 is a flow chart showing an operation procedure of the roadside machine 4.
- the processor 47 detects pedestrians (moving bodies) existing on the surrounding roads based on the output of the roadside sensor 42, and determines whether or not there are pedestrians on the surrounding roads (ST401).
- when there is a pedestrian (Yes in ST401), the processor 47 acquires the relative position information (orientation, distance) of the pedestrian with respect to the own device based on the output of the roadside sensor 42. Further, the processor 47 acquires the pedestrian's position information (latitude, longitude), the pedestrian's height information, and the pedestrian's distance information (the distance from the own device to the pedestrian) based on the pedestrian's relative position information and the position information (latitude, longitude) of the installation position of the own device (moving body detection process) (ST402).
- the processor 47 detects the behavior characteristics of the pedestrian (moving body) based on the position information and the height information (three-dimensional position information) of the pedestrian, and acquires the behavior characteristic information of the pedestrian (the amplitude and period of vertical movement, the moving direction, and the moving speed) (behavior characteristic acquisition process) (ST403).
- the processor 47 acquires a photographed image from the roadside camera 43 that photographs the surrounding road (ST404). Next, the processor 47 acquires the position information (coordinates) of the pedestrian on the captured image of the roadside camera 43 based on the relative position information (direction, distance) of the pedestrian with respect to the roadside machine 4 (ST405).
- the processor 47 cuts out the image area of the pedestrian detection frame from the captured image of the roadside camera 43 based on the position information of the pedestrian on the captured image, and acquires a moving body image (moving body image extraction process) (ST406).
- the processor 47 generates a message for ITS communication (ST407). Then, the ITS communication unit 44 transmits a message to the in-vehicle terminal 2 (ST408).
- This message includes the position information (latitude, longitude) of the pedestrian detected by the roadside machine 4, the moving body image, the moving body behavior information, and the distance information (the distance from the own device to the moving body).
- FIG. 21 is a flow chart showing an operation procedure of the in-vehicle terminal 2.
- the processor 25 detects a pedestrian (moving body) existing on the surrounding road based on the output of the vehicle-mounted sensor 11, and determines whether or not there is a pedestrian on the surrounding road (ST501).
- when there is a pedestrian (Yes in ST501), the processor 25 acquires the relative position information (orientation, distance) of the pedestrian with respect to the own device based on the output of the in-vehicle sensor 11. Further, the processor 25 acquires the pedestrian's position information (latitude, longitude), the pedestrian's height information, and the pedestrian's distance information (the distance from the own device to the pedestrian) based on the pedestrian's relative position information and the position information (latitude, longitude) of the current position of the own device (moving body detection process) (ST502).
- the processor 25 detects the behavior characteristics of the pedestrian (moving body) based on the position information and the height information (three-dimensional position information) of the pedestrian, and acquires the behavior characteristic information of the pedestrian (the amplitude and period of vertical movement, the moving direction, and the moving speed) (behavior characteristic acquisition process) (ST503).
- the processor 25 acquires a photographed image from the in-vehicle camera 12 that photographs the surrounding roads (ST504). Next, the processor 25 acquires the position information (coordinates) of the pedestrian on the captured image of the in-vehicle camera 12 based on the relative position information (direction, distance) of the pedestrian with respect to the own vehicle (ST505).
- the processor 25 cuts out the image area of the pedestrian detection frame from the captured image based on the position information of the pedestrian on the captured image of the vehicle-mounted camera 12, and acquires a moving body image (moving body image extraction process) (ST506).
- when the message transmitted from the roadside machine 4 is received, the processor 25 determines whether or not the moving body detected by the roadside machine 4 and the moving body detected by the own device are the same (identification process) (ST512). At this time, the position information of the pedestrian included in the message received from the roadside machine 4 is compared with the position information of the pedestrian acquired by the own device, the moving body image included in the message received from the roadside machine 4 is compared with the moving body image generated by the own device, and the behavior characteristic information of the pedestrian included in the message received from the roadside machine 4 is compared with the behavior characteristic information of the pedestrian generated by the own device, to determine whether or not the moving body detected by the roadside machine 4 and the pedestrian detected by the own device are the same.
- when the moving bodies are determined to be the same (Yes in ST512), the processor 25 selects the position information with the shorter distance to the pedestrian (that is, the higher accuracy) from the position information of the pedestrian acquired by the roadside machine 4 and the position information of the pedestrian acquired by the in-vehicle terminal 2 (moving body position determination process) (ST513).
- the processor 25 shifts from the identification mode in the initial state to the tracking mode (ST515).
- in this tracking mode, pedestrians are tracked by image recognition. Then, if the pedestrian tracking fails (Yes in ST516), the mode returns to the identification mode (ST517).
- on the other hand, when the moving bodies detected by the roadside machine 4 and the own device are determined not to be the same (No in ST512), both pieces of position information are selected (moving body position determination process) (ST514).
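- The mode handling described in the preceding steps can be summarised by the small state machine sketched below; the class and method names are illustrative.

```python
# The terminal starts in the identification mode, shifts to the tracking mode
# once the identification process succeeds, and falls back to the
# identification mode when the tracking process fails.

class PedestrianLocator:
    def __init__(self):
        self.mode = "identification"

    def step(self, identified_ok=None, tracked_ok=None):
        if self.mode == "identification" and identified_ok:
            self.mode = "tracking"
        elif self.mode == "tracking" and not tracked_ok:
            self.mode = "identification"
        return self.mode


locator = PedestrianLocator()
print(locator.step(identified_ok=True))   # -> tracking
print(locator.step(tracked_ok=True))      # stays in tracking
print(locator.step(tracked_ok=False))     # -> identification
```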
- FIG. 22 is an explanatory diagram showing an outline of the mobile body detection system according to the third embodiment.
- in the first embodiment, the pedestrian (moving body) is identified based on the radio wave characteristics of the identification signal transmitted from the pedestrian terminal 5, but in the present embodiment, the pedestrian terminal 5 possessed by the pedestrian is provided with an indicator light 57, and the pedestrian is identified based on the lighting characteristics (lighting color, etc.) of the indicator light 57.
- the indicator light 57 is provided with a light source such as an LED.
- the indicator light 57 performs a lighting operation according to the lighting characteristics assigned to each pedestrian terminal 5. That is, the lighting characteristics of the indicator light 57 serve as information for identifying the pedestrian terminal 5, that is, the pedestrian (terminal identification information, moving body identification information).
- the characteristic of the indicator lamp 57 may be a lighting cycle or a blinking pattern in addition to the lighting color, or may be a combination of the lighting color and the lighting cycle.
- the indicator light 57 may be configured separately from the housing of the pedestrian terminal 5, attached to the outside of the pedestrian's clothes or luggage so that it can be easily recognized from the outside, and connected to the pedestrian terminal 5 by wireless communication or wired communication; alternatively, the indicator light 57 may be provided integrally in the housing of the pedestrian terminal 5.
- the processor 47 performs a moving body detection process, a moving body image extraction process, a lighting characteristic acquisition process, an identification process, and the like.
- the processor 47 detects a pedestrian (moving object) existing on the surrounding road based on the output of the roadside sensor 42, and acquires the relative position information of the pedestrian.
- the processor 47 acquires the position information of the pedestrian on the captured image of the roadside camera 43 based on the relative position information of the pedestrian. Next, the processor sets a pedestrian detection frame on the captured image based on the position information of the pedestrian on the captured image (frame setting process). Then, the processor 47 cuts out the image area of the detection frame of the moving body from the captured image and extracts the moving body image.
- the processor 47 detects the pedestrian indicator light 57 by image recognition of the moving object image extracted by the moving object image extraction process, and acquires the lighting characteristic of the indicator light 57.
- in the identification process, the processor 47 determines whether or not the pedestrian possessing the pedestrian terminal 5 from which the message was transmitted and the pedestrian detected by the own device using the roadside sensor 42 are the same, depending on whether or not the lighting characteristics of the indicator light 57 included in the message received from the pedestrian terminal 5 match the lighting characteristics of the indicator light 57 recognized by the own device from the captured image of the roadside camera 43.
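- A minimal sketch of this lighting characteristic comparison is shown below; the colour/cycle representation and the tolerance are assumptions for illustration.

```python
# The lighting characteristics reported in the pedestrian terminal's message are
# compared with those recognised by image recognition from the roadside camera.

def lighting_matches(reported, recognised, cycle_tol_ms=50.0):
    same_colour = reported["color"] == recognised["color"]
    same_cycle = abs(reported.get("cycle_ms", 0.0) - recognised.get("cycle_ms", 0.0)) <= cycle_tol_ms
    return same_colour and same_cycle


reported = {"color": "green", "cycle_ms": 500.0}     # from the ITS message
recognised = {"color": "green", "cycle_ms": 520.0}   # from the captured image
print(lighting_matches(reported, recognised))        # True
```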
- the processor 25 performs identification processing and the like.
- in the identification process, the processor 25 determines whether or not the message ID included in the message directly received from the pedestrian terminal 5 matches the message ID of the pedestrian terminal 5 included in the message received from the roadside machine 4, and thereby determines whether or not the pedestrian possessing the pedestrian terminal 5 from which the message was transmitted and the pedestrian whose highly accurate position information was notified from the roadside machine 4 are the same.
- FIG. 23 is a flow chart showing an operation procedure of the pedestrian terminal 5.
- the indicator light 57 performs a lighting operation with the lighting characteristics (color, etc.) assigned to the own device (ST601). Further, the positioning unit 53 measures the position of the own device and acquires the position information of the pedestrian (ST602).
- the processor 55 creates a message for ITS communication (ST603). Then, the ITS communication unit 51 transmits a message (ST604).
- This message includes pedestrian position information (latitude, longitude), information on the lighting characteristics of the indicator lamp 57, and a message ID that identifies the source of the message.
- the message transmitted from the pedestrian terminal 5 is received by both the roadside machine 4 and the in-vehicle terminal 2.
- FIG. 24 is a flow chart showing an operation procedure of the roadside machine 4.
- the processor 47 detects a pedestrian (moving body) existing on the surrounding road based on the output of the roadside sensor 42, and determines whether or not there is a pedestrian on the surrounding road (ST701).
- when there is a pedestrian (Yes in ST701), the processor 47 acquires the relative position information (orientation, distance) of the pedestrian with respect to the own device based on the output of the roadside sensor 42. Further, the processor 47 acquires the pedestrian's position information (latitude, longitude) and the like based on the pedestrian's relative position information and the position information (latitude, longitude) of the installation position of the own device (moving body detection process) (ST702).
- the processor 47 acquires a photographed image from the roadside camera 43 that photographs the surrounding road (ST703). Next, the processor 47 acquires the position information (coordinates) of the pedestrian on the captured image of the roadside camera 43 based on the relative position information (direction, distance) of the pedestrian with respect to the roadside machine 4 (ST704).
- the processor 47 cuts out the image area of the pedestrian detection frame from the captured image based on the position information of the pedestrian on the captured image of the roadside camera 43, and acquires a moving body image (moving body image extraction process) (ST705).
- the processor 47 acquires the lighting characteristics of the indicator lamp 57 possessed by the pedestrian reflected in the moving body image by image recognition for the moving body image (lighting characteristic acquisition process) (ST706).
- when the message transmitted from the pedestrian terminal 5 is received, the processor 47 determines whether or not the lighting characteristics of the indicator light 57 notified from the pedestrian terminal 5, that is, the lighting characteristics of the indicator light 57 included in the message received from the pedestrian terminal 5, match the lighting characteristics of the indicator light 57 recognized by the own device from the captured image of the roadside camera 43 (ST712).
- when the lighting characteristics match (Yes in ST712), the processor 47 creates an ITS communication message to which the message ID of the message from the pedestrian terminal 5 (identified message ID) is added (ST713). Then, the ITS communication unit 44 transmits the message to the in-vehicle terminal 2 (ST715).
- on the other hand, when the lighting characteristics do not match (No in ST712), the processor 47 creates an ITS communication message to which the identified message ID is not added (ST714). Then, the ITS communication unit 44 transmits the message to the in-vehicle terminal 2 (ST715).
- the message transmitted to the in-vehicle terminal 2 includes, in addition to the identified message ID, the pedestrian position information acquired by the own device using the roadside sensor 42.
- FIG. 25 is a flow chart showing an operation procedure of the in-vehicle terminal 2.
- the processor 25 determines whether or not the message ID included in the message directly received from the pedestrian terminal 5 matches the identified message ID included in the message received from the roadside device 4 (identification process) (ST803).
- when the two message IDs match, the processor 25 ignores the message from the pedestrian terminal 5, and the position information of the pedestrian notified from the roadside machine 4, that is, the position information included in the message received from the roadside machine 4, is selected (moving body position determination process) (ST804).
- on the other hand, when the two message IDs do not match, the processor 25 selects both the position information of the pedestrian notified from the pedestrian terminal 5, that is, the position information included in the message directly received from the pedestrian terminal 5, and the position information of the pedestrian notified from the roadside machine 4 (moving body position determination process) (ST805).
- in the present embodiment, the indicator light 57 is recognized only by the roadside machine 4; however, when the pedestrian's indicator light 57 can also be recognized from the vehicle, the in-vehicle terminal 2 may also recognize the indicator light 57. In this case, the pedestrian is identified by comparing the lighting characteristics of the indicator light 57 acquired by each of the roadside machine 4 and the own device. Further, also in the present embodiment, as in the first embodiment, the position information of the pedestrian acquired by each of the roadside machine 4 and the own device is compared, and the behavior characteristics of the pedestrian are compared.
- FIG. 26 is an explanatory diagram showing an outline of the mobile body detection system according to the fourth embodiment.
- in the present embodiment, the vehicle-mounted terminal 2 compares the moving body images (not including the radio wave characteristic image) generated by each of the vehicle-mounted terminal 2 and the roadside machine 4, and thereby determines whether or not the pedestrian detected by the roadside machine 4 and the pedestrian detected by the in-vehicle terminal 2 are the same (identification process).
- in particular, pedestrians are identified using a plurality of moving body images. That is, the latest predetermined number of moving body images, specifically, the moving body images at each time included in the most recent predetermined period, are to be compared.
- the roadside machine 4 and the in-vehicle terminal 2 can collect moving body images at each time for each pedestrian by tracking each person by image recognition or the like. Further, the plurality of moving body images to be compared may be compared in all combinations, and the pedestrians may be determined to be the same when any of the combinations matches.
- for example, the roadside machine 4 obtains moving body images at times t, t+1, and t+2, the in-vehicle terminal 2 obtains moving body images at times t+1 and t+2, and the pedestrian is identified by comparing these moving body images. Further, since the positional relationship with respect to the pedestrian is opposite between the roadside machine 4 and the in-vehicle terminal 2, for example, when the moving body image at time t+1 of the roadside camera 43 is compared with the moving body image at time t+2 of the in-vehicle camera 12, the moving body image at time t+1 of the roadside camera 43 is flipped left and right and then compared.
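- The all-combination comparison with a left-right flip could be sketched as follows; the pixel-based similarity and the tiny example images are stand-ins used only for illustration.

```python
# Moving body images collected by the two devices at overlapping times are
# compared in all combinations; when the cameras face the pedestrian from
# opposite sides, the image from one side is flipped left and right first.

def flip_lr(image):
    return [list(reversed(row)) for row in image]


def image_similarity(img1, img2):
    pixels = [(a, b) for r1, r2 in zip(img1, img2) for a, b in zip(r1, r2)]
    return sum(a == b for a, b in pixels) / len(pixels)


def same_pedestrian_multi(roadside_images, onboard_images,
                          opposite_sides=True, min_similarity=0.8):
    """roadside_images / onboard_images: {time_label: image}."""
    for img_r in roadside_images.values():
        candidate = flip_lr(img_r) if opposite_sides else img_r
        for img_o in onboard_images.values():
            if image_similarity(candidate, img_o) >= min_similarity:
                return True
    return False


roadside = {"t": [[1, 0], [1, 0]], "t+1": [[0, 1], [0, 1]]}
onboard = {"t+1": [[1, 0], [1, 0]], "t+2": [[1, 1], [0, 0]]}
print(same_pedestrian_multi(roadside, onboard))  # True (flipped t+1 matches t+1)
```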
- the configuration of the roadside machine 4 according to this embodiment is the same as that of the second embodiment (see FIG. 18).
- the configuration of the in-vehicle terminal 2 is also the same as that of the second embodiment (see FIG. 19).
- the operation procedure of the roadside machine 4 is substantially the same as that of the second embodiment (see FIG. 20), but after the moving body image extraction process (ST406), the processor 47 assigns a moving body ID to the pedestrian detected by the own device, and stores the pedestrian's moving body ID, position information, and moving body image in the memory 46.
- the processor 47 collects the latest predetermined number of moving body images of the pedestrian based on the moving body ID, and generates an ITS communication message.
- This message includes the latest position information of the pedestrian, the latest predetermined number of moving body images of the pedestrian, and the like.
- the operation procedure of the in-vehicle terminal 2 is substantially the same as that of the second embodiment (see FIG. 21), but after the moving body image extraction process (ST506), the processor 25 assigns a moving body ID to the pedestrian detected by the own device, and stores the pedestrian's moving body ID, position information, and moving body image in the memory 24. Further, in the identification process (ST512), the processor 25 compares the plurality of moving body images included in the message received from the roadside machine 4 with the plurality of moving body images generated by the own device.
- FIG. 27 is an explanatory diagram showing an outline of the mobile body detection system according to the fifth embodiment.
- in the embodiments described above, in the identification process, it is determined whether or not the moving bodies (pedestrians) detected by each of the in-vehicle terminal 2 and the roadside machine 4 are the same.
- in the present embodiment, it is determined whether or not the moving bodies (pedestrians) detected by each of the two roadside machines 4 (first and second observation devices) installed on both sides (diagonal positions) of the intersection are the same.
- a moving body that cannot be detected by one roadside machine 4, because it is shielded by another moving body (a vehicle or the like) or an obstruction such as a building, may be detected by the other roadside machine 4. Therefore, if one roadside machine 4 integrates the observation results of the two roadside machines 4 and transmits them to the in-vehicle terminal 2, the accuracy of identification of the moving body can be further improved.
- FIG. 28 is a block diagram showing a schematic configuration of the pedestrian terminal 5 according to the sixth embodiment.
- in the embodiments described above, the roadside machine 4 and the vehicle-mounted terminal 2 detect the behavior characteristics of the pedestrian based on the three-dimensional position information acquired from the outputs of the roadside sensor 42 and the vehicle-mounted sensor 11. In the present embodiment, on the other hand, the pedestrian terminal 5 detects the behavior characteristics of the pedestrian, and the pedestrian terminal 5 notifies the roadside machine 4 and the in-vehicle terminal 2 of information on the behavior characteristics of the pedestrian.
- the pedestrian terminal 5 is provided with a behavior sensor 58.
- the behavior sensor 58 is an acceleration sensor, a gyro sensor, or the like, and detects the movement of the pedestrian's body.
- for example, when the pedestrian terminal 5 transmits the identification signal at a timing according to the walking tempo of the pedestrian, the roadside machine 4 and the in-vehicle terminal 2 can each acquire a behavior characteristic of the pedestrian (the period of vertical movement) based on the reception timing of the identification signal. Further, since the radio wave characteristic image is drawn on the captured image of the roadside camera 43 or the in-vehicle camera 12 in response to the reception of the identification signal in each of the roadside machine 4 and the in-vehicle terminal 2, it is also possible to acquire the behavior characteristics of the pedestrian based on the timing at which the radio wave characteristic image appears in the moving body image.
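- As a simple illustration of recovering a behavior characteristic from reception timing, the sketch below estimates the vertical-movement period from the intervals between identification signal receptions; it assumes one transmission per step, which is an assumption of this example.

```python
# If the pedestrian terminal transmits the identification signal once per step
# (i.e. at the walking tempo), the period of the pedestrian's vertical movement
# can be estimated from the spacing of the reception times.

def vertical_movement_period(reception_times_s):
    intervals = [t2 - t1 for t1, t2 in zip(reception_times_s, reception_times_s[1:])]
    return sum(intervals) / len(intervals) if intervals else None


# Identification signals received roughly every 0.5 s -> about a 0.5 s period.
print(vertical_movement_period([0.00, 0.51, 1.02, 1.50, 2.01]))
```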
- further, when the pedestrian terminal 5 changes the radio wave characteristics of the identification signal according to the moving direction of the pedestrian, the behavior characteristic (moving direction) of the pedestrian can be acquired based on the radio wave characteristics of the identification signal received by each of the roadside machine 4 and the in-vehicle terminal 2.
- the logic of the identification signal, the transmission frequency, and the like may be changed according to the moving direction of the pedestrian.
- in the embodiments described above, the two observation devices perform the moving body detection process, and one of the observation devices acquires the detection result of the moving body from the other observation device and performs the identification process; in some of the embodiments the in-vehicle terminal 2 performs the identification process, and in another the roadside machine 4 performs the identification process.
- a processing device different from the observation devices may acquire the detection results of the moving object from the two observation devices and perform the identification process.
- the processing device is, for example, a server device connected to the observation device via an appropriate communication medium (for example, a cellular communication network). In this case, the processing device may also perform the position determination process.
- in the embodiment described above, the two roadside machines 4 serve as the observation devices that perform the moving body detection process, and one of the roadside machines 4 performs the identification process; however, two in-vehicle terminals 2 may serve as the observation devices that perform the moving body detection process, and one of the in-vehicle terminals 2 may perform the identification process. That is, one in-vehicle terminal 2 may acquire the detection result of the moving body from the other in-vehicle terminal 2 and perform the identification process.
- in the embodiments described above, the roadside machine 4 and the in-vehicle terminal 2 are the observation devices that perform the moving body detection process, but the pedestrian terminal 5 can also be an observation device that performs the moving body detection process.
- the pedestrian terminal 5 is provided with a sensor (radar or the like) for detecting the moving body and a camera for photographing the moving body.
- for example, two pedestrian terminals 5 may be the observation devices that perform the moving body detection process, and one of the pedestrian terminals 5 may perform the identification process. Also, the roadside machine 4 and the pedestrian terminal 5 may be the observation devices that perform the moving body detection process, and the pedestrian terminal 5 of the two may perform the identification process. Also, the in-vehicle terminal 2 and the pedestrian terminal 5 may be the observation devices that perform the moving body detection process, and either the in-vehicle terminal 2 or the pedestrian terminal 5 may perform the identification process.
- in the embodiments described above, the position information (latitude, longitude) of the pedestrian (moving body), the behavior characteristics (the amplitude and period of vertical movement of the body, the moving direction, the moving speed, etc.), and the radio wave characteristics of the identification signal are used as the moving body identification information for identifying the pedestrian (moving body), but other information may be used as the moving body identification information.
- the information collected by the roadside machine 4 or the in-vehicle terminal 2 can be used in the roadside machine 4 or the in-vehicle terminal 2 for other purposes in addition to being used for the identification process of the moving body.
- for example, the behavior histories (position, speed, direction, etc.) of moving bodies may be used. For a pedestrian with a high degree of risk, such as a child or a physically handicapped person, the behavior history is similar to that of an accompanying caregiver or guide dog. Therefore, a pedestrian with a high degree of risk can be detected from the fact that a plurality of adjacent moving bodies have similar behavior histories. As a result, a person with a high degree of risk can be recognized at an early stage on the vehicle side.
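- One simple way to flag such a case is sketched below; the distance and speed thresholds are illustrative assumptions, not values from this disclosure.

```python
# Two adjacent moving bodies whose behavior histories (position and speed over
# time) stay close to each other are assumed to be moving together, e.g. a
# child or visually impaired person with a caregiver or guide dog, and are
# flagged as a potentially high-risk group.
import math


def histories_similar(hist_a, hist_b, max_dist_m=2.0, max_speed_diff_mps=0.3):
    """Each history is a list of (x_m, y_m, speed_mps) samples at common times."""
    for (xa, ya, va), (xb, yb, vb) in zip(hist_a, hist_b):
        if math.hypot(xa - xb, ya - yb) > max_dist_m or abs(va - vb) > max_speed_diff_mps:
            return False
    return True


child = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (2.0, 0.1, 1.1)]
caregiver = [(0.5, 0.2, 1.0), (1.5, 0.2, 1.0), (2.5, 0.2, 1.0)]
print(histories_similar(child, caregiver))  # True -> treat as a high-risk group
```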
- further, information on pedestrians with a high degree of risk may be provided to infrastructure equipment such as communication equipment installed at traffic lights and the like, and, by transmitting this information from the infrastructure equipment to the vehicle with the transmission power set higher than usual, a person with a high degree of risk can be recognized even earlier.
- similarly, information on a vehicle that is being driven in a hurry or driven while intoxicated may be notified to infrastructure equipment such as communication equipment installed at a traffic light or the like.
- the moving body detection method, the roadside device, and the in-vehicle device according to the present disclosure have the effect of being able to accurately determine whether or not the moving bodies detected by each of the two observation devices (the roadside device and the in-vehicle terminal) are the same, and are useful as a moving body detection method for detecting moving bodies existing in the vicinity, a roadside device, an in-vehicle device, and the like.
Abstract
[Problem] To enable accurately determining whether or not moving bodies detected by two observation devices (a roadside device, a vehicle-mounted device) are the same. [Solution] This roadside device 4 detects a moving body (a pedestrian, etc.) present on the road on the basis of the output of a roadside sensor 42, acquires position information of the moving body and detects behavior characteristics of the moving body, and transmits the position information and information relating to the behavior characteristics to the vehicle-mounted terminal 2. Further, the vehicle-mounted terminal detects a moving body present on the road on the basis of the output of a vehicle-mounted sensor 11, acquires position information of the moving body and detects behavior characteristics of the moving body, and performs identification processing to determine whether or not the moving bodies detected by the roadside device and the local device are the same, in accordance with whether or not the behavior characteristics received from the roadside device and the behavior characteristics generated by the local device match.
Description
本開示は、周辺に存在する移動体を検知する移動体検知方法、路側装置、および車載装置に関するものである。
The present disclosure relates to a moving body detection method for detecting a moving body existing in the vicinity, a roadside device, and an in-vehicle device.
自動運転車などの車両において、レーダ、ライダー(LiDAR)、カメラなどの車載センサを用いて、自車両の周囲に存在する障害物(歩行者、車両など)を検知する技術が知られている。一方、ITS(Intelligent Transport System:高度道路交通システム)を利用した安全運転支援無線システムでは、交差点に設置された路側機において、レーダなどの路側センサを用いて、交差点の周辺の道路上に存在する移動体(歩行者、車両など)を検知して、路車間通信により、移動体の情報を車両に提供するようにしている。これにより、車両から見通し外となる移動体を車両側で認識することができる。
In vehicles such as self-driving cars, there is known a technology for detecting obstacles (pedestrians, vehicles, etc.) existing around the own vehicle by using in-vehicle sensors such as radar, lidar (LiDAR), and camera. On the other hand, in a safe driving support radio system using ITS (Intelligent Transport System), a roadside machine installed at an intersection exists on the road around the intersection using a roadside sensor such as a radar. It detects moving objects (pedestrians, vehicles, etc.) and provides information on moving objects to vehicles through road-to-vehicle communication. As a result, it is possible for the vehicle side to recognize a moving object that is out of sight from the vehicle.
ところが、路側機で検知された移動体の情報を、自動運転車などの車両に提供する場合、路側機で検知された移動体と、車載端末で検知された移動体とを別物と判断して、移動体を二重に認識する。また、路側機で検知された移動体が、見通し外、すなわち、実際に車両から見えないか否かを判別することができない。このため、自動運転車などの車両において、適切な運転制御ができなくなる。そこで、車載端末および路側機の各々で検知された歩行者が同一であるか否かを判定する同定が必要になる。
However, when the information of the moving body detected by the roadside machine is provided to a vehicle such as an autonomous driving vehicle, it is determined that the moving body detected by the roadside machine and the moving body detected by the in-vehicle terminal are different. , Recognize moving objects doubly. In addition, it is not possible to determine whether or not the moving body detected by the roadside machine is out of sight, that is, whether or not it is actually visible from the vehicle. Therefore, in a vehicle such as an autonomous driving vehicle, appropriate driving control cannot be performed. Therefore, it is necessary to identify whether or not the pedestrians detected by the in-vehicle terminal and the roadside machine are the same.
このような移動体の同定を行うには、移動体の位置情報に加え、移動体を識別するために移動体に固有の情報を、路側機と車載端末との各々で取得して、路車間通信を利用して、その情報を路側機と車載端末との間で交換するようにするとよい。
In order to identify such a moving body, in addition to the position information of the moving body, information unique to the moving body is acquired by each of the roadside unit and the in-vehicle terminal in order to identify the moving body, and the distance between road vehicles is obtained. It is advisable to use communication to exchange the information between the roadside unit and the in-vehicle terminal.
このような路側機と車載端末との各々で取得した情報を路側機と車載端末との間で交換する技術として、従来、車両側で得られた情報とインフラ側で得られた情報とを、車両側で共通のデータ形式として処理する技術が知られている(特許文献1参照)。
As a technique for exchanging the information acquired by each of the roadside unit and the in-vehicle terminal between the roadside unit and the in-vehicle terminal, conventionally, the information obtained on the vehicle side and the information obtained on the infrastructure side are exchanged. A technique for processing as a common data format on the vehicle side is known (see Patent Document 1).
さて、路側機および車載端末の各々で検知された移動体が同一物であるか否かを判定する同定処理は、従来一般的に、移動体の位置情報や移動方向や移動軌跡を識別情報として利用して、路側機および車載端末の各々で取得した識別情報を比較することで、移動体の同定が行われる。しかしながら、このような同定方法では、位置情報の測位誤差が大きくなると、同一の移動体が別の移動体と誤判定される。このため、移動体の同定処理を精度よく行うことができる技術が望まれるが、前記従来の技術では、このような同定処理の誤判定を抑制することに関する提案は何らなく、同定を精度よく行うことができないという問題があった。
By the way, in the identification process for determining whether or not the moving bodies detected by the roadside unit and the in-vehicle terminal are the same, the position information, the moving direction, and the moving locus of the moving body are generally used as the identification information. The mobile body is identified by comparing the identification information acquired by each of the roadside unit and the in-vehicle terminal. However, in such an identification method, when the positioning error of the position information becomes large, the same moving body is erroneously determined as another moving body. Therefore, a technique capable of accurately performing the identification process of the moving body is desired, but in the above-mentioned conventional technique, there is no proposal for suppressing such an erroneous determination of the identification process, and the identification is performed accurately. There was a problem that it could not be done.
そこで、本開示は、2つの観測装置(路側機、車載端末)の各々で検知された移動体が同一であるか否かの判定を精度よく行うことができる移動体検知方法、路側装置、および車載装置を提供することを主な目的とする。
Therefore, in the present disclosure, a mobile body detection method, a roadside device, and a roadside device capable of accurately determining whether or not the mobile bodies detected by each of the two observation devices (roadside device, in-vehicle terminal) are the same. The main purpose is to provide an in-vehicle device.
本開示の移動体検知方法は、第1の観測装置および第2の観測装置が、センサの出力に基づいて、道路上に存在する移動体を検知して、その移動体の位置情報を取得すると共に、その移動体の挙動特性を検出し、前記第1の観測装置、前記第2の観測装置、並びに前記第1および第2の観測装置とは別の処理装置のいずれかが、前記第1の観測装置および前記第2の観測装置の各々で生成した前記挙動特性が一致するか否かに応じて、前記第1の観測装置および前記第2の観測装置の各々で検知された移動体が同一であるか否かを判定する同定処理を行う構成とする。
In the moving body detection method of the present disclosure, the first observation device and the second observation device detect a moving body existing on the road based on the output of the sensor and acquire the position information of the moving body. At the same time, the behavior characteristics of the moving body are detected, and one of the first observation device, the second observation device, and a processing device different from the first and second observation devices is the first. The moving object detected by each of the first observation device and the second observation device depends on whether or not the behavior characteristics generated by each of the observation device and the second observation device match. The configuration is such that an identification process for determining whether or not they are the same is performed.
また、本開示の路側装置は、路側センサの出力に基づいて、道路上に存在する移動体を検知して、その移動体の位置情報を取得すると共に、その移動体の挙動特性を検出し、前記位置情報および前記挙動特性に関する情報を移動体識別情報として車載装置に送信する構成とする。
Further, the roadside device of the present disclosure detects a moving body existing on the road based on the output of the roadside sensor, acquires the position information of the moving body, detects the behavior characteristics of the moving body, and transmits the position information and the information regarding the behavior characteristics to the in-vehicle device as moving body identification information.
また、本開示の車載装置は、車載センサの出力に基づいて、道路上に存在する移動体を検知して、その移動体の位置情報を取得すると共に、その移動体の挙動特性を検出し、路側装置から受信した挙動特性と、自装置で生成した前記挙動特性とが一致するか否かに応じて、前記路側装置および自装置の各々で検知された移動体が同一であるか否かを判定する同定処理を行う構成とする。
Further, the in-vehicle device of the present disclosure detects a moving body existing on the road based on the output of the in-vehicle sensor, acquires the position information of the moving body, detects the behavior characteristics of the moving body, and performs an identification process of determining whether or not the moving bodies detected by each of the roadside device and the own device are the same, depending on whether or not the behavior characteristics received from the roadside device and the behavior characteristics generated by the own device match.
本開示によれば、2つの観測装置の各々で検出された移動体の挙動特性を比較することにより、移動体の同定、すなわち、2つの観測装置の各々で検知された移動体が同一であるか否かの判定を精度よく行うことができる。
According to the present disclosure, by comparing the behavioral characteristics of the moving body detected by each of the two observation devices, the identification of the moving body, that is, the moving body detected by each of the two observation devices is the same. Whether or not it can be determined accurately.
前記課題を解決するためになされた第1の発明は、第1の観測装置および第2の観測装置が、センサの出力に基づいて、道路上に存在する移動体を検知して、その移動体の位置情報を取得すると共に、その移動体の挙動特性を検出し、前記第1の観測装置、前記第2の観測装置、並びに前記第1および第2の観測装置とは別の処理装置のいずれかが、前記第1の観測装置および前記第2の観測装置の各々で生成した前記挙動特性が一致するか否かに応じて、前記第1の観測装置および前記第2の観測装置の各々で検知された移動体が同一であるか否かを判定する同定処理を行う構成とする。
In the first invention made to solve the above problems, the first observation device and the second observation device detect a moving body existing on the road based on the output of a sensor, acquire the position information of the moving body, and detect the behavior characteristics of the moving body, and any one of the first observation device, the second observation device, and a processing device different from the first and second observation devices performs an identification process of determining whether or not the moving bodies detected by each of the first observation device and the second observation device are the same, depending on whether or not the behavior characteristics generated by each of the first observation device and the second observation device match.
これによると、2つの観測装置の各々で検出された移動体の挙動特性を比較することにより、移動体の同定、すなわち、2つの観測装置の各々で検知された移動体が同一であるか否かの判定を精度よく行うことができる。
According to this, by comparing the behavior characteristics of the moving body detected by each of the two observation devices, the identification of the moving body, that is, the determination of whether or not the moving bodies detected by each of the two observation devices are the same, can be performed accurately.
また、第2の発明は、前記第1の観測装置および前記第2の観測装置が、前記挙動特性として、移動体の上下動の振幅、上下動の周期、移動方向、および移動速度の少なくともいずれかを検出する構成とする。
Further, in the second invention, the first observation device and the second observation device detect, as the behavior characteristics, at least one of the amplitude of the vertical movement of the moving body, the period of the vertical movement, the moving direction, and the moving speed.
これによると、移動体の挙動特性に基づいて移動体を精度よく識別することができる。
According to this, the moving body can be accurately identified based on the behavior characteristics of the moving body.
また、第3の発明は、移動体が保持する端末装置が、自装置に割り当てられた電波特性を有する識別信号を発信し、前記第1の観測装置および前記第2の観測装置が、受信した前記識別信号の電波特性に基づいて移動体を識別する構成とする。
Further, in the third invention, the terminal device held by the moving body transmits an identification signal having the radio wave characteristics assigned to the own device, and the first observation device and the second observation device identify the moving body based on the radio wave characteristics of the received identification signal.
これによると、第1の観測装置および前記第2の観測装置の各々で検出された移動体の挙動特性の比較に加えて、2つの観測装置の各々で検出された電波特性を比較することにより、移動体の同定をより一層精度よく行うことができる。また、障害物に遮蔽された状態でも、電波発信源の位置に基づいて移動体を識別できるため、移動体の同定の精度を高めることができる。
According to this, in addition to the comparison of the behavior characteristics of the moving body detected by each of the first observation device and the second observation device, the radio wave characteristics detected by each of the two observation devices are compared, so that the moving body can be identified even more accurately. Further, even in a state of being shielded by an obstacle, the moving body can be identified based on the position of the radio wave transmission source, so that the accuracy of identifying the moving body can be improved.
また、第4の発明は、前記第1の観測装置および前記第2の観測装置が、受信した前記識別信号の電波特性を可視化した電波特性画像を生成し、前記識別信号の電波発信源の位置情報に基づいて、カメラの撮影画像上に前記電波特性画像を重畳描画して、前記電波特性画像を含む移動体画像を生成し、前記第1の観測装置、前記第2の観測装置、および前記処理装置のいずれかが、前記同定処理において、前記第1の観測装置および前記第2の観測装置の各々で取得した前記電波特性画像を含む移動体画像を比較する構成とする。
Further, in the fourth invention, the first observation device and the second observation device generate a radio wave characteristic image that visualizes the radio wave characteristics of the received identification signal, superimpose and draw the radio wave characteristic image on the captured image of the camera based on the position information of the radio wave transmission source of the identification signal, and generate a moving body image including the radio wave characteristic image, and any one of the first observation device, the second observation device, and the processing device compares, in the identification process, the moving body images including the radio wave characteristic images acquired by each of the first observation device and the second observation device.
これによると、カメラの撮影画像から移動体画像を抽出することで、その移動体に対応する電波特性画像を適切に取得することができる。さらに、第1の観測装置および第2の観測装置の各々で検出された移動体の挙動特性の比較に加えて、各々の装置で検出された電波特性画像を含む移動体画像を比較することにより、電波発信源からの反射波と区別して、より一層移動体の同定の精度を高めることができる。また、複数の移動体で電波特性画像が同じ場合でも、電波特性画像を含む移動体画像の比較に基づいて移動体を識別できるため、移動体の同定の精度を高めることができる。
According to this, by extracting the moving body image from the image captured by the camera, the radio wave characteristic image corresponding to the moving body can be appropriately acquired. Further, in addition to the comparison of the behavior characteristics of the moving body detected by each of the first observation device and the second observation device, the moving body images including the radio wave characteristic images detected by each device are compared, so that the identification signal can be distinguished from reflected waves from the radio wave transmission source and the accuracy of identification of the moving body can be further improved. Also, even when the radio wave characteristic images of a plurality of moving bodies are the same, the moving bodies can be distinguished based on the comparison of the moving body images including the radio wave characteristic images, so that the accuracy of identifying the moving bodies can be improved.
また、第5の発明は、前記端末装置が、移動体の挙動特性に応じたタイミングで前記識別信号を発信し、前記第1の観測装置および前記第2の観測装置が、前記識別信号の受信タイミングに基づいて、移動体の挙動特性を取得する構成とする。
Further, in the fifth invention, the terminal device transmits the identification signal at a timing corresponding to the behavior characteristics of the moving body, and the first observation device and the second observation device acquire the behavior characteristics of the moving body based on the reception timing of the identification signal.
これによると、上下動などの移動体の挙動特性を、端末装置から第1の観測装置および第2の観測装置に通知することができる。
According to this, the behavior characteristics of the moving body such as vertical movement can be notified from the terminal device to the first observation device and the second observation device.
また、第6の発明は、前記第1の観測装置および前記第2の観測装置が、前記センサの出力に基づいて取得した移動体の位置情報に基づいて、カメラの撮影画像上に移動体の検出枠を設定し、前記撮影画像から前記検出枠の領域を切り出すことで、移動体画像を抽出し、前記第1の観測装置、前記第2の観測装置、および前記処理装置のいずれかが、前記同定処理において、前記第1の観測装置および前記第2の観測装置の各々で取得した前記移動体画像を比較する構成とする。
Further, in the sixth invention, the first observation device and the second observation device set a detection frame of the moving body on the captured image of the camera based on the position information of the moving body acquired based on the output of the sensor, and extract a moving body image by cutting out the region of the detection frame from the captured image, and any one of the first observation device, the second observation device, and the processing device compares, in the identification process, the moving body images acquired by each of the first observation device and the second observation device.
これによると、第1の観測装置および第2の観測装置の各々で検出された移動体の挙動特性の比較に加えて、2つの観測装置の各々で取得した移動体画像を比較することにより、移動体の同定をより一層精度よく行うことができる。
According to this, in addition to the comparison of the behavior characteristics of the moving body detected by each of the first observation device and the second observation device, the moving body images acquired by each of the two observation devices are compared, so that the moving body can be identified even more accurately.
また、第7の発明は、前記第1の観測装置、前記第2の観測装置、および前記処理装置のいずれかが、前記同定処理において、前記第1の観測装置および前記第2の観測装置の各々で取得した各時刻の複数の前記移動体画像を比較する構成とする。
Further, in the seventh invention, any one of the first observation device, the second observation device, and the processing device compares, in the identification process, a plurality of the moving body images acquired at each time by each of the first observation device and the second observation device.
これによると、複数の移動体画像に基づいて移動体の同定をより一層適切に行うことができる。
According to this, it is possible to more appropriately identify a moving body based on a plurality of moving body images.
また、第8の発明は、移動体が保持する表示灯が、自装置に割り当てられた点灯特性で点灯し、前記第1の観測装置が、カメラの撮影画像から前記表示灯を認識して、その点灯特性を検出し、その点灯特性に基づいて移動体を識別する構成とする。
Further, in the eighth invention, the indicator light held by the moving body lights up with the lighting characteristics assigned to the own device, and the first observation device recognizes the indicator light from the captured image of the camera, detects the lighting characteristics, and identifies the moving body based on the lighting characteristics.
これによると、表示灯の点灯特性を識別情報として、移動体を精度よく識別することができる。
According to this, it is possible to accurately identify a moving object by using the lighting characteristics of the indicator light as identification information.
In the ninth invention, when one of the first observation device, the second observation device, and the processing device determines that the moving body detected by the first observation device and the moving body detected by the second observation device are the same, it selects either the position information of the moving body acquired by the first observation device or the position information of the moving body acquired by the second observation device, based on the distance from the first observation device to the moving body and the distance from the second observation device to the moving body, and thereby fixes the position of the moving body.
According to this, the position of the moving body can be fixed using the more accurate of the two pieces of position information.
In the tenth invention, when the identification process succeeds, one of the first observation device, the second observation device, and the processing device shifts to a tracking mode in which the position of the moving body is fixed by a tracking process, and when the tracking process fails, it returns to an identification mode in which the position of the moving body is fixed by the identification process.
According to this, the identification process can be omitted while tracking succeeds, reducing the processing load.
In the eleventh invention, the first observation device performs a first identification process for determining whether the moving body that is the source of a message and the moving body detected using the sensor are the same, and when the first identification process succeeds, it treats that moving body as identified and transmits to the second observation device a message including instruction information for ignoring messages from the terminal device held by that moving body. The second observation device performs a second identification process for determining whether the moving body detected by the first observation device and the moving body detected by the second observation device itself are the same, and, based on the instruction information, ignores messages from the terminal device held by the identified moving body and fixes the position of the moving body based on the position information of the moving body notified from the first observation device and the position information acquired by the second observation device itself.
According to this, the highly accurate position information acquired by the first observation device or the second observation device is adopted, while the less accurate position information reported by the terminal device of the moving body can be excluded.
In the twelfth invention, the first observation device is a roadside device and the second observation device is an in-vehicle device.
According to this, it is possible to accurately determine whether the moving bodies detected by the roadside device and the in-vehicle terminal are the same.
In the thirteenth invention, the first observation device and the second observation device are both roadside devices.
According to this, it is possible to accurately determine whether the moving bodies detected by the two roadside devices are the same.
In the fourteenth invention, a roadside device detects a moving body existing on the road based on the output of a roadside sensor, acquires the position information of the moving body, detects the behavior characteristics of the moving body, and transmits the position information and information on the behavior characteristics to an in-vehicle device as moving body identification information.
According to this, as in the first invention, it is possible to accurately determine whether the moving bodies detected by the roadside device and the in-vehicle terminal are the same.
In the fifteenth invention, an in-vehicle device detects a moving body existing on the road based on the output of a vehicle-mounted sensor, acquires the position information of the moving body, detects the behavior characteristics of the moving body, and performs an identification process for determining whether the moving body detected by a roadside device and the moving body detected by the in-vehicle device itself are the same, depending on whether the behavior characteristics received from the roadside device and the behavior characteristics generated by the in-vehicle device match.
According to this, as in the first invention, it is possible to accurately determine whether the moving bodies detected by the roadside device and the in-vehicle terminal are the same.
Hereinafter, embodiments of the present disclosure will be described with reference to the drawings.
(First Embodiment)
FIG. 1 is an overall configuration diagram of the moving body detection system according to the first embodiment.
This moving body detection system detects moving bodies (pedestrians, vehicles, and the like) existing on the road and supports the driving of a vehicle 1 (autonomous driving vehicle). The system includes an in-vehicle terminal 2 (in-vehicle device, second observation device) and an automatic driving ECU 3 (travel control device) mounted on the vehicle 1, a roadside unit 4 (roadside device, first observation device) installed on the road, and a pedestrian terminal 5 (pedestrian device) carried by a pedestrian.
ITS communication is performed among the in-vehicle terminal 2, the pedestrian terminal 5, and the roadside unit 4. This ITS communication is wireless communication using the frequency bands (for example, the 700 MHz band or the 5.8 GHz band) adopted in safe driving support wireless systems based on ITS (Intelligent Transport System). In this ITS communication, messages containing required information such as the position information of the vehicle 1 and the pedestrian are transmitted and received.
Of this ITS communication, communication performed between in-vehicle terminals 2 is referred to as vehicle-to-vehicle communication, and communication performed between the roadside unit 4 and an in-vehicle terminal 2 is referred to as road-to-vehicle communication. The in-vehicle terminal 2 and the roadside unit 4 can also perform ITS communication with the pedestrian terminal 5 (pedestrian-to-vehicle communication and roadside-to-pedestrian communication).
The in-vehicle terminal 2 transmits and receives messages containing position information and the like to and from other in-vehicle terminals 2 by ITS communication (vehicle-to-vehicle communication), determines the risk of a collision between vehicles 1, and, when there is a risk of collision, performs an alerting operation for the driver. The alerting operation is preferably performed using a car navigation device (not shown) connected to the in-vehicle terminal 2. The in-vehicle terminal 2 also transmits and receives messages to and from the pedestrian terminal 5 by ITS communication (pedestrian-to-vehicle communication) and determines the risk of a collision between the pedestrian and the vehicle 1.
The automatic driving ECU 3 detects obstacles around the vehicle 1 based on the output of the in-vehicle sensor 11, detects the state of the vehicle 1, and controls the traveling of the vehicle 1.
The roadside unit 4 notifies the in-vehicle terminal 2 and the pedestrian terminal 5 of the presence of vehicles 1 and pedestrians located around the unit by ITS communication (road-to-vehicle communication and roadside-to-pedestrian communication). This makes it possible to prevent collisions when turning right or left at an intersection with poor visibility. In addition, the roadside unit 4 distributes traffic information to the in-vehicle terminal 2 and the pedestrian terminal 5.
The pedestrian terminal 5 transmits and receives messages containing position information and the like to and from the in-vehicle terminal 2 by ITS communication (pedestrian-to-vehicle communication), determines the risk of a collision between the pedestrian and the vehicle 1, and, when there is a risk of collision, performs an alerting operation for the pedestrian.
The vehicle 1 is equipped with an in-vehicle sensor 11 and an in-vehicle camera 12. The in-vehicle sensor 11 is a radar, a lidar, or the like. The in-vehicle terminal 2 detects pedestrians (moving bodies) existing on the road around the vehicle based on the output of the in-vehicle sensor 11. The in-vehicle camera 12 photographs the road around the vehicle. The in-vehicle camera 12 may also serve as the in-vehicle sensor 11.
The roadside unit 4 includes a roadside sensor 42 and a roadside camera 43 in addition to an antenna 41 that transmits and receives radio waves for ITS communication. The roadside sensor 42 is a radar, a lidar, a camera, or the like. The roadside unit 4 detects pedestrians (moving bodies) existing on the road around the unit based on the output of the roadside sensor 42. The roadside camera 43 photographs the road around the unit. The roadside camera 43 may also serve as the roadside sensor 42.
In the present embodiment, an example in which the moving body to be processed is a pedestrian will be described, but the moving body to be processed may be a vehicle.
Next, the moving body detection process and the identification process performed by the roadside unit 4 according to the first embodiment will be described. FIG. 2 is an explanatory diagram showing an outline of the moving body detection process and the identification process performed by the roadside unit 4. The same processes are also performed in the in-vehicle terminal 2.
In the present embodiment, the roadside unit 4 detects pedestrians (moving bodies) existing on the road around the unit based on the output of the roadside sensor 42 (moving body detection process).
At this time, first, the relative position information of the pedestrian with respect to the unit, that is, the direction in which the pedestrian exists as seen from the roadside unit 4 and the distance from the roadside unit 4 to the pedestrian, is obtained based on the output of the roadside sensor 42. Then, the absolute position information (latitude, longitude) of the pedestrian is obtained from the relative position information of the pedestrian and the position information (latitude, longitude) of the installation position of the unit.
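As an illustrative sketch of this conversion (not part of the original disclosure), the following Python function turns a direction and distance observed by the roadside unit into absolute latitude and longitude using a simple flat-earth approximation; the function name and the approximation are assumptions made only for illustration.

```python
import math

def pedestrian_absolute_position(obs_lat, obs_lon, bearing_deg, distance_m):
    """Convert a bearing/distance observation into latitude and longitude.

    obs_lat, obs_lon : observer position in degrees (e.g. the roadside unit)
    bearing_deg      : direction of the pedestrian as seen from the observer,
                       measured clockwise from true north
    distance_m       : distance from the observer to the pedestrian in metres
    """
    earth_radius = 6378137.0  # metres (WGS84 equatorial radius)
    d_north = distance_m * math.cos(math.radians(bearing_deg))
    d_east = distance_m * math.sin(math.radians(bearing_deg))
    dlat = math.degrees(d_north / earth_radius)
    dlon = math.degrees(d_east / (earth_radius * math.cos(math.radians(obs_lat))))
    return obs_lat + dlat, obs_lon + dlon
```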
Also, in the present embodiment, the three-dimensional position information of the pedestrian, that is, the two-dimensional position information and the height information of the pedestrian, is obtained based on the relative position information (direction, distance) of the pedestrian. The two-dimensional position information may be the absolute position information (latitude, longitude), or may be position information on the horizontal plane (XY plane) of an appropriate coordinate system. The height information is information on the height of a representative point of the pedestrian's body above the road surface at the pedestrian's feet. The road surface at the pedestrian's feet is set as the reference horizontal plane (XY plane). To detect the vertical movement accompanying a person's walking, the center point of the person's upper body or the like is preferably used as the representative point.
Also, in the present embodiment, the distance information of the pedestrian (the distance from the roadside unit 4 to the pedestrian) is obtained based on the position information of the installation position of the roadside unit 4 and the position information of the pedestrian.
In the moving body detection process, the roadside unit 4 may instead detect pedestrians (moving bodies) in the image captured by the camera by image recognition using a machine learning model such as deep learning, acquire the position information (coordinates) of the pedestrian on the captured image, and acquire the absolute position information (latitude, longitude) of the pedestrian from the position information of the pedestrian on the captured image.
Also, in the present embodiment, the in-vehicle terminal 2 determines whether the pedestrians detected by the in-vehicle terminal 2 and by the roadside unit 4 are the same (identification process). In this identification process, moving body identification information for identifying the pedestrian (moving body) is acquired by each of the in-vehicle terminal 2 and the roadside unit 4, and the pedestrian is identified by comparing the moving body identification information acquired by the in-vehicle terminal 2 and the roadside unit 4.
First, in the present embodiment, the pedestrian is identified using the position information of the pedestrian acquired by each of the in-vehicle terminal 2 and the roadside unit 4 as moving body identification information. The pedestrian is also identified using the behavior characteristics of the pedestrian (the amplitude and period of the vertical movement of the body, the movement direction, the movement speed, and the like) as moving body identification information. Furthermore, in the present embodiment, the pedestrian terminal 5 transmits an identification signal with radio wave characteristics assigned to that terminal, and the pedestrian is identified using the radio wave characteristics of this identification signal as moving body identification information.
That is, in the present embodiment, the position information of the pedestrian, the behavior characteristics of the pedestrian, and the radio wave characteristics acquired by each of the in-vehicle terminal 2 and the roadside unit 4 are compared, and when any one of these comparison items matches, the pedestrians detected by the in-vehicle terminal 2 and the roadside unit 4 are determined to be the same. Therefore, even when some comparison items are judged not to match because of measurement errors or the like, it is sufficient for the other comparison items to match, and the pedestrian can be identified accurately.
Here, a pedestrian detected by the roadside unit 4 who is not among the pedestrians detected by the in-vehicle terminal 2, that is, a pedestrian who cannot be seen from the vehicle because of another moving body (pedestrian, vehicle) or an obstruction such as a building, is excluded. In the example shown in FIG. 2, pedestrian A is detected by both the roadside unit 4 and the in-vehicle terminal 2, but pedestrian B is blocked by another vehicle and is not detected by the in-vehicle terminal 2, and is therefore excluded from identification.
When comparing the moving body identification information (position information, behavior characteristics, radio wave characteristics) acquired by each of the in-vehicle terminal 2 and the roadside unit 4, the pedestrians may be determined to be the same when the difference between the pieces of moving body identification information acquired by the in-vehicle terminal 2 and the roadside unit 4 is within a permissible range (within the range of error).
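A minimal sketch of such a tolerance-based comparison could look as follows; the dictionary keys and tolerance values are hypothetical and only illustrate the rule that agreement on any one comparison item is sufficient.

```python
def same_pedestrian(obs_a, obs_b,
                    pos_tol_m=2.0, amp_tol_m=0.05, period_tol_s=0.2,
                    heading_tol_deg=20.0, speed_tol_mps=0.3):
    """Judge whether two observations describe the same pedestrian.

    obs_a / obs_b are dictionaries with the moving body identification
    information acquired by each observation device (hypothetical keys):
      'pos'     : (x, y) position in metres on a common horizontal plane
      'updown'  : (amplitude_m, period_s) of the vertical body movement
      'heading' : movement direction in degrees
      'speed'   : movement speed in m/s
      'radio'   : (frequency, transmission_cycle) of the identification signal
    The pedestrians are treated as the same if any one of the comparison
    items agrees within its permissible error range.
    """
    dx = obs_a['pos'][0] - obs_b['pos'][0]
    dy = obs_a['pos'][1] - obs_b['pos'][1]
    position_match = (dx * dx + dy * dy) ** 0.5 <= pos_tol_m

    behaviour_match = (
        abs(obs_a['updown'][0] - obs_b['updown'][0]) <= amp_tol_m and
        abs(obs_a['updown'][1] - obs_b['updown'][1]) <= period_tol_s and
        abs(obs_a['heading'] - obs_b['heading']) <= heading_tol_deg and
        abs(obs_a['speed'] - obs_b['speed']) <= speed_tol_mps
    )

    radio_match = obs_a['radio'] == obs_b['radio']

    return position_match or behaviour_match or radio_match
```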
In the present embodiment, the position information, behavior characteristics, and radio wave characteristics of the moving body (pedestrian) are used as moving body identification information to identify the pedestrian, but other features, for example the size of the detected moving body, can also be used as moving body identification information.
Next, the behavior characteristic acquisition process performed by the roadside unit 4 according to the first embodiment will be described. FIG. 3 is an explanatory diagram showing an outline of the behavior characteristic acquisition process performed by the roadside unit 4. The same process is also performed in the in-vehicle terminal 2.
In the present embodiment, the behavior characteristics of the pedestrian (moving body) detected by each of the in-vehicle terminal 2 and the roadside unit 4 are compared to determine whether the pedestrians detected by the in-vehicle terminal 2 and the roadside unit 4 are the same (identification process).
In the present embodiment, the amplitude and period of the pedestrian's body movement in the vertical direction (Z-axis direction), together with the movement direction and movement speed of the pedestrian on the horizontal plane (XY plane), are acquired as the behavior characteristics. Since such behavior characteristics differ for each pedestrian (moving body), they can be used as identification information for the pedestrian. For example, in the case of a pedestrian, the body moves up and down while walking, so vertical movement is detected, whereas a vehicle shows almost no vertical movement. The amplitude of the vertical movement is small for a child and large for an adult.
The behavior characteristics of the pedestrian (moving body) are detected using the three-dimensional position information of the pedestrian, that is, the two-dimensional position information and the height information of the pedestrian. Specifically, the amplitude and period of the vertical movement of the pedestrian's body are detected from how the pedestrian's height information changes, and the movement direction and movement speed of the pedestrian are obtained from how the two-dimensional position information of the pedestrian changes.
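The following sketch, assuming NumPy and a hypothetical layout of the 3-D position samples, shows one way these behavior characteristics might be estimated from a short time series; the peak-based period estimate is only an illustrative choice.

```python
import numpy as np

def behaviour_characteristics(times, positions_xyz):
    """Estimate walking behaviour from a time series of 3-D positions.

    times         : 1-D array of timestamps in seconds
    positions_xyz : N x 3 array, columns are X, Y (horizontal plane) and
                    Z (height of the body's representative point)
    Returns the amplitude and period of the vertical movement plus the mean
    movement direction and speed on the horizontal plane.
    """
    times = np.asarray(times, dtype=float)
    p = np.asarray(positions_xyz, dtype=float)

    # Vertical movement: amplitude from the height range, period from the
    # average spacing of the local maxima of the height signal.
    z = p[:, 2]
    amplitude = (z.max() - z.min()) / 2.0
    peaks = np.where((z[1:-1] > z[:-2]) & (z[1:-1] > z[2:]))[0] + 1
    period = float(np.mean(np.diff(times[peaks]))) if len(peaks) >= 2 else float('nan')

    # Horizontal movement: direction and speed from the net 2-D displacement.
    dxy = p[-1, :2] - p[0, :2]
    duration = times[-1] - times[0]
    heading_deg = float(np.degrees(np.arctan2(dxy[0], dxy[1])))  # clockwise from +Y (north)
    speed = float(np.linalg.norm(dxy) / duration) if duration > 0 else 0.0

    return {'amplitude': amplitude, 'period': period,
            'heading_deg': heading_deg, 'speed': speed}
```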
In the present embodiment, the behavior characteristics of the pedestrian are detected based on the three-dimensional position information of the pedestrian acquired from the detection results of the roadside sensor 42 and the in-vehicle sensor 11, but the behavior characteristics may instead be detected based on the position information of the radio wave transmission source acquired by the radio wave transmission source detection process, that is, the position information of the radio wave characteristic image.
A movement trajectory obtained by connecting the positions of the pedestrian at each time may also be acquired as a behavior characteristic. In this case, in addition to the two-dimensional movement trajectory of the pedestrian on the horizontal plane (XY plane), the three-dimensional movement trajectory of the pedestrian in XYZ space may be acquired.
Next, the radio wave characteristic image generation process performed by the roadside unit 4 according to the first embodiment will be described. FIG. 4 is an explanatory diagram showing an outline of the radio wave characteristic image generation process performed by the roadside unit 4. The same process is also performed in the in-vehicle terminal 2.
In the present embodiment, the pedestrian terminal 5 transmits an identification signal with radio wave characteristics assigned to that terminal, and the pedestrian is identified using these radio wave characteristics as moving body identification information. In particular, in the present embodiment, the roadside unit 4 generates a radio wave characteristic image that visualizes the radio wave characteristics of the identification signal. This radio wave characteristic image is drawn in a color and shape corresponding to the radio wave characteristics of each radio wave transmission source (pedestrian terminal 5).
In the example shown in FIG. 4, the radio wave characteristic image consists of a circular ring and a '+' mark. The '+' mark represents the position of the detected radio wave transmission source (pedestrian terminal 5). The ring represents the frequency of the identification signal and its transmission cycle (the interval at which the identification signal is transmitted), and is drawn centered on the '+' mark.
For example, when the frequency is 700 MHz, the ring is drawn in red, and when the frequency is 800 MHz, the ring is drawn in blue. When the frequency is switched between 700 MHz and 800 MHz, a red ring and a blue ring are drawn in an overlapping state.
When the transmission cycle is 100 ms, concentric double rings are drawn, and when the transmission cycle is 1 s, a single ring is drawn. When the transmission cycle is switched between 100 ms and 1 s, double rings and a single ring are drawn alternately.
In the present embodiment, the radio wave characteristic image consists of a circular ring and a '+' mark, and the frequency and transmission cycle of the identification signal are expressed by the color of the ring (red, blue) and its form (single, double), but this is merely an example. The radio wave characteristic image is not limited to this configuration, and other configurations are possible as long as the frequency, transmission cycle, and the like of the identification signal can be distinguished.
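A possible rendering of the example coding above, using OpenCV and hypothetical parameter choices (ring radius, colors given in BGR), is sketched below; an actual implementation may encode the characteristics differently.

```python
import cv2  # OpenCV; colors below are BGR

def draw_radio_characteristic(overlay, center, frequency_mhz, cycle_ms, radius=30):
    """Draw one radio wave characteristic image (ring + '+' mark) on an overlay.

    overlay       : H x W x 3 image the marker is drawn onto
    center        : (x, y) pixel position of the detected radio wave source
    frequency_mhz : frequency of the identification signal (700 -> red, 800 -> blue)
    cycle_ms      : transmission cycle (100 ms -> double ring, 1000 ms -> single ring)
    """
    colour = (0, 0, 255) if frequency_mhz == 700 else (255, 0, 0)
    cx, cy = center

    cv2.circle(overlay, (cx, cy), radius, colour, 2)
    if cycle_ms <= 100:                       # short cycle -> concentric double ring
        cv2.circle(overlay, (cx, cy), radius // 2, colour, 2)

    cv2.line(overlay, (cx - 5, cy), (cx + 5, cy), colour, 1)  # '+' mark at the
    cv2.line(overlay, (cx, cy - 5), (cx, cy + 5), colour, 1)  # source position
    return overlay
```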
Next, the image composition process performed by the roadside unit 4 according to the first embodiment will be described. FIG. 5 is an explanatory diagram showing an outline of the image composition process performed by the roadside unit 4. The same process is also performed in the in-vehicle terminal 2.
In the present embodiment, the roadside unit 4 obtains the relative position information of the radio wave transmission source with respect to the roadside unit 4 (the observing entity) based on the reception status of the identification signal transmitted from the pedestrian terminal 5 (radio wave transmission source). Next, the roadside unit 4 acquires the position information (coordinates) of the radio wave transmission source on the image captured by the roadside camera 43 shown in FIG. 5 (A) based on the relative position information of the radio wave transmission source. Then, based on the position information of the radio wave transmission source on the captured image, the radio wave characteristic image is superimposed and drawn at the position of the radio wave transmission source on the captured image, and the composite image shown in FIG. 5 (B) is generated.
When the radio wave characteristic images of multiple pedestrians are displayed at overlapping positions, the radio wave characteristic images are drawn in a semi-transparent state. As a result, the radio wave characteristics of each pedestrian can be recognized from the radio wave characteristic images even when the radio wave characteristic images of the pedestrians overlap.
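Such semi-transparent composition could be sketched as follows, assuming OpenCV alpha blending applied only to the pixels on which markers were actually drawn; the blending ratio is an illustrative assumption.

```python
import cv2

def compose_radio_overlay(camera_frame, overlay, alpha=0.5):
    """Blend the radio wave characteristic overlay onto the camera frame.

    Only pixels that were drawn on the overlay are mixed semi-transparently,
    so overlapping markers from several pedestrians remain readable.
    """
    drawn = overlay.any(axis=2)                                    # mask of drawn pixels
    blended = cv2.addWeighted(camera_frame, 1.0 - alpha, overlay, alpha, 0)
    composite = camera_frame.copy()
    composite[drawn] = blended[drawn]
    return composite
```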
Here, when a radio wave characteristic image is located inside a pedestrian's detection frame on the image captured by the camera, that radio wave characteristic image is the visualization of the identification signal transmitted from the pedestrian terminal 5. On the other hand, a radio wave characteristic image located outside a pedestrian's detection frame corresponds to radio waves from a transmission source other than the pedestrian terminal 5, or to a reflected wave in which the identification signal transmitted from the pedestrian terminal 5 was reflected by a wall or the like. Therefore, whether the radio wave transmission source of a radio wave characteristic image is the pedestrian terminal 5 can be determined by whether the radio wave characteristic image is located inside the pedestrian's detection frame.
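This inside/outside judgment reduces to a simple bounding-box test, sketched below with hypothetical coordinate conventions.

```python
def source_inside_detection_frame(source_xy, frame):
    """Return True if a radio wave source position falls inside a detection frame.

    source_xy : (x, y) position of the radio wave characteristic image on the
                captured image
    frame     : (x_min, y_min, x_max, y_max) pedestrian detection frame
    A source outside every pedestrian frame is treated as another transmitter
    or as a wall reflection of the identification signal.
    """
    x, y = source_xy
    x_min, y_min, x_max, y_max = frame
    return x_min <= x <= x_max and y_min <= y <= y_max
```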
Next, the moving body image extraction process performed by the roadside unit 4 according to the first embodiment will be described. FIG. 6 is an explanatory diagram showing an outline of the moving body image extraction process performed by the roadside unit 4. The same process is also performed in the in-vehicle terminal 2.
In the present embodiment, the roadside unit 4 obtains the position information of the pedestrian on the image captured by the roadside camera 43 based on the relative position information (direction, distance) of the pedestrian acquired from the output of the roadside sensor 42. Then, as shown in FIG. 6 (A), a detection frame for the pedestrian is set on the captured image based on the position information of the pedestrian on the captured image (framing process).
Next, as shown in FIGS. 6 (B-1) and 6 (B-2), the image region of the moving body's detection frame is cut out from the captured image, and the moving body image is extracted. At this time, by cutting out the image region of the moving body's detection frame while the radio wave characteristic image is superimposed on the captured image, a moving body image with the radio wave characteristic image superimposed on it can be acquired.
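A minimal sketch of this cropping step, assuming the detection frame is given in pixel coordinates on the composite image, is shown below.

```python
def extract_moving_body_image(composite_image, frame):
    """Cut the detection-frame region out of the composite image.

    composite_image : camera image with the radio wave characteristic images
                      already superimposed (a NumPy array)
    frame           : (x_min, y_min, x_max, y_max) detection frame derived from
                      the sensor-based position of the pedestrian
    """
    x_min, y_min, x_max, y_max = frame
    h, w = composite_image.shape[:2]
    x_min, y_min = max(0, x_min), max(0, y_min)          # clip to the image
    x_max, y_max = min(w, x_max), min(h, y_max)
    return composite_image[y_min:y_max, x_min:x_max].copy()
```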
By comparing the moving body images (including the radio wave characteristic images) acquired by each of the in-vehicle terminal 2 and the roadside unit 4, it is determined whether the pedestrians detected by the in-vehicle terminal 2 and the roadside unit 4 are the same (identification process). Here, when a pedestrian (moving body) is hidden behind an obstruction such as another pedestrian, the pedestrian cannot be seen well in the moving body image alone and therefore cannot be identified properly, but the pedestrian can be identified properly when the radio wave characteristic image is superimposed. Even when the pedestrian is hidden behind an obstruction or the pedestrian terminal itself is hidden (for example, the pedestrian terminal is inside a bag or a pocket), the pedestrian terminal 5 can still be detected as a radio wave transmission source owing to the penetration characteristics of radio waves, so the radio wave characteristic image can be drawn appropriately on the image captured by the camera.
Next, the position determination process performed by the in-vehicle terminal 2 and the identification process performed by the roadside unit 4 according to the first embodiment will be described. FIG. 7 is an explanatory diagram showing an outline of the position determination process performed by the in-vehicle terminal 2 and the identification process performed by the roadside unit 4.
In the present embodiment, first, the in-vehicle terminal 2 determines whether the pedestrian detected by the roadside unit 4 using the roadside sensor 42 and the pedestrian detected by the in-vehicle terminal 2 using the in-vehicle sensor 11 are the same. When it is determined that the pedestrians detected by the roadside unit 4 and the in-vehicle terminal 2 are the same, the in-vehicle terminal 2 selects either the pedestrian position information acquired by the roadside unit 4 or that acquired by the in-vehicle terminal 2, based on which has the higher accuracy (reliability), and fixes the position of that pedestrian.
On the other hand, when it is determined that the pedestrians detected by the roadside unit 4 and the in-vehicle terminal 2 are not the same, the in-vehicle terminal 2 treats the position information acquired by the roadside unit 4 and that acquired by the in-vehicle terminal 2 as belonging to different persons, and fixes the position of the pedestrian it detected with the position information acquired by the in-vehicle terminal 2.
In particular, in the present embodiment, the in-vehicle terminal 2 selects the position information based on the distance from the observing entity (in-vehicle terminal 2 or roadside unit 4) to the pedestrian (moving body). Specifically, the in-vehicle terminal 2 regards the accuracy (reliability) of the position information as higher the shorter the distance from the observing entity to the pedestrian, compares the distance from the roadside unit 4 to the pedestrian with the distance from the vehicle to the pedestrian, and selects the position information acquired by the observing entity whose distance to the pedestrian is shorter.
Here, since the vehicle and the pedestrian move, which of the pedestrian position information acquired by the roadside unit 4 and that acquired by the in-vehicle terminal 2 is selected changes according to the positional relationship among the vehicle, the pedestrian, and the roadside unit 4. For example, while the vehicle is far from the pedestrian, the position information acquired by the roadside unit 4 is selected, and when the vehicle approaches the pedestrian, the position information acquired by the in-vehicle terminal 2 is selected.
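A sketch of this distance-based selection, with hypothetical record fields, could be written as follows.

```python
def confirm_pedestrian_position(roadside_obs, vehicle_obs, same_pedestrian):
    """Choose which observation fixes the pedestrian position.

    roadside_obs / vehicle_obs are hypothetical records with
      'position' : (latitude, longitude) of the detected pedestrian
      'distance' : distance from the observing device to the pedestrian
    When both devices saw the same pedestrian, the observation taken from the
    shorter distance is treated as the more reliable one; otherwise both
    positions are kept as two different pedestrians.
    """
    if not same_pedestrian:
        return [roadside_obs['position'], vehicle_obs['position']]
    closer = roadside_obs if roadside_obs['distance'] <= vehicle_obs['distance'] else vehicle_obs
    return [closer['position']]
```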
When the in-vehicle terminal 2 determines that the pedestrian detected by the roadside unit 4 and the pedestrian detected by the in-vehicle terminal 2 are not the same, it treats them as different persons and selects both pieces of position information.
In the present embodiment, the position information is selected based on the distance from the observing entity (in-vehicle terminal 2 or roadside unit 4) to the pedestrian, but, for example, when the position information of the roadside unit 4 is always more accurate than that of the in-vehicle terminal 2, the position information acquired by the roadside unit 4 may always be selected.
Also, in the present embodiment, the pedestrian terminal 5 periodically broadcasts a message including the position information of the pedestrian, and the in-vehicle terminal 2 and the roadside unit 4 can acquire the position information of the pedestrian by receiving the messages transmitted from the pedestrian terminal 5. However, the pedestrian position information acquired by the pedestrian terminal 5 has low accuracy. For this reason, the in-vehicle terminal 2 may judge that the pedestrian detected by the roadside unit 4 using the roadside sensor 42 and the pedestrian carrying the pedestrian terminal 5 that is the source of the message are different persons, and thus recognize the pedestrian twice.
Therefore, in the present embodiment, the roadside unit 4 performs an identification process for determining whether the pedestrian carrying the pedestrian terminal 5 that is the source of the message and the pedestrian detected by the roadside unit 4 are the same. When the identification succeeds, the roadside unit 4 instructs the in-vehicle terminal 2 to ignore messages from the pedestrian terminal 5 carried by the identified pedestrian. Specifically, to the message it transmits to the in-vehicle terminal 2, the roadside unit 4 adds, as the detection result for the identified pedestrian, the pedestrian's position information, behavior characteristic information, and radio wave characteristic image, and, as instruction information to ignore the messages from the corresponding pedestrian terminal 5, the message ID assigned to the messages transmitted from that pedestrian terminal 5 (ignore message ID). The message ID is information that identifies the pedestrian terminal 5 that is the source of the message.
As a result, the in-vehicle terminal 2 ignores the messages from the identified pedestrian, so it can avoid judging that the pedestrian detected by the roadside unit 4 using the roadside sensor 42 and the pedestrian carrying the pedestrian terminal 5 that is the source of the message are different persons and thus recognizing the pedestrian twice.
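One way the in-vehicle terminal might apply the ignore message ID is sketched below; the message field names are assumptions for illustration only.

```python
def filter_pedestrian_messages(pedestrian_messages, roadside_messages):
    """Drop pedestrian terminal messages that the roadside unit marked as identified.

    pedestrian_messages : list of dicts with at least a 'message_id' field
    roadside_messages   : list of dicts that may carry an 'ignore_message_id'
                          field for pedestrians the roadside unit already identified
    Field names are illustrative; the actual layout is defined by the ITS
    messages exchanged in the system.
    """
    ignore_ids = {m['ignore_message_id']
                  for m in roadside_messages if 'ignore_message_id' in m}
    return [m for m in pedestrian_messages if m['message_id'] not in ignore_ids]
```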
Next, the procedure of processing performed in the moving body detection system according to the first embodiment will be described. FIG. 8 is a sequence diagram showing the procedure of processing performed in the moving body detection system.
The pedestrian terminal 5 periodically transmits an identification signal with the radio wave characteristics (frequency, transmission cycle) assigned to that terminal. The pedestrian terminal 5 also periodically transmits ITS communication messages.
Each of these messages includes the position information (latitude, longitude) of the pedestrian, radio wave characteristic information (frequency, transmission cycle) regarding the identification signal transmitted from the terminal, and a message ID that identifies the source of the message.
In the roadside unit 4, the moving body detection process and the behavior characteristic acquisition process described above are performed periodically. When the roadside unit 4 receives the identification signal from the pedestrian terminal 5, it performs the radio wave transmission source detection process, the radio wave characteristic image generation process, the image composition process, and the moving body image extraction process described above. When the roadside unit 4 receives a message from the pedestrian terminal 5, it performs the identification process. The roadside unit 4 then transmits an ITS communication message.
This message includes the position information (latitude, longitude) of the pedestrian detected by the roadside unit 4, the moving body image including the radio wave characteristic image, the radio wave characteristic information, the moving body behavior information (amplitude, period, and the like), the distance information (the distance from the unit to the pedestrian), and, as instruction information to ignore the messages from the pedestrian terminal 5 carried by the identified pedestrian, the message ID contained in the message received from that pedestrian terminal 5 (ignore message ID). When the identification is not successful, that is, when the pedestrian carrying the pedestrian terminal 5 that is the source of the message and the pedestrian detected using the roadside sensor 42 are different, the ignore message ID is not added.
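For illustration only, the two message types described above might be modeled with data classes like the following; the field names and types are assumptions and do not reflect the actual ITS message format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PedestrianMessage:
    """ITS message broadcast periodically by the pedestrian terminal."""
    message_id: str
    latitude: float
    longitude: float
    frequency_mhz: float             # radio characteristics of the identification signal
    transmission_cycle_ms: float

@dataclass
class RoadsideMessage:
    """ITS message broadcast by the roadside unit after its detection processes."""
    latitude: float                  # position of the detected pedestrian
    longitude: float
    moving_body_image: bytes         # cropped image including the radio characteristic image
    frequency_mhz: float
    transmission_cycle_ms: float
    updown_amplitude_m: float        # moving body behavior information
    updown_period_s: float
    distance_to_pedestrian_m: float
    ignore_message_id: Optional[str] = None  # absent when identification failed
```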
In the in-vehicle terminal 2, as in the roadside unit 4, the moving body detection process and the behavior characteristic acquisition process are performed periodically. When the in-vehicle terminal 2 receives the identification signal from the pedestrian terminal 5, it performs the radio wave transmission source detection process, the radio wave characteristic image generation process, the image composition process, and the moving body image extraction process, as in the roadside unit 4. When the in-vehicle terminal 2 receives a message from the pedestrian terminal 5 and also receives a message from the roadside unit 4, it performs the identification process and the position determination process.
Next, the schematic configuration of the pedestrian terminal 5 according to the first embodiment will be described. FIG. 9 is a block diagram showing the schematic configuration of the pedestrian terminal 5.
The pedestrian terminal 5 includes an ITS communication unit 51, an identification signal transmission unit 52, a positioning unit 53, a memory 54, and a processor 55.
The ITS communication unit 51 transmits and receives messages to and from the in-vehicle terminal 2 and the roadside unit 4 by ITS communication (pedestrian-to-vehicle communication and roadside-to-pedestrian communication). Messages are transmitted by broadcast.
The identification signal transmission unit 52 transmits the identification signal with the radio wave characteristics (frequency, transmission cycle) assigned to the terminal at a fixed cycle.
The positioning unit 53 measures the position of the terminal with a satellite positioning system such as GNSS (Global Navigation Satellite System), that is, GPS (Global Positioning System) or QZSS (Quasi-Zenith Satellite System), and acquires the position information (latitude, longitude) of the terminal.
The memory 54 stores programs and the like executed by the processor 55.
The processor 55 executes the programs stored in the memory 54 to perform various kinds of processing, for example, processing related to the transmission and reception of messages by the ITS communication unit 51 and processing related to the transmission of the identification signal by the identification signal transmission unit 52.
Next, the schematic configuration of the roadside unit 4 according to the first embodiment will be described. FIG. 10 is a block diagram showing the schematic configuration of the roadside unit 4.
The roadside unit 4 includes an ITS communication unit 44, an identification signal receiving unit 45, a memory 46, and a processor 47, in addition to the roadside sensor 42 and the roadside camera 43.
The ITS communication unit 44 transmits and receives messages to and from the in-vehicle terminal 2 and the pedestrian terminal 5 by ITS communication (road-to-vehicle communication and roadside-to-pedestrian communication). Messages are transmitted by broadcast.
The identification signal receiving unit 45 receives the identification signal transmitted from the pedestrian terminal 5.
The memory 46 stores programs and the like executed by the processor 47.
The processor 47 executes the programs stored in the memory 46 to perform various kinds of processing related to information collection. In the present embodiment, the processor 47 performs the moving body detection process, the behavior characteristic acquisition process, the radio wave transmission source detection process, the radio wave characteristic image generation process, the image composition process, the moving body image extraction process, the identification process, and the like.
In the moving body detection process, the processor 47 detects a pedestrian (moving body) based on the output of the roadside sensor 42 and acquires the position information of the pedestrian. Specifically, the processor 47 first acquires the relative position information (direction, distance) of the pedestrian with respect to the roadside unit 4 based on the output of the roadside sensor 42. Next, the processor 47 acquires the absolute position information (latitude, longitude) of the pedestrian based on the relative position information (direction, distance) of the pedestrian and the position information (latitude, longitude) of the installation position of the unit.
In the moving body detection process, the processor 47 also acquires the three-dimensional position information of the pedestrian based on the relative position information (direction, distance) of the pedestrian, and acquires the distance information of the pedestrian (the distance from the unit to the pedestrian) based on the position information (latitude, longitude) of the pedestrian and the position information (latitude, longitude) of the installation position of the unit.
In the behavior characteristic acquisition process, the processor 47 detects the behavior characteristics of the pedestrian (moving body) (the amplitude and period of the vertical movement of the body, the movement direction, the movement speed, and the like) based on the three-dimensional position information of the pedestrian, and acquires the behavior characteristic information of the pedestrian.
In the radio wave transmission source detection process, the processor 47 detects the radio wave transmission source (pedestrian terminal 5) based on the reception status of the identification signal at the identification signal receiving unit 45, and acquires the position information of the radio wave transmission source.
In the radio wave characteristic image generation process, the processor 47 detects the radio wave characteristics (frequency and transmission cycle) of the identification signal received by the identification signal receiving unit 45, and generates a radio wave characteristic image that visualizes the radio wave characteristics of the identification signal.
In the image composition process, the processor 47 generates a composite image in which the radio wave characteristic image is superimposed and drawn on the image captured by the roadside camera 43.
In the moving body image extraction process, the processor 47 acquires the position information of the pedestrian on the image captured by the roadside camera 43 (the coordinates of the detection frame) based on the relative position information (direction, distance) of the pedestrian with respect to the unit. The processor 47 then cuts out the image region of the moving body's detection frame from the captured image based on the position information of the pedestrian on the captured image, and acquires the moving body image. At this time, by cutting out the image region of the moving body's detection frame while the radio wave characteristic image is superimposed on the captured image, a moving body image including the radio wave characteristic image can be acquired.
In the identification process, the processor 47 determines whether the pedestrian carrying the pedestrian terminal 5 that is the source of the message and the pedestrian detected by the unit using the roadside sensor 42 are the same, depending on whether the radio wave characteristics (frequency and transmission cycle) contained in the message received from the pedestrian terminal 5 match the radio wave characteristics of the identification signal received from the pedestrian terminal 5.
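A minimal sketch of this radio wave characteristic comparison, with hypothetical field names and tolerances, is shown below.

```python
def radio_characteristics_match(message, received_signal,
                                freq_tol_mhz=1.0, cycle_tol_ms=10.0):
    """Compare the radio characteristics reported in a pedestrian message with
    those measured from the received identification signal.

    message         : dict with 'frequency_mhz' and 'transmission_cycle_ms'
                      taken from the pedestrian terminal's ITS message
    received_signal : dict with the same keys measured by the identification
                      signal receiver
    A match means the message sender and the sensor-detected pedestrian can be
    treated as the same person.
    """
    return (abs(message['frequency_mhz'] - received_signal['frequency_mhz']) <= freq_tol_mhz
            and abs(message['transmission_cycle_ms'] - received_signal['transmission_cycle_ms']) <= cycle_tol_ms)
```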
Next, the schematic configuration of the vehicle 1 according to the first embodiment will be described. FIG. 11 is a block diagram showing the schematic configuration of the vehicle 1.
The vehicle 1 includes a steering ECU 15, a drive ECU 16, and a braking ECU 17, in addition to the in-vehicle terminal 2, the automatic driving ECU 3, the in-vehicle sensor 11, and the in-vehicle camera 12.
The automatic driving ECU 3 is connected to the steering ECU 15, the drive ECU 16, and the braking ECU 17, and controls the steering ECU 15, the drive ECU 16, and the braking ECU 17 based on the detection results of the in-vehicle sensor 11 to realize automatic driving (autonomous traveling) of the vehicle 1.
Here, the steering ECU 15 controls the steering mechanism of the vehicle, the drive ECU 16 controls the drive mechanism (engine, electric motor, and the like) of the vehicle, and the braking ECU 17 controls the braking mechanism of the vehicle.
The in-vehicle terminal 2 includes an ITS communication unit 21, an identification signal receiving unit 22, a positioning unit 23, a memory 24, and a processor 25.
The ITS communication unit 21 transmits and receives messages to and from the pedestrian terminal 5, other in-vehicle terminals 2, and the roadside unit 4 by ITS communication (pedestrian-to-vehicle communication, vehicle-to-vehicle communication, and road-to-vehicle communication). Messages are transmitted by broadcast.
The positioning unit 23 measures the position of the terminal with a satellite positioning system such as GNSS, that is, GPS or QZSS, and acquires the position information (latitude, longitude) of the terminal.
The identification signal receiving unit 22 receives the identification signal transmitted from the pedestrian terminal 5.
The memory 24 stores programs and the like executed by the processor 25.
The processor 25 executes the programs stored in the memory 24 to perform various kinds of processing related to information collection. In the present embodiment, the processor 25 performs the moving body detection process, the behavior characteristic acquisition process, the radio wave transmission source detection process, the radio wave characteristic image generation process, the image composition process, the moving body image extraction process, the identification process, the position determination process, and the like.
In the moving body detection process, the processor 25 detects a pedestrian (moving body) based on the output of the in-vehicle sensor 11 and acquires the position information of the pedestrian. Specifically, the processor 25 first acquires the relative position information (direction, distance) of the pedestrian with respect to the own device based on the output of the in-vehicle sensor 11. Next, the processor 25 acquires the absolute position information (latitude, longitude) of the pedestrian based on the relative position information (direction, distance) of the pedestrian and the position information (latitude, longitude) of the current position of the own device.
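As a rough illustration of this conversion, the following sketch turns a relative observation (direction measured clockwise from north, distance in metres) and the own position into absolute latitude and longitude under a flat-earth approximation; the function name and the Earth-radius constant are illustrative assumptions, not part of the embodiment.

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # illustrative WGS-84 equatorial radius

def relative_to_absolute(own_lat, own_lon, direction_deg, distance_m):
    """Convert a (direction, distance) observation into latitude/longitude.

    Uses a small-displacement flat-earth approximation, which is adequate
    for the few-hundred-metre ranges of a roadside or in-vehicle sensor.
    """
    theta = math.radians(direction_deg)        # clockwise from north
    north = distance_m * math.cos(theta)       # metres toward north
    east = distance_m * math.sin(theta)        # metres toward east
    dlat = math.degrees(north / EARTH_RADIUS_M)
    dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(own_lat))))
    return own_lat + dlat, own_lon + dlon

# Example: a pedestrian 40 m away at a direction of 30 degrees from the vehicle position.
print(relative_to_absolute(35.6810, 139.7670, 30.0, 40.0))
```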
Further, in the moving body detection process, the processor 25 acquires the three-dimensional position information of the pedestrian based on the relative position information (direction, distance) of the pedestrian. The processor 25 also acquires the distance information of the pedestrian (the distance from the own device to the pedestrian) based on the position information (latitude, longitude) of the pedestrian and the position information (latitude, longitude) of the current position of the own device.
In the behavior characteristic acquisition process, the processor 25 detects the behavior characteristics of the pedestrian (moving body), such as the amplitude and period of the vertical movement of the body, the moving direction, and the moving speed, based on the three-dimensional position information of the pedestrian, and acquires the behavior characteristic information of the pedestrian.
In the radio wave transmission source detection process, the processor 25 detects the radio wave transmission source (the pedestrian terminal 5) based on the reception status of the identification signal at the identification signal receiving unit 22, and acquires the position information of the radio wave transmission source.
In the radio wave characteristic image generation process, the processor 25 detects the radio wave characteristics (frequency and transmission cycle) of the identification signal received by the identification signal receiving unit 22, and generates a radio wave characteristic image that visualizes those radio wave characteristics.
In the image composition process, the processor 25 generates a composite image in which the radio wave characteristic image is superimposed and drawn on the captured image of the vehicle-mounted camera 12.
In the moving body image extraction process, the processor 25 acquires the position information of the pedestrian on the captured image of the in-vehicle camera 12 (the coordinates of the detection frame) based on the relative position information (direction, distance) of the pedestrian with respect to the own device. Then, the processor 25 cuts out the image area of the detection frame of the moving body from the captured image based on the position information of the pedestrian on the captured image, and acquires the moving body image. At this time, by cutting out the image area of the detection frame of the moving body while the radio wave characteristic image is superimposed on the captured image, a moving body image including the radio wave characteristic image can be acquired.
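A minimal sketch of cutting the detection-frame region out of the composite image is shown below, assuming the image is held as a NumPy array and the detection frame is given as pixel coordinates (x, y, width, height); these representations are assumptions for illustration only.

```python
import numpy as np

def extract_moving_body_image(composite_image, detection_frame):
    """Cut out the detection-frame region of a composite (camera + overlay) image.

    composite_image: H x W x 3 array with the radio wave characteristic
    image already drawn onto the camera frame.
    detection_frame: (x, y, width, height) in pixel coordinates.
    """
    x, y, w, h = detection_frame
    h_img, w_img = composite_image.shape[:2]
    x0, y0 = max(0, x), max(0, y)                 # clamp to the image bounds
    x1, y1 = min(w_img, x + w), min(h_img, y + h)
    return composite_image[y0:y1, x0:x1].copy()

# Example: crop a 40 x 80 pixel pedestrian region from a dummy 480 x 640 frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(extract_moving_body_image(frame, (300, 200, 40, 80)).shape)
```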
In the identification process, the processor 25 determines whether or not the pedestrian detected by the roadside machine 4 and the pedestrian detected by the own device are the same. At this time, the processor 25 compares the position information of the pedestrian included in the message received from the roadside machine 4 with the position information of the pedestrian acquired by the own device, compares the moving body image (including the radio wave characteristic image) included in the message received from the roadside machine 4 with the moving body image (including the radio wave characteristic image) generated by the own device, and compares the behavior characteristic information of the pedestrian included in the message received from the roadside machine 4 with the behavior characteristic information of the pedestrian generated by the own device, and determines from these comparisons whether or not the pedestrians detected by the roadside machine 4 and the own device are the same.
In the position determination process, the processor 25 selects either the position information of the pedestrian acquired by the roadside machine 4 or the position information of the pedestrian acquired by the own device, according to the result of the identification process, and determines the position of the pedestrian existing on the road. Specifically, when the pedestrians detected by the roadside machine 4 and the own device are the same, the position information with the shorter distance to the pedestrian (that is, the higher accuracy) is selected from the position information acquired by the roadside machine 4 and the position information acquired by the in-vehicle terminal 2. On the other hand, when the pedestrians detected by the roadside machine 4 and the own device are not the same, the position information acquired by the in-vehicle terminal 2 is selected.
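A minimal sketch of this identification and position determination logic is shown below; the similarity scores, the threshold values, and the function names are illustrative assumptions and are not specified by the embodiment.

```python
def same_pedestrian(position_gap_m, image_similarity, behavior_similarity,
                    max_gap_m=2.0, min_img_sim=0.8, min_beh_sim=0.8):
    """Combine the three comparisons into one identity decision.

    position_gap_m      : distance between the two reported positions.
    image_similarity    : similarity of the two moving body images, 0..1.
    behavior_similarity : similarity of the two behavior characteristic sets, 0..1.
    The thresholds are illustrative placeholders.
    """
    return (position_gap_m <= max_gap_m
            and image_similarity >= min_img_sim
            and behavior_similarity >= min_beh_sim)

def choose_position(same, roadside_pos, roadside_dist_m, onboard_pos, onboard_dist_m):
    """Pick the position observed from the shorter sensing distance when the
    detections are the same pedestrian; otherwise use the on-board position."""
    if same:
        return roadside_pos if roadside_dist_m < onboard_dist_m else onboard_pos
    return onboard_pos

# Example: a 0.6 m gap with high image/behavior similarity selects the roadside
# position because the roadside sensor is closer to the pedestrian.
same = same_pedestrian(0.6, 0.93, 0.88)
print(choose_position(same, (35.0001, 139.0002), 12.0, (35.0002, 139.0003), 25.0))
```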
In the present embodiment, the distance from the roadside machine 4 to the pedestrian is notified from the roadside machine 4 to the in-vehicle terminal 2, but the processor 25 of the in-vehicle terminal 2 may instead calculate the distance from the roadside machine 4 to the pedestrian based on the position information of the installation position of the roadside machine 4 and the position information of the pedestrian.
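If the in-vehicle terminal calculates that distance itself, a standard great-circle (haversine) computation such as the following sketch would be sufficient; the function name and the Earth-radius value are illustrative.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6_371_000.0):
    """Great-circle distance in metres between two latitude/longitude points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_m * math.asin(math.sqrt(a))

# Example: distance from an assumed roadside installation point to a pedestrian.
print(haversine_m(35.68100, 139.76700, 35.68120, 139.76730))
```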
Further, in the present embodiment, an example in which the moving body to be processed is a pedestrian has been described, but the moving body to be processed may be a vehicle. In that case, the in-vehicle terminal 2 is provided with an identification signal transmitting unit that transmits an identification signal.
Next, the operation procedure of the pedestrian terminal 5 according to the first embodiment will be described. FIG. 12 is a flow chart showing an operation procedure of the pedestrian terminal 5.
In the pedestrian terminal 5, the identification signal transmitting unit 52 transmits an identification signal based on the radio wave characteristics (frequency, transmission cycle) assigned to the own device (ST101). Further, the positioning unit 53 measures the position of its own device by GNSS and acquires the position information of a pedestrian (ST102).
Next, in the pedestrian terminal 5, the processor 55 generates an ITS communication message (ST103). Then, the ITS communication unit 51 transmits the message (ST104). This message includes the position information (latitude, longitude) of the pedestrian, the radio wave characteristic information (frequency, transmission cycle) of the identification signal transmitted from the own device, and a message ID that identifies the transmission source of the message.
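A minimal sketch of assembling such a message is shown below; the field names and the JSON encoding are illustrative assumptions, since the embodiment specifies only which items the message carries, not its wire format.

```python
import json
import uuid

def build_pedestrian_message(lat, lon, frequency_hz, tx_cycle_ms):
    """Assemble the items listed above into one broadcast payload."""
    return json.dumps({
        "message_id": str(uuid.uuid4()),     # identifies the transmission source
        "position": {"lat": lat, "lon": lon},
        "radio_characteristics": {
            "frequency_hz": frequency_hz,
            "transmission_cycle_ms": tx_cycle_ms,
        },
    })

# Example payload with assumed frequency and transmission-cycle values.
print(build_pedestrian_message(35.68100, 139.76700, 920_000_000, 100))
```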
Next, the operation procedure of the roadside machine 4 according to the first embodiment will be described. FIGS. 13 and 14 are flow charts showing the operation procedure of the roadside machine 4.
As shown in FIG. 13A, in the roadside machine 4, the processor 47 detects pedestrians (moving bodies) existing on the surrounding roads based on the output of the roadside sensor 42, and determines whether or not a pedestrian is present on the surrounding roads (ST201).
Here, when a pedestrian is present on the surrounding road (Yes in ST201), the processor 47 acquires the relative position information (direction, distance) of the pedestrian with respect to the own device based on the output of the roadside sensor 42. Further, the processor 47 acquires the position information (latitude, longitude) of the pedestrian, the height information of the pedestrian, and the distance information of the pedestrian (the distance from the own device to the pedestrian) based on the relative position information of the pedestrian and the position information (latitude, longitude) of the current position of the own device (moving body detection process) (ST202).
Next, the processor 47 detects the behavior characteristics of the pedestrian (moving body) based on the position information and the height information (three-dimensional position information) of the pedestrian, and acquires the behavior characteristic information of the pedestrian (the amplitude and period of the vertical movement, the moving direction, and the moving speed) (behavior characteristic acquisition process) (ST203).
Further, as shown in FIG. 13B, in the roadside machine 4, when the identification signal receiving unit 45 receives the identification signal transmitted from the pedestrian terminal 5 (Yes in ST211), the processor 47 detects the radio wave characteristics (frequency and transmission cycle) of the received identification signal, and acquires the position information of the radio wave transmission source (radio wave transmission source detection process) (ST212).
Next, the processor 47 generates a radio wave characteristic image (radio wave source visualization image) that visualizes the radio wave characteristics of the identification signal for each pedestrian (radio wave characteristic image generation processing) (ST213).
Further, the processor 47 acquires a captured image from the roadside camera 43 that photographs the surrounding roads (ST214). Next, the processor 47 acquires the position information (coordinates) of the pedestrian on the captured image of the roadside camera 43 based on the relative position information (direction, distance) of the pedestrian with respect to the roadside machine 4 (ST215).
Next, the processor 47 generates a composite image in which the radio wave characteristic image is superimposed and drawn on the captured image of the roadside camera 43 based on the position information of the radio wave transmission source (image composition process) (ST216). Then, the processor 47 cuts out the image area of the pedestrian detection frame from the captured image of the camera based on the position information of the pedestrian on the captured image, and acquires the moving body image (including the radio wave characteristic image) (moving body image extraction process) (ST217).
Further, as shown in FIG. 14, in the roadside machine 4, when the ITS communication unit 44 receives the message transmitted from the pedestrian terminal 5 (Yes in ST221), the processor 47 determines whether or not the pedestrian possessing the pedestrian terminal 5 that transmitted the message and the pedestrian detected by the own device using the roadside sensor 42 are the same, depending on whether or not the radio wave characteristics (frequency and transmission cycle) included in the message received from the pedestrian terminal 5 match the radio wave characteristics of the identification signal received from the pedestrian terminal 5 (identification process) (ST222).
Here, when the pedestrian possessing the pedestrian terminal 5 that transmitted the message and the pedestrian detected using the roadside sensor 42 are the same (Yes in ST222), the processor 47 creates an ITS communication message to which the message ID (ignore message ID) included in the message received from the pedestrian terminal 5 is added, as instruction information for ignoring the message from the pedestrian terminal 5 possessed by the already-identified pedestrian (ST223). Then, the ITS communication unit 44 transmits the message to the in-vehicle terminal 2 (ST225).
On the other hand, when the pedestrian possessing the pedestrian terminal 5 that transmitted the message and the pedestrian detected using the roadside sensor 42 are different (No in ST222), the processor 47 creates a message to which no ignore message ID is added (ST224). Then, the ITS communication unit 44 transmits the message to the in-vehicle terminal 2 (ST225).
The message transmitted to the in-vehicle terminal 2 includes, regardless of whether or not the ignore message ID is added, the position information (latitude, longitude) of the pedestrian detected by the roadside machine 4, the moving body image including the radio wave characteristic image, the radio wave characteristic information, the moving body behavior information, and the distance information (the distance from the own device to the pedestrian).
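A minimal sketch of assembling the roadside message, with the ignore message ID attached only when the identification succeeded, is shown below; the field names and data types are illustrative assumptions.

```python
def build_roadside_message(position, moving_body_image_jpeg, radio_characteristics,
                           behavior_info, distance_m, ignore_message_id=None):
    """Assemble the roadside message; the ignore message ID is attached only
    when the identification of ST222 succeeded.

    Field names are illustrative; image data is assumed to be carried as bytes
    (e.g. JPEG) by the underlying ITS message format.
    """
    message = {
        "pedestrian_position": position,            # (latitude, longitude)
        "moving_body_image": moving_body_image_jpeg,
        "radio_characteristics": radio_characteristics,
        "moving_body_behavior": behavior_info,
        "distance_to_pedestrian_m": distance_m,
    }
    if ignore_message_id is not None:
        message["ignore_message_id"] = ignore_message_id
    return message

# Example: identification succeeded, so the pedestrian terminal's message ID
# "a1b2" is forwarded as the ignore message ID.
print(build_roadside_message((35.681, 139.767), b"...", {"frequency_hz": 920_000_000},
                             {"period_s": 0.5}, 12.0, ignore_message_id="a1b2"))
```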
Next, the operation procedure of the in-vehicle terminal 2 according to the first embodiment will be described. FIGS. 15 and 16 are flow charts showing the operation procedure of the in-vehicle terminal 2.
As shown in FIG. 15A, in the in-vehicle terminal 2, the processor 25 detects pedestrians (moving bodies) existing on the surrounding roads based on the output of the in-vehicle sensor 11, and determines whether or not a pedestrian is present on the surrounding roads (ST301).
Here, when a pedestrian is present on the surrounding road (Yes in ST301), the processor 25 acquires the relative position information (direction, distance) of the pedestrian with respect to the own device based on the output of the in-vehicle sensor 11. Further, the processor 25 acquires the position information (latitude, longitude) of the pedestrian, the height information of the pedestrian, and the distance information of the pedestrian (the distance from the own device to the pedestrian) based on the relative position information of the pedestrian and the position information (latitude, longitude) of the current position of the own device (moving body detection process) (ST302).
Next, the processor 25 detects the behavior characteristics of the pedestrian (moving body) based on the position information and the height information (three-dimensional position information) of the pedestrian, and acquires the behavior characteristic information of the pedestrian (the amplitude and period of the vertical movement, the moving direction, and the moving speed) (behavior characteristic acquisition process) (ST303).
Further, as shown in FIG. 15B, in the in-vehicle terminal 2, when the identification signal receiving unit 22 receives the identification signal transmitted from the pedestrian terminal 5 (Yes in ST311), the processor 25 detects the radio wave characteristics (frequency and transmission cycle) of the received identification signal, and acquires the position information of the radio wave transmission source (radio wave transmission source detection process) (ST312).
Next, the processor 25 generates a radio wave characteristic image (radio wave source visualization image) that visualizes the radio wave characteristics of the identification signal for each pedestrian (radio wave characteristic image generation processing) (ST313).
Further, the processor 25 acquires a captured image from the in-vehicle camera 12 that photographs the surrounding roads (ST314). Next, the processor 25 acquires the position information (coordinates) of the pedestrian on the captured image of the in-vehicle camera 12 based on the relative position information (direction, distance) of the pedestrian with respect to the vehicle (ST315).
Next, the processor 25 generates a composite image in which the radio wave characteristic image is superimposed and drawn on the captured image of the in-vehicle camera 12 based on the position information of the radio wave transmission source (image composition process) (ST316). Then, the processor 25 cuts out the image area of the pedestrian detection frame from the captured image based on the position information of the pedestrian on the captured image of the in-vehicle camera 12, and acquires the moving body image (including the radio wave characteristic image) (moving body image extraction process) (ST317).
Further, as shown in FIG. 16, in the in-vehicle terminal 2, when the ITS communication unit 21 receives the message transmitted from the pedestrian terminal 5 (Yes in ST321) and further receives the message transmitted from the roadside machine 4 (Yes in ST322), the processor 25 determines whether or not the message ID included in the message received from the pedestrian terminal 5 is the same as the ignore message ID included in the message received from the roadside machine 4 (ST323).
Here, when the message ID included in the message from the pedestrian terminal 5 is the same as the ignore message ID (Yes in ST323), the processor 25 ignores the message from that pedestrian terminal 5 and excludes the position information included in that message from the processing targets (ST324).
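A minimal sketch of this filtering step is shown below, assuming messages are handled as dictionaries with illustrative key names.

```python
def filter_pedestrian_messages(pedestrian_messages, roadside_messages):
    """Drop pedestrian-terminal messages whose IDs the roadside machine marked
    as already identified (ST323/ST324)."""
    ignore_ids = {m["ignore_message_id"]
                  for m in roadside_messages if "ignore_message_id" in m}
    return [m for m in pedestrian_messages if m["message_id"] not in ignore_ids]

# Example: the message with ID "a1b2" is discarded, "c3d4" is kept.
peds = [{"message_id": "a1b2", "position": (35.681, 139.767)},
        {"message_id": "c3d4", "position": (35.682, 139.768)}]
roads = [{"ignore_message_id": "a1b2", "pedestrian_position": (35.681, 139.767)}]
print(filter_pedestrian_messages(peds, roads))
```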
Next, the processor 25 determines whether or not the pedestrian detected by the roadside machine 4 and the pedestrian detected by the own device are the same (identification process) (ST325). At this time, the processor 25 compares the position information of the pedestrian included in the message received from the roadside machine 4 with the position information of the pedestrian acquired by the own device, compares the moving body image (including the radio wave characteristic image) included in the message received from the roadside machine 4 with the moving body image (including the radio wave characteristic image) generated by the own device, and compares the behavior characteristic information of the pedestrian included in the message received from the roadside machine 4 with the behavior characteristic information of the pedestrian generated by the own device, and determines from these comparisons whether or not the pedestrian detected by the roadside machine 4 and the pedestrian detected by the own device are the same.
Here, when the moving body detected by the roadside machine 4 and the moving body detected by the own device are the same (Yes in ST325), the processor 25 selects, from the position information of the pedestrian acquired by the roadside machine 4 and the position information of the pedestrian acquired by the in-vehicle terminal 2, the position information with the shorter distance to the pedestrian (that is, the higher accuracy) (moving body position determination process) (ST326).
On the other hand, when the pedestrian detected by the roadside machine 4 and the pedestrian detected by the own device are not the same (No in ST325), the processor 25 treats the position information of the pedestrian acquired by the roadside machine 4 and the position information of the pedestrian acquired by the in-vehicle terminal 2 as belonging to different persons, and selects the position information acquired by the in-vehicle terminal 2 (moving body position determination process) (ST327).
The automatic driving ECU 3 acquires the position information of the pedestrian from the in-vehicle terminal 2, and controls the traveling of the own vehicle based on the position information of the pedestrian so as to avoid a collision with the pedestrian.
(Second Embodiment)
Next, the second embodiment will be described. The points not particularly mentioned here are the same as in the above-described embodiment. FIG. 17 is an explanatory diagram showing an outline of the moving body detection system according to the second embodiment.
In the first embodiment, the in-vehicle terminal 2 compares the moving body images including the radio wave characteristic images generated by each of the in-vehicle terminal 2 and the roadside machine 4, thereby identifying the pedestrian, that is, determining whether or not the pedestrians detected by each of the in-vehicle terminal 2 and the roadside machine 4 are the same (identification process).
On the other hand, in the present embodiment, the in-vehicle terminal 2 identifies the pedestrian by comparing the moving body images, not including the radio wave characteristic images, generated by each of the in-vehicle terminal 2 and the roadside machine 4. The comparison of the position information of the pedestrian detected by each of the in-vehicle terminal 2 and the roadside machine 4 and the comparison of the behavior characteristics of the pedestrian detected by each of them are the same as in the first embodiment.
Further, in the present embodiment, when the positional relationship with respect to the pedestrian is opposite between the roadside machine 4 and the in-vehicle terminal 2, one of the moving body images is flipped left and right and then compared.
Further, in the present embodiment, once the identification of the pedestrian succeeds in the in-vehicle terminal 2, that is, once it is determined that the pedestrian detected by the roadside machine 4 and the pedestrian detected by the in-vehicle terminal 2 are the same, the latest position information and the moving body image of the pedestrian are thereafter acquired by a tracking process for the pedestrian. Then, when the tracking process fails, the identification process is performed again.
In the tracking process, the pedestrian is tracked by image recognition. Specifically, the similarity between the person detected this time and the person detected last time is calculated by image recognition on the moving body images, and the person detected this time is associated with the person detected last time based on that similarity. In this way, the position information and the moving body image of the same pedestrian (moving body ID) can be acquired.
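A minimal sketch of this association step is shown below, using cosine similarity between appearance feature vectors as an illustrative similarity measure; the descriptor, the threshold, and the greedy matching strategy are assumptions, not requirements of the embodiment.

```python
import numpy as np

def associate_detections(previous, current, min_similarity=0.7):
    """Greedy association of current detections with previous ones by similarity.

    previous, current: lists of (moving_body_id, feature_vector) pairs, where the
    feature vector is assumed to come from some appearance descriptor.
    Returns a mapping from current IDs to the matched previous IDs.
    """
    matches = {}
    used = set()
    for cur_id, cur_feat in current:
        best_id, best_sim = None, min_similarity
        for prev_id, prev_feat in previous:
            if prev_id in used:
                continue
            sim = float(np.dot(cur_feat, prev_feat) /
                        (np.linalg.norm(cur_feat) * np.linalg.norm(prev_feat) + 1e-9))
            if sim > best_sim:
                best_id, best_sim = prev_id, sim
        if best_id is not None:
            matches[cur_id] = best_id        # the same pedestrian keeps its track
            used.add(best_id)
    return matches

# Example: the current detection "new-1" is linked to the previous track "ped-7".
prev = [("ped-7", np.array([0.9, 0.1, 0.3]))]
curr = [("new-1", np.array([0.88, 0.12, 0.31]))]
print(associate_detections(prev, curr))
```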
Next, the schematic configuration of the roadside machine 4 according to the second embodiment will be described. FIG. 18 is a block diagram showing a schematic configuration of the roadside machine 4.
In the present embodiment, as in the first embodiment (see FIG. 10), the roadside machine 4 includes the roadside sensor 42, the roadside camera 43, the ITS communication unit 44, the memory 46, and the processor 47, but the identification signal receiving unit 45 is omitted.
The processor 47 performs the moving body detection process, the behavior characteristic acquisition process, the moving body image extraction process, and the identification process as in the first embodiment, but does not perform the radio wave transmission source detection process, the radio wave characteristic image generation process, or the image composition process.
Next, the schematic configuration of the vehicle 1 according to the second embodiment will be described. FIG. 19 is a block diagram showing a schematic configuration of the vehicle 1.
In the present embodiment, as in the first embodiment (see FIG. 11), the vehicle 1 includes the in-vehicle terminal 2, the automatic driving ECU 3, the in-vehicle sensor 11, the in-vehicle camera 12, the steering ECU 15, the drive ECU 16, and the braking ECU 17.
The in-vehicle terminal 2 includes an ITS communication unit 21, a positioning unit 23, a memory 24, and a processor 25, as in the first embodiment, but the identification signal receiving unit 22 is omitted.
The processor 25 performs the moving body detection process, the behavior characteristic acquisition process, the moving body image extraction process, the identification process, and the position determination process, but does not perform the radio wave transmission source detection process, the radio wave characteristic image generation process, or the image composition process.
Next, the operation procedure of the roadside machine 4 according to the second embodiment will be described. FIG. 20 is a flow chart showing an operation procedure of the roadside machine 4.
In the roadside machine 4, the processor 47 detects pedestrians (moving bodies) existing on the surrounding roads based on the output of the roadside sensor 42, and determines whether or not a pedestrian is present on the surrounding roads (ST401).
Here, when a pedestrian is present on the surrounding road (Yes in ST401), the processor 47 acquires the relative position information (direction, distance) of the pedestrian with respect to the own device based on the output of the roadside sensor 42. Further, the processor 47 acquires the position information (latitude, longitude) of the pedestrian, the height information of the pedestrian, and the distance information of the pedestrian (the distance from the own device to the pedestrian) based on the relative position information of the pedestrian and the position information (latitude, longitude) of the installation position of the own device (moving body detection process) (ST402).
Next, the processor 47 detects the behavior characteristics of the pedestrian (moving body) based on the position information and the height information (three-dimensional position information) of the pedestrian, and acquires the behavior characteristic information of the pedestrian (the amplitude and period of the vertical movement, the moving direction, and the moving speed) (behavior characteristic acquisition process) (ST403).
Further, the processor 47 acquires a captured image from the roadside camera 43 that photographs the surrounding roads (ST404). Next, the processor 47 acquires the position information (coordinates) of the pedestrian on the captured image of the roadside camera 43 based on the relative position information (direction, distance) of the pedestrian with respect to the roadside machine 4 (ST405).
Next, the processor 47 cuts out the image area of the pedestrian detection frame from the captured image of the roadside camera 43 based on the position information of the pedestrian on the captured image, and acquires the moving body image (moving body image extraction process) (ST406).
Next, the processor 47 generates an ITS communication message (ST407). Then, the ITS communication unit 44 transmits the message to the in-vehicle terminal 2 (ST408). This message includes the position information (latitude, longitude) of the pedestrian detected by the roadside machine 4, the moving body image, the moving body behavior information, and the distance information (the distance from the own device to the moving body).
Next, the operation procedure of the in-vehicle terminal 2 according to the second embodiment will be described. FIG. 21 is a flow chart showing an operation procedure of the in-vehicle terminal 2.
As shown in FIG. 21A, in the in-vehicle terminal 2, the processor 25 detects pedestrians (moving bodies) existing on the surrounding roads based on the output of the in-vehicle sensor 11, and determines whether or not a pedestrian is present on the surrounding roads (ST501).
Here, when a pedestrian is present on the surrounding road (Yes in ST501), the processor 25 acquires the relative position information (direction, distance) of the pedestrian with respect to the own device based on the output of the in-vehicle sensor 11. Further, the processor 25 acquires the position information (latitude, longitude) of the pedestrian, the height information of the pedestrian, and the distance information of the pedestrian (the distance from the own device to the pedestrian) based on the relative position information of the pedestrian and the position information (latitude, longitude) of the current position of the own device (moving body detection process) (ST502).
Next, the processor 25 detects the behavior characteristics of the pedestrian (moving body) based on the position information and the height information (three-dimensional position information) of the pedestrian, and acquires the behavior characteristic information of the pedestrian (the amplitude and period of the vertical movement, the moving direction, and the moving speed) (behavior characteristic acquisition process) (ST503).
Further, the processor 25 acquires a captured image from the in-vehicle camera 12 that photographs the surrounding roads (ST504). Next, the processor 25 acquires the position information (coordinates) of the pedestrian on the captured image of the in-vehicle camera 12 based on the relative position information (direction, distance) of the pedestrian with respect to the own vehicle (ST505).
Next, the processor 25 cuts out the image area of the pedestrian detection frame from the captured image of the in-vehicle camera 12 based on the position information of the pedestrian on the captured image, and acquires the moving body image (moving body image extraction process) (ST506).
Further, as shown in FIG. 21B, in the in-vehicle terminal 2, when the ITS communication unit 21 receives the message transmitted from the roadside machine 4 (Yes in ST511), the processor 25 determines whether or not the moving body detected by the roadside machine 4 and the moving body detected by the own device are the same (identification process) (ST512). At this time, the processor 25 compares the position information of the pedestrian included in the message received from the roadside machine 4 with the position information of the pedestrian acquired by the own device, compares the moving body image included in the message received from the roadside machine 4 with the moving body image generated by the own device, and compares the behavior characteristic information of the pedestrian included in the message received from the roadside machine 4 with the behavior characteristic information of the pedestrian generated by the own device, and determines from these comparisons whether or not the moving body detected by the roadside machine 4 and the pedestrian detected by the own device are the same.
Here, when the pedestrian detected by the roadside machine 4 and the pedestrian detected by the own device are the same (Yes in ST512), the processor 25 selects, from the position information of the pedestrian acquired by the roadside machine 4 and the position information of the pedestrian acquired by the in-vehicle terminal 2, the position information with the shorter distance to the pedestrian (that is, the higher accuracy) (moving body position determination process) (ST513).
Next, the processor 25 shifts from the identification mode in the initial state to the tracking mode (ST515). In this tracking mode, pedestrians are tracked by image recognition. Then, if the pedestrian tracking fails (Yes in ST516), the mode returns to the identification mode (ST517).
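The mode transitions of ST515 to ST517 can be summarised by a small state machine such as the following sketch; the class and method names are illustrative.

```python
class PedestrianTracker:
    """Minimal sketch of the mode transitions: start in the identification mode,
    switch to tracking once identification succeeds, and fall back to
    identification when tracking fails."""

    def __init__(self):
        self.mode = "identification"

    def on_identification(self, same_pedestrian: bool):
        if self.mode == "identification" and same_pedestrian:
            self.mode = "tracking"           # corresponds to ST515

    def on_tracking_result(self, tracking_succeeded: bool):
        if self.mode == "tracking" and not tracking_succeeded:
            self.mode = "identification"     # corresponds to ST517

tracker = PedestrianTracker()
tracker.on_identification(True)      # identification succeeds -> tracking mode
tracker.on_tracking_result(False)    # tracking fails -> back to identification
print(tracker.mode)
```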
On the other hand, when the pedestrian detected by the roadside machine 4 and the pedestrian detected by the own device are not the same (No in ST512), the processor 25 treats the position information of the pedestrian acquired by the roadside machine 4 and the position information of the pedestrian acquired by the in-vehicle terminal 2 as belonging to different persons, and selects both pieces of position information (moving body position determination process) (ST514).
(Third Embodiment)
Next, the third embodiment will be described. The points not particularly mentioned here are the same as in the above-described embodiments. FIG. 22 is an explanatory diagram showing an outline of the moving body detection system according to the third embodiment.
In the first embodiment, the pedestrian (moving body) is identified based on the radio wave characteristics of the identification signal transmitted from the pedestrian terminal 5, whereas in the present embodiment, the pedestrian terminal 5 possessed by the pedestrian is provided with an indicator light 57, and the pedestrian is identified based on the lighting characteristics (lighting color, etc.) of the indicator light 57.
The indicator light 57 includes a light source such as an LED, and performs a lighting operation according to the lighting characteristic assigned to each terminal. That is, the lighting characteristic of the indicator light 57 serves as information for identifying the pedestrian terminal 5, that is, the pedestrian (terminal identification information, moving body identification information).
The characteristic of the indicator lamp 57 may be a lighting cycle or a blinking pattern in addition to the lighting color, or may be a combination of the lighting color and the lighting cycle.
Further, the indicator light 57 may be configured as a separate body from the housing of the pedestrian terminal 5 and connected to the pedestrian terminal 5 by wireless or wired communication, so that it can be attached to the outside of the pedestrian's clothes or luggage and easily recognized from the outside; alternatively, the indicator light 57 may be provided integrally with the housing of the pedestrian terminal 5.
In the roadside machine 4, the processor 47 performs a moving body detection process, a moving body image extraction process, a lighting characteristic acquisition process, an identification process, and the like.
In the moving object detection process, the processor 47 detects a pedestrian (moving object) existing on the surrounding road based on the output of the roadside sensor 42, and acquires the relative position information of the pedestrian.
In the moving body image extraction process, the processor 47 acquires the position information of the pedestrian on the captured image of the roadside camera 43 based on the relative position information of the pedestrian. Next, the processor 47 sets a pedestrian detection frame on the captured image based on the position information of the pedestrian on the captured image (frame setting process). Then, the processor 47 cuts out the image area of the detection frame of the moving body from the captured image and extracts the moving body image.
In the lighting characteristic acquisition process, the processor 47 detects the pedestrian indicator light 57 by image recognition of the moving object image extracted by the moving object image extraction process, and acquires the lighting characteristic of the indicator light 57.
In the identification process, the processor 47 determines whether or not the pedestrian possessing the pedestrian terminal 5 that transmitted the message and the pedestrian detected by the own device using the roadside sensor 42 are the same, depending on whether or not the lighting characteristic of the indicator light 57 included in the message received from the pedestrian terminal 5 matches the lighting characteristic of the indicator light 57 recognized by the own device from the captured image of the roadside camera 43.
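A minimal sketch of this lighting-characteristic comparison is shown below, assuming each characteristic is reported as a colour label with an optional lighting cycle; the tolerance value and the data layout are illustrative assumptions.

```python
def lighting_characteristics_match(reported, recognized, cycle_tolerance_ms=50):
    """Compare the lighting characteristic reported in the pedestrian terminal's
    message with the one recognized from the roadside camera image.

    Each characteristic is assumed to be a dict with a 'color' label and an
    optional 'cycle_ms' value; the tolerance is an illustrative placeholder.
    """
    if reported.get("color") != recognized.get("color"):
        return False
    if "cycle_ms" in reported and "cycle_ms" in recognized:
        return abs(reported["cycle_ms"] - recognized["cycle_ms"]) <= cycle_tolerance_ms
    return True

# Example: both sides observed a red indicator blinking at roughly 500 ms.
print(lighting_characteristics_match({"color": "red", "cycle_ms": 500},
                                     {"color": "red", "cycle_ms": 520}))
```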
In the in-vehicle terminal 2, the processor 25 performs identification processing and the like.
In the identification process, the processor 25 determines whether or not the pedestrian possessing the pedestrian terminal 5 that transmitted the message and the pedestrian whose highly accurate position information was notified from the roadside machine 4 are the same, depending on whether or not the message ID included in the message received directly from the pedestrian terminal 5 matches the message ID of the pedestrian terminal 5 included in the message received from the roadside machine 4.
Next, the operation procedure of the pedestrian terminal 5 according to the third embodiment will be described. FIG. 23 is a flow chart showing an operation procedure of the pedestrian terminal 5.
In the pedestrian terminal 5, the indicator light 57 performs a lighting operation with the lighting characteristics (color, etc.) assigned to the own device (ST601). Further, the positioning unit 53 measures the position of the own device and acquires the position information of the pedestrian (ST602).
Next, in the pedestrian terminal 5, the processor 55 creates a message for ITS communication (ST603). Then, the ITS communication unit 51 transmits a message (ST604). This message includes pedestrian position information (latitude, longitude), information on the lighting characteristics of the indicator lamp 57, and a message ID that identifies the source of the message. The message transmitted from the pedestrian terminal 5 is received by both the roadside machine 4 and the in-vehicle terminal 2.
Next, the operation procedure of the roadside machine 4 according to the third embodiment will be described. FIG. 24 is a flow chart showing an operation procedure of the roadside machine 4.
As shown in FIG. 24A, in the roadside machine 4, the processor 47 detects pedestrians (moving bodies) existing on the surrounding roads based on the output of the roadside sensor 42, and determines whether or not a pedestrian is present on the surrounding roads (ST701).
Here, when a pedestrian is present on the surrounding road (Yes in ST701), the processor 47 acquires the relative position information (direction, distance) of the pedestrian with respect to the own device based on the output of the roadside sensor 42. Further, the processor 47 acquires the position information (latitude, longitude) of the pedestrian and the like based on the relative position information of the pedestrian and the position information (latitude, longitude) of the current position of the own device (moving body detection process) (ST702).
Further, the processor 47 acquires a captured image from the roadside camera 43 that photographs the surrounding roads (ST703). Next, the processor 47 acquires the position information (coordinates) of the pedestrian on the captured image of the roadside camera 43 based on the relative position information (direction, distance) of the pedestrian with respect to the roadside machine 4 (ST704).
Next, the processor 47 cuts out the image area of the pedestrian detection frame from the captured image of the roadside camera 43 based on the position information of the pedestrian on the captured image, and acquires the moving body image (moving body image extraction process) (ST705).
Next, the processor 47 acquires the lighting characteristics of the indicator lamp 57 possessed by the pedestrian reflected in the moving body image by image recognition for the moving body image (lighting characteristic acquisition process) (ST706).
Further, as shown in FIG. 24B, in the roadside machine 4, when the ITS communication unit 44 receives the message transmitted from the pedestrian terminal 5 (Yes in ST711), the processor 47 determines whether or not the lighting characteristic of the indicator light 57 notified from the pedestrian terminal 5, that is, the lighting characteristic of the indicator light 57 included in the message received from the pedestrian terminal 5, matches the lighting characteristic of the indicator light 57 recognized by the own device from the captured image of the roadside camera 43 (ST712).
Here, when the lighting characteristic of the indicator light 57 notified from the pedestrian terminal 5 matches the lighting characteristic of the indicator light 57 recognized by the own device (Yes in ST712), the processor 47 creates an ITS communication message to which the message ID added to the message from the pedestrian terminal 5 (the identified message ID) is attached (ST713). Then, the ITS communication unit 44 transmits the message to the in-vehicle terminal 2 (ST715).
On the other hand, when the lighting characteristic of the indicator light 57 notified from the pedestrian terminal 5 and the lighting characteristic of the indicator light 57 recognized by the own device do not match (No in ST712), the processor 47 creates an ITS communication message to which no identified message ID is added (ST714). Then, the ITS communication unit 44 transmits the message to the in-vehicle terminal 2 (ST715).
The message transmitted to the in-vehicle terminal 2 includes, in addition to the identified message ID, the position information of the pedestrian acquired by the own device using the roadside sensor 42, and the like.
Next, the operation procedure of the in-vehicle terminal 2 according to the third embodiment will be described. FIG. 25 is a flow chart showing an operation procedure of the in-vehicle terminal 2.
In the in-vehicle terminal 2, when the ITS communication unit 21 receives the message transmitted from the pedestrian terminal 5 (Yes in ST801) and further receives the message transmitted from the roadside machine 4 (Yes in ST802), the processor 25 determines whether or not the message ID included in the message received directly from the pedestrian terminal 5 matches the identified message ID included in the message received from the roadside machine 4 (identification process) (ST803).
Here, when the message ID included in the message received directly from the pedestrian terminal 5 matches the identified message ID (Yes in ST803), the processor 25 ignores the message from that pedestrian terminal 5 and selects the position information of the pedestrian notified from the roadside machine 4, that is, the position information included in the message received from the roadside machine 4 (moving body position determination process) (ST804).
On the other hand, when the message ID included in the message received directly from the pedestrian terminal 5 does not match the identified message ID (No in ST803), the processor 25 selects both the position information of the pedestrian notified from the pedestrian terminal 5, that is, the position information included in the message received directly from the pedestrian terminal 5, and the position information of the pedestrian notified from the roadside machine 4 (moving body position determination process) (ST805).
In the example shown in FIG. 25, as an example of the case where the indicator light 57 of the pedestrian cannot be seen from the vehicle, the indicator light 57 is recognized only by the roadside machine 4. However, when the indicator light 57 of the pedestrian can be seen from the vehicle, the in-vehicle terminal 2 also recognizes the indicator light 57. In this case, the pedestrian is identified by comparing the lighting characteristics of the indicator light 57 acquired by each of the roadside machine 4 and the own device. Further, also in the present embodiment, as in the first embodiment, the position information of the pedestrian acquired by each of the roadside machine 4 and the own device and the behavior characteristics of the pedestrian are compared.
(Fourth Embodiment)
Next, the fourth embodiment will be described. The points not particularly mentioned here are the same as in the above-described embodiments. FIG. 26 is an explanatory diagram showing an outline of the moving body detection system according to the fourth embodiment.
次に、第4実施形態について説明する。なお、ここで特に言及しない点は前記の実施形態と同様である。図26は、第4実施形態に係る移動体検知システムの概要を示す説明図である。 (Fourth Embodiment)
Next, the fourth embodiment will be described. It should be noted that the points not particularly mentioned here are the same as those in the above-described embodiment. FIG. 26 is an explanatory diagram showing an outline of the mobile body detection system according to the fourth embodiment.
本実施形態では、第2実施形態と同様に、車載端末2が、車載端末2および路側機4の各々で生成した移動体画像(電波特性画像を含まない)を比較することで、路側機4で検知した歩行者と車載端末2で検知した歩行者とが同一であるか否かを判定する(同定処理)。
In the present embodiment, as in the second embodiment, the vehicle-mounted terminal 2 compares the moving body images (not including the radio wave characteristic image) generated by each of the vehicle-mounted terminal 2 and the roadside unit 4, so that the roadside unit 4 It is determined whether or not the pedestrian detected in 1 and the pedestrian detected by the in-vehicle terminal 2 are the same (identification process).
特に本実施形態では、複数の移動体画像を用いて歩行者の同定が行われる。すなわち、直近の所定数の移動体画像、具体的には、直近の所定期間に含まれる最新の移動体画像と過去の各時刻の移動体画像とが比較対象となる。なお、路側機4および車載端末2は、画像認識などによる人物追跡を行うことで、各歩行者に関する各時刻の移動体画像を収集することができる。また、比較対象となる複数の移動体画像が、全ての組み合わせで比較され、いずれかの組み合わせで同一と判定されればよい。
Particularly in this embodiment, pedestrians are identified using a plurality of moving body images. That is, the latest predetermined number of moving body images, specifically, the latest moving body image included in the latest predetermined period and the moving body image at each time in the past are to be compared. The roadside machine 4 and the in-vehicle terminal 2 can collect moving object images at each time for each pedestrian by tracking a person by image recognition or the like. Further, a plurality of moving body images to be compared may be compared in all combinations and determined to be the same in any combination.
図26に示す例では、路側機4において、時刻t,t+1,t+2の移動体画像が得られ、車載端末2において、時刻t+1,t+2の移動体画像が得られ、これらの移動体画像を比較することで、歩行者の同定が行われる。また、路側機4と車載端末2とで歩行者に対する位置関係が逆向きであるため、例えば、路側カメラ43の時刻t+1の移動体画像と、車載カメラ12の時刻t+2の移動体画像とを比較する際には、路側カメラ43の時刻t+1の移動体画像を左右反転した上で比較する。
In the example shown in FIG. 26, the roadside machine 4 obtains a moving body image at time t, t + 1, t + 2, and the in-vehicle terminal 2 obtains a moving body image at time t + 1, t + 2, and these moving body images are compared. By doing so, the pedestrian is identified. Further, since the positional relationship with respect to the pedestrian is opposite between the roadside machine 4 and the in-vehicle terminal 2, for example, the moving object image at time t + 1 of the roadside camera 43 and the moving object image at time t + 2 of the in-vehicle camera 12 are compared. When doing so, the moving body image at time t + 1 of the roadside camera 43 is flipped left and right and then compared.
なお、本実施形態に係る路側機4の構成は第2実施形態(図18参照)と同様である。車載端末2の構成も第2実施形態(図19参照)と同様である。
The configuration of the roadside machine 4 according to this embodiment is the same as that of the second embodiment (see FIG. 18). The configuration of the in-vehicle terminal 2 is also the same as that of the second embodiment (see FIG. 19).
また、路側機4の動作手順は第2実施形態(図20参照)と略同様であるが、移動体画像抽出処理(ST406)の後に、プロセッサ47が、自装置で検知された歩行者に移動体IDを付与して、その歩行者の移動体IDと位置情報と移動体画像とをメモリ46に蓄積する。そして、プロセッサ47が、移動体IDに基づいて、歩行者の直近の所定数の移動体画像を収集して、ITS通信のメッセージを生成する。このメッセージには、歩行者の最新の位置情報、歩行者の直近の所定数の移動体画像などが含まれる。
Further, the operation procedure of the roadside machine 4 is substantially the same as that of the second embodiment (see FIG. 20), but after the mobile image extraction process (ST406), the processor 47 moves to the pedestrian detected by the own device. A body ID is assigned, and the pedestrian's moving body ID, position information, and moving body image are stored in the memory 46. Then, the processor 47 collects the latest predetermined number of moving body images of the pedestrian based on the moving body ID, and generates an ITS communication message. This message includes the latest position information of the pedestrian, the latest predetermined number of moving body images of the pedestrian, and the like.
車載端末2の動作手順も第2実施形態(図21参照)と略同様であるが、移動体画像抽出処理(ST506)の後に、プロセッサ47が、自装置で検知された歩行者に移動体IDを付与して、その歩行者の移動体IDと位置情報と移動体画像とをメモリ24に蓄積する。また、同定処理(ST512)では、プロセッサ47が、路側機4から受信したメッセージに含まれる複数の移動体画像と、自装置で生成した複数の移動体画像と、を比較する。
The operation procedure of the in-vehicle terminal 2 is substantially the same as that of the second embodiment (see FIG. 21), but after the mobile image extraction process (ST506), the processor 47 gives the pedestrian ID detected by the own device. Is added, and the moving body ID, the position information, and the moving body image of the pedestrian are stored in the memory 24. Further, in the identification process (ST512), the processor 47 compares the plurality of moving body images included in the message received from the roadside machine 4 with the plurality of moving body images generated by the own device.
(第5実施形態)
次に、第5実施形態について説明する。なお、ここで特に言及しない点は前記の実施形態と同様である。図27は、第5実施形態に係る移動体検知システムの概要を示す説明図である。 (Fifth Embodiment)
Next, the fifth embodiment will be described. It should be noted that the points not particularly mentioned here are the same as those in the above-described embodiment. FIG. 27 is an explanatory diagram showing an outline of the mobile body detection system according to the fifth embodiment.
次に、第5実施形態について説明する。なお、ここで特に言及しない点は前記の実施形態と同様である。図27は、第5実施形態に係る移動体検知システムの概要を示す説明図である。 (Fifth Embodiment)
Next, the fifth embodiment will be described. It should be noted that the points not particularly mentioned here are the same as those in the above-described embodiment. FIG. 27 is an explanatory diagram showing an outline of the mobile body detection system according to the fifth embodiment.
前記の実施形態では、同定処理において、車載端末2および路側機4の各々で検知した移動体(歩行者)が同一であるか否かを判定するようにした。一方、本実施形態では、交差点の両側(対角位置)に設置された2台の路側機4(第1,第2の観測装置)の各々で検知した移動体(歩行者)が同一であるか否かを判定する。
In the above embodiment, in the identification process, it is determined whether or not the moving bodies (pedestrians) detected by each of the in-vehicle terminal 2 and the roadside machine 4 are the same. On the other hand, in the present embodiment, the moving bodies (pedestrians) detected by each of the two roadside machines 4 (first and second observation devices) installed on both sides (diagonal positions) of the intersection are the same. Judge whether or not.
この場合、一方の路側機4では他の移動体(車両など)や建築物などの遮蔽物に遮蔽されて検知できない移動体(歩行者、車両)が、他方の路側機4では検知できる場合がある。そこで、一方の路側機4において、2台の路側機4の観測結果を統合して車載端末2に送信するようにすると、移動体の同定の精度をより一層高めることができる。
In this case, a moving body (pedestrian, vehicle) that cannot be detected by one roadside machine 4 because it is shielded by another moving body (vehicle or the like) or a shield such as a building may be detected by the other roadside machine 4. is there. Therefore, in one roadside machine 4, if the observation results of the two roadside machines 4 are integrated and transmitted to the in-vehicle terminal 2, the accuracy of identification of the moving body can be further improved.
(第6実施形態)
次に、第6実施形態について説明する。なお、ここで特に言及しない点は前記の実施形態と同様である。図28は、第6実施形態に係る歩行者端末5の概略構成を示すブロック図である。 (Sixth Embodiment)
Next, the sixth embodiment will be described. It should be noted that the points not particularly mentioned here are the same as those in the above-described embodiment. FIG. 28 is a block diagram showing a schematic configuration of thepedestrian terminal 5 according to the sixth embodiment.
次に、第6実施形態について説明する。なお、ここで特に言及しない点は前記の実施形態と同様である。図28は、第6実施形態に係る歩行者端末5の概略構成を示すブロック図である。 (Sixth Embodiment)
Next, the sixth embodiment will be described. It should be noted that the points not particularly mentioned here are the same as those in the above-described embodiment. FIG. 28 is a block diagram showing a schematic configuration of the
前記の実施形態では、路側機4および車載端末2において、路側センサ42や車載センサ11の出力から取得した3次元位置情報に基づいて、歩行者の挙動特性を検出するようにした。
In the above embodiment, the roadside machine 4 and the vehicle-mounted terminal 2 detect the behavior characteristics of the pedestrian based on the three-dimensional position information acquired from the outputs of the roadside sensor 42 and the vehicle-mounted sensor 11.
一方、本実施形態では、歩行者端末5において、歩行者の挙動特性を検出して、歩行者の挙動特性に関する情報を歩行者端末5から路側機4および車載端末2に通知する。特に本実施形態では、歩行者端末5が、挙動センサ58を備えている。この挙動センサ58は、加速度センサやジャイロセンサなどであり、歩行者の身体の動きを検出する。
On the other hand, in the present embodiment, the pedestrian terminal 5 detects the behavior characteristics of the pedestrian, and the pedestrian terminal 5 notifies the roadside machine 4 and the in-vehicle terminal 2 of information on the behavior characteristics of the pedestrian. In particular, in the present embodiment, the pedestrian terminal 5 is provided with a behavior sensor 58. The behavior sensor 58 is an acceleration sensor, a gyro sensor, or the like, and detects the movement of the pedestrian's body.
この場合、例えば、歩行者端末5において、歩行者の歩行テンポに合わせて識別信号を発信するようにすると、路側機4および車載端末2の各々において、識別信号の受信タイミングに基づいて、歩行者の挙動特性(上下動の周期)を取得することができる。また、路側機4および車載端末2の各々において、識別信号の受信に応じて、路側カメラ43や車載カメラ12の撮影画像上に電波特性画像を描画する場合、移動体画像に電波特性画像が出現するタイミングに基づいて、歩行者の挙動特性を取得することができる。
In this case, for example, if the pedestrian terminal 5 transmits the identification signal according to the walking tempo of the pedestrian, the roadside unit 4 and the in-vehicle terminal 2 each transmit the identification signal based on the reception timing of the identification signal. Behavior characteristics (cycle of vertical movement) can be acquired. Further, when the radio wave characteristic image is drawn on the captured image of the roadside camera 43 or the in-vehicle camera 12 in response to the reception of the identification signal in each of the roadside unit 4 and the in-vehicle terminal 2, the radio wave characteristic image appears in the moving body image. It is possible to acquire the behavior characteristics of a pedestrian based on the timing of the image.
また、歩行者端末5において、歩行者の移動方向に応じて識別信号の電波特性を変化させることで、路側機4および車載端末2の各々において、受信した識別信号の電波特性に基づいて、歩行者の挙動特性(移動方向)を取得することができる。なお、歩行者の移動方向に応じて、識別信号の論理や送信頻度などを変化させてもよい。
Further, in the pedestrian terminal 5, by changing the radio wave characteristics of the identification signal according to the moving direction of the pedestrian, walking is performed based on the radio wave characteristics of the identification signal received by each of the roadside unit 4 and the in-vehicle terminal 2. It is possible to acquire the behavior characteristics (movement direction) of a person. The logic of the identification signal, the transmission frequency, and the like may be changed according to the moving direction of the pedestrian.
ところで、前記の実施形態では、2つの観測装置が移動体検知処理を行い、その一方の観測装置が、他方の観測装置から移動体の検知結果を取得して、同定処理を行うようにした。具体的には、第1実施形態などでは、車載端末2が同定処理を行い、第5実施形態では、路側機4が同定処理を行うようにした。
By the way, in the above-described embodiment, the two observation devices perform the moving object detection process, and one of the observation devices acquires the detection result of the moving object from the other observation device and performs the identification process. Specifically, in the first embodiment and the like, the in-vehicle terminal 2 performs the identification process, and in the fifth embodiment, the roadside machine 4 performs the identification process.
一方、観測装置(車載端末2、路側機4)とは別の処理装置が、2つの観測装置から移動体の検知結果を取得して、同定処理を行うようにしてもよい。ここで、処理装置とは、例えば、観測装置と適宜な通信媒体(例えば、セルラー通信網など)を介して接続されたサーバ装置などである。この場合、位置確定処理なども処理装置が行うようにしてもよい。
On the other hand, a processing device different from the observation devices (vehicle-mounted terminal 2, roadside device 4) may acquire the detection results of the moving object from the two observation devices and perform the identification process. Here, the processing device is, for example, a server device connected to the observation device via an appropriate communication medium (for example, a cellular communication network). In this case, the processing device may also perform the position determination process.
また、前記の第5実施形態では、2台の路側機4が、移動体検知処理を行う観測装置となり、その一方の路側機4が同定処理を行うが、2台の車載端末2が、移動体検知処理を行う観測装置となり、その一方の車載端末2が同定処理を行うようにしてもよい。すなわち、一方の車載端末2が、他方の車載端末2から移動体の検知結果を取得して、同定処理を行うようにしてもよい。
Further, in the fifth embodiment, the two roadside devices 4 serve as observation devices that perform the moving object detection process, and one of the roadside devices 4 performs the identification process, but the two in-vehicle terminals 2 move. It may be an observation device that performs body detection processing, and one of the vehicle-mounted terminals 2 may perform identification processing. That is, one in-vehicle terminal 2 may acquire the detection result of the moving object from the other in-vehicle terminal 2 and perform the identification process.
また、前記の実施形態では、路側機4および車載端末2が、移動体検知処理を行う観測装置となるが、歩行者端末5が、移動体検知処理を行う観測装置となる構成も可能である。この場合、歩行者端末5に、移動体を検知するためのセンサ(レーダなど)や、移動体を撮影するカメラが設けられる。
Further, in the above-described embodiment, the roadside device 4 and the in-vehicle terminal 2 are observation devices that perform mobile object detection processing, but the pedestrian terminal 5 can be an observation device that performs mobile object detection processing. .. In this case, the pedestrian terminal 5 is provided with a sensor (radar or the like) for detecting the moving body and a camera for photographing the moving body.
さらに、2台の歩行者端末5が、移動体検知処理を行う観測装置となり、その一方の歩行者端末5が同定処理を行う構成としてもよい。また、路側機4および歩行者端末5が、移動体検知処理を行う観測装置となり、そのうちの歩行者端末5が同定処理を行う構成としてもよい。また、車載端末2および歩行者端末5が、移動体検知処理を行う観測装置となり、車載端末2および歩行者端末5のいずれか一方が同定処理を行う構成としてもよい。
Further, the two pedestrian terminals 5 may be an observation device that performs the moving object detection process, and one of the pedestrian terminals 5 may perform the identification process. Further, the roadside machine 4 and the pedestrian terminal 5 may be an observation device that performs a moving object detection process, and the pedestrian terminal 5 among them may perform the identification process. Further, the in-vehicle terminal 2 and the pedestrian terminal 5 may be an observation device that performs the moving object detection process, and either the in-vehicle terminal 2 or the pedestrian terminal 5 may perform the identification process.
以上のように、本出願において開示する技術の例示として、実施形態を説明した。しかしながら、本開示における技術は、これに限定されず、変更、置き換え、付加、省略などを行った実施形態にも適用できる。また、上記の実施形態で説明した各構成要素を組み合わせて、新たな実施形態とすることも可能である。
As described above, an embodiment has been described as an example of the technology disclosed in this application. However, the technique in the present disclosure is not limited to this, and can be applied to embodiments in which changes, replacements, additions, omissions, etc. have been made. It is also possible to combine the components described in the above embodiments to form a new embodiment.
例えば、前記の実施形態では、歩行者(移動体)の位置情報(緯度、経度)、挙動特性(身体の上下動の振幅および周期、移動方向、移動速度など)、識別信号の電波特性を、歩行者(移動体)を識別する移動体識別情報として利用して、歩行者(移動体)の同定が行われるが、これ以外の情報を移動体識別情報として利用してもよい。
For example, in the above-described embodiment, the position information (latitude, longitude) of the pedestrian (moving body), the behavior characteristics (amplification and period of vertical movement of the body, the moving direction, the moving speed, etc.), and the radio wave characteristics of the identification signal are obtained. The pedestrian (moving body) is identified by using it as the moving body identification information for identifying the pedestrian (moving body), but other information may be used as the moving body identification information.
また、このような路側機4や車載端末2で収集した情報を、路側機4や車載端末2において、移動体の同定処理に利用する他に、別の用途で利用することもできる。例えば、集団登下校を行う児童の場合、互いに行動履歴(位置、速度、方向など)が類似する。また、体の不自由な人物の場合、同行する介助者や盲導犬などと行動履歴が類似する。このため、行動履歴が類似する移動体が複数存在する場合には、児童や体の不自由な人物などの危険度が高い人物と想定される。そこで、近接する複数の移動体で行動履歴が類似することにより、危険度が高い歩行者を検知することができる。これにより、危険度が高い人物を車両側で早期に認識できるようになる。
Further, the information collected by the roadside machine 4 or the in-vehicle terminal 2 can be used in the roadside machine 4 or the in-vehicle terminal 2 for other purposes in addition to being used for the identification process of the moving body. For example, in the case of children who go to and leave school in groups, their behavior histories (position, speed, direction, etc.) are similar to each other. In addition, in the case of a physically handicapped person, the behavior history is similar to that of an accompanying caregiver or guide dog. Therefore, when there are a plurality of moving objects having similar behavior histories, it is assumed that the person has a high risk such as a child or a physically handicapped person. Therefore, it is possible to detect a pedestrian with a high degree of risk by having similar behavior histories in a plurality of adjacent moving objects. As a result, a person with a high degree of risk can be recognized at an early stage on the vehicle side.
また、危険度が高い歩行者に関する情報は、信号機などに設置された通信機器などのインフラ設備で検知してもよい。この場合、インフラ設備から車両に対して送信電力を通常より高く設定して送信するなどにより、危険度が高い人物をより早期に認識できるようになる。
In addition, information on pedestrians with a high degree of risk may be detected by infrastructure equipment such as communication equipment installed in traffic lights and the like. In this case, a person with a high risk can be recognized earlier by setting the transmission power higher than usual from the infrastructure equipment to the vehicle and transmitting the power.
また、車両の挙動特性情報(速度、軌跡など)に基づいて、同定処理を行う他に、挙動の安定性を判定することで、危険走行(急な加速や減速、蛇行走行など)を行う車両を検知することができる。これにより、煽り運転や飲酒運転を行う車両を早期に発見して、運転者や警察に通知することができる。
In addition to performing identification processing based on vehicle behavior characteristic information (speed, trajectory, etc.), vehicles that perform dangerous driving (sudden acceleration, deceleration, meandering, etc.) by determining the stability of behavior. Can be detected. As a result, it is possible to detect a vehicle that is driving in a hurry or drunk driving at an early stage and notify the driver or the police.
また、煽り運転や飲酒運転を行う車両情報を、信号機などに設置された通信機器などのインフラ設備に通知してもよい。これにより、歩行者に対し当該車両情報を早期に通知することができ、さらには、該当箇所に近づかないように促すなどの危険回避が可能となる。
In addition, vehicle information for driving in a hurry or drunk driving may be notified to infrastructure equipment such as communication equipment installed at a traffic light or the like. As a result, it is possible to notify the pedestrian of the vehicle information at an early stage, and further, it is possible to avoid danger such as urging the pedestrian to stay away from the relevant part.
本開示に係る移動体検知方法、路側装置、および車載装置は、2つの観測装置(路側機、車載端末)の各々で検知された移動体が同一であるか否かの判定を精度よく行うことができる効果を有し、周辺に存在する移動体を検知する移動体検知方法、路側装置、および車載装置などとして有用である。
The moving body detection method, the roadside device, and the in-vehicle device according to the present disclosure accurately determine whether or not the moving bodies detected by each of the two observation devices (roadside device, in-vehicle terminal) are the same. It has the effect of being able to perform, and is useful as a moving body detection method for detecting moving bodies existing in the vicinity, a roadside device, an in-vehicle device, and the like.
1 車両
2 車載端末(車載装置、第2の観測装置、端末装置)
3 自動運転ECU(走行制御装置)
4 路側機(路側装置、第1の観測装置)
5 歩行者端末(歩行者装置、端末装置)
11 車載センサ
12 車載カメラ
21 ITS通信部
22 識別信号受信部
23 測位部
24 メモリ
25 プロセッサ
42 路側センサ
43 路側カメラ
44 ITS通信部
45 識別信号受信部
46 メモリ
47 プロセッサ
51 ITS通信部
52 識別信号発信部
53 測位部
54 メモリ
55 プロセッサ
57 表示灯
58 挙動センサ 1Vehicle 2 In-vehicle terminal (in-vehicle device, second observation device, terminal device)
3 Automatic operation ECU (travel control device)
4 Roadside machine (roadside device, first observation device)
5 Pedestrian terminal (pedestrian device, terminal device)
11 In-vehicle sensor 12 In-vehicle camera 21 ITS communication unit 22 Identification signal reception unit 23 Positioning unit 24 Memory 25 Processor 42 Roadside sensor 43 Roadside sensor 44 ITS communication unit 45 Identification signal reception unit 46 Memory 47 Processor 51 ITS communication unit 52 Identification signal transmission unit 53 Positioning unit 54 Memory 55 Processor 57 Indicator light 58 Behavior sensor
2 車載端末(車載装置、第2の観測装置、端末装置)
3 自動運転ECU(走行制御装置)
4 路側機(路側装置、第1の観測装置)
5 歩行者端末(歩行者装置、端末装置)
11 車載センサ
12 車載カメラ
21 ITS通信部
22 識別信号受信部
23 測位部
24 メモリ
25 プロセッサ
42 路側センサ
43 路側カメラ
44 ITS通信部
45 識別信号受信部
46 メモリ
47 プロセッサ
51 ITS通信部
52 識別信号発信部
53 測位部
54 メモリ
55 プロセッサ
57 表示灯
58 挙動センサ 1
3 Automatic operation ECU (travel control device)
4 Roadside machine (roadside device, first observation device)
5 Pedestrian terminal (pedestrian device, terminal device)
11 In-
Claims (15)
- 第1の観測装置および第2の観測装置が、
センサの出力に基づいて、道路上に存在する移動体を検知して、その移動体の位置情報を取得すると共に、その移動体の挙動特性を検出し、
前記第1の観測装置、前記第2の観測装置、並びに前記第1および第2の観測装置とは別の処理装置のいずれかが、
前記第1の観測装置および前記第2の観測装置の各々で生成した前記挙動特性が一致するか否かに応じて、前記第1の観測装置および前記第2の観測装置の各々で検知された移動体が同一であるか否かを判定する同定処理を行うことを特徴とする移動体検知方法。 The first observation device and the second observation device
Based on the output of the sensor, it detects a moving body existing on the road, acquires the position information of the moving body, and detects the behavior characteristics of the moving body.
One of the first observation device, the second observation device, and a processing device different from the first and second observation devices.
It was detected by each of the first observation device and the second observation device, depending on whether or not the behavior characteristics generated by each of the first observation device and the second observation device match. A moving object detection method characterized by performing an identification process for determining whether or not the moving objects are the same. - 前記第1の観測装置および前記第2の観測装置が、
前記挙動特性として、移動体の上下動の振幅、上下動の周期、移動方向、および移動速度の少なくともいずれかを検出することを特徴とする請求項1に記載の移動体検知方法。 The first observation device and the second observation device
The moving body detection method according to claim 1, wherein at least one of the vertical movement amplitude, the vertical movement cycle, the moving direction, and the moving speed of the moving body is detected as the behavior characteristic. - 移動体が保持する端末装置が、
自装置に割り当てられた電波特性を有する識別信号を発信し、
前記第1の観測装置および前記第2の観測装置が、
受信した前記識別信号の電波特性に基づいて移動体を識別することを特徴とする請求項1に記載の移動体検知方法。 The terminal device held by the mobile body
Sends an identification signal with the radio wave characteristics assigned to the own device,
The first observation device and the second observation device
The mobile object detection method according to claim 1, wherein a mobile object is identified based on the radio wave characteristics of the received identification signal. - 前記第1の観測装置および前記第2の観測装置が、
受信した前記識別信号の電波特性を可視化した電波特性画像を生成し、
前記識別信号の電波発信源の位置情報に基づいて、カメラの撮影画像上に前記電波特性画像を重畳描画して、前記電波特性画像を含む移動体画像を生成し、
前記第1の観測装置、前記第2の観測装置、および前記処理装置のいずれかが、
前記同定処理において、前記第1の観測装置および前記第2の観測装置の各々で取得した前記電波特性画像を含む移動体画像を比較することを特徴とする請求項3に記載の移動体検知方法。 The first observation device and the second observation device
A radio wave characteristic image that visualizes the radio wave characteristics of the received identification signal is generated.
Based on the position information of the radio wave transmission source of the identification signal, the radio wave characteristic image is superimposed and drawn on the image taken by the camera to generate a moving body image including the radio wave characteristic image.
Any of the first observation device, the second observation device, and the processing device
The mobile body detection method according to claim 3, wherein in the identification process, a mobile body image including the radio wave characteristic image acquired by each of the first observation device and the second observation device is compared. .. - 前記端末装置が、
移動体の挙動特性に応じたタイミングで前記識別信号を発信し、
前記第1の観測装置および前記第2の観測装置が、
前記識別信号の受信タイミングに基づいて、移動体の挙動特性を取得することを特徴とする請求項3に記載の移動体検知方法。 The terminal device
The identification signal is transmitted at a timing according to the behavior characteristics of the moving body, and the identification signal is transmitted.
The first observation device and the second observation device
The moving body detection method according to claim 3, wherein the behavior characteristics of the moving body are acquired based on the reception timing of the identification signal. - 前記第1の観測装置および前記第2の観測装置が、
前記センサの出力に基づいて取得した移動体の位置情報に基づいて、カメラの撮影画像上に移動体の検出枠を設定し、
前記撮影画像から前記検出枠の領域を切り出すことで、移動体画像を抽出し、
前記第1の観測装置、前記第2の観測装置、および前記処理装置のいずれかが、
前記同定処理において、前記第1の観測装置および前記第2の観測装置の各々で取得した前記移動体画像を比較することを特徴とする請求項1に記載の移動体検知方法。 The first observation device and the second observation device
Based on the position information of the moving body acquired based on the output of the sensor, the detection frame of the moving body is set on the captured image of the camera.
By cutting out the region of the detection frame from the captured image, a moving body image is extracted.
Any of the first observation device, the second observation device, and the processing device
The mobile body detection method according to claim 1, wherein in the identification process, the moving body images acquired by each of the first observation device and the second observation device are compared. - 前記第1の観測装置、前記第2の観測装置、および前記処理装置のいずれかが、
前記同定処理において、前記第1の観測装置および前記第2の観測装置の各々で取得した各時刻の複数の前記移動体画像を比較することを特徴とする請求項6に記載の移動体検知方法。 Any of the first observation device, the second observation device, and the processing device
The mobile body detection method according to claim 6, wherein in the identification process, a plurality of the moving body images acquired by each of the first observation device and the second observation device are compared. .. - 移動体が保持する表示灯が、
自装置に割り当てられた点灯特性で点灯し、
前記第1の観測装置が、
カメラの撮影画像から前記表示灯を認識して、その点灯特性を検出し、その点灯特性に基づいて移動体を識別することを特徴とする請求項1に記載の移動体検知方法。 The indicator light held by the moving body
Lights up with the lighting characteristics assigned to the own device,
The first observation device
The moving body detection method according to claim 1, wherein the indicator lamp is recognized from an image captured by a camera, its lighting characteristics are detected, and a moving body is identified based on the lighting characteristics. - 前記第1の観測装置、前記第2の観測装置、および前記処理装置のいずれかが、
前記第1の観測装置で検知された移動体と前記第2の観測装置で検知された移動体とが同一であると判定した場合には、前記第1の観測装置から移動体までの距離、および前記第2の観測装置から移動体までの距離に基づいて、前記第1の観測装置で取得した移動体の位置情報と、前記第2の観測装置で取得した移動体の位置情報とのいずれかを選択して、移動体の位置を確定することを特徴とする請求項1に記載の移動体検知方法。 Any of the first observation device, the second observation device, and the processing device
When it is determined that the moving body detected by the first observing device and the moving body detected by the second observing device are the same, the distance from the first observing device to the moving body, And, based on the distance from the second observation device to the moving body, either the position information of the moving body acquired by the first observation device or the position information of the moving body acquired by the second observation device. The moving body detection method according to claim 1, wherein the position of the moving body is determined by selecting the above. - 前記第1の観測装置、前記第2の観測装置、および前記処理装置のいずれかが、
前記同定処理が成功すると、追跡処理により移動体の位置を確定する追跡モードに移行し、
前記追跡処理が失敗すると、前記同定処理により移動体の位置を確定する同定モードに復帰することを特徴とする請求項1に記載の移動体検知方法。 Any of the first observation device, the second observation device, and the processing device
When the identification process is successful, the process shifts to the tracking mode in which the position of the moving object is determined by the tracking process.
The moving object detection method according to claim 1, wherein when the tracking process fails, the identification mode returns to the identification mode in which the position of the moving object is determined by the identification process. - 前記第1の観測装置が、
メッセージの送信元の移動体と、前記センサを用いて検知された移動体とが、同一であるか否かを判定する第1の同定処理を行い、
前記第1の同定処理が成功すると、該当する移動体を同定済みとしてその移動体が保持する端末装置からのメッセージを無視する指示情報を含むメッセージを前記第2の観測装置に送信し、
前記第2の観測装置が、
前記第1の観測装置で検知された移動体と、自装置で検知された移動体と、が同一であるか否かを判定する第2の同定処理を行うと共に、前記指示情報に基づいて、同定済みの移動体が保持する前記端末装置からのメッセージを無視して、前記第1の観測装置から通知された移動体の位置情報と、自装置で取得した位置情報と、に基づいて、移動体の位置を確定することを特徴とする請求項1に記載の移動体検知方法。 The first observation device
The first identification process for determining whether or not the moving body that is the source of the message and the moving body detected by using the sensor are the same is performed.
When the first identification process is successful, a message including instruction information including instruction information for ignoring the message from the terminal device held by the moving body as having identified the corresponding moving body is transmitted to the second observing device.
The second observation device
A second identification process for determining whether or not the moving body detected by the first observation device and the moving body detected by the own device are the same is performed, and based on the instruction information, the second identification process is performed. Ignoring the message from the terminal device held by the identified mobile body, the movement is based on the position information of the moving body notified from the first observation device and the position information acquired by the own device. The moving object detection method according to claim 1, wherein the position of the body is determined. - 前記第1の観測装置が、路側装置であり、
前記第2の観測装置が、車載装置であることを特徴とする請求項1に記載の移動体検知方法。 The first observation device is a roadside device.
The moving object detection method according to claim 1, wherein the second observation device is an in-vehicle device. - 前記第1の観測装置および前記第2の観測装置が、路側装置であることを特徴とする請求項1に記載の移動体検知方法。 The moving object detection method according to claim 1, wherein the first observation device and the second observation device are roadside devices.
- 路側センサの出力に基づいて、道路上に存在する移動体を検知して、その移動体の位置情報を取得すると共に、その移動体の挙動特性を検出し、
前記位置情報および前記挙動特性に関する情報を移動体識別情報として車載装置に送信することを特徴とする路側装置。 Based on the output of the roadside sensor, it detects a moving body existing on the road, acquires the position information of the moving body, and detects the behavior characteristics of the moving body.
A roadside device characterized in that information regarding the position information and the behavior characteristics is transmitted to an in-vehicle device as moving object identification information. - 車載センサの出力に基づいて、道路上に存在する移動体を検知して、その移動体の位置情報を取得すると共に、その移動体の挙動特性を検出し、
路側装置から受信した挙動特性と、自装置で生成した前記挙動特性とが一致するか否かに応じて、前記路側装置および自装置の各々で検知された移動体が同一であるか否かを判定する同定処理を行うことを特徴とする車載装置。 Based on the output of the in-vehicle sensor, it detects a moving body existing on the road, acquires the position information of the moving body, and detects the behavior characteristics of the moving body.
Whether or not the moving body detected by each of the roadside device and the own device is the same, depending on whether or not the behavior characteristic received from the roadside device and the behavior characteristic generated by the own device match. An in-vehicle device characterized by performing a determination identification process.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019-221293 | 2019-12-06 | ||
JP2019221293A JP2021092840A (en) | 2019-12-06 | 2019-12-06 | Mobile body detection method, road side device, and on-vehicle device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021111858A1 true WO2021111858A1 (en) | 2021-06-10 |
Family
ID=76221578
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/042726 WO2021111858A1 (en) | 2019-12-06 | 2020-11-17 | Moving body detection method, roadside device, and vehicle-mounted device |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2021092840A (en) |
WO (1) | WO2021111858A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114577224A (en) * | 2022-02-24 | 2022-06-03 | 中汽创智科技有限公司 | Object positioning method and device, electronic equipment and storage medium |
WO2023145494A1 (en) * | 2022-01-27 | 2023-08-03 | 京セラ株式会社 | Information processing device and information processing method |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7345118B2 (en) * | 2021-12-24 | 2023-09-15 | パナソニックIpマネジメント株式会社 | Communication device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006154967A (en) * | 2004-11-25 | 2006-06-15 | Nissan Motor Co Ltd | Risk minimum locus generating device, and dangerous situation warning device using it |
JP2007279808A (en) * | 2006-04-03 | 2007-10-25 | Honda Motor Co Ltd | Vehicle periphery monitoring device |
JP2016085686A (en) * | 2014-10-28 | 2016-05-19 | パイオニア株式会社 | Determination device, determination method and determination program |
-
2019
- 2019-12-06 JP JP2019221293A patent/JP2021092840A/en active Pending
-
2020
- 2020-11-17 WO PCT/JP2020/042726 patent/WO2021111858A1/en active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006154967A (en) * | 2004-11-25 | 2006-06-15 | Nissan Motor Co Ltd | Risk minimum locus generating device, and dangerous situation warning device using it |
JP2007279808A (en) * | 2006-04-03 | 2007-10-25 | Honda Motor Co Ltd | Vehicle periphery monitoring device |
JP2016085686A (en) * | 2014-10-28 | 2016-05-19 | パイオニア株式会社 | Determination device, determination method and determination program |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023145494A1 (en) * | 2022-01-27 | 2023-08-03 | 京セラ株式会社 | Information processing device and information processing method |
CN114577224A (en) * | 2022-02-24 | 2022-06-03 | 中汽创智科技有限公司 | Object positioning method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
JP2021092840A (en) | 2021-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021111858A1 (en) | Moving body detection method, roadside device, and vehicle-mounted device | |
US10699142B2 (en) | Systems and methods for traffic signal light detection | |
US10847032B2 (en) | Apparatus for informing parking position and method thereof | |
JP4293917B2 (en) | Navigation device and intersection guide method | |
JP4434224B2 (en) | In-vehicle device for driving support | |
JP4883242B2 (en) | Moving body position detection device | |
EP3819897B1 (en) | Driving support method and driving support device | |
JP2009140008A (en) | Dangerous traveling information provision device, dangerous traveling decision program and dangerous traveling decision method | |
JP5200568B2 (en) | In-vehicle device, vehicle running support system | |
JP6520687B2 (en) | Driving support device | |
JP2007010335A (en) | Vehicle position detecting device and system | |
JP2008033774A (en) | Notification system for traffic light information and on-board unit | |
JP2008215991A (en) | Positioning device and positioning system | |
EP3618031A1 (en) | Roadside device, control method of roadside device, vehicle, and recording medium | |
US20230111327A1 (en) | Techniques for finding and accessing vehicles | |
JP4639681B2 (en) | Vehicle object detection device | |
JP2005010938A (en) | Traveling supporting system and on-vehicle terminal equipment | |
JP2012211843A (en) | Position correction device and inter-vehicle communication system | |
JP2017126213A (en) | Intersection state check system, imaging device, on-vehicle device, intersection state check program and intersection state check method | |
EP3965395A1 (en) | Apparatus, method, and computer program for a first vehicle and for estimating a position of a second vehicle at the first vehicle, vehicle | |
KR20140085848A (en) | Detecting system of traffic signal based on vehicle position, and detecting method using the system | |
JP2009037462A (en) | Traffic information providing system and method | |
JP4472658B2 (en) | Driving support system | |
CN113626545B (en) | Vehicle, apparatus, method and computer program for determining a merged environment map | |
JP2012048293A (en) | Driving support device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20896387 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20896387 Country of ref document: EP Kind code of ref document: A1 |