WO2023139978A1 - Vehicle-mounted camera device, vehicle-mounted camera system, and image storage method - Google Patents

Vehicle-mounted camera device, vehicle-mounted camera system, and image storage method

Info

Publication number
WO2023139978A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
image
erroneous
camera device
vehicle camera
Prior art date
Application number
PCT/JP2022/045873
Other languages
English (en)
Japanese (ja)
Inventor
正幸 小林
Original Assignee
日立Astemo株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日立Astemo株式会社 filed Critical 日立Astemo株式会社
Priority to DE112022005228.7T priority Critical patent/DE112022005228T5/de
Publication of WO2023139978A1 publication Critical patent/WO2023139978A1/fr

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/02Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles

Definitions

  • the present invention relates to an in-vehicle camera device, an in-vehicle camera system, and an image storage method that generate learning data for machine learning from images captured while driving.
  • AD: autonomous driving
  • ADAS: advanced driver-assistance systems
  • Machine learning uses data under various circumstances for learning.
  • by increasing the diversity of the data used for machine learning (hereinafter referred to as "learning data"), it is possible to improve the recognition accuracy of an AI network trained on that data.
  • Patent Literature 1 discloses a learning data generating device that acquires learning data from the output data of an in-vehicle sensor based on a driver's manual driving operation.
  • according to Patent Literature 1, "the learning data generation device includes a storage unit that stores various data, a display unit that can display images, a driving environment acquisition unit that acquires information related to the vehicle's driving environment, an operation acquisition unit that acquires input related to vehicle operation, and a control unit that generates learning data associating the information related to the driving environment with the input related to the operation; the control unit causes the display unit to display the information about the driving environment acquired by the driving environment acquisition unit, and generates learning data in which the input related to the operation acquired by the operation acquisition unit is associated with the displayed information related to the driving environment."
  • the learning data generation device of Patent Document 1 has the ultimate goal of realizing highly accurate automated driving, as stated in the subject column of the abstract, "Generate learning data for realizing highly accurate driving operations during automated driving.”
  • however, the learning data generation device of Patent Document 1 is a system in which "the simulator system (learning data generation device) 1 is a system in which a vehicle travels in a virtually constructed driving environment and acquires the driving operations of the driver while driving." Because the learning data are collected in a virtual environment, there was a possibility that the accuracy of automated driving could not be sufficiently ensured in unlearned situations.
  • it is therefore an object of the present invention to provide an in-vehicle camera device, an in-vehicle camera system, and an image storage method that automatically extract and store images from before and after an occurrence of misrecognition of the external environment, out of the various images captured in an actual driving environment, thereby contributing to the improvement of a problematic AI network.
  • to achieve this, the in-vehicle camera device of the present invention is provided with a control feedback unit that receives feedback on the automatic control of the own vehicle, an erroneous control determination unit that determines erroneous automatic control based on the driver's driving operation information, and an image storage unit that saves images, and it saves the relevant images when it is determined that erroneous automatic control has occurred.
  • FIG. 1 is a bird's-eye view for explaining the field of view of an in-vehicle camera device mounted on the own vehicle.
  • FIG. 2 is a diagram showing the correlation of the visual field areas.
  • FIG. 3 is a functional block diagram of an in-vehicle camera system according to an embodiment.
  • FIG. 4 is a bird's-eye view for explaining the behavior of the own vehicle when misidentification occurs.
  • FIG. 5 is a flowchart for explaining a process of saving images before and after misidentification.
  • FIG. 6A is a bird's-eye view for explaining the behavior of the own vehicle when mistracking occurs.
  • FIG. 6B is a legend for FIG. 6A.
  • FIG. 7 is a flowchart for explaining a process of saving images before and after mistracking.
  • FIG. 8 is a bird's-eye view for explaining the behavior of the own vehicle when misidentification occurs, taking pedestrian movement into account.
  • FIG. 9 is a flowchart for explaining a process of saving images before and after misidentification, taking pedestrian movement into account.
  • the in-vehicle camera device 100 is a device that, based on the details of the driver's manual driving operation, identifies and saves the images that caused AD control or ADAS control contrary to the driver's intention, and additionally trains the AI network for external environment recognition using the saved images.
  • the vehicle-mounted camera system is a system that includes the vehicle-mounted camera device 100 and a vehicle control system 200 that controls the own vehicle based on the output of the device. Details of each will be described below.
  • FIG. 1 is a plan view for explaining the field of view of an in-vehicle camera device 100 mounted on a vehicle.
  • the in-vehicle camera device 100 is attached to the inner surface of the windshield of the vehicle or the like facing forward, and incorporates a left imaging unit 1L and a right imaging unit 1R.
  • the left imaging unit 1L is an imaging unit arranged on the left side of the on-vehicle camera device 100 for capturing the left image PL .
  • the right imaging unit 1R is arranged on the right side of the on-vehicle camera device 100 and captures the right image PR.
  • the right visual field V R and the left visual field V L overlap in front of the host vehicle.
  • this overlapping visual field area will be referred to as "stereo visual field area R S "
  • the visual field area obtained by removing the stereo visual field area R S from the right visual field VR will be referred to as the "right monocular visual field area R R "
  • the visual field area obtained by removing the stereo visual field area R S from the left visual field V L will be referred to as the "left monocular visual field area R L ".
  • FIG. 2 is a diagram showing the correlation of each visual field area described above.
  • the upper stage is a left image P L captured by the left imaging section 1L
  • the middle stage is a right image P R captured by the right imaging section 1R
  • the lower stage is a composite image P C obtained by synthesizing the left image PL and the right image PR .
  • the composite image PC is divided into a right monocular viewing region RR , a stereo viewing region RS , and a left monocular viewing region RL as shown.
  • FIG. 3 is a functional block diagram of the in-vehicle camera system according to this embodiment.
  • the in-vehicle camera system of this embodiment is a system that includes an in-vehicle camera device 100 and a vehicle control system 200, and receives output data from each sensor, which will be described later.
  • the vehicle-mounted camera device 100 includes a stereo matching unit 2, a monocular detection unit 3, a monocular distance measurement unit 4, a template creation unit 5, an image storage unit 6, a similar part search unit 7, an angle of view identification unit 8, a stereo detection unit 9, a speed calculation unit 10, a vehicle information input unit 11, a stereo distance measurement unit 12, a type identification unit 13, a driver operation input unit 14, and an identification history storage unit 15.
  • the vehicle-mounted camera device 100 is connected to a vehicle speed sensor 31, a vehicle steering angle sensor 32, a yaw rate sensor 33, a steering sensor 34, an accelerator pedal sensor 35, and a brake pedal sensor 36 mounted on the own vehicle, and receives output data of each sensor.
  • the configuration other than the left imaging unit 1L and the right imaging unit 1R is specifically a computer equipped with hardware such as an arithmetic device, a storage device, and a communication device.
  • arithmetic units such as CPU (Central Processing Unit), GPU (Graphics Processing Unit), FPGA (Field Programmable Gate Array), ASIC (Application Specific Integrated Circuit), and CPLD (Complex Programmable Logic Device) execute a predetermined program obtained from a program recording medium or a distribution server outside the vehicle, realizing each processing unit such as the stereo matching unit 2.
  • each storage unit, such as the identification history storage unit 15, is implemented by a storage device such as a semiconductor memory; hereinafter, such well-known technology in the computer field will be omitted as appropriate.
  • the vehicle control system 200 is connected to the in-vehicle camera device 100 and each of the sensors described above, and is a system that automatically controls the warning, braking, and steering of the own vehicle for the purposes of collision avoidance and reduction, following the preceding vehicle, and maintaining the own lane, according to the recognition results of the in-vehicle camera device 100.
  • the left imaging unit 1L and the right imaging unit 1R are monocular cameras equipped with imaging sensors (CMOS (Complementary Metal Oxide Semiconductor), etc.) that convert light into electric signals.
  • the information converted into an electric signal by each imaging sensor is further converted into image data representing a captured image within each imaging unit.
  • the images captured by the left imaging unit 1L and the right imaging unit 1R are sent as the left image PL and the right image PR to the stereo matching unit 2, the monocular detection unit 3, the monocular distance measurement unit 4, the template creation unit 5, the image storage unit 6, the similar part search unit 7, and the angle of view identification unit 8 at predetermined intervals (for example, every 17 ms).
  • the stereo matching unit 2 receives data including image data from the left imaging unit 1L and the right imaging unit 1R, and processes the data to calculate parallax.
  • parallax is the difference in the image coordinates at which the same object is captured, caused by the different positions of the plurality of imaging units.
  • the parallax is large at short distances and small at long distances, and the distance can be calculated from the parallax.
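  • as an illustration of the parallax-to-distance relationship described above, the following is a minimal sketch (not the actual implementation of the stereo matching unit 2; the focal length, baseline, and pixel pitch values are assumptions chosen only for the example):
```python
def depth_from_parallax(parallax_px, focal_length_mm=6.0, baseline_m=0.35, pixel_pitch_mm=0.00375):
    """Estimate depth distance (m) from stereo parallax (pixels).

    Depth Z = f * B / d, where f is the focal length expressed in pixels,
    B is the baseline between the two imaging units, and d is the parallax.
    Larger parallax -> shorter distance, smaller parallax -> longer distance.
    """
    focal_length_px = focal_length_mm / pixel_pitch_mm  # focal length in pixels
    if parallax_px <= 0:
        raise ValueError("parallax must be positive for a finite distance")
    return focal_length_px * baseline_m / parallax_px

# Example: a parallax of 16 px corresponds to a depth of about 35 m with these assumed parameters.
print(round(depth_from_parallax(16.0), 1))
```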
  • the stereo matching unit 2 also corrects the distortion of the image data of the left image PL and the right image PR. For example, the distortion is corrected according to a so-called central projection model or perspective projection model so that objects having the same height and the same depth distance are aligned horizontally in the image coordinates.
  • because the left imaging section 1L and the right imaging section 1R are arranged side by side in the horizontal direction, the correction aligns corresponding points in the horizontal direction.
  • of the corrected left image PL and right image PR, one is used as reference image data and the other as comparison image data, and the two are compared to obtain the parallax.
  • the monocular detection unit 3 detects a specific three-dimensional object appearing in the left image PL or the right image PR .
  • a specific three-dimensional object is an object that needs to be detected in order to realize appropriate AD control or ADAS control, and specifically includes pedestrians, other vehicles, and bicycles around the vehicle.
  • Detection by the monocular detection unit 3 is made to detect three-dimensional objects within a certain range from the vehicle-mounted camera.
  • the detection result by the monocular detection unit 3 includes information on the image coordinates of the left image PL or the right image PR in which the object to be detected is captured. For example, the detection result is held as the vertical and horizontal image coordinates of the upper-left and lower-right corners of a rectangular frame (hereinafter referred to as "monocular detection frame") surrounding the detected object.
  • the stereo detection unit 9 detects, as three-dimensional objects, regions at approximately the same distance that fall within a certain size range.
  • the detection result by the stereo detector 9 includes parallax image coordinate information of the detected object.
  • the detection result is held as the vertical and horizontal image coordinates of the upper-left and lower-right corners of a rectangular frame (hereinafter referred to as "stereo detection frame") surrounding the detected object.
  • the type identification unit 13 identifies the type of the object detected by the monocular detection unit 3 or the stereo detection unit 9 using an identification network such as a template image, pattern, or machine learning dictionary.
  • the types of objects identified by the type identification unit 13 are four-wheeled vehicles, two-wheeled vehicles, and pedestrians; the orientation of the front surface of each object, which corresponds to its moving direction, is also identified, and it is further identified whether the object is a moving body (a three-dimensional object moving against the background) that is expected to cross in front of the own vehicle.
  • the orientation of the front surface of these moving objects is the orientation of the front surface of the object with respect to the in-vehicle camera device 100. In this embodiment, the operation is limited to moving directions that are nearly orthogonal to the own vehicle's direction of travel.
  • whether the direction of movement is nearly orthogonal is determined from the angle of view and the orientation of the front surface of the object with respect to the vehicle-mounted camera device 100. For example, for an object captured on the right side of the in-vehicle camera device 100, the front surface and the side surface of the object in its traveling direction are captured; at an angle of view of 45°, an object whose direction of motion is close to orthogonal is identified when its front and side faces are seen at a 45° orientation.
  • the type identification unit 13 transmits to the speed calculation unit 10 the determination result as to whether or not the moving direction of the object is nearly orthogonal.
  • the monocular distance measurement unit 4 identifies the position of a specific object detected by the monocular detection unit 3, and obtains the distance and direction from the left imaging unit 1L or the right imaging unit 1R.
  • the specified distance and direction are represented by a coordinate system that can specify the position on the plane of the depth distance in front of the own vehicle and the lateral distance in the lateral direction.
  • alternatively, information expressed in a polar coordinate system defined by a Euclidean distance from the camera and a direction may be held, and trigonometric functions may be used for interconversion between this representation and the depth and lateral axes.
  • the monocular distance measurement unit 4 uses an overhead image obtained by projecting the left image P L , the right image P R , and the composite image P C onto the road surface, and specifies the position from the vertical and horizontal coordinates of the overhead image and the vertical and horizontal scales of the overhead image with respect to the actual road surface.
  • it is not essential to use a bird's-eye view image to specify the position. It may be specified by performing geometric calculations using extrinsic parameters of the position and orientation of the camera, and information on the focal length, the pixel pitch of the imaging device, and the distortion of the optical system.
  • the stereo ranging unit 12 identifies the position of the object detected by the stereo detection unit 9 and identifies the distance and direction.
  • the specified distance and direction are represented by a coordinate system that can specify the position on the plane of the depth distance in front of the own vehicle and the lateral distance in the lateral direction.
  • alternatively, information expressed in a polar coordinate system defined by a Euclidean distance from the camera and a direction may be held, and trigonometric functions may be used for interconversion between this representation and the depth and lateral axes.
  • the stereo ranging unit 12 calculates the distance in the depth direction from the parallax of the object.
  • the stereo ranging unit 12 uses the average or the mode when there is variation in the calculated parallax of the object. When the variation in parallax is large, a method that excludes specific outliers may be used.
  • the horizontal distance is obtained from the horizontal angle of view of the detection frame of the stereo detection unit 9 and the depth distance using a trigonometric function.
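  • a minimal sketch of the trigonometric relation mentioned above follows (the variable names and example values are illustrative assumptions, not those of the actual device):
```python
import math

def lateral_distance(depth_m, horizontal_angle_deg):
    """Lateral (left-right) distance of an object from the camera axis.

    Given the depth distance and the horizontal angle of view at which the
    object appears, the lateral distance follows from the tangent relation
    x = Z * tan(theta).
    """
    return depth_m * math.tan(math.radians(horizontal_angle_deg))

# Example: an object 20 m ahead seen at 10 degrees to the right is about 3.5 m to the side.
print(round(lateral_distance(20.0, 10.0), 2))
```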
  • the template creation unit 5 selects one of the captured images, cuts out a specific region from this captured image (hereinafter referred to as "detected image”), and uses it as a template image. Specifically, the template creation unit 5 creates a template image for searching for similar portions from pixel information inside and around the monocular detection frame of the object detected by the monocular detection unit 3 .
  • this template image is scaled to a predetermined image size. When enlarging or reducing the template image, the aspect ratio is maintained or not changed significantly.
  • as the template, either the brightness values of the pixels themselves are held, or the scaled image is divided into a plurality of small regions (hereinafter also referred to as "kernels") and the relationship between the brightness values within each kernel is stored as a feature of the image.
  • Image features are extracted in various forms, and there are various feature amounts such as left-right and top-bottom average brightness differences in the kernel, average brightness differences between the periphery and the center, and brightness averages and variances.
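  • the following is a minimal sketch of the kind of kernel-based brightness features described above (left-right mean difference, top-bottom mean difference, mean, and variance per kernel); it is an illustrative example only, not the feature set actually used by the template creation unit 5:
```python
import numpy as np

def kernel_features(gray_image, kernel_size=8):
    """Split a grayscale image into square kernels and compute simple
    brightness features for each: left-right mean difference, top-bottom
    mean difference, average brightness, and brightness variance."""
    h, w = gray_image.shape
    feats = []
    for y in range(0, h - kernel_size + 1, kernel_size):
        for x in range(0, w - kernel_size + 1, kernel_size):
            k = gray_image[y:y + kernel_size, x:x + kernel_size].astype(np.float32)
            half = kernel_size // 2
            feats.append((
                k[:, :half].mean() - k[:, half:].mean(),  # left-right difference
                k[:half, :].mean() - k[half:, :].mean(),  # top-bottom difference
                k.mean(),                                  # average brightness
                k.var(),                                   # brightness variance
            ))
    return np.array(feats)

# Example on a synthetic 32x32 template image.
template = np.tile(np.arange(32, dtype=np.uint8) * 8, (32, 1))
print(kernel_features(template).shape)  # (16, 4): 4x4 kernels, 4 features each
```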
  • in the camera image, the background appears around the target, so the background is mixed in when the image is cut out. Moreover, even for the same target, the background changes over time as the target and the camera move.
  • machine learning is performed using camera images on the shape and texture of the type of target to be tracked, and which feature quantities should be held, and with what weights, when searching for similar locations is stored as a tracking network.
  • the image storage unit 6 holds the left image PL and the right image PR for a certain period of time, or until a certain number of left images PL and right images PR at different times are accumulated.
  • a temporary storage device such as a DRAM (Dynamic Random Access Memory) is used for this holding.
  • a plurality of addresses and ranges in the storage device to be held may be determined in advance, and the transfer destination address may be changed for each captured image, and after one cycle, the area where the old image is stored may be overwritten. Since the address is determined in advance, there is no need to notify the address when reading out the held image.
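  • a minimal sketch of the fixed-address, cyclically overwritten holding scheme described above (the slot count and interface are assumptions made only for illustration):
```python
class ImageRingBuffer:
    """Hold the most recent N captured images in pre-allocated slots.

    The write slot advances cyclically, so after one full cycle the oldest
    image is overwritten, and the read address for a given age is known
    without any notification, as described for the image storage unit.
    """
    def __init__(self, num_slots=60):
        self.slots = [None] * num_slots
        self.next_slot = 0
        self.count = 0

    def store(self, image):
        self.slots[self.next_slot] = image
        self.next_slot = (self.next_slot + 1) % len(self.slots)
        self.count = min(self.count + 1, len(self.slots))

    def get(self, frames_ago=0):
        """Return the image captured `frames_ago` frames before the newest one."""
        if frames_ago >= self.count:
            return None
        index = (self.next_slot - 1 - frames_ago) % len(self.slots)
        return self.slots[index]

buf = ImageRingBuffer(num_slots=4)
for t in range(6):
    buf.store(f"frame_{t}")
print(buf.get(0), buf.get(3))  # frame_5 frame_2
```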
  • the similar part searching unit 7 searches the images stored in the image storage unit 6 for parts similar to the template image created by the template creating unit 5 .
  • an image for which the similarity search section 7 searches for a similarity to the template image will be referred to as a "search target image”.
  • the search target image is an image different from the image at the time detected by the monocular detection unit 3 .
  • the time of the image to be selected is based on choosing an image captured shortly before or after the image at the detected time. Similar locations are likely to exist near the coordinates of the monocular detection frame at the time of detection, and since little change in brightness relative to the template image is expected from changes in the orientation of the detected object or in exposure, a high-precision search is possible.
  • based on the search result, the detection results of the monocular detection unit 3 and the stereo detection unit 9, and the distance measurement results of the monocular distance measurement unit 4 and the stereo distance measurement unit 12, the position of the same target at each time is tracked.
  • the process of creating a template image from the image at a certain time and searching for and tracking similar portions in the image at a different time is called image tracking processing by a tracking network.
  • if the imaging interval is short, an image from two or more frames earlier may be selected as the search target image.
  • the image to be selected may also be changed according to the vehicle speed. For example, if the vehicle speed is equal to or greater than a threshold, an older image close in time to the detected image is selected, and if the vehicle speed is below the threshold, an image older than that may be selected.
  • when the vehicle speed is high, using an image captured at a time close to the detection time keeps the appearance from changing greatly relative to the template image, making it easy to ensure search accuracy.
  • when the vehicle speed is low, the positional change of the detected object in the search image relative to the detection time becomes larger, so the accuracy of the calculated speed can be improved.
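  • a minimal sketch of this speed-dependent selection (the threshold value and frame offsets are illustrative assumptions, not values from the specification):
```python
def select_search_frame_offset(vehicle_speed_kmh, speed_threshold_kmh=40.0):
    """Choose how many frames back the search target image should be taken from.

    At or above the threshold speed, use an image close in time to the detected
    image (appearance changes little, so search accuracy is easier to ensure);
    below the threshold, use an older image so the detected object moves more
    between frames and the calculated speed becomes more accurate.
    """
    return 1 if vehicle_speed_kmh >= speed_threshold_kmh else 3

print(select_search_frame_offset(60.0))  # 1 frame back at high speed
print(select_search_frame_offset(20.0))  # 3 frames back at low speed
```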
  • the angle-of-view specifying unit 8 specifies the horizontal angle of view of the camera for the specific object appearing at the similar location found by the similar location searching unit 7. If the height is specified along with the position, the vertical angle of view is also specified. Even when the camera may roll, the speed can be calculated with high accuracy by specifying both the horizontal and vertical angles of view.
  • the horizontal angle of view can be obtained by a trigonometric function from the ratio of the depth distance and the horizontal distance.
  • the speed calculation unit 10 receives the distance measurement result from the monocular distance measurement unit 4 or the stereo distance measurement unit 12, receives the angle of view of the similar location from the angle of view identification unit 8, receives vehicle behavior information from the vehicle information input unit 11, and calculates the speed of the detected specific object. However, instead of receiving the vehicle speed from the vehicle information input unit 11, the speed calculation unit 10 may use the calculated differential value of the relative depth distance as the vehicle speed. This is useful when the depth distance can be measured with high accuracy and the horizontal distance is measured with low accuracy. Also, if the depth velocity is used instead of the vehicle velocity, it is possible to accurately predict a collision with a crossing vehicle that is not orthogonal.
  • the vehicle information input unit 11 receives the vehicle speed information from the vehicle speed sensor 31 that measures the speed of the vehicle, the steering angle information from the vehicle steering angle sensor 32 that measures the steering angle of the steered wheels, and the turning speed of the vehicle from the yaw rate sensor 33 that measures the turning speed of the vehicle.
  • the vehicle information input unit 11 may be realized by a communication module compatible with a communication port (for example, IEEE802.3), or may be realized by an AD converter capable of reading voltage and current.
  • the driver operation input unit 14 is connected to a steering sensor 34 that acquires information on the steering wheel angle, an accelerator pedal sensor 35 that acquires information on the amount of depression of the accelerator pedal, and a brake pedal sensor 36 that acquires information on the amount of depression of the brake pedal.
  • the identification history storage unit 15 stores the three-dimensional object (hereinafter referred to as "target") identified by the type identification unit 13 together with the target identifier and the time of the target.
  • the target identifier is a unique code assigned to each target. For example, if one vehicle and three pedestrians are identified as targets, identifiers such as vehicle A, pedestrian A, pedestrian B, and pedestrian C are assigned to each target.
  • the tracking history storage unit 16 stores the success or failure of the template image creation by the template creation unit 5 and the similar location search unit 7, the template image when searching for similar locations at each time, the success or failure of the similar location search results, and the image detected as the similar location, together with the identifier of the target and the time.
  • the position history storage unit 17 stores the location of the same target at each time, along with the target identifier and time, based on the search result, the detection result of the monocular detection unit 3, the detection result of the stereo detection unit 9, the distance measurement result of the monocular distance measurement unit 4, and the distance measurement result of the stereo distance measurement unit 12.
  • the speed history storage unit 18 stores the speed calculated by the speed calculation unit 10 together with the identifier of the target and the time.
  • the control feedback unit 23 is connected to the vehicle control system 200 and receives control feedback information such as the content and time of the vehicle control performed by the vehicle control system 200, the identifier of the controlled target, the reason for the control decision, the content of any control cancellation, and the reason for the cancellation.
  • the erroneous control determination unit 21 receives the control feedback information of the own vehicle from the control feedback unit 23 and receives the driver's operation information from the driver operation input unit 14. The vehicle control system 200 performs vehicle control based on the target object information output from the in-vehicle camera device 100, and after an alarm or automatic braking has started, it is determined whether or not that control was canceled by the driver's operation. When there is such a driving operation, the erroneous control determination unit 21 determines that erroneous control by the vehicle control system 200 has occurred. The details of the erroneous control determination unit 21 will be described later.
  • the erroneous identification specifying unit 19 identifies cases of erroneous identification among the identification results for the target that was the subject of erroneous control, after the erroneous control determination unit 21 determines that erroneous control has occurred.
  • the details of the erroneous identification specifying unit 19 will be described later.
  • the erroneous tracking identification unit 20 identifies a case of erroneous tracking among the tracking results of the target that was the target of erroneous control after the erroneous control determination unit 21 determined that the target was erroneously controlled. The details of the mistracking identification unit 20 will be described later.
  • the erroneous recognition image storage unit 22 extracts and stores images before and after the erroneous identification and erroneous tracking specified by the erroneous identification specifying unit 19 and the erroneous tracking specifying unit 20 .
  • the additional learning unit 24 additionally learns the AI network for recognizing the external environment using the images before and after the erroneous identification and erroneous tracking stored in the erroneous recognition image storage unit 22 .
  • FIG. 4 is a bird's-eye view showing an example of a situation in which erroneous identification occurs as a result of recognizing the external environment using an identification network or the like in the type identification unit 13 of the in-vehicle camera device 100 .
  • although the sampling period of the external environment is set to 200 ms here, the sampling period is not limited to this example.
  • FIG. 4(a) is a bird's-eye view illustrating the external environment of the own vehicle at time T -1 , which is 200 ms before time T 0 described later.
  • the vehicle is traveling at a constant speed on a straight road, and the in-vehicle camera device 100 attached to the vehicle has not detected a three-dimensional object in front of the vehicle.
  • FIG. 4B is a bird's-eye view illustrating the external environment of the host vehicle at time T0 when automatic control (automatic deceleration) is started.
  • the in-vehicle camera device 100 erroneously detects a non-existent pedestrian as a result of processing the captured image using a slightly problematic identification network or the like. Therefore, the vehicle control system 200 that receives the detection result of the in-vehicle camera device 100 controls the braking system of the own vehicle and decelerates the own vehicle rapidly in order to prevent contact with a nonexistent pedestrian.
  • a situation in which a non-existing pedestrian is erroneously detected is, for example, a situation in which a lump of exhaust gas is erroneously identified as a pedestrian.
  • FIG. 4(c) is a bird's-eye view illustrating the external environment of the host vehicle at time T1 , 200 ms after time T0 .
  • the driver notices that an abnormality (erroneous identification) has occurred in the in-vehicle camera device 100 because automatic deceleration has started without any reason to brake the vehicle.
  • FIG. 4D is a bird's-eye view illustrating the external environment of the host vehicle at time T2 , 400 ms after time T0 .
  • the driver who has confirmed the safety in front depresses the accelerator pedal in order to restore the decelerated speed to the pre-deceleration speed to accelerate the own vehicle.
  • FIG. 4(e) is a bird's-eye view illustrating the external environment of the host vehicle at time T3 , 600 ms after time T0 .
  • the vehicle has entered the area of the pedestrian that was misidentified at time T0 , but since there are no pedestrians there, the vehicle can safely pass through the area.
  • when the vehicle control system 200 starts automatic control based on the identification result of the on-vehicle camera device 100, if the driver performs a manual driving operation contrary to the automatic control and there is no contact with the identified three-dimensional object, it can be determined that an erroneous identification has occurred in the on-vehicle camera device 100.
  • the device of the present invention therefore stores the images of the pedestrian target determined to have been misidentified at time T 0 and uses them for additional learning.
  • FIG. 5 is a flowchart to which the mechanism described in FIG. 4 is applied, and shows a method of saving an image at the time when an erroneous identification occurs in the on-vehicle camera device 100, and additionally learning an identification network or the like based on the saved image. Each step will be described below.
  • in step S1, the erroneous control determination unit 21 determines whether or not automatic control for collision prevention has operated, based on the data from the vehicle control system 200 obtained by the control feedback unit 23.
  • the automatic control for collision prevention includes, for example, automatic braking for collision avoidance and mitigation, control for sounding a collision alarm, steering control for avoiding obstacles, and the like. Then, if the automatic control based on the output of the in-vehicle camera device 100 operates, the process proceeds to step S2, and if it does not operate, step S1 is executed again after a certain period of time.
  • in step S2, the erroneous control determination unit 21 determines whether or not a manual driving operation contrary to the automatic control has been performed, based on data from the steering sensor 34 and the accelerator pedal sensor 35 obtained by the driver operation input unit 14.
  • if the driver performs a manual operation contrary to the automatic control after the automatic control for preventing contact with the detected three-dimensional object has been activated, it is considered that the driver is operating after confirming safety.
  • however, the operation is accepted as an operation for canceling the stop control only under the following conditions. That is, when the driver depresses the accelerator pedal strongly, exceeding the threshold value Th2, the driver may have mistaken the accelerator pedal for the brake pedal and depressed it strongly, so it is considered inappropriate to accept this as an operation to cancel the stop control. In addition, if the driver continues to depress the accelerator pedal only weakly, below the threshold Th1, the driver may not be aware of the possibility of a collision, and it is likewise considered inappropriate to accept this as an operation to cancel the stop control. Therefore, only when the depression amount is between the threshold Th1 and the threshold Th2 is the automatic braking released and the event treated as erroneous control.
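  • a minimal sketch of this threshold decision (the Th1 and Th2 values are illustrative assumptions; the actual thresholds are a design choice of the system):
```python
def is_cancel_operation(accel_depression_pct, th1_pct=20.0, th2_pct=80.0):
    """Decide whether an accelerator operation after automatic braking counts
    as a deliberate cancellation of the stop control.

    - Below Th1: the driver may not be aware of the collision risk -> not accepted.
    - Above Th2: the driver may have mistaken the accelerator for the brake -> not accepted.
    - Between Th1 and Th2: accepted; the automatic braking is released and the
      event is treated as erroneous control.
    """
    return th1_pct < accel_depression_pct < th2_pct

for depression in (10.0, 50.0, 95.0):
    print(depression, is_cancel_operation(depression))
# 10.0 False / 50.0 True / 95.0 False
```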
  • <<Second example of erroneous control>> A situation is assumed in which the vehicle control system 200 determines that the own vehicle will collide with a target and starts automatic braking with the goal of stopping the vehicle before the collision point. In this case, after the vehicle is stopped by the automatic braking, the driver depresses the accelerator pedal within a predetermined time to restart the vehicle, and the vehicle passes the collision point.
  • the predetermined time is, for example, 0.5 seconds, which is sufficiently shorter than the time an obstacle present at the collision point would need to retreat.
  • the time for the obstacle to retreat is the time required for it to move, at its crossing speed, a distance corresponding to the vehicle width when the collision point is at the left-right center of the own vehicle.
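  • a minimal sketch of this timing comparison (the 0.5 s restart window follows the text; the vehicle width and crossing speed are illustrative assumptions):
```python
def retreat_time_s(vehicle_width_m=1.8, crossing_speed_mps=1.4):
    """Time an obstacle at the collision point would need to move clear,
    i.e. to cover the vehicle width at its crossing speed."""
    return vehicle_width_m / crossing_speed_mps

def restart_implies_misrecognition(restart_delay_s, threshold_s=0.5):
    """If the driver restarts the vehicle well before an obstacle could have
    retreated, the stop is treated as having been triggered by misrecognition."""
    return restart_delay_s <= threshold_s and threshold_s < retreat_time_s()

print(round(retreat_time_s(), 2))           # about 1.29 s to retreat
print(restart_implies_misrecognition(0.4))  # True: restart after only 0.4 s
```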
  • in addition, if the target is a non-vehicle such as a pedestrian, the target may be startled and stop in a situation where there is a possibility of collision with the own vehicle.
  • after the vehicle control system 200 has stopped the vehicle before the collision point, if the driver operates the accelerator pedal and the vehicle restarts and passes the collision point within a short period of time, it is determined that a driving operation contrary to the automatic control has been performed, and the process proceeds to step S3.
  • alternatively, the vehicle control system 200 may determine that a collision with a target will occur and perform an automatic steering operation that turns the vehicle to the right or left of the target in order to avoid it. In this case, if the driver performs a steering operation that opposes the automatic steering operation, it is also determined that a driving operation contrary to the automatic control has been performed, and the process proceeds to step S3.
  • such a steering operation is, for example, a case in which the vehicle control system 200 controls the automatic steering to turn the steering wheel clockwise by 45°, but the driver applies force in the counterclockwise direction so that the steering wheel only reaches a state sufficiently smaller than 45°, for example only about 22°.
  • in step S3, the erroneous control determination unit 21 calculates the travel trajectory after automatic control (the trajectory of the vehicle's position) based on the outputs of the vehicle speed sensor 31, the vehicle steering angle sensor 32, and the yaw rate sensor 33, and stores it together with the driving operation. Specifically, information on accelerator pedal operation, brake pedal operation, steering operation, vehicle speed, vehicle steering angle, and vehicle yaw rate resulting from the driver's driving operation is obtained from the various sensors, and the travel trajectory necessary for the subsequent determination in step S4 is calculated and estimated.
  • in step S4, the erroneous identification specifying unit 19 identifies the occurrence of erroneous identification by determining whether the vehicle, under manual operation, traveled on a trajectory that could have avoided a collision with the identified target, assuming that the identification result at the start of automatic control was correct. Specifically, the trajectory of the own vehicle estimated in step S3 is compared with the identification history and position history stored in the identification history storage unit 15 and the position history storage unit 17, and for every historical position of the identification result (pedestrian, bicycle, vehicle, etc.) that the vehicle control system 200 acted on, it is determined whether or not a collision could have been avoided.
  • if there is even one historical position for which the travel trajectory is judged unable to avoid a collision, it is determined that the driver selected that trajectory after judging that the position could be passed safely (that is, that an erroneous identification caused the vehicle control system 200 to activate an essentially unnecessary automatic control), and the process proceeds to step S5. Otherwise, it is determined that the automatic control by the vehicle control system 200 was necessary (that is, the identification that triggered the automatic control was appropriate), and the process returns to step S1.
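  • a minimal sketch of this determination (simple 2D point-versus-path geometry only; the clearance radius, coordinate convention, and data layout are illustrative assumptions, not the stored history format):
```python
import math

def passed_through(trajectory_xy, target_xy, clearance_m=1.0):
    """Return True if the manually driven trajectory passes within
    `clearance_m` of a recorded target position, i.e. a collision with that
    target could not have been avoided had it really existed."""
    tx, ty = target_xy
    return any(math.hypot(x - tx, y - ty) < clearance_m for x, y in trajectory_xy)

def erroneous_identification(trajectory_xy, target_history_xy):
    """If even one historical target position lies on the driven trajectory
    (and no contact actually occurred), the identification that triggered the
    automatic control is judged erroneous."""
    return any(passed_through(trajectory_xy, pos) for pos in target_history_xy)

# Example: the vehicle drove straight through the point where a pedestrian
# had been "identified" at (0.0, 12.0) -> misidentification.
trajectory = [(0.0, float(d)) for d in range(0, 30, 2)]
print(erroneous_identification(trajectory, [(0.0, 12.0)]))  # True
```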
  • in step S5, the erroneous identification specifying unit 19 reads the image of the erroneous identification, which is temporarily stored in the image storage unit 6, and saves it in the erroneous recognition image storage unit 22.
  • at this time, the position and time stored in the history and the identification image corresponding to the identifier of the target are stored together in the memory.
  • the number of images to be stored in the erroneously recognized image storage unit 22 can be reduced to, for example, at most several images (a data amount of about several hundred KB) for which erroneous identification occurs, so that the storage capacity of the erroneously recognized image storage unit 22 can be significantly reduced compared to the case where all the images captured while driving are stored as learning data.
  • in step S6, the additional learning unit 24 performs additional machine learning of the identification network, etc., using the erroneously identified images stored in the erroneous recognition image storage unit 22 in step S5 as learning data.
  • the identification image stored in the misrecognized image storage unit 22 may be used as learning data for additional learning, or learning data obtained by processing the stored identification image (for example, learning data obtained by enlarging or reducing the identification image, or learning data obtained by rotating the identification image) may be used as learning data for additional learning.
  • in step S5, it is desirable to store a label specifying the mode of misidentification together with the identification image. Since the vehicle control system 200 of this embodiment changes its determination of whether or not to execute vehicle control according to the type of detected target (another vehicle, pedestrian, bicycle, etc.), an incorrect determination of the type of target is considered a possible cause of erroneous control. Therefore, an identifier identifying the identification network that made the erroneous identification, and the type of target it erroneously identified, are recorded in the label attached to the identification image.
  • in order to make the erroneously identifying identification network learn that its identification was wrong, type class information on how the target was erroneously identified is needed along with the learning data (images). Therefore, by storing the type class information in the label in step S5, the additional machine learning in step S6 can be targeted precisely at the problematic identification network specified by the label, using the image from the time the misidentification occurred, thereby improving that network. In addition, because problem-free identification networks can be excluded from additional learning, adverse effects such as erroneous learning by normal identification networks and increased computational load due to unnecessary additional learning can be avoided.
  • FIG. 6A is a bird's-eye view showing an example of a situation in which mistracking occurs as a result of recognizing the external environment using a template image with some problem, a tracking network, or the like in the similar location search unit 7 of the vehicle-mounted camera device 100
  • FIG. 6B is a legend for FIG. 6A.
  • the bird's-eye view shown in the figure is created, for example, by taking into account changes over time in projected images obtained by affine transforming the picked-up left image P L and right image PR using a predetermined affine table, and changes over time in the position of the vehicle. Since the method for creating the bird's-eye view of the surroundings of the vehicle using such a method is a well-known technique, the details thereof will not be described.
  • FIGS. 6A(a) to (c) are overhead views illustrating the external environment of the host vehicle at times T -3 , T -2 , and T -1 , which are 600 ms, 400 ms, and 200 ms before time T 0 described later, respectively.
  • the vehicle is traveling on a straight road at a constant speed, and the in-vehicle camera device 100 tracks the movement of the pedestrian walking on the right sidewalk using a tracking network or the like. Since the vehicle control system 200 determines that the own vehicle will not come into contact with the pedestrian during the period from time T -3 to T -1 , automatic control for avoiding contact with the pedestrian is not performed at these times.
  • FIG. 6A(d) is a bird's-eye view illustrating the external environment of the host vehicle at time T0 when automatic control (automatic deceleration) is started.
  • the in-vehicle camera device 100 erroneously tracks the pedestrian who is actually on the sidewalk as having jumped out onto the roadway as a result of processing the picked-up image using a slightly problematic template image, tracking network, or the like. Therefore, the vehicle control system 200 that has received the tracking result of the in-vehicle camera device 100 controls the braking system of the own vehicle and decelerates the own vehicle rapidly in order to prevent contact with a non-existent pedestrian on the road.
  • a situation in which a non-existent pedestrian on the road is erroneously detected is, for example, one in which a flag or plants in the direction of the pedestrian's path flutter due to a gust of wind and that movement is mistaken for the pedestrian jumping out onto the road.
  • OB -3 , OB -2 , OB -1 and OB 0 indicate the positions of targets (pedestrians) detected at times T -3 , T -2 , T -1 and T 0 respectively. Note that the position OB 0 is erroneously detected.
  • v -1 , v 0 , and v 1 are movement vectors of the target (pedestrian) during the periods from time T -3 to T -2 , from T -2 to T -1 , and from T -1 to T 0 , respectively, and are obtained by converting the velocity vector of the target (pedestrian) calculated by the in-vehicle camera device 100 into the amount of movement in each period.
  • E L and E R are the left and right ends of the travel range of the own vehicle, and indicate the left and right ends of the range in which the own vehicle is predicted to travel based on the steering angle, yaw rate, and speed of the own vehicle.
  • the vehicle control system 200 of this embodiment determines that there is a possibility of contact with the target when a moving target enters a predetermined contact determination range including the travel range of the vehicle, and activates automatic control such as braking and steering to avoid contact with the target. Therefore, in FIG. 6B, at time T0 when the target position OB0 within the contact determination range is detected, the vehicle control system 200, for example, activates automatic control to rapidly decelerate the own vehicle.
  • the length of the vehicle travel range is limited to the range the vehicle is expected to cover within a predetermined time (for example, within 5 seconds), so that a target located at a point where contact avoidance control does not yet need to be implemented is not included in the travel range at that point in time.
  • the width of the contact determination range may be changed according to the type information of the road being traveled obtained from GNSS (Global Navigation Satellite System) or the like. For example, it may be narrow on highways where the possibility of approaching targets (pedestrians, other vehicles, etc.) from the side is low, and may be widened on general roads where there is a high possibility of targets approaching from the side.
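  • a minimal sketch of the contact determination described above, for a straight-ahead travel range bounded by E L and E R with a limited look-ahead length (the margin, footprint, and coordinate convention are illustrative assumptions):
```python
def in_contact_determination_range(target_x_m, target_y_m,
                                   e_left_m=-1.2, e_right_m=1.2,
                                   margin_m=0.5, max_lookahead_m=30.0):
    """Return True if a target lies inside the contact determination range:
    laterally within the predicted travel range [E_L, E_R] plus a margin,
    and longitudinally within the look-ahead distance the vehicle is expected
    to cover within the predetermined time."""
    lateral_ok = (e_left_m - margin_m) <= target_x_m <= (e_right_m + margin_m)
    longitudinal_ok = 0.0 <= target_y_m <= max_lookahead_m
    return lateral_ok and longitudinal_ok

print(in_contact_determination_range(3.5, 15.0))  # False: pedestrian on the sidewalk
print(in_contact_determination_range(0.8, 15.0))  # True: (mis)tracked position on the roadway
```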
  • FIG. 6A(e) is a bird's-eye view illustrating the external environment of the host vehicle at time T1 , 200 ms after time T0 .
  • the driver notices that some abnormality (erroneous tracking) has occurred in the in-vehicle camera device 100 because automatic deceleration has started without any reason to brake the vehicle. Therefore, if the safety ahead (absence of a vehicle in front, absence of pedestrians, etc.) can be confirmed, the driver depresses the accelerator pedal in order to restore the decelerated speed to the pre-deceleration speed, thereby accelerating the own vehicle.
  • the vehicle approaches the area of the erroneously detected pedestrian, but since there are no pedestrians in the area (the actual pedestrian is on the right sidewalk), the vehicle can safely pass through the area.
  • when the vehicle control system 200 starts automatic control based on the tracking result of the vehicle-mounted camera device 100, if the driver performs a manual driving operation contrary to the automatic control and the vehicle does not come into contact with the tracked three-dimensional object, it can be determined that the vehicle-mounted camera device 100 has mistracked.
  • FIG. 7 is a flowchart to which the mechanism described in FIG. 6A is applied, and shows a method for storing images before and after the time when mistracking occurred in the in-vehicle camera device 100, and additionally learning a template image, a tracking network, etc. based on the stored images.
  • each step will be described below. The flowcharts of FIGS. 5 and 7 can be executed in parallel, and redundant description of points common to the flowchart of FIG. 5 will be omitted as necessary.
  • in step S1, the erroneous control determination unit 21 determines whether or not automatic control for collision prevention has operated, based on the data from the vehicle control system 200 obtained by the control feedback unit 23.
  • in step S2, the erroneous control determination unit 21 determines whether or not a manual driving operation contrary to the automatic control has been performed, based on data from the steering sensor 34 and the accelerator pedal sensor 35 obtained by the driver operation input unit 14. As described with reference to FIG. 6A, if the driver performs a manual operation contrary to the automatic control after the automatic control to prevent contact with the target being tracked has been activated, it is considered that the driver is performing the operation after confirming safety.
  • in step S3, the erroneous control determination unit 21 calculates the travel trajectory after automatic control (the trajectory of the vehicle's position) based on the outputs of the vehicle speed sensor 31, the vehicle steering angle sensor 32, and the yaw rate sensor 33, and stores it together with the driving operation.
  • in step S4a, the erroneous tracking identification unit 20 identifies the occurrence of erroneous tracking by determining whether or not the vehicle, under manual operation, traveled along a trajectory that could have avoided contact with the tracked target, assuming that the tracking result at the start of automatic control was correct.
  • in step S5a, the mistracking identification unit 20 reads images before and after the occurrence of mistracking (for example, images for one second before and after the automatic control start time) from the image storage unit 6 and stores them in the erroneous recognition image storage unit 22.
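  • a minimal sketch of extracting the one-second window around the automatic-control start time from the temporarily held images (the frame period and buffer interface follow the ring-buffer sketch above and are illustrative assumptions):
```python
def frames_around_event(buffer_get, newest_frames_ago, window_s=1.0, frame_period_s=0.2):
    """Collect images from `window_s` before to `window_s` after the event.

    `buffer_get(frames_ago)` returns the image captured that many frames before
    the newest one; `newest_frames_ago` is how many frames ago the automatic
    control started (the mistracking event).
    """
    half_window = int(round(window_s / frame_period_s))
    saved = []
    for offset in range(newest_frames_ago + half_window, newest_frames_ago - half_window - 1, -1):
        if offset >= 0:
            image = buffer_get(offset)
            if image is not None:
                saved.append(image)
    return saved

# Example with a toy buffer: offset 0 is the newest frame, the event happened 6 frames ago.
toy = {i: f"frame_-{i}" for i in range(20)}
print(frames_around_event(toy.get, newest_frames_ago=6))
# ['frame_-11', ..., 'frame_-1']: 5 frames before and after the event frame
```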
  • in step S6a, the additional learning unit 24 performs additional machine learning of template images, tracking networks, etc., using the images before and after the occurrence of mistracking, which were stored in the erroneous recognition image storage unit 22 in step S5a, as learning data.
  • in step S5a of FIG. 7, it is desirable, as in step S5 of FIG. 5, to store a label identifying the mode of mistracking together with the image.
  • by the same mechanism as described for FIG. 5, the additional learning in step S6a can be executed only for the template images, tracking networks, etc. that need improvement, so effects similar to those of FIG. 5 are obtained.
  • there is a possibility that the same object as the template, or the same object appearing in the images before the mistracking, exists near the location identified in step S4a as the point where mistracking occurred. Therefore, by enlarging the area around the tracking detection frame when storing the images before and after the mistracking, an image that includes the location where the same object appears can be stored, and the location that should have been tracked correctly can be appropriately learned during additional learning.
  • FIG. 8 is a bird's-eye view for explaining the behavior of the own vehicle at the time of misidentification and a method of determining erroneous control and misidentification that takes the movement of pedestrians into account, for the case where a target is identified at time T 0 and control of the own vehicle is activated.
  • the pedestrian prediction range shown in FIG. 8(c) is obtained by multiplying the elapsed time after the pedestrian is identified by the assumed moving speed of the pedestrian.
  • the moving speed of the pedestrian is a speed that is preset as a pedestrian-like speed, and may be assumed to be an average walking speed, for example, set to 5 km/h.
  • the pedestrian existence radius in the legend of FIG. 8 indicates the radius of the pedestrian prediction range, and is the length L obtained by multiplying the elapsed time T after the pedestrian is identified and the movement speed VP of the pedestrian.
  • at time T 3 , the pedestrian presence prediction range of the target identified at time T 0 is covered by the own vehicle, so it is determined that the target identified at time T 0 was erroneously identified.
  • the target at the time when the erroneous identification occurred is specified based on the degree of overlap between the pedestrian prediction range and the own-vehicle range for the detected/identified target at each time for which a collision was determined when the automatic control was performed.
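  • a minimal sketch of the pedestrian prediction range and its overlap with the own-vehicle range (the 5 km/h walking speed follows the text; the rectangular vehicle footprint and the use of a simple overlap test in place of full coverage are illustrative assumptions):
```python
def pedestrian_radius_m(elapsed_s, walking_speed_kmh=5.0):
    """Radius L of the pedestrian prediction range: elapsed time T multiplied
    by the assumed pedestrian movement speed V_P (5 km/h by default)."""
    return elapsed_s * walking_speed_kmh * 1000.0 / 3600.0

def vehicle_covers_prediction_range(ped_xy, elapsed_s,
                                    veh_x_range=(-0.9, 0.9), veh_y_range=(0.0, 4.5)):
    """Return True if the own-vehicle footprint overlaps the circular
    pedestrian prediction range centred on the identified position."""
    px, py = ped_xy
    r = pedestrian_radius_m(elapsed_s)
    # Distance from the circle centre to the closest point of the rectangle.
    dx = max(veh_x_range[0] - px, 0.0, px - veh_x_range[1])
    dy = max(veh_y_range[0] - py, 0.0, py - veh_y_range[1])
    return (dx * dx + dy * dy) ** 0.5 <= r

print(round(pedestrian_radius_m(0.6), 2))                # about 0.83 m after 600 ms
print(vehicle_covers_prediction_range((0.0, 2.0), 0.6))  # True: range lies on the vehicle path
print(vehicle_covers_prediction_range((4.0, 2.0), 0.6))  # False: pedestrian well to the side
```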
  • FIG. 9 is a flow chart for explaining a process of storing images before and after misidentification, taking into account the movement of pedestrians. Note that redundant description of the points in common with FIG. 5 is omitted.
  • Step S7 in FIG. 9 is a process performed after step S3, and is a process of estimating the predicted range of the pedestrian after automatic control.
  • the pedestrian presence radius is obtained from the elapsed time T after the pedestrian is identified and the pedestrian movement speed VP to specify the pedestrian presence range.
  • the subsequent step S4b is a process for determining whether a manual operation to avoid the identified target was performed; this is determined based on the travel trajectory verified in step S3 and the pedestrian prediction range estimated in step S7. When the own vehicle's travel path covers the pedestrian prediction range, it is determined that a manual operation to avoid the identified pedestrian target was not performed.
  • <<Effects of this embodiment>> As described above, it is possible to automatically extract and save, from among the various images captured in an actual driving environment, images before and after an occurrence of misrecognition of the external environment, which contributes to the improvement of a problematic AI network.
  • the in-vehicle camera device of the present embodiment uses the automatically extracted and saved images as learning data without cooperating with the outside, so that the problematic AI network can be made to perform additional learning, and the quality of the AI network can be improved.
  • the present invention is not limited to the above-described embodiments, and includes various modifications.
  • the above-described embodiments have been described in detail in order to explain the present invention in an easy-to-understand manner, and are not necessarily limited to those having all the described configurations.
  • it is possible to replace part of the configuration of one embodiment with the configuration of another embodiment and it is also possible to add the configuration of another embodiment to the configuration of one embodiment.
  • each of the above configurations, functions, processing units, processing means, etc. may be realized in hardware, for example, by designing a part or all of them with an integrated circuit.
  • each of the above configurations, functions, etc. may be realized in software by a processor interpreting and executing a program that implements each function.
  • Information such as programs, tables, and files that implement each function can be stored in a recording device such as a memory, a hard disk, an SSD (Solid State Drive), or a recording medium such as a semiconductor memory card.
  • control lines and information lines indicate what is considered necessary for explanation, and not all control lines and information lines are necessarily indicated on the product. In practice, it may be considered that almost all configurations are interconnected.
  • the configuration of the functional blocks is merely an example. Some functional configurations shown as separate functional blocks may be configured integrally, or a configuration represented by one functional block may be divided into two or more functions. Further, part of the functions of each functional block may be provided in another functional block.
  • SYMBOLS 100... Vehicle-mounted camera apparatus, 1L... Left imaging part, 1R... Right imaging part, 2... Stereo matching part, 3... Monocular detection part, 4... Monocular distance measurement part, 5... Template preparation part, 6... Image storage part, 7... Similar part search part, 8... Angle-of-view specification part, 9... Stereo detection part, 10... Speed calculation part, 11... Vehicle information input part, 12... Stereo distance measurement part, 13... Type identification part, 14... Driver operation input part, 15... Identification history storage part, 16... Tracking history storage unit, 17... Position history storage unit, 18... Speed history storage unit, 19... Incorrect identification identification unit, 20... Incorrect tracking identification unit, 21... Incorrect control identification unit, 22... Incorrect recognition image storage unit, 23... Control feedback unit, 24... Additional learning unit, 200... Vehicle control system
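
As referenced above, the overlap determination of FIG. 8 and steps S7/S4b can be summarised in a short sketch. The following Python code is a minimal, illustrative example only, not the patent's implementation: it assumes a 2D ground-plane coordinate system in metres, approximates the own-vehicle range by an axis-aligned rectangle, approximates the degree of overlap using the bounding box of the pedestrian prediction circle, and all function and parameter names (pedestrian_prediction_range, overlap_degree, coverage_threshold, and so on) are hypothetical.

```python
from dataclasses import dataclass

# Assumed average walking speed of 5 km/h, converted to m/s.
PEDESTRIAN_SPEED_VP_MPS = 5.0 * 1000.0 / 3600.0


@dataclass
class Circle:
    x: float       # pedestrian position when first identified (metres)
    y: float
    radius: float  # pedestrian presence radius L


@dataclass
class Rect:
    x_min: float   # own-vehicle range on the ground plane (axis-aligned, for simplicity)
    x_max: float
    y_min: float
    y_max: float


def pedestrian_prediction_range(px: float, py: float, elapsed_t: float,
                                vp: float = PEDESTRIAN_SPEED_VP_MPS) -> Circle:
    """Step S7 (sketch): presence radius L = elapsed time T x assumed pedestrian speed VP."""
    return Circle(x=px, y=py, radius=elapsed_t * vp)


def overlap_degree(vehicle: Rect, ped: Circle) -> float:
    """Approximate degree of overlap between the pedestrian prediction range and the
    own-vehicle range: intersection of the circle's bounding box with the vehicle
    rectangle, normalised by the bounding-box area (1.0 = fully covered)."""
    bx_min, bx_max = ped.x - ped.radius, ped.x + ped.radius
    by_min, by_max = ped.y - ped.radius, ped.y + ped.radius
    ix = max(0.0, min(vehicle.x_max, bx_max) - max(vehicle.x_min, bx_min))
    iy = max(0.0, min(vehicle.y_max, by_max) - max(vehicle.y_min, by_min))
    box_area = (bx_max - bx_min) * (by_max - by_min)
    if box_area == 0.0:  # zero elapsed time: the prediction range degenerates to a point
        inside = (vehicle.x_min <= ped.x <= vehicle.x_max and
                  vehicle.y_min <= ped.y <= vehicle.y_max)
        return 1.0 if inside else 0.0
    return (ix * iy) / box_area


def identification_was_erroneous(trajectory: list[tuple[float, Rect]],
                                 ped_x: float, ped_y: float,
                                 coverage_threshold: float = 1.0) -> bool:
    """Step S4b (sketch): for each verified point (elapsed time T, vehicle footprint) of
    the actual travel trajectory, estimate the pedestrian prediction range and check
    whether the own vehicle covered it; if so, no avoidance operation was performed and
    the original identification is judged erroneous."""
    for elapsed_t, footprint in trajectory:
        ped_range = pedestrian_prediction_range(ped_x, ped_y, elapsed_t)
        if overlap_degree(footprint, ped_range) >= coverage_threshold:
            return True
    return False
```

In an actual implementation the vehicle footprint would be taken from the travel trajectory verified in step S3, and the coverage threshold would be a calibration parameter; the sketch only shows how the radius L = T × VP and the degree of overlap enter the determination.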

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an in-vehicle camera device that automatically extracts and stores images from before and after an erroneous recognition of the external environment, from among images captured in an actual driving environment. The in-vehicle camera device for identifying objects comprises a control feedback unit for accepting feedback concerning automated control of a host vehicle, an erroneous control determination unit for determining erroneous automated control on the basis of driving operation information relating to a driver, and an image storage unit for storing images, the images being stored when erroneous automated control is determined.
PCT/JP2022/045873 2022-01-20 2022-12-13 Dispositif de caméra monté sur véhicule, système de caméra monté sur véhicule et procédé de stockage d'image WO2023139978A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE112022005228.7T DE112022005228T5 (de) 2022-01-20 2022-12-13 An einem fahrzeug montierte kameravorrichtung, an einem fahrzeug montiertes kamerasystem und bildspeicherverfahren

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022007132A JP2023106029A (ja) 2022-01-20 2022-01-20 車載カメラ装置、車載カメラシステム、および、画像保存方法
JP2022-007132 2022-01-20

Publications (1)

Publication Number Publication Date
WO2023139978A1 true WO2023139978A1 (fr) 2023-07-27

Family

ID=87348154

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/045873 WO2023139978A1 (fr) 2022-01-20 2022-12-13 Dispositif de caméra monté sur véhicule, système de caméra monté sur véhicule et procédé de stockage d'image

Country Status (3)

Country Link
JP (1) JP2023106029A (fr)
DE (1) DE112022005228T5 (fr)
WO (1) WO2023139978A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016136332A (ja) * 2015-01-23 2016-07-28 沖電気工業株式会社 情報処理装置、情報処理方法及び記憶媒体
JP2016210285A (ja) * 2015-05-08 2016-12-15 トヨタ自動車株式会社 誤認識判定装置
JP2017138282A (ja) * 2016-02-05 2017-08-10 トヨタ自動車株式会社 自動運転システム
JP2019096137A (ja) * 2017-11-24 2019-06-20 トヨタ自動車株式会社 信号機認識装置
JP2019159659A (ja) * 2018-03-12 2019-09-19 パナソニックIpマネジメント株式会社 情報処理装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020160513A (ja) 2019-03-25 2020-10-01 株式会社豊田中央研究所 学習データ生成装置および学習データ生成方法

Also Published As

Publication number Publication date
JP2023106029A (ja) 2023-08-01
DE112022005228T5 (de) 2024-08-29

Similar Documents

Publication Publication Date Title
CN106255899B (zh) 用于将对象用信号通知给配备有此装置的车辆的导航模块的装置
EP3366540B1 (fr) Appareil de traitement d'informations et support d'enregistrement lisible par ordinateur non transitoire
JP4755227B2 (ja) 物体を認識するための方法
WO2017171082A1 (fr) Dispositif de commande de véhicule et procédé de commande de véhicule
US11370420B2 (en) Vehicle control device, vehicle control method, and storage medium
US20190276049A1 (en) Driving assistance system
US12024161B2 (en) Vehicular control system
Zhang et al. A novel vehicle reversing speed control based on obstacle detection and sparse representation
JP2012106735A (ja) 分岐路進入判定装置
CN113838060A (zh) 用于自主车辆的感知系统
JP2019028653A (ja) 物体検出方法及び物体検出装置
CN109195849B (zh) 摄像装置
US20220234581A1 (en) Vehicle control method, vehicle control device, and vehicle control system including same
JP2006004188A (ja) 障害物認識方法及び障害物認識装置
WO2023139978A1 (fr) Dispositif de caméra monté sur véhicule, système de caméra monté sur véhicule et procédé de stockage d'image
US11893715B2 (en) Control device and control method for mobile object, storage medium, and vehicle
CN116242375A (zh) 一种基于多传感器的高精度电子地图生成方法和系统
CN115959109A (zh) 车辆控制装置、车辆控制方法及存储介质
JP2003121543A (ja) 車両用走行車線判断装置
JP2023110364A (ja) 物体追跡装置、物体追跡方法、およびプログラム
CN113875223A (zh) 外部环境识别装置
Wang et al. A monovision-based 3D pose estimation system for vehicle behavior prediction
JP7534889B2 (ja) 車外環境認識装置
US11938879B2 (en) Vehicle control device, information processing apparatus, operation methods thereof, and storage medium
JP2019172032A (ja) 自動制動装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22922129

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 112022005228

Country of ref document: DE