WO2023112127A1 - Image recognition device and image recognition method - Google Patents

Image recognition device and image recognition method

Info

Publication number
WO2023112127A1
WO2023112127A1 (PCT/JP2021/045985)
Authority
WO
WIPO (PCT)
Prior art keywords
detection
image recognition
area
distance
image
Prior art date
Application number
PCT/JP2021/045985
Other languages
English (en)
Japanese (ja)
Inventor
郭介 牛場
達夫 最首
Original Assignee
日立Astemo株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日立Astemo株式会社 filed Critical 日立Astemo株式会社
Priority to PCT/JP2021/045985 priority Critical patent/WO2023112127A1/fr
Publication of WO2023112127A1 publication Critical patent/WO2023112127A1/fr

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/16: Anti-collision systems

Definitions

  • the present invention relates to an image recognition device and an image recognition method for recognizing three-dimensional objects in an image captured by a camera.
  • An object detection unit 1 detects, from an image acquired from a camera 9, an object region in which an object exists; an object distance information calculation unit 3 calculates the variance of the distances representing the distance distribution of the object region; and an object identification unit 4 identifies the type of the object based on that variance of distances.
  • The object detection unit 1 is a means for detecting an object area from the image acquired by the camera 9.
  • This process detects a contiguous rigid-body area from the image.
  • Various means can be used. For example, when a stereo camera is used to acquire the images, the distance at each point on the image can be obtained from the parallax, and areas on the image that are adjacent and close to each other in distance can be grouped to obtain the object area.
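  • As a rough illustration only (not the patent's implementation; the thresholds and array layout are assumptions), the sketch below groups neighbouring pixels whose disparities are close into connected regions and returns one bounding box per region.

```python
import numpy as np
from collections import deque

def group_disparity(disp, min_disp=1.0, tol=2.0, min_pixels=20):
    """Group adjacent pixels with similar disparity into object regions.

    disp       : 2-D numpy array of disparities (values below min_disp are ignored)
    tol        : neighbouring pixels belong to the same region if their
                 disparities differ by no more than this value
    min_pixels : regions smaller than this are discarded
    Returns a list of bounding boxes (x_min, y_min, x_max, y_max).
    """
    h, w = disp.shape
    visited = np.zeros((h, w), dtype=bool)
    boxes = []
    for y0 in range(h):
        for x0 in range(w):
            if visited[y0, x0] or disp[y0, x0] < min_disp:
                continue
            # breadth-first flood fill over 4-connected neighbours
            queue = deque([(y0, x0)])
            visited[y0, x0] = True
            pixels = []
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and not visited[ny, nx]
                            and disp[ny, nx] >= min_disp
                            and abs(disp[ny, nx] - disp[y, x]) <= tol):
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            if len(pixels) >= min_pixels:
                ys, xs = zip(*pixels)
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```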
  • In Patent Document 1, in the case of a stereo camera that detects an object based on the parallax of images obtained from two cameras, the obtained parallax is grouped to detect an object in three-dimensional space.
  • However, conventional devices that detect objects from images may detect one object as multiple objects, or multiple objects as one object, when sufficient texture of the object cannot be obtained.
  • In addition, the threshold for extracting invalid areas where parallax cannot be obtained is not necessarily optimized for the application and the object to be detected. For this reason, for example, when the grouping threshold is set for detecting a person, an object larger than a person, such as a car, may be divided into a plurality of objects. Conversely, if the threshold is set for a large object such as a car, a small object such as a person or a roadside pole may be merged with surrounding objects.
  • The object of the present invention is to provide an image recognition apparatus and an image recognition method capable of appropriately detecting, as an independent single object, a three-dimensional object such as a vehicle or a pedestrian to be monitored by a driving support system or an automatic driving system.
  • To achieve this, the image recognition apparatus of the present invention includes: an area detection unit that detects a plurality of detection areas from an image; a temporary combination processing unit that temporarily combines the plurality of detection areas; a same object determination processing unit that determines whether the temporarily combined detection areas can be identified as one object; and a detection area integration processing unit that, based on the determination result, determines the combination of detection areas to be recognized as one object and reflects it in the object detection result.
  • According to the present invention, three-dimensional objects such as vehicles and pedestrians to be monitored by driving support systems, automatic driving systems, and the like can be appropriately detected as independent single objects.
  • FIG. 1 is a block diagram showing the overall configuration of an image recognition apparatus according to the first embodiment.
  • FIG. 4 is a flow chart showing the operation of the image recognition apparatus according to the first embodiment.
  • FIG. 3 is a flowchart showing the processing operations of the detection processing in FIG. 2.
  • An example of temporary combination processing by the image recognition apparatus according to the first embodiment.
  • An example of a detection area set by the image recognition apparatus of the first embodiment.
  • An example of a small area set by the image recognition apparatus of the first embodiment.
  • An example of a suitable method for extracting temporary combined areas.
  • FIG. 9 is a flow chart showing the processing operations of the image recognition apparatus of the second embodiment.
  • An image recognition device 1 according to Example 1 of the present invention will be described with reference to FIGS. 1 to 15.
  • FIG. 1 is a block diagram showing the overall configuration of the image recognition device 1 of this embodiment.
  • This image recognition device 1 is a stereo camera mounted on a vehicle (hereinafter referred to as the "own vehicle V0") and has the environment recognition function of a driving support system or an automatic driving system.
  • the environment recognition function is a function of recognizing three-dimensional objects such as pedestrians, vehicles, traffic lights, signs, white lines, car tail lights, and headlights from the captured image in front of the vehicle.
  • The driving support system, automatic driving system, etc. of the own vehicle V0 can control the brakes, steering, and the like based on the recognition results of the image recognition device 1, and can execute driving support control and automatic driving control according to the environment outside the vehicle.
  • The image recognition apparatus 1 of this embodiment includes a left camera 11, a right camera 12, an image input interface 13, an image processing unit 14, an arithmetic processing unit 15, a storage unit 16, a CAN interface 17, and a control processing unit 18.
  • the image recognition device 1 is also connected to a steering system, a driving system, a braking system, etc. of the host vehicle V0 (not shown) via a CAN (Controller Area Network).
  • The portions other than the left camera 11 and the right camera 12 are implemented by a computer equipped with hardware such as an arithmetic device (for example, a CPU), a storage device (for example, a semiconductor memory), and a communication device, or by hard-wired logic circuits. The arithmetic device executes a program loaded into the storage device, or the hard-wired logic executes its built-in control, thereby realizing each function of the image processing unit 14, the arithmetic processing unit 15, and the other units described later.
  • each part of the image recognition apparatus 1 will be described in detail while omitting such well-known techniques as appropriate.
  • the left camera 11 is a camera that captures a left image P1 using an image sensor
  • the right camera 12 is a camera that captures a right image P2 using an image sensor.
  • The left camera 11 and the right camera 12 are installed, for example, on the upper part of the inner surface of the windshield so that they can capture the area in front of the vehicle V0, and capture the image P (the left image P1 and the right image P2).
  • the image input interface 13 takes in the left image P1 captured by the left camera 11 and the right image P2 captured by the right camera 12.
  • the internal bus 19 is a bus that relays communication among the image input interface 13, the image processing unit 14, the arithmetic processing unit 15, the storage unit 16, the CAN interface 17, and the control processing unit 18.
  • the captured image P (P1, P2) is transmitted to the image processing unit 14 and the arithmetic processing unit 15, and the outputs of both processing units are transmitted to the storage unit 16.
  • The image processing unit 14 compares the left image P1 and the right image P2 captured via the image input interface 13, performs image correction on each image P to correct device-specific deviations caused by the image sensors and to interpolate noise, and stores the corrected images in the storage unit 16. Further, the image processing unit 14 finds mutually corresponding portions between the left image P1 and the right image P2 to obtain parallax information, and stores this in the storage unit 16 as distance information corresponding to each pixel on the image. In the following, it is assumed that the various types of processing are basically performed using distance information, but they may instead be performed using parallax information.
  • The arithmetic processing unit 15 uses the image information and distance information (parallax information) stored in the storage unit 16 to recognize three-dimensional objects in order to grasp the environment around the vehicle, and stores the recognition results and intermediate processing results in the storage unit 16. After recognizing a three-dimensional object in the image P, the arithmetic processing unit 15 uses the recognition result to perform calculations for vehicle control. The vehicle control policy obtained as a result of the vehicle control calculation and part of the recognition result are transmitted to the in-vehicle network CAN via the CAN interface 17, thereby controlling the vehicle.
  • the control processing unit 18 prevents abnormal operations by monitoring whether each processing unit is operating abnormally or whether an error has occurred during data transfer.
  • Step S1 Image processing
  • In step S1, the image processing unit 14 executes image processing. Specifically, each of the left image P1 and the right image P2 captured via the image input interface 13 is subjected to processing such as correction for absorbing the inherent characteristics of the imaging device, and the corrected left image P1' and right image P2' are output.
  • This corrected image P′ (left image P1′, right image P2′) is stored in the image buffer B1 in the storage unit 16.
  • Step S2 Parallax processing
  • In step S2, the image processing unit 14 executes parallax processing. Specifically, the corrected left and right images P' (P1', P2') from step S1 are compared with each other to obtain parallax information. From the parallax between the left and right images, the distance to a point of interest on a three-dimensional object can be obtained by the principle of triangulation. This processing result (parallax information, distance information) is stored in the parallax buffer B2 in the storage unit 16.
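  • The parallax-to-distance relationship used here is standard stereo triangulation. A minimal sketch follows; the focal length, baseline and disparity values are placeholders, not values from the patent.

```python
def disparity_to_distance(disparity_px, focal_px, baseline_m):
    """Depth by triangulation: Z = f * B / d.

    disparity_px : disparity of a matched point, in pixels
    focal_px     : focal length of the rectified cameras, in pixels
    baseline_m   : distance between the left and right cameras, in metres
    """
    if disparity_px <= 0:
        return float("inf")  # no valid match, treat as infinitely far
    return focal_px * baseline_m / disparity_px

# Example with placeholder values: f = 1400 px, B = 0.35 m, d = 61.25 px -> 8.0 m
print(disparity_to_distance(61.25, 1400.0, 0.35))
```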
  • Step S3 Detection processing
  • the arithmetic processing unit 15 uses the parallax information obtained in step S2 to detect a three-dimensional object in the three-dimensional space.
  • This detection result (three-dimensional object detection area) is stored in the detection area buffer B3 in the storage unit 16.
  • FIG. 3 is a diagram exemplifying three-dimensional object detection regions R1 to R4 provisionally set on the image after the processing in step S2.
  • the right direction of the image is defined as the positive direction of the X-axis
  • the upward direction is defined as the positive direction of the Y-axis.
  • the shape of the detection area on the image may be an irregular shape, but hereinafter, the detection area is assumed to be rectangular in order to facilitate the calculation of the temporary combination processing described later.
  • The detection area R1 correctly detects a pedestrian as a single object, and the detection area R2 correctly detects a preceding vehicle as a single object.
  • the detection area R3 erroneously detects the front part of the vehicle V1, which is a single object, as an independent object
  • the detection area R4 erroneously detects the rear part of the vehicle V1, which is a single object, as an independent object.
  • The reason why the single object, vehicle V1, is divided into the detection area R3 and the detection area R4 and erroneously detected is, for example, as follows. (1) If sufficient texture information is not available for the side surface of the vehicle V1, parallax cannot be obtained at that portion, so the side surface of the vehicle V1 cannot be detected as one piece and may be detected separately. (2) If the three-dimensional object detection parameters are set inappropriately, erroneous division may occur even when there is sufficient texture information on the side of the vehicle V1; for example, when a detection parameter is set such that a portion of the side surface having a different distance in the depth direction is detected as a separate three-dimensional object, or when a detection parameter intended for detecting pedestrians is used to detect the vehicle V1. As a result, the front and rear parts of the vehicle V1 are each erroneously detected as independent objects.
  • In step S3, the detection area R3 and the detection area R4, which have been detected separately, are combined into a single three-dimensional object detection area; the details of this combination process are described later.
  • Step S4 Recognition processing
  • the arithmetic processing unit 15 performs recognition processing for identifying the type of the three-dimensional object on the detection area set on the image in step S3.
  • This recognition processing is performed using the image information recorded in the image buffer B1 and the parallax information recorded in the parallax buffer B2. The same applies when a radar, such as a millimeter-wave radar, is combined with an image sensor such as a camera instead of the stereo camera.
  • Step S5 Vehicle Control Processing
  • In step S5, the arithmetic processing unit 15 considers the three-dimensional object recognition result from step S4 and the state of the vehicle V0 (speed, steering angle, etc.), and, for example, issues a warning to the occupant or controls the braking and steering angle of the vehicle V0. Alternatively, avoidance control for the recognized three-dimensional object is determined, and the result is output via the CAN interface 17 as automatic control information. Thereby, the desired driving support system, automatic driving system, or the like can be realized.
  • <Details of step S3> Next, the details of step S3 executed by the arithmetic processing unit 15 will be described with reference to the flowchart.
  • Step S3a Area detection processing
  • the arithmetic processing unit 15 detects the detection area of the three-dimensional object using the parallax information obtained in step S2, and stores the detected detection area in the detection area buffer B3.
  • a detection area as illustrated in FIG. 3 is provisionally set on the image.
  • At this stage, a single object may still be erroneously detected as multiple three-dimensional objects (see detection areas R3 and R4 in FIG. 3); the subsequent processing allows such an object to be accurately detected as a single object.
  • Step S3b Temporary combination processing
  • the arithmetic processing unit 15 creates a rectangular temporary combined area C by combining the plurality of detection areas R stored in the detection area buffer B3.
  • Specifically, a rectangle that circumscribes (bounds) the plurality of rectangular detection areas is created as one temporary combined area C.
  • As a result of the processing in step S3a, many detection areas may be set on the image. However, it is not realistic to perform the temporary combination processing for every combination of detection areas, because the computational cost, together with that of the subsequent processing, becomes huge. Therefore, in this step, in order to reduce the load of this combination processing, the temporary combination processing is executed in the following procedure.
  • First, two detection areas R that are adjacent on the image are used to create a temporary combined area C.
  • Here, being adjacent on the image refers to the relationship between detection areas whose lateral positions (X positions) are closest to each other.
  • In this example, the adjacent detection areas are specified based on the X position, but they may instead be specified based on the Y position, or both the X and Y positions may be used as references.
  • In this way, a temporary combined area C1 indicated by a dashed line is created.
  • Similarly, a temporary combined area C2 indicated by a dashed line is created.
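  • One way to picture this pairing (an illustrative sketch under the rectangle assumption above; the nearest-neighbour rule and data structures are not taken from the patent) is to combine each detection area with the area whose horizontal centre is closest and to take the rectangle bounding both as the temporary combined area.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x_min: int
    y_min: int
    x_max: int
    y_max: int

    @property
    def x_center(self) -> float:
        return (self.x_min + self.x_max) / 2.0

def bounding_union(a: Rect, b: Rect) -> Rect:
    """Smallest rectangle containing both a and b (the temporary combined area)."""
    return Rect(min(a.x_min, b.x_min), min(a.y_min, b.y_min),
                max(a.x_max, b.x_max), max(a.y_max, b.y_max))

def pair_laterally_adjacent(detections: list[Rect]) -> list[tuple[Rect, Rect, Rect]]:
    """Combine each detection area with its nearest neighbour in X position."""
    pairs = []
    for i, r in enumerate(detections):
        others = [d for j, d in enumerate(detections) if j != i]
        if not others:
            continue
        nearest = min(others, key=lambda d: abs(d.x_center - r.x_center))
        pairs.append((r, nearest, bounding_union(r, nearest)))
    return pairs
```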
  • Not only are detection areas R temporarily combined with each other; a detection area R and a temporary combined area C may also be further temporarily combined.
  • For the temporary combined area C1, the detection area R3 is the closest detection area in lateral position (X position). Therefore, the temporary combined area C1 and the adjacent detection area R3 may be temporarily combined to form a temporary combined area C3.
  • The temporary combined area C3 in FIG. 6 is an area including the front portion of the vehicle on the left side and the front portion of the vehicle on the right side.
  • FIG. 7 shows an example in which such a temporary combination is required.
  • the detection area is set so that an object of standard size can be detected.
  • Detection areas R1, R2, and R3 in FIG. 7 are areas in which parts of a large truck are each erroneously detected as a single detection area.
  • The temporary combined area formed from the detection areas R1 and R2, and the temporary combined area formed from the detection areas R2 and R3, do not by themselves form a frame that correctly captures the large truck. Therefore, in such a case, by further combining the remaining detection area with the temporary combined area of R1 and R2 or with that of R2 and R3, it is possible to obtain a temporary combined area C1 that forms a single detection area covering the whole truck, as sketched below.
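  • A sketch of that growth step follows (assumptions: rectangles as (x_min, y_min, x_max, y_max) tuples and a caller-supplied predicate that decides whether a remaining detection area may join, for example the depth criterion discussed below).

```python
def union(a, b):
    """Bounding rectangle of two (x_min, y_min, x_max, y_max) rectangles."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def grow_combined_area(seed, remaining, may_join):
    """Absorb remaining detection areas into the temporary combined area.

    seed      : initial temporary combined area (e.g. the union of R1 and R2)
    remaining : detection rectangles not yet part of the combined area
    may_join  : callable(combined, candidate) -> bool deciding whether the
                candidate should be merged (distance / adjacency criterion)
    """
    combined = seed
    pool = list(remaining)
    changed = True
    while changed:
        changed = False
        for cand in list(pool):
            if may_join(combined, cand):
                combined = union(combined, cand)
                pool.remove(cand)
                changed = True
    return combined, pool

# Example: three parts of one truck merged into a single frame
r1, r2, r3 = (10, 40, 60, 90), (55, 35, 120, 95), (115, 38, 180, 92)
area, leftover = grow_combined_area(union(r1, r2), [r3], lambda c, d: True)
print(area)  # (10, 35, 180, 95)
```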
  • FIG. 8 shows an example of detection on a public road.
  • the detection regions R1 to R5 each have a depth distance as illustrated.
  • In this case, the detection areas for which a temporary combined area C is created are limited to, for example, those whose depth distances differ from each other by no more than 10%.
  • If the detection areas also have information on the lateral distance and the Euclidean distance in real space, that information may be used to avoid temporarily combining detection areas whose lateral distances or Euclidean distances differ greatly. As a result, for example, it is possible to prevent the generation of a temporary combined area that includes both the detection area R2 and the detection area R3 in FIG. 5, whose lateral distances differ significantly.
  • The threshold value for creating the temporary combined area is not limited to 10%; it may be a fixed value, or it may be determined dynamically in consideration of the distance accuracy of the sensor. The distance-based condition may also be combined with the condition of being adjacent on the image, which makes it possible to reduce the computational cost.
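  • A minimal sketch of that candidate filter follows; the 10% figure is the example mentioned above, and the relative-difference formula is an assumption.

```python
def close_in_depth(dist_a_m, dist_b_m, rel_tol=0.10):
    """Allow temporary combination only if depth distances differ by <= rel_tol."""
    if dist_a_m <= 0 or dist_b_m <= 0:
        return False
    return abs(dist_a_m - dist_b_m) / max(dist_a_m, dist_b_m) <= rel_tol

# Example: 8 m and 10 m differ by 20% -> not combined; 9.5 m and 10 m -> combined
print(close_in_depth(8.0, 10.0))   # False
print(close_in_depth(9.5, 10.0))   # True
```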
  • In step S3a, when an object in three-dimensional space is assumed to be a cylinder or a cube and detection is performed with the extent in the depth direction restricted, the detection area may be divided.
  • Next, the combination method is described. As shown in FIG. 9, only one side of a large vehicle (bus, truck, etc.) present at the edge of the angle of view is imaged; since such a vehicle is long in the depth direction, it may be detected in step S3a as a plurality of separate detection areas R1 and R2.
  • Division in the depth direction is necessary in order to separately detect a plurality of objects that are adjacent to each other on the image, such as when vehicles are stopped in a row in the oncoming lane; in the environment shown in FIG. 9, however, the detection areas R1 and R2 need to be combined.
  • In this case, the determination is made by focusing on the distances of pixels or small areas within the detection area.
  • Here, small areas formed in the vertical direction, each having a single piece of distance information obtained along the vertical direction, are used.
  • The small areas R1a to R1d are obtained by dividing the detection area R1 into four areas, each of which has its own depth distance; the average of these depth distances gives the depth distance of the detection area R1, which is 8 m.
  • The small areas R2a to R2c are obtained by dividing the detection area R2 into three areas, each of which has its own depth distance; the average of these gives the depth distance of the detection area R2, which is 10 m.
  • In other words, the detection area R1 is at a depth distance of 8 m and the detection area R2 is at a depth distance of 10 m.
  • However, the small areas R1d and R2a are adjacent to each other on the image and have close depth distances according to the criterion illustrated in the figure.
  • Therefore, a temporary combined area C1 is created using the detection areas R1 and R2.
  • In the above, the small areas were created in the vertical direction; this is because the distance in the depth direction can be obtained stably by averaging the distances of the pixels in the vertical direction.
  • the method of defining the small areas is not limited to this.
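  • As one hedged illustration of vertically averaged small areas (assuming the per-pixel depths inside a detection area are available as a 2-D array and the columns are grouped into a fixed number of strips):

```python
import numpy as np

def column_strip_depths(depth_map, n_strips):
    """Average valid depths over each vertical strip of a detection area.

    depth_map : 2-D array of per-pixel depths inside the detection area
                (NaN where no parallax / distance is available)
    n_strips  : number of vertical small areas to divide the area into
    Returns one average depth per strip (NaN if a strip has no valid pixels).
    """
    strips = np.array_split(depth_map, n_strips, axis=1)  # split by columns
    return [float(np.nanmean(s)) if np.isfinite(s).any() else float("nan")
            for s in strips]

# Example: a detection area whose four strips average roughly 7.4 m to 8.6 m
row = np.repeat(np.array([7.4, 7.8, 8.2, 8.6]), 5)  # 20 columns, 4 depth levels
demo = np.tile(row, (20, 1))                         # 20 rows
print(column_strip_depths(demo, 4))                  # [7.4, 7.8, 8.2, 8.6]
```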
  • FIG. 11 exemplifies the camera angle of view of the own vehicle V0 and the corresponding image P.
  • the vehicles V1 to V4 in the oncoming lane appear as a row of vehicles in the captured image P.
  • When the objects (vehicles V1 to V4) are imaged overlapping one another in this manner, only the visible portions, such as the detection areas R1 and R2, are acquired as detection areas in step S3a.
  • the detection regions R1 and R2 detected at this time always include the side regions of the vehicles V2 and V3.
  • FIG. 12 is a conceptual diagram exemplifying a small area set inside the detection area R1 of FIG. 11 and a small area set inside the detection area R2. Although only the small regions R1a and R2a are shown in FIG. 12, a plurality of small regions are set within each detection region.
  • FIG. 13 plots the horizontal position on the image of each small area in FIG. 12 and the depth distance.
  • The relationship between the lateral position on the image P and the depth distance when this group of vehicles (vehicles V1 to V4) is detected is expressed as shown in the lower graph of FIG. 13.
  • Here, the tilt (the slope of the depth distance with respect to the lateral position) is calculated using a given small area and the small areas adjacent to it on the left and right.
  • In step S3b, if the tilts are the same, temporary combination is performed on the assumption that the areas capture parts of the same object. Further, since the position in the depth direction, and therefore the tilt obtained from it, varies with the accuracy of the sensor, the areas may also be determined to belong to the same object if the difference in tilt is equal to or less than a threshold. In the above description, three points are used as small areas, but this number also depends on the sensor accuracy and is not limited to the above.
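  • A sketch of the tilt comparison follows (assumptions: each small area is summarised as a (lateral position, depth) point, the tilt is a least-squares slope over each triple of points, and the threshold is arbitrary).

```python
def slope(points):
    """Least-squares slope of depth vs. lateral image position for a few points."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_z = sum(z for _, z in points) / n
    num = sum((x - mean_x) * (z - mean_z) for x, z in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    return num / den

def same_object_by_tilt(left_triple, right_triple, tol=0.02):
    """Treat two neighbouring detection areas as one object if the depth-vs-x
    slopes of their boundary small areas agree within tol (metres per pixel)."""
    return abs(slope(left_triple) - slope(right_triple)) <= tol

# Example: a continuous vehicle side gives nearly the same slope on both sides
left = [(100, 7.6), (110, 7.8), (120, 8.0)]
right = [(130, 8.2), (140, 8.4), (150, 8.6)]
print(same_object_by_tilt(left, right))  # True (both slopes are 0.02 m/px)
```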
  • Step S3c Same object determination processing
  • In step S3c, the arithmetic processing unit 15 determines whether or not each temporary combined area C created in step S3b corresponds to one and the same object. How this determination is made depends on the target to be determined and the required accuracy. For example, information that is not computed during object detection can be used: when a continuous series of horizontal or oblique edges connects the areas, they are determined to be the same object. The colors of the small areas may also be compared, and if the difference is less than a certain value, they are determined to be the same object. Moreover, when the object to be detected is limited, such as to vehicles, a more accurate method can be used.
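  • As one hedged illustration of the colour check mentioned here (mean-colour comparison of two small areas; the distance metric and the threshold are assumptions):

```python
import numpy as np

def similar_colour(patch_a, patch_b, max_diff=30.0):
    """Compare the mean colours of two image patches (H x W x 3, values 0-255).

    Returns True if the Euclidean distance between the mean RGB values is
    below max_diff, supporting treating the patches as parts of one object.
    """
    mean_a = patch_a.reshape(-1, 3).mean(axis=0)
    mean_b = patch_b.reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(mean_a - mean_b)) < max_diff

# Example: two patches of a grey vehicle body differ only slightly in brightness
a = np.full((8, 8, 3), 120, dtype=np.float64)
b = np.full((8, 8, 3), 128, dtype=np.float64)
print(similar_colour(a, b))  # True (difference is about 13.9)
```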
  • FIG. 14 shows an example of pattern matching with model patterns.
  • A temporary combined area C1 produced by the temporary combination processing is obtained in a form that contains the vehicle on the image.
  • The similarity between the image area corresponding to this temporary combined area C1 and a predetermined model pattern M is computed, and if the similarity is equal to or higher than a certain level, the area is determined to be a single object.
  • The degree of similarity is calculated, for example, by performing edge extraction in the temporary combined area C1 and then computing the normalized correlation with the model pattern.
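  • A minimal sketch of such a similarity score (a gradient-magnitude image as a stand-in for the edge extraction, zero-mean normalised cross-correlation, and an arbitrary 0.6 threshold; the model pattern here is a placeholder):

```python
import numpy as np

def edge_magnitude(img):
    """Simple gradient-magnitude edge image (a stand-in for real edge extraction)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def normalized_correlation(a, b):
    """Zero-mean normalised cross-correlation of two same-sized edge images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def is_same_object(candidate_img, model_pattern, threshold=0.6):
    """Accept the temporary combined area if its edge image matches the model."""
    # Resizing is omitted: the candidate is assumed already warped to the model size.
    score = normalized_correlation(edge_magnitude(candidate_img),
                                   edge_magnitude(model_pattern))
    return score >= threshold, score

# Example with a toy pattern matched against itself (score is 1.0)
pattern = np.zeros((16, 32)); pattern[4:12, 6:26] = 1.0
print(is_same_object(pattern, pattern))  # (True, ~1.0)
```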
  • The same-object determination method is not limited to the above, and may be chosen arbitrarily for objects that tend to be divided and detected separately by the area detection processing in step S3a.
  • Step S3d Detection area integration processing
  • In step S3d, the arithmetic processing unit 15 updates the detection area buffer B3 with the temporary combined areas that have passed the temporary combination processing in step S3b and the same object determination processing in step S3c.
  • The areas to be recorded are selected based on the computational cost required for the subsequent processing and on the accuracy of each process. For example, if the accuracy of the same object determination processing is high, one may choose not to store the detection areas on which the temporary combined areas are based. In addition, overlapping detection areas on the image may be obtained by such combination processing; an example follows.
  • Temporary combined areas C1 and C2 are obtained for two vehicles facing sideways, and a temporary combined area C3 is also obtained that straddles the two vehicles.
  • In such a case, the temporary combined area C3 can be rejected by using the degree of similarity obtained in the same object determination processing in step S3c. For example, the similarity between the temporary combined area C1 and the model pattern M is compared with the similarity between the temporary combined area C3 and the model pattern M; the temporary combined area C3, whose similarity to the model pattern M is relatively low, is rejected, and the temporary combined area C1, whose similarity is relatively high, is saved in the updated detection area buffer B3.
  • Similarly, the similarity between the temporary combined area C2 and the model pattern M is compared with the similarity between the temporary combined area C3 and the model pattern M, and the temporary combined area C3, whose similarity to the model pattern M is relatively low, is rejected.
  • The temporary combined area C2, whose similarity is relatively high, is saved in the updated detection area buffer B3.
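  • A sketch of that rejection step follows: a simple greedy suppression over overlapping areas that keeps the higher model-pattern similarity. The overlap measure (intersection over union) and its threshold are assumptions, not taken from the patent.

```python
def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) rectangles."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def reject_overlapping(candidates, overlap_thr=0.3):
    """Keep temporary combined areas greedily by similarity, dropping overlaps.

    candidates : list of (rect, similarity_to_model) tuples
    """
    kept = []
    for rect, sim in sorted(candidates, key=lambda c: c[1], reverse=True):
        if all(iou(rect, k[0]) < overlap_thr for k in kept):
            kept.append((rect, sim))
    return kept

# Example: C1 and C2 each fit one vehicle well; C3 straddles both and is rejected
c1 = ((10, 50, 110, 120), 0.82)
c2 = ((130, 48, 230, 118), 0.79)
c3 = ((40, 49, 200, 119), 0.41)
print([sim for _, sim in reject_overlapping([c1, c2, c3])])  # [0.82, 0.79]
```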
  • As described above, according to this embodiment, three-dimensional objects such as vehicles and pedestrians to be monitored by a driving support system, an automatic driving system, or the like can be appropriately detected as independent single objects.
  • Next, an image recognition device 1' according to Example 2 of the present invention will be described. Duplicate descriptions of points in common with the first embodiment are omitted.
  • In the first embodiment, a stereo camera composed of the left camera 11 and the right camera 12 is used to detect three-dimensional objects.
  • In this embodiment, an optical camera 11' and a radar sensor 12' are used instead to detect three-dimensional objects. The processing operation of this embodiment is described below.
  • In step S1, the image P captured by the optical camera 11' is subjected to image processing such as correction for absorbing the inherent characteristics of the imaging device.
  • The result of this image processing is stored in the image buffer B1. In addition, the distance to the three-dimensional object is obtained by the radar sensor 12'.
  • In step S3, a three-dimensional object in three-dimensional space is detected based on the distance to the object.
  • The distance information used for detection is stored in the distance buffer B4. Further, in the detection processing in step S3, the image and the distance are associated with each other as required by the subsequent processing.
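  • The patent text here does not spell out how the image and the radar distance are associated. As one hedged illustration, a radar target given by range and azimuth can be projected into the image with a pinhole model so that a detection area can be placed around it; the camera alignment, intrinsics and flat-road assumption below are all assumptions.

```python
import math

def project_radar_target(range_m, azimuth_deg, fx, fy, cx, cy, cam_height_m=1.2):
    """Project a radar target (range, azimuth) onto the image plane.

    Assumes the camera looks straight ahead from height cam_height_m with its
    optical axis aligned with the vehicle's longitudinal axis, and that the
    target's ground contact point lies on a flat road. Returns the pixel (u, v)
    of that contact point and the forward distance.
    """
    x_fwd = range_m * math.cos(math.radians(azimuth_deg))  # forward distance
    y_lat = range_m * math.sin(math.radians(azimuth_deg))  # lateral offset
    u = cx + fx * (y_lat / x_fwd)          # column from the lateral angle
    v = cy + fy * (cam_height_m / x_fwd)   # row of the ground contact point
    return u, v, x_fwd

# Example with placeholder intrinsics: a target 20 m ahead, 5 degrees to the right
print(project_radar_target(20.0, 5.0, fx=1400, fy=1400, cx=960, cy=540))
```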
  • In step S4, recognition processing is performed to identify the type of the three-dimensional object for each detection area set by the detection processing in step S3.
  • In step S3 of the present embodiment, since the distance to the three-dimensional object output from the radar sensor 12' is used as an input, the detection processing must take into account the sensor characteristics of the radar sensor 12' used for distance measurement.
  • The processing after the detection areas have been determined can be performed in the same manner as in the stereo camera configuration described for the image recognition apparatus 1.
  • Reference Signs List: 1, 1' ... image recognition device; 11, 12 ... camera; 13 ... image input interface; 14 ... image processing unit; 15 ... arithmetic processing unit; 16 ... storage unit; 17 ... CAN interface; 18 ... control processing unit; 19 ... internal bus; CAN ... in-vehicle network; B1 ... image buffer; B2 ... parallax buffer; B3 ... detection area buffer; B4 ... distance buffer; V ... vehicle; R ... detection area; C ... temporary combined area

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention addresses the problem that targets which should be regarded as the same object in three-dimensional space may be detected as a plurality of objects due to the influence of the external environment, the characteristics of the targets, restrictions on the detection parameters, and so on. Provided is an image recognition device comprising: an area detection unit that detects a plurality of detection areas from an image; a temporary combination processing unit that temporarily combines the plurality of detection areas; a same object determination processing unit that determines whether the plurality of temporarily combined detection areas can be identified as a single object; and a detection area integration processing unit that, based on the result of the determination by the same object determination processing unit, determines the combination of detection areas, among the plurality of detection areas, that should be recognized as a single object, and reflects that determination in the object detection result.
PCT/JP2021/045985 2021-12-14 2021-12-14 Dispositif de reconnaissance d'image et procédé de reconnaissance d'image WO2023112127A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/045985 WO2023112127A1 (fr) 2021-12-14 2021-12-14 Dispositif de reconnaissance d'image et procédé de reconnaissance d'image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/045985 WO2023112127A1 (fr) 2021-12-14 2021-12-14 Dispositif de reconnaissance d'image et procédé de reconnaissance d'image

Publications (1)

Publication Number Publication Date
WO2023112127A1 true WO2023112127A1 (fr) 2023-06-22

Family

ID=86774096

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/045985 WO2023112127A1 (fr) 2021-12-14 2021-12-14 Dispositif de reconnaissance d'image et procédé de reconnaissance d'image

Country Status (1)

Country Link
WO (1) WO2023112127A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012120856A1 (fr) * 2011-03-10 2012-09-13 パナソニック株式会社 Object detection device and object detection method
JP2021018605A (ja) * 2019-07-19 2021-02-15 株式会社Subaru Image processing device
JP2021081789A (ja) * 2019-11-14 2021-05-27 日立Astemo株式会社 Object identification device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21968045

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023567317

Country of ref document: JP

Kind code of ref document: A