WO2021084915A1 - Image recognition device - Google Patents
Image recognition device
- Publication number
- WO2021084915A1 (PCT/JP2020/033886)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- information
- dimensional object
- processing unit
- parallax
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/18—Extraction of features or characteristics of the image
- G06V30/1801—Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes or intersections
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
Definitions
- The present invention relates to an image recognition device.
- Patent Document 1 proposes a recognition device that, in a situation where an apparently moving three-dimensional object overlaps another three-dimensional object, detects a moving three-dimensional object such as a pedestrian existing inside a predetermined region containing the three-dimensional object by tracking feature points inside that region.
- Patent Document 2 proposes a method using machine learning, and also proposes to perform recognition by combining an image taken by an optical camera with distance information obtained from stereo matching or radar.
- In these methods, the texture information captured by an optical camera is used to recognize an object, so erroneous recognition occurs for a photograph drawn on a wall or a signboard, or for a silhouette resembling the recognition target generated by a combination of natural objects. This is because, when the recognition process is performed using the image of the optical camera together with the distance image corresponding to that image, the information on the pixels, the distances, and the areas in which they are grouped becomes too large to handle at a realistic cost.
- The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image recognition device capable of accurately detecting a three-dimensional object and improving recognition performance while suppressing an increase in cost.
- The image recognition device of the present invention that solves the above-mentioned problems is an image recognition device that recognizes a three-dimensional object on an image captured by an imaging unit, wherein, with respect to a detection area of the three-dimensional object set on the image, the distance information or the parallax information of the three-dimensional object is numerically converted, and the numerically converted distance information or parallax information is combined with the image information of the image to perform a recognition process for specifying the type of the three-dimensional object.
- According to the present invention, it is possible to provide an image recognition device capable of accurately detecting a three-dimensional object and improving recognition performance while suppressing an increase in cost.
- FIG. 8 is a block diagram showing the functional block configuration (Example 3) of the image recognition device involved in the three-dimensional object recognition process.
- FIG. 9 is a flowchart showing the details (Example 3) of the three-dimensional object recognition process.
- FIG. 10 is a schematic diagram showing the procedure for creating a background-removed edge image, in which background edges are removed from the luminance image using weight information.
- FIG. 11 is a flowchart showing the operation of the image recognition device of another example.
- FIG. 1 is a block diagram showing an overall configuration of an image recognition device 100 according to the present embodiment.
- The image recognition device 100 is mounted on a vehicle (hereinafter sometimes referred to as the own vehicle) and includes a left camera (imaging unit) 101 and a right camera (imaging unit) 102 (hereinafter simply referred to as cameras 101 and 102) arranged side by side at the front of the vehicle.
- The cameras 101 and 102 constitute a stereo camera and capture images of three-dimensional objects in front of the vehicle, such as a pedestrian, a vehicle, a traffic signal, a sign, a white line, a tail lamp of a vehicle, and a headlight.
- the image recognition device 100 includes a processing device 110 that recognizes the outside environment of the vehicle based on the information (image information) of the image in front of the vehicle captured by the cameras 101 and 102. Then, the vehicle (own vehicle) controls the brake, steering, and the like based on the recognition result by the image recognition device 100.
- The processing device 110 of the image recognition device 100 takes in the images captured by the cameras 101 and 102 through the image input interface 103.
- The image information taken in from the image input interface 103 is sent to the image processing unit 104 via the internal bus 109. It is then processed by the arithmetic processing unit 105, and intermediate results, the image information of the final result, and the like are stored in the storage unit 106.
- The image processing unit 104 receives a first image obtained from the image sensor of the left camera 101 (hereinafter sometimes referred to as the left image) and a second image obtained from the image sensor of the right camera 102 (hereinafter, the right image). For each image, it corrects device-specific deviations caused by the image sensor, performs image correction such as noise interpolation, and stores the result in the storage unit 106 as image information. Further, the image processing unit 104 calculates mutually corresponding points between the first image and the second image to obtain parallax information; this parallax information gives the distance information corresponding to each pixel on the image and is stored in the storage unit 106.
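- As an illustrative sketch only (not part of the patent disclosure; OpenCV is assumed, and the file names and matcher parameters below are placeholders), the corresponding-point calculation that yields the parallax information can be performed with standard block matching:

```python
import cv2

left_gray = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # first image (left camera 101)
right_gray = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # second image (right camera 102)

# Block-matching stereo: numDisparities must be a multiple of 16, blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
raw = stereo.compute(left_gray, right_gray)   # 16-bit fixed point, 4 fractional bits
disparity = raw.astype("float32") / 16.0      # parallax in pixels for each pixel of the left image
```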
- the image processing unit 104 is connected to the arithmetic processing unit 105, the CAN interface 107, and the control processing unit 108 via the internal bus 109.
- the arithmetic processing unit 105 recognizes a three-dimensional object in order to grasp the environment around the vehicle by using the image information and the distance information (parallax information) stored in the storage unit 106. A part of the recognition result of the three-dimensional object and the intermediate processing result is stored in the storage unit 106. After recognizing a three-dimensional object with respect to the captured image, the arithmetic processing unit 105 calculates the vehicle control using the recognition result. The vehicle control policy obtained as a result of the vehicle control calculation and a part of the recognition result are transmitted to the in-vehicle network CAN111 via the CAN interface 107, whereby the vehicle is controlled.
- The control processing unit 108 monitors whether any processing unit has caused an abnormal operation, whether an error has occurred during data transfer, and the like, and prevents abnormal operation.
- the image processing unit 104, the arithmetic processing unit 105, and the control processing unit 108 may be composed of a single computer unit or a plurality of computer units.
- FIG. 2 is a flowchart showing the operation of the image recognition device 100.
- First, images are captured by the left camera 101 and the right camera 102 provided in the image recognition device 100, and each piece of captured image information 121 and 122 is subjected to image processing S203, such as correction for absorbing the unique characteristics of the image sensor. The processing result of the image processing S203 is stored in the image buffer 161.
- the image buffer 161 is provided in the storage unit 106 of FIG.
- Next, the parallax processing S204 is performed. Specifically, the two images corrected by the image processing S203 are collated with each other, whereby the parallax information between the images obtained by the left camera 101 and the right camera 102 is obtained. From the parallax of the left and right images, the distance to a point of interest on a three-dimensional object is obtained by the principle of triangulation.
- the processing result of the parallax processing S204 is stored in the parallax buffer 162.
- the parallax buffer 162 is provided in the storage unit 106 of FIG. Further, the information recorded in the parallax buffer 162 may be converted into distance information and then used for the subsequent processing.
- the image processing S203 and the parallax processing S204 are performed by the image processing unit 104 of FIG. 1, and the finally obtained image information and the parallax information are stored in the storage unit 106.
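- As a further illustration (again an assumption-laden sketch; the focal length and baseline values are chosen arbitrarily), the conversion from the parallax information in the parallax buffer 162 into distance information follows the triangulation relation Z = f * B / d:

```python
import numpy as np

def disparity_to_depth(disparity: np.ndarray,
                       focal_px: float = 1200.0,    # assumed focal length [pixels]
                       baseline_m: float = 0.35) -> np.ndarray:  # assumed camera baseline [m]
    """Convert a disparity map [pixels] into a depth map [m] by triangulation: Z = f * B / d."""
    depth = np.full(disparity.shape, np.inf, dtype=np.float64)
    valid = disparity > 0                     # zero or negative disparity carries no distance
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```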
- FIG. 3 is a diagram showing a three-dimensional object detection region (also referred to as a three-dimensional object region) set on the image by the three-dimensional object detection process S205.
- FIG. 3 shows a pedestrian detection area 301 and a vehicle detection area 302 detected by the cameras 101 and 102 on the image as a result of the three-dimensional object detection process S205.
- These detection areas 301 and 302 indicate areas where a pedestrian or a vehicle exists on the image. They may be rectangular as shown in FIG. 3 or may have some other shape; in the following description, the detection area is treated as a rectangle, and a pedestrian is mainly used as the example of a three-dimensional object.
- In the three-dimensional object recognition process S206, a recognition process for specifying the type of the three-dimensional object is performed on the detection area set on the image by the three-dimensional object detection process S205.
- the three-dimensional object to be recognized by the three-dimensional object recognition process S206 is, for example, a pedestrian, a vehicle, a signal, a sign, a white line, a tail lamp of a car, a headlight, or the like, and the type of any of these is specified.
- This three-dimensional object recognition process S206 is performed using the image information recorded in the image buffer 161 and the parallax information recorded in the parallax buffer 162.
- However, even with the information in the parallax buffer 162, erroneous recognition may occur, because the possible relationships between an object and its background are innumerable. The same applies even when a radar such as a millimeter-wave radar is combined with an image sensor such as a camera.
- the details of the three-dimensional object recognition process S206 that solves this problem will be described later.
- In the vehicle control process S207, taking into account the recognition result for the three-dimensional object in the three-dimensional object recognition process S206 and the state of the own vehicle (speed, steering angle, etc.), a warning is issued to the occupants, control for braking the own vehicle or adjusting its steering angle is determined, or avoidance control for the recognized three-dimensional object is determined, and the result is output as automatic control information via the CAN interface 107 (S208).
- the three-dimensional object detection process S205, the three-dimensional object recognition process S206, and the vehicle control process S207 are performed by the arithmetic processing unit 105 of FIG.
- The programs shown in the flowchart of FIG. 2 and in the flowchart of FIG. 5 described later can be executed by a computer equipped with a CPU, memory, and the like. All or part of the processing may instead be realized by a hard-wired logic circuit. The program can be provided by being stored in a storage medium of the image recognition device 100 in advance, by being stored in an independent storage medium, or by being recorded into the storage medium of the image recognition device 100 via a network line. It may also be supplied in various other forms, such as a computer-readable computer program product in the form of a data signal (carrier wave).
- FIG. 4 is a block diagram showing a functional block configuration (Example 1) of the image recognition device 100 related to the three-dimensional object recognition process S206.
- FIG. 5 is a flowchart showing the details (Example 1) of the three-dimensional object recognition process S206.
- The three-dimensional object recognition process S206 of FIG. 2, that is, the flowchart shown in FIG. 5, is carried out by the normalization processing unit 401 and the recognition processing unit 402 provided in the arithmetic processing unit 105, as shown in FIG. 4; the normalization processing unit 401 normalizes the information in the parallax buffer 162.
- First, the normalization processing unit 401 normalizes the parallax corresponding to the detection area acquired by the three-dimensional object detection process S205 among the information contained in the parallax buffer 162 (FIG. 5: S501).
- Specifically, the value $s_i$ of each parallax is numerically converted into the normalized value $S_i$ based on the following equation (1):

  $S_i = \dfrac{s_i - s_{min}}{s_{max} - s_{min}}\,(S_{max} - S_{min}) + S_{min}$ ... (1)
- Here, $s_{max}$ and $s_{min}$ are, for example, the maximum and minimum values of the parallax before normalization, and $S_{max}$ and $S_{min}$ are the maximum and minimum values after normalization. $S_{max}$ and $S_{min}$ shall be arbitrarily determined according to the format of the information used in the three-dimensional object recognition process S206.
- Likewise, $s_{max}$ and $s_{min}$ may be arbitrarily determined according to the format of the information used in the three-dimensional object recognition process S206. For example, in a stereo camera, the accuracy of parallax and distance may be degraded where the signal-to-noise ratio is poor in regions of small brightness values, or where the resolution is unstable in regions of saturated brightness values, owing to the sensor characteristics. In such a case, $s_{max}$ and $s_{min}$ may be set to arbitrary values based on the original pixel information, the sensor characteristics, and the like, or may be converted and used based on a fixed conversion formula such as rounding up or rounding down by 10%. In addition, regardless of the accuracy of the original image, in the case of a radar sensor or the like, it is conceivable to use $s_{max}$ and $s_{min}$ after excluding outliers based on the rate of erroneous measurements in the region.
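- The normalization of equation (1) can be sketched as follows (an illustration only; the function name, the default output range of 0 to 255, and the percentile-based outlier exclusion are assumptions, the last corresponding to the outlier handling discussed above):

```python
import numpy as np

def normalize_parallax(parallax_roi: np.ndarray,
                       S_min: float = 0.0, S_max: float = 255.0,
                       robust: bool = False) -> np.ndarray:
    """Numerically convert each parallax s_i in the detection area into S_i (equation (1))."""
    if robust:
        # Exclude outliers when the sensor is noisy (cf. the discussion of s_max and s_min).
        s_min, s_max = np.percentile(parallax_roi, [5.0, 95.0])
    else:
        s_min, s_max = float(parallax_roi.min()), float(parallax_roi.max())
    if s_max == s_min:
        return np.full(parallax_roi.shape, S_min)
    S = (parallax_roi - s_min) / (s_max - s_min) * (S_max - S_min) + S_min
    return np.clip(S, S_min, S_max)
```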
- Alternatively, the equation used for the normalization process S501 may be defined as the following equation (2), which normalizes each parallax relative to the average:

  $S_i = \dfrac{s_i - s_{avr}}{s_{max} - s_{min}}\,(S_{max} - S_{min})$ ... (2)

  where $s_{avr}$ is the average value of the parallax values in the detection area.
- the method used for normalization shall be arbitrarily determined according to the format of the information used in the three-dimensional object recognition process S206.
- In the above, the parallax information corresponding to the detection area is numerically converted and normalized based on an arbitrary rule, but it goes without saying that the distance information corresponding to the detection area may be numerically converted and normalized instead.
- the recognition processing unit 402 performs recognition processing by combining the information of the image buffer 161 and the normalization information of the parallax buffer 162 (parallax information or distance information after the normalization processing) (FIG. 5: S502).
- The recognition process S502 uses, for example, pattern matching that compares a luminance image in the image buffer 161 with a predetermined pattern using normalized correlation or the like, or determination by a classifier created using machine learning.
- To combine the two kinds of information, a method such as taking the average of the pattern matching result for the luminance image and the pattern matching result for the normalized parallax information as the final judgment value is used, or identification is performed by a classifier created by machine learning using the difference between the luminance image and the normalized parallax information as a feature quantity.
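- A minimal sketch of such a combined judgment (illustrative only; the normalized-correlation formulation and the equal 0.5 weighting of the two matching results are assumptions):

```python
import numpy as np

def ncc(patch: np.ndarray, template: np.ndarray) -> float:
    """Normalized correlation between a detection-area patch and a predetermined pattern."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def recognition_score(lum_patch, lum_template, par_patch, par_template) -> float:
    # Final judgment value: the average of the luminance-based and
    # normalized-parallax-based pattern matching results.
    return 0.5 * (ncc(lum_patch, lum_template) + ncc(par_patch, par_template))
```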
- FIG. 6 is a block diagram showing a functional block configuration (Example 2) of the image recognition device 100 related to the three-dimensional object recognition process S206.
- FIG. 7 is a flowchart showing the details (Example 2) of the three-dimensional object recognition process S206.
- The three-dimensional object recognition process S206 of FIG. 2, that is, the flowchart shown in FIG. 7, is carried out, as shown in FIG. 6, by the weight generation processing unit 601, which creates, from the information (parallax information) in the parallax buffer 162, weights corresponding to each pixel of the image in the image buffer 161, and by the recognition processing unit 602, which performs recognition using the weight information created by the weight generation processing unit 601 together with the information in the image buffer 161; both units are provided in the arithmetic processing unit 105.
- First, the weight generation processing unit 601 generates, from the information in the parallax buffer 162, a weight corresponding to each pixel of the image in the image buffer 161 (the image corresponding to the detection area acquired by the three-dimensional object detection process S205) (FIG. 7: S701).
- the detection area obtained by the three-dimensional object detection process S205 includes a background portion in addition to the recognition target that is the foreground portion. At this time, if the recognition target, which is the foreground part, and the background part are treated in the same way, it causes erroneous recognition. Therefore, in the weight generation process S701, the weight is created using the parallax information.
- For example, a weight of 1 is given to pixels whose parallax value $s_i$ satisfies the following equation (3), and a weight of 0 is given to the other pixels:

  $|s_i - s_{avr}| \leq s_{th}$ ... (3)
- This weight is used, for example, to mask the luminance information obtained from the image buffer 161.
- The weight generation processing unit 601 may use the median instead of the average value $s_{avr}$, and instead of fixing the threshold value $s_{th}$, it may determine outliers from the variance or standard deviation of the parallax in the detection area; for example, a weight of 0 is given to pixels outside the 3σ range of the standard deviation and a weight of 1 to the others. The designer may arbitrarily determine the maximum and minimum (in other words, the range) of this weight, and may assign it linearly or according to an arbitrary function.
- Alternatively, the weight can be created by building a histogram of the parallax values $s_i$ in the detection area and selecting either the foreground peak or the background peak appearing in the histogram; for example, a weight of 1 is given to pixels whose parallax value $s_i$ corresponds to the foreground to be recognized, and a weight of 0 to the other pixels.
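- A minimal sketch of the weight generation process S701 (illustrative only; the function name and the 3-sigma default threshold are assumptions, the latter following the standard-deviation variant described above):

```python
import numpy as np

def generate_weight(parallax_roi: np.ndarray, s_th: float | None = None) -> np.ndarray:
    """Binary weight: 1 near the foreground parallax, 0 for the background (equation (3))."""
    s_avr = parallax_roi.mean()
    if s_th is None:
        s_th = 3.0 * parallax_roi.std()   # 3-sigma variant mentioned in the text
    return (np.abs(parallax_roi - s_avr) <= s_th).astype(np.uint8)

# Usage: mask the luminance information obtained from the image buffer 161.
# masked_luminance = luminance_roi * generate_weight(parallax_roi)
```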
- In the above, the weight corresponding to each pixel is generated (by numerical conversion) from the parallax information of the three-dimensional object, but it may instead be generated (by numerical conversion) from the distance information of the three-dimensional object; furthermore, instead of a weight for each pixel, a weight corresponding to each distance (corresponding to each pixel) or to each parallax may be generated.
- the recognition processing unit 602 performs recognition processing using the image information of the image buffer 161 and the weight information created by the weight generation processing unit 601 (FIG. 7: S702).
- The recognition process S702 includes, for example, pattern matching in which the weighted values of the luminance image in the image buffer 161 are compared with a predetermined pattern using normalized correlation or the like, or determination by a classifier that takes the luminance image and the weight as inputs.
- The recognition processing unit 602 can also use the parallax information and the distance information obtained from the parallax buffer 162 for recognition, in combination with the image information and the weight information. For example, after masking each of the luminance image and the parallax image with the weight, the two masked images are identified by a discriminator that uses their difference as a feature quantity.
- FIG. 8 is a block diagram showing a functional block configuration (Example 3) of the image recognition device 100 related to the three-dimensional object recognition process S206.
- FIG. 9 is a flowchart showing the details (Example 3) of the three-dimensional object recognition process S206.
- The three-dimensional object recognition process S206 of FIG. 2, that is, the flowchart shown in FIG. 9, is carried out by the weight generation processing unit 801, the normalization processing unit 802, and the recognition processing unit 803 provided in the arithmetic processing unit 105, as shown in FIG. 8.
- First, the weight generation processing unit 801 uses the information in the parallax buffer 162 to generate a weight corresponding to each pixel of the image in the image buffer 161 (the detection area acquired in the three-dimensional object detection process S205) (FIG. 9: S901).
- For example, a weight is created in which pixels whose parallax lies within an arbitrary threshold value th of the median parallax are set to 1 and the other pixels are set to 0.
- Next, the normalization processing unit 802 normalizes the parallax information corresponding to the detection area acquired by the three-dimensional object detection process S205, based on the weight created by the weight generation processing unit 801 (FIG. 9: S902).
- In S902, for example, when a binary weight of 0 or 1 is obtained, the maximum and minimum values of the parallax having a weight of 1 are set to $s_{max}$ and $s_{min}$, and each parallax is normalized based on the following equation (4):

  $S_i = \dfrac{s_i - s_{min}}{s_{max} - s_{min}}\,(S_{max} - S_{min}) + S_{min}$ ... (4)

- When an $S_i$ exceeding $S_{max}$ or an $S_i$ less than $S_{min}$ is obtained, a value that can be judged to be invalid may be assigned to it in the normalization result. For example, in a system premised on handling finite positive values, exception handling in which a negative value is treated as an invalid value is conceivable.
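- The combination of S901 and S902 can be sketched as follows (illustrative only; the names, the 0-to-255 output range, and the use of -1 as the invalid value are assumptions, the last following the negative-value exception handling mentioned above):

```python
import numpy as np

INVALID = -1.0  # negative value treated as invalid, per the exception handling above

def weighted_normalize(parallax_roi: np.ndarray, th: float,
                       S_min: float = 0.0, S_max: float = 255.0) -> np.ndarray:
    # S901: weight 1 within th of the median parallax, 0 elsewhere.
    weight = np.abs(parallax_roi - np.median(parallax_roi)) <= th
    if not weight.any():
        return np.full(parallax_roi.shape, INVALID)
    s_min, s_max = parallax_roi[weight].min(), parallax_roi[weight].max()
    if s_max == s_min:
        return np.where(weight, S_min, INVALID)
    # S902: min-max normalization over the weight-1 parallaxes (equation (4)).
    S = (parallax_roi - s_min) / (s_max - s_min) * (S_max - S_min) + S_min
    S[~weight | (S < S_min) | (S > S_max)] = INVALID
    return S
```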
- Here, too, the weight corresponding to each pixel is generated (by numerical conversion) from the parallax information of the three-dimensional object, but it may instead be generated (by numerical conversion) from the distance information of the three-dimensional object; furthermore, instead of a weight for each pixel, a weight corresponding to each distance (corresponding to each pixel) or to each parallax may be generated. Likewise, although the parallax information corresponding to the detection area is numerically converted and normalized, it goes without saying that the distance information corresponding to the detection area may be numerically converted and normalized.
- Then, the recognition processing unit 803 performs recognition using the image information in the image buffer 161 and the parallax information created by the normalization processing unit 802 (the parallax information after the normalization process) (FIG. 9: S903). Further, the recognition processing unit 803 can use the weight information created by the weight generation processing unit 801 for recognition, in combination with the image information and the normalization information. For example, the edge image 1001, created by edge extraction from the luminance image shown in FIG. 10, is multiplied by the weight information 1002 to create an edge image from which the background edges have been removed (background-removed edge image) 1003, and recognition is performed using the background-removed edge image 1003 and the normalized parallax image.
- The recognition process S903 may use a pattern matching technique such as normalized correlation. Alternatively, a classifier to which the product or difference of the two types of information is input may be used.
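- The FIG. 10 procedure can be sketched as follows (illustrative only; OpenCV's Canny detector and its thresholds are assumptions standing in for the unspecified edge extraction, and the luminance input is assumed to be an 8-bit image):

```python
import cv2
import numpy as np

def background_removed_edges(luminance_roi: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """Edge image 1001 multiplied by weight information 1002 -> background-removed edge image 1003."""
    edges = cv2.Canny(luminance_roi, 100, 200)  # thresholds are illustrative assumptions
    return edges * weight                       # weight 0 erases background edges
```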
- The normalization process alone is affected by the characteristics of the background portion, and the weight generation process alone causes differences in recognition performance depending on, for example, the distance of the foreground portion. Therefore, by performing the weight generation process and the normalization process together, recognition can be performed without being affected by the combination of foreground and background or by the distance of the foreground, which leads to improved recognition performance.
- all the parallax information can be replaced with distance information.
- In the above, the image recognition device 100 using a stereo camera composed of the pair of cameras 101 and 102 has been described; however, the present invention may also be realized by an image recognition device 100A that does not use a stereo camera.
- FIG. 11 is a flowchart showing the operation of the image recognition device 100A.
- the same parts as those of the operation in the image recognition device 100 shown in FIG. 2 are designated by the same reference numerals, and the description thereof will be omitted.
- the image recognition device 100A includes an optical camera (hereinafter, simply referred to as a camera) 1101 and a radar sensor 1102 as an imaging unit.
- In the image recognition device 100A, a three-dimensional object is detected as follows.
- An image is captured by the camera 1101, and the captured image information is subjected to image processing S203 such as correction for absorbing the unique characteristics of the image sensor.
- the processing result of the image processing S203 is stored in the image buffer 161.
- the radar sensor 1102 obtains the distance to the three-dimensional object as sensor information.
- the three-dimensional object detection process S213 detects a three-dimensional object in the three-dimensional space based on the distance to the three-dimensional object.
- the distance information used for detection is stored in the distance buffer 163.
- The distance buffer 163 is provided, for example, in the storage unit 106 of FIG. 1. Further, in the three-dimensional object detection process S213, the image and the distance are associated with each other as necessary for the subsequent processing.
- The three-dimensional object recognition process S214 performs, in substantially the same manner as in the above-mentioned image recognition device 100 (here, using the distance information of the three-dimensional object), a recognition process for specifying the type of the three-dimensional object on the detection area set on the image by the three-dimensional object detection process S213.
- the subsequent processing can be performed in the same manner as the configuration by the stereo camera described in the image recognition device 100. Further, the image recognition device 100A does not require a plurality of images in the image processing S203.
- As described above, the image recognition devices 100 and 100A of the present embodiment numerically convert the distance information or the parallax information of a three-dimensional object with respect to the detection region of the three-dimensional object set on the image captured by the cameras 101, 102, and 1101 as the imaging unit, and combine the numerically converted distance information or parallax information with the image information of the image to perform a recognition process for specifying the type of the three-dimensional object.
- Specifically, with respect to the information of each pixel obtained from the cameras 101, 102, and 1101 and the corresponding distance or parallax information, the distance information or parallax information of the three-dimensional object to be recognized is normalized (FIGS. 4 and 5), the distance information or parallax information other than that of the recognition target is masked, the weighting of the pixel information and of the distance or parallax information is changed (FIGS. 6 and 7), or these techniques are combined (FIGS. 8 and 9), thereby realizing recognition that combines pixel information with distance or parallax information.
- Thus, the image recognition devices 100 and 100A of the present embodiment can improve the correct recognition rate for the detection areas 301 and 302 of a three-dimensional object set on the images captured by the cameras 101, 102, and 1101.
- In particular, there is an effect of suppressing the erroneous recognition of shapes (appearances on the image) that resemble the recognition target but are generated by a combination of foreground and background. Therefore, according to the present embodiment, it is possible to accurately detect a three-dimensional object and improve recognition performance while suppressing an increase in cost.
- In the above embodiment, a stereo camera composed of two cameras or a monocular camera is used, but three or more cameras may be used.
- Further, although a front camera that images the area in front of the vehicle (in other words, acquires an image of the area in front of the vehicle) is illustrated, a rear camera or a side camera that images the area behind or beside the vehicle may of course be used.
- The present invention is not limited to the above-described embodiments; other embodiments conceivable within the scope of the technical idea of the present invention are also included within the scope of the present invention, as long as the features of the present invention are not impaired.
- The above-described embodiment has been described in detail in order to explain the present invention in an easy-to-understand manner, and the invention is not necessarily limited to an embodiment including all the described configurations. The configuration may also be a combination of the above-described embodiment and a modified example.
- Each of the above configurations, functions, processing units, processing means, and the like may be realized in hardware by designing part or all of them as, for example, an integrated circuit. Each of the above configurations, functions, and the like may also be realized in software by a processor interpreting and executing a program that realizes each function. Information such as programs, tables, and files that realize each function can be stored in a memory, a hard disk, a storage device such as an SSD (Solid State Drive), or a recording medium such as an IC card, an SD card, or a DVD.
- control lines and information lines indicate those that are considered necessary for explanation, and not all control lines and information lines are necessarily indicated on the product. In practice, it can be considered that almost all configurations are interconnected.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to an image recognition device that can accurately detect a three-dimensional object and improve recognition performance while minimizing increases in cost. With respect to the information of each pixel obtained from cameras 101, 102, and 1101 and the corresponding distance or parallax information, the distance information or parallax information of a three-dimensional object to be recognized is normalized, or the distance information or parallax information other than that of the object to be recognized is masked, or the weighting of the pixel information and of the distance or parallax information is changed, or the above techniques are combined, thereby implementing recognition in which pixel information is combined with distance or parallax information.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE112020004377.0T DE112020004377T5 (de) | 2019-10-29 | 2020-09-08 | Bilderkennungsvorrichtung |
JP2021554138A JP7379523B2 (ja) | 2019-10-29 | 2020-09-08 | 画像認識装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019-196340 | 2019-10-29 | ||
JP2019196340 | 2019-10-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021084915A1 (fr) | 2021-05-06 |
Family
ID=75715095
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/033886 WO2021084915A1 (fr) | 2019-10-29 | 2020-09-08 | Dispositif de reconnaissance d'image |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP7379523B2 (fr) |
DE (1) | DE112020004377T5 (fr) |
WO (1) | WO2021084915A1 (fr) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2019124537A (ja) * | 2018-01-15 | 2019-07-25 | キヤノン株式会社 | Information processing device, control method and program therefor, and vehicle driving support system |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP6752024B2 (ja) | Image processing device |
- JP6764378B2 (ja) | Vehicle exterior environment recognition device |
-
2020
- 2020-09-08 DE DE112020004377.0T patent/DE112020004377T5/de active Pending
- 2020-09-08 WO PCT/JP2020/033886 patent/WO2021084915A1/fr active Application Filing
- 2020-09-08 JP JP2021554138A patent/JP7379523B2/ja active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2019124537A (ja) * | 2018-01-15 | 2019-07-25 | キヤノン株式会社 | Information processing device, control method and program therefor, and vehicle driving support system |
Also Published As
Publication number | Publication date |
---|---|
JPWO2021084915A1 (fr) | 2021-05-06 |
JP7379523B2 (ja) | 2023-11-14 |
DE112020004377T5 (de) | 2022-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- JP7206583B2 (ja) | Information processing device, imaging device, device control system, moving body, information processing method, and program | |
- JP6701253B2 (ja) | Vehicle exterior environment recognition device | |
- JP2013190421A (ja) | Method for improving passing-object position detection in a vehicle | |
- JP2013203337A (ja) | Driving support device | |
- JP2014115978A (ja) | Moving object recognition device, notification device using the same, moving object recognition program used in the moving object recognition device, and moving body provided with the moving object recognition device | |
- CN110659547B (zh) | Object recognition method and device, vehicle, and computer-readable storage medium | |
US9524645B2 (en) | Filtering device and environment recognition system | |
- JP6631691B2 (ja) | Image processing device, device control system, imaging device, image processing method, and program | |
- WO2021084915A1 (fr) | Image recognition device | |
US20200210730A1 (en) | Vehicle exterior environment recognition apparatus | |
- KR20210147405A (ko) | Electronic device for performing object recognition and operating method thereof | |
- WO2019175920A1 (fr) | Fog identification device, fog identification method, and fog identification program | |
- JP7283268B2 (ja) | Information processing device and in-vehicle system | |
- JP2018146495A (ja) | Object detection device, object detection method, object detection program, imaging device, and device control system | |
- JP7466695B2 (ja) | Image processing device | |
- WO2020036039A1 (fr) | Stereo camera device | |
- JP2021051348A (ja) | Object distance estimation device and object distance estimation method | |
- JP7277666B2 (ja) | Processing device | |
- JP2021113753A (ja) | Fogging determination device and fogging determination method | |
- WO2018097269A1 (fr) | Information processing device, imaging device, device control system, moving object, information processing method, and computer-readable recording medium | |
- JP5890816B2 (ja) | Filtering device and environment recognition system | |
US20230096864A1 (en) | Imaging processing device | |
- CN115063772B (zh) | Method for detecting a trailing vehicle in a vehicle platoon, terminal device, and storage medium | |
- WO2023112127A1 (fr) | Image recognition device and image recognition method | |
- KR20230003953A (ko) | Lightweight deep-learning processing device and method for vehicles using an environment-change-adaptive feature generator | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20883110 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021554138 Country of ref document: JP Kind code of ref document: A |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20883110 Country of ref document: EP Kind code of ref document: A1 |