CN113228130B - Image processing apparatus

Image processing apparatus

Info

Publication number
CN113228130B
Authority
CN
China
Prior art keywords
image
correction
area
pedestrian
unit
Prior art date
Legal status
Active
Application number
CN201980084842.0A
Other languages
Chinese (zh)
Other versions
CN113228130A
Inventor
鸨亮辅
大塚裕史
牛场郭介
Current Assignee
Hitachi Astemo Ltd
Original Assignee
Hitachi Astemo Ltd
Priority date
Filing date
Publication date
Application filed by Hitachi Astemo Ltd
Publication of CN113228130A
Application granted
Publication of CN113228130B

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T7/00: Image analysis
    • G08: Signalling
    • G08G: Traffic Control Systems
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image processing device that can correctly determine whether a pedestrian is present even when the whole body of a nearby pedestrian cannot be detected. In a daytime camera image (1101) in which the pedestrian's feet are cut off at the image edge, the recognition region (11020) is corrected so that the upper body including the head becomes the recognition region (1102). In a nighttime camera image (1103) taken with the vehicle's headlights on low beam, in which it can be determined that the pedestrian's head is not included in the detection region, the recognition region (11030) is corrected so that the upper body including the head becomes the recognition region (1104). In a nighttime camera image (1105) taken with the headlights on high beam, the region (11060) in which the pedestrian's feet are cut off is corrected so that the upper body including the head becomes the recognition region (1106).

Description

Image processing apparatus
Technical Field
The present invention relates to an image processing device for pedestrian recognition using an on-vehicle camera.
Background
In recent years, with the spread of in-vehicle sensor devices, driving assistance systems and automatic driving systems have been developed. As these systems become more advanced, it becomes necessary to identify whether an object in the vehicle's surroundings is a vehicle, a pedestrian, or something else.
As described in Patent Document 1, a common approach to recognizing the type of an object in the vehicle's surroundings is to classify objects detected by an on-vehicle camera, radar, or the like using a recognizer generated in advance by machine learning.
Documents of the prior art
Patent document
Patent Document 1: Japanese Patent Laid-Open Publication No. 2013-232080
Disclosure of Invention
Problems to be solved by the invention
Conventional pedestrian recognition maintains its performance by running whole-body recognition processing on the premise that the pedestrian's whole body appears in the detected region.
Recognition performance therefore degrades when the whole body cannot be detected: when a pedestrian is so close that the lower body falls outside the camera's angle of view, when the lower body is hidden by a hedge, guardrail, or the like, or when the headlights at night illuminate only part of the body.
The purpose of the present invention is to realize an image processing device that can accurately recognize whether a pedestrian is present even when the whole body of a nearby pedestrian cannot be detected.
Means for solving the problems
In order to achieve the above object, the present invention is configured as follows.
An image processing apparatus includes: a detection unit that detects image information of the external environment; a recognition area correction amount calculation unit that calculates a correction amount for the recognition area in which a three-dimensional object is recognized, based on the image information detected by the detection unit; a recognition area correction unit that corrects the recognition area based on the calculated correction amount; and a recognition processing unit that performs recognition processing on the three-dimensional object within the corrected recognition area.
Advantageous Effects of Invention
According to the present invention, an image processing device can be realized that accurately recognizes whether a pedestrian is present even when the whole body of a nearby pedestrian cannot be detected.
Drawings
Fig. 1 is a block diagram showing an overall configuration of an image processing apparatus according to an embodiment of the present invention.
Fig. 2 is a diagram showing a process flow in the stereo camera apparatus.
Fig. 3 is a diagram illustrating a result of a three-dimensional object detection process on a camera image.
Fig. 4 is a diagram showing a processing configuration according to an embodiment.
Fig. 5 is a diagram showing details of a processing configuration according to an embodiment.
Fig. 6 is a diagram showing an example of a camera image at night and an operation result of three-dimensional object detection processing.
Fig. 7 is a diagram showing a three-dimensional object detection result in a case where a blocking object is included in a detection area in a three-dimensional object detection process.
Fig. 8 is a diagram showing an image of a three-dimensional object whose feet are cut off at the image edge.
Fig. 9 is a diagram showing a relationship between the angle of view of the camera and the distance of the pedestrian.
Fig. 10 is a diagram showing a distance relationship between a headlight irradiation range and a pedestrian.
Fig. 11 is a diagram showing an example of the recognition area correction processing result.
Detailed Description
Embodiments of the present invention will be described with reference to the drawings.
Embodiment
Fig. 1 is a block diagram showing an overall configuration of an image processing apparatus according to an embodiment of the present invention.
In fig. 1, an in-vehicle stereo camera device 100 according to an embodiment of the present invention is mounted on a vehicle and recognizes the environment outside the vehicle from image information of a target area in front of the vehicle. The in-vehicle stereo camera device 100 recognizes white lines on the road, pedestrians, vehicles, other three-dimensional objects, signals, signs, illumination lamps, and the like, and adjusts the braking, steering, and so on of the vehicle (own vehicle) on which it is mounted.
The in-vehicle stereo camera device 100 includes two cameras, a left camera 101 and a right camera 102 (together forming an image pickup unit that images the external environment), arranged on the left and right to acquire image information, and an image input interface 103 that controls image capture by the cameras 101 and 102 and takes in the captured images.
Images taken in through the image input interface 103 are transmitted via the bus 109 and processed by the image processing unit 104 and the arithmetic processing unit 105; the storage unit 106 stores intermediate image data and the final results.
The image processing unit 104 compares the first image obtained from the image pickup device of the camera 101 with the second image obtained from the camera 102, corrects the device-specific variations of the two image pickup devices, performs image corrections such as noise interpolation, and stores the results in the storage unit 106.
It also finds mutually corresponding positions between the first image from camera 101 and the second image from camera 102, calculates parallax information, and stores it in the storage unit 106 in the same manner.
The arithmetic processing unit 105 recognizes various objects necessary for sensing the environment around the vehicle using the image and the parallax information (distance information with respect to each point on the image) stored in the storage unit 106.
The various objects include people, vehicles, other obstacles, traffic lights, signs, and the taillights and headlights of other vehicles. These recognition results and some of the intermediate calculation results are recorded in the storage unit 106 as above. After the objects in the captured image have been recognized, command values and the like needed for vehicle control are calculated from the recognition results.
The vehicle control policy and part of the object recognition results obtained from the calculations are transmitted to the in-vehicle network CAN 110 through the CAN interface 107, whereby braking and other control of the vehicle is carried out.
For these operations, the control processing unit 108 monitors whether any processing unit is operating abnormally and whether errors occur during data transfer, thereby preventing abnormal operation.
The image processing unit 104 is connected via the internal bus 109 to the control processing unit 108, the storage unit 106, the arithmetic processing unit 105, the image input interface 103 to the image pickup devices of the left camera 101 and right camera 102, and the CAN interface 107 to the external in-vehicle network CAN 110.
The image input interface 103, image processing unit 104, storage unit 106, arithmetic processing unit 105, CAN interface 107, and control processing unit 108 are constituted by one or more computer units.
The storage unit 106 is configured from memory or the like that stores, for example, image information obtained by the image processing unit 104 and image information created from the computation results of the arithmetic processing unit 105.
The input/output unit 107 connected to the external in-vehicle network CAN110 outputs information output from the in-vehicle stereo camera device 100 to a control system (not shown) of the host vehicle via the external in-vehicle network CAN 110.
Fig. 2 is a diagram showing a flow of processing in the stereo camera apparatus 100.
First, in the in-vehicle stereo camera device 100, the left and right cameras 101 and 102 capture images, and the image processing unit 104 performs image processing 205 on the captured image data 203 and 204 to correct for characteristics inherent to the image pickup devices. The processing result is stored in the image buffer 206, which is provided in the storage unit 106 of fig. 1.
Next, the two corrected images are matched against each other to obtain parallax information between the images from the left and right cameras. The parallax makes clear where a given point on a target object appears in each of the images from the left and right cameras 101 and 102, and the distance to the object can then be obtained by the principle of triangulation.
This matching is performed by the parallax processing 207. Both the image processing 205 and the parallax processing 207 are performed by the image processing unit 104 of fig. 1, and the resulting images and parallax information are stored in the storage unit 106.
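As a concrete illustration of this triangulation step, the following is a minimal Python sketch (the patent discloses no code; the focal length, baseline, and disparity values are assumed for the example). For a rectified stereo pair, the depth Z of a point seen with disparity d is Z = f * B / d, where f is the focal length in pixels and B is the baseline between the two cameras.

    def distance_from_disparity(disparity_px: float, focal_px: float,
                                baseline_m: float) -> float:
        """Triangulation for a rectified stereo pair: Z = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a visible point")
        return focal_px * baseline_m / disparity_px

    # Example with assumed values: f = 1200 px, B = 0.35 m, d = 21 px -> 20.0 m.
    print(distance_from_disparity(21.0, 1200.0, 0.35))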
Using the parallax images obtained in this way, detection processing 208 detects three-dimensional objects in three-dimensional space. Various object recognition processing 209 is then performed using the images and parallax information stored in the storage unit 106. The objects to be recognized include people, cars, other three-dimensional objects, signs, traffic lights, and taillights; the details of the recognition processing are determined by constraints such as the characteristics of each object and the processing time available to the system.
Further, taking into account the object recognition results and the state of the host vehicle (speed, steering angle, and so on), the vehicle control processing 210 determines a control policy, for example issuing a warning to the occupants or performing avoidance control of the object by braking and steering-angle adjustment, and outputs the result through the CAN interface 107.
The three-dimensional object detection processing 208, the various object recognition processing 209, and the vehicle control processing 210 are performed by the arithmetic processing unit 105 of fig. 1. Each of these is constituted by, for example, one or more computer units configured to exchange data with one another.
The parallax processing 207 obtains the parallax, or distance, for each pixel of the left and right images; the three-dimensional object detection processing 208 groups these into three-dimensional objects in three-dimensional space; and the various object recognition processing 209 operates on the resulting positions and regions on the image. To recognize objects stably in the various object recognition processing 209, the three-dimensional object region on the image must match the image of the object to be recognized.
However, with a stereo camera, the object region on the image may fail to match the object to be recognized completely, because of the brightness of the external environment, variation in imaging performance between the cameras, occlusion by foreign matter on the glass surface, and so on.
The same holds when a millimeter-wave radar or the like is combined with an image sensor such as a camera. The present embodiment is therefore applied within the various recognition processes.
The following describes the configuration on the premise of a stereo camera device.
Fig. 3 is a diagram illustrating the result of the stereoscopic object detection processing 208 on the camera image.
In fig. 3, a three-dimensional object region 301 resulting from the three-dimensional object detection processing 208 is obtained for each object with height above the road surface present in three-dimensional space, such as a pedestrian, vehicle, tree, or street lamp, and is projected as a region on the image.
The three-dimensional object region 301 may be a rectangular region as shown in fig. 3, or an amorphous region obtained from the parallax or distance. In later-stage processing it is generally handled as a rectangle to simplify computation. In the present embodiment the region is likewise treated as a rectangle, and the details of each process are described below using a pedestrian as the example of a three-dimensional object.
Fig. 4 shows the processing configuration of the present embodiment. As described above, the various object recognition processing 209 is performed by the arithmetic processing unit 105, so the configuration shown in fig. 4 is internal to the arithmetic processing unit 105.
The following description likewise assumes a stereo camera.
In fig. 4, the detection unit 401 detects the distance from the imaging unit to a three-dimensional object based on the image information of the external environment captured by the cameras 101 and 102 (the imaging unit), and detects whether the region of the object to be detected contains an occluder or the like, based on the state of the external environment such as luminance and weather and on the distance distribution of the object region obtained from the three-dimensional object detection processing 208.
The region correction amount calculation unit (recognition area correction amount calculation unit) 402 calculates a correction amount for the recognition area in which the three-dimensional object is recognized, based on the image information detected by the detection unit 401. That is, the correction amount of the recognition area is calculated from at least one of the detected brightness of the external environment, the distance to the three-dimensional object, the headlight irradiation range, and the position or size of an occluder.
The recognition area correction unit 403 corrects the recognition area based on the correction amount calculated by the region correction amount calculation unit 402.
The recognition processing unit 404 then recognizes the three-dimensional object within the corrected recognition area. The recognition processing can be performed by, for example, the following techniques.
One is template matching, in which a template prepared in advance with the features of the recognition target is compared against the recognition region. Another is to use a recognizer (software) that combines a feature quantity such as the luminance image, HOG, or Haar-like features with a machine learning method such as a support vector machine, AdaBoost, or deep learning.
Alternatively, edge shapes and the like may be recognized using manually determined thresholds.
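As an illustration of the HOG-plus-SVM style of recognizer named above, the following Python sketch uses OpenCV's stock pedestrian detector. The patent's own recognizer is generated by machine learning and is not disclosed, so the detector choice, window stride, and region format here are assumptions.

    import cv2
    import numpy as np

    # OpenCV's default HOG descriptor with its bundled pedestrian SVM weights.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def looks_like_pedestrian(image: np.ndarray, region) -> bool:
        """Run the HOG+SVM detector inside a recognition region.
        region is an (x, y, w, h) rectangle; the ROI should be at least
        the detector's 64x128 window."""
        x, y, w, h = region
        roi = image[y:y + h, x:x + w]
        rects, _weights = hog.detectMultiScale(roi, winStride=(8, 8))
        return len(rects) > 0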
Fig. 5 is a diagram showing the details of the processing structure of the present embodiment (the details of each part shown in fig. 4).
In fig. 5, the external environment determination process 501 of the detection unit 401 determines the brightness of the external environment at the time the three-dimensional object detection processing 208 is performed. Fig. 6 shows an example: a camera image 601 taken at night and the corresponding result of the three-dimensional object detection processing 208.
As shown in fig. 3, if the environment is daytime, the whole body of a pedestrian appearing in the image is included in the three-dimensional object region (detection region) 301. When the scene becomes dark, however, as at night, the pedestrian can no longer be separated from the background, as shown in fig. 6; the three-dimensional object detection processing 208 may then yield a detection region 602 that includes the pedestrian's lower half but not the upper half, and thus not the whole body.
The brightness of the external environment may be determined from the luminance distribution in the image obtained by image processing, or the time of day may be taken directly from the vehicle's internal information. Alternatively, the scene may be judged daytime if the shutter speed of the exposure-adjusted camera is at or below a threshold, and nighttime if it is above the threshold.
Beyond the shutter speed, more detailed determinations of the external environment, such as inside a tunnel or at dusk, can be made by observing the luminance distribution within the image, the luminance change between successive frames, and so on.
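A minimal sketch of this day/night determination, assuming illustrative threshold values (the patent specifies the logic but not the numbers):

    import numpy as np

    NIGHT_SHUTTER_S = 1.0 / 60.0   # assumed threshold on exposure time
    NIGHT_MEAN_LUMA = 60           # assumed 8-bit mean-brightness cutoff

    def judge_brightness(gray: np.ndarray, shutter_s: float) -> str:
        """Classify the scene as 'day' or 'night' from the exposure-adjusted
        shutter speed, using the in-image luminance as a secondary check
        (useful for tunnels and dusk, as noted above)."""
        if shutter_s >= NIGHT_SHUTTER_S:   # long exposure implies a dark scene
            return "night"
        if float(np.mean(gray)) < NIGHT_MEAN_LUMA:
            return "night"
        return "day"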
In fig. 5, the occluder detection processing 502 of the detection unit 401 detects whether an object other than the pedestrian is included in the detection region produced by the three-dimensional object detection processing 208, and determines whether the pedestrian is occluded.
Fig. 7 shows a three-dimensional object detection result in which the detection region 701 contains an occluder. In fig. 7, the parallax distribution 702 within the detection (recognition) region 701 is examined; if there are many areas with differing parallax, the distance of each area is calculated, it is determined that the pedestrian (three-dimensional object) is occluded by an occluder 703, and the occluder and pedestrian areas are distinguished as the near-side and far-side areas within the detection region. The correction amount of the recognition region is then calculated so as to exclude the occluder 703 from the detection (recognition) region 701, and recognition is performed on the three-dimensional object in the corrected region.
In fig. 5, the distance determination process 503 of the region correction amount calculation unit 402 determines, when the detected three-dimensional object is a pedestrian, whether it is at a distance at which the pedestrian's feet are cut off from the image, and calculates the correction amount of the recognition region. Fig. 8 shows an image in which the feet of a three-dimensional object (a pedestrian in fig. 8) are cut off. As shown in fig. 8, whether the feet are cut off can be judged by whether the lower end of the detection region 801 touches the lower edge of the captured image.
Fig. 9 shows the relationship between the camera's angle of view and the pedestrian's distance. As shown in fig. 9, when the three-dimensional object is a pedestrian, the distance at which the feet leave the image can be calculated from the distance between the detected object and the vehicle, the camera's angle of view 901, the camera mounting height, the camera posture, and so on.
The distance at which the feet are cut off from the image is set in advance as a distance set value and stored in the storage unit 106; the condition for the feet being cut off is that the distance detected by the imaging unit (cameras 101 and 102) is at or below this set value.
When the detected distance is at or below the distance set value and it can be determined that the upper portion (head) of the detected three-dimensional object is not within the recognition region, the recognition area correction unit 403 enlarges (expands) the upper area of the recognition region. In this case, within the range where the detected distance is at or below the set value, the recognition area correction unit 403 makes the correction amount of the upper area smaller as the detected distance becomes larger.
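A sketch of this geometry and of the distance-dependent correction, under assumed camera parameters (mounting height, pitch, and field of view are illustrative; the linear shrink schedule is also an assumption, since the text only requires the correction to decrease with distance):

    import math

    def min_visible_ground_distance(cam_height_m: float, pitch_deg: float,
                                    vfov_deg: float) -> float:
        """The ray through the image's lower edge meets the ground at
        h / tan(pitch + vfov / 2); anything closer has its feet cut off."""
        angle = math.radians(pitch_deg + vfov_deg / 2.0)
        return cam_height_m / math.tan(angle)

    def feet_cut_off(dist_m: float, cam_height_m: float = 1.3,
                     pitch_deg: float = 0.0, vfov_deg: float = 40.0) -> bool:
        """True if a pedestrian at dist_m is closer than the nearest visible
        ground point (cf. fig. 9)."""
        return dist_m <= min_visible_ground_distance(cam_height_m, pitch_deg,
                                                     vfov_deg)

    def upper_correction_px(dist_m: float, dist_set_m: float,
                            max_corr_px: int = 40) -> int:
        """Enlargement of the recognition region's upper edge: largest when
        the pedestrian is nearest, shrinking to zero at the set value."""
        if dist_m >= dist_set_m:
            return 0
        return int(max_corr_px * (1.0 - dist_m / dist_set_m))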
In fig. 5, the headlight irradiation range determination correction process 504 of the region correction amount calculation unit 402 calculates a correction amount of the recognition region so that the pedestrian's head is included in it, according to the irradiation state of the vehicle's headlights (which illuminate the environment ahead). Fig. 10 shows the distance relationship between the headlight irradiation range and a pedestrian. As shown in fig. 10(A), on low beam (when the headlight irradiation range is set low), the optical axis points down toward the road surface, so the closer the pedestrian is to the vehicle, the more the light irradiation range 1001 is confined toward the feet, and the vicinity of the head is no longer illuminated.
Therefore, when the pedestrian is observed in the camera image 1002, as shown in fig. 10(B), the head is dark and blends into the background, so the result of the three-dimensional object detection processing 208 is a detection region 1003 that does not include the pedestrian's head.
As shown in fig. 10(C), on high beam (when the irradiation range of the headlights is set high), the optical axis points upward, so the head is within the irradiation range 1004 even when the pedestrian is nearby. When the pedestrian is observed in the camera image 1005, the head is also illuminated, as shown in fig. 10(D), and the result of the three-dimensional object detection processing 208 is a detection region 1006 that includes the pedestrian's head.
Correction amounts are therefore calculated separately for low beam and high beam. The correction amount is determined as follows: for example, on low beam, if a pedestrian stands at a position where the head is unlikely to be included given the irradiation range and distance, a real-space margin of about one head, for example 25 cm, is set as the correction amount toward the upper end of the recognition region (the upper area is enlarged). On high beam the head is likely to be included already, so the correction amount is about a quarter of a head, for example 6 cm.
Low beam or high beam may be determined by referring directly to the vehicle's internal information, or by image-processing a captured image and computing its luminance distribution. The correction amount may also be calculated taking into account the light diffusion characteristics of the type of headlight fitted (LED, halogen, etc.). Further, the correction amount may be adjusted using the result of the external environment determination process 501: for example, when it determines that the scene is only slightly dark, the pedestrian's head can likely be detected even on low beam, so the correction amount may be reduced relative to the low-beam case at night.
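The beam-dependent margins can be turned into an on-image correction as in the following sketch; the 25 cm and 6 cm values come from the description above, while the pixel conversion and the focal length are assumptions:

    LOW_BEAM_MARGIN_M = 0.25    # about one head height (from the text)
    HIGH_BEAM_MARGIN_M = 0.06   # about a quarter head (from the text)

    def head_margin_px(high_beam: bool, dist_m: float,
                       focal_px: float = 1200.0) -> int:
        """Convert the real-space head margin into pixels at the pedestrian's
        distance: a height s at distance Z spans f * s / Z pixels."""
        margin_m = HIGH_BEAM_MARGIN_M if high_beam else LOW_BEAM_MARGIN_M
        return round(focal_px * margin_m / dist_m)

    # Example: low beam, pedestrian at 10 m -> 1200 * 0.25 / 10 = 30 px upward.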
In the detection result correction processing 505 of the region correction amount calculation unit 402, when the detection unit 401 has determined that an occluder is present in the detection region, the correction amount of the recognition region is calculated, using the result of the occluder detection processing 502, so that the occluder is not included in the recognition region. For example, from the parallax distribution 702 of fig. 7, the near-side and far-side objects can be grouped by parallax, the areas of the occluder 703 and the pedestrian distinguished, and the correction amount calculated so that the area of the occluder 703 is excluded, as in the sketch above.
The detection result correction processing 505 may also determine the positional relationship between the occluder 703 and the pedestrian in more detail, using not only the parallax information but also color information, feature extraction, and the like, and calculate the correction amount so as to exclude the occluder 703.
The recognition area correction processing unit 506 of the recognition area correction unit 403 adjusts the size of the recognition area based on the correction amounts calculated by the region correction amount calculation unit 402. Taking the three-dimensional object detection region as the base recognition region, it integrates the correction amounts from the distance determination process 503, the headlight irradiation range determination correction process 504, and the detection result correction processing 505, and applies the combined correction to produce the final recognition region; a sketch of this merging step follows.
For example, based on the determination results, the recognition area correction processing unit 506 may reset the recognition region to the upper half of the detection frame, that is, the area covering half the height from the top of the three-dimensional object's detection frame toward the ground.
When the object is a pedestrian, the recognition region is corrected in this way so that the upper body including the head is contained in it, and whether the object in the corrected region is a pedestrian is then determined. This improves the accuracy of determining whether a pedestrian is present.
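The merging step referred to above might look like the following sketch; the (x, y, w, h) rectangle convention and the clamping to the image bounds are assumptions, not taken from the patent:

    from typing import Optional, Tuple

    Rect = Tuple[int, int, int, int]  # (x, y, width, height)

    def correct_region(region: Rect, up_px: int,
                       occluder_top_y: Optional[int], img_h: int) -> Rect:
        """Enlarge the upper edge by up_px (distance and headlight
        corrections) and, if an occluder covers the lower part of the
        region, keep only the area above it."""
        x, y, w, h = region
        new_y = max(0, y - up_px)          # expand upward, clamped to image
        new_h = h + (y - new_y)
        if occluder_top_y is not None:     # crop off the occluded lower part
            new_h = min(new_h, occluder_top_y - new_y)
        new_h = min(new_h, img_h - new_y)  # stay inside the image
        return (x, new_y, w, max(1, new_h))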
Fig. 11 is a diagram showing an example of the recognition area correction processing result.
In fig. 11(A), for a daytime camera image 1101 in which the feet are cut off, the recognition region 11020 is corrected so that the upper body including the pedestrian's head becomes the recognition region, giving the recognition region 1102.
In fig. 11(B), for a nighttime camera image 1103 with the host vehicle's headlights on low beam, in which it can be determined that the pedestrian's head is not included in the detection region, the recognition region 11030 is corrected so that the upper body including the head becomes the recognition region, giving the recognition region 1104.
In fig. 11(C), for a nighttime camera image 1105 with the headlights on high beam, the region 11060 in which the pedestrian's feet are cut off is corrected so that the upper body including the head becomes the recognition region, giving the recognition region 1106.
The region correction amount calculation unit 402 makes the correction amount of the upper area of the recognition region larger on low beam (when the irradiation range of the forward-facing headlights is set low) than on high beam (when the irradiation range is set high).
Depending on the calculated correction amount, the corrected recognition region may also coincide with the detection region before correction.
As described above, according to this embodiment, when it is determined that part of a pedestrian is hidden by an occluder, the distance from the host vehicle to each area is calculated, the object in the near-side area is identified and removed as the occluder, and whether a pedestrian is present is determined from the remaining image.
Further, when it is determined that the object's feet are cut off, the recognition region is corrected so that the upper body including the head becomes the recognition region; when it is determined that the head is not illuminated, the region is likewise corrected so that it includes the head; whether the object is a pedestrian is then determined using this recognition region containing the upper body including the head.
It is therefore possible to realize an image processing device that accurately recognizes whether a pedestrian is present in the surroundings of the host vehicle even when the pedestrian's whole body cannot be detected.
In the recognition processing unit 404, a plurality of recognizers may be prepared and switched according to the correction result of the recognition region.
For example, when the correction leaves the pedestrian's upper body as the recognition region, recognition is performed using a recognizer created from upper-body learning data rather than the whole-body recognizer.
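A sketch of this recognizer switching; the classifier stubs are placeholders, since the patent does not specify how the recognizers are implemented:

    import numpy as np

    def upper_body_classifier(roi: np.ndarray) -> bool:
        # Placeholder: a recognizer created from upper-body learning data.
        raise NotImplementedError

    def full_body_classifier(roi: np.ndarray) -> bool:
        # Placeholder: a recognizer created from whole-body learning data.
        raise NotImplementedError

    def recognize(roi: np.ndarray, upper_body_only: bool) -> bool:
        """Route the corrected recognition region to the recognizer that
        matches the body part it contains."""
        classifier = (upper_body_classifier if upper_body_only
                      else full_body_classifier)
        return classifier(roi)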
In the above example the present invention is applied to an image processing apparatus mounted on a vehicle, but it can also be applied to an image processing apparatus mounted on a moving object other than a vehicle (a goods transport vehicle or the like).
The above example recognizes pedestrians (people), but the present invention can also be applied to recognizing moving objects other than people, for example other vehicles or small animals.
The detection region is the region in which an object is detected, and the recognition region is the region in which the object detected in the detection region is recognized. The recognition region is thus a correction of the detection region, although the two may coincide after correction.
Description of the symbols
100: stereo camera device; 101: left camera; 102: right camera; 103: image input interface; 104: image processing unit; 105: arithmetic processing unit; 106: storage unit; 107: CAN interface; 108: control processing unit; 109: internal bus; 110: external in-vehicle network CAN; 203, 204: image data; 205: image processing; 206: image buffer; 207: parallax processing; 208: three-dimensional object detection processing; 209: various object recognition processing; 210: vehicle control processing; 301: three-dimensional object region; 401: detection unit; 402: region correction amount calculation unit; 403: recognition area correction unit; 404: recognition processing unit; 501: external environment determination processing; 502: occluder detection processing; 503: distance determination correction processing; 504: headlight irradiation range determination correction processing; 505: detection result correction processing; 506: recognition area correction processing; 601, 1002, 1005, 1101, 1103, 1105: camera images; 701, 801, 1003, 1006: detection regions; 702: parallax distribution; 703: occluder; 901: camera angle of view; 1001: light irradiation range; 1102, 1104, 1106, 11020, 11030, 11060: recognition regions.

Claims (5)

1. An image processing apparatus, comprising:
an imaging unit that obtains an image by imaging an external environment;
a detection unit that detects image information of the external environment;
a recognition area correction amount calculation unit that calculates a correction amount for a recognition area in which a three-dimensional object is recognized, based on the image information detected by the detection unit;
a recognition area correction unit that corrects the recognition area for recognizing the image information of the three-dimensional object, based on the correction amount calculated by the recognition area correction amount calculation unit; and
a recognition processing unit that performs recognition processing on the three-dimensional object in the recognition area corrected by the recognition area correction unit,
wherein the detection unit detects a distance from the imaging unit to the three-dimensional object,
the recognition area correction unit performs correction to enlarge an upper area of the recognition area when the distance detected by the detection unit is equal to or less than a distance set value and it is determined that an upper portion of the three-dimensional object is not in the recognition area, and
when the distance detected by the detection unit is equal to or less than the distance set value, the recognition area correction unit performs correction so as to reduce the magnitude of the correction of the upper area as the distance detected by the detection unit increases.
2. The image processing apparatus according to claim 1, wherein
the detection unit detects the brightness of the external environment, and
the recognition area correction amount calculation unit calculates the correction amount of the recognition area based on the brightness detected by the detection unit.
3. The image processing apparatus according to claim 1, wherein
the detection unit detects an occluder that occludes the three-dimensional object, and
the recognition area correction amount calculation unit calculates the correction amount of the recognition area so as to exclude the occluder detected by the detection unit from the recognition area.
4. The image processing apparatus according to claim 2, wherein
the recognition area correction amount calculation unit performs correction such that, when the irradiation range of the headlights illuminating the external environment ahead is set low, the correction amount of the upper area of the recognition area is larger than when the irradiation range of the headlights is set high.
5. The image processing apparatus according to claim 1, wherein
the three-dimensional object is a pedestrian, and the image processing apparatus is mounted on a vehicle.

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2018236756 2018-12-18
JP2018-236756 2018-12-18
PCT/JP2019/045181 WO2020129517A1 (en) 2018-12-18 2019-11-19 Image processing device

Publications (2)

Publication Number Publication Date
CN113228130A 2021-08-06
CN113228130B 2023-01-17




Also Published As

Publication number Publication date
WO2020129517A1 2020-06-25
JP7201706B2 2023-01-10
CN113228130A 2021-08-06
JPWO2020129517A1 2021-10-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant