CN114762019A - Camera system - Google Patents

Camera system

Info

Publication number
CN114762019A
CN114762019A (application CN202080082086.0A)
Authority
CN
China
Prior art keywords: distance, image, unit, road surface, camera system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080082086.0A
Other languages
Chinese (zh)
Inventor
小林正幸
菲利普·戈麦斯卡巴雷罗
远藤健
大里琢马
汤浅一贵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Astemo Ltd
Original Assignee
Hitachi Astemo Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Astemo Ltd filed Critical Hitachi Astemo Ltd
Publication of CN114762019A

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/04: Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The invention provides an image processing system and a camera system capable of properly detecting an object even when the optical axis of the camera shifts up and down. The camera system of the present invention includes: a distance acquisition unit that acquires a distance for a distance acquisition region; a camera that acquires an image for an image acquisition region, at least a part of which overlaps with the distance acquisition region; a template matching detection unit that detects an object by performing template matching processing on the image using a template; a road surface estimation unit that estimates the position of the road surface from the distance calculated by the distance acquisition unit; and a search line calculation unit that calculates a search line in the image based on the position of the road surface. The search line is a line along positions in the image where the distance on the road surface is a predetermined value, and the template matching processing is performed along the search line.

Description

Camera system
Technical Field
The present invention relates to a camera system.
Background
Various techniques are known for detecting a moving object in an image captured by a camera or the like. Examples of such techniques are described in patent documents 1 and 2. Patent document 1 describes the following: "as a criterion for selectively using the background subtraction method and the saliency calculation method, attention is paid to distance in the image: the background subtraction method is used to detect short-distance objects captured in the image, and the saliency calculation method is used to detect long-distance objects whose pixel-value changes are small and which are difficult to detect by the background subtraction method. Further, the region above the vanishing point of the image, in which an object is unlikely to be captured (a pedestrian is unlikely to appear), is excluded in advance from the region where object detection is performed".
Patent document 2 describes that "the probability of determining the correct object region is improved by setting the template image and the search image to optimum sizes with reference to distance information (a distance map) estimated using parallax images. Information on the one determined object region is output from the object tracking unit 161".
Documents of the prior art
Patent document
Patent document 1: Japanese Patent Laid-Open No. 2007-64894
Patent document 2: Japanese Patent Laid-Open No. 2019-79440
Non-patent document
Non-patent document 1: Yang, H. Hongo, and S. Tanimoto, "A New Approach for In-Vehicle Camera Detection by Group Motion Compensation", 2008 11th International IEEE Conference on Intelligent Transportation Systems, 2008.
Disclosure of Invention
Problems to be solved by the invention
In the case of an on-vehicle camera system, the conventional techniques have the following problem: when attempting to detect an object in a region where no parallax is available, or in a distant region, using template matching processing or a saliency calculation method, the size of the template and the saliency to be extracted cannot be determined, and the processing load is large.
For example, the technique of patent document 1 limits the use of the saliency calculation method to distant regions and regions below the vanishing point, but once it is considered that the optical axis of the camera moves up and down with the vertical motion of the vehicle, the region cannot be narrowed sufficiently, and the processing load remains high.
In addition, the technique of patent document 2 provides no method for determining the template size in an area where no parallax is available, and if no parallax is available in the entire area, detection is difficult.
The present invention has been made in view of the above problems, and an object thereof is to provide an image processing system and a camera system capable of appropriately detecting an object even when the optical axis of the camera shifts up and down.
Means for solving the problems
An example of a camera system according to the present invention includes:
a distance acquisition unit that acquires a distance for the distance acquisition area;
a camera that acquires an image for an image acquisition region at least a part of which overlaps with the distance acquisition region;
a template matching detection unit that detects an object by performing template matching processing on the image using a template;
a road surface estimation unit that estimates a position of a road surface from the distance calculated by the distance acquisition unit; and
a search line calculation unit that calculates a search line in the image based on the position of the road surface,
wherein the search line is a line along positions at a prescribed distance on the road surface in the image,
and the template matching processing is performed along the search line.
The present specification includes the disclosure of Japanese Patent Application No. 2019-227407, which forms the basis of the priority of the present application.
Advantageous Effects of Invention
According to the image processing system and the camera system of the present invention, a specific object can be detected with a low processing load even in an in-vehicle system in which the orientation of the camera changes vertically.
Drawings
Fig. 1 is a block diagram showing an example of the configuration of a camera system according to embodiment 1 of the present invention.
Fig. 2 is a diagram showing a specific example of the fields of view of the image processing system according to embodiment 1.
Fig. 3 is a diagram showing a specific example of an image captured by the stereo camera of fig. 2.
Fig. 4 is a diagram showing an example of search processing performed by the monocular distance detection unit shown in fig. 1.
Fig. 5 is a diagram showing a method of determining the size of the search frame in embodiment 1.
Fig. 6 is a block diagram showing an example of the configuration of the camera system of embodiment 2.
Fig. 7 is a view showing an example of the angle of view of the imaging unit in embodiment 2.
Fig. 8 is a block diagram showing an example of the configuration of a camera system according to embodiment 3.
Fig. 9 is a view showing an example of the angle of view of the distance measuring sensor unit and the imaging unit according to embodiment 3.
Fig. 10 is a view showing an example of the angle of view of the distance measuring sensor unit and the imaging unit according to embodiment 3.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
[Embodiment 1]
The image processing system according to Embodiment 1 of the present invention detects a target object in an image (for example, an image captured with each shutter of a camera), and can detect, for example, a moving object, a three-dimensional object, or a specific object. A "moving object" is an object that is moving against a background. The "background" is not limited to a region of the image composed of unchanged pixels. For example, in the case of an in-vehicle camera performing front sensing, the road surface is an example of a background, and a pedestrian or a vehicle moving relative to the road surface is an example of a moving object. For example, in the case of a fixed monitoring camera, a person, an animal, or a displaced object moving within the camera image is an example of a moving object.
A "three-dimensional object" is an object having height with respect to the road surface, such as a vehicle, a pedestrian, a two-wheeled vehicle, or a utility pole. A "specific object" is an object of a type that is intentionally targeted for detection, for example a vehicle, a pedestrian, or a two-wheeled vehicle. The specific object may be detected by preparing a template, a pattern, or a machine-learned dictionary in advance and judging similarity with it.
Fig. 1 is a block diagram showing an example of the configuration of the camera system of the present embodiment. The camera system includes a plurality of imaging units 101, a stereo matching unit 102, a road surface estimation unit 103, a size calculation unit 104, a search line calculation unit 105, a template search method determination unit 106, a monocular distance detection unit 107, and a monocular overhead-view difference detection unit 108.
The imaging unit 101 includes an image sensor and is configured, for example, as a known camera. The image sensor is provided with a lens for imaging the outside of the device. The image sensor is, for example, a CMOS (Complementary Metal Oxide Semiconductor) image sensor, and converts light into an electrical signal.
The information converted into an electrical signal by the image sensor of the imaging unit 101 is further converted into image data representing an image within the imaging unit 101. The image data contains the luminance values of the pixels. A luminance value can be expressed, for example, as a digital value, either as a luminance value per color or as a single-color luminance value. As the per-color luminance values, for example, RGB (Red Green Blue) or RC (Red Clear) can be used.
The imaging unit 101 transmits the image data to the stereo matching unit 102, the monocular distance detection unit 107, and the monocular overhead-view difference detection unit 108.
The imaging unit 101 has a function of changing the exposure conditions of the image sensor. For example, the image sensor is provided with an electronic shutter such as a global shutter or a rolling shutter, and shooting can be performed with the exposure time set to an arbitrary value. When a photodiode that accumulates charge on receiving light is used as the light-receiving element, the exposure time is the time from when the accumulated charge of the photodiode is reset until the charge is taken out to read the luminance value.
When the exposure time is lengthened, the amount of accumulated charge increases, and the luminance value read out increases. Conversely, when the exposure time is shortened, the accumulated charge decreases, and the luminance value read out decreases. The brightness of the image obtained from the image sensor therefore changes in correlation with the exposure time.
In the imaging unit 101, the amount of charge taken out of the photodiode is converted into a voltage, and A/D (Analog-to-Digital) conversion is performed to obtain a digital value. The imaging unit 101 includes an amplifier for A/D conversion, and the gain of the amplifier may be changeable as part of the exposure conditions. In this case, the luminance value read out varies with the gain setting of the amplifier: increasing the gain increases the luminance value, and decreasing the gain decreases it.
Here, in general, when the luminance value is decreased by a change in the exposure conditions (exposure time and gain in the above example), the luminance value of a dark object may become 0, or the luminance values of a low-contrast object may become uniform, so that contours and shading can no longer be distinguished. This problem is more pronounced when luminance is represented as a digital value, but a problem of the same nature can occur with an analog representation. Similarly, when the luminance value is increased by a change in the exposure conditions, the luminance value of a bright object may saturate at the maximum value, and the contour and shading of the object may no longer be discernible. The exposure conditions are therefore preferably set according to the brightness of the subject to be photographed.
The imaging unit 101 can capture a moving image by repeatedly releasing the electronic shutter in time series and acquiring image data for each shutter. The number of times per unit time that the electronic shutter is released and image data is output is called the frame rate, expressed in fps (frames per second).
The stereo matching unit 102 receives data including the image data from the plurality of imaging units 101 and processes it to calculate the parallax between the image data. The parallax is that of the image data from each non-reference imaging unit 101 with respect to the image data from the reference imaging unit 101. Parallax is the difference in the image coordinates at which the same object appears, caused by the difference in position of the plural imaging units. The parallax is large at short distances and small at long distances, so the distance can be calculated from the parallax.
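As a non-limiting sketch of this parallax-to-distance relationship (a minimal Python example; the function and the numeric focal length and baseline are assumptions for illustration, not values prescribed by the present embodiment):

    def disparity_to_distance(disparity_px, focal_length_px, baseline_m):
        # Z = f * B / d: large disparity means short distance, small
        # disparity means long distance, as described above.
        if disparity_px <= 0:
            return None  # no parallax: the point is effectively at infinity
        return focal_length_px * baseline_m / disparity_px

    # Example with an assumed 1400 px focal length and 0.35 m baseline:
    # disparity_to_distance(10.0, 1400.0, 0.35) -> 49.0 (meters)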
For example, two imaging units 101 are provided, and their fields of view partially overlap and partially do not. Fig. 2 shows a specific example of the fields of view of the image processing system of Fig. 1, in which the two imaging units 101 are mounted facing the front of the vehicle. The camera system of the present embodiment is mounted in a vehicle 201.
The camera system includes a stereo camera 202, and the stereo camera 202 includes the two imaging units 101. The imaging unit 101 that images the right field of view is shown in Fig. 2 as the imaging unit 203. The imaging unit 101 that images the left field of view is shown in Fig. 2 as the imaging unit 204. The imaging units 203 and 204 have overlapping fields of view in the front center field of the vehicle 201.
The imaging unit 203 has a field of view 205. The imaging unit 204 has a field of view 206. The imaging unit 203 has a right field of view on the right side of the vehicle 201, outside the field of view 206 of the imaging unit 204. The imaging unit 204 has a left field of view on the left side of the vehicle 201, outside the field of view 205 of the imaging unit 203.
Fig. 3 shows a specific example of an image captured by the stereo camera 202 of fig. 2. Fig. 3 (a) shows an image 301 acquired by the imaging unit 203, fig. 3 (b) shows an image 302 acquired by the imaging unit 204, and fig. 3 (c) shows a composite image 303 of these 2 images.
The image 301 is from the right-facing imaging unit 203 and captures the front center and the front right. The image 302 is from the left-facing imaging unit 204 and captures the front center and the front left.
The composite image 303 combines the images captured by the imaging units 203 and 204 and includes a region 304, a region 305, and a region 306. Region 304 corresponds to the part of image 302 that does not appear in image 301 and is the left monocular field of view. Region 305 corresponds to the part of image 301 that does not appear in image 302 and is the right monocular field of view.
The regions 304 and 305 are monocular regions, each captured by one camera. The region 306 appears in both images 301 and 302 and is the stereoscopic field of view (distance acquisition region). In this way, the imaging units 203 and 204 acquire images for a region (image acquisition region) at least a part of which overlaps with the stereoscopic field of view (distance acquisition region).
The stereo matching unit 102 may correct distortion of the image data. For example, distortion is corrected using a central projection model or a perspective projection model so that objects at the same height and depth are arranged horizontally in the image. When stereo matching is performed on the corrected left and right images (for example, the images of the two imaging units 101), the parallax is determined by comparing the image data of one imaging unit 101 (reference image data) with the image data of the other imaging unit 101 (comparison image data).
As described above, in the present embodiment, the stereo matching unit 102 functions as a distance acquisition unit that acquires, for the stereoscopic field of view, the distance of an object (for example, the road surface) appearing in that field. In particular, in the present embodiment, the distance acquisition unit includes a plurality of cameras (imaging units 203 and 204) and calculates the distance by performing stereo matching. With this configuration, an expensive distance measuring unit is not required, and cost can be reduced.
In the present specification, the definition and the measurement method of the "distance" may be determined appropriately by those skilled in the art. For example, the "distance of the road surface" is the distance between the road surface and a reference point determined for the camera system. The starting point of the distance (e.g., a specific reference point) may be the midpoint between the imaging units 203 and 204, a specific part of the vehicle 201 (e.g., the center of its front end), or the position of another device (e.g., a distance measuring device). The distance may be measured by a known method, for example from the calculated parallax.
In the present embodiment, a specific object is detected in the regions 304 and 305. Alternatively, in the present embodiment, a specific object is detected within the regions 304, 305, and 306.
In the above description, a central projection model or a perspective projection model is used, but the projection model is not limited to these. For example, when the angle of view is wide, the image data may be projected onto a cylinder or a sphere, for example to keep the image data size down. In that case, matching is performed along search lines laid along positions at the same distance on the projection plane.
When determining the parallax, the vertical coordinates of the vanishing points of the reference image data and the comparison image data are aligned, and for each coordinate of the reference image data it is checked, by a method such as SSD (Sum of Squared Differences) or SAD (Sum of Absolute Differences), at which horizontal coordinate of the same vertical coordinate the comparison image data shows the same object.
SSD and SAD are used above, but other methods are also possible. For example, FAST (Features from Accelerated Segment Test), which extracts corner feature points, and BRIEF (Binary Robust Independent Elementary Features), which checks whether feature points are the same, may be combined to perform the matching processing on the same object.
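A minimal sketch of such a row-wise SAD search follows (the array layout, window size, and disparity range are assumptions for illustration, not part of the disclosure):

    import numpy as np

    def sad_disparity(ref, cmp_img, y, x, win=5, max_disp=64):
        # Compare a (win x win) patch of the reference image against
        # patches of the comparison image shifted along the same row
        # (the same vertical coordinate), as described above.
        h = win // 2
        patch = ref[y - h:y + h + 1, x - h:x + h + 1].astype(np.int32)
        best_d, best_cost = 0, float("inf")
        for d in range(max_disp):
            xc = x - d  # candidate horizontal coordinate in the comparison image
            if xc - h < 0:
                break
            cand = cmp_img[y - h:y + h + 1, xc - h:xc + h + 1].astype(np.int32)
            cost = int(np.abs(patch - cand).sum())  # SAD; square instead for SSD
            if cost < best_cost:
                best_d, best_cost = d, cost
        return best_d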
The road surface estimation unit 103 estimates the position of the road surface from the distance calculated by the stereo matching unit 102. In this estimation, for example, portions, points, or areas having the specific parallax corresponding to the road surface are identified as road surface from the parallax data received from the stereo matching unit 102.
The road surface estimation unit 103 may hold in advance information such as the height of the imaging unit 101 above the road surface and the mounting angle of the imaging unit 101, determine the relationship between coordinates on the image data showing the road surface and distance, and identify points whose parallax follows that relationship as points representing the road surface.
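As a hedged sketch of such a relationship for a flat, level road (the camera height, horizon row, and tolerance below are assumed parameters, not values from the present embodiment): a road point at depth Z projects to image row v = v0 + f * H / Z (v0 being the horizon row, H the camera height), so the expected road disparity at row v is d = B * (v - v0) / H, the focal length cancelling.

    import numpy as np

    def road_disparity_model(v, v0, cam_height_m, baseline_m):
        # Expected road-surface disparity for image row v on a flat road.
        return baseline_m * (v - v0) / cam_height_m

    def label_road(disp_map, v0, cam_height_m, baseline_m, tol=1.0):
        # Pixels whose measured disparity matches the flat-road model
        # within a tolerance are labeled as road surface.
        rows = np.arange(disp_map.shape[0])[:, None]  # one expected value per row
        expected = road_disparity_model(rows, v0, cam_height_m, baseline_m)
        return np.abs(disp_map - expected) < tol  # boolean road mask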
Although the road surface estimation unit 103 of the present embodiment uses parallax, parallax is not required. For example, a distance measurement result, such as the result of LiDAR (Light Detection and Ranging) or millimeter-wave radar ranging mapped onto the coordinates of the camera image, may also be used.
In the case of LiDAR, the mapping may be performed geometrically based on the positional and orientation relationship between the camera and the LiDAR. The road surface estimation may be performed either after or before the geometric transformation. As long as the road surface has been determined, and the geometric transformation performed, by the time the search line calculation unit 105 determines the search line, the search can then be performed by template matching or the like.
The size calculation unit 104 calculates the size of the frame (used by the monocular distance detection unit 107) with which an image is cut out during template matching processing and the like. The size calculation unit 104 calculates this size from the depth distance of a search line of equal depth distance. That is, the size calculation unit 104 functions as a template size determination unit.
The search line calculation unit 105 calculates the search line used by the monocular distance detection unit 107 in the template matching processing from points representing the road surface at the same depth distance. A search line is a line in an image, or a region thereof, extending along positions at the same depth distance. In the present embodiment, it is a line along positions in the image where the distance on the road surface, taking the camera system as the starting point, is a predetermined value (including values within a predetermined tolerance).
The template search method determination unit 106 determines a search method for the template matching process (executed by the monocular distance detection unit 107) based on the position of the road surface and the size of the frame determined by the size calculation unit 104.
By thus determining the template size, the accuracy of the template matching process is improved, or the processing load is reduced.
The monocular distance detection unit 107 is a template matching detection unit that detects an object (e.g., a specific object) by performing template matching processing, using templates, on the images acquired by the imaging units 203 and 204.
The monocular distance detection unit 107 searches for an object matching the features of the specific object to be detected by template matching processing. The template matching processing may include pattern matching processing, and may also include matching processing based on a machine-learned dictionary. The machine-learned dictionary can be generated, for example, by learning various variations of the specific object by AdaBoost using Haar-like features. The monocular distance detection unit 107 cuts out the target region for the template matching processing from the image data using a frame of the size determined by the template search method determination unit 106, and detects the specific object in this manner. For example, it determines whether the image of the target region is similar to the image of the specific object, and thereby whether the specific object appears in the target region.
In the search processing for the specific object, a specific region of the image is cut out, and the similarity between the image of the cut-out region and a model (or model image) represented by a template, a pattern, or a machine-learned dictionary is checked. In the cropping, the model is preferably aligned to an appropriate size and angle (e.g., close to the size and angle in the actual image), for example by normalization processing. However, since a vehicle, pedestrian, or two-wheeled vehicle is rarely imaged strongly inclined with respect to the road surface, the angle can usually be regarded as appropriate, and the angle-alignment processing may then be omitted.
The model can be matched to the appropriate size if the actual size of the model, the distance between the imaged specific object and the imaging unit 101, and the focal length and pixel pitch of the imaging unit are known.
Consider, for example, the case where positions on the road surface at the same distance from the camera system are arranged horizontally. Since vehicles, pedestrians, two-wheeled vehicles, and the like stand in contact with the road surface, if they are at equal distances from the camera system their lower ends are considered to be horizontally aligned (that is, their Y coordinates coincide). In this case, the distance from the camera system to such a specific object is considered equal to the distance to the road surface at its lower end (the difference in distance due to the difference in vertical position along the vertical line is regarded as negligible here; where it is not, it may be converted or corrected as appropriate).
It was assumed above that positions at the same distance on the road surface are arranged horizontally, in which case the search line is a horizontal straight line. However, when the road surface is inclined, when the imaging unit 101 is inclined, or when distortion occurs at the periphery of the image, positions at the same distance may be arranged not horizontally but obliquely or along a curve.
In such cases, the line connecting positions at the same distance on the road surface (the search line) may be expressed by a model function having one or more parameters; the parameters may be determined by the least squares method or the like, and the search line may then be specified concretely by a numerical expression, a table, or the like.
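A minimal sketch of such a least-squares fit (the polynomial model and its degree are assumptions for illustration; a horizontal straight line is the degree-0 special case):

    import numpy as np

    def fit_search_line(points_xy, degree=2):
        # points_xy: (N, 2) image coordinates (x, y) of road points whose
        # distance is (approximately) the prescribed value. A low-degree
        # polynomial y = f(x) serves as the model function with one or
        # more parameters, determined by least squares.
        x, y = points_xy[:, 0], points_xy[:, 1]
        coeffs = np.polyfit(x, y, degree)  # least-squares parameter estimate
        return np.poly1d(coeffs)           # callable: row = line(column)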
In this way, the road surface can be estimated and the search line determined even for regions where parallax cannot be obtained (for example, the regions 304 and 305). The monocular distance detection unit 107 searches for the specific object by template matching or the like along the determined search line. For example, a frame is set with the search line as its lower end, and the area within the frame is cut out to determine whether the specific object appears there.
Fig. 4 shows an example of the search processing performed by the monocular distance detection unit 107. The composite image 400 is obtained by combining the images of the left and right imaging units 101 and corresponds to the composite image 303 in Fig. 3. The road surface portion 401 appears in the composite image 400. The same-distance points 402 are points of equal parallax, that is, points at equal distance.
The search line calculation unit 105 determines the search line 403 in the image from the same-distance points 402. The search line 403 can be calculated based on the position (or distance) of the road surface; it is, for example, a line along positions in the image where the distance on the road surface is a predetermined value (including values within a predetermined tolerance).
The frame 404 is the frame of the image area cut out for comparison with the specific object in the template matching processing of the monocular distance detection unit 107. A specific object 405 appears at a position on the search line 403.
In the present embodiment, the search line 403 is created on the road surface, the frame 404 of a specific size is placed on the search line 403, and the frame 404 is moved left and right with its lower end kept coincident with the search line 403. In this way, the template matching processing of the monocular distance detection unit 107 is performed along the search line 403. The monocular distance detection unit 107 performs template matching processing on the images cut out by the frame 404 at successive positions while moving it left and right, and determines whether each is similar to the specific object.
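A minimal sketch of this one-dimensional search follows (the step size and the classifier callable standing in for the template or dictionary similarity test are assumptions for illustration):

    def search_along_line(image, line, frame_w, frame_h, classifier, step=8):
        # Slide a fixed-size frame along the search line: the lower edge
        # of the frame is kept on the line while the frame moves left and
        # right, so the search is one-dimensional.
        detections = []
        for x in range(0, image.shape[1] - frame_w, step):
            y_bottom = int(line(x + frame_w // 2))  # line gives the lower end
            y_top = y_bottom - frame_h
            if y_top < 0 or y_bottom > image.shape[0]:
                continue
            crop = image[y_top:y_bottom, x:x + frame_w]
            if classifier(crop):  # similar to the specific object?
                detections.append((x, y_top, frame_w, frame_h))
        return detections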
In the present embodiment, the template matching process is performed in the monocular area (for example, areas 304 and 305 in fig. 3), and is not performed in the stereoscopic visual field (for example, area 306 in fig. 3). In this way, the number of template matching processes can be reduced, and the processing load can be reduced. However, as a modification, the template matching process may be performed in the entire or a part of the stereoscopic vision field.
When the image cut out at some position is similar to the specific object, the specific object is detected as appearing at that position. The distance to the detected specific object is determined to be equal to the distance of the search line 403.
Here, consider the case where the orientation of the imaging unit 101 changes vertically. Since the road surface then moves up and down in the image, the vertical position in the image of the road surface at a given distance is not fixed, and template matching would have to be performed over the assumed range of vertical movement.
Therefore, in the conventional technique, the template matching processing must be performed by moving a frame of a given size to each position within a range extending two-dimensionally in the vertical and horizontal directions, resulting in a high processing load.
In contrast, according to the image processing system and the camera system of embodiment 1 of the present invention, even if the road surface moves up and down in the image, since the search line 403 corresponding to the road surface is set, the moving range of the frame is limited only to the left and right direction (or the one-dimensional range along the search line 403), and the processing load is low. In this way, even in an in-vehicle system in which the orientation of the camera is changed vertically, the specific object can be detected with a low processing load.
Multiple search lines 403 may also be generated. For example, separate search lines 403 may be generated for different distances, and the monocular distance detection unit 107 may perform template matching processing along each search line 403. In this way, the specific object can be searched for at various distances by template matching. When the distance range in which the specific object is to be detected is predetermined, a plurality of search lines are created within that range and searched; this suppresses unnecessary searching outside the range and shortens the processing time.
When a plurality of search lines 403 are set, the interval between adjacent search lines 403 can be designed arbitrarily. For example, each search line 403 may be set with a certain pixel interval on the image, or may be set with a certain distance interval on the road surface.
Here, when the template matching processing tolerates a certain amount of distance deviation, the interval between the search lines 403 may be determined according to the robustness of the template matching processing. For example, when the template matching processing is robust and the specific object can be detected even if the distance of the search line 403 differs considerably from the distance of the specific object, the interval between adjacent search lines 403 may be made large. This reduces the number of search lines 403 and shortens the processing time.
The intervals between the positions at which the template matching processing is performed along one search line 403 (for example, intervals in the left-right direction) can be designed in the same way. For example, when the template matching processing is robust and the specific object can be detected even if the frame 404 is somewhat offset from it, the interval of the template matching processing may be made large. This reduces the number of template matching operations and shortens the processing time.
Fig. 5 shows the method of determining the size of the frame 404. It shows the conceptual positional relationship of a specific object 501, the lens 502 of the imaging unit 101, the image sensor 503 (imaging element) of the imaging unit 101, and the road surface 504, viewed from a horizontal direction orthogonal to the optical axis.
The focal length 505 of the imaging unit 101 is f. The depth distance 506 between the specific object 501 and the imaging unit 101 (more precisely, the lens 502) is Z. The height 507 in the image (the height of the specific object 501 projected on the image sensor 503) is the value to be obtained. The height 508 of the specific object 501 is h.
Equation 509 expresses the relationship of these parameters: (height 507 in the image) = f * h / Z. From equation 509, the height of the frame 404 in the image can be determined. The height is described above, but the width can be determined in the same way, replacing the height h with the width and the height 507 in the image with the width in the image.
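A hedged numeric sketch of this relation follows (the lens, pixel pitch, and object height are assumed example values, not part of the disclosure):

    def frame_height_px(obj_height_m, depth_m, focal_length_mm, pixel_pitch_um):
        # Projected height on the sensor is f * h / Z (equation 509);
        # dividing by the pixel pitch converts it to pixels. The width
        # is obtained the same way by substituting the object width.
        height_on_sensor_mm = focal_length_mm * obj_height_m / depth_m
        return height_on_sensor_mm * 1000.0 / pixel_pitch_um  # mm -> um, / pitch

    # Example: a 1.7 m pedestrian at 30 m with an assumed 6 mm lens and
    # 3.0 um pixels: frame_height_px(1.7, 30.0, 6.0, 3.0) -> about 113 px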
The monocular overhead-view difference detection unit 108 receives data including image data from the imaging unit 101 and processes it, thereby detecting an object in the image. An object (e.g., a moving object) can be detected using the overhead-view difference method.
The overhead-view difference method compares overhead-view images at different times for a part of the image (the 1st region). In the overhead-view difference method, a background difference of the overhead-view images may be taken, or a time difference of the overhead-view images may be taken.
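A minimal sketch of such a time difference of overhead-view images (grayscale frames are assumed, H_ground is an assumed pre-calibrated ground-plane homography, and ego-motion compensation is omitted for brevity):

    import cv2
    import numpy as np

    def overhead_difference(img_t0, img_t1, H_ground, size=(400, 400), thresh=30):
        # Warp both frames to a top-down (bird's-eye) view, then difference
        # them; pixels that changed between the two times indicate a moving
        # object, as described above.
        top0 = cv2.warpPerspective(img_t0, H_ground, size)
        top1 = cv2.warpPerspective(img_t1, H_ground, size)
        diff = cv2.absdiff(top0, top1)
        return (diff > thresh).astype(np.uint8)  # binary change mask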
In such a case, the monocular distance detection unit 107 may perform template matching processing on another part of the image (the 2nd region). The 2nd region may be a region different from the 1st region, and in particular a region that does not overlap the 1st region. In this way, part of the template matching processing performed by the monocular distance detection unit 107 can be omitted, reducing the processing load.
The monocular overhead-view difference detection unit 108 may also be omitted.
[Embodiment 2]
Embodiment 2 of the present invention will be described. Embodiment 2 is basically the same as Embodiment 1, but differs in the following respects. In Embodiment 2, imaging units are provided on the four sides of the vehicle so that detection over the entire periphery can be performed; the horizontal field of view of each imaging unit is accordingly wide.
Fig. 6 is a block diagram showing an example of the configuration of the camera system of Embodiment 2. The camera system of Embodiment 2 includes imaging units 601 each capable of capturing an image with a horizontal angle of view exceeding 180 degrees. The camera system further includes a stereo matching unit 602, which receives images from the plurality of imaging units 601 and performs stereo matching processing in the regions where their fields of view overlap. This makes it possible to estimate the distance of the road surface and realize the template matching processing.
Fig. 7 is a view showing an example of the angles of view of the imaging units 601 when the camera system of Embodiment 2 is mounted on a vehicle. The 1st imaging unit 601 has a horizontal angle of view 701. The 2nd imaging unit 601 has a horizontal angle of view 702. The 3rd imaging unit 601 has a horizontal angle of view 703. The 4th imaging unit 601 has a horizontal angle of view 704.
The region at the left end of the horizontal angle of view 701 overlaps the region at the front end of the horizontal angle of view 702. The region at the right end of the horizontal angle of view 701 overlaps the region at the front end of the horizontal angle of view 703. The region at the left end of the horizontal angle of view 704 overlaps the region at the rear end of the horizontal angle of view 702. The region at the right end of the horizontal angle of view 704 overlaps the region at the rear end of the horizontal angle of view 703.
The camera system of Embodiment 2 performs stereo matching in the areas where plural angles of view overlap, performs road surface estimation in those areas, and, based on the result, creates search lines connecting points whose parallax corresponds to the same distance and performs template matching along them.
[Embodiment 3]
Embodiment 3 of the present invention will be described. Embodiment 3 is basically the same as Embodiment 1, but differs in the following respects. In Embodiment 3, a distance measurement sensor unit is provided; the road surface is estimated from the distance information of the distance measurement sensor unit, a search line is determined, and template matching processing is performed.
Fig. 8 is a block diagram showing an example of the configuration of the camera system according to Embodiment 3. The camera system includes a distance measurement sensor unit 801 that can acquire three-dimensional distance information of the surroundings. The distance measurement sensor unit 801 is, for example, a LiDAR (Laser Imaging Detection and Ranging) sensor. In this embodiment, the distance measurement sensor unit 801 measures the distance.
The camera system includes a data integration unit 802 that receives the three-dimensional distance information from the distance measurement sensor unit 801, receives the camera image from the imaging unit 101, and superimposes the coordinate-converted distance information on the camera image. In the superimposing processing, the three-dimensional distance information is converted into camera image coordinates with the three-dimensional position of the imaging unit 101 as the viewpoint, and superimposed. The distance of the road surface is thereby estimated, so the template matching processing can be realized.
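A minimal sketch of this coordinate conversion (R, t, and K are assumed sensor-to-camera extrinsics and camera intrinsics; the variable names are illustrative, not part of the disclosure):

    import numpy as np

    def project_points(points_lidar, R, t, K):
        # points_lidar: (N, 3) points in the range-sensor frame. R (3x3)
        # and t (3,) map them into the camera frame; K (3x3) is the camera
        # intrinsic matrix. Returns pixel coordinates and per-point depth,
        # so distances can be superimposed on the camera image.
        pts_cam = points_lidar @ R.T + t
        pts_cam = pts_cam[pts_cam[:, 2] > 0]  # keep points ahead of the lens
        uvw = pts_cam @ K.T                   # pinhole projection
        uv = uvw[:, :2] / uvw[:, 2:3]         # normalize by depth
        return uv, pts_cam[:, 2]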
Fig. 9 shows an example of the angles of view of the distance measuring sensor unit 801 and the imaging unit 101 when the camera system of embodiment 3 is mounted on a vehicle. The distance measurement sensor portion 801 has a field angle 901 (distance acquisition area). The distance measuring sensor unit 801 is attached to, for example, a front bumper of a vehicle. The image pickup unit 101 has a field angle 902. The imaging unit 101 is attached to, for example, a windshield of a vehicle.
The camera system of Embodiment 3 performs template matching processing outside the angle of view 901 of the distance measurement sensor unit 801. That is, the template matching processing is performed only on the portion of the angle of view 902 of the imaging unit 101 that does not overlap the angle of view 901 of the distance measurement sensor unit 801. Limiting the range in which template matching processing is performed in this way reduces the processing load.
As a modification of Embodiment 3, the camera system may use a distance measurement sensor unit 801 having the angles of view shown in Fig. 10. The imaging unit 101 has an angle of view 1001. The distance measurement sensor unit 801 has an angle of view 1002 (distance acquisition region).
The angle of view 1002 of the distance measurement sensor unit 801 covers the full 360 degrees of the periphery. Objects can therefore be detected by the distance measurement sensor unit 801 alone; however, distance information alone is sometimes insufficient to identify the type of an object.
In the camera system according to this modification, template matching processing is performed within the angle of view 1002 of the distance measurement sensor unit 801 to identify the type of an object (for example, whether or not it is the specific object). For example, a search line is determined within the angle of view 1001 of the imaging unit 101 based on the road surface estimation result obtained using the entire-periphery angle of view 1002 of the distance measurement sensor unit 801, and template matching processing is performed along it. This enables the type of an object to be identified efficiently.
The present invention is not limited to the above-described embodiments and modifications, and various modifications are also included in the present invention. For example, the above embodiments are described in detail to explain the present invention in an easily understandable manner, and the present invention is not necessarily limited to all the configurations described.
Note that a part of the configuration of one embodiment may be replaced with the configuration of another embodiment, and the configuration of one embodiment may be added to the configuration of another embodiment. Further, some of the components of the embodiments may be added, deleted, or replaced with other components.
Further, each of the above configurations, functions, processing units, processing methods, and the like may be realized partially or entirely in hardware, for example by designing them as integrated circuits. Each of the above configurations, functions, and the like may also be realized in software, by a processor interpreting and executing a program that realizes each function. Information such as the programs, tables, and files realizing the functions can be stored in a storage device such as a memory, a hard disk, or an SSD (Solid State Drive), or on a recording medium such as an IC card or an SD card.
The control lines and information lines shown are those considered necessary for the description; not all control lines and information lines of a product are necessarily shown. In practice, almost all components may be regarded as connected to one another.
Description of the symbols
101, 203, 204, 601 … imaging unit
102 … stereo matching unit (distance acquisition unit)
103 … road surface estimation unit
104 … size calculation unit (template size determination unit)
105 … search line calculation unit
106 … template search method determination unit
107 … monocular distance detection unit
108 … monocular overhead-view difference detection unit
201 … vehicle
202 … stereo camera
301, 302 … image
303, 400 … composite image
401 … road surface portion
402 … same-distance point
403 … search line
404 … frame
405, 501 … specific object
502 … lens
503 … image sensor
504 … road surface
505 … focal length
506 … depth distance
507, 508 … height
602 … stereo matching unit (distance acquisition unit)
801 … distance measurement sensor unit
802 … data integration unit.
All publications, patents, and patent applications cited in this specification are herein incorporated by reference as if fully set forth.

Claims (8)

1. A camera system, characterized in that,
the camera system includes:
a distance acquisition unit that acquires a distance for a distance acquisition region;
a camera that acquires an image for an image acquisition region at least a part of which overlaps with the distance acquisition region;
a template matching detection unit that detects an object by performing template matching processing on the image using a template;
a road surface estimation unit that estimates a position of the road surface based on the distance calculated by the distance acquisition unit; and
a search line calculation unit that calculates a search line in the image based on the position of the road surface,
wherein the search line is a line along positions at a prescribed distance on the road surface in the image,
and the template matching processing is performed along the search line.
2. The camera system of claim 1,
the distance acquisition unit includes a plurality of cameras and calculates a distance by performing stereo matching.
3. The camera system according to claim 2,
the bird's-eye view difference detection unit is further provided with a bird's-eye view difference detection unit for detecting an object by a bird's-eye view difference method of comparing bird's-eye view images at different times with respect to the No. 1 region in the image,
the template matching detection unit performs the template matching process in a 2 nd region different from the 1 st region in the image.
4. The camera system of claim 1,
the vehicle further includes a template size determination unit that determines the size of the template based on the position of the road surface.
5. The camera system of claim 1,
wherein the template matching processing is performed in a monocular region captured by one camera.
6. The camera system according to claim 1,
the distance acquisition unit includes a distance measurement sensor unit for measuring a distance.
7. The camera system according to claim 6,
wherein the template matching processing is performed outside the distance acquisition region.
8. The camera system according to claim 6,
wherein the template matching processing is performed in the distance acquisition region to identify the category of the object.
Application CN202080082086.0A, priority date 2019-12-17, filing date 2020-10-09: Camera system (published as CN114762019A, pending)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2019-227407 2019-12-17
JP2019227407A JP7293100B2 (en) 2019-12-17 2019-12-17 camera system
PCT/JP2020/038395 WO2021124657A1 (en) 2019-12-17 2020-10-09 Camera system

Publications (1)

Publication Number: CN114762019A (en); Publication Date: 2022-07-15

Family

ID=76431396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080082086.0A Pending CN114762019A (en) 2019-12-17 2020-10-09 Camera system

Country Status (3)

Country Link
JP (1) JP7293100B2 (en)
CN (1) CN114762019A (en)
WO (1) WO2021124657A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2023076169A (en) * 2021-11-22 2023-06-01 日立Astemo株式会社 Image processing device

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002099997A (en) * 2000-09-26 2002-04-05 Mitsubishi Motors Corp Detection device for moving object
JP2004120585A (en) * 2002-09-27 2004-04-15 Mitsubishi Electric Corp On-vehicle driving lane recognition device
JP2004144644A (en) * 2002-10-25 2004-05-20 Fuji Heavy Ind Ltd Topography recognition device, topography recognition means, and moving quantity detection method of moving body
JP2006318061A (en) * 2005-05-10 2006-11-24 Olympus Corp Apparatus, method, and program for image processing
JP2007064894A (en) * 2005-09-01 2007-03-15 Fujitsu Ten Ltd Object detector, object detecting method, and object detection program
JP2007281989A (en) * 2006-04-10 2007-10-25 Fuji Heavy Ind Ltd Stereoscopic monitoring apparatus
US20080089557A1 (en) * 2005-05-10 2008-04-17 Olympus Corporation Image processing apparatus, image processing method, and computer program product
JP2010020710A (en) * 2008-07-14 2010-01-28 Mazda Motor Corp Device for detecting road side fixed object
WO2011090053A1 (en) * 2010-01-21 2011-07-28 クラリオン株式会社 Obstacle detection warning device
JP2013140515A (en) * 2012-01-05 2013-07-18 Toyota Central R&D Labs Inc Solid object detection device and program
JP2013161241A (en) * 2012-02-03 2013-08-19 Toyota Motor Corp Object recognition device and object recognition method
JP2013174494A (en) * 2012-02-24 2013-09-05 Ricoh Co Ltd Image processing device, image processing method, and vehicle
JP2014021017A (en) * 2012-07-20 2014-02-03 Sanyo Electric Co Ltd Information acquisition device and object detection device
JP2014052307A (en) * 2012-09-07 2014-03-20 Sanyo Electric Co Ltd Information acquisition device and object detection device
WO2014050285A1 (en) * 2012-09-27 2014-04-03 日立オートモティブシステムズ株式会社 Stereo camera device
CN104881661A (en) * 2015-06-23 2015-09-02 河北工业大学 Vehicle detection method based on structure similarity
US20160014406A1 (en) * 2014-07-14 2016-01-14 Sadao Takahashi Object detection apparatus, object detection method, object detection program, and device control system mountable to moveable apparatus
WO2016092925A1 (en) * 2014-12-09 2016-06-16 クラリオン株式会社 Approaching vehicle detection device
CN106228110A (en) * 2016-07-07 2016-12-14 浙江零跑科技有限公司 A kind of barrier based on vehicle-mounted binocular camera and drivable region detection method
JP2017045283A (en) * 2015-08-26 2017-03-02 株式会社ソニー・インタラクティブエンタテインメント Information processing apparatus and information processing method
JP2017096777A (en) * 2015-11-25 2017-06-01 日立オートモティブシステムズ株式会社 Stereo camera system
JP2017167970A (en) * 2016-03-17 2017-09-21 株式会社リコー Image processing apparatus, object recognition apparatus, device control system, image processing method, and program
CN109631829A (en) * 2018-12-17 2019-04-16 南京理工大学 A kind of binocular distance measuring method of adaptive Rapid matching

Also Published As

Publication number Publication date
WO2021124657A1 (en) 2021-06-24
JP7293100B2 (en) 2023-06-19
JP2021096638A (en) 2021-06-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination