CN111998780B - Target ranging method, device and system

Info

Publication number: CN111998780B
Application number: CN201910447707.XA
Authority: CN (China)
Prior art keywords: target, vehicle, image, blind area, distance
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN111998780A
Inventor: 方先成
Current assignee: Hangzhou Hikvision Digital Technology Co Ltd
Original assignee: Hangzhou Hikvision Digital Technology Co Ltd
History: application filed by Hangzhou Hikvision Digital Technology Co Ltd; priority to CN201910447707.XA; publication of application CN111998780A; application granted; publication of grant CN111998780B.

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/022: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by means of tv-camera scanning
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/04: Interpretation of pictures
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 3/00: Measuring distances in line of sight; Optical rangefinders

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Measurement Of Optical Distance (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a target ranging method comprising the following steps: detecting a target to be measured in each frame of image acquired by a vehicle-mounted camera; traversing each frame of image and judging whether the target to be measured in the currently traversed image has entered the blind area; if so, selecting, from the acquired images, the frame whose acquisition time differs least from that of the currently traversed image and in which the target to be measured has not entered the blind area; and determining, based on the selected image, ranging information from before the target entered the blind area, and determining, according to that ranging information, the actual distance from the vehicle after the target enters the blind area. When the target to be measured in the currently traversed image is judged to have entered the blind area, its distance from the vehicle while in the blind area is obtained from the ranging information recorded while it was outside the blind area, thereby solving the problem that a purely vision-based system cannot measure the distance of a target located in the blind area.

Description

Target ranging method, device and system
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a system for measuring a distance of a target.
Background
In the field of vehicle driver assistance, the vision-based target ranging module in current use can only measure the distance of a target that is fully exposed in the image (that is, a target whose intersection with the road surface is visible in the image). When part of the target enters the vehicle blind area, the camera can no longer capture the intersection of the target and the road surface, so the conventional target ranging module cannot calculate the target's distance.
Disclosure of Invention
In view of this, the present application provides a target ranging method, device and system to solve the problem that the currently used target ranging module cannot calculate the distance of the target in the vehicle blind area.
According to a first aspect of embodiments of the present application, a target ranging method is provided, where the method includes:
detecting a target to be measured in each frame of image acquired by a vehicle-mounted camera, wherein the vehicle-mounted camera is located above the vehicle engine hood, and the region between the visible-region boundary of the vehicle-mounted camera and the vehicle is a blind area;
traversing each frame of image, and judging whether the target to be measured in the currently traversed image has entered the blind area;
if so, selecting, from the acquired images, a frame whose acquisition time differs least from that of the currently traversed image and in which the target to be measured has not entered the blind area;
and determining, based on the selected image, ranging information from when the target to be measured had not entered the blind area, and determining, according to the ranging information, the actual distance from the vehicle after the target to be measured enters the blind area.
According to a second aspect of embodiments of the present application, there is provided a target ranging apparatus, the apparatus comprising:
the detection module is used for detecting a target to be measured in each frame of image acquired by a vehicle-mounted camera, wherein the vehicle-mounted camera is located above the vehicle engine hood, and the region between the visible-region boundary of the vehicle-mounted camera and the vehicle is a blind area;
the traversal judging module is used for traversing each frame of image and judging whether the target to be measured in the currently traversed image has entered the blind area;
the selecting module is used for, when the judgment result is yes, selecting from the acquired images a frame whose acquisition time differs least from that of the currently traversed image and in which the target to be measured has not entered the blind area;
and the determining module is used for determining, based on the selected image, ranging information from when the target to be measured had not entered the blind area, and determining, according to the ranging information, the actual distance from the vehicle after the target to be measured enters the blind area.
According to a third aspect of embodiments of the present application, there is provided a target ranging system, the system including:
the vehicle-mounted camera is used for acquiring images and sending the images to the processor;
the processor is used for detecting a target to be measured in each frame of image, wherein the vehicle-mounted camera is located above the vehicle engine hood and the region between the visible-region boundary of the vehicle-mounted camera and the vehicle is a blind area; traversing each frame of image and judging whether the target to be measured in the currently traversed image has entered the blind area; and if so, selecting from the acquired images a frame whose acquisition time differs least from that of the currently traversed image and in which the target to be measured has not entered the blind area, determining, based on the selected image, ranging information from when the target had not entered the blind area, and determining, according to the ranging information, the actual distance from the vehicle after the target enters the blind area.
In the embodiments of the present application, a target to be measured is detected in each frame of image acquired by the vehicle-mounted camera, where the vehicle-mounted camera is located above the vehicle engine hood and the region between the camera's visible-region boundary and the vehicle is a blind area. Each frame of image is then traversed, and it is judged whether the target to be measured in the currently traversed image has entered the blind area. If so, a frame whose acquisition time differs least from that of the currently traversed image, and in which the target has not entered the blind area, is selected from the acquired images; ranging information from before the target entered the blind area is determined based on the selected image; and the actual distance from the vehicle after the target enters the blind area is determined according to that ranging information.
Based on the above description, when the target to be measured in the currently traversed image is judged to have entered the blind area, its distance from the vehicle while in the blind area is obtained from the ranging information recorded while it was in the non-blind area, thereby solving the problem that a purely vision-based system cannot measure the distance of a target located in the blind area.
Drawings
FIGS. 1A-1B are schematic diagrams of a vehicle blind area according to an exemplary embodiment of the present application;
FIG. 2A is a flowchart of an embodiment of a target ranging method according to an exemplary embodiment of the present application;
FIG. 2B is a schematic diagram of a target to be measured in the fully exposed state according to the embodiment shown in FIG. 2A;
FIG. 2C is a schematic diagram of a target to be measured entering the blind area according to the embodiment shown in FIG. 2A;
FIG. 3A is a flowchart of another embodiment of a target ranging method according to an exemplary embodiment of the present application;
FIG. 3B is a diagram of key-information matching according to the embodiment shown in FIG. 3A;
FIG. 3C is a schematic diagram of the pinhole imaging principle according to the embodiment shown in FIG. 3A;
FIG. 3D is a schematic diagram of another pinhole imaging configuration according to the embodiment shown in FIG. 3A;
FIG. 3E is a schematic diagram of ranging a target located in the blind area according to the embodiment shown in FIG. 3A;
FIG. 4 is a block diagram of a target ranging system according to an exemplary embodiment of the present application;
FIG. 5 is a block diagram of an embodiment of a target ranging device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
The vision-based target ranging module in current use applies the pinhole imaging principle and can only measure the distance of a target that is complete in the image; that is, the camera must capture the intersection of the target and the road surface. When the target enters the vehicle blind area, the camera cannot capture this intersection, so the target ranging module cannot measure the target's distance by the pinhole imaging principle; the driver then cannot know the distance to the target in front of the vehicle, which creates a hidden danger to driving safety.
The vehicle blind area refers to the near region in front of the vehicle that the vehicle-mounted camera on the front windshield cannot capture, either because the vehicle engine hood occludes it or because of the camera's mounting angle. As shown in FIG. 1A, a blind area is caused by occlusion by the engine hood, and FIG. 1B shows a blind area caused by the mounting angle of the vehicle-mounted camera.
To measure the distance of a target that has entered the blind area, a multi-sensor fusion scheme can be adopted: a ranging radar is installed on the vehicle in addition to the vehicle-mounted camera, and the radar measures the distance of the target in the vehicle blind area. This, however, undoubtedly increases hardware cost, and radar ranging is unavailable in some low-end driver-assistance products equipped only with a vehicle-mounted camera.
To solve the above problem, the present application provides a target ranging method. A target to be measured is detected in each frame of image acquired by the vehicle-mounted camera, where the vehicle-mounted camera is located above the vehicle engine hood and the region between the camera's visible-region boundary and the vehicle is a blind area. Each frame of image is then traversed, and it is judged whether the target to be measured in the currently traversed image has entered the blind area. If so, a frame whose acquisition time differs least from that of the currently traversed image, and in which the target to be measured has not entered the blind area, is selected from the acquired images; ranging information from before the target entered the blind area is determined based on the selected image; and the actual distance from the vehicle after the target enters the blind area is determined according to that ranging information.
Based on the above description, when the target to be measured in the currently traversed image is judged to have entered the blind area, its distance from the vehicle while in the blind area is obtained from the ranging information recorded while it was in the non-blind area, overcoming the defect that a purely vision-based system cannot measure the distance of a target located in the blind area.
It should be noted that, for a target that does not appear in the non-blind area but appears directly in the blind area, ranging information for the target outside the blind area can be obtained from images collected by a surround-view camera, so that its distance while in the blind area can still be determined.
It should further be noted that, in multi-sensor fusion, the vision-based blind-area ranging result may also participate in the fusion scheme, with a more accurate ranging result obtained by comparison.
Fig. 2A is a flowchart of an embodiment of a target ranging method according to an exemplary embodiment of the present application, where the target ranging method may be applied to a processor, and the processor may be integrated in a vehicle-mounted camera or not integrated in the vehicle-mounted camera.
As shown in fig. 2A, the target ranging method includes the following steps:
step 201: and detecting the target to be detected in each frame of image acquired by the vehicle-mounted camera.
The vehicle-mounted camera is located above the vehicle engine hood and may be disposed at any convenient mounting position above it, such as the roof or the front windshield. Wherever it is disposed above the hood, however, occlusion by the engine hood or the camera's mounting angle means that the region between the camera's visible-region boundary and the head of the vehicle cannot be captured, forming a blind area.
The blind area here is, by way of example, the blind area in front of the vehicle, and may include the directly-ahead blind area and the side-front blind areas.
In an embodiment, targets of a preset type are detected in each image acquired by the vehicle-mounted camera to obtain a target frame for each target; then, among the target frames obtained in each image, a target whose frame has no connection with any other target frame is determined to be a target to be measured.
In practice, a target that is occluded by other targets and not fully exposed does not, in a real scene, enter the vehicle blind area directly: the target frame of an occluded target is connected with other target frames, while a target not connected with any other frame is the one likely to enter the blind area. Fully exposed targets can therefore be taken as the targets to be measured.
For example, the preset type of object may be a vehicle, a person, or the like, and the object frame for containing the object may be a quadrangle.
For target detection in an image, a traditional detection algorithm may be adopted, or a network model based on deep learning may be used; this application does not limit the detection method.
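The "no connection with other target frames" rule above can be sketched as an axis-aligned box overlap test. This is a minimal illustration with assumed names and an assumed (x1, y1, x2, y2) box format, not the patent's implementation.

```python
# Hypothetical sketch: keep only detections whose bounding boxes do not
# touch or overlap any other box, i.e. fully exposed candidates.
# Box format (x1, y1, x2, y2) and function names are assumptions.

def boxes_intersect(a, b):
    """Axis-aligned overlap test, inclusive of shared edges."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return not (ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1)

def fully_exposed_targets(boxes):
    """Return the boxes that have no connection with any other box."""
    exposed = []
    for i, box in enumerate(boxes):
        if not any(boxes_intersect(box, other)
                   for j, other in enumerate(boxes) if j != i):
            exposed.append(box)
    return exposed
```

A box sharing even an edge with another is treated as "connected" here; a real system would likely use the detector's own overlap metric.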
Step 202: and traversing each frame of image, and judging whether the target to be measured in the currently traversed image enters a blind area, if so, executing the step 203, otherwise, continuing to execute the step 202.
In an embodiment, the pixel distance between the target to be measured in the currently traversed image and the specified image edge is determined, and the ratio of this pixel distance to the image height is computed. If the ratio exceeds a preset threshold, it is determined that the target to be measured in the currently traversed image has not entered the blind area; if the ratio does not exceed the preset threshold, it is determined that the target has entered the blind area.
The specified image edge is the lower edge of the image: the point in real space indicated by a pixel on the lower edge of the image is the point closest to the vehicle-mounted camera. When the ratio of the target's distance from the lower edge to the full image height exceeds the preset threshold, the target is still fully exposed and has not entered the blind area; when the ratio does not exceed the threshold, part of the target has entered the blind area and the target is only partially exposed. The preset threshold can be set from practical experience, for example to 5%.
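The blind-area entry test just described can be written as a few lines of arithmetic. A minimal sketch, assuming the function and argument names (the 5% threshold is the example value from the text):

```python
# Sketch of the blind-area entry test: compare the pixel distance from the
# target's lowest visible point to the lower image edge against a fraction
# of the image height. Names are illustrative, not the patent's API.

def has_entered_blind_area(target_bottom_v, image_height, threshold=0.05):
    """target_bottom_v: pixel row of the target's lowest visible point
    (row 0 is the top of the image)."""
    pixel_distance = image_height - 1 - target_bottom_v
    ratio = pixel_distance / image_height
    # A ratio above the threshold means the target is still fully exposed.
    return ratio <= threshold
```

For a 720-row image, a target whose lowest point sits 4 pixels above the bottom edge gives a ratio of about 0.6% and is judged to have entered the blind area.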
In an embodiment, purely vision-based target ranging using the pinhole imaging principle can accurately measure targets directly in front of the vehicle. Therefore, once a target to be measured is determined to be fully exposed, it can be further filtered with a straight-ahead condition: the target is retained if it satisfies the condition and discarded otherwise.
For example, the straight-ahead condition may be that the horizontal distance between the target and the camera (obtained using the pinhole imaging principle) is smaller than a preset multiple of the vehicle width, for example 0.5 times.
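The straight-ahead filter reduces to a single comparison. A sketch under the text's example parameters (the 0.5 multiple; the function name is an assumption):

```python
# Hedged sketch of the straight-ahead filter: keep a target only if its
# lateral (X-axis) offset from the camera, obtained from pinhole ranging,
# is below a preset multiple of the vehicle width.

def is_straight_ahead(lateral_distance_m, vehicle_width_m, multiple=0.5):
    return abs(lateral_distance_m) < multiple * vehicle_width_m
```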
In an exemplary scenario, as shown in FIG. 2B, the target to be measured is located directly in front of the vehicle, has not entered the blind area, and is fully exposed; as shown in FIG. 2C, part of the target to be measured has entered the blind area, and the target is only partially exposed.
It should be noted that when detecting targets in each frame of image, target tracking may be used; that is, the same target carries the same label across frames, so that targets can be associated from frame to frame.
Step 203: and selecting an image which has the minimum difference between the acquisition time of one frame and the acquisition time of the currently traversed image and does not enter the blind area from the acquired images.
The image whose acquisition time differs least from that of the currently traversed image, and in which the target to be measured has not entered the blind area, is the image acquired last before the target entered the blind area.
It should be noted that when selecting an image from the acquired images, the condition that there are few other targets near the target to be measured may also be considered; that is, an image may be selected in which the target has not entered the blind area, whose acquisition time differs least from that of the currently traversed image, and in which few targets are near the target to be measured, so as to improve the accuracy of the subsequently calculated ranging information.
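The selection in step 203 amounts to picking, among frames where the tracked target was still outside the blind area, the one closest in time to the current frame. A sketch using a hypothetical per-frame record structure (the dict keys are assumptions):

```python
# Sketch of step 203: among frames where the tracked target had not yet
# entered the blind area, pick the one whose acquisition time is closest
# to the current frame's. In practice this is the last frame before entry.

def select_reference_frame(frames, current_time):
    """frames: list of dicts with 'time' and 'in_blind_area' keys."""
    candidates = [f for f in frames if not f["in_blind_area"]]
    if not candidates:
        return None  # target appeared directly in the blind area
    return min(candidates, key=lambda f: abs(f["time"] - current_time))
```

The "few nearby targets" refinement mentioned above could be added as an extra filter on `candidates` before taking the minimum.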
Step 204: and determining the distance measurement information when the target to be measured does not enter the blind area based on the selected image, and determining the actual distance from the vehicle when the target to be measured enters the blind area according to the distance measurement information.
In an embodiment, after the target to be measured enters the blind area, the vehicle-mounted camera continues to collect images in real time. During this period, each time the camera collects a frame, the actual distance from the target to the vehicle while in the blind area needs to be determined anew from the ranging information, so that the driver can be prompted in real time.
The process of step 204 can be referred to the following description of the embodiment shown in fig. 3A, and will not be described in detail here.
In the embodiment of the present application, a target to be measured is detected in each frame of image acquired by the vehicle-mounted camera, where the vehicle-mounted camera is located above the vehicle engine hood and the region between the camera's visible-region boundary and the vehicle is a blind area. Each frame of image is then traversed, and it is judged whether the target to be measured in the currently traversed image has entered the blind area. If so, a frame whose acquisition time differs least from that of the currently traversed image, and in which the target has not entered the blind area, is selected from the acquired images; ranging information from before the target entered the blind area is determined based on the selected image; and the actual distance from the vehicle after the target enters the blind area is determined according to that ranging information.
Based on the above description, when the target to be measured in the currently traversed image is judged to have entered the blind area, its distance from the vehicle while in the blind area is obtained from the ranging information recorded while it was in the non-blind area, overcoming the defect that a purely vision-based system cannot measure the distance of a target located in the blind area.
FIG. 3A is a flowchart of another embodiment of a target ranging method according to an exemplary embodiment of the present application. Building on the embodiment shown in FIG. 2A, this embodiment illustrates by example how to determine, based on the selected image, the ranging information from before the target to be measured entered the blind area, and how to determine, according to that ranging information, the actual distance from the vehicle after the target enters the blind area.
As shown in fig. 3A, the target ranging method further includes the following steps:
step 301: and extracting key information of the target to be measured in the current traversed image.
The key information is the pixel information at the boundary between the target to be measured and the specified image edge, where the specified image edge is the lower edge of the image; that is, the point in real space indicated by a pixel on the lower edge of the image is the point closest to the vehicle-mounted camera.
For example, the pixel information at the boundary between the target to be measured and the specified image edge may be a line or a point.
Step 302: and searching pixel points matched with the key information in the selected image.
In an embodiment, the matching pixel points lie within the region occupied by the target to be measured in the selected image, so pixel points matching the key information can be searched for within that region.
For example, the matching may use SIFT or SURF; this application does not limit the matching method.
Taking a line as the key information, as shown in FIG. 3B: (b) is the image selected from the acquired images; in (b), a line (the white dotted line) is extracted at the boundary between the target to be measured and the lower edge of the image, and the pixels occupied by that line are the key information to be extracted. In (a), pixel points matching the key information are searched for within the region occupied by the target to be measured (the pixels occupied by the white dotted line are the matched points).
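To make the matching step concrete, here is a deliberately simplified stand-in for the SIFT/SURF matching of step 302: it takes the row of pixels at the target's lower boundary (the "key information") and finds the best-matching row inside the target's region in the other frame by sum of squared differences. This is an illustration only, not the patent's method.

```python
# Illustrative stand-in for feature matching in step 302: find, inside the
# target's region, the row of pixels that best matches the key line by
# sum of squared differences (SSD). Real systems would use SIFT/SURF.

def best_matching_row(key_line, region_rows):
    """key_line: list of pixel intensities; region_rows: list of rows of
    the same length. Returns the index of the best-matching row."""
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(region_rows)),
               key=lambda i: ssd(key_line, region_rows[i]))
```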
Step 303: and determining the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area according to the selected image.
In an embodiment, since the selected image is one in which the target to be measured has not entered the blind area, the actual distance from the target to the vehicle-mounted camera can be obtained using the pinhole imaging principle.
The formula for the distance from the target to the camera based on the pinhole imaging principle is:
[Formula not reproduced in this text: the original publication gives x_w and y_w as functions of the camera parameters θ, β, H, f_x, f_y, (u_0, v_0) and the pixel coordinates (u, v).]
wherein c1 = cos(θ), s1 = sin(θ), c2 = cos(β), s2 = sin(β); θ denotes the camera pitch angle; β denotes the camera yaw angle; H denotes the mounting height of the camera above the ground; fx and fy denote the equivalent focal lengths of the camera in the X and Y directions respectively; (u0, v0) denotes the coordinates of the camera optical center (principal point); (u, v) denotes the pixel coordinates, in the image, of the target's intersection with the ground; and xw and yw denote the actual distances of the target from the camera in the horizontal direction (the X-axis direction of the camera coordinate system, whose origin is the camera optical center, whose Z-axis is the optical axis, and whose x- and y-axes are parallel to the x- and y-axes of the image coordinate system) and in the longitudinal direction (the Z-axis direction of the camera coordinate system, i.e., the direction of the optical axis), respectively.
Based on this formula, the pixel coordinates of the point where the target to be measured meets the ground are extracted from the selected image and substituted into the formula, yielding the actual distances from the target to the vehicle-mounted camera in the horizontal and longitudinal directions.
It should be noted that this embodiment requires the longitudinal actual distance from the target to the vehicle; the actual longitudinal distance from the target to be measured to the vehicle-mounted camera may therefore be retained.
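Since the published formula image is not reproduced here, the following is a generic reconstruction of flat-ground pinhole ranging in the same spirit, simplified to zero yaw (β = 0). It back-projects the image row of the target/ground intersection onto the ground plane using the camera height H, pitch θ, and vertical focal length fy. This is a standard model under stated assumptions, not the patent's exact formula.

```python
import math

# Generic flat-ground pinhole ranging, zero-yaw simplification.
# A ground point imaged at row v (principal point v0, focal length fy in
# pixels) by a camera at height H with downward pitch theta lies at
# longitudinal distance H * (fy*cos(t) - (v-v0)*sin(t)) /
#                          ((v-v0)*cos(t) + fy*sin(t)).

def ground_distance(v, v0, fy, H, theta):
    """Longitudinal ground distance (m) to a point imaged at row v."""
    num = fy * math.cos(theta) - (v - v0) * math.sin(theta)
    den = (v - v0) * math.cos(theta) + fy * math.sin(theta)
    return H * num / den

def lateral_distance(u, u0, fx, v, v0, fy, H, theta):
    """Lateral (X-axis) offset of the same ground point."""
    y = ground_distance(v, v0, fy, H, theta)
    depth = H * math.sin(theta) + y * math.cos(theta)  # camera-Z depth
    return depth * (u - u0) / fx
```

With zero pitch this reduces to the familiar y = H * fy / (v - v0): a camera 1.5 m above the ground with fy = 1000 px sees a ground point 100 rows below the principal point at 15 m.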
Step 304: and determining the actual height of the target point indicated by the pixel point from the ground according to the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area, and taking the actual height and the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area as ranging information.
Because imaging through a single convex lens loses the depth information of the scene, the actual height of the target cannot be obtained directly from the pinhole imaging principle. However, if the camera is mounted perpendicular to the ground (i.e., the camera pitch angle is 0°), a target height of good accuracy can be obtained from the pinhole imaging principle combined with a simple geometric relation.
As shown in FIG. 3C, the camera imaging plane is AB, h' is the height of the target in the image (obtained during target detection), h is the actual height of the target, the camera mounting height is H (known), and the camera focal length is f (known). Letting y be the longitudinal actual distance from the target to the camera obtained from the pinhole imaging principle above, the triangle similarity principle gives:
Figure BDA0002074147250000101
It should be noted that in an actual installation the pitch angle of the camera may not be exactly 0°; there is usually some deviation. As shown in fig. 3D, when the pitch angle of the camera is θ, the projection of the target on the imaging plane is AC rather than AB. Because this deviation is small, the pitch angle can be approximated as 0°, and the target height can still be estimated with the above formula.
Based on the above description, the target height can be calculated with the above similarity formula as long as the imaging plane of the vehicle-mounted camera is perpendicular to the ground.
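As a concrete illustration of the similarity formula, the height recovery above can be sketched as a small function. The numeric values below are hypothetical examples, not values from the patent; the sketch assumes the focal length is expressed in pixels so that h' and f share units.

```python
def target_height(h_img_px: float, y_m: float, f_px: float) -> float:
    """Recover the actual target height h from the similarity relation
    h / y = h' / f (camera imaging plane perpendicular to the ground).

    h_img_px: target height h' in the image, in pixels
    y_m:      longitudinal distance y from target to camera, in metres
    f_px:     camera focal length f, expressed in pixels
    """
    return h_img_px * y_m / f_px

# Hypothetical numbers: a 300 px tall detection, 10 m ahead, f = 2000 px
# gives an estimated height of 1.5 m.
print(target_height(300, 10.0, 2000))
```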
In one optional mode, the actual height above the ground of the target point indicated by the pixel point is calculated directly with the above similarity formula, as follows: determine, in the selected image, the pixel distance between the pixel point and the specified boundary of the target to be measured, and determine the actual height of the target point above the ground according to this pixel distance and the actual distance from the vehicle-mounted camera when the target had not entered the blind area.
For example, the specified boundary of the target to be measured may be the lower edge of the target frame in which the target is located. The pixel distance between the pixel point and this lower edge is determined, and this pixel distance, as h', is substituted together with the actual distance y from the vehicle-mounted camera when the target had not entered the blind area into the above similarity formula, yielding the actual height h above the ground of the target point indicated by the pixel point.
In another optional mode, the pixel height of the area occupied by the target to be measured is determined in the selected image, and the actual height of the target is determined according to this pixel height and the actual distance from the vehicle-mounted camera when the target had not entered the blind area. The proportion of the pixel distance between the pixel point and the specified boundary of the target to this pixel height is then determined in the selected image, and the actual height of the target point above the ground is obtained from this proportion and the actual height of the target.
For example, taking the area occupied by the target as its target frame and the specified boundary as the lower edge of that frame, the pixel height h' of the target frame and the actual distance y from the vehicle-mounted camera when the target had not entered the blind area are substituted into the above similarity formula to obtain the actual height of the target. The proportion of the pixel distance between the pixel point and the lower edge of the target frame to the whole frame height is then determined, and the actual height above the ground of the target point indicated by the pixel point is obtained from this proportion and the actual height of the target.
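The second optional mode can likewise be sketched in a few lines; all numeric values are hypothetical. The target's overall height is recovered first from the target-frame pixel height, then the height of the individual target point is taken proportionally from its pixel distance to the frame's lower edge.

```python
def point_height_via_ratio(point_dist_px: float, frame_height_px: float,
                           y_m: float, f_px: float) -> float:
    """Height above ground of the target point indicated by a pixel.

    frame_height_px: pixel height h' of the target frame
    point_dist_px:   pixel distance from the pixel point to the lower
                     edge of the target frame
    y_m, f_px:       longitudinal distance (metres) and focal length
                     (pixels), as in the similarity formula h = h'*y/f
    """
    target_height_m = frame_height_px * y_m / f_px   # whole-target height
    ratio = point_dist_px / frame_height_px          # position inside frame
    return ratio * target_height_m

# Hypothetical: a 400 px frame, point 100 px above the lower edge,
# target 8 m away, f = 2000 px -> target is 1.6 m tall, point at 0.4 m.
```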
Step 305: determine the moving distance of the target to be measured relative to the vehicle between the acquisition time of the selected image and the acquisition time of the currently traversed image, according to the actual height above the ground of the target point indicated by the pixel point and the actual distance from the vehicle-mounted camera when the target had not entered the blind area.
Step 306: determine the actual distance from the vehicle-mounted camera when the target to be measured enters the blind area, according to the actual distance from the vehicle-mounted camera when the target had not entered the blind area and the moving distance relative to the vehicle.
Regarding steps 305 to 306: as shown in fig. 3E, after the target enters the blind area, the segment AB can no longer be seen by the camera. The actual height AB above the ground of point A (the target point indicated by the key information, i.e. by the matched pixel point) was obtained in step 304; IH is the actual distance from the camera when the target had not entered the blind area, i.e. the longitudinal actual distance obtained in step 303 from the pinhole imaging principle; and the height OH of the camera above the ground is a known quantity. Since AB is parallel to OH, the similar triangles give:

AB / OH = IB / IH, that is, IB = IH × AB / OH

which yields the moving distance IB of the target to be measured relative to the vehicle between the acquisition time of the selected image and the acquisition time of the currently traversed image.

From HB = IH − IB, the actual distance HB from the vehicle-mounted camera when the target enters the blind area is obtained.
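The parallel-line relation of fig. 3E reduces to two lines of arithmetic. A minimal sketch, using the segment names from the figure and hypothetical numbers:

```python
def blind_area_distance(IH: float, AB: float, OH: float) -> float:
    """Distance HB from the camera to the target after it enters the
    blind area (notation of fig. 3E).

    IH: longitudinal distance when the target had not entered the
        blind area (step 303)
    AB: height of target point A above the ground (step 304)
    OH: installation height of the camera above the ground (known)
    """
    IB = IH * AB / OH   # from AB / OH = IB / IH: moving distance
    HB = IH - IB        # remaining distance to the camera
    return HB

# Hypothetical: IH = 10 m, AB = 0.3 m, OH = 1.5 m -> IB = 2 m, HB = 8 m.
```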
Step 307: determine the actual distance from the vehicle when the target to be measured enters the blind area, according to the actual distance from the vehicle-mounted camera at that moment and the installation distance between the vehicle-mounted camera and the vehicle.
For example, in practical applications, to avoid a rear-end collision the driver cares about the distance between the target and the vehicle head; therefore the actual distance from the vehicle when the target enters the blind area may be taken as the actual distance from the target to the vehicle head.
Since the vehicle-mounted camera is mounted above the vehicle engine hood, the installation distance between the vehicle-mounted camera and the vehicle head needs to be obtained in advance; subtracting this installation distance from the actual distance between the target and the vehicle-mounted camera when the target enters the blind area gives the actual distance between the target and the vehicle head.
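Step 307 is a single subtraction; a sketch with hypothetical distances (the installation distance would be measured once for a given vehicle):

```python
def distance_to_vehicle_head(hb_m: float, install_dist_m: float) -> float:
    """Convert the camera distance HB into a distance to the vehicle
    head by removing the camera-to-head installation offset."""
    return hb_m - install_dist_m

# Hypothetical: HB = 8.0 m and the camera sits 1.2 m behind the vehicle
# head, so the target is 6.8 m from the head when it enters the blind area.
```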
This completes the flow of the embodiment shown in fig. 3A, by which the distance between the target to be measured and the vehicle at the moment it enters the blind area can be measured.
Fig. 4 is a block diagram of a target ranging system according to an exemplary embodiment of the present application, where the system includes:
the vehicle-mounted camera 410 is used for acquiring images and sending the images to the processor 420;
the processor 420 is used for detecting a target to be detected in each frame of image, where the vehicle-mounted camera is positioned above the hood of the vehicle and the region between the visible-region boundary of the vehicle-mounted camera and the vehicle is a blind area; traversing each frame of image and judging whether the target to be detected in the currently traversed image has entered the blind area; and if so, selecting from the acquired images the frame whose acquisition time differs least from that of the currently traversed image and in which the target to be detected had not entered the blind area, determining the ranging information when the target to be detected had not entered the blind area based on the selected image, and determining the actual distance to the vehicle when the target to be detected enters the blind area according to the ranging information.
For the detailed implementation process of each step in the processor 420 in the system embodiment, reference may be made to the implementation process of the corresponding step in the method, which is not described herein again.
Fig. 5 is a block diagram of an embodiment of a target ranging device according to an exemplary embodiment of the present application, where the target ranging device may be applied to a processor, and the target ranging device includes:
the detection module 510 is configured to detect a target to be detected in each frame of image acquired by a vehicle-mounted camera, where the vehicle-mounted camera is located above a vehicle hood, and a region between a visible region boundary of the vehicle-mounted camera and the vehicle is a blind region;
a traversal judging module 520, configured to traverse each frame of image, and judge whether a target to be detected in a currently traversed image enters the blind area;
a selecting module 530, configured to select, when the judgment result is yes, from the acquired images the frame whose acquisition time differs least from that of the currently traversed image and in which the target to be detected had not entered the blind area;
the determining module 540 is configured to determine, based on the selected image, distance measurement information when the target to be measured does not enter the blind area, and determine, according to the distance measurement information, an actual distance to the vehicle when the target to be measured enters the blind area.
In an optional implementation manner, the detection module 510 is specifically configured to detect a preset type of target in an image acquired by a vehicle-mounted camera, and obtain a target frame of the target; and determining the target in the target frame which has no connection relation with other target frames as the target to be measured according to the target frame obtained in each frame of image.
In an optional implementation manner, the traversal determining module 520 is specifically configured to determine, in a currently traversed image, a pixel distance from a target to be detected to a specified edge of the image; determining a ratio between the pixel distance and an image height; if the ratio exceeds a preset threshold value, determining that the target to be measured in the current traversed image does not enter the blind area; and if the ratio does not exceed a preset threshold value, determining that the target to be measured in the current traversed image enters the blind area.
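The blind-area test used by the traversal judging module can be sketched as follows; the threshold value 0.05 is an assumed placeholder, since the patent only refers to "a preset threshold" without specifying it.

```python
def entered_blind_area(pixel_dist_to_edge: float, image_height: float,
                       threshold: float = 0.05) -> bool:
    """True when the target is judged to have entered the blind area:
    the ratio of the target's pixel distance to the specified image
    edge (e.g. the bottom edge) over the image height does not exceed
    the preset threshold."""
    return (pixel_dist_to_edge / image_height) <= threshold

# A target 10 px from the bottom of a 1080 px image is judged to be in
# the blind area; one 200 px away is not.
```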
In an optional implementation manner, the determining module 540 is specifically configured to, in a process of determining, based on a selected image, ranging information when the target to be measured does not enter a blind area, extract key information of the target to be measured from a currently traversed image, where the key information is pixel information at a junction of the target to be measured and an image-specified edge; searching pixel points matched with the key information in the selected image, wherein the pixel points are pixel points of the target to be measured in the area occupied by the selected image; determining the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area according to the selected image; and determining the actual height of the target point indicated by the pixel point from the ground according to the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area, and taking the actual height and the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area as ranging information.
In an optional implementation manner, the determining module 540 is specifically configured to determine, in a selected image, a pixel distance between the pixel point and the specified boundary of the target to be detected in the process of determining an actual height between the target point indicated by the pixel point and the ground according to an actual distance between the target to be detected and the vehicle-mounted camera when the target to be detected does not enter the blind area; determining the actual height of the target point indicated by the pixel point from the ground according to the pixel distance and the actual distance from the vehicle-mounted camera when the target point does not enter the blind area; or determining the pixel height of the area occupied by the target to be detected in the selected image, and determining the actual height of the target to be detected according to the pixel height and the actual distance from the vehicle-mounted camera when the target does not enter the blind area; determining the proportion of the pixel distance between the pixel point and the specified boundary of the target to be detected to the pixel height in the selected image; determining the actual height of the target point indicated by the pixel point from the ground according to the proportion and the actual height of the target to be detected from the ground; wherein, the imaging plane of the vehicle-mounted camera is vertical to the ground.
In an optional implementation manner, the determining module 540 is specifically configured to, in the process of determining the actual distance to the vehicle when the target to be detected enters the blind area according to the ranging information, determine a moving distance of the target to be detected relative to the vehicle between the acquisition time of the selected image and the acquisition time of the currently traversed image according to the actual height of the target point indicated by the pixel point from the ground and the actual distance to the vehicle-mounted camera when the target to be detected does not enter the blind area; determining the actual distance from the vehicle-mounted camera when the target to be detected enters the blind area according to the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area and the moving distance relative to the vehicle; and determining the actual distance from the vehicle when the target to be detected enters the blind area according to the actual distance from the vehicle-mounted camera when the target to be detected enters the blind area and the installation distance between the vehicle-mounted camera and the vehicle.
The specific details of the implementation process of the functions and actions of each unit in the above device are the implementation processes of the corresponding steps in the above method, and are not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A method for ranging a target, the method comprising:
detecting a target to be detected in each frame of image acquired by a vehicle-mounted camera, wherein the vehicle-mounted camera is positioned above a vehicle engine hood, and the region between the visible region boundary of the vehicle-mounted camera and the vehicle is a blind region;
traversing each frame of image, and judging whether a target to be detected in the currently traversed image enters the blind area;
if so, selecting, from the acquired images, the frame whose acquisition time differs least from that of the currently traversed image and in which the target to be detected had not entered the blind area;
and determining the distance measurement information when the target to be measured does not enter the blind area based on the selected image, and determining the actual distance from the vehicle when the target to be measured enters the blind area according to the distance measurement information.
2. The method of claim 1, wherein detecting the object to be detected in each frame of image acquired by the vehicle-mounted camera comprises:
detecting a preset type of target in an image acquired by a vehicle-mounted camera, and obtaining a target frame of the target;
and determining the target in the target frame which has no connection relation with other target frames as the target to be measured according to the target frame obtained in each frame of image.
3. The method of claim 1, wherein determining whether the object to be measured in the currently traversed image enters the blind area comprises:
determining the pixel distance between the target to be detected and the designated edge of the image in the current traversed image;
determining a ratio between the pixel distance and an image height;
if the ratio exceeds a preset threshold value, determining that the target to be measured in the current traversed image does not enter the blind area;
and if the ratio does not exceed a preset threshold value, determining that the target to be measured in the current traversed image enters the blind area.
4. The method of claim 1, wherein determining ranging information when the target to be measured does not enter the blind area based on the selected image comprises:
extracting key information of the target to be measured in the current traversed image, wherein the key information is pixel information of a junction of the target to be measured and the appointed edge of the image;
searching pixel points matched with the key information in the selected image, wherein the pixel points are pixel points of the target to be measured in the area occupied by the selected image;
determining the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area according to the selected image;
and determining the actual height of the target point indicated by the pixel point from the ground according to the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area, and taking the actual height and the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area as ranging information.
5. The method of claim 4, wherein determining the actual height of the target point indicated by the pixel point from the ground according to the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area comprises:
determining, in the selected image, the pixel distance between the pixel point and the specified boundary of the target to be measured; determining the actual height above the ground of the target point indicated by the pixel point according to the pixel distance and the actual distance from the vehicle-mounted camera when the target had not entered the blind area; or,
determining the pixel height of the area occupied by the target to be detected in the selected image, and determining the actual height of the target to be detected according to the pixel height and the actual distance from the vehicle-mounted camera when the target does not enter the blind area; determining the proportion of the pixel distance between the pixel point and the specified boundary of the target to be detected to the pixel height in the selected image; determining the actual height of the target point indicated by the pixel point from the ground according to the proportion and the actual height of the target to be detected from the ground;
wherein, the imaging plane of the vehicle-mounted camera is vertical to the ground.
6. The method of claim 4, wherein determining the actual distance to the vehicle when the target to be measured enters the blind zone according to the ranging information comprises:
determining the moving distance of the target to be detected relative to the vehicle between the acquisition time of the selected image and the acquisition time of the currently traversed image according to the actual height of the target point indicated by the pixel point from the ground and the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area;
determining the actual distance from the vehicle-mounted camera when the target to be detected enters the blind area according to the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area and the moving distance relative to the vehicle;
and determining the actual distance from the vehicle when the target to be detected enters the blind area according to the actual distance from the vehicle-mounted camera when the target to be detected enters the blind area and the installation distance between the vehicle-mounted camera and the vehicle.
7. An object ranging apparatus, the apparatus comprising:
the detection module is used for detecting a target to be detected in each frame of image acquired by a vehicle-mounted camera, the vehicle-mounted camera is positioned above a vehicle engine hood, and an area between the visible area boundary of the vehicle-mounted camera and the vehicle is a blind area;
the traversal judgment module is used for traversing each frame of image and judging whether a target to be detected in the currently traversed image enters the blind area or not;
the selecting module is used for selecting, when the judgment result is yes, from the acquired images the frame whose acquisition time differs least from that of the currently traversed image and in which the target to be detected had not entered the blind area;
and the determining module is used for determining the distance measuring information when the target to be measured does not enter the blind area based on the selected image, and determining the actual distance from the vehicle when the target to be measured enters the blind area according to the distance measuring information.
8. The apparatus according to claim 7, wherein the traversal determining module is specifically configured to determine, in the currently traversed image, a pixel distance from a target to be detected to a specified edge of the image; determining a ratio between the pixel distance and an image height; if the ratio exceeds a preset threshold value, determining that the target to be measured in the current traversed image does not enter the blind area; and if the ratio does not exceed a preset threshold value, determining that the target to be measured in the current traversed image enters the blind area.
9. The apparatus according to claim 7, wherein the determining module is specifically configured to, in the process of determining the ranging information when the object to be measured does not enter the blind area based on the selected image, extract key information of the object to be measured from the currently traversed image, where the key information is pixel information at an intersection of the object to be measured and a specified edge of the image; searching pixel points matched with the key information in the selected image, wherein the pixel points are pixel points of the target to be measured in the area occupied by the selected image; determining the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area according to the selected image; and determining the actual height of the target point indicated by the pixel point from the ground according to the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area, and taking the actual height and the actual distance from the vehicle-mounted camera when the target to be detected does not enter the blind area as ranging information.
10. An object ranging system, the system comprising:
the vehicle-mounted camera is used for acquiring images and sending the images to the processor;
the processor is used for detecting a target to be detected in each frame of image, the vehicle-mounted camera being positioned above the vehicle engine hood and the region between the visible-region boundary of the vehicle-mounted camera and the vehicle being a blind area; traversing each frame of image and judging whether the target to be detected in the currently traversed image has entered the blind area; and if so, selecting from the acquired images the frame whose acquisition time differs least from that of the currently traversed image and in which the target to be detected had not entered the blind area, determining the ranging information when the target to be detected had not entered the blind area based on the selected image, and determining the actual distance to the vehicle when the target to be detected enters the blind area according to the ranging information.
CN201910447707.XA 2019-05-27 2019-05-27 Target ranging method, device and system Active CN111998780B (en)


Publications (2)

Publication Number Publication Date
CN111998780A CN111998780A (en) 2020-11-27
CN111998780B (en) 2022-07-01








Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant