WO2010133099A1 - Target detection method, system and stereo vision system - Google Patents
Target detection method, system and stereo vision system - Download PDF
- Publication number
- WO2010133099A1 WO2010133099A1 PCT/CN2010/070846 CN2010070846W WO2010133099A1 WO 2010133099 A1 WO2010133099 A1 WO 2010133099A1 CN 2010070846 W CN2010070846 W CN 2010070846W WO 2010133099 A1 WO2010133099 A1 WO 2010133099A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- camera
- image
- target
- diameter
- target detection
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/246—Calibration of cameras
Definitions
- Target detection method, system and stereo vision system
- The present invention relates to the field of vision technologies, and in particular to a target detection method, system, and stereo vision system. Background art.
- With the lens in the focused state, the captured image is preprocessed, the image is binarized by the threshold segmentation method, and the blob connected-domain detection algorithm is then used to extract the target connected domain and calculate the position of the target in the image.
- The position is generally calculated by finding the center of gravity; once the centroid position of the target is obtained, the coordinate information can be used for subsequent processing such as three-dimensional reconstruction and target tracking.
- In the prior art, however, the lens is adjusted to the focused state when capturing images; when the target is small or far away, its image is small, which makes accurate extraction of the target position difficult.
- The technical solutions of the present invention are as follows:
- A target detection method includes the following steps: A. capturing an image with a camera; B. performing target detection according to the captured image; wherein, before step A, the camera is adjusted to a defocused state.
- Adjusting the camera to the defocused state specifically includes: adjusting the image distance according to the diameter of the diffusion spot required by the camera.
- The adjustment amount of the image distance according to the diameter of the diffusion spot required by the camera is dz = (zl / 2a) · out_p, where P1 < out_p < P2, (P1, P2) is the image-distance range for sharp imaging, 2a is the entrance pupil diameter, and zl is the diameter of the diffusion spot.
- A target detection system is applied to a camera to capture images and to perform target detection according to the captured images; it includes:
- an adjustment unit, configured to adjust the camera to a defocused state before an image is captured.
- The adjustment unit is configured to adjust the image distance according to the diameter of the diffusion spot required by the camera.
- The adjustment amount is dz = (zl / 2a) · out_p, where P1 < out_p < P2, (P1, P2) is the image-distance range for sharp imaging, 2a is the entrance pupil diameter, and zl is the diameter of the diffusion spot.
- A stereo vision system for target detection includes at least two cameras, wherein a first camera is configured to capture images in a defocused state;
- a second camera for acquiring an image in a focused or defocused state
- the first camera or the second camera is further configured to perform target detection according to the images acquired by the first camera and the second camera, and reconstruct a target three-dimensional position according to the target detection results of the first camera and the second camera.
- the beneficial effects of the present invention are as follows:
- The target detection method, system, and stereo vision system provided by the invention adjust the image distance of the lens, enlarging the imaging area of the target and extending the detection range of the target, so that the camera captures images in the diffusion-spot (defocused) state.
- Because the diffusion spot enlarges the recognizable area and increases the number of pixels, the influence of interfering pixels is reduced and the accuracy of the computed sub-pixel centroid coordinates is improved.
- FIG. 1 is a flowchart of a target detection method according to an embodiment of the present invention
- FIG. 2 is a schematic structural diagram of a target detection system according to an embodiment of the present invention.
- FIG. 3 is a schematic structural diagram of an image acquiring and processing unit according to an embodiment of the present disclosure
- FIG. 4 is a schematic diagram of diffusion-spot imaging according to an embodiment of the present invention;
- FIG. 5 is a model diagram of a minimum imaging point spreading into a diffusion spot according to an embodiment of the present invention;
- FIG. 6 is an actual image of diffusion-spot imaging according to an embodiment of the present invention. Detailed description of the embodiments.
- The present invention provides a target detection method, system, and stereo vision system. To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
- An embodiment of the present invention provides a target detection method, system, and stereo vision system, the method comprising: determining a diameter of a diffusion spot of a lens; and adjusting an image distance to a defocus state according to a diameter of the diffusion spot.
- FIG. 1 shows a target detection method according to an embodiment of the present invention, including the following steps:
- The image-distance adjustment amount is dz = (zl / 2a) · out_p, where P1 < out_p < P2, (P1, P2) is the image-distance range for sharp imaging, 2a is the entrance pupil diameter, and zl is the diameter of the diffusion spot.
- Step 103, processing the image and extracting the position of the detected target in the image, specifically includes:
- the embodiment of the present invention further provides a target detection system.
- the system includes:
- the adjusting unit 320 is configured to adjust the image distance of the camera to a defocus state, specifically: adjusting the image distance to a defocus state according to the diameter of the diffusion spot.
- The adjustment amount of the image distance according to the diameter of the diffusion spot is dz = (zl / 2a) · out_p.
- The system further comprises:
- the image acquisition and processing unit 330 is configured to obtain an image of the target to be measured after the adjustment unit 320 adjusts the size of the image distance, and process the image to extract a position of the detected target in the image.
- the image acquisition and processing unit 330 includes:
- pre-processing sub-unit 331 for pre-processing all images according to median filtering
- a binarization processing sub-unit 332, configured to perform binarization processing on the image acquired by each camera according to the threshold segmentation method
- the extracting sub-unit 333 is configured to extract, according to the blob binary connected domain detection technology, the connected domain of the detected target from the binarized processed image;
- the coordinate acquiring unit 334 is configured to calculate the sub-pixel-level barycentric coordinates of the detected object in each image according to the threshold centroid method.
- An embodiment of the present invention further provides a stereo vision system for target detection, including at least two cameras, wherein the first camera is configured to acquire an image in a defocused state;
- a second camera for acquiring an image in a focused or defocused state
- The first camera or the second camera is further configured to perform target detection according to the images captured by the first camera and the second camera, and to reconstruct the three-dimensional position of the target according to the target detection results of the two cameras. Specifically: target detection is performed according to the images captured by the first camera and the second camera to obtain the coordinates of each pixel point; the sub-pixel centroid coordinates are calculated from these pixel coordinates, and the three-dimensional spatial coordinates of the target's center of gravity are calculated from the sub-pixel centroid coordinates. The sub-pixel centroid coordinates are calculated as follows:
- The centroid coordinates are calculated using the centroid method with a threshold.
- A background gray-level threshold K is set, and the centroid coordinates are then calculated with the formula.
- The three-dimensional coordinates (x, y, z) of the target's center of gravity are calculated using each camera coordinate system as the world coordinate system according to the corresponding formula.
- The above embodiments can be applied to at least two video cameras or still cameras, or to a system composed of multiple cameras. In the defocused state the image is blurred and recognition of the target becomes more difficult.
- A multi-camera coordination method can therefore be adopted, in which at least one lens is adjusted to the defocused state by the above method and system. Taking two cameras A and B as an example, camera A is adjusted to the focused state and camera B to the defocused state.
- When the image in camera A is large, the target is recognized in the image of camera A and extracted from the image of camera A or camera B;
- when the image in camera A is small, the target is recognized in the image of camera A or camera B and extracted from the image of camera B.
- Using the two cameras, one in the focused state and one in the defocused state, for target recognition and extraction achieves high recognition and position-extraction accuracy for both large and small targets.
- For multi-camera systems, the accuracy of the reconstructed three-dimensional coordinates can be improved.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Measurement Of Optical Distance (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention provides a target detection method, system, and stereo vision system. The method includes: determining the diameter of the diffusion spot of a lens; and adjusting the image distance to a defocused state according to the diameter of the diffusion spot. By adjusting the image distance of the lens, the imaging area of the target is enlarged and the detection range of the target is extended, so that the camera captures images in the diffusion-spot state. Because the diffusion spot enlarges the recognizable area and increases the number of pixels, the influence of interfering pixels is reduced and the accuracy of the computed sub-pixel centroid coordinates is improved.
Description
Target detection method, system and stereo vision system. Technical field
The present invention relates to the field of vision technologies, and in particular to a target detection method, system, and stereo vision system. Background art
With the lens in the focused state, an image is captured and preprocessed; the image is then binarized by the threshold segmentation method, the blob connected-domain detection algorithm is used to extract the target connected domain, and the position of the target in the image is calculated. The position is generally calculated by finding the center of gravity; once the centroid position of the target is obtained, the coordinate information can be used for subsequent processing such as three-dimensional reconstruction and target tracking. In the prior art, however, the lens is adjusted to the focused state when capturing images; when the target is small or far away, its image is small, which makes accurate extraction of the target position difficult.
Summary of the invention
The object of the present invention is to provide, in view of the above defects of the prior art, a target detection method, system, and stereo vision system that effectively improve the accuracy of the calculated target position coordinates. The technical solutions of the present invention are as follows:
A target detection method includes the steps of:
A. capturing an image with a camera;
B. performing target detection according to the image captured by the camera; wherein, before step A, the camera is adjusted to a defocused state. Adjusting the camera to the defocused state specifically includes: adjusting the image distance according to the diameter of the diffusion spot required by the camera.
The adjustment amount of the image distance according to the diameter of the diffusion spot required by the camera is dz = (zl / 2a) · out_p, where P1 < out_p < P2, (P1, P2) is the image-distance range for sharp imaging, 2a is the entrance pupil diameter, and zl is the diameter of the diffusion spot.
A target detection system is applied to a camera to capture images and to perform target detection according to the captured images; it includes:
an adjustment unit, configured to adjust the camera to a defocused state before an image is captured.
The adjustment unit is configured to adjust the image distance according to the diameter of the diffusion spot required by the camera, by an adjustment amount of dz = (zl / 2a) · out_p, where P1 < out_p < P2, (P1, P2) is the image-distance range for sharp imaging, 2a is the entrance pupil diameter, and zl is the diameter of the diffusion spot.
A stereo vision system for target detection includes at least two cameras, wherein a first camera is configured to capture images in a defocused state;
a second camera, configured to capture images in a focused or defocused state;
the first camera or the second camera is further configured to perform target detection according to the images captured by the first camera and the second camera, and to reconstruct the three-dimensional position of the target according to the target detection results of the first camera and the second camera.
The beneficial effects of the present invention are as follows: by adjusting the image distance of the lens, the target detection method, system, and stereo vision system provided by the invention enlarge the imaging area of the target and extend the detection range of the target, so that the camera captures images in the diffusion-spot state. Because the diffusion spot enlarges the recognizable area and increases the number of pixels, the influence of interfering pixels is reduced and the accuracy of the computed sub-pixel centroid coordinates is improved.
Brief description of the drawings
The present invention is further described below with reference to the accompanying drawings and embodiments, in which:
FIG. 1 is a flowchart of a target detection method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a target detection system according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an image acquisition and processing unit according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of diffusion-spot imaging according to an embodiment of the present invention;
FIG. 5 is a model diagram of a minimum imaging point spreading into a diffusion spot according to an embodiment of the present invention;
FIG. 6 is an actual image of diffusion-spot imaging according to an embodiment of the present invention. Detailed description of the embodiments
The present invention provides a target detection method, system, and stereo vision system. To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
An embodiment of the present invention provides a target detection method, system, and stereo vision system. The method includes: determining the diameter of the diffusion spot of a lens; and adjusting the image distance to a defocused state according to the diameter of the diffusion spot. By adjusting the image distance of the lens, the imaging area of the target is enlarged, as shown in FIG. 6, and the detection range of the target is extended, so that the camera captures images in the diffusion-spot state, i.e., the image distance is adjusted until the diffusion-spot diameter exceeds that of sharp imaging, as at plane 1 and plane 2 in FIG. 4. Because the diffusion spot enlarges the recognizable area and increases the number of pixels, the influence of interfering pixels is reduced and the accuracy of the computed sub-pixel centroid coordinates is improved.
FIG. 1 shows a target detection method according to an embodiment of the present invention, including the following steps:
101. Determine the diameter of the diffusion spot of the camera lens. When the target images onto a single pixel, the diffusion spot must spread over at least 3 × 3 pixels in order to achieve effective sub-pixel detection of the target, as shown in FIG. 5. The minimum diameter of the diffusion spot is min(zl) = 3 · sqrt(dx² + dy²), i.e., three pixel diagonals, dx and dy being the pixel width and height.
102. Adjust the image distance to a defocused state according to the diameter of the diffusion spot. The image-distance adjustment amount is dz = (zl / 2a) · out_p, where P1 < out_p < P2, (P1, P2) is the image-distance range for sharp imaging, 2a is the entrance pupil diameter, and zl is the diameter of the diffusion spot (a numeric sketch of these two formulas is given after the processing steps below).
103. Obtain an image of the measured target, process the image, and extract the position of the detected target in the image.
Step 103, processing the image and extracting the position of the detected target in the image, specifically includes:
1) Preprocess all images with median filtering; other preprocessing methods may also be used.
2) Binarize the image captured by each camera by the threshold segmentation method; as known to those skilled in the art, other binarization methods may also be used.
3) Extract the connected domain of the detected target from the binarized image using the blob binary connected-domain detection technique; as known to those skilled in the art, other detection techniques may also be used to extract the connected domain of the detected target.
4) Calculate the sub-pixel centroid coordinates of the detected object in each image using the centroid method with a threshold; as known to those skilled in the art, other methods may also be used to calculate the sub-pixel centroid coordinates of the detected object in each image. (Illustrative sketches of these steps follow.)
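The two formulas of steps 101 and 102 can be evaluated directly. The sketch below is a minimal numeric illustration; the pixel-diagonal reading of min(zl), the 6 µm pixel pitch, the entrance-pupil diameter 2a, and the sharp-imaging range (P1, P2) are illustrative assumptions, not values taken from the patent.

```python
import math

def min_spot_diameter(dx, dy):
    """Minimum diffusion-spot diameter: the spot must cover at least a 3x3 pixel patch,
    read here as three pixel diagonals (an assumption where the original formula is garbled)."""
    return 3.0 * math.sqrt(dx ** 2 + dy ** 2)

def image_distance_adjustment(zl, pupil_2a, out_p):
    """dz = (zl / 2a) * out_p, with out_p chosen inside the sharp-imaging range (P1, P2)."""
    return (zl / pupil_2a) * out_p

# Illustrative numbers only (millimetres).
dx = dy = 0.006                                   # assumed 6 um pixel pitch
zl = min_spot_diameter(dx, dy)                    # ~0.025 mm
P1, P2 = 25.0, 25.4                               # assumed sharp-imaging image-distance range
out_p = 25.2                                      # any value with P1 < out_p < P2
dz = image_distance_adjustment(zl, 2.0, out_p)    # assumed entrance pupil 2a = 2 mm
print(f"min(zl) = {zl:.4f} mm, image-distance adjustment dz = {dz:.4f} mm")
```

Steps 1) to 3) map onto standard image-processing primitives; a sketch using OpenCV and NumPy follows. The threshold value, minimum blob area, and file name are illustrative assumptions, and the sub-pixel centroid of step 4) is sketched separately after its formula later in the description.

```python
import cv2
import numpy as np

def extract_target_blobs(gray, threshold=128, min_area=9):
    """Steps 1)-3): median filtering, threshold binarization, blob connected-domain extraction."""
    filtered = cv2.medianBlur(gray, 5)                         # 1) preprocess with a median filter
    _, binary = cv2.threshold(filtered, threshold, 255,        # 2) binarize by threshold segmentation
                              cv2.THRESH_BINARY)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(  # 3) blob connected-domain detection
        binary, connectivity=8)
    blobs = []
    for label in range(1, num):                                # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            x, y, w, h = (stats[label, cv2.CC_STAT_LEFT], stats[label, cv2.CC_STAT_TOP],
                          stats[label, cv2.CC_STAT_WIDTH], stats[label, cv2.CC_STAT_HEIGHT])
            blobs.append({"bbox": (x, y, w, h), "mask": labels == label})
    return filtered, blobs

# Usage (illustrative file name):
# gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# filtered, blobs = extract_target_blobs(gray)
```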
Correspondingly, an embodiment of the present invention further provides a target detection system, as shown in FIG. 2, which includes:
an adjustment unit 320, configured to adjust the image distance of the camera to a defocused state, specifically: adjusting the image distance to the defocused state according to the diameter of the diffusion spot.
The minimum diameter of the diffusion spot is min(zl) = 3 · sqrt(dx² + dy²), and the image-distance adjustment amount is dz = (zl / 2a) · out_p, where P1 < out_p < P2, (P1, P2) is the image-distance range for sharp imaging, 2a is the entrance pupil diameter, and zl is the diameter of the diffusion spot. The adjustment unit adjusts the image distance by this amount according to the diameter of the diffusion spot required by the camera.
In a further embodiment, the system further includes:
an image acquisition and processing unit 330, configured to obtain an image of the measured target after the adjustment unit 320 has adjusted the image distance, process the image, and extract the position of the detected target in the image.
The image acquisition and processing unit 330, as shown in FIG. 3, includes:
a preprocessing sub-unit 331, configured to preprocess all images with median filtering;
a binarization sub-unit 332, configured to binarize the image captured by each camera by the threshold segmentation method;
an extraction sub-unit 333, configured to extract the connected domain of the detected target from the binarized image using the blob binary connected-domain detection technique;
a coordinate acquisition unit 334, configured to calculate the sub-pixel centroid coordinates of the detected object in each image using the centroid method with a threshold.
An embodiment of the present invention further provides a stereo vision system for target detection, including at least two cameras, wherein a first camera is configured to capture images in a defocused state;
a second camera is configured to capture images in a focused or defocused state;
the first camera or the second camera is further configured to perform target detection according to the images captured by the first camera and the second camera, and to reconstruct the three-dimensional position of the target according to the target detection results of the first camera and the second camera. Specifically: target detection is performed according to the images captured by the first camera and the second camera to obtain the coordinates of each pixel point; the sub-pixel centroid coordinates are calculated from these pixel coordinates, and the three-dimensional spatial coordinates of the target's center of gravity are calculated from the sub-pixel centroid coordinates. The sub-pixel centroid coordinates are calculated as follows:
According to the coordinates in the pixel sequence and their corresponding gray values in the original image, the centroid coordinates are calculated using the centroid method with a threshold: a background gray-level threshold K is set, and the centroid coordinates are then calculated as
x0 = ΣΣ [F(x, y) − K] · x / ΣΣ [F(x, y) − K]
y0 = ΣΣ [F(x, y) − K] · y / ΣΣ [F(x, y) − K]
where F(x, y) is the gray value at pixel (x, y) and the sums run over the pixels of the target's connected domain.
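A direct NumPy transcription of the thresholded centroid formulas above is sketched below. The example patch and the value of K are illustrative; clamping negative contributions (pixels with F ≤ K) to zero is a common convention that the patent does not spell out.

```python
import numpy as np

def thresholded_centroid(patch, K):
    """Sub-pixel centroid of a grayscale patch F(x, y) with background threshold K:
    x0 = sum((F - K) * x) / sum(F - K),  y0 = sum((F - K) * y) / sum(F - K)."""
    F = patch.astype(np.float64)
    w = np.clip(F - K, 0.0, None)                  # pixels at or below K contribute nothing
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = w.sum()
    if total == 0:
        raise ValueError("no pixel above the background threshold K")
    return (w * xs).sum() / total, (w * ys).sum() / total

# Example: a bright, symmetric spot on a dark background.
patch = np.array([[10,  10,  10,  10, 10],
                  [10,  80, 120,  80, 10],
                  [10, 120, 200, 120, 10],
                  [10,  80, 120,  80, 10],
                  [10,  10,  10,  10, 10]], dtype=np.uint8)
x0, y0 = thresholded_centroid(patch, K=20)
print(f"sub-pixel centroid: ({x0:.3f}, {y0:.3f})")   # symmetric spot -> (2.000, 2.000)
```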
The three-dimensional coordinates of the target's center of gravity are calculated as follows: with each camera coordinate system taken in turn as the world coordinate system, the three-dimensional coordinates (x, y, z) of the target's center of gravity are calculated according to the formula below. Here the calculation uses the data of the i-th camera and the j-th camera, and j = i + 1 may be set.
(x, y, z) = Σ_{i=1}^{M} w_i · (x_i, y_i, z_i), with Σ_{i=1}^{M} w_i = 1,
where M is the total number of cameras, w_i is the weight of the i-th camera, and (x_i, y_i, z_i) is the centroid coordinate obtained from the data of the i-th and j-th cameras.
The above embodiments can be applied to at least two video cameras or still cameras, or to a system composed of multiple cameras. In the defocused state the image is blurred and recognition of the target becomes more difficult; a multi-camera coordination method can therefore be adopted, in which at least one lens is adjusted to the defocused state by the above method and system. Taking two cameras A and B as an example, camera A is adjusted to the focused state and camera B to the defocused state. When the image in camera A is large, the target is recognized in the image of camera A and extracted from the image of camera A or camera B; when the image in camera A is small, the target is recognized in the image of camera A or camera B and extracted from the image of camera B. Using the two cameras, one in the focused state and one in the defocused state, for target recognition and extraction achieves high recognition and position-extraction accuracy for both large and small targets. For a multi-camera system, the accuracy of the reconstructed three-dimensional coordinates can be improved (a sketch of this coordination and fusion follows).
It should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such modifications shall fall within the scope of the claims of the present invention.
Claims
1. A target detection method, comprising the steps of:
A. capturing an image with a camera;
B. performing target detection according to the image captured by the camera;
characterized in that, before step A, the camera is adjusted to a defocused state.
2. The target detection method according to claim 1, characterized in that adjusting the camera to the defocused state specifically comprises: adjusting the image distance according to the diameter of the diffusion spot required by the camera.
3. The target detection method according to claim 2, characterized in that the adjustment amount of the image distance according to the diameter of the diffusion spot required by the camera is dz = (zl / 2a) · out_p, where P1 < out_p < P2, (P1, P2) is the image-distance range for sharp imaging, 2a is the entrance pupil diameter, and zl is the diameter of the diffusion spot.
4. A target detection system, applied to a camera to capture images and to perform target detection according to the captured images, characterized by comprising:
an adjustment unit, configured to adjust the camera to a defocused state before an image is captured.
5. The target detection system according to claim 4, characterized in that the adjustment unit is configured to adjust the image distance according to the diameter of the diffusion spot required by the camera.
6. The target detection system according to claim 5, characterized in that the adjustment unit is configured to adjust the image distance according to the diameter of the diffusion spot required by the camera by an adjustment amount of dz = (zl / 2a) · out_p, where P1 < out_p < P2, (P1, P2) is the image-distance range for sharp imaging, 2a is the entrance pupil diameter, and zl is the diameter of the diffusion spot.
7. A stereo vision system for target detection, comprising at least two cameras, characterized in that a first camera is configured to capture images in a defocused state;
a second camera is configured to capture images in a focused or defocused state;
the first camera or the second camera is further configured to perform target detection according to the images captured by the first camera and the second camera, and to reconstruct the three-dimensional position of the target according to the target detection results of the first camera and the second camera.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200910107429.X | 2009-05-20 | ||
CN200910107429XA CN101571953B (zh) | 2009-05-20 | 2009-05-20 | Target detection method, system and stereo vision system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010133099A1 true WO2010133099A1 (zh) | 2010-11-25 |
Family
ID=41231305
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2010/070846 WO2010133099A1 (zh) | 2010-03-03 | Target detection method, system and stereo vision system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN101571953B (zh) |
WO (1) | WO2010133099A1 (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101571953B (zh) * | 2009-05-20 | 2012-04-25 | 深圳泰山在线科技有限公司 | Target detection method, system and stereo vision system |
CN102939562B (zh) * | 2010-05-19 | 2015-02-18 | 深圳泰山在线科技有限公司 | Target projection method and system |
CN104655045B (zh) * | 2015-02-04 | 2017-05-31 | 中国科学院西安光学精密机械研究所 | Quantitative analysis method for the circularity of the diffusion spot of a star sensor optical system |
CN112581374A (zh) * | 2019-09-29 | 2021-03-30 | 深圳市光鉴科技有限公司 | Speckle sub-pixel center extraction method, system, device and medium |
CN113347335B (zh) * | 2021-05-31 | 2022-08-30 | 浙江大华技术股份有限公司 | Focusing method and apparatus, electronic device and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN86202639U (zh) * | 1986-05-05 | 1987-07-15 | 清华大学 | Multifunctional moiré illuminator |
CN1119476A (zh) * | 1993-03-17 | 1996-03-27 | 德国汤姆逊-布朗特公司 | Compatible recording and/or reproducing method and apparatus |
CN1263282A (zh) * | 1999-02-12 | 2000-08-16 | 怡利电子工业股份有限公司 | Defocusing distance measurement method |
CN101261115A (zh) * | 2008-04-24 | 2008-09-10 | 吉林大学 | Binocular stereo vision measurement method for geometric parameters of a spatial circle |
CN101294801A (zh) * | 2007-07-13 | 2008-10-29 | 东南大学 | Vehicle distance measurement method based on binocular vision |
CN101571953A (zh) * | 2009-05-20 | 2009-11-04 | 深圳泰山在线科技有限公司 | Target detection method, system and stereo vision system |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3481631B2 (ja) * | 1995-06-07 | 2003-12-22 | ザ トラスティース オブ コロンビア ユニヴァーシティー イン ザ シティー オブ ニューヨーク | Apparatus and method for determining the three-dimensional shape of an object using active illumination and relative blurring in images caused by defocus |
EP1684503B1 (en) * | 2005-01-25 | 2016-01-13 | Canon Kabushiki Kaisha | Camera and autofocus control method therefor |
-
2009
- 2009-05-20 CN CN200910107429XA patent/CN101571953B/zh active Active
-
2010
- 2010-03-03 WO PCT/CN2010/070846 patent/WO2010133099A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN101571953A (zh) | 2009-11-04 |
CN101571953B (zh) | 2012-04-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10777303 Country of ref document: EP Kind code of ref document: A1 |
|
DPE2 | Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101) | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 10777303 Country of ref document: EP Kind code of ref document: A1 |