WO2018161664A1 - Point cloud mean background subtraction based method for 3D sonar image modeling - Google Patents
Point cloud mean background subtraction based method for 3D sonar image modeling
- Publication number
- WO2018161664A1 WO2018161664A1 PCT/CN2017/115173 CN2017115173W WO2018161664A1 WO 2018161664 A1 WO2018161664 A1 WO 2018161664A1 CN 2017115173 W CN2017115173 W CN 2017115173W WO 2018161664 A1 WO2018161664 A1 WO 2018161664A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- diff
- background
- point cloud
- pixel
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
- G06T2207/10136—3D ultrasound image
Definitions
- The invention belongs to the field of three-dimensional sonar image modeling, and particularly relates to a three-dimensional sonar image modeling method based on point cloud mean background subtraction.
- The three-dimensional sonar system can obtain target information in range, horizontal, and vertical dimensions, and produces clear, highly visible images.
- The underwater environment is complex, however, and is affected by the acquisition conditions and noise. Performing 3D reconstruction from large amounts of data, accurately identifying targets and establishing their mathematical models, so as to achieve convenient and rapid search and monitoring in complex underwater environments, involves considerable technical difficulty.
- Compared with the foreground targets to be monitored, the background remains relatively stable and repeatable over a long period of time.
- The background model is compared with the current image, and the known background information is subtracted to obtain the desired targets. Therefore, a three-dimensional sonar image modeling method based on point cloud mean background subtraction has significant practical engineering value.
- The present invention provides a three-dimensional sonar image modeling method based on point cloud mean background subtraction, which can quickly identify foreground targets in the background and establish corresponding mathematical models for subsequent processing; it is fast and automatically updates the background model as the environment changes.
- A three-dimensional sonar image modeling method based on point cloud mean background subtraction, comprising the following steps:
- the background model and the threshold TH are updated using the current frame image I(x, y, z).
- The specific steps of step (2) are as follows:
- I_t(x, y, z) represents the pixel value at coordinates (x, y, z) in the image at time t
- gap represents the time interval between two frames of images
- I_{t-gap}(x, y, z) represents the pixel value at coordinates (x, y, z) in the image at time t − gap
- M is the total number of frames of the image
- The threshold TH is determined from the mean value u_diff(x, y, z) of all the pixel differences and the standard deviation diff_std(x, y, z) of all the pixel differences, using the formula:
- ⁇ is the threshold coefficient and is generally set to 2.
- The specific process of step (3) is: subtract the pixel u(x, y, z) at the same position in the background model from the pixel I(x, y, z) of the current frame image to obtain the pixel difference d(x, y, z), and compare the pixel difference d(x, y, z) with the threshold TH to obtain an output image output(x, y, z), as follows:
- 0 indicates that the point (x, y, z) is considered part of the background and is not output
- 1 indicates that the point (x, y, z) differs from the background model and is displayed in the output image; the output image is a binary image.
- The specific steps of step (4) are as follows:
- the threshold TH is updated to TH' using the current frame image I(x, y, z), and the specific formula is:
- u′_diff(x, y, z) = (1 − α) × u_diff(x, y, z) + α × d(x, y, z)
- diff′_std(x, y, z) = (1 − α) × diff_std(x, y, z) + α × |d(x, y, z) − u′_diff(x, y, z)|
- TH′ = u′_diff(x, y, z) + β × diff′_std(x, y, z)
- ⁇ is the learning rate, and 0 ⁇ 1, the larger ⁇ , the faster the adaptation to background changes.
- the present invention has the following beneficial technical effects:
- After the background model is established, the present invention can quickly identify foreground targets and perform the corresponding mathematical modeling; it works well for underwater scenes with little background change.
- The method is highly robust: it automatically updates the background model as the environment changes, reducing the uncertainty caused by sudden environmental changes and enhancing the reliability of target recognition.
- The method is simple; once the background model is established, modeling is fast and efficient, with high accuracy in recognizing moving targets.
- FIG. 1 is a flowchart of the three-dimensional sonar image modeling method based on point cloud mean background subtraction according to the present invention.
- The present invention provides a three-dimensional sonar image modeling method based on point cloud mean background subtraction, comprising:
- S01 Acquire sonar data, and convert the three-dimensional sonar range image information corresponding to each frame of sonar data into point cloud data in global coordinates, where the point cloud data constitutes pixels of the image.
- The foreground target can be quickly identified and the corresponding mathematical modeling performed; the background model is automatically updated as the environment changes, reducing the uncertainty caused by sudden environmental changes and enhancing the reliability of target recognition.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
Abstract
A point cloud mean background subtraction based method for 3D sonar image modeling, comprising: (1) acquiring sonar data, and converting the 3D sonar range image information corresponding to each frame of sonar data into point cloud data in global coordinates (S01), the point cloud data constituting the pixels of an image; (2) taking the mean value u(x, y, z) of the pixels at the same position in a series of consecutive frame images as the pixel at the same position in the background model to obtain the background model, and determining, from the pixels in each frame image, the threshold TH used as the background criterion; (3) detecting the current frame image I(x, y, z) against the background model and the threshold TH to obtain an output image; (4) updating the background model and the threshold TH with the current frame image I(x, y, z). The method can quickly identify foreground targets in the background and establish corresponding mathematical models for subsequent processing; it is fast and automatically updates the background model as the environment changes.
Description
The invention belongs to the field of three-dimensional sonar image modeling, and in particular relates to a three-dimensional sonar image modeling method based on point cloud mean background subtraction.
China is a major coastal nation whose sustainable development will inevitably depend more and more on the ocean. Moreover, because of its strategic importance and enormous economic potential, the ocean is receiving increasing attention; the importance of comprehensively developing marine science and technology is self-evident.
As an important branch of marine science and technology, underwater acoustic detection has been widely applied in the development of marine resources. Underwater acoustic imaging has become an important means of large-scale underwater detection, with broad application prospects in areas such as the detection and tracking of divers, the identification and monitoring of mines and mine-like targets, and the obstacle avoidance and navigation of remotely operated and autonomous underwater vehicles.
A three-dimensional sonar system can obtain target information in range, horizontal, and vertical dimensions, and produces clear, highly visible images. However, the underwater environment is complex and affected by the acquisition conditions and noise. Performing 3D reconstruction from large amounts of data, accurately identifying targets and establishing their mathematical models, and thereby achieving convenient and rapid search and monitoring in complex underwater environments, involves considerable technical difficulty.
Compared with the foreground targets to be monitored, the background remains relatively stable and repeatable over a long period of time. By building a background model of the current environment, comparing it with the current image, and subtracting the known background information, the desired foreground targets can be roughly obtained. A three-dimensional sonar image modeling method based on point cloud mean background subtraction therefore has significant practical engineering value.
Summary of the Invention
In view of the above, the present invention provides a three-dimensional sonar image modeling method based on point cloud mean background subtraction. The method can quickly identify foreground targets in the background and establish corresponding mathematical models for subsequent processing; it is fast and automatically updates the background model as the environment changes.
A three-dimensional sonar image modeling method based on point cloud mean background subtraction comprises the following steps:
(1) Acquire sonar data, and convert the 3D sonar range image information corresponding to each frame of sonar data into point cloud data in global coordinates; the point cloud data constitute the pixels of an image;
(2) Take the mean value u(x, y, z) of the pixels at the same position in a series of consecutive frame images as the pixel at the same position in the background model to obtain the background model, and determine, from the pixels in each frame image, the threshold TH used as the background criterion;
(3) Detect the current frame image I(x, y, z) against the background model and the threshold TH to obtain an output image;
(4) Update the background model and the threshold TH with the current frame image I(x, y, z).
The specific steps of step (2) are as follows:
(2-1) Positions at which no point cloud data exist in the series of consecutive frame images are uniformly marked as empty, yielding a preprocessed image set;
(2-2) Compute the mean value u(x, y, z) of the pixels at the same position across all images in the preprocessed image set as the pixel at the same position in the background model, yielding the background model;
(2-3) Compute the absolute pixel difference F(t)(x, y, z) between the same positions in two adjacent frame images, and obtain the mean value u_diff(x, y, z) of all absolute pixel differences, using the formula:
F(t)(x, y, z) = |I_t(x, y, z) − I_{t-gap}(x, y, z)|
where I_t(x, y, z) denotes the pixel value at coordinates (x, y, z) in the image at time t, gap denotes the time interval between two frame images, I_{t-gap}(x, y, z) denotes the pixel value at coordinates (x, y, z) in the image at time t − gap, and M is the total number of frames;
(2-4) Obtain the standard deviation diff_std(x, y, z) of all pixel differences, using the formula:
(2-5) Determine the threshold TH from the mean value u_diff(x, y, z) of all pixel differences and the standard deviation diff_std(x, y, z) of all pixel differences, using the formula:
TH = u_diff(x, y, z) + β × diff_std(x, y, z)
where β is the threshold coefficient, generally set to 2.
The specific process of step (3) is: subtract the pixel u(x, y, z) at the same position in the background model from the pixel I(x, y, z) of the current frame image to obtain the pixel difference d(x, y, z), and compare the pixel difference d(x, y, z) with the threshold TH to obtain the output image output(x, y, z), as follows:
where 0 indicates that the point (x, y, z) is considered part of the background and is not output, and 1 indicates that the point (x, y, z) differs from the background model and is displayed in the output image; the output image is a binary image.
The specific steps of step (4) are as follows:
(4-1) Update the pixel u(x, y, z) of the background model to u′(x, y, z) with the current frame image I(x, y, z), using the formula:
u′(x, y, z) = (1 − α) × u(x, y, z) + α × I(x, y, z)
(4-2) Update the threshold TH to TH′ with the current frame image I(x, y, z), using the formulas:
u′_diff(x, y, z) = (1 − α) × u_diff(x, y, z) + α × d(x, y, z)
diff′_std(x, y, z) = (1 − α) × diff_std(x, y, z) + α × |d(x, y, z) − u′_diff(x, y, z)|
TH′ = u′_diff(x, y, z) + β × diff′_std(x, y, z)
where α is the learning rate, with 0 < α < 1; the larger α is, the faster the model adapts to background changes.
Compared with the prior art, the present invention has the following beneficial technical effects:
(1) After the background model is established, the present invention can quickly identify foreground targets and perform the corresponding mathematical modeling; it works well for underwater scenes with little background change.
(2) The method is highly robust: it automatically updates the background model as the environment changes, reducing the uncertainty caused by sudden environmental changes and enhancing the reliability of target recognition.
(3) The method is simple; once the background model is established, modeling is fast and efficient, with high accuracy in recognizing moving targets.
FIG. 1 is a flowchart of the three-dimensional sonar image modeling method based on point cloud mean background subtraction according to the present invention.
To describe the present invention more concretely, its technical solution is explained in detail below with reference to FIG. 1 and specific embodiments.
As shown in FIG. 1, the three-dimensional sonar image modeling method based on point cloud mean background subtraction of the present invention comprises:
S01: Acquire sonar data, and convert the 3D sonar range image information corresponding to each frame of sonar data into point cloud data in global coordinates; the point cloud data constitute the pixels of an image.
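The conversion in S01 is not detailed in the text. As an illustration only, assuming each sonar beam is parameterized by azimuth and elevation angles and that the sensor pose reduces to a pure translation (neither assumption is stated in the patent), the range-image-to-point-cloud step might be sketched as:

```python
import numpy as np

def range_image_to_point_cloud(ranges, azimuths, elevations, sensor_pos=np.zeros(3)):
    """Convert one frame of 3D sonar range data to a point cloud in global
    coordinates. `ranges` has shape (len(azimuths), len(elevations)); the
    beam grid and the translation-only pose are illustrative assumptions."""
    az, el = np.meshgrid(azimuths, elevations, indexing="ij")
    # Spherical-to-Cartesian conversion (one common convention).
    x = ranges * np.cos(el) * np.cos(az)
    y = ranges * np.cos(el) * np.sin(az)
    z = ranges * np.sin(el)
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points + sensor_pos  # shift from sensor-local to global coordinates
```

Each returned point then serves as one pixel coordinate (x, y, z) of the image in the subsequent steps.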
S02: Positions at which no point cloud data exist in the series of consecutive frame images are uniformly marked as empty, yielding a preprocessed image set.
S03: Compute the mean value u(x, y, z) of the pixels at the same position across all images in the preprocessed image set as the pixel at the same position in the background model, yielding the background model.
S04: Compute the pixel difference F(t)(x, y, z) between the same positions in two adjacent frame images, and obtain the mean value u_diff(x, y, z) of all pixel differences, using the formula:
F(t)(x, y, z) = |I_t(x, y, z) − I_{t-gap}(x, y, z)|
where I_t(x, y, z) denotes the pixel value at coordinates (x, y, z) in the image at time t, gap denotes the time interval between two frame images, I_{t-gap}(x, y, z) denotes the pixel value at coordinates (x, y, z) in the image at time t − gap, and M is the total number of frames.
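The averaging formula referenced in S04 is not reproduced here (it appears as an image in the original). Assuming u_diff simply averages the differences F(t) over all valid frame pairs, the computation can be sketched as:

```python
import numpy as np

def mean_abs_frame_diff(frames, gap=1):
    """frames: array of shape (M, ...) holding M consecutive images on a
    common voxel grid. Returns the per-voxel mean absolute difference
    u_diff between frames `gap` time steps apart (assumed averaging rule)."""
    diffs = np.abs(frames[gap:] - frames[:-gap])  # F(t) for each valid t
    return diffs.mean(axis=0)                     # u_diff(x, y, z)
```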
S05: Obtain the standard deviation diff_std(x, y, z) of all pixel differences, using the formula:
S06: Determine the threshold TH from the mean value u_diff(x, y, z) of all pixel differences and the standard deviation diff_std(x, y, z) of all pixel differences, using the formula:
TH = u_diff(x, y, z) + β × diff_std(x, y, z)
where β is the threshold coefficient, set to 2.
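Combining S05 and S06 into one sketch (the standard-deviation formula in S05 is likewise not reproduced in the text; a plain per-voxel standard deviation of the F(t) values is assumed here):

```python
import numpy as np

def compute_threshold(frames, gap=1, beta=2.0):
    """Per-voxel threshold TH = u_diff + beta * diff_std, with diff_std
    taken as the (population) standard deviation of the frame differences
    F(t); the exact std formula is an assumption, not quoted from the patent."""
    diffs = np.abs(frames[gap:] - frames[:-gap])  # F(t)
    u_diff = diffs.mean(axis=0)
    diff_std = diffs.std(axis=0)
    return u_diff + beta * diff_std               # TH(x, y, z)
```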
S07: Subtract the pixel u(x, y, z) at the same position in the background model from the pixel I(x, y, z) of the current frame image to obtain the pixel difference d(x, y, z), and compare the pixel difference d(x, y, z) with the threshold TH to obtain the output image output(x, y, z), as follows:
where 0 indicates that the point is considered part of the background and is not output, and 1 indicates that the point differs from the background model and is displayed in the output image; the output image is a binary image.
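A sketch of the S07 detection step; whether d(x, y, z) is a signed or an absolute difference is not shown (the comparison formula is an image in the original), so an absolute difference is assumed:

```python
import numpy as np

def detect_foreground(current, background, th):
    """output(x, y, z) = 1 where the pixel difference d = |I - u| exceeds
    the threshold TH, else 0; the result is the binary output image."""
    d = np.abs(current - background)  # assumed absolute difference
    return (d > th).astype(np.uint8)
```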
S08: Update the pixel u(x, y, z) of the background model to u′(x, y, z) with the current frame image I(x, y, z), using the formula:
u′(x, y, z) = (1 − α) × u(x, y, z) + α × I(x, y, z)
S09: Update the threshold TH to TH′ with the current frame image I(x, y, z), using the formulas:
u′_diff(x, y, z) = (1 − α) × u_diff(x, y, z) + α × d(x, y, z)
diff′_std(x, y, z) = (1 − α) × diff_std(x, y, z) + α × |d(x, y, z) − u′_diff(x, y, z)|
TH′ = u′_diff(x, y, z) + β × diff′_std(x, y, z)
where α is the learning rate, with 0 < α < 1.
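The S08 and S09 update rules amount to an exponential running update with learning rate α. A sketch, reusing the absolute pixel difference d assumed in the detection step:

```python
import numpy as np

def update_model(u, u_diff, diff_std, current, alpha=0.05, beta=2.0):
    """Update the background model u and the threshold statistics from the
    current frame; alpha trades stability against adaptation speed."""
    d = np.abs(current - u)                        # d(x, y, z) vs. old model
    u_new = (1 - alpha) * u + alpha * current      # S08: u'
    u_diff_new = (1 - alpha) * u_diff + alpha * d  # u'_diff
    diff_std_new = (1 - alpha) * diff_std + alpha * np.abs(d - u_diff_new)
    th_new = u_diff_new + beta * diff_std_new      # S09: TH'
    return u_new, u_diff_new, diff_std_new, th_new
```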
With the above method, foreground targets can be identified very quickly and the corresponding mathematical modeling performed; the background model is automatically updated as the environment changes, reducing the uncertainty caused by sudden environmental changes and enhancing the reliability of target recognition.
The specific embodiments described above explain the technical solution and beneficial effects of the present invention in detail. It should be understood that the above is only the most preferred embodiment of the present invention and is not intended to limit the invention; any modifications, supplements, and equivalent substitutions made within the scope of the principles of the present invention shall fall within the protection scope of the present invention.
Claims (4)
- A three-dimensional sonar image modeling method based on point cloud mean background subtraction, comprising the following steps: (1) acquiring sonar data, and converting the 3D sonar range image information corresponding to each frame of sonar data into point cloud data in global coordinates, the point cloud data constituting the pixels of an image; (2) taking the mean value u(x, y, z) of the pixels at the same position in a series of consecutive frame images as the pixel at the same position in the background model to obtain the background model, and determining, from the pixels in each frame image, the threshold TH used as the background criterion; (3) detecting the current frame image I(x, y, z) against the background model and the threshold TH to obtain an output image; (4) updating the background model and the threshold TH with the current frame image I(x, y, z).
- The three-dimensional sonar image modeling method based on point cloud mean background subtraction according to claim 1, wherein the specific steps of step (2) are: (2-1) positions at which no point cloud data exist in the series of consecutive frame images are uniformly marked as empty, yielding a preprocessed image set; (2-2) computing the mean value u(x, y, z) of the pixels at the same position across all images in the preprocessed image set as the pixel at the same position in the background model, yielding the background model; (2-3) computing the absolute pixel difference F(t)(x, y, z) between the same positions in two adjacent frame images, and obtaining the mean value u_diff(x, y, z) of all absolute pixel differences, using the formula F(t)(x, y, z) = |I_t(x, y, z) − I_{t-gap}(x, y, z)|, where I_t(x, y, z) denotes the pixel value at coordinates (x, y, z) in the image at time t, gap denotes the time interval between two frame images, I_{t-gap}(x, y, z) denotes the pixel value at coordinates (x, y, z) in the image at time t − gap, and M is the total number of frames; (2-4) obtaining the standard deviation diff_std(x, y, z) of all pixel differences; (2-5) determining the threshold TH from the mean value u_diff(x, y, z) of all pixel differences and the standard deviation diff_std(x, y, z) of all pixel differences, using the formula TH = u_diff(x, y, z) + β × diff_std(x, y, z), where β is the threshold coefficient.
- The three-dimensional sonar image modeling method based on point cloud mean background subtraction according to claim 1, wherein the specific steps of step (4) are: (4-1) updating the pixel u(x, y, z) of the background model to u′(x, y, z) with the current frame image I(x, y, z), using the formula u′(x, y, z) = (1 − α) × u(x, y, z) + α × I(x, y, z); (4-2) updating the threshold TH to TH′ with the current frame image I(x, y, z), using the formulas u′_diff(x, y, z) = (1 − α) × u_diff(x, y, z) + α × d(x, y, z), diff′_std(x, y, z) = (1 − α) × diff_std(x, y, z) + α × |d(x, y, z) − u′_diff(x, y, z)|, and TH′ = u′_diff(x, y, z) + β × diff′_std(x, y, z), where α is the learning rate and 0 < α < 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/075,946 US11217018B2 (en) | 2017-03-08 | 2017-12-08 | Point cloud mean background subtraction based method for 3D sonar image modeling |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710134480.4A CN106971395A (zh) | 2017-03-08 | 2017-03-08 | 一种基于点云平均背景差的三维声纳图像建模方法 |
CN201710134480.4 | 2017-03-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018161664A1 true WO2018161664A1 (zh) | 2018-09-13 |
Family
ID=59329459
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/115173 WO2018161664A1 (zh) | 2017-03-08 | 2017-12-08 | 一种基于点云平均背景差的三维声纳图像建模方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11217018B2 (zh) |
CN (1) | CN106971395A (zh) |
WO (1) | WO2018161664A1 (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106971395A (zh) * | 2017-03-08 | 2017-07-21 | 浙江大学 | 一种基于点云平均背景差的三维声纳图像建模方法 |
CN111383340B (zh) * | 2018-12-28 | 2023-10-17 | 成都皓图智能科技有限责任公司 | 一种基于3d图像的背景过滤方法、装置及系统 |
CN110991398A (zh) * | 2019-12-18 | 2020-04-10 | 长沙融创智胜电子科技有限公司 | 一种基于改进步态能量图的步态识别方法及系统 |
CN112652074A (zh) * | 2020-12-29 | 2021-04-13 | 湖北工业大学 | 基于平面模型的点云数据滤波算法 |
CN113256697B (zh) * | 2021-04-27 | 2023-07-18 | 武汉理工大学 | 水下场景的三维重建方法、系统、装置和存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100118122A1 (en) * | 2008-11-07 | 2010-05-13 | Honeywell International Inc. | Method and apparatus for combining range information with an optical image |
CN103197308A (zh) * | 2013-03-15 | 2013-07-10 | 浙江大学 | 基于多波束相控阵声纳系统的三维声纳可视化处理方法 |
CN103593877A (zh) * | 2013-11-07 | 2014-02-19 | 清华大学 | 合成孔径声纳图像的仿真方法及系统 |
US9558564B1 (en) * | 2014-05-02 | 2017-01-31 | Hrl Laboratories, Llc | Method for finding important changes in 3D point clouds |
CN106971395A (zh) * | 2017-03-08 | 2017-07-21 | 浙江大学 | 一种基于点云平均背景差的三维声纳图像建模方法 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10281577B2 (en) * | 2015-04-20 | 2019-05-07 | Navico Holding As | Methods and apparatuses for constructing a 3D sonar image of objects in an underwater environment |
KR102439245B1 (ko) * | 2016-01-29 | 2022-09-01 | 삼성전자주식회사 | 전자 장치 및 그 제어 방법 |
-
2017
- 2017-03-08 CN CN201710134480.4A patent/CN106971395A/zh active Pending
- 2017-12-08 WO PCT/CN2017/115173 patent/WO2018161664A1/zh active Application Filing
- 2017-12-08 US US16/075,946 patent/US11217018B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US20210217240A1 (en) | 2021-07-15 |
US11217018B2 (en) | 2022-01-04 |
CN106971395A (zh) | 2017-07-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018161664A1 (zh) | 一种基于点云平均背景差的三维声纳图像建模方法 | |
CN107833236B (zh) | 一种动态环境下结合语义的视觉定位系统和方法 | |
WO2020052540A1 (zh) | 对象标注方法、移动控制方法、装置、设备及存储介质 | |
US9996936B2 (en) | Predictor-corrector based pose detection | |
CN110689562A (zh) | 一种基于生成对抗网络的轨迹回环检测优化方法 | |
CN110675418A (zh) | 一种基于ds证据理论的目标轨迹优化方法 | |
CN110738121A (zh) | 一种前方车辆检测方法及检测系统 | |
CN109559330B (zh) | 运动目标的视觉跟踪方法、装置、电子设备及存储介质 | |
CN110570457B (zh) | 一种基于流数据的三维物体检测与跟踪方法 | |
CN103325112A (zh) | 动态场景中运动目标快速检测方法 | |
CN104933738A (zh) | 一种基于局部结构检测和对比度的视觉显著图生成方法 | |
CN111856445B (zh) | 一种目标检测方法、装置、设备及系统 | |
CN114217665A (zh) | 一种相机和激光雷达时间同步方法、装置及存储介质 | |
WO2022021661A1 (zh) | 一种基于高斯过程的视觉定位方法、系统及存储介质 | |
CN112541938A (zh) | 一种行人速度测量方法、系统、介质及计算设备 | |
CN114549549A (zh) | 一种动态环境下基于实例分割的动态目标建模跟踪方法 | |
CN114662587B (zh) | 一种基于激光雷达的三维目标感知方法、装置及系统 | |
CN112652020A (zh) | 一种基于AdaLAM算法的视觉SLAM方法 | |
CN112069997B (zh) | 一种基于DenseHR-Net的无人机自主着陆目标提取方法及装置 | |
CN117367404A (zh) | 基于动态场景下slam的视觉定位建图方法及系统 | |
CN116862832A (zh) | 一种基于三维实景模型的作业人员定位方法 | |
CN115236643A (zh) | 一种传感器标定方法、系统、装置、电子设备及介质 | |
CN104537691A (zh) | 基于分块同向速度累加光流场分割的运动目标检测方法 | |
KR101962933B1 (ko) | 해상 기동 물체 탐지 추적 방법 및 해상 기동 물체 탐지 추적 장치 | |
Geng et al. | A Vision-based Ship Speed Measurement Method Using Deep Learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17899387 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 17899387 Country of ref document: EP Kind code of ref document: A1 |