CN110378970B - Monocular vision deviation detection method and device for AGV - Google Patents

Monocular vision deviation detection method and device for AGV

Info

Publication number: CN110378970B (application CN201910610781.9A)
Authority: CN (China)
Prior art keywords: pixel, image, positioning block, coordinate system, AGV
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other versions: CN110378970A
Original language: Chinese (zh)
Inventors: 曹小华, 刘鹏
Current and original assignee: Wuhan University of Technology (WUT)
Application filed by Wuhan University of Technology; priority per application CN201910610781.9A; granted and published as CN110378970B


Classifications

    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; edge detection involving thresholding
    • G06T7/194 Segmentation; edge detection involving foreground-background segmentation
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
    • G06T2207/10016 Video; image sequence
    • G06T2207/10024 Color image
    • G06T2207/20024 Filtering details
    • G06T2207/20076 Probabilistic image processing


Abstract

The invention provides a monocular vision deviation detection method and device for an AGV (automated guided vehicle). Collected image data are converted to grayscale and sampled; a dynamic segmentation threshold is computed from the sample image data and applied to the grayscale data to obtain a binary image. The binary image is screened: objects that may be foreground noise are morphologically filtered, and the filtered pixels are thresholded again. The image is then partitioned with a grid; all grid cells are traversed, and the cells containing foreground color form a set. The elements of the set are clustered to separate the positioning blocks, whose center-point coordinates are computed by an averaging method. Finally, a spatial conversion model between the image coordinate system and the pixel coordinate system is established from the positioning-block centers, and the deviation is solved. The invention improves the anti-interference capability of the AGV while meeting the required accuracy.

Description

A monocular vision deviation detection method and device for an AGV

Technical Field

The invention belongs to the technical field of logistics automation, and in particular relates to a monocular vision deviation detection method and device for an AGV.

Background

In recent years, with the rise of smart logistics, unmanned warehouses have emerged to meet the requirement of unmanned operation. As the carrier for storage, sorting and outbound handling of goods, the AGV is widely used in unmanned warehouses. However, the sensor errors of an inertially navigated AGV accumulate over time, so a deviation detection device with accurate results, high stability and strong anti-interference ability is needed to eliminate the accumulated error.

To eliminate the accumulated error, the two-dimensional-code deviation detection method currently in use has the advantage of carrying rich information: it can realize positioning while detecting deviation. But precisely because of this rich information, its anti-interference ability is insufficient: when strong light blurs the borders of the QR code, a large amount of useful information is lost, and when the code is stained, no useful information can be extracted, so deviation detection fails. This positioning method also suffers from difficult equipment maintenance and high cost, which is unfavorable for saving the production costs of the enterprise.

Summary of the Invention

The technical problem to be solved by the present invention is to provide a monocular vision deviation detection method and device for an AGV that improves the anti-interference ability of the AGV while satisfying the required accuracy.

The technical solution adopted by the present invention to solve the above problem is a monocular vision deviation detection method for an AGV, comprising the following steps:

S1. Perform grayscale processing on the collected image data to obtain grayscale data. The image data are obtained by photographing a beacon that contains three positioning blocks; each positioning block is a figure with at least one pair of perpendicular symmetry axes, and the center points of the three positioning blocks can form a Cartesian coordinate system.

S2. Sample the collected image data to obtain sample image data, compute a dynamic segmentation threshold T from the sample image data, and perform threshold segmentation on the grayscale data to obtain a binary image.

S3. Screen the binary image, apply morphological filtering to objects that may be foreground noise, and re-threshold the filtered pixels. An object that may be foreground noise is a pixel whose value falls in a given range.

S4. Partition the image obtained in S3 with a grid, traverse all grid cells, and let the cells containing foreground color form a set S.

S5. Cluster the elements of set S and separate the positioning blocks.

S6. Compute the center-point coordinates of each positioning block by the average-value method.

S7. Establish a spatial conversion model between the image coordinate system and the pixel coordinate system from the positioning-block center coordinates, and solve for the deviation.

According to the above method, S1 specifically processes every pixel, row by row and column by column, with:

g(x, y) = a·f_R(x, y) + b·f_G(x, y) + c·f_B(x, y)

subject to the constraint that a, b, c are all positive integers, where f_R(x, y), f_G(x, y), f_B(x, y) are the R, G, B components of pixel f(x, y), and g(x, y) is the grayscale result.
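The weighted grayscale step can be sketched as follows. The patent does not give concrete values for the weights a, b, c; a = b = c = 1 (a clipped channel sum) is used here purely as an illustrative assumption.

```python
def to_gray(rgb_pixels, a=1, b=1, c=1):
    """Apply g(x,y) = a*fR + b*fG + c*fB per pixel, clipped to 8 bits.

    rgb_pixels: list of rows, each row a list of (R, G, B) tuples.
    Weights default to 1 as an assumption; the patent only requires
    them to be positive integers.
    """
    return [
        [min(255, a * r + b * g + c * bl) for (r, g, bl) in row]
        for row in rgb_pixels
    ]

gray = to_gray([[(10, 20, 30), (200, 200, 200)]])
```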

According to the above method, S2 specifically includes:

2.1. From the data obtained in step S1, compute the dynamic segmentation threshold T by the maximum between-class variance (Otsu) method:

Assume the sample pixels are divided into grey levels 1, 2, …, m, and let n_i be the number of pixels with grey value i. The total pixel count is

N = n_1 + n_2 + … + n_m

and the probability of each grey value satisfies

P_i = n_i / N.

Choose an integer k that divides the pixels into two groups, C0 = {1, 2, …, k} and C1 = {k+1, k+2, …, m}. The between-class variance is

σ²(k) = ω0·ω1·(μ1 − μ0)²

where ω0 = P_1 + … + P_k is the probability of C0; μ0 = (1·P_1 + … + k·P_k) / ω0 is the mean of C0; ω1 = P_{k+1} + … + P_m is the probability of C1; and μ1 = ((k+1)·P_{k+1} + … + m·P_m) / ω1 is the mean of C1.

Vary k over 1, 2, …, m and find the value that maximizes the variance; the k achieving max σ²(k) is the dynamic segmentation threshold T.

2.2. Using the data from step S1 and the threshold T from 2.1, obtain the binary image as

h(x, y) = 1 if g(x, y) > T; h(x, y) = 0 otherwise

where 1 represents white, 0 represents black, g(x, y) is the grey value of the pixel in column x, row y, and h(x, y) is its value after segmentation.
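Steps 2.1 and 2.2 can be sketched in pure Python. This sketch assumes 8-bit grey levels 0..255 (the patent indexes levels from 1 to m) and the convention that values above T become white (1):

```python
def otsu_threshold(samples):
    """Maximum between-class variance (Otsu) threshold of sampled grey values."""
    hist = [0] * 256
    for v in samples:
        hist[v] += 1
    total = len(samples)
    p = [h / total for h in hist]          # P_i = n_i / N
    best_k, best_var = 0, -1.0
    for k in range(256):
        w0 = sum(p[: k + 1])               # probability of group C0
        w1 = 1.0 - w0                      # probability of group C1
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * p[i] for i in range(k + 1)) / w0
        mu1 = sum(i * p[i] for i in range(k + 1, 256)) / w1
        var = w0 * w1 * (mu1 - mu0) ** 2   # between-class variance
        if var > best_var:
            best_var, best_k = var, k
    return best_k

def binarize(gray, t):
    # 1 = white, 0 = black, as in the patent's segmentation formula.
    return [[1 if v > t else 0 for v in row] for row in gray]
```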

According to the above method, S3 specifically includes:

For pixels whose grey value satisfies the noise-range condition (the condition formula appears in the original only as an image), apply the morphological filter

h(x, y) = Med{ g(s, t) : (s, t) ∈ A }

where Med denotes the middle value after sorting the grey values of all pixels in the filter window A in ascending order, and A is the filter window.

Then re-apply the threshold segmentation of step 2.2 to the filtered pixels.
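The median filtering step can be sketched as below. The window size is not specified in the patent; a 3×3 window clipped at the image border is an illustrative assumption:

```python
def median_filter_at(gray, x, y):
    """Median grey value of the (assumed) 3x3 window A centred on (x, y).

    gray: list of rows of grey values; the window is clipped at borders.
    """
    h, w = len(gray), len(gray[0])
    window = [
        gray[j][i]
        for j in range(max(0, y - 1), min(h, y + 2))
        for i in range(max(0, x - 1), min(w, x + 2))
    ]
    window.sort()
    return window[len(window) // 2]  # Med of the ascending-sorted window
```

A pixel flagged as possible foreground noise would be replaced by this value and then re-thresholded.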

According to the above method, S5 clusters as follows:

5.1. Traverse all grid cells; take the grid coordinates of cells containing positioning-block image as elements of set S, and set the positioning-block count N = 0.

5.2. Set N = N + 1 and create a positioning-block set A_N; move the first element of S into A_N, removing it from S.

5.3. Compare, in turn, the grid positions of the remaining elements of S against all elements of A_N; whenever two grid cells are adjacent, move the corresponding element of S into A_N, removing it from S.

5.4. Obtain the final positioning-block set A_N.
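Steps 5.1–5.4 amount to growing each set A_N with adjacent foreground cells until no cell in S touches it. A minimal sketch, assuming 8-neighbour adjacency (the patent does not say whether diagonal cells count as adjacent):

```python
def cluster_cells(cells):
    """Cluster foreground grid cells into positioning blocks.

    cells: set of (col, row) grid coordinates containing foreground.
    Returns a list of sets, one per positioning block.
    """
    s = set(cells)
    blocks = []
    while s:                       # one iteration per positioning block A_N
        block = {s.pop()}          # move the first element of S into A_N
        grew = True
        while grew:                # keep absorbing adjacent cells
            grew = False
            for c in list(s):
                if any(abs(c[0] - b[0]) <= 1 and abs(c[1] - b[1]) <= 1
                       for b in block):
                    block.add(c)
                    s.remove(c)
                    grew = True
        blocks.append(block)
    return blocks
```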

According to the above method, S6 computes the center-point coordinates as follows:

For a figure with two perpendicular symmetry axes, the averaging algorithm gives the center point as

x_t = (1/n) · (x_1 + x_2 + … + x_n),  y_t = (1/n) · (y_1 + y_2 + … + y_n)

where x_t and y_t are the abscissa and ordinate of the center of the t-th positioning block, n is the total number of black pixels in the t-th positioning block, and x_i and y_i are the abscissa and ordinate of the i-th black pixel in the t-th positioning block.
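The average-value center computation is a plain centroid of the block's black pixels:

```python
def block_center(black_pixels):
    """Centroid of a positioning block's black pixels.

    black_pixels: list of (x, y) coordinates of the block's black pixels.
    Returns (x_t, y_t), the mean of the coordinates.
    """
    n = len(black_pixels)
    xt = sum(x for x, _ in black_pixels) / n
    yt = sum(y for _, y in black_pixels) / n
    return xt, yt
```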

According to the above method, S7 specifically includes:

establishing the pixel coordinate system from the relative positions of the positioning-block center points;

determining in which quadrant of the pixel coordinate system the image center lies;

computing the distances from the image center to the X and Y axes of the pixel coordinate system;

correcting the signs of the deviation values according to the quadrant of the image center; and

computing the angle between corresponding axes of the two coordinate systems from the axis rotation between the pixel coordinate system and the image coordinate system.

A monocular vision deviation detection device for an AGV comprises an image acquisition device, a memory and a data processor mounted on the AGV. The image acquisition device collects image data, and the memory stores a computer program that the data processor invokes to carry out the monocular vision deviation detection method described above.

The beneficial effects of the present invention are:

1. After the beacon image is acquired, extracting the feature points of the beacon establishes the relative position between the image-sensor center and the beacon reference point, from which the X-axis offset, Y-axis offset and included angle of the image-sensor center relative to the beacon reference point in the absolute coordinate system can be computed. The automatic threshold segmentation method yields a clear binary image under different lighting conditions and is highly adaptable, and the average-value method for computing the positioning-block centers is strongly resistant to bright-light interference. Figure 6 compares the results at the same position under different levels of bright-light interference, where the illumination intensity increases from (a) to (b) to (c).

2. The clustering algorithm strongly filters out black stains on the beacon and can eliminate this interference automatically. Although changes in the lighting environment alter the black-and-white image obtained after grayscale processing, binary segmentation and noise filtering, the grid-clustering algorithm effectively minimizes this external influence: the computed reference-point coordinates fluctuate only within the allowable range, and the final result does not vary greatly with the lighting environment.

Brief Description of the Drawings

Figure 1 is a flowchart of the method according to an embodiment of the present invention.

Figure 2 is the relationship model among the absolute, image and pixel coordinate systems.

Figure 3 shows binary images obtained under different lighting conditions using dynamic threshold segmentation, where (a) is under normal lighting and (b) under weak lighting.

Figure 4 shows the effect of partitioning the image with a grid.

Figure 5 shows the positioning blocks extracted by the clustering method.

Figure 6 shows the effect under bright-light interference; the illumination increases from (a) to (b) to (c).

Figure 7 is the clustering flowchart.

Detailed Description

The present invention is further described below with reference to specific examples and the accompanying drawings.

The present invention provides a monocular vision deviation detection method for an AGV which, as shown in Figure 1, comprises the following steps:

S0. Start the camera and capture one frame. The image data are obtained by photographing a beacon containing three positioning blocks; each positioning block is a figure with at least one pair of perpendicular symmetry axes, such as a circle, square, rectangle or rhombus, and the center points of the three positioning blocks can form a Cartesian coordinate system.

S1. First grayscale the image data collected by the image sensor. A weighted-average grayscale algorithm based on the R, G, B components is used to enlarge the difference between foreground and background colors:

g(x, y) = a·f_R(x, y) + b·f_G(x, y) + c·f_B(x, y)

subject to the constraint that a, b, c are all positive integers, where f_R(x, y), f_G(x, y), f_B(x, y) are the R, G, B components of pixel f(x, y); f(x, y) is the color value of pixel (x, y) in the image coordinate system; and g(x, y) is the grayscale result for pixel (x, y).

S2. Sample the data of S1, compute the dynamic segmentation threshold T from the sample image data, and threshold the grayscale data of S1 to obtain a binary image. The results are shown in Figure 3, where (a) is the threshold segmentation result using the whole image as the data source and (b) is the result using the sample image data as the data source. Specifically:

The sampling principle is:

1. Row by row, compute the grey difference between the first pixel of the row and each remaining pixel. If a difference exceeds the set value, the row may contain both foreground and background colors; otherwise examine the next row.

2. For a row that may contain both foreground and background, count its foreground pixels. If the count exceeds the set value, the row is a required sample; otherwise examine the next row.
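The two sampling rules can be sketched as below. The patent only calls the thresholds "set values", so `diff_min`, `count_min` and the provisional dark-pixel cutoff `dark_max` are illustrative assumptions:

```python
def sample_rows(gray, diff_min=50, count_min=3, dark_max=100):
    """Select sample rows per the two sampling rules.

    Rule 1: keep a row only if some pixel differs from the row's first
            pixel by more than diff_min (may contain both colors).
    Rule 2: of those, keep rows with at least count_min foreground
            (dark, < dark_max) pixels.  All cutoffs are assumptions.
    """
    samples = []
    for row in gray:
        if max(abs(v - row[0]) for v in row) > diff_min:
            if sum(1 for v in row if v < dark_max) >= count_min:
                samples.append(row)
    return samples
```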

The dynamic threshold is computed as follows:

Assume the sample pixels are divided into grey levels 1, 2, …, m, and let n_i be the number of pixels with grey value i. The total pixel count is

N = n_1 + n_2 + … + n_m

and the probability of each grey value satisfies

P_i = n_i / N.

Choose an integer k that divides the pixels into two groups, C0 = {1, 2, …, k} and C1 = {k+1, k+2, …, m}. The between-class variance is

σ²(k) = ω0·ω1·(μ1 − μ0)²

where ω0 = P_1 + … + P_k is the probability of C0; μ0 = (1·P_1 + … + k·P_k) / ω0 is the mean of C0; ω1 = P_{k+1} + … + P_m is the probability of C1; and μ1 = ((k+1)·P_{k+1} + … + m·P_m) / ω1 is the mean of C1.

Vary k over 1, 2, …, m and find the value that maximizes the variance; the k achieving max σ²(k) is the optimal threshold T.

Once the dynamic segmentation threshold T is computed, the grayscale image can be binarized:

h(x, y) = 1 if g(x, y) > T; h(x, y) = 0 otherwise

where 1 represents white, 0 represents black, g(x, y) is the grey value of the pixel in column x, row y, and h(x, y) is its value after segmentation.

S3. Screen the binary image data obtained in S2 and apply morphological filtering to objects appearing in the background color that may be foreground noise; the filtered pixels are thresholded again. Noise inside the foreground region does not affect image clustering and has a negligible effect on extracting the positioning-block center coordinates, so it does not harm the accuracy of the result.

S4. Partition the image obtained in S3 with a grid; the segmentation effect is shown in Figure 4. Traverse all grid cells; the cells containing foreground color form set S. The grid-cell size must guarantee that, in every case, at least one cell containing only background color lies between any two positioning blocks. The binary image of S2 contains both foreground noise and the target foreground; S3 removes the foreground noise, leaving the target foreground, and S4 partitions the target foreground.

S5. Cluster the elements of set S and separate the positioning blocks. The clustering flow is shown in Figure 7: 5.1, traverse all grid cells and take the grid coordinates of cells containing positioning-block image as elements of set S, with positioning-block count N = 0; 5.2, set N = N + 1 and create positioning-block set A_N, moving the first element of S into A_N (and out of S); 5.3, compare in turn the grid positions of the remaining elements of S against all elements of A_N, and whenever two cells are adjacent move the corresponding element of S into A_N (and out of S); 5.4, obtain the final positioning-block set A_N. The clustering result is shown in Figure 5.

S6. Compute the center-point coordinates of the positioning blocks separated in S5. For any figure with at least one pair of mutually perpendicular symmetry axes, the center-point coordinates satisfy

x_t = (1/n) · (x_1 + x_2 + … + x_n),  y_t = (1/n) · (y_1 + y_2 + … + y_n)

where x_t and y_t are the abscissa and ordinate of the center of the t-th positioning block, n is the total number of black pixels in the t-th positioning block, and x_i and y_i are the abscissa and ordinate of the i-th black pixel in the t-th positioning block.

S7. Establish the spatial conversion model between the image coordinate system and the pixel coordinate system from the positioning-block centers and solve the deviation, as shown in Figure 2. The specific steps are:

1. Determine the relative positions of the three center points. Compute the pixel distances between the three center points; the two endpoints of the longest distance are positioning blocks B and C (which is which cannot yet be decided), and the remaining one is positioning block A.

2. Decide which endpoint is B and which is C. Take either endpoint of the longest segment from step 1 and denote it B′; form the vector from the center of block A to B′, and test on which side of this vector the third center lies. If it lies on the left, B′ corresponds to block B; otherwise B′ corresponds to block C.

3. Establish the pixel coordinate system. The vector from the center of block A to the center of block B coincides with the X axis of the pixel coordinate system and points along its positive half; the vector from the center of block A to the center of block C coincides with the Y axis and points along its positive half.
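Steps 1 and 2 (block A is the one off the longest segment; B versus C is decided by a side-of-vector test) can be sketched as below. Implementing "on the left" via the sign of a 2-D cross product, and the sign convention itself, are assumptions not spelled out in the patent:

```python
import itertools

def identify_blocks(p1, p2, p3):
    """Label three positioning-block centers as (A, B, C).

    The two endpoints of the longest pairwise segment are B'/C'; the
    remaining point is A.  B vs C is decided by the sign of the cross
    product (B'-A) x (C'-A) -- the sign convention is an assumption.
    """
    pts = [p1, p2, p3]
    b, c = max(itertools.combinations(pts, 2),
               key=lambda pr: (pr[0][0] - pr[1][0]) ** 2
                            + (pr[0][1] - pr[1][1]) ** 2)
    a = next(p for p in pts if p is not b and p is not c)
    cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    if cross < 0:          # third point not on the assumed "left" side
        b, c = c, b
    return a, b, c
```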

4. Compute the deviation, as follows.

1. Determine in which quadrant of the pixel coordinate system the image center lies. The test quantities t1 and t2 are given by a formula that appears in the original only as an image; the quadrant is read off their signs:

t1 < 0, t2 < 0: first quadrant, dx > 0, dy > 0.

t1 > 0, t2 < 0: second quadrant, dx < 0, dy > 0.

t1 > 0, t2 > 0: third quadrant, dx < 0, dy < 0.

t1 < 0, t2 > 0: fourth quadrant, dx > 0, dy < 0.

The distance dy between the image-sensor center and the beacon reference point, the distance dx between the image-sensor center and the beacon reference point, the angle θy between the image coordinate system and the pixel coordinate system in the Y-axis direction, and the angle θx in the X-axis direction are each computed by formulas that appear in the original only as images.

The angle θy or θx is the angle between the corresponding x or y coordinate axes of coordinate system X3O3Y3 and coordinate system X2O2Y2; clockwise rotation counts as negative and counterclockwise as positive, and the final result lies in the interval [−180°, 180°).

In these formulas, f is the focal length of the camera and h is the installation height.
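The dx and dy formulas above survive only as image placeholders in this copy. Since the text says they depend on the focal length f and installation height h, a plausible reconstruction is the usual pinhole-camera scaling for a downward-facing camera; this is an assumed sketch, not the patent's exact formula:

```python
def pixel_offset_to_ground(offset_px, f_px, h):
    """Assumed pinhole scaling: ground offset = pixel offset * h / f.

    offset_px: deviation of the image center in pixels along one axis.
    f_px:      focal length expressed in pixels.
    h:         camera installation height above the beacon plane.
    This reconstruction is an assumption; the patent's formula is
    given only as an image in the source.
    """
    return offset_px * h / f_px
```

For example, a 40-pixel offset with f = 800 px and h = 0.5 m would map to 0.025 m under this assumption.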

The present invention also provides a monocular vision deviation detection device for an AGV, comprising an image acquisition device, a memory and a data processor mounted on the AGV; the image acquisition device collects image data, and the memory stores a computer program invoked by the data processor to carry out the monocular vision deviation detection method described above.

The hardware system comprises a power supply circuit, a camera, an RS232 module, a CAN bus transceiver, an embedded microcontroller, a static RAM, a clock circuit, a 5 V-to-3.3 V circuit, a PCB base board and a PCB core board. The base board and core board are coupled through mating female and male terminals. The power circuit is soldered on the front of the base board and steps the 12 V DC supply down to 5 V and 3.3 V. The camera is mounted on the back of the base board, connected to the base-board circuit through a double-row female header, with the center of its CMOS image sensor coinciding with the center of the base board. The RS232 module is mounted on the front of the base board; its VCC and GND pins connect through the base board to VCC3.3 and GND of the power circuit, and its TX and RX pins to the corresponding female-terminal pins. The CAN bus transceiver module is mounted on the front of the base board; its VCC and GND pins connect to VCC3.3 and GND, and its CAN TX and CAN RX pins to the corresponding female-terminal pins. The embedded microcontroller is soldered on the front of the core board and performs the image grayscale processing, binary segmentation, noise filtering, grid clustering, computation of the reference-object center coordinates, and computation of the X-axis and Y-axis distances and the included angle relative to the target reference object. The static RAM stores the image information.

The method of the invention automatically sets the segmentation threshold according to the lighting conditions of the camera's working environment, automatically turns on an LED fill light under weak illumination, and accurately extracts the target-point coordinates even under strong illumination. It adapts well to the environment, its detection accuracy meets practical requirements, and it computes quickly with good real-time performance.

The above embodiments merely illustrate the design concept and features of the present invention; their purpose is to enable those skilled in the art to understand and implement the invention, and the scope of protection is not limited to them. Therefore, all equivalent changes or modifications based on the principles and design ideas disclosed herein fall within the scope of protection of the present invention.

Claims (6)

1. A monocular vision deviation detection method for an AGV, characterized in that it comprises the following steps:
S1, performing grayscale processing on the acquired image data to obtain grayscale data; the image data is obtained by photographing a beacon, the beacon comprises three positioning blocks, each positioning block is a figure having at least one pair of perpendicular symmetry axes, and the center points of the three positioning blocks can form a Cartesian coordinate system;
S2, sampling the acquired image data to obtain sample image data, calculating a dynamic segmentation threshold T from the sample image data, and performing threshold segmentation on the grayscale data to obtain a binary image;
S3, screening the binary image, performing morphological filtering on objects that may be foreground noise, and performing threshold segmentation again on the filtered pixel points; an object that may be foreground noise is a pixel point whose value falls within a certain range;
S4, dividing the image obtained in S3 with a grid, traversing all grid cells, and forming a set S from the grid cells that contain foreground;
S5, clustering the elements in the set S to separate the positioning blocks;
S6, obtaining the coordinates of the center point of each positioning block by a mean-value method;
S7, establishing a spatial conversion model between the image coordinate system and the pixel coordinate system from the center-point coordinates of the positioning blocks, and calculating the deviation;
in S5, clustering is carried out as follows:
5.1, for the set S obtained in S4, set the number of positioning blocks N = 0;
5.2, N = N + 1; establish a positioning block set A_N; move the first element of set S into set A_N while removing it from set S;
5.3, compare each remaining element of set S in turn with the grid positions represented by all elements already in A_N; judge whether the two grid cells are adjacent, and if so, move the corresponding element from set S into set A_N while removing it from set S;
5.4, obtain the final positioning block set A_N;
S7 specifically comprises:
establishing the pixel coordinate system by judging the relative positions of the center points of the positioning blocks;
determining the quadrant of the pixel coordinate system in which the center point of the image lies;
calculating the distances from the center point of the image to the X axis and Y axis of the pixel coordinate system;
correcting the sign of each deviation value according to that quadrant;
and calculating the included angle between corresponding coordinate axes from the rotation relationship between the pixel coordinate system and the image coordinate system.
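Steps 5.1 to 5.4 of the clustering procedure can be sketched in Python as follows; this is an illustrative reconstruction, and the 8-neighbourhood adjacency test and the function name are assumptions (the claim only requires judging whether two grid cells are adjacent):

```python
def cluster_grid_cells(cells):
    # cells: list of (row, col) grid cells that contain foreground (set S).
    # Returns a list of positioning-block sets A_1, A_2, ..., A_N.
    remaining = list(cells)
    blocks = []
    while remaining:
        block = [remaining.pop(0)]        # 5.2: seed A_N with first element of S
        changed = True
        while changed:                    # 5.3: absorb cells adjacent to A_N
            changed = False
            for cell in remaining[:]:
                if any(abs(cell[0] - b[0]) <= 1 and abs(cell[1] - b[1]) <= 1
                       for b in block):   # assumed 8-neighbourhood adjacency
                    remaining.remove(cell)
                    block.append(cell)
                    changed = True
        blocks.append(block)              # 5.4: finished positioning block A_N
    return blocks
```

For instance, cells (0,0) and (0,1) merge into one block while a distant cell (5,5) forms its own, so three positioning blocks in an image yield three separate sets.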
2. The monocular visual deviation detecting method of claim 1, wherein in S1 the grayscale processing is performed on each pixel row by row and column by column as:
g(x, y) = a·f_R(x, y) + b·f_G(x, y) + c·f_B(x, y)
subject to the constraint that a, b and c are positive integers;
f_R(x, y), f_G(x, y) and f_B(x, y) are the R, G and B components of the pixel f(x, y), and g(x, y) is the data after grayscale processing.
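The weighted-sum grayscale conversion can be sketched in Python; the weights 2/5/1 are an illustrative choice (the claim only requires positive integers), and the normalization by a + b + c, which keeps the result in 8-bit range, is an added assumption the claim leaves open:

```python
import numpy as np

def to_gray(rgb, a=2, b=5, c=1):
    # g = (a*R + b*G + c*B) / (a + b + c). Integer weights suit a fast
    # shift/divide implementation on an embedded microcontroller.
    rgb = np.asarray(rgb, dtype=np.uint32)
    ch_r, ch_g, ch_b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return ((a * ch_r + b * ch_g + c * ch_b) // (a + b + c)).astype(np.uint8)
```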
3. The monocular visual deviation detecting method of claim 1, wherein S2 specifically comprises:
2.1, calculating the dynamic segmentation threshold T from the data obtained in S1 by the maximum between-class variance method, as follows:
assume the sampled pixels are divided into gray levels 1, 2, …, m, and let n_i be the number of pixels with gray value i; the total number of pixels M then satisfies:
Figure FDA0004053961160000021
the probability P_i of each pixel value occurring satisfies:
Figure FDA0004053961160000022
an integer k is selected to divide the pixels into two groups, C_0 = {1, 2, …, k} and C_1 = {k+1, k+2, …, m}; the between-class variance is then: σ²(k) = ω_0·ω_1·(μ_1 − μ_0)²
where ω_0 is the probability of C_0 occurring,
Figure FDA0004053961160000023
μ_0 is the mean of C_0,
Figure FDA0004053961160000024
ω_1 is the probability of C_1 occurring,
Figure FDA0004053961160000025
μ_1 is the mean of C_1,
Figure FDA0004053961160000026
by varying k over 1, 2, …, m, the k that maximizes the variance is found; the k value giving max σ²(k) is the dynamic segmentation threshold T;
2.2, obtaining the binary image from the data of step S1 and the dynamic segmentation threshold T of step 2.1 as follows:
Figure FDA0004053961160000027
where 1 represents white and 0 represents black; g(x, y) is the gray value of the pixel in column x and row y, and h(x, y) is the segmented value of the pixel in column x and row y.
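Steps 2.1 and 2.2 amount to Otsu's method followed by binarization. A compact Python sketch is given below; the cumulative-moment form used here is algebraically equivalent to σ²(k) = ω_0·ω_1·(μ_1 − μ_0)², and the "g > T maps to white" rule is an assumption, since the patent gives the segmentation rule only in a figure:

```python
import numpy as np

def otsu_threshold(gray):
    # Sweep every candidate k and maximize the between-class variance.
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                  # P_i for each gray level
    omega = np.cumsum(p)                   # w0(k), probability of group C0
    mu = np.cumsum(p * np.arange(256))     # cumulative first moment up to k
    mu_total = mu[-1]
    sigma2 = np.zeros(256)
    valid = (omega > 0) & (omega < 1)      # both groups must be non-empty
    sigma2[valid] = ((mu_total * omega[valid] - mu[valid]) ** 2
                     / (omega[valid] * (1.0 - omega[valid])))
    return int(np.argmax(sigma2))          # k giving max sigma^2(k) is T

def binarize(gray, T):
    # h(x, y) = 1 (white) if g(x, y) > T else 0 (black)
    return (gray > T).astype(np.uint8)
```

On a bimodal image the returned T separates the two gray-level populations, after which `binarize` yields the claim's 1/0 white/black image.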
4. The monocular visual deviation detecting method of claim 1, wherein S3 specifically comprises:
for pixel points satisfying:
Figure FDA0004053961160000028
applying the following morphological filtering:
Figure FDA0004053961160000029
where A is the filtering window and Med denotes the median obtained by sorting all pixel gray values in window A in ascending order;
then re-thresholding:
Figure FDA0004053961160000031
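The median ("Med") filtering step can be sketched in Python. The pixel-value range that flags a pixel as possible foreground noise appears only in the patent figures, so the `lo`/`hi` bounds below are placeholders, and a 3x3 filtering window A is assumed:

```python
import numpy as np

def median_filter_suspects(gray, lo=60, hi=200, win=3):
    # Replace each suspect pixel (gray value in the placeholder range
    # [lo, hi]) by Med(A): the median of the gray values in the
    # win x win filtering window A centred on it. Border pixels are
    # left untouched for simplicity.
    out = gray.copy()
    r = win // 2
    h, w = gray.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            if lo <= gray[y, x] <= hi:
                out[y, x] = np.median(gray[y - r:y + r + 1,
                                           x - r:x + r + 1])
    return out
```

An isolated bright speck inside a uniform dark region is pulled back to the surrounding value, after which the re-thresholding of claim 4 classifies it correctly.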
5. The monocular vision deviation detecting method for an AGV of claim 1, characterized in that in S6 the center-point coordinates are obtained as follows:
for a figure having two perpendicular symmetry axes, the center point is calculated with a mean-value algorithm, its coordinates satisfying:
Figure FDA0004053961160000032
where x_t is the abscissa of the center point of the t-th positioning block, y_t is its ordinate, the denominator is the total number of black pixel points in the t-th positioning block, x_i is the abscissa of the i-th black pixel point in the t-th positioning block, and y_i is its ordinate.
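The mean-value center computation can be sketched in Python (an illustrative reconstruction; the function name is invented, and the 0 = black convention follows claim 3):

```python
import numpy as np

def block_center(binary):
    # Mean-value center of one positioning block: average the column
    # (x) and row (y) indices of all black pixels (value 0).
    ys, xs = np.nonzero(binary == 0)
    return xs.mean(), ys.mean()
```

Because each positioning block has two perpendicular symmetry axes, the mean of its pixel coordinates coincides with its geometric center.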
6. A monocular vision deviation detection device for an AGV, characterized in that it comprises an image acquisition device, a memory and a data processor arranged on the AGV; the image acquisition device collects image data, and the memory stores a computer program which is called by the data processor to carry out the monocular vision deviation detecting method for an AGV according to any one of claims 1 to 5.
CN201910610781.9A 2019-07-08 2019-07-08 Monocular vision deviation detection method and device for AGV Active CN110378970B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910610781.9A CN110378970B (en) 2019-07-08 2019-07-08 Monocular vision deviation detection method and device for AGV

Publications (2)

Publication Number Publication Date
CN110378970A CN110378970A (en) 2019-10-25
CN110378970B (en) 2023-03-10

Family

ID=68252440

Country Status (1)

Country Link
CN (1) CN110378970B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105528789A (en) * 2015-12-08 2016-04-27 深圳市恒科通多维视觉有限公司 Robot vision positioning method and device, and visual calibration method and device
CN107239748A (en) * 2017-05-16 2017-10-10 南京邮电大学 Robot target identification and localization method based on gridiron pattern calibration technique
CN107609451A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of high-precision vision localization method and system based on Quick Response Code
CN108596980A (en) * 2018-03-29 2018-09-28 中国人民解放军63920部队 Circular target vision positioning precision assessment method, device, storage medium and processing equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10198829B2 (en) * 2017-04-25 2019-02-05 Symbol Technologies, Llc Systems and methods for extrinsic calibration of a plurality of sensors

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Localization and navigation using QR code for mobile robot in indoor environment;Zhang H etal.;《Proceedings of the 2015 IEEE International Conference on Robotics and Biomimetics》;20151231;第2501-2506页 *
Rail alignment detection by fusing machine vision and inertial information; Zheng Shubin et al.; Journal of Vibration, Measurement & Diagnosis; 2018-04; Vol. 38, No. 2; pp. 394-403 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant