CN110378970B - Monocular vision deviation detection method and device for AGV - Google Patents
Monocular vision deviation detection method and device for AGV
- Publication number
- CN110378970B (application CN201910610781.9A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- image
- positioning block
- coordinate system
- calculating
- Prior art date
- Legal status: Active
Classifications
All classifications fall under G (Physics), G06 (Computing; calculating or counting), G06T (Image data processing or generation, in general):
- G06T7/11: Image analysis; segmentation; region-based segmentation
- G06T7/136: Segmentation; edge detection involving thresholding
- G06T7/194: Segmentation; edge detection involving foreground-background segmentation
- G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/10016: Image acquisition modality; video; image sequence
- G06T2207/10024: Image acquisition modality; color image
- G06T2207/20024: Special algorithmic details; filtering details
- G06T2207/20076: Special algorithmic details; probabilistic image processing
Abstract
The invention provides a monocular vision deviation detection method and device for an AGV (automated guided vehicle). The collected image data are grayscale-processed and sampled, a dynamic segmentation threshold is calculated from the sample image data, and the grayscale data are threshold-segmented to obtain a binary image. The binary image is screened: objects that may be foreground noise are morphologically filtered, and the filtered pixels are threshold-segmented again. The image is then divided by a grid, all grid cells are traversed, and the cells containing foreground color form a set. The elements of the set are clustered to separate the positioning blocks, and the center-point coordinates of each positioning block are computed by an averaging method. Finally, a spatial transformation model between the image coordinate system and the pixel coordinate system is established from the positioning-block center coordinates, and the deviation is calculated. The invention improves the anti-interference capability of the AGV while meeting the required accuracy.
Description
Technical Field
The invention belongs to the technical field of logistics automation, and particularly relates to a monocular vision deviation detection method and device for an AGV.
Background
With the rise of intelligent logistics in recent years, unmanned warehouses have emerged to meet the demand for unattended operation. AGVs serve as the carriers for warehousing, sorting, delivery and other goods-handling operations and are widely used in unmanned warehouses. Inertial-navigation AGVs, however, accumulate position error over time because of sensor drift. A deviation detection device with accurate results, high stability and strong anti-interference capability is therefore needed to eliminate the accumulated error.
The two-dimensional-code deviation detection method currently used to eliminate accumulated error has the advantage of carrying abundant information, enabling both deviation detection and positioning. But that same information density makes it vulnerable to interference: under strong light the code boundary blurs and much of the useful information is lost, and stains can make the useful information unextractable, so deviation detection fails. The method also suffers from difficult equipment maintenance and high cost, making it hard for enterprises to reduce production costs.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a monocular vision deviation detection method and device for an AGV that improve the AGV's anti-interference capability while meeting the required accuracy.
The technical solution adopted by the invention to solve this problem is a monocular vision deviation detection method for an AGV, characterized in that it comprises the following steps:
S1, performing grayscale processing on the collected image data to obtain grayscale data; the image data are obtained by photographing a beacon, the beacon comprises three positioning blocks, each positioning block is a shape having at least one pair of mutually perpendicular symmetry axes, and the center points of the three positioning blocks can form a Cartesian coordinate system;
S2, sampling the collected image data to obtain sample image data, calculating a dynamic segmentation threshold T from the sample image data, and threshold-segmenting the grayscale data to obtain a binary image;
S3, screening the binary image, performing morphological filtering on objects that may be foreground noise, and threshold-segmenting the filtered pixels again; an object that may be foreground noise is a pixel whose value falls within a certain range;
S4, segmenting the image obtained in S3 with a grid, traversing all grid cells, and forming a set S from the grid cells containing foreground color;
S5, clustering the elements of set S and separating the positioning blocks;
S6, obtaining the center-point coordinates of the positioning blocks by an averaging method;
and S7, establishing a spatial transformation model between the image coordinate system and the pixel coordinate system from the positioning-block center coordinates, and calculating the deviation.
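Read as a pipeline, steps S1 to S7 might chain together as in the following Python sketch. Every function named here is hypothetical (minimal sketches of each appear in the detailed description below), not an API defined by the patent:

```python
# Hypothetical end-to-end pipeline for steps S1-S7. None of these
# helper names come from the patent; each is sketched further below.
def detect_deviation(frame, f_px, mount_h):
    gray = weighted_grayscale(frame)                       # S1: grayscale
    T = otsu_threshold(sample_rows(gray))                  # S2: dynamic threshold
    binary = median_refilter(gray, T)                      # S2 binarize + S3 filter
    cells = foreground_grid_cells(binary)                  # S4: grid partition
    blocks = cluster_cells(cells)                          # S5: clustering
    centers = [block_center(binary, blk) for blk in blocks]  # S6: centers
    img_center = (gray.shape[1] / 2.0, gray.shape[0] / 2.0)
    return deviation_from_centers(centers, img_center, f_px, mount_h)  # S7
```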
According to the method, in S1 the grayscale processing is applied to each pixel row by row and column by column as:
g(x,y) = a·f_R(x,y) + b·f_G(x,y) + c·f_B(x,y)
with the constraint that a, b and c are positive integers;
f_R(x,y), f_G(x,y) and f_B(x,y) are the R, G, B components of pixel f(x,y), and g(x,y) is the grayscale-processed value.
According to the method, S2 specifically comprises the following steps:
2.1, calculating the dynamic segmentation threshold T from the data obtained in S1 using the maximum between-class variance (Otsu) method:
Assume the sampled pixels fall into gray levels 1, 2, …, m, and let n_i be the number of pixels with gray value i. The total number of pixels is N = Σ_{i=1..m} n_i, and the probability of each pixel value is P_i = n_i / N. An integer k divides the pixels into two groups, C_0 = {1, 2, …, k} and C_1 = {k+1, k+2, …, m}; the between-class variance is
σ²(k) = ω₀ ω₁ (μ₁ − μ₀)²
where ω₀ = Σ_{i=1..k} P_i is the probability of C_0, μ₀ = (Σ_{i=1..k} i·P_i) / ω₀ is the mean of C_0, ω₁ = Σ_{i=k+1..m} P_i is the probability of C_1, and μ₁ = (Σ_{i=k+1..m} i·P_i) / ω₁ is the mean of C_1;
varying k over 1, 2, …, m, the k achieving max σ²(k) is the dynamic segmentation threshold T;
2.2, from the data obtained in S1 and the dynamic segmentation threshold T obtained in 2.1, obtaining the binary image as:
h(x,y) = 1 if g(x,y) > T, and h(x,y) = 0 if g(x,y) ≤ T
where 1 represents white, 0 represents black, g(x,y) is the gray value of the pixel in column x and row y, and h(x,y) is the segmented value of that pixel.
According to the method, S3 specifically comprises:
for pixels whose gray value falls within the given range (candidate foreground noise), a median filter is applied:
h(x,y) = med{ g(s,t) | (s,t) ∈ A }
where med denotes the middle value after sorting all pixel gray values in the filtering window A in ascending order, and A is the filtering window;
the filtered pixels are then threshold-segmented again with the threshold T.
according to the method, the S5 is clustered according to the following method:
5.1, traversing all grid units, taking grid coordinates containing positioning block images as elements of a set S, and enabling the number of the positioning blocks to be N =0;
5.2, N = N +1, establishing a positioning block set A N (ii) a To be in set SThe first element moves into set A N While moving the element out of the set;
5.3, comparing the remaining elements in the set S with the set A in sequence N The grid position relationship represented by all the elements in (1); judging whether two grids are adjacent or not, if so, moving the elements corresponding to the grids in the set S into the set A N Simultaneously moving this element out of set S;
5.4, obtaining a final positioning block set A N 。
According to the method, S6 obtains the center-point coordinates as follows:
for a shape with two mutually perpendicular symmetry axes, the center point is computed by an averaging algorithm, and its coordinates satisfy
x_t = (1/n) Σ_{i=1..n} x_i,  y_t = (1/n) Σ_{i=1..n} y_i
where x_t is the abscissa of the center point of the t-th positioning block, y_t is its ordinate, n is the total number of black pixels in the t-th positioning block, and x_i and y_i are the abscissa and ordinate of the i-th black pixel in the t-th positioning block.
According to the method, the step S7 specifically includes:
establishing a pixel coordinate system by judging the relative position relationship between the coordinates of the central points of the positioning blocks;
calculating the quadrant of the pixel coordinate system in which the central point of the image is positioned;
calculating the distances from the image center point to the X and Y axes of the pixel coordinate system;
correcting the positive and negative relation of the deviation value according to the quadrant of the central point of the image;
and calculating an included angle between the coordinate axes corresponding to the two coordinate systems according to the coordinate axis rotation relationship between the pixel coordinate system and the image coordinate system.
A monocular vision deviation detection device for an AGV, characterized in that it comprises an image acquisition device, a memory and a data processor mounted on the AGV; the image acquisition device collects image data, and the memory stores a computer program that is called by the data processor to carry out the monocular vision deviation detection method for an AGV described above.
The beneficial effects of the invention are as follows:
1. After the beacon image is captured, the relative position between the image-sensor center point and the beacon positioning points is established by extracting the feature points in the beacon, so the offsets and included angle of the image-sensor center relative to the X and Y axes of the beacon positioning points in an absolute coordinate system can be calculated. The automatic threshold-segmentation method used here yields clear binary images under different illumination conditions and adapts well; obtaining the positioning-block center coordinates by the averaging method gives strong resistance to bright-light interference. FIG. 6 compares the effect of different intensities of strong light interfering at the same position, where the illumination intensity increases successively through (a), (b) and (c).
2. The clustering algorithm strongly filters black stains on the beacon, so this kind of interference is eliminated automatically. Although changes in the illumination environment alter the black-and-white image produced by grayscale processing, binary segmentation and noise filtering, clustering the image points reduces this external influence to a minimum: the computed center coordinates of the reference object fluctuate only within an allowable range, and the final result does not fluctuate greatly with changes in the illumination environment.
Drawings
FIG. 1 is a flowchart of a method according to an embodiment of the present invention.
FIG. 2 is a relational model of an absolute coordinate system, an image coordinate system, and a pixel coordinate system.
FIG. 3 is a binary image obtained under different lighting conditions using dynamic threshold segmentation, where (a) is a normal illumination condition and (b) is a weak illumination condition.
Fig. 4 illustrates the effect of segmenting an image using a grid.
Fig. 5 shows the positioning blocks extracted by clustering.
Fig. 6 shows the effect of strong light interference; the illumination increases successively through (a), (b) and (c).
Fig. 7 is a clustering flow chart.
Detailed Description
The invention is further illustrated by the following specific examples and figures.
The invention provides a monocular vision deviation detection method for an AGV, which comprises the following steps, as shown in FIG. 1:
And S0, starting the camera and capturing one frame. The image data are obtained by photographing a beacon; the beacon comprises three positioning blocks, each positioning block being a shape with at least one pair of mutually perpendicular symmetry axes, such as a circle, square, rectangle or rhombus, and the center points of the three positioning blocks can form a Cartesian coordinate system.
And S1, performing grayscale processing on the image data acquired by the image sensor. A weighted-average grayscale algorithm based on the R, G and B components is used, which amplifies the difference between foreground and background. Specifically:
g(x,y) = a·f_R(x,y) + b·f_G(x,y) + c·f_B(x,y)
with the constraint that a, b and c are positive integers.
f_R(x,y), f_G(x,y) and f_B(x,y) are the R, G, B components of the pixel f(x,y);
f(x,y) is the color value of pixel (x,y) in the image coordinate system;
g(x,y) is the grayscale-processed value of pixel (x,y).
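A minimal Python sketch of this weighted grayscale step follows. The concrete weights (2, 5, 1) and the rescaling to the 0-255 range are assumptions; the patent only requires positive integer weights chosen to amplify the foreground/background difference:

```python
import numpy as np

def weighted_grayscale(img_rgb, a=2, b=5, c=1):
    # Weighted R/G/B grayscale per g = a*f_R + b*f_G + c*f_B. The weight
    # values are illustrative assumptions, not values from the patent.
    r = img_rgb[..., 0].astype(np.float64)
    g = img_rgb[..., 1].astype(np.float64)
    bl = img_rgb[..., 2].astype(np.float64)
    gray = a * r + b * g + c * bl
    # Rescale to 0..255 so the later threshold search works on a fixed range.
    gray = 255.0 * (gray - gray.min()) / max(float(np.ptp(gray)), 1e-9)
    return gray.astype(np.uint8)
```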
And S2, sampling the data from S1, calculating a dynamic segmentation threshold T from the sample image data, and threshold-segmenting the grayscale data from S1 to obtain a binary image. The result is shown in FIG. 3, where (a) is the result of threshold segmentation using the whole image as the data source and (b) is the result using the sample image data as the data source. Specifically:
the sampling principle is as follows:
1, calculating the gray difference between the first pixel point and the rest pixel points in each row of pixels line by line, and if the difference is greater than a set value, determining that the row possibly contains foreground color and background color at the same time; otherwise, the next row is detected.
2, for a row possibly containing foreground colors and background colors, detecting the number of foreground color pixel points in the row, and if the number of the foreground colors is greater than a set value, determining a sample required by the row; otherwise, the next row is detected.
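A sketch of this row-sampling rule follows, assuming dark pixels are the candidate foreground (the positioning blocks are black) and with both "set values" chosen arbitrarily, since the patent does not specify them:

```python
import numpy as np

def sample_rows(gray, diff_thresh=60, min_fg=20):
    # Row sampling per the two rules above; diff_thresh and min_fg are
    # assumed stand-ins for the patent's unspecified "set values".
    rows = []
    for row in gray.astype(int):
        # Rule 1: the row may hold foreground and background only if some
        # pixel differs strongly from the row's first pixel.
        if np.max(np.abs(row - row[0])) <= diff_thresh:
            continue
        # Rule 2: require enough candidate-foreground (dark) pixels,
        # treating the row's first pixel as background (an assumption).
        if np.count_nonzero(row < row[0] - diff_thresh) >= min_fg:
            rows.append(row)
    # Fall back to the whole image if no row qualified.
    return np.array(rows, dtype=np.uint8) if rows else gray
```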
The dynamic threshold is calculated as follows:
Assume the sampled pixels fall into gray levels 1, 2, …, m, and let n_i be the number of pixels with gray value i. The total number of pixels is N = Σ_{i=1..m} n_i, and the probability of each pixel value is P_i = n_i / N. An integer k divides the pixels into two groups, C_0 = {1, 2, …, k} and C_1 = {k+1, k+2, …, m}; the between-class variance is
σ²(k) = ω₀ ω₁ (μ₁ − μ₀)²
where ω₀ = Σ_{i=1..k} P_i is the probability of C_0, μ₀ = (Σ_{i=1..k} i·P_i) / ω₀ is the mean of C_0, ω₁ = Σ_{i=k+1..m} P_i is the probability of C_1, and μ₁ = (Σ_{i=k+1..m} i·P_i) / ω₁ is the mean of C_1.
Varying k over 1, 2, …, m, the k achieving max σ²(k) is the optimal threshold T.
After the dynamic segmentation threshold value T is calculated, binary segmentation can be carried out on the gray level image to obtain a binary image.
The binary image is obtained as:
h(x,y) = 1 if g(x,y) > T, and h(x,y) = 0 if g(x,y) ≤ T
where 1 represents white, 0 represents black, g(x,y) is the gray value of the pixel in column x and row y, and h(x,y) is the segmented value of that pixel.
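The threshold search and binarization might be implemented as below; this is a standard vectorized sweep over all 256 candidate thresholds under the maximum between-class variance criterion, not code reproduced from the patent:

```python
import numpy as np

def otsu_threshold(sample):
    # Maximum between-class variance search for the dynamic threshold T.
    hist = np.bincount(sample.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                     # P_i = n_i / N
    omega0 = np.cumsum(p)                     # class C0 probability per k
    omega1 = 1.0 - omega0                     # class C1 probability per k
    mu_cum = np.cumsum(np.arange(256) * p)    # cumulative mean
    mu_T = mu_cum[-1]
    valid = (omega0 > 0) & (omega1 > 0)
    mu0 = np.divide(mu_cum, omega0, out=np.zeros(256), where=valid)
    mu1 = np.divide(mu_T - mu_cum, omega1, out=np.zeros(256), where=valid)
    sigma2 = omega0 * omega1 * (mu1 - mu0) ** 2   # sigma^2(k)
    return int(np.argmax(sigma2))

def binarize(gray, T):
    # h(x,y) = 1 (white) above T, 0 (black) otherwise.
    return (gray > T).astype(np.uint8)
```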
And S3, screening the binary image data obtained in S2: objects appearing in the background that may be foreground noise are morphologically filtered, and the filtered pixels are threshold-segmented again. Noise inside a foreground region does not affect image clustering, has little effect on extraction of the positioning-block center coordinates, and does not affect the accuracy of the result.
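A sketch of this screening step, reusing the binarize helper above. The value range that marks a pixel as possible foreground noise is unspecified in the patent, so the band around T is an assumption, as is the 3x3 window size:

```python
import numpy as np

def median_refilter(gray, T, band=20, win=3):
    # Pixels whose gray value lies within an assumed band around T are
    # treated as possible foreground noise; each is replaced by the
    # median over a win x win window A, then re-thresholded against T.
    out = binarize(gray, T)
    r = win // 2
    ys, xs = np.nonzero(np.abs(gray.astype(int) - T) < band)
    for y, x in zip(ys, xs):
        window = gray[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        out[y, x] = 1 if np.median(window) > T else 0
    return out
```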
And S4, dividing the image obtained in S3 with a grid; the division effect is shown in FIG. 4. All grid cells are traversed, and the cells containing foreground color form a set S. The grid cells are sized so that, in any case, at least one grid cell containing only background lies between any two positioning blocks. The binary image obtained in S2 contains both foreground noise and the target foreground; S3 removes the foreground noise and leaves the target foreground, which S4 then partitions.
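A sketch of the grid partition; the 16-pixel cell size is an assumption and in practice must satisfy the separation condition just described:

```python
def foreground_grid_cells(binary, cell=16):
    # Partition the binary image into cell x cell grid units and collect
    # the unit coordinates containing foreground (black, i.e. 0) pixels.
    S = set()
    h, w = binary.shape
    for gy in range(0, h, cell):
        for gx in range(0, w, cell):
            if (binary[gy:gy + cell, gx:gx + cell] == 0).any():
                S.add((gy // cell, gx // cell))
    return S
```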
And S5, clustering the elements of set S and separating the positioning blocks, as shown in FIG. 7:
5.1, traverse all grid cells, take the grid coordinates of cells containing positioning-block pixels as the elements of set S, and set the number of positioning blocks N = 0;
5.2, N = N + 1, establish a positioning-block set A_N; move the first element of set S into A_N and remove it from S;
5.3, compare each remaining element of set S in turn against the grid positions represented by all elements of A_N; if the two grid cells are adjacent, move the corresponding element of S into A_N and remove it from S;
5.4, obtain the final positioning-block set A_N.
The clustering results are shown in Fig. 5.
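The clustering of steps 5.1 to 5.4 amounts to growing each set A_N by adjacency. A sketch follows, assuming 8-neighbourhood adjacency, since the patent only says "adjacent":

```python
def cluster_cells(S):
    # Grow each positioning-block set A_N by repeatedly absorbing grid
    # cells adjacent to cells already in the set (steps 5.1-5.4).
    remaining = set(S)
    blocks = []
    while remaining:
        seed = remaining.pop()            # 5.2: first element seeds A_N
        block, frontier = {seed}, [seed]
        while frontier:
            cy, cx = frontier.pop()
            for ny in (cy - 1, cy, cy + 1):    # 5.3: adjacency test
                for nx in (cx - 1, cx, cx + 1):
                    if (ny, nx) in remaining:
                        remaining.discard((ny, nx))
                        block.add((ny, nx))
                        frontier.append((ny, nx))
        blocks.append(block)              # 5.4: final set A_N
    return blocks
```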
And S6, obtaining the center-point coordinates of the positioning blocks separated in S5. For any shape having at least one pair of mutually perpendicular symmetry axes, the center-point coordinates satisfy
x_t = (1/n) Σ_{i=1..n} x_i,  y_t = (1/n) Σ_{i=1..n} y_i
where x_t is the abscissa of the center point of the t-th positioning block, y_t is its ordinate, n is the total number of black pixels in the t-th positioning block, and x_i and y_i are the abscissa and ordinate of the i-th black pixel in the t-th positioning block.
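A sketch of this averaging step over the clustered grid cells; the cell size must match the grid-partition sketch above:

```python
import numpy as np

def block_center(binary, block, cell=16):
    # Average the coordinates of every black pixel inside the grid cells
    # of one positioning block: x_t = (1/n)*sum(x_i), y_t = (1/n)*sum(y_i).
    xs, ys = [], []
    for gy, gx in block:
        sub = binary[gy * cell:(gy + 1) * cell, gx * cell:(gx + 1) * cell]
        yy, xx = np.nonzero(sub == 0)          # black pixels
        xs.extend(xx + gx * cell)
        ys.extend(yy + gy * cell)
    return (float(np.mean(xs)), float(np.mean(ys)))
```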
And S7, establishing the spatial transformation model between the image coordinate system and the pixel coordinate system from the positioning-block center coordinates and calculating the deviation, as shown in FIG. 2.
The method comprises the following specific steps:
and 1, judging the relative position relation among the three central points. By calculating the pixel distances between the three central points, two end points of the farthest distance are a positioning block B and a positioning block C (at this time, the positioning block B and the positioning block C cannot be judged), and the remaining one is a positioning block A.
And 2, distinguishing positioning block B from positioning block C. Take one of the two most distant endpoints from step 1 and denote it B'; form the vector starting at the center point of positioning block A and pointing to B', and determine on which side of this vector the remaining center point lies. If it lies on the left, B' is positioning block B; otherwise B' is positioning block C.
And 3, establishing the pixel coordinate system. The vector from the center point of positioning block A to the center point of positioning block B coincides with the X axis of the pixel coordinate system and points along the positive X half-axis; the vector from the center point of positioning block A to the center point of positioning block C coincides with the Y axis and points along the positive Y half-axis.
And 4, calculating the deviation.
First, determine in which quadrant of the pixel coordinate system the image center point lies, according to the signs of t_1 and t_2:
t_1 < 0, t_2 < 0: first quadrant, dx > 0, dy > 0.
t_1 > 0, t_2 < 0: second quadrant, dx < 0, dy > 0.
t_1 > 0, t_2 > 0: third quadrant, dx < 0, dy < 0.
t_1 < 0, t_2 > 0: fourth quadrant, dx > 0, dy < 0.
The distances dy and dx between the center of the image sensor and the beacon reference point are then calculated, as are the included angles θy and θx between the image coordinate system and the pixel coordinate system in the Y-axis and X-axis directions.
the angle thetay or thetax being a coordinate system X 3 O 3 Y 3 And a coordinate system X 2 O 2 Y 2 The included angle between the corresponding x coordinate axis or y coordinate axis is negative when rotating clockwise and positive when rotating anticlockwise, and the final result is in the interval of [ -180 DEG ].
In the formula: f is the focal length of the camera, and h is the installation height.
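A sketch of the S7 geometry under stated assumptions: the left/right test's sign convention depends on the image y-axis direction, and scaling the pixel offsets by h/f assumes a downward-looking pinhole camera, since the patent's own dx, dy and angle formulas are not reproduced in this text:

```python
import itertools
import math
import numpy as np

def identify_blocks(centers):
    # Steps 1-2: the two most distant centers are the B/C candidates and
    # the remaining one is A; B is the endpoint for which the other
    # candidate lies to the left of the vector A->B. With image rows
    # increasing downward the cross-product sign flips, so this
    # convention must be checked against the real beacon layout.
    b_, c_ = max(itertools.combinations(centers, 2),
                 key=lambda pair: math.dist(pair[0], pair[1]))
    a = next(p for p in centers if p != b_ and p != c_)
    cross = (b_[0] - a[0]) * (c_[1] - a[1]) - (b_[1] - a[1]) * (c_[0] - a[0])
    b, c = (b_, c_) if cross > 0 else (c_, b_)
    return a, b, c

def deviation_from_centers(centers, img_center, f_px, mount_h):
    a, b, c = identify_blocks(centers)
    # Step 3: pixel-coordinate axes run A->B (X) and A->C (Y).
    ux = np.asarray(b, float) - np.asarray(a, float)
    ux /= np.linalg.norm(ux)
    uy = np.asarray(c, float) - np.asarray(a, float)
    uy /= np.linalg.norm(uy)
    v = np.asarray(img_center, float) - np.asarray(a, float)
    # Step 4: signed offsets of the image center along the beacon axes,
    # converted to ground units by mount_h / f_px (pinhole assumption).
    dx = float(v @ ux) * mount_h / f_px
    dy = float(v @ uy) * mount_h / f_px
    # Included angle between image X axis and beacon X axis, clockwise
    # negative, reported in [-180, 180] degrees.
    theta_x = -math.degrees(math.atan2(ux[1], ux[0]))
    return dx, dy, theta_x
```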
The invention also provides a monocular vision deviation detection device for an AGV, comprising an image acquisition device, a memory and a data processor mounted on the AGV; the image acquisition device collects image data, and the memory stores a computer program called by the data processor to carry out the monocular vision deviation detection method described above. The hardware system comprises a power circuit, a camera, an RS232 module, a CAN bus transceiver, an embedded microcontroller, a static random access memory, a clock circuit, a 5V-to-3.3V circuit, a PCB base board and a PCB core board. The PCB base board and PCB core board are connected and coupled through female and male socket terminals. The power circuit is soldered on the front of the PCB base board and steps a 12V DC supply down to 5V and 3.3V DC supplies. The camera is mounted on the back of the PCB base board and connected to its circuit through a double-row female pin header, with the center of the camera's CMOS image sensor coinciding with the center of the PCB base board. The RS232 module is mounted on the front of the PCB base board; its VCC and GND pins connect to VCC3.3 and GND of the power circuit through the base board, and its TX and RX pins connect to the corresponding pins of the female socket terminal. The CAN bus transceiver module is mounted on the front of the PCB base board; its VCC and GND pins connect to VCC3.3 and GND of the power circuit through the base board, and its CAN TX and CAN RX pins connect to the corresponding pins of the female socket terminal. The embedded microcontroller is soldered on the front of the PCB core board and performs the image grayscale processing, binary segmentation, noise filtering, image-point clustering, computation of the reference-object center coordinates, and calculation of the distances and included angle relative to the X and Y axes of the target reference object. The static random access memory stores the image information.
The method automatically sets the segmentation threshold according to the illumination of the camera's working environment, automatically turns on an LED lamp for supplementary lighting under weak illumination, and accurately extracts the target-point coordinates under strong illumination; it adapts well to the environment, meets the required detection accuracy, computes quickly, and performs well in real time.
The above embodiments are intended only to illustrate the design concept and features of the invention, so that those skilled in the art can understand and implement it accordingly; the scope of protection of the invention is not limited to these embodiments. All equivalent changes and modifications made according to the principles and concepts disclosed herein fall within the scope of protection of the invention.
Claims (6)
1. A monocular vision deviation detection method for an AGV, characterized in that it comprises the following steps:
S1, performing grayscale processing on the acquired image data to obtain grayscale data; the image data are obtained by photographing a beacon, the beacon comprises three positioning blocks, each positioning block is a shape having at least one pair of mutually perpendicular symmetry axes, and the center points of the three positioning blocks can form a Cartesian coordinate system;
S2, sampling the acquired image data to obtain sample image data, calculating a dynamic segmentation threshold T from the sample image data, and threshold-segmenting the grayscale data to obtain a binary image;
S3, screening the binary image, performing morphological filtering on objects that may be foreground noise, and threshold-segmenting the filtered pixels again; an object that may be foreground noise is a pixel whose value falls within a certain range;
S4, segmenting the image obtained in S3 with a grid, traversing all grid cells, and forming a set S from the grid cells containing foreground;
S5, clustering the elements of set S and separating the positioning blocks;
S6, obtaining the center-point coordinates of the positioning blocks by an averaging method;
S7, establishing a spatial transformation model between the image coordinate system and the pixel coordinate system from the positioning-block center coordinates, and calculating the deviation;
s5, clustering is carried out according to the following method:
5.1, for the set S obtained in S4, setting the number of positioning blocks N = 0;
5.2, N = N + 1, establishing a positioning-block set A_N; moving the first element of set S into set A_N while removing that element from S;
5.3, comparing each remaining element of set S in turn against the grid positions represented by all elements of A_N; if the two grid cells are adjacent, moving the corresponding element of S into A_N and removing it from S;
5.4, obtaining the final positioning-block set A_N;
The S7 specifically includes:
establishing a pixel coordinate system by judging the relative position relation between the coordinates of the central points of the positioning blocks;
calculating the quadrant of the pixel coordinate system in which the central point of the image is positioned;
calculating the distances from the image center point to the X and Y axes of the pixel coordinate system;
correcting the positive and negative relation of the deviation value according to the quadrant of the central point of the image;
and calculating an included angle between the coordinate axes corresponding to the two coordinate systems according to the coordinate axis rotation relationship between the pixel coordinate system and the image coordinate system.
2. The monocular vision deviation detection method of claim 1, wherein in S1 the grayscale processing is applied to each pixel row by row and column by column as:
g(x,y) = a·f_R(x,y) + b·f_G(x,y) + c·f_B(x,y)
with the constraint that a, b and c are positive integers;
f_R(x,y), f_G(x,y) and f_B(x,y) are the R, G, B components of pixel f(x,y), and g(x,y) is the grayscale-processed value.
3. The monocular visual deviation detecting method of claim 1, wherein: the S2 specifically comprises:
2.1, calculating the dynamic segmentation threshold T from the data obtained in S1 using the maximum between-class variance method:
Assume the sampled pixels fall into gray levels 1, 2, …, m, and let n_i be the number of pixels with gray value i. The total number of pixels is M = Σ_{i=1..m} n_i, and the probability of each pixel value is P_i = n_i / M. An integer k divides the pixels into two groups, C_0 = {1, 2, …, k} and C_1 = {k+1, k+2, …, m}; the between-class variance is
σ²(k) = ω₀ ω₁ (μ₁ − μ₀)²
where ω₀ = Σ_{i=1..k} P_i is the probability of C_0, μ₀ = (Σ_{i=1..k} i·P_i) / ω₀ is the mean of C_0, ω₁ = Σ_{i=k+1..m} P_i is the probability of C_1, and μ₁ = (Σ_{i=k+1..m} i·P_i) / ω₁ is the mean of C_1;
varying k over 1, 2, …, m, the k achieving max σ²(k) is the dynamic segmentation threshold T;
2.2, from the data obtained in S1 and the dynamic segmentation threshold T obtained in 2.1, obtaining the binary image as:
h(x,y) = 1 if g(x,y) > T, and h(x,y) = 0 if g(x,y) ≤ T
where 1 represents white, 0 represents black, g(x,y) is the gray value of the pixel in column x and row y, and h(x,y) is the segmented value of that pixel.
4. The monocular visual deviation detecting method of claim 1, wherein: the S3 specifically comprises:
for pixels whose gray value falls within the given range (candidate foreground noise), a median filter is applied:
h(x,y) = med{ g(s,t) | (s,t) ∈ A }
where med denotes the middle value after sorting all pixel gray values in the filtering window A in ascending order, and A is the filtering window;
the filtered pixels are then threshold-segmented again with the threshold T.
5. the monocular vision deviation detecting method for an AGV of claim 1, characterized in that: s6, the coordinates of the central point are obtained by using the following method:
for a shape with two mutually perpendicular symmetry axes, the center point is computed by an averaging algorithm, and its coordinates satisfy
x_t = (1/n) Σ_{i=1..n} x_i,  y_t = (1/n) Σ_{i=1..n} y_i
where x_t is the abscissa of the center point of the t-th positioning block, y_t is its ordinate, n is the total number of black pixels in the t-th positioning block, and x_i and y_i are the abscissa and ordinate of the i-th black pixel in the t-th positioning block.
6. A monocular vision deviation detection device for an AGV, characterized in that it comprises an image acquisition device, a memory and a data processor mounted on the AGV; the image acquisition device collects image data, and the memory stores a computer program called by the data processor to carry out the monocular vision deviation detection method for an AGV of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910610781.9A CN110378970B (en) | 2019-07-08 | 2019-07-08 | Monocular vision deviation detection method and device for AGV |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110378970A CN110378970A (en) | 2019-10-25 |
CN110378970B true CN110378970B (en) | 2023-03-10 |
Family
ID=68252440
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910610781.9A Active CN110378970B (en) | 2019-07-08 | 2019-07-08 | Monocular vision deviation detection method and device for AGV |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110378970B (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10198829B2 (en) * | 2017-04-25 | 2019-02-05 | Symbol Technologies, Llc | Systems and methods for extrinsic calibration of a plurality of sensors |
- 2019-07-08: application CN201910610781.9A filed in China; patent CN110378970B granted and active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105528789A (en) * | 2015-12-08 | 2016-04-27 | 深圳市恒科通多维视觉有限公司 | Robot vision positioning method and device, and visual calibration method and device |
CN107239748A (en) * | 2017-05-16 | 2017-10-10 | 南京邮电大学 | Robot target identification and localization method based on gridiron pattern calibration technique |
CN107609451A (en) * | 2017-09-14 | 2018-01-19 | 斯坦德机器人(深圳)有限公司 | A kind of high-precision vision localization method and system based on Quick Response Code |
CN108596980A (en) * | 2018-03-29 | 2018-09-28 | 中国人民解放军63920部队 | Circular target vision positioning precision assessment method, device, storage medium and processing equipment |
Non-Patent Citations (2)
Title |
---|
Zhang H. et al., "Localization and navigation using QR code for mobile robot in indoor environment," Proceedings of the 2015 IEEE International Conference on Robotics and Biomimetics, 2015, pp. 2501-2506. *
Zheng Shubin et al., "Track alignment detection by fusing machine vision and inertial information," Journal of Vibration, Measurement & Diagnosis, Vol. 38, No. 2, April 2018, pp. 394-403. *
Also Published As
Publication number | Publication date |
---|---|
CN110378970A (en) | 2019-10-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||