CN202702247U - Rapid and accurate positioning system used for indoor mobile robot

Rapid and accurate positioning system used for indoor mobile robot

Info

Publication number
CN202702247U
CN202702247U (application CN 201220375716 U)
Authority
CN
China
Prior art keywords
points
point
image
label
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201220375716
Other languages
Chinese (zh)
Inventor
周风余
田国会
王然
闫云章
韩旭
台述鹏
王小龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN 201220375716
Application granted
Publication of CN202702247U


Landscapes

  • Image Analysis (AREA)

Abstract

The utility model relates to a rapid and accurate positioning system for an indoor mobile robot. The system comprises a positioning sensor mounted on the robot and a plurality of passive, infrared-reflecting tags attached to the ceiling of the working area. The positioning sensor comprises an image processing chip connected to a storage module, a complementary metal oxide semiconductor (CMOS) camera, a data interface, a power supply and an infrared emission module; the infrared emission module comprises a plurality of infrared tubes arranged around the CMOS camera and divided into several groups. Each passive tag carries a number of mark points, which fall into two types. The first type are direction points, used to determine the directions of the coordinate axes; to make the direction unique, direction points are placed at only three of the four corners of each tag. The second type are coding points, i.e. the remaining mark points, which determine the identification (ID) number of the tag. Infrared-reflective material is pasted on the direction points, and on all or some of the coding points according to the required code.

Description

Rapid and accurate positioning system for indoor mobile robot
Technical Field
The utility model relates to a rapid and accurate positioning system for an indoor mobile robot, and belongs to the fields of detection technology, image processing and robot navigation.
Background
Indoor positioning obtains a more accurate estimate of an object's pose in an indoor environment through analysis and calculation of inputs such as prior environment map information, the current pose estimate and sensor observations. For intelligent service robots working in indoor environments such as homes, hospitals and offices, accurate positioning is the prerequisite for robot navigation and an important guarantee for completing service tasks.
Indoor positioning sensors can be divided into absolute and relative positioning sensors according to the positioning technology used. Absolute positioning mainly relies on navigation beacons and active or passive identifiers; relative positioning measures the distance and direction travelled from an initial position to determine the current position, and is also known as dead reckoning. Current indoor robot positioning methods mainly include RFID-based methods, wireless-sensor-network-based methods, and methods based on odometry and inertial navigation modules.
An RFID-based method typically reads RFID data for rough positioning and then uses an ultrasonic sensor for distance measurement to refine the result. Placement of the RFID tags must account for many possible situations, so the method is inconvenient to use, has low precision, and is only suitable for simple environments with low accuracy requirements.
Methods based on wireless sensor networks, such as Wi-Fi or ZigBee, use signal strength for positioning. They require a wireless sensor network to be deployed, are costly, suffer from radio interference, and offer poor precision.
Methods based on inertial navigation modules use a gyroscope, accelerometer and magnetometer, combined with an odometer, to record the heading, speed and acceleration of the object in real time, accumulate the travelled distance, and compute the coordinates relative to the initial position. Such methods suffer from accumulated error and drift, so precision is hard to guarantee over long periods or on poor surfaces.
Invention patent No. 201110260388.5 uses infrared light-emitting diodes to build dot-matrix landmarks attached to the room ceiling; a wide-angle infrared camera fixed on the mobile robot body shoots the infrared landmarks upwards, and an on-board computer analyses the images and calculates the robot pose in real time. The method has clear limitations: the dot-matrix landmark is an active tag, each tag is a circuit board that needs a power supply, so the cost is high and installation and use are inconvenient; moreover, the image processing runs on an on-board industrial control computer, which is bulky and expensive, so small and medium-sized robots without such a computer cannot use the method at all.
Disclosure of Invention
The utility model aims to solve the above problems by providing a rapid and accurate positioning system and working method for indoor mobile robots. The system consists of a positioning sensor mounted on the robot and a number of passive infrared-reflecting tags pasted on the ceiling of the working area. The infrared emission module of the sensor emits infrared light that illuminates the tags on the ceiling; the miniature CMOS camera on the sensor captures the tag spot image, and the TMS320DM642 DSP chip processes the image to obtain the X and Y coordinates of the sensor relative to the tag, the heading angle, the height and other position information, achieving accurate positioning. Infrared light is used to effectively avoid the influence of visible light on the sensor and to improve the precision and robustness of positioning.
In order to achieve the above purpose, the utility model adopts the following technical scheme:
A fast and accurate positioning system for indoor mobile robots includes a positioning sensor mounted on a robot and a plurality of infrared-reflecting passive tags attached to the ceiling of a work area. The positioning sensor comprises an image processing chip connected to a storage module, a CMOS camera, a data interface, a power supply and an infrared emission module; the infrared emission module comprises a plurality of infrared tubes arranged around the CMOS camera and divided into several groups. Each passive tag is an identification tag carrying 15 mark-point positions onto which infrared-reflective material can be pasted. The mark points are of two types. The first type are direction points, which determine the directions of the coordinate axes; to make the direction unique, direction points may occupy only three of the four corners of the tag, the remaining corner must not carry reflective material, and every tag must have these three direction points. The second type are coding points, i.e. the remaining mark points; each coding point represents one binary bit, and the combination of coding points determines the ID number of the tag. Infrared-reflective material is pasted on the direction points, and on all or some of the coding points according to the required code.
The data interface is a UART interface, and the storage modules are SDRAM, FLASH and EEPROM.
There are 12 infrared tubes, divided into 3 groups of 4. When measurement starts, all 3 groups are switched on; after a measurement result is obtained, one group is switched off and the measurement is repeated, and if the precision is unaffected another group is switched off, so that precision is maintained while the number of active infrared tubes is minimized, saving energy and reducing heat generation.
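A minimal sketch of this grouped switching strategy is given below; the helper callbacks (measure, set_group, precision_ok) are hypothetical stand-ins for the sensor firmware, which the utility model does not specify.

```python
def control_ir_groups(measure, set_group, precision_ok, n_groups=3):
    """measure() returns a pose estimate, set_group(i, on) switches group i,
    precision_ok(reference, estimate) says whether precision is still acceptable."""
    for i in range(n_groups):
        set_group(i, True)               # all 3 groups on when measurement starts
    reference = measure()                # result with full illumination
    active = list(range(n_groups))
    for i in range(n_groups):
        set_group(i, False)              # tentatively switch one group off
        estimate = measure()             # measure again
        if precision_ok(reference, estimate):
            active.remove(i)             # precision unaffected: keep the group off
        else:
            set_group(i, True)           # precision degraded: switch it back on
            break                        # stop switching further groups off
    return active                        # groups that remain switched on
```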
The positioning system comprises the following working steps:
1) an appropriate number of tags are pasted on the ceiling of the robot working area as required; the distance between tags is more than 1.5 m, and the distance between the positioning sensor and the tags ranges from 1 m to 2.5 m;
2) installing a positioning sensor on the robot, electrifying and initializing;
3) reading the storage module to complete configuration;
4) judging whether to start detection, if not, continuing to wait for a detection command; if yes, turning to step 5);
5) the CMOS camera receives the infrared spot image reflected by the tag; the image is preprocessed and checked for a valid tag in the field of view. If there is none, go to step 7); if there is more than one valid tag in view, the best tag is selected and its reflective points are identified. The three direction points in the tag and the tag coordinate system XOY are then determined, giving the X and Y coordinates and heading angle of the robot in the tag coordinate system, the height between the positioning sensor and the ceiling carrying the tag, and the ID of the tag;
6) uploading the processing result to an upper computer of the robot for displaying and controlling;
7) the upper computer determines whether to stop detection; if not, return to step 5); if yes, the detection process ends.
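The working steps 1)-7) amount to a simple acquisition loop. The following sketch illustrates it under the assumption of hypothetical helper callbacks (detect_valid_tags, select_best_tag, compute_pose and the camera/storage/uplink objects are placeholders, not part of the utility model):

```python
# Illustrative sketch of the sensor working loop (steps 2-7); every helper
# passed in is a hypothetical placeholder for a processing stage described
# in the text, so the loop itself stays library-free.
def positioning_loop(camera, storage, uplink, commands,
                     detect_valid_tags, select_best_tag, compute_pose):
    config = storage.read_config()                     # step 3: read stored configuration
    while not commands.start_requested():              # step 4: wait for a detection command
        pass
    while not commands.stop_requested():               # step 7: host decides when to stop
        image = camera.capture()                       # step 5: infrared spot image
        tags = detect_valid_tags(image, config)
        if not tags:
            continue                                   # no valid tag in view, keep acquiring
        best = select_best_tag(tags, image)            # tag nearest the image centre
        x, y, heading, height, tag_id = compute_pose(best, config)
        uplink.send((x, y, heading, height, tag_id))   # step 6: report to the upper computer
```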
In the step 2), before the positioning sensor is put into use, the micro CMOS camera must be calibrated to obtain internal parameters and distortion parameters:
During calibration, a calibration algorithm based on a 2D planar calibration plate is adopted. The internal parameters of the camera are assumed to remain constant throughout the calibration, i.e. no matter from what angle the camera shoots the planar template only the external parameters change. The basic steps are as follows:
(1) printing a standard chessboard as a calibration plate, and attaching the calibration plate on a rigid plane;
(2) several calibration plate images are shot from different angles; a larger number of calibration images makes the calibration more accurate;
(3) detecting characteristic points on the calibration plate, and determining image coordinates and actual coordinates;
(4) calculating internal parameters of the camera by using the linear model;
(5) the internal parameters of the camera are optimized using the distortion model and all parameters are solved; once the internal parameters are determined the image can be corrected in preparation for subsequent calculations.
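As an illustration of steps (1)-(5), a minimal calibration sketch is shown below. It assumes OpenCV and NumPy (the utility model does not name a library), and the board dimensions, square size and file paths are example values only.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)        # inner corners per row/column of the chessboard (example)
square = 0.025          # chessboard square size in metres (example)

# 3-D corner coordinates in the calibration-plate plane (Z = 0)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.png"):                 # images shot from different angles
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]                       # (width, height)

# Linear estimate plus distortion refinement: K is the intrinsic matrix,
# dist holds the distortion coefficients (k1, k2, p1, p2, k3)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)

# With the intrinsics known, images can be corrected for later calculations
undistorted = cv2.undistort(cv2.imread("calib/sample.png"), K, dist)
```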
In the step 5), the process of determining the effective tag and the optimal tag in the infrared image is as follows: preprocessing the acquired infrared image, firstly performing Gaussian smooth filtering on the infrared image, then selecting a proper threshold value to perform binarization processing to obtain a binarized image, extracting the contour in the binarized image, and removing a larger contour and a smaller contour to reduce the interference of noise.
The positioning sensor positions the position of the label in the infrared image by adopting a method of combining horizontal projection and vertical projection of the acquired binary image, and determines the current optimal label by using a nearest neighbor rule.
Given a straight line, the binary image is divided into several parts by a family of equally spaced lines perpendicular to it, and the number of pixels with value 1 in each part is counted as the projection of that part of the binary image onto the given line. When the given line is horizontal or vertical, the number of pixels with value 1 in each column or row of the binary image is counted, giving the horizontal and vertical projections of the binary image; such projections can serve as features for object recognition. Fig. 14a shows the binary image obtained after processing, and figs. 14b and 14c show its vertical and horizontal projections respectively. The classification proceeds as follows:
(1) the acquired vertical projection image is traversed from left to right to obtain the spacing distances d1, d2, …, dn between adjacent projection pixel clusters; similarly, the horizontal projection image is traversed from top to bottom to obtain the spacing distances d′1, d′2, …, d′n of adjacent projection pixel clusters. Since the tags are placed far apart, the projection pixel clusters belonging to different tags are also far apart, so the spacing distance can be used to distinguish tags.
(2) the projection region corresponding to a tag is determined on the projection image. Seven times the minimum pixel-cluster spacing of the vertical projection image and seven times the minimum spacing of the horizontal projection image, i.e. 7d and 7d′, are taken as thresholds for neighbour classification. Specifically, the vertical projection image is traversed from left to right to find the first projection pixel cluster, and the distances from the remaining clusters to it are computed; clusters closer than 7d belong to the projection region of the same tag. Traversal then continues to the right taking the next cluster farther than 7d as the new reference, whose neighbours within 7d form the projection region of the second tag, and so on, yielding the regions of the different tags in the vertical projection image. The horizontal projection image is traversed from top to bottom in the same way with 7d′ as the distance reference, giving the corresponding tag regions on the horizontal projection. As shown in figs. 14b and 14c, four projection regions A, B, C and D are obtained.
(3) the tag regions are located on the original image. A straight line is drawn at each edge of the projection regions determined in step (2); the lines from the vertical and horizontal projection regions intersect on the original image to form rectangular regions in which tags may exist. Fig. 14d shows four rectangular regions obtained from the intersecting lines; two of them are unreasonable, i.e. contain no tag, and the next step describes how such regions are removed to obtain valid tags.
(4) unreasonable regions are removed to obtain valid tags. Step (3) yields the regions where tags may be located, but invalid regions must be removed, which mainly means eliminating two kinds of interference: intersecting rectangular regions that contain no tag, and tags lying at the edge of the image so that a complete tag cannot be obtained.
(5) from the candidate projection regions, the coordinate range of each region where a tag may lie is determined from the edge lines of the projection regions, and whether a tag actually exists in the region is judged by checking whether reflective points are present in it. Because the probability of false detection near the image edge is high, a tag close to the edge is discarded; whether a tag is at the image edge can be judged simply from the region coordinates. The tags remaining after this interference elimination are the valid tags.
(6) if more than one valid tag appears in the image, the best tag must be selected. Fig. 14d shows the acquired tag image regions. The centre coordinates $a(x_a, y_a)$ and $b(x_b, y_b)$ of the two rectangular regions and the image centre $o(x_o, y_o)$ are obtained, and the distances oa and ob are calculated:

$$d_{oa} = \sqrt{(x_o - x_a)^2 + (y_o - y_a)^2}$$

$$d_{ob} = \sqrt{(x_o - x_b)^2 + (y_o - y_b)^2}$$
The tag with the smaller distance is taken as the working tag under the current conditions; when there are more than two tags, the judgement is made in the same way.
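The following sketch illustrates steps (1)-(6) with NumPy. The factor 7 follows the 7d / 7d′ rule above; the margin used to reject edge tags and the helper structure are illustrative assumptions.

```python
import numpy as np

def split_axis(profile, factor=7):
    """Group contiguous occupied columns/rows into projection pixel clusters,
    then merge clusters whose mutual gap is below factor * (minimum gap),
    mirroring the 7d / 7d' rule described in the text."""
    occupied = np.flatnonzero(profile > 0)
    if occupied.size == 0:
        return []
    breaks = np.flatnonzero(np.diff(occupied) > 1)
    runs = np.split(occupied, breaks + 1)          # contiguous runs = pixel clusters
    if len(runs) == 1:
        return [(int(runs[0][0]), int(runs[0][-1]))]
    gaps = np.array([runs[i + 1][0] - runs[i][-1] for i in range(len(runs) - 1)])
    threshold = factor * gaps.min()
    regions, start, end = [], runs[0][0], runs[0][-1]
    for gap, run in zip(gaps, runs[1:]):
        if gap < threshold:
            end = run[-1]                          # same tag: extend the current region
        else:
            regions.append((int(start), int(end)))
            start, end = run[0], run[-1]
    regions.append((int(start), int(end)))
    return regions

def candidate_tag_regions(binary):
    """Intersect vertical and horizontal projection regions (cf. figs. 14b-14d)."""
    v_regions = split_axis(binary.sum(axis=0))     # per-column (vertical) projection
    h_regions = split_axis(binary.sum(axis=1))     # per-row (horizontal) projection
    boxes = []
    for x0, x1 in v_regions:
        for y0, y1 in h_regions:
            if binary[y0:y1 + 1, x0:x1 + 1].any():   # drop intersections with no reflective point
                boxes.append((x0, y0, x1, y1))
    return boxes

def best_tag(boxes, shape, margin=10):
    """Discard boxes touching the image edge, keep the one nearest the image centre."""
    cy, cx = shape[0] / 2.0, shape[1] / 2.0
    inside = [b for b in boxes if b[0] > margin and b[1] > margin
              and b[2] < shape[1] - margin and b[3] < shape[0] - margin]
    return min(inside, default=None,
               key=lambda b: ((b[0] + b[2]) / 2 - cx) ** 2 + ((b[1] + b[3]) / 2 - cy) ** 2)
```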
The identification process of the label reflecting points in the step 5) comprises the following steps:
(1) a gradient is first defined: the gradient of a pixel along a given direction is the difference between its gray value and that of the next pixel in that direction. The image is searched from left to right and top to bottom; if at some pixel the rightward gradient is greater than a set threshold ε1, the point is considered to lie within a mark region;
(2) searching the maximum gray value in eight neighborhoods of the point by taking the point as a starting point, and finally finding out the maximum gray value point in a mark region;
(3) searching in four directions of up, down, left and right by taking the central point as a starting point, and when the pixel gray value of a certain point is smaller than a set value and the gradient is smaller than a set threshold value, considering the point as a boundary point of the mark area;
(4) respectively starting horizontal and vertical search by taking points connected with central diagonal lines as starting points until a boundary point of the mark area is found, and performing other searches in the same way;
(5) the detected regions are not necessarily all mark regions, so interference regions must be removed: the mean of all pixels in each region is computed first and regions with too low a mean are excluded; then the size and boundary length of each region are computed and regions not meeting the thresholds are excluded;
After all tag reflective points have been determined by the above steps, a gray-level histogram is built; the valley between the gray-level peaks of the mark regions and the background is selected as a threshold, and this threshold is subtracted from the gray value of every image point to obtain a new image. The centre point is then determined with a gray-weighted centroid method: the centroid is the average of the pixel coordinates of all points in a mark region weighted by their gray values, using the formula:
$$x_0 = \frac{\sum_{(i,j)\in S} i\, w_{i,j}}{\sum_{(i,j)\in S} w_{i,j}}, \qquad y_0 = \frac{\sum_{(i,j)\in S} j\, w_{i,j}}{\sum_{(i,j)\in S} w_{i,j}}$$

where x0 and y0 are the pixel coordinates of the computed centre point, (i, j) denotes a pixel in the image with i and j its coordinates along the x-axis and y-axis, and w_{i,j} is the gray value at pixel (i, j).
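A short sketch of this background-subtraction and gray-weighted centroid step is given below, assuming NumPy; the valley search between the two histogram peaks is simplified to a fixed split of the gray range, which is an assumption of this sketch rather than the patent's procedure.

```python
import numpy as np

def valley_threshold(gray):
    """Crude valley pick between the background peak and the marker peak
    (splitting the gray range at 128 is an assumption of this sketch)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    lo = int(np.argmax(hist[:128]))                 # background peak
    hi = 128 + int(np.argmax(hist[128:]))           # marker peak
    return lo + int(np.argmin(hist[lo:hi + 1]))     # valley between the peaks

def weighted_centroid(gray, region_mask):
    """x0 = sum(i*w)/sum(w), y0 = sum(j*w)/sum(w) over pixels (i, j) in S,
    after subtracting the histogram-valley threshold from the gray values."""
    t = valley_threshold(gray)
    w = np.clip(gray.astype(float) - t, 0.0, None)  # subtract threshold, clamp at zero
    w = np.where(region_mask, w, 0.0)               # restrict the sum to the marker region S
    ii, jj = np.indices(gray.shape)                 # ii, jj play the roles of i and j above
    total = w.sum()
    if total == 0:
        return None
    return float((ii * w).sum() / total), float((jj * w).sum() / total)
```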
In step 5), the three direction points in the tag are determined as follows: the lengths of the lines connecting every pair of mark points in the tag are computed and the longest is selected; the two mark points at its ends are denoted A and B. The midpoint of segment AB is computed, the distances from all other points in the tag to this midpoint are computed, and the point at the largest distance is denoted O. The three points A, O and B are the direction points.
The tag coordinate system is determined as follows. Point O is taken as the coordinate origin of the tag coordinate system; let its pixel coordinates in the image coordinate system be (x1, y1). One of the points A and B is selected and rotated clockwise by 90° about O; let the pixel coordinates of the selected point be (x2, y2) and those of the remaining point be (x3, y3). The pixel coordinates (x, y) of the rotated point P are given by:

$$\begin{aligned} x &= (x_2 - x_1)\cos\alpha - (y_2 - y_1)\sin\alpha + x_1 \\ y &= (y_2 - y_1)\cos\alpha + (x_2 - x_1)\sin\alpha + y_1 \end{aligned}$$

where α is the rotation angle. The included angle is then judged from the sign of δ:

$$\delta = (x - x_1)(x_3 - x_1) + (y - y_1)(y_3 - y_1)$$

If δ > 0 the angle is acute: the axis through the selected point (x2, y2) is the X axis and the axis through (x3, y3) is the Y axis. If δ < 0 the angle is obtuse: the axis through (x2, y2) is the Y axis and the axis through (x3, y3) is the X axis. The tag coordinate system XOY is thereby determined.
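A compact sketch of the direction-point selection and the δ-based axis assignment, assuming NumPy arrays of marker centres; the function names are illustrative.

```python
import numpy as np

def direction_points(points):
    """points: (N, 2) array of marker centres; returns O, A, B as in the text:
    A, B are the ends of the longest pair, O is farthest from their midpoint."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    a, b = pts[i], pts[j]
    rest = np.delete(pts, [i, j], axis=0)
    mid = (a + b) / 2.0
    o = rest[np.argmax(np.linalg.norm(rest - mid, axis=1))]
    return o, a, b

def assign_axes(o, a, b, alpha=np.pi / 2):
    """Rotate A by alpha about O with the formula from the text, then use the
    sign of delta to decide which of A, B defines the X axis."""
    (x1, y1), (x2, y2), (x3, y3) = o, a, b
    x = (x2 - x1) * np.cos(alpha) - (y2 - y1) * np.sin(alpha) + x1
    y = (y2 - y1) * np.cos(alpha) + (x2 - x1) * np.sin(alpha) + y1
    delta = (x - x1) * (x3 - x1) + (y - y1) * (y3 - y1)
    return (a, b) if delta > 0 else (b, a)   # (point on the X axis, point on the Y axis)
```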
In the step 5), the determining process of the position information is as follows: in the image acquired by the positioning sensor, the R point is the projected position of the positioning sensor in the image, and the position is the center of the image. And establishing an AOB coordinate system in the image according to a rule for determining a label coordinate system, wherein OA is the X-axis direction, and OB is the Y-axis direction.
When solving the coordinates of the positioning sensor in the tag coordinate system, two affine transformations are used; an affine transformation is a transformation between two plane coordinate systems.
First, the AOB coordinate system and the image coordinate system are related: the extracted mark-point centres give the coordinates of the three points A, O, B in the image coordinate system, their coordinates in the AOB coordinate system are determined from the distances between the points, and substituting the three coordinate pairs into the affine transformation formula yields the affine matrix and the translation vector.

Assume the affine matrix is $\begin{bmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{bmatrix}$ and the translation vector is $\begin{bmatrix} k_1 \\ k_2 \end{bmatrix}$. If the image coordinates of point R are (u0, v0), the coordinates of R in the AOB coordinate system are:

$$\begin{bmatrix} x_r \\ y_r \end{bmatrix} = \begin{bmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{bmatrix} \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} + \begin{bmatrix} k_1 \\ k_2 \end{bmatrix}$$

Next the relation between the AOB coordinate system and the tag coordinate system is obtained. The three direction points of the tag are known; since the size of the tag and the actual distances between the mark points are known, the coordinates of each point in the tag coordinate system can be obtained, as can their coordinates in the AOB coordinate system. Assume the resulting affine matrix is $\begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix}$; since the coordinates of point O are (0, 0) both in the image AOB system and in the tag coordinate system, the translation vector is $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$. With the coordinates of R in the AOB coordinate system written as $(x_r, y_r)$ and the transformed coordinates as $(x'_r, y'_r)$, then:

$$\begin{bmatrix} x'_r \\ y'_r \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix} \begin{bmatrix} x_r \\ y_r \end{bmatrix}$$

This gives the coordinates $(x'_r, y'_r)$ of the positioning sensor under the tag. From $(x'_r, y'_r)$ the angles between the sensor and the X and Y axes of the tag coordinate system can be obtained, and the heading angle is thus determined.
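The two affine transformations can be illustrated with a small NumPy sketch. The three correspondences are fitted by least squares, and all numeric values (image coordinates, label spacing, the AOB frame units) are made-up examples, not values from the patent; the final heading line is likewise only illustrative.

```python
import numpy as np

def fit_affine(src, dst):
    """Fit dst = N @ src + k from three (or more) point correspondences."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])       # rows [x, y, 1]
    sol, *_ = np.linalg.lstsq(A, dst, rcond=None)      # 3x2 solution
    return sol[:2].T, sol[2]                           # N (2x2), k (2,)

# Image coordinates of the three direction points O, A, B (example values)
img_oab = [(312.0, 240.5), (355.8, 243.1), (309.7, 284.2)]
# Their coordinates in the AOB frame (unit basis along OA and OB, an assumption)
aob = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# Their coordinates in the tag frame (actual spacing in cm, example values)
label = [(0.0, 0.0), (12.0, 0.0), (0.0, 12.0)]

N, k = fit_affine(img_oab, aob)                        # image -> AOB
M, _ = fit_affine(aob, label)                          # AOB -> tag (O maps to O, so no shift)

r_img = np.array([320.0, 240.0])                       # projection of the sensor: image centre R
r_aob = N @ r_img + k
r_label = M @ r_aob                                    # coordinates of the sensor under the tag
# Angle of the position vector relative to the tag X axis (illustrative only)
heading = np.degrees(np.arctan2(r_label[1], r_label[0]))
```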
In step 5), the height between the positioning sensor and the ceiling carrying the tag is determined as follows. According to the projection principle of the camera, if L is the length of the actual object, l is its projected length on the image plane, f is the focal length and Z is the distance between the object and the camera, then:

$$\frac{l}{L} = \frac{f}{Z}$$

and therefore

$$Z = \frac{fL}{l}$$

f is obtained from the camera's calibrated internal parameters, and L and l are obtained from the tag and from the image respectively, so the height between the positioning sensor and the ceiling carrying the tag can be determined.
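A one-line worked example of Z = fL/l (all numbers are illustrative, not taken from the patent):

```python
# Worked example of the pinhole relation l / L = f / Z  =>  Z = f * L / l.
f = 6.0      # focal length in mm, from the calibration step (example value)
L = 150.0    # actual distance between two tag marks in mm (15 cm tag, example)
l = 0.6      # the same distance measured on the image plane in mm (example)
Z = f * L / l
print(Z)     # 1500.0 mm, i.e. the sensor is about 1.5 m below the ceiling
```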
Identifying the ID number of the tag: consider a tag arbitrarily placed in the image coordinate system. The origin O′ of the image coordinate system X′O′Y′ is the top-left pixel of the image, with the X axis horizontal to the right and the Y axis vertical downwards. The three points O, A and B form the tag's coordinate system AOB in the image, with pixel coordinates (xo, yo), (xa, ya) and (xb, yb) respectively, where O is the origin of the tag coordinate system, OA points along the x axis and OB along the y axis. M and N are the two points dividing segment AC into equal parts; point N is drawn as a dotted circle to indicate that no reflective material is present at that position on the tag. The reflective points other than the direction points are the coding points of the tag, used to determine its ID information; their positions are determined by the following steps.
(1) Determining the position of an encoded point on a straight line OB
The vector $\overrightarrow{OB} = (x_b - x_o,\; y_b - y_o)$ is determined. For each reflective point F other than O, A and B, the vector $\overrightarrow{OF} = (x_f - x_o,\; y_f - y_o)$ from O to that point is formed, and the included angle θ between $\overrightarrow{OB}$ and $\overrightarrow{OF}$ is computed:

$$\theta = \arccos\!\left(\frac{(x_b - x_o)(x_f - x_o) + (y_b - y_o)(y_f - y_o)}{\sqrt{(x_b - x_o)^2 + (y_b - y_o)^2}\,\sqrt{(x_f - x_o)^2 + (y_f - y_o)^2}}\right)$$

A judgement threshold is selected; when the included angle θ is smaller than the threshold the point is considered to lie on line OB, and point M is thus judged to be on line OB.
The specific position of point M is then determined from the relation between vector lengths. The length of $\overrightarrow{OM}$ is:

$$|\overrightarrow{OM}| = \sqrt{(x_m - x_o)^2 + (y_m - y_o)^2}$$

Comparing the lengths of $\overrightarrow{OM}$ and $\overrightarrow{OB}$ determines the specific location of point M.
For point N, where no reflective material is present, the coordinates are still determined because they are used to discriminate the other reflective points. Let $\overrightarrow{ON} = (x_n - x_o,\; y_n - y_o)$ with $\overrightarrow{ON} = \tfrac{2}{3}\overrightarrow{OB}$; then:

$$x_n - x_o = \tfrac{2}{3}(x_b - x_o), \qquad y_n - y_o = \tfrac{2}{3}(y_b - y_o)$$

which gives the coordinates (xn, yn) of point N. If point M carries no reflective material, its coordinates are determined by the same method.
(2) Determining the position of a coding point on a straight line OA
The vector $\overrightarrow{OA} = (x_a - x_o,\; y_a - y_o)$ is obtained, and the included angles with the vectors formed from the remaining points are computed by the method described in (1). Point D can thus be judged to lie on line OA, and its position is determined using the vector-length relation.
(3) Determining the positions of the remaining code points
The remaining reflective points are located in a similar way; only the reference vectors differ. For example, to determine the positions of the reflective points on line l2 for the tag shown in FIG. 13, point M is taken as the starting point of the vectors and the points whose positions are still undetermined as their end points, giving four vectors such as $\overrightarrow{ME}$. The included angles of these vectors are computed as in (1); point E is judged by a threshold to lie on l2, and the length relation between $\overrightarrow{ME}$ and the reference vector gives the location of E. The points on lines l3 and l4 and their positions are obtained in the same way.
(4) Determining tag ID
Different positions of the coding points on the tag correspond to different bits of a binary number. The twelve points are traversed along the x-axis direction of the tag coordinate system and correspond in sequence to the first, second, …, twelfth bits of the binary number; the detected reflective-point positions therefore uniquely determine a binary number, which is the ID of the tag.
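The decoding can be sketched compactly if, instead of the per-line vector-angle tests above, the detected points are mapped into the tag frame with the O, A, B basis and snapped to a nominal grid of twelve positions; this substitution and the example grid below are assumptions for illustration only.

```python
import numpy as np

def decode_id(o, a, b, coding_points, grid, tol=0.15):
    """o, a, b: direction points (image coordinates); coding_points: the other
    reflective points; grid: twelve nominal (x, y) positions in the tag frame,
    ordered bit 1 .. bit 12 along the x axis as described in the text."""
    o, a, b = (np.asarray(p, dtype=float) for p in (o, a, b))
    basis = np.column_stack([a - o, b - o])        # columns: OA (x axis), OB (y axis)
    inv = np.linalg.inv(basis)
    grid = np.asarray(grid, dtype=float)
    tag_id = 0
    for p in np.asarray(coding_points, dtype=float):
        uv = inv @ (p - o)                         # tag-frame coordinates (OA = 1, OB = 1)
        dist = np.linalg.norm(grid - uv, axis=1)
        bit = int(np.argmin(dist))
        if dist[bit] < tol:
            tag_id |= 1 << bit                     # set bit k when grid position k is reflective
    return tag_id

# Illustrative 12-position grid (the real layout follows FIG. 3 of the patent):
# a 4 x 4 lattice spanning OA and OB with the four corner positions removed.
grid = [(x / 3.0, y / 3.0) for y in range(4) for x in range(4)
        if (x, y) not in [(0, 0), (3, 0), (0, 3), (3, 3)]]
tag = decode_id(o=(100, 100), a=(160, 100), b=(100, 160),
                coding_points=[(120, 100), (100, 140)], grid=grid)
print(tag)   # 65 for this made-up example (bits 1 and 7 set)
```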
The utility model provides a precision sensor and operating method capable of rapidly and accurately positioning an indoor mobile robot. A mobile robot carrying this sensor can achieve accurate indoor positioning, which provides the basic condition for correct robot navigation and has high practical value and economic benefit.
Drawings
FIG. 1 is an overall block diagram of a positioning sensor system;
FIG. 2 is a schematic diagram of the arrangement of infrared emission tubes;
FIG. 3 is a schematic view of a label;
FIG. 4 is an image coordinate system;
FIG. 5 is a camera coordinate system and a world coordinate system;
FIG. 6 shows a distribution of the image and gray values of the mark points;
FIG. 7 is a center point search diagram;
FIG. 8 is a graph of a boundary search;
FIG. 9 is a schematic view of direction point identification;
FIG. 10 is a schematic view of a label coordinate axis determination;
FIG. 11 is a schematic view of position information determination;
FIG. 12 is a schematic projection diagram;
FIG. 13 is a distribution diagram of labels in an image coordinate system;
FIGS. 14a-d are diagrams of tag infrared image processing;
FIG. 15 is a sensor work flow diagram.
The reference numerals denote: 1. image processing chip, 2. CMOS camera, 3. UART interface, 4. SDRAM, 5. FLASH, 6. EEPROM, 7. infrared tube, 8. power supply.
Detailed Description
The present invention will be further explained with reference to the accompanying drawings and examples.
As shown in fig. 1, the system includes an image processing chip 1 connected to a storage module, the CMOS camera 2, a data interface, a power supply 8 and an infrared emission module; the infrared emission module includes a plurality of infrared tubes 7, arranged around the CMOS camera 2 and divided into several groups. The data interface is the UART interface 3, and the storage modules are SDRAM 4, FLASH 5 and EEPROM 6. There are 12 infrared tubes, divided into 3 groups of 4; all 3 groups are switched on when measurement starts, one group is switched off as soon as a result is obtained and the measurement is repeated, and if the precision is unaffected another group is switched off, ensuring that precision is maintained while the number of active infrared tubes 7 is minimized.
1 tag speckle image processing
The image processing chip is a TMS320DM642 DSP with a main clock range of 480 MHz to 720 MHz; at 600 MHz its processing capacity reaches 4800 MIPS. The DM642 provides 3 dedicated configurable video ports (VPs), which greatly facilitate video data acquisition and processing. FIG. 1 is the block diagram of the whole positioning sensor. The TMS320DM642 extends SDRAM, FLASH and EEPROM externally through its EMIF interface, the EEPROM being used to store and read the sensor configuration; image data are read from the image sensor after AD conversion through a Video Port; after the TMS320DM642 processes the image data algorithmically, the positioning information (space coordinates and heading angle) is obtained and sent to the robot controller through the UART serial data port.
2 infrared emission module
The infrared emission module consists of 12 infrared tubes and a control circuit. To obtain the best image quality and positioning accuracy, the infrared tubes are arranged as shown in fig. 2, where the red parts are infrared emission tubes and the black-and-white part is the CMOS camera. The infrared tubes are arranged around the CMOS camera. Their number does not affect the positioning precision, but fewer tubes reduce the acquisition range of the positioning sensor, while more tubes increase power consumption and heat the sensor. To resolve this contradiction, grouped infrared-tube control is used in practice: the tubes are divided into 3 groups of 4; all 3 groups are switched on when measurement starts, one group is switched off as soon as a result is obtained and the measurement is repeated, and if the precision is unaffected another group is switched off, ensuring that precision is maintained while the number of active tubes is minimized.
3 image sensor
The image sensor uses a CMOS photosensitive array and, because of the size limit of the sensor, is integrated on the main board.
4 power supply
The power supply input of the sensor is DC 8.4 V to 12 V; the three supply rails required by the system (1.4 V, 3.3 V and 5 V) are generated by DC-DC regulator chips. Because the infrared emission module consumes the most power, a high-current switching regulator chip, the LM2576, is used for it.
5 communication interface
The communication interface serves as the data output of the sensor and must meet the interface requirements of common robot controller computers. The utility model uses a UART interface; the connectors are a 9-pin D-type connector and a 3-pin terminal with 2 mm pitch, and both RS-232 and TTL levels are output to suit different types of host computer. In addition, the UART interface can be connected to a display device for local data display.
Fig. 3 is a schematic view of the reflective infrared tag of the utility model. The tag is 15 cm × 15 cm, and the 15 small dots on it are 15 mm in diameter. The three white dots in the figure are pasted with infrared-reflective material and are used for positioning the sensor; every tag carries these three dots. The remaining twelve dots are pasted with different numbers of infrared-reflective patches to construct the ID number of the tag, the ID being obtained by adding the values shown in fig. 3. The attribute information can be configured as needed.
Principle of tag identification
1 Camera calibration
The ideal camera model is the pinhole model, but a real pinhole cannot provide enough light for fast exposure, so both the eye and the camera use lenses rather than a single point to gather more light. Using a lens, however, departs from the simple pinhole geometry and introduces lens distortion and other effects. Therefore, to obtain an ideal image the image must be corrected, which requires knowing the distortion coefficients of the camera; and to calculate the distance between the camera and the tag the internal parameter, the focal length f, must be known. The camera must therefore be calibrated to obtain the intrinsic and distortion parameters before the sensor is put into use.
First, three coordinate systems are established: the image coordinate system, the camera coordinate system and the world coordinate system. A digital image is stored in the computer as an M × N two-dimensional array; each element of an image with M rows and N columns is called a pixel. A rectangular coordinate system UOV is defined on the image, in which the coordinates (u, v) of a pixel are its row and column numbers; this is called the image coordinate system, as shown in fig. 4. At the same time an image coordinate system X1O1Y1 expressed in physical units is established, whose origin O1 is the intersection of the camera's optical axis with the image plane and whose X1 and Y1 axes are parallel to the U and V axes respectively.

Let the coordinates of the origin O1 in the UOV coordinate system be (u0, v0), and let dx and dy be the physical size of a pixel along the X and Y axes. The relationship between the two coordinate systems for any pixel in the image is then:

$$u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0$$
expressed in homogeneous coordinates and matrix form as:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{d_x} & 0 & u_0 \\ 0 & \dfrac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
the geometry of the camera imaging is shown in FIG. 5, where OcXcYcZcAs camera coordinate system, OcO1Is the focal length of the camera, OcIs the optical center, X, of the cameracAxis and YcAxis is respectively associated with image X1And Y1Axis parallel, ZcThe axis is the optical axis of the camera and is perpendicular to the image plane.
The world coordinate system is the reference used to describe positions in the camera's installation environment. The relation between the camera coordinate system and the world coordinate system is described by the external parameters; these are only used during calibration and are not involved in actual use of the system. If the homogeneous coordinates of a space point p in the world coordinate system and the camera coordinate system are (xw, yw, zw, 1) and (xc, yc, zc, 1) respectively, then:
$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_2 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
according to an ideal pinhole imaging model, the relationship between any point P in space and its projection position P on the image is:
$$x = \frac{f\, x_c}{z_c}, \qquad y = \frac{f\, y_c}{z_c}$$
(x, y) are the physical image coordinates of point p, and (xc, yc, zc) are its coordinates in the camera coordinate system. Combining the above formulas gives:
$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{d_x} & 0 & u_0 \\ 0 & \dfrac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
where ax, ay, u0 and v0 depend only on the internal structure of the camera and are called the internal parameters.
In fact actual imaging is not ideal and exhibits various degrees of distortion, which shifts the ideal image point. Assuming an ideal image point (xu, yu) and a distorted image point (xd, yd), the distortion model can be described by:

$$x_u = x_d + \delta_x(x_d, y_d), \qquad y_u = y_d + \delta_y(x_d, y_d)$$
where δx and δy are nonlinear distortion values related to the position of the image point in the image. The first component is radial distortion, whose mathematical model is:
$$\begin{aligned} \delta_x(x_d, y_d) &= x_d\,(k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6 + \cdots) \\ \delta_y(x_d, y_d) &= y_d\,(k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6 + \cdots) \end{aligned}$$

where $r_d^2 = x_d^2 + y_d^2$ and k1, k2, k3 are the radial distortion coefficients; taking the first two radial terms is generally sufficient.
Decentering distortion is caused by the optical centres of the lenses in the camera's optical system not being strictly collinear; it includes both radial and tangential components and can be represented by the following model:
$$\begin{aligned} \delta_x(x_d, y_d) &= p_1\,(3x_d^2 + y_d^2) + 2 p_2 x_d y_d \\ \delta_y(x_d, y_d) &= p_2\,(3y_d^2 + x_d^2) + 2 p_1 x_d y_d \end{aligned}$$

where p1 and p2 are the decentering distortion coefficients.
Yet another distortion is due to lens design, manufacturing imperfections, or equipment imperfections, and can be represented by the following model:
$$\delta_x(x_d, y_d) = s_1\,(x_d^2 + y_d^2), \qquad \delta_y(x_d, y_d) = s_2\,(x_d^2 + y_d^2)$$

where s1 and s2 are the corresponding distortion coefficients.
Summarizing the above, the complete distortion model is:

$$\begin{aligned} \delta_x(x_d, y_d) &= x_d\,(k_1 r_d^2 + k_2 r_d^4) + p_1\,(3x_d^2 + y_d^2) + 2 p_2 x_d y_d + s_1\,(x_d^2 + y_d^2) \\ \delta_y(x_d, y_d) &= y_d\,(k_1 r_d^2 + k_2 r_d^4) + p_2\,(3y_d^2 + x_d^2) + 2 p_1 x_d y_d + s_2\,(x_d^2 + y_d^2) \end{aligned}$$
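A direct transcription of this complete distortion model into code (the coefficient values in the example are arbitrary; in practice they come from the calibration described below):

```python
def distortion_delta(xd, yd, k1, k2, p1, p2, s1, s2):
    """Return (dx, dy) with xu = xd + dx and yu = yd + dy, following the
    complete model above (radial terms, decentering terms and the s1, s2 terms)."""
    r2 = xd ** 2 + yd ** 2
    dx = (xd * (k1 * r2 + k2 * r2 ** 2)
          + p1 * (3 * xd ** 2 + yd ** 2) + 2 * p2 * xd * yd
          + s1 * r2)
    dy = (yd * (k1 * r2 + k2 * r2 ** 2)
          + p2 * (3 * yd ** 2 + xd ** 2) + 2 * p1 * xd * yd
          + s2 * r2)
    return dx, dy

# Example with arbitrary small coefficients (not calibrated values)
dx, dy = distortion_delta(0.3, -0.2, k1=-0.12, k2=0.03, p1=1e-3, p2=-5e-4, s1=2e-4, s2=1e-4)
print(0.3 + dx, -0.2 + dy)   # ideal (undistorted) image coordinates
```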
in the system, a calibration algorithm based on a 2D plane calibration plate provided by Zhangzhen is adopted during camera calibration, and the internal parameters of the camera are assumed to be constant all the time in the calibration process, namely, the internal parameters of the camera are constant no matter the camera shoots the plane template from any angle, and only the external parameters are changed. The method comprises the following basic steps:
(1) a standard checkerboard is printed as a calibration plate and attached to a rigid flat surface.
(2) Several calibration plate images are shot from different angles; a larger number of calibration images makes the calibration more accurate.
(3) Feature points on the calibration plate are detected, and image coordinates and actual coordinates are determined.
(4) And calculating the internal parameters of the camera by using the linear model.
(5) And optimizing the parameters in the camera by using the distortion model to calculate all the parameters.
Once the intrinsic parameters are determined, the image can be rectified and prepared for subsequent calculations.
2 Valid tag and optimal tag determination
In the step 5), the process of determining the effective tag and the optimal tag in the infrared image is as follows: preprocessing the acquired infrared image, firstly performing Gaussian smooth filtering on the infrared image, then selecting a proper threshold value to perform binarization processing to obtain a binarized image, extracting the contour in the binarized image, and removing a larger contour and a smaller contour to reduce the interference of noise.
The positioning sensor positions the position of the label in the infrared image by adopting a method of combining horizontal projection and vertical projection of the acquired binary image, and determines the current optimal label by using a nearest neighbor rule.
Given a straight line, the binary image is divided into several parts by a family of equally spaced lines perpendicular to it, and the number of pixels with value 1 in each part is counted as the projection of that part of the binary image onto the given line. When the given line is horizontal or vertical, the number of pixels with value 1 in each column or row of the binary image is counted, giving the horizontal and vertical projections of the binary image; such projections can serve as features for object recognition. Fig. 14a shows the binary image obtained after processing, and figs. 14b and 14c show its vertical and horizontal projections respectively. The classification proceeds as follows:
(1) the acquired vertical projection image is traversed from left to right to obtain the spacing distances d1, d2, …, dn between adjacent projection pixel clusters; similarly, the horizontal projection image is traversed from top to bottom to obtain the spacing distances d′1, d′2, …, d′n of adjacent projection pixel clusters. Since the tags are placed far apart, the projection pixel clusters belonging to different tags are also far apart, so the spacing distance can be used to distinguish tags.
(2) the projection region corresponding to a tag is determined on the projection image. Seven times the minimum pixel-cluster spacing of the vertical projection image and seven times the minimum spacing of the horizontal projection image, i.e. 7d and 7d′, are taken as thresholds for neighbour classification. Specifically, the vertical projection image is traversed from left to right to find the first projection pixel cluster, and the distances from the remaining clusters to it are computed; clusters closer than 7d belong to the projection region of the same tag. Traversal then continues to the right taking the next cluster farther than 7d as the new reference, whose neighbours within 7d form the projection region of the second tag, and so on, yielding the regions of the different tags in the vertical projection image. The horizontal projection image is traversed from top to bottom in the same way with 7d′ as the distance reference, giving the corresponding tag regions on the horizontal projection. As shown in figs. 14b and 14c, four projection regions A, B, C and D are obtained.
(3) the tag regions are located on the original image. A straight line is drawn at each edge of the projection regions determined in step (2); the lines from the vertical and horizontal projection regions intersect on the original image to form rectangular regions in which tags may exist. Fig. 14d shows four rectangular regions obtained from the intersecting lines; two of them are unreasonable, i.e. contain no tag, and the next step describes how such regions are removed to obtain valid tags.
(4) unreasonable regions are removed to obtain valid tags. Step (3) yields the regions where tags may be located, but invalid regions must be removed, which mainly means eliminating two kinds of interference: intersecting rectangular regions that contain no tag, and tags lying at the edge of the image so that a complete tag cannot be obtained.
(5) from the candidate projection regions, the coordinate range of each region where a tag may lie is determined from the edge lines of the projection regions, and whether a tag actually exists in the region is judged by checking whether reflective points are present in it. Because the probability of false detection near the image edge is high, a tag close to the edge is discarded; whether a tag is at the image edge can be judged simply from the region coordinates. The tags remaining after this interference elimination are the valid tags.
(6) if more than one valid tag appears in the image, the best tag must be selected. Fig. 14d shows the acquired tag image regions. The centre coordinates $a(x_a, y_a)$ and $b(x_b, y_b)$ of the two rectangular regions and the image centre $o(x_o, y_o)$ are obtained, and the distances oa and ob are calculated:

$$d_{oa} = \sqrt{(x_o - x_a)^2 + (y_o - y_a)^2}$$

$$d_{ob} = \sqrt{(x_o - x_b)^2 + (y_o - y_b)^2}$$
The tag with the smaller distance is taken as the working tag under the current conditions; when there are more than two tags, the judgement is made in the same way.
3 tag reflection point identification
The reflective points are made with a retro-reflective material that returns incident light toward the light source; under a light source close to the camera axis they form a quasi-binary image with strong gray-level contrast, which makes them particularly suitable as high-precision feature points for photogrammetry.
Because of camera imaging and other factors, the actually detected mark point is not a complete circle, so the system identifies the mark region with a pixel-search-based method. Taking fig. 6 as an example, the identification process is as follows:
(1) For ease of calculation a gradient is first defined: the gradient of a pixel along a given direction is the difference between its gray value and that of the next pixel in that direction. The image is searched from left to right and top to bottom; if at some pixel the rightward gradient is larger than a set threshold ε1, the point is considered to lie within a marker region.
(2) Starting from that point, the maximum gray value in its eight-neighbourhood is searched repeatedly until the maximum-gray-value point of the mark region is found. If ε1 in the figure is set to 10, the starting point is (5, 8), i.e. the point whose gray value is 7.
(3) Searching in four directions of up, down, left and right by taking the central point as a starting point, and when the pixel gray value of a certain point is smaller than a set value and the gradient is smaller than a set threshold value, considering the point as a boundary point of the mark area.
(4) Horizontal and vertical searches are started from the points connected diagonally to the centre, continuing until boundary points of the mark region are found; the other searches proceed analogously, as shown in figs. 7 and 8.
(5) The detected regions are not all mark regions, so interference regions must be removed: the mean of all pixels in each region is computed first and regions with too low a mean are excluded; then the size and boundary length of each region are computed and regions not meeting the thresholds are excluded.
All the mark points in the visual field range can be basically determined through the steps, but the gray value of the background area is not zero considering that the background area is not ideally black during imaging, and the interference of the background needs to be eliminated in order to calculate the mark center more accurately. The system adopts a method of establishing a gray level histogram, selects the valley bottom between the peak values of the gray levels of the mark area and the background area as a threshold value, and then subtracts the threshold value from the gray level value of each point of the image to obtain a new image.
When the central point is determined, a gray-scale weighted centroid method is selected, wherein the gray-scale weighted centroid is that the average value of pixel coordinates of all mark points in an image is calculated by taking a gray-scale value as weight, and the formula is as follows:
$$x_0 = \frac{\sum_{(i,j)\in S} i\, w_{i,j}}{\sum_{(i,j)\in S} w_{i,j}}, \qquad y_0 = \frac{\sum_{(i,j)\in S} j\, w_{i,j}}{\sum_{(i,j)\in S} w_{i,j}}$$

where x0 and y0 are the pixel coordinates of the computed centre point, (i, j) denotes a pixel in the image with i and j its coordinates along the x-axis and y-axis, and w_{i,j} is the gray value at pixel (i, j).
4 determining three direction points and coordinate system
In a label, the mark points are divided into two types, the first type is direction points, namely the direction of the coordinate axis of the label is determined, and only three of the four corners of the label have mark points for uniquely determining the direction. The second type is a code point used to determine the number of the tag.
As shown in fig. 9, the centre of each mark point has been found in the previous section; next the direction points and coding points are distinguished. The lengths of the lines between every pair of mark points in the tag are calculated and the longest is selected; the two mark points at its ends are denoted A and B. The midpoint of segment AB is calculated, the distances from all points other than A and B to this midpoint are calculated, and the point at the largest distance is denoted O. The three points A, O and B are the direction points.
The tag coordinate system is determined as follows. Point O is taken as the coordinate origin of the tag coordinate system; let its pixel coordinates in the image coordinate system be (x1, y1). One of the points A and B is selected and rotated clockwise by 90° about O; let the pixel coordinates of the selected point be (x2, y2) and those of the remaining point be (x3, y3). The pixel coordinates (x, y) of the rotated point P are given by:

$$\begin{aligned} x &= (x_2 - x_1)\cos\alpha - (y_2 - y_1)\sin\alpha + x_1 \\ y &= (y_2 - y_1)\cos\alpha + (x_2 - x_1)\sin\alpha + y_1 \end{aligned}$$

where α is the rotation angle. The included angle is then judged from the sign of δ:

$$\delta = (x - x_1)(x_3 - x_1) + (y - y_1)(y_3 - y_1)$$

If δ > 0 the angle is acute: the axis through the selected point (x2, y2) is the X axis and the axis through (x3, y3) is the Y axis. If δ < 0 the angle is obtuse: the axis through (x2, y2) is the Y axis and the axis through (x3, y3) is the X axis. The tag coordinate system XOY is thereby determined.
5 determination of position information
The determination process of the position information comprises the following steps: in the image acquired by the positioning sensor, the R point is the projected position of the positioning sensor in the image, and the position is the center of the image. And establishing an AOB coordinate system in the image according to a rule for determining a label coordinate system, wherein OA is the X-axis direction, and OB is the Y-axis direction.
When solving for the coordinates of the locator under the label, a method of two successive affine transformations is used; an affine transformation is a transformation between two plane coordinate systems. Its basic principle is: suppose $P_1$, $P_2$, $P_3$ are any three non-collinear points in a plane and $P'_1$, $P'_2$, $P'_3$ are also three non-collinear points in a plane; then there exists one and only one affine transformation T such that $T(P_i) = P'_i$ (i = 1, 2, 3). It can be expressed by the following formula:
$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$$
where $\begin{bmatrix} x \\ y \end{bmatrix}$ and $\begin{bmatrix} x' \\ y' \end{bmatrix}$ are the coordinates before and after the transformation, $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ is the affine matrix, which produces the rotation and similar transformations, and $\begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$ is the translation vector. The affine matrix and the translation vector together uniquely determine an affine transformation.
First, consider the transformation between the AOB coordinate system and the image coordinate system. The coordinate values of the three points A, O, B in the image coordinate system are known, and their coordinate values in the AOB coordinate system can also be determined; substituting these three correspondences into the affine transformation formula yields the affine matrix and the translation vector.
Assume the resulting affine matrix is $\begin{bmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{bmatrix}$ and the translation vector is $\begin{bmatrix} k_1 \\ k_2 \end{bmatrix}$. The image coordinates of the R point are known to be $(u_0, v_0)$, so the coordinates $\begin{bmatrix} x_r \\ y_r \end{bmatrix}$ of R in the AOB coordinate system can be obtained:
$$\begin{bmatrix} x_r \\ y_r \end{bmatrix} = \begin{bmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{bmatrix}\begin{bmatrix} u_0 \\ v_0 \end{bmatrix} + \begin{bmatrix} k_1 \\ k_2 \end{bmatrix}$$
Next, the relationship between the AOB coordinate system and the label coordinate system is found; both coordinate systems are known. Assume the resulting affine matrix is $\begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix}$. Since the O point is the origin of both the AOB coordinate system in the image and the label coordinate system, i.e. it corresponds to (0, 0) in both, the translation vector is $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$. With the coordinates of R in the AOB coordinate system written as $\begin{bmatrix} x_r \\ y_r \end{bmatrix}$ and the transformed coordinates as $\begin{bmatrix} x'_r \\ y'_r \end{bmatrix}$, we have:
$$\begin{bmatrix} x'_r \\ y'_r \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix}\begin{bmatrix} x_r \\ y_r \end{bmatrix}$$
from this, the coordinates (x ') of the locator under the tag can be determined'r,y′r). From the coordinates (x'r,y′r) The included angle between the locator and the X axis and the Y axis in the label coordinate system can be obtained, and therefore the course angle can be determined.
6 determining the vertical distance of the locator from the ceiling
Fig. 12 shows the projection principle of the camera. The height information between the positioning sensor and the ceiling bearing the label is determined as follows: according to the camera's projection principle, let L be the length of the actual object, l its projected length on the image plane, f the focal length, and Z the distance between the object and the camera. Then:
$$\frac{l}{L} = \frac{f}{Z}$$
and therefore:

$$Z = \frac{f\,L}{l}$$
where f is obtained by calibrating the camera's intrinsic parameters, and L and l are obtained from the label and from the image respectively, so the height information between the positioning sensor and the ceiling bearing the label can be determined.
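For illustration, a one-line sketch of this similar-triangles relation (the names and numbers are hypothetical):

```python
def height_from_projection(f_px, L_m, l_px):
    """Z = f * L / l: distance from the camera to the ceiling.

    f_px : focal length in pixels (from intrinsic calibration)
    L_m  : real distance between two mark points on the label, in meters
    l_px : the same distance measured in the image, in pixels
    """
    return f_px * L_m / l_px

# Example: f = 800 px, L = 0.20 m between mark points, l = 64 px in the image
# -> Z = 800 * 0.20 / 64 = 2.5 m  (hypothetical numbers)
```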
7 identification of tag ID number
This section mainly describes how vector operations in a planar coordinate system are used to determine the distribution of the reflective-point positions quickly and accurately, so as to confirm the ID information of the tag.
Fig. 13 shows an arbitrary placement of a label in the image coordinate system. The origin O' of the image coordinate system X'O'Y' is the pixel at the upper-left corner of the image, with the X axis horizontal to the right and the Y axis vertical downward. The three points O, A, B form the label's coordinate system AOB in the image, with corresponding pixel coordinates $(x_o, y_o)$, $(x_a, y_a)$ and $(x_b, y_b)$, where O is the origin of the label coordinate system, OA points along the x axis and OB along the y axis. The M and N points are the two equally spaced division points of segment OB; the N point is drawn as a dotted circle to indicate that no reflective material is present at that position on the label. The reflective points other than the direction points are the code points that mark the label and determine its ID information. The positions of the label's code points are determined by the following steps.
(1) Determining the position of an encoded point on a straight line OB
First the vector $\overrightarrow{OB} = (x_b - x_o, y_b - y_o)$ is formed. For each reflective point other than O, A and B, the vector from the O point to that point is formed as well; for such a point F the vector is $\overrightarrow{OF} = (x_f - x_o, y_f - y_o)$, and the included angle θ between $\overrightarrow{OB}$ and $\overrightarrow{OF}$ is:

$$\theta = \arccos\!\left(\frac{(x_b - x_o)(x_f - x_o) + (y_b - y_o)(y_f - y_o)}{\sqrt{(x_b - x_o)^2 + (y_b - y_o)^2}\,\sqrt{(x_f - x_o)^2 + (y_f - y_o)^2}}\right)$$

A judgment threshold is selected; when the included angle θ is smaller than the threshold, the point is considered to lie on the line OB. In this way the M point is judged to be on the line OB;
The specific position of the M point is then determined from the relationship between vector lengths. The length of $\overrightarrow{OM}$ is:

$$|\overrightarrow{OM}| = \sqrt{(x_m - x_o)^2 + (y_m - y_o)^2}$$

and the ratio of the lengths of $\overrightarrow{OM}$ and $\overrightarrow{OB}$ determines the specific position of the M point;
For the N point, at which no reflective material is present, the coordinates are still determined because they are needed for discriminating the other reflective points. Let $\overrightarrow{ON} = (x_n - x_o, y_n - y_o)$; since $\overrightarrow{ON} = \frac{2}{3}\overrightarrow{OB}$, we have:

$$x_n - x_o = \tfrac{2}{3}(x_b - x_o), \qquad y_n - y_o = \tfrac{2}{3}(y_b - y_o)$$

from which the coordinates $(x_n, y_n)$ of the N point are found. If the M point carried no reflective material, its coordinates would be determined by the same method;
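As an illustration (not the patent's code), a minimal sketch of the angle-threshold test and length-ratio positioning for a candidate point relative to the line OB; the 10° threshold and the function name are assumptions:

```python
import numpy as np

def classify_on_axis(O, B, P, angle_thresh_deg=10.0):
    """Decide whether reflective point P lies on line OB and, if so, where.

    Returns (on_line, ratio): ratio = |OP| / |OB|, e.g. roughly 1/3 or 2/3
    for the equally spaced code-point positions on the label grid.
    """
    O, B, P = (np.asarray(v, float) for v in (O, B, P))
    ob, op = B - O, P - O
    cos_theta = np.dot(ob, op) / (np.linalg.norm(ob) * np.linalg.norm(op))
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    if theta > angle_thresh_deg:
        return False, None            # not on line OB
    return True, np.linalg.norm(op) / np.linalg.norm(ob)
```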
(2) determining the position of a coding point on a straight line OA
The vector $\overrightarrow{OA}$ is formed, and the included angles between it and the vectors to the remaining points are calculated according to the method described in (1). The D point can thereby be judged to be a point on the line OA, and its position can be determined using the vector length relationship;
(3) determining the positions of the remaining reflective points
The remaining reflective points are located in a similar manner; only the choice of vectors differs. For example, to determine the positions of the reflective points on line l2 of the label shown in fig. 13, the M point is taken as the starting point of the vectors and the remaining points with undetermined positions as their end points, giving four vectors. The included angle of each of these vectors with the direction of line l2 is calculated and tested against the threshold, by which the E point is judged to be a point of l2; the length relationship between the corresponding vectors then gives the position of E. The reflective points on lines l3 and l4 and their positions are determined in the same way.
(4) Determining tag ID
Different positions of the code points on the label correspond to different bits of a binary number. The twelve candidate positions are traversed along the x-axis direction of the coordinate system formed by the label, corresponding in turn to the first, second, …, twelfth bits of the binary number; the detected reflective-point positions thus uniquely determine a binary number, which is the ID of the tag. For the tag shown in fig. 13, the corresponding binary number is 101010010101, and the ID number of the tag is 2709.
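Illustrative only: a small sketch of converting the twelve code-point positions into the tag ID, assuming the first traversed position corresponds to the most significant bit (a bit-order assumption consistent with the 101010010101 → 2709 example):

```python
def tag_id_from_bits(occupied):
    """occupied: list of 12 booleans, one per code-point position, traversed
    along the label's x-axis direction; True means reflective material is
    present. The first position is taken as the most significant bit
    (assumption)."""
    bits = ''.join('1' if o else '0' for o in occupied)
    return int(bits, 2)

# Example from fig. 13: pattern 101010010101 -> ID 2709
print(tag_id_from_bits([True, False, True, False, True, False,
                        False, True, False, True, False, True]))  # 2709
```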
Working flow of the utility model:
When a robot fitted with the positioning sensor moves within the working area, the sensor's infrared emission module emits infrared light that illuminates a label on the ceiling; the miniature CMOS camera on the sensor captures an image of the label's light spots; and the TMS320DM642 DSP chip processes the image with the algorithms described above to obtain the position information of the sensor relative to the label, such as the X coordinate value, Y coordinate value, heading angle and height. The working flow is shown in fig. 15.

Claims (2)

1. A rapid and accurate positioning system for an indoor mobile robot, characterized in that it comprises a positioning sensor mounted on the robot and a plurality of passive, infrared-reflecting tags adhered to the ceiling of the working area; the positioning sensor comprises an image processing chip which is connected respectively to a storage module, a CMOS camera, a data interface, a power supply and an infrared emission module, the infrared emission module comprising a plurality of infrared tubes that surround the CMOS camera and are divided into several groups; each passive tag is an identification tag on which a plurality of mark points are arranged, the mark points being of two types: the first type are direction points, which determine the directions of the coordinate axes, and, in order to determine the direction uniquely, direction mark points are arranged at only three of the four corners of the identification tag; the second type are coding points, namely the remaining mark points, whose combination determines the ID number of the tag; infrared-reflecting material is adhered to the direction points, and infrared-reflecting material is adhered to all or some of the coding points according to the coding requirements.
2. The rapid and accurate positioning system for an indoor mobile robot according to claim 1, characterized in that the number of said infrared tubes is 12, divided 4 to a group into 3 groups; when measurement starts, all 3 groups are switched on; as soon as a measurement result is obtained, one group is switched off and the measurement is performed again, and if the measurement accuracy is not affected, a further group is switched off.
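For illustration only, a sketch of the group-switching strategy in claim 2, using hypothetical hardware-control and measurement hooks (set_ir_group, take_measurement and accuracy_ok are not from the patent; restoring a group when accuracy degrades is an added assumption):

```python
def adaptive_ir_groups(set_ir_group, take_measurement, accuracy_ok, n_groups=3):
    """Switch on all infrared-tube groups, then switch groups off one at a
    time as long as the measurement accuracy is not affected.

    set_ir_group(i, on) : hypothetical driver call enabling/disabling group i
    take_measurement()  : hypothetical call returning one positioning result
    accuracy_ok(result) : hypothetical check that accuracy is still acceptable
    Returns the number of groups left switched on.
    """
    for i in range(n_groups):
        set_ir_group(i, True)            # start with all groups on
    active = n_groups
    take_measurement()                   # initial measurement with full illumination
    for i in range(n_groups - 1):        # keep at least one group on
        set_ir_group(i, False)           # switch one group off
        result = take_measurement()      # measure again
        if accuracy_ok(result):
            active -= 1                  # accuracy unaffected: leave the group off
        else:
            set_ir_group(i, True)        # accuracy degraded: restore the group
            break
    return active
```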
CN 201220375716 2012-07-31 2012-07-31 Rapid and accurate positioning system used for indoor mobile robot Expired - Fee Related CN202702247U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201220375716 CN202702247U (en) 2012-07-31 2012-07-31 Rapid and accurate positioning system used for indoor mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201220375716 CN202702247U (en) 2012-07-31 2012-07-31 Rapid and accurate positioning system used for indoor mobile robot

Publications (1)

Publication Number Publication Date
CN202702247U true CN202702247U (en) 2013-01-30

Family

ID=47582583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201220375716 Expired - Fee Related CN202702247U (en) 2012-07-31 2012-07-31 Rapid and accurate positioning system used for indoor mobile robot

Country Status (1)

Country Link
CN (1) CN202702247U (en)


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102773862B (en) * 2012-07-31 2015-01-07 山东大学 Quick and accurate locating system used for indoor mobile robot and working method thereof
CN102773862A (en) * 2012-07-31 2012-11-14 山东大学 Quick and accurate locating system used for indoor mobile robot and working method thereof
CN103528583A (en) * 2013-10-24 2014-01-22 北京理工大学 Embedded robot locating device
CN104123015A (en) * 2014-07-18 2014-10-29 广东易凌信息科技有限公司 System for simulating laser pen through mobile phone and achieving method of system
CN104216202A (en) * 2014-08-25 2014-12-17 太仓中科信息技术研究院 Inertia gyroscope combined real-time visual camera positioning system and method
WO2016165548A1 (en) * 2015-04-16 2016-10-20 北京蚁视科技有限公司 Vision localization system and method based on high reflective infrared identification
CN107218886A (en) * 2016-03-22 2017-09-29 周恺弟 A kind of optical alignment tracing system and method based on stealthy combination road sign
CN105856227A (en) * 2016-04-18 2016-08-17 呼洪强 Robot vision navigation technology based on feature recognition
CN105955260B (en) * 2016-05-03 2019-05-17 大族激光科技产业集团股份有限公司 Position of mobile robot cognitive method and device
CN105955260A (en) * 2016-05-03 2016-09-21 大族激光科技产业集团股份有限公司 Mobile robot position perception method and device
CN106338289A (en) * 2016-08-11 2017-01-18 张满仓 Robot-based indoor positioning and navigation system and method
WO2018049710A1 (en) * 2016-09-14 2018-03-22 哈工大机器人集团上海有限公司 Road sign for determining position of robot, device, and method for distinguishing labels
CN108227687A (en) * 2016-12-09 2018-06-29 广东德豪润达电气股份有限公司 Intelligent robot identification virtual boundary method, mode of progression and beacon
CN107436422A (en) * 2017-08-22 2017-12-05 杭州慧仓信息科技有限公司 A kind of robot localization method based on infrared lamp solid array
CN107436422B (en) * 2017-08-22 2019-09-03 杭州慧仓信息科技有限公司 A kind of robot localization method based on infrared lamp solid array
CN108240812A (en) * 2017-12-31 2018-07-03 芜湖哈特机器人产业技术研究院有限公司 A kind of Navigation Sensor System based on Passive Positioning
CN110824498A (en) * 2018-08-07 2020-02-21 杭州海康机器人技术有限公司 Obstacle detection method, device and system


Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130130

Termination date: 20150731

EXPY Termination of patent right or utility model