WO2023143098A1 - Method, apparatus, electronic device, storage medium, and label for label identification - Google Patents

Method, apparatus, electronic device, storage medium, and label for label identification

Info

Publication number
WO2023143098A1
Authority
WO
WIPO (PCT)
Prior art keywords
label
points
arrangement
point
image
Prior art date
Application number
PCT/CN2023/071912
Other languages
English (en)
French (fr)
Inventor
蔡龙生
何林
唐旋来
Original Assignee
上海擎朗智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海擎朗智能科技有限公司
Publication of WO2023143098A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras

Definitions

  • Embodiments of the present invention relate to computer technology, and in particular to a label identification method, apparatus, electronic device, storage medium, and label.
  • While a robot executes a task, it must be positioned and monitored repeatedly to establish its current location and thereby track the progress of the task.
  • At present, robot positioning is mainly performed manually by staff, which not only incurs substantial labor cost but is also inefficient.
  • Embodiments of the present invention provide a label identification method, apparatus, electronic device, storage medium, and label that enable a robot to be positioned quickly.
  • An embodiment of the present invention provides a label identification method, the method comprising:
  • determining the position information of a label according to the arrangement of its label points and a preset association between arrangements and label position information.
  • An embodiment of the present invention also provides a label identification device, which includes:
  • an acquisition module configured to acquire a label image, wherein the label image contains a label, and the label includes at least one label point with infrared reflective properties;
  • a determination module configured to identify the label points in the label image and determine the arrangement of the label points; and
  • a label identification module configured to determine the position information of the label according to the arrangement of the label points and a preset association between arrangements and label position information.
  • An embodiment of the present invention also provides a label, the label including at least one label point with infrared reflective properties, wherein the label is used to realize positioning of a robot when the robot performs the label identification method described in any embodiment of the present invention.
  • An embodiment of the present invention also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the label identification method described in any embodiment of the present invention.
  • An embodiment of the present invention also provides a storage medium containing computer-executable instructions, the computer-executable instructions being used, when executed by a computer processor, to perform the label identification method described in any embodiment of the present invention.
  • By acquiring a label image of the robot's working environment, identifying the label points in the label image, determining their arrangement, and applying the association between label point arrangements and label position information (each arrangement corresponds to one label position), the robot can be positioned quickly. This improves the robot's positioning efficiency and solves the problem that the robot's position cannot be determined quickly.
  • Fig. 1 is a schematic flow chart of a label identification method in an embodiment of the present invention.
  • Fig. 2a is a schematic diagram of a label in an embodiment of the present invention.
  • Fig. 2b is a schematic diagram of a label in an embodiment of the present invention.
  • Fig. 3 is a schematic flow chart of a label identification method in an embodiment of the present invention.
  • Fig. 4 is a schematic flow chart of a label identification method in an embodiment of the present invention.
  • Fig. 5 is a schematic flow chart of a label identification method in an embodiment of the present invention.
  • Fig. 6 is a structural block diagram of a label identification device in an embodiment of the present invention.
  • Fig. 7 is a schematic structural diagram of a label identification device in an embodiment of the present invention.
  • the label can be used to locate the robot.
  • the label consists of several label points, and the label points can be made of reflective materials.
  • the arrangement of the label points of different labels is different, and the arrangement of the label points forms the unique identifier of the label.
  • Tags are usually pre-deployed horizontally on the ceiling of the work environment to guide the positioning of devices such as robots.
  • The light spots of the label points can be extracted by exploiting reflection. However, due to changes in lighting conditions and similar factors, the light spots may not be imaged clearly, and noise near the label reduces the accuracy of light spot extraction and hence of label recognition.
  • Fig. 1 is a schematic flow chart of a label recognition method provided by an embodiment of the present invention. This embodiment is applicable to label-based recognition in a robot working environment, and the method can be executed by a label recognition device. As shown in Fig. 1, the method includes the following steps:
  • Step 110: acquire a label image.
  • the label image has a label including at least one label point with infrared reflective properties.
  • the label may also include a label point bounding box.
  • acquiring the label image includes: acquiring the label image, and identifying a bounding box of label points from the label image.
  • the tag recognition device can acquire the tag image in the working environment of the robot.
  • the working environment of the robot can be an environment such as a restaurant or a shopping mall, and one or more tags can be preset in the working environment of the robot for positioning the robot.
  • the label may include a label point bounding box and at least one label point.
  • the label point bounding box is a polygon surrounding all the label points.
  • the label point bounding box may be a square or a rectangle.
  • the label point bounding box can be a fully closed box or a semi-closed box.
  • the label point bounding box is a semi-closed box with one side open.
  • the label can be composed of a label point enclosing frame with infrared reflective properties and at least one label point with infrared reflective properties.
  • the staff can arrange the label points in the label point enclosing frame, and the arrangement of the label points of different labels can be different.
  • Fig. 2a and Fig. 2b are schematic diagrams of a label in an embodiment of the present invention.
  • The label point enclosing frame 201 in Fig. 2a is a square with an opening on one side; there are eight label points 202 inside it, and the number and arrangement of the label points 202 in the label are predetermined by the staff.
  • In Fig. 2b, the label point enclosing frame 203 is a fully enclosed square, and nine label points 204 are located inside it.
  • When the robot moves in the working environment, it can acquire images of the environment in real time or at regular intervals; since labels may be preset in the environment, label images containing labels can be obtained.
  • acquiring the tag image includes: collecting the tag image in the working environment of the robot through an image acquisition device installed on the robot.
  • an image acquisition device may be installed on the robot, for example, if the label is made of an infrared reflective material, the image acquisition device may be an infrared camera.
  • the labels can be pre-pasted on the ceiling of the working environment by the staff, and the image acquisition equipment can be installed on the top of the robot. When the robot is moving, the image of the ceiling can be collected by the image acquisition device on the top of the robot, so as to obtain the label image.
  • The label can also be pasted on a window or a wall, with the image acquisition device installed on the robot at a position from which the label can be photographed. After setting up the labels, the staff can push the robot through the preset working environment; as the robot moves, the image acquisition device installed on it collects images of the environment in real time.
  • An image that contains a label is a label image of the preset environment.
  • the collection range of the image collection device may be preset, for example, the preset collection range may be a range centered on the robot and a preset distance as the radius.
  • the beneficial effect of such setting is that the label image can be actively acquired by the robot, which facilitates the positioning of the robot at any time during the working process of the robot, reduces the positioning operations of the staff, and improves the label recognition efficiency and the working efficiency of the robot.
  • the bounding frame of the tag point may be made of reflective material, for example, it may be an infrared reflective material. According to the reflection phenomenon, the bounding box of the label point can be recognized.
  • Step 120: identify the label points in the label image and determine the arrangement of the label points.
  • the label also includes a label point enclosing frame with infrared reflective properties.
  • The label point enclosing frame is a polygon surrounding all label points. Identifying the label points in the label image and determining their arrangement can be refined as: identifying the bounding box of label points in the label image; then identifying the label points within the bounding box and determining their arrangement.
  • the label point is identified within the scope of the label point enclosing frame, and the arrangement of the at least one label point is determined.
  • The label points can be made of a circular reflective material; the circular light spots inside the bounding box are then recognized, and their arrangement is the arrangement of the label points. For example, if the light spots inside a label's bounding box are recognized as arranged in a "Z" shape, it can be determined that the label's points are arranged in a "Z" shape.
  • Each label corresponds to a unique arrangement of label points, and according to the arrangement of label points, label information such as label codes and label positions corresponding to the arrangement can be determined.
  • Step 130: determine the position information of the label according to the arrangement of the label points and the preset association between arrangements and label position information.
  • Each label corresponds to a unique label point arrangement and unique label position information;
  • label point arrangements correspond one-to-one with label position information.
  • the label position information may be the coordinates of the position where the label is pasted in the working environment of the robot.
  • Presetting the correspondence between label point arrangements and label position information may consist of storing each arrangement pattern in association with its label position information. Once the arrangement of the label points has been determined, the corresponding label position information can be looked up through the preset association, thereby locating the label.
  • the relationship between the arrangement of the label points and the label position information is determined by the staff before or when pasting the labels.
  • the staff need to paste the labels according to the preset label position information, so that the robot can recognize the arrangement of the label points.
  • the label position information can be determined in time, so as to realize the label-based positioning of the robot according to the label position information.
  • For example, suppose the pre-stored label position corresponding to the "Z" shape is the coordinates of the restaurant entrance,
  • and the label position corresponding to the "L" shape is the coordinates to the left of table No. 1. If the arrangement of the label points is recognized as an "L" shape, it can then be determined from the label position information that the robot is currently to the left of table No. 1, completing label-based positioning.
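As a rough illustration of this lookup step, the preset association can be held as a simple map from arrangement identifiers to coordinates. This is a minimal sketch: the keys ("Z", "L") and the coordinates are invented placeholders, not values from the patent.

```python
# Preset association: each label-point arrangement maps to one position
# (x, y coordinates in the robot's working-environment map).
# All keys and coordinates below are illustrative assumptions.
ARRANGEMENT_TO_POSITION = {
    "Z": (0.0, 0.0),   # e.g. restaurant entrance
    "L": (4.5, 2.0),   # e.g. left side of table No. 1
}

def locate_label(arrangement):
    """Return the stored position for a recognized arrangement, or None."""
    return ARRANGEMENT_TO_POSITION.get(arrangement)

print(locate_label("L"))  # -> (4.5, 2.0)
```

Because each arrangement is unique to one label, a plain dictionary lookup suffices; no search over the environment map is needed.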
  • Fig. 3 is a schematic flow chart of a label identification method provided by an embodiment of the present invention. This embodiment is an optional embodiment based on the above-mentioned embodiments, and the method can be executed by a label identification device.
  • Identifying the bounding box of label points in the label image can be refined as: according to a preset bounding box recognition algorithm, identifying lines that match the preset shape of the label point bounding box in the label image, thereby obtaining the label point bounding box.
  • the method specifically includes the following steps:
  • Step 310: acquire a label image.
  • Step 320: according to the preset bounding box recognition algorithm, identify lines matching the preset label point bounding box shape in the label image, and obtain the label point bounding box.
  • An image recognition algorithm is preset as the bounding box recognition algorithm, and the label point bounding box is recognized in the label image according to it.
  • The shape of the label point bounding box is preset, and lines matching that preset shape are recognized in the label image as the bounding box.
  • For example, if the label point bounding box is a pentagon, feature extraction is performed on the label image, pentagonal patterns are recognized, and an identified pentagon is taken as the bounding box.
  • As another example, if the label point enclosing frame is a circle and the label points are also circles, then after multiple circles are identified their sizes are compared, and the largest circle is taken as the enclosing frame.
  • Bounding box recognition algorithms may include image filtering algorithms, edge detection algorithms, and feature detection algorithms.
  • When determining the bounding box of the label points, the label image may first be filtered with a preset image filtering algorithm to obtain a grayscale image; an edge detection algorithm then extracts edges from the grayscale image to produce an intermediate image; finally, a feature detection algorithm identifies the lines forming the bounding box shape in the intermediate image, yielding the label point bounding box.
  • The label image is filtered using the preset image filtering algorithm.
  • The preset image filtering algorithm may be a Gaussian filter; the purpose of filtering is to reduce the influence of image noise on label recognition.
  • Filtering yields an image with distinct gray values, i.e., a grayscale image; the filtered image can also be binarized.
  • The edge detection algorithm may be the LOG (Laplacian of Gaussian) feature extraction algorithm, the Sobel edge detection algorithm, the Canny edge detection algorithm, or the like.
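The Sobel operator mentioned above can be illustrated with a minimal pure-Python sketch (a real pipeline would use an image library; the tiny test image and all names below are illustrative, not from the patent):

```python
# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| for interior pixels."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical boundary between a dark region (0) and a bright reflective
# region (255), like the edge of a light spot or bounding-box line:
image = [[0, 0, 255, 255]] * 5
edges = sobel_magnitude(image)
print(edges[2])  # -> [0, 1020, 1020, 0]: strong response at the boundary
```

The large values at the dark/bright transition are the edges the intermediate image retains; thresholding them gives the boundary map used by the subsequent feature detection step.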
  • After edge extraction, the boundaries of the label point bounding box and of all label point light spots are obtained, so that their geometric shapes are visible in the intermediate image.
  • the label point bounding box is a quadrilateral
  • the label point is a circle.
  • each line on the intermediate image is identified, and the bounding box of the label points is determined.
  • the identified lines can be curved or straight.
  • the feature detection algorithm is used to search for the bounding box of the label point in the intermediate image, that is, to identify the line segments that make up the shape of the bounding box of the label point. For example, if the label point bounding box is a pentagon, you can find the line segments that make up the pentagon.
  • The preset feature detection algorithm may be the Hough transform, a method for detecting boundary shapes that fits straight lines and curves by mapping the image coordinate space into a parameter space.
  • the bounding box of the label points is a polygon, specifically a quadrilateral.
  • Since the edges of the label point bounding box are straight lines, the Hough transform can be applied:
  • the intermediate image is searched for straight lines, line fitting is performed, and the straight segments that make up the polygon are identified. If the bounding box were instead a circle, circle fitting could be performed to identify the curves that make up the circle.
  • Distinguishing the label point enclosing frame from the label points makes the frame easy to obtain, helps identify label points within the frame, and improves label recognition accuracy.
  • Using line extraction and circle extraction to identify the bounding box and the label points means that label identification relies on edge gradient information rather than the pixel distribution inside the label, so it is less affected by lighting, further improving recognition accuracy.
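The Hough line detection described above can be sketched compactly in pure Python: each edge point (x, y) votes for every (theta, rho) pair satisfying rho = x·cos(theta) + y·sin(theta), and peaks in the vote accumulator correspond to lines. The 1-degree angular bins and integer rho quantization are illustrative choices, not parameters from the patent.

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180):
    """Return ((theta_index, rho), votes) for the best-supported line."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta  # angle of the line's normal
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] += 1
    return acc.most_common(1)[0]

# Ten collinear edge points on the horizontal line y = 3, as might be
# extracted from one straight edge of the bounding box:
pts = [(x, 3) for x in range(10)]
best_bin, votes = hough_lines(pts)
# All ten points fall into a single bin with rho = 3 and theta near
# 90 degrees, i.e. a horizontal line at distance 3 from the origin.
```

Fitting in parameter space this way is why the approach tolerates missing pixels along an edge: an edge with one side absent still produces strong peaks for its remaining sides.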
  • Step 330: identify the label points within the enclosing frame and determine the arrangement of the label points.
  • Step 340: determine the position information of the label according to the arrangement of the label points and the preset association between arrangements and label position information.
  • the label in the embodiment of the present invention is composed of a label point enclosing frame and at least one label point, and the label point enclosing frame is a semi-enclosed frame surrounding all label points.
  • the label point bounding box is first recognized according to the preset bounding box recognition algorithm.
  • the recognition can be performed according to the shape of the preset bounding box of the label point, and the recognition accuracy of the bounding box of the label point can be improved.
  • Identifying label points only within the bounding box avoids mistaking light spots outside the box for label points, improving label point recognition accuracy.
  • the robot can be quickly positioned.
  • the problem of identifying environmental light spots as label points in the prior art is solved, and the label positioning can be performed in time through the preset association relationship, improving the accuracy and efficiency of label identification, and further improving the working accuracy and efficiency of the robot.
  • Fig. 4 is a schematic flowchart of a label identification method provided by an embodiment of the present invention. This embodiment is an optional embodiment based on the above-mentioned embodiments, and the method can be executed by a label identification device.
  • Identifying the label points within the bounding box and determining their arrangement can be refined as: according to a preset label point recognition algorithm, recognizing patterns of the preset label point shape inside the enclosing frame, and taking the arrangement of those patterns inside the frame as the arrangement of the label points.
  • the method specifically includes the following steps:
  • Step 410: acquire a label image.
  • the label image in the working environment of the robot is obtained, and the bounding box of the label point is identified from the label image.
  • Step 420: according to the preset label point recognition algorithm, recognize patterns of the label point shape inside the label point enclosing frame.
  • a label point recognition algorithm is preset, and the label point recognition algorithm may be an image recognition algorithm such as an image filtering algorithm, an edge detection algorithm, and a feature detection algorithm.
  • After the bounding box of the label points is determined, the image region inside it is extracted from the label image as a local label image.
  • According to the preset label point recognition algorithm, patterns of the preset label point shape are recognized in the local label image. For example, if the preset label point shape is a circle, circular light spots recognized in the local label image are taken as label points.
  • the image filtering algorithm and edge detection algorithm can be used to filter and edge detect the local label image.
  • Filtering reduces image noise and yields an image with distinct gray values; edge detection is then performed on that image to detect the edges of label-point-shaped patterns, for example the edges of circular label points. Because the label points are made of reflective material, they appear as light spots in the local label image, which show up white after filtering. Edge extraction then recovers the boundaries of all spots in the local label image, revealing their geometry.
  • Step 430: determine the arrangement of the label-point-shaped patterns inside the enclosing frame as the arrangement of the label points.
  • The position of each label-point-shaped pattern inside the enclosing frame is determined, and the arrangement of the patterns is derived from those positions; this arrangement is the arrangement of the label points.
  • For example, if the label point shape is a circle, the position of each circular pattern inside the frame can be taken as its center: determining the center coordinates of each circular pattern yields the arrangement of the circles and hence the arrangement of the label points.
  • the label points are arranged in the preset number of rows and columns in the label point enclosing frame.
  • the maximum number of label point rows in a label is four rows, and the maximum number of columns is four columns. That is, each row is preset with four center point positions for placing label points.
  • the distance between the positions of every two center points is preset.
  • The position of a label point inside the enclosing frame can be determined from the distances between recognized label points. For example, if the preset distance between adjacent center points is 5 cm and two label points in one row are recognized 10 cm apart, it can be concluded that there is an unoccupied center position between them. Alternatively, the coordinates of the two label points inside the frame can be compared with the preset center point positions to determine their arrangement.
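The spacing-based reasoning above can be sketched as follows. The 5 cm spacing and four columns per row follow the worked example; the function name, row origin, and matching tolerance are assumptions for illustration.

```python
SPACING_CM = 5.0  # preset distance between adjacent center points (example)
N_COLUMNS = 4     # preset maximum number of center positions per row

def row_occupancy(spot_xs, row_origin_x=0.0, tol=1.0):
    """Return a 0/1 occupancy list over the row's preset center positions.

    spot_xs: x-coordinates (cm) of detected spots in one row.
    A center position counts as occupied if a spot lies within tol cm.
    """
    occupancy = []
    for i in range(N_COLUMNS):
        center = row_origin_x + i * SPACING_CM
        occupied = any(abs(x - center) <= tol for x in spot_xs)
        occupancy.append(1 if occupied else 0)
    return occupancy

# Two spots 10 cm apart: the center position between them is free.
print(row_occupancy([0.0, 10.0]))  # -> [1, 0, 1, 0]
```

Comparing spot coordinates against the preset center grid, rather than against each other, also handles rows with a single spot, where no inter-spot distance exists.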
  • the label point enclosing frame is a semi-closed frame with an opening on one side
  • Determining the arrangement of the label-point-shaped patterns inside the enclosing frame as the arrangement of the label points includes: identifying the label point bounding box and determining its missing edge as the target edge; determining the row order of the label points in the label according to the target edge; and obtaining the arrangement of the label points from the row order and the arrangement of the patterns.
  • For the same label, the apparent arrangement of the label points differs with viewing direction; therefore, the correct viewing direction of the label must be determined before reading off the arrangement.
  • The label point bounding box can be a semi-closed box with an opening on one side; the row of points nearest the open side is taken as the first row, which fixes the correct viewing direction of the label.
  • the shape of the label point bounding box is preset.
  • the missing edge in the label point bounding box is determined, and the missing edge is used as the target edge.
  • For example, the top edge of the bounding box 201 in Fig. 2a is missing; the top edge is therefore the target edge.
  • The row order of the label points is determined from the target edge: the row closest to the target edge is taken as the first row, and the rows below follow as the second, third, and fourth rows.
  • With the correct row order, the orientation of the label in the label image is normalized, and the arrangement of the label-point-shaped patterns after normalization gives the arrangement of the label's points.
  • the beneficial effect of this setting is that if a certain edge is missing from the bounding box of the label point, the spot row adjacent to the missing edge is the first row, and the lack of a certain edge does not affect the recognition of the bounding box of the label point by the algorithm.
  • the enclosing frame of the label points implies a sequence relationship, without additional identifiers, which improves the recognition accuracy of the arrangement order of the label points, thereby improving the positioning accuracy.
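The orientation step above can be sketched as a grid rotation: once the open (missing) edge of the bounding box is detected, the occupancy grid is rotated until that edge is on top, and rows are then read top to bottom. The edge names and the 2x2 test grid are illustrative assumptions.

```python
def rotate_cw(grid):
    """Rotate a 2D occupancy grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def normalize(grid, open_edge):
    """Rotate the grid so the open (missing) edge is on top.

    A clockwise rotation moves the left edge to the top, so the number
    of quarter turns depends on where the open edge was detected.
    """
    turns = {"top": 0, "left": 1, "bottom": 2, "right": 3}[open_edge]
    for _ in range(turns):
        grid = rotate_cw(grid)
    return grid

# Occupancy grid as detected; suppose the open edge was on the left:
g = [[1, 0],
     [1, 1]]
print(normalize(g, "left"))  # -> [[1, 1], [1, 0]]
```

Because the open edge encodes the reading direction in the frame itself, no extra orientation marker needs to be printed on the label.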
  • Step 440: determine the position information of the label according to the arrangement of the label points and the preset association between arrangements and label position information.
  • the label in the embodiment of the present invention is composed of a label point enclosing frame and a plurality of label points, and the label point enclosing frame is a semi-enclosed frame surrounding all label points.
  • First the label point enclosing frame is identified, then the label points are identified within its range; this avoids mistaking light spots outside the frame for label points and improves label point recognition accuracy. Recognizing the label-point-shaped patterns and deriving the arrangement from the pattern arrangement further improves the accuracy with which the arrangement is determined.
  • An arrangement method is associated with a label position, and the robot can be quickly positioned according to the arrangement method of the label points.
  • the problem of identifying environmental light spots as label points in the prior art is solved, and the label positioning can be performed in time through the preset association relationship, improving the accuracy and efficiency of label identification, and further improving the working accuracy and efficiency of the robot.
  • Fig. 5 is a schematic flow chart of a label identification method provided by an embodiment of the present invention. This embodiment is an optional embodiment based on the above-mentioned embodiments, and the method can be executed by a label identification device.
  • Determining the position information of the label from the arrangement of the label points and the preset association can be refined as: obtaining a character expression of the label point arrangement according to a preset arrangement-expression algorithm;
  • and then determining the position information of the label from that character expression, based on a preset association between character expressions and label position information.
  • the method specifically includes the following steps:
  • Step 510: acquire a label image.
  • Step 520: identify the label points in the label image and determine their arrangement.
  • Step 530: obtain the character expression of the label point arrangement according to the preset arrangement-expression algorithm.
  • An arrangement-expression algorithm is preset; it expresses the label point arrangement in the form of characters, which may be digits or symbols.
  • the character expression of the arrangement of the label points is obtained according to the expression algorithm of the arrangement of the label points. For example, the position coordinates of each label point may be converted into a character string for expression, or the position coordinates of each label point may be arranged in sequence to obtain a string of position coordinate numbers as a character expression.
  • Obtaining the character expression of the label point arrangement includes: determining the character expression of each row of label points according to that row's arrangement and the preset arrangement-expression algorithm; then sorting the per-row character expressions by row order to obtain the character expression of the whole arrangement.
  • the arrangement of all label points may be converted directly into a character expression for the entire label; alternatively, the arrangement of each row may first be converted into a character expression, and the per-row expressions then combined into the expression for the entire label.
  • in this embodiment, the character expression of each row is determined by the preset expression algorithm; the per-row expressions are then sorted and concatenated in row order to yield the character expression of the arrangement of all the label points in the label.
  • the combination of the character expressions of each row of label points may be that the character expressions of each row are connected together according to the row arrangement sequence.
  • the character expressions of each row of label points are 01, 02, 03, and 04 according to the order of row arrangement, so the overall character expression of the arrangement of label points can be 01020304.
  • the beneficial effect of this arrangement is that the character expression of each row is determined first and that of the entire label second, so that the arrangement of the label points is converted into a unique character expression; this increases the diversity of character expressions and makes the arrangement of the label points easy to store, which helps locate the label position quickly and improves positioning accuracy and efficiency.
  • determining the character expression of each row of label points includes: determining the binary expression of each row according to that row's arrangement and a preset label-point representation algorithm; and converting each row's binary expression to decimal to obtain that row's character expression.
  • the preset algorithm for expressing the arrangement of label points may be to convert the arrangement of label points into binary and decimal.
  • the conversion rule may be that the occupancy of the label point at each preset central point position in the label point enclosing frame is represented by binary 0 or 1. If there is a label point spot at the center point position, the center point position is 1, otherwise it is 0. That is, the arrangement of the label points in each row can be expressed in binary with a 01 sequence.
  • the binary expression of the arrangement of the label points in the first row in Figure 2a is 1111
  • the binary expression of the arrangement of the label points in the second row is 1000.
  • the binary expression of the arrangement of the label points in the third row is 0110
  • the binary expression of the arrangement of the label points in the fourth row is 0001.
  • after the binary expression of each row of label points is obtained, it is converted to decimal; the resulting decimal expression is the character expression of that row.
  • by the rules of binary, each row's binary expression corresponds to a unique decimal integer. For example, the binary expression of the first row of label points is 1111 and its decimal expression is 15; the second row is 1000, decimal 8; the third row is 0110, decimal 6; the fourth row is 0001, decimal 1.
  • the per-row character expressions are combined in row order into a new character sequence, which serves as the character expression of the arrangement of the entire label.
  • each row's character expression occupies two positions of the new sequence; if a decimal expression has fewer than two digits, a 0 is prepended to it.
  • for example, Figure 2a shows a 4×4 arrangement of label points; starting from the top row, the decimal character expressions of the rows are 15, 8, 6, and 1. Prepending 0 to 8, 6, and 1 yields the eight-digit character sequence 15080601 for the arrangement of the entire label.
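  • the row-wise binary-to-decimal conversion described above can be sketched as follows (a minimal illustration; the 4×4 occupancy grid matches the Figure 2a example, and the function name `encode_label` is ours, not from the embodiment):

```python
# Encode a label's point arrangement as a character string.
# Each row of the grid is read as a binary number (1 = a label point
# occupies that preset center position, 0 = empty), converted to
# decimal, and zero-padded to two digits.

def encode_label(grid):
    chars = []
    for row in grid:
        value = int("".join(str(bit) for bit in row), 2)
        chars.append(f"{value:02d}")  # pad to two digits, e.g. 8 -> "08"
    return "".join(chars)

# The arrangement from Figure 2a: rows 1111, 1000, 0110, 0001.
grid = [
    [1, 1, 1, 1],
    [1, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
]
print(encode_label(grid))  # -> 15080601
```

Zero-padding keeps every row at a fixed width, so an eight-character string always decodes unambiguously back into four rows.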
  • the beneficial effect of this arrangement is that the binary-to-decimal conversion reduces the difficulty and the amount of computation involved in determining the character expression, improving its efficiency and accuracy.
  • the final decimal expression is also efficient, reliable, and easy to store, which helps find the corresponding label position from the character expression and improves positioning accuracy and efficiency.
  • Step 540 Determine the position information of the label according to the character expression of the arrangement of the label points, and based on the preset association relationship between the character expression and the position information of the label.
  • the staff predetermines the sticking position coordinates of each label in the working environment, that is, the label position information, and predetermines the character expression of each label, and associates and stores the character expression and label position information of each label.
  • the corresponding label position information is determined according to the pre-stored association relationship to realize the positioning of the robot. For example, during the working process of the robot, a tag is recognized from above, and the position of the recognized tag is determined, and the position of the tag is used as the position of the robot, thereby positioning the robot.
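  • a minimal sketch of such a lookup (the coordinates and the second character expression below are illustrative placeholders of ours, not values from the embodiment):

```python
# Positioning by looking up a pre-stored association between character
# expressions and label positions. The robot's position is taken to be
# the position of the recognized label.

LABEL_POSITIONS = {
    "15080601": (3.2, 7.5),   # hypothetical coordinates of one label
    "09120005": (10.0, 2.4),  # hypothetical coordinates of another
}

def locate(expression):
    """Return the stored position for a character expression, or None."""
    return LABEL_POSITIONS.get(expression)

print(locate("15080601"))  # -> (3.2, 7.5)
```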
  • the label in this embodiment of the present invention consists of a label-point enclosing frame and a plurality of label points, and the enclosing frame may be a semi-enclosed frame surrounding all the label points.
  • after the label image is acquired, the enclosing frame is identified first and the label points are then identified within its range; this avoids mistaking light spots outside the frame for label points and improves label-point recognition accuracy.
  • each character expression corresponds to one label position, so the robot can be positioned quickly from the character expression.
  • this solves the prior-art problem of identifying ambient light spots as label points; moreover, using character expressions for positioning is more efficient, reliable, and easy to store than using the arrangement graphics themselves.
  • label positioning can be performed promptly through the character expression and the preset association, improving the accuracy and efficiency of label recognition and hence the working accuracy and efficiency of the robot.
  • Fig. 6 is a structural block diagram of a label identification device provided by an embodiment of the present invention, which can execute a label identification method provided by any embodiment of the present invention, and has corresponding functional modules and beneficial effects for executing the method.
  • the device specifically includes:
  • An acquisition module 601 configured to acquire a label image, wherein the label image has a label, and the label includes at least one label point with infrared reflective properties;
  • a determination module 602 configured to identify the label points in the label image, and determine the arrangement of the label points
  • the tag identification module 603 is configured to determine the location information of the tag according to the arrangement of the tag points and the association relationship between the preset arrangement and the location information of the tag.
  • the tag further includes a tag point enclosing frame with infrared reflective properties, and the tag point enclosing frame is a polygon surrounding all tag points;
  • the determining module 602 includes:
  • a label point bounding box identification unit configured to identify the label point bounding box from the label image
  • the arrangement determining unit is configured to identify the label points within the enclosing frame of the label points, and determine the arrangement of the label points.
  • the determining module 602 includes:
  • the label point enclosing frame recognition unit is configured to recognize the lines of the preset label point enclosing frame shape from the label image according to the preset enclosing frame recognition algorithm, and obtain the label point enclosing frame in the label image.
  • the label points are arranged according to the preset number of rows and columns in the label point enclosing frame;
  • the determining module 602 includes:
  • the label point shape pattern determination unit is used to identify the pattern of the label point shape in the label point enclosing frame according to the preset label point recognition algorithm; wherein, the label point shape is preset;
  • the arrangement determining unit is configured to determine the arrangement of the label point-shaped pattern in the label point enclosing frame as the arrangement of the label points.
  • the tag point bounding box is a semi-enclosed frame with one open side, and the arrangement determining unit is specifically configured to:
  • determine, from the identified tag point enclosing frame, the missing edge of the frame as the target edge;
  • determine the row order of the tag points in the tag according to the target edge; and obtain the arrangement of the tag points of the tag according to the row order and the arrangement of the tag-point-shaped patterns.
  • the tag identification module 603 includes:
  • the character expression determination unit is used to obtain the character expression of the arrangement of the label points according to the preset expression algorithm of the arrangement of the label points;
  • the label position determination unit is configured to determine the position information of the label based on the character expression of the arrangement of the label points, and based on the preset association relationship between the character expression and the label position information.
  • the character expression determination unit includes:
  • the character expression determination subunit of each line is used to determine the character expression of each line of label points according to the arrangement of label points and the preset expression algorithm of label points;
  • the label character expression determination subunit is configured to sort the character expressions of the label points in each row according to the row arrangement sequence of the label points, and obtain the character expression of the arrangement of the label points.
  • the per-row character expression determination subunit is specifically configured to: determine the binary expression of each row of label points according to that row's arrangement and the preset label-point representation algorithm; and convert each row's binary expression to decimal to obtain that row's character expression.
  • the acquisition module 601 also includes:
  • the tag image acquisition unit is used to collect tag images in the working environment of the robot through the image acquisition device installed on the robot.
  • the label in this embodiment of the present invention consists of a label-point enclosing frame and a plurality of label points, the enclosing frame being a semi-enclosed frame surrounding all the label points.
  • after the label image is acquired, the enclosing frame is identified first and the label points are then identified within its range, avoiding mistaking light spots outside the frame for label points and improving label-point recognition accuracy.
  • One arrangement is associated with a label position. According to the arrangement of label points, the robot can be quickly positioned.
  • the problem of identifying environmental light spots as label points in the prior art is solved, and the label positioning can be performed in time through the preset association relationship, improving the accuracy and efficiency of label identification, and further improving the working accuracy and efficiency of the robot.
  • An embodiment of the present invention provides a label. The label includes at least one label point with infrared reflective properties; it can be placed in the robot's working environment and enables positioning of the robot when the robot performs the label identification method of any embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a tag identification device provided by an embodiment of the present invention.
  • the tag identification device is an electronic device
  • FIG. 7 shows a block diagram of an exemplary electronic device 700 suitable for implementing the embodiments of the present invention.
  • the electronic device 700 shown in FIG. 7 is only an example, and should not limit the functions and application scope of the embodiments of the present invention.
  • electronic device 700 takes the form of a general-purpose computing device.
  • Components of the electronic device 700 may include, but are not limited to: one or more processors or processing units 701 , a system memory 702 , and a bus 703 connecting different system components (including the system memory 702 and the processing unit 701 ).
  • Bus 703 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus structures.
  • these bus structures include, by way of example and not limitation, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • Electronic device 700 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by electronic device 700 and include both volatile and nonvolatile media, removable and non-removable media.
  • System memory 702 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 704 and/or cache memory 705 .
  • the electronic device 700 may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • storage system 706 may be used to read and write to non-removable, non-volatile magnetic media (not shown in FIG. 7, commonly referred to as a "hard drive").
  • although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from and writing to a removable non-volatile optical disk (e.g., CD-ROM, DVD-ROM, or other optical media) may also be provided.
  • in these cases, each drive may be connected to bus 703 via one or more data media interfaces.
  • Memory 702 may include at least one program product having a set (eg, at least one) of program modules configured to perform the functions of various embodiments of the present invention.
  • a program/utility 708 having a set (at least one) of program modules 707, including but not limited to an operating system, one or more application programs, other program modules, and program data, may be stored, for example, in memory 702; each of these examples, or some combination of them, may include an implementation of a network environment.
  • the program module 707 generally executes the functions and/or methods of the described embodiments of the present invention.
  • the electronic device 700 may also communicate with one or more external devices 709 (such as a keyboard, pointing device, display 710, etc.), communicate with one or more devices that enable a user to interact with the electronic device 700, and/or communicate with Any device (eg, network card, modem, etc.) that enables the electronic device 700 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interface 711 .
  • the electronic device 700 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 712.
  • as shown in FIG. 7, the network adapter 712 communicates with the other modules of the electronic device 700 through the bus 703.
  • other hardware and/or software modules may be used in conjunction with electronic device 700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape Drives and data backup storage systems, etc.
  • the processing unit 701 executes various functional applications and data processing by running the program stored in the system memory 702, for example, implementing a tag identification method provided by an embodiment of the present invention, including:
  • the location information of the label is determined according to the arrangement of the label points and the association relationship between the preset arrangement and the location information of the label.
  • the embodiment of the present invention also provides a storage medium containing computer-executable instructions, on which a computer program is stored.
  • when the program is executed by a computer processor, a label identification method as provided in the embodiments of the present invention is implemented, including:
  • the location information of the label is determined according to the arrangement of the label points and the association relationship between the preset arrangement and the location information of the label.
  • the computer storage medium in the embodiments of the present invention may use any combination of one or more computer-readable media.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (non-exhaustive list) of computer-readable storage media include: electrical connections with one or more leads, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), Erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a data signal carrying computer readable program code in baseband or as part of a carrier wave. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out the operations of the present invention may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as "C" or similar languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention disclose a label identification method, apparatus, electronic device, storage medium, and label. The method includes: acquiring a label image, where the label image contains a label and the label includes at least one label point with infrared reflective properties; identifying the label points within the label image and determining the arrangement of the label points; and determining the position information of the label according to the arrangement of the label points and a preset association between arrangements and label position information. By determining the label's position information from the arrangement of its label points, the embodiments of the present invention enable fast positioning of a robot.

Description

Label identification method, apparatus, electronic device, storage medium, and label
Technical Field
The embodiments of the present invention relate to computer technology, and in particular to a label identification method, apparatus, electronic device, storage medium, and label.
Background
While a robot is executing a task, the robot must be positioned and monitored repeatedly to establish its current position and thereby track the progress of the task. At present, robots are mainly positioned manually by staff, which not only consumes considerable labor cost but is also inefficient.
In view of this, how to position a robot quickly has become a technical problem to be solved urgently.
Summary
Embodiments of the present invention provide a label identification method, apparatus, electronic device, storage medium, and label, which enable a robot to be positioned quickly.
In a first aspect, an embodiment of the present invention provides a label identification method, the method including:
acquiring a label image, where the label image contains a label, and the label includes at least one label point with infrared reflective properties;
identifying the label points within the label image, and determining the arrangement of the label points;
determining the position information of the label according to the arrangement of the label points and a preset association between arrangements and label position information.
In a second aspect, an embodiment of the present invention further provides a label identification apparatus, the apparatus including:
an acquisition module configured to acquire a label image, where the label image contains a label, and the label includes at least one label point with infrared reflective properties;
a determination module configured to identify the label points within the label image and determine the arrangement of the label points;
a label identification module configured to determine the position information of the label according to the arrangement of the label points and a preset association between arrangements and label position information.
In a third aspect, an embodiment of the present invention further provides a label. The label includes at least one label point with infrared reflective properties, and is used to position a robot when the robot performs the label identification method of any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the label identification method of any embodiment of the present invention.
In a fifth aspect, an embodiment of the present invention further provides a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are used to perform the label identification method of any embodiment of the present invention.
In the embodiments of the present invention, a label image of the robot's working environment is acquired, the label points are identified within the label image, the arrangement of the label points is determined, and the robot is positioned quickly according to the association between arrangements and label position information, i.e., each arrangement is associated with one label position. This improves the efficiency of positioning the robot and solves the problem that a robot's position cannot be determined quickly.
Brief Description of the Drawings
To illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flowchart of a label identification method in an embodiment of the present invention;
FIG. 2a is a schematic diagram of a label in an embodiment of the present invention;
FIG. 2b is a schematic diagram of a label in an embodiment of the present invention;
FIG. 3 is a flowchart of a label identification method in an embodiment of the present invention;
FIG. 4 is a flowchart of a label identification method in an embodiment of the present invention;
FIG. 5 is a flowchart of a label identification method in an embodiment of the present invention;
FIG. 6 is a structural block diagram of a label identification apparatus in an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a label identification device in an embodiment of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are merely illustrative of the present invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
A label can be used to position a robot. A label consists of several label points, which can be made of reflective material. The label points of different labels are arranged differently, and the arrangement of the label points forms the label's unique identifier. Labels are usually deployed horizontally on the ceiling of the working environment in advance to guide devices such as robots in positioning.
When positioning is based on a label, the light spots of the label points can be extracted through reflection. Owing to changes in lighting conditions and the like, the spots of the label points may not be well reflected in the imaging area; if noise points exist near the label, the accuracy of spot extraction will drop, reducing the precision of label identification.
FIG. 1 is a flowchart of a label identification method provided by an embodiment of the present invention. This embodiment is applicable to label-based identification in a robot's working environment, and the method can be executed by a label identification apparatus. As shown in FIG. 1, the method specifically includes the following steps:
Step 110: acquire a label image.
In one embodiment, the label image contains a label, and the label includes at least one label point with infrared reflective properties. The label may further include a label-point enclosing frame.
In this embodiment, optionally, acquiring the label image includes: acquiring the label image and identifying the label-point enclosing frame from the label image. The label identification apparatus may acquire label images in the robot's working environment, which may be a restaurant, a shopping mall, or the like; one or more labels may be set up in the working environment in advance for positioning the robot. A label may include a label-point enclosing frame and at least one label point. The enclosing frame is a polygon surrounding all the label points, e.g., a square or a rectangle, and may be fully enclosed or semi-enclosed; in this embodiment, the enclosing frame is a semi-enclosed frame with one open side. The label may consist of an enclosing frame with infrared reflective properties and at least one label point with infrared reflective properties; the staff may arrange the label points within the frame, and different labels may have different arrangements. FIGS. 2a and 2b are schematic diagrams of labels in embodiments of the present invention. In FIG. 2a, the enclosing frame 201 is a square with one open side, and eight label points 202 lie within it; the number and arrangement of the label points 202 are predetermined by the staff. In FIG. 2b, the enclosing frame 203 is a fully enclosed square, and nine label points 204 lie inside it.
As the robot moves through the working environment, it can capture images of the environment in real time or at intervals. These images may contain labels set up in advance in the environment, so label images containing labels can be obtained.
In this embodiment, optionally, acquiring the label image includes: collecting the label image in the robot's working environment through an image acquisition device mounted on the robot.
Specifically, an image acquisition device may be mounted on the robot; for example, if the label is made of infrared reflective material, the device may be an infrared camera. The labels may be pasted by the staff on the ceiling of the working environment in advance, and the image acquisition device may be mounted on top of the robot, so that while the robot moves it captures images of the ceiling and thereby obtains label images.
Labels may also be pasted on windows, walls, or other locations that the image acquisition device mounted on the robot can photograph. After the labels are set up, the staff may push the robot through the preset working environment; as the robot moves, the image acquisition device collects images of the environment in real time, and any collected image containing a label is a label image of the preset environment. The acquisition range of the device may be preset, e.g., a range centered on the robot with a preset radius.
The benefit of this arrangement is that the robot can acquire label images actively, so that it can be positioned at any time during operation, the staff's positioning work is reduced, and the efficiency of label identification and of the robot's work is improved.
After the label image is obtained, the label-point enclosing frame is identified from it. The frame may be made of reflective material, e.g., infrared reflective material, and can be identified through the reflection phenomenon.
Step 120: identify the label points within the label image, and determine the arrangement of the label points.
In this embodiment, the label further includes a label-point enclosing frame with infrared reflective properties, the frame being a polygon surrounding all the label points. Identifying the label points within the label image and determining their arrangement can be refined as: identifying the enclosing frame from the label image; and identifying the label points within the range of the enclosing frame and determining their arrangement.
After the enclosing frame in the label image is determined, at least one label point is identified within the frame according to the preset label-point shape, and the arrangement of the label points is determined. If the label points are circular pieces of reflective material, the circular light spots within the frame can be identified, and the arrangement of the circular spots is the arrangement of the label points. For example, if the spots within a label's frame are found to be arranged in a "Z" shape, the arrangement of that label's points is determined to be a "Z" shape. Each label corresponds to a unique arrangement of label points, from which label information such as the label code and label position can be determined.
Step 130: determine the position information of the label according to the arrangement of the label points and a preset association between arrangements and label position information.
Each label has a unique arrangement of label points and unique position information, in one-to-one correspondence. The label position information may be the coordinates at which the label is pasted in the robot's working environment. The correspondence between arrangements and position information is preset, e.g., by storing the graphic of each arrangement in association with the corresponding position information. Once the arrangement of the label points is determined, the corresponding position information can be looked up from the preset association, thereby locating the label.
The association between arrangements and label position information is established by the staff before or while pasting the labels; the staff must paste the labels according to the preset position information, so that after the robot recognizes an arrangement it can promptly determine the label position information and position itself accordingly. For example, suppose it is stored in advance that the "Z"-shaped arrangement corresponds to the coordinates of the restaurant entrance and the "L"-shaped arrangement corresponds to the coordinates to the left of table one. If an "L"-shaped arrangement is recognized, the robot is determined to be at the position to the left of table one, completing the label-based positioning.
In this embodiment, a label image of the robot's working environment is acquired, the label points are identified within the image, the arrangement of the label points is determined, and the robot is positioned quickly according to the association between arrangements and label position information, i.e., each arrangement is associated with one label position. This improves positioning efficiency and solves the problem that a robot's position cannot be determined quickly.
FIG. 3 is a flowchart of a label identification method provided by an embodiment of the present invention. This embodiment is an optional embodiment based on the above embodiments, and the method can be executed by a label identification apparatus.
In this embodiment, identifying the label-point enclosing frame from the label image can be refined as: according to a preset enclosing-frame recognition algorithm, identifying lines of the preset enclosing-frame shape from the label image to obtain the enclosing frame in the label image.
As shown in FIG. 3, the method specifically includes the following steps:
Step 310: acquire the label image.
Step 320: according to the preset enclosing-frame recognition algorithm, identify lines of the preset enclosing-frame shape from the label image to obtain the enclosing frame in the label image.
An image recognition algorithm is preset as the enclosing-frame recognition algorithm, and the enclosing frame is identified from the label image according to it. The shape of the frame is preset, and lines of that shape are identified from the label image as the frame. For example, if the frame is a pentagon, features are extracted from the label image, a pentagonal pattern is recognized, and the recognized pentagon is determined to be the frame. Alternatively, if both the frame and the label points are circular, the recognized circles are compared by size and the largest circle is taken as the frame.
The enclosing-frame recognition algorithm may include an image filtering algorithm, an edge detection algorithm, a feature detection algorithm, and the like. In this embodiment, when determining the enclosing frame in the label image, the label image may first be filtered with a preset image filtering algorithm to obtain a grayscale image; an edge detection algorithm then extracts edges from the grayscale image to obtain an intermediate image; a feature detection algorithm then identifies, from the intermediate image, the lines composing the enclosing-frame shape, yielding the enclosing frame in the label image.
Specifically, after the label image is obtained, it is filtered with the preset image filtering algorithm, e.g., a Gaussian filter; the purpose of filtering is to reduce the influence of image noise on label identification. After filtering, an image with distinct gray levels, i.e., a grayscale image, is obtained; the filtered image may also be binarized to obtain the grayscale image.
The edge detection algorithm may be a LOG (Laplacian of Gaussian) feature extraction algorithm, a Sobel edge detection algorithm, a Canny edge detection algorithm, or the like. The preset edge detection algorithm extracts edges from the grayscale image based on gradient operators, revealing the edge lines in the image, e.g., straight lines and curves; the image showing the edges is the intermediate image. Because the enclosing frame and the label points on the label are reflective, white light spots formed by the reflective material can be seen when edges are extracted. The spot boundaries of the frame and of all label points can be extracted, so the geometric shapes of the frame and the points can be seen in the intermediate image, e.g., a quadrilateral frame and circular points.
After edge extraction, the lines in the intermediate image are examined to determine which of them form the enclosing frame; the lines may be straight or curved. The feature detection algorithm searches the intermediate image for the frame, i.e., identifies the segments composing the frame shape. For example, if the frame is a pentagon, the segments composing a pentagon are sought. The preset feature detection algorithm may be the Hough transform, a method for detecting boundary shapes that fits straight lines and curves by mapping the image coordinate space into a parameter space. In this embodiment the frame is a polygon, specifically a quadrilateral, so its sides are straight lines; the Hough transform can search the intermediate image for straight lines, fit them, and identify the straight segments composing the polygon. If the frame were circular, circle fitting could identify the curves composing the circle. The Hough transform distinguishes the enclosing frame from the label points, which facilitates obtaining the frame, helps identify the label points within its range, and improves label identification precision. Moreover, this embodiment uses straight-line and circle extraction, i.e., edge gradient information, to recognize the frame and the points; this depends little on the pixel distribution inside the label, so it is less affected by lighting, further improving identification precision.
Step 330: identify the label points within the range of the enclosing frame, and determine the arrangement of the label points.
Step 340: determine the position information of the label according to the arrangement of the label points and the preset association between arrangements and label position information.
The label in this embodiment of the present invention consists of a label-point enclosing frame and at least one label point, the frame being a semi-enclosed frame surrounding all the points. After the label image of the robot's working environment is acquired, the frame is identified first according to the preset enclosing-frame recognition algorithm; identification can be based on the preset frame shape, improving frame recognition accuracy. The label points are then identified within the range of the frame, so that light spots outside the frame are not mistaken for label points, improving point recognition accuracy. The arrangement of the points within the frame is determined; each arrangement is associated with one label position, so the robot can be positioned quickly from the arrangement. This solves the prior-art problem of identifying ambient light spots as label points, enables prompt label positioning through the preset association, and improves the accuracy and efficiency of label identification and hence of the robot's work.
FIG. 4 is a flowchart of a label identification method provided by an embodiment of the present invention. This embodiment is an optional embodiment based on the above embodiments, and the method can be executed by a label identification apparatus.
In this embodiment, identifying the label points within the enclosing frame and determining their arrangement can be refined as: according to a preset label-point recognition algorithm, identifying patterns of the label-point shape in the enclosing frame, the label-point shape being preset; and determining the arrangement of the label-point-shaped patterns in the frame as the arrangement of the label points.
As shown in FIG. 4, the method specifically includes the following steps:
Step 410: acquire the label image.
Specifically, a label image in the robot's working environment is acquired, and the label-point enclosing frame is identified from it.
Step 420: according to the preset label-point recognition algorithm, identify the patterns of the label-point shape in the enclosing frame.
A label-point recognition algorithm is preset; it may be an image recognition algorithm such as an image filtering algorithm, an edge detection algorithm, or a feature detection algorithm. After the enclosing frame is determined, the image within the frame is taken from the label image as a local label image, and patterns of the preset label-point shape are identified in it according to the recognition algorithm. For example, if the preset shape is a circle, circular light spots can be identified in the local label image as label points.
Specifically, the local label image can be filtered and edge-detected with the image filtering and edge detection algorithms. Filtering reduces image noise and yields an image with distinct gray levels; edge detection then detects the edges of label-point-shaped patterns, e.g., the edges of circular points. Because the label points are reflective, they appear as light spots in the local label image, and after filtering the spots appear white. Edge extraction recovers the boundaries of all spots in the local label image and shows their geometric shapes. Whether a spot's shape matches the preset label-point shape is then judged: if so, the spot is determined to be a label point; if not, it is not. Filtering reduces the influence of noise on point recognition and improves its precision; edge extraction yields the spots' geometry, makes the points clearer to inspect, avoids recording non-label-point spots as label points, and further improves recognition precision.
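The shape judgment on the extracted spots can be sketched, for example, with a circularity measure (4πA/P², which equals 1 for a perfect circle); this particular criterion and threshold are our illustrative choices, not ones prescribed by the embodiment:

```python
import math

# Judge whether an extracted spot is (approximately) the preset circular
# label-point shape, using circularity = 4*pi*area / perimeter^2.
# A perfect circle scores 1.0; a square scores pi/4, about 0.785.

def is_circular(area, perimeter, threshold=0.9):
    circularity = 4.0 * math.pi * area / (perimeter ** 2)
    return circularity >= threshold

r = 5.0
print(is_circular(math.pi * r * r, 2.0 * math.pi * r))  # circle -> True
print(is_circular(10.0 * 10.0, 4 * 10.0))               # square -> False
```

In practice the area and perimeter would come from the extracted spot boundaries; only spots passing the check would be recorded as label points.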
Step 430: determine the arrangement of the label-point-shaped patterns in the enclosing frame as the arrangement of the label points.
The position of each label-point-shaped pattern in the frame is determined, and the arrangement of the patterns, which is the arrangement of the label points, is determined from those positions. For example, if the shape is a circle, a circular pattern's position in the frame may be the position of its center; the coordinates of each circle's center give the arrangement of the circular patterns and hence the arrangement of the label points.
The label points are arranged in the enclosing frame in a preset number of rows and columns; for example, a label may have at most four rows and four columns of points, i.e., each row has four preset center positions for placing label points. The distance between every two center positions is preset; when identifying label points, their positions in the frame can be determined from the distances between the identified points. For example, if the preset distance between adjacent center positions is 5 cm and two points in one row are found to be 10 cm apart, it can be determined that an empty center position lies between them. The coordinates of the two points in the frame can also be compared with the preset center positions to determine their arrangement.
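The mapping of detected spot centers onto the preset 5 cm grid of center positions described above can be sketched as follows (the function name `snap_to_grid` and the coordinate convention are our own assumptions):

```python
# Map detected spot centers onto the preset grid of center positions by
# rounding each coordinate to the nearest multiple of the 5 cm pitch.
# The pitch and the 4x4 grid size follow the example in the text.

PITCH_CM = 5.0

def snap_to_grid(points, rows=4, cols=4):
    grid = [[0] * cols for _ in range(rows)]
    for x, y in points:
        col = round(x / PITCH_CM)
        row = round(y / PITCH_CM)
        if 0 <= row < rows and 0 <= col < cols:
            grid[row][col] = 1  # this center position is occupied
    return grid

# Two spots in one row, 10 cm apart: an empty center lies between them.
points = [(0.2, 0.1), (9.8, 0.0)]
grid = snap_to_grid(points)
print(grid[0])  # -> [1, 0, 1, 0]
```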
In this embodiment, optionally, the enclosing frame is a semi-enclosed frame with one open side, and determining the arrangement of the label-point-shaped patterns in the frame as the arrangement of the label points includes: determining, from the identified frame, the missing edge of the frame as the target edge; determining the row order of the label points in the label according to the target edge; and obtaining the arrangement of the label points from the row order and the arrangement of the patterns.
Specifically, the same label may show different arrangements when viewed from different directions, so the correct viewing direction must be established when determining the arrangement. The enclosing frame may be a semi-enclosed frame with one open side; the row nearest the opening is taken as the first row, which fixes the correct viewing direction of the label.
The shape of the enclosing frame is preset; when the frame is identified, its missing edge is determined and taken as the target edge. For example, in FIG. 2a the top edge of the enclosing frame 201 is missing, so the top edge is the target edge. The row order of the label points is determined from the target edge: the row nearest the target edge is the first row, followed downward by the second, third, and fourth rows. With the correct row order, the label in the label image is set upright, the arrangement of the patterns in the upright label is determined, and the arrangement of that label's points is obtained. The benefit of this arrangement is that the frame lacks one edge, the spot row adjacent to the missing edge is the first row, and the missing edge does not affect the algorithm's recognition of the frame. The frame thus implicitly encodes the row order without any extra identifier, improving the accuracy of recognizing the order of the label points and hence the positioning accuracy.
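A minimal sketch of using the missing (target) edge to fix the reading order of the rows; only the cases where the opening appears at the top or bottom of the image are shown, and the function name `orient` is ours:

```python
# Orient the label using the frame's missing ("target") edge: the row of
# points adjacent to the missing edge is read as the first row. A grid
# captured upside-down (opening at the bottom) is corrected by a
# 180-degree rotation: reverse the rows, then reverse each row.

def orient(grid, missing_edge):
    if missing_edge == "top":
        return grid  # already in reading order
    if missing_edge == "bottom":
        return [row[::-1] for row in grid[::-1]]  # rotate 180 degrees
    raise ValueError("left/right cases omitted in this sketch")

# An upside-down capture of the Figure 2a arrangement:
grid = [[1, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 1],
        [1, 1, 1, 1]]
print(orient(grid, "bottom"))  # rows now start from the missing-edge side
```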
Step 440: determine the position information of the label according to the arrangement of the label points and the preset association between arrangements and label position information.
The label in this embodiment of the present invention consists of an enclosing frame and a plurality of label points, the frame being a semi-enclosed frame surrounding all the points. After the label image of the robot's working environment is acquired, the frame is identified first and the points are identified within its range, so that spots outside the frame are not mistaken for label points, improving recognition accuracy. The label-point-shaped patterns are identified, and the arrangement of the points is determined from the arrangement of the patterns, improving the accuracy with which the arrangement is determined. Each arrangement is associated with one label position, so the robot can be positioned quickly from the arrangement. This solves the prior-art problem of identifying ambient light spots as label points, enables prompt label positioning through the preset association, and improves the accuracy and efficiency of label identification and hence of the robot's work.
FIG. 5 is a flowchart of a label identification method provided by an embodiment of the present invention. This embodiment is an optional embodiment based on the above embodiments, and the method can be executed by a label identification apparatus.
In this embodiment, determining the position information of the label according to the arrangement of the label points and the preset association between arrangements and label position information can be refined as: obtaining the character expression of the arrangement of the label points according to a preset label-point-arrangement expression algorithm; and determining the position information of the label according to that character expression and a preset association between character expressions and label position information.
As shown in FIG. 5, the method specifically includes the following steps:
Step 510: acquire the label image.
Step 520: identify the label points within the label image, and determine the arrangement of the label points.
Step 530: obtain the character expression of the arrangement of the label points according to the preset label-point-arrangement expression algorithm.
A label-point-arrangement expression algorithm is preset; it expresses the arrangement of the label points in the form of characters, which may be digits, symbols, or the like. After the arrangement is obtained, its character expression is derived with the algorithm. For example, the position coordinates of each label point may be converted into a character string, or the coordinates of all points may be concatenated in sequence to yield a string of digits as the character expression.
In this embodiment, optionally, obtaining the character expression of the arrangement according to the preset algorithm includes: determining the character expression of each row of label points according to the arrangement and the preset algorithm; and sorting the per-row expressions according to the row order of the points to obtain the character expression of the whole arrangement.
Specifically, after the arrangement of all points in the enclosing frame is obtained, it may be converted directly into the character expression of the entire label, or each row's arrangement may first be converted into a character expression and the per-row expressions then combined into the expression of the entire label. In this embodiment, the per-row expressions are determined with the preset algorithm and then sorted and combined in row order to yield the expression of all the label points in the label. Combining the per-row expressions may simply mean concatenating them in row order. For example, if the frame contains four rows of points whose per-row expressions are 01, 02, 03, and 04 in row order, the overall expression of the arrangement may be 01020304. The benefit of this arrangement is that the per-row expressions are determined first and the expression of the entire label second, so the arrangement of the label points is converted into a unique character expression; this increases the diversity of character expressions, makes the arrangement easy to store, helps find the label position quickly, and improves positioning accuracy and efficiency.
In this embodiment, optionally, determining the per-row character expressions includes: determining the binary expression of each row according to that row's arrangement and a preset label-point representation algorithm; and converting each row's binary expression to decimal to obtain that row's character expression.
Specifically, the preset expression algorithm may convert the arrangement through binary and decimal. The conversion rule may be: the occupancy of each preset center position in the enclosing frame is represented by a binary 0 or 1; if a label-point spot is present at a center position, that position is 1, otherwise it is 0. Each row's arrangement can thus be expressed in binary as a 0/1 sequence. For example, in FIG. 2a the binary expressions of the first through fourth rows of label points are 1111, 1000, 0110, and 0001, respectively.
After each row's binary expression is obtained, it is converted to decimal; the resulting decimal expression is that row's character expression. By the rules of binary, each row's binary expression corresponds to a unique decimal integer: the first row's 1111 is decimal 15, the second row's 1000 is 8, the third row's 0110 is 6, and the fourth row's 0001 is 1.
The per-row expressions are combined in row order into a new character sequence, which serves as the character expression of the arrangement of the entire label. Each row's expression occupies two positions of the new sequence; if a decimal expression has fewer than two digits, a 0 is prepended. For example, FIG. 2a shows a 4×4 arrangement; starting from the top row, the per-row decimal expressions are 15, 8, 6, and 1, and prepending 0 to 8, 6, and 1 yields the eight-digit sequence 15080601 for the arrangement of the entire label.
The benefit of this arrangement is that the binary and decimal conversion reduces the difficulty and the amount of computation involved in determining the character expression and improves its efficiency and accuracy. The final decimal expression is also efficient, reliable, and easy to store, which helps find the corresponding label position from the character expression and improves positioning accuracy and efficiency.
Step 540: determine the position information of the label according to the character expression of the arrangement of the label points and the preset association between character expressions and label position information.
The staff determine in advance the pasting coordinates of each label in the working environment, i.e., the label position information, determine each label's character expression, and store each label's character expression in association with its position information. After a label is recognized and its character expression determined, the corresponding position information is found from the pre-stored association, positioning the robot. For example, when the robot recognizes a label above it during operation, the position of that label is determined and taken as the robot's position, thereby positioning the robot.
The label in this embodiment of the present invention consists of an enclosing frame and a plurality of label points; the frame may be a semi-enclosed frame surrounding all the points. After the label image of the robot's working environment is acquired, the frame is identified first and the points are identified within its range, avoiding mistaking spots outside the frame for label points and improving recognition accuracy. The arrangement of the points in the frame is determined and converted into a character expression; each character expression corresponds to one label position, so the robot can be positioned quickly from the expression. This solves the prior-art problem of identifying ambient light spots as label points; moreover, using character expressions for positioning is more efficient, reliable, and easy to store than using the arrangement graphics. Prompt label positioning through the character expression and the preset association improves the accuracy and efficiency of label identification and hence of the robot's work.
图6为本发明实施例所提供的一种标签识别的装置的结构框图,可执行本发明任意实施例所提供的一种标签识别的方法,具备执行方法相应的功能模块和有益效果。如图6所示,该装置具体包括:
获取模块601,用于获取标签图像,其中,所述标签图像具有标签,所述标签包括至少一个具有红外反光特性的标签点;
确定模块602,用于在所述标签图像内识别所述标签点,并确定所述标签点的排列方式;
标签识别模块603,用于根据所述标签点的排列方式以及预设的排列方式与标签位置信息的关联关系,确定所述标签的位置信息。
可选的,所述标签还包括具有红外反光特性的标签点包围框,所述标签点包围框是围绕所有标签点的多边形;
相应地,确定模块602,包括:
标签点包围框识别单元,用于从所述标签图像中识别所述标签点包围框;
排列方式确定单元,用于在所述标签点包围框的范围内识别所述标签点,确定所述标签点的排列方式。
可选的,确定模块602,包括:
标签点包围框识别单元,用于根据预设的包围框识别算法,从所述标签图像中识别预设的标签点包围框形状的线条,得到所述标签图像中的标签点包围框。
可选的,标签点在标签点包围框中根据预设行列数进行排列;
相应地,确定模块602,包括:
标签点形状图案确定单元,用于根据预设的标签点识别算法,识别所述标签点包围框中的标签点形状的图案;其中,标签点形状为预先设置;
排列方式确定单元,用于将所述标签点包围框中的标签点形状的图案的排列方式,确定为所述标签点的排列方式。
Optionally, the bounding frame is a semi-enclosed frame open on one side, and the arrangement determination unit is specifically configured to:
determine, from the recognized bounding frame, the missing edge of the bounding frame as a target edge;
determine the row order of the label points in the label according to the target edge; and
obtain the arrangement of the label points of the label according to the row order and the arrangement of the patterns of the label-point shape.
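The reorientation step above can be sketched as follows. Which rotation corresponds to which open edge is an assumed convention for illustration (here, the open edge marks the top of the label in reading order); the patent does not fix this mapping.

```python
def order_rows(grid, open_edge):
    """Reorient a detected pattern grid so its rows read in label order.

    grid: occupancy grid as detected in the image.
    open_edge: side of the bounding frame with no line, one of
               "top", "bottom", "left", "right".
    """
    if open_edge == "top":
        return [list(r) for r in grid]              # already in reading order
    if open_edge == "bottom":
        return [row[::-1] for row in grid[::-1]]    # rotate 180 degrees
    if open_edge == "left":
        return [list(r) for r in zip(*grid[::-1])]  # rotate 90 deg clockwise
    if open_edge == "right":
        return [list(r) for r in zip(*grid)][::-1]  # rotate 90 deg counterclockwise
    raise ValueError(f"unknown edge: {open_edge}")
```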
Optionally, the label recognition module 603 includes:
a character expression determination unit, configured to obtain the character expression of the arrangement of the label points according to a preset label-point-arrangement expression algorithm; and
a label position determination unit, configured to determine the position information of the label according to the character expression of the arrangement of the label points, based on a preset association between character expressions and label position information.
Optionally, the character expression determination unit includes:
a per-row character expression determination subunit, configured to determine the character expression of each row of label points according to the arrangement of the label points and the preset label-point-arrangement expression algorithm; and
a label character expression determination subunit, configured to sort the character expressions of the rows according to the row order of the label points, to obtain the character expression of the arrangement of the label points.
Optionally, the per-row character expression determination subunit is specifically configured to:
determine the binary expression of each row of label points according to the arrangement of that row and a preset label-point representation algorithm; and
convert the binary expression of each row of label points into decimal, to obtain the character expression of that row.
Optionally, the acquisition module 601 further includes:
a label image acquisition unit, configured to capture label images in the robot's work environment via an image capture device mounted on the robot.
In the embodiments of the present invention, a label consists of a label-point bounding frame and a plurality of label points, the bounding frame being a semi-enclosed frame surrounding all label points. After a label image of the robot's work environment is acquired, the bounding frame is recognized first, and the label points are then recognized within its extent, which avoids mistaking light spots outside the frame for label points and improves label-point recognition accuracy. The arrangement of the label points within the bounding frame is determined, and each arrangement is associated with one label position, so the robot can be positioned quickly from the arrangement of the label points. This solves the prior-art problem of ambient light spots being recognized as label points, and label positioning can be performed promptly through the preset association, improving the accuracy and efficiency of label recognition and, in turn, the working accuracy and efficiency of the robot.
An embodiment of the present invention provides a label that includes at least one label point having infrared-reflective properties. The label can be placed in a robot's work environment to position the robot when the robot performs the label recognition method of any embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a label recognition device provided by an embodiment of the present invention. The label recognition device is an electronic device, and Fig. 7 shows a block diagram of an exemplary electronic device 700 suitable for implementing embodiments of the present invention. The electronic device 700 shown in Fig. 7 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in Fig. 7, the electronic device 700 takes the form of a general-purpose computing device. The components of the electronic device 700 may include, but are not limited to: one or more processors or processing units 701, a system memory 702, and a bus 703 connecting the various system components (including the system memory 702 and the processing unit 701).
The bus 703 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The electronic device 700 typically includes a variety of computer-system-readable media. These media may be any available media accessible by the electronic device 700, including volatile and non-volatile media, and removable and non-removable media.
The system memory 702 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) 704 and/or cache memory 705. The electronic device 700 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, the storage system 706 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in Fig. 7, commonly called a "hard drive"). Although not shown in Fig. 7, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk") and an optical disc drive for reading from and writing to a removable non-volatile optical disc (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be connected to the bus 703 through one or more data media interfaces. The memory 702 may include at least one program product having a set of (e.g., at least one) program modules configured to carry out the functions of the embodiments of the present invention.
A program/utility 708 having a set of (at least one) program modules 707 may be stored, for example, in the memory 702. Such program modules 707 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a networking environment. The program modules 707 generally carry out the functions and/or methods of the embodiments described herein.
The electronic device 700 may also communicate with one or more external devices 709 (e.g., a keyboard, a pointing device, a display 710, etc.), with one or more devices that enable a user to interact with the electronic device 700, and/or with any device (e.g., a network card, a modem, etc.) that enables the electronic device 700 to communicate with one or more other computing devices. Such communication can occur via input/output (I/O) interfaces 711. In addition, the electronic device 700 can communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 712. As shown in Fig. 7, the network adapter 712 communicates with the other modules of the electronic device 700 over the bus 703. It should be understood that, although not shown in Fig. 7, other hardware and/or software modules could be used in conjunction with the electronic device 700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The processing unit 701 executes various functional applications and data processing by running programs stored in the system memory 702, for example implementing a label recognition method provided by the embodiments of the present invention, including:
acquiring a label image, wherein the label image contains a label, and the label includes at least one label point having infrared-reflective properties;
recognizing the label points within the label image, and determining the arrangement of the label points; and
determining the position information of the label according to the arrangement of the label points and a preset association between arrangements and label position information.
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, on which a computer program is stored; when executed by a processor, the program implements a label recognition method as provided by the embodiments of the present invention, including:
acquiring a label image, wherein the label image contains a label, and the label includes at least one label point having infrared-reflective properties;
recognizing the label points within the label image, and determining the arrangement of the label points; and
determining the position information of the label according to the arrangement of the label points and a preset association between arrangements and label position information.
The computer storage medium of the embodiments of the present invention may employ any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination thereof.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, it may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the foregoing are merely preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments, and substitutions can be made without departing from the scope of protection of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may include more other equivalent embodiments without departing from the concept of the present invention, the scope of which is determined by the appended claims.

Claims (13)

  1. A label recognition method, characterized by comprising:
    acquiring a label image, wherein the label image contains a label, and the label comprises at least one label point having infrared-reflective properties;
    recognizing the label points within the label image, and determining an arrangement of the label points; and
    determining position information of the label according to the arrangement of the label points and a preset association between arrangements and label position information.
  2. The method according to claim 1, characterized in that the label further comprises a label-point bounding frame having infrared-reflective properties, the label-point bounding frame being a polygon surrounding all label points,
    wherein the recognizing the label points within the label image and determining the arrangement of the label points comprises:
    recognizing the label-point bounding frame from the label image; and
    recognizing the label points within the extent of the label-point bounding frame, and determining the arrangement of the label points.
  3. The method according to claim 2, characterized in that the recognizing the label-point bounding frame from the label image comprises:
    recognizing, according to a preset bounding-frame recognition algorithm, lines of a preset label-point bounding-frame shape from the label image, to obtain the label-point bounding frame in the label image.
  4. The method according to claim 2, characterized in that the label points are arranged in the label-point bounding frame according to a preset number of rows and columns,
    wherein the recognizing the label points within the extent of the label-point bounding frame and determining the arrangement of the label points comprises:
    recognizing, according to a preset label-point recognition algorithm, patterns of a label-point shape in the label-point bounding frame, wherein the label-point shape is preset; and
    determining the arrangement of the patterns of the label-point shape in the label-point bounding frame as the arrangement of the label points.
  5. The method according to claim 4, characterized in that the label-point bounding frame is a semi-enclosed frame open on one side,
    wherein the determining the arrangement of the patterns of the label-point shape in the label-point bounding frame as the arrangement of the label points comprises:
    determining, from the recognized label-point bounding frame, a missing edge of the label-point bounding frame as a target edge;
    determining a row order of the label points in the label according to the target edge; and
    obtaining the arrangement of the label points according to the row order and the arrangement of the patterns of the label-point shape.
  6. The method according to any one of claims 1 to 5, characterized in that the determining the position information of the label according to the arrangement of the label points and the preset association between arrangements and label position information comprises:
    obtaining a character expression of the arrangement of the label points according to a preset label-point-arrangement expression algorithm; and
    determining the position information of the label according to the character expression of the arrangement of the label points, based on a preset association between character expressions and label position information.
  7. The method according to claim 6, characterized in that the obtaining the character expression of the arrangement of the label points according to the preset label-point-arrangement expression algorithm comprises:
    determining a character expression of each row of label points according to the arrangement of the label points and the preset label-point-arrangement expression algorithm; and
    sorting the character expressions of the rows of label points according to the row order of the label points, to obtain the character expression of the arrangement of the label points.
  8. The method according to claim 7, characterized in that the determining the character expression of each row of label points according to the arrangement of the label points and the preset label-point-arrangement expression algorithm comprises:
    determining a binary expression of each row of label points according to the arrangement of that row and a preset label-point representation algorithm; and
    converting the binary expression of each row of label points into decimal, to obtain the character expression of that row.
  9. The method according to any one of claims 1 to 5, characterized in that the acquiring the label image comprises:
    capturing the label image in a work environment of a robot via an image capture device mounted on the robot.
  10. A label recognition apparatus, characterized by comprising:
    an acquisition module, configured to acquire a label image, wherein the label image contains a label, and the label comprises at least one label point having infrared-reflective properties;
    a determination module, configured to recognize the label points within the label image and determine an arrangement of the label points; and
    a label recognition module, configured to determine position information of the label according to the arrangement of the label points and a preset association between arrangements and label position information.
  11. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the label recognition method according to any one of claims 1 to 9.
  12. A storage medium containing computer-executable instructions, characterized in that the computer-executable instructions, when executed by a computer processor, are used to perform the label recognition method according to any one of claims 1 to 9.
  13. A label, characterized in that the label comprises at least one label point having infrared-reflective properties, and the label is used to position a robot when the robot performs the label recognition method according to any one of claims 1 to 9.
PCT/CN2023/071912 2022-01-29 2023-01-12 Label recognition method, apparatus, electronic device, storage medium, and label WO2023143098A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210110321.1A CN114549624B (zh) 2022-01-29 2022-01-29 Label recognition method, apparatus, electronic device, storage medium, and label
CN202210110321.1 2022-01-29

Publications (1)

Publication Number Publication Date
WO2023143098A1 true WO2023143098A1 (zh) 2023-08-03

Family

ID=81674245

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/071912 WO2023143098A1 (zh) 2022-01-29 2023-01-12 Label recognition method, apparatus, electronic device, storage medium, and label

Country Status (2)

Country Link
CN (1) CN114549624B (zh)
WO (1) WO2023143098A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549624B (zh) * 2022-01-29 2023-07-14 上海擎朗智能科技有限公司 一种标签识别的方法、装置、电子设备、存储介质及标签

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200334457A1 (en) * 2019-04-16 2020-10-22 Boe Technology Group Co., Ltd. Image recognition method and apparatus
CN113673499A (zh) 2021-08-20 2021-11-19 上海擎朗智能科技有限公司 Label mapping method, apparatus, device, and storage medium
CN113947717A (zh) 2021-10-20 2022-01-18 上海擎朗智能科技有限公司 Label recognition method, apparatus, electronic device, and storage medium
CN113971414A (zh) 2021-11-11 2022-01-25 上海擎朗智能科技有限公司 Label recognition method, apparatus, electronic device, and storage medium
CN114549624A (zh) 2022-01-29 2022-05-27 上海擎朗智能科技有限公司 Label recognition method, apparatus, electronic device, storage medium, and label

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140086412A (ko) 2012-12-28 2014-07-08 삼성전기주식회사 Method for searching for a tag with a communication error, and electronic price display system server applying the same
CN106526580A (zh) 2016-10-26 2017-03-22 哈工大机器人集团上海有限公司 Road sign and device for determining robot position, and robot position determination method


Also Published As

Publication number Publication date
CN114549624A (zh) 2022-05-27
CN114549624B (zh) 2023-07-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23745981

Country of ref document: EP

Kind code of ref document: A1