CN113674341A - Robot visual identification and positioning method, intelligent terminal and storage medium - Google Patents
Robot visual identification and positioning method, intelligent terminal and storage medium
Info
- Publication number
- CN113674341A (application CN202110962704.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- checkerboard
- coordinates
- robot
- disassembled
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 51
- 230000000007 visual effect Effects 0.000 title claims abstract description 33
- 239000011159 matrix material Substances 0.000 claims abstract description 38
- 238000006243 chemical reaction Methods 0.000 claims abstract description 11
- 238000012549 training Methods 0.000 claims description 57
- 230000009466 transformation Effects 0.000 claims description 37
- 230000004807 localization Effects 0.000 claims 1
- 239000002699 waste material Substances 0.000 abstract description 9
- 238000004064 recycling Methods 0.000 abstract description 5
- 238000004590 computer program Methods 0.000 description 6
- 230000008569 process Effects 0.000 description 6
- 238000010586 diagram Methods 0.000 description 5
- 230000004913 activation Effects 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 2
- 229910052799 carbon Inorganic materials 0.000 description 2
- 238000013527 convolutional neural network Methods 0.000 description 2
- 230000006378 damage Effects 0.000 description 2
- 238000013135 deep learning Methods 0.000 description 2
- 230000007613 environmental effect Effects 0.000 description 2
- 238000011176 pooling Methods 0.000 description 2
- 230000001360 synchronised effect Effects 0.000 description 2
- 238000000844 transformation Methods 0.000 description 2
- 238000013519 translation Methods 0.000 description 2
- 229910000838 Al alloy Inorganic materials 0.000 description 1
- 229910000831 Steel Inorganic materials 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 239000004033 plastic Substances 0.000 description 1
- 229920003023 plastic Polymers 0.000 description 1
- 238000012552 review Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 239000010959 steel Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Manipulator (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a robot visual recognition and positioning method, an intelligent terminal and a storage medium. The method includes: acquiring an image to be recognized, wherein the image to be recognized includes a number of parts to be disassembled; inputting the image to be recognized into an image recognition model, which outputs the category information and image position information corresponding to each part to be disassembled; and determining the target position corresponding to each part to be disassembled according to the image position information and a predetermined transformation matrix from the camera to the robot end. By outputting the category and image position of each part through the image recognition model and determining the target position from the image position information, the invention can accurately identify the category of each part to be disassembled and precisely locate its position, enabling automatic classified disassembly of shared bicycles and recycling of their parts, and solving the resource waste caused by manual destructive disassembly.
Description
Technical Field
The invention relates to the technical field of machine recognition, and in particular to a robot visual recognition and positioning method, an intelligent terminal and a storage medium.
Background
Shared bicycles offer a high degree of freedom, low cost, and low-carbon, environmentally friendly travel, and their high frequency of use makes them popular with young people. While shared bicycles bring great convenience, tens of millions of them face scrapping every year due to over-deployment and damage from various human factors. To deal with worn-out shared bicycles parked haphazardly, the existing approach is to destructively disassemble the recycled bicycles and dispose of them as scrap, which causes an enormous waste of resources.
Therefore, the existing technology still needs improvement and development.
Summary of the Invention
In view of the above-mentioned defects of the prior art, the technical problem to be solved by the present invention is to provide a robot visual recognition and positioning method, an intelligent terminal and a storage medium, aiming to solve the enormous waste of resources caused by the existing manual destructive disassembly of shared bicycles.
The technical solution adopted by the present invention to solve the problem is as follows:
In a first aspect, an embodiment of the present invention provides a robot visual recognition and positioning method, applied to an intelligent terminal connected to a camera and a robot, the method comprising:
acquiring an image to be recognized, wherein the image to be recognized includes a number of parts to be disassembled;
inputting the image to be recognized into an image recognition model, and outputting, through the image recognition model, the category information and image position information corresponding to each part to be disassembled;
determining the target position corresponding to each part to be disassembled according to the image position information and a predetermined transformation matrix from the camera to the robot end.
In the robot visual recognition and positioning method, the method for generating the image recognition model comprises:
inputting training images from a training image set into a preset network model, and outputting, through the preset network model, the predicted attribute label corresponding to each part in the training image; wherein the training image set includes the training images and the real attribute label corresponding to each part in the training images, the real attribute label includes real category information and real image position information, and the predicted attribute label includes predicted category information and predicted image position information;
updating the model parameters of the preset network model according to the predicted attribute labels and the real attribute labels, and continuing to perform the step of outputting, through the preset network model, the predicted attribute label corresponding to each part in the training image, until the training of the preset network model satisfies a preset condition, so as to obtain the image recognition model.
In the robot visual recognition and positioning method, the step of updating the model parameters of the preset network model according to the predicted attribute labels and the real attribute labels, and continuing to perform the step of outputting, through the preset network model, the predicted attribute label corresponding to each part in the training image until the training of the preset network model satisfies the preset condition, comprises:
determining a loss value according to the predicted attribute labels and the real attribute labels, and comparing the loss value with a preset threshold;
when the loss value is not less than the preset threshold, updating the model parameters of the preset network model according to a preset parameter learning rate, and continuing to perform the step of outputting, through the preset network model, the predicted attribute label corresponding to each part in the training image, until the loss value is less than the preset threshold.
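The threshold-controlled loop described above can be sketched in a few lines of Python. This is only a didactic stand-in, not the patent's actual network: a one-parameter linear model replaces the preset network model, and every name below is hypothetical.

```python
# Illustrative stand-in for the training loop described above: compare the loss
# of predicted vs. real labels with a preset threshold; while the loss is not
# below the threshold, update the parameters with a preset learning rate.
def train_until_threshold(w, data, lr=0.05, threshold=1e-4, max_steps=10_000):
    """A one-parameter model y = w * x stands in for the preset network model."""
    for step in range(max_steps):
        # mean squared error between "predicted" and "real" labels
        loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
        if loss < threshold:                 # preset condition satisfied
            return w, loss, step
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad                       # preset parameter learning rate
    return w, loss, max_steps

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy stand-in for the training set
w, loss, steps = train_until_threshold(0.0, data)
```

With this toy data the loop converges toward w = 2 within a handful of updates; the real model would replace the gradient line with backpropagation through the network.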
In the robot visual recognition and positioning method, the step of determining the target position corresponding to each part to be disassembled according to the image position information and the predetermined transformation matrix from the camera to the robot end comprises:
determining, according to the image position information, the center position coordinates corresponding to each part to be disassembled;
performing a coordinate transformation on the center position coordinates according to the predetermined transformation matrix from the camera to the robot end, and determining the target position corresponding to each part to be disassembled.
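The coordinate transformation in this step can be illustrated as follows. This is a minimal sketch that assumes the camera-to-robot-end transform is given as a 4x4 homogeneous matrix and the part's center is already expressed as a 3-D point in the camera frame; the matrix values are invented for the example.

```python
# Hypothetical sketch: map a part's center point from the camera frame to the
# robot frame with a predetermined 4x4 homogeneous transformation matrix T.
def transform_point(T, p_cam):
    x, y, z = p_cam
    ph = (x, y, z, 1.0)                       # homogeneous coordinates
    return tuple(sum(T[i][j] * ph[j] for j in range(4)) for i in range(3))

# Example transform: pure translation of (0.5, -0.2, 0.1) meters, no rotation.
T = [[1.0, 0.0, 0.0,  0.5],
     [0.0, 1.0, 0.0, -0.2],
     [0.0, 0.0, 1.0,  0.1],
     [0.0, 0.0, 0.0,  1.0]]
target = transform_point(T, (0.1, 0.2, 0.3))  # approximately (0.6, 0.0, 0.4)
```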
In the robot visual recognition and positioning method, the step of determining, according to the image position information, the center position coordinates corresponding to each part to be disassembled comprises:
determining, according to the image position information, the minimum bounding rectangle corresponding to each part to be disassembled;
acquiring the center point coordinates of the minimum bounding rectangle corresponding to each part to be disassembled, and taking the center point coordinates as the center position coordinates corresponding to each part to be disassembled.
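A minimal sketch of this step, assuming the image position information is delivered as an axis-aligned minimum bounding rectangle in (x_min, y_min, x_max, y_max) pixel format (the format is an assumption, not stated in the source):

```python
# The part's center position is the midpoint of its minimum bounding rectangle.
def bbox_center(x_min, y_min, x_max, y_max):
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

cx, cy = bbox_center(120, 80, 200, 160)   # center of an example 80x80 box
```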
In the robot visual recognition and positioning method, the method for determining the transformation matrix from the camera to the robot end comprises:
acquiring a pre-designed checkerboard, and determining, according to the checkerboard, the coordinates of each corner point of the checkerboard in the robot base coordinate system and in the camera coordinate system;
determining the transformation matrix from the camera to the robot end according to the coordinates of each corner point of the checkerboard in the robot base coordinate system and in the camera coordinate system.
In the robot visual recognition and positioning method, the step of determining, according to the checkerboard, the coordinates of each corner point of the checkerboard in the robot base coordinate system and in the camera coordinate system comprises:
calibrating the camera according to the checkerboard, and determining the intrinsic and extrinsic parameters and the distortion coefficients of the camera;
acquiring the image coordinates of each corner point of the checkerboard, performing a coordinate transformation on the image coordinates according to the intrinsic and extrinsic parameters and the distortion coefficients of the camera, and determining the coordinates of each corner point of the checkerboard in the camera coordinate system.
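The image-to-camera conversion in this step can be sketched with the pinhole model. The intrinsics (fx, fy, cx, cy) would typically come from a checkerboard calibration routine such as OpenCV's cv2.calibrateCamera; the sketch below assumes the pixel has already been undistorted with the distortion coefficients and that the corner's depth Z in the camera frame is known (for a checkerboard of known geometry it follows from the extrinsics). All numeric values are illustrative.

```python
# Back-project an (undistorted) pixel (u, v) to camera-frame coordinates,
# given pinhole intrinsics and the point's known depth Z.
def pixel_to_camera(u, v, fx, fy, cx, cy, Z):
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return (X, Y, Z)

# Example: a corner seen at pixel (960, 540), 0.6 m in front of the camera.
p = pixel_to_camera(960.0, 540.0, 1200.0, 1200.0, 640.0, 480.0, 0.6)
```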
In the robot visual recognition and positioning method, the step of determining, according to the checkerboard, the coordinates of each corner point of the checkerboard in the robot base coordinate system and in the camera coordinate system further comprises:
determining, according to the checkerboard, the transformation matrix from the checkerboard coordinate system to the robot end coordinate system and the coordinates of each corner point of the checkerboard in the checkerboard coordinate system;
determining the coordinates of each corner point of the checkerboard in the robot base coordinate system according to the transformation matrix from the checkerboard coordinate system to the robot end coordinate system and the coordinates of each corner point in the checkerboard coordinate system.
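The transform chain in this step composes two rigid transforms: the end-effector pose in the base frame (read from the robot controller) and the checkerboard-to-end transform. A hedged sketch with invented 4x4 matrices (pure translations, to keep the arithmetic visible):

```python
# Compose T_base_end (robot end pose in the base frame) with T_end_board
# (checkerboard pose in the end frame), then map a board-frame corner to base.
def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(T, p):
    x, y, z = p
    return tuple(sum(T[i][j] * v for j, v in enumerate((x, y, z, 1.0)))
                 for i in range(3))

I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
T_end_board = [row[:] for row in I]; T_end_board[0][3] = 0.10  # board 10 cm along end x
T_base_end  = [row[:] for row in I]; T_base_end[2][3]  = 0.50  # end 50 cm above base

T_base_board = matmul4(T_base_end, T_end_board)
p_base = apply(T_base_board, (0.02, 0.0, 0.0))  # corner 2 cm along board x
```

With real data the two matrices would contain rotations as well; the composition and point mapping are unchanged.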
In a second aspect, an embodiment of the present invention further provides a robot visual recognition and positioning device, the device comprising:
an image acquisition module, configured to acquire an image to be recognized, wherein the image to be recognized includes a number of parts to be disassembled;
an image recognition module, configured to input the image to be recognized into an image recognition model, and to output, through the image recognition model, the category information and image position information corresponding to each part to be disassembled;
a target positioning module, configured to determine the target position corresponding to each part to be disassembled according to the image position information and the predetermined transformation matrix from the camera to the robot end.
In a third aspect, an embodiment of the present invention provides an intelligent terminal, comprising a processor and a storage medium communicatively connected to the processor, the storage medium being adapted to store a plurality of instructions, and the processor being adapted to invoke the instructions in the storage medium to perform the steps of the robot visual recognition and positioning method described above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a plurality of instructions, wherein the instructions are adapted to be loaded and executed by a processor to perform the steps of the robot visual recognition and positioning method described above.
Beneficial effects of the invention: an embodiment of the present invention first acquires an image to be recognized, wherein the image includes a number of parts to be disassembled; then inputs the image into an image recognition model, which outputs the category information and image position information corresponding to each part; and finally determines the target position of each part according to the image position information and the predetermined transformation matrix from the camera to the robot end. By outputting category and image position information through the image recognition model and determining each part's target position from its image position, the invention can accurately identify the category of each part to be disassembled and precisely locate it, enabling automatic classified disassembly of shared bicycles and recycling of their parts, and solving the resource waste caused by manual destructive disassembly.
Brief Description of the Drawings
In order to explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of the robot visual recognition and positioning method provided by an embodiment of the present invention;
Fig. 2 is a schematic block diagram of the robot visual recognition and positioning device provided by an embodiment of the present invention;
Fig. 3 is a schematic block diagram of the internal structure of the intelligent terminal provided by an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
It should be noted that where the embodiments of the present invention involve directional indications (such as up, down, left, right, front, back, ...), these indications serve only to explain the relative positions, movements, etc. of the components in a particular posture (as shown in the drawings); if that posture changes, the directional indications change accordingly.
Compared with traditional bicycle rental, shared bicycles have the following advantages. First, shared bicycles need not be parked at dedicated docking stations and can be picked up and dropped off in most areas, giving a high degree of freedom of use. Second, at a few yuan per ride, or a few dozen yuan in scenic areas, they are cheaper than traditional rentals. In addition, shared bicycles fit the current trend toward low-carbon, environmentally friendly, green travel. These advantages allow shared bicycles to meet daily high-frequency travel needs and have made them a new travel fashion among young people.
However, while shared bicycles bring great convenience, tens of millions of them face scrapping every year due to over-deployment and damage from various human factors. To deal with worn-out shared bicycles parked haphazardly, the existing approach is to manually and destructively disassemble the recycled bicycles and dispose of them as scrap. Yet some of the aluminum alloy, plastic, steel and other materials on shared bicycles are reusable, so this approach causes an enormous waste of resources.
To solve the problems of the prior art, this embodiment provides a robot visual recognition and positioning method by which the category information of each part to be disassembled can be accurately identified and its position precisely located, enabling automatic classified disassembly of shared bicycles and recycling of their parts, and solving the resource waste caused by manual destructive disassembly. In a specific implementation, an image to be recognized is first acquired, the image including a number of parts to be disassembled; the image is then input into an image recognition model, which outputs the category information and image position information corresponding to each part; finally, the target position of each part is determined according to the image position information and the predetermined transformation matrix from the camera to the robot end. Thus, by outputting each part's category and image position through the image recognition model and determining the target position from the image position information, the method can accurately identify and precisely locate each part to be disassembled, enabling automatic classified disassembly of shared bicycles and recycling of their parts.
Exemplary Method
This embodiment provides a robot visual recognition and positioning method that can be applied to an intelligent terminal connected to a camera and a robot. As shown in Fig. 1, the method includes:
Step S100: acquiring an image to be recognized, wherein the image to be recognized includes a number of parts to be disassembled.
Specifically, the image to be recognized is obtained by photographing a shared bicycle to be disassembled with an industrial camera; the bicycle includes a number of parts to be disassembled, such as nuts, screws and bolts. The industrial camera has a resolution of up to 2 megapixels and can capture images to be recognized at 15 frames per second. The industrial camera is connected to the intelligent terminal via a network cable, and SOCKET communication is created over the UDP/IP protocol to transfer data between the application layers. When an object needs to be disassembled, for example a scrapped shared bicycle, the industrial camera photographs it to capture the image to be recognized and transmits the captured image to the intelligent terminal, so that in a subsequent step the image can be input into the image recognition model to obtain the category information and image position information corresponding to each part to be disassembled.
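The UDP/SOCKET link described above can be sketched with Python's standard socket module. The port, payload and message layout below are illustrative assumptions, not the patent's actual protocol; a real frame would be split across many datagrams.

```python
import socket

# Terminal side: bind a UDP socket (an OS-assigned loopback port stands in
# for the real camera-to-terminal link over the network cable).
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(2.0)
port = recv_sock.getsockname()[1]

# Camera side: send one (tiny placeholder) frame chunk as a datagram.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"frame-0001", ("127.0.0.1", port))

payload, addr = recv_sock.recvfrom(65536)   # application-layer receive
send_sock.close()
recv_sock.close()
```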
Step S200: inputting the image to be recognized into an image recognition model, and outputting, through the image recognition model, the category information and image position information corresponding to each part to be disassembled.
Specifically, the category information is the category to which each part to be disassembled belongs, for example nut, bolt or screw, and the image position information is the two-dimensional image coordinates of each part on the image to be recognized. The category and position information are obtained by recognizing and locating the image with the image recognition model. Accordingly, the step of acquiring the category information and image position information corresponding to each part may specifically be: inputting the image to be recognized into the image recognition model, and outputting, through the image recognition model, the category information and image position information corresponding to each part of the shared bicycle to be disassembled. The image recognition model includes a basic convolutional neural network and a connected regression-classification network, with core backbone layers such as convolutional, activation and pooling layers.
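The core layer types named above (convolution, activation, pooling) can be illustrated in plain Python on a tiny grayscale patch. This is a didactic sketch of what each layer computes, not the patent's network:

```python
# Valid (no-padding) 2-D convolution of a small image with a kernel.
def conv2d(img, k):
    kh, kw = len(k), len(k[0])
    return [[sum(img[i + a][j + b] * k[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def relu(x):                                   # activation layer
    return [[max(0.0, v) for v in row] for row in x]

def maxpool2(x):                               # 2x2 max-pooling layer
    return [[max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
             for j in range(0, len(x[0]) - 1, 2)]
            for i in range(0, len(x) - 1, 2)]

img = [[0, 0, 1, 1]] * 4          # a patch with a vertical edge
k = [[-1, 0, 1],                  # vertical-edge detection kernel
     [-1, 0, 1],
     [-1, 0, 1]]
feat = maxpool2(relu(conv2d(img, k)))
```

Stacking many such layers, followed by regression and classification heads, gives the kind of backbone the model described above relies on.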
In a specific embodiment, the method for generating the image recognition model in step S200 includes:
Step S210: inputting training images from a training image set into a preset network model, and outputting, through the preset network model, the predicted attribute label corresponding to each part in the training image; wherein the training image set includes the training images and the real attribute label corresponding to each part in the training images, the real attribute label includes real category information and real image position information, and the predicted attribute label includes predicted category information and predicted image position information;
Step S220: updating the model parameters of the preset network model according to the predicted attribute labels and the real attribute labels, and continuing to perform the step of outputting, through the preset network model, the predicted attribute label corresponding to each part in the training image, until the training of the preset network model satisfies a preset condition, so as to obtain the image recognition model.
具体地,所述训练图像集中包括训练图像和所述训练图像中各部件对应的真实属性标签,所述若干训练图像通过工业相机对不同种类的部件进行拍照得到,为了提高所述图像识别模型识别和定位准确性,在采集所述训练图像集时,通过改变工业相机和部件的相对位置,以及通过不同的光照、明暗、远近、图像分辨率模拟不同工业背景环境,来增加所述训练图像集的鲁棒性。所述真实属性标签包括所述训练图像中各部件对应的真实类别信息和真实图像位置信息。Specifically, the training image set includes the training image and the real attribute labels corresponding to each component in the training image, and the several training images are obtained by photographing different types of components with an industrial camera. In order to improve the recognition of the image recognition model and positioning accuracy, when collecting the training image set, the training image set is increased by changing the relative positions of industrial cameras and components, and simulating different industrial background environments through different lighting, light and shade, distance, and image resolution. robustness. The real attribute label includes real category information and real image location information corresponding to each component in the training image.
In this embodiment, a network model is built in advance using a deep learning algorithm implemented in Python. The network model has the same structure as the image recognition model: a basic convolutional neural network joined to a regression and classification head, with core backbone layers such as convolutional, activation, and pooling layers. After the training image set is obtained, the network model is trained with the Python implementation; the training setup includes attributes such as the number of training epochs and the batch size. The training process specifically includes: inputting the training images from the training image set into the preset network model, and outputting, through the preset network model, the predicted attribute label corresponding to each component in the training images (similar to the real attribute labels, a predicted attribute label includes predicted category information and predicted image position information); then updating the model parameters of the preset network model according to the predicted attribute labels and the real attribute labels, until the training of the preset network model satisfies the preset condition, thereby obtaining the image recognition model.
In a specific embodiment, the step in step S220 of updating the model parameters of the preset network model according to the predicted attribute labels and the real attribute labels, and repeating the step of outputting, through the preset network model, the predicted attribute label corresponding to each component in the training images until the training of the preset network model satisfies the preset condition, includes:
Step S221: Determine a loss value according to the predicted attribute labels and the real attribute labels, and compare the loss value with a preset threshold.
Step S222: When the loss value is not less than the preset threshold, update the model parameters of the preset network model according to a preset parameter learning rate, and repeat the step of outputting, through the preset network model, the predicted attribute label corresponding to each component in the training images, until the loss value is less than the preset threshold.
Specifically, in this embodiment a threshold is preset for judging whether the training of the preset network model satisfies the preset condition. After the predicted attribute labels are obtained, the loss value is determined from the predicted attribute labels and the real attribute labels. In general, the smaller the loss value, the better the performance of the network model. Once the loss value is obtained, it is further judged whether it is less than the preset threshold. If so, the training of the preset network model satisfies the preset condition. If not, the training does not yet satisfy the preset condition; the model parameters of the preset network model are then updated according to the preset parameter learning rate, and the step of outputting, through the preset network model, the predicted attribute label corresponding to each component in the training images is repeated. During training, the process can be monitored in real time in the background with TensorBoard; when the loss value levels off below the preset threshold, training of the network model is complete.
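The threshold-based loop of steps S221–S222 can be sketched as follows. A toy linear model with a mean-squared-error loss stands in for the convolutional network, and the function and parameter names are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def train_until_threshold(x, y_true, lr=0.1, loss_threshold=1e-4, max_epochs=10000):
    """Toy stand-in for the described training loop: update parameters with a
    preset learning rate until the loss falls below a preset threshold."""
    w, b = 0.0, 0.0                      # parameters of the "preset network model"
    for _ in range(max_epochs):
        y_pred = w * x + b               # predicted label (here: a scalar value)
        loss = np.mean((y_pred - y_true) ** 2)
        if loss < loss_threshold:        # preset condition satisfied -> training done
            return w, b, loss
        grad_w = np.mean(2 * (y_pred - y_true) * x)
        grad_b = np.mean(2 * (y_pred - y_true))
        w -= lr * grad_w                 # update with the preset parameter learning rate
        b -= lr * grad_b
    return w, b, loss
```

The real model would replace the linear predictor with the CNN and the MSE with the detection loss, but the stopping logic — compare loss to threshold, update, repeat — is the same.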
Step S300: Determine the target position corresponding to each component to be disassembled according to the image position information and a predetermined transformation matrix from the camera to the robot end.
Since the image position information output by the image recognition model for each component to be disassembled consists of image (pixel) coordinates, the image position information must also be converted into target position information in the robot-end coordinate system so that the robot can disassemble the components automatically. In this embodiment, after the image position information corresponding to each component to be disassembled is obtained, a coordinate transformation is applied to it using the predetermined transformation matrix from the camera to the robot end, yielding the target position information corresponding to each component to be disassembled.
In a specific embodiment, step S300 specifically includes:
Step S310: Determine the center position coordinates corresponding to each component to be disassembled according to the image position information.
Step S320: Perform a coordinate transformation on the center position coordinates according to the predetermined transformation matrix from the camera to the robot end, and determine the target position corresponding to each component to be disassembled.
Specifically, in this embodiment, after the image position information corresponding to the components to be disassembled is obtained, the center position coordinates corresponding to each component are determined from the image position information; a coordinate transformation is then applied to the center position coordinates according to the predetermined transformation matrix from the camera to the robot end, determining the target position corresponding to each component to be disassembled.
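As a minimal sketch of the coordinate transformation in step S320 — assuming the part's center is already expressed as a 3-D point in the camera frame (recovering depth from the pixel coordinates is left to the calibrated camera model), and with illustrative names:

```python
import numpy as np

def center_to_target(center_camera, end_T_camera):
    """Transform a part's center from camera coordinates to the robot-end
    frame by multiplying with the 4x4 camera-to-end homogeneous matrix."""
    p = np.append(np.asarray(center_camera, dtype=float), 1.0)  # homogeneous point
    return (end_T_camera @ p)[:3]
```

For example, a pure translation of (0.1, 0.2, 0.3) between the frames simply offsets the center by that amount.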
In a specific embodiment, step S310 specifically includes:
Step S311: Determine the minimal circumscribed rectangle corresponding to each component to be disassembled according to the image position information.
Step S312: Obtain the center point coordinates of the minimal circumscribed rectangle corresponding to each component to be disassembled, and take those center point coordinates as the center position coordinates corresponding to that component.
In a specific embodiment, the center position coordinates corresponding to each component to be disassembled are the center point coordinates of that component's minimal circumscribed rectangle. In this embodiment, after the image position information corresponding to each component to be disassembled is obtained, the largest and smallest horizontal and vertical coordinates are selected from the image position information of each component as its bounding box, determining the minimal circumscribed rectangle corresponding to each component; the center point coordinates of each minimal circumscribed rectangle are then obtained and taken as the center position coordinates of the corresponding component.
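The axis-aligned minimal circumscribed rectangle and its center, as described in steps S311–S312, can be sketched as (function name illustrative):

```python
import numpy as np

def min_bounding_rect_center(points):
    """Axis-aligned minimal circumscribed rectangle of a set of image points,
    and its center point, for locating a part to be disassembled."""
    pts = np.asarray(points, dtype=float)
    x_min, y_min = pts.min(axis=0)       # smallest horizontal/vertical coordinates
    x_max, y_max = pts.max(axis=0)       # largest horizontal/vertical coordinates
    center = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    return (x_min, y_min, x_max, y_max), center
```

For the points (1, 2), (5, 4), (3, 8) this gives the rectangle (1, 2)–(5, 8) with center (3, 5).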
In a specific embodiment, the method for determining the transformation matrix from the camera to the robot end in step S300 includes:
Step M310: Obtain a pre-designed checkerboard, and determine, from the checkerboard, the coordinates of each corner point of the checkerboard in the robot base coordinate system and in the camera coordinate system.
Step M320: Determine the transformation matrix from the camera to the robot end according to the coordinates of each corner point of the checkerboard in the robot base coordinate system and in the camera coordinate system.
Specifically, camera calibration is a very important step in robot vision: it allows the robot to convert the visual information it recognizes, enabling the subsequent disassembly of the shared bicycle. In this embodiment, when determining the transformation matrix from the camera to the robot end, a pre-designed checkerboard is used. From the checkerboard, the coordinates of each corner point in the robot base coordinate system and in the camera coordinate system are determined; the transformation matrix from the camera to the robot end is then computed from these two sets of coordinates. The transformation matrix from the camera to the robot end is given by endTcamera = (baseTend)^(-1) · baseP · (cameraP)^(-1), where endTcamera is the transformation matrix from the camera to the robot end, baseTend is the transformation matrix from the robot end to the base coordinate system (obtainable in real time from the robot's forward kinematics), cameraP contains the coordinates of the checkerboard corner points in the camera coordinate system, and baseP contains the coordinates of the checkerboard corner points in the robot base coordinate system.
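The formula above rearranges the relation baseP = baseTend · endTcamera · cameraP. A numerical sketch (names illustrative) replaces the plain inverse of the corner matrix with a pseudo-inverse, so that more than four corners give a least-squares solution; note that exact recovery needs a corner set of full rank, which in practice means combining corners from several checkerboard poses, since a single planar board is rank-deficient:

```python
import numpy as np

def hand_eye_matrix(base_T_end, base_P, camera_P):
    """Solve end_T_camera from base_P = base_T_end @ end_T_camera @ camera_P.
    base_P and camera_P are 4xN homogeneous corner coordinates; the
    pseudo-inverse gives the least-squares solution when N > 4."""
    return np.linalg.inv(base_T_end) @ base_P @ np.linalg.pinv(camera_P)
```

With synthetic, full-rank corner data the known camera-to-end transform is recovered exactly.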
In a specific embodiment, in step M310, the step of determining, from the checkerboard, the coordinates of each corner point of the checkerboard in the robot base coordinate system and in the camera coordinate system includes:
Step M311: Calibrate the camera using the checkerboard, and determine the camera's intrinsic and extrinsic parameters and distortion coefficients.
Step M312: Obtain the image coordinates of each corner point of the checkerboard, perform a coordinate transformation on the image coordinates according to the camera's intrinsic and extrinsic parameters and distortion coefficients, and determine the coordinates of each corner point of the checkerboard in the camera coordinate system.
Specifically, in this embodiment, when determining the coordinates of each checkerboard corner point in the camera coordinate system, the camera is first calibrated with the checkerboard to determine its intrinsic and extrinsic parameters and distortion coefficients; the image coordinates of each corner point are then obtained and transformed, using those parameters and coefficients, into the coordinates of each corner point in the camera coordinate system. The calibration procedure is as follows: place the checkerboard in a dark box, photograph it from several different directions to obtain a series of checkerboard images, obtain the coordinates of each corner point of the checkerboard in the world coordinate system and the corresponding coordinates in the checkerboard images, solve for the homography matrix by least squares, and from it solve for the camera's intrinsic and extrinsic parameters and distortion coefficients, thereby calibrating the camera.
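The least-squares homography step mentioned above can be sketched with the standard direct linear transformation (DLT) construction — the textbook method consistent with the description, not code from the patent; a full Zhang-style calibration would go on to factor the intrinsics and distortion coefficients out of several such homographies:

```python
import numpy as np

def homography_dlt(world_pts, image_pts):
    """Estimate the 3x3 homography H (image ~ H @ world in homogeneous
    coordinates) from planar checkerboard corners by least squares."""
    A = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        # Each correspondence contributes two linear equations in the 9 entries of H.
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                   # normalize so that H[2, 2] == 1
```

With at least four non-degenerate corners the homography is recovered up to scale; the normalization fixes the scale.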
In a specific embodiment, in step M310, the step of determining, from the checkerboard, the coordinates of each corner point of the checkerboard in the robot base coordinate system and in the camera coordinate system further includes:
Step M313: Determine, from the checkerboard, the transformation matrix from the checkerboard coordinate system to the robot-end coordinate system and the coordinates of each corner point of the checkerboard in the checkerboard coordinate system.
Step M314: Determine the coordinates of each corner point of the checkerboard in the robot base coordinate system according to the transformation matrix from the checkerboard coordinate system to the robot-end coordinate system and the coordinates of each corner point in the checkerboard coordinate system.
Specifically, in this embodiment, when determining the coordinates of each checkerboard corner point in the robot base coordinate system, the upper-left corner of the checkerboard may be set as the origin; the coordinates of each corner point in the checkerboard coordinate system, and the translation from the checkerboard origin to the robot-end coordinate origin, are then measured or derived from the design dimensions. The transformation matrix from the checkerboard coordinate system to the robot-end coordinate system is determined from that translation; finally, the coordinates of each corner point in the robot base coordinate system are determined from this transformation matrix and the corner coordinates in the checkerboard coordinate system. The coordinates of each checkerboard corner point in the robot base coordinate system are computed as baseP = baseTend · endTboard · boardP, where baseTend is the transformation matrix from the robot end to the base coordinate system (obtainable in real time from the robot's forward kinematics), endTboard is the transformation matrix from the checkerboard coordinate system to the robot-end coordinate system, and boardP contains the coordinates of each corner point in the checkerboard coordinate system.
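Under the assumption described above — a translation-only offset between the checkerboard origin and the robot-end origin — the composition baseP = baseTend · endTboard · boardP can be sketched as follows (function names illustrative):

```python
import numpy as np

def translation_T(tx, ty, tz):
    """Homogeneous transform with identity rotation, built from a measured
    translation (e.g. checkerboard origin to robot-end origin)."""
    T = np.eye(4)
    T[:3, 3] = (tx, ty, tz)
    return T

def board_corners_in_base(base_T_end, end_T_board, board_P):
    """base_P = base_T_end @ end_T_board @ board_P, with the checkerboard
    corners board_P given as a 4xN matrix of homogeneous coordinates."""
    return base_T_end @ end_T_board @ board_P
```

For instance, with a 0.1 m offset from end-effector to board and the end-effector 0.5 m above the base, a corner at the board origin lands at (0.1, 0, 0.5) in the base frame.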
Exemplary Device
As shown in FIG. 2, an embodiment of the present invention provides a robot visual recognition and positioning device, which includes an image acquisition module 210, an image recognition module 220, and a target positioning module 230. Specifically, the image acquisition module 210 is configured to acquire an image to be recognized, where the image to be recognized includes a number of components to be disassembled. The image recognition module 220 is configured to input the image to be recognized into an image recognition model and output, through the image recognition model, the category information and image position information corresponding to each component to be disassembled. The target positioning module 230 is configured to determine the target position corresponding to each component to be disassembled according to the image position information and the predetermined transformation matrix from the camera to the robot end.
Based on the above embodiments, the present invention further provides an intelligent terminal, whose schematic block diagram may be as shown in FIG. 3. The intelligent terminal includes a processor, a memory, a network interface, a display screen, and a temperature sensor connected through a system bus. The processor of the intelligent terminal provides computing and control capabilities. The memory of the intelligent terminal includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program held on the non-volatile storage medium. The network interface of the intelligent terminal is used to communicate with external terminals over a network connection. When the computer program is executed by the processor, the robot visual recognition and positioning method is implemented. The display screen of the intelligent terminal may be a liquid-crystal display or an electronic-ink display, and the temperature sensor is arranged inside the intelligent terminal in advance to detect the operating temperature of the internal equipment.
Those skilled in the art will understand that the block diagram shown in FIG. 3 is only a block diagram of part of the structure related to the solution of the present invention, and does not limit the intelligent terminal to which the solution of the present invention is applied. A specific intelligent terminal may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, an intelligent terminal is provided, including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors; the one or more programs contain instructions for performing the following operations:
acquiring an image to be recognized, where the image to be recognized includes a number of components to be disassembled;
inputting the image to be recognized into an image recognition model, and outputting, through the image recognition model, the category information and image position information corresponding to each component to be disassembled;
determining the target position corresponding to each component to be disassembled according to the image position information and a predetermined transformation matrix from the camera to the robot end.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware with a computer program. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the above method embodiments. Any reference to memory, storage, a database, or other media used in the embodiments provided by the present invention may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random-access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double-data-rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In summary, the present invention discloses a robot visual recognition and positioning method, an intelligent terminal, and a storage medium, including: acquiring an image to be recognized, where the image to be recognized includes a number of components to be disassembled; inputting the image to be recognized into an image recognition model, and outputting, through the image recognition model, the category information and image position information corresponding to each component to be disassembled; and determining the target position corresponding to each component to be disassembled according to the image position information and a predetermined transformation matrix from the camera to the robot end. By outputting the category information and image position information of each component to be disassembled through the image recognition model and determining the target position information from the image position information, the present invention can accurately identify the category of each component to be disassembled and precisely locate its position, enabling the automatic sorted disassembly of shared bicycles and the recycling of shared-bicycle parts, and solving the problem of resource waste caused by manual, destructive disassembly.
It should be understood that the application of the present invention is not limited to the above examples. Those of ordinary skill in the art may make improvements or modifications in light of the above description, and all such improvements and modifications shall fall within the protection scope of the appended claims of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110962704.7A CN113674341A (en) | 2021-08-20 | 2021-08-20 | Robot visual identification and positioning method, intelligent terminal and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113674341A true CN113674341A (en) | 2021-11-19 |
Family
ID=78544569
Legal Events

Date | Code | Title |
---|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication |

Application publication date: 20211119