WO2024077934A1 - Target detection method and device for a workshop inspection robot - Google Patents

Target detection method and device for a workshop inspection robot

Info

Publication number
WO2024077934A1
WO2024077934A1 (PCT/CN2023/091465)
Authority
WO
WIPO (PCT)
Prior art keywords
workshop
module
feature
obstacle
inspection robot
Prior art date
Application number
PCT/CN2023/091465
Other languages
English (en)
French (fr)
Inventor
尹震宇
王肖辉
郭锐锋
杨东升
樊超
宋丹
李秋霞
Original Assignee
中国科学院沈阳计算技术研究所有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院沈阳计算技术研究所有限公司
Publication of WO2024077934A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • the present invention belongs to the field of industrial robots, and specifically relates to a target detection method and device for a workshop inspection robot, applied to detecting whether items in a workshop are placed in a standardized manner, thereby facilitating safe management of the workshop and maintaining its order.
  • the intelligent factory workshop inspection robot can solve this problem well, making material inspection more scientific and accurate and completely eliminating delays in transmitting material information in the production workshop.
  • the workshop inspection robot is intelligent and quick to respond, and has good application prospects for material detection in factories and production workshops. Detecting whether items are placed in a standardized manner is therefore a very important problem.
  • the first method is for inspectors to carry inspection equipment to the site for inspection at regular intervals. Inspection areas and items are selected manually, and material types and locations are recorded by hand. This extensive patrol mode is difficult to supervise and evaluate, the feedback of inspection information lags, production efficiency and quality suffer, and the placement of some items can leave the workshop unsafe.
  • the second method is a basic workshop inspection robot target detection method, in which a camera captures images that are passed into a target detection model to predict and analyze the target object. However, the prediction model obtained this way has low accuracy and insufficient feature extraction, and cannot accurately identify and locate the target object.
  • a workshop inspection robot target detection method and device are proposed to solve the low efficiency of manual inspection in existing item placement estimation methods, while also addressing workshop management safety and low accuracy.
  • a target detection device of a workshop inspection robot comprising:
  • the multi-dimensional image acquisition module is used to detect whether there are obstacles ahead through a laser radar detector, and to collect images of workshop materials through a side monocular camera;
  • the decoupling detection module is used to extract and enhance feature vectors based on the collected workshop material images to obtain the center coordinates and width and height of the material;
  • the material judgment module is used to judge whether the material has crossed the zone based on the center coordinates, width and height of the material, as well as the fixed area position of the material in the workshop.
  • the decoupling detection module comprises:
  • the feature extraction module is used to extract features from the workshop material image through the backbone network to obtain a feature vector representing the material;
  • a feature enhancement module is used to enhance the feature vector to obtain an enhanced feature vector;
  • the classification module is used to obtain the category information of the material through the classification algorithm of the enhanced feature vector;
  • the positioning module is used to obtain the center coordinates, width and height of the material through the regression algorithm of the enhanced feature vector.
  • the backbone network includes a DarkNet53 backbone network and an SPP layer connected sequentially.
  • the material judgment module includes:
  • the material positioning analysis unit is used to construct a two-dimensional coordinate system according to the center coordinates and width and height of the material, so that the material and the workshop material fixed area are located under the two-dimensional coordinate system;
  • the material out-of-zone judgment unit is used to judge whether the material has crossed the zone according to the center coordinates, width and height of the material and the fixed area position of the material in the workshop in the two-dimensional coordinate system.
  • a method for detecting a target of a workshop inspection robot comprises the following steps:
  • the inspection robot uses a laser radar detector to detect whether there are obstacles ahead and the location and size of the obstacles; if an obstacle is found, it will go around it; if not, it will go straight;
  • after the inspection robot reaches the designated workshop material area and stops, the side monocular camera starts to collect images of the workshop materials and uploads them to the decoupling detection module.
  • the decoupling detection module performs preliminary feature extraction to obtain a feature vector; the feature enhancement module then performs feature enhancement to obtain the enhanced feature vector;
  • the classification module obtains the category information of the material in the image based on the enhanced feature vector, and the positioning module obtains the center coordinates and width and height of the material in the image based on the enhanced feature vector;
  • the material judgment module constructs a two-dimensional coordinate system according to the center coordinates and width and height of the material, so that the material and the workshop material fixed area are located under the two-dimensional coordinate system;
  • the material out-of-zone judgment unit judges whether the material has crossed the zone according to the center coordinates, width and height of the material, and the fixed area position of the material in the workshop in the two-dimensional coordinate system.
  • the specific steps of the laser radar obstacle detection process include:
  • the laser transmitter in the LiDAR detector scatters laser light in all directions. After the detection laser hits an obstacle, it is reflected back to the receiver in the LiDAR detector in the form of a laser point cloud.
  • the receiver in the radar detector performs time synchronization and external parameter calibration on the input laser point cloud.
  • the radar detector preprocesses the point cloud to obtain a point cloud containing the background and the foreground representing obstacles.
  • An unsupervised clustering algorithm groups the preprocessed point cloud into multiple clusters to segment the foreground and background point clouds.
  • Each cluster of the foreground point cloud represents an obstacle; a bounding box is fitted to each cluster to represent the obstacle and its size range, thereby obtaining the obstacle's center point and its length, width and height.
  • the radar detector calculates the obstacle distance based on the laser return time and sends the obstacle distance, obstacle center point, length, width and height to the robot to avoid the obstacle.
  • the decoupling detection module performing preliminary feature extraction and the feature enhancement module producing the enhanced feature vector include the following steps:
  • the feature extraction module extracts features from material images through the backbone network and obtains feature vectors of three scales in the DarkNet53 backbone network; the feature vectors of three scales enter the SPP layer for feature extraction;
  • the feature enhancement module performs feature fusion through a bidirectional feature pyramid to combine feature information of different scales and obtain the final feature vector.
  • the decoupling detection module is obtained by performing multiple trainings on workshop training samples.
  • the present invention establishes a device for predicting irregular placement of materials in a workshop: a spatial coordinate system is established from each item's fixed area, and the known fixed placement area, combined with the predicted material center coordinates and width and height, is compared against the predicted extent of the material to analyze whether it deviates from the safe range.
  • the present invention provides a target detection method and device for a workshop inspection robot, which, through a multi-dimensional image acquisition module, a decoupling detection module and a material judgment module, constructs a set of estimation methods and devices for mobile workshop target detection. It can make a standardized estimate of the placement of items in the workshop: after the location information of an item is obtained, the location coordinates are transmitted to the control center of the next analysis stage, and an offset estimate is made against the configured safety range.
  • FIG1 is a flowchart of a target detection device for a workshop inspection robot according to the present invention.
  • FIG2 is an overall flow chart of a target detection device for a workshop inspection robot according to the present invention.
  • FIG3 is a flow chart of obstacle detection by a laser radar according to the present invention.
  • FIG4a is a schematic diagram 1 of material judgment results of the present invention.
  • FIG4b is a second schematic diagram of material judgment results of the present invention.
  • FIG4c is a third schematic diagram of material judgment results of the present invention.
  • FIG5 is a schematic diagram of material collection by the side camera of the present invention.
  • a target detection device for a workshop inspection robot comprises: a multi-dimensional image acquisition module, a decoupling detection module, and a material judgment module.
  • the multi-dimensional image acquisition module detects whether there are obstacles in front through a laser radar detector, controls the movement trajectory of the robot, and after reaching a fixed material collection area, collects multiple workshop material images through a side monocular camera.
  • the collected images are passed through the decoupling detection module to generate feature vectors, which are passed through a classification module and a positioning module to obtain the category and positioning information of the materials.
  • the material judgment module analyzes the positioning information in combination with the fixed area of the material to determine whether the material has crossed the area.
  • the multi-dimensional image acquisition module includes a side monocular camera and a laser radar detector.
  • the side monocular camera of the inspection robot is used to capture images of materials placed beside the safe track; the laser radar detector is used to detect whether there are obstacles on the workshop track and to obtain their size and position, so as to control the movement trajectory of the inspection robot.
  • the decoupling detection module includes a feature extraction module, a feature enhancement module, a classification module and a positioning module.
  • the feature extraction module includes preliminary feature extraction through a backbone network, and then combining with the SPP layer to achieve diversity and accuracy of feature extraction;
  • the feature enhancement module performs bidirectional feature fusion on the extracted feature vector to ensure the globality of the feature vector and obtain a feature vector with stronger representation ability;
  • the feature vector obtains the category information of the material in the image through the classification module;
  • the feature vector obtains the center coordinates and width and height of the material in the image through the positioning module.
  • the material judgment module includes a material positioning analysis unit and a material out-of-zone judgment unit.
  • the material positioning analysis unit constructs the material and its fixed area in a two-dimensional coordinate system; the material out-of-zone judgment unit judges whether the material has crossed the zone.
  • a workshop inspection robot target detection method comprises the following steps:
  • after receiving the command, the inspection robot starts to move forward, and the laser radar detector starts working to detect whether there are obstacles ahead and their location and size; if an obstacle is found, the robot goes around it, otherwise it goes straight;
  • after the inspection robot reaches the designated material placement area, it stops, and the side monocular camera starts to collect images and uploads them to the decoupling detection module;
  • the decoupling detection module performs preliminary feature extraction to obtain an initial feature vector
  • the initially obtained feature vector enters the feature enhancement module for feature enhancement to obtain a feature vector with good representation ability
  • after the feature vector is generated, it enters the classification module and the positioning module at the same time.
  • the classification module obtains the category information of the material in the image
  • the positioning module obtains the center coordinates and width and height of the material in the image.
  • the center coordinates, width and height of the material are input into the material judgment module.
  • the material positioning analysis unit flattens the three-dimensional material according to the material position information and draws it in the same two-dimensional coordinate system as the fixed range of the material.
  • the material out-of-zone judgment unit determines whether the material has crossed the zone based on the material center coordinates, width and height information in the two-dimensional coordinates and the fixed range of the material.
  • the specific steps of the laser radar obstacle detection process include:
  • the laser transmitter in the LiDAR detector actively scatters laser light in all directions. After the detection laser contacts an obstacle, it is reflected back to the receiver in the LiDAR detector in the form of a laser point cloud.
  • the input laser point cloud is time synchronized and calibrated with external parameters, and the point cloud is preprocessed to reduce the amount of data and remove noise points.
  • An unsupervised clustering algorithm is used to form multiple clusters of obstacle points on the ground, and the point cloud on the ground is segmented, with each cluster representing an obstacle.
  • a bounding box is fitted for each cluster to represent an obstacle and its size range, and the center point of the obstacle as well as its length, width and height are calculated.
  • the obstacle distance is calculated based on the laser return time, and a Kalman filter is constructed for each obstacle for tracking and smoothing the output.
  • the step of generating the feature vector specifically includes:
  • the input image first undergoes feature extraction in the backbone network.
  • the extracted features can be called feature layers, which are the feature sets of the input image.
  • Three feature layers are obtained in the backbone part for the next step of network construction. These three feature layers are called effective feature layers.
  • the three effective feature layers obtained in the backbone part then enter the attention module to extract important information, and then enter the bidirectional feature pyramid module for feature fusion.
  • the purpose of feature fusion is to combine feature information of different scales.
  • in the bottom-up fusion stage, the effective feature layers already obtained are used to continue extracting features.
  • the top-down feature fusion method used not only upsamples the features to achieve feature fusion, but also downsamples them again for a second fusion, thereby strengthening important features and suppressing unimportant ones to obtain the final feature vector.
  • the decoupling detection module will generate the category probability of the material as well as the coordinates and size of the center position of the material, establish a three-dimensional coordinate system, and project the material onto a two-dimensional plane.
  • the material judgment module will determine the specific two-dimensional coordinate map of the target object based on the coordinate information of the material and the size of the prediction box, combined with the fixed area where the items are placed and the safety range. If the material exceeds the safety area, it is considered to be out of range.
  • the decoupling detection model is an optimized model obtained by training the workshop training samples multiple times, and the initial model parameters are set.
  • a work flow of a target detection device for a workshop inspection robot includes: a multi-dimensional image acquisition module, including a side monocular camera of the inspection robot and a laser radar detector.
  • the laser radar detector is used to detect obstacles on the workshop track, control the movement trajectory of the robot, and after reaching a fixed material collection area, collect multiple workshop material images through the side monocular camera;
  • a decoupling detection module including a feature extraction module, a feature enhancement module, a classification module and a positioning module.
  • the feature extraction module and the feature enhancement module are combined with each other to fully utilize information to extract feature vectors with strong characterization capabilities.
  • the classification module obtains the detected material category through a Bayesian classification algorithm.
  • the positioning module obtains the center coordinates and the width and height of the frame of the detected material through a Logistic regression algorithm; a material judgment module, including a material positioning analysis unit and a material cross-zone judgment unit.
  • the material positioning analysis unit flattens the three-dimensional material according to the material center coordinates and width and height information obtained by the decoupling detection module; the material cross-zone judgment unit obtains a judgment of whether it crosses the zone according to the material center coordinates and width and height information under the two-dimensional coordinates, combined with the specified range of the material.
  • the overall process of the workshop inspection robot target detection device is shown in Figure 2.
  • the workshop inspection robot moves forward on the workshop track, and the laser radar detector actively scatters lasers around it. Based on the flight time of the returned laser, it determines whether there are obstacles nearby; if there are, it bypasses them and proceeds to the designated image collection location. The image resolution is adjusted to 416×416.
  • the decoupling detection module will perform material detection based on the incoming image. If no material is detected, the material shortage information is transmitted to the workshop management personnel. If material is detected, the detected width, height and center coordinates are used to project it onto a two-dimensional plane, and combined with the fixed material storage area location information, it is determined whether it has crossed the area.
  • the flowchart of laser radar obstacle detection is shown in Figure 3.
  • the laser transmitter detects obstacles by emitting laser beams forward; the laser is reflected back to the receiver in the laser radar detector in the form of a laser point cloud.
  • the point cloud is preprocessed to reduce the amount of data and remove noise points. Since the laser radar has different viewing angles each time it collects obstacle points, the coordinates of some collected obstacle points vary greatly, and many obstacle points are irrelevant to obstacle tracking.
  • Too many obstacle points will affect the extraction of the external frame contour, so it is necessary to filter out the original point cloud to find the area of interest; form multiple clusters of obstacle points on the ground, segment the point cloud on the ground, and each cluster represents an obstacle; fit the bounding box for each cluster to represent an obstacle and its size range, and calculate the center point of the obstacle as well as the length, width and height; calculate the obstacle distance based on the laser return time, output the obstacle position, size and distance, and control the driving route of the inspection robot.
  • the schematic diagram of the material judgment result is shown in Figure 4a to Figure 4c.
  • the detected material information includes the width, height and center position coordinates of the boundary box.
  • the material is projected onto a two-dimensional plane and compared with the fixed area of the workshop material.
  • Figure 4a represents that the material is within the safe range and is placed in a standard manner.
  • Figure 4b represents that the material is within the safe range, but the placement has deviated from the center position.
  • Figure 4c represents that the material has left the safe range.
  • the schematic diagram of the material collection by the side camera is shown in Figure 5.
  • the camera is fixed below the longitudinal motion module. After the monocular camera receives the collection control signal, it collects images of the material; two collections are performed during the whole process, and multiple images are used to judge the material information to ensure accuracy. After collection is completed, the acquired images are adjusted to a resolution of 416×416 and transmitted to the decoupling detection module.
  • S1 Dataset collection: Use cameras to collect a large number of material images in the workshop. When shooting, choose a suitable light source to maximize the brightness difference between the object under test and the other parts. Control the target position and shooting angle of the object under test, and keep its size consistent in the image. The resolution of the collected images is 416×416.
  • S2 Dataset preprocessing: Use the LabelImg tool to annotate the images.
  • the color images are first grayscaled to reduce the amount of data to be processed.
  • the collected images are then processed with geometric transformations such as translation, transposition, mirroring, rotation and scaling.
  • finally, MixUp is used for data augmentation.
  • S3 Dataset division: The dataset is divided into training, test and validation sets in a ratio of 7:2:1;
  • the baseline model uses a DarkNet53 backbone plus an SPP layer, then connects a bidirectional feature pyramid for feature enhancement, followed by the construction of the decoupled detection head.
  • the decoupled head is used here to split target detection into two branches handled simultaneously, one for category and one for location information, namely the classification module and the positioning module, to improve network performance; it contains a 1×1 convolution for channel dimension reduction, followed by two parallel 3×3 branches (both convolutions), and the network model parameters are updated by back-propagation.
  • the entire network adds EMA weight updates, a cosine learning-rate schedule, IoU loss, and an IoU-aware branch.
  • BCE loss is used to train the cls and obj branches, and IoU loss is used to train the reg branch.
  • the overall loss function is: Loss = (L_cls + γ·L_reg + L_obj) / N_pos
  • L_cls represents the classification loss
  • L_reg represents the positioning loss
  • L_obj represents the objectness (obj) loss
  • γ represents the balance coefficient of the positioning loss
  • N_pos represents the number of grids classified as positive samples.
  • S5 Network model training: Train the S4 network model on the training set obtained in S2.
  • the trained model contains the information of each recognized target and its location. Set a confidence threshold; if a material is detected, mark targets whose confidence is greater than or equal to the threshold with "target". A model suited to workshop target detection is thus trained on the training set.
  • S6 Image input: The workshop inspection robot collects images every time it reaches a fixed material placement area during the inspection.
  • the collection rules follow S1;
  • S7 Material prediction: Input the images collected in S6 into the S4 network model to predict whether an image contains materials. If it does, the target type and location information are given and marked with a bounding box, and the center coordinates (x, y) and width and height (w, h) of the material are output; otherwise, no processing is performed.
  • S8 Out-of-zone judgment: Based on the center coordinate position and width and height of the material generated in S7, combined with the center coordinates (X, Y) and width and height (W, H) of the designated placement area of each material, project them onto a two-dimensional plane and compare to determine whether the material has crossed its zone. If x+w>X+W or y+h>Y+H, the material is judged to have crossed its zone.
  • the present invention provides a method and device for detecting a target of a workshop inspection robot.
  • through a multi-dimensional image acquisition module, a decoupling detection module and a material judgment module, a mobile workshop inspection robot target detection method and device are constructed.
  • the method and device can obtain real-time image information of workshop materials without affecting the normal order of the workshop, and transmit it to the detection module in a timely manner.
  • the present invention solves the problem of low efficiency of manual inspection and overcomes the problem of insufficient safety and flexibility of manual inspection.
  • the present invention has the advantages of strong practicality and low cost.
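To make the training recipe above concrete, here is a minimal sketch of how the stated objective could be assembled in PyTorch, assuming a YOLOX-style decoupled head; the tensor shapes, the box_iou_xywh helper, and the default γ = 5.0 are illustrative assumptions, not values given in the patent:

```python
import torch
import torch.nn as nn

def box_iou_xywh(a, b):
    # IoU between paired boxes given as (cx, cy, w, h); a and b are (N, 4)
    ax1, ay1 = a[:, 0] - a[:, 2] / 2, a[:, 1] - a[:, 3] / 2
    ax2, ay2 = a[:, 0] + a[:, 2] / 2, a[:, 1] + a[:, 3] / 2
    bx1, by1 = b[:, 0] - b[:, 2] / 2, b[:, 1] - b[:, 3] / 2
    bx2, by2 = b[:, 0] + b[:, 2] / 2, b[:, 1] + b[:, 3] / 2
    iw = (torch.min(ax2, bx2) - torch.max(ax1, bx1)).clamp(min=0)
    ih = (torch.min(ay2, by2) - torch.max(ay1, by1)).clamp(min=0)
    inter = iw * ih
    union = a[:, 2] * a[:, 3] + b[:, 2] * b[:, 3] - inter
    return inter / union.clamp(min=1e-7)

def detection_loss(cls_pred, obj_pred, reg_pred,
                   cls_tgt, obj_tgt, reg_tgt, pos_mask, gamma=5.0):
    # Overall loss = (L_cls + gamma * L_reg + L_obj) / N_pos, with BCE on the
    # cls and obj branches and IoU loss on the reg branch, as described above.
    bce = nn.BCEWithLogitsLoss(reduction="sum")
    n_pos = pos_mask.sum().clamp(min=1)                 # grids assigned as positives
    l_cls = bce(cls_pred[pos_mask], cls_tgt[pos_mask])  # classification loss
    l_obj = bce(obj_pred, obj_tgt)                      # objectness loss, all grids
    iou = box_iou_xywh(reg_pred[pos_mask], reg_tgt[pos_mask])
    l_reg = (1.0 - iou).sum()                           # IoU loss for positives
    return (l_cls + gamma * l_reg + l_obj) / n_pos
```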

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A target detection method and device for a workshop inspection robot, comprising a multi-dimensional image acquisition module, a decoupling detection module and a material judgment module. The multi-dimensional image acquisition module comprises a side monocular camera and a laser radar detector: the radar detector judges whether there are obstacles on the workshop track and obtains their size and position so that the robot can go around them, while the monocular camera collects images of workshop materials. The decoupling detection module comprises a feature extraction module, a feature enhancement module, a classification module and a positioning module, which obtain the category and positioning information of a material from its image. The material judgment module comprises a material positioning analysis unit and a material out-of-zone judgment unit which, combined with the fixed placement area specified by the workshop, judge whether the material has crossed its zone. The device is highly practical and low in cost.

Description

Target detection method and device for a workshop inspection robot
Technical Field
The present invention belongs to the field of industrial robots, and specifically relates to a target detection method and device for a workshop inspection robot, applied to detecting whether items in a workshop are placed in a standardized manner, so as to facilitate safe management of the workshop and maintain its order.
Background Art
In the field of industrial robotics, industrial robots bring many benefits to manufacturing, the foremost being higher production efficiency. Compared with manual labor, industrial robots such as AGVs and robotic arms complete tasks faster and can work 24 hours a day without rest. According to workshop production management rules, placement areas and quantities must be clearly designated for materials and scrap items in production areas. However, with a wide variety of materials and high demands on flexible production, problems such as disordered material placement, material shortages, materials awaiting processing and material anomalies easily arise, creating safety hazards; scrap items must likewise be placed within their designated areas. An intelligent factory workshop inspection robot can solve this problem well, making material inspection more scientific and accurate and completely eliminating delays in transmitting material information in the production workshop. At the same time, the workshop inspection robot is intelligent and quick to respond, and has good application prospects for material detection in factories and production workshops. Detecting whether items are placed in a standardized manner is therefore a very important problem.
There are two main approaches to estimating item placement. In the first, inspectors carry inspection equipment to the site for inspection at regular intervals; inspection areas and items are selected manually, and material types and locations are recorded by hand. This extensive patrol mode is difficult to supervise and evaluate, the feedback of inspection information lags, production efficiency and quality suffer, and the placement of some items can leave the workshop unsafe. The second is a basic workshop inspection robot target detection method, in which a camera captures images that are fed into a target detection model to predict and analyze the target object; however, the prediction model obtained this way has low accuracy and insufficient feature extraction, and cannot accurately identify and locate the target object.
Summary of the Invention
In view of the technical problems raised above, a target detection method and device for a workshop inspection robot are proposed, to solve the low efficiency of manual inspection in existing item placement estimation methods while also addressing workshop management safety and low accuracy.
The technical means adopted by the present invention are as follows. A target detection device for a workshop inspection robot comprises:
a multi-dimensional image acquisition module, used to detect whether there are obstacles ahead through a laser radar detector, and to collect workshop material images through a side monocular camera;
a decoupling detection module, used to extract and enhance feature vectors from the collected workshop material images to obtain the center coordinates and the width and height of the material;
a material judgment module, used to judge whether the material has crossed its zone based on the material's center coordinates and width and height together with the position of the fixed material area in the workshop.
The decoupling detection module comprises:
a feature extraction module, used to extract features from the workshop material image through a backbone network to obtain a feature vector representing the material;
a feature enhancement module, used to enhance the feature vector to obtain an enhanced feature vector;
a classification module, used to obtain the category information of the material from the enhanced feature vector through a classification algorithm;
a positioning module, used to obtain the center coordinates and the width and height of the material from the enhanced feature vector through a regression algorithm.
The backbone network comprises a DarkNet53 backbone network and an SPP layer connected in sequence.
The material judgment module comprises:
a material positioning analysis unit, used to construct a two-dimensional coordinate system from the material's center coordinates and width and height, so that the material and the fixed material area of the workshop lie in this coordinate system;
a material out-of-zone judgment unit, used to judge, in this coordinate system, whether the material has crossed its zone based on the material's center coordinates and width and height together with the position of the fixed material area.
A target detection method for a workshop inspection robot comprises the following steps:
the inspection robot detects, through a laser radar detector, whether there are obstacles ahead and their position and size; if an obstacle is found, the robot goes around it, otherwise it goes straight;
after reaching the designated fixed material area of the workshop, the inspection robot stops, and the side monocular camera starts to collect workshop material images and uploads them to the decoupling detection module;
the decoupling detection module performs preliminary feature extraction to obtain a feature vector, which the feature enhancement module then enhances to obtain an enhanced feature vector;
the classification module obtains the category information of the material in the image from the enhanced feature vector, and the positioning module obtains the material's center coordinates and width and height from it;
the material judgment module constructs a two-dimensional coordinate system from the material's center coordinates and width and height, so that the material and the fixed material area lie in this coordinate system;
the material out-of-zone judgment unit judges, in this coordinate system, whether the material has crossed its zone based on its center coordinates and width and height together with the position of the fixed material area.
The specific steps of the laser radar obstacle detection process include:
the laser transmitter in the laser radar detector scatters laser beams in all directions; after striking an obstacle, the detection laser is reflected back to the receiver in the detector in the form of a laser point cloud;
the receiver in the radar detector performs time synchronization and extrinsic parameter calibration on the incoming laser point cloud;
the radar detector preprocesses the point cloud to obtain a point cloud containing the background and the foreground representing obstacles; an unsupervised clustering algorithm groups the preprocessed point cloud into multiple clusters, segmenting the foreground and background point clouds; each cluster of the foreground point cloud represents one obstacle, and a bounding box is fitted to each cluster to represent the obstacle and its size range, yielding the obstacle's center point and its length, width and height;
the radar detector computes the obstacle distance from the laser return time and sends the obstacle distance, center point, and length, width and height to the robot so that it can avoid the obstacle.
The decoupling detection module's preliminary feature extraction and the feature enhancement module's enhancement comprise the following steps:
the feature extraction module extracts features from the material image through the backbone network, obtaining feature vectors at three scales from the DarkNet53 backbone; the three-scale feature vectors then enter the SPP layer for further feature extraction, as sketched below;
the feature enhancement module performs feature fusion through a bidirectional feature pyramid, combining feature information of different scales to obtain the final feature vector.
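As an illustration of the SPP stage named above, a minimal PyTorch sketch follows; the pooling kernel sizes (5, 9, 13) follow common YOLO practice and are assumptions, since the patent does not specify them:

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    # Spatial pyramid pooling: parallel max-pools at several kernel sizes,
    # concatenated with the input so one location mixes several receptive fields.
    def __init__(self, in_ch, out_ch, kernels=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernels
        )
        self.fuse = nn.Conv2d(in_ch * (len(kernels) + 1), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [x] + [pool(x) for pool in self.pools]  # same spatial size each
        return self.fuse(torch.cat(feats, dim=1))       # fuse along channels
```

Applied to each of the three DarkNet53 output scales, as the text describes, such a block would then feed the bidirectional feature pyramid.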
The decoupling detection module is obtained by training multiple times on workshop training samples.
The beneficial effects of the present invention are as follows:
1. The present invention establishes a device for predicting irregular material placement in a workshop: a spatial coordinate system is established from each item's fixed area, and the known fixed placement area, combined with the predicted material center coordinates and width and height, is compared against the predicted extent of the material to analyze whether it deviates from the safe range.
2. Through the multi-dimensional image acquisition module, the decoupling detection module and the material judgment module, the target detection method and device for a workshop inspection robot of the present invention build a set of estimation methods and devices for mobile workshop target detection, able to make a standardized estimate of item placement in the workshop. After an item's position information is obtained, its coordinates are transmitted to the control center of the next analysis stage, and an offset estimate is made against the configured safety range.
Brief Description of the Drawings
FIG. 1 is a work flowchart of the target detection device for a workshop inspection robot of the present invention;
FIG. 2 is an overall flowchart of the target detection device for a workshop inspection robot of the present invention;
FIG. 3 is a flowchart of laser radar obstacle detection of the present invention;
FIG. 4a is a first schematic diagram of material judgment results of the present invention;
FIG. 4b is a second schematic diagram of material judgment results of the present invention;
FIG. 4c is a third schematic diagram of material judgment results of the present invention;
FIG. 5 is a schematic diagram of material collection by the side camera of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings.
A target detection device for a workshop inspection robot comprises a multi-dimensional image acquisition module, a decoupling detection module and a material judgment module. The multi-dimensional image acquisition module detects obstacles ahead through a laser radar detector and controls the robot's movement trajectory; after a fixed material collection area is reached, multiple workshop material images are collected through the side monocular camera. The collected images pass through the decoupling detection module to generate feature vectors, which pass through the classification module and the positioning module respectively to obtain the category and positioning information of the material; the material judgment module analyzes the positioning information against the material's fixed area to judge whether the material has crossed its zone.
The multi-dimensional image acquisition module comprises a side monocular camera and a laser radar detector. The side monocular camera of the inspection robot captures images of materials placed beside the safe track; the laser radar detector detects whether there are obstacles on the workshop track and obtains their size and position, controlling the movement trajectory of the inspection robot.
The decoupling detection module comprises a feature extraction module, a feature enhancement module, a classification module and a positioning module. The feature extraction module performs preliminary feature extraction through the backbone network and then combines it with the SPP layer to achieve diverse and accurate feature extraction; the feature enhancement module performs bidirectional feature fusion on the extracted feature vector to preserve its global character and obtain a feature vector with stronger representation ability; the feature vector yields the category information of the material in the image through the classification module, and the material's center coordinates and width and height through the positioning module.
The material judgment module comprises a material positioning analysis unit and a material out-of-zone judgment unit. The positioning analysis unit places the material and its fixed area in one two-dimensional coordinate system; the out-of-zone judgment unit judges whether the material has crossed its zone.
A target detection method for a workshop inspection robot comprises the following steps:
after receiving a command, the inspection robot starts to move forward while the laser radar detector begins working, detecting whether there are obstacles ahead and their position and size; if an obstacle is found, the robot goes around it, otherwise it goes straight;
after reaching the designated material placement area, the robot stops, and the side monocular camera starts to collect images and uploads them to the decoupling detection module;
the decoupling detection module performs preliminary feature extraction to obtain a feature vector;
the preliminary feature vector enters the feature enhancement module for feature enhancement, yielding a feature vector with good representation ability;
once generated, the feature vector enters the classification module and the positioning module simultaneously: the classification module yields the category information of the material in the image, and the positioning module yields its center coordinates and width and height;
the material's center coordinates and width and height are passed to the material judgment module, whose positioning analysis unit flattens the three-dimensional material according to its position information and draws it in the same two-dimensional coordinate system as the material's fixed range;
the material out-of-zone judgment unit then judges whether the material has crossed its zone from the material's center coordinates and width and height in two-dimensional coordinates, combined with the material's fixed range.
The above steps are repeated until material detection for all workshop processes is completed.
The specific steps of the laser radar obstacle detection process include:
The laser transmitter in the laser radar detector actively scatters laser beams in all directions; after striking an obstacle, the detection laser returns to the receiver in the detector in the form of a laser point cloud. The incoming point cloud is time-synchronized and extrinsically calibrated, then preprocessed to reduce the data volume and remove noise points. An unsupervised clustering algorithm groups the obstacle points on the ground into multiple clusters and segments the ground point cloud, each cluster representing one obstacle. A bounding box is fitted to each cluster to represent the obstacle and its size range, and the obstacle's center point and its length, width and height are computed. The obstacle distance is computed from the laser return time, and a Kalman filter is constructed for each obstacle for tracking and smoothing the output.
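As a rough illustration of the clustering and box-fitting step just described, the sketch below uses DBSCAN as a stand-in for the unnamed unsupervised clustering algorithm; the eps and min_samples values are illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN  # stand-in for the unnamed clustering algorithm

def detect_obstacles(points, eps=0.5, min_samples=10):
    # points: (N, 3) array of x, y, z coordinates from the preprocessed cloud.
    # Returns one (center, size) pair per cluster, i.e. per obstacle.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    obstacles = []
    for k in set(labels) - {-1}:            # label -1 marks noise points
        cluster = points[labels == k]
        lo, hi = cluster.min(axis=0), cluster.max(axis=0)
        center = (lo + hi) / 2              # obstacle center point
        size = hi - lo                      # length, width and height
        obstacles.append((center, size))
    return obstacles
```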
The step of generating the feature vector specifically comprises:
the input image first undergoes feature extraction in the backbone network; the extracted features can be called feature layers and form the feature set of the input image. Three feature layers are obtained in the backbone for constructing the next stage of the network; these are called effective feature layers. The three effective feature layers then enter an attention module that extracts the important information, and afterwards enter the bidirectional feature pyramid module for feature fusion, whose purpose is to combine feature information of different scales. In the bottom-up fusion stage, the effective feature layers already obtained are used to continue extracting features; the top-down feature fusion employed not only upsamples the features for fusion but also downsamples them again for a second fusion, strengthening important features and suppressing unimportant ones to obtain the final feature vector.
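A minimal sketch of the bidirectional (top-down plus bottom-up) fusion described here, over three effective feature layers; the channel widths, the shared downsampling convolution and plain addition as the fusion operation are simplifying assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiFusion(nn.Module):
    # Bidirectional fusion over three effective feature layers: a top-down pass
    # (upsampling deeper maps) followed by a bottom-up pass (downsampling again).
    def __init__(self, chs=(128, 256, 512), width=128):
        super().__init__()
        self.lat = nn.ModuleList(nn.Conv2d(c, width, 1) for c in chs)  # unify channels
        self.out = nn.ModuleList(
            nn.Conv2d(width, width, 3, padding=1) for _ in chs
        )
        self.down = nn.Conv2d(width, width, 3, stride=2, padding=1)    # shared, for brevity

    def forward(self, c3, c4, c5):          # shallow to deep inputs
        p3, p4, p5 = (l(c) for l, c in zip(self.lat, (c3, c4, c5)))
        p4 = p4 + F.interpolate(p5, scale_factor=2)   # top-down: upsample and fuse
        p3 = p3 + F.interpolate(p4, scale_factor=2)
        p4 = p4 + self.down(p3)                       # bottom-up: downsample and fuse
        p5 = p5 + self.down(p4)
        return [conv(p) for conv, p in zip(self.out, (p3, p4, p5))]
```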
The specific steps of the material out-of-zone estimation are as follows:
the decoupling detection module generates the category probability of the material together with the coordinates and size of its center position; a three-dimensional coordinate system is established and the material is projected onto a two-dimensional plane. The material judgment module determines the concrete two-dimensional coordinate map of the target object from the material's coordinate information and the size of the prediction box, combined with the fixed placement area and the safety range; if the material exceeds the safety area, it is considered out of zone.
The decoupling detection model is an optimized model obtained by training multiple times on workshop training samples, with initial model parameters set.
The work flow of the target detection device for a workshop inspection robot, shown in FIG. 1, involves three modules. The multi-dimensional image acquisition module comprises the inspection robot's side monocular camera and a laser radar detector: the detector detects obstacles on the workshop track and controls the robot's movement trajectory, and after a fixed material collection area is reached, multiple workshop material images are collected through the side monocular camera. The decoupling detection module comprises a feature extraction module, a feature enhancement module, a classification module and a positioning module: the feature extraction and feature enhancement modules combine to fully exploit the information and extract feature vectors with strong representation ability; the classification module obtains the detected material category through a Bayesian classification algorithm; the positioning module obtains the detected material's center coordinates and frame width and height through a logistic regression algorithm. The material judgment module comprises a material positioning analysis unit and a material out-of-zone judgment unit: the positioning analysis unit flattens the three-dimensional material according to the center coordinates and width and height obtained by the decoupling detection module, and the out-of-zone judgment unit judges whether the material has crossed its zone from its center coordinates and width and height in two-dimensional coordinates, combined with the material's specified range.
The overall flow of the target detection device for a workshop inspection robot is shown in FIG. 2. The robot moves forward on the workshop track while the laser radar detector actively scatters laser beams around it, then judges from the flight time of the returned laser whether there are obstacles nearby; if there are, it goes around them. On reaching the designated image collection location, the image resolution is adjusted to 416×416, and the decoupling detection module performs material detection on the incoming image. If no material is detected, a material shortage message is passed to the workshop management personnel; if material is detected, the detected width, height and center coordinates are used to project it onto a two-dimensional plane, and whether it has crossed its zone is judged in combination with the position information of the fixed material storage area.
The laser radar obstacle detection flow is shown in FIG. 3. The laser transmitter detects obstacles by emitting laser beams forward, which return to the receiver in the laser radar detector as a laser point cloud. The point cloud is preprocessed to reduce the data volume and remove noise points: because the laser radar's viewing angle differs at each collection, the coordinates of some collected obstacle points vary greatly, many obstacle points are irrelevant to obstacle tracking, and too many obstacle points impair extraction of the bounding-frame contour, so the raw point cloud must be filtered to find the region of interest. The obstacle points on the ground are grouped into multiple clusters and the ground point cloud is segmented, each cluster representing one obstacle; a bounding box is fitted to each cluster to represent the obstacle and its size range, and the obstacle's center point and its length, width and height are computed; the obstacle distance is computed from the laser return time, and the obstacle position, size and distance are output to control the robot's driving route.
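The per-obstacle Kalman filter mentioned earlier could look roughly like this constant-velocity sketch; the state layout, time step and noise magnitudes are illustrative assumptions:

```python
import numpy as np

class ObstacleTracker:
    # Constant-velocity Kalman filter smoothing one obstacle's (x, y) center.
    def __init__(self, x0, y0, dt=0.1, q=1e-2, r=1e-1):
        self.x = np.array([x0, y0, 0.0, 0.0])   # state: x, y, vx, vy
        self.P = np.eye(4)                      # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt        # constant-velocity motion model
        self.H = np.eye(2, 4)                   # we observe x and y only
        self.Q = q * np.eye(4)                  # process noise
        self.R = r * np.eye(2)                  # measurement noise

    def update(self, zx, zy):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correct with the new LiDAR measurement
        z = np.array([zx, zy])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                       # smoothed center estimate
```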
Schematic diagrams of the material judgment results are shown in FIG. 4a to FIG. 4c. The detected material information includes the width, height and center position coordinates of the bounding box; the material is projected onto a two-dimensional plane and compared with the fixed material area of the workshop. FIG. 4a shows material within the safe range and placed in a standard manner; FIG. 4b shows material within the safe range but shifted off the center position; FIG. 4c shows material that has left the safe range.
Material collection by the side camera is shown schematically in FIG. 5. The camera is fixed below the longitudinal motion module. After the monocular camera receives the collection control signal, it collects images of the material; two collections are performed in the whole process, and multiple images are used to judge the material information to ensure accuracy. After collection is completed, the acquired images are adjusted to a resolution of 416×416 and transmitted to the decoupling detection module.
The specific steps of workshop inspection robot target detection are as follows:
S1 Dataset collection: use cameras to collect a large number of material images in the workshop. When shooting, choose a suitable light source to maximize the brightness difference between the object under test and the other parts, control the target position and shooting angle of the object under test, and keep its size consistent in the image. The resolution of the collected images is 416×416.
S2 Dataset preprocessing: annotate the images with the LabelImg tool. First, grayscale the color images to reduce the amount of data to be processed; next, process the collected images with geometric transformations such as translation, transposition, mirroring, rotation and scaling; finally, use MixUp for data augmentation. In the RGB model, the weighted average method yields a reasonable grayscale image, with the formula:
L=R*299/1000+G*587/1000+B*114/1000
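A small sketch of the two preprocessing steps just described, the weighted-average grayscale conversion and MixUp augmentation; alpha = 1.0 is an illustrative choice, and in full training the two images' labels would be blended with the same coefficient:

```python
import numpy as np

def to_gray(rgb):
    # Weighted-average grayscale: L = R*299/1000 + G*587/1000 + B*114/1000.
    # rgb: (H, W, 3) float array; returns an (H, W) grayscale image.
    return rgb @ np.array([0.299, 0.587, 0.114])

def mixup(img_a, img_b, alpha=1.0):
    # MixUp: a convex blend of two training images; the corresponding labels
    # are blended with the same coefficient lam during training.
    lam = np.random.beta(alpha, alpha)
    return lam * img_a + (1 - lam) * img_b, lam
```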
S3 Dataset division: divide the dataset into training, test and validation sets in a ratio of 7:2:1.
S4 Network model construction: the baseline model adopts a DarkNet53 backbone plus an SPP layer, then connects a bidirectional feature pyramid for feature enhancement, followed by the construction of a decoupled detection head. Unlike existing detection algorithms, a decoupled head is used here to split target detection into two simultaneous branches, one for category and one for position information, namely the classification module and the positioning module, improving network performance; the head contains a 1×1 convolution for channel dimension reduction followed by two parallel 3×3 branches (both convolutions), and the network model parameters are updated by back-propagation. The whole network adds EMA weight updates, a cosine learning-rate schedule, IoU loss and an IoU-aware branch; BCE loss is used to train the cls and obj branches, and IoU loss to train the reg branch. The overall loss function is as follows:
Loss = (L_cls + γ·L_reg + L_obj) / N_pos, where L_cls denotes the classification loss, L_reg the positioning loss, L_obj the objectness (obj) loss, γ the balance coefficient of the positioning loss, and N_pos the number of grids classified as positive samples.
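Such a decoupled head might be sketched as follows: a 1×1 convolution for channel reduction feeding two parallel 3×3 convolution branches, with classification on one and regression plus the IoU-aware objectness output on the other; the intermediate channel width of 256 is an illustrative assumption:

```python
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    # Decoupled detection head per S4: a 1x1 convolution for channel reduction,
    # then two parallel 3x3 convolution branches, one for classification and one
    # for localization, the latter also carrying the IoU-aware objectness output.
    def __init__(self, in_ch, num_classes, mid_ch=256):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, mid_ch, 1)            # 1x1 channel reduction
        self.cls_branch = nn.Conv2d(mid_ch, mid_ch, 3, padding=1)
        self.reg_branch = nn.Conv2d(mid_ch, mid_ch, 3, padding=1)
        self.cls_out = nn.Conv2d(mid_ch, num_classes, 1)   # category logits
        self.reg_out = nn.Conv2d(mid_ch, 4, 1)             # center x, y and w, h
        self.obj_out = nn.Conv2d(mid_ch, 1, 1)             # objectness / IoU logit

    def forward(self, x):
        x = torch.relu(self.stem(x))
        c = torch.relu(self.cls_branch(x))
        r = torch.relu(self.reg_branch(x))
        return self.cls_out(c), self.reg_out(r), self.obj_out(r)
```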
S5 Network model training: train the S4 network model on the training set obtained in S2. The trained model contains each recognized target and its position information. Set a confidence threshold; if material is detected, mark targets whose confidence is greater than or equal to the threshold with "target". A model suited to workshop target detection is thus trained on the training set.
S6 Image input: during inspection, the workshop inspection robot collects images each time it reaches a fixed material placement area; the collection rules follow S1.
S7 Material prediction: input the images collected in S6 into the S4 network model to predict whether an image contains material. If it does, the target type and position information are given and marked with a bounding box, and the material's center coordinate position (x, y) and width and height (w, h) are output; otherwise, no processing is performed.
S8 Out-of-zone judgment: from the material's center coordinate position and width and height generated in S7, combined with the center coordinates (X, Y) and width and height (W, H) of the designated placement area of each material, project onto a two-dimensional plane and compare to judge whether the material has crossed its zone; if x+w>X+W or y+h>Y+H, the material is judged to have crossed its zone.
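The S8 criterion translates directly into a small predicate; the sketch reproduces the patent's inequality literally (a fuller check might also test the lower bounds x - w and y - h, which the patent does not state):

```python
def crosses_zone(x, y, w, h, X, Y, W, H):
    # S8 criterion: detected box with center (x, y) and size (w, h) versus the
    # designated area with center (X, Y) and size (W, H), both projected onto
    # the same two-dimensional plane.
    return (x + w > X + W) or (y + h > Y + H)

# Example: a box at (10, 5) of size 2x1 inside an area at (10, 5) of size 3x2.
print(crosses_zone(10, 5, 2, 1, 10, 5, 3, 2))  # False: material within its zone
```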
The target detection method and device for a workshop inspection robot of the present invention, through the multi-dimensional image acquisition module, the decoupling detection module and the material judgment module, build a set of mobile workshop inspection robot target detection methods and devices that acquire real-time image information of workshop materials without affecting the normal order of the workshop and transmit it to the detection module in time. The present invention's approach to workshop material detection solves the low efficiency of manual inspection and also overcomes its insufficient safety and flexibility. The present invention has the advantages of strong practicality and low cost.
The above is a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also be regarded as within the scope of protection of the present invention.

Claims (8)

  1. A target detection device for a workshop inspection robot, characterized by comprising:
    a multi-dimensional image acquisition module, used to detect whether there are obstacles ahead through a laser radar detector, and to collect workshop material images through a side monocular camera;
    a decoupling detection module, used to extract and enhance feature vectors from the collected workshop material images to obtain the center coordinates and the width and height of the material;
    a material judgment module, used to judge whether the material has crossed its zone based on the material's center coordinates and width and height together with the position of the fixed material area in the workshop.
  2. The target detection device for a workshop inspection robot according to claim 1, characterized in that the decoupling detection module comprises:
    a feature extraction module, used to extract features from the workshop material image through a backbone network to obtain a feature vector representing the material;
    a feature enhancement module, used to enhance the feature vector to obtain an enhanced feature vector;
    a classification module, used to obtain the category information of the material from the enhanced feature vector through a classification algorithm;
    a positioning module, used to obtain the center coordinates and the width and height of the material from the enhanced feature vector through a regression algorithm.
  3. The target detection device for a workshop inspection robot according to claim 1, characterized in that the backbone network comprises a DarkNet53 backbone network and an SPP layer connected in sequence.
  4. The target detection device for a workshop inspection robot according to claim 1, characterized in that the material judgment module comprises:
    a material positioning analysis unit, used to construct a two-dimensional coordinate system from the material's center coordinates and width and height, so that the material and the fixed material area of the workshop lie in this coordinate system;
    a material out-of-zone judgment unit, used to judge, in this coordinate system, whether the material has crossed its zone based on the material's center coordinates and width and height together with the position of the fixed material area.
  5. A target detection method for a workshop inspection robot, characterized by comprising the following steps:
    the inspection robot detects, through a laser radar detector, whether there are obstacles ahead and their position and size; if an obstacle is found, the robot goes around it, otherwise it goes straight;
    after reaching the designated fixed material area of the workshop, the inspection robot stops, and the side monocular camera starts to collect workshop material images and uploads them to the decoupling detection module;
    the decoupling detection module extracts a feature vector, and the feature enhancement module performs feature enhancement to obtain an enhanced feature vector;
    the classification module obtains the category information of the material in the image from the enhanced feature vector, and the positioning module obtains the material's center coordinates and width and height from it;
    the material judgment module constructs a two-dimensional coordinate system from the material's center coordinates and width and height, so that the material and the fixed material area lie in this coordinate system;
    the material out-of-zone judgment unit judges, in this coordinate system, whether the material has crossed its zone based on its center coordinates and width and height together with the position of the fixed material area.
  6. The target detection method for a workshop inspection robot according to claim 5, characterized in that the specific steps of the laser radar obstacle detection process include:
    the laser transmitter in the laser radar detector scatters laser beams in all directions; after striking an obstacle, the detection laser is reflected back to the receiver in the detector in the form of a laser point cloud;
    the receiver in the radar detector performs time synchronization and extrinsic parameter calibration on the incoming laser point cloud;
    the radar detector preprocesses the point cloud to obtain a point cloud containing the background and the foreground representing obstacles; an unsupervised clustering algorithm groups the preprocessed point cloud into multiple clusters, segmenting the foreground and background point clouds; each cluster of the foreground point cloud represents one obstacle, and a bounding box is fitted to each cluster to represent the obstacle and its size range, yielding the obstacle's center point and its length, width and height;
    the radar detector computes the obstacle distance from the laser return time and sends the obstacle distance, center point, and length, width and height to the robot so that it can avoid the obstacle.
  7. The target detection method for a workshop inspection robot according to claim 5, characterized in that the decoupling detection module performing preliminary feature extraction and the feature enhancement module producing the enhanced feature vector comprise the following steps:
    the feature extraction module extracts features from the material image through the backbone network, obtaining feature vectors at three scales from the DarkNet53 backbone; the three-scale feature vectors then enter the SPP layer for further feature extraction;
    the feature enhancement module performs feature fusion through a bidirectional feature pyramid, combining feature information of different scales to obtain the final feature vector.
  8. The target detection method for a workshop inspection robot according to claim 5, characterized in that the decoupling detection module is obtained by training multiple times on workshop training samples.
PCT/CN2023/091465 2022-10-11 2023-04-28 Target detection method and device for a workshop inspection robot WO2024077934A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211240608.2A CN117934791A (zh) 2022-10-11 2022-10-11 Target detection method and device for a workshop inspection robot
CN202211240608.2 2022-10-11

Publications (1)

Publication Number Publication Date
WO2024077934A1 true WO2024077934A1 (zh) 2024-04-18

Family

ID=90668652

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/091465 WO2024077934A1 (zh) 2022-10-11 2023-04-28 Target detection method and device for a workshop inspection robot

Country Status (2)

Country Link
CN (1) CN117934791A (zh)
WO (1) WO2024077934A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7248968B2 (en) * 2004-10-29 2007-07-24 Deere & Company Obstacle detection using stereo vision
CN113050654A * 2021-03-29 2021-06-29 中车青岛四方车辆研究所有限公司 Obstacle detection method, and on-board obstacle avoidance system and method for an inspection robot
CN113743391A * 2021-11-08 2021-12-03 江苏天策机器人科技有限公司 Three-dimensional obstacle detection system and method for low-speed autonomous driving robots
CN113989503A * 2021-10-26 2022-01-28 广西中烟工业有限责任公司 Production line inspection system and method, electronic device, and storage medium
CN115049598A * 2020-06-11 2022-09-13 创优数字科技(广东)有限公司 Method, system and device for detecting standardized placement of trial products on store shelves


Also Published As

Publication number Publication date
CN117934791A (zh) 2024-04-26

Similar Documents

Publication Publication Date Title
CN110415342B (zh) Three-dimensional point cloud reconstruction device and method based on multi-sensor fusion
Luo et al. Vision-based extraction of spatial information in grape clusters for harvesting robots
US11288884B2 (en) UAV real-time path planning method for urban scene reconstruction
WO2018028103A1 (zh) Unmanned aerial vehicle power line inspection method based on human visual characteristics
CN113450408B (zh) Irregular object pose estimation method and device based on a depth camera
CN108389256B (zh) Two- and three-dimensional interactive UAV power tower inspection assistance method
CN110163904A (zh) Object labeling method, movement control method, device, equipment and storage medium
Kang et al. Accurate fruit localisation using high resolution LiDAR-camera fusion and instance segmentation
WO2024007485A1 (zh) Air-ground multi-robot map fusion method based on visual features
CN111402632B (zh) Risk prediction method for pedestrian movement trajectories at intersections
Fan et al. Dynamicfilter: an online dynamic objects removal framework for highly dynamic environments
CN113284144A (zh) UAV-based tunnel detection method and device
CN115272830A (zh) Pantograph foreign object detection method based on deep learning
CN114750154A (zh) Dynamic target recognition, positioning and grasping method for a live-working distribution network robot
WO2024077934A1 (zh) Target detection method and device for a workshop inspection robot
Feng et al. Object detection and localization based on binocular vision for autonomous vehicles
Ding et al. Electric power line patrol operation based on vision and laser SLAM fusion perception
CN111354028B (zh) Binocular-vision-based method for identifying and tracking hidden hazards in power transmission corridors
CN114913129A (zh) Identification and positioning system for composite material layup defects
Li et al. Real time obstacle estimation based on dense stereo vision for robotic lawn mowers
Suzui et al. Toward 6 dof object pose estimation with minimum dataset
CN113510691A (zh) Intelligent vision system for a plastering robot
Gu et al. 3D reconstruction for the operating environment of the robots in distribution Hot-line working based on mixed reality
CN114782626B (zh) Substation scene mapping and positioning optimization method based on laser and vision fusion
Gao et al. Improved binocular localization of kiwifruit in orchard based on fruit and calyx detection using YOLOv5x for robotic picking

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23876145

Country of ref document: EP

Kind code of ref document: A1