CN117333539A - Mobile robot-oriented charging pile positioning method and device - Google Patents


Info

Publication number
CN117333539A
CN117333539A
Authority
CN
China
Prior art keywords
charging pile
mobile robot
training
angle
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311304575.8A
Other languages
Chinese (zh)
Inventor
王鑫
颜俊
俞春华
胥锐
江新炼
虞锐锋
刘虎
刘舰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Huamai Robot Technology Co ltd
Original Assignee
Nanjing Huamai Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Huamai Robot Technology Co ltd filed Critical Nanjing Huamai Robot Technology Co ltd
Priority to CN202311304575.8A
Publication of CN117333539A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a mobile robot-oriented charging pile positioning method, which comprises the following steps: 1) constructing a training data set: the mobile robot photographs the charging pile in the target environment with its onboard camera to obtain charging pile images and build the training data set; 2) charging pile identification: charging pile identification training is carried out with a YOLOv5-Lite network to obtain a charging pile identification model; 3) distance estimation: the distance between the mobile robot and the charging pile is calculated using the pinhole imaging principle; 4) angle estimation: the mapping between the charging pile image and the corresponding angle information is trained with a YOLOv5-Lite network to obtain an angle estimation model between the mobile robot and the charging pile.

Description

Mobile robot-oriented charging pile positioning method and device
Technical Field
The invention discloses a charging pile positioning method for a mobile robot, and belongs to the fields of mobile robot applications and deep learning.
Background
In recent years, with the continuing maturation of computer technology, mobile robots are being used in an increasing number of fields. A common characteristic of mobile robots is that they need to be charged; to enable a mobile robot to operate for long periods, its charging problem must be solved.
Mobile robots typically perform docking and charging by building a map of the environment and navigating on that map. However, this places high demands on the robot operator and on the robot's working environment, because the conventional environment maps built with lasers and cameras contain no semantic information about objects, which makes them difficult for non-professionals to operate. Moreover, owing to the shortcomings of existing algorithms, if dynamic objects such as people are present during mapping, this dynamic information is added to the map, making the map inaccurate. Finally, for various reasons, the robot may fail to dock accurately with the charging pile. To solve these problems in the charging process of mobile robots, the invention designs a charging pile positioning method for mobile robots.
To address the large quantization error of the measured electric energy and the tendency of AC charging piles to fail in the conventional technology, an image-recognition-based AC charging pile error verification method has been designed and a new error verification device constructed: the power change of the charging pile under test is detected through abrupt changes in the image difference, two frames are compared with a cosine-similarity algorithm, and the image difference between the current frame and the previous frame, as well as the change of that difference, are calculated. The results show that this design reduces the verification error of the AC charging pile, with an average error below 5%. Document [2] proposes a method for identifying key components of a DC charging pile: failure mode, effects and criticality analysis is introduced into the reliability evaluation of the charging pile, and an evaluation index system for the reliability and recyclability of the charging pile is established from the three angles of risk priority, recycling difficulty and recycling potential. Document [3] analyzes the task objectives and technical requirements of automatic charging of electric vehicles, designs an automatic charging robot system, describes its hardware composition, and designs the corresponding software; for the problem of recognizing and locating the charging cover and charging socket of the electric vehicle, different recognition and positioning schemes are designed: template libraries of the charging cover and the charging socket are established with different local feature descriptors, and recognition and positioning are then completed with a kd-tree (k-dimensional tree) based nearest-neighbor search and an ICP (Iterative Closest Point) registration algorithm. The invention provides a charging pile positioning method for mobile robots that offers a certain improvement in recognition accuracy.
[1] Sun Panpan. Research on the identification and effect analysis of key factors in charging pile crowdfunding participation [D]. China University of Petroleum (Beijing), 2019.
[2] Da, Li Xun, Huang Jianzhong. Research on an AC charging pile error verification method based on image recognition [J]. Electronic Measurement Technology, 2021, 44(07): 13-18.
[3] Liu Bin. An automatic charging robot based on vision and force sense [D]. Wuhan University of Technology, 2022.
Disclosure of Invention
To solve the above problems, the invention discloses a charging pile positioning method for mobile robots that incorporates deep learning. First, a charging pile target identification data set and an angle estimation data set are constructed by collecting charging pile images. Then, YOLOv5-Lite is used to carry out target recognition training and angle estimation training respectively, yielding a target recognition model and an angle estimation model. The pixel length of the charging pile in the image is obtained from the target recognition result, and the target distance is calculated using the pinhole imaging principle. Finally, the position of the charging pile is obtained from the estimated target distance and angle.
The technical scheme of the invention is a mobile robot-oriented charging pile positioning method comprising the following steps:
step 1: building training data sets
The mobile robot photographs the charging pile in the target environment with its onboard camera to obtain charging pile images, from which the training data sets are constructed.
Step 1-1: construction of charging pile identification training database
The acquired images are manually labeled with LabelImg; there is only one label class, the charging pile, marked as "charging station".
Step 1-2: building charging pile angle estimation training database
The acquired images are manually labeled with LabelImg, and the angle between the mobile robot and the charging pile is used as the label of the corresponding charging pile image, forming a training database of (charging pile image, angle label) pairs.
Step 2: charging pile identification
And carrying out charging pile identification training by utilizing the Yolov5-Lite network to obtain a charging pile identification model.
Step 2-1: picture preprocessing
The pictures of the training set are uniformly resized to 640 x 640 to enhance the extraction of charging pile key point information.
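As an illustration only, a minimal preprocessing sketch for this step might look as follows; the use of OpenCV and a plain resize (rather than letterbox padding) is an assumption, since the patent does not name the library:

```python
import cv2

def preprocess(image_path: str):
    """Load a training picture and resize it to the 640 x 640 network input size."""
    img = cv2.imread(image_path)        # image as a NumPy array (BGR)
    img = cv2.resize(img, (640, 640))   # uniform 640 x 640 input, as in step 2-1
    return img
```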
Step 2-2: classification learning
And carrying out iterative training on the preprocessed training data set in batches by utilizing the Yolov5-Lite network.
Step 2-3: charging pile identification model
Step 2-2 is repeated for multiple rounds of classification learning, and the model with the highest charging pile recognition accuracy in the offline learning stage is selected as the final charging pile recognition model.
Step 3: distance estimation
The distance between the mobile robot and the charging pile is calculated by using the pinhole imaging principle.
Step 3-1: prior parameters
Acquiring the length W of the charging pile through priori knowledge, and acquiring the focal length f of the camera;
step 3-2: measuring parameters
The pixel length W' of the charging pile target in the image is obtained from the charging pile identification result.
Step 3-3: Calculating the distance d
The actual distance d is calculated according to the formula d = Wf/W'.
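For illustration, a minimal sketch of this pinhole-model calculation is given below; it assumes the focal length f is expressed in pixels so that the units cancel, and the numbers in the usage example are hypothetical:

```python
def estimate_distance(W: float, f: float, W_pixels: float) -> float:
    """Pinhole-imaging distance estimate: d = W * f / W'.

    W        -- known physical length of the charging pile (e.g. in metres)
    f        -- camera focal length expressed in pixels
    W_pixels -- pixel length W' of the charging pile in the image
    """
    return W * f / W_pixels

# Hypothetical example: a 0.30 m long pile, f = 600 px, detected length 120 px
# gives d = 0.30 * 600 / 120 = 1.5 m.
print(estimate_distance(0.30, 600.0, 120.0))  # 1.5
```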
Step 4: angle estimation
The mapping between the charging pile image and the corresponding angle information is trained by utilizing the YOLOv5-Lite network to obtain an angle estimation model between the mobile robot and the charging pile.
Step 4-1: image preprocessing
Image preprocessing is performed as in step 2-1.
Step 4-2: classification learning
And carrying out iterative training on the preprocessed training data set in batches by utilizing the Yolov5-Lite network.
Step 4-3: angle label estimation model
Step 4-2 is repeated for multiple rounds of classification learning, and the model with the best angle estimation label performance in the offline learning stage is selected as the final angle label estimation model.
The invention firstly acquires the charging pile image to respectively construct a charging pile target identification data set and an angle estimation data set. And then, respectively carrying out target recognition training and angle estimation training by utilizing YoLoV5-lite to obtain a target recognition model and an angle estimation model. And obtaining the pixel length of the charging pile in the image according to the target recognition model result, and calculating the target distance by using a pinhole imaging principle. And finally, estimating the target distance and the angle to obtain the position information of the charging pile.
The invention relates to a charging pile positioning device for a mobile robot, which comprises the following components:
and (5) an offline module: and training a charging pile target recognition model and training a charging pile angle estimation model.
And an online identification module: and estimating the distance and the angle between the charging pile and the mobile robot by using the obtained charging pile image.
The offline learning module comprises three modules, namely a charging pile target identification module and an angle label classification learning module.
Charging pile target identification module: and constructing a charging pile target identification data set, learning the relation between the charging pile target and the tag, and training a charging pile target identification model.
And a distance calculating module: and calculating the distance between the charging pile and the mobile robot by using the small hole imaging principle and through priori knowledge and target recognition results.
The angle label classification learning module: and constructing a charging pile angle estimation data set, learning the relation between a charging pile target and an angle label, and training a charging pile angle estimation model.
And an online identification module: the device comprises a distance calculation module and an angle estimation module.
And a distance calculating module: the mobile robot sends the obtained charging pile image into a charging pile target identification model, and the distance between the charging pile and the mobile robot is obtained by using a small hole imaging principle;
an angle estimation module: sending the obtained charging pile image into a charging pile angle estimation model to obtain an angle between the charging pile and the mobile robot;
further, the charging pile positioning device facing the mobile robot comprises a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to the first aspect.
A computer readable storage medium having stored thereon a computer program which when executed by a processor realizes the steps of the method of the first aspect.
The invention has the beneficial effects that:
(1) The invention converts the mobile robot charging pile positioning problem into a classification problem and performs positioning estimation through machine learning. The online running time is therefore short, and both the speed and the accuracy of positioning are improved.
(2) The YOLOv5-Lite adopted by the invention performs a series of ablation experiments on the basis of YOLOv5, making the model lighter (fewer FLOPs, lower memory occupation and fewer parameters) and faster (a shuffle channel is added and the YOLOv5 head is channel-pruned), so that better offline training performance can be obtained.
(3) The YOLOv5-Lite adopted by the invention can reach more than 10 FPS on a Raspberry Pi 4B, so the invention is easier to deploy (the Focus layer and its four slice operations are removed), and the accuracy loss from model quantization stays within an acceptable range.
The pixel length of the charging pile in the image is obtained from the target recognition model result, and the target distance is calculated using the pinhole imaging principle. Finally, the position of the charging pile is obtained from the estimated target distance and angle. The invention is simple to implement, achieves high positioning precision, and can be used in fields such as smart homes and smart elderly care.
Drawings
In order that the invention may be more readily understood, a more particular description of the invention will be rendered by reference to specific embodiments that are illustrated in the appended drawings.
FIG. 1 is the system model of the present invention;
FIG. 2 is the flow chart of the present invention;
FIG. 3 is a schematic diagram of the YOLOv5-Lite structure applied by the invention;
FIG. 4 is a schematic diagram of the distance calculation based on pinhole imaging used by the invention;
FIG. 5 shows the experimental scene of the invention;
FIG. 6 shows the charging pile identification result of the invention.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
While the embodiments of the present invention have been illustrated and described in detail in the drawings, the cross-sectional view of the device structure is not to scale in the general sense for ease of illustration, and the drawings are merely exemplary and should not be construed as limiting the scope of the invention. In addition, the three-dimensional dimensions of length, width and depth should be included in actual fabrication.
Also in the description of the present invention, it should be noted that the orientation or positional relationship indicated by the terms "upper, lower, inner and outer", etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first, second, or third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected, and coupled" should be construed broadly in this disclosure unless otherwise specifically indicated and defined, such as: can be fixed connection, detachable connection or integral connection; it may also be a mechanical connection, an electrical connection, or a direct connection, or may be indirectly connected through an intermediate medium, or may be a communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Example 1:
Referring to FIG. 1 and FIG. 2, the present embodiment provides a charging pile positioning method for a mobile robot. First, the mobile robot photographs the charging pile in the target environment with its onboard camera to obtain charging pile images and construct the training data sets. Then, charging pile identification is carried out with a YOLOv5-Lite network to obtain a charging pile identification model. Further, the distance between the mobile robot and the charging pile is calculated based on the pinhole imaging principle. Further, the mapping between the charging pile image and the corresponding angle information is trained with a YOLOv5-Lite network to obtain an angle estimation model (described below) between the mobile robot and the charging pile. Finally, the position of the charging pile is determined from the obtained distance and angle.
Referring to FIG. 3, the present embodiment provides a charging pile identification model based on a YOLOv5-Lite network, obtained as follows. Step one: the mobile robot photographs the charging pile in the target environment with its onboard camera to obtain charging pile images, and the obtained images are manually labeled with LabelImg. The LabelImg procedure is: 1) create the corresponding label, 2) frame the target object and save the annotation.
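By way of illustration, when LabelImg is used in its YOLO export mode each saved annotation is typically one text line of the form `class_id x_center y_center width height`, with coordinates normalised to the image size; the concrete values below are hypothetical:

```python
# Hypothetical LabelImg/YOLO annotation for one charging pile box
# (class 0 = charging station; coordinates are fractions of image width/height).
example_label_line = "0 0.512 0.633 0.180 0.240"

def bbox_pixel_width(label_line: str, image_width: int = 640) -> float:
    """Convert the normalised box width of a YOLO-format label line to pixels."""
    _, _, _, w_norm, _ = map(float, label_line.split())
    return w_norm * image_width

print(bbox_pixel_width(example_label_line))  # 0.180 * 640 = 115.2 px
```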
There is only one label class, the charging pile, marked as "charging station". The ratio of the training set to the test set is set to 9:1.
Step two: the processed data are uniformly resized to 640 x 640 to enhance the extraction of charging pile key point information.
Step three: the processed data are trained with an iteration batch size of 64, a total of 300 iterations and an initial learning rate of 0.001; mini-batch gradient descent is adopted and the network parameters are updated with the Adam optimizer, after which target localization, classification and target feature extraction are carried out.
Step four: whether the iterative training is complete and an optimal model has been obtained is judged; if the optimal weight model has not been obtained, the process returns to retrain the model and test again. If the iterative training is complete and an optimal model is obtained (this can be judged from the loss function), the test data are input to detect the target class, and the detection result is finally obtained and output.
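A minimal sketch of the mini-batch training configuration described in step three (batch size 64, 300 iterations, initial learning rate 0.001, Adam optimizer) is shown below; the dataset, model and loss objects are placeholders for illustration, not the actual YOLOv5-Lite training code:

```python
import torch
from torch.utils.data import DataLoader

def train(model, train_dataset, loss_fn, device="cuda"):
    """Mini-batch gradient descent with Adam, following the settings of this embodiment."""
    loader = DataLoader(train_dataset, batch_size=64, shuffle=True)  # iteration batch size 64
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)       # initial learning rate 0.001
    model.to(device).train()
    for epoch in range(300):                                         # 300 training iterations (epochs)
        for images, targets in loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(images), targets)  # localization + classification loss
            loss.backward()
            optimizer.step()
    return model
```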
In this embodiment, referring to FIG. 3, it should be noted that a lightweight modification is performed on the basis of the original YOLOv5 network. First, the Focus layer is removed from the input end of the network structure, which effectively reduces floating-point operations and improves the operation speed. Further, ShuffleNet is introduced into the backbone network, which minimizes memory access and increases the inference speed. Then, the FPN+PAN structure in the Neck module is improved: the upper layers of the YOLOv5 network generate deep and shallow feature maps, where the deep feature maps carry stronger semantic features but weaker positioning information, while the shallow feature maps carry stronger position information but weaker semantic information. The FPN transmits deep semantic features to the shallow layers, enhancing the semantic expression on multiple scales; the PAN, in turn, conducts shallow location information to the deep layers, enhancing the localization capability on multiple scales. Parameter tuning is performed on the FPN+PAN structure so that the semantic information of the feature maps is better obtained.
Referring to FIG. 4, the present embodiment provides a distance measurement method for the charging pile of the mobile robot, comprising: Step one: acquire the known parameters: the length W of the charging pile, the focal length f of the camera carried by the mobile robot, and the pixel length W' of the identified charging pile. Step two: calculate the distance d; according to the principle shown in FIG. 4, the distance d between the charging pile and the mobile robot can be calculated from the formula d = Wf/W'.
Referring to FIG. 1, the present embodiment trains the mapping between charging pile images and the corresponding angle information based on a YOLOv5-Lite network to obtain an angle estimation model between the mobile robot and the charging pile, as follows. Step one: data preparation. The mobile robot photographs the charging pile in the target environment with its onboard camera to collect an image data set containing the charging pile; for each obtained image, the charging pile is framed with LabelImg and its position and the corresponding angle information are marked. The labels are divided into 3 classes in total, corresponding to the three regions of 0 degrees, 45 degrees and 90 degrees.
Step two: and (3) preprocessing data, converting the marked data set into a Yolo format, generating a corresponding tag file, uniformly adjusting the size of the processed data to 640 x 640, and enhancing the extraction of the key point information of the charging pile.
Step three: selecting a Yolov5-Lite network architecture, and processing the processed data according to 9: the ratio of 1 is divided into a training set and a testing set, the number of input categories is three, and the input such as the size of a picture is 640 x 640.
Step four: training, setting the batch size of each iteration to be 64, setting the total iteration times to be 300, setting the initial learning rate to be 0.001, adopting a small batch gradient descent method, updating network parameters by using an Adam optimizer, and then carrying out target positioning classification and target feature extraction.
Adam (Adaptive Moment Estimation) is a deep learning optimization algorithm that combines momentum and adaptive learning rate. The method accelerates convergence and jumps out of a local optimal solution through momentum, and automatically adjusts the learning rate through self-adaptive learning rate according to first moment estimation (mean) and second moment estimation (variance) of the gradient so as to adapt to updating requirements of different parameters. Adam also includes an offset correction mechanism to cope with the inaccuracy of the estimate in the initial iteration, making it suitable for various deep learning tasks and often performing well, but may require careful adjustment of the super-parameters to achieve optimal performance.
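For reference, the standard Adam update that this paragraph summarises can be written as:

```latex
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^{2} \\
\hat{m}_t &= m_t / (1-\beta_1^{t}), \qquad \hat{v}_t = v_t / (1-\beta_2^{t}) \\
\theta_t &= \theta_{t-1} - \alpha\, \hat{m}_t / \bigl(\sqrt{\hat{v}_t} + \varepsilon\bigr)
\end{aligned}
```

where g_t is the gradient, m_t and v_t are the first- and second-moment estimates, the hatted quantities are the bias-corrected moments, alpha is the learning rate (0.001 in this embodiment) and epsilon is a small constant that prevents division by zero.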
Step five: and evaluating the performance of the trained model by using a test set, wherein the performance comprises indexes such as the accuracy rate, recall rate and the like of target detection.
Step six: judging whether iteration training is completed or not, obtaining an optimal model, and if the optimal weight model is not obtained, carrying out optimization methods such as model parameter adjustment, data enhancement and the like, so as to improve the model performance.
Step seven: and detecting the charging pile image and predicting the angle information by using the trained model. The Yolov5-Lite network has the characteristic of multi-target identification, so that the method can be used for positioning scenes of a plurality of charging piles.
Example 2:
This embodiment provides a charging pile positioning device for a mobile robot. The device includes:
Offline module: trains a charging pile target recognition model and a charging pile angle estimation model.
Online identification module: estimates the distance and the angle between the charging pile and the mobile robot from the acquired charging pile image.
The offline learning module comprises a charging pile target identification module, a distance calculation module and an angle label classification learning module.
Charging pile target identification module: constructs the charging pile target identification data set, learns the relation between the charging pile target and its label, and trains the charging pile target identification model.
Distance calculation module: calculates the distance between the charging pile and the mobile robot from prior knowledge and the target recognition result, using the pinhole imaging principle.
Angle label classification learning module: constructs the charging pile angle estimation data set, learns the relation between the charging pile target and the angle label, and trains the charging pile angle estimation model.
The online identification module comprises a distance calculation module and an angle estimation module.
Distance calculation module: the mobile robot feeds the acquired charging pile image into the charging pile target identification model, and the distance between the charging pile and the mobile robot is obtained with the pinhole imaging principle.
Angle estimation module: the acquired charging pile image is fed into the charging pile angle estimation model to obtain the angle between the charging pile and the mobile robot.
example 3:
the invention also provides a charging pile positioning device facing the mobile robot. The charging pile positioning device for a mobile robot of this embodiment includes: a processor, a memory, and a computer program stored in the memory and executable on the processor. The steps of the above-described method embodiments are implemented by the processor when executing the computer program. Alternatively, the processor may implement the functions of the modules/units in the above embodiments when executing the computer program.
The charging pile positioning device facing the mobile robot can be a desktop computer, a notebook computer, a palm computer, a cloud server or another computing device. The mobile robot-oriented charging pile positioning device may include, but is not limited to, a processor and a memory. The processor may be a central processing unit (CPU), but may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the mobile robot-oriented charging pile positioning device by running or executing the computer program and/or modules stored in the memory and invoking the data stored in the memory.
The modules/units integrated in the mobile robot-oriented charging pile positioning device may, if implemented in the form of software functional units and sold or used as stand-alone products, be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the method of the above embodiment by instructing the related hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, and so on. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium can be adjusted as required by legislation and patent practice in the relevant jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
Calculation example:
In the experimental scene shown in FIG. 5, the camera carried on the mobile robot is used to photograph the charging pile, and the acquired images are labeled to obtain the training data sets. The charging pile identification model and the angle estimation model are then trained; the angle estimation model uses 874 training samples and the charging pile identification model uses 1124 training samples. The test set is predicted with the trained models, and FIG. 6 depicts the charging pile identification result obtained on the mobile robot.
Table 1 describes the classification results for the angle tags:
Precision    Recall    mAP@0.5    mAP@0.5:0.95
0.962        0.999     0.995      0.798
Table 2 describes the distance estimation performance:
the foregoing description is only of the preferred embodiments of the present application and is presented as a description of the principles of the technology being utilized. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to the specific combinations of features described above, but also covers other technical solutions which may be formed by any combination of the features described above or their equivalents without departing from the inventive concept. Such as the above-described features and technical features having similar functions (but not limited to) disclosed in the present application are replaced with each other.
Other technical features besides those described in the specification are known to those skilled in the art, and are not described herein in detail to highlight the innovative features of the present invention.

Claims (5)

1. The mobile robot-oriented charging pile positioning method is characterized by comprising the following steps of:
step 1: building training data sets
Shooting the charging pile in the target environment by using the camera carried by the mobile robot to obtain a charging pile image, and constructing a training data set;
step 2: charging pile identification
Performing charging pile identification training by utilizing a Yolov5-Lite network to obtain a charging pile identification model;
step 3: distance estimation
Calculating the distance between the mobile robot and the charging pile by using the pinhole imaging principle;
step 4: angle estimation
Training between the charging pile image and corresponding angle information by utilizing a Yolov5-Lite network to obtain an angle estimation model between the mobile robot and the charging pile;
the method step 1 of constructing a training data set comprises the following steps:
step 1-1: construction of charging pile identification training database
Manually labeling the acquired images by using LabelImg, wherein there is only one label class, the charging pile, marked as "charging station";
step 1-2: building charging pile angle estimation training database
Manually labeling the acquired images by using LabelImg, and using the angle information between the mobile robot and the charging pile as the label of the corresponding charging pile image, thereby forming a training database of (charging pile image, angle label) pairs.
2. The mobile robot-oriented charging pile positioning method according to claim 1, wherein the method step 2 of identifying the charging pile comprises the steps of:
step 2-1: picture preprocessing
Uniformly adjusting the size of the pictures of the training set to 640 x 640, and enhancing the extraction of key point information of the charging pile;
step 2-2: classification learning
Carrying out iterative training on the preprocessed training data set in batches by utilizing a Yolov5-Lite network;
step 2-3: charging pile identification model
Repeating step 2-2 for multiple rounds of classification learning, and selecting the model with the highest charging pile recognition accuracy in the offline learning stage as the final charging pile recognition model.
3. The mobile robot-oriented charging pile positioning method according to claim 1, wherein the step 3 distance estimation comprises the steps of:
step 3-1: prior parameters
Acquiring the length W of the charging pile through priori knowledge, and acquiring the focal length f of the camera;
step 3-2: measuring parameters
Obtaining the pixel length W' of the charging pile target in the image through the identification result of the charging pile;
step 3-2: calculating the distance d
The actual distance d is calculated according to the formula d = Wf/W'.
4. The mobile robot-oriented charging pile positioning method according to claim 1, wherein the method step 4 angle estimation comprises the steps of:
step 4-1: image preprocessing
Performing image preprocessing by utilizing the step 2-1;
step 4-2: classification learning
Carrying out iterative training on the preprocessed training data set in batches by utilizing a Yolov5-Lite network;
step 4-3: angle label estimation model
Repeating step 4-2 for multiple rounds of classification learning, and selecting the model with the best angle estimation label performance in the offline learning stage as the final angle label estimation model.
5. A mobile robot-oriented charging pile positioning device implementing the charging pile positioning method according to any one of claims 1 to 4, characterized in that the device comprises:
an offline module: training a charging pile target recognition model and training a charging pile angle estimation model;
an online identification module: estimating the distance and the angle between the charging pile and the mobile robot by using the obtained charging pile image;
the offline learning module comprises a charging pile target identification module, a distance calculation module and an angle label classification learning module;
the charging pile target identification module: constructing a charging pile target identification data set, learning the relation between the charging pile target and the label, and training a charging pile target identification model;
the distance calculation module: calculating the distance between the charging pile and the mobile robot from prior knowledge and the target recognition result by using the pinhole imaging principle;
the angle label classification learning module: constructing a charging pile angle estimation data set, learning the relation between the charging pile target and the angle label, and training a charging pile angle estimation model;
the online identification module comprises a distance calculation module and an angle estimation module;
the distance calculation module: the mobile robot sends the obtained charging pile image into the charging pile target identification model, and the distance between the charging pile and the mobile robot is obtained by using the pinhole imaging principle;
the angle estimation module: the obtained charging pile image is sent into the charging pile angle estimation model to obtain the angle between the charging pile and the mobile robot;
comprises a processor and a storage medium; the storage medium is used for storing instructions;
the processor being operative according to the instructions to perform the steps of the method according to the first aspect;
a readable storage medium having stored thereon a computer program which when executed by a processor realizes the steps of the method of the first aspect.
CN202311304575.8A 2023-10-09 2023-10-09 Mobile robot-oriented charging pile positioning method and device Pending CN117333539A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311304575.8A CN117333539A (en) 2023-10-09 2023-10-09 Mobile robot-oriented charging pile positioning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311304575.8A CN117333539A (en) 2023-10-09 2023-10-09 Mobile robot-oriented charging pile positioning method and device

Publications (1)

Publication Number Publication Date
CN117333539A true CN117333539A (en) 2024-01-02

Family

ID=89276833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311304575.8A Pending CN117333539A (en) 2023-10-09 2023-10-09 Mobile robot-oriented charging pile positioning method and device

Country Status (1)

Country Link
CN (1) CN117333539A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110000784A (en) * 2019-04-09 2019-07-12 深圳市远弗科技有限公司 A kind of robot recharges positioning navigation method, system, equipment and storage medium
WO2021244079A1 (en) * 2020-06-02 2021-12-09 苏州科技大学 Method for detecting image target in smart home environment
CN114018268A (en) * 2021-11-05 2022-02-08 上海景吾智能科技有限公司 Indoor mobile robot navigation method
CN114966538A (en) * 2022-04-26 2022-08-30 清华大学 Single-station positioning method and system combining scene recognition and ranging and angle measurement error calibration
CN116363445A (en) * 2022-12-14 2023-06-30 深圳云天励飞技术股份有限公司 Image angle classification model training method, device, equipment and storage medium
CN116402891A (en) * 2023-04-06 2023-07-07 浙大宁波理工学院 Automatic positioning method for charging interface of new energy automobile and automatic plugging and positioning method for charging gun


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination