CN115457130A - Electric vehicle charging port detection and positioning method based on depth key point regression - Google Patents
- Publication number
- CN115457130A
- Authority
- CN
- China
- Prior art keywords
- charging port
- charging
- key point
- detection
- dimensional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/60—Other road transportation technologies with climate change mitigation effect
- Y02T10/70—Energy storage systems for electromobility, e.g. batteries
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/60—Other road transportation technologies with climate change mitigation effect
- Y02T10/7072—Electromobility specific charging systems or methods for batteries, ultracapacitors, supercapacitors or double-layer capacitors
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an electric vehicle charging port detection and positioning method based on depth key point regression, comprising: 1) a charging port coarse positioning method based on a visual target detector; 2) a charging terminal key point detection and accurate positioning method based on deep neural network key point regression and a charging terminal geometric prior; 3) a method for inversely solving the accurate three-dimensional pose of the charging port from the sensor parameters and the charging terminal detection result, together with a variant that performs the inverse solution from binocular sensor parameters; and 4) a method for registering and correcting the inversely solved three-dimensional key point cloud against the real charging port three-dimensional key point cloud, further improving positioning accuracy. The invention belongs to the field of automatic charging and improves charging port detection performance in terms of detection rate, positioning accuracy, and robustness.
Description
Technical Field
The invention belongs to the technical field of automatic charging, and particularly relates to a depth key point regression-based electric vehicle charging port detection and positioning method.
Background
With the continuous development of computer vision technology, autonomous driving has become an inevitable trend, and electric vehicles are the most important platform for its future industrialization. To adapt to this trend, the automation of the supporting infrastructure for electric vehicles is drawing increasing attention from industry and academia, and charging equipment plays a crucial role in energy replenishment. To make charging faster and more convenient, automatic charging is the clear direction of development, and the recognition and positioning of the charging port — the key problem in automatic charging — is the most important part of the whole technology.
Mass-produced automatic charging systems and equipment have not yet appeared on the market; the field is still at the stage of concept demonstration and technology research and development. The charging port recognition and positioning technologies disclosed so far can be classified, at the data acquisition level, into technologies based on active cameras and on passive cameras. Active cameras include structured-light cameras, TOF cameras, LiDAR, and the like; passive cameras are mainly ordinary visible-light cameras. At the algorithm level, the methods divide into global model fitting and component detection. Global fitting places high demands on the sensor's three-dimensional perception: the sensor must produce an accurate three-dimensional perception of the charging port to be matched against a charging port template. When perception of the charging port is incomplete or the data quality is low, the fitting algorithm cannot work normally and positioning fails. Component-detection-based methods offer high flexibility, robustness, and reliability, but suffer from poor precision. Most current key-component detection algorithms detect the charging port, or the charging terminals within it, by modeling them as ellipses.
However, because of the concentric-circle structure of the charging holes, ellipse detection is semantically ambiguous: it cannot be determined whether the inner or the outer circle has been fitted, individual charging terminals are hard to distinguish during detection, only bare ellipses are detected, and the shape constraints between the key parts of the charging port are ignored — so the accuracy of such algorithms is poor in practical application.
General target detection methods in the prior art fall mainly into three categories: methods based on image gray values, methods based on feature points, and methods based on deep learning. Gray-value methods match the image's gray values directly against a template image; they are sensitive to illumination and handle occlusion poorly. Feature-point-based charging port detection extracts image feature points such as SIFT and SURF, iteratively matches the features, and solves the transformation between the matched feature points. Deep-learning-based detection is robust, but it requires a large number of labeled samples for training, and because no network is designed for the special structure of the charging port, its stability is hard to guarantee. In addition, general target detectors have low positioning accuracy and typically output only a simple two-dimensional bounding box; they cannot provide the millimeter-level three-dimensional pose of the charging port, so they cannot be used directly for charging port detection.
Among other prior-art charging port detection and positioning methods, the template matching method in Halcon directly matches the position of a charging hole template in the image, but when the attitude deviation of the charging port is large, the matching error is large. Hough-transform-based ellipse detection maps image space into parameter space; although this approach is insensitive to noise, the ellipse parameterization requires as many as five dimensions, so the amount of computation is excessive.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an electric vehicle charging port detection and positioning method based on depth key point regression. Building on sensing data from a visible-light camera, the invention provides a component detection framework based on a deep neural network that addresses the limitations of ellipse detection: it detects the overall position of the charging port with a visual target detector, regresses the accurate positions of the charging terminals within the charging port with a deep neural network, and inversely solves the accurate pose of the charging port using the arrangement of the charging terminals as a prior constraint. The invention can also build on a binocular visible-light camera, further improving positioning accuracy and robustness by introducing binocular stereo vision constraints.
The technical scheme adopted by the invention comprises the following procedures:
1. Preprocess the input images (monocular or binocular) to remove image noise and enhance image contrast.
2. Train a visual target detector (a general detector such as YOLOv5 may be selected) to detect the charging port and obtain a coarse localization. Based on the coarse localization result, crop the charging port region image out of the overall image (monocular or binocular).
3. Input the charging port region image patch into the charging terminal key point regression network to accurately regress the fine coordinates of each charging terminal in the image, then jointly optimize the fine coordinates with the sensor parameters (camera intrinsics and extrinsics) and the charging terminal geometric prior to accurately position the charging terminals.
4. According to the camera projection constraint (or the binocular stereo vision constraint), inversely solve the accurate three-dimensional pose of the charging port from the accurate charging terminal positioning result.
5. Register and correct the inversely solved three-dimensional key point cloud against the real charging port three-dimensional key point cloud to obtain the final three-dimensional pose of the charging port.
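The five procedures above can be sketched end to end as follows. This is an illustrative sketch only, not the patented implementation: the detector and key point regressor are replaced by stubs, and the preprocessing is a simple min-max contrast stretch.

```python
import numpy as np

def preprocess(img):
    """Step 1 sketch: contrast stretch to [0, 1] (a real system would also denoise)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9)

def coarse_detect(img):
    """Step 2 stub: a trained detector (e.g. YOLOv5) would return a bounding box.
    Here we pretend the charging port fills the central half of the image."""
    h, w = img.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)   # (x, y, box_w, box_h)

def crop(img, box):
    x, y, bw, bh = box
    return img[y:y + bh, x:x + bw]

def regress_keypoints(patch):
    """Step 3 stub: the key point network would output 9 (x, y) terminal centers
    in patch coordinates; here we simply return the patch center 9 times."""
    h, w = patch.shape[:2]
    return np.tile([w / 2.0, h / 2.0], (9, 1))

def pipeline(img):
    img = preprocess(img)                     # step 1
    box = coarse_detect(img)                  # step 2
    patch = crop(img, box)
    kps = regress_keypoints(patch)            # step 3
    # steps 4-5 (3-D inverse solution and registration) build on these
    # key points and are described in the later sections of the description
    return box, kps

box, kps = pipeline(np.zeros((480, 640)))
```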
The invention, with the above structure, has the following beneficial effects. Aiming at the low detection rate, low positioning precision, and weak robustness of current charging port detection technology, the scheme provides a two-stage electric vehicle charging port detection and positioning method based on a deep neural network and geometric constraints: 1) a charging port coarse positioning method based on a visual target detector, which detects the charging port image patch with a deep convolutional neural network; 2) a charging terminal key point detection model based on a deep neural network, which obtains the center point position of each charging port terminal and then corrects and optimizes it with the geometric prior of the terminals within the charging port; 3) an inverse solution of the accurate three-dimensional pose of the charging port from the sensor parameters (camera intrinsics and extrinsics) and the charging terminal detection result, where an even more accurate pose can be inversely solved from the parameters of a binocular sensor (intrinsics, extrinsics, baseline, and the like); and 4) registration and correction of the inversely solved three-dimensional key point cloud against the real charging port three-dimensional key point cloud, to further improve positioning precision. The invention aims to improve charging port detection performance in terms of detection rate, positioning accuracy, and robustness.
Drawings
FIG. 1 is a flow chart of charging port detection and positioning;
FIG. 2 is a diagram of the YOLOv5 network architecture in an embodiment;
FIG. 3 is a diagram illustrating a charging port detection result in an embodiment;
FIG. 4 is a key point detection model architecture diagram;
FIG. 5 is a labeled sample from the data set.
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments; all other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiment of electric vehicle charging port detection and positioning method based on depth key point regression
1. YOLOv5 is used as the base visual object detection model. The overall model is divided into four parts — the input end, the backbone network (Backbone), the information fusion network (Neck), and the object detection head (Prediction) — as shown in Fig. 2:
1) Image input end: performs data augmentation, adaptive anchor box computation, adaptive image scaling, and similar operations;
2) Backbone network (Backbone): a convolutional neural network that uses various downsampling structures to preserve as much information from the original image as possible and reduce information loss;
3) Information fusion network (Neck): an information fusion layer with parallel downsampling and upsampling paths;
4) Object detection head (Prediction): a multi-scale, anchor-box-based detection head.
The detection algorithm is trained on the labeled charging port image data set (750 training images and 50 test images); partial detection results are shown in Fig. 3.
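Before the cropped patch is handed to the key point network, the coarse detection box is usually padded so that the complete charging port survives the crop. The following sketch shows one way to do this; the 15% margin and the (x, y, w, h) box convention are illustrative assumptions, not values from the patent.

```python
import numpy as np

def crop_with_margin(img, box, margin=0.15):
    """Expand a detector box (x, y, w, h) by a relative margin and clip to the
    image bounds, so the cropped patch fully contains the charging port."""
    x, y, w, h = box
    H, W = img.shape[:2]
    mx, my = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - mx), max(0, y - my)
    x1, y1 = min(W, x + w + mx), min(H, y + h + my)
    # return the patch plus its offset, needed later to map key points
    # from patch coordinates back into full-image coordinates
    return img[y0:y1, x0:x1], (x0, y0)

img = np.zeros((480, 640), dtype=np.uint8)
patch, offset = crop_with_margin(img, (100, 100, 200, 100))
```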
2. Charging terminal positioning method based on deep neural network key point regression
Other current ellipse-detection-based methods for positioning the charging port terminals (i.e., the charging port terminal pile heads shown in Fig. 4) first detect the elliptical edge features of the terminals and then determine the position of the ellipse center from the detected edges. This approach has two defects: 1) a charging port terminal usually comprises two concentric circles, and the detection algorithm cannot robustly distinguish whether an edge feature was generated by the inner or the outer circle, making the algorithm unstable; 2) current methods use only the geometric features of the individual terminals and ignore the geometric constraints between terminals.
The invention provides a terminal positioning method based on deep neural network key point regression. It directly predicts the center point of each charging port terminal, giving a definite semantic correspondence, and models the geometric constraints among the terminals with a pre-built charging port terminal shape model, greatly improving the accuracy and robustness of the detection and positioning algorithm. The method comprises two stages: 1) definition, annotation, and training of the charging port terminal key points; 2) prediction of the terminal key points in the image with the trained neural network model.
1. Charging port terminal definition and data annotation
In the invention, the charging terminal key points are defined on the charging port terminal pile heads shown in Fig. 4: each key point is the center of the concentric circles on which the terminal lies, and the key points are named by the terminal numbers shown in Fig. 4, so that the numbers 1 to 9 serve as the semantic label of each key point.
The invention annotates the charging port terminals in the training image data set in this semantic manner; each annotation comprises:
1) The x, y coordinates of the key point in the image, whose value space runs from zero to the image resolution;
2) The semantic label c of the terminal, whose value space is 0 to 8.
Each frame of collected image together with its corresponding annotation information forms a training sample, and all training samples together form the model training data set.
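A training sample as described above could be represented as follows. The field names and the dictionary layout are illustrative assumptions; only the value spaces (coordinates within the image resolution, labels 0 to 8) come from the text.

```python
# A training sample pairs one image with its 9 terminal annotations: each
# annotation carries the key point's pixel coordinates (x, y) and its
# semantic label c in {0, ..., 8}.
def validate_sample(sample, img_w, img_h):
    """Check an annotation against the value spaces defined above."""
    assert len(sample["keypoints"]) == 9
    labels = set()
    for kp in sample["keypoints"]:
        assert 0 <= kp["x"] <= img_w and 0 <= kp["y"] <= img_h
        assert kp["c"] in range(9)
        labels.add(kp["c"])
    assert labels == set(range(9))  # every terminal labeled exactly once
    return True

sample = {"image": "frame_0001.png",  # hypothetical file name
          "keypoints": [{"x": 100 + 10 * c, "y": 200, "c": c} for c in range(9)]}
ok = validate_sample(sample, img_w=640, img_h=480)
```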
2. Key point regression model based on deep neural network
The invention adopts a deep-learning-based key point detection algorithm. To balance detection accuracy and speed, MobileNetV2 is used as the backbone network, and features at three different scales are fused to increase the expressive capability of the model.
The network takes an image as input and outputs the position coordinates of the 9 charging terminal key points in the two-dimensional image; the 9 two-dimensional coordinates form an 18-dimensional output vector, which is normalized with respect to the charging port center so that the predicted output lies in the range [-1, +1], keeping the neural network in its best-conditioned operating regime. In addition, a sub-network is introduced during training to supervise the training of the network model. The sub-network acts only in the training phase and does not participate in inference: it estimates the three-dimensional Euler angles of each input charging port image sample, with training ground truth estimated from the key point information in the training data. Its purpose is to supervise and assist training convergence; it mainly serves the key point detection network and is accurate enough to act as a basis for distinguishing the data distribution.
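The normalization of the 18-dimensional output vector against the charging port center can be made concrete as below. Normalizing by the patch half-extent is an assumed realization of the [-1, +1] range described above; the patent does not fix the exact scaling.

```python
import numpy as np

def normalize_keypoints(kps_px, center, half_extent):
    """Map 9 pixel-space key points into [-1, +1] relative to the charging
    port center, producing the 18-D regression target."""
    return ((kps_px - center) / half_extent).reshape(-1)   # shape (18,)

def denormalize_keypoints(vec18, center, half_extent):
    """Invert the mapping to recover pixel coordinates from a prediction."""
    return vec18.reshape(9, 2) * half_extent + center

center = np.array([160.0, 120.0])        # port center in the cropped patch (example)
half_extent = np.array([160.0, 120.0])   # half the patch size (example)
kps = np.array([[160.0 + 16 * i, 120.0] for i in range(9)])
target = normalize_keypoints(kps, center, half_extent)
restored = denormalize_keypoints(target, center, half_extent)
```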
3. The method for training and predicting key points based on the deep neural network comprises the following steps:
The invention trains the neural network by stochastic-gradient error Back Propagation (BP). The optimizer is Adam; the loss function is the L1 loss, which is more robust to noise; and a variable learning rate is used, adapted over the iteration periods (epochs) according to the evolution of the training process. Iterative training can be stopped when the rate of change of the average regression error over two consecutive epochs falls roughly three orders of magnitude below the actual error value. Once the key point regression model is trained, the charging port image detected and cropped in the previous stage can be used as input, and the model outputs the two-dimensional coordinates of the 9 charging terminals in the image.
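The training recipe named above (stochastic-gradient BP, Adam optimizer, L1 loss) can be demonstrated on a toy stand-in for the regression task. The linear model and all hyperparameter values below are illustrative assumptions; the real model is the MobileNetV2-based network described earlier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: predict an 18-D key point vector from an 8-D feature
# vector with a linear model, trained with Adam on the L1 loss.
X = rng.normal(size=(256, 8))
W_true = rng.normal(size=(8, 18))
Y = X @ W_true

W = np.zeros((8, 18))
m = np.zeros_like(W); v = np.zeros_like(W)        # Adam first/second moments
lr, b1, b2, eps = 1e-2, 0.9, 0.999, 1e-8

for t in range(1, 2001):
    pred = X @ W
    # L1 loss gradient is sign(residual) — this bounded gradient is what
    # gives the L1 loss its robustness to noisy labels
    grad = X.T @ np.sign(pred - Y) / len(X)
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t); v_hat = v / (1 - b2 ** t)
    W -= lr * m_hat / (np.sqrt(v_hat) + eps)

final_l1 = np.abs(X @ W - Y).mean()
```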
3. Three-dimensional key point estimation method using sensor parameters
1. Three-dimensional key point estimation method based on monocular sensor parameters
Using the pre-calibrated sensor parameters and taking the two-dimensional coordinates of the 9 key points {x_1, y_1, x_2, y_2, ..., x_9, y_9} as input, the three-dimensional coordinates of the 9 key points {X_1, Y_1, Z_1, X_2, Y_2, Z_2, ..., X_9, Y_9, Z_9} are solved as follows:
Z_i = f_x / d_i
Y_i = (y_i - c_y) * Z_i / f_y
X_i = (x_i - c_x) * Z_i / f_x
where f_x, f_y, c_x, c_y are the camera intrinsics, obtained by pre-calibration and kept unchanged during detection, and d_i is the pixel distance from key point i to the center point.
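The three back-projection formulas can be sketched directly. Note that, as written in the text, Z_i = f_x / d_i ties the depth unit to the key-point-to-center pixel distance; the intrinsics and key point coordinates below are made-up example numbers.

```python
import numpy as np

def backproject_mono(kps_px, center_px, intrinsics):
    """Recover 3-D key points from the formulas above:
    Z_i = f_x / d_i,  X_i = (x_i - c_x) * Z_i / f_x,  Y_i = (y_i - c_y) * Z_i / f_y,
    with d_i the pixel distance from key point i to the center point."""
    fx, fy, cx, cy = intrinsics
    d = np.linalg.norm(kps_px - center_px, axis=1)   # pixel distance d_i
    Z = fx / d
    X = (kps_px[:, 0] - cx) * Z / fx
    Y = (kps_px[:, 1] - cy) * Z / fy
    return np.stack([X, Y, Z], axis=1)

intr = (800.0, 800.0, 320.0, 240.0)                  # fx, fy, cx, cy (example values)
kps = np.array([[340.0, 240.0], [320.0, 260.0]])     # two key points for illustration
pts3d = backproject_mono(kps, center_px=np.array([320.0, 240.0]), intrinsics=intr)
```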
2. Three-dimensional key point estimation method based on binocular epipolar geometric constraint
The pixel coordinates of the charging port key points in the two binocular images are obtained separately through the neural network. Let the charging terminal key point coordinates in the left image be (x_i^l, y_i^l) and those in the right image be (x_i^r, y_i^r). According to the principle of binocular stereo geometry, the three-dimensional coordinates of the 9 key points {X_1, Y_1, Z_1, X_2, Y_2, Z_2, ..., X_9, Y_9, Z_9} can be calculated as follows:
Z_i = (f_x * b) / d_i
where f_x, f_y, c_x, c_y are the intrinsics of the left camera, d_i = x_i^l - x_i^r is the disparity of key point i between the left and right views, and b is the baseline length. The sensor parameters are computed by pre-calibrating the binocular camera and remain unchanged during detection; X_i and Y_i then follow from Z_i by the back-projection formulas of the monocular case.
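The stereo depth formula can likewise be sketched. The intrinsics, baseline, and key point coordinates below are made-up example values; recovering X_i and Y_i by back-projecting the left-image coordinates is the standard rectified-stereo convention and an assumption here.

```python
import numpy as np

def triangulate_stereo(kps_left, kps_right, intrinsics, baseline):
    """Depth from disparity: Z_i = f_x * b / d_i, with d_i = x_i^l - x_i^r,
    then X_i, Y_i by back-projecting the left-image coordinates."""
    fx, fy, cx, cy = intrinsics
    d = kps_left[:, 0] - kps_right[:, 0]     # disparity per key point
    Z = fx * baseline / d
    X = (kps_left[:, 0] - cx) * Z / fx
    Y = (kps_left[:, 1] - cy) * Z / fy
    return np.stack([X, Y, Z], axis=1)

intr = (800.0, 800.0, 320.0, 240.0)          # left-camera intrinsics (example)
b = 0.12                                     # baseline in meters (example)
left = np.array([[400.0, 240.0]])
right = np.array([[352.0, 240.0]])
pt = triangulate_stereo(left, right, intr, b)
```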
4. Charging port three-dimensional pose estimation based on the charging port three-dimensional prior:
Because image detection still contains errors caused by noise, environmental interference, and image pixel quantization, and because these errors propagate into the computed three-dimensional coordinates of the component center points, the errors must be corrected with the three-dimensional template of the charging port, from which the pose from the charging port three-dimensional template to the computed center-point coordinates is then calculated.
The invention models the three-dimensional prior of the charging port with the real charging port terminal three-dimensional key point cloud. In the solving process, the RANdom SAmple Consensus (RANSAC) algorithm is used to match corresponding points between the charging terminal three-dimensional key point cloud obtained in the previous step and the real charging port terminal three-dimensional key point cloud; the pose matrix [R, t] is computed, and the three-dimensional pose of the charging port is thereby obtained.
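Inside a RANSAC loop, each hypothesis requires a closed-form rigid alignment between sampled correspondences. The sketch below shows only that inner solve (the Kabsch/SVD method) on a full synthetic 9-point cloud, assuming the semantic key point labels already give the correspondences; the RANSAC sampling and inlier scoring around it are omitted.

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form least-squares [R, t] with dst ~ R @ src + t (Kabsch/SVD).
    Inside RANSAC this would run on sampled correspondences and be scored
    by inlier count; here correspondences are taken as known."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                 # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    # fix the sign so R is a proper rotation, not a reflection
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known pose from a synthetic 9-point template
rng = np.random.default_rng(1)
template = rng.normal(size=(9, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 1.5])
measured = template @ R_true.T + t_true
R, t = rigid_align(template, measured)
```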
5. Effect analysis of the invention
1. Charging port coarse positioning experimental result and analysis
In order to test the accuracy and robustness of the visual-target-detector-based charging port coarse positioning method, 100 pictures of the charging port under different poses and different illumination backgrounds were collected as a test set, and the accuracy of an edge-based detection method and of the proposed method was compared on this verification set. A detection counts as accurate when the detected charging port position contains a complete charging port image patch. The experimental results are shown in Table 1.
TABLE 1 comparison of charging port detection accuracy and false detection rate
As can be seen from Table 1, the detection accuracy of the proposed method on the verification set is superior to that of the edge-based method, and its false detection rate is lower, giving it higher practical application value.
2. Charging port fine positioning experimental result and analysis
(1) Data set construction
In order to train the charging port shape model and the center point regression model, 400 pictures of the charging port under different poses and illumination backgrounds were collected and annotated as the experimental data set, with 300 training images and 100 test images. The data set labels contain the number and center point position of each charging hole; to reduce the annotation error of the center point, a circle is annotated first and its center is then taken as the center point. A labeled sample is shown in Fig. 5.
(2) Evaluation method
To evaluate the effectiveness of the proposed method, a Mean Pixel Error (MPE) indicator is used to evaluate its performance. The MPE is calculated as follows:
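The MPE formula itself did not survive extraction here. The sketch below assumes the common definition — the mean Euclidean distance in pixels between predicted and ground-truth center points — which is consistent with the pixel-valued errors reported in Tables 2 and 3.

```python
import numpy as np

def mean_pixel_error(pred, gt):
    """Assumed MPE: average Euclidean pixel distance over all key points in
    all test images. pred, gt: arrays of shape (n_images, 9, 2)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

gt = np.zeros((2, 9, 2))
pred = gt.copy()
pred[..., 0] += 1.0          # shift every prediction by 1 px in x
mpe = mean_pixel_error(pred, gt)
```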
(3) Results and analysis of the experiments
The average pixel error of the proposed method is tested on the test set. To verify the effectiveness of the center-point-based method, the prediction error of the proposed method at each center point position is tested first; the test results are shown in Table 2.
Table 2 presents the prediction error of the method for each charging hole center
As can be seen from table 2, the error of the center point position predicted by the method is about 1 pixel, and meets the requirement of the positioning accuracy of the charging port.
To further verify the superiority of the proposed method, the present invention was compared with the currently popular ellipse detection-based method, and the comparison results are shown in table 3.
Table 3 compares with ellipse detection based methods
Method | Test data count | MPE |
---|---|---|
Ellipse-detection-based method | 78 | 1.28 |
Proposed method | 100 | 1.08 |
As can be seen from Table 3, the center point prediction error of the proposed method is lower than that of the ellipse-detection-based method (0.16), indicating higher positioning accuracy. It is worth noting that the proposed method accurately and stably predicts the center point positions on all 100 images of the test set, whereas the ellipse-detection-based method detects an acceptable ellipse on only 78 images and fails on the remaining 22, which fully demonstrates that the robustness of the proposed method is superior to that of ellipse detection.
3. Three-dimensional pose estimation experiment and analysis of charging port
According to the invention, the 400 collected and annotated samples of the charging port under different poses and illumination backgrounds serve as the experimental data set, with 300 training samples and 100 test samples. The camera is mounted at the end of a high-precision robotic arm, and the accurate relative pose between the camera and the charging port for the 400 sample groups is derived from the known position of the charging port.
With the proposed three-dimensional pose estimation method, more than 95% of the test samples achieve a spatial accuracy below 1 mm and an angular accuracy below 3 degrees in the pose test, which is superior to current comparable algorithms.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
The present invention and its embodiments have been described above, and the description is not intended to be limiting, and the drawings show only one embodiment of the present invention, and the actual structure is not limited thereto. In summary, those skilled in the art should appreciate that they can readily use the disclosed conception and specific embodiments as a basis for designing or modifying other structures for carrying out the same purposes of the present invention without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (1)
1. The electric vehicle charging port detection and positioning method based on depth key point regression is characterized by comprising the following steps:
1. preprocessing the input images (monocular or binocular) to remove image noise and enhance image contrast;
2. training a visual target detector (a general-purpose detector such as YOLOv5 may be used) to detect and coarsely locate the charging port, and cropping the charging port region from the full image (monocular or binocular) according to the coarse localization result;
3. feeding the charging port region image patches into a charging-terminal key point regression network to regress the precise pixel coordinates of each charging terminal, and then jointly optimizing these coordinates with the sensor parameters (camera intrinsics and extrinsics) and a geometric prior of the charging terminals to precisely locate the terminals;
4. recovering the precise three-dimensional pose of the charging port from the precisely located charging terminals according to the camera projection constraint (or the binocular stereo vision constraint);
5. registering and correcting the recovered three-dimensional key point cloud against the reference three-dimensional key point cloud of the real charging port to obtain the final three-dimensional pose of the charging port.
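The claim does not name the preprocessing operators used in step 1; as a minimal sketch, a mean filter for denoising and global histogram equalization for contrast enhancement (both hypothetical choices, not the patent's stated method) could look like:

```python
import numpy as np

def denoise_mean(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Simple k x k mean filter; borders are handled by edge padding."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(np.uint8)

def equalize_hist(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

In practice an adaptive method (e.g. CLAHE) or an edge-preserving denoiser would likely serve better; the point is only that both monocular and binocular inputs pass through the same normalization before detection.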
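The key point regression network of step 3 is not specified in the claim. One common way such networks produce "fine coordinates" is a soft-argmax decode of a predicted heatmap, which gives differentiable sub-pixel key point positions; the sketch below assumes the heatmap comes from some upstream network and is purely illustrative:

```python
import numpy as np

def soft_argmax(heatmap: np.ndarray, beta: float = 10.0):
    """Sub-pixel key point decode: softmax-weighted mean of pixel coordinates.

    A sharper beta concentrates the weighting near the heatmap peak.
    Returns (x, y) in pixel units of the input heatmap.
    """
    h, w = heatmap.shape
    p = np.exp(beta * (heatmap - heatmap.max()))  # stable softmax weights
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((p * xs).sum()), float((p * ys).sum())
```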
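For step 4 under the binocular stereo vision constraint, each charging terminal's 3-D position can be recovered from its pixel observations in the two views by linear (DLT) triangulation. The projection matrices below are illustrative stand-ins for the calibrated intrinsics and extrinsics, not values from the patent:

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, uv1, uv2) -> np.ndarray:
    """DLT triangulation of one 3-D point from a stereo correspondence.

    P1, P2 : 3x4 camera projection matrices (K [R | t]).
    uv1, uv2 : pixel coordinates of the same key point in each view.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X: u * (P row 3) - (P row 1) etc.
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]               # null vector of A (least-squares solution)
    return X[:3] / X[3]      # dehomogenize
```

Triangulating all charging terminals this way yields the three-dimensional key point cloud that step 5 then registers against the reference model. In the monocular case a PnP solve against the terminal geometric prior would play the analogous role.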
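The registration of step 5 is not pinned to a particular algorithm in the claim. Because the recovered key points and the reference charging port key points are in known one-to-one correspondence, a closed-form Kabsch/SVD rigid alignment is one plausible realization (no iterative nearest-neighbor search is needed):

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) with dst ~= R @ src + t (Kabsch).

    src, dst : (N, 3) arrays of corresponding 3-D key points.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Applying the recovered (R, t) to the reference model gives the corrected final three-dimensional pose of the charging port.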
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211112854.XA CN115457130A (en) | 2022-09-14 | 2022-09-14 | Electric vehicle charging port detection and positioning method based on depth key point regression |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211112854.XA CN115457130A (en) | 2022-09-14 | 2022-09-14 | Electric vehicle charging port detection and positioning method based on depth key point regression |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115457130A true CN115457130A (en) | 2022-12-09 |
Family
ID=84303141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211112854.XA Pending CN115457130A (en) | 2022-09-14 | 2022-09-14 | Electric vehicle charging port detection and positioning method based on depth key point regression |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115457130A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116091498A (en) * | 2023-04-07 | 2023-05-09 | 飞杨电源技术(深圳)有限公司 | Visual defect detection method for intelligent charger of lead-acid storage battery |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111563442B (en) | Slam method and system for fusing point cloud and camera image data based on laser radar | |
CN112734852B (en) | Robot mapping method and device and computing equipment | |
CN107063228B (en) | Target attitude calculation method based on binocular vision | |
CN112883820B (en) | Road target 3D detection method and system based on laser radar point cloud | |
CN102236794A (en) | Recognition and pose determination of 3D objects in 3D scenes | |
CN110992424B (en) | Positioning method and system based on binocular vision | |
CN115359021A (en) | Target positioning detection method based on laser radar and camera information fusion | |
CN113393524B (en) | Target pose estimation method combining deep learning and contour point cloud reconstruction | |
CN114325634A (en) | Method for extracting passable area in high-robustness field environment based on laser radar | |
CN112150448B (en) | Image processing method, device and equipment and storage medium | |
CN112634368A (en) | Method and device for generating space and OR graph model of scene target and electronic equipment | |
CN114758504A (en) | Online vehicle overspeed early warning method and system based on filtering correction | |
CN114140539A (en) | Method and device for acquiring position of indoor object | |
CN116573017A (en) | Urban rail train running clearance foreign matter sensing method, system, device and medium | |
CN115457130A (en) | Electric vehicle charging port detection and positioning method based on depth key point regression | |
CN117215316B (en) | Method and system for driving environment perception based on cooperative control and deep learning | |
CN114137564A (en) | Automatic indoor object identification and positioning method and device | |
CN111256651B (en) | Week vehicle distance measuring method and device based on monocular vehicle-mounted camera | |
Ma et al. | Semantic geometric fusion multi-object tracking and lidar odometry in dynamic environment | |
CN116309882A (en) | Tray detection and positioning method and system for unmanned forklift application | |
CN116243329A (en) | High-precision multi-target non-contact ranging method based on laser radar and camera fusion | |
Wang et al. | A binocular vision method for precise hole recognition in satellite assembly systems | |
Sandoval et al. | Robust sphere detection in unorganized 3D point clouds using an efficient Hough voting scheme based on sliding voxels | |
CN116309817A (en) | Tray detection and positioning method based on RGB-D camera | |
Schilling et al. | Mind the gap-a benchmark for dense depth prediction beyond lidar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||