CN112801871B - Image self-adaptive fusion method based on Chebyshev distance discrimination - Google Patents
- Publication number
- CN112801871B (application CN202110130776.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- point
- img
- chebyshev distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention provides an image self-adaptive fusion method based on Chebyshev distance discrimination. After the pixel information of the images is read, the overlapping region is determined with a registration method based on SIFT features. The Chebyshev distance discrimination idea is then applied to the overlapping portion of each camera view to evaluate, preferentially select, and automatically plan a suitable set of imaging points for each lens. Finally, all pixel points are integrated and mapped one by one into the pixel coordinate frame of the new image to be fused, forming the stitched new image. The method makes the fused image smoother and effectively eliminates the influence of splice seams and chromatic aberration under different field-of-view conditions.
Description
Technical Field
The image self-adaptive fusion method based on Chebyshev distance discrimination is suitable for the field of visual perception in automatic driving; it produces good imaging results for multi-scene image fusion and ensures the imaging quality required by image perception tasks.
Background
The invention belongs to the field of image processing, and particularly relates to an image self-adaptive fusion method based on Chebyshev distance discrimination. At present, visual perception is becoming a popular research topic in the field of automatic driving. Introducing vision to a vehicle enables it to perform target detection, target classification, image segmentation and the like on the surrounding environment, effectively improving the safety, stability and intelligence of the vehicle. Because the field of view obtained by a single camera is limited and cannot satisfy perception demands such as target detection, multiple cameras are generally installed on the vehicle. An image stitching algorithm then produces the required global image, which is used to complete the corresponding perception task.
The core of an image stitching algorithm consists of two parts: image registration and image fusion. Research in the image registration field is mature, and conventional algorithms meet the requirements under normal conditions. In the field of image fusion, however, because safety is critical while a vehicle is driving, the requirements on the fused imaging result are extremely high; the splice seam produced by a traditional image fusion algorithm greatly interferes with subsequent perception tasks and cannot meet the image-quality requirements of visual perception tasks.
The conventional image fusion method first performs image registration on IMG1 and IMG2 in fig. 1 to determine the overlapping area of the two images, and then directly takes the weighted average of the pixels of the two images in the corresponding overlapping area. The fusion result has an obvious splice seam, and because of environmental influences such as illumination at the different viewing angles of the vehicle, the method also suffers from obvious chromatic-aberration problems. Splice seams and chromatic aberration seriously affect the final perception result, so the traditional image fusion method is not suitable for visual perception tasks in the automatic driving field.
Disclosure of Invention
The invention aims to solve the problems of splice seams and chromatic aberration in traditional image fusion methods by providing a self-adaptive image fusion method based on Chebyshev distance discrimination.
The technical scheme is as follows: an image self-adaptive fusion method based on Chebyshev distance discrimination comprises the following steps:
image reading: reading pixel information of two images, wherein one image is used as a reference image and the other image is used as a target image;
registering image features: performing feature description on the key feature points of the two images by using the SIFT feature registration method to obtain image feature points; taking the feature points of the reference image as the standard, searching and traversing the feature points extracted from the target image for matches, and determining the overlapping area of the two images;
image feature processing: setting a coordinate frame of an image to be spliced by using a Chebyshev image self-adaptive fusion method, and evaluating and preferentially selecting pixels of an overlapping area;
image feature fusion: integrating all the pixel points and mapping them one by one in the pixel coordinate frame of the new image to be fused to form the stitched new image.
The specific implementation steps are as follows:
step 1, completing registration of images IMG1 and IMG2 by a SIFT image registration method, and determining the pixel point sets of all areas of the synthesized image, wherein IMG_L, the non-overlapping part on the left, is the non-overlapping-region pixel point set of IMG1, and IMG_R, the non-overlapping part on the right, is the non-overlapping-region pixel point set of IMG2;
step 2, defining the pixel information set of IMG1 in the overlapping area as P1, and the pixel information set of the overlapping area after IMG2 is transformed as P2; the pixel information set of the whole image overlapping area is IMG_M = {P1, P2}, and any point in the coordinates of the image overlapping area contains two pixel points, i.e. IMG_Mi = (P_1i, P_2i), i = 1, 2, ..., n;
step 3, obtaining the pixel mean values P̄1 and P̄2 of P1 and P2, and from them calculating the pixel mean value P̄ of the image overlapping area;
step 4, introducing the Chebyshev distance discrimination idea to calculate, for each IMG_Mi, the Chebyshev distances of P_1i and P_2i from the pixel mean value of the overlapping area; these distances S_1i and S_2i are the similarity measures;
step 5, comparing the similarity measures S_1i and S_2i corresponding to P_1i and P_2i at the same coordinate;
step 6, integrating the pixel sets of the synthesized image to finish the fusion and stitching of the image.
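The similarity measure in step 4 is the Chebyshev (L-infinity) distance. As a minimal illustration with hypothetical pixel values (not part of the patent), the distance between an RGB pixel and a reference mean is the largest absolute per-channel difference:

```python
# Chebyshev (L-infinity) distance between two RGB pixels:
# the maximum absolute difference over the colour channels.
# The channel layout and the sample values are illustrative assumptions.
def chebyshev_distance(pixel, reference):
    return max(abs(int(a) - int(b)) for a, b in zip(pixel, reference))

p = (120, 60, 200)   # hypothetical overlap-region pixel
m = (110, 80, 190)   # hypothetical overlap-region mean
print(chebyshev_distance(p, m))  # -> 20 (the G-channel difference)
```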
As a further optimization scheme of the image self-adaptive fusion method based on Chebyshev distance discrimination, the specific implementation process of image feature matching in step 1 is as follows:
step 11, generating an image Gaussian differential pyramid, and constructing a scale space;
step 12, detecting a spatial extreme point: searching feature points with unchanged scale and rotation in the Gaussian pyramid;
step 13, accurate positioning of stable key points: searching an extreme point by curve fitting;
step 14, distributing stable key point direction information;
step 15, describing key points: using a group of vectors to describe the position, direction and scale of each obtained feature point, together with the information of the feature point and its surrounding neighborhood pixels;
and step 16, finishing the feature point matching of the two images.
As a further optimization scheme of the image self-adaptive fusion method based on Chebyshev distance discrimination, the specific calculation mode of the step 3 is as follows:
step 31, calculating the pixel mean value of P1: P̄1 = (1/n)·Σ P_1i, i = 1, ..., n;
step 32, calculating the pixel mean value of P2: P̄2 = (1/n)·Σ P_2i, i = 1, ..., n;
step 33, calculating the mean value of pixels in the overlapping area: P̄ = (P̄1 + P̄2)/2.
as a further optimization scheme of the image self-adaptive fusion method based on Chebyshev distance discrimination, the step 4 is further as follows:
step 41, obtaining the Chebyshev distance from P_1i in IMG_Mi to the pixel mean value of the overlapping area: S_1i = max_k |P_1i(k) − P̄(k)|, where k indexes the pixel components;
step 42, obtaining the Chebyshev distance from P_2i in IMG_Mi to the pixel mean value of the overlapping area: S_2i = max_k |P_2i(k) − P̄(k)|.
as a further optimization scheme of the image self-adaptive fusion method based on Chebyshev distance discrimination, the step 5 is further as follows:
step 51, if S_1i < S_2i, it is determined that at the overlapped coordinate point IMG_Mi, P_1i is superior to P_2i, i.e. the pixel point of image 1 is automatically selected during image fusion;
step 52, if S_1i > S_2i, it is determined that at the overlapped coordinate point IMG_Mi, P_2i is superior to P_1i, i.e. the pixel point of image 2 is automatically selected during image fusion;
step 53, if S_1i = S_2i, a point in IMG_Mi is randomly selected as the pixel of the fused image at that point.
The beneficial effects are that: compared with the traditional technology, the Chebyshev distance discrimination analysis of the method selects more realistic and globally consistent pixel points on the basis of a unified measurement standard, and automatically selects and plans a suitable set of camera imaging points. The image details of the selected pixel point sets are therefore clearer, the influence of image splice seams and chromatic aberration is effectively eliminated, and the imaging quality required by visual perception tasks is guaranteed.
Drawings
Fig. 1 is a schematic view of an image and an overlapping area thereof.
Fig. 2 is a graph of pixel distribution of an image area to be stitched.
Fig. 3 is a schematic diagram of two original image pixels corresponding to coordinate points in an image overlapping region.
Fig. 4 is a block diagram of the method of the invention.
Fig. 5 is a flow chart of the method of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, the present embodiment operates on two images, IMG1 and IMG2.
As shown in fig. 4, which is a structural diagram of the method of the present invention, the following 4 modules are included:
image reading: reading pixel information of two images, wherein one image is used as a reference image and the other image is used as a target image;
registering image features: performing feature description on the key feature points of the two images by using the SIFT feature registration method to obtain image feature points; taking the feature points of the reference image as the standard, searching and traversing the feature points extracted from the target image for matches, and determining the overlapping area of the two images;
image feature processing: setting a coordinate frame of an image to be spliced by using a Chebyshev image self-adaptive fusion method, and evaluating and preferentially selecting pixels of an overlapping area;
image feature fusion: integrating all the pixel points and mapping them one by one in the pixel coordinate frame of the new image to be fused to form the stitched new image.
The image adaptive fusion method based on Chebyshev distance discrimination of the present embodiment is described in detail below with reference to fig. 2, fig. 3 and fig. 5:
step 1, as shown in fig. 2, registration of the images IMG1 and IMG2 is completed through the SIFT image registration method, and the pixel point sets of all areas of the synthesized image are determined, wherein IMG_L, the non-overlapping part on the left, is the non-overlapping-region pixel point set of IMG1, and IMG_R, the non-overlapping part on the right, is the non-overlapping-region pixel point set of IMG2;
step 2, as shown in fig. 3, the pixel information set of IMG1 in the overlapping region is defined as P1, and the pixel information set of the overlapping region after IMG2 is transformed is defined as P2; the pixel information set of the entire image overlapping region is IMG_M = {P1, P2}, and any point in the coordinates of the image overlapping region contains two pixel points, i.e. IMG_Mi = (P_1i, P_2i), i = 1, 2, ..., n;
step 3, the pixel mean values P̄1 and P̄2 of P1 and P2 are obtained, and the pixel mean value P̄ of the image overlapping region is calculated; the specific calculation method is as follows:
step 31, calculating the pixel mean value of P1: P̄1 = (1/n)·Σ P_1i, i = 1, ..., n;
step 32, calculating the pixel mean value of P2: P̄2 = (1/n)·Σ P_2i, i = 1, ..., n;
step 33, calculating the mean value of pixels in the overlapping area: P̄ = (P̄1 + P̄2)/2.
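Steps 31–33 can be sketched as follows; the (n, 3) array layout (n overlap pixels, three colour channels) and the sample values are assumptions of this illustration, not part of the patent:

```python
import numpy as np

# Per-channel pixel means of the two overlap pixel sets P1 and P2,
# and the combined mean of the whole overlapping area (steps 31-33).
# The (n, 3) layout and the values are illustrative assumptions.
P1 = np.array([[100, 100, 100], [120, 120, 120]], dtype=np.float64)
P2 = np.array([[110, 110, 110], [130, 130, 130]], dtype=np.float64)

mean_p1 = P1.mean(axis=0)                  # step 31: mean of P1
mean_p2 = P2.mean(axis=0)                  # step 32: mean of P2
mean_overlap = (mean_p1 + mean_p2) / 2.0   # step 33: mean of the overlap
print(mean_overlap)  # -> [115. 115. 115.]
```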
step 4, the Chebyshev distance discrimination idea is introduced to calculate, for each IMG_Mi, the Chebyshev distances of P_1i and P_2i from the pixel mean value of the overlapping area; these distances S_1i and S_2i are the similarity measures; the specific steps are as follows:
step 41, obtaining the Chebyshev distance from P_1i in IMG_Mi to the pixel mean value of the overlapping area: S_1i = max_k |P_1i(k) − P̄(k)|, where k indexes the pixel components;
step 42, obtaining the Chebyshev distance from P_2i in IMG_Mi to the pixel mean value of the overlapping area: S_2i = max_k |P_2i(k) − P̄(k)|.
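Steps 41–42 can be sketched in the same array layout (an assumption of this illustration): the Chebyshev distance of each pixel to the overlap mean is the maximum absolute channel difference:

```python
import numpy as np

# Chebyshev distances S1, S2 of every P1 and P2 pixel to the overlap
# mean (steps 41-42). The (n, 3) layout and values are assumptions.
P1 = np.array([[100, 100, 100], [120, 140, 120]], dtype=np.float64)
P2 = np.array([[110, 110, 110], [130, 130, 130]], dtype=np.float64)
mean_overlap = (P1.mean(axis=0) + P2.mean(axis=0)) / 2.0

S1 = np.abs(P1 - mean_overlap).max(axis=1)  # distance of each P1 pixel
S2 = np.abs(P2 - mean_overlap).max(axis=1)  # distance of each P2 pixel
print(S1, S2)  # -> [20. 20.] [10. 15.]
```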
step 5, comparing the similarity measures S_1i and S_2i corresponding to P_1i and P_2i at the same coordinate; the specific steps are as follows:
step 51, if S_1i < S_2i, it is determined that at the overlapped coordinate point IMG_Mi, P_1i is superior to P_2i, i.e. the pixel point of image 1 is automatically selected during image fusion;
step 52, if S_1i > S_2i, it is determined that at the overlapped coordinate point IMG_Mi, P_2i is superior to P_1i, i.e. the pixel point of image 2 is automatically selected during image fusion;
step 53, if S_1i = S_2i, a point in IMG_Mi is randomly selected as the pixel of the fused image at that point;
step 6, the pixel sets of the synthesized image are integrated to finish the fusion and stitching of the image.
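Steps 5 and 6 over the overlapping region can be sketched as a whole. Note that ties (step 53) are resolved here deterministically in favour of image 1 rather than randomly, and the array layout and sample values are assumptions of this illustration:

```python
import numpy as np

# For each overlap coordinate, pick the pixel whose Chebyshev distance
# to the overlap mean is smaller (steps 51-53), then return the fused
# overlap pixel set (step 6).
def fuse_overlap(P1, P2):
    mean_overlap = (P1.mean(axis=0) + P2.mean(axis=0)) / 2.0
    S1 = np.abs(P1 - mean_overlap).max(axis=1)
    S2 = np.abs(P2 - mean_overlap).max(axis=1)
    choose_p1 = S1 <= S2  # ties go to image 1 here, not randomly
    return np.where(choose_p1[:, None], P1, P2)

P1 = np.array([[100, 100, 100], [200, 200, 200]], dtype=np.float64)
P2 = np.array([[110, 110, 110], [120, 120, 120]], dtype=np.float64)
print(fuse_overlap(P1, P2))  # both rows come from P2 (closer to the mean)
```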
Claims (2)
1. An image self-adaptive fusion method based on chebyshev distance discrimination is characterized by comprising the following steps:
reading pixel information of two images, wherein one image is used as a reference image and the other image is used as a target image;
registering the images IMG1 and IMG2 by a SIFT image registration method, and determining the pixel point sets of all areas of the synthesized image, wherein IMG_L, the non-overlapping part on the left, is the non-overlapping-region pixel point set of IMG1, and IMG_R, the non-overlapping part on the right, is the non-overlapping-region pixel point set of IMG2;
defining the pixel information set of IMG1 in the overlapping area as P1 and the pixel information set of the overlapping area after IMG2 transformation as P2, wherein the pixel information set of the whole image overlapping area is IMG_M = {P1, P2}, and any point in the coordinates of the image overlapping area comprises two pixel points, namely IMG_Mi = (P_1i, P_2i), i = 1, 2, ..., n;
obtaining the pixel mean value of P1: P̄1 = (1/n)·Σ P_1i, i = 1, ..., n;
calculating the pixel mean value of P2: P̄2 = (1/n)·Σ P_2i, i = 1, ..., n;
calculating the mean value of pixels in the overlapping area: P̄ = (P̄1 + P̄2)/2;
setting a coordinate frame of the image to be stitched by using the Chebyshev image self-adaptive fusion method, and obtaining the Chebyshev distance from P_1i in IMG_Mi to the pixel mean value of the overlapping area:
S_1i = max_k |P_1i(k) − P̄(k)|;
obtaining the Chebyshev distance from P_2i in IMG_Mi to the pixel mean value of the overlapping area:
S_2i = max_k |P_2i(k) − P̄(k)|; S_1i and S_2i are the similarity measures;
comparing the similarity measures corresponding to P_1i and P_2i at the same coordinate, wherein:
if S_1i < S_2i, it is determined that at the overlapped coordinate point IMG_Mi, P_1i is superior to P_2i, namely the pixel point of image 1 is automatically selected during image fusion;
if S_1i > S_2i, it is determined that at the overlapped coordinate point IMG_Mi, P_2i is superior to P_1i, i.e. the pixel point of image 2 is automatically selected during image fusion;
if S_1i = S_2i, a point in IMG_Mi is randomly selected as the pixel of the fused image at that point; and integrating all the pixel points, and mapping them one by one in the pixel coordinate frame of the new image to be fused to form the stitched new image.
2. The image self-adaptive fusion method based on chebyshev distance discrimination according to claim 1, wherein the specific implementation process of matching image features when registering the images IMG1 and IMG2 by the SIFT image registration method is as follows:
step 1, generating an image Gaussian differential pyramid, and constructing a scale space;
step 2, detecting a spatial extreme point: searching feature points with unchanged scale and rotation in the Gaussian pyramid;
step 3, accurate positioning of stable key points: searching an extreme point by curve fitting;
step 4, distributing stable key point direction information;
step 5, describing key points: using a group of vectors to describe the position, direction and scale of each obtained feature point, together with the information of the feature point and its surrounding neighborhood pixels;
and 6, finishing the characteristic point matching of the two images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110130776.5A CN112801871B (en) | 2021-01-29 | 2021-01-29 | Image self-adaptive fusion method based on Chebyshev distance discrimination |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110130776.5A CN112801871B (en) | 2021-01-29 | 2021-01-29 | Image self-adaptive fusion method based on Chebyshev distance discrimination |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112801871A CN112801871A (en) | 2021-05-14 |
CN112801871B true CN112801871B (en) | 2024-04-05 |
Family
ID=75813085
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110130776.5A Active CN112801871B (en) | 2021-01-29 | 2021-01-29 | Image self-adaptive fusion method based on Chebyshev distance discrimination |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112801871B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103279939A (en) * | 2013-04-27 | 2013-09-04 | 北京工业大学 | Image stitching processing system |
US9092691B1 (en) * | 2014-07-18 | 2015-07-28 | Median Technologies | System for computing quantitative biomarkers of texture features in tomographic images |
CN108460724A (en) * | 2018-02-05 | 2018-08-28 | 湖北工业大学 | The Adaptive image fusion method and system differentiated based on mahalanobis distance |
CN109544498A (en) * | 2018-11-29 | 2019-03-29 | 燕山大学 | A kind of image adaptive fusion method |
CN111272428A (en) * | 2020-02-17 | 2020-06-12 | 济南大学 | Rolling bearing fault diagnosis method based on improved Chebyshev distance |
WO2020119144A1 (en) * | 2018-12-10 | 2020-06-18 | 厦门市美亚柏科信息股份有限公司 | Image similarity calculation method and device, and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111148460B (en) * | 2017-08-14 | 2022-05-03 | 奥普托斯股份有限公司 | Retinal location tracking |
-
2021
- 2021-01-29 CN CN202110130776.5A patent/CN112801871B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103279939A (en) * | 2013-04-27 | 2013-09-04 | 北京工业大学 | Image stitching processing system |
US9092691B1 (en) * | 2014-07-18 | 2015-07-28 | Median Technologies | System for computing quantitative biomarkers of texture features in tomographic images |
CN108460724A (en) * | 2018-02-05 | 2018-08-28 | 湖北工业大学 | The Adaptive image fusion method and system differentiated based on mahalanobis distance |
CN109544498A (en) * | 2018-11-29 | 2019-03-29 | 燕山大学 | A kind of image adaptive fusion method |
WO2020119144A1 (en) * | 2018-12-10 | 2020-06-18 | 厦门市美亚柏科信息股份有限公司 | Image similarity calculation method and device, and storage medium |
CN111272428A (en) * | 2020-02-17 | 2020-06-12 | 济南大学 | Rolling bearing fault diagnosis method based on improved Chebyshev distance |
Non-Patent Citations (6)
Title |
---|
Region-based image fusion using a combinatory Chebyshev-ICA method; Omar Z, Mitianoudis N, Stathaki T; IEEE International Conference on Acoustics; 1213-1216 *
Comparison of similarity measures for SIFT feature descriptors under different distance metrics; Yang Fan; Guo Jianhua; Tan Hai; Lei Bing; Remote Sensing Information (01); 104-108 *
SAR image matching based on SIFT features; Liu Jie; Ship Science and Technology (06); 166-168 *
A fused-image evaluation index based on weighted Chebyshev distance; Yang Sa; Peng Zhefang; Computer Applications and Software (08); 47-49 *
A remote sensing image matching method based on improved SIFT; Hu Wenchao; Zhou Wei; Guan Jian; Electronics Optics & Control; 24(05); 36-39 *
Improvements to feature point detection and matching in incremental SFM; Zhao Yunhao; He Saixian; Laser Journal; 41(03); 59-66 *
Also Published As
Publication number | Publication date |
---|---|
CN112801871A (en) | 2021-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111462135B (en) | Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation | |
Fan et al. | Rethinking road surface 3-d reconstruction and pothole detection: From perspective transformation to disparity map segmentation | |
CN109615611B (en) | Inspection image-based insulator self-explosion defect detection method | |
CN109034047A (en) | A kind of method for detecting lane lines and device | |
CN110569704A (en) | Multi-strategy self-adaptive lane line detection method based on stereoscopic vision | |
CN108171247B (en) | Vehicle re-identification method and system | |
CN105809626A (en) | Self-adaption light compensation video image splicing method | |
CN112180373A (en) | Multi-sensor fusion intelligent parking system and method | |
CN109544635B (en) | Camera automatic calibration method based on enumeration heuristic | |
CN109697696B (en) | Benefit blind method for panoramic video | |
JP5188429B2 (en) | Environment recognition device | |
CN111768332A (en) | Splicing method of vehicle-mounted all-around real-time 3D panoramic image and image acquisition device | |
CN114372919B (en) | Method and system for splicing panoramic all-around images of double-trailer train | |
CN108460724B (en) | Adaptive image fusion method and system based on Mahalanobis distance discrimination | |
CN110120012A (en) | The video-splicing method that sync key frame based on binocular camera extracts | |
CN112801871B (en) | Image self-adaptive fusion method based on Chebyshev distance discrimination | |
CN111860270B (en) | Obstacle detection method and device based on fisheye camera | |
CN112001954B (en) | Underwater PCA-SIFT image matching method based on polar curve constraint | |
Zhu et al. | Advanced driver assistance system based on machine vision | |
CN103903269B (en) | The description method and system of ball machine monitor video | |
CN114926332A (en) | Unmanned aerial vehicle panoramic image splicing method based on unmanned aerial vehicle mother vehicle | |
CN112800890B (en) | Road obstacle detection method based on surface normal vector | |
Kitt et al. | Trinocular optical flow estimation for intelligent vehicle applications | |
Zhu et al. | A Pose Estimation Method in Dynamic Scene with Yolov5, Mask R-CNN and ORB-SLAM2 | |
CN113284181A (en) | Scene map point and image frame matching method in environment modeling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |