CN113506218A - 360-degree video splicing method for multi-compartment ultra-long vehicle type - Google Patents


Info

Publication number
CN113506218A
Authority
CN
China
Prior art keywords
image
processing
noise reduction
pixel points
images
Prior art date
Legal status
Granted
Application number
CN202110781027.9A
Other languages
Chinese (zh)
Other versions
CN113506218B (en)
Inventor
金文�
张翟容
吴乐飞
万晴
殷靖蓓
金鸥
Current Assignee
Jiangsu Jinhaixing Navigation Technology Co ltd
Original Assignee
Jiangsu Jinhaixing Navigation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Jinhaixing Navigation Technology Co ltd filed Critical Jiangsu Jinhaixing Navigation Technology Co ltd
Priority to CN202110781027.9A priority Critical patent/CN113506218B/en
Publication of CN113506218A publication Critical patent/CN113506218A/en
Application granted granted Critical
Publication of CN113506218B publication Critical patent/CN113506218B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS — G06 COMPUTING; CALCULATING OR COUNTING — G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T5/80 Geometric correction
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Image registration using feature-based methods
    • G06T7/337 Image registration using feature-based methods involving reference images or patches
    • G06T2200/32 Indexing scheme for image data processing or generation, in general, involving image mosaicing
    • G06T2207/10024 Color image
    • G06T2207/20004 Adaptive image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30264 Parking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a 360-degree video splicing method for a multi-compartment ultra-long vehicle type, which comprises: S1, obtaining image frames shot at the same time in the video streams of a plurality of cameras; S2, respectively performing image correction processing on each image frame to obtain corrected images; S3, respectively determining pixel points to be processed in each corrected image, and then performing adaptive noise reduction on those pixel points to obtain noise-reduced images; S4, performing projection transformation on each noise-reduced image to obtain a projection image; S5, sequentially performing image registration on the projection images corresponding to each pair of adjacent cameras to obtain a registration result, and performing image splicing based on the registration result to obtain a spliced image; and S6, composing the spliced images at consecutive times into an output video. Because adaptive noise reduction is performed only on the pixel points to be processed, the noise reduction speed is effectively increased while more detail information is retained in the noise-reduced images.

Description

360-degree video splicing method for multi-compartment ultra-long vehicle type
Technical Field
The invention relates to the field of video splicing, in particular to a 360-degree video splicing method for a multi-compartment ultra-long vehicle type.
Background
With the rapid development of image and computer vision technologies, more and more of these technologies are applied in the field of automotive electronics. A traditional image-based reversing system installs only one camera at the tail of the car and can cover only a limited area around the tail; the blind areas around the car body and at the head of the car undoubtedly increase hidden dangers for safe driving, and collisions and scrapes easily occur in narrow, congested urban areas and parking lots. To enlarge the driver's field of view, the driver needs to perceive the 360-degree surrounding environment; after a plurality of visual sensors work in cooperation, a complete set of video images around the whole vehicle is formed through video synthesis processing.
Since noise points affect video splicing, image frames generally need to be denoised during the splicing process. However, existing denoising methods usually denoise all pixels directly, and this processing obviously lengthens the delay of the finally output video frames.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a 360 ° video stitching method for a multi-compartment ultralong vehicle type, comprising:
s1, acquiring the image frames shot at the same time in the video streams of a plurality of cameras;
s2, respectively carrying out image correction processing on each image frame to obtain corrected images;
s3, respectively determining to-be-processed pixel points in each corrected image, and then performing adaptive noise reduction processing on the to-be-processed pixel points to obtain noise-reduced images;
s4, performing projection transformation processing on each noise reduction image to obtain a projection image;
s5, image registration processing is carried out on the projection images corresponding to the two adjacent cameras in sequence to obtain a registration result, and image splicing processing is carried out based on the registration result to obtain a spliced image;
and S6, composing the spliced images at consecutive times into an output video.
Preferably, in the plurality of cameras, an overlapping area exists in video frames obtained by two adjacent cameras.
Preferably, the performing image rectification processing on each image frame to obtain a rectified image includes:
and carrying out distortion correction processing on each image frame to obtain a corrected image.
Preferably, the determining the pixel point to be processed in each rectified image includes:
for a pixel point n in the corrected image, the judgment coefficient adidx_n of the pixel point n is calculated by the following formulas:
[formula images for adidx_n not extracted]
wherein k ∈ {1, 2, 3, 4}; cs denotes a preset constant coefficient; f_n and f_m denote the pixel values of pixel points n and m respectively; m denotes a pixel point contained in w_k; the 8-neighborhood of n is denoted {n_1, n_2, …, n_8}; w_1 denotes the set composed of n_4 and n_5 in the 8-neighborhood of n, w_2 the set composed of n_2 and n_7, w_3 the set composed of n_1 and n_8, and w_4 the set composed of n_3 and n_6; the Hessian matrix S_n corresponding to n is given by a further formula image (not extracted); and max denotes taking the maximum value.
adidx_n is compared with a preset comparison threshold; if adidx_n is greater than the comparison threshold, pixel point n is taken as a pixel point to be processed.
Preferably, the performing projective transformation processing on each noise-reduced image to obtain a projection image includes:
and carrying out circular column projection transformation on each noise reduction image to obtain a projection image.
Preferably, the image registration processing is performed on the projection images corresponding to two adjacent cameras, and the obtaining of the registration result includes:
respectively using a feature extraction algorithm to obtain feature points in the two projection images;
matching the feature points in the two projected images to obtain a feature point matching pair;
and taking the matched pairs of the characteristic points as a registration result.
According to the embodiment of the invention, adaptive noise reduction is performed only on the pixel points to be processed, so that the noise reduction speed is effectively increased while more detail information is retained in the noise-reduced image. Existing video splicing methods generally either perform no noise reduction at all or perform noise reduction directly on all pixel points. Without noise reduction, noise affects the subsequent image matching process, so the matching result is not accurate enough; with noise reduction on all pixel points, the required processing time obviously grows, the output video is obtained too late, and the gap between the shooting time of a picture and its output time becomes too large, causing excessive picture delay.
Drawings
The invention is further illustrated by the accompanying drawings, but the embodiments in the drawings do not constitute any limitation of the invention; a person skilled in the art can obtain other drawings from the following drawings without inventive effort.
Fig. 1 is a diagram illustrating an exemplary embodiment of a 360 ° video stitching method for a multi-compartment ultra-long vehicle type according to the present invention.
Fig. 2 is a diagram illustrating an exemplary embodiment of a camera mounting manner of the multi-compartment ultra-long vehicle type according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In one embodiment shown in fig. 1, the present invention provides a 360 ° video stitching method for a multi-compartment ultralong vehicle, comprising:
s1, acquiring the image frames shot at the same time in the video streams of a plurality of cameras;
s2, respectively carrying out image correction processing on each image frame to obtain corrected images;
s3, respectively determining to-be-processed pixel points in each corrected image, and then performing adaptive noise reduction processing on the to-be-processed pixel points to obtain noise-reduced images;
s4, performing projection transformation processing on each noise reduction image to obtain a projection image;
s5, image registration processing is carried out on the projection images corresponding to the two adjacent cameras in sequence to obtain a registration result, and image splicing processing is carried out based on the registration result to obtain a spliced image;
and S6, composing the spliced images at consecutive times into an output video.
Specifically, for example, when the multi-compartment ultra-long vehicle has 4 compartments, as shown in fig. 2, a person skilled in the art may set 10 cameras to acquire image frames: one camera on the outer side of the center of each compartment on each of the two sides, and one camera each at the head and the tail of the vehicle, so as to achieve 360° coverage of the shooting range.
The 10 cameras are respectively the left-1 to left-4 cameras and the right-1 to right-4 cameras, plus a front camera and a rear camera.
In particular, the camera may be a fisheye camera.
Specifically, the image frames at consecutive times are processed to obtain a plurality of spliced images; the spliced images are sorted according to the shooting time of the image frames corresponding to them and used as the video frames in the output video.
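As an illustrative sketch (not part of the patent text), the ordering of stitched frames into an output video described above can be expressed as follows; the `StitchedFrame` type and its field names are assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class StitchedFrame:
    capture_time_ms: int   # shooting time of the source image frames
    pixels: bytes          # stitched image payload (placeholder)

def frames_to_video_sequence(frames):
    """Order stitched frames by the capture time of their source image
    frames; the ordered list is the output video's frame sequence."""
    return sorted(frames, key=lambda f: f.capture_time_ms)

frames = [StitchedFrame(40, b"c"), StitchedFrame(0, b"a"), StitchedFrame(20, b"b")]
video = frames_to_video_sequence(frames)
print([f.capture_time_ms for f in video])  # [0, 20, 40]
```

Sorting by capture time (rather than arrival time) keeps the output video consistent even if per-frame processing finishes out of order.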
Preferably, in the plurality of cameras, an overlapping area exists in video frames obtained by two adjacent cameras.
The overlapping area is mainly set to facilitate the splicing of video frames.
Preferably, the performing image rectification processing on each image frame to obtain a rectified image includes:
and carrying out distortion correction processing on each image frame to obtain a corrected image.
Specifically, the distortion includes radial distortion and tangential distortion. And respectively carrying out radial distortion correction and tangential distortion correction on the image frame, and then fusing the two correction results to obtain a corrected image.
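A minimal sketch of radial-plus-tangential distortion handling, assuming the standard Brown-Conrady model; note the patent itself fuses separate radial and tangential correction results, which may differ from this combined form:

```python
def distort_normalized(x, y, k1, k2, p1, p2):
    """Brown-Conrady model: given undistorted normalized coordinates,
    return where that point appears in the distorted (captured) image.
    A rectifier inverse-maps each output pixel through this function
    and samples the captured frame at the returned location."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2                       # radial term
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)   # + tangential
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the mapping is the identity.
print(distort_normalized(0.1, 0.2, 0.0, 0.0, 0.0, 0.0))  # (0.1, 0.2)
```

With only k1 nonzero, points move radially: `distort_normalized(1.0, 0.0, 0.1, 0, 0, 0)` pushes x out to 1.1.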
Preferably, the determining the pixel point to be processed in each rectified image includes:
for a pixel point n in the corrected image, the judgment coefficient adidx_n of the pixel point n is calculated by the following formulas:
[formula images for adidx_n not extracted]
wherein k ∈ {1, 2, 3, 4}; cs denotes a preset constant coefficient; f_n and f_m denote the pixel values of pixel points n and m respectively; m denotes a pixel point contained in w_k; the 8-neighborhood of n is denoted {n_1, n_2, …, n_8}; w_1 denotes the set composed of n_4 and n_5 in the 8-neighborhood of n, w_2 the set composed of n_2 and n_7, w_3 the set composed of n_1 and n_8, and w_4 the set composed of n_3 and n_6; the Hessian matrix S_n corresponding to n is given by a further formula image (not extracted); and max denotes taking the maximum value.
adidx_n is compared with a preset comparison threshold; if adidx_n is greater than the comparison threshold, pixel point n is taken as a pixel point to be processed.
In the above embodiment of the present invention, when pixel point n is a noise pixel that needs processing, its judgment coefficient is much larger than that of a pixel that does not need processing; if pixel point n is a normal pixel point or an edge/boundary pixel point, the value of adidx_n is very small, so noise pixels can be identified by setting a comparison threshold. At the same time, this prevents boundary pixel points from being mistaken for pixel points to be processed.
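Because the patent's formula images for adidx_n did not survive extraction, the following is only an illustrative stand-in consistent with the behavior described above (large for isolated noise pixels, small for normal and edge pixels); the patent's exact formula differs:

```python
def judgment_coefficient(img, i, j, cs=1.0):
    """Illustrative stand-in for adidx_n: for each of the four
    opposite-neighbour pairs (the role played by w_1..w_4), sum the
    absolute differences to the centre pixel, then take the minimum
    over directions. An isolated noise pixel differs from neighbours
    in every direction, so the minimum stays large; an edge pixel
    matches the pair lying along the edge, so the minimum stays small."""
    f = img[i][j]
    pairs = [                              # opposite-neighbour pairs
        ((i, j - 1), (i, j + 1)),          # horizontal
        ((i - 1, j), (i + 1, j)),          # vertical
        ((i - 1, j - 1), (i + 1, j + 1)),  # main diagonal
        ((i - 1, j + 1), (i + 1, j - 1)),  # anti-diagonal
    ]
    return cs * min(abs(f - img[a][b]) + abs(f - img[c][d])
                    for (a, b), (c, d) in pairs)

noisy = [[10, 10, 10],
         [10, 200, 10],   # isolated bright pixel -> large coefficient
         [10, 10, 10]]
edge = [[10, 10, 10],
        [10, 10, 10],     # centre sits in a flat region beside an edge
        [90, 90, 90]]
print(judgment_coefficient(noisy, 1, 1))  # 380.0
print(judgment_coefficient(edge, 1, 1))   # 0.0
```

Thresholding this value reproduces the selection behaviour: only the noise-like pixel exceeds a modest comparison threshold.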
Preferably, the performing adaptive noise reduction processing on the pixel point to be processed includes:
if adidx_n < thre, pixel point n is denoised by directly averaging over its neighborhood (the formula image was not extracted; per the description below, it is the plain mean):
a_n = (1 / nofun) · Σ_{h ∈ un} f_h
if adidx_n ≥ thre, pixel point n is denoised with a weighted formula (image not extracted) in which each neighborhood pixel receives a coefficient according to its importance. In these formulas, a_n denotes the pixel value of pixel point n after noise reduction; un denotes the set of pixel points in an R×R neighborhood of pixel point n; nofun denotes the number of pixel points contained in un; f_h denotes the pixel value of a pixel point h in un; lod[n, h] denotes the total number of pixels on the line connecting pixel points n and h; g_n and g_h denote the gradient magnitudes of pixel points n and h respectively; and thre denotes a preset judgment threshold. Two further formula images defining the weighting terms were not extracted.
according to this embodiment of the invention, different noise reduction modes are adopted for pixels to be processed with different judgment coefficients, which makes the processing more targeted. If the judgment coefficient of a pixel to be processed is small, the pixel differs little from its surrounding pixels, so noise reduction is performed by directly computing the average value. If the judgment coefficient is large, the pixel differs greatly from its surrounding pixels, and a common noise reduction mode would easily lose boundary information of the image; therefore, a coefficient is obtained for each pixel in the neighborhood according to its importance, and the noise reduction result is obtained from the products of these coefficients and the pixel values. This weighting combines two factors relating a neighborhood pixel to the pixel currently being processed, namely the straight-line distance and the difference in gradient magnitude, so it expresses the importance of different pixels well and improves the accuracy of the noise reduction result while retaining boundary information as much as possible. Meanwhile, the fast averaging mode used for pixels with a small judgment coefficient further shortens the noise reduction time and thus the delay of the output video.
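A hedged sketch of the two-mode adaptive noise reduction just described; the weighted branch only approximates the stated distance-and-gradient weighting (the exact formula images were not extracted), and the weight form is an assumption:

```python
import math

def denoise_pixel(img, i, j, adidx, thre, R=3):
    """Two-mode denoising: below threshold thre, replace the pixel by the
    plain neighbourhood mean (fast mode); at or above it, weight each
    neighbour by closeness (a stand-in for lod[n,h]) and by similarity
    of gradient magnitude, so edge pixels keep more of their detail."""
    r = R // 2
    neigh = [(a, b) for a in range(i - r, i + r + 1)
                    for b in range(j - r, j + r + 1)
                    if 0 <= a < len(img) and 0 <= b < len(img[0])]
    if adidx < thre:
        return sum(img[a][b] for a, b in neigh) / len(neigh)  # fast mean mode
    def grad(a, b):   # crude gradient magnitude with edge clamping
        gx = img[a][min(b + 1, len(img[0]) - 1)] - img[a][max(b - 1, 0)]
        gy = img[min(a + 1, len(img) - 1)][b] - img[max(a - 1, 0)][b]
        return math.hypot(gx, gy)
    g0 = grad(i, j)
    w = [1.0 / (1.0 + math.hypot(a - i, b - j) + abs(grad(a, b) - g0))
         for a, b in neigh]
    return sum(wk * img[a][b] for wk, (a, b) in zip(w, neigh)) / sum(w)

img = [[10, 10, 10], [10, 100, 10], [10, 10, 10]]
print(denoise_pixel(img, 1, 1, adidx=5, thre=10))  # mean mode: 20.0
```

The weighted branch always returns a value strictly between the neighbourhood's minimum and maximum, since all weights are positive.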
Preferably, the performing projective transformation processing on each noise-reduced image to obtain a projection image includes:
and carrying out circular column projection transformation on each noise reduction image to obtain a projection image.
Specifically, besides the cylindrical projection transformation, the projection scheme may be a cube projection, an SSP projection, a TSP projection, or the like.
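A minimal sketch of the standard cylindrical (backward) mapping, assuming a pinhole model with focal length f; the patent does not specify its exact transform or parameters:

```python
import math

def cylindrical_map(x, y, W, H, f):
    """Map a pixel (x, y) of the W x H cylindrical output image back to
    the source image: an inverse warp evaluates this function for every
    output pixel and samples the source image at the returned point."""
    xc, yc = W / 2.0, H / 2.0
    theta = (x - xc) / f            # angle around the cylinder axis
    h = (y - yc) / f                # height on the cylinder
    x_src = f * math.tan(theta) + xc
    y_src = h * f / math.cos(theta) + yc
    return x_src, y_src

# The image centre maps to itself (theta = 0, h = 0).
print(cylindrical_map(320, 240, 640, 480, 500.0))  # (320.0, 240.0)
```

Projecting every camera's frame onto a common cylinder makes adjacent views differ by (approximately) a pure horizontal shift, which simplifies the later registration step.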
Preferably, the image registration processing is performed on the projection images corresponding to two adjacent cameras, and the obtaining of the registration result includes:
respectively using a feature extraction algorithm to obtain feature points in the two projection images;
matching the feature points in the two projected images to obtain a feature point matching pair;
and taking the matched pairs of the characteristic points as a registration result.
Specifically, feature extraction algorithms include the Harris algorithm, the SUSAN algorithm, and the like.
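As one concrete option for the feature-extraction step, a minimal Harris corner response (the patent names Harris and SUSAN among usable algorithms); the 3x3 window and the constant k = 0.04 are common default choices, not values taken from the patent:

```python
def harris_response(img, i, j, k=0.04):
    """Harris response R = det(M) - k * trace(M)^2 at pixel (i, j),
    where M is the structure tensor built from central-difference
    gradients summed over the surrounding 3x3 window. R is near zero
    on flat patches, negative on edges, positive at corners."""
    sxx = sxy = syy = 0.0
    for a in range(i - 1, i + 2):
        for b in range(j - 1, j + 2):
            ix = (img[a][b + 1] - img[a][b - 1]) / 2.0   # horizontal gradient
            iy = (img[a + 1][b] - img[a - 1][b]) / 2.0   # vertical gradient
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

flat = [[5] * 5 for _ in range(5)]
print(harris_response(flat, 2, 2))  # 0.0 on a flat patch
corner = [[100 if a >= 3 and b >= 3 else 0 for b in range(5)] for a in range(5)]
print(harris_response(corner, 2, 2) > 0)  # True at a corner
```

In the stitching pipeline, responses above a threshold become feature points, which are then matched across the two overlapping projection images to form the registration pairs.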
According to the embodiment of the invention, adaptive noise reduction is performed only on the pixel points to be processed, so that the noise reduction speed is effectively increased while more detail information is retained in the noise-reduced image. Existing video splicing methods generally either perform no noise reduction at all or perform noise reduction directly on all pixel points. Without noise reduction, noise affects the subsequent image matching process, so the matching result is not accurate enough; with noise reduction on all pixel points, the required processing time obviously grows, the output video is obtained too late, and the gap between the shooting time of a picture and its output time becomes too large, causing excessive picture delay.
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (6)

1. A360-degree video splicing method for a multi-compartment ultralong vehicle type is characterized by comprising the following steps:
s1, acquiring the image frames shot at the same time in the video streams of a plurality of cameras;
s2, respectively carrying out image correction processing on each image frame to obtain corrected images;
s3, respectively determining to-be-processed pixel points in each corrected image, and then performing adaptive noise reduction processing on the to-be-processed pixel points to obtain noise-reduced images;
s4, performing projection transformation processing on each noise reduction image to obtain a projection image;
s5, image registration processing is carried out on the projection images corresponding to the two adjacent cameras in sequence to obtain a registration result, and image splicing processing is carried out based on the registration result to obtain a spliced image;
and S6, composing the spliced images at consecutive times into an output video.
2. The method as claimed in claim 1, wherein the video frames obtained by two adjacent cameras in the plurality of cameras have overlapping regions.
3. The 360 ° video stitching method for the multi-compartment ultralong vehicle type according to claim 2, wherein the performing image rectification processing on each image frame to obtain a rectified image comprises:
and carrying out distortion correction processing on each image frame to obtain a corrected image.
4. The method for 360 ° video stitching according to claim 1, wherein the determining the pixel points to be processed in each rectified image comprises:
for a pixel point n in the corrected image, the judgment coefficient adidx_n of the pixel point n is calculated by the following formulas:
[formula images for adidx_n not extracted]
wherein k ∈ {1, 2, 3, 4}; cs denotes a preset constant coefficient; f_n and f_m denote the pixel values of pixel points n and m respectively; m denotes a pixel point contained in w_k; the 8-neighborhood of n is denoted {n_1, n_2, …, n_8}; w_1 denotes the set composed of n_4 and n_5 in the 8-neighborhood of n, w_2 the set composed of n_2 and n_7, w_3 the set composed of n_1 and n_8, and w_4 the set composed of n_3 and n_6; the Hessian matrix S_n corresponding to n is given by a further formula image (not extracted); and max denotes taking the maximum value;
and comparing adidx_n with a preset comparison threshold; if adidx_n is greater than the comparison threshold, taking pixel point n as a pixel point to be processed.
5. The 360 ° video stitching method for the multi-compartment ultralong vehicle type according to claim 1, wherein the performing projective transformation on each noise-reduced image to obtain a projected image comprises:
and carrying out circular column projection transformation on each noise reduction image to obtain a projection image.
6. The 360-degree video stitching method for the multi-compartment ultralong vehicle type according to claim 1, wherein image registration processing is performed on the projection images corresponding to two adjacent cameras to obtain a registration result, and the method comprises the following steps:
respectively using a feature extraction algorithm to obtain feature points in the two projection images;
matching the feature points in the two projected images to obtain a feature point matching pair;
and taking the matched pairs of the characteristic points as a registration result.
CN202110781027.9A 2021-07-09 2021-07-09 360-degree video splicing method for multi-compartment ultra-long vehicle type Active CN113506218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110781027.9A CN113506218B (en) 2021-07-09 2021-07-09 360-degree video splicing method for multi-compartment ultra-long vehicle type


Publications (2)

Publication Number Publication Date
CN113506218A true CN113506218A (en) 2021-10-15
CN113506218B CN113506218B (en) 2022-03-08

Family

ID=78012596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110781027.9A Active CN113506218B (en) 2021-07-09 2021-07-09 360-degree video splicing method for multi-compartment ultra-long vehicle type

Country Status (1)

Country Link
CN (1) CN113506218B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080043835A1 (en) * 2004-11-19 2008-02-21 Hisao Sasai Video Encoding Method, and Video Decoding Method
US20080212889A1 (en) * 2007-03-02 2008-09-04 Chao-Ho Chen Method for reducing image noise
CN105654428A (en) * 2014-11-14 2016-06-08 联芯科技有限公司 Method and system for image noise reduction
CN107786780A (en) * 2017-11-03 2018-03-09 深圳Tcl新技术有限公司 Video image noise reducing method, device and computer-readable recording medium
CN108257420A (en) * 2018-02-11 2018-07-06 江苏金海星导航科技有限公司 Meter people based on camera counts vehicle method, apparatus and system
CN110445951A (en) * 2018-05-02 2019-11-12 腾讯科技(深圳)有限公司 Filtering method and device, storage medium, the electronic device of video
CN111815518A (en) * 2020-07-14 2020-10-23 璞洛泰珂(上海)智能科技有限公司 Projection image splicing method and device, computer equipment, storage medium and system
CN112017222A (en) * 2020-09-08 2020-12-01 北京正安维视科技股份有限公司 Video panorama stitching and three-dimensional fusion method and device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XIANYE LI et al.: "Noise Suppression in Compressive Single-Pixel Imaging", Sensors *
FENG Lina et al.: "An adaptive shot boundary detection algorithm based on RGB component frame difference", Guangxi Light Industry *
WU Dingxue et al.: "An adaptive Gaussian scale parameter algorithm based on visual feature information measurement", Computer Engineering and Science *
ZHU Jinhua et al.: "An image noise reduction algorithm based on Kalman filtering", Modern Electronics Technique *

Also Published As

Publication number Publication date
CN113506218B (en) 2022-03-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant