CN112801880A - Vehicle-mounted panoramic image imaging and target detection fusion display method - Google Patents
Vehicle-mounted panoramic image imaging and target detection fusion display method
- Publication number
- CN112801880A (application CN202110249400.6A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- target
- panoramic image
- fusion display
- target detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Abstract
The invention discloses a vehicle-mounted panoramic image imaging and target detection fusion display method. An image acquired by a camera of the vehicle-mounted imaging system is detected, a target of interest and its corresponding pixel region are extracted, and the midpoint of the bottom boundary of that region is obtained. Assuming that the bottom of the target is in contact with the ground, this bottom-boundary midpoint lies on the ground plane; from the camera's intrinsic and extrinsic parameters and this ground-plane constraint, the real coordinate position of the target of interest in the world coordinate system is obtained, and the target is then marked and fused into the panoramic image at that position.
Description
Technical Field
The invention relates to the technical field of vehicle-mounted panoramic image imaging, in particular to a method for fusion display of vehicle-mounted panoramic image imaging and target detection.
Background
Multi-camera panoramic image stitching algorithms and related schemes are now mature. Many small and medium-sized vehicles are equipped with panoramic image generation equipment that helps the driver observe the blind areas and the environment around the vehicle, ultimately reducing the probability of safety accidents.
Whether a 3D 360° or a 2D 360° panoramic image is displayed, small blind areas for three-dimensional objects at the seams where the multiple camera images are stitched cannot be avoided. Compared with 2D 360° panoramic equipment, 3D 360° panoramic equipment renders three-dimensional objects better; however, because of the perspective of the 3D view, a pedestrian may be occluded by the animated 3D car model in the 3D 360° panoramic image.
In addition, the 3D 360° panorama is not a true 3D view: it merely projects the multiple video streams onto a bowl-shaped imaging surface and stitches them into one image. This means that a large ground-plane projection area surrounds the vehicle body. If the 3D view is displayed from a viewpoint nearly parallel to the ground plane, information projected onto the ground plane is easily overlooked. Even from other viewpoints, three-dimensional objects such as pedestrians may be strongly deformed in the ground-plane projection area near the vehicle. Different camera mounting heights produce different effects: the image of a pedestrian may be greatly elongated, or its projection area may be very small. Moreover, again because of the 3D viewing angle, if the projection area on the ground plane is small and the angle between the 3D viewing direction and the ground is small, the target becomes even harder for the observer to notice.
Disclosure of Invention
The invention aims to provide a vehicle-mounted panoramic image imaging and target detection fusion display method for improving the actual display effect of panoramic imaging.
The vehicle-mounted panoramic image imaging and target detection fusion display method comprises the following steps:
detecting an image acquired by a camera of the vehicle-mounted imaging system, extracting a target of interest and its corresponding pixel region in the image, and obtaining the midpoint of the bottom boundary of the region;
if the bottommost part of the target is in contact with the ground, the midpoint of the bottom boundary of the pixel region lies on the ground plane, and the real coordinate position of the target of interest in the world coordinate system is obtained from the intrinsic and extrinsic parameters of the camera and the ground-plane constraint on the bottom-boundary midpoint;
and performing marking and fusion display in the panoramic image according to the real coordinate position of the target of interest in the world coordinate system.
By detecting the image acquired by a camera of the vehicle-mounted imaging system, extracting a target of interest and its corresponding pixel region, obtaining the midpoint of the region's bottom boundary, assuming that the bottom of the target touches the ground so that this midpoint lies on the ground plane, obtaining the real coordinate position of the target of interest in the world coordinate system from the camera's intrinsic and extrinsic parameters and this ground-plane constraint, and then marking and fusing the target into the panoramic image at that position, the method addresses the problems that pedestrians or other objects in the blind areas of the vehicle-mounted panoramic image may not be clearly visible, and that pedestrians or objects close to the vehicle body may be strongly distorted or projected onto so small an area that they are difficult to observe and detect. The positions of such pedestrians or objects are marked with a 2D circle mark or by placing a 3D stereo model to alert the driver, which effectively mitigates the adverse effect on the user of the blind areas in the stitching seam regions of the generated panorama.
Drawings
FIG. 1 is a schematic flow chart of a method for fusion display of vehicle-mounted panoramic image imaging and target detection according to the present invention;
fig. 2 is a schematic diagram of a stitching region in which a three-dimensional-object blind area may be generated by image stitching according to the present invention.
Detailed Description
As shown in FIG. 1, the vehicle-mounted panoramic image imaging and target detection fusion display method detects the image acquired by a camera of the vehicle-mounted imaging system. The image may be detected and recognized by a deep convolutional neural network running in the panoramic imaging control box, or the camera may contain its own processor that runs the deep convolutional neural network. A target of interest, which may be a pedestrian or a three-dimensional or planar obstacle, and its corresponding pixel region are extracted, and the midpoint of the bottom boundary of the region is obtained. If the bottommost part of the target is in contact with the ground, this bottom-boundary midpoint lies on the ground plane. From the camera's intrinsic and extrinsic parameters and the constraint that the bottom-boundary midpoint lies on the ground plane, the real coordinate position of the target of interest in the world coordinate system is obtained, and the target is marked and fused into the panoramic image at that position.
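As a minimal sketch of this detection step, the following Python fragment shows only the bottom-boundary-midpoint extraction that the description relies on; the detect_objects() stub is a hypothetical placeholder for whatever deep convolutional neural network actually runs on the camera or in the panoramic imaging control box.

```python
import numpy as np

def detect_objects(image: np.ndarray):
    """Hypothetical detector stub standing in for the deep CNN on the camera or
    control box; it should return (label, x_min, y_min, x_max, y_max) boxes."""
    return []  # replace with the real model output

def bottom_midpoints(image: np.ndarray):
    """For each detected target of interest, return its label and the midpoint of the
    bottom boundary of its pixel region, i.e. the assumed ground-contact pixel."""
    results = []
    for label, x_min, y_min, x_max, y_max in detect_objects(image):
        u = (x_min + x_max) / 2.0   # horizontal midpoint of the box
        v = y_max                   # bottom edge of the box (image v grows downward)
        results.append((label, (u, v)))
    return results
```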
Taking a pedestrian as the target of interest: when the bottommost part of the pedestrian is in contact with the ground, the midpoint of the bottom boundary of the pixel region lies on the ground plane, and the real coordinate position of the target in the world coordinate system is obtained from the camera's intrinsic and extrinsic parameters and this ground-plane constraint. Specifically, let the pixel coordinates of the pedestrian's bottom midpoint in the image be (u, v). Given the camera intrinsics, the camera extrinsics relative to the vehicle centre, and the additional condition that this point lies in the ground plane, i.e. that its world coordinate Zw relative to the vehicle centre is 0, the world coordinates (Xw, Yw) of the pedestrian's ground-contact point relative to the vehicle centre are solved in reverse.
The mapping from the pixel coordinate system to the camera coordinate system is:

    Zc * [u, v, 1]^T = K * [Xc, Yc, Zc]^T,   K = | f/dx   s     u0 |
                                                 | 0      f/dy  v0 |
                                                 | 0      0     1  |

where dx and dy are the physical sizes of the camera's photosensitive element corresponding to one pixel, (u0, v0) are the coordinates of the centre point of the pixel coordinate system, s is the distortion (skew) factor, f is the focal length of the camera, and (Xc, Yc, Zc) are the coordinates in the camera coordinate system.
The mapping from the camera coordinate system to the world coordinate system with the vehicle centre as origin is (R is the rotation matrix, T the translation vector):

    [Xc, Yc, Zc]^T = R * [Xw, Yw, Zw]^T + T
thus, the entire mapping process can be abbreviated as:
since the pedestrian stands on the ground, the value Zw is 0, the camera internal reference and external reference K, R, T are known, the unknown variables Zc, Xw and Yw are given to the pixel coordinates u and v of the standing point of the pedestrian on the ground, and the values Xw, Yw and Zc can be solved by the three equations and the three unknowns, that is, the positions Xw and Yw where the pedestrian stands on the ground relative to the center of the vehicle can be obtained, and then the positions Xw and Yw are subjected to label fusion display in the panoramic image.
When the target of interest is marked and fused into the panoramic image at its real coordinate position in the world coordinate system, a flat marker may not be conspicuous, so on the 3D vehicle-mounted panoramic image a 3D humanoid model is used for the marking and fusion display. This further mitigates the adverse effect on the user of the three-dimensional-object blind areas in the stitching seam regions of the generated panorama.
Because the 3D humanoid model is placed on the 3D vehicle-mounted panoramic image to mark objects around the vehicle body, the 3D humanoid model, i.e. the marked target of interest, may be blocked by the 3D vehicle model. If the 3D humanoid model marking the target of interest is occluded by the 3D vehicle model during 3D vehicle-mounted panoramic display, the transparency of the 3D vehicle model is increased, so that the driver can more easily see the 3D humanoid model, and thus the target of interest, behind the 3D vehicle model.
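A rough sketch of this occlusion rule follows; the footprint test, the ground-plane viewpoint representation and the alpha values are all illustrative assumptions, since the patent does not specify how the rendering engine detects the occlusion.

```python
import numpy as np

def marker_hidden_by_vehicle(marker_xy, view_xy, half_length, half_width):
    """Rough test: does the line of sight from the virtual viewpoint to the marker
    (both in vehicle-centred ground-plane coordinates) cross the vehicle footprint?"""
    for t in np.linspace(0.0, 1.0, 64):
        x, y = (1 - t) * np.asarray(view_xy) + t * np.asarray(marker_xy)
        if abs(x) <= half_length and abs(y) <= half_width:
            return True
    return False

def vehicle_model_alpha(marker_xy, view_xy, half_length=2.4, half_width=1.0):
    """Opacity for the 3D vehicle model: 1.0 is opaque, lower is more transparent."""
    return 0.4 if marker_hidden_by_vehicle(marker_xy, view_xy, half_length, half_width) else 1.0
```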
More specifically, the 3D humanoid model may flash with changing colours to attract the driver's attention and improve safety, and it may also be treated as a light source during rendering so that it illuminates the surrounding area, producing a localized flashing effect that further warns the driver.
The vehicle-mounted imaging system has at least four cameras. After the real coordinate position in the world coordinate system of each camera's targets of interest is obtained, marking and fusion display in the panoramic image is performed only for the stitching regions where three-dimensional-object blind areas are predicted to occur. With at least four cameras, the fields of view of adjacent cameras overlap. During image stitching, the pictures of two cameras must be blended and seamed in the overlap region, and near the seam line three-dimensional-object blind areas and ghosting may occur. During fusion display, the prompt markers for the fused targets, for example 3D humanoid markers, can therefore be placed only over the stitching regions where three-dimensional-object blind areas are likely to appear, as sketched below. As shown in fig. 2, the overlap region is the area visible to both cameras and belongs to the stitching fusion region; parts of it may contain three-dimensional-object blind areas or ghosting.
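The following sketch illustrates restricting the markers to predicted seam regions; the sector angles are placeholder assumptions standing in for the calibrated overlap geometry of a particular four-camera installation.

```python
import math

# Assumed diagonal overlap sectors (degrees, vehicle-centred), one per camera seam.
OVERLAP_SECTORS_DEG = [(30, 60), (120, 150), (210, 240), (300, 330)]

def in_overlap_region(Xw, Yw):
    """True if the ground position (relative to the vehicle centre) lies in a seam sector."""
    angle = math.degrees(math.atan2(Yw, Xw)) % 360.0
    return any(lo <= angle <= hi for lo, hi in OVERLAP_SECTORS_DEG)

def markers_to_display(targets):
    """Keep only targets whose ground position falls in a predicted seam/overlap sector."""
    return [t for t in targets if in_overlap_region(t["Xw"], t["Yw"])]
```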
Each vehicle-mounted imaging system camera has its own target detection computing capability. Normally, the computing power of the panoramic image host cannot support recognizing targets in multiple video channels simultaneously, so this scheme places the target detection computation on each camera, which improves the overall capacity of the panoramic image fusion computation.
The vehicle-mounted imaging system cameras transmit image data and target detection result data over analog signals. Compared with Ethernet-based transmission of H.264/H.265 compressed video streams, transmitting image data over an analog link such as an AHD bus reduces the front-end compression and back-end decompression requirements, reduces transmission delay, and eases the synchronization of multi-channel transmission. In this case, the target detection results can be encoded into the data of a few pixels of the analog-transmitted image; after receiving the image, the receiving end only needs to read and decode the corresponding pixel rows to obtain the detection results, as sketched below. If the receiving end needs to perform image stitching, fusion, display or video recording, the pixel rows carrying the detection result data can be discarded, and subsequent image processing can proceed normally according to the original rules.
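The following sketch illustrates one way such an in-image encoding could work; the single-row layout, the record format and the grayscale-frame assumption are illustrative choices, not the encoding actually defined by the patent.

```python
import struct
import numpy as np

def encode_detections(frame: np.ndarray, detections, row: int = 0) -> np.ndarray:
    """Pack (u, v, class_id) records into one pixel row of a grayscale uint8 frame copy.
    The frame width must exceed the payload length."""
    payload = struct.pack("<H", len(detections))
    for u, v, cls in detections:
        payload += struct.pack("<HHB", u, v, cls)
    out = frame.copy()
    data = np.frombuffer(payload, dtype=np.uint8)
    out[row, : data.size] = data          # receiver reads this row, then discards it
    return out

def decode_detections(frame: np.ndarray, row: int = 0):
    """Inverse of encode_detections(): read the record count, then the fixed-size records."""
    raw = frame[row].astype(np.uint8).tobytes()
    (count,) = struct.unpack_from("<H", raw, 0)
    return [struct.unpack_from("<HHB", raw, 2 + 5 * i) for i in range(count)]
```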
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and the invention is not to be construed as limited to these specific details. Those skilled in the art to which the invention pertains can make several simple deductions or substitutions without departing from the spirit of the invention, and all such modifications shall be considered as falling within the protection scope of the invention.
Claims (7)
1. A method for vehicle-mounted panoramic image imaging and target detection fusion display is characterized by comprising the following steps:
detecting an image acquired by a camera of the vehicle-mounted imaging system, extracting a target of interest and its corresponding pixel region in the image, and obtaining the midpoint of the bottom boundary of the region;
if the bottommost part of the target is in contact with the ground, the midpoint of the bottom boundary of the pixel region lies on the ground plane, and the real coordinate position of the target of interest in the world coordinate system is obtained from the intrinsic and extrinsic parameters of the camera and the ground-plane constraint on the bottom-boundary midpoint;
and performing marking and fusion display in the panoramic image according to the real coordinate position of the target of interest in the world coordinate system.
2. The method for vehicle-mounted panoramic image imaging and target detection fusion display according to claim 1, wherein performing marking and fusion display in the panoramic image according to the real coordinate position of the target of interest in the world coordinate system comprises using a 3D humanoid model for marking and fusion display on the 3D vehicle-mounted panoramic image.
3. The method for fusion display of vehicle-mounted panoramic image imaging and target detection according to claim 2, wherein if the 3D humanoid model used for marking the target of interest is occluded by the 3D vehicle model during 3D vehicle-mounted panoramic display, the transparency of the 3D vehicle model is increased, so that the driver can more easily see the 3D humanoid model, and thus the target of interest, behind the 3D vehicle model.
4. The method for fusion display of vehicle-mounted panoramic image imaging and target detection as claimed in claim 2, wherein the 3D humanoid model is a 3D humanoid model with color flicker variation.
5. The method for fusion display of vehicle-mounted panoramic image imaging and target detection according to claim 1, wherein the vehicle-mounted imaging system has at least four cameras, and after the real coordinate position in the world coordinate system of each camera's targets of interest is obtained, marking and fusion display in the panoramic image is performed only for the stitching regions where three-dimensional-object blind areas are predicted to possibly occur.
6. The method for fusion display of vehicle-mounted panoramic image imaging and target detection as claimed in claim 1, wherein each vehicle-mounted imaging system camera has the computing capability of target detection.
7. The method for fusion display of vehicle-mounted panoramic image imaging and target detection as claimed in claim 6, wherein a camera of a vehicle-mounted imaging system transmits image data and target detection result data through analog signals.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110249400.6A CN112801880B (en) | 2021-03-08 | 2021-03-08 | Method for fusion display of vehicle-mounted panoramic image imaging and target detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112801880A true CN112801880A (en) | 2021-05-14 |
CN112801880B CN112801880B (en) | 2024-06-07 |
Family
ID=75816650
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110249400.6A Active CN112801880B (en) | 2021-03-08 | 2021-03-08 | Method for fusion display of vehicle-mounted panoramic image imaging and target detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112801880B (en) |
- 2021-03-08: CN application CN202110249400.6A filed in China; granted as CN112801880B (Active)
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120229596A1 (en) * | 2007-03-16 | 2012-09-13 | Michael Kenneth Rose | Panoramic Imaging and Display System With Intelligent Driver's Viewer |
CN105291984A (en) * | 2015-11-13 | 2016-02-03 | 中国石油大学(华东) | Pedestrian and vehicle detecting method and system based on multi-vehicle cooperation |
CN106023080A (en) * | 2016-05-19 | 2016-10-12 | 沈祥明 | Seamless splicing processing system for vehicle-mounted panoramic image |
CN106791710A (en) * | 2017-02-10 | 2017-05-31 | 北京地平线信息技术有限公司 | Object detection method, device and electronic equipment |
CN108482247A (en) * | 2018-03-27 | 2018-09-04 | 京东方科技集团股份有限公司 | A kind of vehicle and its DAS (Driver Assistant System) and auxiliary driving method |
CN109109748A (en) * | 2018-10-08 | 2019-01-01 | 南京云计趟信息技术有限公司 | A kind of pedestrian's identification early warning system for blind area on the right side of heavy motor truck |
CN109740463A (en) * | 2018-12-21 | 2019-05-10 | 沈阳建筑大学 | A kind of object detection method under vehicle environment |
CN109767473A (en) * | 2018-12-30 | 2019-05-17 | 惠州华阳通用电子有限公司 | A kind of panorama parking apparatus scaling method and device |
US20200273147A1 (en) * | 2019-02-22 | 2020-08-27 | Verizon Patent And Licensing Inc. | Methods and Systems for Automatic Image Stitching Failure Recovery |
WO2020237942A1 (en) * | 2019-05-30 | 2020-12-03 | 初速度(苏州)科技有限公司 | Method and apparatus for detecting 3d position of pedestrian, and vehicle-mounted terminal |
CN110430401A (en) * | 2019-08-12 | 2019-11-08 | 腾讯科技(深圳)有限公司 | Vehicle blind zone method for early warning, prior-warning device, MEC platform and storage medium |
CN111186379A (en) * | 2020-01-21 | 2020-05-22 | 武汉大学 | Automobile blind area dangerous object alarm method based on deep learning |
CN111447431A (en) * | 2020-04-02 | 2020-07-24 | 深圳普捷利科技有限公司 | Naked eye 3D display method and system applied to vehicle-mounted all-around camera shooting |
CN111582080A (en) * | 2020-04-24 | 2020-08-25 | 杭州鸿泉物联网技术股份有限公司 | Method and device for realizing 360-degree all-round monitoring of vehicle |
Non-Patent Citations (1)
Title |
---|
陆天舒等: "基于图像拼接的全景目标检测技术", 《兵工自动化》, vol. 33, no. 2, pages 7 - 10 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113609945A (en) * | 2021-07-27 | 2021-11-05 | 深圳市圆周率软件科技有限责任公司 | Image detection method and vehicle |
CN113609945B (en) * | 2021-07-27 | 2023-06-13 | 圆周率科技(常州)有限公司 | Image detection method and vehicle |
CN113705403A (en) * | 2021-08-18 | 2021-11-26 | 广州敏视数码科技有限公司 | Front target vehicle collision early warning method fused with panoramic imaging system |
CN113705403B (en) * | 2021-08-18 | 2023-08-08 | 广州敏视数码科技有限公司 | Front target vehicle collision early warning method fused with panoramic imaging system |
CN113936101A (en) * | 2021-10-18 | 2022-01-14 | 北京茵沃汽车科技有限公司 | Method and device for restoring lost object in joint area of 3D panorama |
CN118247765A (en) * | 2024-01-16 | 2024-06-25 | 广东六力智行科技有限公司 | Panoramic object detection method, device, vehicle and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112801880B (en) | 2024-06-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |