CN117830089A - Method and device for generating looking-around spliced view, electronic equipment and storage medium - Google Patents

Method and device for generating looking-around spliced view, electronic equipment and storage medium

Info

Publication number
CN117830089A
Authority
CN
China
Prior art keywords
view
fusion area
around
preset fusion
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311872807.XA
Other languages
Chinese (zh)
Inventor
蒋亚冲
李勇
高金锋
王一伟
蒯兴宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN202311872807.XA priority Critical patent/CN117830089A/en
Publication of CN117830089A publication Critical patent/CN117830089A/en
Pending legal-status Critical Current

Classifications

    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

The invention provides a method, a device, electronic equipment and a storage medium for generating a look-around spliced view, belonging to the technical field of intelligent driving, wherein the method comprises the following steps: performing target detection on real-time images acquired by a plurality of camera assemblies of a target vehicle, determining whether a target object exists in a preset fusion area, and if the target object exists in the preset fusion area, acquiring pixel coordinates of a bounding box of the target object; calculating an optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object; and based on the optimal starting position of the preset fusion area, carrying out fusion splicing on the real-time images to obtain a look-around spliced view. The invention enables people or other living beings or objects very close to the vehicle to be clearly displayed in the look-around spliced view, improving driving safety.

Description

Method and device for generating looking-around spliced view, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a method and a device for generating a look-around spliced view, electronic equipment and a storage medium.
Background
The regions at the 4 corners of the panoramic spliced view of the panoramic look-around system are each spliced from 2 adjacent fisheye cameras through a fusion method. In the modeling from the world coordinate P_avm = (x0, y0) of the fusion region in the panoramic look-around system to the image pixel coordinate P_image = (u, v) through the extrinsic rotation matrix R, the extrinsic translation vector t and the intrinsic matrix K, the coordinate system of the world coordinate P_avm = (x0, y0) of the fusion region is the plane coordinate system of the ground plane; that is, the modeling does not hold for people or other living beings or objects higher than the ground. As a result, a person or other living being or object very close to the vehicle at a specific angle becomes very blurred or disappears in the look-around spliced view, and the driver cannot clearly see the person or other living being or object in the spliced region, causing a great potential safety hazard.
Accordingly, it is desirable to provide a method that alleviates the problem that people or other living beings or objects very close to the vehicle become very blurred or disappear in the look-around spliced view.
Disclosure of Invention
The invention provides a method, a device, electronic equipment and a storage medium for generating a look-around spliced view, which are used for solving the problem that people or other living beings or objects very close to the vehicle are very blurred or disappear in the look-around spliced view, thereby improving driving safety.
The invention provides a method for generating a look-around spliced view, which comprises the following steps:
performing target detection on real-time images acquired by a plurality of camera assemblies of a target vehicle, determining whether a target object exists in a preset fusion area, and if the target object exists in the preset fusion area, acquiring pixel coordinates of a bounding box of the target object;
calculating an optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object;
and based on the optimal starting position of the preset fusion area, carrying out fusion splicing on the real-time images to obtain a look-around spliced view.
According to the method for generating the look-around spliced view provided by the invention, the performing target detection on the real-time images acquired by the plurality of camera assemblies of the target vehicle and determining whether the target object exists in the preset fusion area comprises the following steps:
inputting the real-time images acquired by the camera assemblies into a target detection model, and outputting target detection results corresponding to the real-time images acquired by the camera assemblies;
and determining, based on the target detection results, whether the position of a detected target is located in the preset fusion area.
According to the method for generating the look-around spliced view provided by the invention, the calculating the optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object comprises the following steps:
calculating world coordinates of the bounding box projected to the preset fusion area based on the pixel coordinates of the bounding box of the target object;
and calculating the optimal starting position of the preset fusion area based on the world coordinates.
According to the method for generating the look-around spliced view provided by the invention, the calculating the world coordinates of the bounding box projected to the preset fusion area based on the pixel coordinates of the bounding box of the target object comprises the following steps:
calculating, for each camera assembly, the world coordinates of the bounding box projected to the preset fusion area based on the pixel coordinates of the bounding box of the target object and the intrinsic and extrinsic parameters corresponding to each camera assembly;
the intrinsic and extrinsic parameters include: an extrinsic rotation matrix, an extrinsic translation vector, and an intrinsic matrix.
According to the method for generating the look-around spliced view provided by the invention, the calculating the optimal starting position of the preset fusion area based on the world coordinates comprises the following steps:
calculating, based on the world coordinates, the integral sum of the imaging alpha fusion weights of the corresponding camera assemblies over the area where the bounding box is located, with the starting positions of different preset fusion areas as the independent variable;
and taking the starting position of the corresponding preset fusion area when the integral sum is maximum as the optimal starting position of the preset fusion area.
According to the method for generating the look-around spliced view provided by the invention, the method further comprises the following steps:
acquiring a look-around spliced view, wherein the look-around spliced view is obtained by fusing and splicing real-time images acquired by camera assemblies located in the front, rear, left and right directions of the target vehicle;
the preset fusion area is a fusion area in the look-around spliced view.
According to the method for generating the look-around spliced view provided by the invention, the method further comprises the following steps:
and optimizing the look-around spliced view based on the optimal starting position of the preset fusion area to obtain an optimized look-around spliced view.
The invention also provides a device for generating the look-around spliced view, which comprises:
a target detection unit, used for performing target detection on real-time images acquired by a plurality of camera assemblies of a target vehicle, determining whether a target object exists in a preset fusion area, and if the target object exists in the preset fusion area, acquiring pixel coordinates of a bounding box of the target object;
a calculation unit, used for calculating an optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object;
and a fusion splicing unit, used for carrying out fusion splicing on the real-time images based on the optimal starting position of the preset fusion area to obtain a look-around spliced view.
The invention also provides electronic equipment, comprising a memory, a processor and a computer program stored on the memory and runnable on the processor, wherein the processor implements the method for generating the look-around spliced view when executing the program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for generating the look-around spliced view as described in any one of the above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the method for generating the look-around spliced view as described in any one of the above.
According to the method, the device, the electronic equipment and the storage medium for generating the look-around spliced view, the look-around spliced view is acquired, target detection is performed on the real-time image acquired by each camera assembly, and if a target object exists in a preset fusion area, the pixel coordinates of the bounding box of the target object are acquired. The optimal starting position of the preset fusion area is calculated based on the pixel coordinates of the bounding box of the target object, and the look-around spliced view is optimized based on the optimal starting position of the preset fusion area. The splicing angles at the fusion positions of different camera assemblies are thereby dynamically adjusted so that key information is not lost at the splicing fusion positions, people or other living beings or objects very close to the vehicle can be clearly displayed in the look-around spliced view, the problem that people or other living beings or objects very close to the vehicle are very blurred or disappear in the look-around spliced view is solved, and driving safety is improved.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of a fusion region of a look-around splice;
FIG. 2 is a schematic diagram comparing a person very close to the vehicle disappearing and not disappearing in a look-around spliced view;
FIG. 3 is one schematic flow chart of a method for generating a look-around spliced view according to an embodiment of the present invention;
FIG. 4 is a second schematic flow chart of a method for generating a look-around spliced view according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a device for generating a look-around spliced view according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an entity structure of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The panoramic view system is generally called AVM (Around View Monitor). It simultaneously collects images around a vehicle through 4 ultra-wide-angle cameras arranged on the front, rear, left and right sides of the vehicle body, and realizes visual modes such as 2D top view, 3D panoramic view and full-vehicle transparency through distortion correction, panoramic stitching and image rendering. Through these sensors, the system collects environmental data outside the vehicle, understands the surroundings of the vehicle, and provides a prompt before danger occurs. Using the 4 independent cameras and displaying an all-round view on the monitor, the driver can know the entire surrounding environment in real time without dead angles in any direction.
A look-around stitched view (panoramic stitching view) is formed by stitching multiple adjacent images or videos into one continuous panoramic view. Through the look-around stitching technology, images or videos shot by camera assemblies at different positions or angles can be seamlessly connected together to present a panoramic effect.
The regions of the 4 corners of the panoramic stitching view of the panoramic looking-around system are respectively stitched by 2 adjacent fisheye cameras through a fusion method. Fig. 1 is a schematic view of a fusion area of a view-around splice.
The upper right corner region of the look-around view is formed by fusing and splicing the front fisheye camera and the right fisheye camera. Taking the lower left corner of the region as the origin O of a two-dimensional rectangular coordinate system, the rightward direction as the x axis and the upward direction as the y axis, the coordinate of any point in the region is P_avm = (x0, y0).
Through the extrinsic rotation matrix R, the extrinsic translation vector t and the intrinsic matrix K of the fisheye cameras, the pixel coordinates P_image = (u, v) of any world coordinate P_avm = (x0, y0) of the fusion region in the front and right fisheye cameras can be obtained respectively, and the RGB pixel values there are denoted as Pixel_image_front(u, v) and Pixel_image_right(u, v).
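To make the mapping concrete, the following is a minimal sketch of the ground-plane projection described above, in Python with NumPy. It assumes the pinhole form K(R·X + t) and omits the fisheye distortion model for brevity, so the function and its conventions are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def world_to_pixel(p_avm, R, t, K):
    # P_avm = (x0, y0) lies on the ground plane, so its height is fixed
    # at z = 0 -- exactly the assumption that breaks for objects above
    # the ground.
    X_world = np.array([p_avm[0], p_avm[1], 0.0])
    X_cam = R @ X_world + t.ravel()   # world frame -> camera frame
    uvw = K @ X_cam                   # camera frame -> image plane
    return uvw[0] / uvw[2], uvw[1] / uvw[2]   # perspective division
```

With such a mapping for the front and the right camera, Pixel_image_front(u, v) and Pixel_image_right(u, v) are simply the RGB values read from the two camera images at the returned coordinates.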
Let one axis be the starting position of the fusion region and the other axis be the ending position of the fusion region; that is, the angular extent of the fusion region is the included angle θ between the starting-position axis and the ending-position axis.
Selecting different starting positions or ending positions of the fusion region corresponds to different angular sizes of the fusion region.
In order to reduce the blind field of view for objects above the ground in the fusion region, it is necessary to appropriately increase the angle of the fusion region, which is generally at least 90 degrees.
In order to address ghosting and the blurring or disappearance of objects or people above the ground in the fusion region, the starting position or the ending position of the fusion region needs to be adjusted so that the image of the object or person is displayed more completely in the fused AVM view.
At this time, for any point P_avm = (x0, y0) in the fusion region, the included angle between its vector relative to the origin O and the starting-position axis of the fusion region is denoted as θ_0.
Let the weight of the alpha transparency channel of the RGB pixel values displayed in the fusion region vary linearly with the angle θ_0. Then the RGB pixel value of any point P_avm = (x0, y0) on the look-around stitched view is:
Pixel_avm(x0, y0) = (1 − θ_0/θ) × Pixel_image_front(u, v) + (θ_0/θ) × Pixel_image_right(u, v)
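As a minimal illustration of this linear alpha blending (the function and variable names are illustrative; θ is the total angular extent of the fusion region):

```python
def blend_pixel(theta_0, theta, px_front, px_right):
    # Linear alpha weights: the front camera dominates near the
    # starting-position axis (theta_0 = 0), the right camera near the
    # ending-position axis (theta_0 = theta).
    w_right = theta_0 / theta
    w_front = 1.0 - w_right
    return w_front * px_front + w_right * px_right
```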
the fusion splicing method comprises the following steps:
for objects attached to the ground plane, such as traffic marking lines, floor tiles and manhole covers, the spliced regions show no visible splicing marks and look very natural;
for people and objects standing vertically above the ground plane, when a person walks in the 90-degree fusion region, the part of the body above the ground plane forms 2 alpha-weighted ghost images of the person that coincide only at the sole position on the ground plane. Moreover, when the person walks to a specific angle, the alpha fusion weights multiplied onto the image pixels where the person appears when the 2 fisheye cameras are rendered into the AVM fused view are both close to 0, which looks as if a person standing very close to the vehicle disappears in the look-around spliced image, as shown in FIG. 2. FIG. 2 is a schematic diagram comparing a person very close to the vehicle disappearing and not disappearing in a look-around spliced view. The left side of FIG. 2 shows the person disappearing in the look-around spliced view, and the right side of FIG. 2 shows the person not disappearing.
When a person walks 90 degrees counterclockwise around the car from the right mirror position to the front head position, it can be seen in the AVM look-around spliced view that the alpha fusion weight ratio of the right fisheye camera, Alpha_image_right = θ_0/θ, gradually changes from 100% opacity to 0% complete transparency, while the alpha fusion weight ratio of the front fisheye camera, Alpha_image_front = (1 − θ_0/θ), gradually changes from 0% complete transparency to 100% opacity.
However, when the person walks to a certain angle (about 45 degrees), the person's projection from the right fisheye camera onto the look-around stitched view lies very close to the starting-position axis of the fusion region, so the weighted imaging of the right fisheye camera, θ_0/θ × Pixel_image_right(u, v), will be relatively small.
At the same time, the person's projection from the front fisheye camera onto the look-around stitched view lies very close to the ending-position axis of the fusion region, so the weighted imaging of the front fisheye camera, (1 − θ_0/θ) × Pixel_image_front(u, v), will also be relatively small.
Thus the alpha fusion weights of the 2 fisheye cameras are both close to 0, which looks as if a person standing very close to the vehicle becomes very blurred or disappears in the look-around stitched image.
The root cause of this problem is that, in the modeling from the world coordinate P_avm = (x0, y0) of the fusion region of the AVM 360-degree look-around view to the image pixel coordinate P_image = (u, v) through the extrinsic rotation matrix R, the extrinsic translation vector t and the intrinsic matrix K, the coordinate system of the world coordinate P_avm = (x0, y0) of the fusion region is preset as the plane coordinate system of the ground plane; that is, the modeling does not hold for a person standing above the ground, so when a person stands at a specific angle, a blind area is produced in the stitched view, causing a great potential safety hazard.
In the related art, a vehicle-mounted radar scans around the vehicle body; after an object higher than the ground is detected around the vehicle, the starting position and the ending position of the fusion region are dynamically adjusted to an angular position where no radar detects an object higher than the ground, so as to avoid the region containing the above-ground object.
The related art has 2 disadvantages:
(1) The algorithm for adjusting the fusion region depends on radars being installed on the vehicle; in practice, however, many vehicles do not have the 4 lateral APA parking radars installed at the wheel sides, and many vehicles are equipped with only 4 early-warning radars in total (2 front and 2 rear), which cannot cover the fusion regions of the look-around splicing.
(2) Even if the starting position and the ending position of the fusion region are dynamically adjusted to an angular position at which no radar detects an object higher than the ground, the following phenomenon still occurs: when a person is at a certain angle, the person's projection from the right fisheye camera onto the look-around stitched view lies near the starting position of the fusion region, so the weighted imaging of the right fisheye camera, θ_0/θ × Pixel_image_right(u, v), will be small, and the person's projection from the front fisheye camera onto the look-around stitched view lies near the ending position of the fusion region, so the weighted imaging of the front fisheye camera, (1 − θ_0/θ) × Pixel_image_front(u, v), will also be relatively small. Thus the alpha fusion weights of the 2 fisheye cameras are both close to 0, which looks as if a person standing very close to the vehicle becomes very blurred or disappears in the look-around stitched image.
In order to mitigate the problem that people or objects standing very close to the vehicle become very blurred or disappear in the look-around stitched image, the present invention does not depend on the vehicle's radar sensors; it optimizes the splicing fusion position so that key information is not lost, a driver can clearly see a standing person or object in the splicing region, and driving safety is increased.
The following describes a method, an apparatus, an electronic device and a storage medium for generating a look-around splice view according to an embodiment of the present invention with reference to fig. 3 to fig. 6.
Fig. 3 is a schematic flow chart of a method for generating a look-around spliced view according to an embodiment of the present invention. As shown in FIG. 3, the method includes step 110, step 130 and step 140; this flow is only one possible implementation of the invention. The method comprises the following steps:
Step 110, performing target detection on real-time images acquired by a plurality of camera assemblies of a target vehicle, determining whether a target object exists in a preset fusion area, and if the target object exists in the preset fusion area, acquiring pixel coordinates of a boundary frame of the target object;
in the embodiment of the present invention, a camera assembly may be a fisheye camera, another ultra-wide-angle camera, or a combination of several look-around or side-view cameras.
Optionally, the plurality of camera assemblies of the target vehicle may be camera assemblies located in at least two adjacent directions among the four directions of front, rear, left and right of the target vehicle.
In this step, a target detection algorithm or model may be used to perform target detection on the real-time image acquired by each camera assembly, so as to determine whether a target object exists in the preset fusion area.
In the embodiment of the present invention, the target object may be a person or another living being or object.
The preset fusion area is the area where transition and fusion are required between images shot by adjacent camera assemblies during image stitching. For example, the upper right corner area in FIG. 1 is where the image captured by the front fisheye camera and the image of the right fisheye camera are fused when spliced.
If the target object exists in the preset fusion area, the pixel coordinates of the bounding box corresponding to the target object in the real-time image acquired by each camera assembly can be acquired through the target detection algorithm or model.
Step 130, calculating an optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object;
specifically, the starting position of the preset fusion area is adjusted based on the pixel coordinates of the bounding box of the target object, and the optimal starting position of the preset fusion area is calculated.
Take the upper right corner region of the look-around stitched view as an example:
The upper right corner region of the look-around view is formed by fusing and splicing the front fisheye camera and the right fisheye camera. Taking the lower left corner of the region as the origin O of a two-dimensional rectangular coordinate system, the rightward direction as the x axis and the upward direction as the y axis, the coordinate of any point in the region is P(x0, y0).
Set the included angle between the starting position of the preset fusion region and the y axis as the independent variable θ_start, and the included angle between the ending position of the preset fusion region and the y axis as θ_start + Δθ. That is, the angle of the preset fusion region between the starting position and the ending position is a constant, (θ_start + Δθ) − θ_start = Δθ, generally at least 90 degrees.
Let the alpha weight of the preset fusion region vary linearly with the angle θ_0, where θ_0 is the included angle between the vector from the origin O to any point P(x0, y0) and the starting-position axis of the preset fusion region.
Then, for any point P_avm = (x0, y0) in the preset fusion region, the imaging alpha fusion weight of the front fisheye camera on the starting-position side is Alpha_image_front(x0, y0) = 1 − θ_0/Δθ, and the imaging alpha fusion weight of the right fisheye camera on the ending-position side is Alpha_image_right(x0, y0) = θ_0/Δθ.
From the above, it can be seen that adjusting θ_start adjusts θ_0 and, in turn, the imaging alpha fusion weights of the corresponding camera assemblies. The optimal starting position of the preset fusion area is calculated based on the pixel coordinates of the bounding box of the target object; at this optimal starting position, the target object is not lost at the splicing fusion position.
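A small sketch of this dependence, assuming angles are measured from the y axis toward the x axis (the sign conventions and names are illustrative):

```python
import numpy as np

def alpha_weights(p, theta_start, delta_theta):
    x0, y0 = p
    phi = np.arctan2(x0, y0)          # angle of the vector OP from the y axis
    theta_0 = np.clip(phi - theta_start, 0.0, delta_theta)
    w_front = 1.0 - theta_0 / delta_theta   # camera on the starting-position side
    w_right = theta_0 / delta_theta         # camera on the ending-position side
    return w_front, w_right
```

Shifting theta_start shifts θ_0 for every point of the region, and with it both weights, which is exactly the lever this method adjusts.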
Step 140, carrying out fusion splicing on the real-time images based on the optimal starting position of the preset fusion area to obtain a look-around spliced view.
Finally, fusion splicing is carried out on the real-time images acquired by each camera assembly based on the optimal starting position of the preset fusion area; that is, the imaging alpha fusion weights corresponding to each camera assembly are calculated based on the optimal starting position of the preset fusion area, and look-around fusion splicing is performed based on these weights to obtain the look-around spliced view.
According to the method for generating the look-around spliced view provided by the embodiment of the present invention, target detection is performed on the real-time images acquired by the plurality of camera assemblies of the target vehicle; if a target object exists in the preset fusion area, the pixel coordinates of the bounding box of the target object are acquired, the optimal starting position of the preset fusion area is calculated based on these pixel coordinates, and the real-time images are fused and spliced based on the optimal starting position. The splicing angles at the fusion positions of different camera assemblies are thereby dynamically adjusted so that key information is not lost at the splicing fusion positions, people or other living beings or objects very close to the vehicle can be clearly displayed in the look-around spliced view, the problem that they are very blurred or disappear in the look-around spliced view is solved, and driving safety is improved.
In some embodiments, the performing target detection on the real-time image acquired by each camera assembly and determining whether the target object exists in the preset fusion area includes:
inputting the real-time images acquired by the camera assemblies into a target detection model, and outputting target detection results corresponding to the real-time images acquired by the camera assemblies;
and determining, based on the target detection results, whether the position of a detected target is located in the preset fusion area.
Specifically, in this embodiment, the real-time images collected by each camera assembly are directly input into a target detection model, and the target detection results corresponding to the real-time images collected by each camera assembly are output. The target detection model may be: an R-CNN series model, including R-CNN, Fast R-CNN, etc., which use a region proposal network (Region Proposal Network, RPN) and a convolutional neural network (Convolutional Neural Network, CNN) to generate object detection bounding boxes and predicted classes; a YOLO series model, including YOLOv1, YOLOv2, YOLOv3, etc., which use a single CNN network to predict object bounding boxes and categories in one forward pass and achieve high speed and accuracy through dense prediction; an SSD model: the Single Shot Multibox Detector (SSD) is a single-stage object detection model that uses multi-scale detectors to detect objects of various sizes and shapes; a RetinaNet model: RetinaNet uses a special loss function to balance the weights between positive and negative samples, improving the accuracy of small-target detection; a Mask R-CNN model: Mask R-CNN is a model based on R-CNN that can not only perform target detection but also generate a segmentation mask of the target instance. The target detection model may also be another target detection model, which the present invention does not limit.
Based on the target detection results, it is then determined whether the position of a detected target is located in the preset fusion area.
If the target object exists in the preset fusion area, the pixel coordinates of the bounding box corresponding to the target object in the real-time image acquired by each camera assembly are further acquired.
Alternatively, the image of the preset fusion area may first be extracted, and then a target detection algorithm or target detection model is applied to the image of the preset fusion area to determine whether the target object exists there. If the target object exists, the pixel coordinates of the bounding box corresponding to the target object in the image of the preset fusion area are further acquired.
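A hedged sketch of this detection step: `detector` stands in for whichever of the models listed above is used, and its `detect` method, the fusion-region polygon and the bottom-center membership test are illustrative assumptions rather than a fixed API.

```python
import cv2

def detect_in_fusion_region(frame, detector, fusion_polygon):
    # fusion_polygon: Nx2 int32 array outlining the preset fusion area in
    # this camera's image; detector.detect is assumed to return boxes as
    # (u1, v1, u2, v2, class_id, score) tuples.
    hits = []
    for (u1, v1, u2, v2, cls, score) in detector.detect(frame):
        foot = (float(u1 + u2) / 2.0, float(v2))   # bottom-center of the box
        if cv2.pointPolygonTest(fusion_polygon, foot, False) >= 0:
            hits.append((u1, v1, u2, v2))
    return hits
```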
According to the method for generating the look-around spliced view provided by the embodiment of the present invention, target detection is performed on the real-time images acquired by each camera assembly to determine whether a target object exists in the preset fusion area; if a target object exists, the subsequent processing is performed, and if not, no processing is needed, which saves computing resources. People or other living beings or objects very close to the vehicle can thus be clearly displayed in the look-around spliced view, solving the problem that they are very blurred or disappear in the look-around spliced view and improving driving safety.
In some embodiments, the calculating the optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object includes:
calculating world coordinates of the bounding box projected to a preset fusion area based on pixel coordinates of the bounding box of the target object;
and calculating the optimal starting position of the preset fusion area based on the world coordinates.
It has been mentioned previously that in the related art the world coordinate P_avm = (x0, y0) of the preset fusion region is modeled in the plane coordinate system of the ground plane, which does not hold for people or other living beings or objects higher than the ground, so that a person or other living being or object very close to the vehicle at a specific angle becomes very blurred or disappears in the look-around spliced view. Therefore, to solve this problem, the pixel coordinates of the bounding box of the target object need to be converted into world coordinates, and then the optimal starting position of the preset fusion area is calculated based on the world coordinates.
Specifically, the calculating the world coordinates of the bounding box projected to the preset fusion area based on the pixel coordinates of the bounding box of the target object includes:
calculating, for each camera assembly, the world coordinates of the bounding box projected to the preset fusion area based on the pixel coordinates of the bounding box of the target object and the intrinsic and extrinsic parameters corresponding to each camera assembly;
the intrinsic and extrinsic parameters include: an extrinsic rotation matrix R, an extrinsic translation vector t and an intrinsic matrix K.
For example, the pixel coordinates of the bounding boxes of the target object in the front and right fisheye cameras are BoundingBox_image = (u1, v1, u2, v2), denoted BoundingBox_image_front and BoundingBox_image_right, respectively.
Through each fisheye camera's extrinsic rotation matrix R, extrinsic translation vector t and intrinsic matrix K, the bounding box can be converted into the world coordinates BoundingBox_avm = (x1, y1, x2, y2) projected onto the preset fusion area for each camera, specifically BoundingBox_avm_image_front and BoundingBox_avm_image_right.
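Because the projected world coordinate lies on the ground plane (z = 0), the pixel-to-world conversion reduces to inverting a homography built from K, R and t. The sketch below assumes an undistorted (pinhole) view; with real fisheye images the pixel coordinates would first be undistorted. All names are illustrative.

```python
import numpy as np

def pixel_to_ground(p_image, K, R, t):
    # On the ground plane, [u, v, 1]^T ~ K [r1 r2 t] [x, y, 1]^T, so
    # inverting that homography recovers the world point (x0, y0).
    H = K @ np.column_stack((R[:, 0], R[:, 1], t.ravel()))
    xyw = np.linalg.inv(H) @ np.array([p_image[0], p_image[1], 1.0])
    return xyw[0] / xyw[2], xyw[1] / xyw[2]

def bbox_to_world(bbox_image, K, R, t):
    u1, v1, u2, v2 = bbox_image
    x1, y1 = pixel_to_ground((u1, v1), K, R, t)
    x2, y2 = pixel_to_ground((u2, v2), K, R, t)
    return (x1, y1, x2, y2)   # BoundingBox_avm = (x1, y1, x2, y2)
```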
In some embodiments, the calculating the optimal starting position of the preset fusion area based on the world coordinates includes:
calculating, based on the world coordinates, the integral sum of the imaging alpha fusion weights of the corresponding camera assemblies over the area where the bounding box is located, with the starting positions of different preset fusion areas as the independent variable;
and taking the starting position of the corresponding preset fusion area when the integral sum is maximum as the optimal starting position of the preset fusion area.
Imaging alpha fusion is an image processing method that fuses two or more images into one image with a transparency effect. In imaging alpha fusion, each pixel is assigned an alpha fusion weight value, typically between 0.0 and 1.0, indicating the extent to which that pixel contributes to the final fused image. Specifically, if two images overlap at a certain position, the pixel value at that position can be calculated by linear interpolation. The alpha fusion weights control the transparency of the two images at that position, i.e. their visibility. Normally, darker, opaque pixels are assigned a higher alpha fusion weight, while lighter, transparent pixels are assigned a lower alpha fusion weight.
It can be understood that calculating, based on the world coordinates, the integral sum of the imaging alpha fusion weights of the corresponding camera assemblies over the area where the bounding box is located, with the starting positions of different preset fusion areas as the independent variable, characterizes the imaging of the target object in the preset fusion area: the imaging effect of the target object in the preset fusion area is best when the integral sum is maximum. Taking the starting position of the corresponding preset fusion area at the maximum integral sum as the optimal starting position allows the splicing angles at the fusion positions of different camera assemblies to be dynamically adjusted according to the target objects in different preset fusion areas, ensuring that the image of the target object at the splicing fusion position is clear and not lost.
According to the method for generating the look-around spliced view provided by the embodiment of the present invention, the integral sum of the imaging alpha fusion weights of the corresponding camera assemblies over the area where the bounding box is located is calculated based on the world coordinates, with the starting positions of different preset fusion areas as the independent variable, and the starting position of the corresponding preset fusion area when the integral sum is maximum is taken as the optimal starting position of the preset fusion area, so that key information is not lost at the splicing fusion position and driving safety is improved.
Optionally, calculating, based on the world coordinates of the bounding box projected to the preset fusion area for each camera assembly, the integral sum of the imaging alpha fusion weights of the corresponding camera assemblies over the area where the bounding box is located, with the starting positions of different preset fusion areas as the independent variable, specifically includes:
calculating, based on the world coordinates of the bounding box projected to the preset fusion area for the front fisheye camera and the world coordinates of the bounding box projected to the preset fusion area for the right fisheye camera, the sum of the integral of the front fisheye camera's alpha fusion weight over the area where its bounding box is located and the integral of the right fisheye camera's alpha fusion weight over the area where its bounding box is located, with the starting positions of different preset fusion areas as the independent variable.
Specifically, the calculating, based on the world coordinates, the integral sum of the imaging alpha fusion weights of the corresponding camera assemblies over the area where the bounding box is located, with the starting positions of different preset fusion areas as the independent variable, includes:
letting the world coordinates be BoundingBox_avm = (x1, y1, x2, y2), the integral sum of the imaging alpha fusion weights of the corresponding camera assemblies over the area where the bounding box is located is calculated according to the following formula:
∫∫ Alpha_image_direction1(x0, y0) dα + ∫∫ Alpha_image_direction2(x0, y0) dβ
where Alpha_image_direction1(x0, y0) is the imaging alpha fusion weight of the camera assembly in the first direction, Alpha_image_direction2(x0, y0) is the imaging alpha fusion weight of the camera assembly in the second direction (both determined by the preset fusion area), dα is taken over the region enclosed by BoundingBox_avm_image_direction1, dβ is taken over the region enclosed by BoundingBox_avm_image_direction2, BoundingBox_avm_image_direction1 is the world coordinates of the bounding box projected to the preset fusion area for the camera assembly in the first direction, BoundingBox_avm_image_direction2 is the world coordinates of the bounding box projected to the preset fusion area for the camera assembly in the second direction, and (x0, y0) is the coordinate of any point of the preset fusion area.
The first direction and the second direction are two adjacent directions.
In some embodiments, the imaging alpha fusion weight of the first-direction fisheye camera on the starting-position side of the preset fusion area is Alpha_image_direction1(x0, y0) = 1 − θ_0/Δθ, and the imaging alpha fusion weight of the second-direction fisheye camera on the ending-position side is Alpha_image_direction2(x0, y0) = θ_0/Δθ, where θ_0 is the included angle between the vector from the origin O to any point P_avm = (x0, y0) of the preset fusion area and the starting-position axis of the preset fusion area.
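Collecting these definitions, the selection criterion of the following embodiment can be stated compactly; the notation below only restates the formulas of this document, with B1 and B2 denoting the regions enclosed by the two projected bounding boxes and θ_0 depending on the chosen θ_start:

J(θ_start) = ∫∫_B1 (1 − θ_0/Δθ) dα + ∫∫_B2 (θ_0/Δθ) dβ

and the optimal starting position is θ_start* = argmax over θ_start of J(θ_start).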
Still take the upper right corner region of the look-around stitched view as an example:
(1) The upper right corner region of the look-around view is formed by fusing and splicing the front fisheye camera and the right fisheye camera. Taking the lower left corner of the region as the origin O of a two-dimensional rectangular coordinate system, the rightward direction as the x axis and the upward direction as the y axis, the coordinate of any point in the region is P(x0, y0).
(2) Set the included angle between the starting position of the preset fusion area and the y axis as the independent variable θ_start, and the included angle between the ending position of the preset fusion area and the y axis as θ_start + Δθ; that is, the angle of the preset fusion area between the starting position and the ending position is a constant, (θ_start + Δθ) − θ_start = Δθ, generally at least 90 degrees.
(3) Let the alpha weight of the preset fusion area vary linearly with the angle θ_0. For any point P_avm = (x0, y0) in the preset fusion area, the imaging alpha fusion weight of the front fisheye camera on the starting-position side is Alpha_image_front(x0, y0) = 1 − θ_0/Δθ, and the imaging alpha fusion weight of the right fisheye camera on the ending-position side is Alpha_image_right(x0, y0) = θ_0/Δθ.
(4) Through an image-based target detection algorithm, the pixel coordinates BoundingBox_image = (u1, v1, u2, v2) of the bounding box of the image of a person or object present in each fisheye camera in the look-around projection view are detected. Through each fisheye camera's extrinsic rotation matrix R, extrinsic translation vector t and intrinsic matrix K, this bounding box can be converted into the world coordinates BoundingBox_avm = (x1, y1, x2, y2) projected onto the preset fusion area for each camera.
(5) At this time, if a person or object appears in the splicing region, the world coordinates BoundingBox_avm = (x1, y1, x2, y2) of the image of the person or object in the preset fusion area are detected. Then, with the included angle θ_start between the starting position of the preset fusion area and the y axis as the independent variable, the integral sum of the weights θ_0/Δθ and 1 − θ_0/Δθ over the bounding-box regions is calculated:
∫∫ Alpha_image_front(x0, y0) dα + ∫∫ Alpha_image_right(x0, y0) dβ
where dα is taken over the region enclosed by BoundingBox_avm_image_front,
and dβ is taken over the region enclosed by BoundingBox_avm_image_right.
(6) The θ_start at which the integral sum reaches its maximum is taken as the starting position at this time, and the splicing angles at the fusion positions of the different cameras are dynamically adjusted accordingly, so that key information is not lost at the fusion position, the driver can clearly see a standing person in the splicing region, and driving safety is improved.
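A numerical sketch of steps (5) and (6): the integral sum is evaluated on a grid over each projected bounding box, and θ_start is chosen by a simple search over candidate angles. The grid size, candidate range and angle convention are illustrative assumptions.

```python
import numpy as np

def integral_sum(bbox_front, bbox_right, theta_start, delta_theta, n=64):
    def box_integral(bbox, weight_fn):
        x1, y1, x2, y2 = bbox
        X, Y = np.meshgrid(np.linspace(x1, x2, n), np.linspace(y1, y2, n))
        theta_0 = np.clip(np.arctan2(X, Y) - theta_start, 0.0, delta_theta)
        cell = abs((x2 - x1) * (y2 - y1)) / (n * n)   # area element
        return weight_fn(theta_0).sum() * cell
    return (box_integral(bbox_front, lambda t0: 1.0 - t0 / delta_theta)
            + box_integral(bbox_right, lambda t0: t0 / delta_theta))

def optimal_start(bbox_front, bbox_right, delta_theta, candidates):
    # candidates: iterable of feasible theta_start values, in radians
    return max(candidates,
               key=lambda ts: integral_sum(bbox_front, bbox_right,
                                           ts, delta_theta))
```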
According to the method for generating the look-around spliced view provided by the embodiment of the present invention, the integral sum of the imaging alpha fusion weights of the corresponding camera assemblies over the area where the bounding box is located is calculated based on the world coordinates, with the starting positions of different preset fusion areas as the independent variable, and the starting position of the corresponding preset fusion area when the integral sum is maximum is taken as the optimal starting position of the preset fusion area, so that key information is not lost at the splicing fusion position and driving safety is improved.
In some embodiments, the method further comprises:
acquiring a look-around spliced view, wherein the look-around spliced view is obtained by fusing and splicing real-time images acquired by camera assemblies located in the front, rear, left and right directions of the target vehicle;
the preset fusion area is a fusion area in the look-around spliced view.
Specifically, the look-around stitching view is obtained by fusing and stitching real-time images acquired by the camera assemblies positioned in the front, back, left and right directions of the target vehicle.
The look-around spliced view obtained by fusing and splicing the real-time images acquired by the camera assemblies in the front, rear, left and right directions of the target vehicle can help the driver know the entire surrounding environment in real time without dead angles. At this time, the fusion angle used for the fusion splicing is preset.
The preset fusion area is the fusion area in this look-around spliced view.
On the basis of the above embodiment, the method further includes:
and optimizing the look-around spliced view based on the optimal starting position of the preset fusion area to obtain an optimized look-around spliced view.
Specifically, based on the optimal starting position of the preset fusion area, the imaging alpha fusion weights corresponding to each camera assembly are calculated, and look-around splicing is performed based on these weights to obtain the optimized look-around spliced view.
According to the method for generating the look-around spliced view provided by the embodiment of the present invention, the look-around spliced view is optimized based on the optimal starting position of the preset fusion area to obtain an optimized look-around spliced view, so that key information is not lost at the splicing fusion position and driving safety is improved.
Fig. 4 is a second schematic flow chart of a method for generating a look-around spliced view according to an embodiment of the present invention. As shown in FIG. 4, the method for generating the look-around spliced view includes:
step 210, acquiring a look-around spliced view, wherein the look-around spliced view is obtained by fusing and splicing real-time images acquired by camera assemblies located in the front, rear, left and right directions of a target vehicle;
step 220, performing target detection on the real-time images acquired by the camera assemblies, determining whether a target object exists in a fusion area of the look-around spliced view, and if the target object exists in the fusion area, acquiring pixel coordinates of a bounding box of the target object;
step 230, calculating, for each camera assembly, world coordinates of the bounding box projected to the fusion area based on the pixel coordinates of the bounding box of the target object and the intrinsic and extrinsic parameters corresponding to each camera assembly, where the intrinsic and extrinsic parameters include: an extrinsic rotation matrix, an extrinsic translation vector and an intrinsic matrix;
step 240, calculating, based on the world coordinates, the integral sum of the imaging alpha fusion weights of the corresponding camera assemblies over the area where the bounding box is located, with the starting positions of different fusion areas as the independent variable;
step 250, taking the starting position of the corresponding fusion area when the integral sum is maximum as the optimal starting position of the fusion area;
step 260, calculating the imaging alpha fusion weights corresponding to each camera assembly based on the optimal starting position of the fusion area, and performing look-around splicing based on these weights to obtain an optimized look-around spliced view.
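Tying steps 210 to 260 together, a minimal end-to-end sketch for one corner of the view; `detect_in_fusion_region`, `bbox_to_world` and `optimal_start` are the illustrative helpers sketched earlier, and the calibration and stitching plumbing here is assumed, not taken from the patent.

```python
def update_fusion_corner(frame_a, frame_b, calib_a, calib_b, detector,
                         polygon_a, polygon_b, delta_theta, candidates,
                         default_start):
    # Step 220: detect targets inside this corner's fusion area.
    hits_a = detect_in_fusion_region(frame_a, detector, polygon_a)
    hits_b = detect_in_fusion_region(frame_b, detector, polygon_b)
    if not (hits_a and hits_b):
        return default_start              # keep the preset starting angle
    # Step 230: project one detection from each camera onto the ground
    # plane; calib_x = (K, R, t) for that camera.
    box_a = bbox_to_world(hits_a[0], *calib_a)
    box_b = bbox_to_world(hits_b[0], *calib_b)
    # Steps 240-250: choose the starting angle maximizing the integral sum.
    return optimal_start(box_a, box_b, delta_theta, candidates)
```

Step 260 then recomputes the alpha fusion weights from the returned angle and re-blends the corner, as in the blending sketch given earlier.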
The embodiment of the present invention enables people or other living beings or objects very close to the vehicle to be clearly displayed in the look-around spliced view, improving driving safety.
It should be noted that the embodiments of the present application may be freely combined, permuted, or executed separately, and do not depend on a fixed execution sequence.
The following describes the device for generating the look-around spliced view provided by the present invention; the device described below and the method for generating the look-around spliced view described above may be referred to correspondingly. Fig. 5 is a schematic structural diagram of a device for generating a look-around spliced view according to an embodiment of the present invention. As shown in FIG. 5, the device for generating the look-around spliced view includes:
a target detection unit 510, configured to perform target detection on real-time images acquired by a plurality of camera assemblies of a target vehicle, determine whether a target object exists in a preset fusion area, and if the target object exists in the preset fusion area, acquire pixel coordinates of a bounding box of the target object;
a calculation unit 520, configured to calculate an optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object;
and a fusion splicing unit 530, configured to carry out fusion splicing on the real-time images based on the optimal starting position of the preset fusion area to obtain a look-around spliced view.
In some embodiments, the performing target detection on the real-time images acquired by the plurality of camera assemblies of the target vehicle and determining whether a target object exists in the preset fusion area includes:
inputting the real-time images acquired by the camera assemblies into a target detection model, and outputting target detection results corresponding to the real-time images acquired by the camera assemblies;
and determining, based on the target detection results, whether the position of a detected target is located in the preset fusion area.
In some embodiments, the calculating the optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object includes:
calculating world coordinates of the bounding box projected to a preset fusion area based on pixel coordinates of the bounding box of the target object;
and calculating the optimal starting position of the preset fusion area based on the world coordinates.
In some embodiments, the calculating the world coordinates of the bounding box projected to the preset fusion area based on the pixel coordinates of the bounding box of the target object includes:
calculating, for each camera assembly, the world coordinates of the bounding box projected to the preset fusion area based on the pixel coordinates of the bounding box of the target object and the intrinsic and extrinsic parameters corresponding to each camera assembly;
the intrinsic and extrinsic parameters include: an extrinsic rotation matrix, an extrinsic translation vector, and an intrinsic matrix.
In some embodiments, the calculating the optimal starting position of the preset fusion area based on the world coordinates includes:
calculating, based on the world coordinates, the integral sum of the imaging alpha fusion weights of the corresponding camera assemblies over the area where the bounding box is located, with the starting positions of different preset fusion areas as the independent variable;
and taking the starting position of the corresponding preset fusion area when the integral sum is maximum as the optimal starting position of the preset fusion area.
In some embodiments, the device for generating the look-around spliced view further comprises:
an acquisition unit, configured to acquire a look-around spliced view, wherein the look-around spliced view is obtained by fusing and splicing real-time images acquired by camera assemblies located in the front, rear, left and right directions of a target vehicle;
the preset fusion area is a fusion area in the look-around spliced view.
In some embodiments, the device for generating the look-around spliced view further comprises:
an optimizing unit, configured to optimize the look-around spliced view based on the optimal starting position of the preset fusion area to obtain an optimized look-around spliced view.
It should be noted that the device for generating the look-around spliced view provided by the embodiment of the present invention can implement all the method steps of the above method embodiments and achieve the same technical effects; the parts and beneficial effects that are identical to those of the method embodiments are not described in detail here.
Fig. 6 illustrates a physical schematic diagram of an electronic device. As shown in FIG. 6, the electronic device may include: a processor 610, a communication interface (Communications Interface) 620, a memory 630 and a communication bus 640, wherein the processor 610, the communication interface 620 and the memory 630 communicate with each other via the communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform the method for generating the look-around spliced view, the method comprising: performing target detection on real-time images acquired by a plurality of camera assemblies of a target vehicle, determining whether a target object exists in a preset fusion area, and if the target object exists in the preset fusion area, acquiring pixel coordinates of a bounding box of the target object; calculating an optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object; and based on the optimal starting position of the preset fusion area, carrying out fusion splicing on the real-time images to obtain a look-around spliced view.
Further, the logic instructions in the memory 630 may be implemented in the form of software functional units and stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product, the computer program product including a computer program, the computer program being storable on a non-transitory computer readable storage medium, wherein when the computer program is executed by a processor, the computer can execute the method for generating the look-around spliced view provided by the above method embodiments, the method comprising: performing target detection on real-time images acquired by a plurality of camera assemblies of a target vehicle, determining whether a target object exists in a preset fusion area, and if the target object exists in the preset fusion area, acquiring pixel coordinates of a bounding box of the target object; calculating an optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object; and based on the optimal starting position of the preset fusion area, carrying out fusion splicing on the real-time images to obtain a look-around spliced view.
In yet another aspect, the present invention further provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for generating a look-around spliced view provided by the above method embodiments, the method comprising: performing target detection on real-time images acquired by a plurality of camera assemblies of a target vehicle, determining whether a target object exists in a preset fusion area, and if the target object exists in the preset fusion area, acquiring pixel coordinates of a bounding box of the target object; calculating an optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object; and based on the optimal starting position of the preset fusion area, fusing and splicing the real-time images to obtain a look-around spliced view.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or, of course, by means of hardware. Based on this understanding, the above technical solution may be embodied essentially, or in the part contributing to the prior art, in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for generating a look-around spliced view, characterized by comprising the following steps:
performing target detection on real-time images acquired by a plurality of camera assemblies of a target vehicle, determining whether a target object exists in a preset fusion area, and if the target object exists in the preset fusion area, acquiring pixel coordinates of a bounding box of the target object;
calculating an optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object;
and based on the optimal starting position of the preset fusion area, fusing and splicing the real-time images to obtain a look-around spliced view.
2. The method for generating a look-around spliced view according to claim 1, wherein performing target detection on real-time images acquired by a plurality of camera assemblies of a target vehicle and determining whether a target object exists in a preset fusion area comprises:
inputting the real-time images acquired by the camera assemblies into a target detection model, and outputting target detection results corresponding to the real-time images acquired by the camera assemblies;
and determining, based on the target detection results, whether the position of the detected target is located in the preset fusion area.
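As a concrete illustration of the membership test in the claim above, the following Python snippet checks whether a detected bounding box overlaps a preset fusion area when both are modeled as axis-aligned pixel rectangles; this rectangular representation is an assumption made for the sketch, not something the claim prescribes.

```python
# Hypothetical membership test: detection and preset fusion area are both
# modeled as axis-aligned pixel rectangles (u_min, v_min, u_max, v_max).
def box_in_fusion_area(box, area):
    """Return True if the detected bounding box overlaps the fusion area."""
    bu0, bv0, bu1, bv1 = box
    au0, av0, au1, av1 = area
    return bu0 < au1 and au0 < bu1 and bv0 < av1 and av0 < bv1
```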
3. The method for generating a look-around spliced view according to claim 1, wherein calculating the optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object comprises:
calculating world coordinates of the bounding box projected onto the preset fusion area, based on the pixel coordinates of the bounding box of the target object;
and calculating the optimal starting position of the preset fusion area based on the world coordinates.
4. The method for generating a look-around spliced view according to claim 3, wherein calculating the world coordinates of the bounding box projected onto the preset fusion area based on the pixel coordinates of the bounding box of the target object comprises:
calculating, for each camera assembly, the world coordinates of the bounding box projected onto the preset fusion area, based on the pixel coordinates of the bounding box of the target object and the internal and external parameters corresponding to each camera assembly;
The internal and external parameters include: an extrinsic rotation matrix, an extrinsic translation vector, and an intrinsic matrix.
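Claim 4 relies on the standard ground-plane projection sketched below: for world points with z = 0, the pinhole model s·[u, v, 1]ᵀ = K(R·[x, y, 0]ᵀ + t) collapses to a homography built from the first two columns of R and the translation t, so a pixel can be back-projected to ground coordinates by inverting that homography. The function name and the pinhole simplification are assumptions of this sketch; with the fisheye cameras of a real look-around system, the pixel would be undistorted first.

```python
# Hypothetical back-projection of a bounding-box pixel onto the ground
# plane z = 0, using an intrinsic matrix K, extrinsic rotation R and
# extrinsic translation t (pinhole model assumed for illustration).
import numpy as np

def pixel_to_ground(u: float, v: float,
                    K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """World (x, y) where the ray through pixel (u, v) meets the plane z = 0."""
    # For z = 0, s * [u, v, 1]^T = K @ [r1 | r2 | t] @ [x, y, 1]^T,
    # i.e. a plane-to-image homography that can simply be inverted.
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    xy1 = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return xy1[:2] / xy1[2]  # dehomogenize to ground coordinates (x, y)
```

A bounding box would typically be projected by applying this to its bottom corners, since those pixels are the ones most likely to lie on the ground plane.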
5. The method for generating a look-around spliced view according to claim 3, wherein calculating the optimal starting position of the preset fusion area based on the world coordinates comprises:
calculating, based on the world coordinates and with the starting positions of different candidate preset fusion areas as the independent variable, the integral sum of the imaging alpha fusion weights of the corresponding camera assemblies over the area where the bounding box is located;
and taking the starting position of the preset fusion area for which the integral sum is maximal as the optimal starting position of the preset fusion area.
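Claim 5 amounts to a one-dimensional search: for each candidate starting position, accumulate the dominant camera's alpha fusion weight over the ground-projected bounding-box region, and keep the candidate with the largest sum. The sketch below implements this with a simple grid search and a linear alpha ramp; both the ramp profile and the angular parameterization are assumptions, since the patent fixes neither.

```python
# Hypothetical grid search for the optimal starting position of claim 5.
# The alpha fusion weight is modeled as a linear ramp across the fusion
# span; the weight profile and angular parameterization are assumptions.
import numpy as np

def optimal_start_angle(box_angles: np.ndarray,
                        candidate_starts: np.ndarray,
                        span: float) -> float:
    """box_angles: angles (rad) of ground-projected bounding-box samples
    around the vehicle centre; span: angular width of the fusion area."""
    def alpha(theta, start):
        # Dominant camera weight: 1 before the fusion area, ramping
        # linearly down to 0 across the span [start, start + span].
        u = (theta - start) / span
        return np.clip(1.0 - u, 0.0, 1.0)

    # Sum the weight over the box region for every candidate start and keep
    # the start that maximizes it, i.e. keeps the box in one camera's view.
    sums = [alpha(box_angles, s).sum() for s in candidate_starts]
    return float(candidate_starts[int(np.argmax(sums))])
```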
6. The method for generating a look-around spliced view according to claim 1, further comprising:
acquiring a look-around spliced view, wherein the look-around spliced view is obtained by fusing and splicing real-time images acquired by camera assemblies located in the front, rear, left and right directions of the target vehicle;
wherein the preset fusion area is a fusion area in the look-around spliced view.
7. The method for generating a look-around spliced view according to claim 6, further comprising:
optimizing the look-around spliced view based on the optimal starting position of the preset fusion area to obtain an optimized look-around spliced view.
8. A device for generating a look-around spliced view, characterized by comprising:
a target detection unit, configured to perform target detection on real-time images acquired by a plurality of camera assemblies of a target vehicle, determine whether a target object exists in a preset fusion area, and if the target object exists in the preset fusion area, acquire pixel coordinates of a bounding box of the target object;
a calculation unit, configured to calculate an optimal starting position of the preset fusion area based on the pixel coordinates of the bounding box of the target object;
and a fusion splicing unit, configured to fuse and splice the real-time images based on the optimal starting position of the preset fusion area, to obtain a look-around spliced view.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method for generating a look-around spliced view according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method for generating a look-around spliced view according to any one of claims 1 to 7.
CN202311872807.XA 2023-12-29 2023-12-29 Method and device for generating looking-around spliced view, electronic equipment and storage medium Pending CN117830089A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311872807.XA CN117830089A (en) 2023-12-29 2023-12-29 Method and device for generating looking-around spliced view, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117830089A 2024-04-05

Family

ID=90507625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311872807.XA Pending CN117830089A (en) 2023-12-29 2023-12-29 Method and device for generating looking-around spliced view, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117830089A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination