CN117853569A - Vehicle peripheral area presentation device and method and electronic equipment - Google Patents

Vehicle peripheral area presentation device and method and electronic equipment

Info

Publication number
CN117853569A
Authority
CN
China
Prior art keywords
vehicle
image
preset
projection
target
Prior art date
Legal status
Granted
Application number
CN202410263199.0A
Other languages
Chinese (zh)
Other versions
CN117853569B (en)
Inventor
刘雄辉
Current Assignee
Shanghai Lichi Semiconductor Co., Ltd.
Original Assignee
Shanghai Lichi Semiconductor Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Lichi Semiconductor Co., Ltd.
Priority to CN202410263199.0A
Publication of CN117853569A
Application granted
Publication of CN117853569B
Legal status: Active
Anticipated expiration


Landscapes

  • Image Processing (AREA)

Abstract

The device comprises a processor, a storage unit, and a plurality of projection devices that project images onto the peripheral area of the vehicle. The storage unit stores a plurality of coordinate mapping matrices in advance, each corresponding to one projection device. With a pre-calibrated coordinate mapping matrix configured for each projection device, after a target area representing a dangerous area is marked in a bird's-eye view, pixel values of a plurality of pixel points are sampled from the bird's-eye view based on the coordinate mapping matrix of the corresponding projection device and an image to be projected is generated, so that the corresponding projection device projects an undistorted image. The dangerous area during vehicle travel is thereby presented more accurately, drivers and pedestrians can locate it more precisely, and driving safety is improved.

Description

Vehicle peripheral area presentation device and method and electronic equipment
Technical Field
The present disclosure relates to the field of vehicle safety technologies, and in particular to a device and method for presenting a peripheral area of a vehicle, and an electronic device.
Background
As automobiles become ever more important in daily life, driving safety is receiving growing attention. When turning, vehicles, especially large vehicles such as buses, trucks, and earth-moving vehicles, sweep out a large danger area on the inner side of the turn because of their large size and long wheelbase. Pedestrians and other vehicles nearby, however, often lack the experience to anticipate the road area the turning vehicle body will cover and cannot get out of the way in time, which leads to frequent safety accidents. It is therefore necessary to alert, in a timely manner, anyone who is in or about to enter the dangerous area.
In the prior art, an illumination lamp or a hazard-information projection lamp is mounted on the side of the vehicle so that drivers and pedestrians can see the danger area during a turn. However, this approach gives only a rough indication and cannot delimit the dangerous area accurately; pedestrians may therefore not be sufficiently vigilant, and drivers may be negligent or misjudge the level of danger.
Disclosure of Invention
The embodiments of the present application provide a device and method for presenting a vehicle peripheral area, and an electronic device, so as to present the dangerous area during vehicle travel more accurately.
In a first aspect, there is provided a device for presenting a peripheral area of a vehicle, the device comprising a processor, a storage unit, and a plurality of projection devices for projecting onto the peripheral area of the vehicle, the storage unit storing a plurality of coordinate mapping matrices in advance, each coordinate mapping matrix corresponding to one projection device, the processor being configured to: acquire a steering angle of the vehicle; if the steering angle is within a preset angle range, mark a target area in a bird's-eye view based on body information of the vehicle, the traveling direction, and the steering angle, wherein the bird's-eye view is an image of the vehicle under a preset bird's-eye field of view and includes a vehicle body area and a vehicle peripheral area; determine at least one target projection device from among the projection devices based on the position of the target area within the vehicle peripheral area, and acquire the coordinate mapping matrix of the target projection device from the storage unit; sample pixel values of a plurality of pixel points from the bird's-eye view based on the coordinate mapping matrix of the target projection device, generate an image to be projected, and input the image to be projected into the target projection device for projection; wherein the coordinate mapping matrix is generated based on a coordinate mapping relationship between a preset feature image and a projection image of the preset feature image, and the preset feature image comprises a plurality of feature points distributed according to a preset rule.
In a second aspect, there is provided a method for presenting a peripheral area of a vehicle, a plurality of projection devices for projecting onto the peripheral area of the vehicle being arranged on the vehicle, each projection device corresponding to one coordinate mapping matrix, the method comprising: acquiring a steering angle of the vehicle; if the steering angle is within a preset angle range, marking a target area in a bird's-eye view based on body information of the vehicle, the traveling direction, and the steering angle, wherein the bird's-eye view is an image of the vehicle under a preset bird's-eye field of view and includes a vehicle body area and a vehicle peripheral area; determining at least one target projection device from among the projection devices, and acquiring the coordinate mapping matrix of the target projection device; sampling pixel values of a plurality of pixel points from the bird's-eye view based on the coordinate mapping matrix of the target projection device, generating an image to be projected, and inputting the image to be projected into the target projection device for projection; wherein the coordinate mapping matrix is generated based on a coordinate mapping relationship between a preset feature image and a projection image of the preset feature image, and the preset feature image comprises a plurality of feature points distributed according to a preset rule.
In a third aspect, there is provided an electronic device comprising a processor and a memory, the memory storing an executable program that is executed by the processor to perform the method for presenting a vehicle peripheral area according to the second aspect.
The beneficial effects of the embodiments of the present application are as follows: after a target area representing a dangerous area is marked in the bird's-eye view, pixel values of a plurality of pixel points are sampled from the bird's-eye view based on the coordinate mapping matrix of the corresponding projection device and an image to be projected is generated, so that the corresponding projection device projects an undistorted image. The dangerous area during vehicle travel is thereby presented more accurately, drivers and pedestrians can locate it more precisely, and driving safety is improved.
Drawings
Fig. 1 is a block diagram of a presentation device for a peripheral area of a vehicle according to an embodiment of the present application;
FIG. 2 is a flowchart of generating a coordinate mapping matrix in an embodiment of the present application;
FIG. 3 is a flowchart of determining a mapping relationship in an embodiment of the present application;
FIG. 4 is a flowchart of generating a second coordinate matrix in an embodiment of the present application;
FIG. 5 is a flowchart of feature point detection for gray scale images based on target color levels in the embodiment of the application;
FIG. 6 is a schematic illustration of determining a target area when a vehicle is traveling in an embodiment of the present application;
FIG. 7 is a schematic diagram of determining a target area when a vehicle is backing up in an embodiment of the present application;
FIG. 8 is a schematic illustration of determining a target area while a vehicle is traveling in accordance with another embodiment of the present application;
FIG. 9 is a schematic illustration of determining a target area when a vehicle is backing up in another embodiment of the present application;
FIG. 10 is a schematic diagram of a coordinate mapping matrix established in an embodiment of the present application;
FIG. 11 is a schematic diagram of a preset feature image according to an embodiment of the present application;
FIG. 12 is a schematic view of acquiring a projected image of a front projection device of a vehicle in an embodiment of the present application;
FIG. 13 is a schematic diagram of determining a mapping relationship according to an embodiment of the present application;
FIG. 14 is a schematic view showing the distribution and projection effects of each projection device on a vehicle according to an embodiment of the present disclosure;
FIG. 15 is a schematic view showing the distribution and projection effects of each projection device on a vehicle according to another embodiment of the present disclosure;
FIG. 16 is a flowchart of a method for presenting a peripheral area of a vehicle according to an embodiment of the present application;
fig. 17 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Various aspects and features of the present application are described herein with reference to the accompanying drawings.
It should be understood that various modifications may be made to the embodiments of the application herein. Therefore, the above description should not be taken as limiting, but merely as exemplification of the embodiments. Other modifications within the scope and spirit of this application will occur to those skilled in the art.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and, together with a general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.
These and other characteristics of the present application will become apparent from the following description of a preferred form of embodiment, given as a non-limiting example, with reference to the accompanying drawings.
It is also to be understood that, although the present application has been described with reference to some specific examples, those skilled in the art can certainly realize many other equivalent forms of the present application.
The foregoing and other aspects, features, and advantages of the present application will become more apparent in light of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present application will be described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely exemplary of the application, which can be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the application with unnecessary or excessive detail. Therefore, specific structural and functional details disclosed herein are not intended to be limiting, but merely serve as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present application in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," each of which may refer to one or more of the same or different embodiments of the present application.
With the presentation device for a vehicle peripheral area according to the present application, a pre-calibrated coordinate mapping matrix is configured for each projection device; after a target area representing a dangerous area is marked in the bird's-eye view, pixel values of a plurality of pixel points are sampled from the bird's-eye view based on the coordinate mapping matrix of the corresponding projection device and an image to be projected is generated, so that the corresponding projection device projects an undistorted image. The dangerous area during vehicle travel is thereby presented more accurately, drivers and pedestrians can locate it more precisely, and driving safety is improved.
As shown in fig. 1, the device for presenting a vehicle peripheral area includes a processor, a storage unit, and a plurality of projection devices for projecting onto the vehicle peripheral area, the storage unit storing a plurality of coordinate mapping matrices in advance, each coordinate mapping matrix corresponding to one projection device, the processor being configured to:
acquire a steering angle of the vehicle;
if the steering angle is within a preset angle range, mark a target area in a bird's-eye view based on body information of the vehicle, the traveling direction, and the steering angle, wherein the bird's-eye view is an image of the vehicle under a preset bird's-eye field of view and includes a vehicle body area and a vehicle peripheral area;
determine at least one target projection device from among the projection devices based on the position of the target area within the vehicle peripheral area, and acquire the coordinate mapping matrix of the target projection device from the storage unit;
sample pixel values of a plurality of pixel points from the bird's-eye view based on the coordinate mapping matrix of the target projection device, generate an image to be projected, and input the image to be projected into the target projection device for projection;
wherein the coordinate mapping matrix is generated based on a coordinate mapping relationship between a preset feature image and a projection image of the preset feature image, and the preset feature image comprises a plurality of feature points distributed according to a preset rule.
In this embodiment, the presentation device includes a processor, a storage unit, and a plurality of projection devices; the storage unit and each projection device are connected to the processor, which controls the projection. The projection devices may be, for example, LCD, DLP, or LCOS projectors. Those skilled in the art can flexibly set the number and installation positions of the projection devices as needed: the more projection devices there are and the more sensibly they are distributed, the larger the area that can be covered by correct projection. For example, the vehicle shown in fig. 14 is provided with four projection devices and the vehicle shown in fig. 15 with six, and each projection device in fig. 15 covers a larger projection area than its counterpart in fig. 14.
Each projection device is mounted at a preset position on the vehicle for projecting onto the peripheral area of the vehicle, which may be an area surrounding the vehicle, such as the area within 3 meters of the vehicle's periphery, or an area formed by several spaced-apart sub-areas around the vehicle.
After a normal picture is projected onto a plane, the picture displayed on the projection plane is distorted relative to the original, because the projection plane is generally not perpendicular to the projection optical axis and because of optical deviations of the projection device. To avoid this problem, a coordinate mapping matrix corresponding to each projection device is generated based on the coordinate mapping relationship between a preset feature image and the projection image of that preset feature image. Each coordinate mapping matrix is stored in the storage unit, and the processor can later retrieve the corresponding matrix from the storage unit as needed. The preset feature image includes a plurality of feature points distributed according to a preset rule; for example, it may be a black-background bitmap with white dots as feature points, a white-background bitmap with black dots as feature points, a checkerboard pattern, or the like.
When performing projection control, the processor first acquires the steering angle of the vehicle; the processor may obtain the steering angle from the steering wheel, from the vehicle controller, or directly from the wheels of the vehicle. The processor then judges whether the steering angle is within the preset angle range; if so, the vehicle is turning, and a target area is marked in the bird's-eye view based on the body information, traveling direction, and steering angle of the vehicle. The target area represents the dangerous area when the vehicle turns. The body information may be dimensional information such as the length, width, and height of the vehicle body, and the traveling direction includes forward, backward, and so on. The bird's-eye view is an image of the vehicle under the preset bird's-eye field of view and includes the vehicle body area and the vehicle peripheral area; in a specific application scenario of the present application, as shown in figs. 6-9, the bird's-eye view includes the body area where the vehicle is located and the peripheral area outside the vehicle body. In some embodiments of the present application, the top of the bird's-eye view always coincides with the front of the vehicle body, and the position of the vehicle body in the bird's-eye view is fixed.
After the target area is marked, at least one target projection device is determined from the projection devices based on the position of the target area within the vehicle peripheral area, and the coordinate mapping matrix of that target projection device is obtained from the storage unit so that the target projection device projects the target area. Pixel values of a plurality of pixel points are then sampled from the bird's-eye view based on the coordinate mapping matrix of the target projection device, the sampled values are assigned to the corresponding pixel points according to the coordinate mapping matrix, and the image to be projected is generated. Finally, the image to be projected is input into the target projection device for projection. Because the coordinate mapping matrix is generated from the coordinate mapping relationship between the preset feature image and its projection image, distortion of the projected image is avoided, and each target projection device projects an undistorted image representing the dangerous area onto the vehicle peripheral area.
It will be appreciated that if the steering angle is not within the preset angle range, the vehicle is not turning and projection is not required, so the processor keeps every projection device stopped.
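As a minimal sketch of this control flow, the following Python fragment shows the turn-gated projection logic; the Projector class, the select_targets and build_image callables, and the angle range values are hypothetical placeholders, not taken from the patent:

```python
from dataclasses import dataclass

ANGLE_MIN, ANGLE_MAX = 5.0, 45.0   # assumed preset angle range, in degrees

@dataclass
class Projector:
    """Hypothetical stand-in for one on-vehicle projection device."""
    device_id: int

    def project(self, image):
        print(f"projector {self.device_id}: projecting marked region")

    def stop(self):
        print(f"projector {self.device_id}: stopped")

def control_step(steering_angle_deg, projectors, select_targets, build_image):
    """One processor update: project only while the steering angle is in range."""
    if not (ANGLE_MIN <= abs(steering_angle_deg) <= ANGLE_MAX):
        for p in projectors:
            p.stop()               # vehicle is not turning: keep projection stopped
        return
    for p in select_targets(steering_angle_deg):
        # build_image samples the bird's-eye view through the projector's
        # coordinate mapping matrix (see the remap sketch later in the text)
        p.project(build_image(p.device_id))
```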
The device for presenting a vehicle peripheral area comprises a processor, a storage unit, and a plurality of projection devices for projecting onto the vehicle peripheral area, the storage unit storing a plurality of coordinate mapping matrices in advance, each corresponding to one projection device. With a pre-calibrated coordinate mapping matrix configured for each projection device, after a target area representing a dangerous area is marked in the bird's-eye view, pixel values of a plurality of pixel points are sampled from the bird's-eye view based on the coordinate mapping matrix of the corresponding projection device and an image to be projected is generated, so that the corresponding projection device projects an undistorted image. The dangerous area during vehicle travel is thereby presented more accurately, drivers and pedestrians can locate it more precisely, and driving safety is improved.
In some embodiments of the present application, as shown in fig. 2, the generation process of the coordinate mapping matrix includes the following steps:
Step S21: place the vehicle in a preset darkroom space according to the position of the vehicle body area in the bird's-eye view.
In this embodiment, for a vehicle equipped with the presentation device before shipment, the coordinate mapping matrix is generated before the vehicle ships; for a vehicle to which the presentation device is added after shipment, the matrix is generated after the device is installed. When generating the coordinate mapping matrix, a preset darkroom space is prepared, in which a photographing device for shooting the vehicle and its peripheral area is arranged, together with a preset parking spot corresponding to the position of the vehicle body area in the bird's-eye view; the vehicle is placed at the preset parking spot in the darkroom space.
Step S22: produce the preset feature image according to the resolution of the projection device, project the preset feature image with the projection device, and photograph the projected bird's-eye view to obtain the projection image.
The preset feature image is produced according to the resolution of the projection device to ensure the quality of the image projected by that device. The preset feature image is input into the projection device and projected, and the projected bird's-eye view is photographed by the photographing device in the preset darkroom space to obtain the projection image.
Step S23, generating the coordinate mapping matrix based on the mapping relationship between the coordinates of each pixel point in the preset feature image and the coordinates of each pixel point in the projection image.
The coordinates of each pixel point in the preset feature image and the coordinates of each pixel point in the projection image are determined, and the coordinate mapping matrix is generated based on the mapping relationship between the two sets of coordinates.
In a specific application scenario of the present application, as shown in fig. 11, the preset feature image is a black-background bitmap with white points as feature points. As shown in fig. 12, the preset feature image is input into the front projection device of the vehicle, the front projection device projects it, and the projected bird's-eye view is photographed to obtain the projection image corresponding to the front projection device. As shown in fig. 10, given the coordinates of a pixel point A in the input image (i.e., the preset feature image), the coordinates of the corresponding pixel point A' in the projection image can be found through the coordinate mapping matrix M; pixel point A' coincides with pixel point A'' in the bird's-eye view. The pixel value of A'' in the bird's-eye view is therefore assigned to pixel point A, which the projection device then projects correctly onto the position of A', i.e., the position of A'' in the bird's-eye view.
It can be appreciated that the coordinate mapping matrix does not need to be set frequently; it only needs to be set at shipment or when the presentation device is initialized, where initialization includes initialization after the presentation device is installed and reinitialization when a projection abnormality occurs.
By placing the vehicle in the preset darkroom space, projecting the preset feature image with the projection device, photographing the projection image, and generating the coordinate mapping matrix based on the coordinate mapping relationship between the preset feature image and the projection image, the coordinate mapping matrix satisfies the requirement of undistorted projection and its accuracy is improved.
In some embodiments of the present application, as shown in fig. 3, the mapping relationship determining process includes the following steps:
step S31, a first coordinate matrix is generated based on the position of each feature point in the preset feature image, and a plurality of groups of rectangular feature points are determined based on the first coordinate matrix, wherein each group of rectangular feature points correspondingly divides the preset feature image into a plurality of minimum rectangles.
Because the projection image is obtained by projecting the preset feature image, both the preset feature image and the projection image contain the same plurality of feature points. The feature points in the preset feature image divide it into a plurality of minimal rectangles, and the feature points in the projection image divide it into a plurality of minimal quadrilaterals; perspective transformation between each minimal rectangle and the corresponding minimal quadrilateral then determines the mapping relationship at the pixel level.
First, a first coordinate matrix is generated based on the position of each feature point in a preset feature image, then a plurality of groups of rectangular feature points are determined based on the first coordinate matrix, and each group of rectangular feature points correspondingly divides the preset feature image into a plurality of minimum rectangles.
In a specific application scenario of the present application, as shown in fig. 11, the preset feature image is a black-background bitmap with white points as feature points. Let the input resolution of the projection device be W × H, let the lattice contain m × n equally spaced dots, and let the top, bottom, left, and right distances between the dot matrix and the image edges be nTopMargin, nBottomMargin, nLeftMargin, and nRightMargin, respectively. Denoting the abscissas of the first coordinate matrix by X_i and the ordinates by Y_j, equal spacing gives:

X_i = nLeftMargin + i · (W − nLeftMargin − nRightMargin) / (m − 1), i = 0, …, m − 1

Y_j = nTopMargin + j · (H − nTopMargin − nBottomMargin) / (n − 1), j = 0, …, n − 1
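A short sketch of how such a first coordinate matrix could be computed follows; the equal-spacing rule matches the reconstruction above, and the concrete resolution, lattice size, and margin values in the example call are illustrative assumptions only:

```python
import numpy as np

def first_coordinate_matrix(width, height, m, n, top, bottom, left, right):
    """Coordinates of an m-column by n-row lattice of equally spaced dots.

    width/height: projector input resolution; top/bottom/left/right:
    nTopMargin, nBottomMargin, nLeftMargin, nRightMargin from the text.
    """
    xs = np.linspace(left, width - right, m)    # abscissas X_i
    ys = np.linspace(top, height - bottom, n)   # ordinates Y_j
    X, Y = np.meshgrid(xs, ys)                  # each of shape (n, m)
    return X, Y

# illustrative values only: 1920x1080 input, 16x9 dots, 60/40-pixel margins
X, Y = first_coordinate_matrix(1920, 1080, m=16, n=9,
                               top=40, bottom=40, left=60, right=60)
```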
Step S32: generate a second coordinate matrix based on the position of each feature point in the projection image, and determine a plurality of groups of quadrilateral feature points based on the second coordinate matrix, wherein the groups of quadrilateral feature points divide the projection image into a plurality of minimal quadrilaterals.
It should be noted that the embodiments of the present application are not limited to dividing the preset feature image into minimal rectangles and the projection image into minimal quadrilaterals; when the distribution of the feature points conforms to other shapes, the preset feature image and the projection image may be divided according to those shapes.
Step S33: determine a perspective transformation matrix between each group of rectangular feature points and the corresponding group of quadrilateral feature points based on a projective perspective transformation algorithm.
To determine the pixel values of the pixels other than the feature points, after the groups of rectangular feature points and of quadrilateral feature points are determined, each group of rectangular feature points and the corresponding group of quadrilateral feature points are processed based on a projective perspective transformation algorithm (the getPerspectiveTransform function in OpenCV may be used), and the perspective transformation matrix between them is determined.
Step S34: determine the coordinate correspondence between all pixel points in each minimal rectangle and all pixel points in the corresponding minimal quadrilateral based on the perspective transformation matrix, and determine the mapping relationship based on the coordinate correspondences.
The correspondence between all pixel points in each minimal rectangle and all pixel points in the corresponding minimal quadrilateral also follows the perspective transformation matrix. The coordinate correspondence between them is determined based on the perspective transformation matrix (the perspectiveTransform function of OpenCV may be used), and the mapping relationship is determined from these coordinate correspondences.
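The following Python sketch shows one cell of this procedure with OpenCV; the corner coordinates are illustrative values, not calibration data from the patent:

```python
import cv2
import numpy as np

# Corners of one minimal rectangle in the preset feature image and of the
# corresponding minimal quadrilateral detected in the projection image
# (all coordinate values here are illustrative).
rect = np.float32([[60, 40], [184, 40], [60, 170], [184, 170]])
quad = np.float32([[71, 52], [190, 48], [66, 181], [186, 175]])

M = cv2.getPerspectiveTransform(rect, quad)     # 3x3 perspective matrix

# Map every integer pixel inside the minimal rectangle through M.
xs = np.arange(60, 185)
ys = np.arange(40, 171)
gx, gy = np.meshgrid(xs, ys)
pts = np.stack([gx, gy], axis=-1).reshape(-1, 1, 2).astype(np.float32)
mapped = cv2.perspectiveTransform(pts, M).reshape(len(ys), len(xs), 2)
# mapped[j, i] is the (x, y) position in the projection image for rectangle
# pixel (xs[i], ys[j]); repeating this over every cell yields the pixel-level
# mapping relationship.
```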
In a specific application scenario of the present application, as shown in fig. 13, a plurality of groups of rectangular feature points in the preset feature image are determined based on the first coordinate matrix, and a plurality of groups of quadrilateral feature points in the projection image are determined based on the second coordinate matrix. The perspective transformation matrix between each group of rectangular feature points and the corresponding group of quadrilateral feature points is then determined based on the projective perspective transformation algorithm.
By determining a plurality of groups of rectangular feature points in the preset feature image and a plurality of groups of quadrilateral feature points in the projection image, the coordinate correspondence between all pixel points of the preset feature image and of the projection image is determined based on the perspective transformation matrices between the corresponding groups, and the mapping relationship is determined from these coordinate correspondences, so that the mapping relationship is obtained more efficiently and accurately.
In some embodiments of the present application, the preset feature image is a black background bitmap with a white point as a feature point, as shown in fig. 4, and the generating process of the second coordinate matrix includes the following steps:
step S41, converting the projection image into a gray scale.
The projection image is grayed to generate a gray-scale map, reducing the amount of computation in subsequent image processing. Optionally, the graying may use any one of the component method, the maximum-value method, the average method, and the weighted-average method.
In some embodiments of the present application, the converting the projection image into a gray scale image includes:
and carrying out distortion correction and cutting on the projection image, converting the projection image subjected to distortion correction and cutting into a gray level image, and filtering the gray level image according to a preset filtering algorithm.
In this embodiment, distortion correction and cropping are performed on the projection image so that its field of view matches the bird's-eye view layout. The corrected and cropped projection image is converted into a gray-scale map, and the gray-scale map is filtered according to a preset filtering algorithm to remove noise, improving its image quality so that feature point detection can then be performed more accurately and efficiently.
Optionally, the preset filtering algorithm is any one of Gaussian filtering, mean filtering, median filtering, bilateral filtering, and the like.
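A minimal sketch of this pre-processing step, assuming Gaussian filtering is the chosen option and a 5x5 kernel (both assumptions, not values from the patent):

```python
import cv2

def to_filtered_gray(projection_image):
    """Convert the (already distortion-corrected and cropped) projection image
    to gray scale and denoise it; Gaussian filtering is one listed option."""
    gray = cv2.cvtColor(projection_image, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (5, 5), 0)    # 5x5 kernel is an assumption
```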
Step S42: perform histogram statistics on the gray-scale map to generate a statistical histogram, and normalize the original value of each color level in the statistical histogram to obtain the statistical value of each color level.
Histogram statistics on the gray-scale map count the number of pixel points at each color level; specifically, the calcHist function in OpenCV may be used to generate the statistical histogram. The original value of each color level is then normalized by dividing the pixel count of that level by the total number of pixels in the whole map, yielding the statistical value of each color level; it can be understood that the maximum possible statistical value is 1.
Step S43: sequentially select target color levels from the statistical histogram in descending order, wherein the target color level is the largest of all unselected color levels and its statistical value is smaller than a preset threshold.
Because the preset feature image is a black-background dot-matrix image with white points as feature points, the white dots must be searched for in the gray-scale map from high gray values downward before the second coordinate matrix is determined. Target color levels are therefore selected from the statistical histogram in descending order, and the white dots in the gray-scale map are then searched for based on each target color level. The target color level is the largest of the unselected color levels whose statistical value is smaller than the preset threshold; that is, the search proceeds downward level by level from the largest color level, and once the statistical value of a color level falls below the preset threshold, that color level and each subsequent level are taken in turn as the target color level, and feature point detection begins.
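A sketch of steps S42 and S43 in Python with OpenCV follows; the threshold value is an assumption, and the selection rule implements the description above literally:

```python
import cv2

def target_color_levels(gray, threshold=0.05):
    """Return the target color levels in descending order.

    The statistical histogram is normalized by the total pixel count; scanning
    down from level 255, the first level whose statistical value falls below
    the (assumed) threshold and every lower level become target levels.
    """
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    stats = hist / gray.size                     # normalized: maximum value is 1
    for level in range(255, -1, -1):             # search from the largest level down
        if stats[level] < threshold:
            return list(range(level, -1, -1))    # this level and each later one
    return []
```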
Step S44: detect feature points in the gray-scale map based on each target color level, and generate the second coordinate matrix based on the coordinates of the detected feature points.
Feature point detection is performed on the gray-scale map at each target color level. After detection is finished, the coordinates of the detected feature points are arranged from small to large to generate the second coordinate matrix. In a specific application scenario of the present application, all detected feature points are arranged from small to large according to a ranking value calculated from their coordinates (x, y) to obtain the second coordinate matrix.
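The following sketch assumes a row-major ordering rule (coarse sort by y, then by x within each row), one common choice for arranging grid points; it stands in for the patent's ranking value computed from (x, y):

```python
import numpy as np

def second_coordinate_matrix(points, m, n):
    """Arrange n*m detected dot centers into an (n, m) grid.

    The row-major rule here is an assumption in place of the patent's
    ranking value computed from the coordinates (x, y).
    """
    pts = sorted(points, key=lambda p: p[1])                   # coarse sort by y
    rows = [sorted(pts[r * m:(r + 1) * m]) for r in range(n)]  # sort rows by x
    grid = np.array(rows, dtype=np.float32)                    # shape (n, m, 2)
    return grid[..., 0], grid[..., 1]                          # X and Y matrices
```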
By converting the projection image into a gray-scale map, performing histogram statistics and normalization, and detecting feature points at each selected target color level, the second coordinate matrix is generated more efficiently and accurately.
It should be noted that the preset feature image in the embodiments of the present application is not limited to a black-background bitmap with white points as feature points. For other types of feature image, feature point detection may follow the characteristics of the feature points: for example, when the preset feature image is a white-background bitmap with black points as feature points, target color levels are selected from the statistical histogram in ascending order, and when the preset feature image is a checkerboard image, the Harris corner detection algorithm may be used.
In some embodiments of the present application, as shown in fig. 5, the process of performing feature point detection on the gray scale map based on each target color level includes the following steps:
Step S51: binarize the gray-scale map based on the current target color level to generate a binarized image.
The binarization specifically sets the gray value of every pixel whose gray value is smaller than the current target color level to 0 and the gray values of all other pixels to 255, thereby generating the binarized image.
Step S52: search for contour lines in the binarized image based on a contour-search algorithm, determine the minimum circle of each group of found contour lines based on a minimum-enclosing-circle algorithm, and take the center of each minimum circle as a feature point.
Contour lines are searched for in the binarized image based on a contour-search algorithm (the findContours function in OpenCV may be used), the minimum circle of each found group of contour lines is determined based on a minimum-enclosing-circle algorithm (the minEnclosingCircle function in OpenCV may be used), and finally the center of each minimum circle is taken as a feature point.
In some embodiments of the present application, before searching for a contour in the binarized image based on a contour search algorithm, the method further includes:
and performing morphological closing operation processing on the binarized image.
In this embodiment, before the contour search, a morphological closing operation (dilation followed by erosion) is performed on the binarized image; specifically, the morphologyEx function in OpenCV may be used. This smooths and connects the binarized image, reducing the amount of computation in the contour search and improving efficiency.
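The following sketch combines steps S51 and S52 with the closing operation; the 3x3 kernel size is an assumption:

```python
import cv2
import numpy as np

def detect_dots(gray, level):
    """Feature point detection at one target color level (steps S51-S52)."""
    # gray value < level -> 0, all other pixels -> 255
    _, binary = cv2.threshold(gray, level - 1, 255, cv2.THRESH_BINARY)
    # morphological close (dilate then erode) to smooth and connect the dots
    kernel = np.ones((3, 3), np.uint8)           # kernel size is an assumption
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # center of the minimum enclosing circle of each contour is a feature point
    return [cv2.minEnclosingCircle(c)[0] for c in contours]
```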
Step S53: judge whether the number of detected feature points is greater than the number of feature points in the preset feature image; if not, execute step S54, otherwise execute step S57.
The number of detected feature points is counted; if it is not greater than the number of feature points in the preset feature image, step S54 is executed, otherwise step S57 is executed.
Step S54: if all target color levels have been processed, execute step S56; otherwise execute step S55.
Step S55: take the next target color level as the new current target color level, and return to step S51.
Step S56: end.
Step S57: photograph the projected bird's-eye view again, acquire a new projection image, and perform feature point detection again based on the gray-scale map of the new projection image.
If the number of detected feature points is greater than the number of feature points in the preset feature image, the image quality of the projection image is insufficient; the projected bird's-eye view must be photographed again to acquire a new projection image, and feature point detection is performed again based on the gray-scale map of the new projection image.
By binarizing the gray-scale map at each target color level and locating the feature points in the binarized image with a contour-search algorithm and a minimum-enclosing-circle algorithm, feature point detection on the gray-scale map is realized more efficiently and accurately.
For example, as shown in fig. 6, when the vehicle travels forward with a deflection angle (i.e., steering angle) θ between the deflection direction of the left front wheel and the direction of the vehicle body, the vehicle moves in a circle about the point O. The point O is determined as the intersection of the straight line through the rear wheel axle and the straight line through the left front wheel axle perpendicular to the deflection direction of the left front wheel.
The trajectory of the left front wheel is a circle with O as the center and r_l as the radius, where:
r_l = wb / sin(θ), where wb is the vehicle wheelbase and sin is the sine function.
The trajectory of the right rear wheel is a circle with O as the center and r_r as the radius, where:
r_r = wb · cot(θ) − wt, where wb is the wheelbase, wt is the wheel track, and cot is the cotangent function.
The region between the trajectory of the left front wheel and the trajectory of the right rear wheel is the region to be marked; marking this region generates the target area.
The target area is determined differently when the vehicle moves forward and when it backs up. Fig. 7 is a schematic diagram of determining the target area when the vehicle is backing up; the determination is similar to that of fig. 6 and is not repeated here.
In some embodiments, to improve safety, as shown in figs. 8 and 9, the trajectory of the left front wheel is replaced with the trajectory of the left front point of the vehicle body, which requires adding the distance fo (the front overhang) from the center of the front wheel to the front end of the vehicle body. The trajectory of the left front body point is a circle with O as the center and r_l as the radius, where r_l = sqrt((wb · cot(θ))² + (wb + fo)²), wb is the vehicle wheelbase, fo is the front overhang, and cot is the cotangent function.
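A small sketch of these radius formulas, with the body-corner radius following the reconstruction above; the numeric example values are illustrative only:

```python
import math

def turning_radii(theta_deg, wb, wt, fo):
    """Trajectory radii about the turning center O for steering angle theta:
    left front wheel, right rear wheel, and the left front body point (the
    last formula follows the reconstruction above)."""
    theta = math.radians(theta_deg)
    cot = 1.0 / math.tan(theta)
    r_l = wb / math.sin(theta)              # left front wheel
    r_r = wb * cot - wt                     # right rear wheel
    r_body = math.hypot(wb * cot, wb + fo)  # left front point of the body
    return r_l, r_r, r_body

# illustrative values: wheelbase 4.5 m, wheel track 1.9 m, front overhang 1.4 m
print(turning_radii(20.0, wb=4.5, wt=1.9, fo=1.4))
```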
Figs. 6 to 9 show the case of the vehicle wheels turning right; if the wheels turn left, the target area for the left turn can be determined by bilateral (left-right) symmetry.
In some embodiments of the present application, the storage unit stores a preset relationship table, and the processor is further specifically configured to:
query the preset relation table retrieved from the storage unit based on the vehicle body information, the traveling direction, and the steering angle;
determine the region to be marked based on the query result;
mark the region to be marked based on a preset marking mode, and generate the target area;
wherein the preset relation table is generated according to the correspondence between different combinations of vehicle body information, traveling direction, and steering angle and different preset areas in the bird's-eye view.
In this embodiment, the preset areas in the bird's-eye view under different combinations of vehicle body information, traveling direction, and steering angle are drawn in advance, and the preset relation table built from them is stored in the storage unit. Once the vehicle body information, traveling direction, and steering angle are determined, the preset relation table retrieved from the storage unit is queried, and the region to be marked is determined based on the query result.
By building the preset relation table from the correspondence between different combinations of vehicle body information, traveling direction, and steering angle and different preset areas in the bird's-eye view, the region to be marked can be determined by a direct table lookup once those quantities are known, so the target area is determined more efficiently.
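A hypothetical sketch of such a table lookup follows; the key structure, the 5-degree angle bucketing, and the polygon values are illustrative assumptions, not data from the patent:

```python
# Hypothetical preset relation table: keys quantize the inputs, values are
# region-to-mark polygons in bird's-eye-view pixel coordinates.
PRESET_TABLE = {
    # (body length m, body width m, direction, angle bucket deg) -> polygon
    (12.0, 2.5, "forward_right", 20): [(410, 120), (560, 150),
                                       (500, 420), (400, 380)],
}

def region_to_mark(body_len, body_width, direction, steering_angle_deg):
    bucket = int(round(steering_angle_deg / 5.0)) * 5   # quantize the angle
    return PRESET_TABLE.get((body_len, body_width, direction, bucket))
```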
Optionally, the preset marking mode is one or a combination of graphics, symbols, characters, and colors that indicate danger.
As an alternative, the area to be marked may also be determined in real time, and in some embodiments of the present application, the processor is further specifically configured to:
determine a front outside reference point and a rear inside reference point of the turning vehicle according to the traveling direction;
determine the center of the circular motion of the turning vehicle based on the steering angle;
determine the trajectory of the front outside reference point and the trajectory of the rear inside reference point based on the circle center;
determine an initial blind zone of the vehicle based on the trajectory of the front outside reference point and the trajectory of the rear inside reference point;
adjust the initial blind zone based on the vehicle body information, and determine the region to be marked based on the adjustment result;
mark the region to be marked based on the preset marking mode, and generate the target area.
In this embodiment, the front outside reference point and the rear inside reference point are determined from the traveling direction when the vehicle turns, the center of the circular motion is determined from the steering angle, the trajectories of the two reference points are determined from the circle center, and the initial blind zone of the vehicle is determined from the two trajectories. Because the size of the blind zone differs with the vehicle body information, the initial blind zone is finally adjusted based on the vehicle body information, and the region to be marked is determined from the adjustment result. For example, for a small vehicle body the initial blind zone may be reduced by a first preset proportion, and for a large vehicle body it may be enlarged by a second preset proportion, so that the final region to be marked better matches the vehicle body information and its accuracy is improved.
It is understood that the initial blind zone and the region to be marked in this embodiment are both driving blind zones relative to the driver in the vehicle.
By determining the region to be marked in real time from the traveling direction, the steering angle, and the vehicle body information, the target area is determined more accurately.
In some embodiments of the present application, the processor is further specifically configured to:
if the traveling direction is forward right or backward right, the center of the front left wheel of the vehicle or a front left point on the vehicle body is set as the front outside reference point, and the center of the rear right wheel of the vehicle is set as the rear inside reference point;
if the traveling direction is forward left or backward left, the center of the right front wheel of the vehicle or a right front point on the vehicle body is set as the front outside reference point, and the center of the left rear wheel of the vehicle is set as the rear inside reference point.
In this embodiment, as shown in fig. 6 to 9, if the traveling direction is forward right or backward right, the center of the front left wheel of the vehicle or the front left point on the vehicle body is used as the front outside reference point, and the center of the rear right wheel of the vehicle is used as the rear inside reference point.
Accordingly, if the traveling direction is forward left or backward left, the center of the right front wheel of the vehicle or the right front point on the vehicle body is set as the front outside reference point and the center of the left rear wheel of the vehicle is set as the rear inside reference point, based on the left-right symmetry.
By determining different front outside and rear inside reference points according to the traveling direction, the accuracy of the reference points is improved, and the target area is thus determined more accurately.
In some embodiments of the present application, the generation of the bird's-eye view includes:
placing the vehicle at a preset parking point in a preset darkroom space;
shooting the vehicle and the peripheral area of the vehicle based on shooting equipment positioned right above the vehicle, and obtaining an initial aerial view;
and carrying out distortion correction and cutting on the initial aerial view to generate the aerial view.
In this embodiment, a photographing device is arranged in the preset darkroom space directly above the vehicle. After the vehicle is placed at the preset parking spot, the vehicle and its peripheral area are photographed by this device to obtain an initial bird's-eye view. Because the initial bird's-eye view may contain distortion, and to make the bird's-eye view conform to the preset layout, distortion correction and cropping are performed on the initial bird's-eye view to generate the bird's-eye view, thereby improving its accuracy.
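A minimal sketch of this correction-and-cropping step, assuming the intrinsics and distortion coefficients of the overhead camera are available from a prior calibration:

```python
import cv2

def make_birds_eye(initial_view, camera_matrix, dist_coeffs, crop):
    """Distortion-correct and crop the overhead shot into the bird's-eye view.

    camera_matrix / dist_coeffs come from calibrating the overhead camera;
    crop = (x, y, w, h) of the preset bird's-eye layout.
    """
    undistorted = cv2.undistort(initial_view, camera_matrix, dist_coeffs)
    x, y, w, h = crop
    return undistorted[y:y + h, x:x + w]
```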
In some embodiments of the present application, the processor is specifically configured to:
interpolate the pixel values of the pixel points sampled from the bird's-eye view based on a preset interpolation algorithm;
and generate the image to be projected based on the pixel value of each pixel point after interpolation.
In this embodiment, the coordinates of each pixel point in the preset feature image are integer coordinates, whereas the coordinates of the pixel points sampled from the bird's-eye view are floating-point coordinates with fractional parts. The pixel values of the sampled points are therefore interpolated based on a preset interpolation algorithm: specifically, the four integer-coordinate pixel points nearest to the current sampled point are determined, interpolation is performed from their pixel values, and the interpolated pixel values are assigned to the pixel points corresponding to the sampled points in the coordinate mapping matrix. This makes the pixel values of the pixel points in the image to be projected more accurate and improves the accuracy of the image to be projected.
Optionally, the preset interpolation algorithm is any one of bilinear interpolation, nearest-neighbor interpolation, bicubic interpolation, Lagrange interpolation, and the like.
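When the coordinate mapping matrix is materialized as two floating-point lookup maps (an assumption about the storage layout, not stated in the patent), this whole sampling-plus-bilinear-interpolation step corresponds to a single OpenCV remap call:

```python
import cv2

def build_projection_image(bird_view, map_x, map_y):
    """Sample the bird's-eye view through a projector's coordinate mapping
    matrix, here stored as float32 lookup maps: map_x/map_y give, for each
    projector pixel, the floating-point bird's-eye coordinate to sample, and
    INTER_LINEAR interpolates between the four nearest integer pixels."""
    return cv2.remap(bird_view, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```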
The embodiments of the present application also provide a method for presenting a vehicle peripheral area, wherein a plurality of projection devices for projecting onto the vehicle peripheral area are arranged on the vehicle, each projection device corresponding to one coordinate mapping matrix. As shown in fig. 16, the method includes the following steps:
Step S100: acquire the steering angle of the vehicle;
Step S200: if the steering angle is within the preset angle range, mark a target area in the bird's-eye view based on the body information, traveling direction, and steering angle of the vehicle, wherein the bird's-eye view is an image of the vehicle under the preset bird's-eye field of view and includes a vehicle body area and a vehicle peripheral area;
Step S300: determine at least one target projection device from among the projection devices based on the position of the target area within the vehicle peripheral area, and acquire the coordinate mapping matrix of the target projection device;
Step S400: sample pixel values of a plurality of pixel points from the bird's-eye view based on the coordinate mapping matrix of the target projection device, generate an image to be projected, and input the image to be projected into the target projection device for projection;
wherein the coordinate mapping matrix is generated based on the coordinate mapping relationship between a preset feature image and the projection image of the preset feature image, and the preset feature image comprises a plurality of feature points distributed according to a preset rule.
In some embodiments of the present application, marking a target area from a bird's eye view based on body information of the vehicle, a traveling direction, and the steering angle includes: inquiring a preset relation table based on the vehicle body information, the driving direction and the steering angle; determining a region to be marked based on the query result; marking the region to be marked based on a preset marking mode, and generating the target region; the preset relation table is generated according to the corresponding relation between different vehicle body information, running directions and steering angles and different preset areas in the aerial view.
In some embodiments of the present application, marking a target area from a bird's-eye view based on the body information, traveling direction, and steering angle of the vehicle includes: determining a front outside reference point and a rear inside reference point of the turning vehicle according to the traveling direction; determining the center of the circular motion of the turning vehicle based on the steering angle; determining the trajectory of the front outside reference point and the trajectory of the rear inside reference point based on the circle center; determining an initial blind zone of the vehicle based on the two trajectories; adjusting the initial blind zone based on the vehicle body information, and determining the region to be marked based on the adjustment result; and marking the region to be marked based on the preset marking mode to generate the target area.
In some embodiments of the present application, determining a front outboard reference point and a rear inboard reference point of the vehicle when steering according to the travel direction includes: if the traveling direction is forward right or backward right, the center of the front left wheel of the vehicle or a front left point on the vehicle body is set as the front outside reference point, and the center of the rear right wheel of the vehicle is set as the rear inside reference point; if the traveling direction is forward left or backward left, the center of the right front wheel of the vehicle or a right front point on the vehicle body is set as the front outside reference point, and the center of the left rear wheel of the vehicle is set as the rear inside reference point.
In some embodiments of the present application, sampling pixel values of a plurality of pixel points from the bird's-eye view based on the coordinate mapping matrix of the target projection device and generating an image to be projected includes: interpolating the pixel values of the pixel points sampled from the bird's-eye view based on a preset interpolation algorithm; and generating the image to be projected based on the pixel value of each pixel point after interpolation.
With the method for presenting a vehicle peripheral area according to the embodiments of the present application, a pre-calibrated coordinate mapping matrix is configured for each projection device; after a target area representing a dangerous area is marked in the bird's-eye view, pixel values of a plurality of pixel points are sampled from the bird's-eye view based on the coordinate mapping matrix of the corresponding projection device and an image to be projected is generated, so that the corresponding projection device projects an undistorted image. The dangerous area during vehicle travel is thereby presented more accurately, drivers and pedestrians can locate it more precisely, and driving safety is improved.
For other embodiments of a method for presenting a vehicle peripheral area of the present application, reference may be made to related embodiments in a device for presenting a vehicle peripheral area of the present application.
An embodiment of the present application further provides an electronic device. As shown in FIG. 17, the electronic device includes a processor and a memory; the memory stores an executable program, and the processor executes the executable program to perform the above method for presenting a vehicle peripheral area.
The electronic device in the embodiments of the present application may be an on-board electronic device of a vehicle. The vehicles in the embodiments of the present application may include general motor vehicles, for example passenger cars, SUVs, MPVs, buses, trucks, and other cargo or passenger vehicles, and may be hybrid vehicles, electric vehicles, fuel vehicles, plug-in hybrid vehicles, fuel cell vehicles, or other alternative-fuel vehicles. A hybrid vehicle is a vehicle with two or more power sources, and electric vehicles include pure electric vehicles, extended-range electric vehicles, and the like; none of this is particularly limited in the present application.
The embodiments of the present application also provide a storage medium carrying one or more computer programs which, when executed by a processor, implement a method of presenting a vehicle peripheral region as described above.
It should be appreciated that in embodiments of the present application, the processor may be a central processing unit (Central Processing Unit, CPU for short), another general-purpose processor, a digital signal processor (Digital Signal Processing, DSP for short), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field-Programmable Gate Array, FPGA for short) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
It should also be understood that the memory referred to in the embodiments of the present application may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read Only Memory, ROM for short), a programmable ROM (Programmable ROM, PROM for short), an erasable PROM (Erasable PROM, EPROM for short), an electrically erasable PROM (Electrically EPROM, EEPROM for short), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM for short), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
Note that when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, the memory (storage module) may be integrated into the processor.
It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
It should also be understood that "first", "second", "third", "fourth", and the various other ordinal numbers used herein are merely for convenience of description and do not limit the scope of the present application.
It should be understood that the term "and/or" merely describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
In implementation, the steps of the above method may be performed by hardware integrated logic circuits in the processor or by instructions in the form of software. The steps of the methods disclosed in the embodiments of the present application may be executed directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, a detailed description is not provided here.
In the various embodiments of the present application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative logical blocks (ILBs) and steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality differently for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed method, apparatus, and electronic device may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), semiconductor media (e.g., solid-state drive), and the like.
The foregoing is merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A presentation apparatus for a vehicle peripheral area, the apparatus comprising a processor, a storage unit, and a plurality of projection devices for projecting onto the vehicle peripheral area, the storage unit storing a plurality of coordinate mapping matrices in advance, each coordinate mapping matrix corresponding to one projection device, the processor being configured to:
acquire a steering angle of the vehicle;
if the steering angle is within a preset angle range, mark a target area in a bird's-eye view based on the body information, traveling direction, and steering angle of the vehicle, wherein the bird's-eye view is an image of the vehicle within a preset bird's-eye field of view and comprises a vehicle body area and the vehicle peripheral area;
determine at least one target projection device from among the projection devices based on the position of the target area in the vehicle peripheral area, and acquire the coordinate mapping matrix of the target projection device from the storage unit; and
fetch pixel values of a plurality of pixel points from the bird's-eye view based on the coordinate mapping matrix of the target projection device, generate an image to be projected, and input the image to be projected into the target projection device for projection;
wherein the coordinate mapping matrix is generated based on a coordinate mapping relation between a preset feature image and a projection image of the preset feature image, the preset feature image comprising a plurality of feature points distributed according to a preset rule.
2. The presentation apparatus of claim 1, wherein the generation of the coordinate mapping matrix includes:
placing the vehicle in a preset darkroom space according to the position of the vehicle body area in the bird's-eye view;
generating the preset feature image according to the resolution of the projection device, projecting the preset feature image with the projection device, and photographing the projected bird's-eye view to obtain the projection image;
and generating the coordinate mapping matrix based on the mapping relation between the coordinates of each pixel point in the preset feature image and the coordinates of each pixel point in the projection image.
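For illustration only (not part of the claims), the preset feature image of claim 2 could be produced as a black-background bitmap with white dots on a regular grid at the projector's native resolution; grid pitch, dot radius, resolution, and names are assumptions of this sketch.

```python
# Illustrative sketch only: grid pitch, dot radius, resolution, and names are
# assumptions. Produces a black-background bitmap with white dots distributed
# on a regular grid at the projector's native resolution.
import cv2
import numpy as np

def make_feature_image(width: int = 1920, height: int = 1080,
                       pitch: int = 60, radius: int = 3) -> np.ndarray:
    img = np.zeros((height, width), np.uint8)
    for y in range(pitch // 2, height, pitch):
        for x in range(pitch // 2, width, pitch):
            cv2.circle(img, (x, y), radius, 255, thickness=-1)  # one white dot
    return img

cv2.imwrite("preset_feature_image.png", make_feature_image())
# In the darkroom procedure, this image would be projected and the projected
# bird's-eye view photographed to obtain the projection image.
```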
3. The presentation apparatus of claim 2, wherein the determination of the mapping relation includes:
generating a first coordinate matrix based on the position of each feature point in the preset feature image, and determining a plurality of groups of rectangle feature points based on the first coordinate matrix, wherein the groups of rectangle feature points divide the preset feature image into a plurality of minimum rectangles;
generating a second coordinate matrix based on the position of each feature point in the projection image, and determining a plurality of groups of quadrilateral feature points based on the second coordinate matrix, wherein the groups of quadrilateral feature points divide the projection image into a plurality of minimum quadrilaterals;
determining a perspective transformation matrix between each group of rectangle feature points and the corresponding group of quadrilateral feature points based on a projective perspective transformation algorithm;
and determining, based on the perspective transformation matrices, the coordinate correspondence between the pixel points in each minimum rectangle and the pixel points in the corresponding minimum quadrilateral, and determining the mapping relation based on the coordinate correspondences.
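For illustration only (not part of the claims), a sketch of the per-cell perspective mapping of claim 3, assuming the rectangle and quadrilateral corner points are already matched; OpenCV's four-point homography is used here as one possible projective perspective transformation algorithm, and all names and values are assumptions.

```python
# Illustrative sketch: cv2.getPerspectiveTransform solves the 4-point
# homography for one cell; cv2.perspectiveTransform maps every integer pixel
# of the minimum rectangle to coordinates inside the matched quadrilateral.
import cv2
import numpy as np

def cell_mapping(rect_pts, quad_pts, width, height):
    """Dense coordinate correspondence for one minimum rectangle/quadrilateral pair."""
    m = cv2.getPerspectiveTransform(np.float32(rect_pts), np.float32(quad_pts))
    x0, y0 = rect_pts[0]
    xs, ys = np.meshgrid(np.arange(x0, x0 + width), np.arange(y0, y0 + height))
    src = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(src, m).reshape(height, width, 2)

rect = [(0, 0), (60, 0), (60, 60), (0, 60)]   # minimum rectangle corners
quad = [(5, 3), (68, 1), (71, 64), (2, 60)]   # matched quadrilateral corners
dense = cell_mapping(rect, quad, 60, 60)      # (60, 60, 2) target coordinates
```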
4. The presentation apparatus of claim 3, wherein the preset feature image is a black-background bitmap with white dots as feature points, and the generation of the second coordinate matrix includes:
converting the projection image into a grayscale map;
performing histogram statistics on the grayscale map to generate a statistical histogram, and normalizing the original value of each color level in the statistical histogram to obtain the statistical value of each color level;
sequentially selecting target color levels from the statistical histogram in descending order, wherein each target color level is the largest of the color levels not yet selected and its statistical value is smaller than a preset threshold;
and performing feature point detection on the grayscale map based on each target color level respectively, and generating the second coordinate matrix based on the coordinates of the detected feature points.
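For illustration only (not part of the claims), a sketch of the tone-level selection of claim 4: the grayscale histogram is normalized, and levels are walked from brightest to darkest, keeping those whose normalized count is below a threshold (dot pixels are bright but few against the black background). The threshold value and names are assumptions.

```python
# Illustrative sketch; threshold and names are assumptions.
import cv2
import numpy as np

def target_levels(projection_bgr: np.ndarray, threshold: float = 0.001):
    gray = cv2.cvtColor(projection_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    stats = hist / hist.sum()  # normalized statistical value per color level
    # Brightest levels first; keep levels that occur but are rarer than the threshold.
    return [lvl for lvl in range(255, -1, -1) if 0 < stats[lvl] < threshold]

demo = np.zeros((100, 100, 3), np.uint8)
demo[40:43, 40:43] = 255           # one bright "dot" on a black background
print(target_levels(demo))         # -> [255]
```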
5. The presentation apparatus of claim 4, wherein performing feature point detection on the grayscale map based on each target color level includes:
binarizing the grayscale map based on the current target color level to generate a binarized image;
finding contour lines in the binarized image based on a contour-finding algorithm, determining the minimum circle of each found contour line based on a minimum enclosing circle algorithm, and taking the center of each minimum circle as a feature point;
if the number of feature points found is not greater than the number of feature points in the preset feature image, taking the next target color level as the new current target color level until all target color levels have been processed;
and if the number of feature points found is greater than the number of feature points in the preset feature image, photographing the projected bird's-eye view again to obtain a new projection image, and performing feature point detection again based on the grayscale map of the new projection image.
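For illustration only (not part of the claims), a sketch of the detection step of claim 5 using binarization, contour finding, and minimum enclosing circles; the re-capture logic is summarized in a trailing comment, and all names are assumptions.

```python
# Illustrative sketch of binarize -> find contours -> minimum enclosing circles.
import cv2
import numpy as np

def detect_feature_points(gray: np.ndarray, level: int):
    _, binary = cv2.threshold(gray, level - 1, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.minEnclosingCircle(c)[0] for c in contours]  # circle centers

gray = np.zeros((100, 100), np.uint8)
cv2.circle(gray, (30, 30), 4, 255, -1)
cv2.circle(gray, (70, 60), 4, 255, -1)
print(detect_feature_points(gray, 255))  # two centers near (30, 30) and (70, 60)
# Per claim 5: if more points are found than the preset feature image contains,
# the projected bird's-eye view is photographed again; otherwise the next
# target color level is processed.
```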
6. The presentation apparatus of claim 1, wherein the storage unit stores a preset relation table, and the processor is further configured to:
query the preset relation table acquired from the storage unit based on the body information, the traveling direction, and the steering angle;
determine a region to be marked based on the query result;
and mark the region to be marked in a preset marking manner to generate the target area;
wherein the preset relation table is generated from the correspondence between different combinations of body information, traveling direction, and steering angle and different preset regions in the bird's-eye view.
7. The presentation apparatus of claim 1, wherein the processor is further configured to:
determine a front outer reference point and a rear inner reference point of the turning vehicle according to the traveling direction;
determine the center of the circular motion of the steering vehicle based on the steering angle;
determine the trajectory of the front outer reference point and the trajectory of the rear inner reference point about that center;
determine an initial blind zone of the vehicle based on the trajectory of the front outer reference point and the trajectory of the rear inner reference point;
adjust the initial blind zone based on the body information, and determine a region to be marked based on the adjustment result;
and mark the region to be marked in a preset marking manner to generate the target area.
8. The presentation apparatus of claim 1, wherein the processor is specifically configured to:
interpolate the pixel values of the pixel points fetched from the bird's-eye view based on a preset interpolation algorithm;
and generate the image to be projected based on the interpolated pixel values.
9. A method for presenting a vehicle peripheral area, wherein a plurality of projection devices for projecting onto the vehicle peripheral area are provided on the vehicle, each projection device corresponding to a coordinate mapping matrix, the method comprising:
acquiring a steering angle of the vehicle;
if the steering angle is within a preset angle range, marking a target area in a bird's-eye view based on the body information, traveling direction, and steering angle of the vehicle, wherein the bird's-eye view is an image of the vehicle within a preset bird's-eye field of view and comprises a vehicle body area and the vehicle peripheral area;
determining at least one target projection device from among the projection devices based on the position of the target area in the vehicle peripheral area, and acquiring the coordinate mapping matrix of the target projection device from a storage unit;
fetching pixel values of a plurality of pixel points from the bird's-eye view based on the coordinate mapping matrix of the target projection device, generating an image to be projected, and inputting the image to be projected into the target projection device for projection;
wherein the coordinate mapping matrix is generated based on a coordinate mapping relation between a preset feature image and a projection image of the preset feature image, the preset feature image comprising a plurality of feature points distributed according to a preset rule.
10. An electronic device comprising a processor and a memory, wherein the memory stores an executable program, and the processor executes the executable program to perform the method for presenting a vehicle peripheral area according to claim 9.
CN202410263199.0A 2024-03-07 2024-03-07 Vehicle peripheral area presentation device and method and electronic equipment Active CN117853569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410263199.0A CN117853569B (en) 2024-03-07 2024-03-07 Vehicle peripheral area presentation device and method and electronic equipment

Publications (2)

Publication Number Publication Date
CN117853569A true CN117853569A (en) 2024-04-09
CN117853569B CN117853569B (en) 2024-05-28

Family

ID=90543742

Country Status (1)

Country Link
CN (1) CN117853569B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6141014A (en) * 1995-04-20 2000-10-31 Hitachi, Ltd. Bird's-eye view forming method, map display apparatus and navigation system
JP2013026801A (en) * 2011-07-20 2013-02-04 Aisin Seiki Co Ltd Vehicle periphery monitoring system
JP2022125973A (en) * 2021-02-17 2022-08-29 株式会社エフェクト Position estimating apparatus, position estimating program, and position estimating method
CN113256742A (en) * 2021-07-15 2021-08-13 禾多科技(北京)有限公司 Interface display method and device, electronic equipment and computer readable medium
CN114820396A (en) * 2022-07-01 2022-07-29 泽景(西安)汽车电子有限责任公司 Image processing method, device, equipment and storage medium
CN117522766A (en) * 2022-07-29 2024-02-06 长沙智能驾驶研究院有限公司 Obstacle presenting method, apparatus, device, readable storage medium, and program product
CN116409243A (en) * 2022-12-29 2023-07-11 云车智途(重庆)科技有限公司 Truck panoramic pedestrian positioning and early warning method based on 360-degree fisheye camera
CN117612132A (en) * 2023-11-15 2024-02-27 北京百度网讯科技有限公司 Method and device for complementing bird's eye view BEV top view and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DANIELS JĀNIS JUSTS et al.: "Bird's-eye view image acquisition from simulated scenes using geometric inverse perspective mapping", 2020 17th Biennial Baltic Electronics Conference (BEC), 8 December 2020 (2020-12-08), pages 1 - 6 *
YANG Jiadong; XIE Ming; ZHANG Bo; LIU Qifan: "Implementation of a DSP-based panoramic system function", 科技通报 (Bulletin of Science and Technology), no. 11, 30 November 2017 (2017-11-30), pages 178 - 181 *

Also Published As

Publication number Publication date
CN117853569B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
US10860870B2 (en) Object detecting apparatus, object detecting method, and computer program product
CN107577988B (en) Method, device, storage medium and program product for realizing side vehicle positioning
JP6303090B2 (en) Image processing apparatus and image processing program
JP7190583B2 (en) Vehicle feature acquisition method and device
US11518390B2 (en) Road surface detection apparatus, image display apparatus using road surface detection apparatus, obstacle detection apparatus using road surface detection apparatus, road surface detection method, image display method using road surface detection method, and obstacle detection method using road surface detection method
CN108596899B (en) Road flatness detection method, device and equipment
CN106651963A (en) Mounting parameter calibration method for vehicular camera of driving assistant system
US10657396B1 (en) Method and device for estimating passenger statuses in 2 dimension image shot by using 2 dimension camera with fisheye lens
CN111681285B (en) Calibration method, calibration device, electronic equipment and storage medium
CN111751824A (en) Method, device and equipment for detecting obstacles around vehicle
CN111160070A (en) Vehicle panoramic image blind area eliminating method and device, storage medium and terminal equipment
US11377027B2 (en) Image processing apparatus, imaging apparatus, driving assistance apparatus, mobile body, and image processing method
CN114332142A (en) External parameter calibration method, device, system and medium for vehicle-mounted camera
CN117853569B (en) Vehicle peripheral area presentation device and method and electronic equipment
CN116486351A (en) Driving early warning method, device, equipment and storage medium
CN116630401A (en) Fish-eye camera ranging method and terminal
EP4067815A1 (en) Electronic device and control method
CN110727269A (en) Vehicle control method and related product
JP6983334B2 (en) Image recognition device
CN109703556B (en) Driving assistance method and apparatus
CN115063772B (en) Method for detecting vehicles after formation of vehicles, terminal equipment and storage medium
JP2021051348A (en) Object distance estimation apparatus and object distance estimation method
JP2018113622A (en) Image processing apparatus, image processing system, and image processing method
US11477371B2 (en) Partial image generating device, storage medium storing computer program for partial image generation and partial image generating method
KR20230127436A (en) Apparatus and method for detecting nearby vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant