CN114750696A - Vehicle vision presenting method, vehicle-mounted equipment and vehicle - Google Patents

Vehicle vision presenting method, vehicle-mounted equipment and vehicle

Info

Publication number
CN114750696A
CN114750696A (application number CN202210415479.XA)
Authority
CN
China
Prior art keywords
obstacle
vehicle
position information
target
obstacles
Prior art date
Legal status (assumption, not a legal conclusion)
Pending
Application number
CN202210415479.XA
Other languages
Chinese (zh)
Inventor
董宏宇
Current Assignee (the listed assignees may be inaccurate)
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd filed Critical Zhidao Network Technology Beijing Co Ltd
Priority: CN202210415479.XA
Publication: CN114750696A

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the intended use of the viewing arrangement
    • B60R2300/8093 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the intended use of the viewing arrangement, for obstacle warning

Abstract

The disclosure relates to a vehicle visual presentation method, a vehicle-mounted device, and a vehicle, applicable to conventional vehicles and autonomous vehicles. The method comprises: acquiring, in real time, image data of the surrounding environment captured by a camera device of the vehicle; performing obstacle detection on the image data at each moment to obtain a detection result indicating whether an obstacle exists and, if so, the corresponding obstacle type; when the detection result indicates that an obstacle exists, determining three-dimensional position information of the obstacle from the image data, the parameters of the camera device, and the real-time position information of the vehicle; and rendering the corresponding obstacle, according to the three-dimensional position information and the obstacle type, in a vehicle driving map loaded in real time on a display interface of the vehicle's on-board device. The display effect is intuitive for non-technical users, the user's perception of road conditions outside the vehicle while driving is improved, and the method is portable.

Description

Vehicle vision presenting method, vehicle-mounted equipment and vehicle
Technical Field
The disclosure relates to the technical field of vehicles, in particular to a vehicle visual presentation method, vehicle-mounted equipment and a vehicle.
Background
In the vehicle field, and particularly in autonomous driving technology, ensuring safety during driving and accurately perceiving road conditions are critical.
Existing methods sense and analyze road conditions with sensors, for example by infrared radar scanning, visual recognition, or millimeter-wave radar detection.
Disclosure of Invention
Embodiments of the present disclosure aim to solve, or at least partially solve, the following technical problem: existing visual presentation approaches are oriented toward debugging developers, depend heavily on the Linux system, and lack flexible portability.
In a first aspect, embodiments of the present disclosure provide a vehicle visual presentation method. The method comprises: acquiring, in real time, image data of the surrounding environment captured by a camera device of the vehicle; performing obstacle detection on the image data at each moment to obtain a detection result indicating whether an obstacle exists and, if so, the corresponding obstacle type; when the detection result indicates that an obstacle exists, determining three-dimensional position information of the obstacle from the image data, the parameters of the camera device, and the real-time position information of the vehicle; and rendering the corresponding obstacle, according to the three-dimensional position information and the obstacle type, in a vehicle driving map loaded in real time on a display interface of the vehicle's on-board device.
According to an embodiment of the present disclosure, rendering the corresponding obstacle in the vehicle driving map loaded in real time on the display interface of the vehicle's on-board device according to the three-dimensional position information and the obstacle type includes: calculating, from the three-dimensional position information and type of the obstacle, the coverage position and coverage area in the vehicle driving map of the primitive to be rendered for that obstacle; determining, from the coverage position and coverage area, whether the primitives to be rendered include a first target primitive located within the display range of the display interface and a second target primitive located outside that display range; when a first target primitive exists, rendering it in 3D on the display interface; and when a second target primitive exists, storing it in a to-be-rendered list, refreshing its storage state in the list according to the real-time display range of the display interface, and, once the second target primitive falls within the real-time display range, deleting it from the list and outputting it to the display interface for rendering.
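The partition-and-refresh logic above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: primitives are reduced to axis-aligned bounding boxes, and the function names (`box_inside`, `partition_primitives`, `refresh`) are hypothetical.

```python
def box_inside(box, viewport):
    """True if box (x, y, w, h) lies entirely inside viewport (x, y, w, h)."""
    bx, by, bw, bh = box
    vx, vy, vw, vh = viewport
    return vx <= bx and vy <= by and bx + bw <= vx + vw and by + bh <= vy + vh

def partition_primitives(primitives, viewport):
    """Split (id, box) primitives into those rendered now and those deferred
    to a to-be-rendered list, mirroring the first/second target primitives."""
    visible, deferred = [], []
    for pid, box in primitives:
        (visible if box_inside(box, viewport) else deferred).append((pid, box))
    return visible, deferred

def refresh(deferred, viewport):
    """Re-check the deferred list against the real-time viewport; primitives
    that have entered it are removed from the list and handed to the renderer."""
    newly_visible = [p for p in deferred if box_inside(p[1], viewport)]
    remaining = [p for p in deferred if not box_inside(p[1], viewport)]
    return newly_visible, remaining
```

As the vehicle moves and the map viewport shifts, `refresh` would be called each frame so deferred obstacles pop into view exactly when their coverage area enters the display range.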
According to an embodiment of the present disclosure, the vehicle driving map is loaded by: acquiring the planned path of the vehicle; determining the map data to be displayed from the vehicle's current driving position along the subsequent planned path; calculating, from the map data, a target map interval adapted to the current display parameters of the on-board device's display interface, the current display parameters being obtained from user pre-configuration or real-time updates; and loading the data corresponding to the target map interval in real time on the display interface to obtain the vehicle driving map.
According to an embodiment of the present disclosure, the method further includes: when the detection result indicates that an obstacle exists, extracting the obstacle's attention features from the image data, according to preset attention features defined for each known obstacle type and the type of the existing obstacle, the preset attention features characterizing identity differences between obstacles of the same type; comparing the similarity of the attention features extracted for same-type obstacles in the image data at two consecutive moments to obtain an identity recognition result indicating whether the same obstacle appears in both; and, according to the identity recognition result, assigning the same type identifier and distinct identity identifiers to obstacles of the same type, and different type identifiers to obstacles of different types. Rendering the corresponding obstacle in the vehicle driving map loaded in real time on the display interface of the vehicle's on-board device according to the three-dimensional position information and the obstacle type then includes: rendering the obstacle in the map according to its type identifier, identity identifier, and three-dimensional position information.
According to an embodiment of the disclosure, when the type identifiers and identity identifiers of obstacles at two consecutive moments are the same, the obstacles at the two moments are rendered in the vehicle driving map with the same 3D graphic; when the type identifiers differ, the obstacles are rendered with differentiated 3D graphics that differ at least in shape; and when the type identifiers are the same but the identity identifiers differ, the obstacles are rendered with the same 3D graphic but differentiated rendering effects, the differentiated rendering effects serving to visually distinguish instances of the same 3D graphic.
According to an embodiment of the present disclosure, rendering the corresponding obstacle in the vehicle driving map loaded in real time on the display interface of the vehicle's on-board device according to the three-dimensional position information and the obstacle type further includes: determining whether the primitives to be rendered include a third target primitive that lies partly within and partly outside the display range of the display interface; when a third target primitive exists, rendering in the vehicle driving map only the part of the primitive that lies within the display range; or, alternatively, proportionally shrinking the coverage area of the third target primitive until all of it lies within the display range, and rendering the shrunken third target primitive in the vehicle driving map.
According to an embodiment of the present disclosure, determining the three-dimensional position information of the obstacle from the image data, the parameters of the camera device, and the real-time position information of the vehicle includes: determining the relative position information of the obstacle in the world coordinate system in which the vehicle is located, from the image data and the parameters of the camera device; and determining the three-dimensional position information of the obstacle from the real-time position information of the vehicle and the relative position information.
According to an embodiment of the disclosure, the camera device is a binocular camera device, and determining the relative position information of an obstacle in the world coordinate system in which the vehicle is located, from the image data and the parameters of the camera device, includes: determining the coordinates of the matched image points, in the two cameras of the binocular camera device, of the same obstacle; determining the projection matrices of the two cameras from their respective calibrated parameters; constructing, from each projection matrix and the coordinates of the matched image points, a conversion equation from the world coordinate system to the image-plane coordinate system; and solving the conversion equations of the two cameras simultaneously by least squares to obtain the three-dimensional coordinates of the obstacle in the world coordinate system, which constitute the relative position information.
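The simultaneous least-squares solution described above is the standard linear triangulation (DLT) scheme. Below is a minimal NumPy sketch under that assumption; the patent does not give the exact formulation, so the stacking of `u·P[2] − P[0]` rows and the SVD-based homogeneous solve are the conventional choice, not a quote from the source.

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover the 3-D point seen at matched image points pt1 (u, v) in camera 1
    and pt2 in camera 2, given their 3x4 projection matrices P1 and P2.
    Each camera contributes two linear equations u*P[2]-P[0] and v*P[2]-P[1];
    the stacked 4x4 system is solved in the least-squares sense via SVD."""
    rows = []
    for P, (u, v) in ((np.asarray(P1, float), pt1), (np.asarray(P2, float), pt2)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # Homogeneous least squares: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to world coordinates
```

For a rectified binocular rig, P1 and P2 would come from the calibrated intrinsics and the baseline between the two cameras.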
According to an embodiment of the present disclosure, the parameters of the camera device include: the focal lengths and center distance of the two cameras of the binocular camera device after calibration and stereo calibration, the disparity between the two cameras, and the offset parameters, after calibration and stereo calibration, between the coordinate systems of the left and right image planes of the two cameras and the origin of the world coordinate system.
According to an embodiment of the present disclosure, performing obstacle detection on the image data to obtain a detection result of whether an obstacle exists and the corresponding obstacle type when it does includes: segmenting the image data based on gray-level thresholds of its pixels to obtain one or more candidate pixel regions; dilating the candidate pixel regions; performing region segmentation along the boundaries of the dilated candidate pixel regions, calculating the area of each resulting region, and removing noise regions whose areas do not fall within a preset threshold range to obtain the target pixel regions; and feeding the target pixel regions into a pre-trained obstacle detection model, which outputs whether an obstacle exists in each target pixel region and, if so, the corresponding obstacle type.
According to an embodiment of the present disclosure, performing obstacle detection on the image data to obtain a detection result of whether an obstacle exists and the corresponding obstacle type when it does may instead include: segmenting the image data based on gray-level thresholds of its pixels to obtain one or more candidate pixel regions; eliminating interfering pixels from each candidate pixel region according to at least one of the color-feature differences within the region and whether the region covers an off-road pixel area, obtaining the target pixel region after elimination; and feeding the target pixel region into a pre-trained obstacle detection model, which outputs whether an obstacle exists in the region and, if so, the corresponding obstacle type.
According to an embodiment of the present disclosure, the obstacle detection model is trained as follows: acquiring road-condition calibration data, comprising road-condition target areas obtained by object recognition on road-scene pictures together with the ground-truth obstacle type for each target area; using the ground truth as the training label, training the parameters of the obstacle detection model, whose input is the road-condition target area and whose output is a prediction of whether an obstacle exists in the area and, if one is predicted, its predicted obstacle type; training is considered complete when a set number of iterations is reached or when the loss function measuring the gap between the training label and the prediction falls below a set value.
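The two stopping conditions above (iteration cap, loss below a set value) can be illustrated with a toy NumPy training loop. This is not the patent's detection network; a logistic-regression stand-in with gradient descent is used purely to show the dual stopping criterion, and the parameter names are hypothetical.

```python
import numpy as np

def train(X, y, lr=0.5, max_iters=500, loss_stop=0.05):
    """Gradient-descent loop that stops when max_iters is reached OR the
    cross-entropy loss drops below loss_stop, mirroring the two stopping
    conditions in the text. X: (n, d) features, y: (n,) 0/1 labels."""
    w = np.zeros(X.shape[1])
    loss = np.inf
    for _ in range(max_iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))  # sigmoid predictions
        loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        if loss < loss_stop:  # second stopping condition: loss below set value
            break
        w -= lr * X.T @ (p - y) / len(y)  # gradient step
    return w, loss
```

In the patent's setting, the same control flow would wrap the training step of the obstacle detection model, with the loss comparing predicted obstacle types against the calibration labels.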
In a second aspect, embodiments of the present disclosure provide an on-board device comprising a data acquisition module, an obstacle detection module, a position calculation module, and a display module. The data acquisition module acquires, in real time, image data of the surrounding environment captured by a camera device of the vehicle. The obstacle detection module performs obstacle detection on the image data at each moment to obtain a detection result of whether an obstacle exists and, if so, the corresponding obstacle type. The position calculation module determines, when the detection result indicates that an obstacle exists, the three-dimensional position information of the obstacle from the image data, the parameters of the camera device, and the real-time position information of the vehicle. The display module renders the corresponding obstacle, according to the three-dimensional position information and the obstacle type, in a vehicle driving map loaded in real time on a display interface of the vehicle's on-board device.
In a third aspect, embodiments of the present disclosure provide a vehicle. The vehicle is configured to execute the vehicle visual presentation method described above, or comprises the on-board device described above.
Some technical solutions provided by the embodiments of the present disclosure have some or all of the following advantages:
image data of the surrounding environment is acquired in real time; obstacle detection is performed on the image data at each moment, and once an obstacle and its type are detected, the three-dimensional position information of the obstacle is determined; the corresponding obstacle is then rendered, according to its three-dimensional position information and type, in a vehicle driving map loaded in real time on the display interface of the vehicle's on-board device. This provides a general visual-presentation logic that can be applied to autonomous or conventional vehicles, is executed by the vehicle or an on-board unit, and is therefore portable. While the vehicle is driving, real-time three-dimensional position information is computed for each detected obstacle and its type, and a corresponding dynamic visual presentation is made in the vehicle driving map loaded in real time.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the related art are briefly introduced below; it will be apparent that other drawings can be obtained from these by those skilled in the art without inventive effort.
FIG. 1 schematically illustrates a flow chart of a method of vehicle visual presentation according to an embodiment of the present disclosure;
fig. 2A schematically illustrates a detailed implementation process diagram of step S120 according to an embodiment of the present disclosure;
fig. 2B schematically illustrates a detailed implementation process diagram of step S120 according to another embodiment of the disclosure;
fig. 3 schematically shows a detailed implementation flowchart of step S130 according to an embodiment of the present disclosure;
FIG. 4 schematically shows a detailed implementation flowchart of step S140 according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a detailed implementation flowchart for loading a vehicle travel map according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow chart of a method of vehicle visual presentation according to another embodiment of the present disclosure; and
fig. 7 schematically shows a block diagram of the structure of the vehicle-mounted device provided by the embodiment of the disclosure.
Detailed Description
The following was found during development: when faced with the problem of how to present real-time road conditions, most vehicles directly present the information acquired by the sensors; for example, the sensing data is visualized with the three-dimensional imaging tool rviz. However, rviz is a development tool: it serves debugging developers well, but it is too specialized, is unfriendly to non-technical users such as drivers or passengers, depends heavily on the Linux system, and lacks flexible portability.
In view of this, the embodiment of the disclosure provides a vehicle visual presentation method, an on-board device and a vehicle. The method comprises the following steps: acquiring image data of a surrounding environment acquired by a camera device of a vehicle in real time; detecting obstacles according to the image data at each moment to obtain the detection result of whether the obstacles exist and the type of the corresponding obstacles when the obstacles exist; when the detection result indicates that an obstacle exists, determining three-dimensional position information of the obstacle according to the image data, the parameters of the camera device and the real-time position information of the vehicle; and rendering the corresponding obstacle in a vehicle driving map loaded in real time on a display interface of the vehicle-mounted equipment of the vehicle according to the three-dimensional position information and the obstacle type.
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all, embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
A first exemplary embodiment of the present disclosure provides a method of vehicle visual presentation.
FIG. 1 schematically shows a flow chart of a method of vehicle vision presentation according to an embodiment of the present disclosure.
Referring to fig. 1, a method for presenting a vehicle vision according to an embodiment of the present disclosure includes the following steps: s110, S120, S130, and S140.
In step S110, image data of the surrounding environment acquired in real time by the camera device of the vehicle is acquired.
In one implementation scenario, while the vehicle is driving, a camera device on the vehicle captures time-series image data of the surrounding environment; for example, a road-condition video can be recorded by the camera device, and the video frame at each moment can be extracted to obtain the image data of the surrounding environment acquired at that moment.
In step S120, obstacle detection is performed on the image data at each time, and a detection result of whether an obstacle is present and a type of the obstacle corresponding to the presence of the obstacle is obtained.
For example, image segmentation may be performed by calling the image-processing services of OpenCV (an open-source library of programming functions aimed mainly at real-time computer vision), and the segmented image regions may be fed through TensorFlow (an open-source machine-learning platform) into an obstacle detection model to obtain the detection result. The obstacle detection model is trained on a large amount of labeled road-condition data.
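The data flow of step S120 can be sketched as below. This is an illustrative skeleton only: the real pipeline would call OpenCV for segmentation and a TensorFlow model for classification, so `segment` and `model` here are hypothetical stand-ins injected as callables.

```python
def detect_obstacles(frame, segment, model):
    """Step S120 skeleton: segment a frame into candidate regions, then run
    each region through the detection model.
    segment(frame) -> iterable of candidate regions;
    model(region)  -> (has_obstacle: bool, obstacle_type: str | None).
    Returns a list of (region, obstacle_type) for regions containing obstacles."""
    results = []
    for region in segment(frame):
        has_obstacle, obstacle_type = model(region)
        if has_obstacle:
            results.append((region, obstacle_type))
    return results
```

Keeping segmentation and classification behind callables makes the presentation logic independent of any particular vision library, which matches the portability goal stated in the disclosure.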
In step S130, when the detection result indicates that an obstacle exists, three-dimensional position information of the obstacle is determined based on the image data, the parameter of the camera, and the real-time position information of the vehicle.
The image data reflects the position information of the obstacles existing in the surrounding environment relative to the camera device, the relative position information of the obstacles in the world coordinate system of the vehicle can be determined based on the parameters of the camera device and the image data, and the three-dimensional position information of the obstacles is further determined by combining the real-time position information of the vehicle.
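Combining the camera-relative offset with the vehicle's real-time pose can be sketched as a coordinate transform. A minimal sketch follows, assuming a planar-yaw vehicle pose; the patent does not specify the pose representation, so the yaw-rotation model and function name are assumptions.

```python
import numpy as np

def to_world(vehicle_pos, vehicle_yaw, relative_xyz):
    """Map an obstacle's camera-relative offset into world coordinates:
    rotate by the vehicle heading (yaw about the z axis), then add the
    vehicle's real-time position."""
    c, s = np.cos(vehicle_yaw), np.sin(vehicle_yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])  # planar rotation about z
    return np.asarray(vehicle_pos, float) + R @ np.asarray(relative_xyz, float)
```

A full implementation would also account for the camera's mounting offset and the vehicle's pitch/roll, but the additive structure (vehicle pose plus rotated relative position) is the same.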
In step S140, a corresponding obstacle is rendered in a vehicle driving map loaded in real time on a display interface of the vehicle-mounted device of the vehicle according to the three-dimensional position information and the obstacle type.
The vehicle driving map loaded in real time on the display interface of the on-board device is a map area adapted to the current display parameters of the display interface; its starting position is the current real-time position of the vehicle, and its range covers part of the vehicle's planned path. Obstacles appearing in the real-time road conditions are rendered in this dynamically changing vehicle driving map according to the obstacle types obtained in step S120 and the three-dimensional position information obtained in step S130, with different graphics used for different obstacle types.
Based on steps S110 to S140, image data of the surrounding environment is acquired in real time; obstacle detection is performed on the image data at each moment, and once an obstacle and its type are detected, its three-dimensional position information is determined; the corresponding obstacle is then rendered, according to its three-dimensional position information and type, in a vehicle driving map loaded in real time on the display interface of the vehicle's on-board device. This provides a general visual-presentation logic that can be applied to autonomous or conventional vehicles, is executed by the vehicle or an on-board unit, and is portable. While the vehicle is driving, real-time three-dimensional position information is computed for each detected obstacle, and a corresponding dynamic visual presentation is made in the real-time vehicle driving map.
Fig. 2A schematically illustrates a detailed implementation process diagram of step S120 according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, as shown in fig. 2A, performing obstacle detection on the image data to obtain a detection result of whether an obstacle exists and a corresponding obstacle type when the obstacle exists includes the following steps: s211, S212, S213, and S214.
In step S211, the image data is subjected to image segmentation based on the gray level threshold of the pixel point, so as to obtain one or more candidate pixel regions.
In fig. 2A, the image data at the current time, denoted S_A, is used as an example. In practice S_A would be an actual road-condition image; for simplicity, only text boxes and labels are used to illustrate the distribution positions and ranges of the objects, and the specific object shapes are not drawn. The image data S_A comprises: a road, vehicle lines on both sides of the road, a vehicle ahead on the road, a plurality of trees on the right side of the road, a pedestrian to the front right of the vehicle, a pedestrian-crosswalk zebra crossing under the pedestrian's feet, and a street lamp partially overlapping the pedestrian; the trees, the right vehicle line, the street lamp, and the pedestrian are adjacent, with overlapping or crossing parts.
In the embodiment of the present disclosure, adaptive gray-level thresholds may be used to segment the image. For a given image to be divided, histogram analysis is performed on its local features (such as color and shape distribution) to obtain a gray-level threshold for each region, and these thresholds generally differ from region to region. For example, when the histogram clearly shows two peaks, the midpoint between the peaks may be selected as the optimal threshold.
In many cases the contrast between object and background varies across the image, making it difficult to separate them with a single uniform threshold. The image is therefore segmented with different thresholds according to local feature differences. In practice, multiple gray-level thresholds are set according to the obstacle types that may appear in the road scene, such as people, animals, and various types of vehicles; after analyzing the image, the threshold to apply to each region is determined, and within a region, pixels whose gray value is greater than or equal to the regional threshold are set to 1 while pixels below it are set to 0. In this embodiment, after the division ranges are obtained from the gray-level thresholds, either the corresponding crop of the original color image or the corresponding gray-scale crop may be used as the candidate pixel region.
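The bimodal-midpoint threshold and the 0/1 binarization described above can be sketched in NumPy (in practice `cv2.threshold` with `THRESH_OTSU` would be the usual choice). This is a simplified illustration: "the two peaks" are approximated here by the two most frequent gray levels, and the function names are hypothetical.

```python
import numpy as np

def bimodal_midpoint_threshold(gray):
    """Pick the midpoint of the two histogram peaks, as suggested for clearly
    bimodal histograms (simplified: the two most frequent gray levels)."""
    hist = np.bincount(np.asarray(gray).ravel(), minlength=256)
    p1 = int(np.argmax(hist))          # first peak
    hist2 = hist.copy()
    hist2[p1] = 0
    p2 = int(np.argmax(hist2))         # second peak
    return (p1 + p2) // 2

def threshold_region(gray, t):
    """Binarize a region: 1 where gray >= t (the regional threshold), else 0."""
    return (np.asarray(gray) >= t).astype(np.uint8)
```

For the per-region scheme in the text, `bimodal_midpoint_threshold` would be applied to each analyzed region separately, yielding a different `t` per region.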
Illustratively, after the image data SA is subjected to image segmentation based on the gray-level threshold of the pixel points, a plurality of candidate pixel regions are obtained, respectively: a candidate pixel region SA1 containing the pedestrian-crosswalk zebra crossing, a candidate pixel region SA2 containing the left vehicle line, a candidate pixel region SA3 containing the preceding vehicle, and a candidate pixel region SA4 containing the pedestrian, the right vehicle line, the street lamp, and the plurality of trees.
In step S212, dilation processing is performed on the candidate pixel regions.
Image dilation refers to expanding the edges of an image based on a preset processing algorithm, which helps clearly delineate target boundaries and smooth the fine parts of larger object boundaries. In fig. 2A, the frame of each image is drawn thicker to indicate its state after dilation; in practice, a structuring element is moved across the object image and an intersection or union is taken with the pixels at each position, making the boundary of the object image more distinct and smooth.
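A minimal numpy sketch of binary dilation with a 3x3 structuring element (in practice `cv2.dilate` would be used; this version just ORs each pixel's 8-neighbourhood):

```python
import numpy as np

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 structuring element: each output pixel
    becomes the OR of its 3x3 neighbourhood, thickening region boundaries."""
    out = mask.astype(bool)
    h, w = mask.shape
    for _ in range(iterations):
        padded = np.pad(out, 1)
        grown = np.zeros_like(out)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                grown |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        out = grown
    return out.astype(np.uint8)

m = np.zeros((5, 5), dtype=np.uint8)
m[2, 2] = 1
d = dilate(m)   # the single pixel grows into a 3x3 block
```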
In step S213, region segmentation is performed according to the boundaries of the dilated candidate pixel regions, the area of each segmented region is calculated, and noise regions whose areas do not fall within the preset threshold range are removed to obtain the target pixel regions.
Because the one or more candidate pixel regions obtained by image segmentation may contain, besides obstacles, noise pixels that interfere with subsequent detection and recognition, the candidate pixel regions are dilated, region segmentation is performed again according to the boundaries of the dilated regions, and the re-segmented noise regions whose areas do not fall within the preset threshold range are removed. The resulting target pixel regions then contain fewer interference pixels, which improves the subsequent detection accuracy of the obstacle detection model. For example, as shown in fig. 2A, after region segmentation is performed again on the boundaries of the dilated candidate pixel regions, the candidate pixel regions SA1, SA2, and SA3 remain unchanged, while the candidate pixel region SA4 containing the pedestrian, the right vehicle line, the street lamp, and the plurality of trees is divided into: a region SA4-1 containing the pedestrian and the part of the right vehicle line intersecting the pedestrian, a region SA4-2 containing the street lamp, and a region SA4-3 containing the plurality of trees. The preset threshold range may be formed from calibration values corresponding to known obstacles (e.g., trees, various types of vehicles, pedestrians, etc.), and regions whose areas do not fit this range are determined to be noise regions; here, for example, regions SA2, SA4-2, and SA4-3 are noise regions, and the remaining regions SA1, SA3, and SA4-1 are the target pixel regions.
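The re-segmentation plus area filtering step can be sketched as connected-component labeling followed by an area check (pure Python BFS here; `cv2.connectedComponentsWithStats` would do the same job, and the area bounds are illustrative):

```python
import numpy as np
from collections import deque

def filter_regions_by_area(mask, min_area, max_area):
    """Label 4-connected regions in a binary mask and keep only those whose
    pixel area lies inside the preset threshold range; the rest are
    discarded as noise regions."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    kept = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                q, region = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    region.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if min_area <= len(region) <= max_area:
                    kept.append(region)
    return kept

m = np.zeros((10, 10), dtype=np.uint8)
m[1:4, 1:4] = 1   # 9-pixel region, plausible obstacle size: kept
m[7, 7] = 1       # 1-pixel speck: removed as a noise region
regions = filter_regions_by_area(m, min_area=4, max_area=50)
```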
In step S214, the target pixel area is input into a pre-trained obstacle detection model, and a detection result of whether an obstacle exists in the target pixel area and a corresponding obstacle type if an obstacle exists is output.
Referring to fig. 2A, the regions SA1, SA3, and SA4-1 are respectively input into the pre-trained obstacle detection model, which outputs the respective detection results for regions SA1, SA3, and SA4-1.
According to an embodiment of the present disclosure, the obstacle detection model is obtained by training in the following manner. Road-condition calibration data are acquired, including: road-condition target regions obtained after object recognition is performed on road-condition scene pictures, and the ground-truth obstacle type corresponding to each road-condition target region. With the ground-truth results as training labels, the parameters of the obstacle detection model to be trained are trained, where the input of the model is a road-condition target region and the output is a prediction result of whether an obstacle exists in the region and, when an obstacle is predicted to exist, the corresponding predicted obstacle type. Training is considered complete when a set number of iterations is reached or when the loss function characterizing the gap between the training labels and the prediction results falls below a set value.
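The training loop, with its dual stopping condition (fixed iteration count or loss below a set value), can be illustrated with a tiny numpy logistic-regression stand-in; the real model would be a TensorFlow network, and the 2-D features and class labels below are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy calibration data: 2-D feature vectors standing in for target regions;
# label 0 = "no obstacle", label 1 = "pedestrian" (assumed classes)
X = np.vstack([rng.normal(-1.0, 0.3, (50, 2)), rng.normal(1.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
for step in range(1000):                        # stop at a set iteration count...
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # sigmoid prediction
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    if loss < 0.05:                             # ...or when the loss is low enough
        break
    grad = p - y                                # cross-entropy gradient
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
```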
The above steps S211 to S214 may be implemented in the vehicle-mounted device by executing corresponding processing logic based on OpenCV and TensorFlow.
Fig. 2B schematically shows a detailed implementation process diagram of step S120 according to another embodiment of the disclosure.
According to another embodiment of the present disclosure, as shown in fig. 2B, step S120, performing obstacle detection on the image data to obtain a detection result of whether an obstacle exists and the corresponding obstacle type when one exists, includes the following steps: S221, S222, and S223.
In step S221, image segmentation is performed on the image data based on the gray level threshold of the pixel point to obtain one or more candidate pixel regions.
The detailed implementation process in step S221 may refer to the detailed description of step S211, which is not described herein again.
In step S222, interference pixels in a candidate pixel region are removed according to at least one of the difference of color features within the candidate pixel region and whether the candidate pixel region covers a pixel region outside the road, yielding a target pixel region with interference pixels removed.
When the candidate pixel region covers a pixel region outside the road, the pixels in the off-road region are removed as interference pixels. For example, referring to fig. 2B, the tree SA4-5 and the street lamp SA4-6 in the candidate pixel region SA4 are interference pixels in the off-road pixel region and are removed, and the remaining part is the target pixel region SA4-4. When different color features exist within a candidate pixel region, the pixels of the color feature occupying the smaller pixel area are removed as interference pixels. For example, in another image, a pedestrian and a tree have a small overlapping region that is assigned to the candidate pixel region of the pedestrian; for that overlapping region, the pixels corresponding to the tree can be removed according to the difference in color features.
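Both removal rules can be sketched as boolean mask operations (the masks, label values, and function name are illustrative; a real system would derive the road mask and color labels from segmentation):

```python
import numpy as np

def remove_interference(candidate_mask, road_mask, color_labels):
    """Remove interference pixels from a candidate region: first drop pixels
    lying outside the road, then drop the minority color feature inside
    what remains (the feature occupying the smaller pixel area)."""
    kept = candidate_mask & road_mask                 # off-road pixels removed
    vals, counts = np.unique(color_labels[kept], return_counts=True)
    if len(vals) > 1:
        minority = vals[np.argmin(counts)]
        kept = kept & (color_labels != minority)      # minority-color pixels removed
    return kept

cand = np.ones((4, 6), dtype=bool)              # candidate pixel region
road = np.zeros((4, 6), dtype=bool)
road[:, :4] = True                              # columns 4-5 lie outside the road
color_labels = np.zeros((4, 6), dtype=np.uint8) # 0 = pedestrian color
color_labels[0, 0] = 1                          # 1 = overlapping tree color
target = remove_interference(cand, road, color_labels)
```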
In step S223, the target pixel area is input into a pre-trained obstacle detection model, and a detection result of whether an obstacle exists in the target pixel area and a corresponding obstacle type if an obstacle exists is output.
The detailed process of step S223 and the training process of the obstacle detection model may refer to the description of step S214, which is not repeated here.
Fig. 3 schematically shows a detailed implementation flowchart of step S130 according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, as shown in fig. 3, the step S130 of determining the three-dimensional position information of the obstacle based on the image data, the parameter of the image capturing device, and the real-time position information of the vehicle includes the steps of: s310 and S320.
In step S310, the relative position information of the obstacle in the world coordinate system where the vehicle is located is determined based on the image data and the parameter of the imaging device.
In one embodiment, the image capturing device is a binocular camera device, and its parameters include: the focal length and camera center distance of the two cameras after calibration and stereo rectification, the disparity value between the two cameras, and the calibrated offset parameters between the coordinate systems of the left and right image planes of the two cameras and the origin of the world coordinate system.
Determining the relative position information of an obstacle in the world coordinate system in which the vehicle is located, based on the image data and the parameters of the imaging device, includes: determining the coordinates of the matched image points of the two cameras of the binocular device for the same obstacle; determining the projection matrices of the two cameras from their respective calibrated parameters; constructing transformation equations between the world coordinate system and the image-plane coordinate systems from the projection matrices and the matched image-point coordinates; and solving the simultaneous transformation equations of the two cameras by least squares to obtain the three-dimensional coordinates of the obstacle in the world coordinate system, which constitute the relative position information.
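The least-squares solve over the stacked transformation equations is the classic linear (DLT) triangulation; a self-contained numpy sketch under assumed camera parameters (focal length, baseline, and principal point are made up for illustration; OpenCV's `cv2.triangulatePoints` implements the same idea):

```python
import numpy as np

def triangulate(P_left, P_right, uv_left, uv_right):
    """Linear triangulation: each camera contributes two equations of the
    form u*(P_row3 . X) - (P_row1 . X) = 0; the stacked homogeneous system
    is solved in the least-squares sense via SVD."""
    rows = []
    for P, (u, v) in ((P_left, uv_left), (P_right, uv_right)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A)      # null vector = homogeneous world point
    X = vt[-1]
    return X[:3] / X[3]

f, baseline = 800.0, 0.5             # assumed focal length (px) and camera spacing (m)
K = np.array([[f, 0, 320.0], [0, f, 240.0], [0, 0, 1.0]])
P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_r = K @ np.hstack([np.eye(3), np.array([[-baseline], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

point = np.array([1.0, 0.5, 10.0])   # ground-truth obstacle position
est = triangulate(P_l, P_r, project(P_l, point), project(P_r, point))
```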
In step S320, the three-dimensional position information of the obstacle is determined based on the real-time position information of the vehicle and the relative position information.
The real-time position information of the vehicle can be acquired in real time through communication between the vehicle-mounted unit and a Global Navigation Satellite System (GNSS).
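Combining the GNSS-derived vehicle position with the camera-relative offset reduces, in the planar case, to rotating the offset by the vehicle heading and adding the vehicle position. A sketch under that simplifying assumption (function name and the local ENU frame are illustrative; production systems would work in a projected geodetic frame such as UTM):

```python
import math

def obstacle_world_position(vehicle_xy, heading_rad, relative_xy):
    """Rotate the vehicle-frame offset of the obstacle by the vehicle
    heading, then translate by the GNSS-derived vehicle position."""
    rx, ry = relative_xy
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    return (vehicle_xy[0] + c * rx - s * ry,
            vehicle_xy[1] + s * rx + c * ry)

# vehicle at (100, 200) in a local planar frame, heading along +x;
# the obstacle is 10 m directly ahead in the vehicle frame
pos = obstacle_world_position((100.0, 200.0), 0.0, (10.0, 0.0))
```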
Fig. 4 schematically shows a detailed implementation flowchart of step S140 according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, as shown in fig. 4, step S140, rendering the corresponding obstacle in the vehicle driving map loaded in real time on the display interface of the vehicle-mounted device according to the three-dimensional position information and the obstacle type, includes the following steps: S410, S420, S431, and S432.
In step S410, a coverage position and a coverage area of a primitive to be rendered corresponding to the obstacle in the vehicle driving map are calculated according to the three-dimensional position information of the obstacle and the type of the obstacle.
In step S420, it is determined whether a first target primitive located within the display range of the display interface and a second target primitive located outside the display range of the display interface exist in the primitives to be rendered according to the coverage position and the coverage area.
In step S431, when the first target primitive exists in the primitives to be rendered, rendering the first target primitive in a 3D manner on the display interface.
In step S432, when the second target primitive exists among the primitives to be rendered, it is stored in a list to be rendered, and its storage state in the list is refreshed according to the real-time display range of the display interface; when the second target primitive falls within the real-time display range, it is deleted from the list to be rendered and output to the display interface for rendering.
For example, suppose the primitive to be rendered for a pedestrian captured at a certain moment lies at an actual distance of 15 meters from the vehicle's current position. In a certain scene, for example when the vehicle is on a street dense with traffic lights, the user (the driver of an ordinary vehicle or a developer of an autonomous vehicle) adjusts the current display parameters of the display interface to show only the interval within 10 meters of the current position, i.e., the real-time display range is the road segment within 10 meters ahead of the current position on the planned path. The pedestrian then belongs to the second target primitives at the current moment.
When one or more of the primitives to be rendered are second target primitives located outside the display range of the display interface, they are stored in the list to be rendered, and each second target primitive in the list is read and its storage state updated as the real-time display range is updated. In this way, a second target primitive excluded at the previous moment is still rendered, without omission, once it falls into the current display range of the display interface loaded in real time.
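The split between immediately rendered primitives and the pending list, refreshed as the display range changes, can be sketched as follows (names, the distance-only range model, and the tuples are illustrative):

```python
def refresh_render_lists(primitives, display_range):
    """Split primitives into those rendered now (inside the real-time
    display range) and those held in the pending list; called again each
    time the display range is refreshed so nothing is missed."""
    lo, hi = display_range
    to_render, pending = [], []
    for name, distance in primitives:
        (to_render if lo <= distance <= hi else pending).append((name, distance))
    return to_render, pending

prims = [("pedestrian", 15.0), ("car", 6.0)]
drawn, queued = refresh_render_lists(prims, (0.0, 10.0))     # 10 m display range
drawn2, queued2 = refresh_render_lists(queued, (0.0, 20.0))  # range widened: pedestrian now drawn
```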
According to another embodiment of the present disclosure, the step S140 further includes, in addition to the steps S410 to S432, the following steps:
determining, according to the coverage position and the coverage area, whether a third target primitive exists among the primitives to be rendered, part of which is located within the display range of the display interface and part of which is located outside the display range;
when the third target primitive exists among the primitives to be rendered, rendering in the vehicle driving map the part of the third target primitive that is located within the display range of the display interface; or, when the third target primitive exists, reducing the coverage area of the third target primitive proportionally until its entire area can be located within the display range of the display interface, and rendering the reduced third target primitive in the vehicle driving map.
In one display strategy provided by the present disclosure, when a third target primitive exists, it is located near the display edge; the part of the third target primitive inside the display interface is rendered, while the part outside is clipped and not rendered, which helps reduce the corresponding hardware (GPU) consumption of rendering. In another display strategy, the coverage area of the primitive to be rendered is reduced proportionally until it can be located entirely within the display interface, so that the obstacle is presented at every moment, particularly at the initial moment, to prompt the user; this approach can be adopted for scenes in which a driver is driving the vehicle.
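The two strategies amount to rectangle intersection versus uniform scaling; a sketch with axis-aligned bounding boxes (coordinates and the top-left scaling anchor are illustrative assumptions):

```python
def clip_primitive(bbox, view):
    """Strategy 1: render only the intersection of the primitive's box
    with the view; the part outside is cut and not rendered."""
    x0, y0 = max(bbox[0], view[0]), max(bbox[1], view[1])
    x1, y1 = min(bbox[2], view[2]), min(bbox[3], view[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

def shrink_primitive(bbox, view):
    """Strategy 2: scale the primitive down uniformly about its top-left
    corner until its whole area fits inside the view."""
    w, h = bbox[2] - bbox[0], bbox[3] - bbox[1]
    scale = min(1.0, (view[2] - bbox[0]) / w, (view[3] - bbox[1]) / h)
    return (bbox[0], bbox[1], bbox[0] + w * scale, bbox[1] + h * scale)

view = (0, 0, 100, 100)
partly_out = (80, 40, 120, 60)              # sticks out past the right edge
clipped = clip_primitive(partly_out, view)  # keeps only the visible part
shrunk = shrink_primitive(partly_out, view) # scaled by 0.5 so it fits entirely
```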
Fig. 5 schematically shows a detailed implementation flowchart of loading a vehicle travel map according to an embodiment of the present disclosure.
On the basis of the above embodiments, the vehicle visual presentation method further includes: loading a vehicle driving map in real time on the display interface of the vehicle-mounted device of the vehicle. Referring to fig. 5, loading the vehicle driving map in real time includes the following steps: S510, S520, S530, and S540.
In step S510, a planned route of the vehicle is acquired.
For an autonomous vehicle or a general vehicle, a planned path of the vehicle may be obtained from a cloud server providing navigation service for the vehicle or a navigation application of the vehicle.
In step S520, the map data to be displayed from the current driving position of the vehicle to the subsequent planned route is determined.
As the vehicle travels in real time along the planned section between the start and end points of the planned path, the map data to be displayed from the current driving position to the subsequent planned path is determined. This may be the map data covering the range from the current driving position to the end of the planned path, or map data within one of several different distance intervals from the current driving position, such as within 5 meters, 20 meters, 200 meters, 400 meters, or 1000 meters of the current driving position, and so on.
In step S530, a target map section adapted to a current display parameter in a display interface of the vehicle-mounted device is calculated according to the map data, where the current display parameter is obtained by user pre-configuration or real-time update.
For example, if the map data adapted to the current display parameters of the display interface is that within 200 meters of the current driving position, then the range from the current position to 200 meters along the planned path is the target map section. In other embodiments, different current display parameters set by the user yield different target map sections loaded in real time by the display interface, for example the range from the current position to 20 meters along the planned path. The current display parameters may include, for example, the length, width, and height of the vehicle-mounted device's display screen, its pixel density, the viewing-angle height set by the current user, and so on; the rendering range of the map is adjusted dynamically in combination with this parameter information.
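Selecting a target map section from the configured viewing range can be sketched as snapping to the nearest preset distance interval (the parameter dictionary, key name, and interval list are illustrative assumptions, not the patent's data model):

```python
def target_map_interval(current_pos_m, display_params,
                        intervals=(5, 20, 200, 400, 1000)):
    """Pick the smallest preset distance interval that covers the
    user-configured viewing range, and return the [start, end] stretch of
    the planned path to load (positions as distances along the path)."""
    view_range = display_params.get("view_range_m", 200)
    chosen = next((d for d in intervals if d >= view_range), intervals[-1])
    return current_pos_m, current_pos_m + chosen

params = {"view_range_m": 180}                    # user-configured viewing distance
interval = target_map_interval(1500.0, params)    # snaps up to the 200 m interval
```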
In step S540, data corresponding to the target map section is loaded in real time on the display interface to obtain the vehicle driving map.
Based on the steps S510 to S540, the vehicle driving map can be dynamically loaded during the vehicle driving process, and the target map section corresponding to the displayed vehicle driving map can be changed according to the current display parameters set or updated by the user, which is helpful for meeting the personalized requirements of the user and improving the flexibility of displaying the vehicle driving map.
In the embodiment including steps S510 to S540 and S410 to S432, on the premise that the user adjusts the vehicle display parameters in real time, the corresponding obstacle can be rendered accurately, so that the dynamically loaded vehicle driving map in the current display interface is adapted to the obstacle, and omission is avoided.
FIG. 6 schematically shows a flow chart of a method of vehicle visual presentation according to another embodiment of the present disclosure.
In some embodiments of the present disclosure, the vehicle visual presentation method includes, in addition to steps S110, S120, S130, and S140, the steps S610, S620, and S630; in a specific implementation, step S140 includes a step S140a, as shown in fig. 6.
In step S610, when the detection result indicates that an obstacle exists, the feature of interest corresponding to the obstacle is extracted from the image data according to the preset feature of interest for each obstacle type and the type of the existing obstacle. The preset features of interest are used to characterize identity differences among obstacles of the same type.
For example, taking a vehicle as an example of a type of obstacle, the preset attention feature of the vehicle type may be a license plate number.
In step S620, the features of interest extracted from obstacles of the same type in the image data at two successive moments are compared for similarity, yielding an identity recognition result of whether the same obstacle is present in the image data at both moments.
In step S630, according to the identity recognition result, obstacles of the same type are assigned the same type identifier and respectively different identity identifiers, and obstacles of different types are assigned different type identifiers.
In step S140, rendering the corresponding obstacle in the vehicle driving map loaded in real time on the display interface of the vehicle-mounted device according to the three-dimensional position information and the obstacle type includes the following step S140a: rendering the obstacle in the vehicle driving map loaded in real time on the display interface of the vehicle-mounted device of the vehicle according to the type identifier, the identity identifier, and the three-dimensional position information of the obstacle.
According to an embodiment of the present disclosure, when the type identifiers and identity identifiers of the obstacles at two successive moments are the same, the obstacles at the two moments are rendered in the vehicle driving map with the same 3D graphic. When the type identifiers at the two moments differ, the obstacles are rendered with differentiated 3D graphics that differ at least in shape. When the type identifiers are the same but the identity identifiers differ, the obstacles are rendered with the same 3D graphic but differentiated rendering effects, the differentiated rendering effects serving to visually distinguish the same 3D graphic.
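The three rendering cases reduce to a small decision rule on the type and identity identifiers; a sketch with assumed identifier names (a license-plate-style identity, per the earlier example):

```python
def pick_render_style(prev, curr):
    """Decide how to render the obstacle at the later moment relative to
    the earlier one: same type + same identity -> identical 3D graphic;
    different type -> different shapes; same type but different identity
    -> same shape with a differentiated visual effect."""
    if curr["type_id"] != prev["type_id"]:
        return "different_shape"
    if curr["identity_id"] == prev["identity_id"]:
        return "same_graphic"
    return "same_graphic_differentiated"

a = {"type_id": "vehicle", "identity_id": "plate-123"}
b = {"type_id": "vehicle", "identity_id": "plate-123"}   # same car re-detected
c = {"type_id": "vehicle", "identity_id": "plate-456"}   # different car, same type
d = {"type_id": "pedestrian", "identity_id": "p-1"}      # different obstacle type
```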
A second exemplary embodiment of the present disclosure provides an in-vehicle apparatus.
Fig. 7 schematically shows a block diagram of a vehicle-mounted device provided in an embodiment of the present disclosure.
Referring to fig. 7, an in-vehicle device 700 provided in an embodiment of the present disclosure includes: a data acquisition module 701, an obstacle detection module 702, a position calculation module 703 and a display module 704.
The data acquiring module 701 is configured to acquire image data of a surrounding environment acquired by a camera of a vehicle in real time.
The obstacle detecting module 702 is configured to perform obstacle detection on the image data at each time to obtain a detection result of whether an obstacle exists and a type of the obstacle corresponding to the obstacle when the obstacle exists. The obstacle detecting module 702 includes various functional modules or functional sub-modules for implementing the step S120.
The position calculation module 703 is configured to determine three-dimensional position information of the obstacle according to the image data, the parameter of the camera, and the real-time position information of the vehicle when the detection result indicates that the obstacle exists. The position calculation module 703 includes a functional module or a sub-module for implementing the step S130.
The display module 704 is configured to render a corresponding obstacle in a vehicle driving map loaded in real time on a display interface of the vehicle-mounted device of the vehicle according to the three-dimensional position information and the obstacle type. The display module 704 includes a functional module or a sub-module for implementing the step S140.
According to an embodiment of the present disclosure, the vehicle-mounted device 700 further includes: the system comprises an attention feature extraction module, an identity recognition module and an obstacle mark distribution module.
The attention feature extraction module is used for extracting corresponding attention features of the obstacles in the image data according to preset attention features aiming at each obstacle type and the type of the existing obstacles under the condition that the detection result indicates that the obstacles exist. The preset attention features are used for representing identity differences of similar obstacles.
The identity recognition module is used for carrying out similarity comparison on the attention features extracted from similar obstacles in the image data at the two moments before and after to obtain the identity recognition result of whether the same obstacle exists in the image data at the two moments before and after.
The obstacle mark distribution module is used for distributing the same type marks and respectively different identity marks for the obstacles of the same type and distributing different type marks for the obstacles of different types according to the identity recognition result.
In an embodiment of the present disclosure, the above-mentioned vehicle-mounted device may be a display device built into the vehicle, or a display device located in the vehicle but independent of it, for example a mobile device held by the user, such as a smartphone, tablet computer, notebook computer, or smartwatch. The vehicle-mounted device carries a map display application that includes program instructions or functional modules implementing the above method.
A third exemplary embodiment of the present disclosure also provides a vehicle. The vehicle is used for executing the vehicle visual presentation method or comprises the vehicle-mounted device. The vehicle is a normal vehicle or an autonomous vehicle.
Any number of the modules included in the vehicle-mounted device may be combined into one module to be implemented, or any one of the modules may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. At least one of the modules included in the vehicle-mounted device may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or may be implemented by any one of three implementations of software, hardware, and firmware, or any suitable combination of any of them. Alternatively, at least one of the modules comprised by the above-mentioned on-board device may be at least partly implemented as a computer program module, which, when executed, may perform a corresponding function.
It is noted that, in this document, relational terms such as "first" and "second," and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The previous description is only for the purpose of describing particular embodiments of the present disclosure, so as to enable those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (14)

1. A method of vehicle visual presentation, comprising:
acquiring image data of a surrounding environment acquired by a camera device of a vehicle in real time;
detecting obstacles according to the image data at each moment to obtain the detection result of whether the obstacles exist and the type of the corresponding obstacles when the obstacles exist;
when the detection result indicates that an obstacle exists, determining three-dimensional position information of the obstacle according to the image data, the parameters of the camera device and the real-time position information of the vehicle; and
and rendering the corresponding obstacle in a vehicle driving map loaded in real time on a display interface of vehicle-mounted equipment of the vehicle according to the three-dimensional position information and the obstacle type.
2. The method according to claim 1, wherein rendering the corresponding obstacle in a vehicle driving map loaded in real time on a display interface of an on-board device of the vehicle according to the three-dimensional position information and the obstacle type comprises:
according to the three-dimensional position information of the obstacle and the type of the obstacle, calculating the coverage position and the coverage area of a graphic element to be rendered corresponding to the obstacle in the vehicle driving map;
Determining whether a first target primitive located within the display range of the display interface and a second target primitive located outside the display range of the display interface exist in the primitives to be rendered according to the coverage position and the coverage area;
when the first target primitive exists in the primitives to be rendered, rendering the first target primitive in a 3D mode on the display interface;
when the second target primitive exists in the primitives to be rendered, storing the second target primitive in a list to be rendered, refreshing the storage state of the second target primitive in the list to be rendered according to the real-time display range of the display interface, and when the second target primitive is located in the real-time display range, deleting the second target primitive from the list to be rendered and outputting the second target primitive to the display interface for rendering.
3. The method according to claim 1 or 2, characterized in that the vehicle driving map is loaded by:
acquiring a planned path of the vehicle;
determining map data to be displayed from the current driving position of the vehicle to a subsequent planned path;
calculating a target map interval adapted to current display parameters in a display interface of the vehicle-mounted equipment according to the map data, wherein the current display parameters are obtained by user pre-configuration or real-time update; and
And loading data corresponding to the target map interval in real time on the display interface to obtain the vehicle driving map.
4. The method of claim 1 or 2, further comprising:
when the detection result represents that the obstacle exists, extracting corresponding attention features of the obstacle in the image data according to preset attention features aiming at each obstacle type and the obstacle type corresponding to the existing obstacle; the preset concern features are used for representing identity differences of similar obstacles;
carrying out similarity comparison on the attention features extracted from obstacles of the same type in the image data at two successive moments, to obtain an identity recognition result of whether the same obstacle exists in the image data at the two moments;
according to the identity recognition result, allocating the same type identification and the respectively different identity identification for the same type of obstacles, and allocating different type identifications for different types of obstacles;
wherein the rendering of the corresponding obstacle in the vehicle driving map loaded in real time on the display interface of the vehicle-mounted device of the vehicle according to the three-dimensional position information and the obstacle type comprises:
rendering the obstacles in the vehicle driving map loaded in real time on the display interface of the vehicle-mounted equipment of the vehicle according to the type identifiers, the identity identifiers and the three-dimensional position information of the obstacles.
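A minimal sketch of the identity-matching step, assuming attention features are vectors compared by cosine similarity with a fixed threshold (the claim names neither the metric nor the threshold; the dictionary layout and function names are illustrative):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def assign_identities(prev_obstacles, curr_obstacles, threshold=0.9):
    """Match each current obstacle against previous obstacles of the same
    type by feature similarity; a match above the threshold keeps the old
    identity identifier, otherwise a new identity is allocated."""
    next_identity = max((o["identity"] for o in prev_obstacles), default=-1) + 1
    result = []
    for obs in curr_obstacles:
        best, best_sim = None, threshold
        for old in prev_obstacles:
            if old["type"] != obs["type"]:
                continue  # only same-type obstacles are compared
            sim = cosine_similarity(old["feat"], obs["feat"])
            if sim >= best_sim:
                best, best_sim = old, sim
        if best is not None:
            result.append({**obs, "identity": best["identity"]})
        else:
            result.append({**obs, "identity": next_identity})
            next_identity += 1
    return result
```

The type string doubles as the type identifier here; a real implementation would use the identifiers assigned by the detector.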
5. The method of claim 4, wherein:
when the type identifiers and the identity identifiers of the obstacles at two successive moments are the same, the obstacles at the two moments are rendered in the vehicle driving map with the same 3D graphic;
when the type identifiers of the obstacles at the two moments are different, the obstacles at the two moments are rendered in the vehicle driving map with differentiated 3D graphics, the differentiated 3D graphics differing at least in shape; and
when the type identifiers of the obstacles at the two moments are the same but their identity identifiers are different, the obstacles at the two moments are rendered in the vehicle driving map with the same 3D graphic but differentiated rendering effects, the differentiated rendering effects being used to visually distinguish the same 3D graphic.
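The three cases of claim 5 reduce to a small decision function; the return labels below are placeholders for whatever graphic assets and effects an implementation actually uses.

```python
def choose_rendering(prev_obstacle, curr_obstacle):
    """Map claim 5's three cases to a rendering decision.
    Each argument is a (type_id, identity_id) tuple; returns
    (graphic, effect), where a non-None effect marks a
    differentiated rendering of the same 3D graphic."""
    (prev_type, prev_identity) = prev_obstacle
    (curr_type, curr_identity) = curr_obstacle
    if prev_type != curr_type:
        return ("different_3d_graphic", None)      # shapes differ at least
    if prev_identity == curr_identity:
        return ("same_3d_graphic", None)           # same obstacle, same look
    return ("same_3d_graphic", "distinct_effect")  # same type, new identity
```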
6. The method of claim 2, wherein rendering the corresponding obstacle in a vehicle driving map loaded in real time on a display interface of an on-board device of the vehicle according to the three-dimensional position information and the obstacle type, further comprises:
determining, according to the covering position and the covering area, whether a third target primitive exists among the primitives to be rendered, of which one part is located within the display range of the display interface and another part is located outside the display range of the display interface;
when the third target primitive exists among the primitives to be rendered, rendering, in the vehicle driving map, the primitive area located within the display range of the display interface; or,
when the third target primitive exists among the primitives to be rendered, proportionally reducing the covering area of the third target primitive until the whole area of the third target primitive is located within the display range of the display interface, and rendering the reduced third target primitive in the vehicle driving map.
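The proportional-reduction branch can be expressed as a single scale factor, assuming the covering position is the primitive's top-left corner and the display range starts at the origin (an assumption the claim does not make explicit):

```python
def shrink_to_fit(cover_x, cover_y, cover_w, cover_h, disp_w, disp_h):
    """Proportionally reduce a primitive's covering area, anchored at its
    covering position (top-left corner), until the whole area fits inside
    the display range [0, disp_w] x [0, disp_h]."""
    scale = min(1.0,
                (disp_w - cover_x) / cover_w,   # fit horizontally
                (disp_h - cover_y) / cover_h)   # fit vertically
    return cover_w * scale, cover_h * scale
```

Taking the minimum of the two axis ratios keeps the aspect ratio, which is what "reducing in equal proportion" requires.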
7. The method of claim 1, wherein determining three-dimensional position information of the obstacle based on the image data, parameters of the camera, and real-time position information of the vehicle comprises:
determining relative position information of an obstacle in a world coordinate system where the vehicle is located according to the image data and the parameters of the camera device; and
determining the three-dimensional position information of the obstacle according to the real-time position information of the vehicle and the relative position information.
8. The method of claim 7, wherein the camera is a binocular camera;
determining the relative position information of the obstacle in the world coordinate system of the vehicle according to the image data and the parameters of the camera device, and the method comprises the following steps:
determining the coordinates of matched image points of the same obstacle in the two cameras of the binocular camera device;
determining projection matrixes corresponding to the two cameras according to the calibrated parameters of the two cameras respectively;
constructing a conversion equation between a world coordinate system and an image plane coordinate system according to the projection matrix and the coordinates of the matched image points;
and solving the conversion equations of the two cameras simultaneously by least squares to obtain the three-dimensional coordinates of the obstacle in the world coordinate system, the three-dimensional coordinates being the relative position information.
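The simultaneous least-squares solution described in claim 8 is the standard linear (DLT) triangulation; a sketch under the assumption that each camera's calibrated parameters are given as a 3x4 projection matrix:

```python
import numpy as np

def triangulate(P_left, P_right, pt_left, pt_right):
    """Linear triangulation: for each camera with 3x4 projection matrix P
    and image point (u, v), the projection yields two homogeneous
    equations, u*(P[2] @ X) - P[0] @ X = 0 and v*(P[2] @ X) - P[1] @ X = 0.
    The four stacked equations from the two cameras are solved jointly
    in the least-squares sense via SVD (right singular vector of the
    smallest singular value)."""
    rows = []
    for P, (u, v) in ((P_left, pt_left), (P_right, pt_right)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> 3D world coordinates
```

With noisy matched points the system has no exact solution, and the SVD step returns the least-squares minimizer, matching the claim's "least square solution".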
9. The method of claim 8, wherein the parameters of the camera device comprise: the focal lengths and the center distance of the two cameras of the binocular camera device after calibration and stereo rectification, the disparity value between the two cameras, and the offset parameters between the origins of the left and right image plane coordinate systems corresponding to the two cameras and the origin of the world coordinate system after calibration and stereo rectification.
10. The method according to claim 1, wherein the performing obstacle detection on the image data to obtain a detection result of whether an obstacle exists and a corresponding obstacle type when the obstacle exists comprises:
performing image segmentation on the image data based on gray level threshold values of pixel points to obtain one or more candidate pixel areas;
performing dilation (expansion) processing on the candidate pixel regions;
performing region segmentation according to the boundaries of the dilated candidate pixel regions, calculating the area of each segmented region, and removing noise regions whose areas fall outside a preset threshold range, to obtain the target pixel region;
and inputting the target pixel area into a pre-trained obstacle detection model, and outputting a detection result of whether an obstacle exists in the target pixel area or not and a corresponding obstacle type when the obstacle exists.
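The thresholding, dilation, and area-filtering steps of claim 10 can be sketched with plain NumPy (a production system would use an image-processing library; the border handling in `dilate` is simplified):

```python
import numpy as np

def segment_by_threshold(gray, thresh):
    """Step 1: binarize the image by a grayscale threshold."""
    return (gray >= thresh).astype(np.uint8)

def dilate(mask):
    """Step 2: 3x3 binary dilation via shifted ORs (borders wrap here,
    which a production version would handle with padding instead)."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def filter_by_area(mask, min_area, max_area):
    """Step 3: label 4-connected regions and keep only those whose pixel
    area lies within the preset threshold range, removing noise regions."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    kept = np.zeros_like(mask)
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                seen[sy, sx] = True
                stack, region = [(sy, sx)], []
                while stack:  # depth-first flood fill of one region
                    y, x = stack.pop()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if min_area <= len(region) <= max_area:
                    for y, x in region:
                        kept[y, x] = 1
    return kept
```

The surviving target pixel regions are what the claim then feeds to the pre-trained obstacle detection model.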
11. The method of claim 1, wherein performing obstacle detection on the image data to obtain a detection result of whether an obstacle exists and a corresponding obstacle type when the obstacle exists comprises:
carrying out image segmentation on the image data based on a gray threshold of a pixel point to obtain one or more candidate pixel regions;
eliminating interference pixels in the candidate pixel region according to at least one of a difference of the corresponding initial color features within the candidate pixel region and whether the candidate pixel region covers an off-road pixel area, to obtain the target pixel region after the interference pixels are eliminated;
and inputting the target pixel area into a pre-trained obstacle detection model, and outputting a detection result of whether an obstacle exists in the target pixel area or not and a corresponding obstacle type when the obstacle exists.
12. The method according to claim 10 or 11, wherein the obstacle detection model is trained by:
acquiring road condition calibration data, wherein the road condition calibration data comprises: a road condition target area obtained by performing object recognition on a road condition scene picture, and the ground-truth result of the obstacle type corresponding to the road condition target area;
training the parameters of the obstacle detection model to be trained with the ground-truth result as the training label, wherein the input of the obstacle detection model to be trained is the road condition target area, and its output is a prediction result of whether an obstacle exists in the road condition target area and of the corresponding predicted obstacle type when an obstacle is predicted to exist; and
considering the training finished when the training reaches a set number of iterations or when a loss function representing the gap between the training label and the prediction result falls below a set value.
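Claim 12's stopping rule is independent of the model; a generic sketch with a stand-in one-parameter "model" (everything here is illustrative, not the patent's actual training procedure):

```python
def train(run_epoch, max_epochs, loss_threshold):
    """Run training epochs until the set number of iterations is reached
    or the loss between training label and prediction drops below the
    set value, whichever comes first."""
    loss = float("inf")
    for epoch in range(1, max_epochs + 1):
        loss = run_epoch()
        if loss < loss_threshold:
            break
    return epoch, loss

# Stand-in "model": one parameter fitted to a label of 3.0 by
# gradient descent on a squared-error loss.
state = {"w": 0.0}

def run_epoch(lr=0.1, label=3.0):
    state["w"] -= lr * 2.0 * (state["w"] - label)  # gradient step
    return (state["w"] - label) ** 2               # squared-error loss
```

Because the loss threshold is checked after every epoch, training normally stops well before the epoch budget when the model converges.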
13. An in-vehicle apparatus characterized by comprising:
the data acquisition module is used for acquiring image data of the surrounding environment acquired by a camera device of the vehicle in real time;
the obstacle detection module is used for carrying out obstacle detection on the image data at each moment to obtain a detection result of whether an obstacle exists and a corresponding obstacle type when the obstacle exists;
the position calculation module is used for determining the three-dimensional position information of the obstacle according to the image data, the parameters of the camera device and the real-time position information of the vehicle under the condition that the detection result indicates that the obstacle exists; and
the display module is used for rendering the corresponding obstacle in a vehicle driving map loaded in real time on a display interface of the vehicle-mounted equipment of the vehicle according to the three-dimensional position information and the obstacle type.
14. A vehicle for carrying out the method of any one of claims 1-12 or comprising the vehicle-mounted device of claim 13.
CN202210415479.XA 2022-04-18 2022-04-18 Vehicle vision presenting method, vehicle-mounted equipment and vehicle Pending CN114750696A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210415479.XA CN114750696A (en) 2022-04-18 2022-04-18 Vehicle vision presenting method, vehicle-mounted equipment and vehicle


Publications (1)

Publication Number Publication Date
CN114750696A true CN114750696A (en) 2022-07-15

Family

ID=82331353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210415479.XA Pending CN114750696A (en) 2022-04-18 2022-04-18 Vehicle vision presenting method, vehicle-mounted equipment and vehicle

Country Status (1)

Country Link
CN (1) CN114750696A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187762A (en) * 2022-08-04 2022-10-14 广州小鹏自动驾驶科技有限公司 Rendering method and device of vehicle-mounted map, vehicle and storage medium
CN115187762B (en) * 2022-08-04 2023-09-12 广州小鹏自动驾驶科技有限公司 Vehicle map rendering method and device, vehicle and storage medium
CN115797817A (en) * 2023-02-07 2023-03-14 科大讯飞股份有限公司 Obstacle identification method, obstacle display method, related equipment and system
CN115797817B (en) * 2023-02-07 2023-05-30 科大讯飞股份有限公司 Obstacle recognition method, obstacle display method, related equipment and system
CN115861976A (en) * 2023-03-01 2023-03-28 小米汽车科技有限公司 Vehicle control method and device and vehicle
CN115861976B (en) * 2023-03-01 2023-11-21 小米汽车科技有限公司 Vehicle control method and device and vehicle

Similar Documents

Publication Publication Date Title
US11967109B2 (en) Vehicle localization using cameras
US11488392B2 (en) Vehicle system and method for detecting objects and object distance
US11487988B2 (en) Augmenting real sensor recordings with simulated sensor data
CN114750696A (en) Vehicle vision presenting method, vehicle-mounted equipment and vehicle
CN111874006B (en) Route planning processing method and device
US20190065637A1 (en) Augmenting Real Sensor Recordings With Simulated Sensor Data
JP6082802B2 (en) Object detection device
CN111209825B (en) Method and device for dynamic target 3D detection
JP6678605B2 (en) Information processing apparatus, information processing method, and information processing program
US20230184560A1 (en) Visual interface display method and apparatus, electronic device, and storage medium
CN103770704A (en) System and method for recognizing parking space line markings for vehicle
KR102167835B1 (en) Apparatus and method of processing image
JP2020053046A (en) Driver assistance system and method for displaying traffic information
CN114945952A (en) Generating depth from camera images and known depth data using neural networks
CN111091037A (en) Method and device for determining driving information
CN109583312A (en) Lane detection method, apparatus, equipment and storage medium
CN114765972A (en) Display method, computer program, controller and vehicle for representing a model of the surroundings of a vehicle
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
CN111191482A (en) Brake lamp identification method and device and electronic equipment
CN110727269B (en) Vehicle control method and related product
JP2019117214A (en) Object data structure
CN113221756A (en) Traffic sign detection method and related equipment
CN112507887A (en) Intersection sign extracting and associating method and device
JP2019117432A (en) Display control device
JP2019117052A (en) Display control device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination