CN117125055A - Obstacle sensing method and device based on visual parking - Google Patents

Obstacle sensing method and device based on visual parking

Publication number: CN117125055A
Application number: CN202310902634.5A
Authority: CN (China)
Legal status: Pending
Prior art keywords: identification information, obstacle, camera, video stream, parking
Other languages: Chinese (zh)
Inventors: 路凯, 李荣辉, 曾小辉
Assignee (Current and Original): Zero Beam Technology Co ltd
Application filed by Zero Beam Technology Co ltd
Priority to CN202310902634.5A
Publication of CN117125055A
Abstract

The invention discloses a visual parking-based obstacle sensing method and device. The method comprises: acquiring a video stream of a fisheye camera and a video stream of a linear camera; outputting first identification information of a target obstacle based on the video stream of the fisheye camera, and outputting second identification information of the target obstacle based on the video stream of the linear camera; dynamically adjusting the confidence of the first identification information and the confidence of the second identification information according to the distance between the target obstacle and the host vehicle or the position of the target obstacle in the fisheye image; and taking at least one of the first identification information or the second identification information as the output result of the perception information according to the two confidences. The invention can accomplish accurate positioning and perception of obstacles around the vehicle during parking, making parking safer and more efficient.

Description

Obstacle sensing method and device based on visual parking
Technical Field
The invention relates to the technical field of automatic parking, in particular to a visual parking-based obstacle sensing method and device.
Background
Unmanned parking is automatic parking based on computer vision. Compared with manually operated parking, automatic parking follows a more accurate parking path with simpler operations, reduces scratches and collisions, and makes the parking process safer and more efficient.
In recent years, with the development of computer vision technology, technical schemes relying on computer vision have attracted increasing attention. Because fisheye cameras have a wider viewing angle than pinhole cameras, surround-view fisheye cameras are widely used for obstacle perception around the vehicle in parking scenes. However, a system using only fisheye cameras struggles with medium- and short-range distance measurement and obstacle localization against a background of large distortion.
Disclosure of Invention
To address these technical problems, the invention provides a visual parking-based obstacle sensing method and device that can improve obstacle perception capability.
In a first aspect of the present invention, there is provided a visual parking-based obstacle sensing method, including:
acquiring a video stream of a fisheye camera and a video stream of a linear camera;
outputting first identification information of a target obstacle based on the video stream of the fisheye camera, and outputting second identification information of the target obstacle based on the video stream of the linear camera;
dynamically adjusting the confidence level of the first identification information and the confidence level of the second identification information according to the distance between the target obstacle and the vehicle or the position of the target obstacle in the fish-eye camera;
and taking at least one of the first identification information or the second identification information as an output result of the perception information according to the confidence coefficient of the first identification information and the confidence coefficient of the second identification information.
In an optional embodiment, the outputting the first identification information of the target obstacle based on the video stream of the fisheye camera, and outputting the second identification information of the target obstacle based on the video stream of the linear camera, includes:
outputting first identification information of a target obstacle based on a video stream of the fisheye camera by using a first neural network model;
outputting second identification information of the target obstacle based on the video stream of the linear camera using a second neural network model; wherein the first identification information and the second identification information each include a bounding box of a target obstacle.
In an optional embodiment, the dynamically adjusting the confidence of the first identification information and the confidence of the second identification information according to the distance between the target obstacle and the vehicle includes:
calculating a distance between a target obstacle and a host vehicle based on a video stream of the linear camera when the fisheye camera overlaps with a visual region of the linear camera;
and if the distance between the target obstacle and the vehicle is greater than the preset distance, dynamically adjusting the confidence coefficient of the second identification information to be higher than that of the first identification information.
In an optional embodiment, the dynamically adjusting the confidence of the first identification information and the confidence of the second identification information according to the position of the target obstacle in the fisheye camera includes:
when the fish-eye camera is overlapped with the visual area of the linear camera, dividing the visual angle area of the fish-eye camera, wherein the visual angle area comprises a middle area and other areas;
and if the target obstacle is in the middle area shot by the fisheye camera, dynamically adjusting the confidence of the first identification information to be higher than the confidence of the second identification information.
In an alternative embodiment, the visual parking-based obstacle sensing method further includes: while the vehicle is moving forward, adjusting the confidence of the second identification information to be higher than that of the first identification information.
In an alternative embodiment, the method for sensing obstacle based on visual parking further includes: and calculating barrier key points based on the video stream of the fish-eye camera, and taking the barrier key points as a part of the output result of the perception information.
In an alternative embodiment, the obstacle key points comprise an obstacle grounding point and a ground locating point of the obstacle near the host vehicle, and the obstacle key points are marked on the second identification information.
In a second aspect of the present invention, there is provided an obstacle sensing device based on visual parking, comprising:
the information acquisition module is used for acquiring the video stream of the fisheye camera and the video stream of the linear camera;
the model detection module is used for outputting first identification information of a target obstacle based on the video stream of the fisheye camera and outputting second identification information of the target obstacle based on the video stream of the linear camera;
the confidence coefficient adjusting module is used for dynamically adjusting the confidence coefficient of the first identification information and the confidence coefficient of the second identification information according to the distance between the target obstacle and the vehicle or the position of the target obstacle in the fisheye camera;
and the perception result output module is used for taking at least one of the first identification information or the second identification information as an output result of the perception information according to the confidence coefficient of the first identification information and the confidence coefficient of the second identification information.
According to a third aspect of the invention, a vision-based parking method is provided, which comprises the vision-based parking obstacle sensing method according to the first aspect of the invention.
In a fourth aspect of the present invention, there is provided a vehicle comprising:
at least one processor; and at least one memory communicatively coupled to the processor, wherein: the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method according to the first aspect of the embodiments of the invention or the second aspect of the invention.
In a fifth aspect of the invention, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a computer, performs a method according to the first aspect of the embodiments or the second aspect of the invention.
Based on the video streams of the fisheye camera and the linear camera, the invention can handle the recognition and perception of both medium-range and close-range obstacles, which benefits obstacle distance calculation; by dynamically adjusting the confidences, accurate positioning and perception of obstacles around the vehicle can be accomplished during parking, making parking safer and more efficient.
Drawings
Fig. 1 is a schematic flow chart of an obstacle sensing method based on visual parking in an embodiment of the invention.
Fig. 2 is a schematic diagram of an output after recognizing an image captured by a linear camera according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an output of an image captured by a fisheye camera according to an embodiment of the invention.
Fig. 4 is a flow chart of another obstacle sensing method based on visual parking according to an embodiment of the invention.
Fig. 5 is a schematic diagram of identifying the division of the shooting area of the fisheye camera according to the embodiment of the invention.
Fig. 6 is a block diagram of an obstacle sensing device based on visual parking according to an embodiment of the invention.
Fig. 7 is a schematic structural view of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "first," "second," and "third," etc. in the claims, specification and drawings of the present disclosure are used for distinguishing between different objects and not for describing a particular sequential order. The terms "comprises" and "comprising" when used in the specification and claims of the present disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
A linear camera and fisheye cameras are usually arranged around the vehicle to acquire video information of its surroundings. The fisheye camera has a wide viewing angle but large distortion and large ranging and positioning error; the linear camera has a narrower viewing angle but a long visual range, small distortion, and more accurate ranging and positioning. The linear camera is a forward-looking camera, and fisheye cameras are arranged at the front, rear, left and right of the vehicle.
Because the linear camera ranges and positions accurately, it can be used to range obstacles while the vehicle drives during parking, ensuring safe travel. The fisheye cameras are used for obstacle perception when the vehicle backs into a parking space: their wide viewing angle allows more obstacles to be perceived, ensuring safety while entering the space.
During parking, the invention needs to recognize forward medium- and long-distance obstacles to ensure normal travel, and at the same time to perceive close-range obstacles repeatedly. By assigning different confidences to the perception results of the video streams acquired by the linear camera and the fisheye camera, accurate obstacle perception output is achieved, improving perception accuracy and achieving the goal of accurate ranging.
Referring to fig. 1, the present invention provides a visual parking-based obstacle sensing method, which includes:
step 100: and acquiring the video stream of the fisheye camera and the video stream of the linear camera.
Each fisheye camera acquires its own video stream, and the linear camera, as a separate device, acquires another video stream.
In some embodiments, four fisheye cameras disposed at the front, rear, left and right of the vehicle respectively acquire video streams in those directions. The fisheye camera at the front of the vehicle may overlap with the linear camera, creating a shared perception area in which both identify the same obstacle. To improve perception accuracy in this case, the perception output can be adjusted based on confidence, as described in detail later.
Step 200: outputting first identification information of the target obstacle based on the video stream of the fisheye camera, and outputting second identification information of the target obstacle based on the video stream of the linear camera.
Different perception models can be matched to different cameras and their imaging characteristics so as to output accurate perception results. Obstacle perception can be implemented with an artificial intelligence (AI) model, which generally outputs a 2D bounding box or a 3D information box of the target obstacle and can also accomplish the recognition and localization of key points.
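As a minimal sketch of step 200, the per-camera perception outputs can be represented as simple records. The `IdentificationInfo` structure, the `perceive` helper, and the detector interface (a callable returning `(bbox, label)` pairs) are illustrative assumptions, not the patent's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class IdentificationInfo:
    bbox: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in image pixels
    label: str          # obstacle class, e.g. "car" or "pillar"
    confidence: float   # adjusted dynamically later, in step 300
    source: str         # which camera produced it: "fisheye" or "linear"

def perceive(fisheye_frame, linear_frame,
             fisheye_model: Callable, linear_model: Callable):
    """Step 200: run each camera's frame through its matched model,
    producing the first (fisheye) and second (linear) identification info."""
    first = [IdentificationInfo(bbox, label, 0.5, "fisheye")
             for bbox, label in fisheye_model(fisheye_frame)]
    second = [IdentificationInfo(bbox, label, 0.5, "linear")
              for bbox, label in linear_model(linear_frame)]
    return first, second
```

The initial confidence of 0.5 is a placeholder; step 300 reassigns it according to distance and image region.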
In some embodiments, the first identification information includes a plurality of different first identification information, for example, first identification information of a perception result corresponding to a video stream acquired by a fisheye camera located in front of the vehicle, and first identification information of a perception result corresponding to a video stream acquired by a fisheye camera located behind the vehicle, where different fisheye cameras may respectively correspond to different identification information, and collectively referred to as first identification information.
The linear camera serves as the forward-looking camera, and its recognition result is the second identification information, which provides the information basis while the host vehicle moves forward.
It can be understood that during parking the vehicle may move forward, backward, left or right many times, with other vehicles or structures on both sides of the lane; relying on the fisheye camera alone for ranging can cause errors and inaccurate positioning, and in particular scratches and collisions during forward movement. These problems can be avoided by using both the first identification information and the second identification information.
Step 300: and dynamically adjusting the confidence of the first identification information and the confidence of the second identification information according to the distance between the target obstacle and the vehicle or the position of the target obstacle in the fish-eye camera.
Fisheye ranging has a certain reliability at relatively short distances, and obstacle ranging is reliable in the low-distortion region of the fisheye image. In heavily distorted regions, however, inaccurate ranging would lead the vehicle into scratches and collisions.
This inaccuracy of the fisheye camera can be compensated by the accurate ranging of the linear camera: through dynamic adjustment of the confidences, the obstacle perception information is switched according to the actual parking scene.
In combination with the above, when the target obstacle is far from the host vehicle, the second identification information is used for perception and ranging and is assigned the higher confidence; when the target obstacle is within the calibrated ranging area of the fisheye camera, the first identification information is used directly and is assigned the higher confidence.
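The adjustment and selection rules of steps 300 and 400 can be sketched as follows. The 8 m threshold, the 0.1 confidence margin, and the function names are illustrative assumptions; the patent only specifies the direction of the adjustment, not concrete values:

```python
def adjust_confidences(conf_first: float, conf_second: float,
                       distance_m, in_calibrated_area: bool,
                       preset_distance_m: float = 8.0):
    """Step 300 sketch: a far obstacle favours the linear camera (second info);
    an obstacle inside the fisheye's calibrated area favours the fisheye (first info).
    Returns the adjusted (conf_first, conf_second) pair."""
    if distance_m is not None and distance_m > preset_distance_m:
        conf_second = max(conf_second, conf_first + 0.1)
    elif in_calibrated_area:
        conf_first = max(conf_first, conf_second + 0.1)
    return conf_first, conf_second

def select_output(first_info, second_info, conf_first: float, conf_second: float):
    """Step 400: emit whichever identification carries the higher confidence."""
    return first_info if conf_first >= conf_second else second_info
```

In a full system both pieces of identification information could also be fused rather than switched; the sketch shows only the switching case described in the text.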
Step 400: and taking at least one of the first identification information or the second identification information as an output result of the perception information according to the confidence coefficient of the first identification information and the confidence coefficient of the second identification information.
As can be seen from steps 100 to 300 above, parking can be completed based on the first identification information alone, that is, using the video stream of the fisheye camera; or based on both the first and second identification information, as in head-in parking scenarios.
When the video streams of the linear camera and the fisheye camera produce a forward overlapping area in which both identify the same obstacle, the confidence values determine which stream's information to use, switching between the first and second identification information as the output recognition result.
Because the confidence mechanism ensures accurate ranging, avoids ranging errors, and locates obstacles more precisely, the output result of the perception information can be determined accurately.
Based on the video streams of the fisheye camera and the linear camera, the invention can handle the recognition and perception of both medium-range and close-range obstacles, which benefits obstacle distance calculation; by dynamically adjusting the confidences, accurate positioning and perception of obstacles around the vehicle can be accomplished during parking, making parking safer and more efficient.
Further, in the step 200, the outputting the first identification information of the target obstacle based on the video stream of the fisheye camera and the outputting the second identification information of the target obstacle based on the video stream of the linear camera includes:
outputting first identification information of a target obstacle based on a video stream of the fisheye camera by using a first neural network model; outputting second identification information of the target obstacle based on the video stream of the linear camera using a second neural network model.
The first and second neural network models are trained with different training sets: for example, the first neural network model is trained as a convolutional neural network on images captured by a fisheye camera, and the second neural network model as a convolutional neural network on images captured by a linear camera. The model training process follows the prior art and is not described in detail.
Each of the first identification information and the second identification information includes a bounding box of the target obstacle. The bounding boxes are illustrated by the parking scenes shown in figs. 2 and 3. Fig. 2 is an obstacle image captured by the forward-looking camera, where the bounding boxes are the output second identification information; four building columns and one car are identified on the left and right sides of the figure. Fig. 3 is an obstacle image captured by a fisheye camera, showing an automobile obstacle on the left side of the host vehicle.
Further, as shown in fig. 3, the key points of the obstacle can also be identified; fig. 3 shows the grounding point and the ground locating point of the obstacle adjacent to the host vehicle, namely the positions of the two tires and the line connecting the points between them.
Therefore, on the basis of the above steps, the invention may also calculate obstacle key points based on the video stream of the fisheye camera and use them as part of the output result of the perception information. The obstacle key points comprise the obstacle grounding point and the ground locating point of the obstacle near the host vehicle, and the obstacle key points are marked on the second identification information. Identifying the key points benefits parking, prevents scratches and collisions, and allows ranging to be completed accurately.
In some embodiments, a keypoint detection network may be incorporated into the first neural network model described above for locating vehicle-position key points, such as a keypoint detection algorithm or an ROI (region of interest) algorithm. Of course, a keypoint detection network may likewise be added to the second neural network model.
Referring to fig. 4, the invention further provides a visual parking-based obstacle sensing method in which, during parking, obstacles are identified while the vehicle searches for a parking slot. The method comprises the following steps:
step 410: and acquiring the video stream of the fisheye camera and the video stream of the linear camera.
A video stream is acquired from the fisheye camera arranged at the front of the vehicle, or from a fisheye camera arranged at the front-left or front-right; one linear camera is arranged. The visual areas of a fisheye camera and the linear camera can overlap so that they identify the same obstacle, and the fisheye camera whose visual area can overlap with the linear camera's is selected as the target fisheye camera.
Step 420: and when the vehicle is in the forward process, adjusting the confidence degree of the second identification information to be higher than that of the first identification information.
While the vehicle moves forward searching for a parking slot, the demand on obstacle recognition is higher, so the video stream acquired by the linear camera is processed by the neural network model into second identification information, which is used directly for automatic driving; the parking system can recognize a vacant slot based on the second identification information. Generally, the corner points of the parking slot (e.g. the four corners of a rectangle) can be identified in the second identification information, and whether the slot is empty and meets the parking requirement is judged from the presence of obstacles or the distance to them.
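A hypothetical slot-emptiness check along these lines might treat the slot as the rectangle spanned by its four corner points and test whether any detected obstacle lies inside it. The axis-aligned simplification and the centre-point criterion are illustrative assumptions, not the patent's method:

```python
def slot_is_empty(slot_corners, obstacle_bboxes):
    """Return True when no obstacle box centre falls inside the slot.
    slot_corners: four (x, y) corner points in bird's-eye coordinates.
    obstacle_bboxes: iterable of (x_min, y_min, x_max, y_max) boxes."""
    xs = [c[0] for c in slot_corners]
    ys = [c[1] for c in slot_corners]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    for bx0, by0, bx1, by1 in obstacle_bboxes:
        cx, cy = (bx0 + bx1) / 2.0, (by0 + by1) / 2.0  # obstacle centre
        if x0 <= cx <= x1 and y0 <= cy <= y1:
            return False
    return True
```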
Step 430: when the fisheye camera overlaps with the visual region of the linear camera, a distance between a target obstacle and an own vehicle is calculated based on a video stream of the linear camera.
The linear camera acquires a video stream from the front of the vehicle, and the front fisheye camera does as well; their visual areas can overlap, and the same obstacle can be identified in the overlapping area. The reliability of the two video streams can then be judged based on the distance between the vehicle and the obstacle. Since the linear camera's ranging is accurate, it is used to calculate the distance between the target obstacle and the host vehicle.
The distance calculation formula is:

D = H × fy / hpixel

where hpixel is the pixel height of the target's minimum bounding box, H is the real vehicle height, fy is the camera focal length, and D is the approximate distance of the vehicle in the real-world coordinate system.
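The formula above is the standard similar-triangles relation of the pinhole model and translates directly into code (function and parameter names are illustrative):

```python
def monocular_distance(h_pixel: float, vehicle_height_m: float,
                       fy_pixels: float) -> float:
    """D = H * fy / h_pixel.
    h_pixel: pixel height of the target's minimum bounding box;
    vehicle_height_m: real-world height H assumed for the target vehicle;
    fy_pixels: vertical focal length of the camera, in pixels.
    Returns the approximate distance D in metres."""
    if h_pixel <= 0:
        raise ValueError("bounding-box pixel height must be positive")
    return vehicle_height_m * fy_pixels / h_pixel
```

For example, a 1.5 m tall car whose box spans 150 pixels under a 1000-pixel focal length is about 10 m away; as the box grows taller, the estimated distance shrinks proportionally.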
Step 440: and if the distance between the target obstacle and the vehicle is greater than the preset distance, dynamically adjusting the confidence coefficient of the second identification information to be higher than that of the first identification information.
For example, if the target obstacle is an automobile and its distance from the host vehicle is greater than the preset distance, the video stream of the linear camera is taken as the perception basis and the confidence of the second identification information is dynamically assigned the higher value. The preset distance may be the distance limit within which the fisheye camera's ranging remains accurate.
When the confidence of the second identification information is higher than that of the first, the automatic driving system acquires the second identification information for parking and discards the first identification information of the fisheye camera whose visual area overlaps with the linear camera's.
Step 450: when the fisheye camera overlaps with the visual area of the linear camera, the viewing angle area of the fisheye camera is divided, and the viewing angle area comprises a middle area and other areas.
In other embodiments, when the visual areas overlap, the viewing-angle area of the fisheye camera is divided into a middle area, a ground line, and other areas, as shown in fig. 5. The middle area captured by the fisheye camera is the area enclosed by the two middle curves; the ground line, marked by a curve, runs horizontally at approximately the middle of that area; everything outside the middle area constitutes the other areas, i.e. the heavily distorted regions unsuitable for distance calculation.
Step 460: and if the target obstacle is in the middle area shot by the fisheye camera, dynamically adjusting the confidence of the first identification information to be higher than the confidence of the second identification information.
In some embodiments, whether the target obstacle is in the middle area captured by the fisheye camera can be determined by identifying the coordinate region of the target obstacle in the image; other ways of making this determination belong to the prior art and are not enumerated here.
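A simplified membership test for the coordinate-region check might look as follows. The patent bounds the middle area with two curves; approximating it as a central vertical band covering a fixed fraction of the image width is purely an illustrative assumption:

```python
def in_middle_region(bbox, image_width: float,
                     middle_fraction: float = 0.5) -> bool:
    """Return True when the bounding box centre lies in the central band.
    bbox: (x_min, y_min, x_max, y_max) in image pixels;
    middle_fraction: assumed width of the low-distortion band (here 50%)."""
    cx = (bbox[0] + bbox[2]) / 2.0  # horizontal centre of the box
    return abs(cx - image_width / 2.0) <= image_width * middle_fraction / 2.0
```

A real system would instead test against the calibrated curves of fig. 5.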
As for recognizing that the linear camera and the fisheye camera have detected the same target obstacle: the recognized obstacles can be projected into the world coordinate system, where the coordinates recognized by the two cameras are identical or approximately identical. This relies on the SLAM self-localization module of the automatic driving system, which combines the video streams transmitted by the fisheye cameras with information such as wheel-speed odometry to provide vehicle-body position and attitude; SLAM is a known technique and is not repeated here.
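The approximate-coincidence test can be sketched as a distance threshold on the projected world-coordinate positions. The 0.5 m tolerance and the 2D position representation are illustrative assumptions; obtaining the world positions (calibration plus SLAM ego-pose) is outside this sketch:

```python
import math

def same_obstacle(pos_fisheye, pos_linear, tolerance_m: float = 0.5) -> bool:
    """Match two detections projected into the world coordinate system.
    pos_fisheye, pos_linear: (x, y) positions in metres."""
    return math.hypot(pos_fisheye[0] - pos_linear[0],
                      pos_fisheye[1] - pos_linear[1]) <= tolerance_m
```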
And then, according to the identification result, dynamically adjusting the confidence level of the first identification information and giving a high value.
Step 470: and taking at least one of the first identification information or the second identification information as an output result of the perception information according to the confidence coefficient of the first identification information and the confidence coefficient of the second identification information.
Referring to fig. 6, the present invention provides an obstacle sensing device based on visual parking, comprising:
the information acquisition module 61 is used for acquiring the video stream of the fisheye camera and the video stream of the linear camera;
a model detection module 62 for outputting first identification information of a target obstacle based on the video stream of the fisheye camera, and outputting second identification information of a target obstacle based on the video stream of the linear camera;
a confidence level adjustment module 63, configured to dynamically adjust a confidence level of the first identification information and a confidence level of the second identification information according to a distance between a target obstacle and a vehicle or a position of the target obstacle in the fisheye camera;
and a sensing result output module 64, configured to take at least one of the first identification information or the second identification information as an output result of the sensing information according to the confidence level of the first identification information and the confidence level of the second identification information.
Further, the model detection module 62 is further configured to output, based on the video stream of the fisheye camera, first identification information of the target obstacle using the first neural network model; outputting second identification information of the target obstacle based on the video stream of the linear camera using a second neural network model; wherein the first identification information and the second identification information each include a bounding box of a target obstacle.
The confidence adjustment module 63 is further configured to: calculate the distance between a target obstacle and the host vehicle based on the video stream of the linear camera when the fisheye camera overlaps with the visual region of the linear camera, and, if that distance is greater than the preset distance, dynamically adjust the confidence of the second identification information to be higher than that of the first identification information; divide the viewing-angle area of the fisheye camera into a middle area and other areas when the two cameras' visual areas overlap, and, if the target obstacle is in the middle area captured by the fisheye camera, dynamically adjust the confidence of the first identification information to be higher than that of the second identification information; and, while the vehicle is moving forward, adjust the confidence of the second identification information to be higher than that of the first identification information.
Still further, the model detection module 62 is also configured to calculate obstacle key points based on the video stream of the fisheye camera and to include those key points in the output result of the perception information. The obstacle key points comprise the obstacle grounding point and the ground anchor point of the obstacle nearest the host vehicle, and the obstacle key points are marked on the second identification information.
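Marking the fisheye-derived key points on the second identification information could look like the sketch below. The dictionary layout and field names are hypothetical, chosen only to illustrate attaching key points to a linear-camera detection.

```python
def annotate_keypoints(detection, grounding_point, ground_anchor_point):
    """Attach fisheye-derived obstacle key points to a linear-camera detection.

    detection: e.g. {'bbox': (x1, y1, x2, y2)} from the second neural network.
    grounding_point / ground_anchor_point: (u, v) pixel coordinates computed
    from the fisheye video stream.
    """
    annotated = dict(detection)  # do not mutate the original detection
    annotated['keypoints'] = {
        'grounding': grounding_point,        # where the obstacle meets the ground
        'ground_anchor': ground_anchor_point,  # ground point nearest the vehicle
    }
    return annotated
```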
In addition, the invention provides a vision-based parking method, which comprises the vision-parking-based obstacle sensing method shown in Figs. 1 to 5 above. Visual SLAM is then used to map the surroundings and localize the ego vehicle while valid parking slots are detected. After the valid slot information is published, the self-localization information is used to adjust the position and attitude of the vehicle body, and the cameras are used to re-detect and match the valid slot with high precision, ensuring that the vehicle is parked smoothly into the slot.
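The parking flow described above can be sketched as a sequence of stages. The stage objects and their method names below are hypothetical interfaces introduced only to show the ordering of perception, SLAM, slot detection, and pose adjustment.

```python
def visual_parking_step(perception, slam, slot_detector, controller):
    """One iteration of the vision-based parking flow (hypothetical interfaces)."""
    # 1) Perceive obstacles around the vehicle (method of Figs. 1-5).
    obstacles = perception.sense()
    # 2) Visual SLAM: map the surroundings and localize the ego vehicle.
    pose, local_map = slam.update(obstacles)
    # 3) Detect valid parking slots in the local map.
    slots = slot_detector.find(local_map)
    if slots:
        # 4) Adjust vehicle pose and re-verify the chosen slot with the
        #    cameras before completing the park-in maneuver.
        controller.align_and_park(pose, slots[0])
    return slots
```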
As shown in fig. 7, the present invention also provides a vehicle including:
at least one processor; and at least one memory communicatively coupled to the processor, wherein: the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform the vision parking-based obstacle sensing method.
The invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the obstacle sensing method based on visual parking when being executed by a processor.
It is understood that the computer-readable storage medium may include any entity or device capable of carrying computer program code, such as a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), a software distribution medium, and so forth. The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, or some intermediate form, among others.
In some embodiments of the present invention, the apparatus may include a controller, such as a single-chip microcomputer integrating a processor, a memory, a communication module, and the like. The processor may be the processor comprised by the controller. The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. Alternative implementations are also included within the scope of the preferred embodiments of the present invention, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate the interchangeability of hardware and software, the elements and steps of the examples have been described above generally in terms of their functions. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (11)

1. A vision-parking-based obstacle sensing method, comprising:
acquiring a video stream of a fisheye camera and a video stream of a linear camera;
outputting first identification information of a target obstacle based on the video stream of the fisheye camera, and outputting second identification information of the target obstacle based on the video stream of the linear camera;
dynamically adjusting the confidence level of the first identification information and the confidence level of the second identification information according to the distance between the target obstacle and the vehicle or the position of the target obstacle in the fish-eye camera;
and taking at least one of the first identification information or the second identification information as an output result of the perception information according to the confidence coefficient of the first identification information and the confidence coefficient of the second identification information.
2. The vision-parking-based obstacle sensing method according to claim 1, wherein the outputting of the first identification information of the target obstacle based on the video stream of the fisheye camera and the outputting of the second identification information of the target obstacle based on the video stream of the linear camera comprises:
outputting first identification information of a target obstacle based on a video stream of the fisheye camera by using a first neural network model;
outputting second identification information of the target obstacle based on the video stream of the linear camera using a second neural network model; wherein the first identification information and the second identification information each include a bounding box of a target obstacle.
3. The vision-parking-based obstacle sensing method according to claim 1, wherein the dynamically adjusting the confidence of the first identification information and the confidence of the second identification information according to the distance between the target obstacle and the host vehicle comprises:
calculating a distance between a target obstacle and a host vehicle based on a video stream of the linear camera when the fisheye camera overlaps with a visual region of the linear camera;
and if the distance between the target obstacle and the vehicle is greater than the preset distance, dynamically adjusting the confidence coefficient of the second identification information to be higher than that of the first identification information.
4. The vision-parking-based obstacle sensing method according to claim 1, wherein the dynamically adjusting the confidence of the first identification information and the confidence of the second identification information according to the position of the target obstacle in the fisheye camera comprises:
when the fish-eye camera is overlapped with the visual area of the linear camera, dividing the visual angle area of the fish-eye camera, wherein the visual angle area comprises a middle area and other areas;
and if the target obstacle is in the middle area shot by the fisheye camera, dynamically adjusting the confidence of the first identification information to be higher than the confidence of the second identification information.
5. The vision-parking-based obstacle sensing method as claimed in claim 1, further comprising: and when the vehicle is in the forward process, adjusting the confidence degree of the second identification information to be higher than that of the first identification information.
6. The vision-parking-based obstacle sensing method as claimed in claim 1, further comprising: and calculating barrier key points based on the video stream of the fish-eye camera, and taking the barrier key points as a part of the output result of the perception information.
7. The vision-parking-based obstacle sensing method according to claim 6, wherein the obstacle key points comprise an obstacle grounding point and a ground anchor point of the obstacle close to the host vehicle, and the obstacle key points are marked on the second identification information.
8. An obstacle sensing device based on visual parking, comprising:
the information acquisition module is used for acquiring the video stream of the fisheye camera and the video stream of the linear camera;
the model detection module is used for outputting first identification information of a target obstacle based on the video stream of the fisheye camera and outputting second identification information of the target obstacle based on the video stream of the linear camera;
the confidence coefficient adjusting module is used for dynamically adjusting the confidence coefficient of the first identification information and the confidence coefficient of the second identification information according to the distance between the target obstacle and the vehicle or the position of the target obstacle in the fisheye camera;
and the perception result output module is used for taking at least one of the first identification information or the second identification information as an output result of the perception information according to the confidence coefficient of the first identification information and the confidence coefficient of the second identification information.
9. A vision-based parking method comprising the vision-based parking obstacle sensing method according to any one of claims 1 to 7.
10. A vehicle, characterized by comprising:
at least one processor; and at least one memory communicatively coupled to the processor, wherein: the memory stores program instructions executable by the processor, the processor invoking the program instructions capable of performing the method of any of claims 1-7 or claim 9.
11. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when run by a computer, performs the method of any one of claims 1 to 7 or claim 9.
CN202310902634.5A 2023-07-21 2023-07-21 Obstacle sensing method and device based on visual parking Pending CN117125055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310902634.5A CN117125055A (en) 2023-07-21 2023-07-21 Obstacle sensing method and device based on visual parking


Publications (1)

Publication Number Publication Date
CN117125055A true CN117125055A (en) 2023-11-28

Family

ID=88860712


Country Status (1)

Country Link
CN (1) CN117125055A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination