CN116229417A - Obstacle distance information generation method, device, equipment and computer readable medium

Obstacle distance information generation method, device, equipment and computer readable medium

Info

Publication number
CN116229417A
CN116229417A
Authority
CN
China
Prior art keywords
obstacle
obstacle detection
frame
detection frame
coordinate set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310148899.0A
Other languages
Chinese (zh)
Inventor
胡禹超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HoloMatic Technology Beijing Co Ltd
Original Assignee
HoloMatic Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HoloMatic Technology Beijing Co Ltd filed Critical HoloMatic Technology Beijing Co Ltd
Priority to CN202310148899.0A priority Critical patent/CN116229417A/en
Publication of CN116229417A publication Critical patent/CN116229417A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30261 Obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the present disclosure disclose obstacle distance information generation methods, apparatuses, devices, and computer-readable media. One embodiment of the method comprises the following steps: acquiring a road image at the current moment; performing obstacle detection on the road image at the current moment to generate obstacle detection information at the current moment, wherein the obstacle detection information at the current moment comprises: obstacle type information, an obstacle detection frame equation set and an obstacle detection frame vertex coordinate set; generating a first target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the abscissa value of the preset camera optical center imaging point coordinate; and generating obstacle distance information based on the first target obstacle key point coordinate set and the obstacle type information. This embodiment can improve the efficiency of generating obstacle distance information.

Description

Obstacle distance information generation method, device, equipment and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method, an apparatus, a device, and a computer readable medium for generating obstacle distance information.
Background
The obstacle distance information generation method is a technique for determining the distance between an obstacle in an image and the current vehicle. Currently, obstacle distance information is generally generated as follows: first, de-distortion processing is performed on the distorted road image captured by a wide-angle camera, and obstacle distance information is then identified from the de-distorted road image.
However, the inventors found that when obstacle distance information is generated in the above manner, the following technical problems often arise:
First, across road images of consecutive frames, de-distorting the road images in real time consumes considerable computing resources, which reduces the efficiency of generating obstacle distance information.
Second, even if the de-distortion step did not affect the efficiency of generating obstacle distance information, the minimum circumscribed rectangular frame obtained by ordinary obstacle-frame detection on the distorted road image is not, after de-distortion, the minimum circumscribed rectangular frame of the obstacle head or tail; it becomes a detection frame with pincushion distortion. If this detection frame is used directly to generate obstacle distance information, an observation error is easily introduced, increasing the error of the generated obstacle distance information and thereby reducing its accuracy.
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
The disclosure is in part intended to introduce concepts in a simplified form that are further described below in the detailed description. The disclosure is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose obstacle distance information generation methods, apparatuses, devices, and computer readable media to address one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method for generating obstacle distance information, the method including: acquiring a road image at the current moment, wherein the road image at the current moment is a distorted image; performing obstacle detection on the current-time road image to generate current-time obstacle detection information, wherein the current-time obstacle detection information comprises: obstacle type information, an obstacle detection frame equation set and an obstacle detection frame vertex coordinate set; generating a first target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the abscissa value of a preset camera optical center imaging point coordinate; and generating obstacle distance information based on the first target obstacle key point coordinate set and the obstacle type information.
In a second aspect, some embodiments of the present disclosure provide an obstacle distance information generating apparatus, the apparatus including: an acquisition unit configured to acquire a current-time road image, wherein the current-time road image is a distorted image; a detection unit configured to perform obstacle detection on the current-time road image to generate current-time obstacle detection information, where the current-time obstacle detection information includes: obstacle type information, an obstacle detection frame equation set and an obstacle detection frame vertex coordinate set; a first generation unit configured to generate a first target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the abscissa value of a preset camera optical center imaging point coordinate; and a second generation unit configured to generate obstacle distance information based on the first target obstacle key point coordinate set and the obstacle type information.
In a third aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following advantageous effects: the obstacle distance information generating method of some embodiments of the present disclosure can improve the efficiency of generating obstacle distance information. Specifically, that efficiency is otherwise reduced because, across road images of consecutive frames, de-distorting the road images in real time consumes considerable computing resources. Based on this, the obstacle distance information generating method of some embodiments of the present disclosure first acquires the road image at the current time, which is a distorted image. Then, obstacle detection is performed on the road image at the current time to generate obstacle detection information at the current time, including: obstacle type information, an obstacle detection frame equation set and an obstacle detection frame vertex coordinate set. Here, since de-distorting the distorted image would require a large amount of computing resources, the obstacle detection is performed directly without de-distortion processing. Then, in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the abscissa value of the preset camera optical center imaging point coordinate, a first target obstacle key point coordinate set is generated based on the obstacle detection frame vertex coordinate set. Generating the first target obstacle key point coordinates facilitates the generation of obstacle distance information.
Finally, obstacle distance information is generated based on the first target obstacle key point coordinate set and the obstacle type information. Thus, the obstacle distance information is generated without performing de-distortion processing on the image, which saves the computing resources that de-distortion would consume and thereby improves the efficiency of generating obstacle distance information.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of an obstacle distance information generation method according to the present disclosure;
fig. 2 is a schematic structural view of some embodiments of an obstacle distance information generating device according to the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that such references should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of an obstacle distance information generation method according to the present disclosure. The obstacle distance information generating method comprises the following steps:
Step 101, acquiring a road image at the current moment.
In some embodiments, the execution subject of the obstacle distance information generating method may acquire the road image at the current moment in a wired or wireless manner. The road image at the current moment is a distorted image, and may be an image captured by the vehicle-mounted camera for the current frame.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, Wi-Fi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections, and other now-known or later-developed wireless connection means.
Step 102, performing obstacle detection on the road image at the current moment to generate obstacle detection information at the current moment.
In some embodiments, the executing body may perform obstacle detection on the current-time road image to generate current-time obstacle detection information. The current-time obstacle detection information may include: obstacle type information, an obstacle detection frame equation set and an obstacle detection frame vertex coordinate set. Specifically, the obstacle detection may be performed on the current-time road image by a preset obstacle detection algorithm to generate the current-time obstacle detection information. The obstacle type information may be the type of the obstacle vehicle. The obstacle detection frame equation set may be the boundary equations, in the camera coordinate system, of the edges of the detected three-dimensional minimum circumscribed rectangular frame of the obstacle. The obstacle detection frame vertex coordinates may be the vertex coordinates, in the image coordinate system of the detected road image, of the two-dimensional minimum circumscribed rectangular frame of the obstacle head or tail.
As an example, the obstacle detection algorithm may include, but is not limited to, at least one of: a G-CRF (Gaussian Conditional Random Field) model, a DenseCRF (Fully-Connected Conditional Random Field) model, an MRF (Markov Random Field) model, and the like.
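For illustration only, the per-frame detection output described above might be carried in a structure such as the following; the field names are assumptions, and the disclosure does not prescribe any concrete data layout:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObstacleDetectionInfo:
    """Hypothetical container for the current-time obstacle detection
    information: a type label, the 3-D bounding-box edge equations
    (camera coordinate system), and the 2-D minimum-circumscribed-
    rectangle vertices (image coordinate system)."""
    obstacle_type: str                         # e.g. a vehicle type label
    frame_equations: List[Tuple[float, ...]]   # edge equations of the 3-D detection frame
    frame_vertices: List[Tuple[float, float]]  # (u, v) vertices of the 2-D detection frame
```

A real detector would populate one such record per detected obstacle per frame.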
Step 103, generating a first target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the abscissa value of the preset camera optical center imaging point coordinate.
In some embodiments, the executing body may generate the first target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the abscissa value of the preset camera optical center imaging point coordinate. The camera optical center imaging point coordinates may be the coordinates of the optical center of the vehicle-mounted camera in the road image coordinate system. Every obstacle detection frame vertex coordinate having an abscissa value greater than that of the preset camera optical center imaging point coordinate indicates that the obstacle detection frame is located in the right half of the road image coordinate system.
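The triggering check of this step, that every detection-frame vertex lies to the right of the optical-center imaging point, can be sketched as follows (function and variable names are hypothetical):

```python
def frame_in_right_half(frame_vertices, optical_center):
    """Return True when every obstacle-detection-frame vertex lies to the
    right of the camera optical-center imaging point, i.e. the whole
    two-dimensional frame sits in the right half of the road image
    (u is the abscissa and grows rightward in image coordinates)."""
    u_center = optical_center[0]
    return all(u > u_center for u, _v in frame_vertices)
```

A symmetric check with `<` would detect the left-half case handled later in the description.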
In some optional implementations of some embodiments, generating the first target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set may include the following steps:
First, in response to determining that the obstacle detection frame vertex coordinate set meets a first preset position condition, the obstacle detection frame vertex coordinates at the upper-left corner of the frame are selected from the obstacle detection frame vertex coordinate set as the screening frame vertex coordinates. The first preset position condition may be: in the obstacle detection frame vertex coordinate set, the 2-norm of the difference between the ordinate value of the vertex at the lower-left corner of the two-dimensional minimum circumscribed rectangular frame and the ordinate value of the camera optical center imaging point coordinates is less than the 2-norm of the corresponding difference for the vertex at the upper-left corner; the ordinate value of the lower-left vertex is greater than or equal to the ordinate value of the camera optical center imaging point coordinates; and the ordinate value of the upper-left vertex is less than or equal to the ordinate value of the camera optical center imaging point coordinates.
The obstacle detection frame vertex coordinate set meeting the first preset position condition can be characterized as: the upper edge of the two-dimensional minimum circumscribed rectangular frame lies above the midline (i.e., the horizontal line through the ordinate of the camera optical center imaging point coordinates), the lower edge lies below the midline, and the upper edge is farther from the midline than the lower edge.
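Under this reading (image ordinates grow downward), the first preset position condition can be sketched as the check below; the scalar 2-norm reduces to an absolute value, and all names are illustrative:

```python
def meets_first_position_condition(v_upper_left, v_lower_left, v_center):
    """Hypothetical check of the first preset position condition.
    Ordinates are image-frame v values (v grows downward), so the upper
    edge lying above the midline means v_upper_left <= v_center.
    `v_center` is the ordinate of the camera optical-center imaging point."""
    # Frame straddles the midline: upper edge above it, lower edge below it.
    straddles = v_upper_left <= v_center <= v_lower_left
    # The upper edge must be farther from the midline than the lower edge
    # (the 2-norm of a scalar difference is its absolute value).
    upper_farther = abs(v_lower_left - v_center) < abs(v_upper_left - v_center)
    return straddles and upper_farther
```

The second preset position condition described later simply flips the `upper_farther` comparison.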
In practice, when the above first preset position condition is satisfied, the frame vertex least affected by image distortion is the obstacle detection frame vertex at the upper-left corner of the frame, so that coordinate is selected as the screening frame vertex coordinate. In addition, when generating obstacle distance information, the viewing direction of the vehicle-mounted (forward-facing) camera needs to be assumed to be horizontal or approximately horizontal to the ground, that is, approximately perpendicular to the front and rear surfaces of the obstacle.
Second, the obstacle detection frame equation corresponding to the right border in the obstacle detection frame equation set is projected into the image coordinate system of the road image to generate a first projected detection frame curve equation. The obstacle detection frame equation corresponding to the right border may be the equation of the right edge of the front rectangle of the three-dimensional detection frame. Here, the obstacle detection frame equation may be projected from the camera coordinate system into the image coordinate system by an inverse projective transformation.
Third, a right border equation is determined from the two obstacle detection frame vertex coordinates at the upper-right and lower-right corners of the frame. The equation of the line connecting the vertices at the upper-right and lower-right corners of the two-dimensional minimum circumscribed rectangular frame in the obstacle detection frame vertex coordinate set may be determined by the two-point form and used as the right border equation.
Fourth, the intersection coordinates of the first projected detection frame curve equation and the right border equation are determined as the detection frame key point coordinates. The abscissa of the detection frame key point coordinates may be the same as the abscissa of the vertex at the upper-right corner of the two-dimensional minimum circumscribed rectangle, and the ordinate of the detection frame key point coordinates may be the same as the ordinate of the camera optical center imaging point coordinates.
Fifth, the screening frame vertex coordinates and the detection frame key point coordinates are respectively determined as first target obstacle key point coordinates to generate the first target obstacle key point coordinate set.
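The third and fourth steps intersect two loci in the image plane: the right border line built by the two-point form, and the projected detection frame curve. A minimal sketch, approximating the projected curve locally by a straight line (the disclosure projects a curve, so this is only an approximation):

```python
def line_through(p, q):
    """Coefficients (a, b, c) of the line a*u + b*v + c = 0 through points
    p and q, i.e. the two-point form used for the right border equation."""
    (u1, v1), (u2, v2) = p, q
    return (v2 - v1, u1 - u2, u2 * v1 - u1 * v2)

def intersect(l1, l2):
    """Intersection point (u, v) of two lines given as (a, b, c) triples,
    or None when the lines are parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det)
```

For example, a right border through the upper-right and lower-right vertices, intersected with a (locally straightened) projected curve at the optical-center ordinate, yields the detection frame key point.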
Step 104, generating obstacle distance information based on the first target obstacle key point coordinate set and the obstacle type information.
In some embodiments, the executing body may generate the obstacle distance information based on the first target obstacle keypoint coordinate set and the obstacle type information. Wherein, first, a target obstacle width value and a target obstacle height value corresponding to the above-mentioned obstacle type information may be acquired from a preset obstacle data table. The obstacle data table may include various types of standard obstacles and their corresponding size information.
Next, an obstacle distance value may be generated as obstacle distance information by the following formula:
Figure SMS_1
wherein, the liquid crystal display device comprises a liquid crystal display device,
Figure SMS_3
representing the obstacle distance value. />
Figure SMS_6
Representing the target obstacle width value. />
Figure SMS_8
Representing the lateral focal length of the onboard camera. />
Figure SMS_4
Representing a preset de-distortion operation. />
Figure SMS_7
Representing first target obstacle keypoint coordinates. />
Figure SMS_9
And representing the coordinates of the first target obstacle key points corresponding to the coordinates of the top points of the screening frames in the first target obstacle key point coordinate set. />
Figure SMS_10
And the abscissa value of the first target obstacle key point coordinate corresponding to the detection frame key point coordinate in the first target obstacle key point coordinate group is represented.
Figure SMS_2
And the ordinate value of the first target obstacle key point coordinate corresponding to the detection frame key point coordinate in the first target obstacle key point coordinate group is expressed, namely, the ordinate value of the first target obstacle key point coordinate is equal to the ordinate value of the camera optical center imaging point coordinate. />
Figure SMS_5
The first element of the vector in brackets is shown.
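A numeric sketch of a width-based range formula of this kind, by pinhole similar triangles; the function and variable names are assumptions, and the de-distortion operation is taken as the identity (i.e., already rectified points):

```python
def obstacle_distance(width_m, fx, undistort, p1, p2):
    """Range from the de-distorted pixel span of a known obstacle width:
    d = fx * w / |[phi(p1)]_1 - [phi(p2)]_1|, where `undistort` stands in
    for the preset de-distortion operation phi and fx is the lateral
    focal length in pixels."""
    u1 = undistort(p1)[0]   # first element of the de-distorted key point
    u2 = undistort(p2)[0]
    return fx * width_m / abs(u1 - u2)

# A 1.8 m wide obstacle spanning 90 px at fx = 1000 px lies 20 m away.
d = obstacle_distance(1.8, 1000.0, lambda q: q, (890.0, 360.0), (800.0, 360.0))
```

With a real camera model, `undistort` would map distorted pixel coordinates to ideal pinhole coordinates before the span is measured.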
Optionally, in response to determining that the obstacle detection frame vertex coordinate set meets a second preset position condition, obstacle distance information is generated based on the obstacle detection frame vertex coordinate set, the target obstacle width value, and the camera optical center imaging point coordinates. The second preset position condition may be: in the obstacle detection frame vertex coordinate set, the 2-norm of the difference between the ordinate value of the vertex at the lower-left corner of the two-dimensional minimum circumscribed rectangular frame and the ordinate value of the camera optical center imaging point coordinates is greater than the 2-norm of the corresponding difference for the vertex at the upper-left corner; the ordinate value of the lower-left vertex is greater than or equal to the ordinate value of the camera optical center imaging point coordinates; and the ordinate value of the upper-left vertex is less than or equal to the ordinate value of the camera optical center imaging point coordinates.
The obstacle detection frame vertex coordinate set meeting the second preset position condition can be characterized as: the upper edge of the two-dimensional minimum circumscribed rectangular frame lies above the midline (i.e., the horizontal line through the ordinate of the camera optical center imaging point coordinates), the lower edge lies below the midline, and the upper edge is closer to the midline than the lower edge. In practice, when the second preset position condition is satisfied, the frame vertex least affected by image distortion is the obstacle detection frame vertex at the upper-right corner of the frame, so that coordinate is selected as the screening frame vertex coordinate.
Specifically, first, the obstacle detection border equation corresponding to the left border in the obstacle detection border equation set may be projected to the image coordinate system of the road image, so as to generate a first projected detection border curve equation. Next, a left frame equation of the two obstacle detecting frame vertex coordinates in the frame upper left corner position and lower left corner position in the above-described obstacle detecting frame vertex coordinate set is determined. And then determining the intersection point coordinates of the first projected detection frame curve equation and the left frame equation as detection frame key point coordinates. And then, respectively determining the vertex coordinates of the screening frame and the key point coordinates of the detection frame as first target obstacle key point coordinates to generate a first target obstacle key point coordinate set. Finally, generating obstacle distance information based on the first target obstacle key point coordinate set and the obstacle type information through the formula. Here, the manner of generating the obstacle distance information is the same as that of the embodiments in the above steps 103 to 104, and will not be described in detail.
Optionally, in response to determining that the obstacle detection frame vertex coordinate set meets a third preset position condition, obstacle distance information is generated based on the obstacle detection frame vertex coordinate set, the target obstacle width value, the target obstacle height value, and the camera optical center imaging point coordinates. The third preset position condition may be: in the obstacle detection frame vertex coordinate set, the ordinate value of the vertex at the lower-left corner of the two-dimensional minimum circumscribed rectangular frame is greater than or equal to the ordinate value of the vertex at the upper-left corner, and the ordinate value of the vertex at the upper-left corner is greater than or equal to the ordinate value of the camera optical center imaging point coordinates. The obstacle detection frame vertex coordinate set meeting the third preset position condition indicates that the whole two-dimensional minimum circumscribed rectangular frame lies below the ordinate of the camera optical center imaging point coordinates. In practice, when the third preset position condition is satisfied, the frame vertices least affected by image distortion are the obstacle detection frame vertices at the lower-left and upper-right corners of the frame. The obstacle distance value may be generated as the obstacle distance information by a formula of the following form:
Figure SMS_11

wherein:

Figure SMS_13 represents the minimization objective function whose optimization target is the obstacle distance value.
Figure SMS_17 may be a preset initial distance value.
Figure SMS_19 represents the longitudinal focal length of the onboard camera.
Figure SMS_14 represents the target obstacle height value.
Figure SMS_16 represents the error value.
Figure SMS_20 represents a preset size error covariance matrix.
Figure SMS_22 represents the Mahalanobis distance.
Figure SMS_12 indicates the lower-left corner position.
Figure SMS_15 indicates the upper-right corner position.
Figure SMS_18 represents the obstacle detection frame vertex coordinates located at the lower-left corner position of the frame in the obstacle detection frame vertex coordinate set.
Figure SMS_21 represents the obstacle detection frame vertex coordinates located at the upper-right corner position of the frame in the obstacle detection frame vertex coordinate set.
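The minimization above is only available as an image placeholder, but the geometric relation it refines is the pinhole-model proportion between the target obstacle height value, the longitudinal focal length, and the vertical pixel extent between the lower-left and upper-right frame vertices. The sketch below illustrates only that basic relation; all names are hypothetical, and the patent's formula additionally weights an error value with a preset size error covariance matrix via the Mahalanobis distance, which is omitted here:

```python
def estimate_distance_pinhole(f_y, target_height, v_lower, v_upper):
    """Rough pinhole-model distance estimate d = f_y * H / h_pixels,
    where h_pixels is the vertical pixel extent of the detection frame
    between the lower-left and upper-right vertices (image y grows
    downward, so the lower vertex has the larger ordinate)."""
    h_pixels = v_lower - v_upper
    if h_pixels <= 0:
        raise ValueError("degenerate detection frame")
    return f_y * target_height / h_pixels
```

For example, a 1.5 m tall obstacle spanning 100 pixels under a 1000-pixel longitudinal focal length yields an estimate of 15 m; the patent's optimization would refine such an initial value.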
Optionally, the executing body may further execute the following steps:
And a first step of generating a second target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is smaller than the abscissa value of the camera optical center imaging point coordinate. The abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set being smaller than the abscissa value of the camera optical center imaging point coordinate represents that the obstacle detection frame is located in the left half of the road image coordinate system. Accordingly, there are likewise three position cases for the ordinate of the two-dimensional minimum circumscribed rectangular frame relative to the camera optical center imaging point coordinate. First, the two-dimensional minimum circumscribed rectangular frame intersects the horizontal line on which the ordinate value of the camera optical center imaging point coordinate lies, and the upper edge line of the two-dimensional minimum circumscribed rectangular frame is the edge farther from that horizontal line. Second, the two-dimensional minimum circumscribed rectangular frame intersects the horizontal line on which the ordinate value lies, and the lower edge line of the two-dimensional minimum circumscribed rectangular frame is the edge farther from that horizontal line. Third, the two-dimensional minimum circumscribed rectangular frame is entirely below the horizontal line on which the ordinate value of the camera optical center imaging point coordinate lies.
In practice, for the first position case, the frame vertex coordinates less affected by image distortion may be the obstacle detection frame vertex coordinates at the upper-right corner position of the frame. In addition, the process of generating the second target obstacle key point coordinate set may be the same as the above manner of generating the second target obstacle key point coordinate set when the second preset position condition is satisfied, and is not described in detail. For the second position case, the frame vertex coordinates less affected by image distortion may be the obstacle detection frame vertex coordinates at the upper-left corner position of the frame. In addition, the process of generating the second target obstacle key point coordinate set may be the same as the above manner of generating the second target obstacle key point coordinate set when the first preset position condition is satisfied, and is not described in detail. Here, for the third position case, the frame vertex coordinates less affected by image distortion may be the obstacle detection frame vertex coordinates at the lower-right and upper-left corner positions of the frame.
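The three position cases above can be sketched as a small classification helper. This is only an illustrative reading of the case analysis, not code from the disclosure; the function name, inputs, and the tie-breaking rule for which edge lies "farther" from the horizontal line are assumptions (image y is taken to grow downward):

```python
def classify_vertical_case(vertex_ys, c_y):
    """Classify the two-dimensional minimum circumscribed rectangular
    frame against the horizontal line through the camera optical center
    imaging point ordinate c_y.

    Returns:
      'below'     - frame entirely below the line (third case)
      'upper_far' - frame intersects the line, upper edge farther (first case)
      'lower_far' - frame intersects the line, lower edge farther (second case)
      'above'     - frame entirely above the line (not among the
                    three cases described in the text)
    """
    top, bottom = min(vertex_ys), max(vertex_ys)
    if top >= c_y:
        return "below"
    if bottom < c_y:
        return "above"
    # Frame straddles the line: compare distances of the two edges.
    return "upper_far" if (c_y - top) > (bottom - c_y) else "lower_far"
```

The returned label would then select which frame vertex coordinates (upper-right, upper-left, or lower-right plus upper-left) to treat as least distorted.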
Optionally, the executing body generates the second target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set, and may include the following steps:
And in the first step, in response to determining that the obstacle detection frame vertex coordinate set meets a first preset position condition, selecting obstacle detection frame vertex coordinates at the upper right corner position of the frame from the obstacle detection frame vertex coordinate set as screening frame vertex coordinates.
And secondly, projecting an obstacle detection frame equation corresponding to the left side frame in the obstacle detection frame equation set to an image coordinate system of the road image so as to generate a second projected detection frame curve equation.
And thirdly, determining a left frame equation from the two obstacle detection frame vertex coordinates, selected from the obstacle detection frame vertex coordinate set, that are located at the upper-left and lower-left corner positions of the frame.
And fourthly, determining the coordinates of the intersection points of the detected frame curve equation after the second projection and the left frame equation as coordinates of key points of the detection frame.
And fifthly, respectively determining the vertex coordinates of the screening frame and the key point coordinates of the detection frame as second target obstacle key point coordinates to generate a second target obstacle key point coordinate set.
In some embodiments, the specific implementation manner and the technical effects of the first to fifth steps may refer to those embodiments in the step 103, which are not described in detail.
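Steps such as the third and fourth above rely on forming a line through two frame vertices and intersecting it with another curve. A minimal sketch of that machinery, assuming the projected detection frame curve is locally approximated by a line and representing lines by homogeneous coefficients (a, b, c) with a·x + b·y + c = 0; all names are hypothetical:

```python
def line_through(p, q):
    """Line coefficients (a, b, c) for the line through points p and q,
    e.g. the upper-left and lower-left detection frame vertices that
    form the left frame edge."""
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersect(l1, l2):
    """Intersection point of two lines given as (a, b, c) triples,
    solved by Cramer's rule."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("parallel lines")
    return ((b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det)
```

Intersecting the vertical left frame line through (100, 200) and (100, 400) with the horizontal line through (0, 300) and (50, 300) yields the key point (100, 300).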
And a second step of generating obstacle distance information based on the second target obstacle key point coordinate set and the obstacle type information. For the first position case described above, the manner of generating the obstacle distance information and its technical effects may refer to the steps performed after the obstacle detection frame vertex coordinate set meets the second preset position condition, and are not described in detail. For the second position case described above, the manner of generating the obstacle distance information and its technical effects may refer to the steps in the embodiments corresponding to steps 103-104, and are not described in detail. For the third position case, the obstacle distance information may be generated by the second formula described above.
Optionally, the executing body may further execute the following steps:
In the first step, in response to determining that obstacle detection frame vertex coordinates whose abscissa values are greater than the abscissa value of the camera optical center imaging point coordinate and obstacle detection frame vertex coordinates whose abscissa values are smaller than that abscissa value are simultaneously present in the obstacle detection frame vertex coordinate set, and that obstacle detection frame vertex coordinates whose ordinate values are greater than the ordinate value of the camera optical center imaging point coordinate and obstacle detection frame vertex coordinates whose ordinate values are smaller than that ordinate value are also simultaneously present, each obstacle detection frame equation in the obstacle detection frame equation set is projected to the image coordinate system of the road image to generate a third projected detection frame curve equation set. The simultaneous presence of obstacle detection frame vertex coordinates whose abscissa values are greater than and smaller than the abscissa value of the camera optical center imaging point coordinate represents that the two-dimensional minimum circumscribed rectangular frame is located at the middle position of the road image. The simultaneous presence of obstacle detection frame vertex coordinates whose ordinate values are greater than and smaller than the ordinate value of the camera optical center imaging point coordinate represents that the upper edge line of the two-dimensional minimum circumscribed rectangular frame is above the center line (namely, the horizontal line on which the ordinate of the camera optical center imaging point coordinate lies) and the lower edge line is below the center line. Second, the obstacle detection frame equation may be projected from the camera coordinate system to the image coordinate system by way of a projection transformation.
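The disclosure does not detail the projection transformation, but for a single point it would conventionally be the pinhole mapping from camera coordinates to (undistorted) image coordinates. The sketch below assumes a standard pinhole model with focal lengths f_x, f_y and principal point (c_x, c_y); all names are hypothetical:

```python
def project_point(X, Y, Z, f_x, f_y, c_x, c_y):
    """Project a 3-D point in the camera coordinate system to the
    image coordinate system under the pinhole model:
    u = f_x * X / Z + c_x, v = f_y * Y / Z + c_y."""
    if Z <= 0:
        raise ValueError("point is behind the camera")
    return (f_x * X / Z + c_x, f_y * Y / Z + c_y)
```

Applying this mapping to points sampled along each three-dimensional frame edge would trace out the projected detection frame curves referred to above.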
And a second step of generating an obstacle detection frame edge equation set based on the obstacle detection frame vertex coordinates in the obstacle detection frame vertex coordinate set. For each side of the two-dimensional minimum circumscribed rectangular frame, an obstacle detection frame edge equation can be generated from the two obstacle detection frame vertex coordinates corresponding to that side by using the two-point form of a line.
And a third step of determining, as first detection frame key point coordinates, the intersection point coordinates of the obstacle detection frame edge equation corresponding to the upper frame position in the obstacle detection frame edge equation set and the third projected detection frame curve equation corresponding to the front upper frame position in the third projected detection frame curve equation set. The abscissa value of the first detection frame key point coordinates may be the same as the abscissa value of the obstacle detection frame vertex coordinate corresponding to the lower-left corner position of the two-dimensional minimum circumscribed rectangle. The ordinate value of the first detection frame key point coordinates may be the same as the ordinate value of the camera optical center imaging point coordinate.

And a fourth step of determining, as second detection frame key point coordinates, the intersection point coordinates of the obstacle detection frame edge equation corresponding to the lower frame position in the obstacle detection frame edge equation set and the third projected detection frame curve equation corresponding to the rear lower frame position in the third projected detection frame curve equation set. The abscissa value of the second detection frame key point coordinates may be the same as the abscissa value of the obstacle detection frame vertex coordinate corresponding to the lower-right corner position of the two-dimensional minimum circumscribed rectangle. The ordinate value of the second detection frame key point coordinates may be the same as the ordinate value of the camera optical center imaging point coordinate.

And a fifth step of determining, as third detection frame key point coordinates, the intersection point coordinates of the obstacle detection frame edge equation corresponding to the left frame position in the obstacle detection frame edge equation set and the third projected detection frame curve equation corresponding to the front left frame position in the third projected detection frame curve equation set. The abscissa value of the third detection frame key point coordinates may be the same as the abscissa value of the camera optical center imaging point coordinate. The ordinate value of the third detection frame key point coordinates may be the same as the ordinate value of the obstacle detection frame vertex coordinate corresponding to the upper-left corner position of the two-dimensional minimum circumscribed rectangle.

And a sixth step of determining, as fourth detection frame key point coordinates, the intersection point coordinates of the obstacle detection frame edge equation corresponding to the right frame position in the obstacle detection frame edge equation set and the third projected detection frame curve equation corresponding to the rear right frame position in the third projected detection frame curve equation set. The abscissa value of the fourth detection frame key point coordinates may be the same as the abscissa value of the camera optical center imaging point coordinate. The ordinate value of the fourth detection frame key point coordinates may be the same as the ordinate value of the obstacle detection frame vertex coordinate corresponding to the lower-left corner position of the two-dimensional minimum circumscribed rectangle.
In practice, for the position of the two-dimensional minimum circumscribed rectangular frame in the above case, the coordinates least affected by image distortion may be the following four coordinates: the tangent point coordinates of the upper edge line of the two-dimensional minimum circumscribed rectangular frame and the curve projected in the distorted image by the front upper edge line of the three-dimensional minimum circumscribed rectangular frame; the tangent point coordinates of the lower edge line of the two-dimensional minimum circumscribed rectangular frame and the curve projected in the distorted image by the rear lower edge line of the three-dimensional minimum circumscribed rectangular frame; the tangent point coordinates of the left edge line of the two-dimensional minimum circumscribed rectangular frame and the curve projected in the distorted image by the front left edge line of the three-dimensional minimum circumscribed rectangular frame; and the tangent point coordinates of the right edge line of the two-dimensional minimum circumscribed rectangular frame and the curve projected in the distorted image by the rear right edge line of the three-dimensional minimum circumscribed rectangular frame.
Seventh, generating obstacle distance information based on the first detection frame key point coordinates, the second detection frame key point coordinates, the third detection frame key point coordinates, the fourth detection frame key point coordinates, and the obstacle type information. Wherein, the obstacle distance value may be generated as the obstacle distance information by the following formula:
Figure SMS_23

wherein:

Figure SMS_26 represents the de-distorted first detection frame key point coordinates.
Figure SMS_28 represents the de-distorted second detection frame key point coordinates.
Figure SMS_30 represents the de-distorted third detection frame key point coordinates.
Figure SMS_25 represents the de-distorted fourth detection frame key point coordinates.
Figure SMS_29 represents the abscissa value of the first detection frame key point coordinates.
Figure SMS_31 represents the abscissa value of the second detection frame key point coordinates.
Figure SMS_32 represents the ordinate value of the third detection frame key point coordinates.
Figure SMS_24 represents the ordinate value of the fourth detection frame key point coordinates.
Figure SMS_27 represents the abscissa value of the camera optical center imaging point coordinate.
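The glossary above refers to de-distorted key point coordinates, and the disclosure stresses that de-distortion is applied only to a small number of coordinates rather than the whole image. As an illustration of what de-distorting a single pixel coordinate can look like, the sketch below inverts a Brown-Conrady radial distortion model by fixed-point iteration; the actual distortion model and parameters of the onboard camera are not given in the text, so every name and coefficient here is an assumption:

```python
def undistort_point(u, v, f_x, f_y, c_x, c_y, k1, k2, iters=10):
    """Remove radial distortion from one pixel coordinate by iterating
    on the model x_d = x_u * (1 + k1*r^2 + k2*r^4) in normalized
    camera coordinates, then re-projecting to pixels."""
    # Normalize the distorted pixel coordinate.
    xd = (u - c_x) / f_x
    yd = (v - c_y) / f_y
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale
    # Back to pixel coordinates.
    return (f_x * xu + c_x, f_y * yu + c_y)
```

Because only four key points per obstacle need this treatment, the per-frame cost is negligible compared with de-distorting the full road image.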
Optionally, the executing body may further execute the following steps:
And a first step of selecting the obstacle detection frame vertex coordinates at the upper-left and upper-right corner positions of the frame from the obstacle detection frame vertex coordinate set as a screening frame vertex coordinate set, in response to determining that obstacle detection frame vertex coordinates whose abscissa values are greater than and smaller than the abscissa value of the camera optical center imaging point coordinate are simultaneously present in the obstacle detection frame vertex coordinate set and that the ordinate value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the ordinate value of the camera optical center imaging point coordinate. The ordinate value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set being greater than the ordinate value of the camera optical center imaging point coordinate represents that the two-dimensional minimum circumscribed rectangular frame is entirely below the center line. The frame vertex coordinates less affected by image distortion may be the upper-left corner vertex coordinates and the upper-right corner vertex coordinates of the two-dimensional minimum circumscribed rectangular frame.
And a second step of generating obstacle distance information based on the screening frame vertex coordinate set and the obstacle type information. Wherein, the obstacle distance value may be generated as the obstacle distance information by the following formula:
Figure SMS_33

wherein:

Figure SMS_34 represents the screening frame vertex coordinates corresponding to the upper-left corner position of the frame in the screening frame vertex coordinate set.
Figure SMS_35 represents the screening frame vertex coordinates corresponding to the upper-right corner position of the frame in the screening frame vertex coordinate set.
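The formula above is only available as an image placeholder (Figure SMS_33). Since it takes the two upper screening frame vertices together with the obstacle type information (from which a target obstacle width value can presumably be looked up), one plausible underlying relation is the pinhole width proportion sketched below. This is a hedged guess at the geometry, not the patent's actual formula, and all names are hypothetical:

```python
def estimate_distance_from_width(f_x, target_width, u_left, u_right):
    """Rough pinhole-model distance estimate d = f_x * W / w_pixels,
    where w_pixels is the horizontal pixel extent between the
    upper-left and upper-right screening frame vertices."""
    w_pixels = u_right - u_left
    if w_pixels <= 0:
        raise ValueError("degenerate screening frame")
    return f_x * target_width / w_pixels
```

For instance, a 1.8 m wide obstacle spanning 180 pixels under a 1000-pixel lateral focal length would be estimated at 10 m.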
The above formulas and related content serve as an invention point of the embodiments of the present disclosure, and solve the second technical problem mentioned in the background art: even if the de-distortion processing does not affect the efficiency of generating the obstacle distance information, the minimum obstacle circumscribed rectangular frame obtained by performing obstacle positive frame detection on the distorted road image is, after de-distortion, no longer the minimum circumscribed rectangular frame of the obstacle vehicle front or parking space, but becomes a detection frame with pincushion distortion; if such a detection frame is directly used for generating the obstacle distance information, an observation error is easily introduced, so that the error of the generated obstacle distance information increases and the accuracy of generating the obstacle distance information decreases. If this factor is addressed, the accuracy of the generated obstacle distance information can be improved.
To achieve this effect, first, by distinguishing the positional relationship between the two-dimensional minimum circumscribed rectangular frame detected from the road image and the horizontal and vertical lines through the camera optical center imaging point coordinate, the obstacle detection frame vertex coordinates least affected by distortion, or other coordinates on the two-dimensional minimum circumscribed rectangular frame, can be selected from the obstacle detection frame vertex coordinate set. The influence of not de-distorting the road image on the process of generating the obstacle distance information is thereby largely eliminated, and the error of the generated obstacle distance is reduced. Meanwhile, the above formulas are introduced for the different position cases respectively, so as to further remove the influence of distortion. Here, since de-distortion processing is performed on only a small number of coordinates, not only can the consumption of computing resources be greatly reduced, but the error of the generated obstacle distance can also be further reduced. Thus, the accuracy of the generated obstacle distance information is greatly improved.
Optionally, the executing body may further send the obstacle distance information to a display terminal for display.
The above embodiments of the present disclosure have the following beneficial effects: by the obstacle distance information generating method of some embodiments of the present disclosure, the efficiency of generating obstacle distance information may be improved. Specifically, the efficiency of generating obstacle distance information is reduced because, for road images of consecutive frames, de-distorting the road images in real time consumes a large amount of computing resources. Based on this, the obstacle distance information generating method of some embodiments of the present disclosure first acquires the road image at the current time, where the road image at the current time is a distorted image. Then, obstacle detection is performed on the road image at the current time to generate obstacle detection information at the current time, where the obstacle detection information at the current time includes: obstacle type information, an obstacle detection frame equation set, and an obstacle detection frame vertex coordinate set. Here, since de-distortion of the distorted image requires a large amount of computing resources, obstacle detection is performed directly without de-distortion processing. Then, in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the abscissa value of the preset camera optical center imaging point coordinate, a first target obstacle key point coordinate set is generated based on the obstacle detection frame vertex coordinate set. Generating the first target obstacle key point coordinate set facilitates generating the obstacle distance information.
And finally, generating obstacle distance information based on the first target obstacle key point coordinate set and the obstacle type information. Thus, it is achieved that the obstacle distance information is generated without performing the de-distortion processing on the image. Furthermore, the computational resources which are required to be consumed in the de-distortion processing can be reduced, so that the efficiency of generating obstacle distance information is improved.
With further reference to fig. 2, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an obstacle distance information generating apparatus, which correspond to those method embodiments shown in fig. 1, and which are particularly applicable to various electronic devices.
As shown in fig. 2, the obstacle distance information generating apparatus 200 of some embodiments includes: an acquisition unit 201, a detection unit 202, a first generation unit 203, and a second generation unit 204. Wherein, the obtaining unit 201 is configured to obtain a current time road image, wherein the current time road image is a distorted image; a detection unit 202 configured to perform obstacle detection on the road image at the current time to generate obstacle detection information at the current time, where the obstacle detection information at the current time includes: obstacle type information, an obstacle detection frame equation set and an obstacle detection frame vertex coordinate set; a first generating unit 203 configured to generate a first target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the abscissa value of a preset camera optical center imaging point coordinate; the second generation unit 204 is configured to generate obstacle distance information based on the first target obstacle keypoint coordinate set and the obstacle type information.
It will be appreciated that the elements described in the apparatus 200 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and resulting benefits described above for the method are equally applicable to the apparatus 200 and the units contained therein, and are not described in detail herein.
Referring now to fig. 3, a schematic diagram of an electronic device 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means 301 (e.g., a central processing unit, a graphics processor, etc.) that may perform various suitable actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device 300 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 3 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 309, or from storage device 308, or from ROM 302. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that, in some embodiments of the present disclosure, the computer readable medium may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be embodied in the apparatus; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a road image at the current moment, wherein the road image at the current moment is a distorted image; performing obstacle detection on the current-time road image to generate current-time obstacle detection information, wherein the current-time obstacle detection information comprises: obstacle type information, an obstacle detection frame equation set and an obstacle detection frame vertex coordinate set; generating a first target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the abscissa value of a preset camera optical center imaging point coordinate; and generating obstacle distance information based on the first target obstacle key point coordinate set and the obstacle type information.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software or by means of hardware. The described units may also be provided in a processor, for example, described as: a processor including an acquisition unit, a detection unit, a first generation unit, and a second generation unit. In some cases, the names of these units do not constitute a limitation on the units themselves; for example, the acquisition unit may also be described as "a unit that acquires a road image at the current moment".
The functions described herein may be performed, at least in part, by one or more hardware logic components. For example, exemplary types of hardware logic components that may be used include, without limitation: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and the like.
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combinations of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the invention, for example, solutions in which the above features are replaced with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. An obstacle distance information generation method, comprising:
acquiring a road image at the current moment, wherein the road image at the current moment is a distorted image;
performing obstacle detection on the current-moment road image to generate current-moment obstacle detection information, wherein the current-moment obstacle detection information comprises: obstacle type information, an obstacle detection frame equation set and an obstacle detection frame vertex coordinate set;
generating a first target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the abscissa value of a preset camera optical center imaging point coordinate;
and generating obstacle distance information based on the first target obstacle key point coordinate set and the obstacle type information.
2. The method of claim 1, wherein the method further comprises:
and sending the obstacle distance information to a display terminal for display.
3. The method of claim 1, wherein the generating a first target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set comprises:
in response to determining that the obstacle detection frame vertex coordinate set meets a first preset position condition, selecting the obstacle detection frame vertex coordinates at the upper left corner position of the frame from the obstacle detection frame vertex coordinate set as screening frame vertex coordinates;
projecting an obstacle detection frame equation corresponding to a right frame in the obstacle detection frame equation set to an image coordinate system of the road image to generate a first projected detection frame curve equation;
determining a right frame equation from the two obstacle detection frame vertex coordinates at the upper right corner position and the lower right corner position of the frame in the obstacle detection frame vertex coordinate set;
determining the intersection point coordinates of the first projected detection frame curve equation and the right frame equation as detection frame key point coordinates;
and determining the screening frame vertex coordinates and the detection frame key point coordinates, respectively, as first target obstacle key point coordinates to generate a first target obstacle key point coordinate set.
4. The method of claim 1, wherein the method further comprises:
generating a second target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is less than the abscissa value of the camera optical center imaging point coordinate;
and generating obstacle distance information based on the second target obstacle key point coordinate set and the obstacle type information.
5. The method of claim 4, wherein the generating a second target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set comprises:
in response to determining that the obstacle detection frame vertex coordinate set meets a first preset position condition, selecting the obstacle detection frame vertex coordinates at the upper right corner position of the frame from the obstacle detection frame vertex coordinate set as screening frame vertex coordinates;
projecting an obstacle detection frame equation corresponding to the left side frame in the obstacle detection frame equation set to an image coordinate system of the road image to generate a second projected detection frame curve equation;
determining a left frame equation from the two obstacle detection frame vertex coordinates at the upper left corner position and the lower left corner position of the frame in the obstacle detection frame vertex coordinate set;
determining the intersection point coordinates of the second projected detection frame curve equation and the left frame equation as detection frame key point coordinates;
and determining the screening frame vertex coordinates and the detection frame key point coordinates, respectively, as second target obstacle key point coordinates to generate a second target obstacle key point coordinate set.
6. The method of claim 1, wherein the method further comprises:
in response to determining that the obstacle detection frame vertex coordinate set contains both obstacle detection frame vertex coordinates whose abscissa values are greater than the abscissa value of the camera optical center imaging point coordinate and obstacle detection frame vertex coordinates whose abscissa values are less than that abscissa value, and contains both obstacle detection frame vertex coordinates whose ordinate values are greater than the ordinate value of the camera optical center imaging point coordinate and obstacle detection frame vertex coordinates whose ordinate values are less than that ordinate value, projecting each obstacle detection frame equation in the obstacle detection frame equation set to an image coordinate system of the road image to generate a third projected detection frame curve equation set;
generating an obstacle detection edge line equation set based on each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set;
determining the intersection point coordinates of the obstacle detection edge line equation corresponding to the upper frame position in the obstacle detection edge line equation set and the third projected detection frame curve equation corresponding to the front upper frame position in the third projected detection frame curve equation set as first detection frame key point coordinates;
determining the intersection point coordinates of the obstacle detection edge line equation corresponding to the lower frame position in the obstacle detection edge line equation set and the third projected detection frame curve equation corresponding to the rear lower frame position in the third projected detection frame curve equation set as second detection frame key point coordinates;
determining the intersection point coordinates of the obstacle detection edge line equation corresponding to the left frame position in the obstacle detection edge line equation set and the third projected detection frame curve equation corresponding to the front left frame position in the third projected detection frame curve equation set as third detection frame key point coordinates;
determining the intersection point coordinates of the obstacle detection edge line equation corresponding to the right frame position in the obstacle detection edge line equation set and the third projected detection frame curve equation corresponding to the rear right frame position in the third projected detection frame curve equation set as fourth detection frame key point coordinates;
and generating obstacle distance information based on the first detection frame key point coordinates, the second detection frame key point coordinates, the third detection frame key point coordinates, the fourth detection frame key point coordinates and the obstacle type information.
7. The method of claim 1, wherein the method further comprises:
in response to determining that the obstacle detection frame vertex coordinate set contains both obstacle detection frame vertex coordinates whose abscissa values are greater than the abscissa value of the camera optical center imaging point coordinate and obstacle detection frame vertex coordinates whose abscissa values are less than that abscissa value, and that the ordinate value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the ordinate value of the camera optical center imaging point coordinate, selecting the obstacle detection frame vertex coordinates at the upper left corner and upper right corner positions of the frame from the obstacle detection frame vertex coordinate set as a screening frame vertex coordinate set;
and generating obstacle distance information based on the screening frame vertex coordinate set and the obstacle type information.
8. An obstacle distance information generating device, comprising:
an acquisition unit configured to acquire a current-time road image, wherein the current-time road image is a distorted image;
a detection unit configured to perform obstacle detection on the current-time road image to generate current-time obstacle detection information, wherein the current-time obstacle detection information includes: obstacle type information, an obstacle detection frame equation set and an obstacle detection frame vertex coordinate set;
a first generation unit configured to generate a first target obstacle key point coordinate set based on the obstacle detection frame vertex coordinate set in response to determining that the abscissa value of each obstacle detection frame vertex coordinate in the obstacle detection frame vertex coordinate set is greater than the abscissa value of a preset camera optical center imaging point coordinate;
and a second generation unit configured to generate obstacle distance information based on the first target obstacle keypoint coordinate set and the obstacle type information.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-7.
CN202310148899.0A 2023-02-22 2023-02-22 Obstacle distance information generation method, device, equipment and computer readable medium Pending CN116229417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310148899.0A CN116229417A (en) 2023-02-22 2023-02-22 Obstacle distance information generation method, device, equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310148899.0A CN116229417A (en) 2023-02-22 2023-02-22 Obstacle distance information generation method, device, equipment and computer readable medium

Publications (1)

Publication Number Publication Date
CN116229417A (en) 2023-06-06

Family

ID=86585327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310148899.0A Pending CN116229417A (en) 2023-02-22 2023-02-22 Obstacle distance information generation method, device, equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN116229417A (en)

Similar Documents

Publication Publication Date Title
US10694175B2 (en) Real-time automatic vehicle camera calibration
CN113607185B (en) Lane line information display method, lane line information display device, electronic device, and computer-readable medium
CN110211195B (en) Method, device, electronic equipment and computer-readable storage medium for generating image set
CN110852258A (en) Object detection method, device, equipment and storage medium
CN114399588B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN115817463B (en) Vehicle obstacle avoidance method, device, electronic equipment and computer readable medium
CN113256742A (en) Interface display method and device, electronic equipment and computer readable medium
CN112258519A (en) Automatic extraction method and device for way-giving line of road in high-precision map making
CN110795196A (en) Window display method, device, terminal and storage medium
CN115393815A (en) Road information generation method and device, electronic equipment and computer readable medium
CN113033715B (en) Target detection model training method and target vehicle detection information generation method
CN116311155A (en) Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN112257598B (en) Method and device for identifying quadrangle in image, readable medium and electronic equipment
CN110321854B (en) Method and apparatus for detecting target object
CN116161040B (en) Parking space information generation method, device, electronic equipment and computer readable medium
CN115468578B (en) Path planning method and device, electronic equipment and computer readable medium
CN114723640B (en) Obstacle information generation method and device, electronic equipment and computer readable medium
CN116229417A (en) Obstacle distance information generation method, device, equipment and computer readable medium
CN115610415A (en) Vehicle distance control method, device, electronic equipment and computer readable medium
CN113506356B (en) Method and device for drawing area map, readable medium and electronic equipment
CN110796144B (en) License plate detection method, device, equipment and storage medium
CN116563818B (en) Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN115019021A (en) Image processing method, device, equipment and storage medium
CN116259037A (en) Guideboard distance information generation method, apparatus, device and computer readable medium
CN116563817B (en) Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination