CN111326005B - Image processing method, apparatus, computer device and medium for automatic driving


Info

Publication number
CN111326005B
CN111326005B CN202010145181.2A
Authority
CN
China
Prior art keywords
obstacle
target object
tangents
processing
determining
Prior art date
Legal status
Active
Application number
CN202010145181.2A
Other languages
Chinese (zh)
Other versions
CN111326005A (en)
Inventor
程凯
王军
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111337815.5A (published as CN113936493B)
Priority to CN202010145181.2A (published as CN111326005B)
Publication of CN111326005A
Application granted
Publication of CN111326005B
Legal status: Active

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/09: Arrangements for giving variable traffic instructions
    • G08G1/0962: Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967: Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708: Systems involving transmission of highway information where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725: Systems involving transmission of highway information where the received information generates an automatic action on the vehicle control

Abstract

The present disclosure provides an image processing method that may be used for automatic driving. The method includes: acquiring an image of a preset area; identifying a target object and at least one obstacle in the image; making a tangent line for each obstacle of the at least one obstacle with the target object as a starting point to obtain a plurality of tangent lines; and determining an environmental parameter of the target object based on the plurality of tangent lines.

Description

Image processing method, apparatus, computer device and medium for automatic driving
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method, an image processing apparatus, a computer device, and a medium.
Background
With the rapid development of vehicle and electronics technologies, autonomous vehicles are increasingly common in daily life. An autonomous vehicle obtains information about its surrounding traffic scene through various sensors and determines a suitable driving strategy from that information to realize automatic driving.
However, an autonomous vehicle may encounter obstacle occlusion while traveling: an obstacle may be occluded by the background, or obstacles may occlude one another. This affects the stability and reliability of the vehicle's obstacle perception.
In implementing the disclosed concept, the inventors found at least the following problem in the related art: the occlusion relationship between obstacles is generally analyzed by methods such as lower-edge sorting, which have low accuracy and reliability and can hardly meet an autonomous vehicle's requirements for obstacle perception.
Disclosure of Invention
In view of the above, the present disclosure provides an image processing method, an image processing apparatus, a computer device, and a medium.
One aspect of the present disclosure provides an image processing method, including: acquiring an image of a preset area; identifying a target object and at least one obstacle in the image; making a tangent line for each obstacle of the at least one obstacle with the target object as a starting point to obtain a plurality of tangent lines; and determining an environmental parameter of the target object based on the plurality of tangent lines.
According to an embodiment of the present disclosure, determining an environmental parameter of the target object based on the plurality of tangent lines includes: for each obstacle of the at least one obstacle, determining the tangents reaching the obstacle among the plurality of tangents as processing tangents, and determining the actual visible range of the obstacle for the target object according to the processing tangents; and determining an environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle.
According to an embodiment of the present disclosure, the determining the actual visual range of the obstacle for the target object according to the processing tangent line includes: determining at least one pair of adjacent processing tangents satisfying a first preset condition from the processing tangents, wherein the first preset condition comprises that no obstacle exists in an area formed by the adjacent processing tangents and the obstacle boundary, and determining an actual visual range of the obstacle to the target object according to the at least one pair of adjacent processing tangents.
According to an embodiment of the present disclosure, determining the actual visible range of the obstacle for the target object according to the at least one pair of adjacent processing tangents includes: calculating the sum of the included angles between each pair of adjacent processing tangents in the at least one pair of adjacent processing tangents as the actual visible range of the obstacle.
According to an embodiment of the present disclosure, determining an environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle comprises: determining the maximum included angle formed at the starting point by the tangent lines of the obstacle as the theoretical visible range of the obstacle for the target object, and determining the ratio of the actual visible range to the theoretical visible range of each obstacle as the environmental parameter.
According to an embodiment of the present disclosure, determining the actual visible range of the obstacle for the target object from the at least one pair of adjacent processing tangents includes: calculating the sum of the areas of the regions bounded by each pair of adjacent processing tangents and the boundary of the obstacle as the actual visible range of the obstacle.
According to an embodiment of the present disclosure, determining an environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle comprises: calculating the sum of the areas of regions formed by adjacent tangents meeting a second preset condition and the boundary of the preset area as a first area, wherein the second preset condition comprises that no obstacle exists in the region formed by the adjacent tangents and the boundary of the preset area; calculating the sum of the actual visible ranges of the at least one obstacle as a second area; and calculating the ratio of the sum of the first area and the second area to the area of the preset area as the environmental parameter.
According to an embodiment of the present disclosure, acquiring the image of the preset area includes: acquiring a two-dimensional top view of the preset area.
According to an embodiment of the present disclosure, acquiring the image of the preset area includes: acquiring three-dimensional information of the preset area, and projecting the three-dimensional information to obtain a two-dimensional top view of the preset area.
According to an embodiment of the present disclosure, the method further comprises: modeling the target object as a single point and modeling each obstacle as a polygon. The making of a tangent line for each obstacle in at least one obstacle with the target object as a starting point includes: determining a tangent to the obstacle by connecting to vertices of a polygon starting from the single point.
According to an embodiment of the present disclosure, the method further comprises: pre-processing the at least one obstacle, wherein the pre-processing comprises: determining a to-be-supplemented obstacle in the at least one obstacle based on the size and/or the shape, and performing complement processing on the to-be-supplemented obstacle according to a preset size to obtain a complete image of the to-be-supplemented obstacle.
According to an embodiment of the present disclosure, the target object includes a vehicle. The method further comprises: generating an autonomous driving strategy for the vehicle based on the environmental parameter.
Another aspect of the present disclosure provides an image processing apparatus including an acquisition module, a recognition module, a processing module, and a determination module. The acquisition module is used for acquiring an image of a preset area; the recognition module is used for identifying a target object and at least one obstacle in the image; the processing module is used for making a tangent line for each obstacle of the at least one obstacle with the target object as a starting point to obtain a plurality of tangent lines; and the determination module is used for determining an environmental parameter of the target object based on the plurality of tangent lines.
According to an embodiment of the present disclosure, determining an environmental parameter of the target object based on the plurality of tangent lines includes: for each obstacle of the at least one obstacle, determining the tangents reaching the obstacle among the plurality of tangents as processing tangents, and determining the actual visible range of the obstacle for the target object according to the processing tangents; and determining an environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle.
According to an embodiment of the present disclosure, the determining the actual visual range of the obstacle for the target object according to the processing tangent line includes: determining at least one pair of adjacent processing tangents satisfying a first preset condition from the processing tangents, wherein the first preset condition comprises that no obstacle exists in an area formed by the adjacent processing tangents and the obstacle boundary, and determining an actual visual range of the obstacle to the target object according to the at least one pair of adjacent processing tangents.
According to an embodiment of the present disclosure, determining the actual visible range of the obstacle for the target object according to the at least one pair of adjacent processing tangents includes: calculating the sum of the included angles between each pair of adjacent processing tangents in the at least one pair of adjacent processing tangents as the actual visible range of the obstacle.
According to an embodiment of the present disclosure, determining an environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle comprises: determining the maximum included angle formed at the starting point by the tangent lines of the obstacle as the theoretical visible range of the obstacle for the target object, and determining the ratio of the actual visible range to the theoretical visible range of each obstacle as the environmental parameter.
According to an embodiment of the present disclosure, determining the actual visible range of the obstacle for the target object from the at least one pair of adjacent processing tangents includes: calculating the sum of the areas of the regions bounded by each pair of adjacent processing tangents and the boundary of the obstacle as the actual visible range of the obstacle.
According to an embodiment of the present disclosure, determining an environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle comprises: calculating the sum of the areas of regions formed by adjacent tangents meeting a second preset condition and the boundary of the preset area as a first area, wherein the second preset condition comprises that no obstacle exists in the region formed by the adjacent tangents and the boundary of the preset area; calculating the sum of the actual visible ranges of the at least one obstacle as a second area; and calculating the ratio of the sum of the first area and the second area to the area of the preset area as the environmental parameter.
According to an embodiment of the present disclosure, acquiring the image of the preset area includes: acquiring a two-dimensional top view of the preset area.
According to an embodiment of the present disclosure, acquiring the image of the preset area includes: acquiring three-dimensional information of the preset area, and projecting the three-dimensional information to obtain a two-dimensional top view of the preset area.
According to an embodiment of the present disclosure, the apparatus further comprises: a first modeling module for modeling the target object as a single point, and a second modeling module for modeling each obstacle as a polygon. The processing module is further configured to determine the tangents to the obstacle through lines connecting the single point to the vertices of the polygon.
According to an embodiment of the present disclosure, the apparatus further comprises: a preprocessing module for preprocessing the at least one obstacle, wherein the preprocessing comprises: determining, based on size and/or shape, an obstacle to be completed among the at least one obstacle, and completing that obstacle according to a preset size to obtain its complete image.
According to an embodiment of the present disclosure, the target object includes a vehicle. The device further comprises: a strategy generation module to generate an autonomous driving strategy for the vehicle based on the environmental parameter.
Another aspect of the present disclosure provides a computer device, comprising: one or more processors, a storage device to store one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiments of the present disclosure, the problem in the related art of low accuracy and reliability in obstacle perception by autonomous vehicles can be at least partially solved, thereby achieving the technical effect of improving an autonomous vehicle's ability to perceive obstacles.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically shows an exemplary system architecture to which an image processing method according to an embodiment of the present disclosure is applied;
FIG. 2 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure;
FIG. 3A schematically illustrates a tangent line diagram in accordance with an embodiment of the present disclosure;
FIG. 3B schematically illustrates a tangent line diagram according to another embodiment of the present disclosure;
FIG. 3C schematically illustrates an actual visible range diagram according to an embodiment of the present disclosure;
FIG. 3D schematically illustrates an actual visible range diagram according to another embodiment of the present disclosure;
fig. 4 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure; and
FIG. 5 schematically shows a block diagram of a computer device according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B, or C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
An embodiment of the present disclosure provides an image processing method. The method may include: acquiring an image of a preset area, and identifying a target object and at least one obstacle in the image of the preset area. Then, with the target object as a starting point, a tangent line is made for each obstacle of the at least one obstacle to obtain a plurality of tangent lines. Next, an environmental parameter of the target object is determined based on the plurality of tangents.
Fig. 1 schematically shows an exemplary system architecture 100 to which an image processing method according to an embodiment of the present disclosure is applied.
It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include an image acquisition device 101, a network 102, a server 103, and a target object 104. The network 102 is a medium to provide a communication link between the image acquisition apparatus 101 and the server 103, or between the image acquisition apparatus 101 and the target object 104, or between the server 103 and the target object 104. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The image acquisition apparatus 101 may be a two-dimensional image acquisition apparatus or a three-dimensional image acquisition apparatus. For example, the image capturing device 101 may be a vehicle-mounted laser radar, a road monitor, or the like. The image acquisition means 101 may be used to acquire image data of the environment surrounding the target object 104.
The server 103 may be a server that provides various services. For example, the server 103 may receive an image acquired from the image acquisition apparatus 101, perform analysis processing on the image, and feed back the processing result to the target object 104.
The target object 104 may be, for example, an autonomous vehicle. The target object 104 may determine an automatic driving policy, for example, according to a processing result fed back by the server 103, thereby implementing automatic driving.
It should be noted that the image processing method provided by the embodiment of the present disclosure may be executed by the server 103. Accordingly, the image processing apparatus provided by the embodiment of the present disclosure may be provided in the server 103.
It should be understood that the number of image acquisition devices, networks, servers, and target objects in fig. 1 are merely illustrative. There may be any number of image capture devices, networks, servers, and target objects, as desired.
Fig. 2 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S201 to S204.
In operation S201, an image of a preset region is acquired.
According to an embodiment of the present disclosure, the preset region may be a region of interest (ROI). For example, the region of interest may be determined based on the position of the target object.
In the embodiment of the present disclosure, acquiring the image of the preset area may be acquiring a top view of the preset area. The top-view boundary of the preset area in the embodiment of the present disclosure may be a polygon, a circle, or an irregular shape, for example.
In an embodiment of the present disclosure, acquiring the image of the preset area may be acquiring a two-dimensional top view of the preset area. For example, the preset area may be photographed from above by an image capture device to obtain the two-dimensional top view; for instance, such a view may be captured by a road monitor.
In another embodiment of the present disclosure, acquiring the image of the preset area may also be acquiring three-dimensional information of the preset area and projecting the three-dimensional information to obtain a two-dimensional top view of the preset area. For example, three-dimensional information of the preset area may be collected by a sensor and then projected. For instance, three-dimensional point cloud data of the preset area can be collected by a vehicle-mounted detector and projected onto the ground plane to obtain the two-dimensional top view of the preset area.
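As a hedged illustration of this projection step (the NumPy usage, the square ROI, and names such as roi_half_size and cell_size are assumptions, not from the patent), a minimal sketch might look like this:

```python
# Minimal sketch: project 3-D points to a 2-D top view by dropping z and
# rasterizing into an occupancy grid. Parameters are illustrative assumptions.
import numpy as np

def point_cloud_to_top_view(points, roi_half_size=30.0, cell_size=0.2):
    """points: (N, 3) array of x, y, z coordinates centered on the target object."""
    # Keep only points inside the square region of interest (ROI).
    mask = (np.abs(points[:, 0]) < roi_half_size) & (np.abs(points[:, 1]) < roi_half_size)
    xy = points[mask, :2]                        # projection: simply drop z
    n = int(2 * roi_half_size / cell_size)
    idx = ((xy + roi_half_size) / cell_size).astype(int)
    idx = np.clip(idx, 0, n - 1)
    grid = np.zeros((n, n), dtype=np.uint8)      # the two-dimensional top view
    grid[idx[:, 1], idx[:, 0]] = 1               # mark cells hit by a point
    return grid
```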
Next, in operation S202, a target object and at least one obstacle in an image are identified.
According to an embodiment of the present disclosure, the target object may be, for example, an autonomous vehicle, and the obstacle may be, for example, another object in the image other than the target object, such as another vehicle, a fence, or a building.
Next, in operation S203, a tangent line is made for each obstacle of the at least one obstacle respectively, taking the target object as a starting point, so as to obtain a plurality of tangent lines.
According to the embodiment of the disclosure, tangent lines may be drawn to the contour of each obstacle with the target object as the starting point. Here, a tangent line to an obstacle is a line that has exactly one intersection point with the contour of the obstacle. It will be appreciated that, starting from the target object, two tangent lines can be obtained for each obstacle.
In the disclosed embodiment, the target object may be modeled as a single point and each obstacle may be modeled as a polygon, so that the tangents to an obstacle may be determined through lines connecting the single point to the vertices of the polygon. According to the embodiment of the disclosure, modeling the obstacle as a polygon reduces the computation required to determine the tangents, since they can be found directly among the lines to the polygon's vertices.
For example, an obstacle may be modeled as a suitable polygon according to its contour shape. For instance, when the obstacle is a vehicle, it may be modeled as a rectangle; lines are drawn connecting the single point to the four vertices of the rectangle, and the two outermost lines are the tangents from the single point to the obstacle.
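To make the vertex-based construction concrete, here is a small sketch assuming convex polygonal obstacles (function and variable names are illustrative, not the patent's): a vertex is a tangent point seen from the single point O exactly when all other vertices lie on one side of the line through O and that vertex.

```python
# Sketch of tangent construction from a single point to a convex polygon.
import numpy as np

def tangent_vertices(origin, polygon):
    """origin: (2,) point O; polygon: (V, 2) convex vertices.
    Returns the two vertices whose connecting lines O->v are the tangents."""
    origin = np.asarray(origin, dtype=float)
    poly = np.asarray(polygon, dtype=float)
    tangents = []
    for i, v in enumerate(poly):
        d = v - origin
        # Signed cross products of O->v with O->w for every other vertex w.
        cross = d[0] * (poly[:, 1] - origin[1]) - d[1] * (poly[:, 0] - origin[0])
        others = np.delete(cross, i)
        if np.all(others >= 0) or np.all(others <= 0):  # all on one side of O->v
            tangents.append(tuple(v))
    return tangents

# A rectangular vehicle seen from O = (0, 0): the two outermost of the four
# connecting lines are the tangents.
print(tangent_vertices((0, 0), [(2, 1), (4, 1), (4, 2), (2, 2)]))  # [(4.0, 1.0), (2.0, 2.0)]
```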
Fig. 3A schematically illustrates tangent lines in accordance with an embodiment of the present disclosure. A specific embodiment of operation S203 is described next in conjunction with fig. 3A.
For example, as shown in fig. 3A, point O is the target object and the obstacles are A, B, and C. Drawing tangents from O to obstacle A yields tangent lines L2 and L6; drawing tangents from O to obstacle B yields tangent lines L4 and L5; and drawing tangents from O to obstacle C yields tangent lines L1 and L3.
In operation S204, an environmental parameter of the target object is determined based on the plurality of tangents.
According to the embodiment of the disclosure, for each obstacle of the at least one obstacle, the tangents reaching the obstacle among the plurality of tangents are determined as processing tangents, and the actual visible range of the obstacle for the target object is determined according to the processing tangents. Then, an environmental parameter of the target object is determined based on the actual visible range of each of the at least one obstacle.
For example, as shown in fig. 3A, the tangents reaching obstacle A are L2, L3, L4, L5, and L6, so these are the processing tangents for obstacle A, and the actual visible range of obstacle A for the target object O can be determined based on L2, L3, L4, L5, and L6.
Similarly, for the obstacle B, the tangents to the obstacle B are L4 and L5, and then L4 and L5 are processing tangents to the obstacle B, and the actual visible range of the obstacle B for the target object O can be determined based on L4 and L5.
Similarly, for the obstacle C, the tangents to the obstacle C are L1, L2, and L3, and then L1, L2, and L3 are processing tangents to the obstacle C, and the actual visible range of the obstacle C for the target object O can be determined based on L1, L2, and L3.
The disclosed embodiments may determine the environmental parameters of target object O based on the actual visible range of obstacles A, B and C.
In the embodiment of the present disclosure, determining the actual visual range of the obstacle for the target object according to the processing tangent line may include: determining at least one pair of adjacent processing tangents satisfying a first preset condition from the processing tangents, wherein the first preset condition comprises that no obstacle exists in an area formed by the adjacent processing tangents and an obstacle boundary, and determining an actual visual range of the obstacle to the target object according to the at least one pair of adjacent processing tangents.
For example, as shown in fig. 3A, for obstacle A, the adjacent processing tangents satisfying the first preset condition may be determined from the processing tangents L2, L3, L4, L5, and L6 as (L3, L4) and (L5, L6), and the actual visible range of obstacle A corresponding to the target object O may be determined from the adjacent processing tangents (L3, L4) and (L5, L6).
Similarly, for the obstacle B, the adjacent processing tangents (L4, L5) satisfying the first preset condition may be determined from the processing tangents L4 and L5, and the actual visible range of the obstacle B corresponding to the target object O may be determined from the adjacent processing tangents (L4, L5).
Similarly, for the obstacle C, the adjacent processing tangents satisfying the first preset condition may be determined as (L1, L2) and (L2, L3) from among the processing tangents L1, L2, and L3, and then the actual visible range of the obstacle C corresponding to the target object O is determined from the adjacent processing tangents (L1, L2) and (L2, L3).
In an embodiment of the present disclosure, in order to reduce the amount of calculation, a tangent that has another obstacle lying between the starting point and its tangent point may be disregarded when determining the processing tangents, or when determining the adjacent processing tangents satisfying the first preset condition. For example, as shown in fig. 3A, obstacle C lies between the starting point O and the tangent point of tangent L2 on obstacle A, so the tangent L2 may be disregarded.
Fig. 3B schematically illustrates tangent lines according to another embodiment of the present disclosure. A specific embodiment in which such blocked tangents are disregarded is described next in conjunction with fig. 3B.
For example, as shown in fig. 3B, the tangent L2 may be deleted. The processing tangents for obstacle A are then L3, L4, L5, and L6, with adjacent processing tangents (L3, L4) and (L5, L6). The processing tangents for obstacle B are L4 and L5, with adjacent processing tangents (L4, L5). The processing tangents for obstacle C are L1 and L3, with adjacent processing tangents (L1, L3).
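A minimal sketch of this simplification, assuming the shapely geometry library (the library choice and all names are assumptions, not part of the patent): a tangent is discarded when the segment from the starting point to its tangent point crosses another obstacle.

```python
# Sketch: drop tangents whose segment from O to the tangent point is blocked.
from shapely.geometry import LineString, Polygon

def filter_occluded_tangents(origin, tangents, obstacles):
    """tangents: list of (tangent_point, owner_index) pairs;
    obstacles: list of vertex lists, indexed by owner_index."""
    kept = []
    for point, owner in tangents:
        segment = LineString([origin, point])
        blocked = any(segment.crosses(Polygon(vertices))   # another obstacle in between
                      for i, vertices in enumerate(obstacles) if i != owner)
        if not blocked:
            kept.append((point, owner))
    return kept
```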
Specifically, in an embodiment of the present disclosure, determining the actual visible range of an obstacle for the target object according to the at least one pair of adjacent processing tangents may be done by calculating, for the obstacle, the sum of the included angles between each pair of adjacent processing tangents as the actual visible range of the obstacle.
Fig. 3C schematically illustrates an actual visibility range diagram according to an embodiment of the present disclosure. A specific embodiment of calculating the actual viewing range is described next in conjunction with fig. 3C.
As shown in fig. 3C, for obstacle A, the included angle between the adjacent processing tangents (L3, L4) is θ2 and the included angle between the adjacent processing tangents (L5, L6) is θ4, so the actual visible range of obstacle A is θ2 + θ4.
For obstacle B, the included angle between the adjacent processing tangents (L4, L5) is θ3, so the actual visible range of obstacle B is θ3.
For obstacle C, the included angle between the adjacent processing tangents (L1, L3) is θ1, so the actual visible range of obstacle C is θ1.
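The angle computation of fig. 3C can be sketched as follows, assuming the adjacent pairs have already been filtered by the first preset condition (names are illustrative):

```python
# Sketch: actual visible range as the sum of included angles at the origin.
import math

def actual_visible_range(origin, adjacent_tangent_pairs):
    """adjacent_tangent_pairs: list of ((x1, y1), (x2, y2)) tangent-point pairs
    already satisfying the first preset condition."""
    total = 0.0
    for p, q in adjacent_tangent_pairs:
        a = math.atan2(p[1] - origin[1], p[0] - origin[0])
        b = math.atan2(q[1] - origin[1], q[0] - origin[0])
        gap = abs(a - b)
        total += min(gap, 2 * math.pi - gap)  # included angle of the pair
    return total  # e.g. θ2 + θ4 for obstacle A in fig. 3C
```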
According to an embodiment of the present disclosure, determining the environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle may include: for each obstacle, determining the maximum included angle formed at the starting point by the tangent lines of the obstacle as the theoretical visible range of the obstacle for the target object, and taking the ratio of the actual visible range to the theoretical visible range of each obstacle as an environmental parameter.
For example, as shown in fig. 3C, the theoretical visible range of obstacle A is the included angle θA formed by the dotted line (L2) and L6, so the environmental parameter is (θ2 + θ4)/θA. This environmental parameter can be used as the occlusion ratio of obstacle A with respect to the target object.
For obstacle B, the theoretical visible range is the included angle θB formed by L4 and L5, so the environmental parameter is θ3/θB (= 1). This environmental parameter can be used as the occlusion ratio of obstacle B with respect to the target object.
For obstacle C, the theoretical visible range is the included angle θC formed by L1 and L3, so the environmental parameter is θ1/θC (= 1). This environmental parameter can be used as the occlusion ratio of obstacle C with respect to the target object.
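Continuing the same sketch, the theoretical visible range can be taken as the largest included angle at the starting point over all tangents of one obstacle, and the occlusion ratio is the quotient of the two ranges (1 meaning the obstacle is not occluded). This is an illustrative sketch, not the patent's exact procedure:

```python
# Sketch: theoretical visible range and the occlusion ratio of one obstacle.
import math

def theoretical_visible_range(origin, tangent_points):
    """Largest included angle formed at the origin by any two tangents of the obstacle."""
    angles = [math.atan2(p[1] - origin[1], p[0] - origin[0]) for p in tangent_points]
    best = 0.0
    for i in range(len(angles)):
        for j in range(i + 1, len(angles)):
            gap = abs(angles[i] - angles[j])
            best = max(best, min(gap, 2 * math.pi - gap))
    return best

def occlusion_ratio(actual_range, theoretical_range):
    # e.g. (θ2 + θ4) / θA for obstacle A in fig. 3C
    return actual_range / theoretical_range
```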
Specifically, in another embodiment of the present disclosure, determining the actual visible range of an obstacle for the target object according to the at least one pair of adjacent processing tangents may instead be done by calculating, for the obstacle, the sum of the areas of the regions bounded by each pair of adjacent processing tangents and the boundary of the obstacle as the actual visible range of the obstacle.
For example, fig. 3D schematically illustrates an actual viewing range diagram according to another embodiment of the present disclosure. A specific embodiment of calculating the actual viewing range is described next in conjunction with fig. 3D.
As shown in fig. 3D, for obstacle A, the area of the region Q2 bounded by the adjacent processing tangents (L3, L4) and the boundary of obstacle A is S2, and the area of the region Q4 bounded by the adjacent processing tangents (L5, L6) and the boundary of obstacle A is S4, so the actual visible range of obstacle A is S2 + S4.
For obstacle B, the area of the region Q3 bounded by the adjacent processing tangents (L4, L5) and the boundary of obstacle B is S3, so the actual visible range of obstacle B is S3.
For obstacle C, the area of the region Q1 bounded by the adjacent processing tangents (L1, L3) and the boundary of obstacle C is S1, so the actual visible range of obstacle C is S1.
According to an embodiment of the present disclosure, determining the environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle may further include: calculating the sum of the areas of regions formed by adjacent tangents meeting a second preset condition and the boundary of the preset area as a first area, wherein the second preset condition comprises that no obstacle exists in the region formed by the adjacent tangents and the boundary of the preset area; calculating the sum of the actual visible ranges of the at least one obstacle as a second area; and calculating the ratio of the sum of the first area and the second area to the area of the preset area as an environmental parameter.
For example, as shown in fig. 3D, the adjacent tangents satisfying the second preset condition are (L1, L6), and the area of the region Q5 formed by the boundary between the adjacent tangents (L1, L6) and the preset region is S5, so the first area is S5. The sum of the actual visible ranges of obstacles A, B, and C is S1 + S2 + S3 + S4, so the second area is S1 + S2 + S3 + S4. The area of the preset region may be denoted by S, and the environmental parameter is then (S1 + S2 + S3 + S4 + S5)/S. This environmental parameter may serve as the environmental complexity of the preset area.
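Once the individual areas are available (for instance from polygon clipping, which is outside this sketch), the environmental complexity of this paragraph reduces to one ratio; the function below is only a restatement of the formula with illustrative names:

```python
# Sketch: area-based environmental complexity of the preset area (fig. 3D).
def environmental_complexity(first_area, visible_areas, roi_area):
    """first_area: obstacle-free area bounded by adjacent tangents and the ROI
    boundary (S5); visible_areas: actual visible ranges of the obstacles
    (S1..S4); roi_area: total area S of the preset region."""
    return (first_area + sum(visible_areas)) / roi_area

# For fig. 3D: environmental_complexity(S5, [S1, S2, S3, S4], S)
```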
It is to be understood that fig. 3A-3D only illustrate embodiments in which the obstacles are located on one side of the target object. In the embodiments of the present disclosure, based on the acquired image of the preset area, the obstacles identified from the image may be distributed at any position around the target object.
The image processing method according to the embodiment of the present disclosure may further include preprocessing the at least one obstacle. The preprocessing may be, for example, determining, based on size and/or shape, an obstacle to be completed among the at least one obstacle, and completing that obstacle according to a preset size to obtain its complete image.
It can be understood that if the image within the preset area is acquired by a vehicle-mounted detector, the obstacle image information may be incomplete. For example, if obstacle M is taller than the vehicle-mounted detector, the scan of an obstacle N occluded by obstacle M may be incomplete, and the contour of obstacle N projected in the two-dimensional top view may then differ from the real contour of obstacle N.
In view of this, considering that the image processing method of the present disclosure may be used in a scenario where an autonomous vehicle is driven on a road, where the obstacles are mostly vehicles, the complete image of an incompletely scanned obstacle can be completed according to the average length and width of typical vehicle contours.
For example, if an identified obstacle vehicle has a trapezoidal (incomplete) outline, a preset size corresponding to the vehicle may be determined according to its current dimensions (e.g., its length), and the complete image of the vehicle may be reconstructed according to that preset size.
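A hedged sketch of this completion step (the preset dimensions and the threshold are illustrative assumptions, not values from the patent): if a detected vehicle footprint is implausibly small, its bounding box is extended to a preset average vehicle size.

```python
# Sketch: complete a partially scanned vehicle footprint to a preset size.
def complete_obstacle(box, preset_length=4.8, preset_width=1.9, min_ratio=0.6):
    """box: (x_min, y_min, x_max, y_max) in metres, x along the vehicle heading."""
    x_min, y_min, x_max, y_max = box
    if (x_max - x_min) < min_ratio * preset_length:  # likely an incomplete scan
        x_max = x_min + preset_length                # extend to the preset length
    if (y_max - y_min) < min_ratio * preset_width:
        y_max = y_min + preset_width                 # extend to the preset width
    return (x_min, y_min, x_max, y_max)
```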
According to an embodiment of the present disclosure, the target object may be, for example, an autonomous vehicle. The method may also include generating an autonomous driving strategy for the vehicle based on the obtained environmental parameter, so as to realize automatic driving. For example, when the environmental complexity is high, a driving strategy with a lower driving speed may be generated. As another example, when the environmental complexity exceeds a threshold, the vehicle may switch from automatic driving to manual driving.
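As a final illustration (the patent states only the qualitative rules; every threshold and name below is an assumption), the environmental parameter might be mapped to a driving action like this:

```python
# Sketch: map the environmental parameter to a driving strategy.
def driving_strategy(complexity, speed_limit=60.0, handover_threshold=0.8):
    """complexity: environmental parameter, assumed normalized to [0, 1]."""
    if complexity > handover_threshold:
        return ("manual_takeover", 0.0)              # switch to manual driving
    # Higher complexity leads to a lower target speed.
    return ("autonomous", speed_limit * (1.0 - complexity))
```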
By drawing tangents between the target object and the obstacles in the image and determining the environmental parameters of the target object based on those tangents, the embodiments of the disclosure can improve the target object's perception of its surroundings. For example, the embodiments of the present disclosure may determine the occlusion ratio of each obstacle from the included angles between adjacent processing tangents. As another example, the embodiments of the present disclosure may determine the environmental complexity from the areas of the regions enclosed by adjacent processing tangents and the obstacles. The perception of the surrounding environment can thus be improved through multiple environmental parameters, which helps the target object to formulate a corresponding strategy based on the surrounding conditions.
According to the embodiment of the present disclosure, a three-dimensional image of the preset area can be obtained by the vehicle-mounted detector and projected to obtain a two-dimensional top view, and the environmental parameters of the target object (such as the occlusion ratio of each obstacle) are determined based on that top view. The target object can thus be related directly to the obstacles in the three-dimensional environment, avoiding a complex two-dimensional-to-three-dimensional back-projection algorithm and improving the accuracy of the target object's perception.
Fig. 4 schematically shows a block diagram of an image processing apparatus 400 according to an embodiment of the present disclosure.
As shown in fig. 4, the apparatus 400 includes an acquisition module 410, a recognition module 420, a processing module 430, and a determination module 440.
The obtaining module 410 is configured to obtain an image of a preset area. According to the embodiment of the present disclosure, the obtaining module 410 may, for example, perform the operation S201 described above with reference to fig. 2, which is not described herein again.
The recognition module 420 is used to identify a target object and at least one obstacle in the image. According to the embodiment of the present disclosure, the recognition module 420 may perform, for example, operation S202 described above with reference to fig. 2, which is not described herein again.
The processing module 430 is configured to make a tangent line for each obstacle of the at least one obstacle, respectively, with the target object as a starting point, so as to obtain a plurality of tangent lines. According to the embodiment of the present disclosure, the processing module 430 may, for example, perform operation S203 described above with reference to fig. 2, which is not described herein again.
The determining module 440 is configured to determine an environmental parameter of the target object based on the plurality of tangents. According to the embodiment of the present disclosure, the determining module 440 may perform, for example, the operation S204 described above with reference to fig. 2, which is not described herein again.
According to an embodiment of the present disclosure, determining an environmental parameter of the target object based on the plurality of tangent lines may include: for each obstacle of the at least one obstacle, determining the tangents reaching the obstacle among the plurality of tangents as processing tangents, and determining the actual visible range of the obstacle for the target object according to the processing tangents; and determining an environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle.
According to the embodiment of the present disclosure, determining the actual visual range of the obstacle for the target object according to the processing tangent line may include: determining at least one pair of adjacent processing tangents satisfying a first preset condition from the processing tangents, wherein the first preset condition comprises that no obstacle exists in an area formed by the adjacent processing tangents and an obstacle boundary, and determining an actual visual range of the obstacle to the target object according to the at least one pair of adjacent processing tangents.
According to an embodiment of the present disclosure, determining the actual visible range of the obstacle for the target object from the at least one pair of adjacent processing tangents includes: calculating the sum of the included angles between each pair of adjacent processing tangents in the at least one pair of adjacent processing tangents as the actual visible range of the obstacle.
According to an embodiment of the present disclosure, determining an environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle comprises: for each obstacle, determining the maximum included angle formed at the starting point by the tangent lines of the obstacle as the theoretical visible range of the obstacle for the target object, and taking the ratio of the actual visible range to the theoretical visible range of each obstacle as an environmental parameter.
According to an embodiment of the present disclosure, determining the actual visible range of the obstacle for the target object from the at least one pair of adjacent processing tangents includes: calculating the sum of the areas of the regions bounded by each pair of adjacent processing tangents and the boundary of the obstacle as the actual visible range of the obstacle.
According to an embodiment of the present disclosure, determining an environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle comprises: calculating the sum of the areas of regions formed by adjacent tangents meeting a second preset condition and the boundary of the preset area as a first area, wherein the second preset condition comprises that no obstacle exists in the region formed by the adjacent tangents and the boundary of the preset area; calculating the sum of the actual visible ranges of the at least one obstacle as a second area; and calculating the ratio of the sum of the first area and the second area to the area of the preset area as an environmental parameter.
According to an embodiment of the present disclosure, acquiring the image of the preset area includes: acquiring a two-dimensional top view of the preset area.
According to an embodiment of the present disclosure, acquiring the image of the preset area includes: acquiring three-dimensional information of the preset area, and projecting the three-dimensional information to obtain a two-dimensional top view of the preset area.
According to an embodiment of the present disclosure, the apparatus 400 further comprises: a first modeling module for modeling the target object as a single point, and a second modeling module for modeling each obstacle as a polygon. The processing module is further configured to determine the tangents to the obstacle through lines connecting the single point to the vertices of the polygon.
According to an embodiment of the present disclosure, the apparatus 400 further comprises: a preprocessing module for preprocessing the at least one obstacle. The preprocessing includes: determining, based on size and/or shape, an obstacle to be completed among the at least one obstacle, and completing that obstacle according to a preset size to obtain its complete image.
According to an embodiment of the present disclosure, the target object includes a vehicle. The apparatus 400 further comprises: a strategy generation module to generate an autonomous driving strategy for the vehicle based on the environmental parameter.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any of the obtaining module 410, the identifying module 420, the processing module 430, and the determining module 440 may be combined in one module to be implemented, or any one of the modules may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the obtaining module 410, the identifying module 420, the processing module 430, and the determining module 440 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of three implementations of software, hardware, and firmware, or in any suitable combination of any of them. Alternatively, at least one of the obtaining module 410, the identifying module 420, the processing module 430 and the determining module 440 may be at least partially implemented as a computer program module, which when executed may perform a corresponding function.
Fig. 5 schematically shows a block diagram of a computer device adapted to implement the above described method according to an embodiment of the present disclosure. The computer device shown in fig. 5 is only an example and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
As shown in fig. 5, a computer device 500 according to an embodiment of the present disclosure includes a processor 501, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The processor 501 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 501 may also include onboard memory for caching purposes. The processor 501 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the disclosure.
In the RAM 503, various programs and data necessary for the operation of the system 500 are stored. The processor 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. The processor 501 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 502 and/or the RAM 503. Note that the programs may also be stored in one or more memories other than the ROM 502 and the RAM 503. The processor 501 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 500 may also include an input/output (I/O) interface 505, which is also connected to the bus 504. The system 500 may also include one or more of the following components connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. The drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 510 as necessary, so that a computer program read out therefrom is installed into the storage section 508 as necessary.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program, when executed by the processor 501, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include ROM 502 and/or RAM 503 and/or one or more memories other than ROM 502 and RAM 503 described above.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, such combinations and/or sub-combinations may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these embodiments are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that measures from different embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and their equivalents. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to fall within the scope of the present disclosure.

Claims (9)

1. An image processing method for automatic driving, comprising:
acquiring an image of a preset area;
identifying a target object and at least one obstacle in the image, wherein the target object comprises a vehicle;
making a tangent line for each obstacle of the at least one obstacle, respectively, with the target object as a starting point, to obtain a plurality of tangents; and
determining an environmental parameter of the target object based on the plurality of tangents;
wherein the determining an environmental parameter of the target object based on the plurality of tangents comprises:
for each obstacle of the at least one obstacle, determining, among the plurality of tangents, the tangents reaching the obstacle as processing tangents, and determining an actual visible range of the obstacle for the target object according to the processing tangents; and
determining the environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle;
wherein the determining the actual visible range of the obstacle for the target object according to the processing tangents comprises:
determining, from the processing tangents, at least one pair of adjacent processing tangents satisfying a first preset condition, wherein the first preset condition comprises that no obstacle exists in the region formed by the adjacent processing tangents and the boundary of the obstacle; and
determining the actual visible range of the obstacle for the target object according to the at least one pair of adjacent processing tangents;
wherein the determining the actual visible range of the obstacle for the target object according to the at least one pair of adjacent processing tangents comprises:
calculating, as the actual visible range of the obstacle, the sum of the areas of the regions formed by each pair of adjacent processing tangents of the at least one pair and the boundary of the obstacle;
wherein the determining the environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle comprises:
calculating, as a first area, the sum of the areas of the regions formed by adjacent tangents satisfying a second preset condition and the boundary of the preset area, wherein the second preset condition comprises that no obstacle exists in the region formed by the adjacent tangents and the boundary of the preset area;
calculating, as a second area, the sum of the actual visible ranges of the at least one obstacle; and
calculating, as the environmental parameter, the ratio of the sum of the first area and the second area to the area of the preset area.
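For illustration only, the sketch below (Python) mirrors the arithmetic recited in claim 1: the first area sums the obstacle-free regions bounded by adjacent tangents and the preset-area boundary, the second area sums the actual visible ranges, and the environmental parameter is the ratio of their sum to the area of the preset area. The helper names and the shoelace-formula representation of regions are assumptions of this sketch, not taken from the patent; the regions themselves are assumed to have been constructed upstream from the tangents.

    def polygon_area(vertices):
        # Shoelace formula for a simple polygon given as (x, y) tuples.
        n = len(vertices)
        twice_area = 0.0
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            twice_area += x1 * y2 - x2 * y1
        return abs(twice_area) / 2.0

    def environmental_parameter(free_regions, visible_regions, preset_region):
        # First area: regions bounded by adjacent tangents and the preset-area
        # boundary containing no obstacle (the second preset condition).
        first_area = sum(polygon_area(r) for r in free_regions)
        # Second area: the actual visible ranges, i.e. regions bounded by
        # adjacent processing tangents and the obstacle boundary.
        second_area = sum(polygon_area(r) for r in visible_regions)
        # Ratio of the visible portion to the whole preset area.
        return (first_area + second_area) / polygon_area(preset_region)

    # Example: with no obstacle, the whole preset area is one free region
    # and the environmental parameter is 1.0.
    square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    assert environmental_parameter([square], [], square) == 1.0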
2. The method of claim 1, wherein the acquiring an image of the preset area comprises:
acquiring a two-dimensional top view of the preset area.
3. The method of claim 1, wherein the acquiring an image of the preset area comprises:
acquiring three-dimensional information of the preset area, and performing projection processing on the three-dimensional information to obtain a two-dimensional top view of the preset area.
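The patent does not fix the details of the projection processing; as one plausible reading, the sketch below (Python, with illustrative names) orthographically projects 3D points onto the ground plane and rasterises them into an occupancy-style top view covering the preset area.

    import numpy as np

    def top_view_grid(points_xyz, x_range, y_range, resolution):
        # Orthographic top view: keep (x, y), drop z, then rasterise the
        # projected points into a grid over the preset area.
        pts = np.asarray(points_xyz, dtype=float)
        cols = ((pts[:, 0] - x_range[0]) / resolution).astype(int)
        rows = ((pts[:, 1] - y_range[0]) / resolution).astype(int)
        width = int(round((x_range[1] - x_range[0]) / resolution))
        height = int(round((y_range[1] - y_range[0]) / resolution))
        grid = np.zeros((height, width), dtype=np.uint8)
        inside = (cols >= 0) & (cols < width) & (rows >= 0) & (rows < height)
        grid[rows[inside], cols[inside]] = 1  # mark cells containing points
        return grid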
4. The method of claim 1, further comprising:
modeling the target object as a single point;
modeling each obstacle as a polygon;
the making of a tangent line for each obstacle in at least one obstacle with the target object as a starting point includes:
determining a tangent to the obstacle by connecting to vertices of a polygon starting from the single point.
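A minimal sketch of this construction, assuming a convex polygonal obstacle and a viewpoint outside it (function names are illustrative): a vertex lies on a tangent line exactly when both of its neighbours fall on the same side of the line through the viewpoint and that vertex.

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); its sign gives the orientation.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def tangent_vertices(viewpoint, polygon):
        # For a convex polygon and an external viewpoint, returns the two
        # vertices through which the tangent lines from the viewpoint pass.
        n = len(polygon)
        hits = []
        for i, v in enumerate(polygon):
            side_prev = cross(viewpoint, v, polygon[(i - 1) % n])
            side_next = cross(viewpoint, v, polygon[(i + 1) % n])
            if side_prev * side_next >= 0:  # both neighbours on one side
                hits.append(v)
        return hits

    # Example: seen from the origin, the tangents to this square pass
    # through its two near corners.
    square = [(2.0, -0.5), (3.0, -0.5), (3.0, 0.5), (2.0, 0.5)]
    print(tangent_vertices((0.0, 0.0), square))  # [(2.0, -0.5), (2.0, 0.5)]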
5. The method of claim 1, further comprising:
pre-processing the at least one obstacle,
wherein the pre-processing comprises:
determining, based on size and/or shape, an obstacle to be complemented among the at least one obstacle; and
performing complement processing on the obstacle to be complemented according to a preset size, to obtain a complete image of the obstacle to be complemented.
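Claim 5 leaves the exact completion rule open; one plausible sketch (the bounding-box representation, the symmetric padding rule, and the names below are assumptions) grows a partially observed obstacle footprint until it reaches a preset minimum size.

    def complement_obstacle(bbox, preset_size):
        # bbox: (xmin, ymin, xmax, ymax) of the partially observed obstacle.
        # preset_size: (min_width, min_height) the completed footprint must reach.
        xmin, ymin, xmax, ymax = bbox
        min_w, min_h = preset_size
        pad_x = max(0.0, (min_w - (xmax - xmin)) / 2.0)
        pad_y = max(0.0, (min_h - (ymax - ymin)) / 2.0)
        # Pad symmetrically along each axis that falls short of the preset size.
        return (xmin - pad_x, ymin - pad_y, xmax + pad_x, ymax + pad_y)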
6. The method of claim 1, further comprising:
generating an autonomous driving maneuver for the vehicle based on the environmental parameter.
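Claim 6 does not specify the maneuver; purely as an illustration, the sketch below scales the target speed with the environmental parameter (the visible fraction of the preset area). The linear rule and the default floor speed are assumptions.

    def plan_target_speed(env_param, v_max, v_min=1.0):
        # env_param: environmental parameter in [0, 1]; lower visibility
        # yields a lower target speed, clamped to [v_min, v_max].
        env_param = max(0.0, min(1.0, env_param))
        return v_min + (v_max - v_min) * env_param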
7. An image processing apparatus for automatic driving, comprising:
an acquisition module for acquiring an image of a preset area;
an identification module for identifying a target object and at least one obstacle in the image, wherein the target object comprises a vehicle;
a processing module for making a tangent line for each obstacle of the at least one obstacle, respectively, with the target object as a starting point, to obtain a plurality of tangents; and
a determining module for determining an environmental parameter of the target object based on the plurality of tangents;
wherein the determining an environmental parameter of the target object based on the plurality of tangents comprises:
for each obstacle of the at least one obstacle, determining, among the plurality of tangents, the tangents reaching the obstacle as processing tangents, and determining an actual visible range of the obstacle for the target object according to the processing tangents; and
determining the environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle;
wherein the determining the actual visible range of the obstacle for the target object according to the processing tangents comprises:
determining, from the processing tangents, at least one pair of adjacent processing tangents satisfying a first preset condition, wherein the first preset condition comprises that no obstacle exists in the region formed by the adjacent processing tangents and the boundary of the obstacle; and
determining the actual visible range of the obstacle for the target object according to the at least one pair of adjacent processing tangents;
wherein the determining the actual visible range of the obstacle for the target object according to the at least one pair of adjacent processing tangents comprises:
calculating, as the actual visible range of the obstacle, the sum of the areas of the regions formed by each pair of adjacent processing tangents of the at least one pair and the boundary of the obstacle;
wherein the determining the environmental parameter of the target object according to the actual visible range of each obstacle of the at least one obstacle comprises:
calculating, as a first area, the sum of the areas of the regions formed by adjacent tangents satisfying a second preset condition and the boundary of the preset area, wherein the second preset condition comprises that no obstacle exists in the region formed by the adjacent tangents and the boundary of the preset area;
calculating, as a second area, the sum of the actual visible ranges of the at least one obstacle; and
calculating, as the environmental parameter, the ratio of the sum of the first area and the second area to the area of the preset area.
8. A computer device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-6.
9. A computer readable medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 6.
CN202010145181.2A 2020-03-04 2020-03-04 Image processing method, apparatus, computer device and medium for automatic driving Active CN111326005B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111337815.5A CN113936493B (en) 2020-03-04 2020-03-04 Image processing method, apparatus, computer device and medium for automatic driving
CN202010145181.2A CN111326005B (en) 2020-03-04 2020-03-04 Image processing method, apparatus, computer device and medium for automatic driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010145181.2A CN111326005B (en) 2020-03-04 2020-03-04 Image processing method, apparatus, computer device and medium for automatic driving

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111337815.5A Division CN113936493B (en) 2020-03-04 2020-03-04 Image processing method, apparatus, computer device and medium for automatic driving

Publications (2)

Publication Number Publication Date
CN111326005A CN111326005A (en) 2020-06-23
CN111326005B true CN111326005B (en) 2021-12-28

Family

ID=71165648

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010145181.2A Active CN111326005B (en) 2020-03-04 2020-03-04 Image processing method, apparatus, computer device and medium for automatic driving
CN202111337815.5A Active CN113936493B (en) 2020-03-04 2020-03-04 Image processing method, apparatus, computer device and medium for automatic driving

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111337815.5A Active CN113936493B (en) 2020-03-04 2020-03-04 Image processing method, apparatus, computer device and medium for automatic driving

Country Status (1)

Country Link
CN (2) CN111326005B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508034A (en) * 2020-11-03 2021-03-16 精英数智科技股份有限公司 Freight train fault detection method and device and electronic equipment

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4592494B2 (en) * 2005-05-25 2010-12-01 新光電気工業株式会社 Automatic wiring determination device
JP5112666B2 (en) * 2006-09-11 2013-01-09 株式会社日立製作所 Mobile device
JP5086942B2 (en) * 2008-09-02 2012-11-28 トヨタ自動車株式会社 Route search device, route search method, and route search program
JP5246430B2 (en) * 2009-08-19 2013-07-24 株式会社Ihi Obstacle detection method and apparatus
CN102854880B (en) * 2012-10-08 2014-12-31 中国矿业大学 Robot whole-situation path planning method facing uncertain environment of mixed terrain and region
CN104460666B (en) * 2014-10-27 2017-05-10 上海理工大学 Robot autonomous obstacle avoidance moving control method based on distance vectors
CN106022274B (en) * 2016-05-24 2024-01-12 零度智控(北京)智能科技有限公司 Obstacle avoidance method, obstacle avoidance device and unmanned machine
EP3306431B1 (en) * 2016-10-06 2021-04-14 The Boeing Company A computer-implemented method and a system for guiding a vehicle within a scenario with obstacles
CN106774329B (en) * 2016-12-29 2019-08-13 大连理工大学 A kind of robot path planning method based on oval tangent line construction
CN106991207B (en) * 2017-02-08 2019-12-10 吉林大学 method for calculating obstacle angle of A column of vehicle body
KR102028398B1 (en) * 2017-05-17 2019-11-04 (주)에스더블유엠 Method and apparatus for providing obstacle information during driving
CN109633688B (en) * 2018-12-14 2019-12-24 北京百度网讯科技有限公司 Laser radar obstacle identification method and device
CN110488839A (en) * 2019-08-30 2019-11-22 长安大学 A kind of legged type robot paths planning method and device based on tangent line interior extrapolation method

Also Published As

Publication number Publication date
CN113936493A (en) 2022-01-14
CN111326005A (en) 2020-06-23
CN113936493B (en) 2022-11-04

Similar Documents

Publication Publication Date Title
US11320833B2 (en) Data processing method, apparatus and terminal
US11138750B2 (en) Distance estimating method and apparatus
US11250288B2 (en) Information processing apparatus and information processing method using correlation between attributes
JP2021185548A (en) Object detection device, object detection method and program
WO2021226921A1 (en) Method and system of data processing for autonomous driving
CN113240909A (en) Vehicle monitoring method, equipment, cloud control platform and vehicle road cooperative system
CN112258519B (en) Automatic extraction method and device for way-giving line of road in high-precision map making
CN111213153A (en) Target object motion state detection method, device and storage medium
US20130202155A1 (en) Low-cost lane marker detection
US20230005278A1 (en) Lane extraction method using projection transformation of three-dimensional point cloud map
CN115273039B (en) Small obstacle detection method based on camera
CN111326005B (en) Image processing method, apparatus, computer device and medium for automatic driving
CN113806464A (en) Road tooth determining method, device, equipment and storage medium
CN112639822B (en) Data processing method and device
US11861914B2 (en) Object recognition method and object recognition device
CN113762004A (en) Lane line detection method and device
Oniga et al. A fast ransac based approach for computing the orientation of obstacles in traffic scenes
Bubeníková et al. Security increasing trends in intelligent transportation systems utilising modern image processing methods
CN115447606A (en) Automatic driving vehicle control method and device based on blind area recognition
CN109839645B (en) Speed detection method, system, electronic device and computer readable medium
CN116189150A (en) Monocular 3D target detection method, device, equipment and medium based on fusion output
CN115457505A (en) Small obstacle detection method, device and equipment for camera and storage medium
CN114643984A (en) Driving risk avoiding method, device, equipment, medium and product
CN114998863A (en) Target road identification method, target road identification device, electronic equipment and storage medium
US20210302991A1 (en) Method and system for generating an enhanced field of view for an autonomous ground vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant