WO2022033089A1 - Method and apparatus for determining three-dimensional information of a detection object - Google Patents

Method and apparatus for determining three-dimensional information of a detection object

Info

Publication number
WO2022033089A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
detection object
point
boundary
line
Prior art date
Application number
PCT/CN2021/092807
Other languages
English (en)
French (fr)
Inventor
符张杰
杨臻
张维
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022033089A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects

Definitions

  • the present application relates to the field of intelligent driving, and in particular, to a method and device for determining three-dimensional information of a detection object.
  • the moving vehicle (referred to as the first vehicle in this application) needs to detect, in real time, the position of other surrounding vehicles (referred to as the second vehicle in this application) relative to the first vehicle, as well as the size of the second vehicle (vehicle length, vehicle width) and the direction of the second vehicle (vehicle heading, direction of travel, etc.), so that the first vehicle can anticipate and avoid them.
  • before the first vehicle determines the size, orientation and position of the second vehicle relative to the first vehicle, three-dimensional (3D) information of the second vehicle needs to be determined.
  • the first vehicle can determine the 3D information of the second vehicle by means of binocular 3D.
  • the binocular 3D method requires a binocular camera, which is expensive; moreover, the algorithms currently used to determine the 3D information of the second vehicle incur a high image-labeling cost and a large amount of calculation.
  • the present application provides a method and device for determining three-dimensional information of a detection object, which solves the prior-art problems of high hardware requirements and a large amount of calculation when determining the three-dimensional information of surrounding vehicles.
  • a method for determining three-dimensional information of a detection object, comprising: a first vehicle acquiring an image to be detected, where the image to be detected includes a first detection object; the first vehicle determining a passable area boundary of the first detection object and a ground contact line of the first detection object, where the passable area boundary includes the boundary of the first detection object in the image to be detected, and the ground contact line is the line connecting the intersection points of the first detection object and the ground; and the first vehicle determining the three-dimensional information of the first detection object according to the passable area boundary and the ground contact line.
  • in the method for determining the three-dimensional information of a detection object provided by the present application, the first vehicle can determine the passable area boundary and the ground contact line of the first detection object according to the collected image to be detected, and further determine, according to the passable area boundary and the ground contact line of the first detection object, the three-dimensional information represented by the first detection object in the two-dimensional image.
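  • The following illustrative Python data structures (a sketch added for clarity; the class names and fields are assumptions, not terms defined in this application) show one way the quantities named above, namely the passable area boundary, the ground contact line, and the derived three-dimensional information, could be represented in the two-dimensional image plane.

      from dataclasses import dataclass
      from typing import List, Tuple

      @dataclass
      class FreespaceBoundary:
          # Passable-area boundary of one detection object: pixel coordinates
          # (u, v) of the boundary points, each tagged with the identifier of
          # the side surface it belongs to (first identifier, second identifier, ...).
          points: List[Tuple[float, float]]
          side_ids: List[int]

      @dataclass
      class GroundContactLine:
          # Image-plane line v = slope * u + intercept fitted through the
          # boundary points of one side that touch the ground.
          slope: float
          intercept: float

      @dataclass
      class Object3DInfo:
          # The "three points and two lines" (or "two points and one line")
          # derived in the image before conversion to the body coordinate system.
          points: List[Tuple[float, float]]
          lines: List[Tuple[Tuple[float, float], Tuple[float, float]]]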
  • the above image to be detected may be a two-dimensional image collected by a monocular camera.
  • the first vehicle can determine the three-dimensional information of the first detection object by collecting the image information of the first detection object through the monocular camera.
  • because the first vehicle uses the monocular camera to collect the image information of the first detection object and determines the three-dimensional information of the first detection object from that image information, the hardware cost of determining the three-dimensional information of the first detection object can be greatly reduced.
  • the first vehicle only needs to mark the passable area boundary of the first detection object, and determine the ground contact line of the first detection object according to the passable area boundary of the first detection object.
  • the first vehicle can determine the three-dimensional information represented by the first detection object in the two-dimensional image according to the passable area boundary and the ground contact line of the first detection object, combined with the visual relationships of the first detection object in the image to be detected.
  • the method for determining the three-dimensional information of the detection object does not require the first vehicle to perform additional data-labeling training, thereby reducing both the amount of calculation and the graphics processing unit (GPU) overhead required to determine the three-dimensional information of the first detection object.
  • the three-dimensional information of the first detection object is used to determine at least one of the size, direction, and relative position of the first detection object.
  • the first vehicle may convert the three-dimensional information represented by the first detection object in the image to be detected into the vehicle body coordinate system of the first vehicle, and determine the size and orientation of the first detection object in the vehicle body coordinate system of the first vehicle, as well as the position of the first detection object relative to the first vehicle.
  • the first vehicle can plan its driving route by combining the sizes and directions of the multiple surrounding first detection objects in the vehicle body coordinate system of the first vehicle with the positions of those first detection objects relative to the first vehicle, so as to realize intelligent driving of the first vehicle.
  • the passable area boundary of the first detection object includes a plurality of boundary points corresponding to a first identifier and a plurality of boundary points corresponding to a second identifier;
  • the first identifier also corresponds to the first side surface of the first detection object, and the second identifier also corresponds to the second side surface of the first detection object;
  • the first side surface and the second side surface are two intersecting side surfaces of the first detection object.
  • when the first vehicle uses a monocular camera to capture two sides of the first detection object (that is, the image to be detected includes two sides of the first detection object), the first vehicle may mark the passable area boundary of each side and assign different identifiers to the passable area boundaries of the different sides, and the first vehicle can determine the passable area boundary corresponding to each side according to the different identifiers.
  • the ground contact line includes a first ground contact line and a second ground contact line; the first ground contact line is the ground contact line determined by fitting the plurality of boundary points corresponding to the first identifier, and the second ground contact line is the ground contact line determined by fitting the plurality of boundary points corresponding to the second identifier.
  • the first vehicle determines the ground contact line corresponding to each side surface of the first detection object shown in the image to be detected according to the passable area boundary corresponding to that side surface. Since the ground contact line is obtained by fitting the points on the passable area boundary corresponding to each side, the ground contact line corresponding to each side can represent part of the boundary of that side's projection on the ground (that is, the outermost part of each side's projection on the ground).
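  • As a concrete illustration of the fitting step described above, the following sketch fits one ground contact line per side identifier by least squares over the labelled boundary points. The grouping convention and the sample pixel values are assumptions made for this example; np.polyfit is a standard NumPy call.

      import numpy as np

      def fit_ground_contact_lines(points, side_ids):
          """points: (u, v) boundary pixels; side_ids: one identifier per point."""
          pts = np.asarray(points, dtype=float)
          ids = np.asarray(side_ids)
          lines = {}
          for sid in np.unique(ids):
              side_pts = pts[ids == sid]
              # Fit v = slope * u + intercept through this side's boundary points.
              slope, intercept = np.polyfit(side_pts[:, 0], side_pts[:, 1], deg=1)
              lines[sid] = (slope, intercept)
          return lines

      # Example: boundary points of two visible sides, identifiers 1 and 2.
      boundary = [(100, 400), (150, 410), (200, 420), (200, 420), (260, 405), (320, 390)]
      labels = [1, 1, 1, 2, 2, 2]
      print(fit_ground_contact_lines(boundary, labels))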
  • the plurality of boundary points corresponding to the first identifier include a first boundary point; the first boundary point is, among the plurality of boundary points with the first identifier, the boundary point at the largest distance from the second ground contact line; the plurality of boundary points corresponding to the second identifier include a second boundary point; the second boundary point is, among the plurality of boundary points with the second identifier, the boundary point at the largest distance from the first ground contact line.
  • the above-mentioned first boundary point can represent the point on the first side of the first detection object at the largest distance from the first vehicle; that is to say, the first boundary point can represent the vertex of the first side that is farthest from the first vehicle.
  • the second boundary point can represent the point on the second side of the first detection object that has the largest distance from the first vehicle, that is, the second boundary point can represent the vertex of the second side that is farthest from the first vehicle.
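  • A minimal sketch of selecting these two boundary points: among the points carrying the first identifier, pick the one farthest (in the image plane) from the second ground contact line, and symmetrically for the second boundary point. The function names and the sample line are assumptions used only for illustration.

      import numpy as np

      def point_line_distance(pt, slope, intercept):
          # Distance from pixel (u, v) to the image-plane line v = slope * u + intercept.
          u, v = pt
          return abs(slope * u - v + intercept) / np.hypot(slope, 1.0)

      def farthest_boundary_point(points, other_line):
          # Boundary point at the largest distance from the other side's ground contact line.
          slope, intercept = other_line
          distances = [point_line_distance(p, slope, intercept) for p in points]
          return points[int(np.argmax(distances))]

      first_side_points = [(100, 400), (150, 410), (200, 420)]
      second_ground_line = (-0.25, 470.0)
      print(farthest_boundary_point(first_side_points, second_ground_line))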
  • the three-dimensional information of the first detection object is determined according to three points and two lines corresponding to the first detection object; the first of the three points is the projection of the first boundary point on the ground; the second of the three points is the projection of the second boundary point on the ground; the third of the three points is the intersection of the first ground contact line with a straight line that passes through the second point and is parallel to the second ground contact line; the first of the two lines is the line connecting the first point and the third point; and the second of the two lines is the line connecting the second point and the third point.
  • the projection of the first boundary point on the ground is determined as the first point; the projection of the second boundary point on the ground is determined as the second point; the intersection of the first ground contact line with a line that passes through the second point and is parallel to the second ground contact line is determined as the third point;
  • the line connecting the first point and the third point is determined as the first line; the line connecting the second point and the third point is determined as the second line; and the three-dimensional information of the first detection object is determined according to the first point, the second point, the third point, the first line, and the second line.
  • the projection of the first boundary point on the ground can represent the point of the first side's projection on the ground that is farthest from the first vehicle, and the projection of the second boundary point on the ground can represent the point of the second side's projection on the ground that is farthest from the first vehicle; the third point can represent the intersection of the projection of the first side on the ground and the projection of the second side on the ground.
  • the first ground contact line can characterize the direction of the projection of the first side surface on the ground.
  • the second ground contact line can characterize the direction of the projection of the second side surface on the ground. Therefore, the first vehicle may determine that the first line is the outermost frame line of the projection of the first side surface on the ground, and that the second line is the outermost frame line of the projection of the second side surface on the ground.
  • the first vehicle may determine the direction of the first detection object according to the direction of the first line and/or the second line, and the positions of the first line and the second line in the image to be detected.
  • the first vehicle can determine the size of the first detection object according to the length of the first line and the length of the second line, and the positions of the first line and the second line in the image to be detected.
  • according to the positions of the first line and the second line in the image to be detected, the first vehicle can determine the position of the first detection object relative to the first vehicle.
  • the first vehicle only needs to project a few specific points of the first detection object according to the passable area boundary and the ground contact line of the first detection object to determine the three-dimensional information of the first detection object, which greatly reduces the amount of calculation required for the first vehicle to determine the three-dimensional information of the first detection object.
  • the first vehicle determines the first point according to the first boundary point and the first ground contact line; determines the second point according to the first ground contact line, the second ground contact line, and the second boundary point; and determines the third point based on the first ground contact line, the second ground contact line, and the second point.
  • the first vehicle may determine the vertex of the projection of the first detection object on the ground according to the boundary of the passable area and the touchdown line.
  • the first vehicle determines a first straight line; the first straight line is a straight line that passes through the first boundary point in the image to be detected and is perpendicular to the eye level; the first vehicle determines the intersection of the first straight line and the first ground contact line as the first point.
  • the first vehicle can quickly and accurately determine the projection of the first boundary point on the ground according to the visual relationship between the first ground contact line and the first boundary point in the image to be detected.
  • the first vehicle determines a second straight line and a third straight line; the second straight line is a straight line that passes through the intersection of the first ground contact line and the eye level in the image to be detected and through the end point of the second ground contact line that is far from the first ground contact line; the third straight line is a straight line that passes through the second boundary point in the image to be detected and is perpendicular to the eye level; the first vehicle determines the intersection of the second straight line and the third straight line as the second point.
  • the first vehicle can quickly and accurately determine the projection of the second boundary point on the ground according to the visual relationship among the first ground contact line, the second ground contact line, and the second boundary point in the image to be detected. In this way, determining the projection of the second boundary point on the ground can further reduce the amount of calculation required for the first vehicle to determine the three-dimensional information of the first detection object.
  • the first vehicle determines a fourth straight line; the fourth straight line is a straight line that passes through the second point in the image to be detected and is parallel to the second ground contact line; the first vehicle determines the intersection of the fourth straight line and the first ground contact line as the third point.
  • the first vehicle can quickly and accurately determine the intersection of the projection of the first side and the second side on the ground according to the visual relationship between the first and second ground contact lines and the second boundary point in the image to be detected. In this way, the first vehicle determines the intersection of the projections of the first side surface and the second side surface on the ground, which can further reduce the amount of calculation for the first vehicle to determine the three-dimensional information of the first detection object.
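  • The geometric construction described in the preceding paragraphs can be sketched as follows. Lines are written as v = slope * u + intercept in image coordinates, horizon_v is the image row of the eye level, and all numeric inputs are assumptions; the code only illustrates the described intersections (first to fourth straight lines), not a production implementation.

      def intersect(line_a, line_b):
          # Intersection of v = ma*u + ba and v = mb*u + bb.
          (ma, ba), (mb, bb) = line_a, line_b
          u = (bb - ba) / (ma - mb)
          return (u, ma * u + ba)

      def three_points(first_bpt, second_bpt, line1, line2, line2_far_end, horizon_v):
          # First straight line: vertical through the first boundary point;
          # first point = its intersection with the first ground contact line.
          p1 = (first_bpt[0], line1[0] * first_bpt[0] + line1[1])
          # Second straight line: through the intersection of the first ground
          # contact line with the eye level and through the end point of the
          # second ground contact line that is far from the first one.
          vanish_u = (horizon_v - line1[1]) / line1[0]
          m2 = (line2_far_end[1] - horizon_v) / (line2_far_end[0] - vanish_u)
          second_line = (m2, horizon_v - m2 * vanish_u)
          # Third straight line: vertical through the second boundary point;
          # second point = its intersection with the second straight line.
          p2 = (second_bpt[0], m2 * second_bpt[0] + second_line[1])
          # Fourth straight line: through the second point, parallel to the second
          # ground contact line; third point = its intersection with line1.
          fourth_line = (line2[0], p2[1] - line2[0] * p2[0])
          p3 = intersect(fourth_line, line1)
          return p1, p2, p3

      print(three_points(first_bpt=(100, 395), second_bpt=(320, 380),
                         line1=(0.2, 380.0), line2=(-0.25, 470.0),
                         line2_far_end=(320, 390), horizon_v=300.0))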
  • the passable area boundary of the first detection object includes a plurality of boundary points corresponding to a third identifier; the third identifier also corresponds to the third side surface of the first detection object.
  • when the first vehicle uses a monocular camera to capture one side surface of the first detection object (that is, the image to be detected includes one side surface of the first detection object), the first vehicle can mark the passable area boundary of that side surface and assign a corresponding identifier to it, and the first vehicle can determine the passable area boundary corresponding to that side according to the passable area boundary points carrying the identifier.
  • the ground contact line of the first detection object includes a third ground contact line; the third ground contact line is the ground contact line determined by fitting the plurality of boundary points corresponding to the third identifier.
  • the first vehicle may determine the ground contact line of that side according to the passable area boundary of the side. Since the ground contact line is obtained by fitting the points on the passable area boundary corresponding to the side, the ground contact line can represent part of the boundary of the side's projection on the ground (that is, the outermost part of the side's projection on the ground).
  • the plurality of boundary points with the third identifier include a third boundary point and a fourth boundary point;
  • the third boundary point is, among the plurality of boundary points with the third identifier, the point farthest from one end of the third ground contact line;
  • the fourth boundary point is, among the plurality of boundary points with the third identifier, the point farthest from the other end of the third ground contact line.
  • the third boundary point and the fourth boundary point can represent the two vertices of the third side surface of the first detection object.
  • the three-dimensional information of the first detection object is determined according to two points and one line corresponding to the first detection object; the first of the two points is the projection of the third boundary point on the ground, and the second of the two points is the projection of the fourth boundary point on the ground.
  • the projection of the third boundary point on the ground is determined as the first point; the projection of the fourth boundary point on the ground is determined as the second point; the line connecting the first point and the second point is determined as the first line; and the three-dimensional information of the first detection object is determined according to the first point, the second point, and the first line.
  • the projection of the third boundary point on the ground can represent a vertex of the projection of the third side surface on the ground.
  • the projection of the fourth boundary point on the ground can represent another vertex of the projection of the third side on the ground.
  • the connection line (the first line) between the projection of the third boundary point on the ground and the projection of the fourth boundary point on the ground can represent the outermost frame line of the projection of the third side on the ground.
  • the first vehicle may determine the direction of the first detection object according to the direction of the first line and the position of the first line in the image to be detected.
  • the first vehicle may determine the size of the first detection object according to the length of the first line and the position of the first line in the image to be detected.
  • the first vehicle may determine the position of the first detection object relative to the first vehicle according to the position of the first line in the image to be detected.
  • the first vehicle only needs to project a few specific points of the first detection object according to the passable area boundary and the ground contact line of the first detection object to determine the three-dimensional information of the first detection object, which greatly reduces the amount of calculation required for the first vehicle to determine the three-dimensional information of the first detection object.
  • determining the three-dimensional information of the first detection object according to the passable area boundary and the ground contact line includes: determining the first point according to the third boundary point and the third ground contact line; determining the second point according to the fourth boundary point and the third ground contact line; and determining the line according to the first point and the second point.
  • the first vehicle determines the information of the vertices of the projection of the first detection object on the ground, and the outermost boundary of the projection of the first detection object on the ground.
  • the first vehicle may determine the size and direction of the first detection object according to the projected vertex of the first detection object on the ground and the outermost boundary. For example, when the first detection object is the second vehicle, the vertices of the projection of the second vehicle on the ground and the outermost boundary line can represent the size of the second vehicle, and the direction of the outermost boundary line can represent the direction of the second vehicle.
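  • For example, once the two footprint vertices of one side have been expressed in the first vehicle's body coordinate system (in metres), the visible side's length and its heading follow directly, as in the sketch below (the coordinate values are illustrative assumptions):

      import math

      def size_and_heading(p_near, p_far):
          # p_near, p_far: (x, y) footprint vertices of one side in the body frame.
          dx, dy = p_far[0] - p_near[0], p_far[1] - p_near[1]
          length = math.hypot(dx, dy)          # length of the visible side, metres
          heading = math.atan2(dy, dx)         # orientation of that side, radians
          return length, heading

      print(size_and_heading((5.0, 2.0), (9.2, 2.3)))   # roughly 4.2 m, about 4 degrees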
  • determining the first point according to the third boundary point and the third ground contact line includes: determining a fifth straight line, where the fifth straight line is a straight line that passes through the third boundary point in the image to be detected and is perpendicular to the eye level; and determining the intersection of the fifth straight line and the third ground contact line as the first point.
  • the first vehicle can quickly and accurately determine the projection of the third boundary point on the ground according to the visual relationship between the third ground contact line and the third boundary point in the image to be detected.
  • determining the second point according to the fourth boundary point and the third ground contact line includes: determining a sixth straight line, where the sixth straight line is a straight line that passes through the fourth boundary point in the image to be detected and is perpendicular to the eye level; and determining the intersection of the sixth straight line and the third ground contact line as the second point.
  • the first vehicle can quickly and accurately determine the projection of the fourth boundary point on the ground according to the visual relationship between the third ground contact line and the fourth boundary point in the image to be detected. In this way, determining the projection of the fourth boundary point on the ground can further reduce the amount of calculation required for the first vehicle to determine the three-dimensional information of the first detection object.
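  • The single-side construction therefore reduces to dropping the third and fourth boundary points vertically (the fifth and sixth straight lines) onto the third ground contact line, as in this sketch (the line representation and the sample values are assumptions):

      def two_points_one_line(third_bpt, fourth_bpt, third_ground_line):
          slope, intercept = third_ground_line                      # v = slope * u + intercept
          p1 = (third_bpt[0], slope * third_bpt[0] + intercept)     # first point
          p2 = (fourth_bpt[0], slope * fourth_bpt[0] + intercept)   # second point
          return p1, p2, (p1, p2)                                   # the line connects them

      print(two_points_one_line((120, 350), (340, 360), (0.05, 370.0)))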
  • the method further includes: inputting the three-dimensional information of the first detection object into the vehicle body coordinate system, and determining at least one of the size, the direction, and the relative position of the first detection object.
  • the first vehicle can determine the size and direction of the first detection object in the three-dimensional space, and the position of the detection object relative to the first vehicle by inputting the three-dimensional information of the first detection object into the vehicle body coordinate system.
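  • The application does not prescribe how the image-plane points are mapped into the vehicle body coordinate system; one common option, assumed here purely for illustration, is a ground-plane homography obtained from the calibration of the monocular camera (flat-road assumption). The matrix values below are made-up placeholders.

      import numpy as np

      def image_to_body(points_uv, H):
          # Map pixel coordinates (u, v) on the ground plane to body-frame (x, y).
          pts = np.hstack([np.asarray(points_uv, dtype=float),
                           np.ones((len(points_uv), 1))])   # homogeneous pixels
          ground = (H @ pts.T).T
          return ground[:, :2] / ground[:, 2:3]              # de-homogenise

      # Hypothetical homography from offline camera calibration (placeholder values).
      H = np.array([[0.02, 0.0, -8.0],
                    [0.0, -0.05, 30.0],
                    [0.0, 0.001, 1.0]])
      print(image_to_body([(200, 420), (320, 390)], H))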
  • a device for determining three-dimensional information of a detection object, characterized by comprising a communication unit and a processing unit; the communication unit is configured to acquire an image to be detected, where the image to be detected includes a first detection object; the processing unit is configured to determine a passable area boundary of the first detection object and a ground contact line of the first detection object, where the passable area boundary includes the boundary of the first detection object in the image to be detected, and the ground contact line is the line connecting the intersection points of the first detection object and the ground; and the processing unit is further configured to determine the three-dimensional information of the first detection object according to the passable area boundary and the ground contact line.
  • the three-dimensional information of the first detection object is used to determine at least one of the size, direction, and relative position of the first detection object.
  • the passable area boundary of the first detection object includes a plurality of boundary points corresponding to the first identifier and a plurality of boundary points corresponding to the second identifier;
  • the first identifier also corresponds to the first side surface of the first detection object, and the second identifier also corresponds to the second side surface of the first detection object;
  • the first side surface and the second side surface are two intersecting side surfaces of the first detection object.
  • the ground contact line includes a first ground contact line and a second ground contact line; the first ground contact line is the ground contact line determined by fitting the plurality of boundary points corresponding to the first identifier, and the second ground contact line is the ground contact line determined by fitting the plurality of boundary points corresponding to the second identifier.
  • the plurality of boundary points corresponding to the first identifier include the first boundary point; the first boundary point is, among the plurality of boundary points with the first identifier, the boundary point at the largest distance from the second ground contact line.
  • the plurality of boundary points corresponding to the second identifier include the second boundary point; the second boundary point is, among the plurality of boundary points with the second identifier, the boundary point at the largest distance from the first ground contact line.
  • the three-dimensional information of the first detection object is determined according to three points and two lines corresponding to the first detection object;
  • the first of the three points is the projection of the first boundary point on the ground;
  • the second of the three points is the projection of the second boundary point on the ground;
  • the third of the three points is the intersection of the first ground contact line with a straight line that passes through the second point and is parallel to the second ground contact line;
  • the first of the two lines is the line connecting the first point and the third point;
  • the second of the two lines is the line connecting the second point and the third point.
  • the processing unit is specifically configured to: determine the projection of the first boundary point on the ground as the first point; determine the projection of the second boundary point on the ground as the second point; determine, as the third point, the intersection of the first ground contact line with a line that passes through the second point and is parallel to the second ground contact line;
  • determine the line connecting the first point and the third point as the first line; determine the line connecting the second point and the third point as the second line; and determine the three-dimensional information of the first detection object according to the first point, the second point, the third point, the first line, and the second line.
  • the processing unit is specifically configured to: determine the first point according to the first boundary point and the first ground contact line; determine the second point according to the first ground contact line, the second ground contact line, and the second boundary point; and determine the third point according to the first ground contact line, the second ground contact line, and the second point.
  • the processing unit is specifically configured to: determine a first straight line, where the first straight line is a straight line that passes through the first boundary point in the image to be detected and is perpendicular to the eye level; and determine the intersection of the first straight line and the first ground contact line as the first point.
  • the processing unit is specifically configured to: determine a second straight line and a third straight line, where the second straight line is a straight line that passes through the intersection of the first ground contact line and the eye level in the image to be detected and through the end point of the second ground contact line that is far from the first ground contact line, and the third straight line is a straight line that passes through the second boundary point in the image to be detected and is perpendicular to the eye level; and determine the intersection of the second straight line and the third straight line as the second point.
  • the processing unit is specifically configured to: determine a fourth straight line, where the fourth straight line is a straight line that passes through the second point in the image to be detected and is parallel to the second ground contact line; and determine the intersection of the fourth straight line and the first ground contact line as the third point.
  • the passable area boundary of the first detection object includes a plurality of boundary points corresponding to the third identifier; the third identifier also corresponds to the third side surface of the first detection object.
  • the ground contact line of the first detection object includes a third ground contact line; the third ground contact line is the ground contact line determined by fitting the plurality of boundary points corresponding to the third identifier.
  • the plurality of boundary points with the third identifier include a third boundary point and a fourth boundary point;
  • the third boundary point is, among the plurality of boundary points with the third identifier, the point farthest from one end of the third ground contact line;
  • the fourth boundary point is, among the plurality of boundary points with the third identifier, the point farthest from the other end of the third ground contact line.
  • the three-dimensional information of the first detection object is determined according to two points and one line corresponding to the first detection object; the first of the two points is the projection of the third boundary point on the ground, and the second of the two points is the projection of the fourth boundary point on the ground.
  • the processing unit is specifically configured to: determine the projection of the third boundary point on the ground as the first point; determine the projection of the fourth boundary point on the ground as the second point; determine the line connecting the first point and the second point as the first line; and determine the three-dimensional information of the first detection object according to the first point, the second point, and the first line.
  • the processing unit is specifically configured to: determine the first point according to the third boundary point and the third ground contact line; determine the second point according to the fourth boundary point and the third ground contact line; and determine the line according to the first point and the second point.
  • the processing unit is specifically configured to: determine a fifth straight line, where the fifth straight line is a straight line that passes through the third boundary point in the image to be detected and is perpendicular to the eye level; and determine the intersection of the fifth straight line and the third ground contact line as the first point.
  • the processing unit is specifically configured to: determine a sixth straight line, where the sixth straight line is a straight line that passes through the fourth boundary point in the image to be detected and is perpendicular to the eye level; and determine the intersection of the sixth straight line and the third ground contact line as the second point.
  • the processing unit is further configured to: input the three-dimensional information of the first detection object into the vehicle body coordinate system, and determine at least one of the size, the direction, and the relative position of the first detection object.
  • the present application provides a device for determining three-dimensional information of a detection object, comprising: a processor and a memory, where the memory is used to store computer programs and instructions, and the processor is used to execute the computer programs and instructions to implement the method described in the first aspect and any possible implementation manner of the first aspect.
  • the device for determining the three-dimensional information of the detection object may be the first vehicle, or may be a chip in the first vehicle.
  • the present application provides an intelligent vehicle, comprising: a vehicle body, a monocular camera, and a device for determining three-dimensional information of a detection object as described in any possible implementation manner of the second aspect and the second aspect,
  • the monocular camera is used to collect the image to be detected;
  • the device for determining the three-dimensional information of the detected object is configured to perform the method for determining the three-dimensional information of the detected object as described in the first aspect and any possible implementation manner of the first aspect, Determine the three-dimensional information of the detection object.
  • the intelligent vehicle further includes a display screen; the display screen is used to display three-dimensional information of the detected object.
  • the present application provides an advanced driver assistance system (ADAS), including the device for determining three-dimensional information of a detection object described in the second aspect and any possible implementation manner of the second aspect , the apparatus for determining three-dimensional information of a detection object is configured to execute the method for determining three-dimensional information of a detection object as described in the first aspect and any possible implementation manner of the first aspect, and determine the three-dimensional information of the detection object.
  • the present application provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed on a computer, the computer is caused to execute the method described in the first aspect and any possible implementation manner of the first aspect.
  • the present application provides a computer program product comprising instructions that, when the computer program product is run on a computer, cause the computer to perform the method described in the first aspect and any possible implementation manner of the first aspect.
  • FIG. 1 is a schematic structural diagram 1 of a vehicle provided by an embodiment of the present application.
  • FIG. 2 is a system architecture diagram of an ADAS system provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a computer system according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram 1 of an application of a cloud-side commanded automatic driving vehicle according to an embodiment of the present application
  • FIG. 5 is a second application schematic diagram of a cloud-side commanded automatic driving vehicle provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a computer program product provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a method for determining three-dimensional information of a detection object provided by an embodiment of the present application.
  • FIG. 8a is a schematic diagram of a first detection object provided by an embodiment of the present application.
  • FIG. 8b is a schematic diagram of another first detection object provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of another method for determining three-dimensional information of a detection object provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of another method for determining three-dimensional information of a detection object provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of three-dimensional information of a first detection object according to an embodiment of the present application.
  • FIG. 12 is a schematic flowchart of another method for determining three-dimensional information of a detection object provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of another three-dimensional information of a first detection object provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of an apparatus for determining three-dimensional information of a detection object according to an embodiment of the present application.
  • Embodiments of the present application provide a method and apparatus for determining three-dimensional information of a detection object, and the method is applied in a vehicle, or in other devices (such as cloud servers, mobile terminals, etc.) that have the function of controlling the vehicle.
  • the vehicle or other equipment can implement, through the components (including hardware and software) included in the vehicle, the method for determining the three-dimensional information of a detection object provided by the embodiments of the present application, to determine the three-dimensional information (size, direction, relative position) of the detected objects, so that the vehicle can plan its driving path according to the three-dimensional information of the detected objects.
  • FIG. 1 is a functional block diagram of a vehicle 100 according to an embodiment of the present application, and the vehicle 100 may be an intelligent vehicle.
  • the vehicle 100 determines the three-dimensional information of the detected objects according to the images to be detected collected by the image acquisition device, so that the vehicle can plan the driving path of the vehicle according to the three-dimensional information of the detected objects.
  • Vehicle 100 may include various subsystems, such as travel system 110 , sensor system 120 , control system 130 , one or more peripherals 140 and power supply 150 , computer system 160 , and user interface 170 .
  • vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements. Additionally, each of the subsystems and elements of the vehicle 100 may be interconnected by wire or wirelessly.
  • the travel system 110 may include components that provide powered motion for the vehicle 100 .
  • travel system 110 may include engine 111 , transmission 112 , energy source 113 , and wheels 114 .
  • the engine 111 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine composed of a gasoline engine and an electric motor, and a hybrid engine composed of an internal combustion engine and an air compression engine.
  • Engine 111 converts energy source 113 into mechanical energy.
  • Examples of energy sources 113 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 113 may also provide energy to other systems of the vehicle 100 .
  • Transmission 112 may transmit mechanical power from engine 111 to wheels 114 .
  • the transmission 112 may include a gearbox, a differential, and a driveshaft.
  • the transmission 112 may also include other devices, such as clutches.
  • the drive shafts may include one or more axles that may be coupled to one or more of the wheels 114 .
  • the sensor system 120 may include several sensors that sense information about the environment surrounding the vehicle 100 .
  • the sensor system 120 may include a positioning system 121 (the positioning system may be a global positioning system (GPS), a Beidou system or other positioning systems), an inertial measurement unit (IMU) 122, a radar 123 , lidar 124 and camera 125 .
  • the sensor system 120 may also include sensors that monitor internal systems of the vehicle 100 (eg, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). This detection and identification is a critical function for the safe operation of the autonomous driving of the vehicle 100 .
  • the positioning system 121 may be used to estimate the geographic location of the vehicle 100 .
  • the IMU 122 is used to sense position and orientation changes of the vehicle 100 based on inertial acceleration.
  • IMU 122 may be a combination of an accelerometer and a gyroscope.
  • Radar 123 may utilize radio signals to sense objects within the surrounding environment of vehicle 100 . In some embodiments, in addition to sensing objects, radar 123 may be used to sense the speed and/or heading of objects.
  • the lidar 124 may utilize laser light to sense objects in the environment in which the vehicle 100 is located.
  • lidar 124 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
  • the camera 125 may be used to capture multiple images of the surrounding environment of the vehicle 100, as well as multiple images within the vehicle cockpit. Camera 125 may be a still camera or a video camera.
  • the control system 130 may control the operation of the vehicle 100 and its components. Control system 130 may include various elements, including steering system 131 , throttle 132 , braking unit 133 , computer vision system 134 , route control system 135 , and obstacle avoidance system 136 .
  • the steering system 131 is operable to adjust the heading of the vehicle 100 .
  • it may be a steering wheel system.
  • the throttle 132 is used to control the operating speed of the engine 111 and thus the speed of the vehicle 100 .
  • the braking unit 133 is used to control the deceleration of the vehicle 100 .
  • the braking unit 133 may use friction to slow the wheels 114 .
  • the braking unit 133 may convert the kinetic energy of the wheels 114 into electrical current.
  • the braking unit 133 may also take other forms to slow the wheels 114 to control the speed of the vehicle 100 .
  • the computer vision system 134 is operable to process and analyze the images captured by the camera 125 in order to identify objects and/or features in the environment surrounding the vehicle 100 as well as physical and facial features of the driver within the vehicle cockpit.
  • Objects and/or features may include traffic signals, road conditions, and obstacles, and the driver's physical and facial features include driver behavior, sight lines, expressions, and the like.
  • Computer vision system 134 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 134 may be used to map the environment, track objects, estimate the speed of objects, determine driver behavior, face recognition, and the like.
  • the route control system 135 is used to determine the travel route of the vehicle 100 .
  • route control system 135 may combine data from sensors, positioning system 121 , and one or more predetermined maps to determine a driving route for vehicle 100 .
  • the obstacle avoidance system 136 is used to identify, evaluate and avoid or otherwise traverse potential obstacles in the environment of the vehicle 100 .
  • the control system 130 may additionally include components not shown above, replace some of the components shown above with other components, or omit some of the components shown above.
  • Peripherals 140 may include wireless communication system 141 , onboard computer 142 , microphone 143 and/or speaker 144 .
  • peripherals 140 provide a means for a user of vehicle 100 to interact with user interface 170 .
  • the onboard computer 142 may provide information to a user of the vehicle 100 .
  • the user interface 170 may also operate the onboard computer 142 to receive user input.
  • the on-board computer 142 can be operated through a touch screen.
  • peripheral device 140 may provide a means for vehicle 100 to communicate with other devices located within the vehicle.
  • microphone 143 may receive audio (eg, voice commands or other audio input) from a user of vehicle 100 .
  • speakers 144 may output audio to a user of vehicle 100 .
  • Wireless communication system 141 may communicate wirelessly with one or more devices, either directly or via a communication network.
  • wireless communication system 141 may use 3G cellular communications, such as CDMA, EVDO, GSM/GPRS, or 4G cellular communications, such as LTE, or 5G cellular communications.
  • the wireless communication system 141 may communicate with a wireless local area network (WLAN) using WiFi.
  • the wireless communication system 141 may communicate directly with the device using an infrared link, Bluetooth, or ZigBee.
  • Wireless communication system 141 may also communicate with devices using other wireless protocols.
  • Wireless communication system 141 may include one or more dedicated short range communications (DSRC) devices.
  • Power supply 150 may provide power to various components of vehicle 100 .
  • the power source 150 may be a rechargeable lithium-ion or lead-acid battery.
  • One or more battery packs of such a battery may be configured as a power source to provide power to various components of the vehicle 100 .
  • the power source 150 and the energy source 113 may be implemented together, as in a battery electric vehicle or a gasoline-electric hybrid vehicle among new energy vehicles.
  • Computer system 160 may include at least one processor 161 that executes instructions 1621 stored in a non-transitory computer-readable medium such as data storage device 162 .
  • Computer system 160 may also be multiple computing devices that control individual components or subsystems of vehicle 100 in a distributed fashion.
  • the processor 161 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor.
  • although FIG. 1 functionally illustrates the processor, the memory, and other elements within the same physical enclosure, those of ordinary skill in the art will understand that the processor, the computer system, or the memory may actually include multiple processors, computer systems, or memories that may or may not be housed within the same physical enclosure.
  • the memory may be a hard drive, or other storage medium located within a different physical enclosure.
  • references to a processor or computer system will be understood to include references to a collection of processors, computer systems, or memories that may operate in parallel, or a collection of processors, computer systems, or memories that may not operate in parallel.
  • some components such as the steering and deceleration components, may each have their own processors that only perform computations related to component-specific functions.
  • a processor may be located in a device remote from and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle while others are performed by a remote processor, including taking steps necessary to perform a single maneuver.
  • data storage device 162 may include instructions 1621 (eg, program logic) executable by processor 161 to perform various functions of vehicle 100 , including all or part of the functions described above.
  • Data storage 162 may also contain additional instructions, including sending data to, receiving data from, interacting with, and/or performing operations on one or more of travel system 110 , sensor system 120 , control system 130 , and peripherals 140 control commands.
  • data storage device 162 may store data such as road maps, route information, vehicle location, direction, speed, and other such vehicle data, among other information. Such information may be used by the vehicle 100 and the computer system 160 during operation of the vehicle 100 in autonomous, semi-autonomous and/or manual modes.
  • the data storage device 162 may store information about obstacles in the surrounding environment that the vehicle obtains based on the sensors in the sensor system 120, such as the positions of other vehicles, road edges, green belts and other obstacles, the distance between an obstacle and the vehicle, and the distances between obstacles.
  • the data storage device 162 can also obtain environmental information from the sensor system 120 or other components of the vehicle 100 .
  • the environmental information may be, for example, whether there are green belts, lanes, pedestrians, etc. near the environment where the vehicle is currently located, or whether there are green belts, pedestrians, etc. near the current environment as calculated by the vehicle using machine learning algorithms.
  • the data storage device 162 can also store the state information of the vehicle itself and the state information of other vehicles interacting with the vehicle, wherein the state information of the vehicle includes but is not limited to the position, speed, acceleration, heading angle, etc.
  • the processor 161 can obtain the above information from the data storage device 162, determine the passable area of the vehicle based on the environmental information of the environment where the vehicle is located, the state information of the vehicle itself, the state information of other vehicles, and the like, and determine a final driving strategy based on the passable area to control the autonomous driving of the vehicle 100.
  • the user interface 170 is used to provide information to, or receive information from, a user of the vehicle 100.
  • user interface 170 may interact with one or more input/output devices within the set of peripheral devices 140 , such as one or more of wireless communication system 141 , onboard computer 142 , microphone 143 and speaker 144 .
  • Computer system 160 may control vehicle 100 based on information obtained from various subsystems (eg, travel system 110 , sensor system 120 , and control system 130 ) and information received from user interface 170 .
  • computer system 160 may control steering system 131 to change the heading of the vehicle based on information from control system 130 to avoid obstacles detected by sensor system 120 and obstacle avoidance system 136 .
  • computer system 160 may control many aspects of vehicle 100 and its subsystems.
  • one or more of these components described above may be installed or associated with the vehicle 100 separately.
  • data storage device 162 may exist partially or completely separate from vehicle 100 .
  • the above-described components may be coupled together in communication by wired and/or wireless means.
  • FIG. 1 should not be construed as a limitation on the embodiments of the present application.
  • a self-driving car traveling on a road can determine an adjustment command for the current speed based on other vehicles in its surroundings.
  • the objects in the surrounding environment of the vehicle 100 may be traffic control equipment, or other types of objects such as green belts.
  • each object within the surrounding environment may be considered independently, and the speed adjustment command for vehicle 100 may be determined based on the object's respective characteristics, such as its current speed, acceleration, distance from the vehicle, and the like.
  • the vehicle 100, as an autonomous vehicle, or the computer equipment associated with it, can obtain the state of the surrounding environment (eg, traffic, rain, ice on the road, etc.) based on the identified measurement data, and determine the relative positions of obstacles and vehicles in the surrounding environment at the current moment.
  • the boundaries of the passable area formed by the individual obstacles depend on each other. Therefore, all the acquired measurement data can also be used together to determine the boundary of the passable area of the vehicle and to remove the actually impassable regions from the passable area.
  • the vehicle 100 can adjust its driving strategy based on the detected passable area of the vehicle.
  • the autonomous vehicle can determine what steady state the vehicle needs to adjust to (e.g., accelerate, decelerate, steer, or stop, etc.) based on the detected traversable area of the vehicle. In this process, other factors may also be considered to determine the speed adjustment command for the vehicle 100, such as the lateral position of the vehicle 100 in the road being traveled, the curvature of the road, the proximity of static and dynamic objects, and the like.
  • the computer device may also provide instructions to modify the steering angle of the vehicle 100 so that the self-driving car follows a given trajectory and/or maintains safe lateral and longitudinal distances from nearby objects (eg, cars in adjacent lanes).
  • the above-mentioned vehicle 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, a cart, or the like, which is not specifically limited in the embodiments of the present application.
  • the autonomous driving vehicle may further include a hardware structure and/or a software module, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is performed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • the method for determining three-dimensional information of a detection object is applied to the ADAS system 200 shown in FIG. 2 .
  • the ADAS system 200 includes a hardware system 201 , a perception fusion system 202 , a planning system 203 , and a control system 204 .
  • the hardware system 201 is used to collect road information, vehicle information, obstacle information, etc. around the first vehicle.
  • Currently commonly used hardware systems 201 mainly include cameras, video capture cards, and the like.
  • the hardware system 201 includes a monocular camera.
  • the perception fusion system 202 is used to process the image information collected by the hardware system 201 to determine target information around the first vehicle (including vehicle information, pedestrian information, traffic lights, obstacle information, etc.), and road structure information (including lane line information, road edge information, etc.).
  • the planning system 203 is configured to plan the driving route, driving speed, etc. of the first vehicle according to the target information and the road structure information, and generate planning information.
  • the control system 204 is configured to convert the planning information generated by the planning system 203 into control information and issue the control information to the first vehicle, so that, according to the control information, the first vehicle travels along the driving route and at the travel speed planned by the planning system 203.
  • the in-vehicle communication module 205 (not shown in FIG. 2 ) is used for information exchange between the self-vehicle and other vehicles.
  • the storage component 206 (not shown in FIG. 2 ) is configured to store the executable codes of the foregoing modules, and running the executable codes can implement part or all of the method processes of the embodiments of the present application.
  • the computer system 160 shown in FIG. 1 includes a processor 301, the processor 301 is coupled to a system bus 302, and the processor 301 may be one or more processors, each of which may include one or more processor cores.
  • a video adapter 303 may drive a display 324, which is coupled to the system bus 302.
  • the system bus 302 is coupled to an input-output (I/O) bus (BUS) 305 through a bus bridge 304, an I/O interface 306 is coupled to the I/O bus 305, and the I/O interface 306 communicates with various I/O devices.
  • an input device 307 (e.g., a keyboard, a mouse, a touch screen, etc.)
  • a media tray 308 (e.g., a CD-ROM, a multimedia interface, etc.)
  • a transceiver 309 (which can transmit and/or receive radio communication signals)
  • a camera 310 (which can capture still and moving digital video images)
  • an external universal serial bus (USB) port 311
  • the processor 301 may be any conventional processor, including a reduced instruction set computing (reduced instruction set computer, RISC) processor, a complex instruction set computing (complex instruction set computer, CISC) processor, or a combination of the above.
  • the processor 301 may also be a dedicated device such as an application specific integrated circuit ASIC.
  • the processor 301 may also be a neural network processor or a combination of a neural network processor and the above conventional processor.
  • computer system 160 may be located remotely from the smart vehicle and communicate wirelessly with smart vehicle 100 .
  • some of the processes of the present application may be arranged to be performed on a processor within an intelligent vehicle, and other processes may be performed by a remote processor, including taking actions required to perform a single maneuver.
  • Network interface 312 may be a hardware network interface, such as a network card.
  • the network (Network) 314 may be an external network, such as the Internet, or an internal network, such as Ethernet or a virtual private network (VPN), and optionally, the network 314 may also be a wireless network, such as a WiFi network, a cellular network, and the like.
  • Hard drive interface 315 is coupled to system bus 302 .
  • the hard disk drive interface 315 is connected to the hard disk drive 316 .
  • System memory 317 is coupled to system bus 302 .
  • Data running in system memory 317 may include operating system (OS) 318 and application programs 319 of computer system 160 .
  • Operating System (OS) 318 includes, but is not limited to, Shell 320 and Kernel 321.
  • Shell 320 is an interface between the user and kernel 321 of operating system 318.
  • Shell 320 is the outermost layer of operating system 318. The shell manages the interaction between the user and operating system 318: waiting for user input, interpreting user input to operating system 318, and processing the various outputs of operating system 318.
  • Kernel 321 consists of the portion of operating system 318 that manages memory, files, peripherals, and system resources, and interacts directly with the hardware.
  • the kernel 321 of the operating system 318 usually runs processes, provides communication between processes, and provides functions such as CPU time slice management, interrupts, memory management, and IO management.
  • Applications 319 include programs 323 related to autonomous driving, such as programs that manage the interaction between the autonomous vehicle and road obstacles, programs that control the driving route or speed of the autonomous vehicle, programs that control the interaction between the autonomous vehicle and other cars/autonomous vehicles on the road, and the like.
  • Application 319 may also exist on the system of the deploying server 313.
  • computer system 160 may download application 319 from deploying server 313 when application 319 needs to be executed.
  • the application 319 may be an application that controls the vehicle to determine the driving strategy according to the traversable area of the vehicle and the conventional control module.
  • the processor 301 of the computer system 160 calls the application 319 to obtain the driving strategy.
  • Sensor 322 is associated with computer system 160 .
  • Sensor 322 is used to detect the environment around computer system 160 .
  • sensors 322 may detect animals, cars, obstacles and/or pedestrian crossings, and the like. Further sensors 322 may also detect the environment around objects such as the aforementioned animals, cars, obstacles and/or pedestrian crossings.
  • the environment around the animal includes, for example, other animals that appear around the animal, weather conditions, the brightness of the environment around the animal, and the like.
  • the sensor 322 may be at least one of a camera, an infrared sensor, a chemical detector, a microphone, and the like.
  • computer system 160 may also receive information from or transfer information to other computer systems.
  • sensor data collected from the sensor system 120 of the vehicle 100 may be transferred to another computer, where the data is processed.
  • data from computer system 160 may be transmitted via a network to computer system 410 on the cloud side for further processing.
  • Networks and intermediate nodes may include various configurations and protocols, including the Internet, the World Wide Web, Intranets, Virtual Private Networks, Wide Area Networks, Local Area Networks, private networks using one or more of the company's proprietary communication protocols, Ethernet, WiFi and HTTP, and various combinations of the foregoing.
  • Such communication may be performed by any device capable of transferring data to and from other computers, such as modems and wireless interfaces.
  • computer system 410 may include a server having multiple computers, such as a load balancing server farm.
  • server 420 exchanges information with various nodes of the network.
  • the computer system 410 may have a configuration similar to the computer system 160 and have a processor 430 , memory 440 , instructions 450 , and data 460 .
  • the data 460 of the server 420 may include weather-related information.
  • the server 420 may receive, monitor, store, update, and transmit various information related to target objects in the surrounding environment. This information may include target category, target shape information, and target tracking information, eg, in the form of reports, radar information, forecasts, and the like.
  • the cloud service center may receive information (such as data collected by vehicle sensors or other information) from vehicles 513 , vehicles 512 within its operating environment 500 via network 511 , such as a wireless communication network.
  • the vehicle 513 and the vehicle 512 may be smart vehicles.
  • the cloud service center 520 controls the vehicle 513 and the vehicle 512 by running a program related to controlling the automatic driving of the vehicle stored in the cloud service center 520 .
  • Programs related to controlling the autonomous driving of cars can be: programs that manage the interaction between autonomous vehicles and road obstacles, or programs that control the route or speed of autonomous vehicles, or programs that control the interaction between autonomous vehicles and other autonomous vehicles on the road.
  • the cloud service center 520 may provide the part of the map to the vehicle 513 and the vehicle 512 through the network 511 .
  • operations may be divided among different locations.
  • multiple cloud service centers may receive, validate, combine, and/or transmit information reports.
  • Information reports and/or sensor data may also be sent between vehicles in some examples. Other configurations are also possible.
  • the cloud service center 520 sends the intelligent vehicle suggested solutions regarding possible driving situations in the environment (e.g., informing the vehicle of an obstacle ahead and of how to get around it). For example, the cloud service center 520 may assist the vehicle in determining how to proceed when faced with certain obstacles within the environment.
  • the cloud service center 520 sends a response to the smart vehicle indicating how the vehicle should travel in a given scenario. For example, based on the collected sensor data, the cloud service center 520 may confirm the existence of a temporary stop sign ahead of the road, or, for another example, based on the "lane closed" sign and sensor data of construction vehicles, determine that the lane is closed due to construction.
  • the cloud service center 520 sends a suggested operating mode for the vehicle to pass the obstacle (eg, instructing the vehicle to change lanes to another road).
  • the operation steps used by the intelligent vehicle can be added to the driving information map. Accordingly, this information can be sent to other vehicles in the area that may encounter the same obstacle in order to assist other vehicles not only in recognizing closed lanes but also knowing how to pass.
  • example computer program product 600 is provided using signal bearing medium 601 .
  • the signal bearing medium 601 may include one or more program instructions 602, which, when executed by one or more processors, may provide all or part of the functionality described above with respect to FIGS. 2-5, or all or part of the functionality described in subsequent embodiments.
  • program instructions 602 in FIG. 6 also describe example instructions.
  • the signal bearing medium 601 may include a computer readable medium 603 such as, but not limited to, a hard drive, a compact disc (CD), a digital video disc (DVD), a digital tape, a memory, a read-only memory (ROM), or a random access memory (RAM), and so on.
  • the signal bearing medium 601 may include a computer recordable medium 604 such as, but not limited to, memory, read/write (R/W) CDs, R/W DVDs, and the like.
  • signal bearing medium 601 may include communication medium 605, such as, but not limited to, digital and/or analog communication media (eg, fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).
  • the signal bearing medium 601 may be conveyed by a wireless form of communication medium 605 (eg, a wireless communication medium conforming to the IEEE 802.11 standard or other transmission protocol).
  • the one or more program instructions 602 may be, for example, computer-executable instructions or logic-implemented instructions.
  • computing devices such as those described with respect to FIGS. 2-6 may be configured to respond to communication via one or more of computer readable medium 603 , and/or computer recordable medium 604 , and/or communication medium 605 .
  • Program instructions 602 are communicated to a computing device to provide various operations, functions, or actions. It should be understood that the arrangements described herein are for illustrative purposes only. Thus, those skilled in the art will understand that other arrangements and other elements (e.g., machines, interfaces, functions, sequences, and groups of functions, etc.) can be used instead, and that some elements may be omitted altogether depending on the desired results. Additionally, many of the described elements are functional entities that may be implemented as discrete or distributed components, or in conjunction with other components, in any suitable combination and position.
  • the touchdown line refers to a line segment composed of points in the image to be detected where the detection object actually contacts the ground.
  • the ground contact line of the vehicle is the connection line between the vehicle tire and the ground contact point.
  • the ground contact line of the vehicle can be distinguished according to the four sides of the vehicle (the left side, right side, front side and rear side, respectively); each side of the vehicle corresponds to one ground contact line.
  • the contact point between the left front tire of the vehicle and the ground is marked as contact point 1; the contact point between the right front tire of the vehicle and the ground is marked as contact point 2; the contact point between the left rear tire of the vehicle and the ground is marked as contact point 3; and the contact point between the right rear tire of the vehicle and the ground is marked as contact point 4.
  • the grounding line corresponding to the left side of the vehicle is the connection between contact point 1 and contact point 3.
  • the grounding line corresponding to the right side of the vehicle is the connection between the contact point 2 and the contact point 4.
  • the grounding line corresponding to the front side of the vehicle is the connection between the contact point 1 and the contact point 2.
  • the grounding line corresponding to the rear side of the vehicle is the connection between the contact point 3 and the contact point 4.
  • the part of the tire in contact with the ground is usually a contact surface (the contact surface can be approximately considered as a rectangle).
  • the front left vertex of the contact surface between the front left tire of the vehicle and the ground can be regarded as contact point 1; the front right vertex of the contact surface between the front right tire of the vehicle and the ground can be regarded as contact point 2; the rear left vertex of the contact surface between the rear left tire of the vehicle and the ground can be regarded as contact point 3; and the rear right vertex of the contact surface between the rear right tire of the vehicle and the ground can be regarded as contact point 4.
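  • A minimal sketch, not taken from the patent text, of how the four tire-ground contact points and the corresponding ground contact lines described above might be organized in code; all pixel coordinates below are hypothetical.

```python
# Tire-ground contact points, following the numbering above; (u, v) image pixels are illustrative.
contact_points = {
    1: (320.0, 410.0),  # left-front tire contact point
    2: (380.0, 412.0),  # right-front tire contact point
    3: (300.0, 455.0),  # left-rear tire contact point
    4: (395.0, 458.0),  # right-rear tire contact point
}

# Each side of the vehicle corresponds to one ground contact line,
# defined by the two contact points it connects.
ground_lines = {
    "left":  (contact_points[1], contact_points[3]),
    "right": (contact_points[2], contact_points[4]),
    "front": (contact_points[1], contact_points[2]),
    "rear":  (contact_points[3], contact_points[4]),
}
```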
  • the neural network model is an information processing system composed of a large number of processing units (referred to as neurons) interconnected, and the neurons in the neural network model contain corresponding mathematical expressions. After the data is fed into the neuron, the neuron runs the mathematical expressions it contains, performs calculations on the input data, and generates output data.
  • the input data of each neuron is the output data of the previous neuron connected to it; the output data of each neuron is the input data of the next neuron connected to it.
  • after data is input into the neural network model, the model selects the corresponding neurons for the input data according to its own learning and training, performs calculations on the input data with these neurons, and determines and outputs the final operation result.
  • the neural network can continuously learn and evolve in the process of data operation, and continuously optimize its own operation process according to the feedback of the operation results.
  • the neural network model described in the embodiments of the present application is used to process the pictures collected by the image acquisition device to determine the passable area boundary on each detection object in the image (referred to as the passable area boundary of the detection object).
  • a passable area is an area that a car can drive through. For example, an open area between pedestrians, obstacles, and other vehicles in the front area detected by the first vehicle is recorded as a passable area of the first vehicle.
  • the passable area boundary is generally located on the boundary of the detection object, so in this embodiment of the present application, the passable area boundary located on the first detection object can be used to represent the boundary of the first detection object, and the three-dimensional information of the first detection object can then be determined according to that boundary.
  • the passable area boundary of the detection object output by the neural network model is usually shown by a plurality of points with corresponding identifiers.
  • the passable area boundary on the left side of the second vehicle includes a plurality of points located on the left-side boundary of the vehicle. The plurality of points are used to characterize the passable area boundary on the left side of the second vehicle.
  • points located on different sides of the detection object may have different identifiers.
  • these identifiers may include: the identifier "00", which is used to indicate that a point is located on the passable area boundary on the left side of the detection object;
  • the identifier "01", which is used to indicate that a point is located on the passable area boundary on the right side of the detection object; the identifier "10", which is used to indicate that a point is located on the passable area boundary on the front side of the detection object; and the identifier "11", which is used to indicate that a point is located on the passable area boundary on the rear side of the detection object.
  • Eye level is the line in the image that is parallel to the line of sight.
  • the eye-level line refers to a straight line in the image that is at the same height as the image capturing device and parallel to the image capturing device.
  • two straight lines parallel to each other on the horizontal plane will intersect at a point at the eye level in a two-dimensional image, and the intersection point is the vanishing point.
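  • A minimal sketch, not from the patent, of how the vanishing point of two ground lines that are parallel in the real world can be found in the image, using homogeneous coordinates: a 2D line through two points is their cross product, and the intersection of two lines is the cross product of the lines. The pixel values are hypothetical.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points p and q."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines; returns (u, v) or None if parallel in the image."""
    x = np.cross(l1, l2)
    if abs(x[2]) < 1e-9:
        return None
    return (x[0] / x[2], x[1] / x[2])

# Images of two lane edges that are parallel on the ground (hypothetical pixels):
left_edge = line_through((200, 700), (420, 400))
right_edge = line_through((900, 700), (640, 400))
vanishing_point = intersect(left_edge, right_edge)  # lies on the eye-level line
print(vanishing_point)
```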
  • the vehicle body coordinate system refers to a three-dimensional coordinate system whose origin is located on the vehicle body. Generally speaking, the origin of the vehicle body coordinate system coincides with the center of mass of the vehicle, the X axis is along the length of the vehicle and points to the front of the vehicle, the Y axis is along the width of the vehicle and points to the left of the driver, and the Z axis is along the height of the vehicle and points above the vehicle.
  • Vehicle 2D detection is as follows: the first vehicle determines the image information displayed by the second vehicle in the image to be detected; the first vehicle frames the image information displayed by the second vehicle in the image to be detected in the form of a rectangular frame; and the first vehicle calculates the distance between the second vehicle and the first vehicle according to the position of the lower edge of the rectangular frame, and determines the relative position of the second vehicle and the first vehicle.
  • with vehicle 2D detection, the first vehicle can only determine the position information of the second vehicle relative to the first vehicle.
  • in addition to determining the location information of surrounding vehicles, the vehicle also needs to determine the size, direction and other information of those vehicles to determine whether the surrounding vehicles interfere with its own driving.
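  • A minimal sketch of one common way the distance from the lower edge of the 2D rectangular frame mentioned above can be estimated: assume a pinhole camera whose optical axis is parallel to a flat ground plane. The formula and all parameter values below are assumptions for illustration; the patent text does not specify this computation.

```python
def distance_from_box_bottom(v_bottom, fy, cy, camera_height_m):
    """Estimate longitudinal distance to a ground point imaged at pixel row v_bottom."""
    if v_bottom <= cy:
        raise ValueError("Ground point must project below the principal point.")
    return fy * camera_height_m / (v_bottom - cy)

# Example with hypothetical intrinsics: fy = 1000 px, cy = 360 px, camera 1.4 m above the ground.
print(distance_from_box_bottom(v_bottom=480.0, fy=1000.0, cy=360.0, camera_height_m=1.4))  # ~11.67 m
```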
  • Binocular 3D detection can obtain the depth information of the detection object by determining the difference between the images of the detection object collected by two image acquisition devices located at different positions. By establishing the correspondence of the same point between the images, the mapping points of the same physical point in different images are matched to form a disparity image, and the 3D information of the detection object can be determined according to the disparity image.
  • the binocular 3D detection of the vehicle is as follows: the first vehicle adopts a binocular camera, and two images of the second vehicle are collected from two angles respectively. The first vehicle calculates the three-dimensional information of the second vehicle according to the deviation of the position of the same point on the second vehicle in the two images.
  • the binocular 3D detection algorithm can more accurately calculate the size, direction and position information of the second vehicle relative to the first vehicle.
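  • A minimal sketch of the standard stereo relation behind the binocular 3D method described above: for a rectified stereo pair, depth equals focal length times baseline divided by disparity. The values below are hypothetical.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point from its disparity between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a visible point.")
    return focal_px * baseline_m / disparity_px

print(stereo_depth(focal_px=1000.0, baseline_m=0.12, disparity_px=8.0))  # 15.0 m
```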
  • the hardware of the binocular camera is expensive, the production requirements for binocular cameras applied to smart vehicles are high, and the algorithm currently used in the process of determining the 3D information of the second vehicle requires high image annotation cost and a large amount of calculation.
  • When a beam of laser light irradiates the surface of an object, the reflected laser light carries information such as azimuth and distance. If the laser beam is scanned along a certain trajectory, the reflected laser point information is recorded while scanning. Since the scanning is extremely fine, a large number of laser points can be obtained, thereby forming a laser point cloud.
  • Vehicle laser point cloud detection is as follows: the first vehicle emits laser light to the surroundings and scans the surrounding detection objects.
  • the first vehicle receives laser point cloud data returned by surrounding detection objects, and the point cloud data includes point cloud data returned by the second vehicle and point cloud data returned by other detection objects.
  • the first vehicle uses algorithms such as machine learning or deep learning to map the laser point cloud data returned by the second vehicle into a certain data structure.
  • the first vehicle extracts each point or feature in these data, and clusters the point cloud data according to these features, and classifies similar point clouds into one category.
  • the first vehicle inputs the clustered point cloud into the corresponding classifier for classification and identification, and determines the point cloud data of the second vehicle.
  • the first vehicle maps the point cloud data of the second vehicle back into the three-dimensional point cloud data, constructs a 3D bounding box of the second vehicle, and determines three-dimensional information of the second vehicle.
  • however, in some cases the size and direction information of the second vehicle cannot be accurately determined.
  • in vehicle binocular 3D detection or vehicle laser point cloud detection, the hardware cost is high and the computational complexity is high.
  • the embodiment of the present application provides a method for determining three-dimensional information of a detection object.
  • the first vehicle can determine the passable area boundary and the ground contact line of the first detection object according to the collected image to be detected. Further, the first vehicle determines the three-dimensional information represented by the first detection object in the two-dimensional image according to the passable area boundary and the ground contact line of the first detection object.
  • the above image to be detected may be a two-dimensional image collected by a monocular camera.
  • the first vehicle can determine the three-dimensional information of the first detection object by collecting the image information of the first detection object through the monocular camera.
  • in the prior art, the first vehicle needs to rely on a binocular camera to collect the image information of the first detection object to determine the three-dimensional information of the first detection object, or relies on a lidar to determine the three-dimensional information of the first detection object.
  • the first vehicle uses a monocular camera to collect image information of the first detection object to determine the three-dimensional information of the first detection object, which can greatly reduce the hardware cost of determining the three-dimensional information of the first detection object.
  • the first vehicle only needs to mark the passable area boundary of the first detection object, and determine the ground contact line of the first detection object according to the passable area boundary of the first detection object.
  • the first vehicle can determine the three-dimensional information represented by the first detection object in the two-dimensional image according to the passable area boundary of the first detection object and the ground contact line, combined with the visual relationship of the first detection object in the image to be detected, and the like.
  • the method for determining the three-dimensional information of the detection object does not require the first vehicle to perform other additional data labeling or training, thereby reducing the amount of calculation for determining the three-dimensional information of the first detection object, and reducing the graphics processing unit (GPU) resources occupied by determining the three-dimensional information of the first detection object.
  • the method includes:
  • the first vehicle acquires an image to be detected.
  • the to-be-detected image includes the first detection object.
  • the detection objects can be vehicles, pedestrians, obstacles, etc.
  • the detection object is taken as an example of the second vehicle for description.
  • the above image to be detected may be a picture collected by a vehicle-mounted image collection device.
  • the in-vehicle image acquisition device is usually used to capture other vehicles located in front of the vehicle; alternatively, the in-vehicle image acquisition device can also capture the information of all other vehicles around the vehicle by acquiring 360° omnidirectional images around the vehicle.
  • the image acquisition device described in this embodiment of the present application may be a monocular camera, and when the first vehicle executes the method for determining the three-dimensional information of the detection object provided by this embodiment of the present application, the method may be executed by an in-vehicle terminal device disposed in the first vehicle, or by other equipment with data processing capability.
  • the first vehicle determines the passable area boundary of the first detection object and the ground contact line of the first detection object.
  • the passable area boundary of the first detection object includes the boundary of the first detection object in the to-be-detected image.
  • the ground contact line of the first detection object is a line connecting the intersection of the first detection object and the ground.
  • the passable area boundary of the first detection object is the passable area boundary of the first detection object output by the neural network model after the image to be detected is input into the neural network model.
  • the number of ground contact lines of the second vehicle is related to the number of sides of the second vehicle shown in the image to be detected.
  • the first detection object is a vehicle located in front of the first vehicle in the image to be detected; when the image to be detected shows the left side and rear side of the second vehicle, the ground contact lines of the second vehicle include two ground contact lines, which are respectively the ground contact line on the left side of the second vehicle and the ground contact line on the rear side of the second vehicle.
  • the ground contact line on the left side of the second vehicle is a line between the contact point of the tire on the left front side of the second vehicle and the contact point of the tire on the left rear side of the second vehicle.
  • the ground contact line on the rear side of the second vehicle is a line between the contact point of the tire on the left rear side of the second vehicle and the contact point of the tire on the right rear side of the second vehicle.
  • the first detection object is a vehicle located directly in front of the first vehicle in the image to be detected; when only the rear side of the second vehicle is shown in the image to be detected, the ground contact line of the second vehicle includes one ground contact line, which is the ground contact line on the rear side of the second vehicle.
  • the ground contact line on the rear side of the second vehicle is a line connecting the contact point of the tire on the left rear side of the second vehicle and the contact point of the tire on the right rear side of the second vehicle.
  • the first vehicle determines three-dimensional information of the first detection object according to the boundary of the passable area and the ground contact line.
  • the three-dimensional information of the first detection object is used to determine at least one of the size, direction and relative position of the first detection object.
  • the three-dimensional information of the first detection object is used to represent the three-dimensional information of the first detection object shown in the image to be detected.
  • the first vehicle may convert the three-dimensional information shown by the first detection object in the image to be detected into a three-dimensional coordinate system, so as to further determine the real three-dimensional information of the first detection object.
  • the first vehicle converts the three-dimensional information shown by the second vehicle in the image to be detected into a three-dimensional coordinate system, and can determine the size of the second vehicle (for example, the length and width of the second vehicle), the direction of the second vehicle (for example, the heading of the second vehicle, the possible driving direction of the second vehicle), and the position of the second vehicle in the three-dimensional coordinate system.
  • the relative position of the first detection object is related to the three-dimensional coordinate system to which the first vehicle converts the first detection object.
  • the relative position of the first detection object is the position of the first detection object relative to the first vehicle;
  • the relative position of the first detection object is the actual geographic location of the first detection object.
  • the three-dimensional information of the first detection object includes multiple points and multiple line segments.
  • the multiple points are projections on the ground of the endpoints of the first detection object displayed in the image to be detected.
  • the multiple line segments at least include line segments generated by the projection of the outermost boundary of the first detection object on the ground; or, the multiple line segments are contour lines of the first detection object in the image to be detected.
  • the first vehicle inputs the plurality of line segments into the vehicle body coordinate system of the first vehicle, and at least one of the size, direction and relative position of the first detection object can be determined.
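  • A minimal sketch, under the assumption that two ground-projected segments of the detection object (e.g., its left edge and rear edge) have already been converted into the body coordinate system of the first vehicle (X forward, Y to the left, in meters); the segment endpoints are hypothetical, and the derived length, width and heading illustrate how size, direction and relative position could be read off such segments.

```python
import math

def segment_length(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def segment_heading(p, q):
    """Heading of a segment relative to the +X (forward) axis, in degrees."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

# First line: left side of the second vehicle; second line: rear side (shared corner first).
left_side = ((12.0, 2.0), (16.3, 2.4))
rear_side = ((12.0, 2.0), (11.8, 0.2))

vehicle_length = segment_length(*left_side)    # ~4.32 m
vehicle_width = segment_length(*rear_side)     # ~1.81 m
vehicle_heading = segment_heading(*left_side)  # ~5.3 degrees to the left of straight ahead
position = left_side[0]                        # shared ground corner of the two segments
print(vehicle_length, vehicle_width, vehicle_heading, position)
```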
  • in the method for determining the three-dimensional information of the detection object provided by the present application, the first vehicle can determine the passable area boundary and the ground contact line of the first detection object according to the collected image to be detected; further, the first vehicle determines the three-dimensional information represented by the first detection object in the two-dimensional image according to the passable area boundary and the ground contact line of the first detection object.
  • the above image to be detected may be a two-dimensional image collected by a monocular camera.
  • the first vehicle can determine the three-dimensional information of the first detection object by collecting the image information of the first detection object through the monocular camera.
  • the first vehicle uses the monocular camera to collect the image information of the first detection object and determines the three-dimensional information of the first detection object from that image information, which can greatly reduce the hardware cost of determining the three-dimensional information of the first detection object.
  • the first vehicle only needs to mark the passable area boundary of the first detection object, and determine the ground contact line of the first detection object according to the passable area boundary of the first detection object.
  • the first vehicle can determine the three-dimensional information represented by the first detection object in the two-dimensional image according to the passable area boundary of the first detection object and the ground contact line, combined with the visual relationship of the first detection object in the image to be detected, and the like.
  • the method for determining the three-dimensional information of the detection object does not require the first vehicle to perform other additional data labeling or training, thereby reducing the amount of calculation for determining the three-dimensional information of the first detection object, and reducing the graphics processing unit (GPU) resources occupied by determining the three-dimensional information of the first detection object.
  • S1021-S1023 will be described in detail below.
  • S1021 The first vehicle inputs the image to be detected into the neural network model to obtain L points.
  • the image to be detected usually includes one or more detection objects.
  • the above-mentioned L points are points on the boundary of the passable area of the one or more detection objects. L is a positive integer.
  • the above-mentioned neural network model is a pre-trained neural network model.
  • the neural network model has the ability to mark the traversable area boundary of the detected object in the image to be detected.
  • the boundary of the passable area is the boundary of the detection object in the image to be detected.
  • the neural network model is called, and the image to be detected is input into the neural network model, and L points are output.
  • the L points are points located on the passable area boundary of the detection object in the image to be detected.
  • Each of the L points output by the neural network model may correspond to an identifier. This identifier is used to indicate which side of the detection object the point is located on.
  • the point on the left side of the detection object corresponds to the first identifier.
  • the first identifier is used to indicate that the point is located on the left side of the detection object.
  • the point on the right side of the detection object corresponds to the second identification.
  • the second identification is used to indicate that the point is located on the right side of the detection object.
  • the point on the front side of the detection object corresponds to the third marker.
  • the third identifier is used to indicate that the point is located on the front side of the detection object.
  • the point on the rear side of the detection object corresponds to the fourth mark.
  • the fourth identification is used to indicate that the point is located on the back side of the detection object.
  • S1022 The first vehicle determines M points from the L points.
  • the M points are points located on the boundary of the passable area of the first detection object.
  • the first vehicle classifies the L points according to the one or more detection objects in the image to be detected, and determines the detection object corresponding to each point. After that, the first vehicle determines the M points corresponding to the first detection object according to the detection object corresponding to each point.
  • the M points are points located on the boundary of the passable area of the first detection object. M is a positive integer, and M is less than or equal to L.
  • the first vehicle fits the M points to determine the ground contact line of the first detection object.
  • the first vehicle may use a random sample consensus (RANSAC) fitting algorithm to fit and determine the touchdown line of the target object.
  • the first vehicle may adopt the RANSAC fitting algorithm, and the process of fitting and determining the touchdown line of the target object includes the following steps a to f, which will be described in detail below:
  • Step a The first vehicle determines K points located at the boundary of the passable area of the first detection object and having the same identification, where K is a positive integer.
  • Step b The first vehicle randomly selects T points from the K points, and uses the least squares method to fit the T points to obtain a straight line.
  • Step c The first vehicle determines the distance between each of the K points except the T points and the straight line.
  • Step d the first vehicle determines that the points whose distance is less than the first threshold are in-group points, and determines the number of in-group points.
  • Step e The first vehicle executes the above steps b to d multiple times to determine a plurality of straight lines and the number of in-group points corresponding to each straight line in the plurality of straight lines.
  • Step f The first vehicle determines that among the above-mentioned straight lines, the straight line with the largest number of corresponding in-group points is a ground contact line of the first detection object.
  • In step e, the more straight lines the first vehicle determines by repeating steps b to d, the higher the accuracy of the final result. A simplified sketch of this fitting procedure is given below.
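  • A minimal RANSAC sketch following steps a to f above. The sample size, inlier threshold and iteration count are illustrative choices, not values taken from the patent; the input points are (u, v) pixel coordinates of boundary points that share the same side identifier.

```python
import random
import numpy as np

def fit_line_lsq(points):
    """Least-squares fit of v = a*u + b; returns (a, b)."""
    pts = np.asarray(points, dtype=float)
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return a, b

def point_line_distance(point, a, b):
    """Distance from (u, v) to the line v = a*u + b."""
    u, v = point
    return abs(a * u - v + b) / np.hypot(a, 1.0)

def ransac_ground_line(points, sample_size=2, threshold_px=2.0, iterations=100, seed=0):
    rng = random.Random(seed)
    best_line, best_inliers = None, -1
    for _ in range(iterations):                   # steps b to e, repeated
        sample = rng.sample(points, sample_size)  # step b: randomly pick T points
        a, b = fit_line_lsq(sample)               # step b: least-squares fit
        rest = [p for p in points if p not in sample]
        inliers = sum(1 for p in rest             # steps c and d: count in-group points
                      if point_line_distance(p, a, b) < threshold_px)
        if inliers > best_inliers:                # step f: keep the line with the most in-group points
            best_line, best_inliers = (a, b), inliers
    return best_line, best_inliers

# Example call on the boundary points of one side (hypothetical pixel coordinates):
side_points = [(300, 455), (320, 448), (340, 441), (360, 435), (380, 428), (500, 200)]
print(ransac_ground_line(side_points))
```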
  • the first vehicle can use the neural network model and the corresponding fitting algorithm to determine the passable area boundary of the first detection object and the ground contact line of the first detection object according to the image to be detected.
  • the monocular camera of the first vehicle can capture an image of an object located in front of the monocular camera.
  • when the second vehicle is located directly in front of the monocular camera, usually only one side of the second vehicle can be captured by the monocular camera of the first vehicle.
  • when the second vehicle is located in front of the monocular camera but deviates from the position directly in front of it, usually two sides of the second vehicle can be captured by the monocular camera of the first vehicle.
  • the image of the second vehicle collected by the first vehicle involves the following two scenarios: in Scenario 1, two sides of the second vehicle are collected by the first vehicle; in Scenario 2, one side of the second vehicle is collected by the first vehicle.
  • Scenario 1: the first vehicle collects two sides of the second vehicle.
  • the image information of the second vehicle collected by the first vehicle is related to the orientation in which the monocular camera of the first vehicle collects images.
  • the monocular camera captures an image of the front of the first vehicle:
  • the monocular camera can capture the right side and rear side of the second vehicle.
  • the monocular camera can capture the left side and rear side of the second vehicle.
  • the monocular camera can capture the front side and left side of the second vehicle.
  • the monocular camera can capture the front side and the right side of the second vehicle.
  • the monocular camera can capture the right side of the second vehicle and either the front side or the rear side of the second vehicle.
  • the monocular camera can also collect images of other side surfaces of the second vehicle, which will not be repeated in this application.
  • Scenario 2: the first vehicle collects one side of the second vehicle.
  • the image information of the second vehicle collected by the first vehicle is related to the orientation in which the monocular camera of the first vehicle collects images.
  • the monocular camera captures an image of the front of the first vehicle:
  • the monocular camera can capture the rear side of the second vehicle.
  • the monocular camera can capture the front side of the second vehicle.
  • the monocular camera can capture the right side of the second vehicle.
  • the monocular camera can capture the left side of the second vehicle.
  • the passable area boundary of the second vehicle determined by the first vehicle is the passable area boundary of the two sides.
  • the ground contact line of the second vehicle determined by the first vehicle includes a first ground contact line and a second ground contact line, and the first ground contact line and the second ground contact line respectively correspond to different side surfaces of the two side surfaces.
  • the three-dimensional information of the second vehicle determined by the first vehicle includes the three-dimensional information composed of the two side surfaces.
  • the passable area boundary of the second vehicle determined by the first vehicle is the passable area boundary of the one side surface.
  • the ground contact line of the second vehicle determined by the first vehicle includes a third ground contact line, and the third ground contact line is the ground contact line of the one side surface.
  • the three-dimensional information of the second vehicle determined by the first vehicle includes the three-dimensional information composed of the one side surface.
  • the first vehicle determines the three-dimensional information of the first detection object according to the passable area boundary and the ground contact line in the following two cases: in Case 1, the first vehicle determines the three-dimensional information of the first detection object according to the passable area boundary and the ground contact lines of two sides of the first detection object; in Case 2, the first vehicle determines the three-dimensional information of the first detection object according to the passable area boundary and the ground contact line of one side of the first detection object.
  • Case 1: the first vehicle determines the three-dimensional information of the first detection object according to the passable area boundary and the ground contact lines of the two sides of the first detection object.
  • in Case 1, the passable area boundary of the first detection object and the ground contact lines of the first detection object determined by the first vehicle in the above S102 have the following characteristics:
  • the passable area boundary of the first detection object includes a plurality of boundary points corresponding to the first identification and a plurality of boundary points corresponding to the second identification.
  • the first identification also corresponds to the first side surface of the first detection object
  • the second identification also corresponds to the second side surface of the first detection object.
  • the first side surface and the second side surface are two intersecting side surfaces of the first detection object.
  • the grounding wire includes a first grounding wire and a second grounding wire.
  • the first grounding line is a grounding line determined by fitting a plurality of boundary points corresponding to the first identification.
  • the second grounding line is a grounding line determined by fitting a plurality of boundary points corresponding to the second identification.
  • the plurality of boundary points corresponding to the first identification include the first boundary point; the first boundary point is the boundary point with the largest distance from the second grounding line among the plurality of boundary points with the first identification.
  • the plurality of boundary points corresponding to the second identification include a second boundary point; the second boundary point is the boundary point with the largest distance from the first grounding line among the plurality of boundary points with the second identification.
  • the three-dimensional information of the first detection object determined by the first vehicle in the above S103 is determined according to three points and two lines corresponding to the first detection object.
  • the first point among the three points is the projection of the first boundary point on the ground.
  • the second of the three points is the projection of the second boundary point on the ground.
  • the third point of the three points is the intersection of the first grounding line and a line that passes through the second point and is parallel to the second grounding line.
  • the first of the two lines is the line connecting the first point and the third point.
  • the second of the two lines is the line connecting the second and third points.
  • S103 can be specifically implemented by the following S1031-S1035. Below, S1031-S1035 are described in detail:
  • the first vehicle determines a first point according to the first boundary point and the first ground contact line.
  • the first point is the intersection of the first ground contact line and the first straight line.
  • the first straight line is a straight line that passes through the first boundary point and is perpendicular to the eye level in the image to be detected.
  • the method for the first vehicle to determine the first point is:
  • Step I the first vehicle determines a first straight line; the first straight line is a straight line that passes through the first boundary point in the image to be detected and is perpendicular to the eye level.
  • the first vehicle draws, through the first boundary point, a line perpendicular to the eye-level line; this perpendicular line is the first straight line.
  • Step II the first vehicle determines the intersection of the first straight line and the first ground contact line as the first point.
  • the first vehicle draws an extension line of the first ground contact line; the extension line intersects the first straight line at point a.
  • the first vehicle determines the point a as the first point.
  • the first vehicle determines a second point according to the first ground contact line, the second ground contact line, and the second boundary point.
  • the second point is the intersection of the second straight line and the third straight line.
  • the second straight line is a straight line that passes through the intersection of the first ground contact line and the eye-level line, and through the endpoint of the second ground contact line that is far from the first ground contact line.
  • the third straight line is a straight line that passes through the second boundary point and is perpendicular to the eye level in the image to be detected.
  • the method for the first vehicle to determine the second point is:
  • Step III the first vehicle determines the second straight line.
  • the second straight line is, in the image to be detected, a straight line that passes through the intersection of the first ground contact line and the eye-level line and through the endpoint of the second ground contact line that is far from the first ground contact line.
  • the first vehicle draws an extension line of the first ground contact line to obtain the intersection point b of the first ground contact line and the eye-level line.
  • the first vehicle determines the endpoint c of the second ground contact line that is far from the first ground contact line.
  • the first vehicle draws a straight line passing through the above-mentioned intersection point b and the end point c, and the straight line is the second straight line.
  • Step IV the first vehicle determines the third straight line.
  • the third straight line is a straight line that passes through the second boundary point in the image to be detected and is perpendicular to the eye level.
  • the first vehicle draws, through the second boundary point, a line perpendicular to the eye-level line; this perpendicular line is the third straight line.
  • Step V the first vehicle determines the intersection of the second straight line and the third straight line as the second point.
  • the first vehicle determines the intersection point of the second straight line and the third straight line, and determines this intersection point as the second point.
  • the first vehicle determines a third point according to the first ground contact line, the second ground contact line, and the second point.
  • the third point is the intersection of the first grounding line and the fourth straight line.
  • the fourth straight line is a straight line that passes through the second point in the image to be detected and is parallel to the second grounding line.
  • the method for the first vehicle to determine the third point is:
  • Step VI the first vehicle determines the fourth straight line.
  • the first vehicle draws, through the second point, a line parallel to the second ground contact line.
  • the first vehicle determines the parallel line to be the fourth straight line.
  • Step VII The first vehicle determines that the intersection of the fourth straight line and the first ground contact line is the third point.
  • the first vehicle determines the intersection point d of the fourth straight line and the first ground contact line, and determines the intersection point d as the third point.
  • the first vehicle determines the first line according to the first point and the third point.
  • the first vehicle draws a line segment a with the first point and the third point as endpoints respectively, and the first vehicle determines the line segment as the first line.
  • the first vehicle determines a second line according to the second point and the third point.
  • the first vehicle draws a line segment b with the second point and the third point as endpoints respectively, and the first vehicle determines the line segment as the second line.
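  • A minimal sketch of the point constructions in steps I to VII above, assuming the eye-level line is horizontal in the image (so a line perpendicular to it is a vertical line u = const). Ground contact lines are represented as (a, b) for v = a*u + b, and all coordinate values are hypothetical.

```python
def line_from_points(p, q):
    """(a, b) of the line v = a*u + b through two image points."""
    a = (q[1] - p[1]) / (q[0] - p[0])
    return a, p[1] - a * p[0]

def intersect_lines(l1, l2):
    a1, b1 = l1
    a2, b2 = l2
    u = (b2 - b1) / (a1 - a2)
    return (u, a1 * u + b1)

def vertical_intersection(line, u):
    """Intersection of v = a*u + b with the vertical line at column u."""
    a, b = line
    return (u, a * u + b)

eye_level_v = 300.0                                             # horizontal eye-level line v = 300
first_ground_line = line_from_points((250, 460), (330, 430))   # left-side ground contact line
second_ground_line = line_from_points((330, 430), (420, 445))  # rear-side ground contact line
first_boundary_point = (230.0, 410.0)
second_boundary_point = (430.0, 420.0)
second_line_far_endpoint = (420.0, 445.0)                       # endpoint far from the first ground line

# First point: first ground line (extended) meets the vertical line through the first boundary point.
first_point = vertical_intersection(first_ground_line, first_boundary_point[0])

# Second point: the line through b (first ground line meets the eye-level line) and the far endpoint,
# intersected with the vertical line through the second boundary point.
a1, b1 = first_ground_line
b_point = ((eye_level_v - b1) / a1, eye_level_v)
second_straight_line = line_from_points(b_point, second_line_far_endpoint)
second_point = vertical_intersection(second_straight_line, second_boundary_point[0])

# Third point: first ground line meets the line through the second point parallel to the second ground line.
a2, _ = second_ground_line
fourth_line = (a2, second_point[1] - a2 * second_point[0])
third_point = intersect_lines(first_ground_line, fourth_line)

# The two line segments of the three-dimensional information.
first_line = (first_point, third_point)
second_line = (second_point, third_point)
print(first_line, second_line)
```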
  • Case 2 The first vehicle determines the three-dimensional information of the first detection object according to the passable area boundary and the touchdown line of one side surface of the first detection object.
  • in Case 2, the passable area boundary of the first detection object and the ground contact line of the first detection object determined by the first vehicle in the above S102 have the following characteristics:
  • the passable area boundary of the first detection object includes a plurality of boundary points corresponding to the third identification; the third identification also corresponds to the third side surface of the first detection object.
  • the grounding line of the first detection object includes a third grounding line; the third grounding line is a grounding line determined by fitting a plurality of boundary points corresponding to the third identification.
  • the plurality of boundary points with third identifiers include a third boundary point and a fourth boundary point.
  • the third boundary point is the point farthest from one end of the third grounding line among the plurality of boundary points with the third identification.
  • the fourth boundary point is the point farthest from the other end of the third grounding line among the plurality of boundary points with the third identification.
  • the three-dimensional information of the first detection object determined by the first vehicle in the above S103 is determined according to two points and one line corresponding to the first detection object.
  • the first of the two points is the projection of the third boundary point on the ground.
  • the second of the two points is the projection of the fourth boundary point on the ground.
  • S103 can be specifically implemented by the following S1036-S1038, and S1036-S1038 will be described in detail below.
  • the first vehicle determines the first point according to the third boundary point and the third ground contact line.
  • the first point is the intersection of the third ground contact line and the fifth straight line.
  • the fifth straight line is a straight line passing through the third boundary point and perpendicular to the third ground contact line.
  • the method for the first vehicle to determine the first point of the three-dimensional information of the first detection object is as follows:
  • Step 1 The first vehicle determines the fifth straight line.
  • the fifth straight line is a straight line that passes through the third boundary point in the image to be detected and is perpendicular to the eye level.
  • the first vehicle draws, through the third boundary point, a line perpendicular to the eye-level line, and determines this perpendicular line as the fifth straight line.
  • Step 2 The first vehicle determines the intersection of the fifth straight line and the third ground contact line as the first point.
  • the first vehicle draws an extension line of the third grounding line; the extension line of the third grounding line intersects the fifth straight line at point e, and the first vehicle determines point e as the first point.
  • the first vehicle determines the second point according to the fourth boundary point and the third ground contact line.
  • the second point is the intersection of the third ground contact line and the sixth straight line.
  • the sixth straight line is a straight line that passes through the fourth boundary point and is perpendicular to the third ground contact line.
  • the method for the first vehicle to determine the second point of the three-dimensional information of the first detection object is as follows:
  • Step 3 The first vehicle determines the sixth straight line.
  • the sixth straight line is a straight line that passes through the fourth boundary point in the image to be detected and is perpendicular to the eye level.
  • the first vehicle draws, through the fourth boundary point, a line perpendicular to the eye-level line, and determines this perpendicular line as the sixth straight line.
  • Step 4 The first vehicle determines the intersection of the sixth straight line and the third touchdown line as the second point.
  • the first vehicle draws an extension line of the third ground contact line; the extension line of the third ground contact line intersects the sixth straight line at point f, and the first vehicle determines point f as the second point.
  • the first vehicle determines the first line according to the first point and the second point.
  • the first vehicle draws a line segment c with the first point and the second point as endpoints respectively, and the first vehicle determines the line segment c as the first line.
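  • A minimal sketch of the single-side case described in steps 1 to 4 above, again assuming a horizontal eye-level line so that a line perpendicular to it is a vertical line in the image. The fitted third ground contact line and the two boundary points are hypothetical values.

```python
def on_line(line, u):
    """Point on the line v = a*u + b at column u."""
    a, b = line
    return (u, a * u + b)

third_ground_line = (0.02, 430.0)        # v = 0.02*u + 430, rear-side ground contact line
third_boundary_point = (350.0, 415.0)    # extreme point near one end of the third ground line
fourth_boundary_point = (470.0, 418.0)   # extreme point near the other end

# Steps 1-2: the vertical line through the third boundary point meets the (extended) third ground line.
first_point = on_line(third_ground_line, third_boundary_point[0])

# Steps 3-4: likewise for the fourth boundary point.
second_point = on_line(third_ground_line, fourth_boundary_point[0])

# The single line segment of the three-dimensional information.
first_line = (first_point, second_point)
print(first_line)
```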
  • the first vehicle can determine the three-dimensional information of the first detection object according to the boundary of the passable area of the first detection object and the ground contact line of the first detection object.
  • the above three-dimensional information is the three-dimensional information represented by the first detection object in the to-be-detected image.
  • if the first vehicle needs to determine the real three-dimensional information of the first detection object, it also needs to bring the three-dimensional information represented by the first detection object in the image to be detected into the vehicle body coordinate system of the first vehicle, so that the first vehicle can determine the position of the first detection object relative to the first vehicle, the size of the first detection object (at least one of length, width and height), and information such as the orientation of the first detection object.
  • the method further includes:
  • the first vehicle inputs the three-dimensional information of the first detection object into the vehicle body coordinate system, and determines at least one of the size, direction and relative position of the first detection object.
  • the first vehicle establishes a first rectangular coordinate system according to the image to be detected; the image to be detected is located in the first rectangular coordinate system.
  • the first rectangular coordinate system can be a matrix pre-set for the image acquisition device, and all pictures acquired by the image acquisition device can be mapped into the matrix.
  • The first vehicle determines the coordinates of the three-dimensional information of the first detection object in the first rectangular coordinate system. After this, the first vehicle determines the intrinsic and extrinsic parameters of the image acquisition device. The first vehicle then converts the coordinates of the three-dimensional information of the first detection object in the first rectangular coordinate system into coordinates in the vehicle body coordinate system, according to the intrinsic and extrinsic parameters of the image acquisition device and the position of the image acquisition device in the vehicle body coordinate system.
  • the first vehicle determines the position, movement direction, size and other information of the first detection object according to the coordinates of the three-dimensional information of the first detection object in the vehicle body coordinate system.
  • the internal parameters of the image acquisition device are used to represent some parameters related to the image acquisition device itself, such as the focal length and pixel size of the image acquisition device.
  • the external parameters of the image acquisition device are used to characterize the parameters of the image acquisition device in the world coordinate system, such as the position and rotation direction of the image acquisition device in the world coordinate system.
  • the internal parameter matrix and the external parameter matrix of the image acquisition device are preset in the first vehicle.
  • When the first vehicle converts a coordinate point in the vehicle body coordinate system (referred to as the first coordinate point) into a coordinate point in the first rectangular coordinate system, the first vehicle multiplies the first coordinate point by the extrinsic parameter matrix and then by the intrinsic parameter matrix to obtain the coordinate point corresponding to the first coordinate point in the first rectangular coordinate system.
  • When the first vehicle needs to find the point in the vehicle body coordinate system corresponding to a coordinate point in the first rectangular coordinate system, it only needs to perform the inverse of the above operation to determine, from the coordinate point in the first rectangular coordinate system, its corresponding point in the vehicle body coordinate system; a sketch of both conversions follows below.
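The two conversions can be sketched with a pinhole model in which the extrinsic parameters map a body-frame point into the camera frame and the intrinsic matrix maps it to pixels; the reverse direction, for a point known to lie on the ground plane, intersects the back-projected ray with the ground. The matrices K, R and t below are placeholder values, not calibration data from the embodiment, and the axis conventions must be adapted to the actual vehicle calibration.

```python
import numpy as np

# Placeholder intrinsic matrix K and extrinsic parameters R, t of the image
# acquisition device (real values come from camera calibration).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.array([[0.0, -1.0,  0.0],      # body frame (x fwd, y left, z up)
              [0.0,  0.0, -1.0],      # -> camera frame (x right, y down, z fwd)
              [1.0,  0.0,  0.0]])
t = np.array([0.0, 1.5, 0.0])         # places the camera 1.5 m above the body origin

def body_to_pixel(p_body):
    """Apply the extrinsic parameters, then the intrinsic matrix, then
    normalise by depth to obtain pixel coordinates (u, v)."""
    p_cam = R @ np.asarray(p_body, dtype=float) + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def pixel_to_body_on_ground(u, v, ground_z=0.0):
    """Inverse operation for a pixel assumed to lie on the ground plane:
    back-project the ray into the body frame and intersect it with z = ground_z."""
    ray_body = R.T @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    cam_origin_body = -R.T @ t
    s = (ground_z - cam_origin_body[2]) / ray_body[2]
    return cam_origin_body + s * ray_body

uv = body_to_pixel([10.0, 0.0, 0.0])        # a ground point 10 m ahead -> approx. (640, 510)
p = pixel_to_body_on_ground(uv[0], uv[1])   # recovers approximately [10, 0, 0]
```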
  • the size of the first detection object determined by the first vehicle includes the length and width of the first detection object.
  • the first vehicle may determine the height of the first detection object according to the type of the first detection object.
  • the first vehicle identifies the vehicle type of the second vehicle, and determines the height of the vehicle according to the vehicle type.
  • the vehicle type of the second vehicle is the class of the vehicle, for example: mini car, small car, compact car, medium car, medium-to-large car, large car, small sport utility vehicle (SUV), compact SUV, medium SUV, medium-to-large SUV, large SUV, compact multi-purpose vehicle (MPV), medium MPV, medium-to-large MPV, large MPV, sports car, pickup truck, minivan, light bus, mini truck, and so on.
  • the first vehicle is preconfigured with standard sizes of vehicles of various grades. After the first vehicle determines the vehicle grade of the second vehicle, the height of the second vehicle is determined according to the vehicle grade of the second vehicle.
  • the vehicle type of the second vehicle is the model of the vehicle (for example, vehicle brand+specific model).
  • the first vehicle is pre-configured with standard sizes of vehicles of various models. After the first vehicle determines the vehicle model of the second vehicle, the height of the second vehicle is determined according to the vehicle model of the second vehicle.
  • the first vehicle can also determine the length and width of the second vehicle according to the type of the second vehicle.
  • the first vehicle can determine the exact length and width of the second vehicle by cross-checking the length and width determined by this method against the length and width determined from the three-dimensional information of the second vehicle in the image to be detected; a sketch of this lookup and cross-check follows below.
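A minimal sketch of the class-based size lookup and the cross-check, assuming an illustrative table and tolerance; the table entries and the reconciliation rule are assumptions, since the embodiment only states that the preconfigured standard sizes and the image-derived sizes are verified against each other.

```python
# Illustrative lookup table of standard sizes (length, width, height, in metres)
# keyed by vehicle class; a real deployment would preconfigure the full table.
STANDARD_SIZES = {
    "compact car": (4.5, 1.8, 1.45),
    "medium SUV":  (4.7, 1.9, 1.70),
    "large MPV":   (5.2, 1.9, 1.80),
}

def estimate_size(vehicle_class, measured_length, measured_width, tolerance=0.15):
    """Take the height from the class table, and keep the image-derived
    length/width only if they agree with the standard size within tolerance."""
    std_l, std_w, std_h = STANDARD_SIZES[vehicle_class]
    length = measured_length if abs(measured_length - std_l) / std_l <= tolerance else std_l
    width = measured_width if abs(measured_width - std_w) / std_w <= tolerance else std_w
    return length, width, std_h

print(estimate_size("compact car", 4.42, 1.83))   # -> (4.42, 1.83, 1.45)
```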
  • the first vehicle determines the positions, sizes and movement directions of all vehicles in the image to be detected.
  • the first vehicle plans a driving route for the first vehicle according to the positions, sizes and moving directions of all vehicles, as well as the current road information, obstacle information, and vehicle destination information determined by other devices in the first vehicle.
  • After the driving route is determined, the first control instruction is generated according to the planned driving route.
  • the first control instruction is used to instruct the first vehicle to travel according to the planned travel route.
  • the first control instruction is issued to the first vehicle, and the first vehicle performs intelligent driving according to the issued control instruction.
  • the first vehicle can determine the size, direction, and position of the second vehicle relative to the first vehicle according to the three-dimensional information of the second vehicle. After that, the first vehicle can plan the driving route of the first vehicle according to the information, so as to realize intelligent driving.
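The overall flow from perception to control described above can be summarised in a short sketch; plan_route and to_control_instruction stand in for the planning and control systems, and their bodies here are placeholders rather than the actual planner.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Vehicle3D:
    position: tuple    # body-frame ground-projection vertex closest to the ego vehicle
    size: tuple        # (length, width, height) in metres
    heading: float     # orientation in radians, taken from the direction of the first line

def plan_route(vehicles: List[Vehicle3D], road_info, obstacles, destination):
    # Placeholder: a real planner searches for a collision-free path around the
    # reported vehicle footprints, obstacles, and road constraints.
    return {"waypoints": [(0.0, 0.0), (20.0, 0.5)], "speed_mps": 8.0}

def to_control_instruction(route):
    # Placeholder: wrap the planned route so the vehicle follows it.
    return {"type": "follow_route", "route": route}

def plan_and_control(vehicles, road_info, obstacles, destination):
    """Detected 3D info of surrounding vehicles -> planned route -> control instruction."""
    route = plan_route(vehicles, road_info, obstacles, destination)
    return to_control_instruction(route)
```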
  • In order to implement the above-mentioned functions, each device, for example, the first vehicle and the second vehicle, includes at least one of a hardware structure and a software module corresponding to each function.
  • With the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the vehicle includes corresponding hardware structures and/or software modules for executing each function.
  • Those skilled in the art should easily realize that the units and method steps of each example described in conjunction with the embodiments disclosed in the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is implemented by hardware or computer software-driven hardware depends on the specific application scenarios and design constraints of the technical solution.
  • FIG. 14 is a schematic structural diagram of an apparatus for determining three-dimensional information of a detection object provided by an embodiment of the present application. These apparatuses for determining the three-dimensional information of the detection object can be used to implement the functions of the processors in the above method embodiments, and thus can also achieve the beneficial effects of the above method embodiments.
  • the device for determining the three-dimensional information of the detection object may be the processor 161 shown in FIG. 1 .
  • the apparatus 1400 for determining three-dimensional information of a detection object includes a processing unit 1410 and a communication unit 1420 .
  • the apparatus 1400 for determining the three-dimensional information of the detection object is used to realize the function of the first vehicle in the method embodiment shown in FIG. 7 , FIG. 9 , FIG. 10 , or FIG. 12 .
  • the processing unit 1410 is used to execute S102 to S103, and the communication unit 1420 is used to communicate with other entities.
  • the processing unit 1410 is used to execute S101, S1021 to S1023, S103 and S104, and the communication unit 1420 is used to communicate with other entities.
  • the processing unit 1410 is used to execute S101, S102, and S1031 to S1035, and the communication unit 1420 is used to communicate with other entities.
  • the processing unit 1410 is used to execute S101, S102, and S1036 to S1038, and the communication unit 1420 is used to communicate with other entities.
  • More detailed descriptions of the processing unit 1410 and the communication unit 1420 can be obtained directly by referring to the relevant descriptions in the method embodiments shown in FIG. 7, FIG. 9, FIG. 10 or FIG. 12, and details are not repeated here.
  • each step in the method provided in this embodiment may be completed by an integrated logic circuit of hardware in a processor or an instruction in the form of software.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the processor in this application may include, but is not limited to, at least one of the following: a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a microcontroller (MCU), an artificial intelligence processor, or other types of computing devices that run software; each computing device may include one or more cores for executing software instructions to perform operations or processing.
  • the processor can be a separate semiconductor chip, or can be integrated with other circuits into a semiconductor chip. For example, it can form an SoC (system on chip) with other circuits (such as codec circuits, hardware acceleration circuits, or various bus and interface circuits).
  • the processor may further include necessary hardware accelerators, such as a field programmable gate array (FPGA), a programmable logic device (PLD), or a logic circuit that implements dedicated logic operations.
  • the memory in this embodiment of the present application may include at least one of the following types: read-only memory (ROM) or other types of static storage devices that can store static information and instructions, random access memory (RAM) or other types of dynamic storage devices that can store information and instructions, or electrically erasable programmable read-only memory (EEPROM).
  • the memory may also be a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, without limitation.
  • Embodiments of the present application further provide a computer-readable storage medium, including instructions, which, when executed on a computer, cause the computer to execute any of the foregoing methods.
  • Embodiments of the present application also provide a computer program product containing instructions, which, when run on a computer, enables the computer to execute any of the above methods.
  • An embodiment of the present application further provides a communication system, including: the above-mentioned base station and a server.
  • An embodiment of the present application further provides a chip. The chip includes a processor and an interface circuit, the interface circuit is coupled to the processor, the processor is used to run a computer program or instructions to implement the above method, and the interface circuit is used to communicate with other modules outside the chip.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or by wireless means (e.g., infrared, radio, or microwave).
  • A computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk (SSD)), and the like.

Abstract

A method and apparatus for determining three-dimensional information of a detection object, relating to the field of intelligent driving and used to determine the three-dimensional information of a detection object. The method includes: acquiring an image to be detected, the image to be detected including a first detection object; determining a passable area boundary of the first detection object and a ground contact line of the first detection object, where the passable area boundary includes the boundary of the first detection object in the image to be detected and the ground contact line is the line connecting the points where the first detection object meets the ground; and determining the three-dimensional information of the first detection object according to the passable area boundary and the ground contact line. The method can determine the three-dimensional information of a detection object from a picture captured by a monocular camera, which reduces the hardware cost of determining the three-dimensional information of the first detection object and reduces the amount of computation when the first vehicle processes the image.

Description

确定检测对象的三维信息的方法及装置
本申请要求于2020年08月11日提交国家知识产权局、申请号为202010803409.2、申请名称为“确定检测对象的三维信息的方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及智能驾驶领域,尤其涉及一种确定检测对象的三维信息的方法及装置。
背景技术
在智能驾驶场景中,行驶的车辆(在本申请中记为第一车辆)需要实时的检测周围其他车辆(在本申请中记为第二车辆)的相对于第一车辆的位置,第二车辆的大小(车辆的长度、车辆的宽度)以及第二车辆的方向(车辆朝向,行驶方向等),以便于第一车辆根据这些信息规划自身的行驶路线,对其他车辆可能带来的危害行为进行预判和规避。
第一车辆确定第二车辆的大小,方向以及相对于第一车辆的位置时,需要确定第二车辆的三维(3D)信息。当前第一车辆可以通过双目3D的方式,确定第二车辆的3D信息。但是双目3D的方式需要搭配双目摄像头,价格昂贵;并且目前的对于第二车辆3D信息确定过程中所使用的算法需要较高的图像标注成本,以及计算量。
发明内容
本申请提供一种确定检测对象的三维信息的方法及装置,解决了现有技术中确定周围车辆的三维信息时,对硬件要求高,计算数据量大的问题。
为达到上述目的,本申请采用如下技术方案:
第一方面,提供一种确定检测对象的三维信息的方法,包括:第一车辆获取待检测图像;待检测图像包括第一检测对象;第一车辆确定第一检测对象的可通行区域边界,以及第一检测对象的触地线;可通行区域边界包括第一检测对象在待检测图像中的边界;触地线为第一检测对象与地面的交点的连线;第一车辆根据可通行区域边界,以及触地线,确定第一检测对象的三维信息。
基于上述技术方案,本申请提供的确定检测对象的三维信息的方法,第一车辆能够根据采集到的待检测图像,确定第一检测对象的可通行区域边界以及触地线,进一步的,第一车辆根据第一检测对象的可通行区域边界,以及触地线,确定第一检测对象在二维图像中表征出的三维信息。
上述待检测图像可以为单目摄像头采集的二维图像。这样,第一车辆通过单目摄像头采集第一检测对象的图像信息即可确定第一检测对象的三维信息。相比较于现有技术中第一车辆需要依赖双目摄像头采集第一检测对象的图像信息以确定第一检测对象的三维信息的方法,本申请中第一车辆采用单目摄像头采集第一检测对象的图像信息确定第一检测对象的三维信息,能够大大降低确定第一检测对象的三维信息的硬件成本。
此外,本申请提供的确定检测对象的三维信息的方法,第一车辆只需标记第一检测对象的可通行区域边界,并根据第一检测对象的可通行区域边界确定第一检测对象的触地线。第一车辆可以根据第一检测对象的可通行区域边界,以及触地线,结合第一检测对象在待检测图像中的视觉关系等,确定第一检测对象在二维图像中表征出的三维信息。因此,本申请提供的确定检测对象的三维信息的方法,无需第一车辆进行其他额外的数据标注训练,从而降低了确定第一检测对象的三维信息的计算量,以及降低了确定第一检测对象的三维信息占用的图形处理器(Graphics Processing Unit,GPU)资源。
结合第一方面,在一种可能的实现方式中,第一检测对象的三维信息用于确定第一检测对象的大小、方向,以及相对位置中的至少一项。
例如,第一车辆可以将第一检测对象在待检测图像中表征出的三维信息,转换到第一车辆的车体坐标系中,确定第一检测对象在第一车辆的车体坐标系中对应的大小,方向,以及第一检测对象相对于第一车辆的位置。这样,第一车辆可以结合周围多个第一检测对象在第一车辆的车体坐标系中对应的大小,方向,以及第一检测对象相对于第一车辆的位置,规划第一车辆的行车路线,实现第一车辆的智能驾驶。
结合第一方面,在一种可能的实现方式中,第一检测对象的可通行区域边界包括多个与第一标识对应的边界点,以及与多个第二标识对应的边界点;第一标识还对应于第一检测对象的第一侧面,第二标识还对应于第一检测对象的第二侧面;第一侧面和第二侧面为第一检测对象中两个相交的侧面。
基于此,在第一车辆采用单目摄像头采集第一检测对象的两个侧面(即待检测图像中包括第一检测对象的两个侧面)的情况下,第一车辆可以分别对每个侧面的可通行区域边界进行标记,并为不同的侧面的可通行区域边界分配不同的标识,第一车辆可以根据不同的标识确定各个侧面对应的可通行区域边界。
结合第一方面,在一种可能的实现方式中,触地线中包括第一触地线和第二触地线;第一触地线为拟合多个与第一标识对应的边界点确定的触地线;第二触地线为拟合多个与第二标识对应的边界点确定的触地线。
基于此,第一车辆根据每个侧面对应的可通行区域边界,确定第一检测对象在待检测图像中示出的每个侧面对应的触地线。由于触地线是根据每个侧面对应的可通行区域边界上的点拟合得到的,因此每个侧面对应的触地线能够表征该侧面在地面投影的部分边界(即每个面在地面投影的最外沿部分)。
结合第一方面,在一种可能的实现方式中,多个与第一标识对应的边界点中包括第一边界点;第一边界点为多个具有第一标识的边界点中,与第二触地线的距离最大的边界点;多个与第二标识对应的边界点中包括第二边界点;第二边界点为多个具有第二标识的边界点中,与第一触地线的距离最大的边界点。
基于此,上述第一边界点能够表征第一检测对象的第一侧面中与第一车辆的距离最大的点,也即是说,第一边界点能够表征第一侧面距离第一车辆最远的顶点。第二边界点能够表征第一检测对象的第二侧面中,与第一车辆的距离最大的点,也即是说,第二边界点能够表征第二侧面距离第一车辆最远的顶点。
结合第一方面,在一种可能的实现方式中,第一检测对象的三维信息根据第一检 测对象对应的三个点和两条线确定;其中,三个点中的第一个点为第一边界点在地面上的投影;三个点中的第二个点为第二边界点在地面上的投影;三个点中的第三个点过第二个点,且与第二触地线平行的直线与第一触地线的交点;两条线中的第一条线为第一个点和第三个点之间的连线;两条线中的第二条线为第二个点和第三个点之间的连线。
结合第一方面,在一种可能的实现方式中,确定所述第一边界点在所述地面上的投影为第一个点;确定所述第二边界点在所述地面上的投影为第二个点;确定过所述第二边界点在所述地面上的投影,且与所述第二触地线平行的直线与所述第一触地线的交点为第三个点;确定所述第一个点和所述第三个点之间的连线为第一条线;确定所述第二个点和所述第三个点之间的连线为第二条线;根据所述第一个点,所述第二个点,所述第三个点,所述第一条线,以及所述第二条线,确定所述第一检测对象的三维信息。
基于上述技术方案,第一边界点在地面上的投影能够表征第一侧面在地面上的投影与第一车辆的距离最远的点,第二边界点在地面上的投影能够表征第二侧面在地面上的投影与第二车辆的距离最远的点,第三个点能够表征第一侧面在地面投影与第二侧面在地面投影的交点。第一触地线能够表征第一侧面在地面投影的方向。第二触地线能够表征第二侧面在地面投影的方向。因此,第一车辆可以确定第一条线为第一侧面在地面投影的最外侧框线,第二条线为第二侧面在地面投影的最外侧框线。
进一步的,第一车辆可以根据第一条线和/或第二条线的方向,以及第一条线和第二条线在待检测图像中的位置,确定第一检测对象的方向。第一车辆可以根据第一条线的长度和第二条线的长度,以及第一条线和第二条线在待检测图像中的位置,确定第一检测对象的大小,第一车辆可以根据第一条线和第二条线在待检测图像中的位置,确定第一检测对象相对于第一车辆的位置。
这样,第一车辆仅需根据第一检测对象的可通行区域边界,以及触地线,对第一检测对象上特定的点进行投影,即可确定第一检测对象的三维信息,大大降低了第一车辆确定第一检测对象的三维信息的计算量。
结合第一方面,在一种可能的实现方式中,第一车辆根据第一边界点,以及第一触地线,确定第一个点;第一车辆根据第一触地线,第二触地线,以及第二边界点,确定第二个点;根据第一触地线,第二触地线,以及第二个点,确定第三个点。
基于此,第一车辆可以根据可通行区域边界,以及触地线确定第一检测对象在地面的投影的顶点。
结合第一方面,在一种可能的实现方式中,第一车辆确定第一直线;第一直线为在待检测图像中过第一边界点,且与视平线垂直的直线;第一车辆确定第一直线与第一触地线的交点为第一个点。
基于此,第一车辆可以根据第一触地线和第一边界点在待检测图像中的视觉关系,快速准确的确定第一边界点在地面的投影。第一车辆通过该方式确定第一边界点在地面的投影可以进一步降低第一车辆确定第一检测对象的三维信息的计算量。
结合第一方面,在一种可能的实现方式中,第一车辆确定第二直线和第三直线;其中,第二直线为在待检测图像中,过第一触地线与视平线的交点,以及第二触地线 中远离第一触地线的端点的直线;第三直线为在待检测图像中过第二边界点,且垂直于视平线的直线;第一车辆确定第二直线和第三直线的交点为第二个点。
基于此,第一车辆可以根据第一触地线和第二触地线以及第二边界点在待检测图像中的视觉关系,快速准确的确定第一边界点在地面的投影。第一车辆通过该方式确定第二边界点在地面的投影可以进一步降低第一车辆确定第一检测对象的三维信息的计算量。
结合第一方面,在一种可能的实现方式中,第一车辆确定第四直线;第四直线为在待检测图像中过第二个点,且与第二触地线平行的直线;第一车辆确定第四直线与第一触地线的交点为第三个点。
基于此,第一车辆可以根据第一触地线和第二触地线以及第二边界点在待检测图像中的视觉关系,快速准确的确定第一侧面和第二侧面在地面投影的交点。第一车辆通过该方式确定第一侧面和第二侧面在地面投影的交点,可以进一步降低第一车辆确定第一检测对象的三维信息的计算量。
结合第一方面,在一种可能的实现方式中,第一检测对象的可通行区域边界包括多个与第三标识对应的边界点;第三标识还对应于第一检测对象的第三侧面。
基于此,在第一车辆采用单目摄像头采集第一检测对象的一个侧面(即待检测图像中包括第一检测对象的一个侧面)的情况下,第一车辆可以对该侧面的可通行区域边界进行标记,并为其分配相应的标识,第一车辆可以根据具有该标识的可通行区域边界点确定该侧面对应的可通行区域边界。
结合第一方面,在一种可能的实现方式中,第一检测对象的触地线中包括第三触地线;第三触地线为拟合多个与第三标识对应的边界点,确定的触地线。
基于此,第一车辆可以根据该侧面的可通行区域边界,确定该侧面的触地线。由于该触地线是根据该侧面对应的可通行区域边界上的点拟合得到的,因此该触地线能够表征该侧面在地面投影的部分边界(即每个面在地面投影的最外沿部分)。
结合第一方面,在一种可能的实现方式中,多个具有第三标识的边界点中包括第三边界点和第四边界点;第三边界点为多个具有第三标识的边界点中,距离第三触地线的一端最远的点;第四边界点为多个具有第三标识的边界点中,距离第三触地线的另一端最远的点。
基于此,上述第三边界点和第四边界点能够表征第一检测对象的第三侧面的两个顶点。
结合第一方面,在一种可能的实现方式中,第一检测对象的三维信息根据第一检测对象对应的两个点和一条线确定;两个点中的第一个点为第三边界点在地面上的投影;两个点中的第二个点为第四边界点在地面上的投影。
结合第一方面,在一种可能的实现方式中,确定所述第三边界点在所述地面上的投影为第一个点;确定所述第四边界点在所述地面上的投影为第二个点;确定所述第一个点和所述第二个点之间的连线为第一条线;根据所述第一个点,所述第二个点,以及所述第一条线,确定所述第一检测对象的三维信息。
基于上述技术方案,第三边界点在地面的投影能够表征第三侧面在地面投影的一个顶点。第四边界点在地面的投影能够表征第三侧面在地面投影的另一个顶线。第三 边界点在地面的投影和第四边界点在地面的投影之间的连线(第一条线),能够表征第三侧面在地面投影的最外侧框线。
进一步的,第一车辆可以根据第一条线的方向,以及第一条线在待检测图像中的位置,确定第一检测对象的方向。第一车辆可以根据第一条线的长度,以及第一条线在待检测图像中的位置,确定第一检测对象的大小。第一车辆可以根据第一条线在待检测图像中的位置,确定第一检测对象相对于第一车辆的位置。
这样,第一车辆仅需根据第一检测对象的可通行区域边界,以及触地线,对第一检测对象上特定的点进行投影,即可确定第一检测对象的三维信息,大大降低了第一车辆确定第一检测对象的三维信息的计算量。
结合第一方面,在一种可能的实现方式中,根据可通行区域边界,以及触地线,确定第一检测对象的三维信息,包括:根据第三边界点和第三触地线,确定第一个点;根据第四边界点和第三触地线,确定第二个点;根据第一个点和第二个点,确定一条线。
基于此,第一车辆确定第一检测对象在地面投影的顶点的信息,以及第一检测对象在地面的投影的最外围边界。第一车辆可以根据第一检测对象在地面投影的顶点,和最外围边界,确定第一检测对象的大小和方向。例如,在第一检测对象为第二车辆时,第二车辆在地面的投影的顶点和最外围边界线,能够表征第二车辆的大小,最外围边界线的方向能够表征第二车辆的方向。
结合第一方面,在一种可能的实现方式中,根据第三边界点和第三触地线,确定第一个点,包括:确定第五直线;第五直线为在待检测图像中过第三边界点,且与视平线垂直的直线;确定第五直线与第三触地线的交点,为第一个点。
基于此,第一车辆可以根据第三触地线和第三边界点在待检测图像中的视觉关系,快速准确的确定第三边界点在地面的投影。第一车辆通过该方式确定第三边界点在地面的投影可以进一步降低第一车辆确定第一检测对象的三维信息的计算量。
结合第一方面,在一种可能的实现方式中,根据第四边界点和第三触地线,确定第二个点,包括:确定第六直线;第六直线为在待检测图像中过第四边界点,且与视平线垂直的直线;确定第六直线与第三触地线的交点,为第二个点。
基于此,第一车辆可以根据第三触地线和第四边界点在待检测图像中的视觉关系,快速准确的确定第四边界点在地面的投影。第一车辆通过该方式确定第四边界点在地面的投影可以进一步降低第一车辆确定第一检测对象的三维信息的计算量。
结合第一方面,在一种可能的实现方式中,该方法还包括:将第一检测对象的三维信息输入车体坐标系中,确定第一检测对象的大小、方向和相对位置中的至少一项。
基于此,第一车辆通过将第一检测对象的三维信息输入到车体坐标系中,可以确定第一检测对象在三维空间中的大小,方向,以及检测对象相对于第一车辆的位置。
第二方面,提供一种确定检测对象的三维信息的装置,其特征在于,包括:通信单元和处理单元;通信单元,用于获取待检测图像;待检测图像包括第一检测对象;处理单元,用于确定第一检测对象的可通行区域边界,以及第一检测对象的触地线;可通行区域边界包括第一检测对象在待检测图像中的边界;触地线为第一检测对象与地面的交点的连线;处理单元,还用于根据可通行区域边界,以及触地线,确定第一 检测对象的三维信息。
结合第二方面,在一种可能的实现方式中,第一检测对象的三维信息用于确定第一检测对象的大小、方向,以及相对位置中的至少一项。
结合第二方面,在一种可能的实现方式中,第一检测对象的可通行区域边界包括多个与第一标识对应的边界点,以及与多个第二标识对应的边界点;第一标识还对应于第一检测对象的第一侧面,第二标识还对应于第一检测对象的第二侧面;第一侧面和第二侧面为第一检测对象中两个相交的侧面。
结合第二方面,在一种可能的实现方式中,触地线中包括第一触地线和第二触地线;第一触地线为拟合多个与第一标识对应的边界点确定的触地线;第二触地线为拟合多个与第二标识对应的边界点确定的触地线。
结合第二方面,在一种可能的实现方式中,多个与第一标识对应的边界点中包括第一边界点;第一边界点为多个具有第一标识的边界点中,与第二触地线的距离最大的边界点。
多个与第二标识对应的边界点中包括第二边界点;第二边界点为多个具有第二标识的边界点中,与第一触地线的距离最大的边界点。
结合第二方面,在一种可能的实现方式中,第一检测对象的三维信息根据第一检测对象对应的三个点和两条线确定;其中,三个点中的第一个点为第一边界点在地面上的投影;三个点中的第二个点为第二边界点在地面上的投影;三个点中的第三个点过第二个点,且与第二触地线平行的直线与第一触地线的交点;两条线中的第一条线为第一个点和第三个点之间的连线;两条线中的第二条线为第二个点和第三个点之间的连线。
结合第二方面,在一种可能的实现方式中,处理单元具体用于:确定所述第一边界点在所述地面上的投影为第一个点;确定所述第二边界点在所述地面上的投影为第二个点;确定过所述第二边界点在所述地面上的投影,且与所述第二触地线平行的直线与所述第一触地线的交点为第三个点;确定所述第一个点和所述第三个点之间的连线为第一条线;确定所述第二个点和所述第三个点之间的连线为第二条线;根据所述第一个点,所述第二个点,所述第三个点,所述第一条线,以及所述第二条线,确定所述第一检测对象的三维信息。
结合第二方面,在一种可能的实现方式中,处理单元,具体用于:根据第一边界点,以及第一触地线,确定第一个点;根据第一触地线,第二触地线,以及第二边界点,确定第二个点;根据第一触地线,第二触地线,以及第二个点,确定第三个点。
结合第二方面,在一种可能的实现方式中,处理单元,具体用于确定第一直线;第一直线为在待检测图像中过第一边界点,且与视平线垂直的直线;确定第一直线与第一触地线的交点为第一个点。
结合第二方面,在一种可能的实现方式中,处理单元,具体用于:确定第二直线和第三直线;其中,第二直线为在待检测图像中,过第一触地线与视平线的交点,以及第二触地线中远离第一触地线的端点的直线;第三直线为在待检测图像中过第二边界点,且垂直于视平线的直线;确定第二直线和第三直线的交点为第二个点。
结合第二方面,在一种可能的实现方式中,处理单元,具体用于:确定第四直线; 第四直线为在待检测图像中过第二个点,且与第二触地线平行的直线;确定第四直线与第一触地线的交点为第三个点。
结合第二方面,在一种可能的实现方式中,第一检测对象的可通行区域边界包括多个与第三标识对应的边界点;第三标识还对应于第一检测对象的第三侧面。
结合第二方面,在一种可能的实现方式中,第一检测对象的触地线中包括第三触地线;第三触地线为拟合多个与第三标识对应的边界点,确定的触地线。
结合第二方面,在一种可能的实现方式中,多个具有第三标识的边界点中包括第三边界点和第四边界点;第三边界点为多个具有第三标识的边界点中,距离第三触地线的一端最远的点;第四边界点为多个具有第三标识的边界点中,距离第三触地线的另一端最远的点。
结合第二方面,在一种可能的实现方式中,第一检测对象的三维信息根据第一检测对象对应的两个点和一条线确定;两个点中的第一个点为第三边界点在地面上的投影;两个点中的第二个点为第四边界点在地面上的投影。
结合第二方面,在一种可能的实现方式中,所述处理单元,具体用于:确定所述第三边界点在所述地面上的投影为第一个点;确定所述第四边界点在所述地面上的投影为第二个点;确定所述第一个点和所述第二个点之间的连线为第一条线;根据所述第一个点,所述第二个点,以及所述第一条线,确定所述第一检测对象的三维信息。
结合第二方面,在一种可能的实现方式中,处理单元,具体用于:根据第三边界点和第三触地线,确定第一个点;根据第四边界点和第三触地线,确定第二个点;根据第一个点和第二个点,确定一条线。
结合第二方面,在一种可能的实现方式中,处理单元,具体用于:确定第五直线;第五直线为在待检测图像中过第三边界点,且与视平线垂直的直线;确定第五直线与第三触地线的交点,为第一个点。
结合第二方面,在一种可能的实现方式中,处理单元,具体用于:确定第六直线;第六直线为在待检测图像中过第四边界点,且与视平线垂直的直线;确定第六直线与第三触地线的交点,为第二个点。
结合第二方面,在一种可能的实现方式中,处理单元,还用于:将第一检测对象的三维信息输入车体坐标系中,确定第一检测对象的大小、方向和相对位置中的至少一项。
第三方面,本申请提供了一种确定检测对象的三维信息的装置,包括:处理器和存储器,其中,存储器用于存储计算机程序和指令,处理器用于执行计算机程序和指令实现如第一方面和第一方面的任一种可能的实现方式中所描述的方法。该确定检测对象的三维信息的装置可以是第一车辆,也可以是第一车辆中的芯片。
第四方面,本申请提供一种智能车辆,包括:车辆本体、单目摄像头以及如第二方面和第二方面的任一种可能的实现方式中所描述的确定检测对象的三维信息的装置,单目摄像头用于采集待检测图像;确定检测对象的三维信息的装置用于执行如第一方面和第一方面的任一种可能的实现方式中所描述的确定检测对象的三维信息的方法,确定检测对象的三维信息。
结合第四方面,在一种可能的实现方式中,该智能车辆还包括显示屏;显示屏用 于显示检测对象的三维信息。
第五方面,本申请提供一种高级驾驶辅助系统(advanced sriver assistant system,ADAS),包括第二方面和第二方面的任一种可能的实现方式中所描述的确定检测对象的三维信息的装置,该确定检测对象的三维信息的装置用于执行如第一方面和第一方面的任一种可能的实现方式中所描述的确定检测对象的三维信息的方法,确定检测对象的三维信息。
第六方面,本申请提供了一种计算机可读存储介质,计算机可读存储介质中存储有指令,当该指令在计算机上运行时,使得计算机执行如第一方面和第一方面的任一种可能的实现方式中所描述的方法。
第七方面,本申请提供一种包含指令的计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行如第一方面和第一方面的任一种可能的实现方式中所描述的方法。
应当理解的是,本申请中对技术特征、技术方案、有益效果或类似语言的描述并不是暗示在任意的单个实施例中可以实现所有的特点和优点。相反,可以理解的是对于特征或有益效果的描述意味着在至少一个实施例中包括特定的技术特征、技术方案或有益效果。因此,本说明书中对于技术特征、技术方案或有益效果的描述并不一定是指相同的实施例。进而,还可以任何适当的方式组合本实施例中所描述的技术特征、技术方案和有益效果。本领域技术人员将会理解,无需特定实施例的一个或多个特定的技术特征、技术方案或有益效果即可实现实施例。在其他实施例中,还可在没有体现所有实施例的特定实施例中识别出额外的技术特征和有益效果。
附图说明
图1为本申请实施例提供的一种车辆的结构示意图一;
图2为本申请实施例提供的一种ADAS系统的系统架构图;
图3为本申请实施例提供的一种计算机系统的结构示意图;
图4为本申请实施例提供的一种云侧指令自动驾驶车辆的应用示意图一;
图5为本申请实施例提供的一种云侧指令自动驾驶车辆的应用示意图二;
图6为本申请实施例提供的一种计算机程序产品的结构示意图;
图7为本申请实施例提供的一种确定检测对象的三维信息的方法的流程示意图;
图8a为本申请实施例提供的一种第一检测对象的示意图;
图8b为本申请实施例提供的另一种第一检测对象的示意图;
图9为本申请实施例提供的另一种确定检测对象的三维信息的方法的流程示意图;
图10为本申请实施例提供的另一种确定检测对象的三维信息的方法的流程示意图;
图11为本申请实施例提供的一种第一检测对象的三维信息的示意图;
图12为本申请实施例提供的另一种确定检测对象的三维信息的方法的流程示意图;
图13为本申请实施例提供的另一种第一检测对象的三维信息的示意图;
图14为本申请实施例提供的一种确定检测对象的三维信息的装置的结构示意图。
具体实施方式
在本申请的描述中,除非另有说明,“/”表示“或”的意思,例如,A/B可以表示A或B。本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。此外,“至少一个”是指一个或多个,“多个”是指两个或两个以上。“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
需要说明的是,本申请中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其他实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
本申请实施例提供一种确定检测对象的三维信息的方法及装置,该方法应用在车辆中,或者应用于具有控制车辆的功能的其他设备(比如云端服务器、手机终端等)中。车辆或者其他设备可以通过其包含的组件(包括硬件和软件),实施本申请实施例提供的确定检测对象的三维信息的方法,车辆根据图像采集装置采集到待带检测图像,确定检测对象三维信息(大小、方向,相对位置),使得车辆可以根据这些检测对象的三维信息,规划该车辆的行驶路径。
图1为本申请实施例提供的车辆100的功能框图,该车辆100可以是智能车辆。在一个实施例中,车辆100根据图像采集装置采集到待带检测图像,确定检测对象三维信息,使得车辆可以根据这些检测对象的三维信息,规划车辆的行驶路径。
车辆100可包括各种子系统,例如行进系统110、传感器系统120、控制系统130、一个或多个外围设备140以及电源150、计算机系统160和用户接口170。可选地,车辆100可包括更多或更少的子系统,并且每个子系统可包括多个元件。另外,车辆100的每个子系统和元件可以通过有线或者无线互连。
行进系统110可包括为车辆100提供动力运动的组件。在一个实施例中,行进系统110可包括引擎111、传动装置112、能量源113和车轮114。引擎111可以是内燃引擎、电动机、空气压缩引擎或其他类型的引擎组合,例如汽油发动机和电动机组成的混动引擎,内燃引擎和空气压缩引擎组成的混动引擎。引擎111将能量源113转换成机械能量。
能量源113的示例包括汽油、柴油、其他基于石油的燃料、丙烷、其他基于压缩气体的燃料、乙醇、太阳能电池板、电池和其他电力来源。能量源113也可以为车辆100的其他系统提供能量。
传动装置112可以将来自引擎111的机械动力传送到车轮114。传动装置112可包括变速箱、差速器和驱动轴。在一个实施例中,传动装置112还可以包括其他器件,比如离合器。其中,驱动轴可包括可耦合到一个或多个车轮114的一个或多个轴。
传感器系统120可包括感测关于车辆100周边的环境的信息的若干个传感器。例如,传感器系统120可包括定位系统121(定位系统可以是全球定位系统(global positioning system,GPS),也可以是北斗系统或者其他定位系统)、惯性测量单元(inertial measurement unit,IMU)122、雷达123、激光雷达124以及相机125。传感器系统120还可包括监视车辆100的内部系统的传感器(例如,车内空气质量监测器、燃油量表、机油温度表等)。来自这些传感器中的一个或多个的传感器数据可用于检 测对象及其相应特性(位置、形状、方向、速度等)。这种检测和识别是车辆100自动驾驶的安全操作的关键功能。
定位系统121可用于估计车辆100的地理位置。IMU 122用于基于惯性加速度来感测车辆100的位置和朝向变化。在一个实施例中,IMU 122可以是加速度计和陀螺仪的组合。
雷达123可利用无线电信号来感测车辆100的周边环境内的物体。在一些实施例中,除了感测物体以外,雷达123还可用于感测物体的速度和/或前进方向。
激光雷达124可利用激光来感测车辆100所位于的环境中的物体。在一些实施例中,激光雷达124可包括一个或多个激光源、激光扫描器以及一个或多个检测器,以及其他系统组件。
相机125可用于捕捉车辆100的周边环境的多个图像,以及车辆驾驶舱内的多个图像。相机125可以是静态相机或视频相机。控制系统130可控制车辆100及其组件的操作。控制系统130可包括各种元件,其中包括转向系统131、油门132、制动单元133、计算机视觉系统134、路线控制系统135以及障碍规避系统136。
转向系统131可操作来调整车辆100的前进方向。例如在一个实施例中可以为方向盘系统。
油门132用于控制引擎111的操作速度并进而控制车辆100的速度。
制动单元133用于控制车辆100减速。制动单元133可使用摩擦力来减慢车轮114。在其他实施例中,制动单元133可将车轮114的动能转换为电流。制动单元133也可采取其他形式来减慢车轮114转速从而控制车辆100的速度。
计算机视觉系统134可以操作来处理和分析由相机125捕捉的图像以便识别车辆100周边环境中的物体和/或特征以及车辆驾驶舱内的驾驶员的肢体特征和面部特征。物体和/或特征可包括交通信号、道路状况和障碍物,驾驶员的肢体特征和面部特征包括驾驶员的行为、视线、表情等。计算机视觉系统134可使用物体识别算法、运动中恢复结构(structure from motion,SFM)算法、视频跟踪和其他计算机视觉技术。在一些实施例中,计算机视觉系统134可以用于为环境绘制地图、跟踪物体、估计物体的速度、确定驾驶员行为、人脸识别等等。
路线控制系统135用于确定车辆100的行驶路线。在一些实施例中,路线控制系统135可结合来自传感器、定位系统121和一个或多个预定地图的数据以为车辆100确定行驶路线。
障碍规避系统136用于识别、评估和避免或者以其他方式越过车辆100的环境中的潜在障碍物。
当然,在一个实例中,控制系统130可以增加部分以上未示出的组件;或者采用其他组件替换上述示出的部分组件;又或者也可以减少一部分上述示出的组件。
车辆100通过外围设备140与外部传感器、其他车辆、其他计算机系统或用户之间进行交互。外围设备140可包括无线通信系统141、车载电脑142、麦克风143和/或扬声器144。
在一些实施例中,外围设备140提供车辆100的用户与用户接口170交互的手段。例如,车载电脑142可向车辆100的用户提供信息。用户接口170还可操作车载电脑 142来接收用户的输入。车载电脑142可以通过触摸屏进行操作。在其他情况中,外围设备140可提供用于车辆100与位于车内的其它设备通信的手段。例如,麦克风143可从车辆100的用户接收音频(例如,语音命令或其他音频输入)。类似地,扬声器144可向车辆100的用户输出音频。
无线通信系统141可以直接地或者经由通信网络来与一个或多个设备无线通信。例如,无线通信系统141可使用3G蜂窝通信,例如CDMA、EVD0、GSM/GPRS,或者使用4G蜂窝通信,例如LTE,或者使用5G蜂窝通信。无线通信系统141可利用WiFi与无线局域网(wireless local area network,WLAN)通信。在一些实施例中,无线通信系统141可利用红外链路、蓝牙或ZigBee与设备直接通信。无线通信系统141还可以利用其他无线协议与设备通信。例如各种车辆通信系统。无线通信系统141可包括一个或多个专用短程通信(dedicated short range communications,DSRC)设备。
电源150可向车辆100的各种组件提供电力。在一个实施例中,电源150可以为可再充电锂离子或铅酸电池。这种电池的一个或多个电池组可被配置为电源,从而为车辆100的各种组件提供电力。在一些实施例中,电源150和能量源113可一起实现,例如新能源车辆中的纯电动车辆或者油电混动车辆等。
车辆100的部分或所有功能受计算机系统160控制。计算机系统160可包括至少一个处理器161,处理器161执行存储在例如数据存储装置162这样的非暂态计算机可读介质中的指令1621。计算机系统160还可以是采用分布式方式控制车辆100的个体组件或子系统的多个计算设备。
处理器161可以是任何常规的处理器,诸如商业可获得的中央处理单元(central processing unit,CPU)。替选地,该处理器可以是诸如专用集成电路(application specific integrated circuit,ASIC)或其它基于硬件的处理器的专用设备。尽管图1功能性地图示了处理器、存储器、和在相同物理外壳中的其它元件,但是本领域的普通技术人员应该理解该处理器、计算机系统、或存储器实际上可以包括可以存储在相同的物理外壳内的多个处理器、计算机系统、或存储器,或者包括可以不存储在相同的物理外壳内的多个处理器、计算机系统、或存储器。例如,存储器可以是硬盘驱动器,或位于不同于物理外壳内的其它存储介质。因此,对处理器或计算机系统的引用将被理解为包括对可以并行操作的处理器或计算机系统或存储器的集合的引用,或者可以不并行操作的处理器或计算机系统或存储器的集合的引用。不同于使用单一的处理器来执行此处所描述的步骤,诸如转向组件和减速组件的一些组件每个都可以具有其自己的处理器,处理器只执行与特定于组件的功能相关的计算。
在此处所描述的各个方面中,处理器可以位于远离该车辆并且与该车辆进行无线通信的设备中。在其它方面中,此处所描述的过程中的一些在布置于车辆内的处理器上执行而其它则由远程处理器执行,包括采取执行单一操纵的必要步骤。
在一些实施例中,数据存储装置162可包含指令1621(例如,程序逻辑),指令1621可被处理器161执行来执行车辆100的各种功能,包括以上描述的全部或者部分功能。数据存储装置162也可包含额外的指令,包括向行进系统110、传感器系统120、控制系统130和外围设备140中的一个或多个发送数据、从其接收数据、与其交互和/或对其进行控制的指令。
除了指令1621以外,数据存储装置162还可存储数据,例如道路地图、路线信息,车辆的位置、方向、速度以及其它这样的车辆数据,以及其他信息。这种信息可在车辆100在自主、半自主和/或手动模式中操作期间被车辆100和计算机系统160使用。
比如,在一种可能的实施例中,数据存储装置162可以获取车辆基于传感器系统120中的传感器获取到的周围环境中的障碍物信息,例如其他车辆、道路边沿,以及绿化带等障碍物的位置,障碍物与车辆的距离以及障碍物之间的距离等信息。数据存储装置162还可以从传感器系统120或车辆100的其他组件获取环境信息,环境信息例如可以为车辆当前所处环境附近是否有绿化带、车道、行人等,或者车辆通过机器学习算法计算当前所处环境附近是否存在绿化带、行人等。除上述内容外,数据存储装置162还可以存储该车辆自身的状态信息,以及与该车辆有交互的其他车辆的状态信息,其中,车辆的状态信息包括但不限于车辆的位置、速度、加速度、航向角等。如此,处理器161可从数据存储装置162获取这些信息,并基于车辆所处环境的环境信息、车辆自身的状态信息、其他车辆的状态信息等确定车辆的可通行区域,并基于该可通行区域确定最终的驾驶策略,以控制车辆100自动驾驶。
用户接口170,用于向车辆100的用户提供信息或从其接收信息。可选地,用户接口170可与外围设备140的集合内的一个或多个输入/输出设备进行交互,例如无线通信系统141、车载电脑142、麦克风143和扬声器144中的一个或多个。
计算机系统160可基于从各种子系统(例如,行进系统110、传感器系统120和控制系统130)获取的信息以及从用户接口170接收的信息来控制车辆100。例如,计算机系统160可根据来自控制系统130的信息,控制转向系统131更改车辆前进方向,从而规避由传感器系统120和障碍规避系统136检测到的障碍物。在一些实施例中,计算机系统160可对车辆100及其子系统的许多方面进行控制。
可选地,上述这些组件中的一个或多个可与车辆100分开安装或关联。例如,数据存储装置162可以部分或完全地与车辆100分开存在。上述组件可以通过有线和/或无线的方式耦合在一起进行通信。
可选地,上述组件只是一个示例,实际应用中,上述各个模块中的组件有可能根据实际需要增添或者删除,图1不应理解为对本申请实施例的限制。
在道路行进的自动驾驶汽车,如上面的车辆100,可以根据其周围环境内的其他车辆以确定对当前速度的调整指令。其中,车辆100周围环境内的物体可以是交通控制设备、或者绿化带等其它类型的物体。在一些示例中,可以独立地考虑周围环境内的每个物体,并且基于物体的各自的特性,诸如它的当前速度、加速度、与车辆的间距等,来确定车辆100的速度调整指令。
可选地,作为自动驾驶汽车的车辆100或者与其相关联的计算机设备(如图1的计算机系统160、计算机视觉系统134、数据存储装置162)可以基于所识别的测量数据,得到周围环境的状态(例如,交通、雨、道路上的冰、等等),并确定在当前时刻周边环境中的障碍物与车辆的相对位置。可选地,每一障碍物所形成的可通行区域的边界依赖于彼此,因此,还可以将获取到的所有测量数据来一起确定车辆的可通行区域的边界,去除掉可通行区域中实际不可通行的区域。车辆100能够基于检测到的车辆的可通行区域来调整它的驾驶策略。换句话说,自动驾驶汽车能够基于所检测到的 车辆的可通行区域来确定车辆需要调整到什么稳定状态(例如,加速、减速、转向或者停止等)。在这个过程中,也可以考虑其它因素来确定车辆100的速度调整指令,诸如,车辆100在行驶的道路中的横向位置、道路的曲率、静态和动态物体的接近度等等。
除了提供调整自动驾驶汽车的速度的指令之外,计算机设备还可以提供修改车辆100的转向角的指令,以使得自动驾驶汽车遵循给定的轨迹和/或维持自动驾驶汽车与附近的物体(例如相邻车道中的轿车)的安全横向和纵向距离。
上述车辆100可以为轿车、卡车、摩托车、公共汽车、船、飞机、直升飞机、割草机、娱乐车、游乐场车辆、施工设备、电车、高尔夫球车、火车、和手推车等,本申请实施例不做特别的限定。
在本申请的另一些实施例中,自动驾驶车辆还可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
一种实现方式中,参见图2,本申请实施例提供的本申请实施例提供的确定检测对象的三维信息的方法,应用于如图2所示的ADAS系统200中。如图2所示,ADAS系统200中包括硬件系统201、感知融合系统202、规划系统203、以及控制系统204。
其中,硬件系统201用于采集第一车辆周围的道路信息,车辆信息,障碍物信息等。当前常用的硬件系统201主要包括摄像头,视频采集卡等。在本申请实施例中,硬件系统201中包括单目摄像头。
感知融合系统202用于对硬件系统201采集到的图像信息进行处理,确定第一车辆周围的目标信息(包括车辆信息、行人信息、红路灯息,障碍物信息等),以及道路结构信息(包括车道线信息,路沿信息等)。
规划系统203用于根据目标信息以及道路结构信息,规划第一车辆的行驶路线,行驶速度等,生成规划信息。
控制系统204用于将规划系统203生成的规划信息,转换为控制信息,并向第一车辆下发控制信息,以使得第一车辆根据该控制信息,沿规划系统30规划的第一车辆的行驶路线,行驶速度进行行驶。
车载通信模块205(图2中未示出),用于自车和其他车辆之间的信息交互。
存储组件206(图2未示出),用于存储上述各个模块的可执行代码,运行这些可执行代码可实现本申请实施例的部分或全部方法流程。
在本申请实施例的一种可能的实现方式中,如图3所示,图1所示的计算机系统160包括处理器301,处理器301和系统总线302耦合,处理器301可以是一个或者多个处理器,其中每个处理器都可以包括一个或多个处理器核。显示适配器(video adapter)303可以驱动显示器324,显示器324和系统总线302耦合。系统总线302通过总线桥304和输入输出(I/O)总线(BUS)305耦合,I/O接口306和I/O总线305耦合,I/O接口306和多种I/O设备进行通信,比如输入设备307(如:键盘,鼠标,触摸屏等),多媒体盘(media tray)308,(例如,CD-ROM,多媒体接口等)。收发器309(可以发送和/或接收无线电通信信号),摄像头310(可以捕捉静态和动态数字视频图像) 和外部通用串行总线(universal serial bus,USB)端口311。其中,可选地,和I/O接口306相连接的接口可以是USB接口。
其中,处理器301可以是任何传统处理器,包括精简指令集计算(reduced instruction set computer,RISC)处理器、复杂指令集计算(complex instruction set computer,CISC)处理器或上述的组合。可选地,处理器301还可以是诸如专用集成电路ASIC的专用装置。可选地,处理器301还可以是神经网络处理器或者是神经网络处理器和上述传统处理器的组合。
可选地,在本申请的各种实施例中,计算机系统160可位于远离智能车辆的地方,且与智能车辆100无线通信。在其它方面,本申请的一些过程可设置在智能车辆内的处理器上执行,其它一些过程由远程处理器执行,包括采取执行单个操纵所需的动作。
计算机系统160可以通过网络接口312和软件部署服务器(deploying server)313通信。可选的,网络接口312可以是硬件网络接口,比如网卡。网络(Network)314可以是外部网络,比如因特网,也可以是内部网络,比如以太网或者虚拟私人网络(VPN),可选地,network314还可以为无线网络,比如WiFi网络、蜂窝网络等。
硬盘驱动器接口315和系统总线302耦合。硬盘驱动器接口315和硬盘驱动器316相连接。系统内存317和系统总线302耦合。运行在系统内存317的数据可以包括计算机系统160的操作系统(OS)318和应用程序319。
操作系统(OS)318包括但不限于Shell 320和内核(kernel)321。Shell 320是介于使用者和操作系统318的kernel 321间的一个接口。Shell 320是操作系统318最外面的一层。shell管理使用者与操作系统318之间的交互:等待使用者的输入,向操作系统318解释使用者的输入,并且处理各种各样的操作系统318的输出结果。
内核321由操作系统318中用于管理存储器、文件、外设和系统资源的部分组成,直接与硬件交互。操作系统318的内核321通常运行进程,并提供进程间的通信,提供CPU时间片管理、中断、内存管理、IO管理等功能。
应用程序319包括自动驾驶相关的程序323,比如,管理自动驾驶汽车和路上障碍物交互的程序,控制自动驾驶汽车的行驶路线或者速度的程序,控制自动驾驶汽车和路上其他汽车/自动驾驶汽车交互的程序等。应用程序319也存在于deploying server 313的系统上。在一个实施例中,在需要执行应用程序319时,计算机系统160可以从deploying server 313下载应用程序319。
又比如,应用程序319可以是控制车辆根据上述车辆的可通行区域以及传统控制模块确定驾驶策略的应用程序。计算机系统160的处理器301调用该应用程序319,得到驾驶策略。
传感器322和计算机系统160关联。传感器322用于探测计算机系统160周围的环境。举例来说,传感器322可以探测动物,汽车,障碍物和/或人行横道等。进一步传感器322还可以探测上述动物,汽车,障碍物和/或人行横道等物体周围的环境。比如:动物周围的环境,例如,动物周围出现的其他动物,天气条件,动物周围环境的光亮度等。可选地,如果计算机系统160位于自动驾驶的汽车上,传感器322可以是摄像头,红外线感应器,化学检测器,麦克风等器件中的至少一项。
在本申请的另一些实施例中,计算机系统160还可以从其它计算机系统接收信息 或转移信息到其它计算机系统。或者,从车辆100的传感器系统120收集的传感器数据可以被转移到另一个计算机,由另一计算机对此数据进行处理。如图4所示,来自计算机系统160的数据可以经由网络被传送到云侧的计算机系统410用于进一步的处理。网络以及中间节点可以包括各种配置和协议,包括因特网、万维网、内联网、虚拟专用网络、广域网、局域网、使用一个或多个公司的专有通信协议的专用网络、以太网、WiFi和HTTP、以及前述的各种组合。这种通信可以由能够传送数据到其它计算机和从其它计算机传送数据的任何设备执行,诸如调制解调器和无线接口。
在一个示例中,计算机系统410可以包括具有多个计算机的服务器,例如负载均衡服务器群。为了从计算机系统160接收、处理并传送数据,服务器420与网络的不同节点交换信息。该计算机系统410可以具有类似于计算机系统160的配置,并具有处理器430、存储器440、指令450、和数据460。
在一个示例中,服务器420的数据460可以包括提供天气相关的信息。例如,服务器420可以接收、监视、存储、更新、以及传送与周边环境中目标对象相关的各种信息。该信息可以包括例如以报告形式、雷达信息形式、预报形式等的目标类别、目标形状信息以及目标跟踪信息。
参见图5,为自主驾驶车辆和云服务中心(云服务器)交互的示例。云服务中心可以经诸如无线通信网络的网络511,从其操作环境500内的车辆513、车辆512接收信息(诸如车辆传感器收集到数据或者其它信息)。其中,车辆513和车辆512可为智能车辆。
云服务中心520根据接收到的数据,运行其存储的控制汽车自动驾驶相关的程序对车辆513、车辆512进行控制。控制汽车自动驾驶相关的程序可以为:管理自动驾驶汽车和路上障碍物交互的程序,或者控制自动驾驶汽车路线或者速度的程序,或者控制自动驾驶汽车和路上其他自动驾驶汽车交互的程序。
示例性的,云服务中心520通过网络511可将地图的部分提供给车辆513、车辆512。在其它示例中,可以在不同位置之间划分操作。例如,多个云服务中心可以接收、证实、组合和/或发送信息报告。在一些示例中还可以在车辆之间发送信息报告和/传感器数据。其它配置也是可能的。
在一些示例中,云服务中心520向智能车辆发送关于环境内可能的驾驶情况所建议的解决方案(如,告知前方障碍物,并告知如何绕开它))。例如,云服务中心520可以辅助车辆确定当面对环境内的特定障碍时如何行进。云服务中心520向智能车辆发送指示该车辆应当在给定场景中如何行进的响应。例如,云服务中心520基于收集到的传感器数据,可以确认道路前方具有临时停车标志的存在,又比如,基于“车道封闭”标志和施工车辆的传感器数据,确定该车道由于施工而被封闭。相应地,云服务中心520发送用于车辆通过障碍的建议操作模式(例如:指示车辆变道另一条道路上)。云服务中心520观察其操作环境500内的视频流,并且已确认智能车辆能安全并成功地穿过障碍时,对该智能车辆所使用的操作步骤可以被添加到驾驶信息地图中。相应地,这一信息可以发送到该区域内可能遇到相同障碍的其它车辆,以便辅助其它车辆不仅识别出封闭的车道还知道如何通过。
在一些实施例中,所公开的方法可以实施为以机器可读格式,被编码在计算机可 读存储介质上的或者被编码在其它非瞬时性介质或者制品上的计算机程序指令。图6示意性地来示出根据这里展示的至少一些实施例而布置的示例计算机程序产品的概念性局部视图,示例计算机程序产品包括用于在计算设备上执行计算机进程的计算机程序。在一个实施例中,示例计算机程序产品600是使用信号承载介质601来提供的。信号承载介质601可以包括一个或多个程序指令602,其当被一个或多个处理器运行时可以提供以上针对图2至图5描述的全部功能或者部分功能,或者可以提供后续实施例中描述的全部或部分功能。例如,参考图7中所示的实施例,S101至S103中的一个或多个特征可以由与信号承载介质601相关联的一个或多个指令来承担。此外,图6中的程序指令602也描述示例指令。
在一些示例中,信号承载介质601可以包含计算机可读介质603,诸如但不限于,硬盘驱动器、紧密盘(CD)、数字视频光盘(DVD)、数字磁带、存储器、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等等。在一些实施方式中,信号承载介质601可以包含计算机可记录介质604,诸如但不限于,存储器、读/写(R/W)CD、R/W DVD、等等。在一些实施方式中,信号承载介质601可以包含通信介质605,诸如但不限于,数字和/或模拟通信介质(例如,光纤电缆、波导、有线通信链路、无线通信链路、等等)。因此,例如,信号承载介质601可以由无线形式的通信介质605(例如,遵守IEEE 802.11标准或者其它传输协议的无线通信介质)来传达。一个或多个程序指令602可以是,例如,计算机可执行指令或者逻辑实施指令。在一些示例中,诸如针对图2至图6描述的计算设备可以被配置为,响应于通过计算机可读介质603、和/或计算机可记录介质604、和/或通信介质605中的一个或多个传达到计算设备的程序指令602,提供各种操作、功能、或者动作。应该理解,这里描述的布置仅仅是用于示例的目的。因而,本领域技术人员将理解,其它布置和其它元素(例如,机器、接口、功能、顺序、和功能组等等)能够被取而代之地使用,并且一些元素可以根据所期望的结果而一并省略。另外,所描述的元素中的许多是可以被实现为离散的或者分布式的组件的、或者以任何适当的组合和位置来结合其它组件实施的功能实体。
以上为对本申请实施例记载的确定检测对象的三维信息的方法的应用场景进行了简要介绍。
为了使得本申请更加的清楚,以下将对本申请涉及到的部分概念做简单介绍。
1、触地线
触地线是指待检测图像中的检测对象与地面实际接触的点组成的线段。
以检测对象为车辆为例,车辆的触地线为车辆轮胎与地面接触点的连线。车辆的触地线可以按照车的四个侧面(分别为左侧面、右侧面、前侧面和后侧面)进行区分。车辆的一个侧面对应一条触地线。
具体来说,以常见的家用轿车为例,车辆左前方轮胎与地面的接触点,记为接触点1;车辆右前方轮胎与地面的接触点,记为接触点2;车辆左后方轮胎与地面的接触点,记为接触点3;车辆右后方轮胎与地面的接触点,记为接触点4。
车辆的左侧面对应的触地线,为接触点1和接触点3之间的连线。
车辆的右侧面对应的触地线,为接触点2和接触点4之间的连线。
车辆的前侧面对应的触地线,为接触点1和接触点2之间的连线。
车辆的后侧面对应的触地线,为接触点3和接触点4之间的连线。
需要指出的是,车辆的轮胎与地面接触部分,通常为一个接触面(该接触面可以近似的认为是一个长方形)。本申请中可以将车辆左前方轮胎与地面的接触面的左前方的顶点,作为接触点1;将车辆右前方轮胎与地面的接触面的右前方的顶点,作为接触点2;将车辆左后方轮胎与地面的接触面的左后方的顶点,作为接触点3;将车辆右后方轮胎与地面的接触面的右后方的顶点最为接触点4。
2、神经网络模型
神经网络模型是由大量处理单元(记为神经元)互相连联组成的信息处理系统,神经网络模型中的神经元中包含有相应的数学表达式。数据输入神经元之后,神经元运行其包含的数学表达式,对输入数据进行计算,生成输出数据。其中,每个神经元的输入数据为与其连接的上一个神经元的输出数据;每个神经元的输出数据为与其连接的下一个神经元的输入数据。
在神经网络模型中,输入数据之后,神经网络模型根据自身的学习训练,为输入数据选择相应的神经元,并根据这些神经对对输入数据进行计算,确定并输出最终的运算结果。同时,神经网络在数据运算过程中还可以不断学习进化,根据对运算结果的反馈不断优化自身的运算过程,神经网络模型运算训练次数越多,得到的结果反馈越多,计算的结果越准确。
本申请实施例中所记载的神经网络模型用于对图像采集装置采集到的图片进行处理,确定图像中位于各个检测对象上的可通行区域边界(记为检测对象的可通行区域边界)。
3、可通行区域(freespace)
可通行区域是指汽车可以行驶通过的区域。例如,第一车辆检测到的前方区域中行人,障碍物,以及其他车辆之间的空旷区域,记为第一车辆的可通行区域。
可通行区域一般位于检测对象的边界上,因此在本申请实施例中,可以用位于检测对象的边界上的可通行区域表征第一检测对象的边界,进而根据位于第一检测对象上的可通行区域,确定第一检测对象的三维信息。
在本申请实施例中,神经网络模型输出的检测对象的可通行区域边界通常以多个具有相应标识的点示出。例如,第二车辆左侧的可通行区域包括位于车辆左侧的边界点上的多个点。该多个点用于表征第二车辆左侧的可通行区域。
此外,神经网络模型输出的这些点中,位于检测对象的不同侧面的点可以具有不同的标识,例如,这些标识可以包括:标识“00”,用于表征位于检测对象左侧可通行区域边界上的点;标识“01”,用于表征位于检测对象右侧可通行区域边界上的点的标识;标识“10”,用于表征位于检测对象前侧可通行区域边界上的点的标识;标识“11”,用于表征位于车辆后侧可通行区域边界上的点的标识。
4、视平线
视平线是指图像中与视线平行的直线。在本申请实施例中,视平线指的是图像中与图像采集设备高度相同,且平行于图像采集设备的直线。
5、灭点(vanishing point)
根据图像的透视原理,水平面上相互平行的两条直线,在二维的图像中会于视平线处相交于一点,该交点即为灭点。
6、车体坐标系
车体坐标系指的是坐标系原点位于车身上的三维坐标系。一般来说,车体坐标系的原点与车辆的质心重合,X轴沿车辆的长度方向,指向车辆正前方,Y轴沿车辆宽度方向指向驾驶员左侧的方向,Z轴沿车辆高度方向指向车辆上方。
以上是对本申请涉及到的部分内容以及概念所作的简单介绍。
当前,为了确定第一车辆周围的第二车辆的三维信息,提出了如下三种确定车辆信息的方法。分别为:方法1、车辆2D检测,方法2、车辆双目3D检测,以及方法3、车辆激光点云检测。以下,分别对方法1、方法2、以及方法3进行详细说明。
方法1、车辆2D检测
车辆2D检测为:第一车辆确定第二车辆在待检测图像中显示的图像信息;第一车辆以矩形框的形式,将第二车辆在待检测图像中显示的图像信息框出;第一车辆根据矩形框的下边沿的位置,计算第二车辆距离第一车辆的距离,确定第二车辆与第一车辆的相对位置。
由此可知,在该方法中,第一车辆仅能确定第二车辆相对于第一车辆的位置信息。但是,在智能驾驶场景中,车辆除了需要确定车辆的位置信息之外,还需要确定出车辆的大小,方向等信息,以判断周围车辆是否对自身的行驶造成干扰。
因此,依靠简单的车辆2D检测,无法满足智能驾驶场景中第一车辆对其他车辆信息的需求。
方法2、车辆双目3D检测
双目3D检测通过确定两个位于不同位置的图像采集装置采集到的检测对象的图像的差别,可以获得检测对象的深度信息。通过建立图像中相同点的对应关系,将同一空间物理点在不同图像中的映像点对应起来,形成视差(Disparity)图像,并可以根据视差图像,确定检测对象的3D信息。
车辆双目3D检测为:第一车辆采用双目摄像头,分别从两个角度采集第二车辆的两张图像。第一车辆根据第二车辆上的相同的点在两张图像中的位置的偏差,计算出第二车辆的三维信息。通过双目3D检测算法能够较为准确的计算出第二车辆的大小,方向以及相对于第一车辆的位置信息。
但是双目摄像头的硬件设备昂贵,应用到智能车辆上的双目摄像头的制作要求高,且目前的对于第二车辆3D信息确定过程中所使用的算法需要较高的图像标注成本,以及计算量。
方法3、车辆激光点云检测
当一束激光照射到物体表面时,所反射的激光会携带方位、距离等信息。若将激光束按照某种轨迹进行扫描,便会边扫描边记录到反射的激光点信息,由于扫描极为精细,则能够得到大量的激光点,因而就可形成激光点云。
车辆激光点云检测为:第一车辆向周围发射激光,扫描周围的检测对象。第一车辆接收周围检测对象返回的激光点云数据,这些点云数据中包括第二车辆返回的点云数据,以及其他检测对象返回的点云数据。第一车辆采用机器学习或者深度学习等算 法,将第二车辆返回的激光点云数据映射为某种数据结构。第一车辆提取出这些数据中每个点或特征,并根据这些特征对点云数据进行聚类,将相似的点云归为一类。第一车辆将聚类后的点云输入到相应的分类器中进行分类识别,确定第二车辆的点云数据。第一车辆将第二车辆的点云数据映射回三维点云数据中,构建第二车辆的3D包围框,确定第二车辆的三维信息。
车辆激光点云检测虽然具有很好的检测精度,但是激光雷达的硬件成本较高,且点云数据的数据计算量较大,需要耗费大量的计算资源,占用第一车辆大量的GPU资源。
为了解决现有技术中,第一车辆确定第二车辆的二维信息时,无法准确的确定第二车辆的大小和方向信息,第一车辆确定第二车辆的三维信息时,采用车辆双目3D检测或者车辆激光点云检测时,硬件成本高,计算复杂度高的问题。本申请实施例提供了一种确定检测对象的三维信息的方法,第一车辆能够根据采集到的待检测图像,确定第一检测对象的可通行区域边界以及触地线,进一步的,第一车辆根据第一检测对象的可通行区域边界,以及触地线,确定第一检测对象在二维图像中表征出的三维信息。
上述待检测图像可以为单目摄像头采集的二维图像。这样,第一车辆通过单目摄像头采集第一检测对象的图像信息即可确定第一检测对象的三维信息。相比较于现有技术中第一车辆需要依赖双目摄像头采集第一检测对象的图像信息以确定第一检测对象的三维信息的方法,或者第一车辆依赖激光雷达确定第一检测对象的三维信息的方法,本申请中第一车辆采用单目摄像头采集第一检测对象的图像信息确定第一检测对象的三维信息,能够大大降低确定第一检测对象的三维信息的硬件成本。
此外,本申请提供的确定检测对象的三维信息的方法,第一车辆只需标记第一检测对象的可通行区域边界,并根据第一检测对象的可通行区域边界确定第一检测对象的触地线。第一车辆可以根据第一检测对象的可通行区域边界,以及触地线,结合第一检测对象在待检测图像中的视觉关系等,确定第一检测对象在二维图像中表征出的三维信息。因此,本申请提供的确定检测对象的三维信息的方法,无需第一车辆进行其他额外的数据标注训练,从而降低了确定第一检测对象的三维信息的计算量,以及降低了确定第一检测对象的三维信息占用的图形处理器GPU资源。
以下,将对本申请实施例提供的确定检测对象的三维信息的方法进行详细说明,如图7所示,该方法包括:
S101、第一车辆获取待检测图像。
其中,待检测图像包括第一检测对象。
在智能驾驶领域,检测对象可以为车辆,行人,障碍物等。在本申请实施例中,以检测对象为第二车辆为例,进行说明。
上述待检测图像可以为车载的图像采集装置采集到的图片。车载图像采集装置通常用于采集位于车辆前方的其他车辆;或者,车载图像采集装置也可以通过采集车辆360°全方位的图像,以获得车辆周围所有其他车辆的信息。
需要指出的是,本申请实施例所记载的图像采集装置可以为单目摄像头,当第一车辆执行本申请实施例提供的确定检测对象的三维信息的方法时,该第一车辆可以为 设置在第一车辆中的车载终端设备,或者其他具有数据处理能力的设备。
S102、第一车辆确定第一检测对象的可通行区域边界,以及第一检测对象的触地线。
其中,第一检测对象的可通行区域边界包括第一检测对象在待检测图像中的边界。第一检测对象的触地线为第一检测对象与地面的交点的连线。
第一检测对象的可通行区域边界为将待检测图像输入到神经网络模型之后,由神经网络模型输出的第一检测对象的可通行区域边界。
当第一检测对象为第二车辆时,第二车辆的触地线的数量与第二车辆在待检测图像中示出的侧面的数量有关。
如图8a所示,第一检测对象为在待检测图像中位于第一车辆右前方的车辆,待检测图像中示出了第二车辆的左侧面和后侧面的情况下,第二车辆的触地线包括两条触地线,分别为第二车辆左侧面的触地线和第二车辆后侧面的触地线。
第二车辆左侧面的触地线为:第二车辆左前侧的轮胎的触地点和第二车辆左后侧的轮胎的触地点之间的连线。
第二车辆后侧面的触地线为:第二车辆左后侧的轮胎的触地点和第二车辆右后侧的轮胎的触地点之间的连线。
或者,如图8b所示,第一检测对象为在待检测图像中位于第一车辆正前方的车辆,待检测图像中仅示出了第二车辆的后侧面的情况下,第二车辆的触地线包括一条触地线,为第二车辆后侧面的触地线。
第二车辆的后侧面的触地线为:第二车辆左后侧的轮胎的触地点和第二车辆右后侧的轮胎的触地点之间的连线。
S103、第一车辆根据可通行区域边界,以及触地线,确定第一检测对象的三维信息。
第一检测对象的三维信息用于确定第一检测对象的大小、方向和相对位置中的至少一项。
第一检测对象的三维信息,用于表征第一检测对象在待检测图像中所示出的三维信息。第一车辆可以将第一检测对象在待检测图像中所示出的三维信息转换到三维坐标系中,以进一步确定第一检测对象的真实三维信息。
例如,在第一检测对象为第二车辆的情况下,第一车辆将第二车辆在待检测图像中所示出的三维信息转换到三维坐标系中,可以确定第二车辆的大小(例如第二车辆的长度,宽度),第二车辆的方向(例如第二车辆的车头朝向,第二车辆可能的行驶方向),第二车辆的在该三维坐标系中的位置。
需要指出的是,第一检测对象的相对位置,与第一车辆将第一检测对象转换到的三维坐标系有关。例如,在第一车辆将第一检测对象转换到第一车辆的车体坐标系时,第一检测对象的相对位置为第一检测对象相对于第一车辆的位置;在第一车辆将第一检测对象转换到世界坐标系时,第一检测对象的相对位置为第一检测对象的实际地理位置。
一种可能的实现方式中,第一检测对象的三维信息,包括多个点和多条线段。该多个点为第一检测对象在待检测图像中显示出的端点在地面上的投影。该多条线段中 至少包括第一检测对象的最外围边界在地面上的投影所产生的线段;或者,该多条线段为第一检测对象的在待检测图像中的轮廓线。第一车辆将该多条线段输入至第一车辆的车体坐标系中,可以确定第一检测对象的大小,方向和相对位置中的至少一项。
基于上述技术方案,本申请提供的确定检测对象的三维信息的方法,第一车辆能够根据采集到的待检测图像,确定第一检测对象的可通行区域边界以及触地线,进一步的,第一车辆根据第一检测对象的可通行区域边界,以及触地线,确定第一检测对象在二维图像中表征出的三维信息。
上述待检测图像可以为单目摄像头采集的二维图像。这样,第一车辆通过单目摄像头采集第一检测对象的图像信息即可确定第一检测对象的三维信息。相比较于现有技术中第一车辆需要依赖双目摄像头采集第一检测对象的图像信息以确定第一检测对象的三维信息的方法,本申请中第一车辆采用单目摄像头采集第一检测对象的图像信息确定第一检测对象的三维信息,能够大大降低确定第一检测对象的三维信息的硬件成本。
此外,本申请提供的确定检测对象的三维信息的方法,第一车辆只需标记第一检测对象的可通行区域边界,并根据第一检测对象的可通行区域边界确定第一检测对象的触地线。第一车辆可以根据第一检测对象的可通行区域边界,以及触地线,结合第一检测对象在待检测图像中的视觉关系等,确定第一检测对象在二维图像中表征出的三维信息。因此,本申请提供的确定检测对象的三维信息的方法,无需第一车辆进行其他额外的数据标注训练,从而降低了确定第一检测对象的三维信息的计算量,以及降低了确定第一检测对象的三维信息占用的图形处理器GPU资源。
结合图7,如图9所示,上述S102具体可以通过以下S1021-S1023实现。下面对S1021-S1023进行详细说明。
S1021、第一车辆将待检测图像输入神经网络模型中,得到L个点。
其中,待检测图像中通常包括一个或多个检测对象。上述L个点为该一个或多个检测对象的可通行区域边界上的点。L为正整数。
一种可能的实现方式中,上述神经网络模型,为预先训练好的神经网络模型。该神经网络模型具有标记待检测图像中检测对象的可通行区域边界的能力。可通行区域的边界即为待检测图像中的检测对象的边界。
具体来说,第一车辆获取待检测图像之后,调用神经网络模型,并将待检测图像输入到该神经网络模型中,输出L个点。该L个点即为待检测图像中检测对象的可通行区域边界的能力。
神经网络模型输出的L个点中的每个点可以对应一个标识。该标识用于表征该点位于检测对象的哪一侧。
举例来说,位于检测对象的左侧的点对应第一标识。相应的,该第一标识用于表征该点位于检测对象的左侧。
位于检测对象右侧的点对应第二标识。相应的,该第二标识用于表征该点位于检测对象的右侧。
位于检测对象的前侧的点对应第三标识。相应的,该第三标识用于表征该点位于检测对象的前侧。
位于检测对象的后侧的点对应第四标识。相应的,该第四标识用于表征该点位于检测对象的后侧。
S1022、第一车辆从该L个点中,确定出M个点。
该M个点为位于第一检测对象的可通行区域边界上的点。
一种具体的实现方式中,第一车辆根据待检测图像中的一个或多个检测对象,对该L个点进行分类,确定每个点所对应的检测对象。在此之后,第一车辆根据每个点对象的检测对象,确定对应于第一检测对象的M个点。该M个点即为位于第一检测对象的可通行区域边界上的点。M为正整数,且M小于等于L。
S1023、第一车辆拟合该M个点,确定第一检测对象的触地线。
一种可能的实现方式中,第一车辆可以采用随机抽样一致算法(random sample consensus,RANSAC)拟合算法,拟合确定目标对象的触地线。
具体来说,第一车辆可以采用RANSAC拟合算法,拟合确定目标对象的触地线的过程包括以下步骤a-步骤f,以下进行详细说明:
步骤a、第一车辆确定出位于第一检测对象的可通行区域边界,且具有相同标识的K个点,K为正整数。
步骤b、第一车辆从该K个点中随机选取T个点,并用最小二乘法拟合该T个点,得到一条直线。
步骤c、第一车辆确定K个点中除该T个点以外的每个点距离该直线的距离。
步骤d、第一车辆确定距离小于第一阈值的点为内群点,并确定内群点的个数。
步骤e、第一车辆多次执行上述步骤b-步骤d,确定多条直线,以及该多条直线中每条直线对应的内群点的个数。
步骤f、第一车辆确定上述多条直线中,对应的内群点的个数最多的直线,为第一检测对象的一个触地线。
需要指出的是,在上述步骤e中,第一车辆通过步骤b-步骤d确定的直线的数量越多,最终确定的结果的准确性越高。
基于上述技术方案,第一车辆能够根据待检测图像,采用神经网络模型,和相应的拟合算法,确定第一检测对象的可通行区域边界,以及第一检测对象的触地线。
需要说明的是,第一车辆的单目摄像头,能够拍摄到位于该单目摄像头前方的物体图像。在第二车辆位于该单目摄像头的正前方的位置时,第二车辆通常只有一个面能够被第一车辆的单目摄像头采集到。在第二车辆位于该单目摄像头前方,且偏离单目摄像头正前方的位置时,第二车辆通常有两个面能够被第一车辆的单目摄像头采集到。
因此,第一车辆采集到的第二车辆的图像包括以下两种场景:场景1、第一车辆采集到第二车辆的两个侧面。场景2、第一车辆采集到第二车辆的一个侧面。下面,分别对上述场景1和场景2进行详细说明:
场景1、第一车辆采集到第二车辆的两个侧面。
其中,第一车辆采集到第二车辆的图像信息,与单目摄像头采集的第一车辆的哪个方位的图像有关。
例如,当单目摄像头采集第一车辆的前方的图像时:
若第二车辆第一车辆同向行驶,且位于第一车辆的左前方,则单目摄像头可以采集到第二车辆的右侧面和后侧面。
若第二车辆与第一车辆同向行驶,且位于第一车辆的右前方,则单目摄像头可以采集到第二车辆的左侧面和后侧面。
若第二车辆与第一车辆相向行驶,且位于车辆的左前方,则单目摄像头可以采集到第二车辆的前侧面和左侧面。
若第二车辆与第一车辆相向行驶,且位于车辆的右前方,则单目摄像头可以采集到第二车辆的前侧面和右侧面。
又例如,单目摄像头采集第二车辆的左侧的图像时:
若第二车辆位于第一车辆左侧,且偏离单目摄像头的正前方,单目摄像头可以采集到第二车辆的右侧面,以及第二车辆的前侧面或者后侧面中的一个。
此外,单目摄像头还可以采集第二车辆的其他侧面的图像,本申请对此不在赘述。
场景2、第一车辆采集到第二车辆的一个侧面。
其中,第一车辆采集到第二车辆的图像信息,与单目摄像头采集的第一车辆的哪个方位的图像有关。
例如,当单目摄像头采集第一车辆的前方的图像时:
若第二车辆与第一车辆同向行驶,且位于第一车辆的正前方,则单目摄像头可以采集到第二车辆的后侧面。
若第二车辆与第一车辆相向行驶,且位于第一车辆的正前方,则单目摄像头可以采集到第二车辆的前侧面。
若第一车辆向正北方向行驶,第二车辆向正东方向行驶,则单目摄像头可以采集到第二车辆的右侧面。
若第一车辆向正北方向行驶,第二车辆向正西方向行驶,则单目摄像头可以采集到第二车辆的左侧面。
又例如,单目摄像头采集第二车辆的左侧的图像时:
若第二车辆位于第一车辆左侧,且位于单目摄像头的正前方,单目摄像头可以采集到第二车辆的右侧面。
以上,记载了在不同场景下,第一车辆可以采集到的第二车辆的侧面的数量不同。
需要指出的是,当第一车辆采集到的第二车辆的侧面的数量不同时,第一车辆确定的第二车辆的可通行区域边界不同,第一车辆确定的第二车辆的触地线的数量不同,以及第一车辆确定的第二车辆的三维信息不同。
具体来说,在第一车辆采集到第二车辆的两个侧面的场景下:第一车辆确定的第二车辆的可通行区域边界为该两个侧面可通行区域边界。第一车辆确定的第二车辆的触地线包括第一触地线和第二触地线,第一触地线和第二触地线分别对应该两个侧面中的不同侧面。第一车辆确定的第二车辆的三维信息,包括该两个侧面组成的三维信息。
在第一车辆采集到第二车辆的一个侧面的场景下:第一车辆确定的第二车辆的可通行区域边界为该一个侧面可通行区域边界。第一车辆确定的第二车辆的触地线包括第三触地线,第三触地线为该一个侧面的触地线。第一车辆确定的第二车辆的三维信 息,包括该一个侧面组成的三维信息。
因此,结合上述场景1和场景2,在S103中第一车辆根据可通行区域边界,以及触地线,确定第一检测对象的三维信息,包括如下两种情况,分别为:情况1、第一车辆根据第一检测对象的两个侧面的可通行区域边界和触地线,确定第一检测对象的三维信息;以及情况2、第一车辆根据第一检测对象的一个侧面的可通行区域边界和触地线,确定第一检测对象的三维信息。
以下,分别对情况1和情况2进行详细说明:
情况1、第一车辆根据第一检测对象的两个侧面的可通行区域边界和触地线,确定第一检测对象的三维信息。
结合上述S102,在情况1中,第一车辆根据上述S102确定的第一检测对象的可通行区域边界,以及第一检测对象的触地线分别具有如下特征:
1、第一检测对象的可通行区域边界包括多个与第一标识对应的边界点,以及与多个第二标识对应的边界点。
第一标识还对应于第一检测对象的第一侧面,第二标识还对应于第一检测对象的第二侧面。第一侧面和第二侧面为第一检测对象中两个相交的侧面。
2、触地线中包括第一触地线和第二触地线。
第一触地线为拟合多个与第一标识对应的边界点确定的触地线。
第二触地线为拟合多个与第二标识对应的边界点确定的触地线。
3、上述多个与第一标识对应的边界点中包括第一边界点;第一边界点为多个具有第一标识的边界点中,与第二触地线的距离最大的边界点。
上述多个与第二标识对应的边界点中包括第二边界点;第二边界点为多个具有第二标识的边界点中,与第一触地线的距离最大的边界点。
结合上述S103,在情况1中,第一车辆根据上述S103确定的第一检测对象的三维信息根据第一检测对象对应的三个点和两条线确定。
其中,三个点中的第一个点为第一边界点在地面上的投影。
三个点中的第二个点为第二边界点在地面上的投影。
三个点中的第三个点过第二个点,且与第二触地线平行的直线与第一触地线的交点。
两条线中的第一条线为第一个点和第三个点之间的连线。
两条线中的第二条线为第二个点和第三个点之间的连线。
结合图7,如图10所示,在情况1中,S103具体可以通过以下S1031-S1035实现。下面,对S1031-S1035进行具体说明:
S1031、第一车辆根据第一边界点,以及第一触地线,确定第一个点。
其中,该第一个点为第一触地线和第一直线的交点。第一直线为过第一边界点,且与待检测图像中的视平线垂直的直线。
一种具体的实现方式中,结合图11中示出的待检测图像中的第一检测对象,第一车辆确定第一个点的方法为:
步骤Ⅰ、第一车辆确定第一直线;第一直线为在待检测图像中过第一边界点,且与视平线垂直的直线。
一种实现方式中,第一车辆过第一边界点做视平线的垂线,该垂线为第一直线。
步骤Ⅱ、第一车辆确定第一直线与第一触地线的交点为第一个点。
一种实现方式中,第一车辆做第一触地线的延长线,该延长线与第一直线相交与点a。第一车辆确定该点a为第一个点。
S1032、第一车辆根据第一触地线,第二触地线,以及第二边界点,确定第二个点。
其中,该第二个点为第二直线与第三直线的交点。第二直线为过第一触地线与视平线的交点,以及第二触地线中远离第一触地线的顶点的直线。第三直线为过第二边界点,且垂直于待检测图像中的视平线的直线。
一种具体的实现方式中,结合图11中示出的待检测图像中的第一检测对象,第一车辆确定第二个点的方法为:
步骤Ⅲ、第一车辆确定第二直线。
其中,第二直线为在待检测图像中,过第一触地线与视平线的交点,以及第二触地线中远离第一触地线的端点的直线。
一种实现方式中,第一车辆做触地线的延长线,得到第一触地线与视平线的交点b。第一车辆确定第二触地线中远离第一触地线的端点c。第一车辆做一条过上述交点b以及端点c的直线,该直线为第二直线。
步骤Ⅳ、第一车辆确定第三直线。
第三直线为在待检测图像中过第二边界点,且垂直于视平线的直线。
一种实现方式中,第一车辆过第二边界点做视平线的垂线,该垂线为第三直线。
步骤Ⅴ、第一车辆确定第二直线和第三直线的交点为第二个点。
一种实现方式中,第一车辆确定第二直线和第三直线交于点c,第一车辆确定该点c为第二个点。
S1033、第一车辆根据第一触地线,第二触地线,以及第二个点,确定第三个点。
其中,第三个点为第一触地线,与第四直线的交点。第四直线为在待检测图像中过第二个点,且与第二触地线平行的直线。
一种具体的实现方式中,结合图11中示出的待检测图像中的第一检测对象,第一车辆确定第三个点的方法为:
步骤Ⅵ、第一车辆确定第四直线。
一种实现方式中,第一车辆过第二个点做第二触地线的平行线。第一车辆确定该平行线为第四直线。
步骤Ⅶ、第一车辆确定第四直线与第一触地线的交点为第三个点。
一种实现方式中,确定设备确定第四直线和第一触地线的交点d,第一车辆恩确定该交点d为第三个点。
S1034、第一车辆根据第一个点和第三个点,确定第一条线。
一种实现方式中,如图11所示,第一车辆做一条分别以第一个点和第三个点为端点的线段a,第一车辆确定该线段为第一条线。
S1035、第一车辆根据第二个点和第三个点,确定第二条线。
一种实现方式中,如图11所示,第一车辆做一条分别以第二个点和第三个点为端点的线段b,第一车辆确定该线段为第二条线。
情况2、第一车辆根据第一检测对象的一个侧面的可通行区域边界和触地线,确定第一检测对象的三维信息。
结合上述S102,在情况2中,第一车辆根据上述S102确定的第一检测对象的可通行区域边界,以及第一检测对象的触地线分别具有如下特征:
a、第一检测对象的可通行区域边界包括多个与第三标识对应的边界点;第三标识还对应于第一检测对象的第三侧面。
b、第一检测对象的触地线中包括第三触地线;第三触地线为拟合多个与第三标识对应的边界点,确定的触地线。
c、多个具有第三标识的边界点中包括第三边界点和第四边界点。
第三边界点为多个具有第三标识的边界点中,距离第三触地线的一端最远的点。
第四边界点为多个具有第三标识的边界点中,距离第三触地线的另一端最远的点。
结合上述S103,在情况2中,第一车辆根据上述S103确定的第一检测对象的三维信息根据第一检测对象对应的两个点和一条线确定。
两个点中的第一个点为第三边界点在地面上的投影。
两个点中的第二个点为第四边界点在地面上的投影。
结合图8,如图12所示,在情况2中,S103具体可以通过以下S1036-S1038实现,下面,对S1036-S1038进行详细说明。
S1036、第一车辆根据第三边界点和第三触地线,确定第一个点。
其中,第一个点为第三触地线与第五直线的交点。第五直线为过第三边界点,且与第三触地线垂直的直线。
一种具体的实现方式中,结合图13中示出的待检测图像中的第一检测对象,第一车辆确定第一检测对象的三维信息的第一个点的方法为:
步骤1、第一车辆确定第五直线。
第五直线为在待检测图像中过第三边界点,且与视平线垂直的直线。
一种实现方式中,第一车辆过第三边界点做视平线的垂线,第一车辆确定该垂线为第五直线。
步骤2、第一车辆确定第五直线与第三触地线的交点,为第一个点。
一种实现方式中,第一车辆做第三触地线的延长线,第三触地线的延长线与第五直线交于点e,第一车辆确定点e为第一个点。
S1037、第一车辆根据第四边界点和第三触地线,确定第二个点。
其中,第二个点为第三触地线与第六直线的交点。第六直线为过第四边界点,且与第三触地线垂直的直线。
一种具体的实现方式中,结合图13中示出的待检测图像中的第一检测对象,第一车辆确定第一检测对象的三维信息的第二个点的方法为:
步骤3、第一车辆确定第六直线。
其中,第六直线为在待检测图像中过第四边界点,且与视平线垂直的直线。
一种实现方式中,第一车辆过第四边界点做视平线的垂线,第一车辆确定该垂线为第六直线。
步骤4、第一车辆确定第六直线与第三触地线的交点,为第二个点。
一种实现方式中,第一车辆做第三触地线的延长线,第三触地线的延长线与第六直线交于点f,第一车辆确定点f为第二个点。
S1038、第一车辆根据第一个点和第二个点,确定第一条线。
一种实现方式中,如图13所示,第一车辆做一条分别以第一个点和第二个点为端点的线段c,第一车辆确定该线段c为第一条线。
基于上述技术方案,第一车辆能够根据第一检测对象的可通行区域边界,以及第一检测对象的触地线,确定第一检测对象的三维信息。
需要指出的是,上述三维信息是第一检测对象在待检测图像中所表征的三维信息。第一车辆需要确定第一检测对象的真实三维信息时,还需要将第一检测对象在待检测图像中所表征的三维信息带入带第一车辆的车体坐标系中,以使得第一车辆确定第一检测对象相对于第一车辆的位置,第一检测对象的大小(长、宽、高中的至少一项),以及第一检测对象的朝向等信息。
具体来说,结合图7,如图9所示,在S103之后,该方法还包括:
S104、第一车辆将第一检测对象的三维信息输入车体坐标系中,确定第一检测对象的大小、方向和相对位置中的至少一项。
一种具体的实现方式中,第一车辆根据待检测图像建立第一直角坐标系;待检测图像位于该第一直角坐标系中。该第一直角坐标系可以为预先为图像采集装置设置的矩阵,图像采集装置采集的图片均可以映射到该矩阵中。
第一车辆确定第一检测对象的三维信息的在第一之间坐标系中的坐标。在此之后,第一车辆确定图像采集装置的内参数和外参数。第一车辆根据图像采集装置的内参数和外参数,以及图像采集装置在车体坐标系中的位置,将第一检测对象的三维信息的在第一直角坐标系中的坐标转换为车体坐标系中的坐标。
第一车辆根据第一检测对象的三维信息在车体坐标系中的坐标,确定第一检测对象的位置,运动方向,大小等信息。
其中,图像采集装置的内参数,用于表征与图像采集装置自身相关的一些参数,例如图像采集装置的焦距,像素大小等。
图像采集装置的外参数,用于表征图像采集装置在世界坐标系中的参数,例如图像采集装置在世界坐标系中的位置,旋转方向等。
需要说明的是,第一车辆中预先设置有图像采集装置的内参矩阵和外参矩阵。第一车辆在将车体坐标系中的一个坐标点转换为第一之间坐标系中的一个坐标点(记为第一坐标点)时:第一车辆将该第一坐标点依次与外参矩阵和内参矩阵相乘,即可得到第一坐标点在第一直角坐标系中对应的坐标点。第一车辆在根据第一直角坐标系中的坐标点求其在车体坐标系中对应的点时,只需执行与上述相反的运算过程,即可根据第一直角坐标系中的坐标点,确定其在车体坐标系中对应的点。
需要指出的是,在S105之后,第一车辆确定的第一检测对象的大小,包括第一监测对象的长度和宽度。
为了确定第一检测对象的高度,第一车辆可以根据第一检测对象的类型,确定第一监测对象的高度。
举例来说,在第一检测对象是第二车辆的情况下,第一车辆识别第二车辆的车辆 类型,并根据该车辆的类型确定车辆的高度。
一种示例,第二车辆的车辆类型为车辆的级别,例如:微型车、小型车、紧凑型车、中型车、中大型车、大型车、小型运动型多用途汽车(sport utility vehicle,SUV)、紧凑型SUV、中型SUV、中大型SUV、大型SUV、紧凑型多用途汽车(multi purpose vehicles,MPV)、中型MPV、中大型MPV、大型MPV、跑车、皮卡、微面、轻客、微卡等。
第一车辆中预先配置有各个等级车辆的标准尺寸,在第一车辆确定第二车辆的车辆等级之后,根据第二车辆的车辆等级,确定第二车辆的高度。
又一种示例,第二车辆的车辆类型为车辆的型号(例如,车辆品牌+具体车型)。第一车辆中预先配置有各个型号的车辆的标准尺寸,在第一车辆确定第二车辆的车辆型号之后,根据第二车辆的车辆型号,确定第二车辆的高度。
需要指出的是,根据上述方法,第一车辆同样可以根据第二车辆的类型确定第二车辆的长度和宽度。第一车辆可以根据通过该方法确定的第二车辆的长度和宽度,以及通过第二车辆在待检测图像中的三维信息确定的长度和宽度互相验证,确定第二车辆的准确长度和宽度。
一种可能的实现方式中,在S105之后,第一车辆确定待检测图像中的全部车辆的位置,大小以及运动方向。第一车辆根据全部车辆的位置,大小以及运动方向,以及第一车辆中其他设备确定的当前道路信息,障碍信息,车辆的目的信息等,规划第一车辆的行驶路线。
第一车辆确定第一车辆的行驶路线之后,根据该第一行驶路线生成第一控制指令。第一控制指令用于指示第一车辆根据规划好的行驶路线行驶。
第一车辆向第一车辆下发该第一控制指令。第一车辆根据第一车辆下发的控制指令进行智能驾驶。
基于上述技术方案,第一车辆可以根据第二车辆的三维信息确定第二车辆的大小,方向,以及第二车辆相对于第一车辆的位置。在此之后,第一车辆可以根据这些信息,规划第一车辆的行驶路线,实现智能驾驶。
本申请上述实施例中的各个方案在不矛盾的前提下,均可以进行结合。
上述主要从各个装置之间交互的角度对本申请实施例的方案进行了介绍。可以理解的是,各个装置,例如,第一车辆和第二车辆为了实现上述功能,其包含了执行各个功能相应的硬件结构和软件模块中的至少一个。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
可以理解的是,为了实现上述实施例中功能,车辆包括了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本申请中所公开的实施例描述的各示例的单元及方法步骤,本申请能够以硬件或硬件和计算机软件相结合的形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于 技术方案的特定应用场景和设计约束条件。
图14为本申请的实施例提供的确定检测对象的三维信息的装置的结构示意图。这些确定检测对象的三维信息的装置可以用于实现上述方法实施例中处理器的功能,因此也能实现上述方法实施例所具备的有益效果。在本申请的实施例中,该确定检测对象的三维信息的装置可以是如图1所示的处理器161。
如图14所示,确定检测对象的三维信息的装置1400包括处理单元1410和通信单元1420。确定检测对象的三维信息的装置1400用于实现上述图7、图9,图10,或者图12中所示的方法实施例中第一车辆的功能。
当确定检测对象的三维信息的装置1400用于实现图7所示的方法实施例中处理器的功能时:处理单元1410用于执行S102至S103,通信单元1420用于与其他实体进行通信。
当确定检测对象的三维信息的装置1400用于实现图9所示的方法实施例中处理器的功能时:处理单元1410用于执行S101、S1021至S1023、S103以及S104,通信单元1420用于与其他实体进行通信。
当确定检测对象的三维信息的装置1400用于实现图10所示的方法实施例中处理器的功能时:处理单元1410用于执行S101、S102、以及S1031至S1035,通信单元1420用于与其他实体进行通信。
当确定检测对象的三维信息的装置1400用于实现图12所示的方法实施例中处理器的功能时:处理单元1410用于执行S101、S102、以及S1036至S1038,通信单元1420用于与其他实体进行通信。
有关上述处理单元1410和通信单元1420更详细的描述可以直接参考图7、图9,图10或图12所示的方法实施例中相关描述直接得到,这里不加赘述。
在实现过程中,本实施例提供的方法中的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
本申请中的处理器可以包括但不限于以下至少一种:中央处理单元(central processing unit,CPU)、微处理器、数字信号处理器(DSP)、微控制器(microcontroller unit,MCU)、或人工智能处理器等各类运行软件的计算设备,每种计算设备可包括一个或多个用于执行软件指令以进行运算或处理的核。该处理器可以是个单独的半导体芯片,也可以跟其他电路一起集成为一个半导体芯片,例如,可以跟其他电路(如编解码电路、硬件加速电路或各种总线和接口电路)构成一个SoC(片上系统),或者也可以作为一个ASIC的内置处理器集成在所述ASIC当中,该集成了处理器的ASIC可以单独封装或者也可以跟其他电路封装在一起。该处理器除了包括用于执行软件指令以进行运算或处理的核外,还可进一步包括必要的硬件加速器,如现场可编程门阵列(field programmable gate array,FPGA)、PLD(可编程逻辑器件)、或者实现专用逻辑运算的逻辑电路。
本申请实施例中的存储器,可以包括如下至少一种类型:只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备, 也可以是电可擦可编程只读存储器(Electrically erasable programmabler-only memory,EEPROM)。在某些场景下,存储器还可以是只读光盘(compact disc read-only memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。
本申请实施例还提供了一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行上述任一方法。
本申请实施例还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述任一方法。
本申请实施例还提供了一种通信系统,包括:上述基站和服务器。
本申请实施例还提供了一种芯片,该芯片包括处理器和接口电路,该接口电路和该处理器耦合,该处理器用于运行计算机程序或指令,以实现上述方法,该接口电路用于与该芯片之外的其它模块进行通信。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件程序实现时,可以全部或部分地以计算机程序产品的形式来实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或者数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,简称DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可以用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质(例如,软盘、硬盘、磁带),光介质(例如,DVD)、或者半导体介质(例如固态硬盘(solid state disk,简称SSD))等。
尽管在此结合各实施例对本申请进行了描述,然而,在实施所要求保护的本申请过程中,本领域技术人员通过查看附图、公开内容、以及所附权利要求书,可理解并实现公开实施例的其他变化。在权利要求中,“包括”(comprising)一词不排除其他组成部分或步骤,“一”或“一个”不排除多个的情况。单个处理器或其他单元可以实现权利要求中列举的若干项功能。相互不同的从属权利要求中记载了某些措施,但这并不表示这些措施不能组合起来产生良好的效果。
尽管结合具体特征及其实施例对本申请进行了描述,显而易见的,在不脱离本申请的精神和范围的情况下,可对其进行各种修改和组合。相应地,本说明书和附图仅仅是所附权利要求所界定的本申请的示例性说明,且视为已覆盖本申请范围内的任意和所有修改、变化、组合或等同物。显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的精神和范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。
最后应说明的是:以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (16)

  1. 一种确定检测对象的三维信息的方法,其特征在于,包括:
    获取待检测图像;所述待检测图像包括第一检测对象;
    确定所述第一检测对象的可通行区域边界,以及所述第一检测对象的触地线;所述可通行区域边界包括所述第一检测对象在所述待检测图像中的边界;所述触地线为所述第一检测对象与地面的交点的连线;
    根据所述可通行区域边界,以及所述触地线,确定所述第一检测对象的三维信息。
  2. 根据权利要求1所述的方法,其特征在于,所述第一检测对象的三维信息用于确定所述第一检测对象的大小、方向,以及相对位置中的至少一项。
  3. 根据权利要求1或2所述的方法,其特征在于,所述第一检测对象的可通行区域边界包括多个与第一标识对应的边界点,以及与多个第二标识对应的边界点;所述第一标识还对应于所述第一检测对象的第一侧面,所述第二标识还对应于所述第一检测对象的第二侧面;所述第一侧面和所述第二侧面为所述第一检测对象中两个相交的侧面;
    所述触地线中包括第一触地线和第二触地线;
    所述第一触地线为拟合所述多个与第一标识对应的边界点确定的触地线;
    所述第二触地线为拟合所述多个与第二标识对应的边界点确定的触地线。
  4. 根据权利要求3所述的方法,其特征在于,所述多个与第一标识对应的边界点包括第一边界点;所述第一边界点为多个具有第一标识的边界点中,与所述第二触地线的距离最大的边界点;
    所述多个与第二标识对应的边界点包括第二边界点;所述第二边界点为所述多个具有第二标识的边界点中,与所述第一触地线的距离最大的边界点。
  5. 根据权利要求4所述的方法,其特征在于,所述确定所述第一检测对象的三维信息包括:
    确定所述第一边界点在所述地面上的投影为第一个点;
    确定所述第二边界点在所述地面上的投影为第二个点;
    确定过所述第二边界点在所述地面上的投影,且与所述第二触地线平行的直线与所述第一触地线的交点为第三个点;
    确定所述第一个点和所述第三个点之间的连线为第一条线;
    确定所述第二个点和所述第三个点之间的连线为第二条线;
    根据所述第一个点,所述第二个点,所述第三个点,所述第一条线,以及所述第二条线,确定所述第一检测对象的三维信息。
  6. 根据权利要求1或2所述的方法,其特征在于,所述第一检测对象的可通行区域边界包括多个与第三标识对应的边界点;所述第三标识还对应于所述第一检测对象的第三侧面;所述第一检测对象的触地线中包括第三触地线;所述第三触地线为拟合所述多个与第三标识对应的边界点,确定的触地线。
  7. 根据权利要求6所述的方法,其特征在于,所述多个具有第三标识的边界点中包括第三边界点和第四边界点;
    所述第三边界点为所述多个具有第三标识的边界点中,距离所述第三触地线的一 端最远的点;
    所述第四边界点为所述多个具有第三标识的边界点中,距离所述第三触地线的另一端最远的点。
  8. 根据权利要求7所述的方法,其特征在于,所述确定所述第一检测对象的三维信息包括:
    确定所述第三边界点在所述地面上的投影为第一个点;
    确定所述第四边界点在所述地面上的投影为第二个点;
    确定所述第一个点和所述第二个点之间的连线为第一条线;
    根据所述第一个点,所述第二个点,以及所述第一条线,确定所述第一检测对象的三维信息。
  9. 根据权利要求1-8任一项所述的方法,其特征在于,所述方法还包括:
    将所述第一检测对象的三维信息输入车体坐标系中,确定所述第一检测对象的大小、方向和相对位置中的至少一项。
  10. 一种确定检测对象的三维信息的装置,其特征在于,包括:通信单元和处理单元;
    所述通信单元,用于获取待检测图像;所述待检测图像包括第一检测对象;
    所述处理单元,用于确定所述第一检测对象的可通行区域边界,以及所述第一检测对象的触地线;所述可通行区域边界包括所述第一检测对象在所述待检测图像中的边界;所述触地线为所述第一检测对象与地面的交点的连线;
    所述处理单元,还用于根据所述可通行区域边界,以及所述触地线,确定所述第一检测对象的三维信息。
  11. 一种确定检测对象的三维信息的装置,其特征在于,所述装置包括处理器和存储器,其中,所述存储器用于存储计算机程序和指令,所述处理器用于执行所述计算机程序和指令实现如权利要求1-9中任一项所述的确定检测对象的三维信息的方法。
  12. 一种智能车辆,其特征在于,包括车辆本体、单目摄像头以及如权利要求10所述的确定检测对象的三维信息的装置,所述单目摄像头用于采集待检测图像;所述确定检测对象的三维信息的装置用于执行如权利要求1-9任一项所述的确定检测对象的三维信息的方法,确定检测对象的三维信息。
  13. 根据权利要求12所述的智能车辆,其特征在于,还包括显示屏;所述显示屏用于显示所述检测对象的三维信息。
  14. 一种高级驾驶辅助系统ADAS,包括如权利要求10所述的确定检测对象的三维信息的装置,所述确定检测对象的三维信息的装置用于执行如权利要求1-9任一项所述的确定检测对象的三维信息的方法,确定检测对象的三维信息。
  15. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质包括计算机指令,当所述计算机指令在计算机上运行时,使得所述计算机执行如权利要求1-9任意一项所述的确定检测对象的三维信息的方法。
  16. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-9任意一项所述的确定检测对象的三维信息的方法。
PCT/CN2021/092807 2020-08-11 2021-05-10 确定检测对象的三维信息的方法及装置 WO2022033089A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010803409.2A CN114078246A (zh) 2020-08-11 2020-08-11 确定检测对象的三维信息的方法及装置
CN202010803409.2 2020-08-11

Publications (1)

Publication Number Publication Date
WO2022033089A1 true WO2022033089A1 (zh) 2022-02-17

Family

ID=80246877

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092807 WO2022033089A1 (zh) 2020-08-11 2021-05-10 确定检测对象的三维信息的方法及装置

Country Status (2)

Country Link
CN (1) CN114078246A (zh)
WO (1) WO2022033089A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226637A (zh) * 2007-01-18 2008-07-23 中国科学院自动化研究所 一种车轮与地面接触点的自动检测方法
CN103718224A (zh) * 2011-08-02 2014-04-09 日产自动车株式会社 三维物体检测装置和三维物体检测方法
US20150178575A1 (en) * 2012-07-27 2015-06-25 Nissan Motor Co., Ltd. Three-dimensional object detection device and three-dimensional object detection method
CN108550143A (zh) * 2018-04-03 2018-09-18 长安大学 一种基于rgb-d相机的车辆长宽高尺寸的测量方法
CN108645625A (zh) * 2018-03-21 2018-10-12 北京纵目安驰智能科技有限公司 尾端与侧面结合的3d车辆检测方法、系统、终端和存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226637A (zh) * 2007-01-18 2008-07-23 中国科学院自动化研究所 一种车轮与地面接触点的自动检测方法
CN103718224A (zh) * 2011-08-02 2014-04-09 日产自动车株式会社 三维物体检测装置和三维物体检测方法
US20150178575A1 (en) * 2012-07-27 2015-06-25 Nissan Motor Co., Ltd. Three-dimensional object detection device and three-dimensional object detection method
CN108645625A (zh) * 2018-03-21 2018-10-12 北京纵目安驰智能科技有限公司 尾端与侧面结合的3d车辆检测方法、系统、终端和存储介质
CN108550143A (zh) * 2018-04-03 2018-09-18 长安大学 一种基于rgb-d相机的车辆长宽高尺寸的测量方法

Also Published As

Publication number Publication date
CN114078246A (zh) 2022-02-22

Similar Documents

Publication Publication Date Title
CN110543814B (zh) 一种交通灯的识别方法及装置
WO2021000800A1 (zh) 道路可行驶区域推理方法及装置
CN112639882B (zh) 定位方法、装置及系统
WO2022104774A1 (zh) 目标检测方法和装置
WO2021057344A1 (zh) 一种数据呈现的方法及终端设备
CN112543877B (zh) 定位方法和定位装置
WO2021238306A1 (zh) 一种激光点云的处理方法及相关设备
WO2021218693A1 (zh) 一种图像的处理方法、网络的训练方法以及相关设备
WO2022148172A1 (zh) 车道线规划方法及相关装置
WO2022142839A1 (zh) 一种图像处理方法、装置以及智能汽车
WO2022001366A1 (zh) 车道线的检测方法和装置
WO2022051951A1 (zh) 车道线检测方法、相关设备及计算机可读存储介质
CN114842075B (zh) 数据标注方法、装置、存储介质及车辆
CN112810603B (zh) 定位方法和相关产品
WO2022052881A1 (zh) 一种构建地图的方法及计算设备
WO2022052765A1 (zh) 目标跟踪方法及装置
WO2021217646A1 (zh) 检测车辆可通行区域的方法及装置
KR20230003143A (ko) 차량이 붐 배리어를 통과하기 위한 방법 및 장치
CN115205311B (zh) 图像处理方法、装置、车辆、介质及芯片
WO2022033089A1 (zh) 确定检测对象的三维信息的方法及装置
WO2021159397A1 (zh) 车辆可行驶区域的检测方法以及检测装置
WO2022022284A1 (zh) 目标物的感知方法及装置
WO2021110166A1 (zh) 道路结构检测方法及装置
CN115100630A (zh) 障碍物检测方法、装置、车辆、介质及芯片
CN114821212A (zh) 交通标志物的识别方法、电子设备、车辆和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21855145

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21855145

Country of ref document: EP

Kind code of ref document: A1