CN110555407B - Pavement vehicle space identification method and electronic equipment - Google Patents


Publication number
CN110555407B
CN110555407B (application CN201910824982.9A)
Authority
CN
China
Prior art keywords
target vehicle
image
envelope frame
vehicle
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910824982.9A
Other languages
Chinese (zh)
Other versions
CN110555407A
Inventor
张昭
黄鸿彬
王舒
刘杰
杜超喜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongfeng Motor Co Ltd
Original Assignee
Dongfeng Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongfeng Motor Co Ltd filed Critical Dongfeng Motor Co Ltd
Priority to CN201910824982.9A
Publication of CN110555407A
Application granted
Publication of CN110555407B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a road vehicle space identification method and an electronic device. The method comprises the following steps: acquiring an image of a road target vehicle through a camera device; determining a three-dimensional envelope frame of the target vehicle according to the image; determining the distance between each facade included in the three-dimensional envelope frame and the camera device; identifying, according to the distance, the spatial coordinates of each vertex of the facade relative to the camera device; and determining the relative spatial position relationship between the target vehicle and the camera device according to the spatial coordinates of the vertices. The invention can use a monocular camera to identify a 3D model of the target vehicle and calculate its relative spatial position. Compared with a binocular camera, the method is lower in cost while achieving high calculation precision; compared with existing monocular-camera methods, its identification is more accurate.

Description

Pavement vehicle space identification method and electronic equipment
Technical Field
The invention relates to the technical field of automobiles, in particular to a method for identifying a vehicle space on a road surface and electronic equipment.
Background
Environmental awareness is the basis of automated driving and driver-assistance technology. A vehicle learns the distribution of other vehicles on the road surface through its sensors, and uses this information to plan its own motion or to issue danger warnings to the driver.
Common sensors for sensing surrounding target vehicles include laser radar (lidar), millimeter-wave radar, ultrasonic radar, vehicle-to-everything (V2X) communication, and cameras. Lidar measures target positions accurately but is easily affected by weather and is expensive. Millimeter-wave radar can detect the position and relative speed of a target, but its low resolution makes it ill-suited to target classification and identification. Ultrasonic radar has a limited detection range and can only be used in low-speed scenarios. V2X depends on network conditions, which limits its real-time performance and reliability.
Cameras are low in cost and high in resolution, capture rich target details, and support accurate classification and identification, making them suitable for perceiving target vehicles. However, an ordinary camera projects the three-dimensional scene onto an imaging plane, losing depth information, so the position and distance of a target cannot be determined directly. Infrared time-of-flight (TOF) cameras have a short range and are easily disturbed by outdoor illumination. A binocular stereo camera can compute target distance from binocular parallax, but matching image feature points consumes substantial computing power, increasing both image-processing difficulty and hardware cost.
In addition, ordinary-camera approaches to estimating target position and distance assume by default that the host vehicle and the road ahead lie on the same plane. If the slope of the road ahead changes, errors are introduced into the distance estimate.
Disclosure of Invention
Therefore, it is necessary to provide a road vehicle space identification method and an electronic device that address the problems in the prior art that identifying target vehicle information with a binocular camera is too costly, while a monocular camera cannot identify it accurately.
The invention provides a method for identifying a vehicle space on a road surface, which comprises the following steps:
acquiring an image of a road target vehicle through a camera device;
determining a three-dimensional envelope frame of the target vehicle according to the image of the target vehicle;
determining the distance between a facade included in the three-dimensional envelope frame and a camera device;
according to the distance, identifying the space coordinate of each vertex of the facade relative to a camera device;
and determining the relative spatial position relationship between the target vehicle and the camera device according to the spatial coordinates of each vertex.
Further, the determining a three-dimensional envelope frame of the target vehicle according to the image of the target vehicle specifically includes:
extracting a maximum two-dimensional envelope frame of the target vehicle from the image of the target vehicle;
extracting a two-dimensional envelope frame of the head or the tail of the target vehicle from the image of the target vehicle as a reference two-dimensional envelope frame;
determining a picture vanishing point of the image;
connecting the top point of the reference two-dimensional envelope frame of the target vehicle with the picture vanishing point to obtain a plurality of connecting lines;
and intersecting the connecting line with the maximum two-dimensional envelope frame of the target vehicle, and cutting the connecting line, the reference envelope frame and the maximum two-dimensional envelope frame according to the perspective imaging principle of a cube to obtain the three-dimensional envelope frame of the target vehicle, which accords with the perspective relation.
Further, the determining the picture vanishing point of the image specifically includes:
extracting a plurality of straight lane lines from the image, and taking the intersection point of the plurality of lane lines as the picture vanishing point; or
And taking an imaging point corresponding to the optical axis of the camera device as a picture vanishing point.
Further, the determining a distance between a facade included in the three-dimensional envelope frame and the camera device specifically includes:
acquiring the height of an envelope frame of a facade included in the three-dimensional envelope frame;
and calculating the distance between the vertical face and the image pick-up device according to the L-H-f/H, wherein H is the actual height of the vehicle, f is the image distance from a photosensitive element of the image pick-up device to a focus, and H is the height of an enveloping frame of the vertical face.
Further, the determining a distance between a facade included in the three-dimensional envelope frame and the camera device specifically further includes:
determining the vehicle type of the target vehicle according to the image of the target vehicle;
and determining the actual height of the vehicle according to the vehicle type.
Further, the identifying, according to the distance, the spatial coordinates of each vertex of the facade with respect to the camera device specifically includes:
acquiring a checkerboard related to the distance, wherein the checkerboard comprises a plurality of squares, and each square corresponds to a space coordinate relative to the camera device at the distance;
and taking the space coordinate of the square where each vertex of the facade is positioned in the checkerboard as the space coordinate of the vertex relative to the camera device.
The invention provides a road vehicle space recognition electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to:
acquiring an image of a road target vehicle through a camera device;
determining a three-dimensional envelope frame of the target vehicle according to the image of the target vehicle;
determining the distance between a facade included in the three-dimensional envelope frame and a camera device;
according to the distance, identifying the space coordinate of each vertex of the facade relative to a camera device;
and determining the relative spatial position relationship between the target vehicle and the camera device according to the spatial coordinates of each vertex.
Further, the determining a three-dimensional envelope frame of the target vehicle according to the image of the target vehicle specifically includes:
extracting a maximum two-dimensional envelope frame of the target vehicle from the image of the target vehicle;
extracting a two-dimensional envelope frame of the head or the tail of the target vehicle from the image of the target vehicle as a reference two-dimensional envelope frame;
determining a picture vanishing point of the image;
connecting the top point of the reference two-dimensional envelope frame of the target vehicle with the picture vanishing point to obtain a plurality of connecting lines;
and intersecting the connecting line with the maximum two-dimensional envelope frame of the target vehicle, and cutting the connecting line, the reference envelope frame and the maximum two-dimensional envelope frame according to the perspective imaging principle of a cube to obtain the three-dimensional envelope frame of the target vehicle, which accords with the perspective relation.
Further, the determining the picture vanishing point of the image specifically includes:
extracting a plurality of straight lane lines from the image, and taking the intersection point of the plurality of lane lines as the picture vanishing point; or
And taking an imaging point corresponding to the optical axis of the camera device as a picture vanishing point.
Further, the determining a distance between a facade included in the three-dimensional envelope frame and the camera device specifically includes:
acquiring the height of an envelope frame of a facade included in the three-dimensional envelope frame;
and calculating the distance between the vertical face and the image pick-up device according to the L-H-f/H, wherein H is the actual height of the vehicle, f is the image distance from a photosensitive element of the image pick-up device to a focus, and H is the height of an enveloping frame of the vertical face.
Further, the determining a distance between a facade included in the three-dimensional envelope frame and the camera device specifically further includes:
determining the vehicle type of the target vehicle according to the image of the target vehicle;
and determining the actual height of the vehicle according to the vehicle type.
Further, the identifying, according to the distance, the spatial coordinates of each vertex of the facade with respect to the camera device specifically includes:
acquiring a checkerboard related to the distance, wherein the checkerboard comprises a plurality of squares, and each square corresponds to a space coordinate relative to the camera device at the distance;
and taking the space coordinate of the square where each vertex of the facade is positioned in the checkerboard as the space coordinate of the vertex relative to the camera device.
The invention uses a monocular camera to identify a 3D model of the target vehicle and calculate its relative spatial position. Compared with a binocular camera, the method is lower in cost while achieving high calculation precision; compared with existing monocular-camera methods, its identification is more accurate.
Drawings
FIG. 1 is a flow chart of the operation of a method for identifying a vehicle space on a roadway according to the present invention;
FIG. 2 is a schematic image of a roadway subject vehicle;
FIG. 3 is a schematic diagram of a two-dimensional envelope of a target vehicle;
FIG. 4 is a schematic view of determining a vanishing point for a frame based on a straight lane line;
FIG. 5 is a schematic view of a connecting wire;
FIG. 6 is a schematic diagram of a three-dimensional envelope of a target vehicle;
FIG. 7 is a schematic of distance determination;
FIG. 8 is a schematic view of camera calibration;
fig. 9 is a schematic diagram of a hardware structure of the electronic device for identifying a vehicle space on a road according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples.
Fig. 1 is a work flow chart of a method for identifying a vehicle space on a road according to the present invention, which includes:
step S101, acquiring an image of a road surface target vehicle through a camera device;
step S102, determining a three-dimensional envelope frame of the target vehicle according to the image of the target vehicle;
step S103, determining the distance between the facade included in the three-dimensional envelope frame and the camera device;
step S104, identifying the space coordinate of each vertex of the facade relative to the camera device according to the distance;
step S105, determining the relative spatial position relationship between the target vehicle and the camera device according to the spatial coordinates of each vertex.
Specifically, the input information required in the development stage comprises the camera projection matrix of the camera device and the installation position of the camera device on the vehicle. The camera projection matrix is a basic parameter of the camera and is obtained during camera development and calibration; the mounting position is basic information from the vehicle design process. The only input required during use is the image acquired by the camera in real time. The camera device may be mounted on a vehicle or on roadside equipment. In step S101, an image of a target vehicle on the road surface is acquired by the camera device, and in step S102 the acquired image is analyzed to obtain the three-dimensional envelope frame of the target vehicle in the image. As shown in fig. 6, the three-dimensional envelope frame is a virtual cube 2 that envelops the target vehicle 1, so the stereoscopic shape of the target vehicle can be represented by the three-dimensional envelope frame. In step S103, the distance between each facade and the camera device is calculated; preferably, the selected facades include at least a front facade 21 and a rear facade 22. In step S104, the spatial coordinates of each vertex of the facade relative to the camera device are determined, comprising X-axis, Y-axis, and Z-axis coordinates, where the Z-axis coordinate of a vertex is the distance of its facade from the camera device, and the X-axis and Y-axis coordinates are determined from that distance together with the shape and size of the facade in the image. Finally, in step S105, the relative spatial position relationship between the target vehicle and the camera device is determined from the spatial coordinates of the vertices.
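The X- and Y-coordinate determination described above can be illustrated with a minimal pinhole back-projection sketch. Note this is a simplified alternative for illustration only: the patent itself obtains the X and Y coordinates by checkerboard table lookup (described later), and the focal length and principal point below are hypothetical values.

```python
def vertex_world_coords(u_px, v_px, distance_m, focal_px, principal=(640, 360)):
    """Back-project one facade vertex from pixel to camera coordinates.

    Pinhole assumption: X = (u - cu) * Z / f, Y = (v - cv) * Z / f, with
    Z equal to the facade distance. `principal` is an assumed principal
    point for a 1280x720 image; all values here are illustrative.
    """
    cu, cv = principal
    x = (u_px - cu) * distance_m / focal_px
    y = (v_px - cv) * distance_m / focal_px
    return (x, y, distance_m)

# A vertex 100 px right of the principal point, 15 m ahead, f = 1000 px:
print(vertex_world_coords(740, 360, 15.0, 1000))  # → (1.5, 0.0, 15.0)
```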
For example, a 3D model of the target vehicle can be identified from the image captured by the monocular camera, and its size and position relative to the host vehicle can be output. Furthermore, combined with the sequence of target positions over a continuous period, the speed of the target vehicle can be calculated, or a driving path can be planned in autonomous driving. For example, recording the position of the target vehicle relative to the host vehicle carrying the camera device over a period of time allows the relative speed between the two to be calculated; combining this with the host vehicle's own speed yields the target vehicle's speed. As another example, the distance between the target vehicle and the host vehicle can be used to plan an automatic driving route that avoids collision.
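The relative-speed calculation mentioned above can be sketched as a simple finite difference over the recorded longitudinal positions (the sampling rate and distances below are illustrative assumptions, not from the patent):

```python
def relative_speed(positions, dt):
    """Average relative speed (m/s) from a sequence of target Z-distances
    sampled every dt seconds; positive means the target is pulling away."""
    if len(positions) < 2 or dt <= 0:
        raise ValueError("need at least two samples and a positive period")
    # Overall displacement divided by the total elapsed time.
    return (positions[-1] - positions[0]) / ((len(positions) - 1) * dt)

# Target drifts from 15 m to 18 m ahead over 3 intervals at 10 Hz:
print(relative_speed([15.0, 16.0, 17.0, 18.0], 0.1))  # → 10.0
```

Adding the host vehicle's own speed to this relative speed gives an estimate of the target vehicle's absolute speed, as the description notes.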
The invention can use a monocular camera to identify the 3D model of the target vehicle and calculate the relative spatial position information. Compared with a binocular camera, the method is low in cost and high in calculation precision. Compared with the existing monocular camera, the method has the advantage that the identification effect is more accurate.
In one embodiment, the determining a three-dimensional envelope frame of the target vehicle according to the image of the target vehicle specifically includes:
extracting a maximum two-dimensional envelope frame of the target vehicle from the image of the target vehicle;
extracting a two-dimensional envelope frame of the head or the tail of the target vehicle from the image of the target vehicle as a reference two-dimensional envelope frame;
determining a picture vanishing point of the image;
connecting the top point of the reference two-dimensional envelope frame of the target vehicle with the picture vanishing point to obtain a plurality of connecting lines;
and intersecting the connecting line with the maximum two-dimensional envelope frame of the target vehicle, and cutting the connecting line, the reference envelope frame and the maximum two-dimensional envelope frame according to the perspective imaging principle of a cube to obtain the three-dimensional envelope frame of the target vehicle, which accords with the perspective relation.
Specifically, for a vehicle as shown in fig. 2, the maximum 2D envelope frame of the target vehicle and the 2D envelope frame of its head or tail are first extracted from the image captured by the camera device. Identification of the 2D envelope frames can be realized with an AdaBoost classifier or existing algorithms such as YOLO, SSD, and RCNN, all mature techniques in the image-processing field. The recognition result is shown in fig. 3: a maximum two-dimensional envelope frame 11 and a two-dimensional envelope frame 12 of the tail are extracted for the target vehicle 1. Then, as shown in fig. 4, the picture vanishing point 13 of the image is determined, and as shown in fig. 5, the four vertexes of the two-dimensional envelope frame 12 of the head or tail of the target vehicle 1 are connected to the picture vanishing point 13 by connecting lines 14, which intersect the maximum two-dimensional envelope frame 11 of the target vehicle. The connecting lines and the envelope frame lines are then cut according to the perspective imaging principle of a cube, yielding the 3D envelope frame 2 of the target vehicle 1 that conforms to the perspective relationship, as shown in fig. 6.
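One sub-step above — finding where a connecting line from a reference-frame vertex toward the vanishing point crosses the maximum 2D envelope frame — can be sketched as a parametric-line test against the frame's vertical edges. This is a simplified illustration of the geometry, not the patent's full cutting procedure, and all coordinates are hypothetical:

```python
def line_box_intersection(vertex, vp, box):
    """Point where the segment vertex->vp crosses a vertical edge of the
    maximum 2D envelope frame.

    vertex -- corner (x, y) of the reference (head/tail) envelope frame
    vp     -- picture vanishing point (x, y)
    box    -- maximum envelope frame as (x_min, y_min, x_max, y_max)
    """
    (vx, vy), (px, py) = vertex, vp
    x_min, y_min, x_max, y_max = box
    dx = px - vx
    if dx == 0:
        return None  # connecting line is vertical in the image
    # Parametrise the line as vertex + t * (vp - vertex), t in (0, 1].
    for edge_x in (x_min, x_max):
        t = (edge_x - vx) / dx
        if 0.0 < t <= 1.0:
            y = vy + t * (py - vy)
            if y_min <= y <= y_max:
                return (edge_x, y)
    return None

# Tail-frame corner (900, 700), vanishing point (640, 336), max frame
# (700, 400, 1100, 700): the line exits through the left edge at x = 700.
print(line_box_intersection((900, 700), (640, 336), (700, 400, 1100, 700)))
```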
In the embodiment, the three-dimensional envelope frame of the target vehicle according with the perspective relation is determined by processing the image of the target vehicle.
In one embodiment, the determining the picture vanishing point of the image specifically includes:
extracting a plurality of straight lane lines from the image, and taking the intersection points of the plurality of lane lines as image vanishing points; or
And taking an imaging point corresponding to the optical axis of the camera device as a picture vanishing point.
Specifically, as shown in fig. 4, straight lane lines 15 are extracted from the image, and their intersection point (the point where they meet at infinity) is taken as the picture vanishing point 13. Extraction of the straight lane lines 15 can be realized with existing algorithms such as the Hough transform, a mature technique in the image-processing field. The recognition result for the lane lines and the picture vanishing point is shown in fig. 4. When the lane lines are not straight, or cannot be identified due to wear, occlusion, or the like, the imaging point corresponding to the optical axis of the camera device is used as the picture vanishing point instead.
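The lane-line intersection can be computed from two lines, each given by two image points. This is a minimal sketch of the intersection step only (the pixel coordinates are illustrative; lane-line extraction itself would come from a Hough-transform stage, as the description notes):

```python
def line_from_points(p1, p2):
    # Line through two pixel points in a*x + b*y = c form.
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2
    return a, b, a * x1 + b * y1

def vanishing_point(lane1, lane2):
    """Intersect two straight lane lines, each given as two image points."""
    a1, b1, c1 = line_from_points(*lane1)
    a2, b2, c2 = line_from_points(*lane2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        # Parallel in the image: caller falls back to the optical-axis point.
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Two lane lines converging toward the horizon of a 1280x720 image:
print(vanishing_point(((0, 720), (600, 360)), ((1280, 720), (680, 360))))
# → (640.0, 336.0)
```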
The embodiment determines the frame vanishing point according to the lane line so as to further determine the three-dimensional envelope frame of the target vehicle.
In one embodiment, the determining a distance between a facade included in the three-dimensional envelope frame and the camera device specifically includes:
acquiring the height of an envelope frame of a facade included in the three-dimensional envelope frame;
and calculating the distance between the vertical face and the image pick-up device according to the L-H-f/H, wherein H is the actual height of the vehicle, f is the image distance from a photosensitive element of the image pick-up device to a focus, and H is the height of an enveloping frame of the vertical face.
Specifically, as shown in fig. 7, the imaging law of the camera can be described by a pinhole optical model. A vehicle-mounted camera typically does not capture the top or bottom surface of a target vehicle, so the vertical edge height of the target vehicle corresponds to the height of its 2D envelope frame in the camera image. The actual height of the target vehicle is H; according to statistics, vehicle heights follow an approximately normal distribution and can be simplified to a mean value. The envelope frame height h in the image can be calculated from the pixel height of the 2D envelope frame and the pixel pitch of the photosensitive element. F is the imaging focal point, and f is the image distance from the photosensitive element to the focal point, a hardware parameter of the camera. The object distance L from the target to the camera is therefore:
L = H * f / h.
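A minimal sketch of this formula, with units and numeric values chosen purely for illustration:

```python
def facade_distance(vehicle_height_m, focal_px, bbox_height_px):
    """Pinhole-model object distance: L = H * f / h.

    vehicle_height_m -- assumed real vehicle height H (metres)
    focal_px         -- image distance f from photosensitive element to
                        focal point, expressed in pixels
    bbox_height_px   -- height h of the facade's 2D envelope frame in pixels
    """
    if bbox_height_px <= 0:
        raise ValueError("envelope frame height must be positive")
    return vehicle_height_m * focal_px / bbox_height_px

# A 1.5 m tall car whose rear facade spans 100 px, with f = 1000 px,
# is estimated to be 15 m away.
print(facade_distance(1.5, 1000, 100))  # → 15.0
```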
The distance between each facade of the target vehicle and the camera device is thus determined according to the camera's imaging law.
In one embodiment, the determining a distance between a facade included in the three-dimensional envelope frame and a camera device specifically further includes:
determining the vehicle type of the target vehicle according to the image of the target vehicle;
and determining the actual height of the vehicle according to the vehicle type.
Specifically, vehicles of different types differ in height, while heights within the same type generally follow a normal distribution and can be simplified to a mean value.
Preferably, vehicle types may be classified into categories such as passenger cars, buses, and trucks, with a vehicle height preset for each category.
The vehicle height is determined according to the vehicle type, and the accuracy of the vehicle height is further improved.
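A height-by-type lookup of the kind described can be sketched as a small table of assumed mean heights. The numeric values and category names here are illustrative placeholders, not figures from the patent:

```python
# Assumed mean heights in metres; a real deployment would fit these to
# fleet statistics, since heights within a class are roughly normal.
MEAN_HEIGHT_BY_TYPE = {"passenger_car": 1.5, "bus": 3.0, "truck": 3.5}

def assumed_vehicle_height(vehicle_type):
    # Fall back to the passenger-car mean when classification fails.
    return MEAN_HEIGHT_BY_TYPE.get(vehicle_type, 1.5)

print(assumed_vehicle_height("bus"))      # → 3.0
print(assumed_vehicle_height("unknown"))  # → 1.5
```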
In one embodiment, the identifying, according to the distance, spatial coordinates of each vertex of the facade with respect to a camera device specifically includes:
acquiring a checkerboard related to the distance, wherein the checkerboard comprises a plurality of squares, and each square corresponds to a space coordinate relative to the camera device at the distance;
and taking the space coordinate of the square where each vertex of the facade is positioned in the checkerboard as the space coordinate of the vertex relative to the camera device.
Specifically, in the prior art, a planar checkerboard is used for camera calibration, and the real position of a target is obtained by table lookup from the position at which the target appears on the checkerboard. However, this method introduces errors into the distance calculation when the gradient of the road ahead changes. For example, if the road ahead slopes uphill, a target on the slope is identified as farther away than it actually is.
The present embodiment instead uses distance-indexed checkerboards, each obtained by 3D-calibrating the camera in advance. As shown in fig. 8, vertical checkerboards 81 are placed at different distances in front of the camera device 82, and the horizontal and vertical real-world coordinates of the pixel position corresponding to each square of a checkerboard 81 are recorded. The 3D envelope frame of the target vehicle has 8 vertices. The longitudinal distances of the front and rear facades from the camera are calculated with the optical model of fig. 7. According to these distances, the checkerboard data recorded at the corresponding calibration distances are retrieved, the horizontal and vertical real-world coordinates of the 4 vertices of each facade are obtained by table lookup, and the distance is added to give the three-dimensional spatial coordinates of each vertex. If the distance of the target vehicle does not match any standard calibration distance, the checkerboard data of the two adjacent standard distances are retrieved and compensated by linear interpolation. At this point the three-dimensional coordinates of all 8 vertices of the target vehicle's 3D envelope frame have been calculated, and the size and position of the target vehicle relative to the host vehicle can be determined.
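The table lookup with linear interpolation between the two adjacent calibration distances can be sketched as follows. The calibration dictionary and cell indexing are hypothetical simplifications of the checkerboard data described above:

```python
import bisect

def lookup_coords(calib, cell, distance_m):
    """Look up (X, Y) for a checkerboard cell, linearly interpolating
    between the two nearest calibrated distances.

    calib -- {distance_m: {cell: (X, Y)}} from an offline 3D calibration
             (structure assumed for illustration)
    """
    ds = sorted(calib)
    if distance_m <= ds[0]:
        return calib[ds[0]][cell]
    if distance_m >= ds[-1]:
        return calib[ds[-1]][cell]
    # Find the two calibrated distances bracketing the target distance.
    i = bisect.bisect_left(ds, distance_m)
    d0, d1 = ds[i - 1], ds[i]
    (x0, y0), (x1, y1) = calib[d0][cell], calib[d1][cell]
    w = (distance_m - d0) / (d1 - d0)
    return (x0 + w * (x1 - x0), y0 + w * (y1 - y0))

# Hypothetical calibration at 10 m and 20 m for one checkerboard cell:
calib = {10.0: {(3, 4): (1.0, 0.5)}, 20.0: {(3, 4): (2.0, 1.0)}}
print(lookup_coords(calib, (3, 4), 15.0))  # → (1.5, 0.75)
```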
The checkerboard calibrates the projection relationship from three-dimensional spatial coordinates to camera image coordinates. In actual use, no physical checkerboard needs to be present: the coordinates are determined from the correspondence between the target image and the stored checkerboard data.
When the tail envelope frame and the maximum envelope frame of the vehicle completely overlap, the target vehicle is judged to be directly in front of the host vehicle. The three-dimensional coordinates of the 4 rear vertices of the target vehicle's 3D envelope frame are calculated as described above, and the coordinates of the 4 front vertices are estimated from the typical length-width-height proportions of a vehicle.
In the embodiment, the coordinates of the vertex of the vertical surface are determined by adopting the checkerboard data related to different distances, so that the space coordinates of the vertex are more accurate.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device for identifying a vehicle space on a road according to the present invention, which includes:
at least one processor 901; and
a memory 902 communicatively connected to the at least one processor 901; wherein
the memory 902 stores instructions executable by the at least one processor 901 to cause the at least one processor to:
acquiring an image of a road target vehicle through a camera device;
determining a three-dimensional envelope frame of the target vehicle according to the image of the target vehicle;
determining the distance between a facade included in the three-dimensional envelope frame and a camera device;
according to the distance, identifying the space coordinate of each vertex of the facade relative to a camera device;
and determining the relative spatial position relationship between the target vehicle and the camera device according to the spatial coordinates of each vertex.
In fig. 9, one processor 901 is taken as an example.
The electronic device is preferably an electronic control unit (ECU). The electronic device may further include: an input device 903 and a display device 904.
The processor 901, the memory 902, the input device 903, and the display device 904 may be connected by a bus or other means, and are illustrated as being connected by a bus.
The memory 902, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the method for identifying a space of a road vehicle in the embodiment of the present application, for example, the method flows shown in fig. 1, fig. 2, and fig. 3. The processor 901 executes various functional applications and data processing by running nonvolatile software programs, instructions, and modules stored in the memory 902, that is, implements the road surface vehicle space identification method in the above-described embodiment.
The memory 902 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the road vehicle space identification method, and the like. Further, the memory 902 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 902 may optionally include memory located remotely from the processor 901, which may be connected over a network to a device that performs the road vehicle space identification method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 903 may receive user click input and generate signal inputs related to the user settings and function control of the road vehicle space identification method. The display device 904 may include a display screen or the like.
When the one or more modules are stored in the memory 902 and executed by the one or more processors 901, the road vehicle space identification method in any of the above method embodiments is performed.
The invention can identify a 3D model of the target vehicle and calculate relative spatial position information using a monocular camera. Compared with a binocular camera, the cost is lower and the calculation precision is higher; compared with existing monocular-camera approaches, the identification is more accurate.
In one embodiment, the determining a three-dimensional envelope frame of the target vehicle according to the image of the target vehicle specifically includes:
extracting a maximum two-dimensional envelope frame of the target vehicle from the image of the target vehicle;
extracting a two-dimensional envelope frame of the head or the tail of the target vehicle from the image of the target vehicle as a reference two-dimensional envelope frame;
determining a picture vanishing point of the image;
connecting the vertices of the reference two-dimensional envelope frame of the target vehicle with the picture vanishing point to obtain a plurality of connecting lines;
and intersecting the connecting lines with the maximum two-dimensional envelope frame of the target vehicle, and cutting the connecting lines, the reference envelope frame and the maximum two-dimensional envelope frame according to the perspective imaging principle of a cube, to obtain a three-dimensional envelope frame of the target vehicle that conforms to the perspective relation.
In the embodiment, the three-dimensional envelope frame of the target vehicle according with the perspective relation is determined by processing the image of the target vehicle.
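As a rough geometric sketch of this construction, assuming the two 2D envelope frames and the picture vanishing point are already available in pixel coordinates (the boxes and points below are made-up illustrative values, not values from the patent):

```python
def clip_toward_vanishing_point(vertex, vp, box):
    """Follow the connecting line from a reference-face vertex toward the
    picture vanishing point and return the point where it first crosses the
    maximum 2D envelope frame box = (x0, y0, x1, y1). Under the cube
    perspective-imaging principle, these clipped points are candidates for
    the rear-face vertices of the 3D envelope frame."""
    x0, y0, x1, y1 = box
    dx, dy = vp[0] - vertex[0], vp[1] - vertex[1]
    ts = []
    if dx:
        ts += [(x0 - vertex[0]) / dx, (x1 - vertex[0]) / dx]
    if dy:
        ts += [(y0 - vertex[1]) / dy, (y1 - vertex[1]) / dy]
    t = min(t for t in ts if t > 0)  # first boundary crossing along the ray
    return (vertex[0] + t * dx, vertex[1] + t * dy)

# Illustrative values: head/tail reference face, maximum frame, vanishing point.
ref_box = [(100, 200), (300, 200), (300, 400), (100, 400)]
max_box = (80, 150, 500, 420)
vp = (640, 180)
rear_face = [clip_toward_vanishing_point(v, vp, max_box) for v in ref_box]
```

Joining `ref_box` and `rear_face` vertex-for-vertex then yields the eight corners of the perspective-consistent three-dimensional envelope frame.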
In one embodiment, the determining the picture vanishing point of the image specifically includes:
extracting a plurality of straight lane lines from the image, and taking the intersection point of the plurality of lane lines as the picture vanishing point; or
taking the imaging point corresponding to the optical axis of the camera device as the picture vanishing point.
This embodiment determines the picture vanishing point from the lane lines so as to further determine the three-dimensional envelope frame of the target vehicle.
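A minimal sketch of the lane-line variant, assuming each detected straight lane line is given as coefficients (a, b, c) of a·x + b·y + c = 0 in pixel coordinates (the two lines below are illustrative, chosen to meet at pixel (320, 180)):

```python
import numpy as np

def picture_vanishing_point(lines):
    """Least-squares intersection of straight lane lines: solve the stacked
    system a_i*x + b_i*y = -c_i for the common point (x, y)."""
    A = np.array([[a, b] for a, b, _ in lines], dtype=float)
    rhs = np.array([-c for _, _, c in lines], dtype=float)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return tuple(sol)

# Two illustrative lane lines meeting at pixel (320, 180).
left = (300.0, 320.0, -153600.0)   # passes through (0, 480) and (320, 180)
right = (300.0, -320.0, -38400.0)  # passes through (640, 480) and (320, 180)
vp = picture_vanishing_point([left, right])
```

With more than two lane lines the least-squares solution averages out small detection errors rather than requiring an exact common intersection.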
In one embodiment, the determining a distance between a facade included in the three-dimensional envelope frame and the camera device specifically includes:
acquiring the height of an envelope frame of a facade included in the three-dimensional envelope frame;
and calculating the distance between the facade and the camera device according to L = H·f/h, where H is the actual height of the vehicle, f is the image distance from the photosensitive element of the camera device to the focus, and h is the height of the envelope frame of the facade.
This embodiment determines the distance between each facade of the target vehicle and the camera device according to the imaging law of the camera.
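The similar-triangles relation can be written out directly; the numeric values below are illustrative, with the image distance f taken in the same pixel units as the envelope-frame height h:

```python
def facade_distance(H, f, h):
    """Pinhole ranging L = H * f / h: H is the actual vehicle height (m),
    f the image distance from the photosensitive element to the focus
    (expressed in pixels here), and h the pixel height of the facade's
    envelope frame."""
    return H * f / h

# An assumed 1.5 m high car imaged 100 px tall with f = 1000 px: ~15 m away.
L = facade_distance(1.5, 1000.0, 100.0)
```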
In one embodiment, the determining a distance between a facade included in the three-dimensional envelope frame and a camera device specifically further includes:
determining the vehicle type of the target vehicle according to the image of the target vehicle;
and determining the actual height of the vehicle according to the vehicle type.
The vehicle height is determined according to the vehicle type, and the accuracy of the vehicle height is further improved.
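A sketch of the type-to-height step; the table below is hypothetical and would in practice come from the classifier's label set and published vehicle dimensions:

```python
# Hypothetical typical heights in metres, keyed by recognized vehicle type.
TYPICAL_HEIGHT_M = {"car": 1.5, "suv": 1.7, "van": 1.9, "bus": 3.1, "truck": 3.5}

def actual_vehicle_height(vehicle_type, fallback=1.6):
    """Map the vehicle type recognized from the image to an assumed actual
    height H for the L = H * f / h calculation; fall back to a generic
    value when the type is unknown."""
    return TYPICAL_HEIGHT_M.get(vehicle_type.lower(), fallback)
```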
In one embodiment, the identifying, according to the distance, spatial coordinates of each vertex of the facade with respect to a camera device specifically includes:
acquiring a checkerboard related to the distance, wherein the checkerboard comprises a plurality of squares, and each square corresponds to a space coordinate relative to the camera device at the distance;
and taking the space coordinate of the square where each vertex of the facade is positioned in the checkerboard as the space coordinate of the vertex relative to the camera device.
In the embodiment, the coordinates of the vertex of the vertical surface are determined by adopting the checkerboard data related to different distances, so that the space coordinates of the vertex are more accurate.
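One way to sketch this lookup, assuming each distance has a precalibrated table mapping checkerboard squares to spatial coordinates (the cell size and coordinate values below are made up):

```python
def vertex_space_coordinate(vertex_px, cell_px, calibration):
    """Return the precalibrated spatial coordinate (relative to the camera
    device) of the checkerboard square containing a facade vertex.
    `calibration` is a dict {(col, row): (X, Y, Z)} built for one specific
    distance; a full system would hold one such table per distance."""
    cell = (int(vertex_px[0] // cell_px), int(vertex_px[1] // cell_px))
    return calibration[cell]

# Illustrative calibration for a single distance, 40 px squares.
calib = {(c, r): (0.1 * c, 0.1 * r, 12.0) for c in range(16) for r in range(12)}
coord = vertex_space_coordinate((130, 250), 40, calib)  # falls in square (3, 6)
```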
The above embodiments express only several implementations of the present invention, and their description is comparatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for identifying a vehicle space on a road, comprising:
acquiring an image of a road target vehicle through a camera device;
determining a three-dimensional envelope frame of the target vehicle according to the image of the target vehicle;
determining the distance between a facade included in the three-dimensional envelope frame and a camera device;
according to the distance, identifying the space coordinate of each vertex of the facade relative to a camera device;
determining the relative spatial position relationship between the target vehicle and the camera device according to the spatial coordinates of each vertex;
the determining a three-dimensional envelope frame of the target vehicle according to the image of the target vehicle specifically includes:
extracting a maximum two-dimensional envelope frame of the target vehicle from the image of the target vehicle;
extracting a two-dimensional envelope frame of the head or the tail of the target vehicle from the image of the target vehicle as a reference two-dimensional envelope frame;
determining a picture vanishing point of the image;
connecting the vertices of the reference two-dimensional envelope frame of the target vehicle with the picture vanishing point to obtain a plurality of connecting lines;
and intersecting the connecting line with the maximum two-dimensional envelope frame of the target vehicle, and cutting the connecting line, the reference envelope frame and the maximum two-dimensional envelope frame according to the perspective imaging principle of a cube to obtain the three-dimensional envelope frame of the target vehicle, which accords with the perspective relation.
2. The method for identifying a space of a road vehicle according to claim 1, wherein the determining a picture vanishing point of the image specifically comprises:
extracting a plurality of straight lane lines from the image, and taking the intersection point of the plurality of lane lines as the picture vanishing point; or
taking the imaging point corresponding to the optical axis of the camera device as the picture vanishing point.
3. The method for identifying a road vehicle space according to claim 1, wherein the determining the distance between the facade included in the three-dimensional envelope frame and the camera device specifically comprises:
acquiring the height of an envelope frame of a facade included in the three-dimensional envelope frame;
and calculating the distance between the facade and the camera device according to L = H·f/h, where H is the actual height of the vehicle, f is the image distance from the photosensitive element of the camera device to the focus, and h is the height of the envelope frame of the facade.
4. The method for identifying a road vehicle space according to claim 3, wherein the determining of the distance between the facade included in the three-dimensional envelope frame and the camera device further comprises:
determining the vehicle type of the target vehicle according to the image of the target vehicle;
and determining the actual height of the vehicle according to the vehicle type.
5. The method for identifying a road vehicle space according to claim 1, wherein the identifying spatial coordinates of each vertex of the vertical surface relative to a camera device according to the distance specifically comprises:
acquiring a checkerboard related to the distance, wherein the checkerboard comprises a plurality of squares, and each square corresponds to a space coordinate relative to the camera device at the distance;
and taking the space coordinate of the square where each vertex of the facade is positioned in the checkerboard as the space coordinate of the vertex relative to the camera device.
6. A road vehicle space recognition electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to:
acquiring an image of a road target vehicle through a camera device;
determining a three-dimensional envelope frame of the target vehicle according to the image of the target vehicle;
determining the distance between a facade included in the three-dimensional envelope frame and a camera device;
according to the distance, identifying the space coordinate of each vertex of the facade relative to a camera device;
determining the relative spatial position relationship between the target vehicle and the camera device according to the spatial coordinates of each vertex;
the determining a three-dimensional envelope frame of the target vehicle according to the image of the target vehicle specifically includes:
extracting a maximum two-dimensional envelope frame of the target vehicle from the image of the target vehicle;
extracting a two-dimensional envelope frame of the head or the tail of the target vehicle from the image of the target vehicle as a reference two-dimensional envelope frame;
determining a picture vanishing point of the image;
connecting the vertices of the reference two-dimensional envelope frame of the target vehicle with the picture vanishing point to obtain a plurality of connecting lines;
and intersecting the connecting line with the maximum two-dimensional envelope frame of the target vehicle, and cutting the connecting line, the reference envelope frame and the maximum two-dimensional envelope frame according to the perspective imaging principle of a cube to obtain the three-dimensional envelope frame of the target vehicle, which accords with the perspective relation.
7. The device according to claim 6, wherein the determining a picture vanishing point of the image specifically comprises:
extracting a plurality of straight lane lines from the image, and taking the intersection point of the plurality of lane lines as the picture vanishing point; or
taking the imaging point corresponding to the optical axis of the camera device as the picture vanishing point.
8. The electronic device for identifying a vehicle space on a road according to claim 6, wherein the determining a distance between a facade included in the three-dimensional envelope frame and the camera device specifically comprises:
acquiring the height of an envelope frame of a facade included in the three-dimensional envelope frame;
and calculating the distance between the facade and the camera device according to L = H·f/h, where H is the actual height of the vehicle, f is the image distance from the photosensitive element of the camera device to the focus, and h is the height of the envelope frame of the facade.
9. The electronic device for identifying a vehicle space on a road according to claim 8, wherein the determining of the distance between the facade included in the three-dimensional envelope frame and the camera device further includes:
determining the vehicle type of the target vehicle according to the image of the target vehicle;
and determining the actual height of the vehicle according to the vehicle type.
10. The road vehicle space recognition electronic device of claim 6, wherein the recognizing, according to the distance, the spatial coordinates of each vertex of the facade relative to a camera device specifically comprises:
acquiring a checkerboard related to the distance, wherein the checkerboard comprises a plurality of squares, and each square corresponds to a space coordinate relative to the camera device at the distance;
and taking the space coordinate of the square where each vertex of the facade is positioned in the checkerboard as the space coordinate of the vertex relative to the camera device.
CN201910824982.9A 2019-09-02 2019-09-02 Pavement vehicle space identification method and electronic equipment Active CN110555407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910824982.9A CN110555407B (en) 2019-09-02 2019-09-02 Pavement vehicle space identification method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110555407A CN110555407A (en) 2019-12-10
CN110555407B true CN110555407B (en) 2022-03-08

Family

ID=68738675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910824982.9A Active CN110555407B (en) 2019-09-02 2019-09-02 Pavement vehicle space identification method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110555407B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111413692B (en) * 2020-03-18 2022-03-18 东风汽车集团有限公司 Camera transverse position estimation self-calibration method based on roadside stationary object
CN113470110A (en) * 2020-03-30 2021-10-01 北京四维图新科技股份有限公司 Distance measuring method and device
CN111695403B (en) * 2020-04-19 2024-03-22 东风汽车股份有限公司 Depth perception convolutional neural network-based 2D and 3D image synchronous detection method
CN113591518B (en) * 2020-04-30 2023-11-03 华为技术有限公司 Image processing method, network training method and related equipment
CN112184792B (en) * 2020-08-28 2023-05-26 辽宁石油化工大学 Road gradient calculation method and device based on vision
CN112487979B (en) * 2020-11-30 2023-08-04 北京百度网讯科技有限公司 Target detection method, model training method, device, electronic equipment and medium
CN112633214B (en) * 2020-12-30 2022-09-23 潍柴动力股份有限公司 Vehicle identification method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156718A (en) * 2014-08-20 2014-11-19 电子科技大学 Vehicle license plate image vertical tilt correction method
CN104778716A (en) * 2015-05-05 2015-07-15 西安电子科技大学 Truck carriage volume measurement method based on single image
CN106339530A (en) * 2016-08-16 2017-01-18 中冶赛迪工程技术股份有限公司 Method and system for extracting size information of welded component based on enveloping space
CN106803286A (en) * 2017-01-17 2017-06-06 湖南优象科技有限公司 Mutual occlusion real-time processing method based on multi-view image
CN108510590A (en) * 2017-02-24 2018-09-07 北京图森未来科技有限公司 A kind of method and device generating three-dimensional boundaries frame
KR20190089791A (en) * 2019-07-11 2019-07-31 엘지전자 주식회사 Apparatus and method for providing 3-dimensional around view
CN110148169A (en) * 2019-03-19 2019-08-20 长安大学 A kind of vehicle target 3 D information obtaining method based on PTZ holder camera

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10312993B2 (en) * 2015-10-30 2019-06-04 The Florida International University Board Of Trustees Cooperative clustering for enhancing MU-massive-MISO-based UAV communication

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
3D building map reconstruction in dense urban areas by integrating airborne laser point cloud with 2D boundary map;Mahdi Javanmardi等;《2015 IEEE International Conference on Vehicular Electronics and Safety (ICVES)》;20160218;第126-131页 *
基于图像处理的车辆外形测量技术研究;闻江;《中国优秀硕士学位论文全文数据库》;20180215;第C034-1014页 *

Also Published As

Publication number Publication date
CN110555407A (en) 2019-12-10

Similar Documents

Publication Publication Date Title
CN110555407B (en) Pavement vehicle space identification method and electronic equipment
US11632536B2 (en) Method and apparatus for generating three-dimensional (3D) road model
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
CN108647638B (en) Vehicle position detection method and device
KR102275310B1 (en) Mtehod of detecting obstacle around vehicle
CN111797734B (en) Vehicle point cloud data processing method, device, equipment and storage medium
CN107133985B (en) Automatic calibration method for vehicle-mounted camera based on lane line vanishing point
CN106952308B (en) Method and system for determining position of moving object
EP3637313A1 (en) Distance estimating method and apparatus
CN111815641A (en) Camera and radar fusion
US9042639B2 (en) Method for representing surroundings
CN110826499A (en) Object space parameter detection method and device, electronic equipment and storage medium
US20230215187A1 (en) Target detection method based on monocular image
CN110069990B (en) Height limiting rod detection method and device and automatic driving system
CN109946703B (en) Sensor attitude adjusting method and device
EP2293588A1 (en) Method for using a stereovision camera arrangement
CN109741241B (en) Fisheye image processing method, device, equipment and storage medium
CN113205604A (en) Feasible region detection method based on camera and laser radar
CN108725318B (en) Automobile safety early warning method and device and computer readable storage medium
CN114463303A (en) Road target detection method based on fusion of binocular camera and laser radar
CN111382591A (en) Binocular camera ranging correction method and vehicle-mounted equipment
Lion et al. Smart speed bump detection and estimation with kinect
JP2018073275A (en) Image recognition device
Leu et al. High speed stereo vision based automotive collision warning system
KR102003387B1 (en) Method for detecting and locating traffic participants using bird's-eye view image, computer-readerble recording medium storing traffic participants detecting and locating program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant