CN112648924B - Suspended object space position determination method based on vehicle-mounted monocular camera equipment - Google Patents

Suspended object space position determination method based on vehicle-mounted monocular camera equipment

Info

Publication number
CN112648924B
Authority
CN
China
Prior art keywords
determining
region
coordinate
vehicle
offset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011477739.3A
Other languages
Chinese (zh)
Other versions
CN112648924A (en)
Inventor
周建
温俊杰
陈昊
黄豪
杨应彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Motors Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Autopilot Technology Co Ltd filed Critical Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority to CN202011477739.3A priority Critical patent/CN112648924B/en
Publication of CN112648924A publication Critical patent/CN112648924A/en
Application granted granted Critical
Publication of CN112648924B publication Critical patent/CN112648924B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/2433 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures for measuring outlines by shadow casting

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiment of the invention provides a method and a device for determining the spatial position of a suspended object based on vehicle-mounted monocular camera equipment, a vehicle and a readable medium, wherein the method comprises the following steps: when the vehicle is located at a first position, acquiring a first environment image of the suspended object by using the vehicle-mounted monocular camera equipment; when the vehicle is located at a second position, acquiring a second environment image of the suspended object by using the vehicle-mounted monocular camera equipment; determining a first offset coordinate corresponding to the suspended object by using the first environment image, and determining a second offset coordinate corresponding to the suspended object by using the second environment image; acquiring an actual three-dimensional space coordinate of the suspended object by using the first offset coordinate and the second offset coordinate; and determining the three-dimensional space position of the suspended object based on the actual three-dimensional space coordinate. In this way, the actual three-dimensional space position of a suspended target can be obtained while the automobile is driving automatically.

Description

Suspended object space position determination method based on vehicle-mounted monocular camera equipment
Technical Field
The invention relates to the field of vehicle image detection, in particular to a floating object space position determining method based on vehicle-mounted monocular camera equipment, a floating object space position determining device based on the vehicle-mounted monocular camera equipment, a vehicle and a readable medium.
Background
At present, after the automatic driving function of an automobile is activated, image detection is limited to ground targets: during detection, the detected targets must be converted from the image coordinate system to the vehicle coordinate system based on pre-calibrated camera extrinsic parameters and the ground plane. This approach is only applicable to targets on the ground plane and is not suitable for suspended targets located off the ground plane, such as traffic lights and billboards.
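As background context, the ground-plane conversion described above can be illustrated with a minimal sketch: a pixel is back-projected along its viewing ray and intersected with the ground plane of the vehicle coordinate system. The function and parameter names below (K, R_cam_to_veh, t_cam_in_veh) are illustrative assumptions rather than anything specified by this patent; the sketch only shows why such a conversion is valid solely for targets lying on the ground plane.

import numpy as np

def pixel_to_vehicle_ground(u, v, K, R_cam_to_veh, t_cam_in_veh):
    """Back-project a pixel onto the ground plane (z = 0) of the vehicle coordinate system."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in the camera frame
    ray_veh = R_cam_to_veh @ ray_cam                     # ray direction in the vehicle frame
    s = -t_cam_in_veh[2] / ray_veh[2]                    # scale at which the ray reaches z = 0
    return t_cam_in_veh + s * ray_veh                    # ground-plane point in vehicle coordinates

A suspended target such as a traffic light does not lie on that plane, so the intersection no longer corresponds to the target; this is the limitation addressed below.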
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a floating object spatial position determining method based on an in-vehicle monocular imaging device and a corresponding floating object spatial position determining apparatus based on an in-vehicle monocular imaging device, which overcome or at least partially solve the above problems.
In order to solve the above problem, an embodiment of the present invention discloses a method for determining a spatial position of a floating object based on a vehicle-mounted monocular imaging device, where the floating object is an object whose position is outside a ground plane, and the method includes:
when the vehicle is located at a first position, acquiring a first environment image aiming at a suspended object by adopting the vehicle-mounted monocular camera equipment;
when the vehicle is located at a second position, acquiring a second environment image aiming at the suspended object by adopting the vehicle-mounted monocular camera equipment;
determining a first offset coordinate corresponding to the suspended object using the first environment image, and determining a second offset coordinate corresponding to the suspended object using the second environment image;
acquiring the actual three-dimensional space coordinate of the suspended object by adopting the first offset coordinate and the second offset coordinate;
and determining the three-dimensional space position of the suspended object based on the actual three-dimensional space coordinate.
Optionally, the step of determining a first offset coordinate corresponding to the floating object by using the first environment image and determining a second offset coordinate corresponding to the floating object by using the second environment image includes:
determining a first region containing the floating object in the first environment image, and determining a second region containing the floating object in the second environment image;
projecting the first area to the second environment image to obtain a third area;
determining a first offset coordinate corresponding to the floating object using the third region, and determining a second offset coordinate corresponding to the floating object using the second region.
Optionally, after the steps of determining a first region containing the floating object in the first environment image and determining a second region containing the floating object in the second environment image, the method further includes:
determining an object type of the floating object based on the first region and the second region.
Optionally, after the step of determining the object type of the floating object based on the first region and the second region, the method further includes:
determining a first floating object in the first region and a second floating object in the second region.
Optionally, the step of determining a first offset coordinate corresponding to the floating object by using the third area and determining a second offset coordinate corresponding to the floating object by using the second area includes:
determining a second region and a third region which match each other based on the object type of the floating object and the matched first floating object and second floating object; the third region includes a third floating object corresponding to the first floating object;
determining paired first and second offset coordinates based on the second floating object and the third floating object.
Optionally, the method further comprises:
performing keypoint detection on the second region and the third region, determining a first keypoint corresponding to the suspended object in the second region, and determining a second keypoint corresponding to the suspended object in the third region;
and determining a corresponding third offset coordinate according to the first key point, and determining a corresponding fourth offset coordinate according to the second key point.
Optionally, the step of obtaining the actual three-dimensional space coordinate of the floating object by using the first offset coordinate and the second offset coordinate includes:
determining a first actual three-dimensional space coordinate of the suspended object by adopting the first offset coordinate and the second offset coordinate through a triangulation principle;
determining a second actual three-dimensional space coordinate of the suspended object by adopting the third offset coordinate and the fourth offset coordinate through a triangulation principle;
and weighting the first actual three-dimensional space coordinate and the second actual three-dimensional space coordinate to obtain a target actual three-dimensional space coordinate of the suspended object.
The embodiment of the invention also discloses a device for determining the spatial position of a suspended object based on vehicle-mounted monocular camera equipment, wherein the suspended object is an object whose position is outside the ground plane, and the device comprises:
the first environment image acquisition module is used for acquiring a first environment image aiming at the suspended object by adopting the vehicle-mounted monocular camera equipment when the vehicle is positioned at a first position;
the second environment image acquisition module is used for acquiring a second environment image aiming at the suspended object by adopting the vehicle-mounted monocular camera equipment when the vehicle is positioned at a second position;
the offset coordinate determination module is used for determining a first offset coordinate corresponding to the floating object by adopting the first environment image and determining a second offset coordinate corresponding to the floating object by adopting the second environment image;
the actual three-dimensional space coordinate acquisition module is used for acquiring the actual three-dimensional space coordinate of the suspended object by adopting the first offset coordinate and the second offset coordinate;
and the three-dimensional space position determining module is used for determining the three-dimensional space position of the suspended object based on the actual three-dimensional space coordinate.
The embodiment of the invention also discloses a vehicle, which comprises:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the vehicle to perform one or more methods as described above.
Embodiments of the invention also disclose one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform one or more of the methods described above.
The embodiment of the invention has the following advantages:
In the embodiment of the invention, when the vehicle is located at the first position, the vehicle-mounted monocular camera equipment is used to acquire the first environment image of the suspended object; when the vehicle is located at the second position, the vehicle-mounted monocular camera equipment is used to acquire the second environment image of the suspended object; the first offset coordinate corresponding to the suspended object is determined by using the first environment image, and the second offset coordinate corresponding to the suspended object is determined by using the second environment image; the actual three-dimensional space coordinate of the suspended object is acquired by using the first offset coordinate and the second offset coordinate; and the three-dimensional space position of the suspended object is determined based on the actual three-dimensional space coordinate. In this way, the actual three-dimensional space position of a suspended object located outside the ground plane can be acquired by image detection when the automobile is driven automatically, which expands the application range of vehicle image detection.
Drawings
FIG. 1 is a flowchart illustrating steps of an embodiment of a method for determining a spatial position of a floating object based on a vehicle-mounted monocular camera device according to the present invention;
FIG. 2 is a flow chart of steps of another embodiment of a method for determining a spatial position of a floating object based on a vehicle-mounted monocular camera device according to the present invention;
fig. 3 is a block diagram of an embodiment of the device for determining the spatial position of a floating object based on the vehicle-mounted monocular camera equipment.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
One of the core ideas of the embodiment of the invention is as follows: when the vehicle is located at a first position, the vehicle-mounted monocular camera equipment is used to acquire a first environment image of the suspended object; when the vehicle is located at a second position, the vehicle-mounted monocular camera equipment is used to acquire a second environment image of the suspended object; a first offset coordinate corresponding to the suspended object is determined by using the first environment image, and a second offset coordinate corresponding to the suspended object is determined by using the second environment image; an actual three-dimensional space coordinate of the suspended object is acquired by using the first offset coordinate and the second offset coordinate; and the three-dimensional space position of the suspended object is determined based on the actual three-dimensional space coordinate. In this way, the actual three-dimensional space position of a suspended object located off the ground plane can be acquired by image detection when the automobile is driven automatically, which expands the application range of vehicle image detection.
Referring to fig. 1, a flowchart of steps of an embodiment of a method for determining a spatial position of a floating object based on a vehicle-mounted monocular imaging device according to the present invention is shown, where the floating object is an object whose position is outside of a ground plane, and specifically the method may include the following steps:
step 101, when a vehicle is located at a first position, acquiring a first environment image aiming at a suspended object by using the vehicle-mounted monocular camera equipment;
specifically, the monocular camera device refers to a camera device having one camera, and the floating object refers to an object located outside the ground plane, such as a traffic light, and a billboard hung at a high place.
In this embodiment, when the vehicle is in a moving state and travels to a certain position (the first position), the vehicle-mounted monocular camera equipment captures an image of the floating object, so as to obtain the first environment image.
Step 102, when the vehicle is located at a second position, acquiring a second environment image aiming at a suspended object by adopting the vehicle-mounted monocular camera equipment;
in order to obtain two images which are shot aiming at a floating object and have different shooting angles and shooting distances, after the first environment image is shot, when a vehicle runs for a certain distance to reach a second position which is away from the position where the first environment image is shot, the distance between the vehicle-mounted monocular camera equipment and the floating object is changed, and at the moment, the vehicle-mounted monocular camera equipment is used for shooting aiming at the floating object to obtain a second environment image. As an example, the shooting interval time may be preset in the vehicle, when the vehicle is in a moving state and starts timing after shooting the first environment image, and when the timing reaches the shooting interval time, the vehicle moves to the second position, so that an angle and a distance between the vehicle and the floating object are changed compared with those when shooting the first environment image, and at this time, the second environment image is shot, so as to obtain the first environment image and the second environment image with different shooting angles and shooting distances.
Step 103, determining a first offset coordinate corresponding to the floating object by using the first environment image, and determining a second offset coordinate corresponding to the floating object by using the second environment image;
the method comprises the steps of respectively carrying out image processing on a first environment image and a second environment image, determining a floating object in the images and offset coordinates corresponding to the floating object, wherein the coordinates of the floating object in the environment images have certain offset compared with actual space coordinates due to the fact that the shot images are subjected to coordinate change from a world coordinate system to a camera coordinate system and conversion from the camera coordinate system to pixel points, and obtaining the first offset coordinates of the floating object in the first environment image and the second offset coordinates in the second environment image. As an example, determining the floating object in the first environment image and the first offset coordinate corresponding to the floating object, determining the floating object in the second environment and the second offset coordinate corresponding to the floating object may be performed by a recognition model trained in advance and obtained through deep learning, and after the captured first environment image and the captured second environment image are input into the recognition model, the recognized floating object and the offset coordinate are output by the recognition model.
Step 104, acquiring an actual three-dimensional space coordinate of the suspended object by using the first offset coordinate and the second offset coordinate;
the first offset coordinate and the second offset coordinate correspond to one same actual three-dimensional space coordinate in the same suspension object, so that a corresponding relation exists between the first offset coordinate and the second offset coordinate, and the actual three-dimensional space coordinate of the suspension object can be obtained through calculation by combining camera internal parameters, camera external parameters of a camera and translation parameters and rotation parameters of a vehicle through the first offset coordinate and the second offset coordinate.
Step 105, determining the three-dimensional space position of the suspended object based on the actual three-dimensional space coordinate.
Specifically, after the actual three-dimensional space coordinates of the floating object are obtained, the actual three-dimensional space position of the floating object in the real world can be determined based on the three-dimensional space coordinates.
In the embodiment of the invention, when the vehicle is located at the first position, the vehicle-mounted monocular camera equipment is used to acquire the first environment image of the suspended object; when the vehicle is located at the second position, the vehicle-mounted monocular camera equipment is used to acquire the second environment image of the suspended object; the first offset coordinate corresponding to the suspended object is determined by using the first environment image, and the second offset coordinate corresponding to the suspended object is determined by using the second environment image; the actual three-dimensional space coordinate of the suspended object is acquired by using the first offset coordinate and the second offset coordinate; and the three-dimensional space position of the suspended object is determined based on the actual three-dimensional space coordinate. Therefore, the actual three-dimensional space position of a suspended object located outside the ground plane can be acquired by image detection when the automobile is driven automatically, which expands the application range of vehicle image detection.
Referring to fig. 2, a flowchart illustrating steps of another embodiment of a method for determining a spatial position of a floating object based on a vehicle-mounted monocular imaging device according to the present invention is shown, where the floating object is an object whose position is outside a ground plane, and the method specifically includes the following steps:
step 201, when a vehicle is located at a first position, acquiring a first environment image aiming at a suspended object by using the vehicle-mounted monocular camera equipment;
step 202, when the vehicle is located at a second position, acquiring a second environment image aiming at a suspended object by using the vehicle-mounted monocular camera equipment;
since steps 201 and 202 are similar to steps 101 and 102 in the previous embodiment, reference may be made to step 101 and step 102 in the previous embodiment for detailed description, and this embodiment is not described herein again.
Step 203, determining a first area containing the floating object in the first environment image, and determining a second area containing the floating object in the second environment image;
specifically, a bounding box (bbox) may be used to perform image processing on the first environment image and the second environment image, and after the image processing, a first area and a second area including the floating object may be framed in the first environment image and the second environment image, respectively.
Step 204, determining the object type of the floating object based on the first area and the second area;
the floating objects can be birds flying in the air, roadside traffic lights and advertising boards, and different floating objects have different appearance characteristics, so that the floating objects can be classified, for example, the classification can be animal-bird classification and traffic facility-traffic light classification, and the classification (class) of a first area and a second area containing the floating objects is carried out, so that the types of the floating objects are obtained.
Step 205, determining a first floating object in the first area and a second floating object in the second area;
specifically, after the type of the floating object is determined, mask processing may be performed on the first region and the second region, respectively, and after the mask processing, all pixel points that belong to the floating object and constitute the first floating object may be determined in the first region, and all pixel points that belong to the floating object and constitute the second floating object may be determined in the second region.
Step 206, projecting the first area to the second environment image to obtain a third area;
specifically, when the vehicle shoots a first environment image and a second environment image, rotation parameters and translation parameters corresponding to rotation and translation of the vehicle relative to a floating object can be obtained through a high-precision vehicle odometer arranged on the vehicle, and the vehicle-mounted monocular camera equipment is fixed on the vehicle, so that the vehicle-mounted unit camera equipment also rotates and translates along with the vehicle, and a first area in the first environment image is projected to the second environment image according to the rotation parameters and the translation parameters to obtain a third area.
Step 207, determining a first offset coordinate corresponding to the floating object by using the third area, and determining a second offset coordinate corresponding to the floating object by using the second area.
Since the third area is the projection of the first area and both the first area and the second area contain the floating object, the projected third area is matched against the second area. After the matching succeeds, matched feature points of the first floating object and the second floating object can be obtained from the third area and the second area, and the coordinates of these feature points are determined as the first offset coordinate and the second offset coordinate.
In an optional embodiment of the present invention, the step of determining a first offset coordinate corresponding to the floating object by using the third region and determining a second offset coordinate corresponding to the floating object by using the second region further includes the following sub-steps:
determining a second region and a third region which match each other based on the object type of the floating object and the matched first floating object and second floating object; the third region includes a third floating object corresponding to the first floating object;
determining paired first and second offset coordinates based on the second floating object and the third floating object.
Specifically, when the first region is projected onto the second environment image, the third region obtained after projection and the second region are both regions containing the floating object, but because the shooting angles differ there is parallax between them as well as an overlapping partial region. Since the first region and the second region received the same classification, the second region and the third region containing the same floating object can be matched according to the two conditions of identical object type and overlapping region. The third region contains the third floating object; because the third region is the projection of the first region, the third floating object is the projection of the first floating object, and because the third region matches the second region, there is a correspondence between the third floating object and the second floating object. The coordinate values of the matched pixel points in the third floating object and the second floating object are then taken as the paired first offset coordinate and second offset coordinate. For example, if a pixel point A in the third floating object corresponds to a pixel point B in the second floating object, the coordinate value of pixel point A is the first offset coordinate and the coordinate value of pixel point B is the second offset coordinate.
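A minimal sketch of the region pairing described above is given below. It matches the second and third regions by the two stated conditions, identical object type and an overlapping region, with intersection-over-union used as an assumed overlap measure (the patent only requires that the regions overlap); the dictionary keys 'box' and 'cls' are illustrative.

def iou(a, b):
    """a, b: boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def match_regions(second_regions, third_regions, min_overlap=0.05):
    """Pair regions of the same object type that overlap; each region is {'box': ..., 'cls': ...}."""
    pairs = []
    for s in second_regions:
        candidates = [t for t in third_regions if t["cls"] == s["cls"]]
        best = max(candidates, key=lambda t: iou(s["box"], t["box"]), default=None)
        if best is not None and iou(s["box"], best["box"]) >= min_overlap:
            pairs.append((s, best))
    return pairs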
In an optional embodiment of the invention, the method further comprises:
performing keypoint detection on the second region and the third region, determining a first keypoint corresponding to the suspended object in the second region, and determining a second keypoint corresponding to the suspended object in the third region;
and determining a corresponding third offset coordinate according to the first key point, and determining a corresponding fourth offset coordinate according to the second key point.
In addition to computing matched pixel points from the paired second and third floating objects, keypoint detection can be performed directly on the second region and the third region. Keypoint detection yields the first keypoint and the second keypoint corresponding to the floating object in the second region and the third region respectively; since the floating object is the same, the first keypoint obtained in the second region corresponds to the second keypoint obtained in the third region, and the third offset coordinate corresponding to the first keypoint in the second region and the fourth offset coordinate corresponding to the second keypoint in the third region are determined. As an example, the bbox processing, cls processing, mask processing and keypoint detection performed on the environment images may all be completed by a pre-trained processing model obtained through deep learning: after the acquired first environment image and second environment image are input into the processing model, the model automatically performs processing and outputs the image results of the bbox processing, cls processing, mask processing and keypoint detection.
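The patent attributes keypoint detection to a pre-trained processing model. Purely as a stand-in illustration of how paired keypoints yield the third and fourth offset coordinates, the sketch below matches classical ORB features between the cropped second and third regions; this is an assumed substitute technique, not the patented model.

import cv2

def paired_keypoints(region2_img, region3_img, max_pairs=20):
    """Return matched (second-region point, third-region point) pixel pairs."""
    orb = cv2.ORB_create()
    kp2, des2 = orb.detectAndCompute(region2_img, None)
    kp3, des3 = orb.detectAndCompute(region3_img, None)
    if des2 is None or des3 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des3), key=lambda m: m.distance)
    # each pair gives a third offset coordinate (second region) and a fourth offset coordinate (third region)
    return [(kp2[m.queryIdx].pt, kp3[m.trainIdx].pt) for m in matches[:max_pairs]]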
Step 208, acquiring the actual three-dimensional space coordinate of the suspended object by using the first offset coordinate and the second offset coordinate.
In an optional embodiment of the present invention, the step of obtaining the actual three-dimensional space coordinate of the floating object by using the first offset coordinate and the second offset coordinate further includes:
determining a first actual three-dimensional space coordinate of the suspended object by adopting the first offset coordinate and the second offset coordinate through a triangulation principle;
determining a second actual three-dimensional space coordinate of the suspended object by adopting the third offset coordinate and the fourth offset coordinate through a triangulation principle;
and weighting the first actual three-dimensional space coordinate and the second actual three-dimensional space coordinate to obtain a target actual three-dimensional space coordinate of the suspended object.
Specifically, in order to determine the final actual three-dimensional space coordinate more accurately, a first actual three-dimensional space coordinate of the floating object is calculated according to the triangulation principle using the first offset coordinate, the second offset coordinate, the camera extrinsic parameters, the camera intrinsic parameters and the rotation and translation parameters of the vehicle; a second actual three-dimensional space coordinate of the floating object is calculated in the same way using the third offset coordinate and the fourth offset coordinate; and the two sets of calculated actual three-dimensional space coordinates are weighted to obtain the target actual three-dimensional space coordinate of the floating object.
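The patent does not state how the two triangulated coordinates are weighted; the sketch below simply fuses them with configurable weights (equal by default), purely to illustrate the weighting step.

import numpy as np

def fuse_coordinates(X_from_regions, X_from_keypoints, w1=0.5, w2=0.5):
    """Weighted fusion of the two actual three-dimensional space coordinates."""
    X1 = np.asarray(X_from_regions, dtype=float)
    X2 = np.asarray(X_from_keypoints, dtype=float)
    return (w1 * X1 + w2 * X2) / (w1 + w2)   # target actual three-dimensional space coordinate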
Step 209, determining the three-dimensional space position of the floating object based on the actual three-dimensional space coordinates.
After the actual three-dimensional space coordinate of the suspended object is obtained, the three-dimensional space position of the suspended object in the real world can be determined according to the actual three-dimensional space coordinate.
In the embodiment of the invention, when the vehicle is located at a first position, a first environment image of the floating object is acquired by the vehicle-mounted monocular camera equipment, and when the vehicle is located at a second position, a second environment image of the floating object is acquired. A first region containing the floating object is determined in the first environment image and a second region containing the floating object is determined in the second environment image; the object type of the floating object is determined based on the first region and the second region, and the first floating object in the first region and the second floating object in the second region are determined. The first region is projected onto the second environment image to obtain a third region; a first offset coordinate corresponding to the floating object is determined using the third region, and a second offset coordinate is determined using the second region. Keypoint detection is performed on the second region and the third region, a first keypoint corresponding to the floating object is determined in the second region and a second keypoint is determined in the third region, and a third offset coordinate and a fourth offset coordinate are determined from the first keypoint and the second keypoint respectively. A first actual three-dimensional space coordinate of the floating object is determined from the first offset coordinate and the second offset coordinate by the triangulation principle, a second actual three-dimensional space coordinate is determined from the third offset coordinate and the fourth offset coordinate by the triangulation principle, and the two are weighted to obtain the target actual three-dimensional space coordinate of the floating object, based on which the three-dimensional space position of the floating object is determined. In this way, two sets of offset coordinates are used for triangulation and the results are fused, so that the determined actual three-dimensional space position is more accurate.
Referring to fig. 3, a block diagram of a structure of an embodiment of the apparatus for determining a spatial position of a floating object based on a vehicle-mounted monocular camera device according to the present invention is shown, where the floating object is an object whose position is outside of a ground plane, and specifically includes the following modules:
the first environment image acquisition module 301 is configured to acquire a first environment image for a floating object by using the vehicle-mounted monocular camera device when a vehicle is located at a first position;
a second environment image obtaining module 302, configured to obtain, by using the vehicle-mounted monocular camera device, a second environment image for the floating object when the vehicle is located at a second position;
an offset coordinate determination module 303, configured to determine a first offset coordinate corresponding to the floating object by using the first environment image, and determine a second offset coordinate corresponding to the floating object by using the second environment image;
an actual three-dimensional space coordinate obtaining module 304, configured to obtain an actual three-dimensional space coordinate of the floating object by using the first offset coordinate and the second offset coordinate;
a three-dimensional space position determining module 305, for determining a three-dimensional space position of the floating object based on the actual three-dimensional space coordinates.
In an embodiment of the present invention, the offset coordinate determining module 303 includes:
a region determination submodule, for determining a first region containing the floating object in the first environment image and a second region containing the floating object in the second environment image;
the projection submodule is used for projecting the first area to the second environment image to obtain a third area;
and the determining submodule is used for determining a first offset coordinate corresponding to the floating object by using the third area and determining a second offset coordinate corresponding to the floating object by using the second area.
In an embodiment of the present invention, the apparatus further includes:
and the object type determining module is used for determining the object type of the suspended object based on the first area and the second area.
In an embodiment of the present invention, the apparatus further includes:
and the suspended object determining module is used for determining a first suspended object in the first area and a second suspended object in the second area.
In an embodiment of the present invention, the determining sub-module further includes:
a second and third region determining unit, configured to determine a second region and a third region that match each other based on the object type of the floating object and the matched first floating object and second floating object; the third region includes a third floating object corresponding to the first floating object;
a first offset coordinate and second offset coordinate determination unit configured to determine a pair of first offset coordinates and second offset coordinates based on the second floating object and the third floating object.
In an embodiment of the present invention, the apparatus further includes:
a key point determining module, configured to perform key point detection on the second region and the third region, determine a first key point corresponding to the floating object in the second region, and determine a second key point corresponding to the floating object in the third region;
and the offset coordinate acquisition module is used for determining a corresponding third offset coordinate according to the first key point and determining a corresponding fourth offset coordinate according to the second key point.
In an embodiment of the present invention, the actual three-dimensional space coordinate obtaining module 304 includes:
the first actual three-dimensional space coordinate determination submodule is used for determining a first actual three-dimensional space coordinate of the suspended object by adopting the first offset coordinate and the second offset coordinate through a triangulation principle;
a second actual three-dimensional space coordinate determination submodule, configured to determine a second actual three-dimensional space coordinate of the suspended object by using the third offset coordinate and the fourth offset coordinate through a triangulation principle;
and the target actual three-dimensional space coordinate determination submodule is used for weighting the first actual three-dimensional space coordinate and the second actual three-dimensional space coordinate to obtain a target actual three-dimensional space coordinate of the suspended object.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiment of the invention also discloses a vehicle, which comprises:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the vehicle to perform one or more methods as described above.
Embodiments of the invention also disclose one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform one or more of the methods described above.
The embodiments in the present specification are all described in a progressive manner, and each embodiment focuses on differences from other embodiments, and portions that are the same and similar between the embodiments may be referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional like elements in a process, method, article, or terminal device that comprises the element.
The method for determining the spatial position of a suspended object based on vehicle-mounted monocular camera equipment, the corresponding apparatus, the vehicle and the readable medium provided by the invention have been described in detail above, and specific examples have been used to explain the principle and the implementation of the invention. The description of the above embodiments is only intended to help understand the method and its core idea; meanwhile, for a person skilled in the art, there may be variations in the specific implementation and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (9)

1. A suspended object space position determining method based on vehicle-mounted monocular camera equipment, characterized in that the suspended object is an object whose position is outside the ground plane, and the method comprises the following steps:
when the vehicle is located at a first position, acquiring a first environment image aiming at a suspended object by adopting the vehicle-mounted monocular camera equipment;
when the vehicle is located at a second position, acquiring a second environment image aiming at the suspended object by adopting the vehicle-mounted monocular camera equipment;
determining a first offset coordinate corresponding to the suspended object using the first environment image, and determining a second offset coordinate corresponding to the suspended object using the second environment image; wherein the offset coordinates are determined based on the offset of the coordinates of the suspended object in the environment image from the actual spatial coordinates;
acquiring the actual three-dimensional space coordinate of the suspended object by adopting the first offset coordinate and the second offset coordinate;
determining a three-dimensional space position of the levitated object based on the actual three-dimensional space coordinates;
the step of determining a first offset coordinate corresponding to the suspended object using the first environment image and determining a second offset coordinate corresponding to the suspended object using the second environment image includes:
determining a first region containing the suspended object in the first environment image and a second region containing the suspended object in the second environment image;
projecting the first region onto the second environment image to obtain a third region;
determining a first offset coordinate corresponding to the suspended object using the third region and determining a second offset coordinate corresponding to the suspended object using the second region.
2. The method of claim 1, wherein the steps of determining a first region containing the suspended object in the first environment image and determining a second region containing the suspended object in the second environment image are further followed by:
determining an object type of the suspended object based on the first region and the second region.
3. The method of claim 2, wherein the step of determining the object type of the suspended object based on the first region and the second region is followed by:
determining a first suspended object in the first region and a second suspended object in the second region.
4. The method of claim 3, wherein the step of determining a first offset coordinate corresponding to the suspended object using the third region and a second offset coordinate corresponding to the suspended object using the second region comprises:
determining a second region and a third region which match each other based on the object type of the suspended object and the matched first suspended object and second suspended object; the third region includes a third suspended object corresponding to the first suspended object;
determining paired first and second offset coordinates based on the second suspended object and the third suspended object.
5. The method of claim 4, further comprising:
performing keypoint detection on the second region and the third region, determining a first keypoint corresponding to the suspended object in the second region, and determining a second keypoint corresponding to the suspended object in the third region;
and determining a corresponding third offset coordinate according to the first key point, and determining a corresponding fourth offset coordinate according to the second key point.
6. The method of claim 5, wherein the step of using the first offset coordinate and the second offset coordinate to obtain the actual three-dimensional space coordinate of the suspended object comprises:
determining a first actual three-dimensional space coordinate of the suspended object by adopting the first offset coordinate and the second offset coordinate through a triangulation principle;
determining a second actual three-dimensional space coordinate of the suspended object by adopting the third offset coordinate and the fourth offset coordinate through a triangulation principle;
and weighting the first actual three-dimensional space coordinate and the second actual three-dimensional space coordinate to obtain a target actual three-dimensional space coordinate of the suspended object.
7. A suspended object spatial position determining apparatus based on vehicle-mounted monocular camera equipment, wherein the suspended object is an object whose position is outside the ground plane, the apparatus comprising:
the first environment image acquisition module is used for acquiring a first environment image aiming at the suspended object by adopting the vehicle-mounted monocular camera equipment when the vehicle is positioned at a first position;
the second environment image acquisition module is used for acquiring a second environment image aiming at the suspended object by adopting the vehicle-mounted monocular camera equipment when the vehicle is positioned at a second position;
an offset coordinate determination module, configured to determine a first offset coordinate corresponding to the suspended object using the first environment image, and determine a second offset coordinate corresponding to the suspended object using the second environment image; wherein the offset coordinates are determined based on an offset of the coordinates of the suspended object in the environment image from the actual spatial coordinates;
the actual three-dimensional space coordinate acquisition module is used for acquiring the actual three-dimensional space coordinate of the suspended object by adopting the first offset coordinate and the second offset coordinate;
a three-dimensional space position determination module for determining a three-dimensional space position of the suspended object based on the actual three-dimensional space coordinates;
the offset coordinate determination module includes:
a region determination submodule for determining a first region containing the suspended object in the first environment image and a second region containing the suspended object in the second environment image;
the projection submodule is used for projecting the first area to the second environment image to obtain a third area;
and the determining submodule is used for determining a first offset coordinate corresponding to the suspended object by adopting the third area and determining a second offset coordinate corresponding to the suspended object by adopting the second area.
8. A vehicle, characterized by comprising:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the vehicle to perform the method of one or more of claims 1-6.
9. One or more machine readable media having instructions stored thereon that, when executed by one or more processors, cause the processors to perform the method of one or more of claims 1-6.
CN202011477739.3A 2020-12-15 2020-12-15 Suspended object space position determination method based on vehicle-mounted monocular camera equipment Active CN112648924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011477739.3A CN112648924B (en) 2020-12-15 2020-12-15 Suspended object space position determination method based on vehicle-mounted monocular camera equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011477739.3A CN112648924B (en) 2020-12-15 2020-12-15 Suspended object space position determination method based on vehicle-mounted monocular camera equipment

Publications (2)

Publication Number Publication Date
CN112648924A CN112648924A (en) 2021-04-13
CN112648924B true CN112648924B (en) 2023-04-07

Family

ID=75355425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011477739.3A Active CN112648924B (en) 2020-12-15 2020-12-15 Suspended object space position determination method based on vehicle-mounted monocular camera equipment

Country Status (1)

Country Link
CN (1) CN112648924B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108074260A (en) * 2017-11-15 2018-05-25 深圳市诺龙技术股份有限公司 A kind of method and apparatus of target object positioning

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008055163A1 (en) * 2008-12-29 2010-07-01 Robert Bosch Gmbh Method for chassis measurement and device for measuring the chassis geometry of a vehicle
CN103968772A (en) * 2014-05-19 2014-08-06 南京信息工程大学 Piston ring detection method based on monocular vision detection
CN108139202B (en) * 2015-09-30 2021-06-11 索尼公司 Image processing apparatus, image processing method, and program
CN106408589B (en) * 2016-07-14 2019-03-29 浙江零跑科技有限公司 Based on the vehicle-mounted vehicle movement measurement method for overlooking camera
US10282860B2 (en) * 2017-05-22 2019-05-07 Honda Motor Co., Ltd. Monocular localization in urban environments using road markings
JP6858681B2 (en) * 2017-09-21 2021-04-14 株式会社日立製作所 Distance estimation device and method
CN110533586B (en) * 2018-05-23 2023-02-07 杭州海康威视数字技术股份有限公司 Image stitching method, device, equipment and system based on vehicle-mounted monocular camera
JP7369692B2 (en) * 2018-10-24 2023-10-26 ソニーセミコンダクタソリューションズ株式会社 Distance sensor, detection sensor, distance measurement method and electronic equipment
CN109764858B (en) * 2018-12-24 2021-08-06 中公高科养护科技股份有限公司 Photogrammetry method and system based on monocular camera

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108074260A (en) * 2017-11-15 2018-05-25 深圳市诺龙技术股份有限公司 A kind of method and apparatus of target object positioning

Also Published As

Publication number Publication date
CN112648924A (en) 2021-04-13

Similar Documents

Publication Publication Date Title
CN111830953B (en) Vehicle self-positioning method, device and system
CN111448591A (en) System and method for locating a vehicle in poor lighting conditions
AU2018282302A1 (en) Integrated sensor calibration in natural scenes
CN109074085A (en) A kind of autonomous positioning and map method for building up, device and robot
CN112631288B (en) Parking positioning method and device, vehicle and storage medium
CN110674705A (en) Small-sized obstacle detection method and device based on multi-line laser radar
US20200279395A1 (en) Method and system for enhanced sensing capabilities for vehicles
CN112172797B (en) Parking control method, device, equipment and storage medium
CN115235493B (en) Method and device for automatic driving positioning based on vector map
CN111754388B (en) Picture construction method and vehicle-mounted terminal
CN113793413A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN112683228A (en) Monocular camera ranging method and device
CN111928857A (en) Method and related device for realizing SLAM positioning in dynamic environment
CN110345924A (en) A kind of method and apparatus that distance obtains
KR102003387B1 (en) Method for detecting and locating traffic participants using bird's-eye view image, computer-readable recording medium storing traffic participants detecting and locating program
CN110909620A (en) Vehicle detection method and device, electronic equipment and storage medium
CN112648924B (en) Suspended object space position determination method based on vehicle-mounted monocular camera equipment
Itu et al. An efficient obstacle awareness application for android mobile devices
CN109344677B (en) Method, device, vehicle and storage medium for recognizing three-dimensional object
CN112529954B (en) Method and device for determining position of suspension object based on heterogeneous binocular imaging equipment
Thai et al. Application of edge detection algorithm for self-driving vehicles
CN114428259A (en) Automatic vehicle extraction method in laser point cloud of ground library based on map vehicle acquisition
CN116802581A (en) Automatic driving perception system testing method, system and storage medium based on aerial survey data
Jaspers et al. Fast and robust b-spline terrain estimation for off-road navigation with stereo vision
CN112907746B (en) Electronic map generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240226

Address after: 510000 No.8 Songgang street, Cencun, Tianhe District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU XIAOPENG MOTORS TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: Room 46, room 406, No. 1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou, Guangdong 510725

Patentee before: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.

Country or region before: China
