WO2021121251A1 - Method and device for generating a panoramic surround view image of a vehicle

Method and device for generating a panoramic surround view image of a vehicle

Info

Publication number
WO2021121251A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
training
actual
rotation angle
independent
Application number
PCT/CN2020/136729
Other languages
English (en)
French (fr)
Inventor
胡荣东
万波
彭美华
杨凯斌
王思娟
Original Assignee
长沙智能驾驶研究院有限公司
Application filed by 长沙智能驾驶研究院有限公司
Priority to JP2022535119A (JP7335446B2)
Priority to US17/786,312 (US11843865B2)
Priority to EP20904168.0A (EP4060980A4)
Publication of WO2021121251A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, by matching or filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Definitions

  • This application belongs to the field of intelligent driving, and in particular relates to a method and device for generating a panoramic view image of a vehicle.
  • Most panoramic surround view systems currently on the market are designed for vehicles in which the relative positions of the cameras remain fixed during driving, in other words for one-piece (non-articulated) vehicles.
  • Such systems are not well suited to split or combined vehicles made up of multiple mutually articulated parts (the vehicle front and one or more carriages hinged to it), because the positional relationship between the articulated parts changes dynamically while driving, especially when turning.
  • Existing panoramic surround view solutions cannot fully cope with this situation; display blind areas, ghosting and other problems appear, which creates safety risks.
  • The existing calibration and stitching methods of panoramic surround view systems generally obtain the rotation angle between the different parts of the vehicle by installing an angle sensor device, and stitch the panoramic surround view image on that basis.
  • However, this approach requires sensors in addition to the cameras, and therefore suffers from problems such as high cost and difficult installation and maintenance.
  • A method for generating a panoramic surround view image of a vehicle includes: acquiring actual original images of the external environment of the first part and the second part of the articulated vehicle; processing the actual original images to obtain respective actual independent surround-view images of the first part and the second part; obtaining the coordinates of the respective hinge points of the first and second parts; determining matching feature point pairs in the actual independent surround-view images of the first part and the second part; making the respective hinge points in the actual independent surround-view images of the first part and the second part coincide and, assuming that the independent surround-view image of the first part is rotated relative to that of the second part or that the matching feature points of the first part are rotated relative to those of the second part, correspondingly calculating the distance between the two points in each matching feature point pair and taking the pairs whose distance is smaller than a preset first threshold as successfully matched feature point pairs; determining the actual rotation angle of the first part relative to the second part based at least on the number of successfully matched feature point pairs; and fusing the actual independent surround-view images of the first part and the second part according to the hinge point coordinates and the actual rotation angle to obtain a panoramic surround view image of the vehicle.
  • Determining the actual rotation angle of the first part relative to the second part based at least on the number of successfully matched feature point pairs may include taking the angle corresponding to the maximum number of successfully matched feature point pairs as the actual rotation angle between the first part and the second part.
  • Alternatively, it may include: taking the angle corresponding to the maximum number of successfully matched feature point pairs as a candidate rotation angle between the first part and the second part; determining the coordinates of the successfully matched feature point pairs based on the candidate rotation angle; calculating the distance between the two points in each successfully matched feature point pair and summing the distances; and taking the rotation angle corresponding to the minimum sum as the actual rotation angle of the first part relative to the second part.
  • Determining the coordinates of the successfully matched feature point pairs based on the candidate rotation angle includes obtaining a candidate rotation-translation matrix based on the hinge point coordinates of the first and second parts and the candidate rotation angle; calculating the distance between the two points in a successfully matched pair then includes calculating that distance based on the coordinates of the matching feature point pair and the candidate rotation-translation matrix.
  • Processing the actual original images to obtain the respective actual independent surround-view images of the first part and the second part includes: performing distortion correction on the actual original images of the external environment of the first and second parts; projecting the corrected images into the geodetic coordinate system to generate bird's-eye views of the first and second parts; detecting and matching the internal feature points of the overlapping areas of the respective bird's-eye views of the first and second parts and then performing fixed stitching to obtain respective fixed stitched images of the first and second parts; and cropping the respective fixed stitched images to obtain the actual independent surround-view images of the first part and the second part.
  • Determining the matching feature point pairs in the overlapping area of the actual independent surround-view images of the first part and the second part includes: feature point detection, detecting natural feature points in the overlapping area of the actual independent surround-view images of the first and second parts and generating descriptors; feature point matching, generating feature point matching pairs through a matching algorithm based at least on the descriptors, where the matching algorithm includes the orb, surf or sift algorithm; and feature point screening, filtering out mismatched point pairs with a screening algorithm, where the screening algorithm includes the RANSAC or GMS algorithm.
  • The method may also include calculating the hinge point coordinates of the first part and the second part, including: acquiring multiple pairs of training independent surround-view images of the first part and the second part; performing feature point detection and matching for each pair of training independent surround-view images; based on the matched feature point pairs in each pair of training independent surround-view images, calculating the corresponding multiple training rotation-translation matrices and then the multiple corresponding training rotation angles between the first part and the second part; determining, based at least on the matched feature point coordinates in the multiple pairs of training independent surround-view images and the multiple training rotation angles, the corresponding multiple training translation vectors between the first part and the second part; and calculating the hinge point coordinates of the first part and the second part according to the feature point coordinates of the multiple pairs of independent surround-view images, the multiple training rotation angles and the multiple training translation vectors.
  • The method for calculating the hinge point coordinates of the first part and the second part may further include: forming groups of at least two of the multiple training rotation angles and, based on the training translation vectors, calculating the candidate hinge point coordinates corresponding to each group; and sorting all candidate hinge point coordinates and taking the median of the sorted results as the hinge point coordinates of the first part and the second part, wherein the difference between the at least two training rotation angles in each group is greater than a preset angle threshold.
  • the present application also relates to a panoramic view image generating device of a vehicle.
  • The device includes: an original image acquisition unit configured to acquire actual or training original images of the external environment of the mutually articulated first and second parts of the vehicle; an independent surround-view image acquisition unit, coupled to the original image acquisition unit and configured to stitch the actual or training original images of the first and second parts into respective actual or training independent surround-view images; and a panoramic surround-view image acquisition unit coupled to the hinge point calibration unit and the independent surround-view image acquisition unit.
  • The panoramic surround-view image acquisition unit includes: a feature point detection and matching module, coupled to the independent surround-view image acquisition unit and configured to receive the actual independent surround-view images of the first and second parts and to detect and match the feature points therein; an actual rotation angle calculation module, coupled to the feature point detection and matching module and configured to obtain the hinge point coordinates of the first part and the second part, make the hinge points of the independent surround-view images of the first and second parts coincide, assume that the independent surround-view image of the first part is rotated relative to that of the second part or that the matching feature points of the first part are rotated relative to those of the second part, calculate the distance between the two points in each matching feature point pair, take the pairs whose distance is smaller than a preset first threshold as successfully matched feature point pairs, and determine the actual rotation angle of the first part relative to the second part based at least on the number of successfully matched feature point pairs; and a panoramic surround-view image generation module, coupled to the actual rotation angle calculation module and configured to fuse the respective actual independent surround-view images of the first and second parts according to the hinge point coordinates and the actual rotation angle to obtain a panoramic surround view image of the vehicle.
  • The actual rotation angle calculation module may be configured to determine the actual rotation angle of the first part relative to the second part based at least on the number of successfully matched feature point pairs by taking the angle corresponding to the maximum number of successfully matched feature point pairs as the actual rotation angle between the first part and the second part.
  • Alternatively, the actual rotation angle calculation module may be configured to take the angle corresponding to the maximum number of successfully matched feature point pairs as a candidate rotation angle between the first part and the second part; determine the coordinates of the successfully matched feature point pairs based on the candidate rotation angle; calculate the distance between the two points in each successfully matched feature point pair and sum the distances; and take the rotation angle corresponding to the minimum sum as the actual rotation angle of the first part relative to the second part.
  • The device may further include a hinge point calibration unit coupled to the independent surround-view image acquisition unit, which includes: a feature point detection and matching module, coupled to the independent surround-view image acquisition unit and configured to receive the multiple pairs of training independent surround-view images of the first and second parts and to detect and match the feature points in each pair of training independent surround-view images of the first part and the second part; a training rotation angle calculation module, coupled to the feature point detection and matching module and configured to obtain, based on the matched feature point coordinates, the multiple training rotation-translation matrices between the feature points of the first part and the second part in each pair of training independent surround-view images and correspondingly obtain the multiple training rotation angles between the first part and the second part in each pair of independent surround-view images; a training translation vector acquisition module, coupled to the training rotation angle calculation module and configured to determine the corresponding multiple training translation vectors of each pair of training independent surround-view images according to the coordinates of the matched feature points of each pair and the multiple training rotation angles; and a hinge point coordinate determination module, coupled to the translation vector acquisition module and the training rotation angle calculation module and configured to determine the hinge point coordinates of the first part and the second part of the vehicle according to the matched feature point coordinates of the multiple pairs of training independent surround-view images, the multiple training rotation angles and the corresponding multiple training translation vectors.
  • This application also relates to an intelligent vehicle, including: a first part and a second part hinged to each other;
  • a processor and a memory coupled to the processor; and a sensing unit configured to capture actual or training original images of the first part and the second part; wherein the processor is configured to perform the method of any one of claims 1-8.
  • Fig. 1 is a schematic diagram of a vehicle structure according to an embodiment of the present application.
  • FIG. 2A is a schematic diagram of the overall flow of a method for generating a panoramic surround view image of a vehicle according to an embodiment of the present application;
  • 2B is a schematic diagram of a specific flow of a method for generating a panoramic surround view image of a vehicle according to an embodiment of the present application
  • FIG. 3 is a schematic flow chart of a method for fixing and splicing original images of articulated parts of a vehicle according to an embodiment of the present application
  • FIG. 4 is a schematic flow chart of a method for calculating the coordinates of articulated points of various parts of a vehicle according to an embodiment of the present application
  • Fig. 5 is a schematic structural diagram of an apparatus for generating a panoramic surround view image of a vehicle according to an embodiment of the present application
  • Fig. 6 is a schematic structural diagram of an independent surround view image acquisition unit according to an embodiment of the present application.
  • Fig. 7 is a schematic structural diagram of a hinge point calibration unit according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a panoramic view image acquisition unit according to an embodiment of the present application.
  • Fig. 9 is a schematic diagram showing the structure of an intelligent vehicle according to an embodiment of the present application.
  • the present application provides a method for generating a panoramic view image of a vehicle, wherein the vehicle is composed of at least two parts hinged together, such as a semi-trailer.
  • The method described in this application is also applicable when the vehicle is composed of two or more articulated parts, such as trains and subways.
  • Fig. 1 is a schematic diagram of a vehicle structure according to an embodiment of the present application.
  • the vehicle as shown in Fig. 1 includes a first part 101 and a second part 102, and the two parts are connected by articulation.
  • the hinge points of the first part 101 and the second part 102 coincide.
  • a camera 11, a camera 12, and a camera 13 are provided on the front, right, and left sides of the first part 101, respectively.
  • a camera 21, a camera 22, and a camera 23 are provided on the rear, right and left sides of the second part 102, respectively.
  • The cameras may be 180° wide-angle cameras or wide-angle cameras with other angles of view; the arrangement shown is only exemplary, and in some embodiments the positions and number of cameras may be set in other ways.
  • the vehicle may further include a camera synchronization module (not shown) to realize data synchronization between various cameras.
  • The hinge point occupies a special, irreplaceable position relative to other feature points: no matter how the motion state of the two articulated parts of the vehicle changes, in reality their respective hinge points always coincide. Using such a stable point as the reference not only makes the calculation more accurate and the generated image closer to the actual scene, it also avoids the extra computation caused by relative motion, so the overall efficiency is higher. Therefore, the coordinates of the respective hinge points of the mutually articulated parts are calculated first, and this known coincidence relationship is then used in the subsequent image stitching to obtain a stable panoramic surround view image of the vehicle that is closer to the actual situation.
  • FIG. 2A is a schematic diagram of the overall flow of a method for generating a panoramic view of a vehicle according to an embodiment of the present application
  • FIG. 2B is a schematic diagram of the specific flow of a method for generating a panoramic view of a vehicle according to an embodiment of the present application.
  • The method for generating a panoramic surround view image of a vehicle described in this application can be summarized into four parts: obtaining the actual original images (21), obtaining the actual independent surround-view images (22), obtaining the hinge point coordinates (23), and obtaining the panoramic surround view image (24).
  • The terms actual original image and actual independent surround-view image are used to distinguish these images from the training original images and training independent surround-view images used later when calculating the hinge points.
  • the "actual” emphasizes the image obtained during the actual driving of the vehicle.
  • the “training” emphasizes the image obtained by artificially setting a specific angle between the first part and the second part of the vehicle in order to obtain the coordinates of the articulation point.
  • step 21 the operation of obtaining the actual original image includes:
  • Step 201 Obtain actual original images of the external environment of the first part and the second part of the vehicle that are articulated with each other.
  • The first part and the second part referred to here may be, for example, the vehicle front and a carriage; in other cases they may also be two carriages.
  • The actual original images are the images obtained directly by the cameras and may include the actual external environment of the first and second parts; of course, these actual original images may also contain images of parts of the vehicle itself.
  • a wide-angle camera may be set on the outside of the vehicle to obtain the actual original image.
  • the wide-angle camera may be a wide-angle camera of 180° or other angles. In some embodiments, in order to obtain a more excellent image performance, the shooting angle of each actual original image may be expanded as much as possible.
  • step 22 the operation of obtaining the actual independent surround view image includes:
  • Step 202 Process the actual original image to obtain respective actual independent surround view images of the first part and the second part.
  • The actual original image data of the first part and the second part obtained in step 201 are unprocessed, and the images taken by adjacent cameras have overlapping areas. Therefore, the actual original images need to be converted (the specific conversion process is described in detail later), and the multiple images belonging to the same part (the first part or the second part) are then fixedly stitched to obtain the complete actual independent surround-view image of that part.
  • the actual independent look-around image of each of the first or second part is a complete top view of the actual external environment of the part except the hinged side.
  • FIG. 3 is a schematic flowchart of a method for fixing and splicing actual original images of articulated parts of a vehicle according to an embodiment of the present application, that is, step 202 in the foregoing method is further explained.
  • the method includes:
  • Step 301 Perform distortion correction on the actual original image.
  • The original image collected by a wide-angle camera exhibits perspective distortion, which warps the image so that it cannot correctly reflect the distance relationships of the objects in the scene.
  • the camera parameters and distortion correction parameters calibrated by the wide-angle camera may be used to correct the actual original image to obtain a corrected image corresponding to each actual original image.
  • the camera parameters and distortion correction parameters can be determined according to the internal structure of the wide-angle camera and the established distortion model.
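  • As an illustration of step 301, a minimal Python/OpenCV sketch of the fisheye distortion correction is given below; the intrinsic matrix K and the distortion coefficients D are assumed to come from an offline calibration of each wide-angle camera (the text does not prescribe a particular library).

```python
import cv2
import numpy as np

def undistort_fisheye(raw_img, K, D):
    """Correct wide-angle/fisheye distortion using pre-calibrated parameters.

    K: 3x3 camera intrinsic matrix, D: 4x1 fisheye distortion coefficients
    (both assumed to come from an offline calibration step).
    """
    h, w = raw_img.shape[:2]
    # Build the undistortion lookup maps once per camera; reuse them per frame.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(raw_img, map1, map2, interpolation=cv2.INTER_LINEAR)
```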
  • Step 302 Perform perspective transformation on the image after the distortion correction.
  • The images acquired by the different cameras are projected into the geodetic coordinate system to become actual bird's-eye views (the projection can be obtained by selecting specified feature points of a calibration object and performing a perspective transformation). A mapping relationship between each corrected image and its actual bird's-eye view is thereby generated, and the actual bird's-eye view corresponding to each corrected image is obtained.
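  • A minimal sketch of the perspective transformation of step 302, assuming that four calibration points have been marked in the corrected image and that their target positions in the bird's-eye (geodetic) image are known:

```python
import cv2
import numpy as np

def to_birds_eye(corrected_img, img_pts, ground_pts, out_size):
    """Project a distortion-corrected camera image to a bird's-eye view.

    img_pts:    four pixel coordinates of calibration markers in the corrected image.
    ground_pts: the same four points expressed in the bird's-eye (geodetic) image, in pixels.
    out_size:   (width, height) of the output bird's-eye view.
    """
    H = cv2.getPerspectiveTransform(np.float32(img_pts), np.float32(ground_pts))
    return cv2.warpPerspective(corrected_img, H, out_size)
```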
  • Step 303 Perform fixed splicing on the actual bird's-eye view.
  • the actual bird's-eye view of each part in each direction can be spliced. Due to the characteristics of the wide-angle camera, there is a partial overlap area between the actual bird's-eye views taken by the adjacent cameras in each part, so the overlap area needs to be corrected to form an actual fixed mosaic.
  • a number of marker points can be manually selected for matching to achieve fixed splicing.
  • other known methods can also be used for matching.
  • Step 304 Perform image cropping on the actual fixed mosaic.
  • the actual fixed mosaic of each part of the vehicle is obtained, but such actual fixed mosaic may also include unnecessary parts.
  • The region of interest can be cropped according to requirements so that the image size fits the display range of the screen, finally obtaining the actual independent surround-view images of the first part and the second part.
  • step 23 the step of obtaining the coordinates of the hinge point specifically includes:
  • Step 203 Obtain the coordinates of the hinge point.
  • The hinge point coordinates may be calculated for initialization each time the system starts, or the hinge point may be calibrated at the same time as the camera coordinates are calibrated. During driving, the calculation can then be based on the known hinge point coordinates instead of repeatedly recomputing them.
  • the method of calculating the coordinates of the hinge point will be introduced separately later.
  • the so-called training independent look-around image here means that in order to calculate the coordinates of the hinge points of the first and second parts, n kinds of relative positions are artificially formed between the first part and the second part, and n pairs of the first and second parts are obtained accordingly. Part of the training independently looks around the image. Based on these n pairs of matching feature points in the training independent look-around images, corresponding n training rotation angles can be calculated. Combining the corresponding n training translation vectors, the coordinates of the respective articulation points of the first part and the second part of the vehicle can be determined.
  • Fig. 4 is a schematic flowchart of a method for calculating the hinge point coordinates of the parts of a vehicle according to an embodiment of the present application. As described above, the calculation of the hinge point coordinates is not part of the method shown in FIG. 2A; it is performed before that method, which then simply obtains the already-calculated coordinates.
  • the method for calculating the coordinates of the articulation points of each part of the vehicle may include:
  • Step 401 Obtain n pairs of training original images of the first part and the second part respectively, and obtain n pairs of training independent surround-view images through fixed stitching. Each pair of images corresponds to a relative position of the first part and the second part, a total of n kinds of positions. Wherein, n can be a positive integer greater than or equal to 2.
  • the operation of obtaining the training independent look-around image by fixed stitching is similar to the operation of obtaining the actual independent look-around image described above, and will not be repeated here.
  • Step 402 Detect and match each other's feature points for each of the n pairs of training independent look-around images of the first part and the second part.
  • a feature point refers to a point in an image that has distinctive characteristics, can effectively reflect the essential features of the image, and can identify a target object in the image.
  • the feature point of an image consists of two parts: Keypoint and Descriptor.
  • the key point refers to the position, direction and scale information of the feature point in the image.
  • The descriptor is usually a vector that describes, according to a hand-designed scheme, the information of the pixels around the key point. In general, feature points with similar appearance have similar descriptors. Therefore, during matching, as long as the descriptors of two feature points are close in the vector space, the two can be considered the same feature point.
  • the key points of the two articulated training independent look-around images can be obtained, and the descriptors of the feature points can be calculated according to the positions of the key points.
  • The feature points of the surround-view images of the two articulated parts of the vehicle are then matched to obtain the feature matching point pairs between the two images.
  • For example, a brute-force matching algorithm can be used: the descriptors of the feature points of the two articulated independent surround-view images are compared one by one in the vector space, and the pairs with the smallest distances are selected as matching point pairs.
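  • The detection and brute-force matching described above might look like the following sketch; ORB features with Hamming-distance matching are used here purely as one possible choice (the text also mentions surf, sift and other matchers):

```python
import cv2

def match_features(img_a, img_b, max_features=2000):
    """Detect ORB keypoints in two surround-view images and match them by brute force."""
    orb = cv2.ORB_create(nfeatures=max_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    # Hamming distance suits binary ORB descriptors; crossCheck keeps mutual best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    pts_a = [kp_a[m.queryIdx].pt for m in matches]
    pts_b = [kp_b[m.trainIdx].pt for m in matches]
    return pts_a, pts_b
```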
  • Step 403 Calculate n training rotation translation matrices between the first part and the second part feature points based on the matched feature point pairs, and calculate the n training rotation angles between the first part and the second part accordingly.
  • the following introduces some examples of calculating the training rotation angle based on matching feature point pairs.
  • The random sample consensus algorithm (RANSAC) or the least median of squares method (LMedS) may be used to select the feature matching point pairs of the training independent surround-view images of the two articulated parts, from which n training rotation-translation matrices and n training rotation angles θ_1 ... θ_n between the first part and the second part are obtained.
  • The training rotation angle is thus obtained by computation. Compared with the physical measurement methods of the prior art, the result obtained by the method of this application is more accurate and the training rotation angle is easier to obtain; at the same time the use of sensors is reduced, the cost is lower, the applicability is wider, and interference factors in the environment can be avoided.
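  • One way to realize step 403 is to fit a rigid rotation-plus-translation model to the matched point pairs with a RANSAC-based estimator and read the rotation angle from the resulting matrix. The sketch below uses OpenCV's partial 2D affine estimator for this; the specific estimator is an assumption, since the text only names RANSAC/LMedS in general terms.

```python
import cv2
import numpy as np

def training_rotation(pts_first, pts_second):
    """Estimate the training rotation angle (radians) between two sets of matched points."""
    src = np.float32(pts_first).reshape(-1, 1, 2)
    dst = np.float32(pts_second).reshape(-1, 1, 2)
    # Fits a 2x3 matrix [s*R(theta) | t]; RANSAC rejects mismatched pairs.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    theta = np.arctan2(M[1, 0], M[0, 0])   # rotation component of the fitted transform
    return theta, M, inliers
```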
  • Step 404 Determine the n training translation vectors based at least on the coordinates of the matching feature points between the training independent surround-view images of the first part and the second part, and the corresponding n training rotation angles between the first part and the second part.
  • Let (a_x, a_y) be the hinge point coordinates of the first part 101 of the vehicle and (b_x, b_y) the hinge point coordinates of the second part 102.
  • Let (x_0, y_0) and (x_1, y_1) be the coordinates of a pair of mutually matching feature points in the training independent surround-view images of the first part 101 and the second part 102, respectively, and let θ be the training rotation angle between the first part and the second part.
  • The training translation vector is the translation parameter by which the feature points of one part's training independent surround-view image are translated into the other part's image; here it is the parameter translating the feature points in a matching point pair from the training independent surround-view image of the first part 101 to that of the second part 102. Therefore, for a pair of matching points, if the coordinate of the feature point in the training independent surround-view image of the first part 101 is taken as the origin, the coordinate of the corresponding matching point in the training independent surround-view image of the second part 102 is numerically equal to the training translation vector between the two images.
  • The training translation vector (dx, dy) from the first part's training surround-view image to the second part's training surround-view image can be expressed by formula (3).
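  • Formula (3) itself is not reproduced in this text. Under the usual rigid-motion reading, in which the matched point is first rotated by the training rotation angle and the residual offset is the translation, the training translation vector could be computed as in the following sketch:

```python
import numpy as np

def training_translation(p_first, p_second, theta):
    """Translation (dx, dy) implied by one matched pair and the training rotation angle.

    Assumes the rigid relation  p_second = R(theta) @ p_first + t,
    which is one plausible reading of formula (3) in the text.
    """
    x0, y0 = p_first
    x1, y1 = p_second
    dx = x1 - (x0 * np.cos(theta) - y0 * np.sin(theta))
    dy = y1 - (x0 * np.sin(theta) + y0 * np.cos(theta))
    return dx, dy
```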
  • Step 405 Calculate the articulation point coordinates of the first part and the second part of the vehicle according to the coordinate of the feature point, the training rotation angle, and the training translation vector of the n pairs of training independent look-around images.
  • the calculation here is based on the premise that the points in the training independent look-around images of the first part and the second part of the vehicle are rotated with the respective hinge point as the center of the circle when the direction of movement of the vehicle changes.
  • The selected training rotation angles within each group should satisfy the condition that their difference is greater than the preset angle threshold.
  • For each such group, a candidate set of hinge point coordinates (a_x, a_y, b_x, b_y) is calculated.
  • All candidate results are sorted; the sorted results follow an approximately Gaussian distribution, and the median of the sorted results is taken as the hinge point coordinates.
  • Here n can be an integer greater than or equal to 2, m (the number of groups) can be an integer greater than or equal to 1, and m is less than n.
  • In this way, the final hinge point coordinates are obtained by matching and computing the feature points of the independent surround-view images of the two mutually articulated parts.
  • the coordinate of the hinge point obtained by this method is more accurate.
  • the method is simple and reliable to operate, and can realize the calibration of the hinge point without the aid of other tools, and saves the labor cost and material resources.
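  • A sketch of one way to realize steps 404-405 is given below. It assumes that points map between the two surround-view images as x' = R(θ)x + t with t = b - R(θ)a, where a and b are the hinge points in the first-part and second-part images; under that assumption, two sufficiently different training angles yield a solvable 2x2 linear system for a, and the median over all groups is taken as described above. The angle threshold value used here is illustrative.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def hinge_from_pair(theta_i, t_i, theta_j, t_j):
    """Candidate hinge points (a, b) from two training poses.

    Assumes each training pose satisfies  t = b - R(theta) @ a,
    i.e. points map between the two surround-view images as x' = R(theta) x + t.
    """
    A = rot(theta_j) - rot(theta_i)                  # (R_j - R_i) a = t_i - t_j
    a = np.linalg.solve(A, np.asarray(t_i) - np.asarray(t_j))
    b = np.asarray(t_i) + rot(theta_i) @ a
    return a, b

def hinge_points(thetas, ts, min_angle_diff=np.deg2rad(5)):
    """Median of all candidate hinge coordinates over groups of two training poses."""
    candidates = []
    for i in range(len(thetas)):
        for j in range(i + 1, len(thetas)):
            if abs(thetas[i] - thetas[j]) > min_angle_diff:   # preset angle threshold
                a, b = hinge_from_pair(thetas[i], ts[i], thetas[j], ts[j])
                candidates.append(np.concatenate([a, b]))
    return np.median(np.array(candidates), axis=0)           # (a_x, a_y, b_x, b_y)
```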
  • step 24 the operation of obtaining the panoramic surround view image includes:
  • Step 204 Determine matching feature point pairs in the overlapping area of the actual independent look-around images of the first part and the second part.
  • the matching here means that there is a corresponding relationship between two points, or the same point is represented in the first and second parts of the independent look-around images.
  • A matching feature point pair here is not necessarily one of the successfully matched point pairs mentioned later.
  • the method for matching feature point pairs includes feature point detection, feature point matching, and feature point screening.
  • the foregoing method process is similar to step 402 and step 403, which will not be repeated here, except that the image data of the operation is an image obtained during actual driving instead of a training image.
  • the feature point detection method may include: orb, surf, or sift algorithm.
  • the feature point matching algorithm may include a brute force matching algorithm or a nearest neighbor matching algorithm.
  • a feature point filtering method is further included, and the filtering method may include RANSAC or GMS algorithm.
  • Step 205 Make the hinge points of the actual independent surround-view images of the first part and the second part coincide; assume that the independent surround-view image of the second part remains stationary while the independent surround-view image of the first part rotates around the coincident hinge point, or that the matching feature points of the first part are rotated relative to the matching feature points of the second part, and determine the number of successfully matched point pairs among the matching feature point pairs.
  • Equivalently, the independent surround-view image of the first part may remain stationary while the independent surround-view image of the second part rotates around the hinge point, or the matching feature points of the second part may be rotated relative to those of the first part.
  • Let x_i be the coordinates of a feature point in the actual independent surround-view image of the first part of the vehicle, and x'_i the coordinates of the corresponding matched feature point in the actual independent surround-view image of the second part, and let θ be the assumed rotation angle between the first part and the second part of the vehicle.
  • The distance between the matching feature point pairs in the actual independent surround-view images of the first and second parts is expressed by formula (7).
  • Here H(θ, t) is the (two-dimensional) rotation-translation matrix between the matched feature point sets of the actual independent surround-view images of the first and second parts, as expressed by formula (8).
  • t = (t_1, t_2) is the translation vector of the hinge point of the first part's independent surround-view image relative to the hinge point of the second part's independent surround-view image.
  • the first threshold can be set according to the needs of the user.
  • Iverson bracket can be used to express this result, as shown in formula (9):
  • Formula (11) can be used to calculate the number of successfully matched feature point pairs k
  • i is 0 to L, where L is the number of matching feature point pairs in the actual independent look-around image of the first part and the second part, and L is an integer greater than or equal to 1.
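  • Steps 205-206 can be pictured with the following sketch: the hinge points are made to coincide, a range of hypothetical rotation angles is swept, and for each angle the number of matched pairs whose transformed distance falls below the first threshold is counted (the Iverson-bracket count of formulas (9)-(11)); the angle(s) with the largest count become the candidate angle(s) of step 206. Since formulas (7)-(12) are not reproduced here, the transform below simply rotates the first part's points about the coincident hinge point, which is one consistent reading; the angle range, step and threshold values are illustrative.

```python
import numpy as np

def count_inliers(pts1, pts2, hinge1, hinge2, theta, dist_thresh):
    """Number of matched pairs whose distance is below the first threshold
    when part 1 is rotated by theta about the coincident hinge point."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    pred = (np.asarray(pts1, float) - hinge1) @ R.T + hinge2   # x' = R(theta)(x - a) + b
    d = np.linalg.norm(pred - np.asarray(pts2, float), axis=1)
    return int(np.sum(d < dist_thresh))

def candidate_angles(pts1, pts2, hinge1, hinge2, dist_thresh=2.0,
                     angle_range=np.deg2rad(60), step=np.deg2rad(0.1)):
    """Sweep hypothetical rotation angles and keep those with the maximum inlier count."""
    thetas = np.arange(-angle_range, angle_range, step)
    counts = [count_inliers(pts1, pts2, hinge1, hinge2, t, dist_thresh) for t in thetas]
    best = max(counts)
    return [t for t, k in zip(thetas, counts) if k == best]    # may not be unique
```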
  • Step 206 Use the rotation angle corresponding to the largest number of successfully matched feature point pairs as the candidate rotation angle of the first part relative to the second part.
  • formula (12) can be used to determine the candidate rotation angle ⁇ 1 corresponding to the maximum number of successfully matched feature point pairs:
  • In some cases the candidate rotation angle determined in this way is unique; if it is not, the calculation can be continued according to the following embodiment to determine the actual rotation angle.
  • After step 206, the method may jump directly to step 208 and use the candidate rotation angle as the actual rotation angle.
  • In step 208, the actual independent surround-view images of the first part and the second part are fused to obtain the panoramic surround view image of the vehicle.
  • the actual rotation angle between the first part and the second part of the vehicle can be quickly determined, which saves computing resources and facilitates rapid synthesis of the panoramic view image of the vehicle in actual driving.
  • Alternatively, after step 206 the method may jump to step 207.
  • Step 207 Determine the coordinates of the successfully matched feature point pairs based on the candidate rotation angles, calculate the distance between the two points in each successfully matched feature point pair, sum the distances, and take the rotation angle corresponding to the smallest sum as the actual rotation angle of the first part relative to the second part.
  • Step 207 can be used to fine-tune the candidate rotation angle. If step 207 were performed directly without first performing step 206, a wrong rotation angle might be obtained, because the smallest sum of distances could come from mismatched feature point pairs. It is therefore important to perform step 206 first.
  • the formula (13) can be used to determine the candidate rotation angle ⁇ 2 corresponding to the minimum sum of the distances of the feature points:
  • Here G is the number of successfully matched feature point pairs, X_G is the set of successfully matched feature points, G is an integer greater than or equal to 1, and G is less than or equal to L.
  • If the candidate rotation angle obtained in step 206 is not unique, one of the candidates needs to be selected as the final actual rotation angle.
  • Suppose there are F candidate rotation angles, where F is an integer greater than 1.
  • Formula (8) can be used to obtain F candidate rotation-translation matrices H; substituting the F matrices into formula (10), the coordinates of all successfully matched points X_j and X'_j are calculated, yielding the point set X_G.
  • Formula (14) is then used to calculate, over the multiple candidate rotation-translation matrices H_m, the angle θ_2 corresponding to the minimum sum of the distances between the successfully matched feature point pairs.
  • Once the candidate rotation angle θ_2 corresponding to the minimum sum of the distances between the successfully matched feature point pairs is determined, it can be used as the actual rotation angle of the first part relative to the second part.
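  • Continuing the same reading as the previous sketch, step 207 can select among the candidate angles by minimizing the summed distance over the successfully matched set, mirroring formulas (13)-(14); the threshold value below is illustrative.

```python
import numpy as np

def refine_angle(pts1, pts2, hinge1, hinge2, candidates, dist_thresh=2.0):
    """Among the candidate angles of step 206, pick the one whose successfully
    matched pairs have the smallest total distance (step 207)."""
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    best_theta, best_sum = None, np.inf
    for theta in candidates:
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        pred = (pts1 - hinge1) @ R.T + hinge2        # rotate part 1 about the hinge
        d = np.linalg.norm(pred - pts2, axis=1)
        inlier_d = d[d < dist_thresh]                 # successfully matched pairs only
        if inlier_d.size and inlier_d.sum() < best_sum:
            best_sum, best_theta = inlier_d.sum(), theta
    return best_theta
```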
  • After step 207, the method may jump to step 209.
  • In step 209, the actual independent surround-view images of the first part and the second part are fused to obtain the panoramic surround view image of the vehicle.
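  • Finally, the fusion of steps 208/209 can be sketched as warping the first part's independent surround-view image so that its hinge point lands on the second part's hinge point with the estimated actual rotation angle, then compositing the two images; the overwrite-style blending used here is purely illustrative.

```python
import cv2
import numpy as np

def fuse_panorama(img_first, img_second, hinge_first, hinge_second, theta_deg):
    """Rotate the first part's surround view about its hinge point, move it onto the
    second part's hinge point, and overlay it on the second part's surround view."""
    h, w = img_second.shape[:2]
    # Rotation (in degrees) about the first image's hinge point.
    M = cv2.getRotationMatrix2D(tuple(hinge_first), theta_deg, 1.0)
    # Extra translation so that both hinge points coincide in the output canvas.
    M[0, 2] += hinge_second[0] - hinge_first[0]
    M[1, 2] += hinge_second[1] - hinge_first[1]
    warped = cv2.warpAffine(img_first, M, (w, h))
    # Illustrative blending: overwrite wherever the warped image has content (3-channel input).
    mask = warped.sum(axis=2) > 0
    panorama = img_second.copy()
    panorama[mask] = warped[mask]
    return panorama
```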
  • The method introduced in this embodiment adds constraint conditions for a finer calculation and further refines the result of step 206, so that the successfully matched features coincide more closely and the obtained actual rotation angle is closer to reality; this improves the certainty of the calculated steering angle and makes the final result more robust and accurate.
  • The method of this embodiment also resolves the case in which step 206 yields multiple candidate rotation angles without determining one, improving the certainty of the final result.
  • the method of the present application is based on the hinge relationship of the first part and the second part, and therefore sets the precondition that the coordinates of the two hinge points coincide, and on this premise, the angle with the largest number of successfully matched feature point pairs is selected as the actual rotation angle.
  • If the rotation-translation relationship between the first part and the second part were calculated based only on the matching feature points in the surround-view images, the two parts could appear to jump or drift apart under some circumstances, producing jarring visual effects.
  • In this application, since the coordinates of the hinge points of the first part and the second part are fixed, this problem is well avoided.
  • Relying on the images' own feature matching, the panoramic surround view function is realized even while the vehicle's articulation angle changes, which solves the problem of traditional surround view solutions.
  • Traditional solutions find it difficult to achieve seamless panoramic stitching while the steering angle between the first part and the second part changes in real time; the present approach also has the advantages of simple operation and low cost.
  • Fig. 5 is a schematic structural diagram of an apparatus for generating a panoramic view image of a vehicle according to an embodiment of the present application.
  • This device can be located on the vehicle or in the remote server of the vehicle.
  • the structure of the apparatus for generating a panoramic view image of a vehicle may include:
  • The original image acquisition unit 501 is configured to obtain original images of the external environment of the two articulated parts of the vehicle.
  • the original image acquisition unit 501 may include one or more wide-angle cameras arranged on the non-articulated side of the two hinged parts of the vehicle.
  • the original image obtaining unit 501 may be used to obtain actual original images, and may also be used to obtain training original images.
  • An independent surround view image acquisition unit 502, coupled to the original image acquisition unit 501, is configured to stitch the original image data of the two parts into independent surround view images of each part.
  • the independent look-around image acquisition unit 502 may be used to obtain actual independent look-around images of the first and second parts of the vehicle, and may also be used to obtain training independent look-around images of both.
  • the hinge point calibration unit 503 which is coupled to the independent surround view image acquisition unit 502, is configured to calculate the hinge point coordinates according to the training independent surround view image data.
  • the hinge point calibration unit 503 may be configured to perform operations in steps 402-405 in the embodiment shown in FIG. 4. The operation of this unit is to obtain the coordinates of the hinge point.
  • the coordinates of the articulation point are obtained through calculation before the vehicle actually travels or every time it starts. Once acquired, the coordinate value is recorded for subsequent use.
  • the device may not include the hinge point calibration unit, and the hinge point coordinates may be calculated by a remote server, and the calculated hinge point coordinates may be sent to the device.
  • The panoramic surround view image acquisition unit 504, which is coupled to the hinge point calibration unit 503 and the independent surround view image acquisition unit 502, is configured to determine the actual rotation angle of the first part relative to the second part and, according to the hinge point coordinates and the actual rotation angle, to synthesize the actual independent surround view images of the two parts into a panoramic surround view image of the vehicle.
  • the vehicle panoramic surround view image generating device may not include the hinge point calibration unit, but receives the hinge point coordinates from the outside.
  • Fig. 6 is a schematic structural diagram of an independent surround view image acquisition unit of a vehicle according to an embodiment of the present application.
  • the independent surround view image acquisition unit 502 may include:
  • Image distortion correction module 61, coupled to the original image acquisition unit 501 and configured to perform distortion correction on the original images to obtain corrected images corresponding to the original images.
  • Perspective transformation module 62, coupled to the image distortion correction module 61 and configured to project the corrected images into corresponding bird's-eye views in the geodetic coordinate system.
  • Fixed stitching module 63, coupled to the perspective transformation module 62 and configured to perform fixed stitching on the bird's-eye views of the first part and the second part, respectively, to obtain a fixed stitched image of the first part and a fixed stitched image of the second part.
  • Image cropping module 64, coupled to the fixed stitching module 63 and configured to crop the fixed stitched images of the first part and the second part to obtain the independent surround view images of the first and second parts.
  • Fig. 7 is a schematic structural diagram of a hinge point calibration unit of a vehicle according to an embodiment of the present application.
  • the hinge point calibration unit 503 may include:
  • The feature point detection and matching module 71, coupled to the independent surround view image acquisition unit 502, is configured to receive the n pairs of training independent surround-view images of the first and second parts and to detect and match the feature points in each pair of training independent surround-view images of the first part and the second part.
  • The training rotation angle calculation module 72, coupled to the feature point detection and matching module 71, is configured to obtain, based on the matched feature point coordinates, the n training rotation-translation matrices between the first part and the second part for each pair of training independent surround-view images, and accordingly to obtain the n training rotation angles between the first part and the second part in each pair of independent surround-view images.
  • The training translation vector acquisition module 73, coupled to the training rotation angle calculation module 72, is configured to determine the corresponding n training translation vectors of each pair of training independent surround-view images according to the coordinates of the matched feature points of each pair of training independent surround-view images of the first part and the second part and the n training rotation angles.
  • The hinge point coordinate determination module 74, coupled to the training translation vector acquisition module 73 and the training rotation angle calculation module 72, is configured to determine the hinge point coordinates of the first part and the second part of the vehicle according to the matched feature point coordinates of the n pairs of training independent surround-view images, the n training rotation angles and the corresponding n training translation vectors.
  • The hinge point coordinate determination module 74 may be further configured to first form groups of two from the n pairs of training independent surround-view images, for a total of m groups; then, according to the feature point coordinates, training rotation angle and training translation vector of each pair of training independent surround-view images, obtain the corresponding hinge point coordinate calculation result for each group; and finally, after the calculation results of the m groups have been obtained, sort the hinge point coordinate results of all groups and take the median of the sorted results as the hinge point coordinates.
  • the specific method has been disclosed in the content related to the aforementioned formula (5) and formula (6), and will not be repeated here.
  • The difference between the training rotation angles of the first part relative to the second part within a group of training independent surround-view images should be greater than a preset angle in order to satisfy the calculation accuracy.
  • the aforementioned vehicle panoramic surround view image generating device obtains the final articulation point coordinates by matching and calculating the feature points of the independent surround view images of the two articulated parts.
  • the coordinate of the hinge point obtained by this method is more accurate.
  • the method is simple and reliable to operate, and can realize the calibration of the hinge point without using other tools, such as an angle sensor, and saves labor costs and material resources.
  • FIG. 8 is a schematic structural diagram of a panoramic surround view image acquiring unit of a vehicle panoramic surround view image generating device according to an embodiment of the present application.
  • the panoramic view image acquisition unit 504 includes:
  • The feature point detection and matching module 81 is coupled to the independent surround view image acquisition unit 502 and is configured to receive the actual independent surround-view images of the first and second parts and to detect and match the feature points therein. Depending on the embodiment, module 81 and module 71 may be the same module or different modules.
  • The actual rotation angle calculation module 82, coupled to the feature point detection and matching module 81 and the hinge point calibration unit 503, is configured to receive the hinge point coordinates and make the hinge points of the independent surround-view images of the first part and the second part coincide; to calculate the distance between the two points in each matching feature point pair and take the matching feature point pairs whose distance is smaller than the preset first threshold as successfully matched feature point pairs; and to take the rotation angle corresponding to the largest number of successfully matched feature point pairs as the candidate rotation angle of the first part relative to the second part.
  • The panoramic surround view image generation module 83 is coupled to the actual rotation angle calculation module 82 and the hinge point calibration unit 503 and is configured to take the candidate rotation angle as the actual rotation angle and, according to the hinge point coordinates and the actual rotation angle, fuse the actual independent surround-view images of the first part and the second part to obtain a panoramic surround view image of the vehicle.
  • In some embodiments, the actual rotation angle calculation module 82 may be further configured to determine the coordinates of the successfully matched feature point pairs based on the candidate rotation angle, calculate the distance between the two points in each successfully matched feature point pair and sum the distances, and take the rotation angle corresponding to the minimum sum as the actual rotation angle of the first part relative to the second part.
  • In such embodiments, the panoramic surround view image generation module 83, coupled to the actual rotation angle calculation module 82 and the hinge point calibration unit 503, is configured to perform rotation-and-translation stitching of the respective actual independent surround-view images of the first part and the second part according to the hinge point coordinates and the determined actual rotation angle to obtain a panoramic surround view image of the vehicle.
  • Fig. 9 is a schematic diagram showing the structure of an intelligent vehicle according to an embodiment of the present application.
  • The intelligent vehicle includes a processor 901 and a memory 902 for storing a computer program that can be run on the processor 901.
  • When executing the computer program, the processor 901 performs all or part of the methods provided in any embodiment of the present application.
  • References to the processor 901 and the memory 902 do not imply that there is exactly one of each; there may be one or more of either.
  • The intelligent vehicle may further include a memory 903, a network interface 904, and a system bus 905 connecting the memory 903, the network interface 904, the processor 901 and the memory 902.
  • The memory stores the operating system and the data processing device provided in the embodiments of the present application.
  • The processor 901 is used to support the operation of the entire intelligent vehicle.
  • The memory 903 may be used to provide an environment for running the computer program in the memory 902.
  • The network interface 904 can be used for network communication with external server devices, terminal devices and the like, receiving or sending data, for example obtaining driving control instructions input by the user.
  • the intelligent vehicle may also include a GPS unit 906 configured to obtain location information of the vehicle.
  • the sensor unit 907 may include a wide-angle camera configured to obtain actual or training original images.
  • An embodiment of the present application also provides a computer storage medium, for example a memory storing a computer program, where the computer program can be executed by a processor to complete the steps of the vehicle panoramic surround view image generation method provided in any embodiment of the present application.
  • the computer storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM, etc.; it may also be a variety of devices including one or any combination of the foregoing memories.
  • the methods involved in this application may also be executed in whole or in part by the remote server of the smart driving device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Navigation (AREA)
  • Studio Devices (AREA)
  • Traffic Control Systems (AREA)

Abstract

This application provides a method for generating a panoramic surround view image of a vehicle, including: acquiring actual original images of the external environment of a first part and a second part of the vehicle that are articulated with each other; processing the actual original images to obtain respective actual independent surround-view images of the first part and the second part; obtaining the coordinates of the respective hinge points of the first and second parts; determining matching feature point pairs in the actual independent surround-view images of the first part and the second part; calculating the distance between the two points in each matching feature point pair, and taking the matching feature point pairs whose distance is smaller than a preset first threshold as successfully matched feature point pairs; and taking the rotation angle corresponding to the maximum number of successfully matched feature point pairs as a candidate rotation angle of the first part relative to the second part. This application also provides a device for generating a panoramic surround view image of a vehicle and an intelligent vehicle.

Description

Method and device for generating a panoramic surround view image of a vehicle
Technical Field
This application belongs to the field of intelligent driving and in particular relates to a method and device for generating a panoramic surround view image of a vehicle.
Background
At present, many panoramic surround view systems on the market are designed for vehicles in which the relative positions of the cameras remain fixed during driving, that is, for one-piece vehicles. However, such systems are not well suited to split or combined vehicles made up of multiple mutually articulated parts (the vehicle front and one or more carriages hinged to it), because the positional relationship between the articulated parts changes dynamically during driving, especially when turning. Existing panoramic surround view solutions cannot fully cope with this situation; display blind areas, ghosting and other problems appear, which creates safety risks.
Existing calibration and stitching methods for panoramic surround view systems generally obtain the rotation angle between the different parts of the vehicle by installing an angle sensor device and stitch the panoramic surround view image on that basis. However, this method requires sensors in addition to the cameras, and suffers from problems such as high cost and difficult installation and maintenance.
Summary of the invention
In view of the technical problems in the prior art, the present application provides a method for generating a panoramic surround view image of a vehicle, including: acquiring actual original images of the external environment of a first part and a second part of the vehicle that are articulated with each other; processing the actual original images to obtain respective actual independent surround view images of the first part and the second part; acquiring the hinge point coordinates of the first and second parts; determining matched feature point pairs in the actual independent surround view images of the first part and the second part; making the respective hinge points in the actual independent surround view images of the first part and the second part coincide with each other, hypothetically rotating the independent surround view image of the first part relative to that of the second part or hypothetically rotating the matched feature points of the first part relative to those of the second part, correspondingly calculating the distance between the two points of each matched feature point pair, and taking the matched feature point pairs whose distance is smaller than a preset first threshold as successfully matched feature point pairs; determining the actual rotation angle of the first part relative to the second part at least based on the number of successfully matched feature point pairs; and fusing the respective actual independent surround view images of the first part and the second part according to the hinge point coordinates and the actual rotation angle to obtain the panoramic surround view image of the vehicle.
In particular, determining the actual rotation angle of the first part relative to the second part at least based on the number of successfully matched feature point pairs includes taking the angle corresponding to the maximum number of successfully matched feature point pairs as the actual rotation angle between the first part and the second part.
In particular, determining the actual rotation angle of the first part relative to the second part at least based on the number of successfully matched feature point pairs includes: taking the angle corresponding to the maximum number of successfully matched feature point pairs as a candidate rotation angle between the first part and the second part; determining the coordinates of the successfully matched feature point pairs based on the candidate rotation angle; calculating the distance between the two points of each successfully matched feature point pair and summing these distances; and taking the rotation angle corresponding to the minimum sum as the actual rotation angle of the first part relative to the second part.
In particular, determining the coordinates of the successfully matched feature point pairs based on the candidate rotation angle includes obtaining a candidate rotation-translation matrix based on the hinge point coordinates of the first part and the second part and the candidate rotation angle; and calculating the distance between the two points of a successfully matched feature point pair includes calculating that distance based on the coordinates of the matched feature point pair and the candidate rotation-translation matrix.
In particular, processing the actual original images to obtain the respective actual independent surround view images of the first part and the second part includes: performing distortion correction on the actual original images of the external environment of the first part and the second part; projecting the corrected images into a ground coordinate system to generate bird's-eye views of the first and second parts; detecting and matching internal feature points in the overlapping regions of the respective bird's-eye views of the first part and the second part, and then performing fixed stitching to obtain respective fixed stitched images of the first part and the second part; and cropping the respective fixed stitched images of the first part and the second part to obtain the respective actual independent surround view images of the first part and the second part.
In particular, determining the matched feature point pairs in the overlapping region of the actual independent surround view images of the first part and the second part includes: feature point detection, detecting natural feature points in the overlapping region of the actual independent surround view images of the first part and the second part and generating descriptors; feature point matching, generating feature point matching pairs through a matching algorithm at least based on the descriptors, where the matching algorithm includes the orb, surf or sift algorithm; and feature point screening, screening out mismatched point pairs through a screening algorithm, where the screening algorithm includes the RANSAC or GMS algorithm.
In particular, the method further includes a method for calculating the hinge point coordinates of the first part and the second part, including: acquiring multiple pairs of training independent surround view images of the first part and the second part; detecting and matching feature points for each pair of training independent surround view images; calculating, based on the matched feature point pairs in each pair of training independent surround view images, a corresponding plurality of training rotation-translation matrices, and further calculating a corresponding plurality of training rotation angles between the first part and the second part; determining a corresponding plurality of training translation vectors between the first part and the second part at least based on the coordinates of the matched feature points in the multiple pairs of training independent surround view images and the plurality of training rotation angles; and calculating the hinge point coordinates of the first part and the second part according to the feature point coordinates of the multiple pairs of independent surround view images, the plurality of training rotation angles and the plurality of training translation vectors.
In particular, the method for calculating the hinge point coordinates of the first part and the second part further includes: forming groups of at least two of the plurality of training rotation angles, and calculating, based on the training translation vectors, candidate hinge point coordinates corresponding to each group; and sorting all candidate hinge point coordinates and taking the median of the sorted results as the hinge point coordinates of the first part and the second part, where the difference between the at least two training rotation angles in each group is greater than a preset angle threshold.
The present application also relates to a device for generating a panoramic surround view image of a vehicle, the device including: an original image acquisition unit, configured to acquire actual or training original images of the external environment of a first part and a second part of the vehicle that are articulated with each other; an independent surround view image acquisition unit, coupled to the original image acquisition unit and configured to stitch the actual or training original images of the first and second parts into respective actual or training independent surround view images; and a panoramic surround view image acquisition unit, coupled to the hinge point calibration unit and the independent surround view image acquisition unit, which includes a feature point detection and matching module, coupled to the independent surround view image acquisition unit and configured to receive the actual independent surround view images of the first and second parts and to detect and match feature points therein; an actual rotation angle calculation module, coupled to the feature point detection and matching module and configured to acquire the hinge point coordinates of the first part and the second part, make the hinge points of the independent surround view images of the first part and the second part coincide, hypothetically rotate the independent surround view image of the first part relative to that of the second part or hypothetically rotate the matched feature points of the first part relative to those of the second part, calculate the distance between the two points of each matched feature point pair, take the matched feature point pairs whose distance is smaller than a preset first threshold as successfully matched feature point pairs, and determine the actual rotation angle of the first part relative to the second part at least based on the number of successfully matched feature point pairs; and a panoramic surround view image generation module, coupled to the actual rotation angle calculation module and configured to fuse the respective actual independent surround view images of the first part and the second part according to the hinge point coordinates and the actual rotation angle to obtain the panoramic surround view image of the vehicle.
In particular, the actual rotation angle calculation module being configured to determine the actual rotation angle of the first part relative to the second part at least based on the number of successfully matched feature point pairs includes taking the angle corresponding to the maximum number of successfully matched feature point pairs as the actual rotation angle between the first part and the second part.
In particular, the actual rotation angle calculation module being configured to determine the actual rotation angle of the first part relative to the second part at least based on the number of successfully matched feature point pairs includes: taking the angle corresponding to the maximum number of successfully matched feature point pairs as a candidate rotation angle between the first part and the second part; determining the coordinates of the successfully matched feature point pairs based on the candidate rotation angle; calculating the distance between the two points of each successfully matched feature point pair and summing these distances; and taking the rotation angle corresponding to the minimum sum as the actual rotation angle of the first part relative to the second part.
In particular, the device further includes a hinge point calibration unit, coupled to the independent surround view image acquisition unit, which includes: a feature point detection and matching module, coupled to the independent surround view image acquisition unit and configured to receive multiple pairs of training independent surround view images of the first and second parts and to detect and match the feature points in each pair of training independent surround view images of the first part and the second part; a training rotation angle calculation module, coupled to the feature point detection and matching module and configured to obtain, based on the matched feature point coordinates, a plurality of training rotation-translation matrices between the feature points of the first part and the second part in each pair of training independent surround view images, and correspondingly obtain a plurality of training rotation angles between the first part and the second part in each pair of independent surround view images; a training translation vector acquisition module, coupled to the training rotation angle calculation module and configured to determine a corresponding plurality of training translation vectors for each pair of training independent surround view images according to the coordinates of the matched feature points of each pair of training independent surround view images and the plurality of training rotation angles; and a hinge point coordinate determination module, coupled to the translation vector acquisition module and the training rotation angle calculation module and configured to determine the hinge point coordinates of the first part and the second part of the vehicle according to the matched feature point coordinates of the multiple pairs of training independent surround view images, the plurality of training rotation angles and the corresponding plurality of training translation vectors.
The present application also relates to an intelligent vehicle, including: a first part and a second part articulated with each other;
a processor, and a memory coupled to the processor; and a sensing unit configured to capture actual or training original images of the first part and the second part; where the processor is configured to execute the method of any one of claims 1-8.
Brief description of the drawings
The preferred embodiments of the present application will be further described in detail below with reference to the accompanying drawings, in which:
Fig. 1 is a schematic structural diagram of a vehicle according to an embodiment of the present application;
Fig. 2A is a schematic overall flowchart of a method for generating a panoramic surround view image of a vehicle according to an embodiment of the present application;
Fig. 2B is a schematic detailed flowchart of a method for generating a panoramic surround view image of a vehicle according to an embodiment of the present application;
Fig. 3 is a schematic flowchart of a method for fixed stitching of the original images of the articulated parts of a vehicle according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of a method for calculating the hinge point coordinates of the parts of a vehicle according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a device for generating a panoramic surround view image of a vehicle according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an independent surround view image acquisition unit according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a hinge point calibration unit according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a panoramic surround view image acquisition unit according to an embodiment of the present application; and
Fig. 9 is a schematic structural diagram of an intelligent vehicle according to an embodiment of the present application.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the present application.
In the following detailed description, reference may be made to the accompanying drawings, which form a part of the present application and illustrate specific embodiments of the present application. In the drawings, similar reference numerals denote substantially similar components in different figures. The specific embodiments of the present application are described below in sufficient detail to enable those of ordinary skill possessing the relevant knowledge and technology in the art to implement the technical solutions of the present application. It should be understood that other embodiments may be used, or structural, logical or electrical changes may be made to the embodiments of the present application.
The present application provides a method for generating a panoramic surround view image of a vehicle, where the vehicle is composed of at least two parts articulated together, for example a semi-trailer. In some embodiments, the vehicle is composed of multiple parts articulated in pairs, such as a train or a metro, to which the method of the present application is equally applicable.
Taking a vehicle composed of two mutually articulated parts as an example, the structure of a vehicle to which the method is applied is briefly described below. Fig. 1 is a schematic structural diagram of a vehicle according to an embodiment of the present application. In this embodiment, the left side of the figure is assumed to be the forward direction of the vehicle. The vehicle shown in Fig. 1 includes a first part 101 and a second part 102, the two parts being connected by articulation. In the articulated state, the hinge points of the first part 101 and the second part 102 coincide. A camera 11, a camera 12 and a camera 13 are respectively arranged on the front, right and left sides of the first part 101. A camera 21, a camera 22 and a camera 23 are respectively arranged on the rear, right and left sides of the second part 102. In this embodiment, the cameras may be 180° wide-angle cameras or wide-angle cameras of other angles, and the arrangement of the cameras is merely exemplary. In some embodiments, the positions and number of the cameras may be set in other ways.
In some embodiments, the vehicle may further include a camera synchronization module (not shown) to synchronize data among the cameras.
For a vehicle including multiple mutually articulated parts, the hinge point has an irreplaceable, special status relative to other feature points, because no matter how the motion states of the two articulated parts of the vehicle change, in reality their respective hinge points should always coincide. Using such a point, whose position is relatively stable, as a reference point not only makes the calculation results more accurate and the generated image closer to the actual scene, but also reduces the extra computation caused by relative motion, improving overall efficiency. Therefore, first calculating the coordinates of the respective hinge points of the articulated parts and then applying the known coincidence relationship in the subsequent image stitching yields a panoramic surround view image of the vehicle that is more stable and closer to the actual situation.
The flow of the method of the present application is described in detail below. Fig. 2A is a schematic overall flowchart of a method for generating a panoramic surround view image of a vehicle according to an embodiment of the present application, and Fig. 2B is a schematic detailed flowchart of the method. In some embodiments, as shown in Figs. 2A and 2B, the method for generating a panoramic surround view image of a vehicle of the present application can be summarized into four parts: acquiring actual original images 21, acquiring actual independent surround view images 22, acquiring hinge point coordinates 23, and acquiring the panoramic surround view image 24.
The terms "actual original image" and "actual independent surround view image" here are used to distinguish them from the training original images and training independent surround view images used later in the hinge point calculation. "Actual" emphasizes images obtained while the vehicle is actually driving, whereas "training" emphasizes images obtained after the first part and the second part of the vehicle have been deliberately set at specific angles in order to obtain the hinge point coordinates.
In step 21, acquiring the actual original images includes:
Step 201: acquiring actual original images of the external environment of the first part and the second part of the vehicle that are articulated with each other. The first part and the second part here may be, for example, a tractor head and a carriage, or in other cases two carriages.
The actual original images are the images directly acquired by the cameras, and may include actual original images of the external environment of the first and second parts; of course, these actual original images may also include partial views of the vehicle itself. In some embodiments, wide-angle cameras may be arranged on the outer sides of the vehicle to acquire the actual original images. In some embodiments, the wide-angle cameras may be 180° wide-angle cameras or wide-angle cameras of other angles. In some embodiments, to obtain better image quality, the shooting angle of each actual original image may be enlarged as much as possible.
In step 22, acquiring the actual independent surround view images includes:
Step 202: processing the actual original images to obtain the respective actual independent surround view images of the first part and the second part.
Since the actual original image data of the first part and the second part obtained in step 201 are unprocessed and images captured by adjacent cameras have overlapping regions, the actual original images need to be transformed (the specific transformation is described in detail later), and the multiple images belonging to the same part (the first part or the second part) are then stitched in a fixed manner to obtain a complete actual independent surround view image of each part. The actual independent surround view image of the first or second part is a complete top view of the actual external environment of that part except for the articulated side.
Before continuing with the other operations for obtaining the panoramic surround view image, the specific method for obtaining the independent surround view image of each part is described in detail. Fig. 3 is a schematic flowchart of a method for fixed stitching of the actual original images of the articulated parts of a vehicle according to an embodiment of the present application, that is, a further elaboration of step 202 of the foregoing method. The method includes:
Step 301: performing distortion correction on the actual original images.
In some embodiments, the original images captured by the wide-angle cameras exhibit perspective distortion, which distorts the images so that the distance relationships of objects in the images cannot be reflected correctly. To eliminate this distortion, the original images captured by the wide-angle cameras need to undergo distortion correction. In some embodiments, the actual original images may be corrected using the camera parameters and distortion correction parameters obtained from wide-angle camera calibration, yielding a corrected image corresponding to each actual original image. The camera parameters and distortion correction parameters may be determined according to the internal structure of the wide-angle camera and the established distortion model.
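As an illustration of this correction step, a minimal sketch using OpenCV's fisheye model is given below; the intrinsic matrix K and distortion coefficients D are hypothetical placeholders standing in for the calibration outputs described above, not values disclosed in this application.

```python
import cv2
import numpy as np

# Hypothetical calibration outputs for one 180-degree wide-angle camera.
K = np.array([[400.0, 0.0, 640.0],
              [0.0, 400.0, 360.0],
              [0.0, 0.0, 1.0]])        # camera intrinsic matrix
D = np.array([-0.05, 0.01, 0.0, 0.0])  # fisheye distortion coefficients

def undistort(raw_img):
    """Correct the fisheye distortion of one raw camera image."""
    h, w = raw_img.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=0.0)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(raw_img, map1, map2, interpolation=cv2.INTER_LINEAR)
```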
Step 302: performing perspective transformation on the distortion-corrected images.
In practical applications, what the user needs to see is a top-view image of the vehicle's operating state, so the distortion-corrected images further need perspective transformation. In some embodiments, the images acquired by the different cameras are projected into the ground coordinate system to become actual bird's-eye views (which can be obtained by selecting specified feature points of a calibration object and applying a perspective transformation), and a mapping from each corrected image to its actual bird's-eye view is generated, yielding the actual bird's-eye view corresponding to each corrected image.
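For example, the projection onto the ground plane can be realised as a homography estimated from four marker correspondences; the pixel and ground coordinates below are hypothetical placeholders used only to illustrate the mapping from a corrected image to its bird's-eye view.

```python
import cv2
import numpy as np

# Four marker points picked in a corrected image (pixels) and their known
# positions on the ground plane (arbitrary ground units); all values here
# are hypothetical placeholders.
src_pts = np.float32([[420, 510], [860, 505], [900, 700], [380, 705]])
dst_pts = np.float32([[100, 100], [400, 100], [400, 300], [100, 300]])

H_ground = cv2.getPerspectiveTransform(src_pts, dst_pts)  # corrected -> bird's-eye

def to_birds_eye(corrected_img, out_size=(800, 800)):
    """Project a distortion-corrected image onto the ground coordinate system."""
    return cv2.warpPerspective(corrected_img, H_ground, out_size)
```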
Step 303: performing fixed stitching on the actual bird's-eye views.
For each of the mutually articulated parts of the vehicle, the actual bird's-eye views of that part in all directions can be stitched together. Owing to the characteristics of wide-angle cameras, the actual bird's-eye views captured by adjacent cameras of each part have partially overlapping regions, so the overlapping regions need to be corrected in order to stitch them into an actual fixed stitched image.
In some embodiments, fixed stitching may be achieved by manually selecting a number of marker points for matching. Of course, other known matching methods may also be used.
Step 304: cropping the actual fixed stitched image.
The preceding steps produce the actual fixed stitched image of each part of the vehicle, but such an image may still include unneeded portions. In some embodiments, for the actual fixed stitched image of each part, a region of interest can be cropped as required so that the image size fits the display range of the screen, finally yielding the respective actual independent surround view images of the first part and the second part.
The method for generating a panoramic surround view image of a vehicle shown in Figs. 2A and 2B is further described below. In step 23, acquiring the hinge point coordinates specifically includes:
Step 203: acquiring the hinge point coordinates.
For the vehicle, the hinge point coordinates may be calculated during initialization each time the vehicle is started, or the hinge points may be calibrated at the same time as the camera coordinates. During driving, the calculation can rely on the known hinge point coordinates, without repeatedly recalculating them. The method for calculating the hinge point coordinates is introduced separately below.
The so-called training independent surround view images are the n pairs of independent surround view images of the first and second parts obtained by deliberately setting n relative positions between the first part and the second part in order to calculate the hinge point coordinates of the two parts. Based on the matched feature points in these n pairs of training independent surround view images, the corresponding n training rotation angles can be calculated. Combined with the corresponding n training translation vectors, the respective hinge point coordinates of the first part and the second part of the vehicle can then be determined.
Fig. 4 is a schematic flowchart of a method for calculating the hinge point coordinates of the parts of a vehicle according to an embodiment of the present application. As mentioned above, the calculation of the hinge point coordinates is not part of the method shown in Fig. 2A; it is a hinge point coordinate calculation method that has already been completed before that method is executed.
As shown in Fig. 4, the method for calculating the hinge point coordinates of the parts of the vehicle may include:
Step 401: acquiring n pairs of training original images of the first part and the second part, and obtaining n pairs of training independent surround view images through fixed stitching. Each pair of images corresponds to one relative position of the first part and the second part, giving n positions in total, where n may be a positive integer greater than or equal to 2. The fixed stitching used to obtain the training independent surround view images is similar to the operation for obtaining the actual independent surround view images described above and is not repeated here.
Step 402: detecting and matching feature points for each of the n pairs of training independent surround view images of the first part and the second part.
In some embodiments, a feature point is a point in an image that has distinctive characteristics, effectively reflects the essential features of the image and can identify a target object in the image. A feature point of an image consists of two parts: a keypoint and a descriptor. The keypoint refers to the position, orientation and scale information of the feature point in the image. The descriptor is usually a vector that describes, in a designed manner, the information of the pixels around the keypoint. Usually, feature points with similar appearance have correspondingly similar descriptors. Therefore, during matching, as long as the descriptors of two feature points are close in the vector space, they can be considered to be the same feature point.
Specifically, for each pair of training independent surround view images, the keypoints of the training independent surround view images of the two articulated parts can be obtained; the descriptors of the feature points can be computed according to the keypoint positions; and the feature points of the surround view images of the two articulated parts of the vehicle can be matched according to the descriptors, yielding the matched feature point pairs of the surround view images of the two articulated parts. In one embodiment, a brute-force matching algorithm may be used, which compares the descriptors of the feature points of the training independent surround view images of the two articulated parts one by one in the vector space and selects the pair with the smaller distance as a matched point pair.
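A minimal sketch of this detection and brute-force matching step is shown below, using ORB features; the parameter values are illustrative assumptions rather than values specified in this application.

```python
import cv2

def match_features(img_a, img_b, max_pairs=500):
    """Detect ORB feature points in two surround view images and match their
    descriptors by brute force (Hamming distance, cross-checked)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    pts_a = [kp_a[m.queryIdx].pt for m in matches[:max_pairs]]
    pts_b = [kp_b[m.trainIdx].pt for m in matches[:max_pairs]]
    return pts_a, pts_b
```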
Step 403: calculating n training rotation-translation matrices between the feature points of the first part and the second part based on the matched feature point pairs, and correspondingly calculating n training rotation angles between the first part and the second part. Some examples of calculating training rotation angles based on matched feature point pairs are described below.
In some embodiments, the random sample consensus algorithm (RANSAC) or the least median of squares method (LMedS) may be used to select the matched feature point pairs of the training independent surround view images of the two articulated parts. Taking the random sample consensus algorithm as an example: several matched point pairs are drawn from the matched point pairs already obtained, the rotation-translation transformation matrix is calculated, and these point pairs are recorded as "inliers". Non-inlier matched point pairs are then examined, and if they are consistent with the matrix, they are added to the inliers. When the number of point pairs among the inliers exceeds a set threshold, the rotation-translation matrix can be determined from these data. Following this method, random sampling is performed k times (k is a positive integer greater than 0), the set with the largest number of inliers is selected, and mismatched point pairs such as non-inliers are eliminated. Only after the mismatched points are eliminated can the correct matched point pairs among the inliers be used to compute the training rotation-translation matrix corresponding to a specific position. From the n training rotation-translation matrices, the n training rotation angles θ_1 … θ_n between the first part and the second part are obtained.
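The sketch below illustrates one way such a RANSAC estimate could be obtained with OpenCV; it fits a similarity transform (rotation, translation and a scale that is expected to stay close to 1 for ground-plane views) rather than the exact procedure described above, so it should be read as an assumption-laden example.

```python
import cv2
import numpy as np

def training_rotation_angle(pts_a, pts_b):
    """Estimate a rotation-translation between matched point sets with RANSAC
    and return the rotation angle in degrees together with the inlier mask."""
    pts_a = np.asarray(pts_a, dtype=np.float32)
    pts_b = np.asarray(pts_b, dtype=np.float32)
    M, inliers = cv2.estimateAffinePartial2D(
        pts_a, pts_b, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    theta = np.degrees(np.arctan2(M[1, 0], M[0, 0]))  # rotation part of the 2x3 matrix
    return theta, inliers.ravel().astype(bool)
```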
In the method of the present application, the training rotation angles are obtained by calculation. Unlike the prior art, which obtains them by physical measurement, the method of the present application gives more accurate results, and the process of obtaining the training rotation angles is easier to carry out. At the same time, fewer sensors are used, so the cost is lower and the applicability is wider, and interference factors in the environment can be avoided.
Step 404: determining n training translation vectors at least based on the coordinates of the matched feature points between the n pairs of training independent surround view images of the first part and the second part, and the corresponding n training rotation angles between the first part and the second part.
According to one embodiment, let (a_x, a_y) be the hinge point coordinates of the first part 101 of the vehicle, (b_x, b_y) the hinge point coordinates of the second part 102, (x_0, y_0) and (x_1, y_1) the coordinates of a pair of mutually matched feature points in the training independent surround view images of the first part 101 and the second part respectively, and θ the training rotation angle between the first part and the second part. Then

  (x_1, y_1)^T = R(θ)·(x_0 - a_x, y_0 - a_y)^T + (b_x, b_y)^T,  where R(θ) is the two-dimensional rotation matrix by θ   (1)

Expanding equation (1) gives equation (2):

  x_1 = (x_0 - a_x)·cosθ - (y_0 - a_y)·sinθ + b_x
  y_1 = (x_0 - a_x)·sinθ + (y_0 - a_y)·cosθ + b_y   (2)

The training translation vector is the training translation parameter by which the feature points of the training independent surround view image of one articulated part are translated into the other image, for example the parameter by which a feature point of a matched pair is translated from the training independent surround view image of the first part 101 to that of the second part 102. Therefore, for a pair of matched points, if the feature point coordinates in the training independent surround view image of the first part 101 are taken as the origin, the coordinates of the corresponding matched point in the training independent surround view image of the second part 102 are numerically equal to the training translation vector between the two images. That is, when the feature point (x_0, y_0) of the first part's training surround view image is set to (0, 0), the coordinates (x_1, y_1) of its matched feature point in the second part's training surround view image are exactly the training translation vector (dx, dy) from the first part's training surround view image to the second part's, which can be expressed by equation (3):

  (dx, dy) = (x_1, y_1),  with (x_0, y_0) = (0, 0)   (3)
Step 405: calculating the hinge point coordinates of the first part and the second part of the vehicle from the feature point coordinates of the n pairs of training independent surround view images, the training rotation angles and the training translation vectors. This calculation is based on the premise that, when the vehicle changes its direction of motion, the points in the training independent surround view images of the first part and the second part of the vehicle each rotate about their respective hinge points.
For each training rotation angle θ there is a corresponding training translation vector. Substituting the n training rotation angles and n training translation vectors into equation (4) yields (a_x, a_y, b_x, b_y):

  (a_x, a_y, b_x, b_y) = argmin Σ_{i=1..n} [ (dx_i + a_x·cosθ_i - a_y·sinθ_i - b_x)² + (dy_i + a_x·sinθ_i + a_y·cosθ_i - b_y)² ]   (4)

Since in general multiple pairs of training independent surround view images of the first and second parts need to be captured for the test calculation, the value of n is much greater than 2, so an overdetermined system of equations such as (4) is formed. The least-squares method can be used to solve this overdetermined system, yielding the hinge point coordinates of the first part and the second part that are closest to the actual values, i.e. the (a_x, a_y, b_x, b_y) that minimizes the value of the expression.
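A sketch of this least-squares solution is given below. It assumes the relation in equations (1)-(3) above, i.e. dx = b_x - (a_x·cosθ - a_y·sinθ) and dy = b_y - (a_x·sinθ + a_y·cosθ), which makes the problem linear in (a_x, a_y, b_x, b_y); the function and variable names are illustrative.

```python
import numpy as np

def solve_hinge_points(thetas_deg, translations):
    """Least-squares estimate of the hinge points (a_x, a_y) and (b_x, b_y)
    from n training rotation angles and the corresponding training translation
    vectors (dx, dy), assuming the linear relation of equations (1)-(3)."""
    rows, rhs = [], []
    for theta_deg, (dx, dy) in zip(thetas_deg, translations):
        t = np.radians(theta_deg)
        rows.append([-np.cos(t),  np.sin(t), 1.0, 0.0]); rhs.append(dx)
        rows.append([-np.sin(t), -np.cos(t), 0.0, 1.0]); rhs.append(dy)
    sol, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    a_x, a_y, b_x, b_y = sol
    return (a_x, a_y), (b_x, b_y)
```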
Considering that if there are several outliers among the above n angles, they will have a considerable influence on the calculated hinge point coordinates, a further method can be used to eliminate the interference caused by outliers.
According to one embodiment, suppose that a pair of training rotation angles (θ_i, θ_j) and the corresponding training translation vectors are selected from the n training rotation angles. To make the result more robust, the selected training rotation angles should satisfy |θ_i - θ_j| > ξ (where ξ is a preset angle that can be set according to actual needs, for example chosen as large as possible to ensure the accuracy of the calculation; i and j are integers greater than 0 and less than or equal to n, and i is not equal to j), i.e. the angle difference between the two training rotation angles in a group is greater than the preset value. Therefore, for any group of training rotation angles (θ_i, θ_j) and training translation parameters (dx_i, dy_i, dx_j, dy_j), a candidate group of hinge point coordinates (a_x, a_y, b_x, b_y) is solved from equation (5):

  dx_i = b_x - a_x·cosθ_i + a_y·sinθ_i
  dy_i = b_y - a_x·sinθ_i - a_y·cosθ_i
  dx_j = b_x - a_x·cosθ_j + a_y·sinθ_j
  dy_j = b_y - a_x·sinθ_j - a_y·cosθ_j   (5)

Then m groups of training rotation angles (θ_i, θ_j) and m groups of training translation parameters (dx_i, dy_i, dx_j, dy_j) yield m candidate groups of hinge point coordinates:

  {(a_x^(k), a_y^(k), b_x^(k), b_y^(k)) | k = 1, …, m}   (6)

These m candidate groups of hinge point coordinates are then sorted; the sorted results follow a Gaussian distribution, and the median of the sorted results is taken as the hinge point coordinates.
In this embodiment, n may be an integer greater than or equal to 2, m may be an integer greater than or equal to 1, and m is less than n. Through the above method, the influence of training rotation angle outliers on the calculation results can be effectively reduced.
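Reusing the solve_hinge_points sketch above, the grouping-and-median variant could look as follows; the 10-degree gap threshold is an arbitrary illustrative choice for the preset angle ξ.

```python
import numpy as np
from itertools import combinations

def robust_hinge_points(thetas_deg, translations, min_angle_gap=10.0):
    """Pair up training poses whose angle difference exceeds a threshold,
    solve each pair exactly for candidate hinge coordinates, and take the
    per-coordinate median of all candidates."""
    candidates = []
    for i, j in combinations(range(len(thetas_deg)), 2):
        if abs(thetas_deg[i] - thetas_deg[j]) <= min_angle_gap:
            continue
        (a_x, a_y), (b_x, b_y) = solve_hinge_points(
            [thetas_deg[i], thetas_deg[j]],
            [translations[i], translations[j]])
        candidates.append([a_x, a_y, b_x, b_y])
    if not candidates:
        raise ValueError("no pair of training angles differs by more than the threshold")
    med = np.median(np.asarray(candidates), axis=0)
    return (med[0], med[1]), (med[2], med[3])
```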
With the above method, the final hinge point coordinates are obtained by matching and calculating the feature points of the training independent surround view images of the two articulated parts. Compared with traditional physical measurement techniques, the hinge point coordinates obtained by this method are more accurate. For articulated vehicles, since no physical measurement equipment needs to be installed, its applicability is wider. The method is simple and reliable to operate, and hinge point calibration can be achieved without other tools, saving labor and material costs.
The method for generating a panoramic surround view image of a vehicle shown in Figs. 2A and 2B is further described below. In step 24, acquiring the panoramic surround view image includes:
Step 204: determining matched feature point pairs in the overlapping region of the actual independent surround view images of the first part and the second part. Matching here means that there is a correspondence between two points, i.e. they represent the same point in the independent surround view images of the first and second parts. In practice, however, a matched point is not necessarily a successfully matched point as described later.
In some embodiments, the method for matching feature point pairs includes feature point detection, feature point matching and feature point screening. In some embodiments, this procedure is similar to steps 402 and 403 and is not repeated here, except that the image data operated on are images acquired during actual driving rather than training images.
In some embodiments, the feature point detection methods may include the orb, surf or sift algorithm, etc.
In some embodiments, the feature point matching algorithms may include a brute-force matching algorithm or a nearest-neighbor matching algorithm, etc.
In some embodiments, a feature point screening method is further included; the screening method may include the RANSAC or GMS algorithm, etc.
Step 205: making the hinge points of the actual independent surround view images of the first part and the second part coincide, and, assuming that the independent surround view image of the second part remains stationary, rotating the independent surround view image of the first part about the coincident hinge point or hypothetically rotating the matched feature points of the first part relative to those of the second part, and determining the number of successfully matched point pairs among the matched feature point pairs.
Of course, according to other embodiments, it may instead be assumed that the independent surround view image of the first part remains stationary while that of the second part rotates about the hinge point, or that the matched feature points of the second part rotate relative to those of the first part.
According to one embodiment, let x_i be the coordinates of a feature point in the actual independent surround view image of the first part of the vehicle, x'_i the coordinates of its matched feature point in the actual independent surround view image of the second part, and ε the distance between a matched feature point pair of the actual independent surround view images of the first and second parts, as expressed by equation (7):

  ε = ‖x'_i - H·x_i‖   (7)

where H(β, t) is the (two-dimensional) rotation-translation matrix between the matched feature point sets of the actual independent surround view images of the first and second parts, as expressed by equation (8). Let β be the hypothetical rotation angle between the first part and the second part of the vehicle, and t = (t_1, t_2) the translation vector of the hinge point of the first part's independent surround view image relative to the hinge point of the second part's independent surround view image.

  H(β, t) = [ cosβ  -sinβ  t_1 ;  sinβ  cosβ  t_2 ;  0  0  1 ]   (8)

According to one embodiment, when the distance ε of a matched feature point pair detected in the actual independent surround view images of the first part and the second part is smaller than a first threshold σ, the feature points are considered successfully matched. According to one embodiment, this first threshold can be set according to the needs of the user.
In some embodiments, this result can be expressed with an Iverson bracket, as shown in equation (9):

  [P] = 1 if P holds, otherwise 0   (9)

where P can be expressed by equation (10) as:

  ‖x'_i - H·x_i‖ < σ   (10)

That is, when the value on the left-hand side of equation (10) is smaller than the first threshold σ, [P] is 1, otherwise [P] is 0.
The number k of successfully matched feature point pairs can be calculated using equation (11):

  k = Σ_{i=0..L} [ ‖x'_i - H·x_i‖ < σ ]   (11)

where i ranges from 0 to L, L being the number of matched feature point pairs in the actual independent surround view images of the first part and the second part, and L is an integer greater than or equal to 1.
Step 206: taking the rotation angle corresponding to the maximum number of successfully matched feature point pairs as the candidate rotation angle of the first part relative to the second part.
According to one embodiment, equation (12) can be used to determine the candidate rotation angle β_1 corresponding to the maximum number of successfully matched feature point pairs:

  β_1 = argmax_β Σ_{i=0..L} [ ‖x'_i - H(β, t)·x_i‖ < σ ]   (12)
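A sketch of this counting-and-maximising step is given below. It follows the hinge-centred reading of H(β, t) in equations (7)-(8): the first part's matched points are rotated by β about its hinge point and the two hinge points are then made to coincide. The search range, step size and threshold σ are illustrative assumptions.

```python
import numpy as np

def count_matches(beta_deg, pts_first, pts_second, hinge_first, hinge_second, sigma=2.0):
    """Count matched pairs whose distance is below sigma after rotating the
    first part's points by beta about its hinge and aligning the hinge points."""
    b = np.radians(beta_deg)
    R = np.array([[np.cos(b), -np.sin(b)],
                  [np.sin(b),  np.cos(b)]])
    a = np.asarray(hinge_first, dtype=float)
    c = np.asarray(hinge_second, dtype=float)
    mapped = (np.asarray(pts_first, dtype=float) - a) @ R.T + c
    dist = np.linalg.norm(mapped - np.asarray(pts_second, dtype=float), axis=1)
    inlier = dist < sigma
    return int(inlier.sum()), inlier, dist

def candidate_angle(pts_first, pts_second, hinge_first, hinge_second,
                    search_deg=np.arange(-45.0, 45.0, 0.1)):
    """Sweep hypothetical rotation angles and return the angle that maximises
    the number of successfully matched pairs, together with that count."""
    counts = [count_matches(b, pts_first, pts_second, hinge_first, hinge_second)[0]
              for b in search_deg]
    best = int(np.argmax(counts))
    return float(search_deg[best]), counts[best]
```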
In this embodiment, the candidate rotation angle is unique; if it is not unique, the calculation can continue according to the following embodiment to determine the actual rotation angle.
Optionally, according to one embodiment, after step 206 the flow may jump to step 208: taking the candidate rotation angle as the actual rotation angle, and fusing the respective actual independent surround view images of the first part and the second part according to the hinge point coordinates and the actual rotation angle to obtain the panoramic surround view image of the vehicle.
With the method described in this embodiment, the actual rotation angle between the first part and the second part of the vehicle can be determined quickly, which saves computing resources and facilitates rapid synthesis of the panoramic surround view image of the vehicle during actual driving.
Optionally, in another embodiment, after step 206 the flow may jump to step 207.
Step 207: determining the coordinates of the successfully matched feature point pairs based on the candidate rotation angle, calculating the distance between the two points of each successfully matched feature point pair and summing these distances, and taking the rotation angle corresponding to the minimum sum as the actual rotation angle of the first part relative to the second part.
If only one candidate rotation angle is obtained in step 206, step 207 can still be used to fine-tune it. If step 207 were performed directly without first performing step 206, a wrong rotation angle might be obtained because the sum of distances between mismatched feature point pairs happens to be minimal. It is therefore important to perform the operation of step 206 first.
Equation (13) can be used to determine the candidate rotation angle β_2 corresponding to the minimum sum of distances between the feature point pairs:

  β_2 = argmin_β Σ_{x_j ∈ X_G} ‖x'_j - H(β, t)·x_j‖   (13)

where G is the number of successfully matched feature point pairs, X_G is the set of successfully matched feature points, G is an integer greater than or equal to 1, and G is less than or equal to L.
If the number of candidate rotation angles obtained in step 206 is not unique, one of them needs to be determined as the final actual rotation angle. Suppose there are F candidate rotation angles, where F is an integer greater than 1. These F rotation angles can be substituted into equation (8) to obtain F candidate rotation-translation matrices H, and the F rotation-translation matrices are substituted into equation (10) to calculate the coordinates of all successfully matched points x_j and x'_j, yielding the point set X_G.
Then equation (14) is used to find, over the multiple candidate rotation-translation matrices H_m, the angle β_2 corresponding to the minimum sum of distances between the successfully matched feature point pairs:

  β_2 = argmin_{H_m} Σ_{x_j ∈ X_G} ‖x'_j - H_m·x_j‖   (14)

After the candidate rotation angle β_2 corresponding to the minimum sum of distances between successfully matched feature point pairs is determined, it can be taken as the actual rotation angle of the first part relative to the second part.
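Reusing count_matches from the sketch above, the refinement of step 207 could be written as follows; the local search window and step size are illustrative assumptions, not values from this application.

```python
import numpy as np

def refine_angle(candidate_degs, pts_first, pts_second, hinge_first, hinge_second,
                 sigma=2.0, window=0.5, step=0.02):
    """Around each candidate rotation angle, pick the angle whose successfully
    matched pairs have the smallest summed distance (equations (13)-(14))."""
    best_angle, best_cost = None, np.inf
    for c in np.atleast_1d(candidate_degs):
        for b in np.arange(c - window, c + window, step):
            n, inlier, dist = count_matches(
                b, pts_first, pts_second, hinge_first, hinge_second, sigma)
            if n == 0:
                continue
            cost = dist[inlier].sum()
            if cost < best_cost:
                best_angle, best_cost = float(b), cost
    return best_angle
```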
After step 207, the flow may jump to step 209: fusing the respective actual independent surround view images of the first part and the second part according to the hinge point coordinates and the actual rotation angle β_2 to obtain the panoramic surround view image of the vehicle.
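A simplistic sketch of this final fusion is shown below: the first part's independent surround view is rotated about its hinge point by the actual rotation angle and shifted so that the two hinge points coincide, then pasted over the second part's view. It ignores blending, assumes three-channel images, and uses the same angle sign convention as the sketches above.

```python
import cv2
import numpy as np

def stitch_panorama(img_first, img_second, hinge_first, hinge_second, beta_deg):
    """Rotate/translate the first part's surround view onto the second part's
    view and overlay it (no blending) to form a panoramic surround view image."""
    a_x, a_y = hinge_first
    b_x, b_y = hinge_second
    M = cv2.getRotationMatrix2D((a_x, a_y), beta_deg, 1.0)  # rotate about part-1 hinge
    M[0, 2] += b_x - a_x                                    # then move its hinge onto
    M[1, 2] += b_y - a_y                                    # the part-2 hinge
    h, w = img_second.shape[:2]
    warped = cv2.warpAffine(img_first, M, (w, h))
    mask = warped.sum(axis=2) > 0       # non-black pixels of the warped first part
    out = img_second.copy()
    out[mask] = warped[mask]
    return out
```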
The method described in this embodiment further sets constraint conditions for a refined calculation, so the result of step 206 can be further fine-tuned. The successfully matched feature point pairs thus come closer to each other with a higher degree of coincidence, the obtained actual rotation angle is closer to reality, the certainty of the calculated actual steering angle is improved, and the final result is more robust and more precise.
In addition, the method in this embodiment also solves the problem that step 206 may yield multiple candidate rotation angles with no way to choose among them, improving the certainty of the final result.
The method of the present application is founded on the articulation relationship between the first part and the second part, so the precondition that their hinge point coordinates coincide is imposed, and under this precondition the angle with the largest number of successfully matched feature point pairs is selected as the actual rotation angle. Existing methods, which calculate the rotation-translation relationship between the first part and the second part only from the matched feature points in the surround view images, suffer from a creeping visual effect in which the displayed distance between the first part and the second part alternately grows and shrinks under different conditions. Because the present application fixes the hinge point coordinates of the first part and the second part, this problem is well avoided.
In the method of the present application, there is no need to install other sensors (such as angle sensors) to obtain vehicle steering data; the panoramic surround view function under a changing vehicle angle is achieved merely through the images captured by the cameras and the matching of the images' own features. This solves the difficulty that traditional surround view solutions cannot achieve seamless panoramic stitching when the steering angle between the first part and the second part changes in real time, and has the advantages of simple operation and low cost.
Fig. 5 is a schematic structural diagram of a device for generating a panoramic surround view image of a vehicle according to an embodiment of the present application. This device may be located on the vehicle or in a remote server of the vehicle.
As shown in the figure, the structure of the device for generating a panoramic surround view image of a vehicle may include:
an original image acquisition unit 501, configured to acquire original images of the external environment of the two articulated parts of the vehicle. In some embodiments, the original image acquisition unit 501 may include one or more wide-angle cameras arranged on the non-articulated sides of the two articulated parts of the vehicle. The original image acquisition unit 501 may be used to acquire actual original images as well as training original images;
an independent surround view image acquisition unit 502, coupled to the original image acquisition unit 501 and configured to stitch the original image data of the two parts into the independent surround view image of each part. The independent surround view image acquisition unit 502 may be used to obtain the actual independent surround view images of the first and second parts of the vehicle, as well as their training independent surround view images;
a hinge point calibration unit 503, coupled to the independent surround view image acquisition unit 502 and configured to calculate the hinge point coordinates from the training independent surround view image data. According to one embodiment, the hinge point calibration unit 503 may be configured to perform the operations of steps 402-405 of the embodiment shown in Fig. 4. The operation of this unit is to obtain the hinge point coordinates, which are obtained by calculation before the vehicle actually drives or each time it is started. Once obtained, the coordinate values are recorded for subsequent use.
Of course, according to other embodiments, the device may not include the hinge point calibration unit; the hinge point coordinates may be calculated by a remote server and sent to the device;
a panoramic surround view image acquisition unit 504, coupled to the hinge point calibration unit 503 and the independent surround view image acquisition unit 502, configured to determine the actual rotation angle of the first part relative to the second part and to synthesize the actual independent surround view images of the two parts into the panoramic surround view image of the vehicle according to the hinge point coordinates and the actual rotation angle.
According to another embodiment, the device for generating a panoramic surround view image of a vehicle may not include a hinge point calibration unit but instead receive the hinge point coordinates from outside.
Fig. 6 is a schematic structural diagram of an independent surround view image acquisition unit of a vehicle according to an embodiment of the present application. As shown in Fig. 6, the independent surround view image acquisition unit 502 may include:
an image distortion correction module 61, coupled to the original image acquisition unit 501 and configured to perform distortion correction on the original images to obtain the corrected images corresponding to the original images;
a perspective transformation module 62, coupled to the image distortion correction module 61 and configured to project the corrected images into the ground coordinate system to become the corresponding bird's-eye views;
a fixed stitching module 63, coupled to the perspective transformation module 62 and configured to perform fixed stitching on the bird's-eye views of the first part and of the second part respectively, obtaining the fixed stitched image of the first part and the fixed stitched image of the second part;
an image cropping module 64, coupled to the fixed stitching module 63 and configured to crop the fixed stitched image of the first part and the fixed stitched image of the second part to obtain the independent surround view images of the first part and the second part.
The specific method for obtaining the independent surround view images has been disclosed in the foregoing embodiment shown in Fig. 3 and is not repeated here.
Fig. 7 is a schematic structural diagram of a hinge point calibration unit of a vehicle according to an embodiment of the present application. As shown in Fig. 7, the hinge point calibration unit 503 may include:
a feature point detection and matching module 71, coupled to the independent surround view image acquisition unit 502 and configured to receive n pairs of training independent surround view images of the first and second parts and to detect and match the feature points in each pair of training independent surround view images of the first part and the second part;
a training rotation angle calculation module 72, coupled to the feature point detection and matching module 71 and configured to obtain, based on the matched feature point coordinates, the n training rotation-translation matrices between the feature points of the first part and the second part in each pair of training independent surround view images, and correspondingly obtain the n training rotation angles between the first part and the second part in each pair of independent surround view images;
a training translation vector acquisition module 73, coupled to the training rotation angle calculation module 72 and configured to determine the corresponding n training translation vectors of each pair of independent surround view images according to the feature point coordinates of each pair of training independent surround view images of the first part and the second part and the n training rotation angles;
a hinge point coordinate determination module 74, coupled to the training translation vector acquisition module 73 and the training rotation angle calculation module 72 and configured to determine the hinge point coordinates of the first part and the second part of the vehicle according to the matched feature point coordinates of the n pairs of training independent surround view images, the n training rotation angles and the corresponding n training translation vectors.
The specific method for obtaining the hinge point coordinates has been disclosed in the foregoing embodiment shown in Fig. 4 and is not repeated here.
In some embodiments, the hinge point coordinate determination module 74 may be further configured to: first divide the n pairs of training independent surround view images into groups of two pairs each, giving m groups in total; then obtain the corresponding hinge point coordinate calculation result from the feature point coordinates, training rotation angle and training translation vector of each group of training independent surround view images; and then, after the calculation results of the m groups of training independent surround view images are obtained, sort the hinge point coordinate calculation results of the groups and take the median of the sorted results as the hinge point coordinates. The specific method has been disclosed in the content relating to equations (5) and (6) above and is not repeated here.
In some embodiments, the angle difference between the training rotation angles of the first part and the second part within one group of training independent surround view images may be greater than a preset angle, to ensure the accuracy of the calculation.
The above device for generating a panoramic surround view image of a vehicle obtains the final hinge point coordinates by matching and calculating the feature points of the training independent surround view images of the two articulated parts. Compared with traditional physical measurement techniques, the hinge point coordinates obtained in this way are more accurate. For articulated vehicles, its applicability is wider. The approach is simple and reliable to operate, and hinge point calibration can be achieved without other tools such as angle sensors, saving labor and material costs.
Fig. 8 is a schematic structural diagram of the panoramic surround view image acquisition unit of a device for generating a panoramic surround view image of a vehicle according to an embodiment of the present application. As shown in Fig. 8, the panoramic surround view image acquisition unit 504 includes:
a feature point detection and matching module 81, coupled to the independent surround view image acquisition unit 502 and configured to receive the actual independent surround view images of the first and second parts and to detect and match the feature points therein. Depending on the embodiment, module 81 and module 71 may be the same module or different modules;
an actual rotation angle calculation module 82, coupled to the feature point detection and matching module 81 and the hinge point calibration unit 503, configured to receive the hinge point coordinates and make the hinge points of the independent surround view images of the first part and the second part coincide, to calculate the distance between the two points of each matched feature point pair and take the matched feature point pairs whose distance is smaller than the preset first threshold as successfully matched feature point pairs, and to take the rotation angle corresponding to the maximum number of successfully matched feature point pairs as the candidate rotation angle of the first part relative to the second part;
a panoramic surround view image generation module 83, coupled to the actual rotation angle calculation module 82 and the hinge point calibration unit 503, configured to take the candidate rotation angle as the actual rotation angle and, according to the hinge point coordinates and the actual rotation angle, fuse the respective actual independent surround view images of the first part and the second part to obtain the panoramic surround view image of the vehicle.
According to another embodiment, the actual rotation angle calculation module 82 may be further configured to determine the coordinates of the successfully matched feature point pairs based on the candidate rotation angle; to calculate the distance between the two points of each successfully matched feature point pair and sum these distances; and to take the rotation angle corresponding to the minimum sum as the actual rotation angle of the first part relative to the second part.
The panoramic surround view image generation module 83, coupled to the actual rotation angle calculation module 82 and the hinge point calibration unit 503, is configured to rotate, translate and stitch the respective actual independent surround view images of the first part and the second part according to the hinge point coordinates and the determined actual rotation angle, obtaining the panoramic surround view image of the vehicle.
Fig. 9 is a schematic structural diagram of an intelligent vehicle according to an embodiment of the present application. The intelligent vehicle includes a processor 901 and a memory 902 for storing a computer program that can be run on the processor 901, where, when running the computer program, the processor 901 executes all or part of the methods provided in any embodiment of the present application. References to the processor 901 and the memory 902 do not imply that there is only one of each; there may be one or more of each. The intelligent vehicle may further include an internal memory 903, a network interface 904, and a system bus 905 connecting the internal memory 903, the network interface 904, the processor 901 and the memory 902. The memory stores the operating system and the data processing apparatus provided in the embodiments of the present application. The processor 901 is used to support the operation of the entire intelligent vehicle. The internal memory 903 may provide an environment for running the computer program stored in the memory 902. The network interface 904 may be used for network communication with external server devices, terminal devices and the like, to receive or send data, for example to obtain driving control instructions input by the user. The intelligent vehicle may also include a GPS unit 906 configured to obtain the location information of the vehicle, and a sensor unit 907 that may include wide-angle cameras configured to acquire actual or training original images.
An embodiment of the present application further provides a computer storage medium, for example a memory storing a computer program, where the computer program can be executed by a processor to complete the method steps of camera posture information detection provided in any embodiment of the present application. The computer storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc or CD-ROM; it may also be any of various devices including one or any combination of the foregoing memories.
According to other embodiments, the methods involved in the present application may also be executed in whole or in part by a remote server of the intelligent driving device.
The above embodiments merely express several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be understood as limiting the scope of the patent application. It should be pointed out that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the scope of protection of this patent application shall be subject to the appended claims.

Claims (12)

  1. A method for generating a panoramic surround view image of a vehicle, comprising:
    acquiring actual original images of the external environment of a first part and a second part of the vehicle that are articulated with each other;
    processing the actual original images to obtain respective actual independent surround view images of the first part and the second part;
    acquiring the hinge point coordinates of the first and second parts;
    determining matched feature point pairs in the actual independent surround view images of the first part and the second part;
    making the respective hinge points in the actual independent surround view images of the first part and the second part coincide with each other, hypothetically rotating the independent surround view image of the first part relative to the independent surround view image of the second part or hypothetically rotating the matched feature points of the first part relative to the matched feature points of the second part, correspondingly calculating the distance between the two points of each matched feature point pair, and taking the matched feature point pairs whose distance is smaller than a preset first threshold as successfully matched feature point pairs; and
    determining the actual rotation angle of the first part relative to the second part at least based on the number of successfully matched feature point pairs;
    fusing the respective actual independent surround view images of the first part and the second part according to the hinge point coordinates and the actual rotation angle to obtain the panoramic surround view image of the vehicle.
  2. The method according to claim 1, wherein determining the actual rotation angle of the first part relative to the second part at least based on the number of successfully matched feature point pairs comprises:
    taking the angle corresponding to the maximum number of successfully matched feature point pairs as the actual rotation angle between the first part and the second part.
  3. The method according to claim 1, wherein determining the actual rotation angle of the first part relative to the second part at least based on the number of successfully matched feature point pairs comprises:
    taking the angle corresponding to the maximum number of successfully matched feature point pairs as a candidate rotation angle between the first part and the second part;
    determining the coordinates of the successfully matched feature point pairs based on the candidate rotation angle;
    calculating the distance between the two points of each successfully matched feature point pair and summing the distances;
    taking the rotation angle corresponding to the minimum sum as the actual rotation angle of the first part relative to the second part.
  4. The method according to claim 3, wherein determining the coordinates of the successfully matched feature point pairs based on the candidate rotation angle comprises obtaining a candidate rotation-translation matrix based on the hinge point coordinates of the first part and the second part and the candidate rotation angle; and
    wherein calculating the distance between the two points of a successfully matched feature point pair comprises calculating the distance between the two points of the successfully matched feature point pair based on the coordinates of the matched feature point pair and the candidate rotation-translation matrix.
  5. The method according to claim 1, wherein processing the actual original images to obtain the respective actual independent surround view images of the first part and the second part comprises:
    performing distortion correction on the actual original images of the external environment of the first part and the second part;
    projecting the corrected images into a ground coordinate system to generate bird's-eye views of the first and second parts;
    detecting and matching internal feature points in the overlapping regions of the respective bird's-eye views of the first part and the second part, and then performing fixed stitching to obtain respective fixed stitched images of the first part and the second part;
    cropping the respective fixed stitched images of the first part and the second part to obtain the respective actual independent surround view images of the first part and the second part.
  6. The method according to claim 1, wherein determining matched feature point pairs in the overlapping region of the actual independent surround view images of the first part and the second part comprises:
    feature point detection: detecting natural feature points in the overlapping region of the actual independent surround view images of the first part and the second part and generating descriptors;
    feature point matching: generating feature point matching pairs through a matching algorithm at least based on the descriptors, wherein the matching algorithm comprises the orb, surf or sift algorithm; and
    feature point screening: screening out mismatched point pairs through a screening algorithm, wherein the screening algorithm comprises the RANSAC or GMS algorithm.
  7. The method according to claim 1, further comprising a method for calculating the hinge point coordinates of the first part and the second part, comprising:
    acquiring multiple pairs of training independent surround view images of the first part and the second part;
    detecting and matching feature points for each pair of the training independent surround view images;
    calculating, based on the matched feature point pairs in each pair of training independent surround view images, a corresponding plurality of training rotation-translation matrices, and further calculating a corresponding plurality of training rotation angles between the first part and the second part;
    determining a corresponding plurality of training translation vectors between the first part and the second part at least based on the coordinates of the matched feature points in the multiple pairs of training independent surround view images and the plurality of training rotation angles; and
    calculating the hinge point coordinates of the first part and the second part according to the feature point coordinates of the multiple pairs of independent surround view images, the plurality of training rotation angles and the plurality of training translation vectors.
  8. The method according to claim 7, wherein the method for calculating the hinge point coordinates of the first part and the second part further comprises:
    forming groups of at least two of the plurality of training rotation angles, and calculating, based on the training translation vectors, candidate hinge point coordinates corresponding to each group; and
    sorting all candidate hinge point coordinates and taking the median of the sorted results as the hinge point coordinates of the first part and the second part;
    wherein the difference between the at least two training rotation angles in each group is greater than a preset angle threshold.
  9. A device for generating a panoramic surround view image of a vehicle, the device comprising:
    an original image acquisition unit, configured to acquire actual or training original images of the external environment of a first part and a second part of the vehicle that are articulated with each other;
    an independent surround view image acquisition unit, coupled to the original image acquisition unit and configured to stitch the actual or training original images of the first and second parts into respective actual or training independent surround view images; and
    a panoramic surround view image acquisition unit, coupled to the hinge point calibration unit and the independent surround view image acquisition unit, comprising
    a feature point detection and matching module, coupled to the independent surround view image acquisition unit and configured to receive the actual independent surround view images of the first and second parts and to detect and match feature points therein;
    an actual rotation angle calculation module, coupled to the feature point detection and matching module and configured to acquire the hinge point coordinates of the first part and the second part, make the hinge points of the independent surround view images of the first part and the second part coincide, hypothetically rotate the independent surround view image of the first part relative to the independent surround view image of the second part or hypothetically rotate the matched feature points of the first part relative to the matched feature points of the second part, calculate the distance between the two points of each matched feature point pair, take the matched feature point pairs whose distance is smaller than a preset first threshold as successfully matched feature point pairs, and determine the actual rotation angle of the first part relative to the second part at least based on the number of successfully matched feature point pairs;
    a panoramic surround view image generation module, coupled to the actual rotation angle calculation module and configured to fuse the respective actual independent surround view images of the first part and the second part according to the hinge point coordinates and the actual rotation angle to obtain the panoramic surround view image of the vehicle.
  10. The device according to claim 9, wherein the actual rotation angle calculation module being configured to determine the actual rotation angle of the first part relative to the second part at least based on the number of successfully matched feature point pairs comprises taking the angle corresponding to the maximum number of successfully matched feature point pairs as the actual rotation angle between the first part and the second part.
  11. The device according to claim 9, wherein the actual rotation angle calculation module being configured to determine the actual rotation angle of the first part relative to the second part at least based on the number of successfully matched feature point pairs comprises taking the angle corresponding to the maximum number of successfully matched feature point pairs as a candidate rotation angle between the first part and the second part;
    determining the coordinates of the successfully matched feature point pairs based on the candidate rotation angle;
    calculating the distance between the two points of each successfully matched feature point pair and summing the distances;
    taking the rotation angle corresponding to the minimum sum as the actual rotation angle of the first part relative to the second part.
  12. The device according to any one of claims 9-11, further comprising a hinge point calibration unit, coupled to the independent surround view image acquisition unit, comprising
    a feature point detection and matching module, coupled to the independent surround view image acquisition unit and configured to receive multiple pairs of training independent surround view images of the first and second parts and to detect and match the feature points in each pair of training independent surround view images of the first part and the second part;
    a training rotation angle calculation module, coupled to the feature point detection and matching module and configured to obtain, based on the matched feature point coordinates, a plurality of training rotation-translation matrices between the feature points of the first part and the second part in each pair of training independent surround view images, and correspondingly obtain a plurality of training rotation angles between the first part and the second part in each pair of independent surround view images;
    a training translation vector acquisition module, coupled to the training rotation angle calculation module and configured to determine a corresponding plurality of training translation vectors for each pair of training independent surround view images according to the coordinates of the matched feature points of each pair of training independent surround view images and the plurality of training rotation angles;
    a hinge point coordinate determination module, coupled to the translation vector acquisition module and the training rotation angle calculation module and configured to determine the hinge point coordinates of the first part and the second part of the vehicle according to the matched feature point coordinates of the multiple pairs of training independent surround view images, the plurality of training rotation angles and the corresponding plurality of training translation vectors.
  13. An intelligent vehicle, comprising
    a first part and a second part articulated with each other;
    a processor, and a memory coupled to the processor; and
    a sensing unit configured to capture actual or training original images of the first part and the second part;
    wherein the processor is configured to execute the method of any one of claims 1-8.
PCT/CN2020/136729 2019-12-16 2020-12-16 一种交通工具全景环视图像生成的方法和装置 WO2021121251A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2022535119A JP7335446B2 (ja) 2019-12-16 2020-12-16 交通手段の全景ルックアラウンド画像を生成する方法と装置
US17/786,312 US11843865B2 (en) 2019-12-16 2020-12-16 Method and device for generating vehicle panoramic surround view image
EP20904168.0A EP4060980A4 (en) 2019-12-16 2020-12-16 METHOD AND DEVICE FOR GENERATION OF A VEHICLE PERIPHERAL PANORAMIC VIEW IMAGE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911289818.9 2019-12-16
CN201911289818.9A CN110719411B (zh) 2019-12-16 2019-12-16 车辆的全景环视图像生成方法及相关设备

Publications (1)

Publication Number Publication Date
WO2021121251A1 true WO2021121251A1 (zh) 2021-06-24

Family

ID=69216602

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/136729 WO2021121251A1 (zh) 2019-12-16 2020-12-16 一种交通工具全景环视图像生成的方法和装置

Country Status (5)

Country Link
US (1) US11843865B2 (zh)
EP (1) EP4060980A4 (zh)
JP (1) JP7335446B2 (zh)
CN (1) CN110719411B (zh)
WO (1) WO2021121251A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751598B (zh) * 2019-10-17 2023-09-26 长沙智能驾驶研究院有限公司 车辆铰接点坐标标定方法、装置、计算机设备和存储介质
CN110719411B (zh) 2019-12-16 2020-04-03 长沙智能驾驶研究院有限公司 车辆的全景环视图像生成方法及相关设备
CN116012805B (zh) * 2023-03-24 2023-08-29 深圳佑驾创新科技有限公司 目标感知方法、装置、计算机设备、存储介质
CN116109852B (zh) * 2023-04-13 2023-06-20 安徽大学 一种快速及高精度的图像特征匹配错误消除方法
CN116563356A (zh) * 2023-05-12 2023-08-08 北京长木谷医疗科技股份有限公司 全局3d配准方法、装置及电子设备

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629372A (zh) * 2012-02-22 2012-08-08 北京工业大学 一种用于辅助车辆驾驶的360度全景鸟瞰图生成方法
US20150329048A1 (en) * 2014-05-16 2015-11-19 GM Global Technology Operations LLC Surround-view camera system (vpm) online calibration
US20160191795A1 (en) * 2014-12-30 2016-06-30 Alpine Electronics, Inc. Method and system for presenting panoramic surround view in vehicle
CN106799993A (zh) * 2017-01-09 2017-06-06 智车优行科技(北京)有限公司 街景采集方法和系统、车辆
CN107424120A (zh) * 2017-04-12 2017-12-01 湖南源信光电科技股份有限公司 一种全景环视系统中的图像拼接方法
CN108263283A (zh) * 2018-01-25 2018-07-10 长沙立中汽车设计开发股份有限公司 多编组变角度车辆全景环视系统标定及拼接方法
CN109883433A (zh) * 2019-03-21 2019-06-14 中国科学技术大学 基于360度全景视图的结构化环境中车辆定位方法
CN110719411A (zh) * 2019-12-16 2020-01-21 长沙智能驾驶研究院有限公司 车辆的全景环视图像生成方法及相关设备

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009060499A (ja) * 2007-09-03 2009-03-19 Sanyo Electric Co Ltd 運転支援システム及び連結車両
JP5178454B2 (ja) 2008-10-28 2013-04-10 パナソニック株式会社 車両周囲監視装置及び車両周囲監視方法
CN102045546B (zh) * 2010-12-15 2013-07-31 广州致远电子股份有限公司 一种全景泊车辅助系统
CN103879352A (zh) * 2012-12-22 2014-06-25 鸿富锦精密工业(深圳)有限公司 汽车泊车辅助系统及方法
CN103366339B (zh) * 2013-06-25 2017-11-28 厦门龙谛信息系统有限公司 车载多广角摄像头图像合成处理装置及方法
CN103617606B (zh) * 2013-11-26 2016-09-14 中科院微电子研究所昆山分所 用于辅助驾驶的车辆多视角全景生成方法
WO2015081136A1 (en) * 2013-11-30 2015-06-04 Saudi Arabian Oil Company System and method for calculating the orientation of a device
CN103763517B (zh) * 2014-03-03 2017-02-15 惠州华阳通用电子有限公司 一种车载环视显示方法及系统
JP2015179426A (ja) 2014-03-19 2015-10-08 富士通株式会社 情報処理装置、パラメータの決定方法、及びプログラム
US20150286878A1 (en) * 2014-04-08 2015-10-08 Bendix Commercial Vehicle Systems Llc Generating an Image of the Surroundings of an Articulated Vehicle
CN105451000B (zh) * 2015-12-27 2019-02-15 高田汽车电子(上海)有限公司 一种基于单目后视摄像头的车载全景环视系统及方法
US9946264B2 (en) * 2016-03-22 2018-04-17 Sharp Laboratories Of America, Inc. Autonomous navigation using visual odometry
US10259390B2 (en) * 2016-05-27 2019-04-16 GM Global Technology Operations LLC Systems and methods for towing vehicle and trailer with surround view imaging devices
CN107154022B (zh) * 2017-05-10 2019-08-27 北京理工大学 一种适用于拖车的动态全景拼接方法
CN109429039B (zh) * 2017-09-05 2021-03-23 中车株洲电力机车研究所有限公司 一种多编组铰接车辆周界视频全景显示系统及方法
JP7108855B2 (ja) 2018-02-27 2022-07-29 パナソニックIpマネジメント株式会社 画像合成装置、及び、制御方法
CN110363085B (zh) * 2019-06-10 2021-11-09 浙江零跑科技股份有限公司 一种基于铰接角补偿的重型铰接车环视实现方法
CN110751598B (zh) * 2019-10-17 2023-09-26 长沙智能驾驶研究院有限公司 车辆铰接点坐标标定方法、装置、计算机设备和存储介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629372A (zh) * 2012-02-22 2012-08-08 北京工业大学 一种用于辅助车辆驾驶的360度全景鸟瞰图生成方法
US20150329048A1 (en) * 2014-05-16 2015-11-19 GM Global Technology Operations LLC Surround-view camera system (vpm) online calibration
US20160191795A1 (en) * 2014-12-30 2016-06-30 Alpine Electronics, Inc. Method and system for presenting panoramic surround view in vehicle
CN106799993A (zh) * 2017-01-09 2017-06-06 智车优行科技(北京)有限公司 街景采集方法和系统、车辆
CN107424120A (zh) * 2017-04-12 2017-12-01 湖南源信光电科技股份有限公司 一种全景环视系统中的图像拼接方法
CN108263283A (zh) * 2018-01-25 2018-07-10 长沙立中汽车设计开发股份有限公司 多编组变角度车辆全景环视系统标定及拼接方法
CN109883433A (zh) * 2019-03-21 2019-06-14 中国科学技术大学 基于360度全景视图的结构化环境中车辆定位方法
CN110719411A (zh) * 2019-12-16 2020-01-21 长沙智能驾驶研究院有限公司 车辆的全景环视图像生成方法及相关设备

Also Published As

Publication number Publication date
US20230023046A1 (en) 2023-01-26
CN110719411B (zh) 2020-04-03
CN110719411A (zh) 2020-01-21
US11843865B2 (en) 2023-12-12
EP4060980A4 (en) 2023-08-02
EP4060980A1 (en) 2022-09-21
JP2023505379A (ja) 2023-02-08
JP7335446B2 (ja) 2023-08-29

Similar Documents

Publication Publication Date Title
WO2021121251A1 (zh) 一种交通工具全景环视图像生成的方法和装置
CN108648240B (zh) 基于点云特征地图配准的无重叠视场相机姿态标定方法
CN111862296B (zh) 三维重建方法及装置、系统、模型训练方法、存储介质
CN105245841B (zh) 一种基于cuda的全景视频监控系统
US6870563B1 (en) Self-calibration for a catadioptric camera
CN110033411B (zh) 基于无人机的公路施工现场全景图像高效拼接方法
US20210274358A1 (en) Method, apparatus and computer program for performing three dimensional radio model construction
CN111553939B (zh) 一种多目摄像机的图像配准算法
CN106447601B (zh) 一种基于投影-相似变换的无人机遥感影像拼接方法
KR20210094476A (ko) 위치결정 요소 검출 방법, 장치, 기기 및 매체
CN113409391B (zh) 视觉定位方法及相关装置、设备和存储介质
CN111723801B (zh) 鱼眼相机图片中目标检测矫正的方法与系统
CN111127524A (zh) 一种轨迹跟踪与三维重建方法、系统及装置
CN104966063A (zh) 基于gpu与cpu协同计算的矿井多摄像机视频融合方法
JP2007323615A (ja) 画像処理装置及びその処理方法
CN110717936B (zh) 一种基于相机姿态估计的图像拼接方法
CN109785373B (zh) 一种基于散斑的六自由度位姿估计系统及方法
CN113393505B (zh) 图像配准方法、视觉定位方法及相关装置、设备
CN111768332A (zh) 一种车载环视实时3d全景图像的拼接方法及图形采集装置
WO2016208404A1 (ja) 情報処理装置および方法、並びにプログラム
WO2021073634A1 (zh) 交通工具铰接点标定方法、及相应的标定装置、计算机设备和存储介质
CN111829522B (zh) 即时定位与地图构建方法、计算机设备以及装置
CN106709942B (zh) 一种基于特征方位角的全景图像误匹配消除方法
JP2005275789A (ja) 三次元構造抽出方法
KR102225321B1 (ko) 복수 영상 센서로부터 취득한 영상 정보와 위치 정보 간 연계를 통한 도로 공간 정보 구축을 위한 시스템 및 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20904168

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022535119

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020904168

Country of ref document: EP

Effective date: 20220616

NENP Non-entry into the national phase

Ref country code: DE