WO2019233286A1 - Visual positioning method, apparatus, electronic device, and system - Google Patents

Visual positioning method, apparatus, electronic device, and system

Info

Publication number
WO2019233286A1
WO2019233286A1 (PCT/CN2019/088207)
Authority
WO
WIPO (PCT)
Prior art keywords
homography matrix
camera
information
calibration
lane line
Application number
PCT/CN2019/088207
Other languages
English (en)
French (fr)
Inventor
戴兴
王哲
石建萍
Original Assignee
北京市商汤科技开发有限公司
Application filed by 北京市商汤科技开发有限公司
Priority to SG11201913066WA (SG)
Priority to JP2019572133A (JP6844043B2)
Priority to US16/626,005 (US11069088B2)
Priority to EP19814687.0A (EP3627109B1)
Publication of WO2019233286A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/10 Path keeping
    • B60W30/12 Lane keeping
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3658 Lane guidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403 Image sensing, e.g. optical camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/167 Driving aids for lane monitoring, lane changing, e.g. blind spot detection

Definitions

  • the present application relates to computer vision technology, and in particular, to a vision positioning method, device, electronic device, and system.
  • An unmanned driving system or an Advanced Driver Assistant System (ADAS) is a system applied in the field of intelligent driving.
  • Target positioning is the core function of the system.
  • The monocular visual positioning method uses only a common monocular camera and has the advantages of a simple algorithm, high efficiency, and a small amount of calculation, so it has become a research hotspot.
  • Monocular visual positioning is mainly performed based on camera parameters, which are related to factors such as the camera's pose. Usually, the camera parameters are calibrated in advance, and positioning is then performed based on the calibrated camera parameters.
  • the embodiments of the present application provide a visual positioning scheme.
  • In a first aspect, an embodiment of the present application provides a visual positioning method, including: performing lane line detection on the road surface on which a vehicle travels based on a video stream of the road surface collected by a camera installed on the vehicle; determining first reference point information of a current viewing angle of the camera according to a lane line detection result; determining a third homography matrix according to the first reference point information and second reference point information, where the second reference point information is reference point information of a previous viewing angle of the camera, the second reference point corresponds to the position of the first reference point, and the third homography matrix is used to represent the mapping relationship between the coordinates of the camera in the current viewing angle and the coordinates of the camera in the previous viewing angle; determining a first homography matrix according to the third homography matrix and a preset homography matrix, where the preset homography matrix is the mapping relationship between the coordinates of the camera in the previous viewing angle and world coordinates; and performing positioning according to the first homography matrix.
  • an embodiment of the present application provides a visual positioning device, including:
  • a detection module configured to perform lane line detection on the road surface of the vehicle based on a video stream of the road surface of the vehicle collected by a camera installed on the vehicle;
  • a first determining module configured to determine first reference point information of a current perspective of the camera according to a lane line detection result
  • a second determining module configured to determine a third homography matrix according to the first reference point information and the second reference point information, where the second reference point information is reference point information of a previous perspective of the camera, and The second reference point corresponds to the position of the first reference point, and the third homography matrix is used to represent a mapping relationship between the coordinates of the camera in the current perspective and the coordinates of the camera in the previous perspective;
  • a third determining module configured to determine a first homography matrix according to the third homography matrix and a preset homography matrix, wherein the preset homography matrix is the coordinates of the camera and the world coordinates in the previous perspective Mapping relationship
  • a positioning module configured to perform positioning according to the first homography matrix.
  • a third aspect of the present application provides an electronic device, including:
  • Memory for storing program instructions
  • a processor configured to call and execute program instructions in the memory, and execute the method steps described in the first aspect.
  • a fourth aspect of the present application provides a readable storage medium, wherein a computer program is stored in the readable storage medium, and the computer program is configured to execute the method steps described in the first aspect.
  • a fifth aspect of the present application provides a visual positioning system, which is applied to a vehicle.
  • the system includes a camera installed on the vehicle and the visual positioning device according to the second aspect, which is communicatively connected with the camera.
  • a sixth aspect of the present application provides a computer program that causes a computer to execute the method described in the first aspect.
  • The visual positioning method, device, electronic device, and system provided in this application generate, based on the first reference point information of the current viewing angle and the second reference point information of the previous viewing angle, a first homography matrix from the camera coordinates in the current viewing angle to world coordinates that reflects the real-time pose of the camera, and then perform visual positioning based on the first homography matrix. As a result, after the camera pose changes, visual positioning can be performed normally without the user manually measuring parameters, which reduces operational complexity and greatly improves the user experience.
  • FIG. 1 is a schematic diagram of a current perspective and a previous perspective according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of the present application
  • FIG. 3 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of the present application.
  • FIG. 4 is a geometric model of longitudinal calibration according to an embodiment of the present application
  • FIG. 5 is a geometric model of horizontal calibration according to an embodiment of the present application
  • FIG. 6 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of the present application.
  • FIG. 9 is a structural diagram of a first module of an embodiment of a visual positioning device according to an embodiment of the present application.
  • FIG. 10 is a structural diagram of a second module of an embodiment of a visual positioning device according to an embodiment of the present application.
  • FIG. 11 is a structural diagram of a third module of an embodiment of a visual positioning device according to an embodiment of the present application.
  • FIG. 12 is a structural diagram of a fourth module of an embodiment of a visual positioning device according to an embodiment of the present application.
  • FIG. 13 is a structural diagram of a fifth module of an embodiment of a visual positioning device according to an embodiment of the present application
  • FIG. 14 is a physical block diagram of an electronic device according to an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a visual positioning system according to an embodiment of the present application.
  • the camera described in the embodiment of the present application is a vehicle camera used for visual positioning, and the vehicle camera can capture road surface information.
  • In the related art, when the pose of the camera changes, for example, when the camera is reinstalled, installed on another vehicle, or the road is bumpy, the user usually needs to measure parameters manually.
  • The parameters to be measured include extrinsic camera parameters such as the camera's pitch angle, so the user needs to perform a variety of complex operations to measure the parameter information, resulting in a poor user experience.
  • the embodiment of the present application proposes a visual positioning method. Based on the current perspective and the reference point on the previous perspective, a first homography matrix of the camera coordinates to the world coordinates in the current perspective that can reflect the real-time pose of the camera is generated.
  • the first homography matrix is used for visual positioning, so that after the camera pose changes, visual positioning can be performed normally without manual measurement of parameters by the user, thereby reducing the complexity of the operation and greatly improving the user experience.
  • the current perspective is the perspective of the camera at the current moment
  • the previous perspective is the perspective of a moment before the current moment.
  • the current perspective and the previous perspective may be the same or different perspectives.
  • For example, as shown in FIG. 1, the angle of view A is the previous angle of view; after the pose of the camera changes, the view becomes the current angle of view, that is, the angle of view B.
  • FIG. 2 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of the present application.
  • The method can be applied to any device that needs to perform visual positioning; that is, the execution subject of the method can be a device that needs to perform visual positioning.
  • For example, the execution subject may be, but is not limited to, an electronic device including a processor and a memory (the processor calls corresponding instructions stored in the memory to execute the method steps in the embodiments of the present application), or an electronic device including a visual positioning device (the visual positioning device includes at least a detection module, a first determination module, a second determination module, and a positioning module, and can execute the method steps in the embodiments of the present application), or a vehicle or robot installed with an ADAS, or an unmanned vehicle or robot.
  • The description of the method execution subject also applies to other method embodiments of the present application and is not repeated here. As shown in FIG. 2, the method includes:
  • S201 Perform lane line detection on the road surface of the vehicle based on a video stream of the road surface of the vehicle collected by a camera installed on the vehicle.
  • S202 Determine first reference point information of the current viewing angle of the camera according to a lane line detection result.
  • Specifically, a camera installed on the vehicle is turned on in advance, and the camera collects a video stream of the road surface traveled by the vehicle in real time; a device for visual positioning performs lane line detection according to the video stream collected by the camera to obtain a lane line detection result.
  • the lane line information may be determined according to a lane line detection result.
  • The above lane line information may be information of the two lane lines on the left and right sides of the vehicle, or information of lines parallel to the two lane lines.
  • the two lane lines may be either straight lane lines or curved lane lines, which are not limited in the embodiment of the present application.
  • When the lane line is a curved lane line, the corresponding straight lane line information or straight lane line parallel line information can be obtained by statistically processing the curved lane line.
  • the lane line may be a solid line or a dashed line, etc.
  • the embodiment of the present application does not limit its line type; the color of the lane line may be white, yellow, or black.
  • the embodiment of the present application does not limit its color.
  • The above lane line information may be represented by a lane line parallel line function, and the acquisition process of the lane line information will be described in the following embodiments; a minimal detection-loop sketch is also given below.
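  • For illustration only, the following is a minimal Python/OpenCV detection-loop sketch. The video path and the Canny/Hough parameters are assumptions, and the classical Canny-plus-Hough detector is only a stand-in for the deep-learning segmentation detector described in the embodiments below:

```python
import cv2
import numpy as np

def detect_lane_lines(frame):
    # Classical stand-in detector; the embodiments below use a
    # deep-learning segmentation network instead.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=20)

cap = cv2.VideoCapture("road.mp4")  # hypothetical road-surface video stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    lines = detect_lane_lines(frame)  # lane line detection result per frame
cap.release()
```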
  • S203 Determine a third homography matrix according to the first reference point information and second reference point information, where the second reference point information is reference point information of a previous perspective of the camera, the second reference point corresponds to the position of the first reference point, and the third homography matrix is used to represent the mapping relationship between the coordinates of the camera in the current perspective and the coordinates of the camera in the previous perspective.
  • the above-mentioned second reference point is a known reference point, that is, a reference point in a previous perspective has been selected and recorded before executing this embodiment.
  • Both the first reference point and the second reference point are coordinate points in world coordinates, and they are longitudinally equidistant paired position coordinates on the lane line parallel lines.
  • the position of the second reference point corresponds to the position of the first reference point.
  • The first reference point and the second reference point may be longitudinally equidistant paired points on the lane line parallel lines.
  • Taking FIG. 1 as an example, the perspective A in FIG. 1 is the previous perspective, and m1, m2, m3, and m4 in perspective A are the second reference points, where m1 and m3 form one pair of positions and m2 and m4 form another pair of positions.
  • Perspective B in FIG. 1 is the current perspective, and n1, n2, n3, and n4 in perspective B are the first reference points, where n1 and n3 form one pair of positions and n2 and n4 form another pair of positions.
  • In the following, the previous perspective is the perspective A in FIG. 1 described above, and the current perspective is the perspective B in FIG. 1.
  • the mapping relationship between the camera coordinates and the world coordinates at the perspective A can be obtained and recorded in advance.
  • A mapping relationship from the camera coordinates in perspective B to the camera coordinates in perspective A can be obtained.
  • the mapping relationship between the perspective B and the perspective A and the mapping relationship between the perspective A and the world coordinates are integrated to obtain the mapping relationship between the perspective B and the world coordinates.
  • Assume the preset homography matrix from the camera coordinates at perspective A to the world coordinates is H1. First, using the correspondence between the coordinate point sets M and N and the findHomography function in OpenCV, the homography matrix H3 from perspective B to perspective A is obtained. Then, H3 is multiplied by H1 to obtain the first homography matrix HT, that is, HT = H1 × H3.
  • In this way, a first homography matrix from the camera coordinates in the current viewing angle to world coordinates, which reflects the real-time pose of the camera, is generated. Visual positioning is then performed based on the first homography matrix, so that after the camera pose changes, visual positioning can proceed normally without the user manually measuring parameters, which reduces operational complexity and greatly improves the user experience. A sketch of this step is given below.
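  • A minimal Python/OpenCV sketch of this step follows; the reference point values are hypothetical, and the identity matrix stands in for the previously calibrated preset homography matrix H1:

```python
import cv2
import numpy as np

# First reference points n1..n4 in the current view B and the recorded
# second reference points m1..m4 of the previous view A (hypothetical values).
pts_current = np.array([[320, 480], [360, 480], [300, 600], [380, 600]],
                       dtype=np.float32)
pts_previous = np.array([[322, 478], [358, 478], [302, 598], [378, 598]],
                        dtype=np.float32)

# H3 maps view-B camera coordinates to view-A camera coordinates.
H3, _ = cv2.findHomography(pts_current, pts_previous)

# H1 maps view-A camera coordinates to world coordinates (calibrated in
# advance; the identity below is only a placeholder).
H1 = np.eye(3)

# HT maps view-B camera coordinates to world coordinates: B -> A -> world.
HT = H1 @ H3
```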
  • In some embodiments, the preset homography matrix may be updated by using the first homography matrix as the new preset homography matrix, and visual positioning is then performed based on the new preset homography matrix.
  • this embodiment relates to a specific process of positioning according to the first homography matrix.
  • In an optional implementation, the first homography matrix can be used for positioning directly. Specifically, the camera coordinates in the current perspective can be obtained based on the video stream collected by the camera, and the camera coordinates are then multiplied by the first homography matrix to obtain the world coordinates in the current perspective.
  • the position of the camera can be regarded as the origin of the coordinates in the current perspective. After obtaining the world coordinates in the current perspective, the position of the target object relative to the camera is obtained, thereby completing visual positioning.
  • In other embodiments, the principle of obtaining the world coordinates of the current perspective based on the first homography matrix is the same and is not described again; a sketch of the mapping follows below.
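  • A sketch of applying the first homography matrix for positioning, under the usual homogeneous-coordinate convention (HT as obtained above):

```python
import numpy as np

def pixel_to_world(HT, x, y):
    # Map a pixel (x, y) in the current view to world coordinates,
    # treating the camera position as the origin of the world frame.
    p = HT @ np.array([x, y, 1.0])
    p /= p[2]                  # homogeneous normalization
    return p[0], p[1]          # lateral and longitudinal world coordinates
```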
  • the first homography matrix is calibrated, and then positioning is performed based on the calibrated homography matrix to ensure that the positioning result is more accurate.
  • the performing positioning according to the first homography matrix includes: calibrating the first homography matrix according to vertical calibration information and horizontal calibration information to obtain calibration parameters; and according to the calibration parameters And the first homography matrix for positioning.
  • the position of the target object is determined by its vertical and horizontal coordinates in world coordinates.
  • For example, if the horizontal coordinate of a target object with respect to the camera is 3 and the vertical coordinate is 4, the position of the target object can be uniquely determined by the coordinates (3, 4).
  • The deviation of the first homography matrix is formed by the horizontal deviation and the longitudinal deviation together. Therefore, in this embodiment, the first homography matrix is calibrated according to the longitudinal calibration information and the horizontal calibration information to obtain calibration parameters, and positioning is then performed based on the calibration parameters and the first homography matrix.
  • the vertical calibration information may include a pitch angle and the like
  • the horizontal calibration information may include a horizontal deflection angle and the like.
  • The method of the embodiments of the present application can be applied to an intelligent driving scenario, such as an assisted driving or automatic driving scenario.
  • An unmanned driving system or an ADAS system is a system applied in the field of intelligent driving.
  • In the following, an ADAS system is used as an example to describe the application process of the method of the embodiments of the present application in an intelligent driving scenario, but the embodiments are not limited to this.
  • In one example, when a calibration instruction is input manually by the user, or is issued by the ADAS system under a specific trigger condition, the above step S201 is triggered, and the steps after S201 are then executed.
  • In this case, in step S202, determining the first reference point information of the current angle of view of the camera according to the lane line detection result includes: receiving a calibration instruction; and determining, based on the calibration instruction, the first reference point information of the current viewing angle of the camera according to the lane line detection result.
  • For example, the user can input a calibration instruction by clicking a button on the interface of the ADAS system or by voice.
  • When the system receives the calibration instruction input by the user, the above step S201 and its subsequent steps are performed, enabling accurate positioning after the camera mounting position changes.
  • determining the first reference point information of the current angle of view of the camera according to the lane line detection result includes: determining whether the pose of the camera changes, and if so, according to the lane line detection result. Determine first reference point information of a current perspective of the camera.
  • For example, the ADAS system can estimate the pose of the camera in real time and determine whether the pose of the camera has changed according to two adjacent pose estimation results. If the pose of the camera has changed, the ADAS system performs the above step S201 and its subsequent steps, thereby enabling accurate positioning after a change in the camera installation position.
  • In some embodiments, the method further includes: obtaining a plurality of first homography matrices and a plurality of sets of calibration parameters within a first preset period; and performing new positioning according to an average value of the plurality of first homography matrices and an average value of the plurality of sets of calibration parameters.
  • For example, the ADAS system can further perform the following operations: assuming the first preset period is 10 seconds, the ADAS system obtains a total of 8 sets of data within these 10 seconds, that is, 8 first homography matrices and 8 sets of calibration parameters. At a time point after the first preset period, the ADAS directly uses the average of the previous 8 sets of data to perform new positioning; that is, the homography matrix obtained by averaging the 8 homography matrices is used as the new homography matrix, the calibration parameters obtained by averaging the 8 sets of calibration parameters are used as the new calibration parameters, and positioning is performed based on the new homography matrix and the new calibration parameters, as in the sketch below.
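  • A sketch of this averaging strategy; element-wise averaging of the matrices and the parameter sets is an assumption about how the averages are formed, and the buffer contents are hypothetical:

```python
import numpy as np

# Hypothetical buffers filled during the first preset period (e.g. 10 s):
# eight first homography matrices and eight calibration-parameter sets.
H_samples = [np.eye(3) + 0.01 * np.random.randn(3, 3) for _ in range(8)]
calib_samples = [np.array([1.02, 0.98, 0.05]) for _ in range(8)]  # (k, k', b)

H_avg = np.mean(H_samples, axis=0)          # new homography matrix
calib_avg = np.mean(calib_samples, axis=0)  # new calibration parameters
# Positioning after the period then uses H_avg and calib_avg.
```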
  • the method further includes: obtaining a first homography matrix and a calibration parameter according to a second preset period interval; and performing a new positioning according to the first homography matrix and the calibration parameter.
  • For example, the ADAS system can further perform the following operations: assuming that the second preset period is 5 seconds, the ADAS system obtains the first homography matrix and the calibration parameters every 5 seconds, and then, in the next 5-second period, uses the first homography matrix and the calibration parameters obtained in the previous period for positioning.
  • this embodiment relates to the specific execution process of calibrating the first homography matrix to obtain calibration parameters.
  • FIG. 3 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of the present application.
  • the foregoing calibration of the first homography matrix to obtain calibration parameters may specifically include:
  • the above-mentioned pitch angle information and horizontal declination angle information need to be determined first.
  • In some embodiments, before the first homography matrix is calibrated to obtain the calibration parameters, the method further includes: determining the pitch angle information and the horizontal declination angle information of the camera according to the lane line detection result.
  • horizon information and route information can be determined according to a lane line detection result.
  • the horizon information may be a horizon function
  • the route information may be a route function.
  • hardware such as a gyroscope can also be used to obtain acceleration, and pitch angle information and horizontal declination angle information can be obtained according to acceleration changes.
  • the vertical calibration parameter and the horizontal calibration parameter are respectively obtained, wherein the vertical calibration parameter specifically includes a scaling coefficient and an offset of the vertical calibration, and the horizontal calibration parameter is a scaling coefficient of the horizontal calibration.
  • the process of obtaining the longitudinal calibration parameters is as follows.
  • As shown in FIG. 4, A is the origin of the ordinate in the world coordinate system, and B and E respectively represent the same ground point in the current frame and the previous frame. Therefore, BE represents the actual displacement of the vehicle (camera), and B'E' represents the displacement in world coordinates calculated using the first homography matrix. BD', BD, GN, E'F', and EF are perpendicular to the ground, and GH is parallel to the ground.
  • The dotted line through D', D, N, M, F', F, and Q indicates the ground parallel line at the same height as the camera.
  • The straight line through B, G, and M is defined as the camera projection surface; therefore, point G is the position of point E in the previous frame of the photo, and GM and BM respectively represent the pixel distances in the photo from the corresponding real-world ground points to the horizon.
  • Specifically, the actual displacement BE of the vehicle between adjacent frames is first calculated according to the current frame rate of the camera and the vehicle speed; the current frame rate of the camera and the vehicle speed must therefore be obtained in advance.
  • Next, the same point in the video (for example, a corner point of a lane line) is tracked; the same point can be tracked in a smooth tracking manner.
  • Then, B', E' and the world coordinates F', D' are calculated based on the first homography matrix, thereby obtaining AF' and AD'.
  • Finally, the offset b is calculated using formula (6). A hedged sketch of fitting the longitudinal scale factor and offset follows below.
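  • Formulas (4) to (6) are not reproduced in this text. The following sketch assumes only that the true longitudinal distance relates linearly to the distance computed from the uncalibrated first homography matrix, Y_true = k * Y_computed + b, and fits k and b by least squares; the sample values are hypothetical:

```python
import numpy as np

# Hypothetical samples for one tracked lane-line corner point:
# y_computed from the first homography matrix, y_true from integrating
# the vehicle speed over the frame interval (the displacement BE).
y_computed = np.array([4.8, 7.1, 9.6, 12.2, 14.5])
y_true = np.array([5.0, 7.5, 10.0, 12.5, 15.0])

k, b = np.polyfit(y_computed, y_true, deg=1)  # scale factor k and offset b
```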
  • Specifically, a geometric model of horizontal calibration as shown in FIG. 5 is established, where A is the origin of coordinates in the world coordinate system and B is any point on the ground in a certain frame of the photos; BE is therefore the horizontal distance in the world coordinate system.
  • For convenience of description, the straight line through D, C, and B is defined as the camera projection plane in this embodiment; therefore, BC is the distance in pixel coordinates from point B to the route function (the specific acquisition method is described in the following embodiments).
  • AH is the focal length of the camera, and FH is the length of the camera imaging surface in the real world; AH and FH need to be obtained in advance.
  • Specifically, BC is first calculated from the above-mentioned point-to-line distance formula (3). Then, based on the longitudinal calibration scale factor k and the offset b obtained above, AE is calculated using formula (7). Next, the world coordinates of B are calculated according to the above-mentioned first homography matrix, thereby obtaining BE'. Finally, combining the camera intrinsic parameters and the above-mentioned horizontal declination angle, formula (8) is used to calculate the horizontal calibration scale factor k'.
  • In other embodiments, the coordinates of the same target object in multiple frames of images collected by the camera at the current perspective can also be compared and analyzed to determine the scale factor and offset of the longitudinal calibration and the scale factor of the horizontal calibration.
  • this embodiment relates to a specific process of positioning according to a calibration parameter and a first homography matrix.
  • FIG. 6 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of the present application. As shown in FIG. 6, an optional implementation of positioning according to a calibration parameter and a first homography matrix is:
  • the first homography matrix and the calibration parameters are integrated to form a second homography matrix.
  • In some embodiments, determining the second homography matrix according to the calibration parameters and the first homography matrix includes: determining at least one sub-parameter of the calibration parameters according to a first preset number of coordinate points in a picture of the video stream collected by the camera, and determining the second homography matrix according to the at least one sub-parameter.
  • the at least one sub-parameter is a parameter obtained by splitting the calibration parameter.
  • the sub-parameters are sub-parameters of the offset b.
  • For example, any three coordinate points can be selected (respectively (x1, y1, 1), (x2, y2, 1), and (x3, y3, 1)), and the sub-parameters b1, b2, and b3 are calculated by equation (9).
  • The sub-parameters b1, b2, and b3, together with the above-mentioned longitudinal calibration scale factor k and horizontal calibration scale factor k', are then combined into the above-mentioned first homography matrix to form the second homography matrix.
  • That is, the second homography matrix is obtained by combining the sub-parameters b1, b2, and b3 and the scale factors k and k' into the first homography matrix HT.
  • In this way, the first homography matrix is integrated with the calibration parameters to form a calibrated homography matrix, so that when performing visual positioning, the calibrated homography matrix can be used to complete visual positioning quickly, improving the efficiency of visual positioning. One consistent construction is sketched below.
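  • Equation (9) is not reproduced in this text. The sketch below is one consistent construction under the assumption that the calibration acts as X' = k' * X and Y' = k * Y + b on the normalized world coordinates; with this choice, the sub-parameters b1, b2, and b3 would be the components of b times the third row of HT:

```python
import numpy as np

def build_second_homography(HT, k, k_prime, b):
    # World coordinates come from a homogeneous division by the third row:
    #   x = (h1 . p) / (h3 . p),  y = (h2 . p) / (h3 . p),
    # so scaling the first row by k' gives x' = k' * x, and replacing the
    # second row by k*h2 + b*h3 gives y' = k * y + b exactly.
    h1, h2, h3 = HT
    return np.vstack([k_prime * h1, k * h2 + b * h3, h3])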
  • FIG. 7 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of the present application. As shown in FIG. 7, another optional implementation of positioning according to a calibration parameter and a first homography matrix is:
  • S701. Determine an intermediate matrix according to the homogeneous coordinates of the coordinate points in the picture of the video stream collected by the camera at the current perspective and the first homography matrix.
  • the homogeneous coordinates of the coordinate points in the picture of the video stream collected by the camera at the current perspective are multiplied by the above-mentioned first homography matrix to obtain an intermediate matrix.
  • The coordinate point may be any coordinate point in the picture collected by the camera. Assuming that a certain coordinate point X is (x, y), the homogeneous coordinates of the coordinate point X are (x, y, 1).
  • The homogeneous coordinate vector can be multiplied directly by the first homography matrix, and the result is then calculated linearly with the above calibration parameters to directly obtain the world coordinates of this coordinate point in the current perspective.
  • In this embodiment, the first homography matrix and the calibration parameters are not integrated; instead, when visual positioning is required, the first homography matrix and the calibration parameters are used directly.
  • The method of this embodiment can reduce the amount of calculation and improve calculation efficiency; a sketch follows below.
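  • A sketch of this non-integrated variant, with HT, k, k', and b as obtained in the previous embodiments:

```python
import numpy as np

def locate(HT, k, k_prime, b, x, y):
    m = HT @ np.array([x, y, 1.0])  # intermediate matrix (homogeneous vector)
    X = k_prime * (m[0] / m[2])     # horizontally calibrated world coordinate
    Y = k * (m[1] / m[2]) + b       # longitudinally calibrated coordinate
    return X, Y
```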
  • this embodiment relates to a specific method for determining the first reference point information according to a lane line detection result.
  • the lane line parallel line information, horizon information, vanishing point information, and route information need to be determined according to the lane line detection result.
  • the above route information refers to the driving route of the vehicle and the extension information of the route, and the vanishing point is the vanishing point of the road along the route.
  • In some embodiments, the parallel-line functions of the lane lines on the left and right sides of the vehicle can first be fitted in real time; the horizon function and the vanishing point are then fitted by counting the intersections of the lane lines, and the route function can be calculated according to the horizon and the vanishing point.
  • For example, a deep learning segmentation method can first be used to mark the pixels where the lane lines are located, and curve functions of the two lane lines on the left and right sides of the current vehicle are then fitted through OpenCV.
  • When the lane lines are straight, a probability map of the parallel lines of the straight lane lines can be obtained based on statistical methods, and linear functions of the parallel lines of the straight left and right lane lines can then be fitted based on the probability map.
  • the lane line parallel line function can also be fitted by means such as a piecewise function.
  • Specifically, the intersection points within the image coordinate range are calculated according to the aforementioned lane line parallel line functions fitted in real time. After the vehicle has been driving normally for a period of time, because road conditions such as lane changes and curves exist, a probability map of lane line intersections can be obtained. Based on this probability map, a density-based clustering algorithm (such as the DBSCAN algorithm) is used to remove outlier points, producing a series of points that fall on the horizon. These points can be used to fit the horizon parallel line function, and the vanishing point coordinates can be obtained by means such as the mean method.
  • In addition, the fitted horizon must be orthogonal to the route, and the route or its extension must pass through the vanishing point. Therefore, the line through the vanishing point orthogonal to the horizon is calculated as the route function. Alternatively, optical flow can be used to find the points whose horizontal motion vector is 0, and these points can then be used to fit the route function. A sketch of the intersection-and-clustering step follows below.
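  • A hedged sketch of the intersection-and-clustering step; the (slope, intercept) line representation and the DBSCAN parameters (eps, min_samples) are assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def fit_horizon(lane_lines):
    # lane_lines: accumulated (slope, intercept) pairs of lane line
    # parallel lines fitted per frame; their pairwise intersections
    # cluster near the horizon over time.
    pts = []
    for i in range(len(lane_lines)):
        for j in range(i + 1, len(lane_lines)):
            a1, b1 = lane_lines[i]
            a2, b2 = lane_lines[j]
            if abs(a1 - a2) < 1e-6:
                continue  # parallel in the image, no intersection
            x = (b2 - b1) / (a1 - a2)
            pts.append((x, a1 * x + b1))
    pts = np.asarray(pts)
    labels = DBSCAN(eps=20.0, min_samples=5).fit_predict(pts)
    inliers = pts[labels != -1]  # drop outlier intersections
    slope, intercept = np.polyfit(inliers[:, 0], inliers[:, 1], 1)  # horizon
    vanishing_point = inliers.mean(axis=0)  # mean method
    return (slope, intercept), vanishing_point
```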
  • In other embodiments, the parallel-line functions of the lane lines on the left and right sides of the vehicle may first be fitted in real time, the route function and the vanishing point may then be fitted, and the horizon function may be calculated according to the route and the vanishing point.
  • a first reference point is determined according to the information.
  • FIG. 8 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of the present application. As shown in FIG. 8, a specific process of determining a first reference point according to a lane line detection result is:
  • In some embodiments, determining the lane line parallel line information and the horizon parallel line information according to the lane line detection result includes: fitting the lane line parallel lines according to the lane line detection result, and determining the horizon parallel line information according to the fitted lane line parallel lines.
  • the lane line parallel line information may be a function of the lane line parallel line
  • the horizon line information may be a horizon parallel line function
  • In some embodiments, determining the coordinates of the first reference point according to the lane line parallel line information, the horizon parallel line information, and the route information includes: selecting a second preset number of coordinate points in the route direction; determining the horizon parallel line information at the second preset number of coordinate points; determining the coordinates of the intersection points of the horizon parallel lines and the lane line parallel lines according to the horizon parallel line information and the lane line parallel line information; and using the intersection coordinates as the coordinates of the first reference point.
  • That is, the coordinates of these intersections are used as the coordinates of the above-mentioned first reference point; a sketch follows below.
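  • A sketch of this reference-point construction; the (slope, intercept) line representation and the choice of parameterizing each horizon-parallel line by its intercept are assumptions:

```python
import numpy as np

def first_reference_points(lane_left, lane_right, horizon_slope, intercepts):
    # lane_left / lane_right: (slope, intercept) of the lane line parallels;
    # each chosen intercept c defines one horizon-parallel line
    # y = horizon_slope * x + c, giving one longitudinally spaced pair.
    pts = []
    for c in intercepts:
        pair = []
        for a, b in (lane_left, lane_right):
            x = (c - b) / (a - horizon_slope)  # line-line intersection
            pair.append((x, a * x + b))
        pts.append(pair)
    return pts
```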
  • the camera may be installed at a first position on the vehicle, and the first position is a position where a lane line of a road can be photographed. That is, in the embodiment of the present application, the installation position of the camera is not limited. As long as the camera can capture road surface information, visual positioning can be achieved by the method of the embodiment of the present application.
  • In some embodiments, performing lane line detection on the road surface based on a video stream of the vehicle driving road surface collected by a camera installed on the vehicle includes: when the vehicle is in a running state, performing lane line detection on the road surface based on the video stream of the vehicle driving road surface collected by the camera, and then performing the subsequent steps to complete the visual positioning.
  • the pitch angle described in this embodiment of the present application may be any angle within the first preset angle range
  • the horizontal deflection angle described in this embodiment of the present application may be any angle within the second preset angle range.
  • FIG. 9 is a structural diagram of a first module of an embodiment of a visual positioning device according to an embodiment of the present application. As shown in FIG. 9, the device includes:
  • a detection module 901 configured to detect a lane line of the road surface of the vehicle based on a video stream of the road surface of the vehicle collected by a camera installed on the vehicle;
  • a first determining module 902 configured to determine first reference point information of a current perspective of the camera according to a lane line detection result obtained by the detecting module 901;
  • a second determining module 903, configured to determine a third homography matrix according to the first reference point information determined by the first determining module 902 and second reference point information, where the second reference point information is reference point information of the camera at the previous perspective, the second reference point corresponds to the position of the first reference point, and the third homography matrix is used to represent the mapping relationship between the coordinates of the camera at the current perspective and the coordinates of the camera at the previous perspective;
  • a third determining module 904 configured to determine a first homography matrix according to the third homography matrix and a preset homography matrix determined by the second determination module 903, where the preset homography matrix is the Mapping relationship between camera coordinates and world coordinates in the prior perspective;
  • the positioning module 905 is configured to perform positioning according to the first homography matrix determined by the third determining module 904.
  • the positioning module 905 is configured to: calibrate the first homography matrix according to vertical calibration information and horizontal calibration information to obtain calibration parameters; and according to the calibration parameters and all The first homography matrix is used for positioning.
  • In some embodiments, the positioning module 905 includes a first positioning unit, configured to determine a second homography matrix according to the calibration parameters and the first homography matrix, where the second homography matrix is the calibrated mapping relationship matrix between the coordinates of the camera at the current viewing angle and the world coordinates; the second homography matrix is used for positioning.
  • In some embodiments, the first positioning unit includes a matrix determining unit, configured to determine at least one sub-parameter of the calibration parameters according to a first preset number of coordinate points in a picture of the video stream collected by the camera, the sub-parameter being a parameter obtained by splitting the calibration parameters, and to determine the second homography matrix according to the sub-parameter.
  • the positioning module 905 includes a second positioning unit, configured to: according to the homogeneous coordinates of the coordinate points in the picture of the video stream collected by the camera at the current perspective and the first homography Matrix, determining an intermediate matrix; performing linear calculations on the intermediate matrix and the calibration parameters to obtain world coordinates in the current perspective.
  • the positioning module 905 further includes a calibration unit, configured to determine a scale factor for longitudinal calibration according to a lane line detection result, the first homography matrix, and pitch angle information of the camera, and An offset; and determining a scaling coefficient for horizontal calibration according to the first homography matrix and horizontal declination information.
  • FIG. 10 is a structural diagram of a second module of a visual positioning device embodiment according to an embodiment of the present application. As shown in FIG. 10, the device further includes:
  • a fourth determining module 906 is configured to determine the pitch angle information and the horizontal declination angle information according to a lane line detection result.
  • the first determining module 902 is configured to: determine lane line parallel line information and horizon line parallel information according to lane line detection results; and according to the lane line parallel line information and horizon parallel line information The information and the route information determine the coordinates of the first reference point.
  • In some embodiments, the first determination module 902 includes a first determination unit and a second determination unit. The first determination unit is configured to: select a second preset number of coordinate points in the direction of the route; and determine the horizon parallel line information of the second preset number of coordinate points. The second determination unit is configured to: determine the coordinates of the intersection points of the horizon parallel lines and the lane line parallel lines according to the horizon parallel line information and the lane line parallel line information; and use the intersection coordinates as the coordinates of the first reference point.
  • the first determination module 902 includes a third determination unit, configured to: fit a parallel line of a lane line according to a detection result of a lane line; determine a parallel line of the horizon according to the fitted parallel line of the lane line Line information.
  • the camera is installed at a first position on the vehicle, and the first position is a position where a lane line of a road can be photographed.
  • the detection module 901 is configured to: when the vehicle is in a running state, perform lane line detection on the road based on a video stream of a road surface of the vehicle collected by a camera installed on the vehicle .
  • the pitch angle is an arbitrary angle within a first preset angle range
  • the horizontal deflection angle is an arbitrary angle within a second preset angle range.
  • FIG. 11 is a structural diagram of a third module of an embodiment of a visual positioning device according to an embodiment of the present application. As shown in FIG. 11, the device further includes:
  • An update module 907 is configured to update the preset homography matrix, and use the first homography matrix as a new preset homography matrix.
  • the first determining module 902 is further configured to: receive a calibration instruction; and determine, based on the calibration instruction, first reference point information of a current perspective of the camera according to a lane line detection result.
  • the first determining module 902 is further configured to determine whether the pose of the camera changes, and if so, determine a first reference point of the current angle of view of the camera according to a lane line detection result. information.
  • FIG. 12 is a fourth module structural diagram of an embodiment of a visual positioning device according to an embodiment of the present application. As shown in FIG. 12, the device further includes:
  • a first acquisition module 908 configured to acquire a plurality of first homography matrices and a plurality of sets of calibration parameters within a first preset period
  • a first processing module 909 is configured to perform new positioning according to an average value of the plurality of first homography matrices and an average value of the plurality of sets of calibration parameters.
  • FIG. 13 is a structural diagram of a fifth module of an embodiment of a visual positioning device according to an embodiment of the present application. As shown in FIG. 13, the device further includes:
  • a second acquisition module 910 configured to acquire the first homography matrix and the calibration parameter according to a second preset periodic interval
  • the second processing module 911 is configured to perform new positioning according to the first homography matrix and the calibration parameter.
  • It should be noted that the visual positioning device provided in the foregoing embodiments is illustrated only by the division of the foregoing program modules when performing visual positioning.
  • In practical applications, the above processing may be allocated to different program modules as required; that is, the internal structure of the device may be divided into different program modules to complete all or part of the processing described above.
  • the visual positioning device and the visual positioning method embodiments provided by the foregoing embodiments belong to the same concept. For specific implementation processes, refer to the method embodiments, and details are not described herein again.
  • FIG. 14 is a physical block diagram of an electronic device according to an embodiment of the present application. As shown in FIG. 14, the electronic device includes:
  • a memory 1401, configured to store program instructions; and
  • a processor 1402, configured to call and execute the program instructions in the memory, and execute the method steps described in the foregoing method embodiments.
  • the memory 1401 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memories.
  • The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM (Compact Disc Read-Only Memory); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory.
  • the volatile memory may be a random access memory (RAM, Random Access Memory), which is used as an external cache. By way of example, but not limitation, many forms of RAM are available.
  • the memory 1401 described in the embodiments of the present invention is intended to include, but not limited to, these and any other suitable types of memory.
  • the method disclosed in the foregoing embodiment of the present invention may be applied to the processor 1402, or implemented by the processor 1402.
  • the processor 1402 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 1402 or an instruction in a software form.
  • the aforementioned processor 1402 may be a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like.
  • the processor 1402 may implement or execute various methods, steps, and logic block diagrams disclosed in the embodiments of the present invention.
  • a general-purpose processor may be a microprocessor or any conventional processor.
  • FIG. 15 is a schematic structural diagram of a visual positioning system provided by an embodiment of the present application.
  • the system is applied to a vehicle.
  • The system 1500 includes a camera 1501 installed on the vehicle and the above-mentioned visual positioning device 1502 communicatively connected to the camera 1501.
  • An embodiment of the present application further provides a computer program that causes a computer to execute the corresponding records of the foregoing method embodiments of the present application; for brevity, details are not described here again.
  • a person of ordinary skill in the art may understand that all or part of the steps of implementing the foregoing method embodiments may be implemented by a program instructing related hardware.
  • the aforementioned program may be stored in a computer-readable storage medium.
  • When the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
  • the disclosed device and method may be implemented in other ways.
  • The device embodiments described above are only schematic.
  • For example, the division of the units is only a logical function division; in actual implementation, there may be another division manner, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • In addition, the coupling, direct coupling, or communication connections between the displayed or discussed components may be implemented through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
  • The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, each unit may separately serve as one unit, or two or more units may be integrated into one unit; the above integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the foregoing program may be stored in a computer-readable storage medium.
  • When the program is executed, the steps of the foregoing method embodiments are performed.
  • The foregoing storage medium includes: various media that can store program code, such as a mobile storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • If the above-mentioned integrated unit of the present application is implemented in the form of a software functional module and is sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application.
  • the foregoing storage media include: various types of media that can store program codes, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Image Processing (AREA)

Abstract

A visual positioning method and apparatus, electronic device, and system. The method includes: performing lane line detection on a road on which a vehicle travels, based on a video stream of the road captured by a camera mounted on the vehicle (S201); determining first reference point information for the camera's current view according to a lane line detection result (S202); determining a third homography matrix according to the first reference point information and second reference point information, where the second reference point information is reference point information for a prior view of the camera, the positions of the second reference points correspond to those of the first reference points, and the third homography matrix represents the mapping between the camera's coordinates in the current view and the camera's coordinates in the prior view (S203); determining a first homography matrix according to the third homography matrix and a preset homography matrix (S204); and performing positioning according to the first homography matrix (S205).

Description

Visual positioning method and apparatus, electronic device, and system
Cross-Reference to Related Application
This application is based on, and claims priority to, Chinese Patent Application No. 201810581686.6, filed on June 5, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to computer vision technology, and in particular to a visual positioning method and apparatus, an electronic device, and a system.
Background
Driverless systems and Advanced Driver Assistance Systems (ADAS) are systems applied in the field of intelligent driving, and target positioning is a core function of such systems. Among these, monocular visual positioning uses only a common monocular camera and has the advantages of simple algorithms, high efficiency, and a small amount of computation, and has therefore become a research hotspot.
Monocular visual positioning is performed mainly based on the camera's parameters, which are related to factors such as the camera's pose. Typically, the camera's parameters are calibrated in advance, and positioning is then performed based on the calibrated parameters.
Summary
The embodiments of this application provide a visual positioning solution.
In a first aspect, an embodiment of this application provides a visual positioning method, including: performing lane line detection on a road on which a vehicle travels, based on a video stream of the road captured by a camera mounted on the vehicle; determining first reference point information for the camera's current view according to a lane line detection result; determining a third homography matrix according to the first reference point information and second reference point information, where the second reference point information is reference point information for a prior view of the camera, the positions of the second reference points correspond to those of the first reference points, and the third homography matrix represents the mapping between the camera's coordinates in the current view and the camera's coordinates in the prior view; determining a first homography matrix according to the third homography matrix and a preset homography matrix, where the preset homography matrix represents the mapping between the camera's coordinates in the prior view and world coordinates; and performing positioning according to the first homography matrix.
In a second aspect, an embodiment of this application provides a visual positioning apparatus, including:
a detection module configured to perform lane line detection on a road on which a vehicle travels, based on a video stream of the road captured by a camera mounted on the vehicle;
a first determination module configured to determine first reference point information for the camera's current view according to a lane line detection result;
a second determination module configured to determine a third homography matrix according to the first reference point information and second reference point information, where the second reference point information is reference point information for a prior view of the camera, the positions of the second reference points correspond to those of the first reference points, and the third homography matrix represents the mapping between the camera's coordinates in the current view and the camera's coordinates in the prior view;
a third determination module configured to determine a first homography matrix according to the third homography matrix and a preset homography matrix, where the preset homography matrix represents the mapping between the camera's coordinates in the prior view and world coordinates; and
a positioning module configured to perform positioning according to the first homography matrix.
A third aspect of this application provides an electronic device, including:
a memory configured to store program instructions; and
a processor configured to call and execute the program instructions in the memory to perform the method steps described in the first aspect above.
A fourth aspect of this application provides a readable storage medium, where a computer program is stored in the readable storage medium, and the computer program is used to perform the method steps described in the first aspect above.
A fifth aspect of this application provides a visual positioning system applied to a vehicle, the system including a camera mounted on the vehicle and the visual positioning apparatus of the second aspect above communicatively connected to the camera.
A sixth aspect of this application provides a computer program that causes a computer to execute the method described in the first aspect above.
According to the visual positioning method and apparatus, electronic device, and system provided by this application, a first homography matrix from the camera's coordinates in the current view to world coordinates, reflecting the camera's real-time pose, is generated based on the first reference point information of the current view and the second reference point information of the prior view, and visual positioning is then performed based on this first homography matrix. Thus, after the camera's pose changes, visual positioning can proceed normally without the user manually measuring parameters, which reduces operational complexity and greatly improves the user experience.
Brief Description of the Drawings
To describe the technical solutions in this application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description show some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a current view and a prior view according to an embodiment of this application;
FIG. 2 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of this application;
FIG. 3 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of this application;
FIG. 4 is a geometric model for longitudinal calibration according to an embodiment of this application;
FIG. 5 is a geometric model for horizontal calibration according to an embodiment of this application;
FIG. 6 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of this application;
FIG. 7 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of this application;
FIG. 8 is a schematic flowchart of an embodiment of a visual positioning method according to an embodiment of this application;
FIG. 9 is a first module structure diagram of an embodiment of a visual positioning apparatus according to an embodiment of this application;
FIG. 10 is a second module structure diagram of an embodiment of a visual positioning apparatus according to an embodiment of this application;
FIG. 11 is a third module structure diagram of an embodiment of a visual positioning apparatus according to an embodiment of this application;
FIG. 12 is a fourth module structure diagram of an embodiment of a visual positioning apparatus according to an embodiment of this application;
FIG. 13 is a fifth module structure diagram of an embodiment of a visual positioning apparatus according to an embodiment of this application;
FIG. 14 is a physical block diagram of an electronic device according to an embodiment of this application;
FIG. 15 is a schematic architecture diagram of a visual positioning system according to an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings in the embodiments of this application. Obviously, the described embodiments are some, rather than all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
It should be noted that the camera described in the embodiments of this application is a vehicle-mounted camera used for visual positioning, through which road information can be captured.
In the related art, in scenarios where the camera's pose changes, for example after the camera is reinstalled or mounted on another vehicle, or when the road is bumpy, the user usually needs to measure parameters manually. The parameters to be measured include camera extrinsics such as the camera's pitch angle, so the user has to perform a variety of complex operations to obtain the parameter information, resulting in a poor user experience.
The embodiments of this application propose a visual positioning method that uses reference points in the current view and a prior view to generate a first homography matrix, reflecting the camera's real-time pose, from the camera's coordinates in the current view to world coordinates, and then performs visual positioning based on this first homography matrix. Thus, after the camera's pose changes, visual positioning can proceed normally without the user manually measuring parameters, which reduces operational complexity and greatly improves the user experience.
To help those skilled in the art better understand the technical solutions of the embodiments of this application, the terms involved in the embodiments of this application are first explained below.
1. Current view and prior view
The current view is the camera's view at the current moment, and the prior view is the camera's view at some moment before the current moment. For the scenarios to which the embodiments of this application apply, the current view and the prior view may be the same view or different views. For example, as shown in FIG. 1, view A is the prior view; after the camera's pose changes, it becomes the current view, i.e., view B.
2. Homography matrix
A homography matrix applies to the coordinate transformation between views of the same plane at different moments; that is, it identifies the mapping between the coordinates of the views at different moments.
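As a concrete illustration (standard projective-geometry notation added here for clarity; the symbols s and H are not from the original text): corresponding homogeneous points of the same plane in two views satisfy
s · (x', y', 1)ᵀ = H · (x, y, 1)ᵀ
where H is a 3×3 homography matrix determined up to the scale factor s.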
FIG. 2 is a schematic flowchart of an embodiment of the visual positioning method provided by an embodiment of this application. The method can be applied to any device that needs to perform visual positioning; that is, the execution subject of the method may be a device that needs to perform visual positioning. For example, the execution subject may be, but is not limited to: an electronic device including a processor and a memory (the processor calls corresponding instructions stored in the memory to execute the method steps in the embodiments of this application); or an electronic device including a visual positioning apparatus (the visual positioning apparatus includes at least a detection module, a first determination module, a second determination module, and a positioning module, and can execute the method steps in the embodiments of this application); or a vehicle or robot equipped with ADAS, or a driverless vehicle or robot, etc. This description of the execution subject also applies to the other method embodiments of this application and is not repeated. As shown in FIG. 2, the method includes:
S201. Perform lane line detection on the road on which the vehicle travels, based on a video stream of the road captured by a camera mounted on the vehicle.
S202. Determine first reference point information for the camera's current view according to the lane line detection result.
The camera mounted on the vehicle is turned on in advance, and the camera captures a video stream of the road on which the vehicle travels in real time; the device performing visual positioning performs lane line detection according to the video stream captured by the camera to obtain the lane line detection result.
In combination with one or more embodiments of this application, lane line information may be determined according to the lane line detection result.
In combination with one or more embodiments of this application, the lane line information may be information about the two lane lines on the vehicle's left and right sides, or information about lines parallel to these two lane lines. The two lane lines may be straight lane lines or curved lane lines, which is not limited by the embodiments of this application. When a lane line is curved, the corresponding straight lane line information or straight lane line parallel information may be obtained by statistically processing the curved lane line. A lane line may be a solid line, a dashed line, etc.; its line type is not limited by the embodiments of this application. The color of a lane line may be white, yellow, black, etc.; its color is not limited by the embodiments of this application either.
The lane line information may be represented by lane line parallel functions; the process of obtaining the lane line information is described in the embodiments below.
S203. Determine a third homography matrix according to the first reference point information and second reference point information, where the second reference point information is reference point information for a prior view of the camera, the positions of the second reference points correspond to those of the first reference points, and the third homography matrix represents the mapping between the camera's coordinates in the current view and the camera's coordinates in the prior view.
S204. Determine a first homography matrix according to the third homography matrix and a preset homography matrix, where the preset homography matrix represents the mapping between the camera's coordinates in the prior view and world coordinates.
In combination with one or more embodiments of this application, the second reference points are known reference points; that is, before this embodiment is executed, the reference points in the prior view have already been selected and recorded. The first reference points and the second reference points are both coordinate points in world coordinates, and both are longitudinally equidistant paired points on the lane line parallels. That the positions of the second reference points correspond to those of the first reference points specifically means that the first and second reference points are both longitudinally equidistant paired points on the lane line parallels. Taking FIG. 1 as an example, view A in FIG. 1 is the prior view, and m1, m2, m3, and m4 in view A are the second reference points, where m1 and m3 form a pair of points and m2 and m4 form a pair of points; view B in FIG. 1 is the current view, and n1, n2, n3, and n4 in view B are the first reference points, where n1 and n3 form a pair of points and n2 and n4 form a pair of points.
For example, suppose the prior view is view A in FIG. 1 above and the current view is view B in FIG. 1. The mapping from the camera's coordinates in view A to world coordinates can be obtained and recorded in advance. In this step, based on the first and second reference points described above, the mapping from the camera coordinates in view B to those in view A can be obtained. Then, by combining the mapping from view B to view A with the mapping from view A to world coordinates, the mapping from view B to world coordinates is obtained.
This is illustrated with an example below.
Suppose the second reference points in view A are M = {m1, m2, ..., m6, ...}, the first reference points in view B are N = {n1, n2, ..., n6, ...}, and the preset homography matrix from the camera's coordinates in view A to world coordinates is
[formula image PCTCN2019088207-appb-000001: the preset homography matrix H1]
Then, using the correspondence between the point sets M and N, the homography matrix from view B to view A is obtained with the findHomography function in OpenCV:
[formula image PCTCN2019088207-appb-000002: the homography matrix H3]
Then, multiplying H3 and H1 yields the first homography matrix HT, specifically:
[formula image PCTCN2019088207-appb-000003: HT, the product of H3 and H1]
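Purely as an illustration of these two steps, here is a minimal sketch. The point values, the RANSAC choice, and the composition order H1 @ H3 under the column-vector convention are assumptions, since the original formula appears only as an image:

```python
import numpy as np
import cv2

# Paired reference points: rows correspond across views (placeholder values).
# N_pts: first reference points in the current view (view B)
# M_pts: second reference points in the prior view (view A)
N_pts = np.array([[320, 480], [340, 400], [420, 480], [400, 400]], dtype=np.float32)
M_pts = np.array([[300, 470], [325, 395], [405, 470], [390, 395]], dtype=np.float32)

# H3 maps coordinates in the current view to the prior view.
H3, _ = cv2.findHomography(N_pts, M_pts, method=cv2.RANSAC)

# H1: preset homography from the prior view to world coordinates
# (assumed known from an earlier calibration; identity is a placeholder).
H1 = np.eye(3)

# First homography HT: current view -> world, composed under the
# column-vector convention world ~ H1 @ (H3 @ x_current).
HT = H1 @ H3
```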
S205. Perform positioning according to the first homography matrix.
In this embodiment, based on the first reference point information of the current view and the second reference point information of the prior view, a first homography matrix from the camera's coordinates in the current view to world coordinates, reflecting the camera's real-time pose, is generated, and visual positioning is then performed based on this first homography matrix. Thus, after the camera's pose changes, visual positioning can proceed normally without the user manually measuring parameters, which reduces operational complexity and greatly improves the user experience.
In an optional embodiment of this application, after the first homography matrix is determined, the preset homography matrix may be updated by taking the first homography matrix as the new preset homography matrix, and visual positioning is performed based on the new preset homography matrix.
Based on the above embodiments, this embodiment concerns the specific process of performing positioning according to the first homography matrix.
In combination with one or more embodiments of this application, the first homography matrix may be used directly for positioning. Specifically, the camera coordinates in the current view may first be obtained based on the video stream captured by the camera, and the camera coordinates may then be multiplied by the first homography matrix to obtain the world coordinates in the current view.
It should be noted that, in a specific implementation, the camera's position can be regarded as the coordinate origin of the current view; once the world coordinates in the current view are obtained, the position of the target object relative to the camera is obtained, completing the visual positioning. The principle of obtaining the world coordinates of the current view based on the first homography matrix is the same in the following embodiments of this application and is not repeated.
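A minimal sketch of this direct use, with a placeholder matrix standing in for the first homography; the explicit perspective divide by the third homogeneous coordinate is an assumption the prose glosses over when it says "multiply":

```python
import numpy as np

# Placeholder for the first homography matrix computed in the step above.
HT = np.eye(3)

def pixel_to_world(H, u, v):
    """Map a pixel (u, v) in the current view to world coordinates via H."""
    p = H @ np.array([u, v, 1.0])    # homogeneous mapping
    return p[0] / p[2], p[1] / p[2]  # perspective divide

# The result is the target's position relative to the camera (the origin).
x_w, y_w = pixel_to_world(HT, 352.0, 451.0)
```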
Since in the real world the spacing of the lane lines is not necessarily the same on different roads, when the first homography matrix is used directly for positioning, both the longitudinal and the horizontal distances obtained may deviate from the actual distances. Therefore, in the embodiments of this application, another optional implementation may also be adopted: the first homography matrix is calibrated, and positioning is then performed based on the calibrated homography matrix, to ensure a more accurate positioning result.
In this optional implementation, performing positioning according to the first homography matrix includes: calibrating the first homography matrix according to longitudinal calibration information and horizontal calibration information to obtain calibration parameters; and performing positioning according to the calibration parameters and the first homography matrix.
In combination with one or more embodiments of this application, the position of a target object is determined by its longitudinal coordinate and horizontal coordinate in world coordinates. For example, if a target object's horizontal coordinate relative to the camera is 3 and its longitudinal coordinate is 4, the target object's position is uniquely determined by the coordinates 3 and 4. Correspondingly, the deviation of the first homography matrix is formed jointly by the deviation in the horizontal direction and the deviation in the longitudinal direction. Therefore, in this embodiment, the first homography matrix is calibrated according to the longitudinal calibration information and the horizontal calibration information to obtain the calibration parameters, and positioning is then performed based on the calibration parameters and the first homography matrix.
The longitudinal calibration information may include the pitch angle, etc., and the horizontal calibration information may include the horizontal deflection angle, etc.
The specific execution process is illustrated with examples in the embodiments below.
The application scenarios of the embodiments of this application are described below. The solutions involved in the embodiments of this application can be applied in intelligent driving scenarios, for example in assisted driving or autonomous driving scenarios. Driverless systems and ADAS systems are systems applied in the field of intelligent driving; the embodiments of this application take an ADAS system as an example below to describe the application of the method of the embodiments of this application in intelligent driving scenarios, but the embodiments of this application are obviously not limited thereto.
In one scenario, after a calibration instruction entered manually by the user, or entered by the ADAS system under a specific trigger condition, is received, the execution of step S201 above and the steps after S201 is triggered. Accordingly, as one implementation of step S202, determining the first reference point information for the camera's current view according to the lane line detection result includes: receiving a calibration instruction; and based on the calibration instruction, determining the first reference point information for the camera's current view according to the lane line detection result.
For example, after the user manually adjusts the camera's mounting position, the user may enter a calibration instruction on the ADAS system interface by clicking a button, by voice, or in other ways. When the system receives the calibration instruction entered by the user, it executes step S201 above and its subsequent steps, so that accurate positioning can be performed after the camera's mounting position has changed.
In another scenario, without the user issuing a calibration instruction, the ADAS system may determine whether the camera's pose has changed and, if so, trigger the execution of step S201 above and the subsequent steps. As another implementation of step S202, determining the first reference point information for the camera's current view according to the lane line detection result includes: determining whether the camera's pose has changed, and if so, determining the first reference point information for the camera's current view according to the lane line detection result.
For example, the ADAS system may estimate the camera's pose in real time and determine, from two consecutive pose estimation results, whether the camera's pose has changed. If the camera's pose has changed, the ADAS system executes step S201 above and its subsequent steps, so that accurate positioning can be performed after the camera's mounting position has changed.
In combination with one or more embodiments of this application, the method further includes: obtaining multiple first homography matrices and multiple sets of calibration parameters within a first preset period; and performing new positioning according to the average of the multiple first homography matrices and the average of the multiple sets of calibration parameters.
For example, the ADAS system may further perform the following operations: suppose the first preset period is 10 seconds and the ADAS system obtains 8 sets of data within these 10 seconds, i.e., 8 first homography matrices and 8 sets of calibration parameters. At time points after the first preset period, the ADAS directly uses the averages of the previous 8 sets of data for new positioning: the homography matrix obtained by averaging the previous 8 first homography matrices is used as the new homography matrix, the calibration parameters obtained by averaging the previous 8 sets of calibration parameters are used as the new calibration parameters, and positioning is performed based on the new homography matrix and the new calibration parameters.
In combination with one or more embodiments of this application, the method further includes: obtaining the first homography matrix and the calibration parameters at intervals of a second preset period; and performing new positioning according to the first homography matrix and the calibration parameters.
For example, the ADAS system may further perform the following operations: suppose the second preset period is 5 seconds; the ADAS system then obtains the first homography matrix and the calibration parameters every 5 seconds and, during the next 5-second period, performs positioning using the first homography matrix and the calibration parameters obtained in the previous period.
Based on the above embodiments, this embodiment concerns the specific process of calibrating the first homography matrix to obtain the calibration parameters.
FIG. 3 is a schematic flowchart of an embodiment of the visual positioning method provided by an embodiment of this application. As shown in FIG. 3, calibrating the first homography matrix to obtain the calibration parameters may specifically include:
S301. Determine the scale coefficient and the offset for longitudinal calibration according to the lane line detection result, the first homography matrix, and the camera's pitch angle information.
S302. Determine the scale coefficient for horizontal calibration according to the first homography matrix and the horizontal deflection angle information.
In combination with one or more embodiments of this application, before this embodiment is executed, the pitch angle information and the horizontal deflection angle information need to be determined first.
In combination with one or more embodiments of this application, before the first homography matrix is calibrated to obtain the calibration parameters, the method further includes: determining the camera's pitch angle information and horizontal deflection angle information according to the lane line detection result.
In combination with one or more embodiments of this application, horizon information and route information may be determined according to the lane line detection result. The horizon information may be a horizon function, and the route information may be a route function.
It should be understood that the specific mathematical forms of the functions mentioned in the embodiments of this application are only examples; those skilled in the art can construct functions in other mathematical forms on the basis of this application, and the examples provided here do not limit the substance of the technical solutions of this application.
First, according to the horizon function, the distance from the pixel coordinates of the principal optical axis to the horizon function (PQ in FIG. 4 below) is calculated, and the pitch angle θ is then calculated by the following formula (1):
θ = arctan(PQ / (f * pm))                                               (1)
Second, according to the route function, the distance from the pixel coordinates of the principal optical axis to the route function (CD in FIG. 5 below) is calculated, and the horizontal deflection angle φ is then calculated by the following formula (2):
φ = arctan(CD / (f * pm))                                               (2)
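A minimal sketch of formulas (1) and (2), assuming the horizon and route lines are given in the general form a·x + b·y + c = 0 and that f * pm converts the pixel distance into the same unit as the focal length (the original does not define f and pm explicitly; the point-to-line distance is the standard formula):

```python
import math

def point_line_distance(x0, y0, a, b, c):
    """Standard distance from point (x0, y0) to the line a*x + b*y + c = 0."""
    return abs(a * x0 + b * y0 + c) / math.hypot(a, b)

def pitch_and_yaw(cx, cy, horizon, route, f, pm):
    """Pitch angle (formula 1) and horizontal deflection angle (formula 2).
    cx, cy: pixel coordinates of the principal optical axis;
    horizon, route: (a, b, c) line coefficients; f, pm: camera constants."""
    PQ = point_line_distance(cx, cy, *horizon)
    CD = point_line_distance(cx, cy, *route)
    theta = math.atan(PQ / (f * pm))  # pitch angle
    phi = math.atan(CD / (f * pm))    # horizontal deflection angle
    return theta, phi
```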
In combination with one or more embodiments of this application, hardware such as a gyroscope may also be used to obtain the acceleration, and the pitch angle information and the horizontal deflection angle information may be obtained from the changes in acceleration.
Further, in this embodiment, the longitudinal calibration parameters and the horizontal calibration parameter are obtained separately; the longitudinal calibration parameters specifically include the scale coefficient and the offset for longitudinal calibration, and the horizontal calibration parameter is specifically the scale coefficient for horizontal calibration.
In combination with one or more embodiments of this application, the longitudinal calibration parameters are obtained as follows.
First, the geometric model for longitudinal calibration shown in FIG. 4 is established. In this model, A is the origin of the longitudinal coordinate in the world coordinate system, and B and E respectively denote the same point on the ground in the current frame and in the previous frame; thus BE denotes the actual displacement of the vehicle (camera), and B'E' denotes the displacement in world coordinates computed using the first homography matrix. BD', BD, GN, E'F', and EF are all perpendicular to the ground, GH is parallel to the ground, and the dashed line through D', D, N, M, F', F, and Q denotes the line parallel to the ground at the same height as the camera. For ease of description, in this embodiment the line through B, G, and M is defined as the camera's projection plane; point G is therefore the position of point E in the previous frame's image. By the principle of perspective, GM and BM respectively denote the pixel distances, in the image, from the same real-world ground point to the horizon.
Based on the geometric model shown in FIG. 4, the actual inter-frame displacement BE of the vehicle is first calculated from the camera's current frame rate and the vehicle speed, both of which must be obtained in advance. The same point in the video (for example, a lane line corner point) is then tracked to obtain the pixel coordinates of B and G in the image. In combination with one or more embodiments of this application, the same point in the video may be tracked by optical flow. Then B', E' and the world coordinates F', D' are computed from the first homography matrix, giving AF' and AD'. Meanwhile, according to the horizon function (obtained as described in the embodiments below), the distances from the pixel coordinates of B and G to the horizon function are calculated using the point-to-line distance formula (3) below, yielding GM and BM; BG is then calculated using formula (4).
d = |a·x0 + b·y0 + c| / √(a² + b²)                           (3)
BG = BM − GM                           (4)
Then, the scale coefficient k for longitudinal calibration is calculated using the following formula (5):
[formula image PCTCN2019088207-appb-000005: formula (5) for k]
The offset b is calculated using the following formula (6):
[formula image PCTCN2019088207-appb-000006: formula (6) for b]
The horizontal calibration parameter is obtained as follows.
First, the geometric model for horizontal calibration shown in FIG. 5 is established. In this model, A is the coordinate origin of the world coordinate system, and B is an arbitrary point on the ground in a certain frame; thus BE is the horizontal distance in the world coordinate system. For ease of description, in this embodiment the line through D, C, and B is defined as the camera's projection plane, so BC is the distance, in pixel coordinates, from point B to the route function (obtained as described in the embodiments below). AH is the camera's focal length, and FH is the real-world length of the camera's imaging plane. AH and FH must be obtained in advance.
Based on the geometric model shown in FIG. 5, BC is first calculated by the point-to-line distance formula (3) above. Then, using the longitudinal calibration scale coefficient k and offset b obtained above, AE is calculated by the following formula (7). B's world coordinates are then computed from the first homography matrix, giving BE'. Finally, combining the camera intrinsics with the horizontal deflection angle above, the scale coefficient k' for horizontal calibration is calculated by the following formula (8):
AE = k * AE' + b                      (7)
[formula image PCTCN2019088207-appb-000007: formula (8) for k']
In combination with one or more embodiments of this application, the scale coefficient and offset for longitudinal calibration and the scale coefficient for horizontal calibration may also be determined by comparing and analyzing the coordinates of the same target object across multiple frames captured by the camera in the current view.
Based on the above embodiments, this embodiment concerns the specific process of performing positioning according to the calibration parameters and the first homography matrix.
FIG. 6 is a schematic flowchart of an embodiment of the visual positioning method provided by an embodiment of this application. As shown in FIG. 6, one optional implementation of positioning according to the calibration parameters and the first homography matrix is:
S601. Determine a second homography matrix according to the calibration parameters and the first homography matrix, where the second homography matrix is the calibrated mapping matrix between the camera's coordinates in the current view and world coordinates.
For example, the first homography matrix and the calibration parameters are integrated to form the second homography matrix.
In combination with one or more embodiments of this application, determining the second homography matrix according to the calibration parameters and the first homography matrix includes: determining at least one sub-parameter of the calibration parameters according to a first preset number of coordinate points in frames of the video stream captured by the camera, and determining the second homography matrix according to the at least one sub-parameter, where the at least one sub-parameter is obtained by splitting the calibration parameters.
In combination with one or more embodiments of this application, the sub-parameters are sub-parameters of the offset b described above.
For example, assuming the sub-parameters of the offset b are b1, b2, and b3, any three coordinate points ((x1, y1, 1), (x2, y2, 1), and (x3, y3, 1)) may be selected, and the sub-parameters b1, b2, and b3 are calculated from the following equation (9):
[formula image PCTCN2019088207-appb-000008: equation (9)]
Then the sub-parameters b1, b2, and b3, together with the longitudinal calibration scale coefficient k and the horizontal calibration scale coefficient k' above, are merged into the first homography matrix to form the second homography matrix.
This is illustrated with an example below.
Suppose there is a coordinate point X in the image:
[formula image PCTCN2019088207-appb-000009: X]
and the first homography matrix is HT. The world coordinates of point X obtained through the mapping of the first homography matrix alone are:
[formula image PCTCN2019088207-appb-000010]
while the true world coordinates corresponding to point X should be:
[formula image PCTCN2019088207-appb-000011]
Then, through matrix operations, the second homography matrix obtained after merging the sub-parameters b1, b2, and b3, together with the longitudinal calibration scale coefficient k and the horizontal calibration scale coefficient k' above, into the first homography matrix HT is:
[formula image PCTCN2019088207-appb-000012: the second homography matrix]
S602. Perform positioning using the second homography matrix.
In this embodiment, the first homography matrix and the calibration parameters are integrated into a calibrated homography matrix, so that visual positioning can be completed quickly using the calibrated homography matrix, improving the efficiency of visual positioning.
FIG. 7 is a schematic flowchart of an embodiment of the visual positioning method provided by an embodiment of this application. As shown in FIG. 7, another optional implementation of positioning according to the calibration parameters and the first homography matrix is:
S701. Determine an intermediate matrix according to the homogeneous coordinates of a coordinate point in a frame of the video stream captured by the camera in the current view and the first homography matrix.
In combination with one or more embodiments of this application, the homogeneous coordinates of the coordinate point in the frame of the video stream captured by the camera in the current view are multiplied by the first homography matrix to obtain the intermediate matrix.
S702. Perform a linear computation on the intermediate matrix and the calibration parameters to obtain the world coordinates in the current view.
The camera's coordinate point may be any coordinate point in the frame captured by the camera. Suppose a coordinate point X is (x, y); then the homogeneous coordinates of X are:
(x, y, 1)ᵀ
That is, in this embodiment, for any coordinate point in the frame captured by the camera, its homogeneous coordinate matrix can be multiplied directly by the first homography matrix and then linearly computed with the calibration parameters, directly yielding the point's world coordinates in the current view, as sketched below.
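A minimal sketch of S701-S702. The form of the linear step (k·y + b longitudinally, k'·x horizontally, after the perspective divide) is an assumption consistent with formula (7) and the horizontal scale coefficient, but not confirmed by the original, whose exact linear combination appears only as formula images:

```python
import numpy as np

def locate(H1, u, v, k, b, k2):
    """Direct positioning: intermediate matrix, then linear calibration.
    H1: first homography; k, b: longitudinal scale/offset; k2: horizontal scale."""
    inter = H1 @ np.array([u, v, 1.0])      # S701: intermediate matrix
    x, y = inter[0] / inter[2], inter[1] / inter[2]
    return k2 * x, k * y + b                # S702: linear computation

# Example call with placeholder calibration values.
x_w, y_w = locate(np.eye(3), 352.0, 451.0, k=1.02, b=0.15, k2=0.98)
```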
In this embodiment, the first homography matrix and the calibration parameters are not integrated; instead, they are used directly when visual positioning is needed. When the first homography matrix and the calibration parameters change continuously, the method of this embodiment reduces the amount of computation and improves computational efficiency.
Based on the above embodiments, this embodiment concerns the specific method of determining the first reference point information according to the lane line detection result.
Before the first reference points are determined, lane line parallel information, horizon information, vanishing point information, and route information must first be determined according to the lane line detection result.
The route information refers to the vehicle's driving route and the extension of that route, and the vanishing point is the point at which the road surface vanishes along the route.
In combination with one or more embodiments of this application, the lane line parallel functions for the lane lines on the vehicle's left and right sides may first be fitted in real time, the horizon function and the vanishing point may then be fitted from the statistics of the lane line intersections, and the route function may then be calculated from the horizon and the vanishing point.
For example, to fit the lane line parallel functions, a deep learning segmentation method may first be used to label the pixels where the lane lines lie, and curve functions may then be fitted to the two lane lines on the vehicle's left and right sides using OpenCV. Meanwhile, since the road is straight in most cases, a probability map of the straight lane line parallels may be obtained by statistical methods, and linear functions of the straight lane line parallels on the vehicle's left and right sides may then be fitted based on this probability map. Alternatively, the lane line parallel functions may be fitted by means such as piecewise functions, as in the sketch below.
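A minimal sketch of the straight-line fitting step, assuming a segmentation mask whose lane line pixels are already labeled (the deep-learning segmentation itself is out of scope here, and the labels 1 and 2 are assumptions):

```python
import numpy as np

def fit_lane_parallel(mask, label):
    """Fit x = a*y + c to the pixels of one lane line in a segmentation mask.
    Fitting x as a function of y is numerically stabler for near-vertical lanes."""
    ys, xs = np.nonzero(mask == label)
    a, c = np.polyfit(ys, xs, deg=1)
    return a, c  # slope and intercept of the lane line parallel

# Example: left/right lane labels assumed to be 1 and 2 in the mask.
mask = np.zeros((480, 640), dtype=np.uint8)
mask[300:480, 200:204] = 1
mask[300:480, 430:434] = 2
left_line = fit_lane_parallel(mask, 1)
right_line = fit_lane_parallel(mask, 2)
```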
For the horizon function and the vanishing point, the intersections within the image coordinate range are first calculated from the lane line parallel functions fitted in real time as described above. After the vehicle has been driving normally for some time, a probability map of the lane line intersections can be obtained owing to road conditions such as lane changes and curves. Based on this probability map, a density-based clustering algorithm (such as the DBSCAN algorithm) is used to remove outliers, yielding a series of points that fall on the horizon. These points can be used to fit the horizon parallel function, and the vanishing point coordinates can be obtained by averaging or similar methods.
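A minimal sketch of this step, assuming the accumulated lane line intersections form an N×2 pixel array and using scikit-learn's DBSCAN; the eps and min_samples values are illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def fit_horizon(intersections):
    """Remove outlier intersections with DBSCAN, then fit the horizon line
    y = a*x + c and take the inlier mean as the vanishing point."""
    labels = DBSCAN(eps=5.0, min_samples=10).fit_predict(intersections)
    inliers = intersections[labels != -1]          # label -1 marks outliers
    a, c = np.polyfit(inliers[:, 0], inliers[:, 1], deg=1)
    vanishing_point = inliers.mean(axis=0)
    return (a, c), vanishing_point
```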
For the route function: by the principle of perspective, the fitted horizon must be orthogonal to the route, and the route or its extension must pass through the vanishing point. Therefore, the line orthogonal to the horizon passing through the vanishing point is calculated and taken as the route function. Alternatively, optical flow may be used to find the points whose horizontal motion vector is 0, and these points may then be used to fit the route function.
In combination with one or more embodiments of this application, the lane line parallel functions for the lane lines on the vehicle's left and right sides may first be fitted in real time, the route function and the vanishing point may then be fitted, and the horizon function may then be calculated from the route and the vanishing point.
After the lane line information, horizon information, and route information above are obtained, the first reference points are determined from this information.
FIG. 8 is a schematic flowchart of an embodiment of the visual positioning method provided by an embodiment of this application. As shown in FIG. 8, the specific process of determining the first reference points according to the lane line detection result is:
S801. Determine lane line parallel information and horizon parallel information according to the lane line detection result.
In combination with one or more embodiments of this application, determining lane line parallel information and horizon parallel information according to the lane line detection result includes: fitting lane line parallels according to the lane line detection result, and determining the horizon parallel information according to the fitted lane line parallels.
In combination with one or more embodiments of this application, the lane line parallel information may be the lane line parallel functions described above, and the horizon parallel information may be horizon parallel functions.
S802. Determine the coordinates of the first reference points according to the lane line parallel information, the horizon parallel information, and the route information.
In combination with one or more embodiments of this application, determining the coordinates of the first reference points according to the lane line parallel information, the horizon parallel information, and the route information includes: selecting a second preset number of coordinate points along the route direction, determining the horizon parallel information of the second preset number of coordinate points, determining the coordinates of the intersections of the horizon parallels and the lane line parallels according to the horizon parallel information and the lane line parallel information, and taking these coordinates as the coordinates of the first reference points.
For example, first, according to the route function above, a second preset number of points are selected along the route direction; then the horizon parallel functions of these points are calculated, the coordinates of the intersections of these horizon parallel functions with the lane line parallel functions are calculated, and these intersection coordinates are taken as the coordinates of the first reference points, as sketched below.
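A minimal sketch of the intersection step, assuming every line is expressed as a (slope, intercept) pair y = a·x + c; all numeric values are placeholders:

```python
def intersect(line1, line2):
    """Intersection of y = a1*x + c1 and y = a2*x + c2 (assumes a1 != a2)."""
    (a1, c1), (a2, c2) = line1, line2
    x = (c2 - c1) / (a1 - a2)
    return x, a1 * x + c1

# Horizon parallels through points chosen along the route share the horizon
# slope a_h; each lane line parallel is one (slope, intercept) pair. Every
# pairing yields one first reference point.
a_h = 0.01
horizon_parallels = [(a_h, 240.0), (a_h, 300.0)]
lane_parallels = [(3.0, -400.0), (-3.0, 1700.0)]
first_reference_points = [intersect(h, l)
                          for h in horizon_parallels for l in lane_parallels]
```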
In the above embodiments of this application, the camera may be mounted at a first position on the vehicle, the first position being a position from which the lane lines of the road can be captured. That is, the embodiments of this application do not limit the camera's mounting position; as long as the camera can capture road information, visual positioning can be achieved by the method of the embodiments of this application.
In addition, in the above embodiments of this application, performing lane line detection on the road based on the video stream of the road captured by the camera mounted on the vehicle includes: when the vehicle is in a driving state, performing lane line detection on the road based on the video stream of the road captured by the camera mounted on the vehicle, and then executing the subsequent steps to complete the visual positioning.
In addition, the pitch angle described in the embodiments of this application may be any angle within a first preset angle range, and the horizontal deflection angle described in the embodiments of this application may be any angle within a second preset angle range.
FIG. 9 is a first module structure diagram of an embodiment of the visual positioning apparatus provided by an embodiment of this application. As shown in FIG. 9, the apparatus includes:
a detection module 901, configured to perform lane line detection on a road on which a vehicle travels, based on a video stream of the road captured by a camera mounted on the vehicle;
a first determination module 902, configured to determine first reference point information for the camera's current view according to the lane line detection result obtained by the detection module 901;
a second determination module 903, configured to determine a third homography matrix according to the first reference point information determined by the first determination module 902 and second reference point information, where the second reference point information is reference point information for a prior view of the camera, the positions of the second reference points correspond to those of the first reference points, and the third homography matrix represents the mapping between the camera's coordinates in the current view and the camera's coordinates in the prior view;
a third determination module 904, configured to determine a first homography matrix according to the third homography matrix determined by the second determination module 903 and a preset homography matrix, where the preset homography matrix represents the mapping between the camera's coordinates in the prior view and world coordinates; and
a positioning module 905, configured to perform positioning according to the first homography matrix determined by the third determination module 904.
This apparatus is used to implement the foregoing method embodiments; its implementation principles and technical effects are similar and are not repeated here.
In combination with one or more embodiments of this application, the positioning module 905 is configured to: calibrate the first homography matrix according to longitudinal calibration information and horizontal calibration information to obtain calibration parameters; and perform positioning according to the calibration parameters and the first homography matrix.
In combination with one or more embodiments of this application, the positioning module 905 includes a first positioning unit configured to: determine a second homography matrix according to the calibration parameters and the first homography matrix, where the second homography matrix is the calibrated mapping matrix between the camera's coordinates in the current view and world coordinates; and perform positioning using the second homography matrix.
In combination with one or more embodiments of this application, the first positioning unit includes a matrix determination unit configured to: determine at least one sub-parameter of the calibration parameters according to a first preset number of coordinate points in frames of the video stream captured by the camera, the sub-parameters being obtained by splitting the calibration parameters; and determine the second homography matrix according to the sub-parameters.
In combination with one or more embodiments of this application, the positioning module 905 includes a second positioning unit configured to: determine an intermediate matrix according to the homogeneous coordinates of a coordinate point in a frame of the video stream captured by the camera in the current view and the first homography matrix; and perform a linear computation on the intermediate matrix and the calibration parameters to obtain the world coordinates in the current view.
In combination with one or more embodiments of this application, the positioning module 905 further includes a calibration unit configured to: determine the scale coefficient and the offset for longitudinal calibration according to the lane line detection result, the first homography matrix, and the camera's pitch angle information; and determine the scale coefficient for horizontal calibration according to the first homography matrix and the horizontal deflection angle information.
FIG. 10 is a second module structure diagram of an embodiment of the visual positioning apparatus provided by an embodiment of this application. As shown in FIG. 10, the apparatus further includes:
a fourth determination module 906, configured to determine the pitch angle information and the horizontal deflection angle information according to the lane line detection result.
In combination with one or more embodiments of this application, the first determination module 902 is configured to: determine lane line parallel information and horizon parallel information according to the lane line detection result; and determine the coordinates of the first reference points according to the lane line parallel information, the horizon parallel information, and the route information.
In combination with one or more embodiments of this application, the first determination module 902 includes a first determination unit and a second determination unit. The first determination unit is configured to: select a second preset number of coordinate points along the route direction; and determine the horizon parallel information of the second preset number of coordinate points. The second determination unit is configured to: determine the coordinates of the intersections of the horizon parallels and the lane line parallels according to the horizon parallel information and the lane line parallel information; and take the coordinates of the intersections as the coordinates of the first reference points.
In combination with one or more embodiments of this application, the first determination module 902 includes a third determination unit configured to: fit lane line parallels according to the lane line detection result; and determine the horizon parallel information according to the fitted lane line parallels.
In combination with one or more embodiments of this application, the camera is mounted at a first position on the vehicle, the first position being a position from which the lane lines of the road can be captured.
In combination with one or more embodiments of this application, the detection module 901 is configured to: when the vehicle is in a driving state, perform lane line detection on the road based on the video stream of the road captured by the camera mounted on the vehicle.
In combination with one or more embodiments of this application, the pitch angle is any angle within a first preset angle range, and the horizontal deflection angle is any angle within a second preset angle range.
FIG. 11 is a third module structure diagram of an embodiment of the visual positioning apparatus provided by an embodiment of this application. As shown in FIG. 11, the apparatus further includes:
an update module 907, configured to update the preset homography matrix by taking the first homography matrix as the new preset homography matrix.
In combination with one or more embodiments of this application, the first determination module 902 is further configured to: receive a calibration instruction; and based on the calibration instruction, determine the first reference point information for the camera's current view according to the lane line detection result.
In combination with one or more embodiments of this application, the first determination module 902 is further configured to: determine whether the camera's pose has changed, and if so, determine the first reference point information for the camera's current view according to the lane line detection result.
FIG. 12 is a fourth module structure diagram of an embodiment of the visual positioning apparatus provided by an embodiment of this application. As shown in FIG. 12, the apparatus further includes:
a first acquisition module 908, configured to obtain multiple first homography matrices and multiple sets of calibration parameters within a first preset period; and
a first processing module 909, configured to perform new positioning according to the average of the multiple first homography matrices and the average of the multiple sets of calibration parameters.
FIG. 13 is a fifth module structure diagram of an embodiment of the visual positioning apparatus provided by an embodiment of this application. As shown in FIG. 13, the apparatus further includes:
a second acquisition module 910, configured to obtain the first homography matrix and the calibration parameters at intervals of a second preset period; and
a second processing module 911, configured to perform new positioning according to the first homography matrix and the calibration parameters.
It should be noted that, when the visual positioning apparatus provided by the above embodiments performs visual positioning, only the division into the above program modules is used as an example. In practical applications, the above processing may be assigned to different program modules as needed; that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the visual positioning apparatus provided by the above embodiments belongs to the same conception as the visual positioning method embodiments; for its specific implementation process, see the method embodiments, which are not repeated here.
FIG. 14 is a physical block diagram of the electronic device provided by an embodiment of this application. As shown in FIG. 14, the electronic device includes:
a memory 1401, configured to store program instructions; and
a processor 1402, configured to call and execute the program instructions in the memory to perform the method steps described in the above method embodiments.
It can be understood that the memory 1401 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memories. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example but not limitation, many forms of RAM are available. The memory 1401 described in the embodiments of this application is intended to include, without being limited to, these and any other suitable types of memory.
The methods disclosed in the above embodiments of this application may be applied to, or implemented by, the processor 1402. The processor 1402 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above methods may be completed by integrated logic circuits of hardware in the processor 1402 or by instructions in the form of software. The processor 1402 may be a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 1402 can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, any conventional processor, or the like.
FIG. 15 is a schematic architecture diagram of the visual positioning system provided by an embodiment of this application. The system is applied to a vehicle. As shown in FIG. 15, the system 1500 includes a camera 1501 mounted on the vehicle and the above visual positioning apparatus 1502 connected to the camera 1501.
It should be understood that, for the working processes and arrangements of the components, modules, or units in any visual positioning apparatus, visual positioning system, or electronic device provided by the embodiments of this application, reference may be made to the corresponding descriptions of the above method embodiments of this application; they are not repeated here for brevity.
An embodiment of this application further provides a computer program that causes a computer to execute the methods described in the above method embodiments of this application; details are not repeated here for brevity.
A person of ordinary skill in the art may understand that all or part of the steps of implementing the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium. When the program is executed, the steps including the above method embodiments are performed; and the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are only illustrative. For example, the division into units is only a division by logical function; in actual implementation there may be other division manners, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the couplings, direct couplings, or communication connections between the displayed or discussed components may be through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may all be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
A person of ordinary skill in the art may understand that all or part of the steps of implementing the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps including the above method embodiments are performed. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
Alternatively, if the above integrated unit of this application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of this application. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
The methods disclosed in the several method embodiments provided in this application may be combined arbitrarily, without conflict, to obtain new method embodiments.
The features disclosed in the several product embodiments provided in this application may be combined arbitrarily, without conflict, to obtain new product embodiments.
The features disclosed in the several method or device embodiments provided in this application may be combined arbitrarily, without conflict, to obtain new method embodiments or device embodiments.
Finally, it should be noted that the above embodiments are only used to describe the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions recorded in the foregoing embodiments, or equivalent replacements may be made to some or all of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.

Claims (40)

  1. A visual positioning method, comprising:
    performing lane line detection on a road on which a vehicle travels, based on a video stream of the road captured by a camera mounted on the vehicle;
    determining first reference point information for the camera's current view according to a lane line detection result;
    determining a third homography matrix according to the first reference point information and second reference point information, wherein the second reference point information is reference point information for a prior view of the camera, positions of the second reference points correspond to positions of the first reference points, and the third homography matrix represents a mapping between the camera's coordinates in the current view and the camera's coordinates in the prior view;
    determining a first homography matrix according to the third homography matrix and a preset homography matrix, wherein the preset homography matrix represents a mapping between the camera's coordinates in the prior view and world coordinates; and
    performing positioning according to the first homography matrix.
  2. The method according to claim 1, wherein performing positioning according to the first homography matrix comprises:
    calibrating the first homography matrix according to longitudinal calibration information and horizontal calibration information to obtain calibration parameters; and
    performing positioning according to the calibration parameters and the first homography matrix.
  3. The method according to claim 2, wherein performing positioning according to the calibration parameters and the first homography matrix comprises:
    determining a second homography matrix according to the calibration parameters and the first homography matrix, wherein the second homography matrix is a calibrated mapping matrix between the camera's coordinates in the current view and world coordinates; and
    performing positioning using the second homography matrix.
  4. The method according to claim 3, wherein determining the second homography matrix according to the calibration parameters and the first homography matrix comprises:
    determining at least one sub-parameter of the calibration parameters according to a first preset number of coordinate points in frames of the video stream captured by the camera, wherein the at least one sub-parameter is obtained by splitting the calibration parameters; and
    determining the second homography matrix according to the at least one sub-parameter.
  5. The method according to claim 2, wherein performing positioning according to the calibration parameters and the first homography matrix comprises:
    determining an intermediate matrix according to homogeneous coordinates of a coordinate point in a frame of the video stream captured by the camera in the current view and the first homography matrix; and
    performing a linear computation on the intermediate matrix and the calibration parameters to obtain world coordinates in the current view.
  6. The method according to any one of claims 2-5, wherein calibrating the first homography matrix to obtain the calibration parameters comprises:
    determining a scale coefficient and an offset for longitudinal calibration according to the lane line detection result, the first homography matrix, and pitch angle information of the camera; and
    determining a scale coefficient for horizontal calibration according to the first homography matrix and horizontal deflection angle information.
  7. The method according to claim 6, wherein before calibrating the first homography matrix to obtain the calibration parameters, the method further comprises:
    determining the pitch angle information and the horizontal deflection angle information according to the lane line detection result.
  8. The method according to any one of claims 1-7, wherein determining the first reference point information for the camera's current view according to the lane line detection result comprises:
    determining lane line parallel information and horizon parallel information according to the lane line detection result; and
    determining coordinates of the first reference points according to the lane line parallel information, the horizon parallel information, and route information.
  9. The method according to claim 8, wherein determining the coordinates of the first reference points according to the lane line parallel information, the horizon parallel information, and the route information comprises:
    selecting a second preset number of coordinate points along the route direction;
    determining horizon parallel information of the second preset number of coordinate points;
    determining coordinates of intersections of the horizon parallels and the lane line parallels according to the horizon parallel information and the lane line parallel information; and
    taking the coordinates of the intersections as the coordinates of the first reference points.
  10. The method according to claim 8 or 9, wherein determining lane line parallel information and horizon parallel information according to the lane line detection result comprises:
    fitting lane line parallels according to the lane line detection result; and
    determining the horizon parallel information according to the fitted lane line parallels.
  11. The method according to any one of claims 1-10, wherein the camera is mounted at a first position on the vehicle, the first position being a position from which lane lines of the road can be captured.
  12. The method according to any one of claims 1-11, wherein performing lane line detection on the road based on the video stream of the road captured by the camera mounted on the vehicle comprises:
    when the vehicle is in a driving state, performing lane line detection on the road based on the video stream of the road captured by the camera mounted on the vehicle.
  13. The method according to claim 6 or 7, wherein the pitch angle is any angle within a first preset angle range, and the horizontal deflection angle is any angle within a second preset angle range.
  14. The method according to any one of claims 1-13, further comprising:
    updating the preset homography matrix by taking the first homography matrix as a new preset homography matrix.
  15. The method according to any one of claims 1-14, wherein determining the first reference point information for the camera's current view according to the lane line detection result comprises:
    receiving a calibration instruction; and
    based on the calibration instruction, determining the first reference point information for the camera's current view according to the lane line detection result.
  16. The method according to any one of claims 1-14, wherein determining the first reference point information for the camera's current view according to the lane line detection result comprises:
    determining whether the camera's pose has changed, and if so, determining the first reference point information for the camera's current view according to the lane line detection result.
  17. The method according to any one of claims 1-16, further comprising:
    obtaining multiple first homography matrices and multiple sets of calibration parameters within a first preset period; and
    performing new positioning according to an average of the multiple first homography matrices and an average of the multiple sets of calibration parameters.
  18. The method according to any one of claims 1-16, further comprising:
    obtaining the first homography matrix and the calibration parameters at intervals of a second preset period; and
    performing new positioning according to the first homography matrix and the calibration parameters.
  19. A visual positioning apparatus, comprising:
    a detection module configured to perform lane line detection on a road on which a vehicle travels, based on a video stream of the road captured by a camera mounted on the vehicle;
    a first determination module configured to determine first reference point information for the camera's current view according to a lane line detection result;
    a second determination module configured to determine a third homography matrix according to the first reference point information and second reference point information, wherein the second reference point information is reference point information for a prior view of the camera, positions of the second reference points correspond to positions of the first reference points, and the third homography matrix represents a mapping between the camera's coordinates in the current view and the camera's coordinates in the prior view;
    a third determination module configured to determine a first homography matrix according to the third homography matrix and a preset homography matrix, wherein the preset homography matrix represents a mapping between the camera's coordinates in the prior view and world coordinates; and
    a positioning module configured to perform positioning according to the first homography matrix.
  20. The apparatus according to claim 19, wherein the positioning module is configured to: calibrate the first homography matrix according to longitudinal calibration information and horizontal calibration information to obtain calibration parameters; and perform positioning according to the calibration parameters and the first homography matrix.
  21. The apparatus according to claim 20, wherein the positioning module comprises a first positioning unit configured to: determine a second homography matrix according to the calibration parameters and the first homography matrix, wherein the second homography matrix is a calibrated mapping matrix between the camera's coordinates in the current view and world coordinates; and perform positioning using the second homography matrix.
  22. The apparatus according to claim 21, wherein the first positioning unit comprises a matrix determination unit configured to: determine at least one sub-parameter of the calibration parameters according to a first preset number of coordinate points in frames of the video stream captured by the camera, the at least one sub-parameter being obtained by splitting the calibration parameters; and determine the second homography matrix according to the at least one sub-parameter.
  23. The apparatus according to claim 20, wherein the positioning module comprises a second positioning unit configured to: determine an intermediate matrix according to homogeneous coordinates of a coordinate point in a frame of the video stream captured by the camera in the current view and the first homography matrix; and perform a linear computation on the intermediate matrix and the calibration parameters to obtain world coordinates in the current view.
  24. The apparatus according to any one of claims 20-23, wherein the positioning module further comprises a calibration unit configured to: determine a scale coefficient and an offset for longitudinal calibration according to the lane line detection result, the first homography matrix, and pitch angle information of the camera; and determine a scale coefficient for horizontal calibration according to the first homography matrix and horizontal deflection angle information.
  25. The apparatus according to claim 24, further comprising:
    a fourth determination module configured to determine the pitch angle information and the horizontal deflection angle information according to the lane line detection result.
  26. The apparatus according to any one of claims 19-25, wherein the first determination module is configured to: determine lane line parallel information and horizon parallel information according to the lane line detection result; and determine coordinates of the first reference points according to the lane line parallel information, the horizon parallel information, and route information.
  27. The apparatus according to claim 26, wherein the first determination module comprises a first determination unit and a second determination unit;
    the first determination unit is configured to: select a second preset number of coordinate points along the route direction; and determine horizon parallel information of the second preset number of coordinate points; and
    the second determination unit is configured to: determine coordinates of intersections of the horizon parallels and the lane line parallels according to the horizon parallel information and the lane line parallel information; and take the coordinates of the intersections as the coordinates of the first reference points.
  28. The apparatus according to claim 26 or 27, wherein the first determination module comprises a third determination unit configured to: fit lane line parallels according to the lane line detection result; and determine the horizon parallel information according to the fitted lane line parallels.
  29. The apparatus according to any one of claims 19-28, wherein the camera is mounted at a first position on the vehicle, the first position being a position from which lane lines of the road can be captured.
  30. The apparatus according to any one of claims 19-29, wherein the detection module is configured to: when the vehicle is in a driving state, perform lane line detection on the road based on the video stream of the road captured by the camera mounted on the vehicle.
  31. The apparatus according to claim 24 or 25, wherein the pitch angle is any angle within a first preset angle range, and the horizontal deflection angle is any angle within a second preset angle range.
  32. The apparatus according to any one of claims 19-31, further comprising:
    an update module configured to update the preset homography matrix by taking the first homography matrix as a new preset homography matrix.
  33. The apparatus according to any one of claims 19-32, wherein the first determination module is further configured to: receive a calibration instruction; and based on the calibration instruction, determine the first reference point information for the camera's current view according to the lane line detection result.
  34. The apparatus according to any one of claims 19-32, wherein the first determination module is further configured to: determine whether the camera's pose has changed, and if so, determine the first reference point information for the camera's current view according to the lane line detection result.
  35. The apparatus according to any one of claims 19-34, further comprising:
    a first acquisition module configured to obtain multiple first homography matrices and multiple sets of calibration parameters within a first preset period; and
    a first processing module configured to perform new positioning according to an average of the multiple first homography matrices and an average of the multiple sets of calibration parameters.
  36. The apparatus according to any one of claims 19-34, further comprising:
    a second acquisition module configured to obtain the first homography matrix and the calibration parameters at intervals of a second preset period; and
    a second processing module configured to perform new positioning according to the first homography matrix and the calibration parameters.
  37. An electronic device, comprising:
    a memory configured to store program instructions; and
    a processor configured to call and execute the program instructions in the memory to perform the method steps of any one of claims 1-18.
  38. A readable storage medium, wherein a computer program is stored in the readable storage medium, and the computer program is used to perform the method steps of any one of claims 1-18.
  39. A visual positioning system, applied to a vehicle, comprising a camera mounted on the vehicle and the visual positioning apparatus according to any one of claims 19-36 communicatively connected to the camera.
  40. A computer program that causes a computer to execute the method according to any one of claims 1-18.
PCT/CN2019/088207 2018-06-05 2019-05-23 Visual positioning method and apparatus, electronic device, and system WO2019233286A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
SG11201913066WA SG11201913066WA (en) 2018-06-05 2019-05-23 Visual positioning method and apparatus, electronic device, and system
JP2019572133A JP6844043B2 (ja) 2018-06-05 2019-05-23 視覚測位方法、装置、電子機器およびシステム
US16/626,005 US11069088B2 (en) 2018-06-05 2019-05-23 Visual positioning method and apparatus, electronic device, and system
EP19814687.0A EP3627109B1 (en) 2018-06-05 2019-05-23 Visual positioning method and apparatus, electronic device and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810581686.6A CN110567469B (zh) 2018-06-05 2018-06-05 Visual positioning method and apparatus, electronic device, and system
CN201810581686.6 2018-06-05

Publications (1)

Publication Number Publication Date
WO2019233286A1 true WO2019233286A1 (zh) 2019-12-12

Family

ID=68769234

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/088207 WO2019233286A1 (zh) 2018-06-05 2019-05-23 Visual positioning method and apparatus, electronic device, and system

Country Status (6)

Country Link
US (1) US11069088B2 (zh)
EP (1) EP3627109B1 (zh)
JP (1) JP6844043B2 (zh)
CN (1) CN110567469B (zh)
SG (1) SG11201913066WA (zh)
WO (1) WO2019233286A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819711A (zh) * 2021-01-20 2021-05-18 电子科技大学 Vehicle reverse positioning method using road lane lines based on monocular vision
CN113052904A (zh) * 2021-03-19 2021-06-29 上海商汤临港智能科技有限公司 Positioning method and apparatus, electronic device, and storage medium
CN113240750A (zh) * 2021-05-13 2021-08-10 中移智行网络科技有限公司 Method and apparatus for measuring and calculating three-dimensional spatial information
CN113492829A (zh) * 2020-04-08 2021-10-12 华为技术有限公司 Data processing method and apparatus
CN113674358A (zh) * 2021-08-09 2021-11-19 浙江大华技术股份有限公司 Calibration method and apparatus for a radar-vision device, computing device, and storage medium
JP2023514163A (ja) * 2020-04-24 2023-04-05 株式会社ストラドビジョン Method and device for calibrating a vehicle's camera pitch, and method for continual learning of a vanishing point estimation model therefor
CN113674358B (zh) * 2021-08-09 2024-06-04 浙江大华技术股份有限公司 Calibration method and apparatus for a radar-vision device, computing device, and storage medium

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111681286B (zh) * 2020-06-09 2023-11-28 商汤集团有限公司 Calibration method, apparatus and system, electronic device, and storage medium
CN111681285B (zh) * 2020-06-09 2024-04-16 商汤集团有限公司 Calibration method and apparatus, electronic device, and storage medium
CN112150558B (zh) * 2020-09-15 2024-04-12 阿波罗智联(北京)科技有限公司 Method and apparatus for obtaining the three-dimensional position of an obstacle for a roadside computing device
CN112489136B (zh) * 2020-11-30 2024-04-16 商汤集团有限公司 Calibration method, position determination method, apparatus, electronic device, and storage medium
CN113819890B (zh) * 2021-06-04 2023-04-14 腾讯科技(深圳)有限公司 Distance measurement method and apparatus, electronic device, and storage medium
CN113232663B (zh) * 2021-06-29 2022-10-04 西安电子科技大学芜湖研究院 Control system applied to advanced driver assistance
US20230083307A1 (en) * 2021-09-07 2023-03-16 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Camera Calibration Method
CN114609976A (zh) * 2022-04-12 2022-06-10 天津航天机电设备研究所 Calibration-free visual servo control method based on homography and Q-learning
CN114677442B (zh) * 2022-05-26 2022-10-28 之江实验室 Lane line detection system, apparatus, and method based on sequence prediction
CN115143887B (zh) * 2022-09-05 2022-11-15 常州市建筑科学研究院集团股份有限公司 Method for correcting measurement results of visual monitoring equipment, and visual monitoring system
CN115731525B (zh) * 2022-11-21 2023-07-25 禾多科技(北京)有限公司 Lane line recognition method and apparatus, electronic device, and computer-readable medium
CN116993637B (zh) * 2023-07-14 2024-03-12 禾多科技(北京)有限公司 Image data processing method, apparatus, device, and medium for lane line detection
CN116993830A (zh) * 2023-08-17 2023-11-03 广州赋安数字科技有限公司 Automatic calibration method for dynamic camera coordinate mapping
CN116840243B (zh) * 2023-09-01 2023-11-28 湖南睿图智能科技有限公司 Correction method and system for machine vision object recognition
CN117928575A (zh) * 2024-03-22 2024-04-26 四川省公路规划勘察设计研究院有限公司 Lane information extraction method and system, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102915532A (zh) * 2011-06-30 2013-02-06 哈曼贝克自动系统股份有限公司 Method for determining the extrinsic parameters of a vehicle vision system, and vehicle vision system
CN106446815A (zh) * 2016-09-14 2017-02-22 浙江大学 Simultaneous localization and mapping method
US20170084037A1 (en) * 2015-09-17 2017-03-23 Skycatch, Inc. Generating georeference information for aerial images
CN107221007A (zh) * 2017-05-12 2017-09-29 同济大学 Monocular visual positioning method for unmanned vehicles based on image feature dimensionality reduction
CN107730551A (zh) * 2017-01-25 2018-02-23 问众智能信息科技(北京)有限公司 Method and apparatus for automatically estimating the pose of a vehicle-mounted camera
CN107728175A (zh) * 2017-09-26 2018-02-23 南京航空航天大学 Navigation and positioning accuracy correction method for driverless vehicles based on GNSS and VO fusion

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009129001A (ja) 2007-11-20 2009-06-11 Sanyo Electric Co Ltd Driving support system, vehicle, and three-dimensional object region estimation method
US8885049B2 (en) * 2010-03-19 2014-11-11 Sony Corporation Method and device for determining calibration parameters of a camera
CN102661733B (zh) 2012-05-28 2014-06-04 天津工业大学 Monocular-vision-based method for measuring the distance to a preceding vehicle
TW201443827A (zh) * 2013-05-03 2014-11-16 Altek Autotronics Corp Lens image correction system and lens image correction method
JP6674192B2 (ja) * 2014-05-28 2020-04-01 ソニー株式会社 Image processing device and image processing method
CN105574552A (zh) 2014-10-09 2016-05-11 东北大学 Monocular-vision-based vehicle ranging and collision warning method
EP3193306B8 (en) * 2016-01-15 2019-01-23 Aptiv Technologies Limited A method and a device for estimating an orientation of a camera relative to a road surface
US10339390B2 (en) * 2016-02-23 2019-07-02 Semiconductor Components Industries, Llc Methods and apparatus for an imaging system
JP6583923B2 (ja) * 2016-08-19 2019-10-02 Kddi株式会社 Camera calibration device, method, and program
CN106443650A (zh) 2016-09-12 2017-02-22 电子科技大学成都研究院 Monocular visual ranging method based on geometric relations
CN107389026B (zh) 2017-06-12 2019-10-01 江苏大学 Monocular visual ranging method based on fixed-point projective transformation
CN107843251B (zh) * 2017-10-18 2020-01-31 广东宝乐机器人股份有限公司 Pose estimation method for a mobile robot

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102915532A (zh) * 2011-06-30 2013-02-06 哈曼贝克自动系统股份有限公司 Method for determining the extrinsic parameters of a vehicle vision system, and vehicle vision system
US20170084037A1 (en) * 2015-09-17 2017-03-23 Skycatch, Inc. Generating georeference information for aerial images
CN106446815A (zh) * 2016-09-14 2017-02-22 浙江大学 Simultaneous localization and mapping method
CN107730551A (zh) * 2017-01-25 2018-02-23 问众智能信息科技(北京)有限公司 Method and apparatus for automatically estimating the pose of a vehicle-mounted camera
CN107221007A (zh) * 2017-05-12 2017-09-29 同济大学 Monocular visual positioning method for unmanned vehicles based on image feature dimensionality reduction
CN107728175A (zh) * 2017-09-26 2018-02-23 南京航空航天大学 Navigation and positioning accuracy correction method for driverless vehicles based on GNSS and VO fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3627109A4 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113492829A (zh) * 2020-04-08 2021-10-12 华为技术有限公司 Data processing method and apparatus
CN113492829B (zh) * 2020-04-08 2022-08-26 华为技术有限公司 Data processing method and apparatus
JP2023514163A (ja) * 2020-04-24 2023-04-05 株式会社ストラドビジョン Method and device for calibrating a vehicle's camera pitch, and method for continual learning of a vanishing point estimation model therefor
JP7371269B2 (ja) 2020-04-24 2023-10-30 株式会社ストラドビジョン Method and device for calibrating a vehicle's camera pitch, and method for continual learning of a vanishing point estimation model therefor
CN112819711A (zh) * 2021-01-20 2021-05-18 电子科技大学 Vehicle reverse positioning method using road lane lines based on monocular vision
CN112819711B (zh) * 2021-01-20 2022-11-22 电子科技大学 Vehicle reverse positioning method using road lane lines based on monocular vision
CN113052904A (zh) * 2021-03-19 2021-06-29 上海商汤临港智能科技有限公司 Positioning method and apparatus, electronic device, and storage medium
CN113240750A (zh) * 2021-05-13 2021-08-10 中移智行网络科技有限公司 Method and apparatus for measuring and calculating three-dimensional spatial information
CN113674358A (zh) * 2021-08-09 2021-11-19 浙江大华技术股份有限公司 Calibration method and apparatus for a radar-vision device, computing device, and storage medium
CN113674358B (zh) * 2021-08-09 2024-06-04 浙江大华技术股份有限公司 Calibration method and apparatus for a radar-vision device, computing device, and storage medium

Also Published As

Publication number Publication date
JP6844043B2 (ja) 2021-03-17
CN110567469A (zh) 2019-12-13
SG11201913066WA (en) 2020-01-30
US11069088B2 (en) 2021-07-20
JP2020527263A (ja) 2020-09-03
CN110567469B (zh) 2021-07-20
US20210158567A1 (en) 2021-05-27
EP3627109A1 (en) 2020-03-25
EP3627109B1 (en) 2021-10-27
EP3627109A4 (en) 2020-06-17

Similar Documents

Publication Publication Date Title
WO2019233286A1 (zh) Visual positioning method and apparatus, electronic device, and system
JP6552729B2 (ja) System and method for fusing outputs of sensors having different resolutions
US11205284B2 (en) Vehicle-mounted camera pose estimation method, apparatus, and system, and electronic device
CN110147382B (zh) Lane line updating method, apparatus, device, system, and readable storage medium
WO2023016271A1 (zh) Pose determination method, electronic device, and readable storage medium
CN113029128B (zh) Visual navigation method and related apparatus, mobile terminal, and storage medium
CN107545586B (zh) Depth acquisition method and system based on local light-field epipolar plane images
US20110091131A1 (en) System and method for stabilization of fisheye video imagery
CN112419385A (zh) 3D depth information estimation method, apparatus, and computer device
US11880993B2 (en) Image processing device, driving assistance system, image processing method, and program
WO2020108285A1 (zh) Map construction method, apparatus and system, and storage medium
WO2020156923A2 (en) Map and method for creating a map
CN113870379A (zh) Map generation method and apparatus, electronic device, and computer-readable storage medium
Rao et al. Real-time speed estimation of vehicles from uncalibrated view-independent traffic cameras
JP2022087821A (ja) Data fusion method and apparatus
JP2009276233A (ja) Parameter calculation device, parameter calculation system, and program
CN112902911B (zh) Monocular-camera-based ranging method, apparatus, device, and storage medium
CN111260538B (zh) Positioning based on long-baseline binocular fisheye cameras, and vehicle-mounted terminal
CN113902047B (zh) Image element matching method, apparatus, device, and storage medium
CN113112551B (zh) Method and apparatus for determining camera parameters, roadside device, and cloud control platform
NL2016718B1 (en) A method for improving position information associated with a collection of images.
CN111914048B (zh) Automatic generation method for corresponding points between longitude-latitude coordinates and image coordinates
WO2023185272A9 (zh) Roll angle calibration method, apparatus and device for a vehicle-mounted camera, and storage medium
CN116007637B (zh) Positioning apparatus and method, vehicle-mounted device, vehicle, and computer program product
JP2013191073A (ja) Vehicle size measuring device, vehicle size measuring method, and program

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019572133

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19814687

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019814687

Country of ref document: EP

Effective date: 20191220

NENP Non-entry into the national phase

Ref country code: DE